This paper argues that there is a poor fit between the use of precise methods of case definition and case ascertainment for reporting infection rates, and the measurement uncertainty inherent in hospital infection surveillance. Epidemiological precision in counting and reporting merely creates the illusion of accurate measurement: undercounting is inevitable, and the true burden of healthcare-associated infection (HCAI) remains poorly understood. Syndromic surveillance is a promising alternative that embraces measurement uncertainty to avoid systematic undercounting. This may lead to a better understanding of the burden of HCAIs, whilst providing a yardstick sensitive enough to evaluate the effectiveness of interventions to reduce infections.
The ability of a test or screening method to detect the presence of disease is measured along two complementary dimensions: sensitivity and specificity. Sensitivity refers to the ability to detect a particular disease when it is present, whereas specificity refers to the ability to correctly determine when a particular disease is not present. Low sensitivity leads to ‘false negative’ outcomes (i.e., disease is present but is not detected), and low specificity leads to ‘false positive’ outcomes (i.e., disease is ‘detected’ when it is not actually present). Neither type of error is desirable: false negatives lead to increased morbidity and mortality, and false positives waste scarce resources on unnecessary (and possibly harmful) treatment. However, because sensitivity and specificity trade off against one another, an increase in one usually implies a decrease in the other. A test or screening method is said to be of high quality when it exhibits both high sensitivity and high specificity.
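These definitions reduce to simple ratios over the four cells of a confusion matrix. A minimal illustration in Python, using invented counts:

```python
# Sensitivity and specificity as ratios over a confusion matrix.
# The counts below are invented, purely for illustration.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of truly diseased cases that the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of disease-free cases that the test correctly clears."""
    return tn / (tn + fp)

# 88 true positives, 12 false negatives -> sensitivity 0.88
# 810 true negatives, 90 false positives -> specificity 0.90
sens = sensitivity(tp=88, fn=12)
spec = specificity(tn=810, fp=90)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```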
These concepts of sensitivity and specificity are certainly applicable in appraising the quality of surveillance for healthcare-associated infections (HCAIs). For example, a common test for methicillin-resistant Staphylococcus aureus (MRSA) involves a polymerase chain reaction (PCR) assay of a nasal swab, which has a demonstrated sensitivity of 88% and specificity of 90%.1
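To make the practical meaning of those figures concrete, they can be translated into expected errors per 1,000 patients screened. Only the 88%/90% figures come from the cited study; the 10% carriage prevalence used below is an assumption chosen purely for illustration:

```python
# Expected screening errors per 1,000 patients at the cited sensitivity
# (0.88) and specificity (0.90). The 10% prevalence is an illustrative
# assumption, not a figure from the cited study.

def expected_errors(n: int, prevalence: float, sens: float, spec: float):
    carriers = n * prevalence
    non_carriers = n - carriers
    false_negatives = carriers * (1 - sens)      # infected but missed
    false_positives = non_carriers * (1 - spec)  # flagged but not infected
    return false_negatives, false_positives

fn, fp = expected_errors(n=1000, prevalence=0.10, sens=0.88, spec=0.90)
print(f"expected false negatives: {fn:.0f}, false positives: {fp:.0f}")
```

Even a test this good, applied to 1,000 patients at that assumed prevalence, would be expected to miss about 12 carriers while raising about 90 false alarms.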
Hospital infection surveillance tends to be predicated on adhering to accepted case definitions and following prescribed methods for case ascertainment, typically including confirmation via laboratory test. What tends to be missing from the hospital infection surveillance discourse, however, is the application of sensitivity and specificity to the whole of the surveillance operation, from noticing symptoms, to initiation of testing, to diagnosis. The ability to accurately measure the incidence of healthcare-associated infections depends upon far more than the acuity of laboratory tests. Such tests must be ordered before they can be carried out, and this in turn requires every step of the surveillance process: symptoms must be evident; they must be observed and recorded; they must be acted upon; and the testing outcome must be reported and tallied. As will be seen below, this complex chain of activities undermines both the sensitivity and the specificity of hospital infection surveillance.
Surveillance and Uncertainty
Many of the steps involved in infection surveillance entail a degree of uncertainty or rest on somewhat arbitrary decision criteria. First, if a patient becomes colonized with MRSA or Clostridium difficile (C. diff) but does not actively manifest the disease, the colonization is unlikely to be detected, and it would not count as an infection as a matter of policy, despite the reality that the patient has, in fact, been infected. When symptoms are present, the measurement outcome may still be uncertain. For example, if a patient experiences diarrhea, whether a test for C. diff is even ordered depends first upon whether the patient thinks to report the episode, and then upon clinical judgment as to whether there are other plausible causes that may negate the motivation to order a test. If the test is ordered and the results of a panel of toxin tests are mixed, further clinical judgment is required about what diagnosis (if any) to make. In the case of respiratory infections, decisions about whether to test, and for which viral pathogens, are even less clear; where treatment is likely to be simply supportive, testing may not be indicated as a matter of routine.2 In such cases, it is uncertain whether symptoms are even documented in the patient’s chart. Moreover, many infections may remain sub-clinical and simply not progress enough to warrant attention before the patient is discharged.3 In the case of surgical site infections, symptoms may not manifest at all until many weeks after the patient has been sent home.4,5 Finally, the proportion of infection events that patients bring with them upon admission is uncertain, and community-acquired illnesses do not properly belong in the measurement of HCAIs. Given the totality of these challenges, and because nosocomial infection case ascertainment is typically possible only when laboratory tests are ordered, undercounting of nosocomial cases is inevitable.
This perpetuates a systematic underestimation of the burden of disease among hospitalized patients and the healthcare facilities that treat them.6
What we can say about this state of affairs is that the sensitivity of hospital infection surveillance is unknown, but it is almost certainly poor. This is not a matter of incompetent performance; it simply reflects the complexity and uncertainty inherent in HCAI surveillance as described above. While additional resources for infection surveillance would likely improve measurement to a degree, this is largely not a problem solvable by money, because the uncertainty is built into the surveillance process itself. By holding firm to precise methods of case definition and case ascertainment, only the illusion of accurate measurement is created. The reality is that the burden of HCAIs is not yet well understood, a consequence of the low sensitivity inherent in hospital infection surveillance as it is currently practiced. Notable is the Cochrane systematic review finding of no demonstrable relationship between reported hand hygiene rates and reported HCAI rates, suggestive of significant measurement validity problems.7
Syndromic surveillance (SS) is generally understood to comprise the systematic collection and geographically-based analysis of clinically-observed symptoms presenting at hospitals and/or clinics, in lieu of waiting for laboratory test results.8 SS was developed as a means to rapidly screen large populations in the event of a suspected bioterrorism episode, but has been taken up by public health authorities as an effective means for community surveillance of infectious outbreaks more generally.9 The two principal advantages of SS are immediacy of reporting and extremely high sensitivity (albeit at the considerable expense of specificity). The immediacy results from real-time reporting of symptoms as they present, and sensitivity results from every observed instance of symptoms of interest being reported. In community settings, this allows for the rapid identification of gastrointestinal and respiratory outbreaks by detecting sudden spikes in the incidence of these symptoms at healthcare facilities, compared to expected, seasonal rates.10
Beyond community-based application, SS has been shown to be a very sensitive method for outbreak detection in hospital emergency room settings11 and for veterinary inpatient settings.12 However, it has not been widely used in acute care settings to date. Conducting SS manually in a hospital setting can be expected to be labor intensive and unsustainable. However, SS can be made efficient through the use of three complementary automation tools. The primary tool consists of a tablet-based checklist to support electronic data capture of sentinel infection symptoms collected by a healthcare worker conducting daily rounds solely for observing symptoms that may indicate infection. The secondary tool consists of an automated phone and/or web-based contact system to continue to collect self-reported symptoms from patients for 30 days post-discharge, which is generally considered adequate to capture most surgical site infections.13 The tertiary tool comprises an analytical module to compute and report seasonally-adjusted weekly averages of the number and variety of symptoms of infection observed.
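The computation performed by the tertiary analytical module is not specified further here. As a minimal sketch of the seasonal-adjustment idea, assuming weekly symptom counts and a same-week-of-year historical baseline (all numbers invented):

```python
# Sketch of a seasonally adjusted weekly symptom count: the observed
# count minus the historical mean for the same calendar week.
# All counts are invented, purely for illustration.
from statistics import mean

def seasonally_adjusted(observed: int, same_week_history: list[int]) -> float:
    """Excess symptoms this week relative to the same-week historical mean."""
    return observed - mean(same_week_history)

# Symptom counts observed in ISO week 6 of prior years (hypothetical)
history_week6 = [14, 11, 16, 13, 12]
excess = seasonally_adjusted(observed=21, same_week_history=history_week6)
print(f"excess symptoms vs. seasonal baseline: {excess:.1f}")
```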
Typically, the dramatic increase in measurement sensitivity afforded by syndromic surveillance is accompanied by a concomitant drop in specificity. For example, fever is a hallmark of infection but can also result from blood transfusions or chemotherapy; diarrhea may be caused by allergies or a wide range of medications. In stark contrast to surveillance by strict case definition and case ascertainment, the count of suspected infections will certainly overestimate the number of true infections. However, the non-infectious (spurious) causes of infection-like symptoms will not generally be influenced by the factors that do cause infections, such as seasonality, institutional outbreaks or lapses in hygiene practices. The rate of spurious infection-like symptoms will therefore tend to be constant, as long as the patient population remains consistent. This is a very important feature of the data, because changes from day to day or week to week will be more likely to reflect changes due to infections than the ‘background noise’ of spurious, infection-like symptoms characteristic of a consistent patient population. Reliable discrimination of true infection cases from ‘background noise’ symptoms does require a considerable amount of historical data, and discriminant ability will increase over time as the natural variation in the patient population becomes more sharply defined by a growing dataset.
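One simple way to exploit this constant background rate is a control-chart style check: flag a week only when its symptom count sits far outside the historical distribution. The sketch below uses invented counts and an illustrative threshold; it is not a validated aberration-detection method:

```python
# Flag weekly symptom counts that rise well above the 'background noise'
# of spurious infection-like symptoms. Counts and threshold are invented.
from statistics import mean, stdev

def is_aberration(this_week: int, history: list[int],
                  z_threshold: float = 3.0) -> bool:
    """True when this week's count exceeds the historical mean by more
    than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return (this_week - mu) / sigma > z_threshold

# Ten past weekly counts from a stable patient population (hypothetical)
history = [18, 22, 20, 19, 23, 21, 20, 18, 22, 21]
print(is_aberration(21, history))  # a typical week
print(is_aberration(35, history))  # a pronounced spike
```

As more historical weeks accumulate, the estimates of the mean and standard deviation tighten, which is one way to picture the growing discriminant ability described above.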
There are three main benefits associated with implementing syndromic surveillance for infection in acute care environments. The first, already stated, is the ability to properly gauge the burden of HCAIs. The second benefit pertains to intervention research, which tends to be bedeviled by insufficient statistical power because of the very large number of patients needed for observational studies to accrue enough reported HCAIs.14 The larger number of cases identified via SS would boost the statistical power, and hence the validity, of observational research designs, to the great benefit of infection control research. The third benefit is a consequence of continued daily contact with patients after discharge, where proactive monitoring may result in fewer emergency room visits and readmissions, both of which tend to be costly, and some of which are likely avoidable.
Traditional epidemiologic methods for hospital infection surveillance result in systematic undercounting, thereby providing precision without accuracy. This low-sensitivity approach prevents the true burden of HCAIs from being understood, and it hampers the evaluation of interventions to reduce infections. Syndromic surveillance avoids this problem by greatly amplifying the sensitivity of HCAI measurement. It is not well suited to replace traditional hospital infection surveillance methods, but should be regarded as complementary to current methods, in order both to better understand the burden of HCAIs and to underpin the effective measurement of interventions to improve patient safety.
Conflict of interest disclosure:
Dr. Furness has been developing a commercial syndromic surveillance system in his role at Infonaut, and he has a pecuniary interest in the success of the company.
1 Dangerfield B, Chung A, Webb B, Seville MT. (2014). Predictive value of methicillin-resistant Staphylococcus aureus (MRSA) nasal swab PCR assay for MRSA pneumonia. Antimicrobial Agents and Chemotherapy, 58(2), 859-64.
2 Voirin, N. et al. (2010). Hospital-acquired influenza: a synthesis using the Outbreak Reports and Intervention Studies of Nosocomial Infection (ORION) statement. Journal of Hospital Infection, 71(1), 1-14.
3 Centers for Disease Control and Prevention (2015). Identifying Healthcare Associated Infections (HAI) for NHSN Surveillance. Available: http://www.cdc.gov/nhsn/pdfs/pscmanual/2psc_identifyinghais_nhsncurrent.pdf
4 McIntyre LK, Warner KJ, et al. (2009). The incidence of post-discharge surgical site infection in the injured patient. The Journal of Trauma 66(2), 407-410.
5 Staszewicz W, Eisenring MC et al. (2015). Thirteen years of surgical site infection surveillance in Swiss hospitals. Journal of Hospital Infection 88(1), 40-47.
6 Sievert D, Ricks P, Edwards JR, Schneider A, Patel J, et al. (2013). Antimicrobial-Resistant Pathogens Associated with Healthcare-Associated Infections Summary of Data Reported to the National Healthcare Safety Network at the Centers for Disease Control and Prevention, 2009–2010. Infection Control & Hospital Epidemiology, 34, 1-14.
7 Gould DJ, Moralejo D, Drey N, Chudleigh JH. (2010). Interventions to improve hand hygiene compliance in patient care. The Cochrane Library, Issue 9.
8 Sosin, DM. (2003). Syndromic surveillance: the case for skillful investment. Biosecurity Bioterrorism: Biodefense Strategy, Practice, Science 1(4), 247-253.
9 Henning K. (2004). What is syndromic surveillance? Morbidity and Mortality Weekly Report, 53(Suppl), 5-11.
10 Chen D, Cunningham J, Moore KM, Tian J. Spatial and temporal aberration detection methods for disease outbreaks in syndromic surveillance systems. Annals of GIS. 2011;17(4): 211-220.
11 Andersson, T., Bjelkmar, P., Hulth, A., Lindh, J., Stenmark, S., & Widerström, M. (2014). Syndromic surveillance for local outbreak detection and awareness: evaluating outbreak signals of acute gastroenteritis in telephone triage, web-based queries and over-the-counter pharmacy sales. Epidemiology and Infection, 142(02), 303-313.
12 Ruple-Czerniak, A., Aceto, H.W., Bender, J.B., Paradis, M.R., Shaw, S.P., Van Metre, D.C., Weese, J.S., Wilson, D.A., Wilson, J.H. and Morley, P.S. (2013), Using Syndromic Surveillance to Estimate Baseline Rates for Healthcare-Associated Infections in Critical Care Units of Small Animal Referral Hospitals. Journal of Veterinary Internal Medicine, 27: 1392–1399.
13 Surgical Site Infection Surveillance Service (2013). Protocol for the Surveillance of Surgical Site Infection. Available: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/364412/Protocol_for_surveillance_of_surgical_site_infection_June_2013.pdf
14 Bettiol E, Rottier WC, Del Toro MD, Harbarth S, Bonten MJ et al. (2014). Improved treatment of multidrug-resistant bacterial infections: utility of clinical studies. Future Microbiology 9(6), 757-71.