79 Assessing Inter-rater Reliability (IRR) of Surveillance Decisions by Infection Preventionists (IPs)

Friday, March 19, 2010: 11:15 AM
Regency VI-VII (Hyatt Regency Atlanta)
Jeanmarie Mayer, MD, University of Utah School of Medicine, Salt Lake City, UT
Janelle Howell, MPH, MHA, University of Utah School of Medicine, Salt Lake City, UT
Tom Green, PhD, University of Utah School of Medicine, Salt Lake City, UT
Michael Rubin, MD, PhD, IDEAS Center, Salt Lake City, UT
William R. Ray, University of Utah School of Medicine, Salt Lake City, UT
Brian Nordberg, BS, University of Utah School of Medicine, Salt Lake City, UT
Candace L. Hayden, MSPH, University of Utah School of Medicine, Salt Lake City, UT
Pat Nechodom, MPH, University of Utah School of Medicine, Salt Lake City, UT
Matthew Samore, MD, IDEAS Center, Salt Lake City, UT
Background: The National Healthcare Safety Network (NHSN) provides standardized surveillance definitions for infections. Applying these definitions requires subjective judgment, yet limited information exists on the IRR of IPs applying surveillance criteria in practice.
Objective: To determine the agreement between IPs reviewing the same records to identify episodes of central line-associated bloodstream infection (CLABSI).
Methods: Simulated electronic health records (EHRs) for IP review were created from actual EHRs at a 121-bed Veterans Affairs (VA) healthcare facility. Power calculations determined that the IPs could be divided into 4 groups of 4, with each group reviewing a different set of 30 records, for an efficient distribution of 120 records among 16 IPs. IPs were recruited from 16 facilities randomly selected from the 138 VA facilities. One hundred twenty patients hospitalized between Aug 2000 and Dec 2005 with a positive blood culture more than 2 days after admission were randomly sampled, with 25% oversampled from local CLABSI cases. Data relevant to surveillance, including microbiology results, bed movements with dates, caregiver notes, antimicrobial use, and chest radiograph reports, were provided in a familiar, web-based EHR format. For each submitted CLABSI report, IPs were asked to describe their decision and provide a level of certainty. For all 120 records, we also created a CLABSI reference standard using an objective algorithm based on microbiology results, time since admission, and presence of a central venous catheter.
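The abstract does not specify the reference-standard rules beyond the three inputs named above. The sketch below is a minimal, illustrative implementation of an objective classifier driven by those inputs; the class, function, and field names (e.g., classify_episode, central_line_present) and the abbreviated contaminant list are assumptions for illustration, not the study's actual algorithm.

```python
from dataclasses import dataclass

# Abbreviated, illustrative list of organisms commonly treated as skin
# contaminants when a single positive culture is the only evidence.
COMMON_CONTAMINANTS = {
    "coagulase-negative staphylococci",
    "corynebacterium",
    "bacillus",
}

@dataclass
class BloodCultureEpisode:
    organism: str
    positive_culture_count: int        # cultures positive for the same organism
    days_since_admission: int          # day of the first positive culture
    secondary_source_documented: bool  # BSI attributable to another infection site
    central_line_present: bool         # central venous catheter in place near culture date

def classify_episode(ep: BloodCultureEpisode) -> str:
    """Return 'excluded', 'contaminant', 'secondary BSI', 'primary BSI', or 'CLABSI'."""
    # Cultures within the first 2 days of admission are not hospital-onset;
    # such records were excluded from sampling in the study.
    if ep.days_since_admission <= 2:
        return "excluded"
    # A single culture growing a common skin organism is treated as a contaminant.
    if ep.organism in COMMON_CONTAMINANTS and ep.positive_culture_count < 2:
        return "contaminant"
    # A true BSI attributable to another documented infection site is secondary.
    if ep.secondary_source_documented:
        return "secondary BSI"
    # A primary BSI with a central venous catheter present counts as CLABSI.
    return "CLABSI" if ep.central_line_present else "primary BSI"

# Example: a hospital-onset primary BSI with a central line in place.
print(classify_episode(BloodCultureEpisode(
    organism="staphylococcus aureus",
    positive_culture_count=1,
    days_since_admission=6,
    secondary_source_documented=False,
    central_line_present=True,
)))  # -> CLABSI
```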
Results: We report a preliminary analysis of 150 record reviews completed by 5 IPs, including one completed set of records reviewed by 2 IPs. The mean time for each of the 5 IPs to review their 30 records was 10.6 hours (range, 6.6-13.7 hours), and the mean time to review an individual record was 21 minutes (range, 3-64 minutes). Up to 40% of case decisions were described as "uncertain". The proportion of uncertain decisions reported as CLABSI varied among IPs, ranging from 0/3 (0%) to 4/4 (100%). Based on the objective rules, 133 episodes in the 120 records were categorized as 48 (36%) contaminants and 85 (64%) true BSIs. Of the true BSIs, 23 (28%) were secondary and 62 (72%) were primary. Two thirds of the primary BSIs had evidence of a central venous catheter, yielding 41 CLABSI episodes. The overall mean kappa (95% CI) for agreement between the 2 IPs who reviewed the same completed set of records was 0.45 (-0.11 to 0.80), whereas the overall mean kappa for agreement between IPs and the CLABSI reference standard was 0.74 (0.61 to 0.87).
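For context, the kappa statistics above measure chance-corrected agreement between two sets of CLABSI determinations: a kappa of 1 indicates perfect agreement and a kappa near 0 indicates agreement no better than chance. A minimal sketch of Cohen's kappa for two raters is shown below; the rater labels and example calls are made up for illustration and are not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labeled items at their own marginal rates.
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    if expected == 1.0:
        return 1.0  # degenerate case: both raters used a single identical label
    return (observed - expected) / (1 - expected)

# Illustrative CLABSI calls (1 = CLABSI, 0 = not CLABSI) by two IPs on the same records.
ip1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
ip2 = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(ip1, ip2), 2))  # -> 0.58
```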
Conclusions: Although the preliminary data are limited, initial findings indicate that agreement between IPs was poor, similar to the agreement of each IP with an objective reference standard. Surveillance was time consuming, and a substantial number of decisions were deemed uncertain. In future work, we plan to assess more efficient surveillance methods, including a blend of automated and manual review.