80 Can electronic algorithm (EA) surveillance for central line-associated bloodstream infections (CLABSIs) improve comparisons of CLABSI rates between hospitals?

Friday, March 19, 2010: 11:30 AM
Regency VI-VII (Hyatt Regency Atlanta)
Michael Y. Lin, MD, MPH, Rush University Medical Center, Chicago, IL
Keith F. Woeltje, MD, PhD, Washington University School of Medicine, St. Louis, MO
Yosef M. Khan, MBBS, MPH, Ohio State University Medical Center, Columbus, OH
Joshua A. Doherty, BS, Washington University School of Medicine, St. Louis, MO
Tara B. Borlawsky, MA, Ohio State University Medical Center, Columbus, OH
Kurt B. Stevenson, MD, MPH, Ohio State University Medical Center, Columbus, OH
Bala Hota, MD, MPH, John H. Stroger, Jr. Hospital of Cook County, Chicago, IL
Robert A. Weinstein, MD, John H. Stroger, Jr. Hospital of Cook County and Rush University Medical Center, Chicago, IL
William E. Trick, MD, John H. Stroger, Jr. Hospital of Cook County, Chicago, IL

Background:

Between-hospital comparisons of CLABSI rates are improved if surveillance definitions are consistently applied across institutions; however, infection preventionists (IPs) apply surveillance definitions with some subjectivity, degrading inter-observer reliability. EAs rely on objective criteria for CLABSI detection.

Objective:

1) To assess the level of agreement between EA and IP surveillance on a sample of positive blood culture episodes. 2) To compare both EA and IP against a standardized review (SR). 3) To assess variation of agreement (heterogeneity) from one intensive care unit (ICU) to another, and whether ignoring episodes with only a single common skin commensal (CSC) improves agreement and reduces heterogeneity.

Methods:

Seven ICUs (4 MICU, 3 SICU) from 4 medical centers participated (2004-2006). A random sample of positive blood culture episodes was evaluated by three methods: (1) “IP”: IPs prospectively performed routine CLABSI surveillance using pre-2008 National Healthcare Safety Network (NHSN) surveillance definitions (these definitions allowed a single CSC to be considered a true CLABSI if appropriate antibiotic therapy was instituted); (2) “EA”: an electronic algorithm approximating pre-2008 NHSN definitions was applied retrospectively; (3) “SR”: a single study IP at each medical center retrospectively performed a blinded standardized review using pre-2008 NHSN definitions. Kappa (K) agreement was assessed between methods, and heterogeneity among ICU strata was assessed using a chi-square statistic for equality of kappa values (SAS 9.1.3). Analyses were repeated after classifying blood culture episodes with only a single CSC as CLABSI-negative for all 3 methods (post-2008 NHSN approximation).
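The agreement statistics above can be illustrated with a minimal sketch using standard large-sample formulas for Cohen's kappa and an inverse-variance chi-square test for equality of kappas across strata. This is not the authors' SAS code, and the 2x2 counts shown are hypothetical, for illustration only:

```python
import math

def cohens_kappa(a, b, c, d):
    """Cohen's kappa and its large-sample standard error from a 2x2
    agreement table: a = both methods call CLABSI, d = both call
    no-CLABSI, b and c = the two kinds of discordance."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))   # approximate SE
    return kappa, se

def kappa_homogeneity_chi2(kappa_se_pairs):
    """Chi-square statistic for equality of kappas across strata
    (inverse-variance weights); df = number of strata - 1."""
    weights = [1 / se**2 for _, se in kappa_se_pairs]
    pooled = sum(w * k for (k, _), w in zip(kappa_se_pairs, weights)) / sum(weights)
    chi2 = sum(w * (k - pooled) ** 2 for (k, _), w in zip(kappa_se_pairs, weights))
    return chi2, len(kappa_se_pairs) - 1

# Hypothetical table: 20 concordant-positive, 65 concordant-negative episodes
k, se = cohens_kappa(20, 5, 10, 65)
lo, hi = k - 1.96 * se, k + 1.96 * se
print(f"kappa = {k:.3f}, 95% CI {lo:.2f} - {hi:.2f}")  # kappa = 0.625
```

A large chi-square statistic (small P value) from `kappa_homogeneity_chi2` indicates that agreement varies from one ICU stratum to another, as reported for the IP/SR comparison.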

Results:

A total of 586 positive blood culture episodes were reviewed (Table). Agreement between EA and IP was poor (K = 0.12, 95% confidence interval [CI] 0.07 – 0.18). Agreements for EA/SR and IP/SR were equivalent and both higher than EA/IP agreement; however, both showed significant heterogeneity among ICU strata. Approximating post-2008 NHSN definitions substantially improved EA/SR agreement (K = 0.59, CI 0.52 – 0.65) but had little effect on IP/SR agreement. Furthermore, under the post-2008 NHSN approximation, inter-ICU heterogeneity was no longer present for EA/SR agreement (P = 0.78), while considerable heterogeneity remained for IP/SR agreement (P = 0.02).

Conclusions:

Using a blinded standardized review as the comparator, after ignoring single CSCs, an electronic algorithm had better agreement and less between-ICU heterogeneity than routine IP surveillance. Significant heterogeneity of agreement between human reviews (IP/SR) suggests that IPs inconsistently apply CLABSI surveillance definitions. Objective methods of surveillance such as EA should lead to more valid inter-institution comparisons of CLABSI rates.