
Electronic surveillance criteria for non-ventilator-associated hospital-acquired pneumonia: Assessment of reliability and validity.



OBJECTIVE: Surveillance of non-ventilator-associated hospital-acquired pneumonia (NV-HAP) is complicated by subjectivity and variability in diagnosing pneumonia. We compared a fully automatable surveillance definition using routine electronic health record data to manual determinations of NV-HAP according to surveillance criteria and clinical diagnoses.

METHODS: We retrospectively applied an electronic surveillance definition for NV-HAP to all adults admitted to Veterans Affairs (VA) hospitals from January 1, 2015, to November 30, 2020. We randomly selected 250 hospitalizations meeting NV-HAP surveillance criteria for independent review by 2 clinicians and calculated the percentage of hospitalizations with (1) clinical deterioration, (2) CDC National Healthcare Safety Network (CDC-NHSN) criteria, (3) NV-HAP according to a reviewer, (4) NV-HAP according to a treating clinician, (5) a pneumonia diagnosis in the discharge summary, and (6) discharge diagnosis codes for HAP. We assessed interrater reliability by calculating simple agreement and the Cohen kappa (κ).

RESULTS: Among 3.1 million hospitalizations, 14,023 met NV-HAP electronic surveillance criteria. Among reviewed cases, 98% had a confirmed clinical deterioration; 67% met CDC-NHSN criteria; 71% had NV-HAP according to a reviewer; 60% had NV-HAP according to a treating clinician; 49% had a discharge summary diagnosis of pneumonia; and 82% met at least 1 NV-HAP definition according to at least 1 reviewer. Only 8% had diagnosis codes for HAP. Interrater agreement was 75% (κ = 0.50) for CDC-NHSN criteria and 78% (κ = 0.55) for reviewer diagnosis of NV-HAP.

CONCLUSIONS: Electronic NV-HAP surveillance criteria correlated moderately with existing manual surveillance criteria. Reviewer variability for all manual assessments was high. Electronic surveillance using clinical data may therefore allow more consistent and efficient surveillance, with accuracy similar to manual assessments or diagnosis codes.
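The interrater statistics above (simple agreement and the Cohen kappa) can be illustrated with a short sketch. This is not the study's analysis code, and the 2×2 reviewer counts below are hypothetical, chosen only to show how a ~75% agreement can correspond to κ near 0.5 once chance agreement is discounted:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.

    table[i][j] = number of cases rated category i by reviewer 1
    and category j by reviewer 2.
    """
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Expected chance agreement from the two reviewers' marginal rates.
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 250 reviewed hospitalizations (yes/no on NV-HAP):
# 120 yes/yes, 35 yes/no, 27 no/yes, 68 no/no.
table = [[120, 35], [27, 68]]
agreement = (120 + 68) / 250   # simple agreement: 0.752
kappa = cohens_kappa(table)    # roughly 0.48 for these counts
```

Kappa is lower than raw agreement because two reviewers who each call most cases "yes" will agree often by chance alone; the statistic rescales agreement relative to that baseline.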

Keywords: surveillance criteria; surveillance; diagnosis; electronic surveillance; hospital-acquired pneumonia

Journal Title: Infection control and hospital epidemiology
Year Published: 2023



