Abstract Comparative accuracy studies evaluate the relative performance of two or more diagnostic tests. As with any other form of research, such studies should be reported in an informative manner, to allow replication and to be useful for decision‐making. In this study we aimed to assess whether and how components of test comparisons were reported in comparative accuracy studies. We evaluated 100 comparative accuracy studies, published in 2015, 2016 or 2017, randomly sampled from 238 comparative accuracy systematic reviews. We extracted information on 20 reporting items, pertaining to the identification of the test comparison, its validity, and the actual results of the comparison. About a third of the studies (n = 36) did not report the comparison as a study objective or hypothesis. Although most studies (n = 86) reported how participants had been allocated to index tests, we often could not evaluate whether test interpreters had been blinded to the results of other index tests (n = 40; among 59 applicable studies), nor could we identify the sequence of index tests (n = 52; among 90 applicable studies) or the methods for comparing measures of accuracy (n = 59). Two‐by‐four table data (revealing the agreement between index tests) were reported by only 9 of 90 paired comparative studies. More than half of the studies (n = 64) did not provide measures of statistical imprecision for comparative accuracy. Our findings suggest that components of test comparisons are frequently missing or incompletely described in comparative accuracy studies included in systematic reviews. Explicit guidance for reporting comparative accuracy studies may facilitate the production of full and informative study reports.
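For readers unfamiliar with the two‐by‐four table mentioned above: in a paired design, each participant receives both index tests, so results can be cross-classified as (A+,B+), (A+,B−), (A−,B+), (A−,B−) within the diseased and non-diseased groups. The following is a minimal illustrative sketch with made-up counts (not data from this study) showing how such a table supports a paired comparison of sensitivities, together with a Wald-type confidence interval as one possible measure of statistical imprecision:

```python
import math

# Hypothetical paired two-by-four table counts among diseased participants:
# a = both tests positive, b = A+ only, c = B+ only, d = both negative.
a, b, c, d = 40, 8, 3, 9
n = a + b + c + d  # number of diseased participants

sens_A = (a + b) / n  # sensitivity of index test A
sens_B = (a + c) / n  # sensitivity of index test B
diff = sens_A - sens_B  # paired difference, equal to (b - c) / n

# Wald standard error for a paired difference in proportions;
# only the discordant cells (b, c) drive the comparison.
se = math.sqrt((b + c) - (b - c) ** 2 / n) / n
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"Sens A = {sens_A:.3f}, Sens B = {sens_B:.3f}")
print(f"Difference = {diff:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Note that reporting only the marginal two-by-two tables for each test would lose the discordant counts b and c, which is why the abstract highlights the scarcity of two‐by‐four table data.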