To cite: Taylor KS, Mahtani KR, Aronson JK. BMJ Evidence-Based Medicine Epub ahead of print: [please include Day Month Year]. doi:10.1136/bmjebm-2020-111651

© Author(s) (or their employer(s)) 2020. No commercial reuse. See rights and permissions. Published by BMJ.

Data extraction is the stage of a systematic review that occurs between identifying eligible studies and analysing the data, whether the analysis is a qualitative synthesis or a quantitative synthesis involving the pooling of data in a meta-analysis. The aims of data extraction are to obtain information about the characteristics of each included study and its population and, for quantitative synthesis, to collect the data needed to carry out a meta-analysis. In systematic reviews, information about the included studies is also required for risk of bias assessments, but those data are not the focus of this article. Following good practice when extracting data helps make the process efficient and reduces the risk of errors and bias. Failure to follow good practice risks basing the analysis on poor-quality data: poor-quality inputs produce poor-quality outputs, leading to unreliable conclusions and invalid findings. In computer science, this is known as 'garbage in, garbage out' or 'rubbish in, rubbish out'. Furthermore, providing insufficient information about the included studies for readers to assess the generalisability of the findings will undermine the value of the pooled analysis. Such failures will make your systematic review and meta-analysis less useful than it ought to be.