In 2008, Michael Bracken, at Yale University, summarised some of the developments in the field, highlighting the worryingly poor quality of animal studies and making the case for more preclinical systematic reviews. Korevaar et al. estimated that 163 preclinical systematic reviews were published between 2005 and 2010, while Mueller et al. identified 246 between 2009 and 2013.

As more and more animal studies were scrutinised as part of the systematic review process, it gradually became apparent that much animal research was conducted to a low standard and was therefore unable to generate robust, reliable data. This made uncomfortable reading for animal researchers, who were found to report low rates of random allocation, allocation concealment and blinded outcome assessment. Studies that take these accepted precautions to reduce bias are less likely to suggest differential effects than studies that do not. It soon became evident that large bodies of animal research had overstated the benefits of their experimental interventions: Tsilidis et al. demonstrated this clearly in preclinical neurological research, as did Crossley et al. in preclinical stroke research. The accumulating preclinical systematic reviews also revealed that animal samples are typically small, leading to underpowered and therefore unreliable studies, as Emily Sena, convenor of CAMARADES, showed in her 2014 overview. In short, systematic reviews provided overwhelming evidence that animal studies suffer from poor experimental design and a lack of scientific rigour, raising doubts about the robustness of their findings and, consequently, their clinical relevance.

Selective analysis and biased outcome reporting – the practice of reporting only the most positive outcomes and analyses from among the many performed – were also revealed to be problems in animal research. Again, these practices lead to an overestimate of beneficial treatment effects, ultimately creating a body of evidence with an inflated proportion of studies with positive results.

Incomplete reporting proved to be another limiting factor. Even basic information, such as the number of animals used in experiments, was found to be missing, as was reporting on attrition. Attrition – the loss of animals through death or exclusion – can dramatically alter the results of a study and, again, make animal studies appear more positive than they actually are.

Publication bias (the phenomenon whereby studies are more likely to be published if they present ‘positive’ findings) was found to be a significant problem, leading once more to the benefits of animal studies being overstated. Citation bias, first reported in the clinical field, was also found to be an issue in animal research. A German study of 109 investigator brochures, the documents presented to ethics review boards by those applying to conduct Phase I and II trials in humans, revealed that only 6% of the preclinical animal studies referenced in the brochures reported an outcome demonstrating no effect; the vast majority – 82% – were described as reporting positive findings.

Unsurprisingly, then, when scientists from AstraZeneca reviewed 255 protocols for forthcoming animal experiments, they found that over half needed amending to ensure proper experimental design.