Millar and colleagues1 describe an analysis of the use of promotional language (“hype”) in abstracts of National Institutes of Health (NIH)–funded research projects. They extracted more than 900 000 abstracts from the NIH RePORTER (Research Portfolio Online Reporting Tools: Expenditures and Results) archive and measured the frequency of certain terms from 1985 to 2020. They found that the use of most promotional adjectives—such as novel, critical, and innovative—gradually and continuously increased, while the use of others—such as scalable and transformative—was essentially nonexistent in 1985 but became much more common beginning in the late 2000s. The authors acknowledge that the existence of hype in grant applications is not surprising, given that “the genre is inherently promissory,” and go on to write that applicants “increasingly describe their work in subjective terms and rely on appeals to emotion.” While effective communication has long been central to the conduct of science, this report by Millar and colleagues1 highlights how scientists convey the quality—in this case the anticipated quality—of their work. Not surprisingly, scientists believe that their work is and will be of high quality. As has been widely publicized, though, there is increasing concern about a high prevalence of poor-quality science, sometimes referred to as a reproducibility crisis or a systematic absence of rigor. The NIH has articulated its concerns. In 2012, Landis et al2 called for improvements in the reporting of randomization, blinding, sample-size calculation, and data management in preclinical research.
In 2014, NIH leaders described initiatives to improve the rigor of science that the agency funds.3 Most recently, in 2021, an NIH Working Group issued a report on steps the agency and scientific community can take to enhance the rigor, transparency, and translatability of animal research.4 The report by Millar and colleagues1 raises the question of whether there are alternatives to promotional adjectives for conveying the novelty and rigor—or lack of rigor—of scientific proposals or reports. Bibliometric methods exist to distinguish science that is disruptive as opposed to developmental or incremental. There are approaches to enable no-hype, high-quality study design. Examples include the preclinical Experimental Design Assistant, which some funding agencies require and others encourage, and the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement for clinical trials. There are also tools to ensure rigorous conduct and reporting. These include preclinical registered reports, the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guideline, clinical trial registration, standardized clinical trial reports, and the CONSORT (Consolidated Standards of Reporting Trials) guideline. Beyond these examples, others have undertaken to produce the equivalent of report cards for rigor. Button et al5 found a high prevalence of underpowered studies in preclinical and clinical neuroscience and described this as a “power failure,” given the inherent likelihood that underpowered studies produce misleading, invalid findings. Ramirez et al6 systematically coded thousands of published articles, finding low rates of reporting on randomization, masking, sample-size estimation, and sex. At the NIH, the AlzPED (Alzheimer’s Disease Preclinical Efficacy Database) posts objective assessments of thousands of scientific reports across more than 20 domains.
An analysis of more than 1000 studies found low rates of sample-size calculation, blinding, and randomization.7 There are also established, well-accepted methods for grading the quality of clinical research studies that may be of interest to writers of systematic reviews and clinical guidelines. Scientists may use promotional adjectives to describe their work—both work they propose and work they report—but, as Millar and colleagues1 imply, we as a scientific community need to …