There is a problem with p. Specifically, there is a problem with the declaration of statistical significance when p ≤ .05. Some years ago, the American Statistical Association, among others, called for changing the way researchers report results, including not reporting p values without also providing context for understanding what those values mean and eliminating claims of statistical significance from research papers. More recently, the American Statistical Association has become more insistent in its call for us to change not only our language but our practice as well. There are many good reasons for this insistence; in case you missed the discussion, please see the 43 articles published in The American Statistician in March 2019 for details. The editorial in that issue describes the problems with p values and statements of statistical significance in excellent detail (Wasserstein, Schirm, & Lazar, 2019).

The editors also, helpfully, introduce some potential solutions to the problems of p-value use. These solutions, which are further detailed in the collected articles, include the use of minimal important effect size (Amrhein, Trafimow, & Greenland, 2019) and the second-generation p value, which takes practical significance into account (Greevy, Welty, Blume, DuPont, & Smith, 2019). Other suggestions are available; most of us will need to work closely with statisticians to make sure we choose the most appropriate approach for our work.

Briefly, because some of you have asked us what to do about reporting statistical results in papers submitted to Nursing Research, we support the recommendation of Wasserstein et al. (2019) to accept uncertainty and to be thoughtful, open, and modest in reporting your statistical results. More specifically, and as Hayat (2010) noted almost a decade ago in this Journal, significance testing is a subjective procedure; tests of significance do not provide an objective measure of scientific evidence, nor do p values have any clinical meaning. A specific p value depends on many factors, including statistical power, effect size, and sample size. Thus, when reporting a p value, a context for understanding the value needs to be included, such as confidence intervals, odds ratios, hazard ratios, or regression coefficients, all of which provide an estimate of the magnitude of an effect. Matthews (2019) suggests that attending to the width of the confidence interval is another way to contextualize p values. The Publication Manual of the American Psychological Association, which is used by this Journal for manuscript style, will publish a new edition
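To make the reporting guidance concrete, the following is a minimal sketch (in Python, with entirely hypothetical data; the group labels, numbers, and software choices are our illustrative assumptions, not part of the editorial or any cited article) of reporting a p value alongside an interval estimate and an effect size rather than on its own.

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for two groups (illustrative assumption only).
rng = np.random.default_rng(42)
intervention = rng.normal(loc=52.0, scale=10.0, size=40)
control = rng.normal(loc=48.0, scale=10.0, size=40)

# The p value alone (Welch's t test).
result = stats.ttest_ind(intervention, control, equal_var=False)

# Context 1: estimated magnitude of effect (mean difference) with a 95% CI.
diff = intervention.mean() - control.mean()
var1 = intervention.var(ddof=1) / len(intervention)
var2 = control.var(ddof=1) / len(control)
se = np.sqrt(var1 + var2)
# Welch-Satterthwaite degrees of freedom for the CI.
df = se**4 / (var1**2 / (len(intervention) - 1) + var2**2 / (len(control) - 1))
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

# Context 2: standardized effect size (Cohen's d with pooled SD).
pooled_sd = np.sqrt(((len(intervention) - 1) * intervention.var(ddof=1)
                     + (len(control) - 1) * control.var(ddof=1))
                    / (len(intervention) + len(control) - 2))
cohens_d = diff / pooled_sd

print(f"mean difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], "
      f"Cohen's d = {cohens_d:.2f}, p = {result.pvalue:.3f}")
```

The point of reporting this way is that the interval estimate and the standardized difference convey the magnitude and precision of the effect, which the p value by itself does not.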
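For readers wondering what the second-generation p value mentioned above does, the sketch below shows the core idea as we understand it from the cited authors' work: replace the point null with an interval of practically negligible effects and ask what fraction of the interval estimate overlaps it. The function name, argument names, exact formulation, and example intervals are our illustrative assumptions, not code from the cited authors.

```python
def second_generation_p(est_low, est_high, null_low, null_high):
    """Fraction of the interval estimate [est_low, est_high] that overlaps the
    interval null [null_low, null_high] of practically negligible effects,
    with a correction that caps very wide (uninformative) intervals at 1/2.

    Illustrative sketch of the second-generation p value idea; this exact
    formulation is our assumption.
    """
    est_len = est_high - est_low
    null_len = null_high - null_low
    if est_len <= 0 or null_len <= 0:
        raise ValueError("intervals must have positive length")
    overlap = max(0.0, min(est_high, null_high) - max(est_low, null_low))
    if est_len > 2 * null_len:            # interval estimate far wider than the null zone
        return 0.5 * overlap / null_len   # capped at 1/2 (inconclusive)
    return overlap / est_len

# Example: a 95% CI for a mean difference of [1.2, 6.8] versus a null zone of
# practically negligible differences [-2, 2] (all numbers hypothetical).
print(second_generation_p(1.2, 6.8, -2.0, 2.0))  # about 0.14
```

A value near 0 indicates the data support only practically meaningful effects, a value near 1 indicates they support only trivial effects, and values in between (as in the example, about 0.14) indicate partial overlap with the trivial-effect zone.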