
No evidence for nudging after adjusting for publication bias



Thaler and Sunstein’s “nudge” (1) has spawned a revolution in behavioral science research. Despite its popularity, the “nudge approach” has been criticized for having a “limited evidence base” (e.g., ref. 2). Mertens et al. (3) seek to address that limitation with a timely and comprehensive meta-analysis. Mertens et al.’s headline finding is that “choice architecture [nudging] is an effective and widely applicable behavior change tool” (p. 8). We propose that their finding of “moderate publication bias” (p. 1) is the real headline; when this publication bias is appropriately corrected for, no evidence for the effectiveness of nudges remains (Fig. 1).

Mertens et al. (3) find significant publication bias via Egger regression. Their sensitivity analysis (4) indicates that the true effect size could be as low as d = 0.08 if publication bias is severe. Mertens et al. argue that severe publication bias is only partially supported by the funnel plot, and they proceed largely without taking publication bias into account in their subsequent analyses. However, the reported Egger coefficient (b = 2.10) is “severe” (5).

A newly proposed bias-correction technique, robust Bayesian meta-analysis (RoBMA) (6), avoids an all-or-none debate over whether or not publication bias is “severe.” RoBMA simultaneously applies 1) selection models that estimate relative publication probabilities (7) and 2) models of the relationship between effect sizes and SEs [i.e., Precision Effect Test and Precision Effect Estimate with Standard Error (6, 8, 9)]. Multimodel inference is then guided mostly by those models that predict the observed data best (6, 9, 10). RoBMA makes multimodel inferences about the presence or absence of an effect, heterogeneity, and publication bias (6, 9).
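To make the Egger diagnostic concrete, the following is a minimal sketch of an Egger-style regression test on simulated data (not the Mertens et al. dataset): effect sizes are regressed on their standard errors with inverse-variance weights, and a nonzero SE coefficient b signals funnel-plot asymmetry. The simulation parameters (number of studies, SE range, bias slope) are illustrative assumptions.

```python
import numpy as np
from math import erfc, sqrt

# Simulated meta-analytic dataset with built-in small-study bias
# (illustrative parameters; not the Mertens et al. data).
rng = np.random.default_rng(1)
k = 2000
se = rng.uniform(0.05, 0.5, k)        # per-study standard errors
d = 2.0 * se + rng.normal(0.0, se)    # observed effects grow with SE (bias slope = 2)

# Egger-style test: weighted regression of effect size on SE,
# weights = inverse variance. The SE slope b is the bias coefficient.
w = 1.0 / se                          # sqrt of the inverse-variance weights
X = np.column_stack([np.ones(k), se]) * w[:, None]
y = d * w
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (k - 2)             # residual variance of the weighted fit
cov = sigma2 * np.linalg.inv(X.T @ X)
b = beta[1]                           # Egger bias coefficient
t = b / np.sqrt(cov[1, 1])
p = erfc(abs(t) / sqrt(2.0))          # two-sided p, normal approximation
```

With the bias slope set to 2, the recovered coefficient b lands near 2 and the test rejects symmetry, mirroring how a large Egger coefficient (such as the reported b = 2.10) indicates severe small-study effects.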
Table 1 compares the unadjusted results to the publication bias–adjusted results.* Since publication bias–corrected three-level selection models are computationally intractable, we analyzed the data in two ways: 1) ignoring the three-level structure (column 2) and 2) using only the most precise estimate from studies with multiple results (column 3). Strikingly, there is an absence of evidence for an overall effect and evidence against an effect in the “information” and “assistance” intervention categories, whereas the evidence is undecided for “structure” interventions. When using only the most precise estimates, we further find evidence against an effect in most of the domains, apart from “other,” “food,” and “prosocial” (where the evidence is indecisive), and weak evidence for the overall effect. However, all intervention categories and domains apart from “finance” show evidence for heterogeneity, which implies that some nudges might be effective even when there is evidence against the mean effect in the corresponding category or domain.

[Fig. 1 (forest plot of Cohen's d by intervention category): combined estimate 0.04 [0.00, 0.14], BF01 = 0.95.]
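The quantity BF01 reported above is a Bayes factor for the null over the alternative. As a toy illustration of the idea (not the RoBMA machinery, which averages over selection and SE-based models), the sketch below computes a BIC-approximated BF01 comparing a zero-mean model to a free-mean fixed-effect model for a set of effect sizes; the data are constructed by hand and all parameters are assumptions for illustration.

```python
import numpy as np

# Hand-constructed "studies": equal SEs, symmetric deviations (mean exactly 0).
k = 50
se = np.full(k, 0.1)
noise = 0.1 * np.tile([-1.0, 1.0], k // 2)

def neg2loglik(d, se, mu):
    """-2 x Gaussian log-likelihood of observed effects given mean mu."""
    return float(np.sum(((d - mu) / se) ** 2 + np.log(2 * np.pi * se**2)))

def bf01(d, se):
    """BIC-approximated Bayes factor for the null (mu = 0) over mu free."""
    mu_hat = np.sum(d / se**2) / np.sum(1.0 / se**2)   # inverse-variance mean
    bic0 = neg2loglik(d, se, 0.0)                      # null: no free parameters
    bic1 = neg2loglik(d, se, mu_hat) + np.log(len(d))  # one free parameter
    return float(np.exp((bic1 - bic0) / 2.0))

print(bf01(noise, se))        # null data: BF01 > 1, evidence for no effect
print(bf01(0.3 + noise, se))  # shifted data: BF01 << 1, evidence for an effect
```

In this convention BF01 > 1 favors the null and BF01 < 1 favors an effect, so a value near 1 (like the combined BF01 = 0.95 in Fig. 1) is an absence of evidence either way rather than evidence against an effect.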

Keywords: effect; publication; publication bias; evidence; nudging

Journal Title: Proceedings of the National Academy of Sciences of the United States of America
Year Published: 2022


