A year after BASP banned statistical inference

Last year, as I noted, there was a big fuss about the journal Basic and Applied Social Psychology, whose editors decided to ban all statistical inference.1 No p-values, no confidence intervals, not even Bayesian posteriors; only descriptive statistics allowed.

The latest (Feb 2016) issue of Significance magazine has an interview with David Trafimow, the editor of BASP [see Vol 13, Issue 1, “Interview” section; closed access, unfortunately].

The interview suggests Trafimow still doesn’t understand the downsides of banning statistical inference. However, I do like this quote:

Before the ban, much of the reviewer commentary on submissions pertained to inferential statistical issues. With the ban in place, these issues fall by the wayside. The result has been that reviewers have focused more on basic research issues (such as the worth of the theory, validity of the research design, and so on) and applied research issues (such as the likelihood of the research actually resulting in some sort of practical benefit).

Here’s my optimistic interpretation: You know how sometimes you ask a colleague to review what you wrote, but they ignore major conceptual problems because they’re fixated on finding typos instead? If inferential statistics are playing the same role as typos—a relatively small detail that distracts from the big picture—then indeed it could be OK to downplay them.2

Finally, if banning inference forces authors to have bulletproof designs (a sample so big and well-structured that you’d trust the results without asking to see p-values or CI widths), that would truly be good for science. If they allowed, nay, required preregistered power calculations, then published the results of any sufficiently-powered experiment, this would even help with the file-drawer problem. But it doesn’t sound like they’re necessarily doing this.
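
For concreteness, the kind of preregistered power calculation I have in mind is routine to run. Here is a minimal sketch in Python using statsmodels; the assumed effect size, alpha, and power target are purely illustrative, not anything BASP prescribes:

```python
# Illustrative preregistered power calculation: how many participants per
# group would a two-arm study need to detect an assumed Cohen's d of 0.5
# with 90% power at a two-sided alpha of 0.05? (All three targets are
# assumptions made up for this sketch.)
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed standardized effect (Cohen's d)
    alpha=0.05,               # two-sided Type I error rate
    power=0.90,               # desired power
    ratio=1.0,                # equal allocation to the two groups
    alternative="two-sided",
)
print(f"Required n per group: {n_per_group:.1f}")  # round up to whole participants
```

An author would commit to numbers like these before collecting data, and the journal would then publish the results however they came out, which is what would help with the file-drawer problem.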


Footnotes:

  1. Let’s stop calling it “the journal that banned p-values.” If that’s all they’d done, then it’d be OK. Instead, it’s “the journal that banned all of statistical inference,” which I think is harder to defend.
  2. However, they’re allowing stat inference in original submissions to be reviewed—just not in the final publication. So maybe this optimistic view doesn’t hold.

2 thoughts on “A year after BASP banned statistical inference”

  1. What’s odd to me is that in many common cases in empirical psychology, the sufficient statistics for any statistical inference you would want to do are a small number of descriptive statistics. That is, if you randomize to four conditions and report the Ns, means, and sds, what else do I need to know? Really, this just makes it so any reader who cares about statistical inference (as any reasonable and expert reader should) has to do some math. [A sketch of that math appears after the comments.]

    It would be interesting to do a meta-analysis of articles in this journal to see:
    (a) are there sufficient descriptive stats to do statistical inference?
    (b) how much evidence are the studies really providing?

  2. When rejecting a paper, it’s generally easiest to highlight one glaring issue that should be sufficient for a rejection. If, for example, a significant result is obviously driven by one or two outliers, then highlighting that problem is more efficient than elaborating at great length about the flaws of a design that didn’t even find the claimed result to begin with. Criticisms in a referee report are, after all, not meant to be exhaustive (unless the recommendation is to invite a resubmission).
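
For what it’s worth, the first comment’s point is easy to make concrete: per-condition Ns, means, and sds are enough for a reader to reconstruct a standard one-way ANOVA F-test on their own. A minimal sketch in Python, with invented numbers purely for illustration:

```python
# Reconstructing a one-way ANOVA F-test from the descriptive statistics a
# paper reports (per-condition N, mean, sd). The numbers below are made up.
import numpy as np
from scipy import stats

# Hypothetical summary statistics for four randomized conditions
ns = np.array([25, 25, 25, 25])
means = np.array([3.1, 3.4, 2.9, 3.8])
sds = np.array([1.0, 1.1, 0.9, 1.2])

k = len(ns)                                  # number of conditions
N = ns.sum()                                 # total sample size
grand_mean = np.sum(ns * means) / N

ss_between = np.sum(ns * (means - grand_mean) ** 2)   # between-group sum of squares
ss_within = np.sum((ns - 1) * sds ** 2)               # within-group sum of squares
df_between, df_within = k - 1, N - k

F = (ss_between / df_between) / (ss_within / df_within)
p = stats.f.sf(F, df_between, df_within)     # upper-tail p-value
print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.3f}")
```

The same summary statistics also feed directly into pairwise comparisons (for example, scipy’s ttest_ind_from_stats), which is exactly the “some math” a motivated reader would have to do.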
