One of the more salutary consequences of the “replication crisis” has been a flurry of articles and blog posts re-examining basic statistical issues such as the relations between N and statistical power, the importance of effect size, the interpretation of confidence intervals, and the meaning of probability levels. A lot of the discussion of what is now often called the “new statistics” really amounts to a re-teaching (or first teaching?) of things anybody, certainly anybody with an advanced degree in psychology, should have learned in graduate school if not as an undergraduate. It should not be news, for example, that bigger N’s give you a bigger chance of getting reliable results, including being more likely to find effects that are real and less likely to be fooled into thinking you have found effects that aren’t real. Nor should anybody who had a decent undergrad stats teacher be surprised to learn that p-levels, effect sizes and N’s are functions of each other, such that if you know any two of them you can compute the third, and that therefore statements like “I don’t care about effect size” are absurd when said by anybody who uses p-levels and N’s.

But that’s not my topic for today. My topic today is Bayes’ theorem, which is an important alternative to the usual statistical methods, but which is rarely taught at the undergraduate or even graduate level. (1) I am far from expert about Bayesian statistics. This fact gives me an important advantage: I won’t get bogged down in technical details; in fact that would be impossible, because I don’t really understand them. A problem with discussions of Bayes’ theorem that I often see in blogs and articles is that they have a way of being both technical and dogmatic. A lot of ink – virtual and real – has been spilled about the exact right way to compute Bayes Factors, and in advocacy for conducting all statistical analyses within a Bayesian framework. I don’t think the technical and dogmatic aspects of these articles are helpful – in fact I think they are mostly harmful – for helping non-experts to appreciate what thinking in a semi-Bayesian way has to offer. So, herewith my extremely non-technical and very possibly wrong (2) appreciation of what I call Bargain Basement Bayes.

**Bayes Formula:** Forget about Bayes Formula. I have found that even experts have to look it up every time they use it. For many purposes, it’s not needed at all. However, the principles behind the formula are important. The principles are these:

1. First, Bayes assumes that belief exists in degrees, and assigns numbers to those degrees of belief. If you are certain that something is false, it has a Bayes “probability” of 0. If you are certain it’s true, the probability is 1. If you have absolutely no idea whatsoever, the probability is .5. Everything else is in between.

Traditional statisticians hate this. They don’t think a single fact, or event, can even have a probability. Instead, they want to compute probabilities that refer to frequencies within a class, such as the number of times out of a hundred that a result would exceed a certain magnitude under pure chance given a certain N. But really, who cares? The only reason anybody cares about this traditional kind of probability is because after you compute that nice “frequentist” result, you will use the information to decide what you believe. And, inevitably, you will make that decision with a certain degree of subjective confidence. Traditional statistics ignores and even denies this last step, which is precisely where it goes very, very wrong. In the end, beliefs are held by and decisions based on those beliefs are made by people, not numbers. Sartre once said that even if there is a God, you would still have to decide whether to do what He says. Even if frequentist statistics are exactly correct (3) you still have to decide what to do with them.

2. Second, Bayes begins with what you believed to be true before you got your data. And then it asks, now that you have your data, how much should you change what you used to believe? (4)

Traditional statisticians hate this even more than they hate the idea of putting numbers on subjective beliefs. They go on about “prior probabilities” and worry about how they are determined, observe (correctly) that there is no truly objective way to estimate them, and suspect that the whole process is just a complicated form of inferential cheating. But the traditional model begins by assuming that researchers know and believe absolutely nothing about their research topic. So, as they then must, they will base everything they believe on the results of their single study. If those results show that people can react to stimuli presented in the future, or that you can get people to slow their walks to a crawl by having them unscramble the word “nldekirw” (5) then that is what we have to believe. In the words of a certain winner of the Nobel Prize, “we have no choice.”

Bayes says, oh come on. Your prior belief was that these things were impossible (in the case of ESP) or, once the possibility of elderly priming was explained, that it seemed pretty darned unlikely. That’s what made the findings “counter-intuitive,” after all. Conventional statistics ignores these facts. Bayes acknowledges that claims that are unlikely to be true, a priori, need extra-strong evidence to become believable. I am about the one millionth commentator to observe that social psychology, in particular, has for too long been in thrall to the lure of the “counter-intuitive result.” Bayes explains exactly how that got us into so much trouble. Counter-intuitive, by definition, means that the finding had a low Bayesian prior. Therefore, we should have insisted on iron-clad evidence before we started believing all those cute surprising findings, and we didn’t. Maybe some of them are true; who knows at this point. But the clutter of small-N, underpowered single studies with now-you-see-it-now-you-don’t results is in a poor position to tell us which ones are. Really, we almost need to start over.

3. Third, Bayes is in the end all about practical decisions. Specifically, it’s about decisions to believe something, and to do something or not, in the real world. It is no accident, I think, that so many Bayesians work in applied settings and focus on topics such as weather forecasting, financial planning, and medical decisions. In all of these domains, the lesson they teach tends to be, as Kahneman and Tversky pointed out long ago, that we underuse baserates (6). In medicine, in particular, the implications are just starting to be understood in the case of screening for disease. When the baserate (aka the prior probability) is low, then even highly diagnostic tests have a very high probability of yielding false positives, which entail significant physical, psychological, and financial costs. Traditional statistical thinking, which ignores baserates, leads one to think that a positive result of a test with 90% accuracy means that the patient has a 90% chance of having the disease. But if the prevalence in the population is 1%, the actual probability given a positive test is less than 10%. In subjective, Bayesian terms of course! Extrapolating this to the context of academic research, the principle implies that we overestimate the diagnosticity of single research studies, especially when the prior probability of the finding is low. I think this is why we were so willing to accept implausible, “counter-intuitive” results on the basis of inadequate evidence. To our current grief.
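The screening arithmetic above can be checked directly. Here is a minimal sketch, reading the text’s “90% accuracy” as both 90% sensitivity and 90% specificity (that reading is my assumption), with the 1% prevalence from the paragraph:

```python
# Bayes' rule for a positive screening test.
# Assumed numbers: 90% sensitivity, 90% specificity, 1% prevalence.
prevalence = 0.01            # prior P(disease)
sensitivity = 0.90           # P(positive | disease)
false_positive_rate = 0.10   # P(positive | no disease) = 1 - specificity

# Total probability of testing positive, sick or not.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Posterior probability of disease given a positive test.
posterior = sensitivity * prevalence / p_positive

print(round(posterior, 3))  # 0.083 -- under 10%, as the text says
```

Almost all the positives come from the 99% of healthy people, which is why the posterior lands near 8% rather than 90%.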

You don’t have to be able to remember Bayes’ formula to be a Bargain Basement Bayesian. But, as in all worthwhile bargain basements, you can get something valuable at a low cost.

Footnotes

1. In a recent graduate seminar that included students from several departments, I asked who had ever taken a course that taught anything about Bayes. One person raised her hand. Interestingly, she was a student in the business school.

2. Hi Simine.

3. They aren’t.

4. Bayes is sometimes called the “belief revision model,” which I think is pretty apt.

5. Wrinkled

6. Unless the data are presented in an accessible, naturalistic format such as seen in the work by Gerd Gigerenzer and his colleagues, which demonstrates how to present Bayesian considerations in terms other than the intimidating-looking formula.

Extremely lucid explanation of Bayes that works for both subjective as well as “objective” Bayesian analyses. Interestingly, if one applies a prior probability of 0 to an event, then no amount of empirical data can change that belief.
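The zero-prior point follows directly from Bayes’ rule: if P(H) = 0, the numerator of the posterior is zero no matter how strong the evidence. A hypothetical sketch (the numbers are illustrative, not from the comment):

```python
def posterior(prior, likelihood, likelihood_if_false):
    """P(H | data) via Bayes' rule for a binary hypothesis H."""
    numerator = likelihood * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Overwhelming evidence (data ~1000x more likely under H) cannot
# move a prior of exactly zero:
print(posterior(prior=0.0, likelihood=0.999, likelihood_if_false=0.001))   # 0.0

# The same evidence with a tiny but nonzero prior is persuasive:
print(posterior(prior=0.01, likelihood=0.999, likelihood_if_false=0.001))
```

This is the usual argument for reserving priors of exactly 0 or 1 for claims one is prepared to hold regardless of any possible data.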

A point that I believe is often overlooked is that the Bayesian and classical frequentist approaches differ only in their initial conclusions. The Bayesian approach tempers the data with the beliefs one had before collecting and observing the data. As one gathers more data (runs more studies, others conduct replication studies, etc.), all of the approaches will converge to the same conclusion. We only see dramatic differences between Bayesian perspectives and classical interpretations when the data are uncertain and weak, such as when p = .04 and the confidence interval is wide. With weak data, regardless of significance, the choice of statistical philosophy matters greatly. With abundant data it becomes essentially irrelevant, as all approaches will lead not to Rome but to the correct conclusion. This is why very large samples and replication work are so critical.
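The convergence claim can be illustrated with a conjugate Beta-Binomial model (my example, not the commenter’s): with few observations, two very different priors pull the estimates apart; with many, both posteriors collapse onto the frequentist estimate.

```python
# Beta(a, b) prior on a success rate; after observing `successes`
# out of n trials, the posterior mean is (a + successes) / (a + b + n).
def posterior_mean(a, b, successes, n):
    return (a + successes) / (a + b + n)

true_rate = 0.7
for n in (10, 100, 10_000):
    successes = round(true_rate * n)              # idealized data at the true rate
    skeptic = posterior_mean(1, 9, successes, n)  # prior mean 0.1
    believer = posterior_mean(9, 1, successes, n) # prior mean 0.9
    mle = successes / n                           # frequentist estimate
    print(n, round(skeptic, 3), round(believer, 3), round(mle, 3))
```

At n = 10 the skeptic and believer disagree badly; by n = 10,000 both are within a fraction of a percent of the frequentist answer, which is the commenter’s point about large samples and replication.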

“Extremely lucid explanation of Bayes that works for both subjective as well as “objective” Bayesian analyses”

The explanation completely ignores objective Bayes: according to objective Bayes, probability does not assign numbers to degrees of belief in a proposition; rather, it quantifies the plausibility of a proposition (no belief there). That’s objective Bayes as represented by E.T. Jaynes, G. Box, or R. Cox. I’m not sure what the quotation marks around “objective” are supposed to indicate, or who the “objective” Bayesians are.

The omission of objective Bayes from the discussion is disappointing, since modern Bayesian textbooks such as Gelman’s or Kruschke’s prefer this interpretation, and so do most practicing Bayesians. Actually, I don’t know of any active (in terms of publishing and research) subjective Bayesians.

The question in any Bayesian analysis always rests on the choice of the prior distribution. This may be purely subjective, as in the example outlined in the introduction to Wagenmakers, Wetzels, Borsboom, & van der Maas’s (2011 link) critique of Bem’s work. Alternatively, one can use objective Bayes, such as the Jeffreys prior, which yields posterior distributions whose Bayesian credible intervals match frequentist confidence intervals in many cases. The broader point is that the prior is always a choice in the analysis and assigns prior probabilities to parameter values. The purely subjective prior is difficult to reconcile with the scientific method, since different researchers faced with the same data can then reach different conclusions. Much of the recent work I have been reading on Bayes factors concerns the choices one makes for the prior distributions of the hypotheses (e.g., a point null hypothesis vs. a specified distribution on the alternative). Here there is more room for subjectivity to enter the analysis, but that is for another time.
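One concrete case of the credible-interval/confidence-interval match is the standard textbook example of a normal mean with known variance under a flat (improper) prior (my illustration, not from the comment): the posterior is N(x̄, σ²/n), so the two intervals coincide exactly.

```python
from statistics import NormalDist

# Normal mean, known sigma, flat prior: posterior is N(xbar, sigma^2/n),
# so the 95% credible interval equals the 95% confidence interval.
xbar, sigma, n = 10.0, 2.0, 25   # hypothetical sample summary
se = sigma / n ** 0.5
z = NormalDist().inv_cdf(0.975)  # ~1.96

ci = (xbar - z * se, xbar + z * se)        # frequentist 95% CI
posterior = NormalDist(mu=xbar, sigma=se)  # Bayesian posterior for the mean
credible = (posterior.inv_cdf(0.025), posterior.inv_cdf(0.975))

print(ci)        # the two pairs are identical
print(credible)
```

The Jeffreys prior plays the analogous role for other models (e.g., Beta(1/2, 1/2) for a binomial rate), where the match is approximate rather than exact.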

My take on Funder’s post is that, as readers of published work, we should often adopt a skeptical perspective, and the subjective skeptical prior works well as a heuristic for the reader. This becomes even more important when one considers possible researcher degrees of freedom and p-hacking as sources of bias in published work. There is no panacea in terms of reporting results that will cure the ills afflicting the field. However, taking care in how we consume and digest those results is a very important step.

Readers might find this article interesting: “The Bayesian New Statistics: Two Historical Trends Converge” blogged about here and available at SSRN here.

An example of a prior of zero – the effect of homeopathic medicine on [whatever you like]. An infinite dilution of an ingredient can’t have an effect. Bayesians never waver at news of successful homeopathy trials, not even large and apparently well executed ones. Frequentists, in contrast, wobble for a bit until the trial is proven fraudulent.
