Bargain Basement Bayes

One of the more salutary consequences of the “replication crisis” has been a flurry of articles and blog posts re-examining basic statistical issues such as the relations between N and statistical power, the importance of effect size, the interpretation of confidence intervals, and the meaning of probability levels. A lot of the discussion of what is now often called the “new statistics” really amounts to a re-teaching (or first teaching?) of things anybody, certainly anybody with an advanced degree in psychology, should have learned in graduate school if not as an undergraduate. It should not be news, for example, that bigger N’s give you a bigger chance of getting reliable results, including being more likely to find effects that are real and not being fooled into thinking you have found effects when they aren’t real. Nor should anybody who had a decent undergrad stats teacher be surprised to learn that p-levels, effect sizes and N’s are functions of each other, such that if you know any two of them you can compute the third, and that therefore statements like “I don’t care about effect size” are absurd when said by anybody who uses p-levels and N’s.
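To see that last point in action, here is a minimal sketch (toy numbers of my own, for an independent-samples t test with two equal groups) of how the p-level falls out of the effect size and N, and how the effect size can be read right back off the p-level and N:

```python
# Illustrative sketch (not from the post): p, effect size, and N are tied together
# for an independent-samples t test with two equal groups.
from scipy import stats
import numpy as np

def p_from_d_and_n(d, n_per_group):
    """Two-tailed p-value implied by observing Cohen's d with n people per group."""
    t = d * np.sqrt(n_per_group / 2)   # t statistic implied by d for equal groups
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)

def d_from_p_and_n(p, n_per_group):
    """Cohen's d needed to reach a given two-tailed p-value with n people per group."""
    df = 2 * n_per_group - 2
    t = stats.t.isf(p / 2, df)         # critical t for that p
    return t / np.sqrt(n_per_group / 2)

print(p_from_d_and_n(d=0.5, n_per_group=30))   # ~0.058: a "medium" effect with modest N
print(d_from_p_and_n(p=0.05, n_per_group=30))  # ~0.52: the d needed to just reach p = .05
```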

But that’s not my topic for today. My topic today is Bayes’ theorem, which is an important alternative to the usual statistical methods, but which is rarely taught at the undergraduate or even graduate level. (1)  I am far from expert in Bayesian statistics. This fact gives me an important advantage: I won’t get bogged down in technical details; in fact that would be impossible, because I don’t really understand them. A problem with discussions of Bayes’ theorem that I often see in blogs and articles is that they have a way of being both technical and dogmatic. A lot of ink – virtual and real – has been spilled about the exact right way to compute Bayes Factors, and in advocating that all statistical analyses should be conducted within a Bayesian framework. I don’t think the technical and dogmatic aspects of these articles are helpful – in fact I think they are mostly harmful – when it comes to helping non-experts appreciate what thinking in a semi-Bayesian way has to offer. So, herewith is my extremely non-technical and very possibly wrong (2) appreciation of what I call Bargain Basement Bayes.

Bayes’ Formula: Forget about Bayes’ formula. I have found that even experts have to look it up every time they use it. For many purposes, it’s not needed at all. However, the principles behind the formula are important. The principles are these:

1. First, Bayes assumes that belief exists in degrees, and assigns numbers to those degrees of belief. If you are certain that something is false, it has a Bayes “probability” of 0. If you are certain it’s true, the probability is 1. If you have absolutely no idea whatsoever, the probability is .5. Everything else is in between.
Traditional statisticians hate this. They don’t think a single fact, or event, can even have a probability. Instead, they want to compute probabilities that refer to frequencies within a class, such as the number of times out of a hundred that a result would be greater than a certain magnitude under pure chance given a certain N. But really, who cares? The only reason anybody cares about this traditional kind of probability is that after you compute that nice “frequentist” result, you will use the information to decide what you believe. And, inevitably, you will make that decision with a certain degree of subjective confidence. Traditional statistics ignores and even denies this last step, which is precisely where it goes very, very wrong. In the end, beliefs are held, and decisions based on those beliefs are made, by people, not by numbers. Sartre once said that even if there is a God, you would still have to decide whether to do what He says. Even if frequentist statistics are exactly correct (3), you still have to decide what to do with them.

2. Second, Bayes begins with what you believed to be true before you got your data. And then it asks, now that you have your data, how much should you change what you used to believe? (4)
Traditional statisticians hate this even more than they hate the idea of putting numbers on subjective beliefs. They go on about “prior probabilities” and worry about how they are determined, observe (correctly) that there is no truly objective way to estimate them, and suspect that the whole process is just a complicated form of inferential cheating. But the traditional model begins by assuming that researchers know and believe absolutely nothing about their research topic. So, as they then must, they will base everything they believe on the results of their single study. If those results show that people can react to stimuli presented in the future, or that you can get people to slow their walk to a crawl by having them unscramble the word “nldekirw” (5), then that is what we have to believe. In the words of a certain winner of the Nobel Prize, “we have no choice.”
Bayes says, oh come on. Your prior belief was that these things were impossible (in the case of ESP) or, once the possibility of elderly priming was explained, that it seemed pretty darned unlikely. That’s what made the findings “counter-intuitive,” after all. Conventional statistics ignores these facts. Bayes acknowledges that claims that are unlikely to be true, a priori, need extra-strong evidence to become believable. I am about the one millionth commentator to observe that social psychology, in particular, has for too long been in thrall to the lure of the “counter-intuitive result.” Bayes explains exactly how that got us into so much trouble. Counter-intuitive, by definition, means that the finding had a low Bayesian prior. Therefore, we should have insisted on iron-clad evidence before we started believing all those cute surprising findings, and we didn’t. Maybe some of them are true; who knows at this point. But the clutter of small-N, underpowered single studies with now-you-see-it-now-you-don’t results is in a poor position to tell us which ones they are. Really, we almost need to start over.
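To put some illustrative numbers on that (my numbers, nothing official): in odds form, your posterior odds are just your prior odds times the strength of the evidence (what the ink-spillers call a Bayes factor), so a claim with a far-fetched prior stays far-fetched unless the evidence is very strong indeed.

```python
# Toy illustration (hypothetical numbers): posterior odds = prior odds * strength of evidence.
def posterior_prob(prior_prob, evidence_ratio):
    """Update a prior probability given a likelihood ratio (Bayes factor) for the evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * evidence_ratio
    return posterior_odds / (1 + posterior_odds)

print(posterior_prob(0.02, 3))    # ~0.06: one so-so study barely budges a far-fetched claim
print(posterior_prob(0.02, 20))   # ~0.29: even strong evidence leaves it more likely false than true
print(posterior_prob(0.50, 3))    # ~0.75: the same so-so evidence is persuasive for a plausible claim
```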

3. Third, Bayes is in the end all about practical decisions. Specifically, it’s about decisions to believe something, and to do something or not, in the real world. It is no accident, I think, that so many Bayesians work in applied settings and focus on topics such as weather forecasting, financial planning, and medical decisions. In all of these domains, the lesson they teach tends to be – as Kahneman and Tversky pointed out long ago – that we underuse baserates (6). In medicine, in particular, the implications are just starting to be understood in the case of screening for disease. When the baserate (aka the prior probability) is low, even highly diagnostic tests have a very high probability of yielding false positives, which entail significant physical, psychological, and financial costs. Traditional statistical thinking, which ignores baserates, leads one to think that a positive result on a test with 90% accuracy (that is, one that correctly classifies 90% of the sick and 90% of the healthy) means that the patient has a 90% chance of having the disease. But if the prevalence in the population is 1%, the actual probability of disease given a positive test is less than 10%. In subjective, Bayesian terms of course! Extrapolating this to the context of academic research, the principle implies that we overestimate the diagnosticity of single research studies, especially when the prior probability of the finding is low. I think this is why we were so willing to accept implausible, “counter-intuitive” results on the basis of inadequate evidence. To our current grief.
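Here is the arithmetic behind those screening numbers, in the Gigerenzer-style natural-frequency format mentioned in footnote 6 (and assuming, as above, that “90% accuracy” means the test catches 90% of true cases and falsely flags 10% of healthy people):

```python
# Natural-frequency version of the screening example in the text
# (assumes 90% sensitivity, 90% specificity, 1% prevalence).
population = 100_000
prevalence = 0.01
sensitivity = 0.90          # positives among the sick
false_positive_rate = 0.10  # positives among the healthy (1 - specificity)

sick = population * prevalence                    # 1,000 people
healthy = population - sick                       # 99,000 people
true_positives = sick * sensitivity               # 900
false_positives = healthy * false_positive_rate   # 9,900

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(round(p_disease_given_positive, 3))         # ~0.083 -- well under 10%
```

In other words, out of every 10,800 positive tests, only 900 come from people who actually have the disease.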

You don’t have to be able to remember Bayes’ formula to be a Bargain Basement Bayesian. But, as in all worthwhile bargain basements, you can get something valuable at a low cost.

Footnotes
1. In a recent graduate seminar that included students from several departments, I asked who had ever taken a course that taught anything about Bayes.  One person raised her hand.  Interestingly, she was a student in the business school.
2. Hi Simine.
3. They aren’t.
4. Bayes is sometimes called the “belief revision model,” which I think is pretty apt.
5. Wrinkled
6. Unless the data are presented in an accessible, naturalistic format such as that seen in the work of Gerd Gigerenzer and his colleagues, which demonstrates how to present Bayesian considerations in terms other than the intimidating-looking formula.