Does (effect) Size Matter?

Personality psychologists wallow in effect size; the ubiquitous correlation coefficient, Pearson’s r, is central to nearly every research finding they report.  As a consequence, discussions of relationships between personality variables and outcomes are routinely framed by assessments of their strength.  For example, a landmark paper reviewed predictors of divorce, mortality, and occupational achievement, and concluded that personality traits have associations with these life outcomes that are as strong as or stronger than traditional predictors such as socio-economic status or cognitive ability (Roberts et al., 2007).  This is just one example of how personality psychologists routinely calculate, care about, and even sometimes worry about the size of the relationships between their theoretical variables and their predicted outcomes.

Social psychologists, not so much.  The typical report in experimental social psychology focuses on the p-level, the probability of obtaining a difference between experimental groups at least as large as the one observed if the null hypothesis of no difference were true.  If this probability is .05 or less, then: Success!  While effect sizes (usually Cohen’s d or, less often, Pearson’s r) are reported more often than they used to be – probably because the APA Publication Manual explicitly requires it (a requirement not always enforced) – the discussion of the theoretical or even the practical importance of the effect typically centers on whether it exists.  The size simply doesn’t matter.

Is this description an unfair caricature of social psychological research practice?  That’s what I thought until recently.  Even though the typical statistical education of many experimentally-oriented psychologists bypasses extensive discussion of effect size in favor of the ritual of null-hypothesis testing, I assumed that the smarter social psychologists grasped that an important part of scientific understanding involves ascertaining not just whether some relationship between two variables “exists,” but how big that relationship is and how it compares to various benchmarks of theoretical or practical utility.

It turns out I was wrong.  I recently had an email exchange with a prominent social psychologist whom I greatly respect.[i]  I was shocked, therefore, when he wrote the following[ii]:

 …the key to our research… [is not] to accurately estimate effect size. If I were testing an advertisement for a marketing research firm and wanted to be sure that the cost of the ad would produce enough sales to make it worthwhile, effect size would be crucial. But when I am testing a theory about whether, say, positive mood reduces information processing in comparison with negative mood, I am worried about the direction of the effect, not the size (indeed, I could likely change the size by using a different manipulation of mood, a different set of informational stimuli, a different contextual setting for the research — such as field versus lab). But if the results of such studies consistently produce a direction of effect where positive mood reduces processing in comparison with negative mood, I would not at all worry about whether the effect sizes are the same across studies or not, and I would not worry about the sheer size of the effects across studies. This is true in virtually all research settings in which I am engaged. I am not at all concerned about the effect size (except insofar as very small effects might require larger samples to find clear evidence of the direction of the effect — but this is more of a concern in the design phase, not in interpreting the meaning of the results). In other words, I am yet to develop a theory for which an effect size of r = .5 would support the theory, but an effect size of r = .2 (in the same direction) would fail to support it (if the effect cannot be readily explained by chance). Maybe you have developed such theories, but most of our field has not.

To this comment, I had three reactions.

First, I was startled by the claim that social psychologists don’t and shouldn’t care about effect size. I began my career during the dark days of the Mischelian era, and the crux of Mischel’s critique was that correlations between personality traits and behavioral outcomes rarely exceed .30. He never denied that the correlations were significant, mind you, just that they weren’t big enough to matter to anybody on either practical or theoretical grounds. Part of the sport was to square this correlation, and state triumphantly (and highly misleadingly) that therefore personality only “explains” “9% of the variance.”  Social psychologists of the era LOVED this critique[iii]! Some still do. Oh, if only one social psychologist had leapt to personality psychology’s defense in those days, and pointed out that effect size doesn’t matter as long as we have the right sign on the correlation… we could have saved ourselves a lot of trouble (Kenrick & Funder, 1988).

Second, I am about 75% joking in the previous paragraph, but the 25% that’s serious is that I actually think that Mischel made an important point – not that .30 was a small effect size (it isn’t), but that effect size should  be the name of the game.  To say that an effect “exists” is a remarkably simplistic statement that on close examination means almost nothing.  If you work with census data, for example, EVERYTHING — every comparison between two groups, every correlation between any two variables — is statistically significant at the .000001 level. But the effect sizes are generally teeny-tiny, and of course lots of them don’t make any sense either (perhaps these should be considered “counter-intuitive” results). Should all of these findings be taken seriously?
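A minimal sketch (with made-up numbers, not actual census data) of why “significant” by itself says so little: with a huge sample, even a minuscule correlation sails past any conventional p-level while explaining essentially none of the variance.

```python
# Hypothetical illustration: two-tailed p-value for a Pearson r with sample size n.
from math import sqrt
from scipy import stats

def p_for_correlation(r, n):
    """Two-tailed p-value for a correlation r observed in a sample of size n."""
    t = r * sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(p_for_correlation(0.005, 1_000_000))  # roughly 6e-07: "significant," yet it explains only 0.0025% of the variance
print(p_for_correlation(0.30, 100))         # roughly .002: a far larger effect with a less impressive p-level
```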

Third, if the answer is no, then we have to decide how big an effect is in fact worth taking seriously. And not just for purposes of marketing campaigns! If, for example, a researcher wants to say something like “priming effects can overwhelm our conscious judgment” (I have read statements like that), then we need to start comparing effect sizes. Or, if we are just going to say that “holding a hot cup of coffee makes you donate more money to charity” (my favorite recent forehead-slapping finding) then the effect size is important for theoretical, not just practical purposes, because a small effect size implies that a sizable minority is giving LESS money to charity, and that’s a theoretical problem, not just a practical one.  More generally, the reason a .5 effect size is more convincing, theoretically, than a .2 effect size is that the theorist has far fewer participants who did the opposite of what the theory predicted to explain away.
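To make that last point concrete, Rosenthal and Rubin’s Binomial Effect Size Display (discussed further below) offers a rough translation: in a balanced two-group comparison, a correlation of r corresponds to about 50 + 100(r/2) percent of cases going in the predicted direction.  A minimal sketch:

```python
# Binomial Effect Size Display (Rosenthal & Rubin, 1982), rough reading:
# a correlation r corresponds to a "success rate" of .50 + r/2 in one group
# versus .50 - r/2 in the other; the complement is the share of cases going
# against the prediction -- the share the theorist must explain away.
def besd(r):
    predicted = 0.50 + r / 2      # proportion behaving as the theory predicts
    contrary = 1 - predicted      # proportion doing the opposite
    return predicted, contrary

for r in (0.2, 0.3, 0.5):
    predicted, contrary = besd(r)
    print(f"r = {r:.1f}: {predicted:.0%} as predicted, {contrary:.0%} contrary")
# r = 0.2: 60% as predicted, 40% contrary
# r = 0.3: 65% as predicted, 35% contrary (right "about two times out of three")
# r = 0.5: 75% as predicted, 25% contrary
```

At r = .2, roughly 40 percent of cases go against the prediction; at r = .5, only about 25 percent do.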

Still, it’s difficult to set a threshold for how big is big enough.  As my colleague pointed out in a subsequent e-mail – and as I’ve written myself, in the past – there are many reasons to take supposedly “small” effects seriously.  Psychological phenomena are determined by many variables, and to isolate one that has an effect on an interesting outcome is a real achievement, even though in particular instances it might be overwhelmed by other variables with opposite influences.  Rosenthal and Rubin (1982) demonstrated that a .30 correlation is enough to be right about two times out of three.  Ahadi and Diener (1989) showed that if just a few factors affect a common outcome, the maximum size of the effect of any one of them is severely constrained.  In a related vein, Abelson (1985) calculated how very small effect sizes – in particular, the relationship between batting average and performance in a single at-bat – can cumulate fairly quickly into large differences in outcomes (or ballplayer salaries).  So far be it from me to imply that a “small” effect, by any arbitrary standard, is unimportant.
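Abelson’s cumulation argument can also be illustrated with a toy simulation (my own round numbers, not his exact figures): per at-bat, the correlation between which batter is up and whether a hit occurs is tiny, yet over a full season the better hitter almost always ends up ahead.

```python
# Toy illustration of Abelson's (1985) cumulation point, using made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
p_good, p_weak, at_bats = 0.320, 0.260, 550   # hypothetical batting averages and season length

# Single at-bat effect size: correlation between batter identity and getting a hit
batter = np.repeat([1, 0], 100_000)                          # 1 = better hitter
hit = rng.random(200_000) < np.where(batter == 1, p_good, p_weak)
r_single = np.corrcoef(batter, hit)[0, 1]
print(f"per-at-bat r is about {r_single:.3f}; variance explained is about {r_single**2:.4f}")

# Season-level consequence: how often does the better hitter end up with more hits?
seasons = 10_000
good_hits = rng.binomial(at_bats, p_good, seasons)
weak_hits = rng.binomial(at_bats, p_weak, seasons)
print(f"better hitter ahead in {np.mean(good_hits > weak_hits):.0%} of simulated seasons")
```

The per-at-bat effect explains well under one percent of the variance, yet the better hitter finishes the season ahead in the large majority of simulated seasons.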

Now we are getting near the crux of the matter.  Arbitrary standards – whether the .05 p-level threshold or some kind of minimum credible effect size – are paving stones on the road to ruin.  Personality psychologists routinely calculate and report their effect sizes, and as a result have developed a pretty good view of what these numbers mean and how to interpret them.  Social psychologists, to this day, still don’t pay much attention to effect sizes, and so they haven’t developed a comparable base of experience for evaluating them.  This is why my colleague Dan Ozer and I were able to make a splash as young beginning researchers, simply by pointing out that, for example, the effect size of the victim’s distance on obedience in the Milgram study was in the .30s (Funder & Ozer, 1983).  The calculation was easy, even obvious, but apparently nobody had done it before.  A meta-analysis by Richard et al. (2003) found that the average effect size of published research in experimental social psychology is r = .21.  This finding remains unknown, and probably would come as a surprise, to many otherwise knowledgeable experimental researchers.
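Just to show how easy the arithmetic is, here is a sketch using approximate figures of the sort reported for Milgram’s remote-victim and touch-proximity conditions (roughly 65% versus 30% obedience, 40 participants per condition; round numbers for illustration, not necessarily the exact values used in Funder & Ozer, 1983): treat condition and obedience as two dichotomous variables and correlate them.

```python
# Approximate, illustrative Milgram-style numbers; not the exact published values.
import numpy as np

obeyed_remote, n_remote = 26, 40      # about 65% obedience, victim remote
obeyed_touch, n_touch = 12, 40        # about 30% obedience, victim at touch proximity

condition = np.repeat([0, 1], [n_remote, n_touch])   # 0 = remote, 1 = touch proximity
obeyed = np.concatenate([
    np.repeat([1, 0], [obeyed_remote, n_remote - obeyed_remote]),
    np.repeat([1, 0], [obeyed_touch, n_touch - obeyed_touch]),
])

phi = np.corrcoef(condition, obeyed)[0, 1]
print(f"effect size (phi) = {phi:.2f}")   # about -.35: closer proximity, less obedience
```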

But this is what happens when the overall attitude is that “effect size doesn’t matter.”  Judgment lacks perspective, and we are unable to separate that which is truly important from that which is so subtle as to be virtually undetectable (and, in some cases, notoriously difficult to replicate).

My conclusion, then, is that effect size is important and the business of science should be to evaluate it, and its moderators, as accurately as possible.  Evaluating effect sizes is and will continue to be difficult, because (among other issues) they may be influenced by extraneous factors, because apparently “small” effects can cumulate into huge consequences over time, and because any given outcome is influenced by many different factors, not just one or even a few.  But the solution to this difficulty is not to regard effect sizes as unimportant, much less to ignore them altogether.  Quite the contrary, the more prominence we give to effect sizes in reporting and thinking about research findings, the better we will get at understanding what we have discovered and how important it really is.

References

Abelson, R.P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133.

Ahadi, S., & Diener, E. (1989). Multiple determinants and effect size. Journal of Personality and Social Psychology, 56, 398-406.

Funder, D.C., & Ozer, D.J. (1983). Behavior as a function of the situation. Journal of Personality and Social Psychology, 44, 107-112.

Kenrick, D.T., & Funder, D.C. (1988). Profiting from controversy: Lessons from the person-situation debate. American Psychologist, 43, 23-34.

Nisbett, R.E. (1980). The trait construct in lay and professional psychology. In L. Festinger (Ed.), Retrospections on social psychology (pp. 109-130). New York: Oxford University Press.

Richard, F.D., Bond, C.F., Jr., & Stokes-Zoota, J.J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331-363.

Roberts, B.W., Kuncel, N.R., Shiner, R., Caspi, A., & Goldberg, L.R. (2007). The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313-345.

Rosenthal, R., & Rubin, D.B. (1982). A simple, general-purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169.

[i] We served together for several years on a grant review panel, a bonding experience as well as a scientific trial by fire, and I came to admire his incisive intellect and clear judgment.

[ii] I obtained his permission to quote this passage but, understandably, he asked that he not be named in order to avoid being dragged into a public discussion he did not intend to start with a private email.

[iii] See, e.g., Nisbett, 1980, who raised the “personality correlation” to .40 but still said it was too small to matter.  Only 16% of the variance, don’t you know.

Speaking of replication…

A small conference sponsored by the European Association of Personality Psychology, held in Trieste, Italy, last summer, addressed the issue of replicability in psychological research.  Discussions led to an article describing recommended best practices, and the article is now “in press” at the European Journal of Personality.  You can see the article if you click here.

Update November 8: Courtesy of Brent Roberts, the contents of the special issue of Perspectives on Psychological Science on replicability are available.  To go to his blog post, with links, click here.

The perilous plight of the (non)-replicator

As I mentioned in my previous post, while I’m sympathetic to many of the ideas that have been suggested about how to improve the reliability of psychological knowledge and move towards “scientific utopia,” my own thoughts are less ambitious and keep returning to the basic issue of replication.  A scientific culture that consistently produced direct replications of important results would be one that eventually purged itself of many of the problems people have been worrying about lately, including questionable research practices, p-hacking, and even data fraud.

But, as I also mentioned in my previous post, this is obviously not happening.  Many observers have commented on the institutional factors that discourage the conduct and, even more, the publication of replication studies.  These include journal policies, hiring committee practices, tenure standards, and even the natural attractiveness of fun, cute, and counter-intuitive findings.  In this post, I want to focus on a factor that has received less attention: the perilous plight of the (non) replicator.

The situation of a researcher who has tried and failed to replicate a prominent research finding is an unenviable one.  My sense is that the typical non-replicator started out as a true believer, not a skeptic.  For example, a few years ago I spent sabbatical time at a large, well-staffed and well-equipped institute in which several researchers were interested in a very prominent finding in their field, and wished to test further hypotheses they had generated about its basis.  As good scientists, they began by making sure that they could reproduce the basic effect.  To their surprise and increasing frustration, they simply could not.  They followed the published protocol, contacted the original investigator for more details, tweaked this, tweaked that.  (As I said, they had lots of resources.)  Nothing.  Eventually they simply gave up.

Another anecdote.  A graduate student of a colleague of mine was intrigued by a finding published in Science.  You don’t see psychological research published in that ultra-prestigious journal very often, so it seemed like a safe bet that the effect was real and that further creative studies to develop its theoretical foundation would be a great project towards a dissertation and a research career.  Wrong.  After about three years of failing to replicate the original finding, the advisor finally had to insist that the student find another topic and start over.  You can imagine the damage this experience did to the student’s career prospects.

Stories like these are legion, but you don’t see many of them in the published literature. Indeed, I suspect most failures to replicate are never written up, much less submitted for publication. There are probably many reasons, but consider just one:  What happens when a researcher does decide to “go public” with a failure – or even repeated, robust failures – to replicate a prominent finding?  If some recent, highly publicized cases are any guide, several unpleasant outcomes can be anticipated.

First, the finding will be vehemently defended, sometimes not just by its originator but also by the acolytes that a surprising number of prominent researchers seem to have attracted into loyal camps.[i]  The defensive articles, written by prominent people with considerable skills, are likely to be strongly argued, eloquent, and long.  The non-replicator has a good chance of being publicly labeled as incompetent if not deliberately deceptive, and may be compared to skeptics of global warming!  Even a journalist who has the temerity to write about non-replication issues risks being dismissed as a hack.  This situation can’t be pleasant. It takes a certain kind of person to be willing to be dragged into it – and not necessarily the same kind of person who was attracted to a scientific career in the first place.

It gets worse.  The failed replicator also risks various kinds of subtle and not-so-subtle retaliation.  I was at a conference a few weeks ago where I heard, first-hand, from a researcher who discovered that a promotion letter subtly but powerfully derogating the researcher’s work was not only an outlier with respect to the other letters in the file, but had been written by a practitioner in a field that the researcher’s work had dared to question.  Another first-hand story concerned a researcher who, after publishing some reversals of findings that had been pushed for years by a powerful school of investigators, found that external reviews of submitted journal articles on other topics had suddenly turned harshly critical.  And, in an episode I had the opportunity to observe directly, a professor and a graduate student whose paper questioning an established finding had actually been accepted for publication in a prominent journal found themselves subjected to threats!  The person who “owned” the original effect said to them: you need to withdraw this paper.  I’m the most prominent researcher in the field and the New York Times will surely call me for comment.  I will be forced to publicly expose your incompetence.  Your career will be damaged; your student’s career will be ruined.  The threat concluded, darkly: I say this as a friend; I only have your best interests at heart.

Do you know other stories like this?  There is a good chance you do.  Publishing a failure to replicate a prominent finding, or even challenging the accepted state of the evidence in any way, is not for the squeamish.  No wonder the typical response of a failed replicator is simply to drop the whole thing and walk away.  The reaction makes sense, and from the point of view of individual self-interest – especially for a junior researcher — is probably the rational thing to do.  But it’s disastrous for the accumulation of reliable scientific knowledge.

This is a cultural problem that needs to be solved.  As individuals and as members of a research culture, we need to clarify two things.  First, we have to make clear that denunciations of people with contrary findings as incompetent or deceptive, retaliation through journal reviews and promotion letters, and overt threats, are, in a phrase, SERIOUSLY NOT OK.  This should go without saying, but – judging from what we’ve seen happen recently – apparently it doesn’t.

Second, and only slightly less obviously, we should try to recognize that a failure to confirm one of your findings does not have to be viewed as an attack.  Indeed, a colleague attending this same meeting pointed out that a failure to replicate is a sort of compliment: it means your work was interesting and potentially important enough to merit further investigation!  It’s much worse – and far more common – simply to be ignored.  A failure to replicate should be seen not as an attack but as an invitation to clarify what’s going on.  After all, if you couldn’t replicate one of your effects in your own lab, what would you do?  Attack yourself?  No, you’d probably sit down and try to figure out what happened.  So why is it so different if it happens in someone else’s lab?  This could be the beginning of a joint effort to share methods, look at data together, and come to a collaborative understanding of an important scientific issue.

I know I’m dreaming here.  Even a psychologist knows enough about human nature to understand that such an outcome goes against all of our natural defensive inclinations.  But it’s a nice thought, and maybe if we hold it in mind even as an unattainable ideal it might help us to be not quite so vehement, a little less personal, and a bit more open minded in our responses to scientific challenge.

How can we enforce better responses to failures to replicate?  Sociology teaches us that in small communities gossip is an effective mechanism to enforce social norms.  Research psychology is effectively a small town, a few thousand people at the most spread out around the world but in regular contact nonetheless.  So the late-night gossip about defensive reactions, retaliation, and threats is one way to ensure that such conduct carries a social price.

In the longer term, we need to change our overall social norm of what’s acceptable.  We need to accept, practice, and, above all, teach constructive approaches to scientific controversy.  This is a very long road.  But, as the proverb tells us, it starts with one step.

Note: This post is based on a brief talk given at a conference on the “Decline Effect,” held at UC Santa Barbara in October, 2012.  The conference was organized by Jonathan Schooler and sponsored by the Fetzer-Franklin Foundation. As always, this post expresses my personal opinion and not necessarily that of any other institution or individual.


[i] Typically, their defense will draw on the existence of “conceptual replications,” studies that found theoretically parallel effects using different methods.  However, as Hal Pashler has noted, no matter how many conceptual replications are reported, there is no way to know how many failed efforts never saw the light of day.  This is why it is essential to find out whether the original effect was reliable.

Replication, period.

Can we believe everything (or anything) that social psychological research tells us?  Suddenly, the answer to this question seems to be in doubt.  The past few months have seen a shocking series of cases of fraud (researchers literally making their data up) by prominent psychologists at prestigious universities.  These revelations have catalyzed an increase in concern about a much broader issue, the replicability of results reported by social psychologists.  Numerous writers are questioning common research practices such as selectively reporting only studies that “work”; ignoring relevant negative findings that arise over the course of what is euphemistically called “pre-testing”; increasing N’s or deleting subjects from data sets until the desired findings are obtained; and, perhaps worst of all, being inhospitable or even hostile to replication research that could, in principle, cure all these ills.

Reaction is visible.  The European Association of Personality Psychology recently held a special three-day meeting on the topic, which will result in a set of published recommendations for improved research practice; a well-financed conference in Santa Barbara in October will address the “decline effect” (the mysterious tendency of research findings to fade away over time); and the President of the Society for Personality and Social Psychology was recently moved to post a message to the membership expressing official concern.  These are just three reactions that I personally happen to be familiar with; I’ve also heard that other scientific organizations and even agencies of the federal government are looking into this issue, one way or another.

This burst of concern and activity might seem to be unjustified.  After all, literally making your data up is a far cry from practices such as pre-testing, selective reporting, or running multiple statistical tests.  These practices are even, in many cases, useful and legitimate.  So why did they suddenly come under the microscope as a result of cases of data fraud?  The common thread seems to be the issue of replication.  As I already mentioned, the idealistic model of healthy scientific practice is that replication is a cure for all ills.  Conclusions based on fraudulent data will fail to be replicated by independent investigators, and so eventually the truth will out.  And, less dramatically, conclusions based on selectively reported data or derived from other forms of quasi-cheating, such as “p-hacking,” will also fade away over time.

The problem is that, in the cases of data fraud, this model visibly and spectacularly failed.  The examples that were exposed so dramatically — and led tenured professors to resign from otherwise secure and comfortable positions (note: this NEVER happens except under the most extreme circumstances) — did not come to light because of replication studies.  Indeed, anecdotally — which, sadly, seems to be the only way anybody ever hears of replication studies — various researchers had noticed that they weren’t able to repeat the findings that later turned out to be fraudulent, and one of the fakers even had a reputation for generating data that were “too good to be true.”  But that’s not what brought them down.  Faking of data was only revealed when research collaborators with first-hand knowledge — sometimes students — reported what was going on.

This fact has to make anyone wonder: what other cases are out there?  If literal faking of data is only detected when someone you work with gets upset enough to report you, then most faking will never be detected.  Just about everybody I know — including the most pessimistic critics of social psychology — believes, or perhaps hopes, that such outright fraud is very rare.  But grant that point and the deeper moral of the story still remains:  False findings can remain unchallenged in the literature indefinitely.

Here is the bridge to the wider issue of data practices that are not outright fraudulent, but increase the risk of misleading findings making it into the literature.  I will repeat: so-called “questionable” data practices are not always wrong (they just need to be questioned).  For example, explorations of large, complex (and expensive) data sets deserve and even require multiple analyses to address many different questions, and interesting findings that emerge should be reported.  Internal safeguards are possible, such as split-half replications or randomization analyses to assess the probability of capitalizing on chance.  But the ultimate safeguard to prevent misleading findings from permanent residence in (what we think is) our corpus of psychological knowledge is independent replication.  Until then, you never really know.
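As one example of the kind of internal safeguard just mentioned, here is a minimal, generic sketch of a randomization test (made-up data, not any particular study): shuffle the group labels many times and ask how often a difference as large as the observed one turns up by chance alone.

```python
# Generic randomization (permutation) test on made-up data: how often does
# shuffling the group labels produce a mean difference at least as large as
# the one actually observed?
import numpy as np

rng = np.random.default_rng(42)
group_a = np.array([5.1, 4.8, 6.2, 5.9, 5.4, 6.1, 5.7, 4.9])   # hypothetical scores
group_b = np.array([4.6, 5.0, 4.3, 5.2, 4.8, 4.5, 4.9, 4.4])

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_perm = 10_000
count = 0
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)
    diff = shuffled[:n_a].mean() - shuffled[n_a:].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"observed difference = {observed:.2f}, permutation p is about {count / n_perm:.3f}")
```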

Many remedies are being proposed to cure the ills, or alleged ills, of modern social psychology.  These include new standards for research practice (e.g., registering hypotheses in advance of data gathering), new ethical safeguards (e.g., requiring collaborators on a study to attest that they have actually seen the data), new rules for making data publicly available, and so forth.  All of these proposals are well-intentioned, but the specifics of their implementation are debatable and ultimately raise the specter of over-regulation.  Anybody with a grant knows about the reams of paperwork one must now mindlessly sign, attesting to everything from the exact percentage of time each graduate student has worked on one’s project to the status of one’s lab as a drug-free workplace.  And that’s not even to mention the number of rules — real and imagined — enforced by the typical campus IRB to “protect” subjects from the possible harm they might suffer from filling out a few questionnaires.  Are we going to add yet another layer of rules and regulations to the average over-worked, under-funded, and (pre-tenure) insecure researcher?  Over-regulation always starts out well-intentioned, but can ultimately do more harm than good.

The real cure-all is replication.  The best thing about replication is that it does not rely on researchers doing less (e.g., running fewer statistical tests, examining only pre-registered hypotheses, etc.), but on their doing more.  It is sometimes said that the best remedy for false speech is more speech.  In the same spirit, the best remedy for misleading research is more research.

But this research needs to be able to see the light of day.  Current journal practices, especially among our most prestigious journals, discourage and sometimes even prohibit replication studies from publication.  Tenure committees value novel research over solid research.  Funding agencies are always looking for the next new thing; they are bored with the “same old same old” and give low priority to research that seeks to build on existing findings, much less to replicate them.  Even the researchers who find failures to replicate often undervalue them.  I must have done something wrong, most conclude, stashing the study in the proverbial “file drawer” as an unpublishable, expensive, and sad waste of time.  Those researchers who do become convinced that, in fact, an accepted finding is wrong are unlikely to attempt to publish this conclusion.  Instead, the failure becomes fodder for late-night conversations, fueled by beverages at hotel bars during scientific conferences.  There, and pretty much only there, can you find out which famous findings are the ones that “everybody knows” can’t be replicated.

I am not arguing that every replication study must be published.  Editors have to use their judgment.  Pages really are limited (though less so in the arriving age of electronic publishing) and, more importantly, editors have a responsibility to direct the limited attentional resources of the research community to articles that matter.  So any replication study should be carefully evaluated for the skill with which it was conducted, the appropriateness of its statistical power, and the overall importance of the conclusion.  For example, a solid set of high-powered studies showing that a widely accepted and consequential conclusion was dead wrong would be important in my book[1].  And this series of studies should, ideally, be published in the same journal that promulgated the original, misleading conclusion.  As your mother always said, clean up your own mess.
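On the question of statistical power, a back-of-the-envelope Fisher-z approximation (a rough sketch, not a substitute for a proper power analysis) shows what “high-powered” implies in practice: detecting the average published effect of r = .21 (Richard et al., 2003) with 80% power in a two-tailed test at the .05 level takes on the order of 175 participants.

```python
# Approximate sample size needed to detect a correlation r (two-tailed test),
# via the Fisher z transformation: n ~ ((z_alpha/2 + z_power) / arctanh(r))^2 + 3
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ceil(((z_alpha + z_power) / atanh(r)) ** 2 + 3)

for r in (0.21, 0.30, 0.50):
    print(f"r = {r:.2f}: n of about {n_for_correlation(r)}")
# r = 0.21: n of about 176  (the average published effect, per Richard et al., 2003)
# r = 0.30: n of about 85
# r = 0.50: n of about 30
```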

Other writers have recently laid out interesting, ambitious, and complex plans for reforming psychological research, and even have offered visions of a “research utopia.”  I am not doing that here.  I only seek to convince you of one point: psychology (and probably all of science) needs more replications.  Simply not ruling replication studies inadmissible out of hand would be an encouraging start.  Do I ask too much?

Note: Thanks to Sanjay Srivastava for originally publishing this as a guest post on his blog.  Since I happen to be the president-elect of the Society for Personality and Social Psychology, I should also add that this essay represents my personal opinion and does not express the policies of the Society or the opinions of its other officers.


[1] So would a series of studies confirming that an important, surprising, and counter-intuitive finding was actually true.  But most aren’t, I suspect.