When Did We Get so Delicate?

Replication issues are rampant these days. The recent round of widespread concern over whether supposedly established findings can be reproduced began in biology and the related life sciences, especially medicine. Psychologists entered the fray a bit later, largely in a constructive way. Individuals and professional societies published commentaries on methodology, journals revised their policies to promote data transparency and encourage replication, and the Center for Open Science took concrete steps to make doing research “the right way” easier. As a result, psychology came to be viewed not as the poster child of replication problems but, quite the opposite, as the best place to look for solutions to them.

So what just happened? In the words of a headline in the Chronicle of Higher Education, the situation in psychology has suddenly turned “ugly and odd.” Some psychologists whose findings were not replicated are complaining plaintively about feeling bullied. Others are chiming in about how terrible it is that people’s reputations are ruined when others can’t replicate their work. People doing replication studies have been labeled the “replication police,” “replication Nazis” and even, in one prominent psychologist’s already famous phrase, “shameless little bullies.” This last writer also passed along an anonymous correspondent’s description of replication as a “McCarthyite nightmare.” More sober commentators have expressed worries about “negative psychology” and “p-squashing.” Concern has shifted away from the difficulties faced by those who can’t make famous effects “work,” and from the dilemma of whether they dare go public when this happens. Instead, prestigious commentators are worrying about the possible damage to the reputations of the psychologists who discovered these famous effects, and promulgating new rules to follow before going public with disconfirmatory data.

First, a side comment: It’s my impression that reputations are not really damaged, much less ruined, by failures to replicate. Reputations are damaged, I fear, by defensive, outraged reactions to failures to replicate one’s work. And we’ve seen too many of those, and not enough reactions like this.

But now, the broader point: When did we get so delicate? Why are psychologists, who can and should lead the way in tackling this scientific issue head-on, and until recently were doing just that, instead becoming distracted by reputational issues and hurt feelings?

Is anybody in medicine complaining about being bullied by non-replicators, or is anyone writing blog posts about the perils of “negative biology”? Or is it just us? And if it’s just us, why is that? I would really like to know the answer to this question.

For now, if you happen to be a psychologist sitting on some data that might undermine somebody’s famous finding, the only advice I can give you is this:  Mum’s the word.  Don’t tell a soul.  Unless you are the kind of person who likes to poke sticks into hornets’ nests.

The “Fundamental Attribution Error” and Suicide Terrorism

Review of: Lankford, A. (2013) The myth of martyrdom: What really drives suicide bombers, rampage shooters, and other self-destructive killers. Palgrave Macmillan.
In Press, Behavioral and Brain Sciences (published version may differ slightly)

In 1977, the social psychologist Lee Ross coined the term “fundamental attribution error” to describe the putative tendency of people to overestimate the importance of dispositional causes of behavior, such as personality traits and political attitudes, and to underestimate the importance of situational causes, such as social pressure or objective circumstances (Ross 1977b). Over the decades since, the term has firmly rooted itself in the conventional wisdom of social psychology, to the point where it is sometimes identified as the field’s basic insight (Ross & Nisbett 2011). However, the actual research evidence purporting to demonstrate this error is surprisingly weak (see, e.g., Funder 1982; Funder & Fast 2010; Krueger & Funder 2004), and at least one well-documented error, the “false consensus” bias (Ross 1977a), implies that people overestimate the degree to which their behavior is determined by the situation.

Moreover, everyday counter-examples are not difficult to formulate. Consider the last time you tried, in an argument, to change someone’s attitude. Was it easier or harder than you expected? Therapeutic interventions and major social programs intended to correct dispositional problems, such as tendencies towards violence or alcoholism, are also generally less successful than anticipated. Work supervisors and even parents, who have a great deal of control over the situations experienced by their employees or children, similarly find it surprisingly difficult to control behaviors as simple as showing up on time or making one’s bed. My point is not that people never change their minds, that interventions never work, or that employers and parents have no control over employees or children; it is simply that situational influences on behavior are often weaker than expected.

Even so, it would be going too far to claim that the actual “fundamental” error is the reverse, that people overestimate the importance of situational factors and underestimate the importance of dispositions. A more judicious conclusion would be that people sometimes overestimate the importance of dispositional factors and sometimes overestimate the importance of situational factors; the important thing, in any particular case, is to try to get it right. The book under review, The Myth of Martyrdom (Lankford 2013), aims to present an extended example of an important context in which many authoritative figures get it wrong by committing the reverse of the fundamental attribution error (though the book never uses this term): When trying to find the causes of suicide terrorism, too many experts ascribe causality to the political context in which terrorism occurs, or to the practical aims that terrorists hope to achieve. Instead, the author argues, most, if not all, suicide terrorists are mentally disturbed, vulnerable, and angry individuals who are not so different from run-of-the-mill suicides, and who are in fact highly similar to “non-terrorist” suicidal killers such as the Columbine or Sandy Hook murderers. Personality and individual differences are important; suicide terrorists are not ordinary people driven by situational forces.

Lankford convincingly argues that misunderstanding suicide terrorists as individuals who are rationally responding to oppression, or who are motivated by political or religious goals, is dangerous, because it plays into the propaganda aims of terrorist organizations to portray such individuals as brave martyrs rather than weak, vulnerable and exploitable pawns. By spreading the word that suicide terrorists are mentally troubled individuals who wish to kill themselves as much as or more than they desire to advance any particular cause, Lankford hopes to lessen the attractiveness of the martyr role to would-be recruits, and also to remove any second-hand glory that might otherwise accrue to a terrorist group that manages to recruit suicide-prone operatives to its banner.

Lankford’s overall message is important. However, the book is less than an ideal vehicle for it. The evidence cited consists mostly of a hodge-podge of case studies showing that some suicide terrorists, such as the lead 9/11 hijacker, had mental health issues and suicidal tendencies that long preceded their infamous acts. The book speaks repeatedly of the “unconscious” motives of such individuals, without developing a serious psychological analysis of what unconscious motivation really means or how it can be detected. It rests much of its argument on quotes from writers that Lankford happens to agree with, rather than on independent analysis. It never mentions the “fundamental attribution error,” a prominent theme within social psychology that is the book’s major implicit counterpoint, whether Lankford knows this or not. The obvious parallels between suicide terrorists and genuine heroes who are willing to die for a cause are noted, but a whole chapter (Ch. 5) attempting to explain how they are different fails to make the distinction clear, at least to this reader. In the end, the book is not a work of serious scholarship. It is written at the level of a popular, “trade” book, in prose that is sometimes distractingly overdramatic and even breathless. Speaking as someone who agrees with Lankford’s basic thesis, I wish it had received the serious analysis and documentation it deserves, and had been tied to other highly relevant themes in social psychology. Perhaps another book, more serious but less engaging to the general reader, lies in the future. I hope so.

For the ideas in this book are important. One attraction of the concept of the “fundamental attribution error,” and of the emphasis on situational causation in general, is that it is seen by some as removing limits on human freedom, implying that anybody can accomplish anything regardless of his or her abilities or stable attributes. While these are indeed attractive ideas, they are values and not scientific principles. Moreover, an overemphasis on situational causation removes personal responsibility, one example being the perpetrators of the Nazi Holocaust who claimed they were “only following orders.” Renewed attention to the personal factors that affect behavior may not only help to identify people at risk of committing atrocities, but also restore the notion that, situational factors notwithstanding, a person is in the end responsible for what he or she does.

References
Funder, D. C. (1982) On the accuracy of dispositional vs. situational attributions. Social Cognition 1:205–22.
Funder, D. C. & Fast, L. A. (2010) Personality in social psychology. In: Handbook of social psychology, 5th edition, ed. D. Gilbert & S. Fiske, pp. 668–97. Wiley.
Krueger, J. I. & Funder, D. C. (2004) Towards a balanced social psychology: Causes, consequences and cures for the problem-seeking approach to social behavior and cognition. Behavioral and Brain Sciences 27:313–27.
Lankford, A. (2013) The myth of martyrdom: What really drives suicide bombers, rampage shooters, and other self-destructive killers. Palgrave Macmillan.
Ross, L. (1977a) The false consensus effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13:279–301.
Ross, L. (1977b) The intuitive psychologist and his shortcomings: Distortions in the attribution process. In: Advances in experimental social psychology, vol. 10, ed. L. Berkowitz, pp. 173–220. Academic Press.
Ross, L. & Nisbett, R. E. (2011) The person and the situation: Perspectives of social psychology, 2nd edition. Pinter and Martin.

Why I Decline to do Peer Reviews (part two): Eternally Masked Reviews

In addition to the situation described in a previous post, there is another situation where I decline to do a peer review. First, I need to define a couple of terms. “Blind review” refers to the practice of concealing the identity of reviewers from authors. The reason seems pretty obvious. Scientific academia is a small world, egos are easily bruised, and vehicles for subtle or not-so-subtle vengeance (e.g., journal reviews and tenure letters) are readily at hand. If an editor wants an unvarnished critique, the reviewer’s identity needs to be protected. That’s why every journal (I know of) follows the practice of blind review.

“Masked review” is different. In this practice, the identity of the author(s) is concealed from reviewers. The well-intentioned reason is to protect authors from bias, such as bias against women, junior researchers, or researchers from non-famous institutions. Some journals use masked review for all articles; some offer the option to authors; some do not use it at all.

A few years ago, I did a review of an article submitted to Psychological Bulletin. The journal had a policy of masked review posted on its masthead, noting that the identity of the author(s) is concealed from the reviewers “during the review process.” I liked the article and wrote a positive review. The other two reviewers didn’t like it, and the article was rejected. I was surprised, when I received my copy of the rejection letter, that the authors’ identity was still redacted.

So I contacted the editor. I was sure there had been some (minor) mistake. But the editor refused to reveal who the authors were, saying that the review was masked. I pointed out the phrase in the statement of journal policy that authors’ identity would be concealed “during the review process.” I had assumed this meant only during the review process. The editor replied that while he could see my point, he could only reveal the authors’ name(s) with the authors’ permission. This seemed odd, but I said OK, go ahead, ask the authors if I can know who they are. The answer came back that I could, if I revealed my own identity!

Now, I should not have had any problem with this, right? My own review was positive, so this was probably a chance to make a new friend. I only wanted to know the authors’ identity so that I could follow their work in general, and the fate of this particular article in particular. Still, the implications disturbed me. If the rule is that author identity is unmasked after the review process only if the reviewer agrees to be identified to the author, then it seems that only writers of positive reviews would learn authors’ identities, because they are the only ones who would agree. Writers of negative reviews would be highly unlikely to allow their identities to be revealed because of possible adverse consequences – recall that this is the very reason for “blind” review in the first place. And the whole situation makes no sense anyway. What’s the point of continuing to mask author identity after the review is over?

At this time, ironically, I was a member of the Publications and Communications (P&C) Board of the American Psychological Association, which oversees all of its journals, including Psychological Bulletin. And then, through the normal rotation, I became Chair of this august body! There was a sort-of joke around the P&C Board that every Chair got one “gimme”: a policy change that everybody would go along with to allow the Chair to feel like he or she had made a mark. The gimme I wanted was to change APA’s policy on masked review to match what the statement at Psychological Bulletin implied was its policy already: Authors’ identities would be revealed to reviewers at the conclusion of the review process.

The common sense of this small change, if that’s what it even was, seemed so obvious that arguments in its favor felt almost superfluous. But I came up with a few anyway:
1. The purpose of masked review, in the words of the APA Editor’s Handbook, is “to achieve unbiased review of manuscripts.” This purpose is no longer served once review is over.
2. Reviewers are unpaid volunteers. One of the few rewards of reviewing is early and first-hand contact with the research literature, which allows one to follow the development of research programs by researchers or teams of researchers over time. This reward is to some extent – to a large extent? – removed by concealing author identity even when the review is over. Moreover, the persistent concealment of author identity signals a distrust of reviewers who have given of their time.
3. Important facts can come to light when author identity is revealed. A submitted article may be a virtual repeat of a previous article by the same authors (self-plagiarism), it may contradict earlier work by the same authors without attempting to resolve the contradiction, or it may have been written by a student or advisor of a reviewer who may or may not have noticed and may or may not have notified the editor if he or she did notice. These possibilities are all bad enough during the review process; they can permanently evade detection unless author identity is unmasked at some point.
4. The APA handbook already acknowledges that masking is incomplete at best. The action editor knows author identity, and the mask often slips in uncontrolled ways (e.g., the reviewer guessing – correctly or not). Lifting the mask at the end of the review process would equalize the status of all authors, rather than having their identity guessed correctly in some cases and incorrectly in others, which itself could have odd consequences for the person who was thought to be the author but wasn’t.

Do these arguments make sense to you? Then you and I are both in the minority. The arguments failed. The P&C Board actually did vote to change APA policy, as a personal favor I think, but the change was made contingent on comments from the Board of Editors (which comprises the editors of all the APA journals). I was not included in the Board of Editors meeting, but word came back that they did not like my proposal. Among the reasons: an author’s feelings might get hurt! And, it might hurt an author’s reputation if it ever became known that he or she had an article rejected. Because, it seems, this never happens to good scientists.

Today, the policy at Psychological Bulletin reads as follows: “The identities of authors will be withheld from reviewers and will be revealed after determining the final disposition of the manuscript only upon request and with the permission of the authors.” This is pretty much where the editor of the Bulletin came down, years ago, when I tried to find out an author’s identity. I guess I did have an impact on how this policy is now worded, if not its substance.

So here is the second reason that I (sometimes) decline to do peer reviews. If the authors’ identity is masked, I ask the editor whether the masking will be removed when the review process is over. If the answer is no, then I decline. The answer is usually no, so I get to decline a fair number of reviews.

Postscript: After writing the first draft of this blog, I was invited to review a (masked) article submitted to the Bulletin. I asked my standard question about unmasking at the conclusion of the review process. Instead of an answer, I received the following email: “As it turns out, your review will not be needed for me to make a decision, so unless you have already started, please do not complete your review.” So, I didn’t.