Thursday, November 3, 2011

The Problem of Scientific Fraud and How to Get Around It

Scientific American reprinted this article from Nature about a prominent psychologist who has now admitted that at least thirty of his published, peer-reviewed papers were fraudulent; the article uses the word "massive" to describe the scale of it:

Stapel's eye-catching studies on aspects of social behaviour such as power and stereotyping garnered wide press coverage. For example, in a recent Science paper (which the investigation has not identified as fraudulent), Stapel reported that untidy environments encouraged discrimination (Science 332, 251-253; 2011).
"Somebody used the word 'wunderkind'," says Miles Hewstone, a social psychologist at the University of Oxford, UK. "He was one of the bright thrusting young stars of Dutch social psychology -- highly published, highly cited, prize-winning, worked with lots of people, and very well thought of in the field."
I looked at the paper in Science that they linked to, and surprise, surprise! It is one of those subjects that, simply because of the language the abstract uses ("victim of discrimination"), would certainly be given a pass by the left, which utterly dominates psychology (and most other social sciences):
Being the victim of discrimination can have serious negative health- and quality-of-life–related consequences. Yet, could being discriminated against depend on such seemingly trivial matters as garbage on the streets? In this study, we show, in two field experiments, that disordered contexts (such as litter or a broken-up sidewalk and an abandoned bicycle) indeed promote stereotyping and discrimination in real-world situations and, in three lab experiments, that it is a heightened need for structure that mediates these effects (number of subjects: between 40 and 70 per experiment). These findings considerably advance our knowledge of the impact of the physical environment on stereotyping and discrimination and have clear policy implications: Diagnose environmental disorder early and intervene immediately.
As the article from Nature is careful to point out, this is not one of the papers known to be based on fraudulent data. Unfortunately, as the Bellesiles scandal demonstrated, scholars have a bad habit of turning off their critical thinking skills when what they are reading conforms to what they want to be true. It is a difficult problem to solve; almost everyone has some tendency to believe that what they want to be true is true. The only real solution is diversity within the scholarly community. Academics need to recognize that allowing a wide range of political beliefs within the ivory tower is not just a matter of fairness, but also a matter of making sure that every paper gets examined critically -- including by scholars who disagree with the core assumptions of the paper.

2 comments:

  1. The problem with peer review is that it challenges the paper at the highest level of its content, which requires the attention and energy of the author's peers -- who have their own work to do.

    What is more needed, IMHO, is auditing - reviewers who go through the paper and verify all the footnotes and arithmetic. ISTM that would catch a lot of the problems without tying down researchers.

    (Court decisions need to be audited too. Verify the case citations, and any references to facts of the case. I've seen reports at Volokh Conspiracy of cases where the judge's decision included supposedly factual statements that were contradicted by evidence in the case.)

  2. I frequently peer review articles. It's not at the peer review stage that you can detect outright frauds like this one.

    In the hard sciences, when you review a paper you are generally anonymous to the author and aren't allowed to communicate with them, which makes direct questioning difficult, if not impossible. In general, all that happens is a round or two of comments, questions, and corrections, not a real dialog.

    In general, when I am presented with a paper I do derive the same equations, study the theory for faults, and check all the references (which was much more painful before most papers were available online), but when it's an experimental paper I don't have access to the data and can't detect fraud. At that point you have to take the author's word that the theory matches the experiment. I can express doubt, but that's generally not enough to reject a paper unless there's enough detail to find a gross error in the experimental setup.

    In this case, as in most fraud cases, doubts were raised by a series of questionable results that were too good, not by internal inconsistencies that a reviewer could have detected.
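    For what it's worth, that "too good" pattern can be checked after the fact if you have the reported numbers. Here is a minimal sketch in Python, with invented figures (none taken from any actual paper), of asking how often honest sampling noise would produce study means as tightly clustered as a suspect series:

        import random
        import statistics

        def simulate_study(true_effect, sd, n):
            """Mean of n noisy observations around true_effect."""
            return statistics.fmean(random.gauss(true_effect, sd) for _ in range(n))

        def spread_of_means(k, true_effect, sd, n):
            """Standard deviation across k simulated study means."""
            return statistics.stdev(
                simulate_study(true_effect, sd, n) for _ in range(k)
            )

        random.seed(1)
        # Invented "reported" means from five hypothetical studies of n = 50 each:
        reported = [0.48, 0.50, 0.49, 0.51, 0.50]
        observed_spread = statistics.stdev(reported)

        # How often does honest sampling produce a spread at least that small?
        trials = 5_000
        hits = sum(
            spread_of_means(len(reported), true_effect=0.50, sd=1.0, n=50)
            <= observed_spread
            for _ in range(trials)
        )
        print(f"observed spread across studies: {observed_spread:.4f}")
        print(f"chance of a spread that small from honest noise: ~{hits / trials:.4f}")

    If that last probability comes out near zero, the series is more consistent than chance allows -- exactly the sort of red flag that investigators with the raw numbers, rather than reviewers, ended up spotting.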

    Part of the scientific method is that the results of an experiment must be repeatable. That's one of the places where climate "science" fails to be a science and is more properly a study. I agree with our host that having different philosophies in a field as politicized as psychology would be a good thing, in that more people would be interested in testing the hypotheses.

    But honestly, that whole field needs some serious reform. I'm related to several experimental psychologists, and the ability to analyze data, determine confidence intervals, and even design experiments varies widely in that field. That mathematical analysis isn't a strong point of psychologists shouldn't surprise anyone. It's one of the reasons that studies are frequently refuted.
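    As a small illustration of the kind of analysis at issue, here is a sketch (again Python, with invented data) of a textbook 95% confidence interval for a mean, and of how much it tightens going from the 40-to-70-subject samples quoted above to something an order of magnitude larger:

        import math
        import random
        import statistics

        def ci95(sample):
            """Normal-approximation 95% confidence interval for the sample mean.
            (A t-interval would be slightly wider at small n.)"""
            m = statistics.fmean(sample)
            se = statistics.stdev(sample) / math.sqrt(len(sample))
            return m - 1.96 * se, m + 1.96 * se

        random.seed(2)
        for n in (40, 400):
            # Invented effect of 0.3 with unit-variance noise:
            sample = [random.gauss(0.3, 1.0) for _ in range(n)]
            lo, hi = ci95(sample)
            print(f"n={n:4d}: 95% CI for the mean = ({lo:+.3f}, {hi:+.3f})")

    At n = 40 or so, the interval is wide enough that a modest effect can easily appear or vanish from sample to sample, which is part of why small studies get refuted so often.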
