It’s a fascinating article title, right? The story is pretty wild when you read the details, too. Most of the evidence of what happened is public and linked in the story, so I encourage y’all to read through it.
In short, I don’t think this is isolated to this one researcher. In fact, on one of her papers there are TWO INDEPENDENT CASES of statistical fraud by two different people.
Another interesting thing to note: when it comes to reproducibility, many of these peer-reviewed studies do not hold up to scrutiny. One of the articles covering this story mentioned that fewer than 50% of the studies could be reproduced.
Here’s another article that puts the number higher than two-thirds:
https://www.npr.org/sections/health-shots/2018/08/27/642218377/in-psychology-and-other-social-sciences-many-studies-fail-the-reproducibility-te
The high rate of failure to replicate is not, in and of itself, evidence of fraud. It’s primarily a problem of low statistical power to detect plausible effects (i.e., small sample sizes). That’s not to say there isn’t deliberate fraud or p-hacking going on; there’s far too much of it. But the so-called replication crisis was entirely predictable without needing to assume any wrongdoing. It happened primarily because most researchers don’t fully understand the statistics they are using.
There was a good paper published on this recently: Understanding the Replication Crisis as a Base Rate Fallacy
And this is a nice simple explanation of the base rate fallacy for anyone who can’t access the paper: The p value and the base rate fallacy
tl;dr p<0.05 does not mean what most researchers think it means
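To make the base rate point concrete, here’s a minimal simulation sketch. The numbers in it (a 10% base rate of true effects, a 0.5 effect size, roughly 50% power at alpha = 0.05) are illustrative assumptions, not figures from the article or the paper:

```python
# Sketch of the base rate fallacy behind the replication crisis.
# Illustrative assumptions: 10% of tested hypotheses are true effects,
# effect size d = 0.5, n = 32 per group (~50% power), alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_hyp = 50_000       # number of hypotheses "tested" in the literature
base_rate = 0.10     # assumed fraction of hypotheses that are actually true
effect = 0.5         # Cohen's d for the true effects
n = 32               # per-group sample size -> roughly 50% power
alpha = 0.05

def run_studies(truths):
    """Simulate one two-group study per hypothesis; return a 'p < alpha' boolean array."""
    mu = np.where(truths, effect, 0.0)[:, None]
    a = rng.normal(mu, 1.0, (len(truths), n))
    b = rng.normal(0.0, 1.0, (len(truths), n))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return p < alpha

truths = rng.random(n_hyp) < base_rate   # which hypotheses are actually true
sig = run_studies(truths)                # the original "discoveries" (p < 0.05)

print(f"Share of significant results that are false positives: {np.mean(~truths[sig]):.0%}")

# Replication attempt: rerun each significant study once with the same design.
rep = run_studies(truths[sig])
print(f"Share of significant results that replicate: {rep.mean():.0%}")
```

Under those assumptions, roughly half of the “significant” findings are false positives, and only around 30% of them replicate on a rerun, even if nobody did anything dishonest.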
The Harvard scholar is being accused of deliberately fabricating study results by altering data in a spreadsheet in at least one of the studies.
I think the other commenter mentioned lack of replicability because that’s often one of the first indications that the original research results were fraudulent. Inability to reproduce will cause people to go digging through the original data, which is how this stuff gets found in many cases.