Posted on 08/30/2005 10:29:44 AM PDT by LibWhacker
Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.
John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, selective reporting, and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.
"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.
In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.
Massaged conclusions
Traditionally a study is said to be "statistically significant" if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through - such as whether a particular gene influences a particular disease - it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.
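The arithmetic behind that claim is easy to check. The short simulation below (my own illustration, not from the article) tests batches of 20 hypotheses that are all false, using the traditional 1-in-20 significance threshold, and counts how often at least one of them comes up "significant" by chance:

```python
import random

random.seed(0)

def run_false_hypothesis_tests(n_tests=20, alpha=0.05, n_trials=10_000):
    """Simulate n_trials batches of n_tests tests of false hypotheses.

    Under the null, each test crosses the significance threshold with
    probability alpha by pure chance. Returns the fraction of batches
    in which at least one test looked 'statistically significant'.
    """
    hits = 0
    for _ in range(n_trials):
        # Each false hypothesis 'passes' with probability alpha.
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_trials

frac = run_false_hypothesis_tests()
```

Analytically the chance of at least one false positive is 1 - (1 - 0.05)^20, about 64%, and the expected number of spurious "true" findings per batch is 20 x 0.05 = 1, matching the article's "one of them is likely to show up as true, on average."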
Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.
Surprisingly, Ioannidis says another predictor of false findings is if a field is "hot", with many teams feeling pressure to beat the others to statistically significant findings.
But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.
"When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.
If the intent of the article was to "test the audience" or "entertain the audience" I think you've achieved both objectives.
Sigh. All in that field of study. But now that you mention it, many papers in those other fields would also have to be systematically wrong.
Searching for the truth is a completely different thing.
Couldn't have said it better.
To find the truth requires being truthful to one's self
Wow. Just wow. A parallel of "It's the seriousness of the charge that matters!". Liberal leftist socialists are just plain unbelievable. And scary.
80% of all statistics are made up on the spot.
You gotta love it.
It makes me right, with a .05 probability of being wrong (based on a sample size of 1).
Oh, doc, you're a man after my own heart, lol! I worked in that field as a statistician helping researchers get published in a "publish or perish" environment. My job? Very often it was just to refute a peer reviewer's criticisms, so that the article could get published. Much to my dismay, my counter-arguments were always accepted and all the articles went on to publication.
That's one reason I'd much rather work with physicists than medical researchers. Not that medical researchers aren't smart, they are very smart, of course. But as a rule, they just don't seem to be as interested in every aspect of their research as you guys are. So the stats get the short shrift from the medicos.
B-mesons will only decay to D+3pi about 1% of the time, and yet somehow, we call it science.
They are.
In biotech, the process is: test a drug on an animal. Check every metric you can think of, using 95% confidence and not correcting for the fact that you're looking at multiple endpoints. (If you still can't get significance, report a "strong trend.")
Publish the results with your buddy at the journal. Start biotech company to further investigate these astonishing early results. Get to Phase III testing, where you have to specify an endpoint in advance. Fail miserably (but by now all corporate officers are rich from selling stock).
Refinance like crazy, and test another drug, or the same failed drug for a new indication. Repeat for 30 years.
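The multiple-endpoints problem this poster describes has a standard fix: correct the significance threshold for the number of comparisons. A rough sketch (my own, with made-up numbers, not from the thread) comparing uncorrected testing of 30 null endpoints against a simple Bonferroni correction:

```python
import random

random.seed(1)

def count_spurious_hits(n_endpoints=30, alpha=0.05, bonferroni=False):
    """Count null endpoints flagged as 'significant' in one study.

    Under the null hypothesis, p-values are uniform on [0, 1], so each
    endpoint is flagged with probability equal to the threshold. The
    Bonferroni correction divides alpha by the number of endpoints, so
    the family-wise false-positive rate stays near alpha.
    """
    threshold = alpha / n_endpoints if bonferroni else alpha
    p_values = [random.random() for _ in range(n_endpoints)]
    return sum(p < threshold for p in p_values)

trials = 5_000
uncorrected = sum(count_spurious_hits() for _ in range(trials)) / trials
corrected = sum(count_spurious_hits(bonferroni=True) for _ in range(trials)) / trials
```

Without correction, a study of 30 null endpoints flags about 30 x 0.05 = 1.5 of them on average, so some "astonishing early result" is nearly guaranteed; with Bonferroni the average drops to roughly 0.05 per study.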
"John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece..."
So, given that Ioannis is the Greek version of the name John, he's... John John of the John School.
Alright, I'll stop clowning around now.
Rush Limbaugh discussed this today.
Except for his paper, of course.
Now how about studying popular news reports on scientific research? They may be approaching 100% wrong.
LoL!!
God told me.