Posted on 09/14/2007 3:21:08 AM PDT by gridlock
We all make mistakes and, if you believe medical scholar John Ioannidis, scientists make more than their fair share. By his calculations, most published research findings are wrong.
Dr. Ioannidis is an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Medford, Mass. In a series of influential analytical reports, he has documented how, in thousands of peer-reviewed research papers published every year, there may be so much less than meets the eye.
These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."
The hotter the field of research, the more skeptically its published findings should be viewed, he determined.
(snip)
Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. "People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual," Dr. Ioannidis said.
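The "excess of significance" Dr. Ioannidis describes is easy to reproduce. A minimal sketch (assuming NumPy and SciPy are available; the data here are hypothetical pure noise, not from any real study): if you run enough tests on data with no real effect, some will come out "significant" at the usual 5% threshold by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n = 200, 20  # 200 candidate "variables", 20 subjects per group

hits = 0
for _ in range(n_tests):
    a = rng.normal(size=n)  # group A: pure noise, no real effect
    b = rng.normal(size=n)  # group B: drawn from the same distribution
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        hits += 1

print(f"'significant' findings out of {n_tests} pure-noise tests: {hits}")
```

With an alpha of 0.05, roughly 10 of the 200 tests come up "significant" even though every variable is noise, which is exactly the kind of result that becomes a false published claim if only the hits are reported.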
(snip)
Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.
(snip)
(Excerpt) Read more at online.wsj.com ...
Just out of curiosity, was there ever an attempt to use the data from multiple shuttle runs and, rather than fit a line through each run, do a partial least squares model across all the runs?
“Just out of curiosity, was there ever an attempt to use the data from multiple shuttle runs and, rather than fit a line through each run, do a partial least squares model across all the runs?”
That’s what we did, which is why I analogized it to the stock market. You could get a good average fit for prior data, but it was always useless for predicting how the next flight would go. Too many variables affecting things: inclination, altitude, sun angle, atmospheric density, landing time, ascending vs. descending node landing, and probably a bunch of other stuff that we did not even know about.
It looked pretty, sounded impressive — and was absolutely fricking useless.