Posted on 09/14/2007 3:21:08 AM PDT by gridlock
We all make mistakes and, if you believe medical scholar John Ioannidis, scientists make more than their fair share. By his calculations, most published research findings are wrong.
Dr. Ioannidis is an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Medford, Mass. In a series of influential analytical reports, he has documented how, in thousands of peer-reviewed research papers published every year, there may be so much less than meets the eye.
These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."
The hotter the field of research, the more likely its published findings should be viewed skeptically, he determined.
(snip)
Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. "People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual," Dr. Ioannidis said.
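That hunt for significance is easy to reproduce in a quick simulation. The sketch below is not from the article; it simply tests a batch of invented, pure-noise predictors against a random outcome and shows that a handful clear p < 0.05 by chance alone.

    # Minimal sketch of "messing around with the data": test many
    # pure-noise predictors and report whichever ones happen to look
    # significant. All data here is synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_predictors = 50, 100

    outcome = rng.normal(size=n_subjects)                      # random outcome
    predictors = rng.normal(size=(n_predictors, n_subjects))   # pure noise

    p_values = [stats.pearsonr(x, outcome)[1] for x in predictors]
    false_hits = sum(p < 0.05 for p in p_values)

    # With 100 independent noise variables, roughly 5 will clear the
    # 0.05 threshold by chance alone.
    print(f"{false_hits} of {n_predictors} noise predictors look 'significant'")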
(snip)
Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.
(snip)
(Excerpt) Read more at online.wsj.com ...
No doubt, but there is an even more fundamental problem--journals typically are reluctant to publish negative results. If you are a professor seeking tenure, publications are the coin of the realm even more than grant dollars. It is simple enough to violate a few assumptions of inferential statistics, squeeze out "significance" and then proceed to the peremptory "Further research is needed."
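That publication filter compounds the problem. A rough simulation with invented numbers: run many small studies of a tiny true effect, "publish" only the ones that reach p < 0.05, and the published literature overstates the effect several-fold.

    # Rough sketch of publication bias, with made-up numbers: only the
    # studies that happen to hit p < 0.05 get "published", so the
    # published average effect is far larger than the true effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect, n_per_study, n_studies = 0.1, 20, 500

    published = []
    for _ in range(n_studies):
        treatment = rng.normal(true_effect, 1.0, n_per_study)
        control = rng.normal(0.0, 1.0, n_per_study)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:                       # negative results never see print
            published.append(treatment.mean() - control.mean())

    print(f"true effect: {true_effect}")
    print(f"mean published effect: {np.mean(published):.2f} "
          f"(from {len(published)} of {n_studies} studies)")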
Add in a dose of Lysenkoization and leftwing advocacy and pretty soon you have...the sociology department.
If celebrity made them broke, they wouldn’t pursue it. After sex, it’s all about the money.
I thought the check was the peer review.
Nah. He'd be wrong.
When I was just a green engineer I was involved in a study to define “unmodeled” accelerations acting on the Space Shuttle during entry. These accelerations were due to a wide range of things — APU exhaust, various other venting, variations in atmospheric density, etc.
The effects were minor — they increased navigation uncertainty maybe a few thousand feet by the time they started getting navigation information from the runway. (It sounds like a lot, but when you are 100 miles out, and have a large landing footprint, it’s noise.)
Every mission, post landing, I would do a least-squares fit of the radar observation of the Orbiter after the deorbit burn, solving for a vent. Every mission we would come up with a different result. Every mission I would do a linear regression on the results we had accumulated to date and get a “magic” vent that minimized the error on the past missions. And on the following mission, we would plug the vent in, and it worsened navigation accuracy.
Why? For the same reason you can come up with a set of equations that perfectly match the past history of the Dow Jones averages (or whatever market index you choose) and have it fall apart within a few months of using it as a prediction tool. There were uncontrolled variables driving the actual behavior, and all my linear regression was doing was matching past history — not describing what was going on.
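That is overfitting in a nutshell, and it is easy to reproduce with made-up numbers: a least-squares fit with enough free parameters will match any past record and still tell you nothing about the next point.

    # Minimal sketch of the failure described above: a high-order
    # least-squares fit nails ten past "missions" of pure noise, then
    # blows up on the eleventh. All data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(10)
    past = rng.normal(size=10)                 # ten past results, pure noise

    coeffs = np.polyfit(t, past, deg=9)        # enough parameters to hit every point
    fit_error = np.abs(np.polyval(coeffs, t) - past).max()

    next_obs = rng.normal()                    # the next mission
    prediction_error = abs(np.polyval(coeffs, 10) - next_obs)

    print(f"worst error on past missions: {fit_error:.3g}")   # essentially zero
    print(f"error on the next mission:    {prediction_error:.3g}")  # enormous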
My lead was this crusty old biddy who had never taken a linear regression or probability and statistics course in her life. When I finally worked up the nerve to point out the problems with her approach, using my textbooks to explain the underlying theory, I was rewarded. I was pulled off the project, given a task doing software maintenance (a job I loathed), and some other bright engineer was put in my place.
He took the data I accumulated and published a nice shiny paper on improving Shuttle navigation accuracy by predicting unmodeled accelerations in the deorbit phase. The model did not produce squat for accuracy and was discarded soon after that became apparent, but that guy was the winner and I was the loser in that round.
What is "self-serving data analysis" then? Fraud-lite?
Some people don't know or understand what they're doing. Their logic is bad and they're stuck with a faulty understanding of things that leads them further astray, or simply astray. It's nothing connected with honesty. In some cases the problem can be extreme and overlaps the honest/dishonest line. That leads to notable confusion.
I have been saying this for years. I know people in industry that tweak variables in order to arrive at outcomes that satisfy government regulators, so they can stay in business.
That retention pond too small? No problem, decrease the estimate of predicted annual rainfall by 1/2 an inch and, voilà, you now meet building code. Done all the time, by everyone. Especially by "environmental researchers".
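The arithmetic behind that kind of tweak is trivial, which is exactly why it is tempting. A toy sketch with entirely invented numbers and a made-up design rule:

    # Toy illustration of nudging one input to pass a regulatory check.
    # Every number and formula here is invented; the point is only how
    # little the input has to move to flip the verdict.
    CATCHMENT_SQFT = 100_000        # paved area draining to the pond
    POND_CAPACITY_CUFT = 47_400     # as-built pond volume

    def required_volume(annual_rain_inches):
        """Runoff the pond must hold (cu ft), per a made-up design rule."""
        return CATCHMENT_SQFT * annual_rain_inches / 12 * 0.06

    for label, rain in [("honest estimate", 95.0), ("tweaked estimate", 94.5)]:
        need = required_volume(rain)
        verdict = "meets code" if need <= POND_CAPACITY_CUFT else "fails"
        print(f"{label} ({rain} in/yr): needs {need:,.0f} cu ft -> {verdict}")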
Great, this comes partially from the school that unleashed the gypsy moth, aka the tent caterpillar, on America (okay, it was in the late 19th century).
I was working on a job where I installed a new type of desiccant air conditioner as part of a pilot program to see whether or not they would be appropriate for wide application across my employer's facilities. It was a great machine, on a theoretical level, but it had some serious problems in actual application.
It operated about 50% more efficiently than our conventional systems, but it had a tendency to concentrate harmful vapors from the exhaust stream and release them suddenly into the room during system failures. Then it started shedding the desiccant into the room, and nobody was able to hazard a guess how harmful it was to breathe this stuff.
I struggled with that machine for about a year. Then, finally, I informed my boss that my task was to determine whether or not the technology was appropriate, and I was going to report a negative result. My boss was great about it, and never held it against me. Our upper chain-of-command was none too happy with me, though.
“What is “self-serving data analysis” then? Fraud-lite?”
Read my post 48. What the guy who replaced me did? *That* was “self-serving data analysis.” He did not lie. He just came up with conclusions that were meaningless, but impressive. (The paper stated he had a “baseline of 22 Shuttle missions.”)
TC, I do too. There’s a phase III clinical test I know of that should start enrolling in the next couple/few months. It is for the very drug I’ve been alluding to. Depending on your dad’s condition there would be a chance to get him into it. PM me if you’re interested. I have no connections but can steer you in the right direction.
GB,
p.
I was in ocean research for many years. Worked for DoC.
People do not want to know the truth of which you speak, but I will personally testify to it.
Can we blame them for kudzu, too? :-).
Interesting story. Certainly the project lead was engaged in a kind of fraud-lite, though. It seems she made a conscious decision to skew the results she was afraid the study might produce with you around. As for your replacement, it could plausibly be that he was just not competent with statistics.
University professorships are not all that lucrative. Upper middle class if you're lucky and have tenure. But the celebrity status is primarily amongst peers for most researchers. The big money is in industry, where higher salaries are augmented with performance bonuses and cash for intellectual property.
There is one thing that most people don’t know about university research and the resulting publications. Most of the work is done by students. They have fresh degrees and their textbook learning is fresh in their minds, but grad school is giving them their first real hands-on experience with research. It’s easy for an overzealous grad student to read more into their data than is really there. A good professor will spot this and help refine the grad student’s work. It’s not intentional, but is simply untempered enthusiasm.