Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: inquest
I don't believe this article.

You should. The statistics they cite are unexceptional with respect to the multiple-hypotheses problem: e.g., if you test 20 false hypotheses at the 95% confidence level, one of them will probably test as true simply as a matter of random noise.
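The arithmetic behind that "probably" is easy to check; a minimal sketch in Python (the 20 tests and 5% level are the numbers from the post):

```python
# With 20 independent true-null tests at the 5% significance level,
# the expected number of spurious "significant" results is 1, and the
# chance of seeing at least one is about 64%.
alpha = 0.05
n_tests = 20

expected_false_positives = n_tests * alpha
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(expected_false_positives)   # 1.0
print(round(p_at_least_one, 3))   # 0.642
```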

For example, suppose you are testing the output of DNA chips as a predictor of diabetes. There are some 2,000 hypotheses per experiment. That is, the chip produces 2,000 measurements for, say, 2,000 genes, and those measurements are then correlated with a disease state. At the 95% confidence level, you are bound to get many, many correlations that seem significant but are only noise.
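You can watch this happen in simulation. The sketch below (sample sizes and the t-test choice are illustrative assumptions, not from the post) runs 2,000 gene-vs-disease comparisons where every null hypothesis is true by construction, and still gets on the order of a hundred "significant" hits:

```python
import random
import statistics

random.seed(42)

N_GENES = 2000        # chip size from the post
N_PER_GROUP = 20      # hypothetical subjects per group
T_CRIT = 2.024        # two-sided 5% critical value for df = 38

def t_stat(a, b):
    """Pooled two-sample t statistic (equal group sizes)."""
    n = len(a)
    sp2 = (statistics.variance(a) + statistics.variance(b)) / 2
    return (statistics.mean(a) - statistics.mean(b)) / ((sp2 * 2 / n) ** 0.5)

false_positives = 0
for _ in range(N_GENES):
    # Both groups are drawn from the SAME distribution: every null is true.
    sick = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    healthy = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    if abs(t_stat(sick, healthy)) > T_CRIT:
        false_positives += 1

print(false_positives)  # roughly 5% of 2000, i.e. ~100 spurious "hits"
```

This is why genomics work uses multiple-comparison corrections (Bonferroni, false-discovery-rate control) rather than a raw 95% threshold per gene.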

I have done a lot of scientific publication, in refereed journals and conferences, and serve as the editor of a peer-reviewed journal. Many, many first published results are wrong, and not just because of the noise issue. Scientists are folks like everyone else: they will ignore the negative findings and report only the positive. They will not define their endpoint in advance but keep going when it looks like they are almost there, and so on. These are statistical no-nos, but they happen all the time.
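The "keep going when it looks like you are almost there" no-no can also be demonstrated by simulation; a sketch under illustrative assumptions (sample sizes and number of runs are mine, not from the post). Each run draws true-null data, tests after every new observation, and stops the moment the result looks significant:

```python
import random

random.seed(0)

Z_CRIT = 1.96        # two-sided 5% critical value for a z test
MIN_N, MAX_N = 10, 100
N_SIMS = 1000

def peek_until_significant():
    """Draw from a true-null N(0, 1); test after every new sample and
    stop as soon as the z statistic crosses the 'significant' line."""
    total = 0.0
    for n in range(1, MAX_N + 1):
        total += random.gauss(0, 1)
        if n >= MIN_N:
            z = total / n ** 0.5      # sample mean divided by 1/sqrt(n)
            if abs(z) > Z_CRIT:
                return True
    return False

rate = sum(peek_until_significant() for _ in range(N_SIMS)) / N_SIMS
print(rate)  # far above the nominal 5% false-positive rate
```

Fixing the endpoint in advance (one test at a predeclared n) would bring the false-positive rate back down to the nominal 5%.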

Some years ago, I found an error in a paper I had previously written with another author--we were measuring (and reported) a bug in our software, not a real phenomenon. He was the head of a prominent dept in a prominent university. He argued 'til he was blue in the face that we should not report that we had been mistaken.

The system tends to be self-correcting because other folks try to replicate the wrong results and cannot. That gets reported. So eventually the noise is filtered out.

24 posted on 08/30/2005 10:48:15 AM PDT by ModelBreaker


To: ModelBreaker
I figured someone wouldn't get the joke ;-)
26 posted on 08/30/2005 10:48:59 AM PDT by inquest (FTAA delenda est)

To: ModelBreaker
Some years ago, I found an error in a paper I had previously written with another author--we were measuring (and reported) a bug in our software, not a real phenomenon. He was the head of a prominent dept in a prominent university. He argued 'til he was blue in the face that we should not report that we had been mistaken.

Donald Johanson, is that you?

89 posted on 08/30/2005 12:38:47 PM PDT by Publius6961 (Liberal level playing field: If the Islamics win we are their slaves..if we win they are our equals.)

To: ModelBreaker
In olden times at my grad school, there was a professor who had a student run a study and perform a statistical test. The test showed the results were without significance.

The professor kept the data and each following year, she had a new student perform the same test, with the same result. After a few years, her student made an error in calculation and reported the results were significant.

The professor then submitted the study for publication. She included means and standard deviations for the various groups in her study.

Apparently, she was unaware that any reader of the published article could verify the statistical test from the means and SDs. One reader discovered the error and demanded a retraction.
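That check takes a few lines: a pooled two-sample t statistic can be recomputed from the published summaries alone. A sketch with made-up numbers (the story gives none):

```python
import math

# Hypothetical published summary statistics (illustrative only):
mean_a, sd_a, n_a = 10.0, 2.0, 15
mean_b, sd_b, n_b = 11.0, 2.5, 15

# Pooled two-sample t statistic, computable from means, SDs, and ns.
sp2 = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
t = (mean_a - mean_b) / math.sqrt(sp2 * (1 / n_a + 1 / n_b))

print(round(t, 2))   # -1.21: well inside the two-sided 5% cutoff
                     # of about 2.05 for df = 28, so not significant
```

Any reader with the paper and a calculator can redo this, which is exactly how the error in the story was caught.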

The professor's response? She loudly complained, "I should never have included the standard deviations!"

110 posted on 08/30/2005 1:36:52 PM PDT by Marylander

FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson