Posted on 03/21/2010 9:37:39 AM PDT by Ernest_at_the_Beach
The quote in the headline is taken directly from this article in Science News, for which I've posted an excerpt below. I found this article interesting for two reasons. 1- It challenges the use of statistical methods that have recently come into question in climate science, such as Mann's tree-ring proxy hockey stick and the Steig et al. statistical assertion that Antarctica is warming. 2- It pulls no punches in pointing out that an over-reliance on statistical methods can produce competing results from the same base data. Skeptics might ponder this famous quote:
“If your experiment needs statistics, you ought to have done a better experiment.” – Lord Ernest Rutherford
There are many more interesting quotes about statistics here.
- Anthony
UPDATE: Luboš Motl has a rebuttal, also worth reading, here. I should make it clear that my position is not that we should discard statistics, but that we shouldn't over-rely on them to tease out signals that are so weak they may or may not be significant. Nature leaves plenty of tracks, and as Lord Rutherford points out, better experiments make those tracks clear. – A
==================================
Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
March 27th, 2010; Vol.177 #7 (p. 26)
[Figure caption: A P value is the probability of an observed (or more extreme) result arising only from chance. Credit: S. Goodman, adapted by A. Nandy]
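For readers who want to see that definition in action, here is a minimal Python sketch (mine, not from the Science News piece, with made-up numbers) that estimates a two-sided P value by simulation: the chance, assuming a fair-coin null hypothesis, of getting a result at least as extreme as 60 heads in 100 flips.

```python
# Hypothetical illustration of a P value: the probability, under chance alone
# (the null hypothesis), of a result at least as extreme as the one observed.
# Here the "experiment" is 60 heads in 100 flips of a supposedly fair coin.
import random

def simulated_p_value(observed_heads=60, n_flips=100, n_trials=100_000, seed=42):
    """Estimate a two-sided P value for a fair-coin null by Monte Carlo simulation."""
    random.seed(seed)
    observed_deviation = abs(observed_heads - n_flips / 2)
    extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - n_flips / 2) >= observed_deviation:
            extreme += 1
    return extreme / n_trials

if __name__ == "__main__":
    # Should land near the exact binomial answer of roughly 0.057.
    print(f"Estimated P value: {simulated_p_value():.3f}")
```

Note that the P value says nothing about the probability that the hypothesis itself is true, which is one of the misinterpretations the article goes on to describe.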
For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science's fidelity to fact and conferred a timeless reliability to its findings.
During the past century, though, a mutant form of math has deflected science's heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
It's science's dirtiest secret: The scientific method of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
Replicating a result helps establish its validity more securely, but the common tactic of combining numerous studies into one analysis, while sound in principle, is seldom conducted properly in practice.
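To make the "combining numerous studies into one analysis" idea concrete, here is a hedged sketch (not from the article) of the simplest version of that tactic, a fixed-effect meta-analysis that pools study estimates by inverse-variance weighting; the study numbers are invented for illustration.

```python
# Sketch of a fixed-effect meta-analysis: pool several studies' effect
# estimates, giving more weight to the more precise (lower standard error)
# studies. Hypothetical numbers only.
import math

def fixed_effect_pool(estimates, std_errors):
    """Pool effect estimates using inverse-variance (fixed-effect) weights."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

if __name__ == "__main__":
    # Three hypothetical studies of the same effect: (estimate, standard error).
    estimates = [0.30, 0.10, 0.25]
    std_errors = [0.15, 0.08, 0.20]
    pooled, pooled_se = fixed_effect_pool(estimates, std_errors)
    print(f"Pooled estimate: {pooled:.3f} +/- {pooled_se:.3f}")
```

The arithmetic itself is trivial; the pitfalls the article alludes to come from how the studies are selected, how heterogeneous they are, and whether negative results ever got published at all.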
Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science's love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn't believe what you read in the scientific literature.
“There is increasing concern,” declared epidemiologist John Ioannidis in a highly cited 2005 paper in PLoS Medicine, “that in modern research, false findings may be the majority or even the vast majority of published research claims.”
Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. “It may be true, but he didn't prove it,” says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. “There are more false claims made in the medical literature than anybody appreciates,” he says. “There's no question about that.”
Nobody contends that all of science is wrong, or that it hasn't compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don't understand statistics,” says Goodman. “And they don't understand statistics because the statistics don't make sense.”
====================================
Read much more of this story here at Science News
It’s called cherry picking.
From what I saw of the CRU emails, they didn't only cherry-pick the data, they tortured it until it confessed.
The absolute first rule in the effective use of statistics is the need to accept the results, to resist the all-too-human urge to fudge when we don't get the answers we want, or expect, or “know” to be true.
I sort of take issue with the premise of the article. For instance, when Eddington first measured the bending of light during an eclipse, the data were not overwhelmingly clear and the “signal” was faint, but the use of statistical technique allowed him to infer, with a certain level of confidence, that the observations confirmed the General Theory of Relativity.
Dear reader, remember that you can't PROVE anything with statistics.
It all comes down to integrity and honor.
This just in. The IPCC has declared there is a consensus that this article is wrong, within a 90% confidence factor . . . .
/bingo