Careful what we believe.
Long before computers there were lies, damn lies, and statistics.
My next-door neighbor and I were discussing this very thing
a few nights ago at my fire-pit with some good bourbon.
I contend that if the climate researchers had to testify as to their grant money and their findings under oath, the entire hoax would collapse.
Since I don’t know what a p-value is, I cannot appreciate the depth of wisdom shown in the article.
Statistics don't lie, but liars use statistics.
the trans community has been lying about p values for years...
“Recently, p-values have been criticized and even banned by some journals, because they are used by researchers, who cherry-pick observations and repeat experiments until they obtain a p-value worth publishing to obtain grant money, get tenure, or for political reasons”
There is nothing wrong with p-values. If you cherry-pick the data, all the results are crap; it is no longer valid data.
95% of an unknown number of scientists agree with me./s
And here I thought this was going to be helpful when I met with my parole officer. Aaaarrgghh!
There are liars, damn liars, and statisticians.
This is the kernel of how Climate “Science” works.
Also, journals often refuse to consider publishing results with negative findings, which makes the problem much worse.
For example, Researcher A repeats experiment X with slight (but insubstantial) modifications, and finds that his sugar water kills cancer cells better than chance with p<0.05. Researchers B, C and D repeat the experiment but find it does nothing, and their department chair tells them to move on to the next series of experiments, because he believes (rightly) that journals are looking to publish positive findings, and findings of no effect generally will not make it through the peer review process (which is slow and time consuming).

I don't believe these things are intentional, though. When I did biomed research I saw this a lot; researchers had good intentions but did not understand statistics. It is not just ignorance; it is often a lack of sufficient intellect to be a good scientist. It would be nice if every researcher were a genius with a natural, confident grasp of all scientific disciplines related to their area of study, but I think we set a pretty low bar for entry into STEM careers.
As others have said, there's absolutely nothing wrong with p-values. They are a tool for reporting the significance of an experimental result, but like most tools, they can be misused. This is especially true if researchers don't really understand what they are and how to use them, as seems to be the case in many areas of research.
For the unfamiliar, suppose you are conducting an experiment such as a drug trial. You would split your subjects into two groups: a control group that does not receive the drug and a treatment group that does. You then compare the two groups on some measure to see if that measure differs between the groups. That sounds simple enough, but the problem is that even if you compared two groups receiving no treatment, they would never give EXACTLY the same measurements. There would always be some differences. The big question, then, is how different the results have to be before we can really claim that the drug did something.
That's where p-values come in. A p-value is a measure of probability (hence the p) that tells you how likely it is that sampling two groups of people at random and performing the measurement on those random groups would give a difference at least as large as the difference you observed between your control and treatment groups. If that probability is relatively large, then you probably haven't observed a real difference; you would be likely to observe a similar difference between ANY two groups of subjects, even absent any treatment. If that probability is small, then MAYBE you found something.
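One direct way to see what that probability means is a permutation test: repeatedly relabel the subjects at random and ask how often chance alone produces a difference as big as the one observed. This is a rough sketch in plain Python with made-up numbers, not anyone's actual trial data:

```python
import random
import statistics

def permutation_p_value(control, treatment, n_permutations=10_000, seed=0):
    """Estimate the p-value for the observed difference in group means:
    the fraction of random relabelings of the subjects that produce a
    difference at least as large as the one actually observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(treatment) - statistics.mean(control))
    pooled = list(control) + list(treatment)
    n = len(control)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Hypothetical measurements: the treatment group scores a bit higher.
control = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0, 4.9]
treatment = [5.3, 5.6, 5.1, 5.5, 5.4, 5.2, 5.6, 5.3]
p = permutation_p_value(control, treatment)
```

With these numbers the two groups barely overlap, so almost no random relabeling matches the observed gap and the estimated p-value comes out far below 0.05.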
The value generally accepted in most scientific work is p=0.05. That is, results are not considered significant unless there's less than a 5% chance that they could have occurred randomly. That should be treated as a guideline, but too often it seems to be a set-in-stone benchmark instead. There are many situations where p<=0.05 is completely inappropriate. As a simple example, suppose that instead of one drug, you are testing 20 drugs as potential treatments for some condition. If you blindly use p<=0.05 as your cutoff, you would very likely publish a positive result when all you really found was a random difference. That should be obvious: if you roll a 20-sided die 20 times, you wouldn't be all that shocked to roll a 1 on one of your rolls, and you wouldn't conclude your die was somehow loaded based on that observation.
Similarly, you should NOT automatically publish a positive result in such a situation; a much lower p-value threshold should be used. Particle physicists, for example, do just this when they search for new particles. They recognize that they are looking for a particle across a wide range of energies, essentially conducting a large number of experiments simultaneously, so they require a p-value on the order of 10^-9 before announcing a result.
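Putting numbers on the die-rolling analogy takes two lines of arithmetic. The Bonferroni correction shown below is not mentioned in the comment; it's simply the standard textbook fix for multiple comparisons:

```python
# With 20 independent tests at alpha = 0.05, the chance that at least
# one comes up "significant" by luck alone is 1 - 0.95**20.
alpha = 0.05
n_tests = 20
p_at_least_one_false_positive = 1 - (1 - alpha) ** n_tests  # about 0.64

# A common (conservative) fix is the Bonferroni correction: divide the
# significance threshold by the number of tests performed.
bonferroni_alpha = alpha / n_tests  # 0.0025
p_after_correction = 1 - (1 - bonferroni_alpha) ** n_tests  # back under 0.05
```

So with 20 drugs and a naive 0.05 cutoff, there's roughly a 64% chance of at least one spurious "discovery" even if none of the drugs does anything.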
P-values are simply an easy way to communicate the relative significance of a test. They are a tool, and any tool can be misused.
The problem is not p-values; it is liars and cheats.
Of course, even in the harder science disciplines there is a strange lack of statistical knowledge. Some areas of psychology and sociology are actively opposed to the use of statistics at all. You are just supposed to believe their conclusions because they say so.