Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: spintreebob

As others have said, there’s absolutely nothing wrong with p values. They are a tool for reporting the significance of an experimental result, but like most tools, they can be misused. This is especially true when researchers don’t really understand what they are and how to use them, as seems to be the case in many areas of research.

For the unfamiliar, suppose you are conducting an experiment such as a drug trial. You would split your subjects into two groups: a control group that does not receive the drug and a treatment group that does. You then compare the two groups on some measure to see whether that measure differs between them. That sounds simple enough, but the problem is that even if you compared two groups receiving no treatment at all, they would never give EXACTLY the same measurements. There would always be some differences. The big question, then, is how different the results have to be before we can really claim that the drug did something.

That’s where p values come in. A p value is a measure of probability (hence the “p”): assuming the treatment has no real effect, it tells you how likely it is that sampling two groups of people at random and performing the measurement on those random groups would give a difference at least as large as the difference you observed between your control and treatment groups. If that probability is relatively large, then you probably haven’t observed a real difference; you would be likely to see a similar difference between ANY two groups of subjects, even absent any treatment. If that probability is small, then MAYBE you found something.
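
That idea can be computed directly with a permutation test: pool the measurements, regroup them at random many times, and count how often a random split produces a difference at least as large as the observed one. A minimal sketch (the function name and data are mine, not from the post; a drug trial would use real measurements):

```python
import random
import statistics

def permutation_p_value(control, treatment, n_perms=10_000, seed=0):
    """Estimate a p value for the difference in group means: the
    fraction of random regroupings of the pooled data whose mean
    difference is at least as large as the one actually observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(treatment) - statistics.mean(control))
    pooled = list(control) + list(treatment)
    n = len(control)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            hits += 1
    return hits / n_perms
```

If the two groups are identical, every random split does at least as well as the observed (zero) difference and the estimated p value is 1.0; if the groups are far apart, almost no random split matches the observed gap and the p value is tiny.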

The value generally accepted in most scientific work is p = 0.05. That is, results are not considered significant unless there’s less than a 5% chance that they could have occurred by random variation alone. That should be treated as a guideline, but too often it seems to be a set-in-stone benchmark instead. There are many situations where p <= 0.05 is completely inappropriate. As a simple example, suppose that instead of one drug, you are testing 20 drugs as potential treatments for some condition. If you blindly use p <= 0.05 as your cutoff, you would very likely end up publishing a positive result when all you really found was a random difference. That should be obvious: if you roll a 20-sided die 20 times, you wouldn’t be all that shocked to roll a 1 on one of your rolls, and you wouldn’t conclude the die was loaded based on that observation.

Similarly, you should NOT automatically publish a positive result in such a situation; a much lower p value threshold should be used. Particle physicists, for example, do just this when they search for new particles. They recognize that they are looking for a particle across a wide range of energies, essentially conducting a large number of experiments simultaneously. They therefore require a p value on the order of 10^-7 (the famous “five sigma” standard) before announcing a discovery.
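
Two related numbers are easy to check with the standard library (a sketch; the function names are mine, and I use the common one-sided convention). The “five sigma” discovery convention translates to a tail probability of about 3 × 10^-7 under a standard normal distribution, and the simplest multiple-comparison remedy, the Bonferroni correction, just divides the cutoff by the number of tests:

```python
from statistics import NormalDist

def sigma_to_p(sigma):
    """One-sided tail probability of a standard normal beyond `sigma`."""
    return 1.0 - NormalDist().cdf(sigma)

def bonferroni_threshold(alpha, n_tests):
    """Shrink the per-test cutoff so the family-wide false-positive
    rate stays near `alpha` across `n_tests` comparisons."""
    return alpha / n_tests
```

For 20 drug candidates, Bonferroni would demand p <= 0.0025 from each individual test rather than 0.05.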


27 posted on 02/07/2020 6:13:26 PM PST by stremba


To: stremba

Thank you for your post.


29 posted on 02/07/2020 6:22:10 PM PST by Getready (Wisdom is more valuable than gold and diamonds, and harder to find.)

To: stremba

“Our Constitution was made only for a moral and religious People. It is wholly inadequate to the government of any other.”
- John Adams.

Any serious scientist/statistician would say the same about the scientific method. [See Global Warming, Gender Studies, and similar far-left interpretations of science for details.]


30 posted on 02/07/2020 6:29:19 PM PST by Pollster1 ("Governments derive their just powers from the consent of the governed")



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson