As others have said, there's absolutely nothing wrong with p-values. They are a tool for reporting the significance of an experimental result, but like most tools, they can be misused. This is especially true when researchers don't really understand what they are and how to use them, as seems to be the case in many areas of research.
For the unfamiliar, suppose you are conducting an experiment such as a drug trial. You would split your subjects into two groups: a control group that does not receive the drug and a treatment group that does. You then compare the two groups on some measure to see if that measure differs between them. That all sounds simple enough, but the problem is that even if you compared two groups receiving no treatment, they'd never give EXACTLY the same measurements. There would always be some differences. The big question, then, is how different the results have to be before we can really claim that the drug did something.
That's where p-values come in. A p-value is a measure of probability (hence the p) that tells you how likely it is that sampling two groups of people at random and performing the measurement on those random groups would give a difference at least as large as the one you observed between your control and treatment groups. If that probability is relatively large, then you probably haven't observed a real difference: you would be likely to see a similar difference between ANY two groups of subjects, even absent any treatment. If that probability is small, then MAYBE you found something.
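For the curious, that definition translates almost directly into code. Here's a minimal sketch of a permutation test in Python (the function name and the data are made up for illustration; real analyses usually reach for a t-test or a library routine instead):

```python
import random

def permutation_test(control, treatment, n_perm=10_000, seed=0):
    """Estimate the p-value for the observed difference in group means.

    Repeatedly shuffles all measurements into two random groups of the
    original sizes and counts how often the random difference is at
    least as large as the observed one (two-sided).
    """
    rng = random.Random(seed)
    observed = abs(sum(treatment) / len(treatment) - sum(control) / len(control))
    pooled = list(control) + list(treatment)
    n_c = len(control)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        c, t = pooled[:n_c], pooled[n_c:]
        diff = abs(sum(t) / len(t) - sum(c) / len(c))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical measurements from a small trial:
control = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]
treatment = [5.5, 6.1, 4.9, 6.0, 5.8, 5.2, 6.3, 5.4]
print(permutation_test(control, treatment))
```

The returned number is exactly the quantity described above: the fraction of random regroupings that produce a difference at least as large as the one actually observed.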
The threshold generally accepted in most scientific work is p ≤ 0.05. That is, results are not considered significant unless there's less than a 5% chance that they could have occurred randomly. That should be considered a guideline, but too often it is treated as a set-in-stone benchmark instead. There are many situations where p ≤ 0.05 is completely inappropriate. As a simple example, suppose that instead of one drug you are testing 20 drugs as potential treatments for some condition. If you blindly use p ≤ 0.05 as your cutoff, you would very likely publish a positive result when all you really found was a random difference. That should be obvious: if you roll a 20-sided die 20 times, you wouldn't be all that shocked to roll a 1 on one of your rolls, and you wouldn't conclude your die was somehow loaded based on that observation.
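The die analogy is exact, since a 1-in-20 roll and a 5% false-positive rate are the same probability. A couple of lines of Python make the point:

```python
# Chance of at least one false positive among 20 independent tests,
# each using the p <= 0.05 cutoff, when none of the drugs does anything:
alpha = 0.05
n_tests = 20
p_at_least_one_hit = 1 - (1 - alpha) ** n_tests

# The same calculation for the die: chance of at least one 1 in
# 20 rolls of a fair d20 -- the same number, since 19/20 == 0.95.
p_at_least_one_roll = 1 - (19 / 20) ** 20

print(round(p_at_least_one_hit, 3))  # ~0.642
```

So with 20 null tests at the 0.05 cutoff, the odds are roughly 64% that at least one of them comes up "significant" purely by chance.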
Similarly, you should NOT automatically publish a positive result in such a situation; a much lower p-value threshold should be used. Particle physicists, for example, do just this when they search for new particles. They recognize that they are looking for a particle across a wide range of energies, essentially conducting a large number of experiments simultaneously. They therefore require five-sigma significance, a p-value on the order of 3×10⁻⁷, before announcing a discovery.
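For the 20-drug example above, the most common remedy (one the post doesn't name, but a standard one) is the Bonferroni correction: divide the per-test cutoff by the number of tests so the chance of at least one false positive across the whole batch stays near the intended 5%. A minimal sketch:

```python
# Bonferroni correction for 20 simultaneous drug tests (illustrative numbers):
family_alpha = 0.05
n_tests = 20
per_test_alpha = family_alpha / n_tests  # 0.0025 per drug

# Family-wise false-positive rate with the corrected per-test cutoff:
fwer = 1 - (1 - per_test_alpha) ** n_tests
print(round(per_test_alpha, 4), round(fwer, 4))
```

With the corrected cutoff, the chance of any spurious "discovery" across all 20 tests drops back below 5%, instead of the ~64% you'd get using p ≤ 0.05 on each test individually.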
Thank you for your post.
“Our Constitution was made only for a moral and religious People. It is wholly inadequate to the government of any other.”
- John Adams.
Any serious scientist/statistician would say the same about the scientific method. [See Global Warming, Gender Studies, and similar far-left interpretations of science for details.]