You should. The statistics they cite are unexceptional with respect to the multiple-hypotheses problem: if you test 20 false hypotheses at the 95% confidence level, one of them will probably test as true simply as a matter of random noise.
For example, suppose you are testing the output of DNA chips as a predictor of diabetes. There are some 2,000 hypotheses per experiment. That is, the chip produces 2,000 measurements for, say, 2,000 genes, and each of those measurements is correlated with a disease state. At the 95% confidence level, you are bound to get many, many correlations that look significant but are only noise.
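The arithmetic is easy to check with a quick simulation. Here is a sketch under the assumption that no gene is actually related to the disease, so every "significant" hit is a false positive (the group sizes are made up for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 20 independent tests of true null hypotheses at alpha = 0.05:
    # chance of at least one false positive is 1 - 0.95^20.
    print(1 - 0.95 ** 20)  # ~0.64, so "probably" is about right

    # A DNA-chip-style experiment: 2,000 genes, none actually related
    # to the disease, 50 patients vs. 50 controls.
    n_genes, n = 2000, 50
    patients = rng.normal(size=(n_genes, n))
    controls = rng.normal(size=(n_genes, n))

    # One two-sample t-test per gene, then count the "significant"
    # hits that arise from noise alone (expect about 0.05 * 2000 = 100).
    _, p_values = stats.ttest_ind(patients, controls, axis=1)
    print((p_values < 0.05).sum())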
I have done a lot of scientific publication, in refereed journals and conferences, and serve as the editor of a peer-reviewed journal. Many, many first published results are wrong, and not just because of the noise issue. Scientists are folks like everyone else: they will ignore the negative findings and report only the positive, they will not define their endpoint in advance but keep going when it looks like they are almost there, and so on. These are statistical no-nos, but they happen all the time.
Some years ago, I found an error in a paper I had previously written with another author: we were measuring (and reported) a bug in our software, not a real phenomenon. He was the head of a prominent department at a prominent university. He argued 'til he was blue in the face that we should not report that we had been mistaken.
The system tends to be self-correcting because other folks try to replicate the wrong results and cannot. That gets reported. So eventually the noise is filtered out.
Donald Johanson, is that you?
The professor kept the data, and each following year she had a new student perform the same test, with the same result. After a few years, one student made an error in the calculation and reported that the results were significant.
The professor then submitted the study for publication. She included the means and standard deviations for the various groups.
Apparently, she was unaware that any reader of the published article could verify the statistical test from the means and SDs alone. One reader discovered the error and demanded a retraction.
The professor's response? She loudly complained, "I should never have included the standard deviations!"
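She was right to worry, in a sense: published means, SDs, and group sizes are enough for anyone to redo a two-sample t-test. A minimal sketch (the numbers here are hypothetical, just to show the check):

    from scipy.stats import ttest_ind_from_stats

    # Hypothetical summary statistics as they might appear in a paper:
    # treatment group: mean 5.2, SD 1.1, n = 30
    # control group:   mean 4.9, SD 1.3, n = 30
    stat, p = ttest_ind_from_stats(
        mean1=5.2, std1=1.1, nobs1=30,
        mean2=4.9, std2=1.3, nobs2=30,
    )
    print(f"t = {stat:.3f}, p = {p:.3f}")  # p ~ 0.34: not significant

If the paper had claimed significance with summary statistics like these, any reader running this one-liner would catch it.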