Posted on 11/04/2002 8:56:30 AM PST by Junior
WASHINGTON (AP) - In the largest block retraction ever published in the prestigious journal Science, eight papers by discredited researcher J. Hendrik Schon are being withdrawn at the request of his co-authors.
Schon, 32, was a science superstar at Lucent Technologies' Bell Labs. He published more than 80 papers in top journals, such as Science and Nature, and was sought out by other researchers because of his reputation for spectacular results with difficult problems in materials science and electronics.
But when his work was questioned by other scientists last spring, an outside investigating committee appointed by Bell Labs concluded that Schon had fabricated data or altered experimental results in at least 16 projects between 1998 and 2001.
Schon, the committee found, "did this intentionally or recklessly and without the knowledge of any of his co-authors."
At least eight of Schon's research reports were published in Science and co-authors on the studies announced in this week's issue of the journal that they were retracting all of the papers.
"As a result of the committee's findings, we feel obligated to the scientific community to issue a retraction of the ... articles," the co-authors announced.
The articles dealt with exotic electronic experiments involving such things as organic molecular semiconductors, lasers and high temperature superconductivity.
Robert C. Haddon, a professor of chemistry and chemical and environmental engineering at the University of California, Riverside, said he agreed to become a co-author on one paper after Schon claimed success in a superconductivity experiment that Haddon originated.
"There was an experiment I tried myself in 1996 and I couldn't get it to work," said Haddon. "I heard about Hendrik's devices that seemed to work better so I suggested the experiment to him. Six weeks later, he sent me an e-mail and said he got it to work."
After Schon sent him data that seemed to show success in the experiment, "I agreed to be a co-author," said Haddon.
But after the committee investigation showed Schon's data was suspect, Haddon said he had no choice but to retract the paper.
"This data (in the paper) cannot be trusted," said Haddon. "While some of it may be correct, one can't be sure."
Haddon said that when researchers combine their studies to produce a single paper, each scientist depends upon the honesty of work contributed by the other co-authors. That, he said, is the way science is supposed to work.
As a result of the questioned work, Bell Labs announced earlier this month that it was withdrawing six patent applications that had been based on Schon's research. Schon has made no public comment on the matter.
Bill O'Shea, president of Bell Labs, said in a statement last month that the Schon investigation uncovered the first case of scientific misconduct in the 77-year history of the famed research organization. The lab, which until 1996 was part of AT&T, has generated more than 28,000 patents in communications and electronics and was the research home of six Nobel Prize winners in physics.
Science, published weekly by the American Association for the Advancement of Science, is one of the top peer-reviewed journals in the world, routinely printing landmark findings in many fields of science.
___
On the Net:
Science: www.sciencemag.org
Bell Labs: http://www.bell-labs.com
Hmm....cooking up more propaganda for the enviro-wackos?
In my business we are still required to figure out a way to sign off even if it's costly. If we cannot, then reports are issued to warn the reader that there were certain steps we would have thought necessary to take but were unable to, and that had we taken them our opinion might have changed.
What is wrong with a peer review system that provides for a "qualified review" and indicates areas that were not possible or practical to check?
What you have now isn't exactly inspiring.
Depends on the journal. Most journals publish only articles that have been reviewed by a set of "peers" who have some knowledge of the "field" and are therefore able to judge the rigor of the experiments and the conclusions drawn from the data presented.
"Peer-reviewed Publications" are worth more on your resume.
However, stuff slips through from time to time.
I guess what bothers me is that it doesn't sound like there is any sense of urgency among the posters here to improve the chances of catching this stuff.
Patrick, I think you are being overly optimistic about this. Their entire schtick is based upon unsubstantiated, untestable speculation...I can't imagine CRI will ever back down.
The usual sequence of events is: 1) The researcher does the work. 2) He writes it up and submits it to a journal. 3) The journal sends it to selected anonymous reviewers, who read the article. 4) The reviewers send their opinions back to the journal on whether the article is worthy of publication. 5) You get the "reviews" back (names of reviewers deleted) and find out the fate of your article (outright rejection; provisional acceptance, dependent upon the reviewers' questions and concerns being answered to their satisfaction; full acceptance without reservations (very rare); or a hold, dependent on more experiments being performed to make sure the reviewers' concerns are addressed). 6) The journal follows the reviewers' instructions.
Hahahahahaha! If this place moved as slow as science does, then it wouldn't be fun.
The review "process" can literally take months. See my Post #28.
Some audits take longer. Here is my main point. The first posters on this thread were patting themselves on the back over how great the system works. In my profession we don't react the same way. When something gets published (an audit report) that is wrong, we as a profession try to determine what went wrong and attempt to correct things so that it doesn't happen again. Especially since, in the case stated here, not only were the reviewers thrown off track, others were anxious to co-author the work.
My surprise is that instead of looking at this situation, which can and should be examined for improvement, the initial posters were filled with glee.
1) Back when I was just getting started there was a series of articles with absolutely beautiful and earth-shattering results by a young researcher (Specter, or something like that, was his name). The results were published in some very prestigious journals. Within months other researchers in the field asked for some of the reagents to be sent to them (publishing in a journal subjects you to "public domain" laws, and most journals REQUIRE you to freely provide any-and-all of your reagents to other researchers to repeat the experiments). Nothing worked, of course, and it was found out that the guy had falsified ALL of his data. Specter's (?) career was toast. The interesting thing here was that the falsified data pointed to "signal transduction," which was shown, in subsequent years, to be an essentially correct paradigm.
2) Baltimore's problems at MIT (Baltimore was a Nobel Prize winner for other experiments, many years earlier in his career). Too long to go into, but essentially Baltimore didn't check the results of his worker himself and put his trust in the research that was done in HIS laboratory. He put his name on the paper (the opposite of what is happening in the current FreeRep article, where people are taking their names off of the papers so as not to damage their careers). Another researcher, who came along later and picked up the project in Baltimore's lab, couldn't repeat the experiments and was fired for incompetence. This second researcher "blew the whistle" since they felt fraud had been committed. This whole mess took 10 years of Congressional Hearings to straighten out!
An accounting audit is a very different thing from the peer-review process employed by science journals. To have a strict analogy, the science reviewer would need to visit the lab, make sure the equipment was indeed there and functioning, examine a sampling of the records, run a few sample procedures to check some data points, etc. That would be a peer-review of the procedures followed. The scientific journal review is different, as was described above. The review there is just to see if the paper passes the "smell test" -- does the author know what he's talking about and do his claimed results seem to make sense. That filters out obviously shoddy work. The actual peer-review of the experiment itself takes place later, in other labs, when -- and if -- someone else tries to replicate the results.
I think I see now what you are getting at.
And you are happy to accept the results of that system. It's a surprise to me.
Let me draw a different but closer analogy. Most accounting reports are not audits; they are based on merely reading what we are given by the client and saying "hey, nothing jumps out at me to say this is wrong on its face." This opinion is called a compilation, which in this case would probably be even less work than what the reviewer did here.
I can assure you that when an accountant finds out that he signed off on a compilation that had material problems, he doesn't slap himself on the back and congratulate himself on what a wonderful system it is. He instead reviews such policies as client acceptance, his knowledge of the industry, industry materials, etc. Was the peer review adequate? Did everyone rely too much on the scientist's co-authors, and did the co-authors rely too much on the reviewers? Should works that result in a large amount of grants get extra scrutiny because of the higher potential for fraud? These are just some questions off the top of my head which seem natural to me. Instead the environment is one of acceptance that this is just a great system and it's not possible to improve it. I am amazed, especially because it's an attitude that is so anti-science.
In the case here, I would want to review it to see where it went wrong and to see what, if anything, could be done to lessen the chance of it happening again. However, I am learning real quick that most folks who post here think the system is great.
I don't recall expressing happiness. It's a system that works, almost all of the time. Yes, it relies heavily -- perhaps too heavily -- on the integrity of the individuals involved. But there are also serious constraints that make this necessary. These journals don't have the resources of large public companies, which pay for expensive audits of their published figures. They exist on subscriptions, not glossy ads. They publish thousands of papers. They can't afford to send out a team to review every paper they receive and to perform the physical checks I wrote about earlier. The reviewers are also busy people, who act as peer reviewers as a professional duty, not as a full-time career. The ultimate test, as I said earlier, comes when another lab attempts to reproduce the published results. Errors get discovered, as do the occasional frauds -- which are career killers, and thus aren't encouraged. The system works, better than most.
I certainly don't want to start evolution wars, but in science all 'laws' are subject to question. Much value is in the 'conversation' that goes on in these circles (that conversation including publishing). Ideas and data are discussed in the broadest circles by being published. The pre-publication review is, as stated, a smell test. You can't reproduce the results of every experiment before publishing. You surely can with time.
To extend your analogy, that would be akin to requiring two audits, as the auditors, like the scientists, might be corrupt. I believe companies are required to change auditors at some defined period in an attempt to deal with fraudulent audits. Just as in science, you would think there would be no incentive to cheat knowing you will be caught. It happens anyhow.
Face it, the scientific method/peer review process has worked very well, though imperfectly.
Not any more than usual to my knowledge.
......they tend to police themselves......
......to the best of our ability.
.......you hear no calls for congressional hearings or regulatory intervention etc.
From my Post #31,item #2:
2) Baltimore's problems at MIT (Baltimore was a Nobel Prize winner for other experiments, many years earlier in his career; he hypothesized, then discovered, retroviral reverse transcriptase). Too long to go into, but essentially Baltimore didn't check the results of his worker himself and put his trust in the research that was done in HIS laboratory. He put his name on the paper (the opposite of what is happening in the current FreeRep article, where people are taking their names off of the papers so as not to damage their careers). Another researcher, who came along later and picked up the project in Baltimore's lab, couldn't repeat the experiments and was fired for incompetence. This second researcher "blew the whistle" since they felt fraud had been committed. This whole mess took 10 years of Congressional Hearings to straighten out!
BTW, the Congressional Hearings occurred (i.e., the s-hit-the-fan) at the same time as Rockefeller University was nominating Baltimore to become president of the university. Their offer was withdrawn and his career was put on hold for a number of years until the whole thing got straightened out. I think he's at CalTech currently.