Big Trouble In The World Of "Big Physics"
klevin writes "Hey, scientists are human too, who woulda thunk it? Nice bedtime reading for anyone who thinks science is an impartial search for knowledge and understanding. 'Six months ago, Jan Hendrik Schön seemed like a slam dunk nominee for a Nobel prize. Then some of his colleagues started to take a closer look at his research.'"
Hate to say it but ... (Score:4, Insightful)
A question for the physicists, though, since the article was a bit scant on details: what did the guy claim he could do? How was he claiming to turn materials into semiconductors?
Re:Hate to say it but ... (Score:1)
Superconductors, not semiconductors (Score:2)
Big Science == Big Business (Score:2, Funny)
Re:Big Science == Big Business (Score:5, Interesting)
The problem is, I don't see a way around "big science" any time in the near future, in most fields. Let's face it, the easy stuff in physics has been done; small labs don't have the resources to do ground-breaking research any more, and they probably never will again. New sciences, or new branches of existing disciplines, occasionally pop up that allow the little guys to make serious contributions -- right now, bioinformatics is an example of this -- but research inevitably gets big and expensive as all the cheap discoveries are made and duly noted. There may be a solution to this problem, but damned if I know what it is.
Re: Big Science == Big Business (Score:3, Insightful)
> That attracts the best and the brightest, but unfortunately it also gives them the incentive to cheat.
Contrary to intuition, it is often the best and brightest who cheat at universities, too. A lot of pre-med students get busted. Apparently the B&B are very good at rationalizing things to themselves.
Er.... (Score:2, Interesting)
That seems to me to be overstating things quite a lot. In addition to the new branches and disciplines you mention, small labs without monster budgets will also benefit from their own improvements in technology. Just because budgets remain relatively small doesn't mean that technological capability needs to remain stagnant; as an obvious example, processor speed seems to be a lot cheaper now than it was fifteen years ago.
But even more than the new scientific avenues that cheaper technology opens up for small labs, how can anyone say that everything that's going to be discovered (without an enormous budget) already has been? It seems to me very likely that more discoveries, even significant discoveries, are just around the corner for small labs. It may not be as drastic as faster-than-light with household products [slashdot.org], but surely there are implications even of what we already know that have yet to be fleshed out. We don't know what is going to be discovered, but that doesn't mean nothing can be--and besides, people have been saying that everything's been discovered already since ancient Greece, and probably even before then.
Glaring Error in the Article (Score:2, Informative)
The O-rings weren't frozen stiff in the atmosphere. Ice worked into a joint while the launcher was on the pad. The rupture happened even before the thing left the ground.
Re:Glaring Error in the Article (Score:1, Insightful)
Re:Glaring Error in the Article (Score:2)
Re:Glaring Error in the Article (Score:2)
Edward Tufte has an interesting discussion of this case (how the scientists could have presented the evidence better) in Visual Explanations [edwardtufte.com].
If you read the book, he seems to be laying most of the fault at the feet of the engineers who prepared such an unconvincing presentation for NASA. In a class I took with him, however, he talked about how people making a serious decision have an obligation to make sure they are getting the information they need -- that NASA should have pressed for more information. The questions he feels the recipients of a presentation should ask are, "Show me causality," "Show me all relevant data," and "What do I really need to know?"
I asked him about this apparent shift afterwards and he said, well, there was certainly enough blame to go around. [NOTE: previous two paragraphs lifted from a comment I made on another site].
- adam
Re:Glaring Error in the Article (Score:2)
But then, I'll freely admit to not having read exactly what was said to NASA that morning (unless it was in the congressional report, paper or video - I own copies of both - in which case it didn't raise any alarms with me). Still, the whole "it wasn't said strongly enough" explanation seemed to pop up much later as a reason for the failure overall, which indicates it may be (note I say *may*) just some people looking to redeem their careers or authors looking to publish a new theory to sell books.
--
Evan (no reference)
Not really "big physics" (Score:3, Insightful)
Re:Not really "big physics" (Score:2)
Re:Not really "big physics" (Score:2)
A little over the top (Score:5, Insightful)
It is a reasonable criticism directed at Science and Nature that they seem to compete with each other to publish attention-getting results (the recent bubble fusion experiment comes to mind), but what it comes down to is that a reviewer of a paper has no way to validate experimental data given to him. You have to take the research group at its word that the data are not fabricated. You can question their data reduction and analysis methods, but if they said they did this measurement and these are the resulting data then you have to take them at their word.
One of the ways science operates is that results like these are presented, and if the results are interesting enough (i.e., unexpected or never seen before) then other labs repeat and verify the experiment. When the results are confirmed, then great. If not, then the results (or at least the conclusions drawn from them) become suspect. This happened with cold fusion and it looks like bubble fusion is heading down the same road. This has happened in the past (N-rays [skepdic.com] are another example), and it will happen in many other instances that don't draw the big press stories. That is how it should work. The Salon article seems to suggest (among some valid points) that the paper reviewers should have had some all-knowing wisdom and immediately questioned the data.
I also doubt, as the article suggests, that the reputation of physicists has been harmed and that all over the world school children are crying "Say it ain't so, Jan Hendrik." The biosciences have many, many scandals related to data forging, or at least questionable massaging or analysis of data, because the stakes ($$) are much higher when a new drug is coming to market and because consistent data are harder to collect. The biosciences continue to draw huge numbers of people into the field and they enjoy (deservedly) a positive reputation.
I also thought the article was way over the top with regard to the government funding aspect of this. It made it sound like all the government money spent on R&D is a waste since it obviously is going to charlatans and rogues. The author should have looked up the research dollar amounts in relation to the total government budget (such as its percentage of the GNP) as well as in relation to the total non-DoD R&D budget, and seen how well the NSF or the DOE compare to, say, NIH (I'll give you a hint: they are quite neglected). This isn't "Big Science" by any stretch of the imagination.
Re:A little over the top (Score:2)
If Schön was putting out 30 essentially similar papers per year, then the peer review process may have been failing to ensure that each paper had sufficient new material to warrant publication.
On a lighter note, I've always found that this [wisc.edu] pretty much summarises my idea of experimental physics.
Re:A little over the top (Score:2)
I thought a better title for the article would have been Scientific Method Works!, or even (if you're going for real sensationalism) Scientific Method Triumphs Over Capitalist Temptation!
When you boil it down you have a) someone publishing (possibly) bogus results, b) peer reviewers becoming suspicious and finally, c) someone asking hard questions because the data is public, so you can check these things. It sounds a lot like a process that has evolved, in part, to balance the quest for prestige and grant money: it had a hard fight, but ultimately won.
I guess Eggheads You Don't Understand Are Screwing You Over makes better press.
Sensations and other bad writing (Score:2)
The only believable explanation for this episode is one buried in the middle of the story: Schön has mental health issues. He's not just a con artist, he's a compulsive con artist, the kind that lies even when he doesn't need to, or when the lie is likely to come back and bite him. How else to explain his behavior? He must have known that nobody would be able to reproduce his results. It's a pretty common syndrome.
Of course, if you focus on Schön's mental state instead of on the "failure" of the peer review system you don't have a story!
Money for nothing (Score:2, Interesting)
Sad but hopefully good in the long run... (Score:2, Interesting)
Unfortunately, science is only a human activity so it is subject to all of the faults of people. The way the career and funding system works puts significant pressure on the shoulders of aspiring scientists and things like this will continue to happen. Fortunately, the peer review system managed to stop him in the end though it would have been a lot better if it had happened on day one.
The big dilemma is that science both has to be open to new (surprising?) results and extremely critical at the same time. Redoing an experiment can be incredibly hard and is always time consuming and expensive. A negative result is much too often no result at all.
virve
--
Science is an impartial search for knowledge. (Score:2, Insightful)
Science is an impartial search for knowledge and understanding. Falsifying the results of experiments is most definitely not science.
This is a Lesson in Good Science (Score:5, Informative)
No, I don't mean that researchers who falsify data are doing good science. But you'll notice that the falsification was caught. And it wasn't just the revelation that they included an incorrect figure and some of their plots had identical noise. Colleagues have been growing suspicious of the results for over a year now because they've been unable to reproduce them.
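For illustration only, here's a minimal Python sketch of the kind of check that eventually sinks this sort of fraud: two supposedly independent measurements should not share the same point-to-point noise. (The helper name and the toy data below are my own invention; this isn't anything the reviewers or the Bell Labs committee actually ran.)

import numpy as np

def shared_noise_suspect(y1, y2, threshold=0.99):
    """Flag two traces whose point-to-point fluctuations are nearly identical.

    Genuinely independent measurements have independent noise, so a
    residual correlation near 1.0 suggests the same data was reused.
    """
    y1 = np.asarray(y1, dtype=float)
    y2 = np.asarray(y2, dtype=float)
    if len(y1) != len(y2):
        return False  # different lengths: no point-by-point comparison
    # Strip the smooth trend with a short moving average so only noise remains.
    kernel = np.ones(5) / 5
    r1 = y1 - np.convolve(y1, kernel, mode="same")
    r2 = y2 - np.convolve(y2, kernel, mode="same")
    corr = np.corrcoef(r1, r2)[0, 1]
    return corr > threshold

# Toy example: two "different" curves that secretly share the same noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
noise = rng.normal(0, 0.05, x.size)
curve_a = np.sin(2 * np.pi * x) + noise
curve_b = 0.5 * np.sin(2 * np.pi * x) + noise  # rescaled signal, reused noise
print(shared_noise_suspect(curve_a, curve_b))   # True -> worth a closer look

Obviously real referees eyeball the figures rather than run scripts, but the principle is the same: identical wiggles in supposedly independent data sets are a red flag.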
This is good science. Scientists individually screw up all the time. I certainly have. Usually, we make honest mistakes. Sometimes, we make dishonest ones. But science is not and has never been about any one person or group. Science is a collective effort. It's not just the group doing the experiment; it's the other groups that try to reproduce it, the reviewers who look at it critically, and the opponents who try their hardest to tear it apart. If you want to consider "good science", you need to add all of these into the picture. One of these segments clearly failed in this case (the original researchers) and another didn't catch it (the reviewers), but the others did their job.
So, really, while the individual scientist was doing bad work, this illustrates exactly how science should work under real world circumstances.
The myth is that science is impartial! (Score:1, Insightful)
The history of the activity doesn't bear this out, the work experiences of scientists themselves don't bear this out, the public relations apparatus of science by way of popular press books hyping the latest and greatest doesn't bear this out, and actual survey data on scientists' attitudes, as opposed to speculations about them, don't bear this out. When competing for fame or grant dollars, politics, data distortion, and hyping/fraud can all play a role.
Of course this isn't everybody, but it's not as small a group as some people, mostly those who haven't actually done any basic science work, like to believe.
Science is a human activity with lots of competition for prestige/ego and grant money. The problem of self-interest, whether of individuals or whole fields, is a very big part of all this.
One mechanism that helps guard against this is independent replication of results. However, this doesn't always work as it should either; in fact results can be, and have been, accepted long before proper replications occur...if they ever do.
Re:The myth is that science is impartial! (Score:2)
Basically, until a scientific paper produces strongly reproducible results, results in practical applications, or has been looked over for 50 years or so, trust only that which you have reviewed yourself. Science is a political process, not an abstract search for the truth. Peer review and reproducible results are the checks that make sure that, on the whole, science tends closer to the truth. But like the second law of thermodynamics, it's a general process that overall is true: it's easy to find local examples that appear to violate it.
The work itself (Score:1, Informative)
Some of his work involved molecular transistors. He reported results of successfully making a single-molecule transistor. In fact, IBM set up an entire lab based on his findings. (Oops.) Not to say that these things won't be developed. Most of his "findings" are scientifically sound... at least the theory behind them, but with current technology it just can't be done (according to the hundreds of researchers doing the same work). In a rush to publish lots of stuff, he re-used graphs with the scales hastily changed, along with other undergraduate techniques for falsifying data.
Interestingly, the articles in question only started appearing after Batlogg's name (a reputable Bell Labs guy) started appearing on the papers. There were some other things he was working on. A lot of what he did at the beginning was quite real; people still believe a lot of his work on pentacene.
In the end, my point is this guy was working on organic semiconductors used in all fields, as well as exploring the possibility of superconductivity in many organic systems. Hence the confusion.
Things ain't like they used to be (Score:2)
I miss the good ol' days when scientists gone bad had an evil, ear-piercing cackle.
Now they just blend in like regular scientists.
Sigh
Graphs? (Score:2)
He would have got away with it if he simply had a Mac.
Seriously, though, why leave behind such obvious clues? It is not that hard to generate new phoney diagrams. I am glad evil people are often just as stupid as the non-evil people.
How many are *not* caught because they don't leave behind such silly clues?
Re:Graphs? (Score:1)
The scary part was that several years after that a biologist was caught falsifying results with an error like Schon's-- he had reused a gel photo from an older paper and the reviewer recognized it. I can't remember the guy's name at the moment. But, it put the same question into my head... "How many are *not* caught"? Fortunately if the reviewers don't see it, the person's competitors or collaborators will eventually find they can't repeat the fraudulent result. Redundancy is built into the system.
Re:Graphs? (Score:2)
I wonder how many research papers simply could not have their results duplicated over the years, and just fell by the wayside without any hootin' and howlin'. I suppose if it is a mundane experiment, nobody cares much.
It seems nobody waited for duplication in this case before publishing or bragging.
I think the journals should devote one half to fully verified stuff and another to "cutting edge" stuff. That way the reader knows what is risky to trust and what is not. There is pressure to report the latest breakthrough developments faster. That is fine as long as the reader knows the level of verification put into it at the time of presentation.
Anonymous peer review (Score:1)
Another reason for the breakdown is the hypnotizing effect of reputation. When the names of eminent people and places appear on the top of submitted papers, says Florida physicist Hebard, "reviewers react almost unconsciously" to their prestige. "People discount reports from groups that aren't well known."
One thing that could be done to cut down on prejudice within at least this part of the review process would be to remove names and other identifying information (like institution) from papers before they get sent out for review. This would allow work to be judged a bit more on its merits and a bit less on its authors.
Such a change might both slow down frauds like Schon's and allow new ideas into the public realm more easily. Papers don't just get erroneously passed based on name; sometimes otherwise good papers get rejected because the author is either unknown or out of favour. Anonymous peer review would help mute both of those effects.
Bell Labs says Mea Culpa (16 counts) (Score:1)
MURRAY HILL, N.J. -- Lucent Technologies Inc.'s Bell Labs said an independent committee investigating possible false data in published research found one scientist to be responsible for the fabrications.
Bell Labs said Wednesday in a statement that it has fired the scientist, J. Hendrik Schon, and cleared all others involved in the experiments of scientific misconduct. The experiments took place between 1998 and 2001.
The five-person committee of outside scientists and engineers found Mr. Schon committed misconduct on at least 16 occasions, mainly involving substitution of data, unrealistic precision and results that contradict known physics.
Papers by Mr. Schon and his team were called into question in May when other scientists found the experiments difficult to replicate and noted that graphs in several of the papers seemed similar. The experiments, involving molecular transistors, had been published in journals such as Science and Nature.
"The evidence that manipulation and misrepresentation of data occurred is compelling," the committee said. It added Mr. Schon "did this intentionally or recklessly and without the knowledge of any of his co-authors." A total of 20 researchers from Bell Labs and other institutions authored the approximately two dozen papers.
Bell Labs President Bill O'Shea said, "We are deeply disappointed that a case of scientific misconduct has occurred at Bell Labs -- the first in our 77-year history. Since Bell Labs' founding in 1925, tens of thousands of Bell Labs scientists and engineers have faithfully abided by the scientific honor code. That's an enviable track record, but we take this one exception very seriously."
In its executive summary, the committee said Mr. Schon "acknowledges that the data are incorrect in many ... instances. He states that these substitutions could have occurred by honest mistake. The recurrent nature of such mistakes suggests a deeper problem. At a minimum, Hendrik Schon showed reckless disregard for the sanctity of data in the value system of science."
Bell Labs News Release (Score:1)
http://www.lucent.com/press/0902/020925.bla.html