Positive Bias Could Erode Public Trust In Science 408
ananyo writes "Evidence is mounting that research is riddled with positive bias. Left unchecked, the problem could erode public trust, argues Dan Sarewitz, a science policy expert, in a comment piece in Nature. The piece cites a number of findings, including a 2005 paper by John Ioannidis that was one of the first to bring the problem to light ('Why Most Published Research Findings Are False'). More recently, researchers at Amgen were able to confirm the results of only six of 53 'landmark studies' in preclinical cancer research (interesting comments on publishing methodology). While the problem has been most evident in biomedical research, Sarewitz argues that systematic error is now prevalent in 'any field that seeks to predict the behavior of complex systems — economics, ecology, environmental science, epidemiology and so on.' 'Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves,' he adds. Do Slashdot readers perceive positive bias to be a problem? And if so, what practical steps can be taken to put things right?"
Feelings are more important than science (Score:5, Insightful)
Right? Isn't that what American schools and TV have been teaching for the last 30 years? Nerds aren't cool - facts are open to interpretation - everyone is special - you can eat more than you grow... When you have a society rewarding irrationality, what do you expect? Rigorous science?
Re:Feelings are more important than science (Score:5, Insightful)
In the tech industry we all deal with non-technical managers who drive the technical direction and often define the message to the clients. Does science suffer from the same unskilled managerial types, pushing scientists to interpret results in a particular way?
I have a hard time believing a professional scientist doesn't know how to apply the scientific method, but then again incompetence is rampant in every other industry, I guess - why not the scientific one...
Science comes when results are confirmed (Score:5, Insightful)
Actually, science is still working; the real trouble comes with the publicity of the science.
You should never believe the results of any single study. Every scientist knows this; or should know this. Science comes when results are confirmed, not when somebody publishes the first paper. The real work of science just starts when somebody publishes a study saying "we show that x has the effect y." That initial paper really is no more than "here's a place to start looking." However, newspapers want to publish news, and they need to publish whatever's hot and interesting and being done today, not "well, scientist z had his team take a look at the xy phenomenon to see if there was anything interesting there, and they couldn't really find anything there, although maybe some other research lab might have different results."
And, I suppose that somebody should post a link to the obligatory xkcd: http://xkcd.com/882/ [xkcd.com]
Re:Science comes when results are confirmed (Score:5, Interesting)
There's a brilliant line on this in "The Science of Discworld" - I won't pretend I can quote it 100% accurately off the top of my head, but it goes something like this:
"In the media you will often read that a certain scientist is trying to prove a theory. Maybe it's because journalists are trained in journalism and don't know how science works or maybe it's because journalists are trained in journalism and don't care how science works - but a good scientist never tries to prove her theory, a good scientist tries her best to disprove her theory before somebody else does it for her, failing to disprove it is what makes a theory trustworthy."
Re:Science comes when results are confirmed (Score:4, Interesting)
I think the real problem today is that journalists think they can be stars right out of the gate. In the old days, you started out as a fact-checker before you could even get a word that you wrote published (usually anonymously at first). By the time you got a by-line you had come up through the trenches and seen all of the behind-the-scenes mistakes that the "star" writers made. Today you have a blog and no editor. Not only does quality journalism go out the window, but these new so-called journalists never learn the consequences of their habits, because they haven't seen others make those mistakes (and haven't been motivated, like a fact-checker, to find the problems).
Of course this "star-at-birth" issue isn't just a problem with journalists, but with many professions (e.g., scientists, chefs, programmers, etc.)...
Re:Science comes when results are confirmed (Score:5, Informative)
That is how proper science is done (Score:3)
And Feynman said as much in his famous "Cargo Cult Science" speech, which I encourage everyone to go read.
The problem is a lot of scientists DON'T do science that way these days. Bad science, "positive bias" as this article calls it, was rampant in the behavioural sciences when I was studying them. I basically never read a paper where they falsified their theory, or where they said things were inconclusive. They always found a way to hammer the results into support of their theory, and none of them were ev
Re:Science comes when results are confirmed (Score:5, Insightful)
I'm not a Popperite, rather a Jaynes-Cox-Bayesian, but nevertheless it is important to avoid confounding the relative strength of positive and negative evidence. Absence of evidence is not the same as evidence of absence, yet we almost invariably confound the two.
Taleb damn skippy agrees with you about publicity, however, and the near-criminality of publicity and reporting of science. A newspaper necessarily takes a scientific result or observation and transforms it two ways. First, it creates a narrative. It isn't just "a tornado hit Houston", but "a tornado hit Houston, possibly caused by Anthropogenic Global Warming" with the subtext "this isn't an act of nature, random and unpredictable, but is instead our fault". Aztec priests couldn't have come up with a better excuse for ripping the still-beating hearts out of a stream of slaves and war captives. Second, it necessarily reduces the complexity of the result to no more than three variables, ideally one. It "Platonifies" it (according to Taleb) -- wraps it up in a pretty, easy-to-understand package that makes it seem more predictable, less random than it really was. Global warming is a much simpler "cause" than "a cold front overrunning a warm wet surface layer of air near the ground, creating turbulent rolls that break off and terminate on the ground, sustained and driven by the thermal difference" -- and it is a better story too.
Sadly, as you point out, real science is all too often (and should be) "scientist z looked at something and didn't find much." But what they failed to find, and how they looked, is often as important as, or more important than, a study that claims to find something - especially when the latter uses questionable methodology to try to prove something, cherry-picks data (for the same purpose), ignores silent evidence (ditto), etc. Medical science is permeated with this. Nobody gets famous, or rich, or even gets a job, for looking for a cure for cancer and not finding one. This too is addressed by Taleb. Great book.
rgb
A perfect example of what? (Score:5, Informative)
To me it seems a perfect example of how NON-SCIENTISTS have set the expectations that scientists are not able to live up to.
The report was basically "Hey, guys, this looks odd, but we've corrected for everything we could think of. You may want to see if you can replicate this".
Nobody else managed to replicate.
MEANWHILE the original team was being told how they had it wrong. They changed their process to account for this and, despite that, still saw some residual difference. Another attempt by another team showed no effect, so the original scientists CONTINUED to look for what could be the cause if not FTL neutrinos.
They then found that the 50ns difference can be accounted for if the connectors were not tight.
NOT, at that time, that this was the cause of the discrepancy, but that
a) the couplings were loose NOW, but they didn't check THEN, could have loosened since then
b) an effect of this is a slight delay, enough to remove the difference they'd seen
All absolutely fine and no "positive bias" AT ALL.
But what is the public saying about it? What were the press saying about it?
THAT debacle (and how it's ignored by the parties involved in it) is a perfect example of the problems science has with people who aren't scientists.
Re:Feelings are more important than science (Score:5, Insightful)
Re: (Score:3)
Juggs?
Re:Feelings are more important than science (Score:5, Informative)
Positive bias works a little differently than you think. First, a few definitions. A positive result looks like this: "We tried X on Y and it works." A negative result looks like this: "We tried X on Y and it doesn't work." Which one looks more interesting? That's right, the positive one. Now remember that outside mathematics, science is stuck with probabilities. There's always a very small chance of false positives (and also false negatives, but those are less of a problem).
So let's suppose that we have an experiment which should come out negative with 99% certainty. There's 1% chance of false positive. Now let's have 100 scientists independently perform the experiment. Chances are that 1 scientist will get a false positive. This scientist will then publish a paper while the other 99 will give up and research something else without publishing because of the impression that negative results are not interesting.
This is how positive bias works. Those 99 negative outcomes need to be reported to show that the one false positive is a false positive, but they aren't reported. It's not some kind of intentional fraud but merely a consequence of an overt obsession with original and sensational results, dictated by those who pay for research. It doesn't matter what the research actually means, only that it can be phrased as "We tried X on Y and it works."
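The file-drawer arithmetic above can be sketched as a quick simulation. This is a toy model only - the lab count and false-positive rate are just the illustrative numbers from the comment:

```python
import random

random.seed(42)

N_LABS = 100           # independent labs running the same experiment
FALSE_POSITIVE = 0.01  # chance a true-null experiment "works" anyway

def run_experiment():
    """Simulate one lab's experiment on an effect that does not exist."""
    return random.random() < FALSE_POSITIVE  # True = (false) positive

results = [run_experiment() for _ in range(N_LABS)]
published = [r for r in results if r]   # only positives get written up
unpublished = results.count(False)      # the "file drawer"

print(f"positives published: {len(published)}")
print(f"negatives never reported: {unpublished}")
```

The literature then contains only the handful of (false) positives, with nothing to show how many null runs they should be weighed against.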
Re:Feelings are more important than science (Score:5, Interesting)
During most of my career in pure physics I got negative results, which were always hell to publish. One of the things you'd hear a lot was, "Don't worry, a negative result is just as good as a positive result!"
At some point I started telling friends and colleagues who got positive results, "Don't worry, a positive result is just as good as a negative result!" Which is false, as proven by the fact that no one but me ever said it.
Negative results are hard to get published, but far more common than positive results. Furthermore, on the road to any positive result there are going to be lots of negatives: even today, working in an area where true positives are much more common, I try to put a section in every paper entitled something like "Things That Did Not Work So Well", because any experiment or computation or theory is likely to involve some dead ends that seemed like a good idea at the time, and if scientists don't report on them they will continue to seem like good ideas to people who haven't tried them, who will then waste effort on trying them, and fail to publish them when they don't work...
Re:Feelings are more important than science (Score:4, Interesting)
This is how positive bias works. Those 99 negative outcomes need to be reported to show that the one false positive is a false positive but they aren't reported.
I just want to point out that it's not quite so straightforward. A null result for an effect is not, in and of itself, negative evidence for that effect, it's just a lack of evidence for that effect. It's always possible that a different set of materials, a larger sample size, an additional control, more sophisticated stats, or any number of methodological modifications would succeed in finding an effect. 99 null results with bad materials are not evidence against even a small number (not one!) of positive results with good materials. Null results are under-reported because they are much more ambiguous, not (only) because they are harder to sensationalize.
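The sample-size point can be illustrated with a toy power calculation - a sketch assuming a small but real effect tested with a two-sided z-test (the effect size, alpha, and sample sizes are made-up numbers, not from the comment):

```python
import random
from math import sqrt

random.seed(1)

EFFECT = 0.2   # small but real effect, in units of the population sigma
SIGMA = 1.0
TRIALS = 2000  # Monte Carlo repetitions per sample size

def detects(n):
    """One study with n subjects: two-sided z-test at alpha = 0.05."""
    sample = [random.gauss(EFFECT, SIGMA) for _ in range(n)]
    mean = sum(sample) / n
    z = mean / (SIGMA / sqrt(n))
    return abs(z) > 1.96

powers = {n: sum(detects(n) for _ in range(TRIALS)) / TRIALS
          for n in (20, 200)}
for n, power in powers.items():
    print(f"n={n:>3}: real effect detected in {power:.0%} of studies")
```

With n=20 most studies return a null result even though the effect is real, while n=200 detects it most of the time - which is exactly why a pile of underpowered nulls is weak evidence against a well-designed positive.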
Re: (Score:3)
Actually, it is a bit worse than that. Let us assume that eating Twinkies has no effect whatsoever on toenail cancer (TNC) rates. Let us assume that 20 groups set out to measure the effect of Twinkies on TNC. Since "proof" levels are traditionally set at p = 0.05 levels of significance, it is quite possible that not one, but two groups will get "significant" results. One group proves that Twinkies cause TNC. The other that Twinkies prevent TNC. Both groups will try to publish. They will likely succeed
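A quick Monte Carlo sketch of that scenario (illustrative only: under the null the p-value is uniform on [0, 1], and a "significant" effect is assumed equally likely to point either way):

```python
import random

random.seed(0)

GROUPS = 20
ALPHA = 0.05
TRIALS = 10_000  # repetitions of the whole 20-group scenario

def twinkie_study():
    """One group's study of a nonexistent Twinkie/TNC link."""
    p = random.random()          # p-value is uniform under the null
    if p >= ALPHA:
        return None              # null result, off to the file drawer
    return random.choice(("causes", "prevents"))

hits = 0
both_directions = 0
for _ in range(TRIALS):
    findings = [twinkie_study() for _ in range(GROUPS)]
    sig = [f for f in findings if f is not None]
    hits += len(sig)
    if "causes" in sig and "prevents" in sig:
        both_directions += 1

print(f"avg significant results per 20 groups: {hits / TRIALS:.2f}")  # ~1.0
print(f"scenarios with contradictory 'proofs': {both_directions / TRIALS:.2f}")  # ~0.15
```

So with 20 groups and no real effect at all, you expect about one "significant" result, and roughly one scenario in seven produces the cause/prevent contradiction described above.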
Re:Feelings are more important than science (Score:5, Interesting)
Yup, that's the crux of the problem. While it may be true, as others say below, that publication bias against negative results occurs in all fields (such as physics) regardless of study funding, what we are seeing now is the influence of pharmaceutical industry funding in the clinical trials used for FDA approval of drugs (that is, a company funding the trial of its own drug).
Specifically, drug studies funded by pharmaceutical companies are four [newscientist.com] times [healio.com] more likely [latimes.com] to show a positive benefit than ones funded by neutral sources. This is a problem because nearly two-thirds of clinical trials used for FDA approval are now industry-funded.
Re:Feelings are more important than science (Score:5, Interesting)
This is a dilemma that really goes to the heart of the philosophy of government. If the majority is irrational, is it better to give them self-determination and accept that they will make frequent bad decisions, or to have the enlightened few rule them and impose better-informed decisions upon them?
Hint: there is no correct answer. I am not an historian but as far as I know this debate between a pure democracy and some form of republic goes back to Rome and Greece.
What is kind of weird is that the two major parties in America have developed into philosophies that are kind of opposite their names: the Democrats favor the paternalistic nanny state governed by the enlightened few (what I would call a "republic"), and the Republicans favor the ignorant mob ("democracy").
As an aside, when America was a young nation many of her leaders advocated public education as a way to narrow the gap between the elite and the general population. That does not seem to be working out real well, though.
Re:Feelings are more important than science (Score:5, Insightful)
Here's the thing, there are no enlightened few. There's just a few equally irrational people whose irrationality makes them think they are rational and all knowing.
Re:Feelings are more important than science (Score:4, Insightful)
Re: (Score:3)
Indeed. I was fortunate enough to go to a private (highly regarded) high school. One of my teachers (who was also the principal) liked to say: "The purpose of high school is to train students' bullshit detectors." Unfortunately, most teachers don't think that way; they just think their job is to cram a bunch of information down students' throats, which of course the students promptly forget the first day of summer vacation.
Obviously, there are sets of basic information everyone should know, but the most important s
Re: (Score:3)
It's no mistake that a large portion of students that go to college tend to be moderate to liberal. Many of the teachers and professors, whether knowingly or unknowingly, teach from a liberal standpoint and thus this is what children are taught.
I have to admit that my experience is similar. I remember simply mentioning my views on abortion on a bus during a field trip in college (I was NOT volunteering, and did try to dodge the question for the sake of everyone's sanity during the hour-long drive), and promptly spent the next 45 minutes defending my opinion, not only against the students (my peers) but against the professor, with his inherent and unavoidable position as THE authority figure present (and all of the psychological baggage that comes with it).
Re:Feelings are more important than science (Score:4, Insightful)
Right? Isn't that what American schools and TV have been teaching for the last 30 years? Nerds aren't cool - facts are open to interpretation - everyone is special - you can eat more than you grow... When you have a society rewarding irrationality, what do you expect? Rigorous science?
Considering that I'm done growing, if I didn't eat more than I grow, I'd die of starvation.
Re: (Score:3)
Right? Isn't that what American schools and TV have been teaching for the last 30 years? Nerds aren't cool - facts are open to interpretation - everyone is special - you can eat more than you grow... When you have a society rewarding irrationality, what do you expect? Rigorous science?
Considering that I'm done growing, if I didn't eat more than I grow, I'd die of starvation.
Interesting interpretation of what GP said. Not in context, but interesting. Obviously he meant eat more than you need to grow and sustain yourself, i.e., get fat -- and of course we can argue that getting fat is growing, just in the wrong direction; but again, that would miss his point.
Re:Feelings are more important than science (Score:4, Interesting)
Re: (Score:3, Insightful)
We still are, it's just the definition of "fit" has shifted since when we started creating the environment we live in.
Also, I personally don't miss the "good old times when we were all starving".
Re:Feelings are more important than science (Score:4, Interesting)
Just about every living creature creates the environment it lives in. Some in very obvious ways (ants, bees, beavers, moles...), some pretty subtle (grazing animals of the open plains tend to hinder the growth of shrubs and trees and thus keep the plain open). The same goes for plants, which change their immediate environment in ways that hinder competing species: shadowing the ground, changing water levels and the chemical properties of the soil, and fending off enemies.
Re:Feelings are more important than science (Score:4, Insightful)
And humans do? I must say that while a certain form of organisation is present in cities, only very few cities look like any global design was done in them at all. When it comes to organisation essentially done by individual humans: animals have plenty of "local" organisation like humans do... take the tunnel structure below molehills, for example. That is aside from huge projects like the Hoover Dam. But most of what humans build is much more like the massive organic changes that animals cause.
Besides, humans are not unique in having big infrastructure projects. Ants are the most common example of an animal that organises itself into building huge structures on cue. So do apes. Nothing quite on the scale of the Hoover Dam, of course, but the difference is one of scale, not of intent or organisation or even intelligence.
Re: (Score:3, Funny)
Obvious Complex System (Score:5, Interesting)
Positive Bias is another word for Group Think. I guess it could also mean deception [bishop-hill.net]
Re: (Score:3)
Positive Bias is another word for Group Think. I guess it could also mean deception [bishop-hill.net]
Replying so parent gets noticed despite a downmod 'Overrated' from score zero.
Please read the link as the behavior it exposes is highly relevant to the article.
These types of articles are moronic. (Score:5, Interesting)
There are "studies", and then there is observation, modelling, prediction, and model testing, which is this thing called science. "Studies" are bullshit. Scientific research functions as it should. I believe the OP's article is just a chunk of sensationalist BS, or utterly ignorant of what science is (and is not).
Re:These types of articles are moronic. (Score:4, Insightful)
There are "studies", and then there is observation, modelling, prediction, and model testing, which is this thing called science. "Studies" are bullshit. Scientific research functions as it should. I believe the OP's article is just a chunk of sensationalist BS, or utterly ignorant of what science is (and is not).
You forgot to mention that it is yet another piece of published work that suffers from positive bias...
Re:These types of articles are moronic. (Score:5, Interesting)
There are "studies", and then there is observation, modelling, prediction, and model testing, which is this thing called science. "Studies" are bullshit. Scientific research functions as it should. I believe the OP's article is just a chunk of sensationalist BS, or utterly ignorant of what science is (and is not).
That is not really what TFA is talking about. Daniel Sarewitz is re-phrasing a long-known problem with "studies," as you call them, which is that complex systems are--by definition--too complex to study as a whole. I am a physical scientist, which means that I typically make or measure something in a well-controlled experiment and then change variables in order to test a hypothesis. I can basically publish a paper that says "we tried really, really hard to find it, but it wasn't there." In the life sciences, they are trying to answer vague cause-effect questions like "does this drug affect a particular type of tumor more than a placebo." Thus researchers in those fields have to create models in which they can control variables. He gives the example of mouse models, which are obviously imperfect models for human physiology. How imperfect is the question. The creeping phenomenon that he is addressing is the tendency to relax the standards for what counts as positive evidence--and I'm grossly oversimplifying--by waving your hands around about how mouse models are imperfect, but that there is definitely "a statistically significant trend." The root cause is simply the ridiculous amount of pressure that life science researchers are under to publish, which requires results, because their methodology is standardized. Those poor bastards can spend eight years on a PhD project that goes nowhere or burn four years of their tenure clock figuring out that their experimental design was flawed. *Poof* no funding, no tenure, no degree, time to consider a new career. That sort of potential downside creates the sort of forced-optimism that TFA describes.
Re: (Score:3)
Both studies and science are affected by bias-inducing forces, such as the 'publish or perish' policies of institutions and grant availability from stakeholders with an economic interest in the results. Elsevier, et al., raise similar problems with their control of the knowledge-distribution pipelines.
About a hundred years ago traffic problems became so bad that government had to step in and legislate which side of the road drivers had to use, and who had right of way at intersections. It might be that simila
Re: (Score:3)
Minor correction - the quote actually comes from Rutherford:
"All science is either physics or stamp collecting".
He was subsequently awarded the Nobel prize for Chemistry (1908). Presumably by some perversely vengeful chemists.
Wait, what? (Score:2, Insightful)
'Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves,' he adds.
No, the corrosion of public trust is the incessant idiocy coming from Fox and other Murdoch properties exclaiming "oh those silly scientists got it wrong again!" when the story is about a refinement of a model or something.
Scientists are losing the credibility war because scientists are not PR flacks and are unable to counteract the "we don't h
Re: (Score:3, Insightful)
Oh yes, blame it all on Fox, the ultimate evil in the universe. It has nothing to do with studies being published making lavish claims which are then later proved false, or so wildly overblown that it's almost embarrassing. Of course, the conduct of the scientists themselves couldn't possibly be at fault and it must all be an evil Republican conspiracy.
Grow up, pull your head out of your ass, and realize that fox and republicans aren't the only source of evil in the world.
Re: (Score:2)
Not being an American, I don't know what FOX is putting out. But if they are pointing out that any arbitrary number of "X causes cancer", "Y prevents cancer" and "Z causes heart disease" studies are bullshit, then it is not merely their perfect legal right to do so, but they are also delivering an accurate description of such "research".
Just because somebody is a murderer and thus a bad man, doesn't mean he also picked your pocket, ran a red light and parked his car in front of your garage.
Re: (Score:3, Informative)
The trouble is that most such "X causes cancer" statements come from the media themselves, recklessly shortening research results saying "consumption of X correlates with a positive increase in cancer", where the nature of the correlation is unknown and the increase is very small. So it is all too often not an accurate description of the research. In particular, there can often be a Chinese-whispers effect, where the researcher publishes a paper, the University PR department publishes a precis edited for PR pu
Re:Wait, what? (Score:4, Insightful)
This is a story about actual bias in scientists which is affecting the quality of their research.
And you completely gloss over that to take issue with Fox, which is no more or less biased than msnbc, et al...
Look, people seek an echo chamber. "News" companies of all types just supply the demand.
I suppose YOU don't see a problem with some news organizations taking biased scientific output and unquestioningly running with it as though it were the concrete truth for ever more.
Re:Wait, what? (Score:5, Insightful)
While I agree that models are frequently refined, leading to new results, there is a disturbing trend I see, not having to do with positive bias necessarily, but with uncertainty estimation.
One thing that I've found incredibly hard to beat into undergrads taking my physics lab courses is that getting your uncertainties (or error bars) right is far more important than getting the right central value. This is because uncertainties are the only way that two experiments can be compared against each other, or the only way to compare experiment to theory. If I have two models of climate change, one of which predicts a temperature rise of 3 C ± 5% and another that predicts 4 C ± 7%, those results are in large disagreement, whereas two studies that predict 20 C ± 15% and 40 C ± 35% are in much closer agreement.
But it seems that, much more frequently, especially in fields like astronomy, too little thought goes into the systematic uncertainties, and you'll get 4 experiments measuring the same thing with results that cannot be reconciled if you take their statistics at face value. This was a huge problem with many of the early global warming predictions as well; every year a new estimate would come out that was completely incompatible with the previous one. Yes, these models are insanely complicated, and it's damn hard to understand all the systematics. And of course you can't put in error bars for plain old mistakes. But do it too many times, and people begin to lose any faith that your estimates can be relied on for anything.
This is the problem I see; not necessarily bias toward a positive result, but a bias toward underestimating the uncertainty of your measurement, which I suppose could be different sides of the same coin. (E.g., a result of 2 ± 0.1 is a positive result; a result of 2 ± 5 is not!).
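The comparison in the parent comment can be made concrete in a few lines - a sketch that combines the two quoted uncertainties in quadrature and measures the separation in units of that combined error:

```python
from math import sqrt

def discrepancy_sigma(x1, s1, x2, s2):
    """Separation of two measurements in units of combined uncertainty."""
    return abs(x1 - x2) / sqrt(s1**2 + s2**2)

# Precise-looking pair: 3 C +/- 5% vs 4 C +/- 7%
tight = discrepancy_sigma(3.0, 0.05 * 3.0, 4.0, 0.07 * 4.0)

# Sloppy-looking pair: 20 C +/- 15% vs 40 C +/- 35%
loose = discrepancy_sigma(20.0, 0.15 * 20.0, 40.0, 0.35 * 40.0)

print(f"tight pair: {tight:.1f} sigma apart")  # ~3.1 sigma -> real tension
print(f"loose pair: {loose:.1f} sigma apart")  # ~1.4 sigma -> compatible
```

Despite the smaller error bars, the first pair is in genuine tension (over 3 sigma apart), while the second pair, for all its sloppiness, is statistically compatible - which is exactly the parent's point about error bars mattering more than central values.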
Re: (Score:2)
Re:Wait, what? (Score:5, Insightful)
angry much?
Damn right scientists are angry. The pervasive anti-intellectualism and well-funded attacks on science to suit political ends are getting ridiculous. Positive bias in peer review is a problem that needs to be addressed, but it is all part of science - studies that cannot be replicated are examined in detail and their models either rejected or refined to be tested again. Just because something is published does not make it "the truth" - it means it is a set of conclusions based on experiments that were done by a particular team of scientists. If it's not repeatable then future publications will say so - that's how the back and forth and refining of models happens. Things don't get held back until the issue is "settled" then get published, it simply doesn't work like that.
Where things start to fall down (and where the anti-science folk with an agenda have such an easy time) is that it can be hard for the layman to sort out what to trust in a scientific publication, since the well-hammered, repeated-by-many-groups stuff is sitting alongside single publications claiming XYZ based on PQR from a single data set in a single research group. It's so easy to wade in there and say "look at this! see! scientists can't agree! it's all a big con to get more money! they'll say anything to further their agenda!".
Again, I'm not dismissing the problems of positive bias - it's a factor of peer review and the way human beings approach the reporting of science, but the job is made much harder by a barely-science-literate blogosphere working full steam to discredit anything they can to make it look like there's some sort of global conspiracy of scientists working against the general public in whatever field the propaganda machines are working against. Climate change is clearly the biggest one at the moment, but it's not restricted to that. There's almost anything related to things that challenge fossil fuels and energy research towards energy independence, then there's anything in the biology field relating to stem cells, vaccinations, pharmaceuticals, etc.
The rise of public opinion that science is somehow something to be regarded with great suspicion and that scientists are actively working against the public good is not only troubling, it's highly counterproductive. It's time we started getting angry about it, it's just almost impossible to fight back - scientists are not equipped to do so against a highly organised and well funded group of individuals who do that sort of character assassination and propaganda for a living.
troubles doing proper science (Score:3)
There's nothing wrong with science itself. But there are many barriers against doing science correctly. I can't say for sure if the barriers have gotten worse in recent decades. But I think yes, it has become harder to do good science. Why?
The Space Race and Bobby Fischer's day in the sun are long gone. They were good contests, and well-liked, with very positive outcomes, especially when compared to the alternative of nuclear war. In a way, it makes up for science having discovered The Bomb. It was
Re:Wait, what? (Score:4, Insightful)
I would suggest it all started with Al Gore and Global Warming
You would be wrong and not only that, but without a sense of history at all. I can just point out Senator William Proxmire's (D, btw) "golden fleece award" given to the Aspen Movie Map with regards to "hurr, we don't understand the tech so we must be getting scammed" to Sarah Palin's bitching about silly "fruit fly experiments...hurr, we don't need those" - which comes from a long line of idiots decrying basic science, because their minds are *that* blinkered.
Never mind that the fruit fly experiments are all about genetics because fruit flies have lifespans of ... fruit flies so it makes heritability easier to study. Never mind the fact that the Aspen Movie Map was groundbreaking and you can now look back at it and say "Gee, I wonder where Google got their idea for Street View." But no, you don't hear about that. You hear on Fox that fruit fly studies are just wasting taxpayer money. Because Sarah Palin said so. Because she's such an expert in genetics. *spit*
Then the whole Climategate thing that showed the primary movers of AGW to be complete jerks and thoroughly unlikeable people.
So? It was found to be nothing more than egos. The science itself wasn't discredited, and has only strengthened since then. And I have voiced my opinions here about how badly I thought it was being handled, and my own strong skepticism about AGW, in previous years. But you know what, I have recently (as in the past couple of years) found the anti-AGW crowd to be increasingly full of *real* integrity problems and conflicts of interest.
Back to basic science:
Here's a clue for you and your buddies: You don't get applied science and engineering and fancy new products without basic science.
--
BMO
Re:Wait, what? (Score:5, Insightful)
And there's jerks like you who claim that anyone who dares look askance at your work is "anti-intellectual" or is too stupid to sort out what to trust in a scientific publication (are you saying that scientific publications, Nature et al., have untrustworthy material in them?)
Ah, classic twisting of my words. I'm talking about anti-intellectualism as a movement. Those doing it are extremely smart - that's what makes them good at it. They're good at duping people into following their cause. I am not personally calling individual people stupid unless they actually demonstrate that they are.
And yes, I'm absolutely saying that scientific publications contain untrustworthy information, including the big hitters like Science and Nature - their size and prestige is no assurance of infallibility, and in fact can work against them since people are often reluctant to question them. That's the nature of scientific publishing - until results have been replicated, single-source experiments and models need to be looked at with extreme skepticism.
Last year I performed some work that disproved a piece of published literature (in a non-controversial area of chemistry). I didn't set out to disprove it - I set out to see if I could replicate the results and I determined that the published paper was incorrect. My own conclusions, method and data set were published in response, with some discussion on why the previous paper was drawing incorrect conclusions (mainly an issue with experimental control). My situation is one that is repeated constantly - it's how science works. The stuff that can't be replicated is corrected, the stuff that is replicated becomes more solid.
It's not the layman's fault that they don't understand some of the intricacies of how peer review and scientific publishing and research works. They're not stupid for not getting that, in the same way that I'm not stupid for not understanding the first thing about programming - it's simply not my area of expertise. Where the stupidity *does* arise, however, is when people start to distrust scientists out of hand because they're being told to do so by certain media outlets or special interests. It happened with vaccinations due to a corrupt doctor manipulating a very weak study with the ultimate aim of pushing a competing vaccine made by a company that paid him off, but it backfired spectacularly - far from making the competing vaccine popular, people rejected vaccination entirely against their own interests, putting their own and everyone else's kids at more risk. It's this sort of media frenzy and associated public panic that breeds distrust of science (even now, people refuse to believe scientists on the issue, despite the original study being totally debunked, Wakefield himself being struck off the medical register and the whole thing exposed as a sham).
That's the sort of thing I'm talking about here. We saw it with the MMR vaccine, we see it with nuclear power, we see it with stem cell research, we see it with GM foods (and there *are* some legitimate issues to be raised there, being drowned out by typical media hysteria), we see it with climate science - again, there are legitimate issues to be raised and discussed on a topic that is *gigantic* in scope in the scientific community, but it's being drowned in so much media hysteria and political propaganda that it's almost impossible to get anything done.
Of course, equating people who don't drink your Kool Aid to those who deny the Holocaust really helps your cause to be seen as our nights in shining armor.
Where did I say that? You're dangerously close to Godwining the thread by trying to imply that I brought that up when I did no such thing. The hyperbole serves no one, it only makes your arguments look weak.
I'm not looking to be anyone's "night [sic] in shining armor", nor are most scientists. We just work on the science in our field and go where that leads us. If we wanted to be knights rescuing people I'd have joined the fire service or something. I became a scientist to ultimately help mankind and further our collective knowledge, but I'm no superhero or white knight.
Re:Wait, what? (Score:5, Informative)
"Scientists" are not forcing anyone to do anything - that's what politicians do. We just do the science. Now, if we end up discovering, for example, that CFCs are damaging the ozone layer, or that PbEt4 in gasoline is poisoning the air then we bring that to the attention of people who can make a decision on what to do about that. We can certainly offer suggestions of what to do about it - stop using CFCs (and develop alternatives) and stop using PbEt4 in gasoline (and work on alternatives) etc, but ultimately sometimes changes have to be made that people aren't going to like.
We don't enjoy "telling people what to do" or set about to discover things that will allow us to dictate to people what to do - ideally we want to find solutions to things that have a minimal impact on people's lives.
We're not in this to control people - we just do science. We don't want to "force" people to do anything, but sometimes what we discover necessitates change in order to benefit everyone as a whole (like removing lead from gasoline, or switching to non-CFC propellants and refrigerants). That's just life I'm afraid.
Re: (Score:3)
First of all, scientists aren't forcing anyone. They do not have that power. Governments do. Beyond that, history indicates that people do not make the right choices all on their own, and quite often will give in to selfishness and shortsightedness. Look at the North Atlantic cod fishery. For decades scientists warned that it was going to get wiped out, but market forces and the short-term self interest of the fishermen won out, until, that is, the population collapsed due to centuries of overfishing. Leavi
Re: (Score:3)
1. http://dictionary.reference.com/browse/scientific+method [reference.com]
2. Reproducibility means exactly that - if you follow the method set out in the paper can you reproduce the results that the paper claims? There's no "system" for testing reproducibility because there doesn't need to be one - you simply do what the authors of the paper did and either it works or it doesn't. If you haven't been given enough information to reproduce the paper's results then the paper is invalid and needs to be corrected.
3. Studies are
Re: (Score:3)
"This is absolutely, unalterably correct and anybody entertaining even reasonable skepticism is an IDIOT"
Nobody has a problem with reasonable skepticism. It's unreasonable skepticism, such as that you often hear in the far-right wing media, that people are getting sick of.
The anti-vaccination assholes, and the people that just refuse to believe that mankind is having an effect on the climate, are two good examples of where you see a lot of unreasonable skepticism, but there are plenty others...
I knew the far-right had officially gone full retarded when I had someone tell me climate change was a myth because A
Re:Wait, what? (Score:5, Informative)
You should thank God that FoxNews exists
Fox is the only "news" organization, ever, that went to court in the US to prove that they don't have to report news.
You are a complete and utter moron if you give whatever they say any credibility at all. This is not ad hominem. This is their own claim that whatever they report does not have to be based on any facts at all. None. Zero. Nada. They can make shit up out of whole cloth if they want. They have a Ruling saying they can.
--
BMO
How many news organizations report false stories? Here's a hint; ALL OF THEM. It happens. We are human. It's USUALLY a mistake.
The difference is when FoxNews makes a mistake, they are sued by someone like yourself, only with extra money lying around, trying to silence the opposition and run FoxNews out of business. Did anyone sue to strip the license of NBC when they released a modified 911 call in the Trayvon Martin shooting? Did CBSNews have to go to court to defend their right to report stories that turn out to be false after that blatantly false Bush National Guard story?
Why is FoxNews different?
Also, the court case you are referring to was not about FoxNews, but a local Fox affiliate. The local affiliate was sued by a former reporter who didn't like the fact that they were made to include Monsanto's side of the story, which they claimed to be false. The part you are referring to where the court said the news organizations were allowed to lie did not come from FoxNews or the court, but a group called Project Censored, which from what I can tell is an anti-government, anti-media, anti-corporation organization. Your kinda people.
Re: (Score:3)
Are you? Monsanto objected to the story, and the reporters were fired. Fox successfully argued in court that it had no responsibility to tell the truth. The parent is correct, and yo
Re: (Score:3)
While the rest of your post strikes me as rather crazy, the last sentence leads to a decent "case study" on the misuse of science by fundamentalists. Research is a theme in the anti-gay movement. Both the Family Research Council (FRC) [a leading anti-gay group] and the National Association for Research and Therapy of Homosexuality (NARTH) [a leading ex-gay group] have it in their names. The anti-gay people who reference studies usually focus on health and other social problems. Topics include higher inciden
Re:Wait, what? (Score:4, Insightful)
Questioning is one thing, but wholesale declarations that the large majority of climatologists are liars or fools is another. What's more it's an old tactic developed by the Creationists in their attacks on evolution; declarations of a cabal of biologists out to hide the truth blah blah blah.
You act as if consensus is a bad thing. Do you think evolution is false because the overwhelming majority of biologists accept it? Do you think General Relativity and Quantum Mechanics are wrong because the overwhelming majority of physicists accept them? I'll wager you don't, but because AGW says "We keep vomiting vast amounts of CO2 into the atmosphere we're going to screw things up royally", which interferes with your short-term self interest, well, it must be false.
data point (Score:5, Insightful)
I received my PhD in physics; my thesis was measuring a number, and I measured zero within the error bar. Not particularly interesting, but valid science. My wife was in a PhD program in biology. She also did valid science with a novel measurement technique, came up with an uninteresting result, therefore was not able to publish, and therefore was unable to graduate. It would have been extremely simple to fudge the result to a 2-3 sigma result 'hinting' at an interesting answer, which would have gotten published. I think certain sciences have gotten to a point where they have forgotten that if you do valid work in a novel way, then that is science and you should not be punished for the conclusion of the measurement. Most measurements you do of the natural world should probably end up being unsurprising, and thus uninteresting, but you don't graduate or get tenure with those kinds of results. I think this is the mechanism for the positive bias. That is why I do not take results from certain branches of science at face value.
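The parent's point (a null result is still a valid measurement) can be sketched numerically. This is a toy simulation with invented numbers, not the poster's actual data:

```python
import random
import statistics

random.seed(1)

# Toy example: 100 noisy measurements of a quantity whose true value is zero.
data = [random.gauss(0.0, 1.0) for _ in range(100)]

mean = statistics.fmean(data)
sem = statistics.stdev(data) / len(data) ** 0.5  # standard error of the mean
z = mean / sem  # how many "sigma" the result sits away from zero

print(f"mean = {mean:+.3f}, SEM = {sem:.3f}, z = {z:+.2f}")
# A |z| comfortably under 2 means "zero within the error bar": an
# uninteresting but perfectly valid result. Cherry-picking a lucky subset
# of the data is exactly how a fake 2-3 sigma "hint" gets manufactured.
```

The fudging the poster describes amounts to re-running or sub-selecting until z happens to land near 2-3, which is why it is so hard to catch after the fact.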
Re:data point (Score:5, Insightful)
Re:data point (Score:4, Interesting)
Which is exactly what we need to do. We need an entire independent arm of the sciences dedicated to confirming established results. This is something that needs to occur separate from the pressure to come up with novel results to please grant reviewers.
Re: (Score:3)
Re: (Score:2)
No one remembers the 100s of ways Edison found not to make a lightbulb - even though they were each as important as his one successful attempt.
Re: (Score:2, Insightful)
Original commenter here:
It shouldn't be in the highest tier journals, but should be able to be published somewhere so people can find the technique and that someone had done the measurement, with result x. And if the idea and methodology is sound, it shouldn't prevent someone from getting a PhD. But it does. And it filters out a lot of people who would choose to publish results that do not further their own career, as well as filtering in people willing to do things like playing with how they cut their d
Re: (Score:3, Insightful)
Re: (Score:2)
The science should be of a publishable quality, but that doesn't always mean the result is publishable. AC's wife got screwed.
Re:data point (Score:4, Interesting)
I spent some time in the Chemistry department at a fancy Ivy League school that seemed designed to crush the spirits of biologists. While we were off drinking beer and playing volleyball against the Physics department, they were toiling night and day on multi-year projects that may or may not generate publishable results. The sick thing is that the outcome was totally decoupled from the abilities of the researchers--it was more like a test of stamina and luck. I saw talented, brilliant people turn into bitter husks. Some gave up on research and wound up teaching at private colleges, others left science completely--and not into fields that used any of their expertise. It seems like a lottery where those that get lucky (or stab the most backs) go on to academic positions at top-tens and the rest end up rocking back and forth in the corner mumbling to themselves all day.
Of course it is a problem (Score:5, Insightful)
The solution lies in a reformation of research finance that is not focussed on how many papers X published compared to Y, but also takes into account whether they are consequential or not and whether they actually comply with at least basic scientific attributes such as repeatability, verifiability, falsifiability, accessibility of all data and all conducted research, as well as actual verification of research by independent third parties.
There should also be an outright condemnation of data mining, where databases are checked only for the existence of attributes and correlations that happen to affirm the researcher's opinion and leave all others untouched.
Fields like economics, medicine and climate have long since deteriorated to mere cargo cults due to those failings.
Re: (Score:2)
So would you have scientists publish fewer original research papers and more papers that attempt to reaffirm or disaffirm research that's been published by their peers? I'd be surprised if there isn't *some* resource that links research publications with a list of secondary papers that "Support the Same Conclusion" and "Refute the Conclusion". Given that scientists are looking to publish (churn out) a lot of papers... it seems low-hanging fruit would be studying and trying to repeat the Research/Conclusio
Re: (Score:3)
Even if there were such lists, they are not what research funding is based on.
What funding is based on is which journal published your article and how many citations it received in other papers - whether the author citing your paper has actually so much as read (much less made use of) your paper or not. Just being popular with peers helps, because you can always find some excuse or other to cite a paper even if it is semi-relevant at best.
Re: (Score:2)
Data mining is indeed a very mediocre scientific activity. Correlation by itself means nothing at all. If you want to prove something the correlation should be 100% and you should be able to explain why the correlation exists and replicate it in controlled experiments. The problem is that those slam-dunk scientific discoveries are all or mostly already found. And nowadays the poor scientists need to find something to bolster their path to glory.
Good science could be: find a correlation and prove the causali
Re:Of course it is a problem (Score:4, Interesting)
"also takes into account whether they are consequential or not"
That makes the problem worse. We need to evaluate papers based on whether they are good science or not, and not publish the ones that are bad science. Currently the "negative results" which are actually inconclusive results, are not published because they are, well, inconclusive. Unfortunately a lot of the "positive" results are also inconclusive, but they ARE published. The solution is not to publish more "negative" results, it's to stop publishing the flawed "positive" ones.
Re: (Score:2)
I'm one of those guys who is fed up with "studies" that "indicate" that X prevents Y just after another study showed that X causes Y. I'm one of those guys who reads the news and knows that a new pill is usually tested for effectiveness not against accepted treatments but merely against placebos.
Vaccination is well established and has a firm scientific grounding that can be replicated any number of ways, be it epidemiology, in-vivo or in-vitro experiments
Re: (Score:2)
Re: (Score:3, Interesting)
And ignores the fact that once published, there's no reliable corrective mechanism to propagate those results down beyond a standard literature search. I'm posting as AC because quite a few years ago I published results that I believed at the time to be correct, but were shown to be wrong in a subsequent paper. Despite this, I'm *still* being cited in new papers while the paper that refuted mine is seldom cited. Science isn't some infallible field. We make mistakes; Sometimes those mistakes are accidental,
Re: (Score:3, Funny)
Crap. I only thought I was posting as AC. Doh! :)
Replication is king... (Score:3)
Trust is the least of the problems (Score:2)
It also erodes science.
Positive bias is the wrong term. (Score:5, Informative)
They're overstating precision, which, rather than being a forgivable error, is an elementary mistake no trained scientist should ever make.
A VERY basic concept they teach at the lowest level of science education is the distinction between accuracy and precision. This is science 101.
Accuracy is whether or not a given conclusion is correct.
Precision is the degree of specificity.
Typically you run into problems on complex subjects because researchers overstate the precision of their data or their ability to analyze the data.
This can boil down to simple things like significant digits.
For example, I'm measuring volume to two significant digits in a giant data set with thousands of measurements. When and if I average those numbers, the final average can't have more than two significant digits. That sounds elementary, but you see this error made in some big studies. You'll have a situation where something is being measured in a crude sense by many sources, and then in the analysis a much higher degree of specificity is implied.
Often that degree of specificity is required to make certain conclusions which is why they break the rule. This is lazy and a breach of scientific ethics. What they need to do is collect the data all over again this time to the level of specificity they need.
Simply saying it's too hard to collect the data properly so they're going to make assumptions is not reasonable or ethical. I suppose you could do it so long as you kept an asterisk next to the data and the findings to make it very clear throughout that the conclusion is a guess and not in any way empirical science, since at some point people were guesstimating results.
Re: (Score:3)
Re: (Score:3)
Well, it's funny when they do that with proxy data. They use samples of biomass in sediment from 10,000 years ago and pretend to derive a .01 temperature difference. You can't get that from pine cones, ice cores, and fossilized vegetation.
The real problem with all this stuff is that the data was never collected for this purpose. The standards and precision were for crop forecasts and whether or not people should go to the beach. It was never collected to be compared to a tenth of a degree across continents
Re:Positive bias is the wrong term. (Score:5, Informative)
"When and if I average those numbers the final average can't have more than two significant digits."
Yes, the average can have more precision than the individual measurements. That's actually kind of the point of an average. It can't improve accuracy though, for the most common definitions of accuracy.
Your definitions of accuracy and precision are sort of right, but also sort of misleading. And the way you use specificity is incorrect. But as you correctly point out, a lot of working scientists are a bit fuzzy on all these concepts too.
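The correction above (averaging improves precision but not accuracy) is easy to demonstrate. A minimal sketch with invented numbers, including a made-up systematic bias that no amount of averaging can remove:

```python
import random
import statistics

random.seed(0)

true_value = 3.7   # hypothetical quantity being measured
noise = 0.5        # each individual reading is only good to about +/-0.5
bias = 0.2         # a systematic offset that averaging can never remove

for n in (10, 1000, 100000):
    readings = [random.gauss(true_value + bias, noise) for _ in range(n)]
    avg = statistics.fmean(readings)
    sem = noise / n ** 0.5  # random error of the mean shrinks as 1/sqrt(n)
    print(f"n={n:6d}  average={avg:.4f}  random error ~ {sem:.4f}")

# The average converges on 3.9 (true value + bias) with ever-smaller
# scatter: precision improves with n, but the 0.2 inaccuracy remains.
```

This is the standard-error-of-the-mean argument: with n independent readings the random error of the average falls as 1/sqrt(n), which is exactly why averaging crude measurements can legitimately yield a more precise (though not more accurate) number.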
No secret in academia (Score:2)
I used to be in academia, and it was no secret that researchers almost always found the results that they had planned to find from the beginning in their studies. I don't think I ever once saw a case where a researcher started out with a hypothesis, found it was completely wrong through the research, and then let it go. There are a million ways to cook the numbers to reach the conclusion that you want to (and the one that gets you the grant money and publication). If a certain position is popular (and well
Apophenia (Score:4, Insightful)
When you have a hammer, everything looks like a nail.
Anything can become a religion, as a result. We're less critical of our data when that happens, and we "nudge" it into place.
The problem is not "science" per se but our social approach to it.
Wrong Angle (Score:2)
I would say that the bias is not so much positive as it is "get noticed and get more grant money" biased.
Unfortunately, you get noticed more and get more grant money if you find what you were looking for instead of disproving your initial hypothesis.
Also this article seems to imply that the problem that needs to be fixed is that of public trust, while I would argue that the public should distrust a community that gets it wrong so often (it is the skeptical, scientific thing to do). Unfortunately, far too ma
Science is about money now (Score:2)
Positive bias in engineering research (Score:3, Insightful)
In engineering research, there is definitely a positive bias; in fact, negative results are rarely published at all. This is both because negative results have less sex appeal than positive results and because peer reviewers are trained to outright reject publications without positive results. Although there is huge pressure to publish positive results, I'm not aware of systemic fraud in the literature. What does happen, however, is roughly this: 1) researcher gets great idea. 2) researcher tries idea. 3) idea fails to produce state-of-the-art results. 4) researcher adds hacks and kludges to marginally improve performance. 5) repeat steps 2-4. So, what you get in the end are journals filled with "positive results" that mean nothing and a bunch of "scientists" who make a living doing things that do not really resemble science at all.
Re: (Score:3)
I bet what you think are negative results are actually inconclusive results. LOTS of people, including many of the ones writing papers about positive publication bias, make that mistake. An insignificant p-value is NOT a negative result. It's an inconclusive one. In order to actually show a negative result you have to do more work, and delve into the (usually very simple) stats that few people know. Most studies don't have the power to show actual negative results, but in my experience if you actually
R&D (Score:2)
I'm not sure if it's just me, but I've got the feeling that these days, most useful and ground breaking new research and technology comes from companies, not universities.
What do you think?
Re: (Score:3)
It's just you. Corporate R&D produces products. Sometimes incremental engineering improvements. Not science.
Corporations USED to do some decent science, even more or less basic science. But not anymore.
Re: (Score:3)
I think that's a classic keyhole problem.
Private companies have a shit ton of marketing and PR.
90% of all projects in large companies fail. 10% succeed. The marketing never mentions the failure.
In the government, 90% of all projects are successful, 10% fail. Because the PR (news media) only focuses on the negative, the successes don't get talked about.
This is the same thing.
Publication bias (Score:2)
So what do you expect academics to do? Reinvestigating a previously reported research result is unlikely to get funded, it's unlikely to be published even if it is researched. They live and die by publications.
Unless and until the publication system makes it possible for aca
Obligatory XKCD (Score:3)
It happens in Slashdot too... (Score:5, Interesting)
I remember a few days ago someone submitted a story about piracy of "The Avengers" being low compared to its potential profits. A few high-ranked comments were like "This is yet another proof that [insert common /. parlance here]". I saw very few comments that stated the most plausible reason: a camcorded action film, with crappy audio and a shaking image, can't compete against the real thing. I thought the same thing: confirmation bias.
People do it all the time. If something can somehow support their views (especially if they don't RTFA) they'll use it as yet more confirmation. "I still don't get why this piece of evidence is discarded by everyone else! They must be delusional or have bad intentions". For example, I imagine this article will be used as evidence for: lack of funding, falling standards in the US, the demise of education, lack of scientific reasoning (maybe they'll even extend it to scientists themselves), and other common /. utterances. I wonder how many of them will actually say what I found out after RTFA...
So, everyone is playing the same game, and scientists are no exception. But hey, that study has numbers on it. At least you can try to replicate the findings, if only the entry barrier wasn't so high: these tests are *hugely* expensive. More collaboration may be a good idea. Shared laurels are better than none, right?
P.S., a nice article on confirmation bias (and other goodies) here [youarenotsosmart.com].
The myth of positive bias (Score:3)
Okay, it may not be exactly a myth. We can't tell. I strongly suspect there is actually a negative publication bias.
What most people think are "negative" results are actually inconclusive. A non-significant p-value is NOT a negative result. That misunderstanding is very widespread, and leads to lots of high-level mistakes. Of the neuroscience papers published in top journals (including Nature) over the last two years that could have made a mistake based on that fallacy, half did. And neuroscience didn't seem to be particularly worse than most other fields.
A non-significant p-value is just that - not significant. Inconclusive. Getting an actual negative result is considerably more work than getting a positive one. You need to figure out what the minimum effect size you're interested in is (you should do that for positive results too, but almost everyone just uses zero for that) and show that your confidence intervals do not include it. As Ioannidis points out, you really should consider the power of your study as well (also for positive results), and take a stab at estimating the priors too.
If you go and do all that, and also do a quality experiment, in my experience you actually have a pretty good chance of getting published, because a) it's clear to the reviewers you've done a really thorough job (any idiot can run some data through a t-test and get a p-value; negative results are harder), and b) to show a negative result your study is probably much higher powered than a positive-result one, meaning an impressively big p-value.
The problem is not that there's a positive publication bias, it's that most scientists don't know how to show negative results so there are very few negative papers around.
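The commenter's recipe (pick a minimum effect size of interest up front, then check that the confidence interval excludes it) can be sketched in a few lines. The numbers here are simulated for illustration, not from any real study:

```python
import random
import statistics

random.seed(42)

# Smallest effect anyone would care about, chosen before looking at the
# data. The threshold 0.5 is invented purely for this sketch.
MIN_EFFECT = 0.5

# Simulate a well-powered study in which the true effect is actually zero.
n = 400
diffs = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = statistics.fmean(diffs)
sem = statistics.stdev(diffs) / n ** 0.5
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem  # approximate 95% CI

if -MIN_EFFECT < lo and hi < MIN_EFFECT:
    print(f"95% CI [{lo:+.3f}, {hi:+.3f}] excludes +/-{MIN_EFFECT}: a real negative result")
else:
    print(f"95% CI [{lo:+.3f}, {hi:+.3f}] is merely inconclusive")
```

This is essentially the logic behind equivalence testing: a study with only 20 subjects would produce a CI far wider than +/-0.5 and could only ever be inconclusive, which is why showing a genuine negative demands the higher-powered design the commenter describes.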
it's a problem, here's how to fix it (Score:3)
I'm a scientist. It's a big problem.
Here's how you fix it:
The metric for success for both researchers and their government funding sources is published papers. It's hard to change the cultural view of the scientific community, but it's easy to change government metrics. Paper publishing as a metric is easy to track between programs, but has had a terrible impact on scientific culture. It's also led to a large bias in how the government decides what areas to fund. If your metric is paper publishing and you're looking at energy issues, do you fund a sub-field with a historic high paper publication rate, a moderate paper publication rate or a low publication rate? It's fine for us here to say we'd fund the best research, but a government program manager may lose his job for picking a field with the lower publishing rate.
Other metrics such as how many other researchers use some results or whether a practical implementation of some new technique is developed will be harder to judge and take a longer time to evaluate, but would at least give us an honest assessment of the quality of government funded research. Tie future funding to what our broader society is looking for out of science, and eventually the scientific culture will follow.
Don't conflate medical research with all science (Score:3)
Let's all try to be careful, here.
Medical research was a little late to the science party, and they still have serious issues to work out. Some of these problems can't be helped (low sample sizes lead to higher p-values, but higher sample sizes place more human beings at risk), others can (not being skeptical enough of research conducted by organizations with a financial stake in the outcomes).
Most of these problems are peculiar to medical research and should not be conflated with all of science. While we should hold the medical research community's feet to the fire over this, the reason we are hearing so much about this is that they themselves are talking about it and they themselves are conducting the meta-research that is producing all of these articles. Other branches of science have looked down on medical research for a long time, but at least they are now getting serious about addressing the problems.
The problem with "landmark" studies (Score:3)
Something that most scientists know, but that is not widely appreciated by the public, is that "landmark" studies are particularly subject to positive bias. For a study to be acclaimed as a "landmark," it must be the first report of a novel phenomenon, which by definition means that it has not yet been confirmed by other investigators, and moreover that it is in some sense unexpected--which means that there is not even much "indirect" evidence from other sources that it is correct. It is also likely to be submitted to one of the high-profile "newsy" journals, like Science or Nature, where submissions are not evaluated solely on the basis of whether the science is high quality, but are first screened (and usually by an editor, before it even gets specialist peer reviewers) on the basis of whether it is of "broad interest." Because these journals are widely read and cited, they have high impact factors [wikipedia.org], so even getting published in one of these journals is a feather in a researcher's cap. In contrast, a publication in a middle-rank journal does not give a major immediate boost to a scientist's career, but may have a long-term benefit--but only if it turns out to be correct. Being "scooped" (by having somebody else publish research leading to the same conclusion) will not prevent you from being published in a middle-rank journal, but it will knock you out of the newsy journals. So the alluring prospect of publication in a newsy journal may lead a scientist to be a bit less cautious than normal, and the risk of being "scooped" means that he/she may not take usual precautions (like seeing if somebody else in the lab can reproduce the result independently).
High profile journals are stringently reviewed, but peer reviewers even in these highly-regarded journals necessarily take the authors at their word, unless there are blatant errors--nobody visits the authors' labs to look over the shoulders of the researchers or to verify that negative results have not been discarded. On the other hand, if you are planning to publish a result in a middle-rank journal, you are more likely to take the time to make sure that your conclusions will hold up over the long haul.
Working scientists know this, so they take these "landmark" studies with a grain of salt until they are independently confirmed. But the public can easily confuse "high profile" with "most reliable."
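The selection effect the parent describes can be illustrated with a small simulation (a hypothetical sketch of my own, not anything from the comment or the Nature piece): if only results that clear a significance threshold get reported, the reported effect sizes systematically overestimate the true effect, even when every individual study is honest. All names and numbers below (TRUE_EFFECT, SE, Z_CRIT) are made-up illustrative parameters.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2    # small but real underlying effect
SE = 0.5             # standard error of each study's estimate
N_STUDIES = 10_000   # many independent labs measuring the same thing
Z_CRIT = 1.96        # roughly the two-sided p < 0.05 threshold

# Each study reports the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Only "significant" positive results (z = estimate / SE above the
# threshold) make it into print; the rest sit in the file drawer.
published = [e for e in estimates if e / SE > Z_CRIT]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean of all studies:     {statistics.mean(estimates):.3f}")
print(f"mean of published ones:  {statistics.mean(published):.3f}")
print(f"fraction published:      {len(published) / N_STUDIES:.1%}")
```

The mean across all simulated studies lands near the true effect, while the mean of the "published" subset is several times larger, which is exactly why a first, headline-grabbing report tends to shrink on replication.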
Re:Wow! I guess Science HAS become a religion (Score:5, Insightful)
Practice science, not demagoguery.
Re: (Score:3)
I've not seen it put more concisely than that yet - well done.
In relation to your second line: the practice of science is more engineering than science -
In theory there is no difference between theory and practice, but in practice there is.
Re:Wow! I guess Science HAS become a religion (Score:5, Interesting)
Saying 'science is a method' is like saying 'Christ was a Jew'. True, but it doesn't change what happened.
Science was an idea designed to seek empirical truth. To find things in such a way that those who followed after could find them again. Then people got a hold of it and started using it as a means to control one another.
Christ (even from the atheist point of view, so bear with me) had a simple message of love being service to your fellow man. Then people got a hold of it and we get monstrosities like the Crusades.
That's where the 'HAS become' part of the above phrase kicks in...
Re:Its also about non-scientists expectations (Score:4, Insightful)
Sorry, but you are asking for something that can be called the "everyone should be at the top" problem. Yes, it would be nice if everyone could understand science, scientific methods and what constitutes the difference between one result and a confirmed, peer-reviewed body of research.
It isn't going to happen that way.
What we need is to stop publishing unconfirmed, preliminary results in popular media. That by itself solves 90% of the problem. So who gets to decide when something is ready for 60 Minutes, Fox and Friends in the Morning or the Today Show? I don't know, but I know it isn't the news media and it should not be the original team reporting some unconfirmed results. Just a rule that says nothing gets copied out of scientific journals until it is marked as having been confirmed by at least one independent team would go a long, long way.