Registered Clinical Trials Make Positive Findings Vanish
schwit1 writes: The requirement that medical researchers register in detail the methods they intend to use in their clinical trials, both to record their data and to document their outcomes, caused a significant drop in trials producing positive results. From Nature: "The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research. Writing on his NeuroLogica Blog, neurologist Steven Novella of Yale University in New Haven, Connecticut, called the study "encouraging" but also "a bit frightening" because it casts doubt on previous positive results."
In other words, before they were required to document their methods, research into new drugs or treatments would prove the success of those drugs or treatments more than half the time. Once they had to document their research methods, however, the drugs or treatments being tested almost never worked. The article also reveals a failure of the medical research community to confirm its earlier positive results. It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.
Not even wrong (Score:2)
It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.
Uhh... that's not even wrong. It's either something that started out OK and has been edited down into nonsense, or it shows such a fundamental misunderstanding of how controlled experiments work that, well, it's not even wrong.
Re:Not even wrong (Score:5, Insightful)
Re: (Score:2)
So what you're saying is that clinical trials totally win!
...at failing.
Re: (Score:2)
What it says is that the drug companies were:
Step 1: Gaming the clinical trials to get the results they wanted.
Step 2: Profit!
It didn't really matter to them that the results were invalid or in some cases may have caused harm. It typically takes years to sort out bad results and in the meantime, the profits are real.
Re: (Score:2)
Re: (Score:1)
Thing is, those vaccines HAVE been shown to work in trials. And those trials were as rigorous as any today.
You DO know that 8% of trials showed that the treatment REALLY DID work, didn't you? It says so in the summary up there. So you can't insist that every trial is false. You need to see what the trial was before making a claim about any result.
And when you add in the "side effects" of NOT vaccinating, you're not being smart in refusing the vaccines.
Re:Not even wrong (Score:5, Informative)
Correct. Even if you specify your subgroups beforehand in the experimental design, you still need to modify your interpretation of statistical significance (downward) to account for the consideration of multiple hypotheses. If you're going on a fishing expedition by identifying subgroups post hoc, then you ideally need to base this correction on the potentially large number of conceivable subgroups that are available to be drawn. It's very hard to achieve real significance under those circumstances. On the other hand, you might find a subgroup result suggestive and conduct a separate follow-on study to test it independently; that's perfectly legitimate.
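To make the multiple-comparisons point concrete, here is a minimal sketch (my own illustration, not anything from TFA) of a Bonferroni-style correction applied to pre-specified subgroups; the subgroup names and the simulated outcome data are placeholders with no real effect in them.

```python
# Minimal sketch of a Bonferroni-style correction for pre-specified subgroups.
# The subgroup names and the simulated outcomes are illustrative placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
subgroups = ["men", "women", "under_50", "over_50", "smokers"]  # declared up front
alpha = 0.05
alpha_per_test = alpha / len(subgroups)  # Bonferroni: split the error budget

for name in subgroups:
    treated = rng.normal(0.0, 1.0, 100)  # simulated outcomes, no true effect
    control = rng.normal(0.0, 1.0, 100)
    res = stats.ttest_ind(treated, control)
    verdict = "significant" if res.pvalue < alpha_per_test else "not significant"
    print(f"{name}: p = {res.pvalue:.3f} -> {verdict} (corrected alpha = {alpha_per_test:.3f})")
```

With the corrected threshold, the occasional p just under 0.05 in one of the five subgroups no longer counts, which is exactly the point about adjusting significance downward.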
It goes further still (Score:2)
There's more to it still: you can actually exploit the uncertainty in the measurements you're using to categorize your subjects into subgroups, in a way that ensures your drug WILL report positive effects even if it has zero real effect. It's very well explained in this short article by Tom Naughton [fathead-movie.com], complete with a numerical demonstration.
To put it briefly: you can design the subgroup criteria in a way that overrepresents false positives and underrepresents the false negatives that would otherwise counterbalance them.
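That mechanism is easy to simulate. The sketch below is my own toy example (not Naughton's actual numbers): every subject gets a noisy baseline reading, the "high" subgroup is selected on that noisy reading, and a drug with zero effect is "tested" by re-measuring; the selected subgroup improves purely through regression to the mean.

```python
# Toy illustration: a drug with zero effect appears to help a subgroup
# that was selected on a noisy baseline measurement (regression to the mean).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_level = rng.normal(200, 20, n)           # each subject's true value
baseline = true_level + rng.normal(0, 15, n)  # noisy baseline measurement
followup = true_level + rng.normal(0, 15, n)  # noisy follow-up; the drug does nothing

high_at_baseline = baseline > 230             # subgroup chosen on the noisy reading
change = followup[high_at_baseline] - baseline[high_at_baseline]
print(f"mean change in the 'high' subgroup: {change.mean():.1f}")  # reliably negative
```

The subgroup was partly selected for having unlucky measurement noise at baseline, so on average it drifts back down at follow-up whether or not the drug does anything.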
Re: (Score:3)
'There was a derisive laugh from Alexandrov.
"Bloody argument," he asserted.
"What d'you mean 'bloody argument'?"
"Invent bloody argument, like this. Golfer hits ball. Ball lands on tuft of grass - so. Probability ball landed on tuft very small, very very small. Millions other tufts for ball to land on. Probability very small, very very very small. So golfer did not hit ball, but deliberately guided on tuft. Is bloody argument. Yes? Like Weichart's argument"'.
- "The Black Cloud" by Fred Hoyle
Re: (Score:2)
Hit golf ball.
Go to where it landed, paint a circle around it. Take picture. It landed in the circle, exactly what you wanted.
Re: (Score:3)
The trouble is that when a pharmaceutical corporation carries out research on a new drug, it very much wants that drug to be found useful, safe, and fit for use. (Although it's in the corporation's interest that any really serious side-effects should be identified, as it would lose even more money if it marketed something that turned out to be a new thalidomide).
Unlike a proper scientific team, the corporation undertakes its research wanting a particular outcome. That alone is almost enough to guarantee that any results reported will be unreliable and unsafe.
Re: (Score:2)
Re: (Score:2)
Yes, that's exactly what is wrong.
Re: (Score:2)
I have heard that a popular method of drug research is to do clinical studies on a large number of people, but only report on the people who had positive results. You had 500 people in the trial, 10 got better, you report your 10-person drug test. Doctors should be required by law to report negative drug effects to the FDA from all their patients, even for clinical trials.
The only people who do that are quacks like Burzynski, who published a book describing his "successes": patients who may not have actually had cancer in the first place, and whom he didn't follow for more than 6 months.
You couldn't publish something like that in a major peer-reviewed medical journal today.
What the drug companies did do until recently was commission a lot of studies, and publish only the studies with good results. The journals put an end to that by requiring every investigator to report the start of a study.
Re: (Score:3, Interesting)
You couldn't publish something like that in a major peer-reviewed medical journal today.
Anil Potti, Sheng Wang, and many others managed to do it just fine. Where drug companies get into trouble is usually when they hire people at hospitals/companies/universities to run sites for a clinical trial. The contracts often create huge conflicts of interest that can skew the results of the trial.
Re: (Score:2)
You make a good point.
Re: (Score:2)
The trouble is that when a pharmaceutical corporation carries out research on a new drug, it very much wants that drug to be found useful, safe, and fit for use. (Although it's in the corporation's interest that any really serious side-effects should be identified, as it would lose even more money if it marketed something that turned out to be a new thalidomide).
Unlike a proper scientific team, the corporation undertakes its research wanting a particular outcome. That alone is almost enough to guarantee that any results reported will be unreliable and unsafe.
That's why some of the best studies, such as the ones on cardiology drugs, were done by the Veterans Administration. You go to a medical conference and you can hear doctors referring to "the VA study".
Kaiser Permanente and Blue Cross/Blue Shield did some good studies too, but their economic model doesn't encourage it.
Re: (Score:2)
No, it is a pretty accurate summary of what is happening. You focus post hoc on where you got a good result. Say for example you want to test a new anti-diabetes drug. Does the drug work in the general population? Well, the data doesn't support that. So then you look at subgroups. Does your data show success in, say, just men or just women? What about black men? Black women? White women? Etc. This isn't the only serious problem; sometimes one can choose which statistical tests to do or how to compensate for complicating factors. If you have enough choices you can make anything look successful.
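To see how easy this is, here's a rough simulation of my own (the subgroup count and sample sizes are arbitrary assumptions): a drug with no effect anywhere, tested post hoc in a dozen subgroups, comes up "significant" somewhere in nearly half of the trials.

```python
# Rough illustration: with enough post-hoc subgroups, a useless drug "works" somewhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_subgroups, hits = 1000, 12, 0

for _ in range(n_trials):
    outcome = rng.normal(0, 1, 400)               # no treatment effect anywhere
    treated = rng.integers(0, 2, 400).astype(bool)
    labels = rng.integers(0, n_subgroups, 400)    # stand-in for sex/age/ethnicity bins
    pvals = []
    for g in range(n_subgroups):
        mask = labels == g
        a, b = outcome[mask & treated], outcome[mask & ~treated]
        if len(a) > 1 and len(b) > 1:
            pvals.append(stats.ttest_ind(a, b).pvalue)
    if pvals and min(pvals) < 0.05:
        hits += 1

print(f"{hits / n_trials:.0%} of null trials had at least one 'significant' subgroup")
```

Pre-registration doesn't stop anyone from looking at subgroups; it just makes it obvious when the "significant" one wasn't the question the trial set out to answer.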
Now here's an interesting thought experiment. Assume the researchers were doing a perfectly good job of analyzing results but were just terrible at predicting what those results would be. For instance, drug X doesn't work for the general population but it does actually work for black men, or this drug failed at treating angina, but it's great for erectile dysfunction [wikipedia.org].
Now this is a testable hypothesis, because a second trial could confirm the surprise result from the first trial, i.e. Trial A tests for angina but finds an unexpected benefit for erectile dysfunction, and Trial B then tests for erectile dysfunction directly.
Re: (Score:2)
does anyone know if they're redoing those trials with the new goalposts?
When the new goalpost looks like it might address an unmet medical need: Pretty much as soon as the patent application has been filed. If it looks like revenue, the company will either develop it or sell the IP to a company that can.
Re: (Score:1)
Many times of course it is p-hacking, trying to make something out of nothing.
The problem is that if this result holds (an 86% reduction in positive results for clinical trials, from 17/30 to 2/25), then research will simply become too expensive.
If you feel that no further research is needed, great. But if you look at your own body and those of the people you know, and you see how much lower our quality of living has become (the obesity epidemic, for one), you may feel otherwise.
Re: (Score:2)
No, it is a pretty accurate summary of what is happening.
It's actually subject to some misinterpretation. What the OP seemed to be saying was that an experiment consists of running a trial ("I want to determine car colours, the first car I see is red, experiment concluded") and drawing a conclusion from that. Obviously any subsequent "experiment" can prove the first one wrong, which is why an experiment should be about falsifying the null hypothesis. If performed correctly (which is why I used the term "controlled experiment"), you only need to do it once (although replication still adds confidence).
Re:Not even wrong (Score:4, Interesting)
Re: (Score:2)
When I first saw the article I said "just follow the money". Well duh, most trials proved the newest "wonder drug" from some big company worked when they had no need to document all of the things that went into the research. Only drugs that couldn't manage in any way to produce significant positive results would fail. That makes perfect sense: the medical scientists in question usually know which side is paying the bills, and it's always in their best interest to keep that side happy.
Re: (Score:1)
It's similar to the stock market scam, where you create a dozen funds, trade them at random, and then cherry-pick the one that had done a lot better than the market, claim that it's due to your brilliance and then ask for investors (charging a hefty administration fee to each one).
You meant "mutual fund scam".
Re: (Score:2)
A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.
If by "taken seriously" one means "approved by the FDA", then two trials is pretty much the minimum (one phase II and one phase III).
Similar issues in other fields, not a perfect fix. (Score:5, Interesting)
Re: (Score:2)
It's the same in computer science. Many measurement methodologies are plain wrong or misleading [acm.org]. And in many cases, the source code of what people did is not available, so independent evaluation is not possible (someone also published a paper about that, where the author couldn't even get the code in many cases after explicitly asking for it, but I forgot the title). It's not just a problem of alpha sciences.
Re: (Score:3)
I also had this problem. And when I asked a PhD student about something that seemed suspicious in his paper's experimental setup, he admitted that it's actually not as great as described.
Another big problem is research into development processes like Scrum, XP, Waterfall etc. I argued quite a bit with my professor, as all the experiments cited were like "take a bunch of students and let them work on a toy project, which never has changing requirements and is tiny in code size." Well, this of course can't be used to draw conclusions about real-world projects.
Re: (Score:2)
"The problems in software projects arise after _months_ not hours."
I'm doing some game coding right now. I get problems cropping up every COMPILE. What the hell do you mean by MONTHS?
Re: (Score:2)
Uh, "not sure if serious"!?
Forget compile-time bugs or errors in algorithms; think more about project management, development processes, maintainability, and the impact of requirement changes or new feature requests on a large project after it's basically done, etc.
Re: (Score:2)
Any problem the compiler can find is easy.
Re: (Score:2)
In psychology, individuals are ... individual. If you can replicate the results exactly, then you must be doing something wrong.
This is confused. People are individuals but studies look at general trends. And those should remain the same. Humans being individuals doesn't make them immune to statistical analysis.
Re: (Score:2)
Unfortunately, not just social science.
Re:Similar issues in other fields, not a perfect f (Score:5, Interesting)
Similar issues have shown up in other fields.
Indeed. The biggest issue is statistical ignorance, but even people with a decent amount of training in stats can be fooled if they want to find a particular result. Anyhow, whenever things like this come out, everyone always thinks it's about scientists who manipulate data deliberately. While that happens, it's more often just researchers who "try things out" after collecting data and notice a pattern (unintentionally skewing things). If they have to declare methods and statistical tests beforehand, it's harder to make these errors.
A few months back, I happened upon a very useful guide to the problems in modern scientific publication, which can be found partly online here [statisticsdonewrong.com]. I ended up buying the print edition, and the sheer number of examples of completely bogus research ending up being accepted in various scientific fields due to erroneous stats and various biases that creep into the publication process... well, it's just shocking. Seriously.
As the book notes, the other problem is that even finding these errors is incredibly time-consuming and labor-intensive. I specifically remember one case where a new oncology test was proposed by Duke researchers and seemed to have great results. This case eventually became so infamous that it was reported on [nytimes.com] in the popular media [economist.com].
Anyhow, basically they had a couple of independent statisticians analyze the work (where they found HUGE numbers of problems in mislabeled data, mistakes in analysis and basic computation, etc., which appear to be par for the course in many labs, if you believe the studies on this stuff in the book). Ultimately, estimates are that it took TWO THOUSAND HOURS of work for these independent statisticians to complete their analysis and render a verdict.
And once they did this, the statisticians tried to publish it -- but major journals didn't want it. Groundbreaking results are much more interesting than tedious statistical analysis. The National Cancer Institute caught wind of the problems and initiated an independent review, which found no errors (probably because the review was done by cancer experts, not stats experts, and they hadn't been given the stats analysis done by the other researchers).
The only reason any of this ever really got much attention is because one of the lead researchers was accused of falsifying some aspects of his resume, which led to people actually going back and questioning his papers.
The book is full of stories like this, though, as well as citations of analyses of how many journal articles in various fields suffer from serious statistical problems.
It's all really scary when you start realizing how much bogus research is out there... most of it completely unintentional, and most of it passing peer review because it follows the field's "standard methodologies."
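One of the recurring points in that kind of book can be reproduced with a line of arithmetic: if only a small fraction of the hypotheses being tested are actually true, then even honestly run studies at p < 0.05 yield mostly false positives. The numbers below are illustrative assumptions, not figures from the book.

```python
# Back-of-the-envelope share of "significant" findings that reflect real effects.
# All three inputs are illustrative assumptions.
prior_true = 0.10  # fraction of tested hypotheses that are actually true
power      = 0.50  # chance a real effect reaches significance (many studies are underpowered)
alpha      = 0.05  # chance a null effect reaches significance anyway

true_positives  = prior_true * power
false_positives = (1 - prior_true) * alpha
ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' results that are real: {ppv:.0%}")  # roughly half
```

0.10*0.50 / (0.10*0.50 + 0.90*0.05) is about 0.53 -- and that's before any of the biases or outright errors the book catalogs.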
Re: Similar issues in other fields, not a perfect (Score:1)
Re: (Score:2)
It's ironic too because we tend to waste a lot of time trying things that don't work, concluding that they don't work, and moving on to the next thing.
More basic than just finding the results they want (Score:5, Informative)
The basic flaw is worse. They didn't just run one test, find the results they wanted and go with it. They ran a test with only an idea of what they wanted, then took all the results they got and picked out ones that were positive for conditions or treatments they could go with. It's like going into a test for a drug to treat heart attacks, finding that it doesn't do anything for heart attacks but does seem to lower cholesterol levels, and announcing that the trials of your new cholesterol medication were positive.
Having to declare up front what their goals are destroys the ability to cherry-pick like this. What we're seeing with the drop in positive results isn't so much a difference in the clinical effectiveness of the drugs as the dragging into the spotlight of the pharma companies' ability to predict what their drugs will do and how well they'll do it. There's a very interesting blog here [sciencemag.org] that covers a lot of this, and one conclusion that keeps coming up again and again is that medical biochemists and researchers don't really have a good way of predicting from lab results what a compound will do in a live human.

It also highlights fairly often how the drug companies will keep pushing a drug through trials even though the results aren't encouraging. It's a common attitude in business and finance: now that you've invested this much money in something, you have to get some return out of it to justify the cost. It's also a common failing in gambling, the belief that now that you're in the hole you have to dig yourself out somehow. But in gambling, if you're holding a bad hand your best bet is to fold. Don't worry about how much you've already got in the pot, it's already lost. Fold and cut your losses before you throw any more money away.

Drug companies are notoriously bad at making that decision to walk away. They're also notoriously bad at dealing with a field where there aren't many good rules you can follow to get results. MBAs like process and procedure and predictable results, and right now biochemical research is in a situation where the new stuff is mostly out in areas where there isn't a lot of prior research, there isn't a good map of the territory, and you're going to be doing a lot of "poke it with a pointy stick and see what it does" work.
Re: (Score:2)
Which isn't wrong, but requires verification (Score:2)
Exactly: if you have a bunch of random data, you need to specify in advance the particular effect you are looking for and the p-value you will accept, in order to ensure that you aren't reacting to random noise.
Nobody has "forgotten" anything; it's about money. (Score:1)
Nobody "forgot" that. It's just that clinical trials often take years and involve thousands of people in the later stages, making them very expensive. Funding to replicate results is always lacking, particularly if it might disconfirm a positive find
Re: (Score:2)
That is why I believe funding of trials should not be done by pharmaceutical companies. It is a conflict of interest from the beginning. Some things, by their very nature, should be funded by taxes, because they are too important to simply let the organization with the biggest wallet win.
research....publish.... (Score:2)
basic tenet of science? (Score:1)
no, no, no...
it's not about the 'basic tenet of science',
it's about the 'basic tenet of profit'.
Science? (Score:4, Interesting)
It appears the medical research field has forgotten this basic tenet of science:
It's almost as if it isn't science at all, but rather advertising, where the target audience is a government agency that gives the company permission to transfer the product to its other advertising division, which then advertises it to doctors [youtube.com] and the public. What percentage of these clinical trials are trials of something not destined to produce wealth for an organization if the results of the trial are positive? When a wealth-generating organization approves the expenditure of the large amount of money needed for a clinical trial, I'm sure it also adds extra management, or transfers management of the research, so as to help make the results of the trial more profitable than just doing science ever could. I'm thankful that this requirement to register these trials exists.
Re: (Score:2)
It's also a massive barrier to entry for competitors. At fault, either way, is the FDA and how it operates.
Re: (Score:2)
Re: (Score:2)
"Just like everybody else"? You're saying that if I donate money to my city councilor and then ask him to adopt a policy that benefits me, I should be criminally prosecuted? If that's the rule, then about 90% of teachers should be thrown in jail, because they all donate to, and lobby and petitio
Re: (Score:2)
Re: (Score:2)
They are telling the truth: they (generally) complied with the old FDA requirements and reported their results truthfully, and now they are complying with the new ones. The problem isn't with companies committing fraud, it's with regulations not achieving what you want them to achieve. That is, no matter what regulations you come up with, people and companies will figure out how to work within them to their own advantage.
Car analogy! (Score:2)
Another serious issue is that it is hard publishing negative results. So even though every re
Jelly bean analogy (Score:3)
This was explained in the journal xkcd.
https://xkcd.com/882/ [xkcd.com]
Ok, it's not peer-reviewed. But it has a very high impact factor.
Re: (Score:1)
Re: (Score:3)
It depends on what you're looking at.
Is this entirely independent research by respected labs? Or is this research in one particular area, by a specialised lab, with a particular sponsor? The latter is, we all know but can't prove easily, biased.
Is this a paper designed to do nothing more than back up a sponsor's advertising claims? Or is it something groundbreaking from a lab with few political ties, nothing to prove, and some serious science behind it.
It all changes the perspective of what a "scientist" is.
And all experiments should be published too (Score:2)
This basic idea should be applied much more broadly, and given that most publishing is electronic now, all experiments should be published and evaluated on the validity of their methodology and how interesting or insightful the hypothesis being tested is, not their result.
How much money and time are wasted repeating the same negative-result experiments at different labs, because nobody knows that someone else has already tried them and gotten a negative result that was never published?
Re:this is happening everywhere (Score:4, Insightful)
when did academia get taken over by idiots? now that the gatekeepers are dumb we're fucked and all the smart people just go make money.
Well, actually, academia wasn't taken over by idiots. It was taken over - infiltrated - by smart people who were much more interested in money, prestige and power than in scientific truth.
That's the unfortunate fact about American culture. The USA was founded on the belief that all people (well, all white males of a certain age with property, but that's a small detail) should be treated alike. No titles of royalty, nobility or gentry. No class system. No special distinctions or honours.
The result, which became obvious very early on, was a society in which the only value was money. And money, it turns out, corrodes everything that is honest, decent and worthwhile. Now that culture is flooding the rest of the world - although some nations have done their valiant best to build dykes to keep it out.
"As the sociologist Georg Simmel wrote over a century ago, if you make money the center of your value system, then finally you have no value system, because money is not a value".
– Morris Berman, “The Moral Order”, Counterpunch 8-10 February 2013. http://www.counterpunch.org/20... [counterpunch.org]
Re: (Score:2)
Greed is not the only problem. Yes, it is a big factor, but I have also seen researchers doing things they should have been laughed at for, not published for, and that happened simply through lack of knowledge or rigor.
Re:this is happening everywhere (Score:4, Interesting)
Americans are actually considerably less materialistic than other nations: http://www.ipsos-na.com/news-p... [ipsos-na.com]
Money to most Americans is a means to an end, a way to pursue the values and ideals they actually care about.
Well, Georg Simmel certainly got his wish, because about a decade after his death, his country made community, fairness, and equality the center of its value system, under the NSDAP party program. And it did it again two decades later in the GDR. And it was a resounding success: people indeed stopped caring about money, because they ended up caring mostly about not starving, not getting shot, and not getting gassed. Germany's tradition of academic philosophy that Simmel was part of bears a great deal of the blame.
And, of course, Simmel himself came from a well-off family and never had to worry about a lack of money limiting what he could do. His teachings are the philosophical equivalent of "let them eat cake".
Re: (Score:2)
"Americans are actually considerably less materialistic than other nations: http://www.ipsos-na.com/news-p [ipsos-na.com]... [ipsos-na.com]"
Americans SAY they are considerably less materialistic than other nations. FTFY.
Have you noticed that people from the wealthier nations tend to say they are less materialistic? Funny that. Moreover, I didn't say that Americans were unduly concerned about material possessions. I said that American culture is obsessed with money - quite a different statement. In fact, money is often pursued as an end in itself.
Re: (Score:2)
Materialism is a personal preference or choice, and asking is a reasonable way to find out what that is. But if you think they are lying, there is tons more data supporting the notion that Americans are generally less materialistic than other nations.
Nothing "funny" about it. You need to reach a certain level of material w
Re: (Score:2)
Exactly so. Money and its pursuit cannot replace the more human values such as honesty, courage, kindness, courtesy, imagination, charity, etc. There is something uniquely corrupting about money, in that when you start to think about it a lot you eventually become almost incapable of thinking about anything else. That's when you start sacrificing the good things of life for more money - and more, and more. If you are Larry Ellison, you want to be richer than Bill Gates. And so on.
Re: (Score:3)
Money grubbers are nice people next to power grubbers.
Re: (Score:2)
when it was discovered that one could make a fuckload of money from presenting pseudoscience and revisionism as Truth, and from indoctrinating the next generation of fresh young minds into the dogma of corporate rote and the prospect of production-line drudgery for their entire working lives.
"Would you like fries with that?"
- The single most repeated line of a university graduate's professional career.
Pharmaceuticals and money (Score:1)
This is what happens when they intermingle...
Classic case of confirmation bias (Score:3)
Good! (Score:2)
Re: (Score:2)
A lot of people also get hurt and killed by not making new drugs available that could help them.
Ultimately, "they" should do the testing, and each patient should decide for themselves what risks to take. Unfortunately, in our paternalistic system, people are denied those choices.
Re: (Score:2)
Re: (Score:2)
if only it were so simple (Score:2)
That's not sufficient. In many experiments, it is easy for multiple independent researchers to share incorrect assumptions or to make the same mistake.
Trusting scientific results is something that should take decades, many repeated experiments, and an understanding of how those results fit in with the rest of science.
Re: (Score:2)
many things need to be taken into account:
- experimental brief
- detailed experimental method
- margins for error
- depth of reporting
unfortunately none of this is available when it comes to, e.g., pharmaceutical studies. All we end up with is "Clinically proven!" Great. Which clinic, and how was the proof arrived at?
Medicine may eventually become science... (Score:2)
... if they work really hard at curbing egos and huge financial incentives. As it is, quite a bit of medical "research" is utterly pathetic and counter-productive. Unfortunately, more and more of that is happening in other fields too. For example in CS, not a lot of real scientific progress has been made in the last 30 years, due to the same stupidity, arrogance and perverted incentives.
Obvious need (Score:2)
Say you want to find out if certain candies act as a libido increaser.
You do a study on 20 different candies. One of them seems to act like a weak form of viagra, at a 95% certainty that it isn't random chance. Except 95% is also known as one in 20, and you tested 20 candies...
If you didn't start out saying you were testing whether M&M's were the viagra, you can claim that you have a positive effect. But if you were forced to specify up front the candy you were testing, the fluke would be obvious for what it is.
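The arithmetic behind that: with 20 independent tests at the 5% level and no real effect anywhere, the chance of at least one false positive is 1 - 0.95^20, i.e. roughly 64%. A one-liner to check (purely illustrative):

```python
# Chance that at least one of 20 ineffective candies tests "significant" at p < 0.05.
alpha, n_tests = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"{p_at_least_one:.0%}")  # about 64%
```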
flagging corporatist bias (Score:2)
This is great news; it shows that money paying for results spoils the process. When it's opened up for peer review, one leg of any review being the ability to repeat the experiment under the exact same conditions and expect the exact same results, corporate bias in reporting results shows up like Fat Waldo in a herd of Adelie penguins.
The bias being brought about by the sponsored research simply dismissing negative results as experimental error rather than seeing them as legitimate readings as far as variance goes.
Re: (Score:1)
Re:If Only (Score:4, Insightful)
...such rigor were required of Climate Science.
And now the onus is on you to back up what you say and prove that such rigor isn't required of climate science, or that the results are being significantly skewed because of their funding source. The fact that what the science says does not match your gut instinct is not proof. The fact that it could happen is not proof; we need evidence of inconvenient funding sources being omitted from research, or a meta-study showing differing results depending on how the methodology is described.
In fact, I would go so far as to say that most climate scientists have done a good job of keeping corporate interests away from their research. If a scientist wanted to ensure future funding and be free of political interference, they would take the easy path of downplaying global warming. No scientific research organisation has been defunded or disbanded because it gave evidence that ran contrary to man's impact on the environment.
Re: (Score:2, Informative)
@Gadget_Guy ...prove that such rigor isn't required of climate science...
One way to evaluate scientific hypotheses is to look at the events they predict, then observe nature to see if the predicted events correspond to reality. In that sense, modern climate science is an epic fail, because the "global warming" predicted by their models failed to happen. (Prompting the climate-alarmist "true believers" to switch to "climate change" - so, up or down, they can't lose.)
http://www.americanthinker.com... [americanthinker.com]
Re:If Only (Score:4, Insightful)
Yes, because climate scientists are all trying to get rich by pandering to the Government? This is the most ridiculous argument against climate change. The only "scientists" with a demonstrable financial interest are the corporate shills denying the evidence. Take a look at the history of the campaign to end leaded additives in gasoline. The kind of corporate-funded "research" trying to discredit the voices sounding the alarm against lead in fuels sounds eerily similar to what is going on in the climate change debate today.
Re: (Score:2)
Scientists have a job because the vast majority of funding for climate science comes from government or NGO organizations that only fund research looking to confirm human-caused global warming, er, global climate change.
Which is why most of the criticisms come from older scientists who have tenure and are no longer trying to maintain a research lab, so they don't need funding, or who are simply retired.
Re: (Score:2)
Re:If Only (Score:5, Insightful)
You're kidding, right? How often do we have to chant the mantra that climate is not the same as weather? The climate models may suggest that we will see an increase in precipitation, but that doesn't mean that there will be more rain in any one specific location. Also, there are other localised factors in droughts.
None of what was in that article is enough to make the claims that all the models have failed. I also don't understand the point of the article. If 97% of the scientists agree that man has a hand in global warming, it doesn't mean that they agree on all the details. Nor does it mean that any disagreement within the community is proof that the whole thing is a big fat lie.
BS. 'Big Oil' is a red-herring to divert attention away from 'Big Government', whose grants and funding tend to force researchers to become, in effect, lobbyists for political activism in order to 'pay the rent'.
And that is even more BS. There is no proof that there is any "Big Government" that is attempting to control the scientific community, especially when 50% of those people in power are actively against the idea of climate change. Whenever you hear of political interference with the scientific process, how often is it some left-wing conspiracy to force the hand of scientists compared with conservatives attempting to shut down institutions that do research into climate change? Where is the evidence of this giant conspiracy, other than far-right pundits speculating as if it was fact?
Re: (Score:3)
because the "global warming" predicted by their models failed to happen
Just for kicks, look at this set of organizations that disagree with you [nasa.gov].
For what you're saying to be true, not only would every one of those organizations have to be concealing data or faking models, every single organization on the planet would have to be in on it, because science is science.
So the claim is that the entire global scientific community is suddenly full of shit, but only on this one subject (although of course the implication is that nothing affiliated with any university or mainstream institution can be trusted either).
Re: (Score:2)
Only every experiment that gets readings from instruments with known biases? What the fuck do you think scientific instruments are, oracles? You think they're made of unobtainium and starshine, wishes and wands? Lab instruments can be expensive and delicate and uniform.
Weather stations can't. Lab instruments can be operated in temperature- and humidity- and everyotherfuckingthingelse-controlled environments. Weather stations can't. They degrade, making the instruments behave differently. Conditions change around them.
Re: (Score:2)
focused on human randomized controlled trials that were funded by the US National Heart, Lung, and Blood Institute (NHLBI). The authors conclude that registration of trials seemed to be the dominant driver of the drastic change in study results. They found no evidence that the trend could be explained by shifting levels of industry sponsorship or by changes in trial methodologies.
Re: (Score:2)