Why Published Research Findings Are Often False
Hugh Pickens writes "Jonah Lehrer has an interesting article in the New Yorker reporting that all sorts of well-established, multiply confirmed findings in science have started to look increasingly uncertain as they cannot be replicated. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology. In medicine, the phenomenon seems extremely widespread, affecting not only anti-psychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants. 'One of my mentors told me that my real mistake was trying to replicate my work,' says researcher Jonathan Schooler. 'He told me doing that was just setting myself up for disappointment.' For many scientists, the effect is especially troubling because of what it exposes about the scientific process. 'If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?' writes Lehrer. 'Which results should we believe?' Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential because they allowed us to 'put nature to the question,' but it now appears that nature often gives us different answers. According to John Ioannidis, author of Why Most Published Research Findings Are False, the main problem is that too many researchers engage in what he calls 'significance chasing,' or finding ways to interpret the data so that it passes the statistical test of significance, the ninety-five-per-cent boundary invented by Ronald Fisher. 'The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy.'"
Hmmmmm (Score:5, Interesting)
Re:Hmmmmm (Score:5, Insightful)
Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively. Whenever results have a human element there's always the possibility of experimental bias.
Triply so when the phrase "one of the fastest-growing and most profitable pharmaceutical classes" appears in the business plan.
Fortunately for science, the *real* truth usually rears its head in the end.
Re:Hmmmmm (Score:4, Insightful)
Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively.
Alternatively, this article is almost unbearably stupid. It starts off heavily implying that reality itself is somehow changeable - including a surfing professor who says something like "It's like the Universe doesn't like me...maaan".
This is just a journalistic tactic, though. Start with a ridiculous premise to get people reading, then break out what's really happening: poor use of statistics in science. What was really the point of implying that truth can change?
Re:Hmmmmm (Score:5, Informative)
Well, passing over for the moment the likes of determinism and ecological psychology, I think you're mistaken about direct studies of human behavior. There are a number of very subtle effects that, when run through the non-linear recurrent processes of the brain, can lead to significant behavioral changes (e.g. the demand effect [wikipedia.org]). While some of these were touched on lightly in the New Yorker article (about blinding protocols and so on), there are second-order effects that are impossible to control. A psychologist who does a large-subject-pool experiment needs funding, which generally requires pilot results. These results are exciting to the psychologist and their lab: they're more motivated, have higher energy, and are probably interacting better with the subjects (or with the technicians running the subjects, if they have enough money to blind it that deeply), and more motivated people are going to produce different behaviors than less motivated people. If the blinded study is positive and appears significant, it may become a big thing, but by the nth repetition of the original experiment by another lab to verify they understand the protocol, the lab might be completely bored with the initial testing and the result disappears, essentially a variation of the Hawthorne effect [wikipedia.org] (which has itself been disappearing). That may well mean that the effect exists in certain environments but not others, which is an ultimately frustrating thing to classify in systems as complex as human society.
It essentially boils down to the fact that we're all fallible, social beings that interact with the environment, rather than merely observing it. Whether you want to say that this adds correlated but unpredictable noise to any data analysis that is not being appropriately controlled for (but can be), or that it is a fundamental limit on our ability to understand certain aspects of the world, at our current level of understanding it does rather seem that there is a class of experiments in which a scientist's mental state affects the (objective) results.
Re: (Score:3)
So basically a kind of psychological uncertainty principle?
Re: (Score:3)
Or rather, it does seem that some social scientists have much poorer standards than their colleagues in the "hard" sciences. That seems a much simpler explanation than inventing a new "class" of experiments.
There's nothing new about errors that creep in due to the scientist's mental or physical state. Look up the theory of experimental errors [wikipedia.org], which was invented two h
Re:Hmmmmm (Score:5, Interesting)
Start with a ridiculous premise to get people reading, then break out what's really happening
Welcome to corporate journalism. And corporate science.
If there's one useful thing that 30 years of recreational gaming has taught me, it's this: Players will find loopholes in any set of rules, and exploit them relentlessly for an advantage. Corollaries include the tendency for games to degenerate into contests between different rule-breaking strategies, and the observation that if you raise the stakes to include rewards of real value (like money), then the games with loopholes attract players who are not interested in the contest, but only in winning.
This lesson applies to all aspects of life from gaming, to sports, business, and even dating.
And so it's no surprise that when the publishers set up a set of rules to validate scientific results, those engaged in the business of science will game those rules to publish their results. They're being paid to publish; if they don't publish, they've "lost" or "failed," because they will receive no further funding. So the stakes are real. And while the business of science still attracts a lot of true scientists (those interested in the process of inquiry), it now also attracts a lot of players who are only interested in the stakes. Not to mention the corporate and political interests who have predetermined results that they wish to promulgate.
What was really the point of implying that truth can change?
To game the system, of course. The aforementioned corporate and political interests will use this line of argument now, in order to discredit established scientific premises.
Re: (Score:3)
Heh. This explanation appeals to me. It reminds me a little bit of an article called "Playing to Win" that talks about 'scrubs'. If you haven't read it: a scrub is a player who complains when an experienced player seems to be exploiting loopholes but is actually just playing the game.
I think that the two situations you mention (science and journalism) can feel a bit like games sometimes. Perhaps the players are becoming confused as to the real purpose of the game. In the case of science, it is to advance knowledge; not to ad
Re:Hmmmmm (Score:5, Interesting)
Re:Hmmmmm (Score:5, Insightful)
nahh, the problem is a misunderstanding of statistics: thinking that post-hoc analysis, with its fishing for statistical significance, is as valid as proper hypothesis testing. The proper way is where the hypothesis is fully pre-formed and then tested. The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis.
The need to publish (a scientist's income relies on what he publishes in most cases) as well as funding issues force scientists to try to find some usable results from their science, and by trawling through their data they can often salvage what would otherwise have been a failed bit of research. Except this salvaging operation may actually be absolutely worthless. This is most often not done on purpose but rather due to only partly understanding what statistics and significance testing tell us.
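To make that concrete, here's a quick back-of-the-envelope simulation of what that kind of salvage operation does. It's only an illustrative Python sketch with made-up, pure-noise data; none of the numbers come from any real study:

    # Sketch: "trawling" a null dataset for something publishable.
    # Every variable is pure noise, so any "finding" below is spurious.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_variables = 50, 40
    outcome = rng.normal(size=n_subjects)
    predictors = rng.normal(size=(n_subjects, n_variables))

    p_values = [stats.pearsonr(predictors[:, i], outcome)[1]
                for i in range(n_variables)]
    hits = [p for p in p_values if p < 0.05]
    print(f"{len(hits)} of {n_variables} noise variables 'significant' at p<0.05")
    # Roughly 0.05 * 40 = 2 spurious hits are expected; report only those
    # and the write-up looks like a result, even though nothing was there.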
So, a capitalistic, fully performance based (with results being the performance metric) environment does not seem to work well for science.
Surprised?
Me neither.
Re:Hmmmmm (Score:5, Interesting)
Very well expressed. To put this in a context which will seem bizarre to many readers of slashdot, there is a whole range of products on the market to help "scientific astrologers" search out correlations between planetary positions and life circumstances. And a legion of astrologers making use of them -- at several hundred dollars a copy -- to pore over birth charts with dozens and dozens of factors. Unless things have changed in the years since I looked into this, what's usually conveniently sidestepped is that some of those factors will indeed show up significant by chance. After all, that is the very definition of probability expressions such as "p less than .05". On replication, these findings will normally disappear, resulting in a crestfallen astrologer. (Then again, why not just expand the original dataset and check again to see if different factors come up this time :-)
But the motivation to get something out of the data is high, as the parent post points out, and researchers may be able to deceive themselves just as well as astrologers can, especially when academic careers are on the line.
Re:Hmmmmm (Score:5, Insightful)
nahh, the problem is a misunderstanding of statistics: thinking that post-hoc analysis, with its fishing for statistical significance, is as valid as proper hypothesis testing. The proper way is where the hypothesis is fully pre-formed and then tested. The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis.
The significance of this fundamental mistake cannot be overstated. It seems to be prevalent in the medical literature, and there was a doctor doing the rounds lecturing about this a couple of years back. I wish I could recall exactly which podcast it was, but he covered all sorts of common fundamental errors in medical research statistics and did it in a very accessible way. The key thing to remember is that if you have enough variables there WILL, by complete coincidence, be correlation between some of them in any given sample. So to test a hypothesis properly, not only must you formulate it in advance without looking for any correlation within the data, but you must look at more than one data set to verify your findings.
Re: (Score:3)
Ahh, the empirical proof.
In my day, this was more in the range of 'support'.
CC.
Re:Hmmmmm (Score:5, Interesting)
The National Center for Complementary and Alternative Medicine has received billions of dollars of public NIH funding. They study "alternative" medicine, such as chiropractic and homeopathic remedies. So far, their strongest conclusion has been that ginger has a slight positive effect on upset stomachs.
Billions of dollars. Ginger for upset stomachs. When asked why they haven't produced many solid results, the director of NCCAM usually says that they need more funding. I'd say we need a bit more results-based funding in some areas.
Re:Hmmmmm (Score:5, Insightful)
No, that doesn't solve the problem, it increases it.
The consistent lack of results is a result, and a very useful one too.
The logical next step is to ban marketing of humbug until and unless the snake oil sellers can show valid scientific theories and peer reviews for their remedies.
Likewise, capitalist-funded research needs to stop rewarding findings and start treating all results as equally valid science, and stop punishing scientists who produce negative and inconclusive results. That's good science, which is what they should pay for.
Consistently publishing more positive results than randomness would dictate is a clear indication of bad science, and should be punished, not rewarded.
Ben Goldacre and Bad Science (Score:3)
Ben Goldacre [badscience.net], an MD from the UK, has been at the pseudoscience-detection game for a while now. I have just started reading his book, Bad Science: Quacks, Hacks, and Big Pharma Flacks [amazon.com]. I find it refreshingly topical and well-focused on the problem: evidence-based decision making.
Similar to Goldacre's findings, my experience has been that evidence produced by some test requires the nature of that test to be disclosed. Following the model of the scientific process, evidence requires the following
Re: (Score:3)
Regression toward the mean (Score:5, Informative)
I think that people tend to underestimate the pervasive impact of regression toward the mean.
Even without "data snooping" (improperly reanalyzing your data post-hoc in multiple ways to find something that appears to be statistically significant), there is still going to be bias. If I do an experiment and I happen to "luck out" and get a large (i.e. larger than the "true" mean of an infinite number of observations) effect size just by chance, I am far more likely to do follow-up experiments than if I am unlucky and the effect size is small or the result is not statistically significant. If subsequent experiments asking the same question in different ways also give a statistically significant result, my belief in the phenomenon is reinforced even if the effect size is a bit smaller.
So I am far more likely to identify a real phenomenon if, because of a statistical fluctuation, I initially observe a larger effect size or a smaller standard error than the "true" value. And my figures from that initial study, showing a nice big effect and a small error bar, are far more likely to pass peer review than if the effect size is smaller and the error bars are larger, even if the criterion for statistical significance is satisfied.
If I am unlucky, and I get a lot of variation and/or a small effect size (again, compared to the "true" value from an infinite number of experiments), there is a good chance that the experiment will go into a drawer. Perhaps I'll give up on the idea, or perhaps I'll try it again, but I'll improve the experimental design in a way that I hope will reduce the statistical variability or give me a larger effect size. Of course, if it "works," I'll pat myself on the back for solving the technical problem and go on to do follow-up studies, even though statistically speaking it may well be the case that the prettier result from the new design is itself just a statistical fluctuation.
Part of the problem is that by convention, we report a single value for effect size. Yes, some sort of estimate of standard deviation is appended, but what people remember is that single value. It simply is very hard for human beings to think in terms of statistical distributions. We tend to forget (even though we know it to be true in theory) that a statistically significant result does not show that our estimate of effect size is correct--all it tells us is that the effect size is unlikely to be zero.
Thus, we can predict, just on statistical grounds, that effect sizes will tend to decline ("regress" toward the "true" mean) over time with follow-up studies, based on the simple fact that those follow-up studies are far more likely to happen if the measured effect size was initially larger than the "true" value than if it was smaller. And as far as I know, nobody has been able to come up with any statistically rigorous way of estimating the magnitude of this unavoidable bias.
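A toy simulation makes the size of this bias easy to see. This is only a Python sketch with invented numbers (a "true" effect of 0.2 standard deviations, 20 subjects per group), not a model of any particular field:

    # Sketch: why published effect sizes tend to shrink on replication.
    # The true effect never changes; only the selection (publish if p < 0.05) does.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_d, n_per_group, n_labs = 0.2, 20, 5000

    def one_study():
        a = rng.normal(0.0, 1.0, n_per_group)        # control group
        b = rng.normal(true_d, 1.0, n_per_group)     # treatment group
        _, p = stats.ttest_ind(b, a)
        return b.mean() - a.mean(), p                # observed effect, p-value

    results = [one_study() for _ in range(n_labs)]
    published = [d for d, p in results if p < 0.05 and d > 0]
    print(f"true effect: {true_d}")
    print(f"mean effect over all studies: {np.mean([d for d, _ in results]):.2f}")
    print(f"mean effect in the 'published' subset: {np.mean(published):.2f}")
    # The published subset overestimates the effect severalfold, so honest
    # follow-up studies will, on average, appear to 'decline' toward 0.2.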
Re: (Score:3)
So, a capitalistic, fully performance based (with results being the performance metric) environment does not seem to work well for science.
Nonsense, it works great for pharmaceutical companies - the more money the drug can make, the better the test results!
Re:Hmmmmm (Score:5, Interesting)
> So, a capitalistic, fully performance based (with results being the performance metric)
> environment does not seem to work well for science. / Surprised? / Me neither.
This is a gratuitous, cheap shot. These problems appear only in scientific research that is funded, managed, or supervised by government agencies or academic review committees so that bureaucrats will grant money, or full professorships, or licenses to sell drugs. Hence the crack that if you want to study squirrels in the park, you title your grant proposal, "Global Warming and Squirrels in the Park."
There are "capitalistic... performance-based environments" in science - but they're the corporate R&D departments that are seeking marketable innovations. There isn't much intellectual corruption or fudging of study results in, say, pushing the limits of video card performance.
Re: (Score:3)
Yup, this is a big problem with statistical arguments - you have to accept some confidence level.
If everybody does everything right, then we'd still expect about 5% of studies of nonexistent effects to come out "significant" due to random chance alone (that's what 95% confidence means).
However, the reality is that somebody gets a big set of data, and tests 100 different hypotheses against it. You'd expect 5 of those to be confirmed purely due to random chance, even if there is no real relationship. Five earth-shattering confirmed
Re: (Score:3)
The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis. The need to publish (a scientist's income relies on what he publishes in most cases) as well as funding issues force scientists to try to find some usable results from their science, and by trawling through their data they can often salvage what would otherwise have been a failed bit of research
Where does this happen? Clinical trials? In my specific field within basic cell biology research, I don't see this happening much. Experiments and research goals fail frequently of course, but when researchers think they find something in the ashes of those failed experiments, they seem to use that as a starting point for a new hypothesis and a new round of experiments. I can't recall ever seeing something like what you're talking about getting published.
Re:Hmmmmm (Score:4, Interesting)
nahh, the problem is a misunderstanding of statistics: thinking that post-hoc analysis, with its fishing for statistical significance, is as valid as proper hypothesis testing. The proper way is where the hypothesis is fully pre-formed and then tested. The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis.
The problem is that there are a lot of fields (e.g., astronomy, economics) where it is not possible to conduct proper lab experiments. That means you've got to just collect all the data that you can and try to work with it. The best way to do that is to partition the data and use part of it to search for candidate hypotheses, and the rest (possibly with additional partitions) to check those hypotheses, and yet it's never entirely certain that enough data is present in either set for correct conclusions to be drawn. It's challenging statistically (and part of why I prefer to write programs instead).
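For what it's worth, the partitioning idea is simple to sketch. Here's an illustrative Python version on pure-noise data (all names and numbers are made up): hunt for the best-looking candidate in one half, then test it exactly once on the held-out half.

    # Exploration/confirmation split on noise: the candidate that looks best
    # in the exploration half should (usually) fail in the confirmation half.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, k = 200, 30
    X = rng.normal(size=(n, k))     # k candidate variables, all pure noise
    y = rng.normal(size=n)          # outcome, also noise

    explore, confirm = slice(0, n // 2), slice(n // 2, n)

    # Step 1: generate a hypothesis using only the exploration half.
    best = min(range(k),
               key=lambda i: stats.pearsonr(X[explore, i], y[explore])[1])
    p_explore = stats.pearsonr(X[explore, best], y[explore])[1]

    # Step 2: one pre-specified test on the held-out half.
    p_confirm = stats.pearsonr(X[confirm, best], y[confirm])[1]
    print(f"candidate {best}: p={p_explore:.3f} (exploration), "
          f"p={p_confirm:.3f} (confirmation)")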
Re:Hmmmmm (Score:5, Informative)
>>so you cannot hunt for a statistical significance just somewhere in the data and then re-formulate your hypothesis
Cannot? Or should not?
I work as an external evaluator on federal projects, and have been told by one group I worked with, after I delivered a negative result on their data, that "we know that the stats can say anything - why don't you take another look at the stats and find something that makes us look better?" I refused, saying it would be dishonest to change the analysis. They fired me, saying "most evaluators make us look better than the data, but you're making us look worse."
The entire point of an external evaluator is to have a third party looking at your data, so as to prevent this kind of analysis fudging, but when I reported it to the federal case officer overseeing the grant, they just shrugged and didn't care. They don't want any drama to crop up in the grants they oversee. Makes them look bad to *their* bosses.
Re:Hmmmmm (Score:5, Insightful)
no, it's not that they do it on purpose at all. At least most of them.
It's that most scientists have only a very basic understanding of statistics (except statisticians, obviously) and don't understand the implications of those shortcuts. They genuinely believe that their results (after trawling through data to find some statistical link) are strong, and feel confident in presenting them, especially as they have nice and shiny statistics to back the results up.
The capitalistic and performance-based system is what pushes scientists into taking these shortcuts to begin with, thinking that it's no big problem ("dude, I can see a link here, and it's at .002!! YAY!").
So, two problems:
1. Not everybody is an expert at statistics (1a: numbers look impressive when you only half understand them...)
2. The pressure to put bread on their tables pushes scientists to try to find usable results somewhere in their research; otherwise they don't get funding for further research and may lose their jobs.
Re: (Score:3)
What is more, I am most definitely not an expert on statistics or research methods.
I did, however, have a very good and sensible professor in my basic statistical research methodology course who pointed very strongly at these pitfalls, and was diligent in pointing out the weaknesses of statistical analysis as well as the strengths.
Lucky me...
Re:Hmmmmm (Score:5, Informative)
No, scientists in many fields (and some of which you would expect the opposite) do not understand statistics well.
If you dig through your well-gathered data you will find correlations that are purely chance. Which is why you are supposed to be looking for the predetermined correlation, not just any correlation. But when you've spent a lot of time and effort gathering a set of data, digging into it to find other things seems like a reasonable plan - and as long as you do another, completely separate data-gathering study to check what you find, it is (but there's great pressure to publish something now, since you just spent a huge wad of cash and your performance is measured by what you publish, not by actual scientific progress).
Scientists do this. Traders at investment banks (and elsewhere) do this. People just do this.
"Fooled by Randomness" by Taleb is a good look into this from the trading perspective. Assuming you don't mind his writing style, "ego-centric and pompous" is a common description (though I don't find it so).
I'm pretty sure investment banking is dominated by "rightoids" which nullifies your ridiculous injection of politics into the universal human bias to see patterns in randomness.
Re:Hmmmmm (Score:5, Insightful)
The problem is actually the opposite. Note the E.S.P. experiment cited in the article. Rhine's initial experiment suggested to him that E.S.P. was real. Before publishing his results he did the right thing and reran the tests, and the results "proving" E.S.P. were not repeatable.
The next part is his absolute failure to understand the scientific method and statistics. He concluded that "extra-sensory perception ability has gone through a marked decline.” In fact what he experienced was Regression toward the mean [wikipedia.org].
Taking a well understood principle, renaming it with a term that suggests an action is taking place, then arguing that you have found some new phenomenon that proves science doesn't work is not critical information about anything.
It is ignorance that will be dismissed for obvious reasons. Too much time and energy is wasted repeatedly addressing these attacks on science by people who want so badly for their pseudo-science or supernatural beliefs to be true. In a perfect world, when somebody stumbles upon regression to the mean without knowing it, they would do additional research to understand what it is they are observing, rather than conclude that their initial experiment was correct and that the supernatural ability they detected was "declining," instead of accepting the alternative: it was never there in the first place.
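A card-guessing simulation shows how strong that illusion can be. This is only a sketch with illustrative numbers (25 guesses at 1-in-5 odds, roughly Zener-card style), not a reconstruction of Rhine's actual protocol:

    # Pick the "gifted" guessers from a chance-level task, retest them,
    # and watch the 'ability' decline toward chance. Nothing declined;
    # the selection simply picked out lucky streaks.
    import numpy as np

    rng = np.random.default_rng(3)
    n_subjects, n_trials, p_chance = 1000, 25, 0.2

    first_run = rng.binomial(n_trials, p_chance, n_subjects)
    gifted = first_run >= 10                    # well above the chance mean of 5
    second_run = rng.binomial(n_trials, p_chance, gifted.sum())

    print(f"'gifted' subjects: {gifted.sum()}")
    print(f"their first-run mean:  {first_run[gifted].mean():.1f} / {n_trials}")
    print(f"their second-run mean: {second_run.mean():.1f} / {n_trials}")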
Re: (Score:3)
What is the point of answering a question and then asking the question?
What is the point of asking rhetorical questions?
Re:Hmmmmm (Score:5, Insightful)
Yet another reason for the routine use of 95% frequentist statistics to be replaced with Bayesian methods.
Frequentist statistics is a reasonable approximation of Bayesian statistics when applied to large numbers. You may well be right above, but my impression is that these sorts of statistical mistakes don't come from inappropriate use of frequentist methods, but rather from conclusions based on poor evidence and a heaping helping of observer bias. Do enough studies and eventually you will see spurious artifacts. If your study is to duplicate someone's study, you may feel pressure (conscious or not) to duplicate the results as well.
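One Bayesian-flavored calculation worth keeping in mind is the one behind the Ioannidis paper mentioned in the summary: the probability that a "significant" finding is real depends on the prior odds and the power, not just on the 5% threshold. A minimal sketch, with purely illustrative numbers:

    # Post-study probability that a p < 0.05 result reflects a real effect,
    # assuming (illustratively) that 1 in 10 tested hypotheses is true
    # and that studies have 50% power.
    alpha, power, prior = 0.05, 0.5, 0.1

    true_positives  = prior * power
    false_positives = (1 - prior) * alpha
    ppv = true_positives / (true_positives + false_positives)
    print(f"P(effect is real | 'significant') = {ppv:.2f}")   # about 0.53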
Re:Hmmmmm (Score:5, Insightful)
The pharmaceutical industry is easily one of the most corrupt industries known to man. Perhaps some defense contractors are worse, but if so, then just barely. It's got just the right combination of billions of dollars at play, strong dependency on the part of many of its customers, a basis on intellectual property, financial leverage over most of the rest of the medical industry, and a strong disincentive against actually ever curing anything since it cannot make a profit from healthy people. Many of the tests and trials for new drugs are also funded by the very same companies trying to market those drugs.
Sure, but only after the person suggesting that all is not as it appears has been laughed at, ridiculed, cursed, and given the old standby of "I doubt you know more than the other thousands of real scientists, mmmkay?" for daring to question the holy sacred authority of the Scientific Establishment, for daring to suggest that it could ever steer us wrong, that it, too, is unworthy of 100% blind faith, or that it may have the same problems that plague other large institutions. The rest of us, who have been willing to entertain less mainstream, more "fringe" theories that are easy to demagogue by people who have never investigated them, already knew that the whole endeavor is pretty good but not nearly as good as it is made out to be by people who really want to believe in it.
Why there are few cures (Score:4, Interesting)
One tends to hear this sort of thing from people who don't know anything about the pharmaceutical industry, and of course this attitude is pushed very hard by people who are hawking quack cures of one sort or another, and who are thus competitors of the pharmaceutical industry.
I'm an academic pharmacologist, but I've met a lot of the people involved in industrial drug discovery, and trained more than a few of them. People tend to go into pharmacology because they are interested in curing disease and alleviating suffering. Many of them were motivated to enter the area by formative experiences with family members or other loved ones suffering from disease. They don't lose this motivation because they happen to become employed by a pharmaceutical company--indeed, many enter industry because it is there that they have the greatest opportunity to be directly involved in developing treatments that will actually cure people.
It is certainly true that pharmaceutical companies are businesses, and their decisions regarding how much to spend on treatments for different illnesses are strongly influenced by the potential profits. A potential treatment for a widespread chronic disease can certainly justify a larger investment than a one-time cure. But it can also be very profitable to be the only company with a cure for a serious disease. And it would be very bad to spend a lot of money developing a symptomatic treatment only to have somebody else find a cure. So a company passes up an opportunity for a cure at its peril. There is definitely a great deal of research going on in industry on potential cures.
The real reason why cures are rare is that curing disease is hard. Biology is complicated, and even where the cause is well understood, a cure can be hard to implement. For example, we understand in principle how many genetic diseases could be cured, but nobody in industry or academia knows how to reliably and safely edit the genes of a living person in practice. It is worth noting that the classic "folk" treatments for disease, including virtually all of the classic herbal treatments that have been found to actually be effective--aspirin, digitalis, ma huang, etc.--are not cures; they are symptomatic treatments. Antibiotics were a major breakthrough in curing bacterial diseases, but they were not created from scratch; they were made by co-opting biological antibacterial weapons that were the product of millions of years of evolution. Unfortunately, for many diseases we are not lucky enough to find that evolution has already done the hardest part of the research for us.
Re:Hmmmmm (Score:4)
Well, thank goodness that those thieving bastards discovered a drug that controls the seizures that 70 years ago would have seen me shunted to a sanitarium.
As it is, for US$100/month, I've got a well-paying job, wife, 2 kids and an "above water" mortgage.
You cannot have a viable industry at all if you never produce a useful product that people need or want. If they never produced useful drugs/treatments then there wouldn't be a pharmaceutical industry for us to talk about.
So, having met the bare-minimum requirement for having a viable industry at all, it is then possible to consider whether or not this industry is corrupt and whether it is more or less corrupt than other industries.
For example, I also mentioned defense contractors. I assume the firearms they produce will indeed shoot bullets, the bombs they produce will explode, and the fighter jets they produce will fly. None of that is relevant to a discussion about the corruption in that industry. Such a discussion could start with considerations like how the contracts for those working munitions are awarded.
I'm not sure why so many Slashdotters think that you cannot criticize a thing without totally demonizing it and denying any and all use it may have. It's classic either-or, black-and-white thinking that implies something is either perfect and utterly flawless or absolutely no good at all. In short, you responded to a claim I never made: that no one ever benefits from any drug produced by this industry. This is not useful and fails to recognize that there are deliberate reasons I never made that claim.
Re: (Score:3)
I try to do at least *some* due diligence before disagreeing with Received Slashdot Wisdom.
http://en.wikipedia.org/wiki/Carbamazepine#History [wikipedia.org]
Re:Hmmmmm (Score:5, Informative)
So let me guess, you are one of those 'The pharmaceutical industry is hiding all the cures from us so they can sell us drugs' nuts. Here's the deal. The bulk of basic research (towards finding cures) is done by university researchers throughout the country. The majority of it is funded by the NIH, one of the U.S.'s best contributions to the world. They spend ~$30 Billion a year on it. The big Pharma companies? Yes, they spend some money on basic research. But they spend the bulk of their research dollars on clinical trials. Putting a single drug through the trials process can cost $10-100 Million until they either find out the drug doesn't work, is too dangerous to use, or actually works and will be a viable product. So the pharmaceutical companies aren't hiding the cures, because they aren't the ones doing the bulk of the looking for them. The university researchers are. And as a university researcher working off NIH funds, let me tell you, if I find a cure you will find out about it. I'll get it published, be cited in the journals about a zillion times, likely get a Nobel prize, get pretty much guaranteed funding for the rest of my career because of my success record, and be invited to give talks at universities and organizations around the world (on their dime, often with a nice little check).
Scientists like to talk about their work. We have post-docs and Ph.D. students working in our labs who love to talk about our work. We have research techs who do a lot of the work and all know what's going on in the lab. All of these folks have friends and family, some of whom might be directly impacted if we find a cure for a disease. You think all those folks are going to keep quiet about it if we find something? What's the incentive? So take off the tin-foil hat, my friend. There are a lot of scientists out there working to find cures for diseases. You might not realize it, but it's kind of a tough thing to do.
Re:Hmmmmm (Score:5, Insightful)
My brother went through 10 months of chemotherapy. 10 months of being nauseous, 10 months of not wanting to eat, 10 months without a sense of smell, 10 months with no sense of taste, 10 months of physical weakness, 10 months of diminished mental capacity, 10 months of needles, 10 months of IVs full of chemicals that burned when they went in, 10 months of doctors prodding and poking. He's now cancer-free and has been for 12 years. Back when he went through that his odds of surviving Hodgkin's lymphoma were about 80%. Current treatment has reached 90%, and a recent experimental treatment is at 98%. They're all still unpleasant and take months. Do you honestly think that if I had the ability to jump in with a magic pill and spare my brother those 10 months I wouldn't do it because it might hurt the corporate bottom line? Fuck the bottom line. Fuck having a job if it came to it. That's the prevailing attitude in biotech and pharmaceutical companies because they're made up of people like me, people who have seen loved ones go through horrible illness, and not the monsters your fantasy requires.
Re: (Score:3)
Perhaps when you live in your own little bubble out of touch with objective reality, maybe then you can put words in someone else's mouth, respond to things they never claimed, and then pat yourself on the back for how clever you are when this oddly results in you being "
Already debunked (Score:5, Insightful)
Is it possible that there has always been error, but it is just more noticeable now given that reporting is more accurate?
Precisely. As mentioned in a Scientific American [scientificamerican.com] blog:
"The difficulties Lehrer describes do not signal a failing of the scientific method, but a triumph: our knowledge is so good that new discoveries are increasingly hard to make, indicating that scientists really are converging on some objective truth."
Re:Already debunked (Score:5, Insightful)
That's not to say that individual scientists don't sometimes dislike the outcome and ultimately attempt to ignore and/or discredit the counter-evidence, but in the long run this can never work since hard data cannot be hand-waved away forever.
Re: (Score:3)
Consider also that a result being significant at the 95% confidence level simply means that you would expect a result at least that extreme 5% of the time purely by chance, even if there were no real effect.
But I suspect the larger problem stems from the career aspect of modern science. Sometimes the failure of a promising experiment can set you back by months or even years, not to mention clouding the horizon for future funding. It's not surprising that some will do whatever is needed to present their results in a positive light, even if that crosses ethical lines
Re:Already debunked (Score:4, Insightful)
Consider also that most researchers run more than 20 regressions when testing their data. That means that the 95% significance level is grossly overstated.
The key is that the 95% level applies only if you don't data snoop beforehand and only if you run the regression once and only once. The true significance levels of many studies that claim a 95% level are likely to be closer to 50% when you consider all the pretesting and data snooping that goes on in reality - not the rather idealised setup that is reported in the journal publication.
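As a rough sanity check on that claim, assume the 20 regressions are roughly independent and each is run at the nominal 5% level:

    # Chance of at least one spurious 'significant' regression in 20 tries.
    n_tests, alpha = 20, 0.05
    family_wise = 1 - (1 - alpha) ** n_tests
    print(f"P(at least one false positive in {n_tests} tests) = {family_wise:.2f}")
    # ~0.64, i.e. the effective error rate is tens of percent, not 5%.
    # A Bonferroni-style correction would test each at alpha / n_tests instead.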
Re: (Score:3)
It's funny that you're making a connection between left-wing leanings and using science as a kind of religion substitute. Where I live (Northern Europe), it's more common for the right-wingers to believe in science and rationalism. The right tends to use rational, economic arguments to propose tax cuts, market de-regulations and decreasing the size of the government, while the socialist left tends to use arguments based on empathy and solidarity. Consequently, people with a belief in science and rationalism
It's simple. (Score:5, Interesting)
Even in academia, there's an establishment and people who are powerful within that establishment are rarely challenged. A new upstart in the field will be summarily ignored and dismissed for having the arrogance to challenge someone who's widely respected. Even if that respected figure is incorrect, many people will just go along to keep their careers moving forward.
LK
Re:It's simple. (Score:4, Informative)
Having worked in multiple academic establishments, I have never seen that. I have seen people argue their point, and respected figures get their way otherwise (offices, positions, work hours, vacation). But when it came to papers, no one was sitting around rejecting papers because they conflicted with a "respected figure." Oftentimes, staff would have disagreements that would sometimes end as an agreement to disagree because of lack of data. Is this your personal experience? Because while I don't disagree that this may occur in some places, I just haven't seen it. But I want to be sure you have, and are not just spreading an urban legend.
Re: (Score:3)
Because while I don't disagree that this may occur in some places, I just haven't seen it. But I want to be sure you have, and are not just spreading an urban legend.
That's a fair question. I have not experienced it first hand, but I have seen it as an outside observer.
LK
Re: (Score:3)
Re:It's simple. (Score:4, Interesting)
Oh, it happens. And if you're in the academic business, then I'm very surprised you've not noticed it.
Politics is very important in the business of accepting and rejecting papers. It's micro-politics, i.e. office politics. It's very important to get things accepted, but in order to do so, you have to be aware of the relevant political issues within the committee that will accept or reject your work. It's hard to write a paper that doesn't step on any toes, so you have to be sure you pick the right toes to step on.
When I was part of this business I was aware of a few long-standing feuds between academics; their research students and coworkers all took sides and rejected work from the other side. It was bizarre. It would have been funny if it had not been so pathetic. Even now I cannot watch an old Newman and Baddiel sketch [youtube.com] without being reminded of childish feuding professors from real life.
I don't think every sort of science is like this. Probably in physics and chemistry, you can get unpopular work published just by being overwhelmingly right. But in softer non-falsifiable sciences, it's mostly about politics, and saying the right things. There are a whole bunch of suspect sciences that I could list, but I know that some of them would earn me an instant troll mod (ah, politics again!), so I'll leave it at that.
Re: (Score:3)
The article is about selective reporting of results, publication bias, and "collective illusion nurtured by strong a-priori beliefs".
Doesn't that fit the blind acceptance of the CO2 hypothesis despite evidence to the contrary, exactly?
Re: (Score:3)
Clarification: It looks like some of you took my reference to geocentrists and hollow earth believers to be a deliberate jab at people who believe in God or don't believe Al Gore. I didn't mean it that way. I really do know hollow earth believers, and have had arguments with them as recently as two weeks ago, so it was kind of on my radar. Their view appears crazy to most people, but it looks rational in their world, based on their assumptions about the untrustworthiness of mainstream science. And they
Re: (Score:3)
It's true that global warming alarmism has more to do with politics and obtaining grant money than with science. I've said that elsewhere. If it's only the 'catastrophe science' that you have a problem with, then we're in complete agreement.
That's not where most of the ostensibly anti-'catastrophe science' people are coming from though. They now admit warming as a tactical matter, but for a long time persisted in denying it completely. They shift their arguments around in whatever way seems to best justify
News Flash: Scientists Human Too, Study Finds (Score:4, Insightful)
After years of speculation, a study has revealed that scientists are, in fact, human. The poor wages, long hours, and relative obscurity that most scientists dwell in have apparently caused widespread errors, making them almost pathetically human and just like every other working schmuck out there. Every major news organization south of the Mason-Dixon line in the United States and many religious organizations took this to mean that faith is better, as it is better suited to slavery, long hours, and no recognition than science, a relatively new kind of faith that has only recently received any recognition. In other news, the TSA banned popcorn from flights on fears that the strong smell could cause rioting from hungry and naked passengers who cannot be fed, go to the bathroom, or leave their seats for the duration of the flight for safety reasons....
Re:News Flash: Scientists Human Too, Study Finds (Score:5, Interesting)
After years of speculation, a study has revealed that scientists are, in fact, human. The poor wages, long hours, and relative obscurity that most scientists dwell in have apparently caused widespread errors, making them almost pathetically human and just like every other working schmuck out there...
I'll add another cause to the list. The "publish or perish" mentality encourages researchers to rush work to print often before they are sure of it themselves. The annual review and tenure process at most mid-level research universities rewards a long list of marginal publications much more than a single good publication.
Personally, I feel that many researchers publish far too many papers with each one being an epsilon improvement on the previous. I would rather they wait and produce one good well-written paper rather than a string of ten sequential papers. In fact, I find that the sequential approach yields nearly unreadable papers after the second or third one because they assume everything that is in the previous papers. Of course, I was guilty of that myself because if you wait to produce a single good paper, then you'll lose your job or get denied tenure or promotion. So, I'm just complaining without being able to offer a good solution.
race to the bottom (Score:4, Interesting)
Re: (Score:3)
In 250 years, the US has had two major wars on its territory. Both led to significant increases in liberty. By contrast, communism turned the 20th century into a worldwide bloodbath. The ideas pouring out of t
Re: (Score:3)
I hope you didn't read the article. If you did, your post is an example of the decline of reading comprehension in the United States. It's not an attack on science or the scientific method. If it's an attack on anything, it's how we publish scientific studies and how all too often studies are accepted as statistically significant when they are not. The article doesn't suggest that we abandon science but rather that we scrutinize it more and stop believing that the results of every study indicate the truth t
Quantity, not quality, is often prioritised. (Score:5, Insightful)
Re:Quantity, not quality, is often prioritised. (Score:4, Insightful)
Whether the conclusions of those are true or false is not something that hiring committees will delve into too much. If you are young and have a family to support, it can be tempting to take shortcuts.
Yes, the incentive to publish, publish, publish leads to all kinds of problems. But more importantly, the incentives for detailed peer-reviewing and repeating others' work just aren't there. Peer-reviewing in most cases is just a drag, and while it's somewhat important for your career, nobody's going to give you Tenure on the basis of your excellent journal reviews.
The incentives for repeating experiments are even worse. How often do top conferences/journals publish a result like "Researchers repeat non-controversial experiment, find exactly the same results"?
Re:Quantity, not quality, is often prioritised. (Score:5, Interesting)
Agreed. Way too many papers from academia are ZERO value added. Most are a response to "publish or perish" realities.
Cases in point: One of my less favorite profs published approximately 20 papers on a single project, mostly written by his grad students. Most are redundant papers taking the most recent few months data and producing fresh statistical numbers. He became department head, then dean of engineering.
As a design engineer I find it maddening that 95% of the journals in the areas I specialize in are:
1. Impossible to read (academia style writing and non-standard vocabulary).
2. Redundant. Substrate integrated waveguide papers for example are all rehashes of original waveguide work done in the 50's and 60's, but of generally lower value. Sadly the academics have botched a lot of it, and for example have "invented" "novel" waveguide to microstrip transitions that stink compared to well known techniques from 60's papers.
3. Useless. Most, once I decipher them, end up describing a widget that sucks at the intended purpose. New and "novel" filters should actually filter, and be in some way as good or better than the current state of the art, or should not be bothered to be published.
4. Incomplete. Many interesting papers report on results, but don't describe the techniques and methods used. So while I can see that University of Dillweed has something of interest, I can't actually utilize it.
So as a result, when I try to use the vast number of published papers and journals in my field, and in niches of my field in which I am darn near an expert, I cannot separate the wheat from the chaff. Searches yield time-wasting useless results, many of which require laborious deciphering before I can figure out that they are stupid or incomplete. Maybe only 10% of the time does a day-long literature search yield something of utility. Ugh.
Re: (Score:3)
Re: (Score:3)
Re:Quantity, not quality, is often prioritised. (Score:4, Insightful)
When doing research in one particular area I did find that, even in an area with very few people working in it, many of them would have a large number of citations for papers by:
1: themselves
2: their former/current students
3: their former/current teachers
but at the same time that isn't very surprising.
Citing yourself can make sense when expanding on an earlier piece of research, and you're vastly more familiar with the work being done by people you know well, like your students or teachers.
Taken apart by a scientist (Score:5, Informative)
This article has already been taken apart by P.Z. Myers in a blog post [scienceblogs.com] on Pharyngula. Here's his conclusion:
Basically, it's not like anyone's surprised at this.
Re: (Score:3)
Re: (Score:3)
That is just so bogus. One of the problems scientists have when communicating with the general public is that it is hard to get a straight answer from them with absolute certainty. They talk in terms of probability and uncertainty. The reason they do this is because within the scientific world, if your conclusions go beyond what your data show, they will not pass peer review.
Often conclusions end by highlighting what is still not known and where future research needs to be done. This idea that th
Re: (Score:3)
too many [scientists] demand the presumption of infallibility with the arrogance of a medieval pope.
"Too many" is just weasel words. Who are you talking about? There are surely some arrogant scientists out there - just as there are arrogant sales clerks, IT technicians, cooks, and graphic designers. But what counts as "too many?" More than half? Can you name any prominent scientists or science advocates who demand that we presume their infallibility? Your comment would be easier to agree with if you did.
"Provisional is certainly not a word given any emphasis in any IPCC report.
If you're expecting that word to appear in every scientific paper you've missed the point. PZ is saying
Re: (Score:3)
Interesting reply to excellent article (Score:5, Insightful)
The New Yorker article is well written and informative. It's clearly not assuming that there is something wrong with the scientific method, but just asks: could there be? There is an excellent reply by George Musser at "Scientific American" http://cot.ag/hWqKo2 [cot.ag]
This is what I call interesting and engaging public discussion and journalism.
Which results should we believe? (Score:5, Insightful)
What a ridiculous question. How about the results that are replicated, accurately, time and time again, rather than the ones that aren't, or that rest on failed attempts at scientific theory?
Bogus article (Score:5, Interesting)
That article is as flawed as the supposed errors it reports on. The author just "discovered" that biases exist in human cognition? The "effect" he describes is quite well understood, and is the very reason behind the controls in place in science. This is why we don't, in science, just accept the first study published, why scientific consensus is slow to emerge. Scientists understand that. It's journalists who jump on the first study describing a certain effect, and who lack the honesty to review it in the light of further evidence, not scientists.
Publish or Perish (Score:4, Insightful)
This is the natural outcome of 'publish or perish.' If keeping your job depends almost solely on getting 'results' published, you will find those results.
Discovery is more prestigious than replication. I don't see how to fix that.
Not Science (Score:3)
Before you can question the scientific method through experimentation you first must understand and utilize the scientific process. That last quote is a massive clue that the issue is that they are stepping away from the scientific process and trying to force an answer.
I'll go read the article, but before I do I'll just note that in my work in semiconductor manufacturing and development, both the scientific process and statistical significance are at the core of resolving problems, maintaining repeatable manufacturing, and developing new processes and products. And in my 20 years of experience the scientific process worked just fine: when results were not reproducible you had more work to do, but you didn't decide that science no longer worked and that the answer had simply changed.
I can guarantee that if we throw away the scientific process and no longer rely on peer review and replication, then all those fun little gadgets everyone enjoys these days will become a thing of the past and we'll enter into the second dark age.
logical contortions in the article (Score:5, Interesting)
The article can be viewed on a single page here: http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all [newyorker.com]
Not surprisingly, most of the posts so far show no signs of having actually RTFA.
Lehrer goes through all kinds of logical contortions to try to explain something that is fundamentally pretty simple: it's publication bias plus regression to the mean. He dismisses publication bias and regression to the mean as being unable to explain cases where the level of statistical significance was extremely high. Let's take the example of a published experiment where the level of statistical significance is so high that the result only had one chance in a million of occurring due to chance. One in a million is 4.9 sigma. There are two problems that you will see in virtually all experiments: (1) people always underestimate their random errors, and (2) people always miss sources of systematic error.
It's *extremely* common for people to underestimate their random errors by a factor of 2. That means that the 4.9-sigma result is really only a 2.45-sigma result. But 2.45-sigma results happen about 1.4% of the time. That means that if 71 people do experiments, typically one of them will hit a 2.45-sigma confidence level by chance. That person then underestimates his random errors by a factor of 2, and publishes it as a result that could only have happened one time in a million by pure chance.
Missing a systematic error does pretty much the same thing.
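The numbers above check out if you treat them as two-sided tail probabilities; here's the arithmetic with scipy (just a verification of the figures quoted, nothing more):

    # Convert sigma levels to two-sided p-values.
    from scipy import stats

    for sigma in (4.9, 2.45):
        p = 2 * stats.norm.sf(sigma)        # sf(x) = 1 - cdf(x), the upper tail
        print(f"{sigma}-sigma: p = {p:.2g}  (about 1 in {1 / p:,.0f})")
    # 4.9 sigma  -> p ~ 1e-6  (one in a million)
    # 2.45 sigma -> p ~ 0.014 (roughly one in 70)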
Lehrer cites an example of an ESP experiment by Rhine in which a certain subject did far better than chance at first, and later didn't do as well. Possibly this is just underestimation of errors, publication bias, and regression to the mean. There is also good evidence that a lot of Rhine's published work on ESP was tainted by his assistants' cheating: http://en.wikipedia.org/wiki/Joseph_Banks_Rhine#Criticism [wikipedia.org]
Maybe it is not science (Score:5, Interesting)
Now science uses different math, and the results are expressed differently, even probabilistically. But in real science those probabilities are not what most people think of as probability. A scanning tunneling microscope, for instance, works by the probability that a particle can jump an air gap. Though this is probabilistic, it is well understood, and so it allows us to map atoms. There is minimal uncertainty in the outcome of the experiment.
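The reason that probabilistic process gives such a sharp, repeatable signal is the exponential dependence of the tunneling current on the gap width. A back-of-the-envelope sketch, assuming a typical metal work function of about 4.5 eV (illustrative numbers, not any particular instrument):

    # Tunneling current scales roughly as exp(-2*kappa*d), so a sub-angstrom
    # change in tip height changes the current by a large, easily measured factor.
    import math

    phi_eV = 4.5                                   # assumed work function
    hbar, m_e, eV = 1.055e-34, 9.11e-31, 1.602e-19
    kappa = math.sqrt(2 * m_e * phi_eV * eV) / hbar    # decay constant (1/m)

    for d_angstrom in (1.0, 2.0, 3.0):
        ratio = math.exp(-2 * kappa * d_angstrom * 1e-10)
        print(f"gap {d_angstrom:.0f} A: current suppressed by a factor of {ratio:.1e}")
    # Roughly an order of magnitude per angstrom: plenty of contrast to map atoms
    # even though each individual electron's crossing is random.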
The research talked about in the article may or may not be science. First, anything having to do with human systems is going to be based on statistics. We cannot isolate human systems in a lab. The statistics used are very hard. From discussions with people in the field, I believe it is every bit as hard as the math used for quantum mechanics. The difference is that much of the math is codified in computer applications, and researchers do not necessarily understand everything the computer is doing. In effect, everyone is using the same model to build results, but may not know if the model is valid. It is like using a constant-acceleration model for a case where there is a jerk. The results will be not quite right. However, if everyone uses the faulty model, the results will be reproducible.
Second, the article talks about the drug dealers. The drug dealers are like the Catholic Church of Galileo's time. The purpose is not to do science, but to keep power and sell product. Science serves as a process to develop product and minimize legal liability, not to explore the nature of the universe. As such, calling what any pharmaceutical company does the 'scientific method' is at best misguided.
The scientific method works. The scientific method may not be completely applicable to fields of study that try to find things that often, but not always, work in a particular way. The scientific method is also not resistant to group illusion. This was the basis of 'The Structure of Scientific Revolutions'. The issue here, if there is one, is the lack of education about the scientific method, which tends to make people give individual results more credence than is rational, or to treat science as some sort of magic.
Here is a breakdown (Score:3)
http://www.youtube.com/watch?v=qNxfPAF1frM [youtube.com]
Research has always had problems (Score:3)
First, science has always had a political aspect. Publication reviewers are always biased by conventional wisdom among their scientific peers, and they will become critical of any submitted paper that strays from that view. A lot of careers are based on following the conventional wisdom, and threats to those careers are met with political responses.
Second, the quest for statistical significance is based on a serious misunderstanding of statistics among scientists. It has been so for decades. Publication editors are thoroughly ignorant of statistics if they demand statistical significance at the 95% or 99% confidence levels as a condition of acceptance.
Results that are statistically significant may or may not be clinically significant. Both factors must be considered.
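A sketch of that distinction, with invented numbers: given a large enough sample, a clinically trivial effect sails past p < 0.05.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical trial with invented numbers: the drug lowers blood pressure
# by 0.5 mmHg on average; detectable with 20,000 patients per arm, but clinically meaningless.
control = rng.normal(loc=140.0, scale=15.0, size=20_000)
treated = rng.normal(loc=139.5, scale=15.0, size=20_000)

t_stat, p_value = ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():.2f} mmHg")
print(f"p-value: {p_value:.4g}")   # typically well below 0.05
```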
Significance levels are based on one model of statistical inference. There are other models, although those have been subjected to politics within the mathematical/statistical community. Although Bayesian statistics are now accepted (and form a critical basis in theories of signal processing, radar, and other technologies) they were rejected by the statistical community for many years. The rejection was almost completely political, because the concepts challenged the conventional wisdom.
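For the curious, here is a minimal sketch of the Bayesian flavor of inference: instead of a p-value you get a posterior distribution over the quantity of interest. The coin-bias numbers are made up for illustration.

```python
from scipy.stats import beta

# Observed data: 62 heads in 100 flips of a possibly biased coin
heads, flips = 62, 100

# Uniform Beta(1, 1) prior updated by the binomial likelihood (conjugate pair)
posterior = beta(1 + heads, 1 + (flips - heads))

print(f"posterior mean bias: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
print(f"P(bias > 0.5 | data): {posterior.sf(0.5):.3f}")
```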
The basic scientific method is not a problem. The major problem is the factors in publication acceptance and the related biases and pressures to adhere to the conventional wisdom. Rejection of papers based on politics or on ignorance of statistical methods is outside the scientific method and needs to be rooted out.
Its the Blue People (Score:3)
Anyone familiar with the concept of "reality on demand" knows that it is constructed on a need-by-need basis by the Blue People. Now the Blue People are not 100% reliable. Sometimes they forget to put back key pieces of reality. This is the source of the problem of failure to reproduce results. The reproduction is being attempted in a reality which is simply too different.
Two possible solutions come to mind. Only conduct experiments where one never has to leave the room. Or maybe find a good lawyer who can negotiate a higher quality level contract with the Blue People.
Re: (Score:3)
I was actually about to feed the troll. I was 2 sentences in before going "oh... right."
Re: (Score:3)
Or, to put it more charitably, medicine and psychology are describing far more complex phenomena than we like to admit.
For example, in psychiatric genetics, there are dozens of articles every year that find a new gene associated with a common and important condition (e.g. autism, schizophrenia, depression). After each new finding comes out, there are dozens of labs that try to replicate that finding, usually one or two replicate (or partially replicate) the finding, and five or six don't replicate it.
Re: (Score:3)
This is a bit of a fallacy. Bush increased stem cell research funding, fuel cell research funding, etc. He was in office for 8 years, and I believe 2001 was the first time he cut science spending. That was part of a larger goal to cut spending across the board.
How did he respond in 2002? He asked Congress to DOUBLE science spending.
http://www.scienceprogress.org/2008/01/bush-asks-congress-to-double-science-spending/ [scienceprogress.org]
My wife showed me a great graph during the last election that tracked science spending from a
Are you delusional? (Score:3)
or just afraid of your wife?
Yes, in 2008, when public pressure was on and a new election was coming, he gave lip service to getting more money, at a time when he would have no incentive to actually back up his statement.
Seriously, learn freaking politics before mashing out nonsense.
Obama has NOT CUT THE NASA BUDGET. He increased it by 6 billion dollars.
In 2006, NASA had its budget butchered, with 3 billion cut.
The Republicans have been slowly cutting science since Reagan. The more the religious right in
Re:Yes it does. (Score:5, Insightful)
I agree. Though it's not lying in the Clintonesque sense of lying that most people use; it's more lying by omission, distorting the meaning of the results by not putting them in their complete context. At least that's how it is with the papers I've read and known enough about to have an educated opinion on. Although the misrepresentation is usually at least partially intentional, I don't think it's all intentional.
Re:Yes it does. (Score:5, Insightful)
Are you serious? Many thousands of people are dead simply because a few people were trying to stay gainfully employed to support their families?
I am truly sorry if this comes off as offensive as I think it does, but if you believe there would have been mass suffering from unemployment had we not bombed the shit out of Iraq, and that this was the basis for the lies that resulted in many thousands losing their lives, then you are seriously deluded.
As a U.S. citizen I found Clinton's actions and lies embarrassing, but the lies from Bush transferred billions, if not trillions, of public funds into the hands of a few and resulted in the deaths of many thousands of people.
Comparing lies about a blow job to lies resulting in debt and death is absurdity on a grand scale.
Read the Fucking Article, Douchebag (Score:4, Informative)
If you had bothered to read the fucking article instead of jumping to some half assed conclusion you would see that the article has nothing to do with lying.
It's not "the oil companies have paid scientists to lie about science"
It's "I'm fascinated that trends I detected early in my research seem to fall apart as I continue to investigate"
Anyway.. thanks for lowering the level of discussion on /. even further, douche.
Not that simple. (Score:5, Informative)
That's not a given. Particularly in the soft sciences - psychology, for instance - it is extremely difficult to control for all factors (I'm more inclined to say nearly impossible) and so replication of results can be subsumed by other effects, or even simply not work at all. You know that whole generation gap thing? That's a good example of groups of people who are different enough that the reactions they will have to certain subject matter can be polar opposites. So something that was "definitively determined" in 1960 may be statistically irrelevant among the current generation.
That's just one example of how squishy this all is. Without having to bring lying into it at all. And then, there will be liars; and there will be people who draw conclusions without scientific rigor at all, simply because it's just too difficult, expensive or time-consuming to attempt to confirm the ideas at hand. And there is the outlier personality; the one who accounts for those other few percent -- all the declarations of "this is how it is" are false for them right out of the gate.
Hard sciences simply lend themselves a lot better to repeatability. Where I think we go wrong is assigning the same certainties to the claims of the soft scientists. I have personally seen psychiatrists, best intent not in doubt, completely err in characterizing a situation to the great detriment of the people involved, because the court took the psychiatrist's word as gospel truth.
All science is an exercise in metaphor, but soft science is an exercise of metaphor that is almost always far too flexible. One place you can see this happening is the trendy / cyclic adherence to Freud, Jung, Maslow, Rogers and so forth... the "correct" way to raise babies... Ferberizing, etc. This stuff isn't generally lies at all, but it also generally isn't "right." Good intentions do not automatically make good science.
Serious medicine is another good example. Something that might work very well for you might not work at all for me; get the wrong group of test subjects, and your results will skew or worse. This is an area that I think is fair to call a hard science, but where we just don't know enough about the systems involved. Generally speaking, I don't think our oncologist lies to us; further, I think he's pretty well aware of the limitations of his practice and the state of knowledge that informs it; but they just don't know enough. To which I hopefully add, "yet."
On a personal level - since that's all I can really affect - I treat soft science about the same way I do astrology. If you believe it, you'll probably attempt to modify your behavior because of the predictions, which in turn may, or may not, affect your actual outcome. If you don't, it's either irrelevant or too uncertain to trust anyway. So it's low confidence all the way.
I do, however, still place very high confidence in Boyle's law [wikipedia.org] for gasses. Hard science works very well. :)
Re:Not that simple. (Score:5, Insightful)
It looks like a lot of the studies that suffer from this effect are concerned with people and their behavior. Personally I don't think it's a matter of whether the science is hard or soft, but just that the domain has some issues that are not so important in other fields, e.g. the structure of a galaxy or the behavior of a gas with respect to pressure.
The main problem is that when you're looking at anything that has something to do with humans then the tool with which you carry out the investigation is in part the very thing you are investigating [the mind.] This increases the potential for bias no end, and in the opinion of some, renders the whole exercise a completely futile and confounded endeavor. But I would tend to believe that this problem is the exact reason why one should study the mind, exactly because it is the lens through which we view the universe.
In many respects it's a flawed tool for research. Not only filters but active perceptual mechanisms are at work, and function in such a way as to ensure that people seem to create a large part of the reality that they live in. This shouldn't stop scientists from investigating imho, but means that in looking at an area such as the mind, humility is indeed appropriate.
Soft science, as you call it, should not be conflated with astrology - like many other practices, astrology is closer to a very ancient and wonderful art - that of separating people from their money - than it is to scientific investigation. But then perhaps I would say that, being a Virgo.
Re: (Score:3)
[rises to feet] Extremely well said, sir, though I cannot agree, having been born under a different sign. [zazzle.com] Still, may I offer you a beer, or perhaps another affordable drink of your choice?*
* Offer redeemable only in rural Mont
Re: (Score:3)
"Hard sciences simply lend themselves a lot better to repeatability. Where I think we go wrong is assigning the same certainties to the claims of the soft scientists."
This may be true in theory but not in practice. Getting really good data to support or refute a hypothesis is difficult, time consuming and expensive. And generally there is never enough of any of those. Positive, regular and certain results are expected by journals and the world at large. So most data sucks to some degree. And thus so do
Re:Not that simple. (Score:5, Interesting)
Hard sciences simply lend themselves a lot better to repeatability. Where I think we go wrong is assigning the same certainties to the claims of the soft scientists.
Granted that hard sciences are probably more reliable, but unfortunately, a lot of the research even there is shaky. I overheard roughly the following conversation between a graduate student in mathematics and his thesis adviser one summer, while I was doing undergraduate summer math research at the CUNY Graduate Center on an NSF grant (RTG):
Even if high-profile results are more reliable in the hard sciences, your average paper is still unreproducible garbage. The problem is the system, which forces everyone to publish as much as possible without heed to quality; and the journals, which publish only positive results. Researchers need to publish all their results publicly, including registering their hypotheses before they even begin the study. Universities need to take a stand by not focusing on quantity of publications. More emphasis must be placed on repeatability.
The people who treat this kind of finding as an attack on science are perpetuating the problem. We should be looking to make the scientific process ever better and more accurate as we come to understand its pitfalls better, not shrug off its inadequacies as inevitable.
Re:Yes it does. (Score:5, Insightful)
It's only lying if you do it intentionally. If ten labs independently and without knowing of each other perform essentially the same experiment, and one of them has a statistically significant result, is that lying? The other nine won't get published because, unfortunately, people only rarely (and for large or controversial experiments [nytimes.com]) publish negative results, but the one anomalous study will.
The vast majority of science is performed with all the good will in the world, but it's simply impossible for scientists to not be human. That's why we do replicate experiments - hell, my wife just published a paper where she tried to replicate someone else's results and got entirely different ones, and analyzed why the first guy got it wrong.
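The ten-labs scenario above is easy to simulate; a rough sketch, assuming no real effect at all and the usual p < 0.05 cutoff:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
N_LABS, N_PER_GROUP, ALPHA = 10, 30, 0.05

significant = 0
for _ in range(N_LABS):
    # Each lab compares treatment vs. control when the true effect is zero
    control = rng.normal(0, 1, N_PER_GROUP)
    treatment = rng.normal(0, 1, N_PER_GROUP)   # same distribution: no real effect
    _, p = ttest_ind(treatment, control)
    if p < ALPHA:
        significant += 1   # this is the lab whose "finding" gets written up

print(f"{significant} of {N_LABS} labs found a 'significant' effect of a nonexistent treatment")
# Usually 0 or 1 per run; across many such clusters of labs, the flukes are what get published.
```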
Re:Yes it does. (Score:4, Funny)
It's only lying if you do it intentionally.
Or, as George Costanza says, "It's not a lie if YOU believe it".
Re: (Score:3)
Here's an example of what you describe:
For decades, researchers would study the effect of breast implants on breast cancer. The hypothesis was that the silicone or plastic increases the risk (there were many theories as to why). At the end of the study they wouldn't get results consistent with the hypothesis. Each assumed they had screwed up. Very few of them published, and the research was effectively buried over and over.
Then one researcher, having also failed to prove her hypothesis, decided to dig deeper. She dug up all
Huh? (Score:5, Insightful)
Did you even read the article?
This is basically about poorly designed clinical drug trials without sufficient controls. Sloppy work, even if it seemed rigorous enough at the time.
The sensationalistic "scientific method in question" stuff is pure BS, but after all this is The New Yorker we're talking about, so one wouldn't expect too much scientific literacy. It was the scientific method of "predict and test" that caught these erroneous results, so the method itself is fine. The "scientist" who designed a sloppy experiment is to blame, not the method.
However, I'm not sure that psychiatric drug trials even deserve to be called science in the first place. The principle of GIGO (Garbage In - Garbage Out) applies. This is touchy-feely soft science at best. How do you feel today on a scale of 1-10? Do the green pills make you happy?
Re: (Score:3)
Money is the root of all evil?
Drug trials are biased by money. And so is the media's reporting. Drama sells better than facts.
Did you like the movie Apollo 13? Great movie, but they felt it necessary to distort a lot of the facts to heighten the drama. It was fun seeing all those engineers and scientists heroically scramble to solve problems in the nick of time. Fun to see all those dull, staid rules and procedures thrown out, see the bureaucracy run over. And fun to watch them forced to try jury
Re:Yes it does. (Score:4, Interesting)
For example, anything to do with the general health of a person can only really be measured over long time scales (decades); the same goes for measurements of the climate and things like that.
In those cases, 'reproduction' means taking the same data, sifting it in possibly the same way (but maybe not), and getting the same or similar result.
Now take this fact in the context of data dredging.
Data dredging does not have to be intentional (i.e., done with an intent to defraud), although it certainly can be.
If you take 1000 scientists and give them all the same data, they will probably look at that data in several thousand ways. If you are dealing with 95% intervals, and the data is looked at in 2000 ways, then about 100 of those ways will present something 'significant' by simple random chance.
The same phenomenon exists in that whole bullshit "Equidistant Letter Spacing" Bible-Code crap, but is much easier to dismiss because you have to believe something extremely unlikely (that God exists, and orchestrated the translation of the Bible into English so that it would have hidden codes.)
When you really get into dismissing the Bible Code in a mathematical manner, you end up realizing that in any data set there exist many things that are statistically significant and yet also complete bullshit.
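The 95%-intervals-looked-at-2000-ways arithmetic is easy to check with a quick simulation (a sketch; it treats every "way of looking" as an independent test, which, as a later comment points out, overstates the real-world case):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# One dataset of pure noise, "looked at" 2000 different ways
n_subjects, n_ways = 100, 2000
outcome = rng.normal(size=n_subjects)

false_positives = 0
for _ in range(n_ways):
    irrelevant = rng.normal(size=n_subjects)   # has nothing to do with the outcome
    _, p = pearsonr(irrelevant, outcome)
    if p < 0.05:
        false_positives += 1

print(f"'significant' findings in pure noise: {false_positives} / {n_ways}")
# Typically lands near 100, i.e. about 5% of 2000 comparisons.
```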
Re: (Score:3)
In these cases you break the data up into randomly selected smaller sets. This allows you to test a hypothesis developed from looking at one set against other sets to see if there really is a pattern.
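A rough sketch of that split-and-confirm idea (all variable names and numbers here are hypothetical): dredge freely on one random half of the data, then test only the surviving hypothesis on the held-out half.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical dataset: 40 candidate predictors, one outcome, 400 subjects, all pure noise
X = rng.normal(size=(400, 40))
y = rng.normal(size=400)

# Randomly split into an exploration half and a confirmation half
idx = rng.permutation(400)
explore, confirm = idx[:200], idx[200:]

# Look at the data every which way on the exploration half...
p_explore = [pearsonr(X[explore, j], y[explore])[1] for j in range(40)]
best = int(np.argmin(p_explore))

# ...then the "discovery" must survive a single pre-specified test on fresh data
_, p_confirm = pearsonr(X[confirm, best], y[confirm])
print(f"best predictor on exploration half: #{best}, p = {p_explore[best]:.3f}")
print(f"same predictor on confirmation half: p = {p_confirm:.3f}")   # usually nothing there
```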
Data snooping (Score:4, Interesting)
Not really. This would only be true if all of those 2000 ways were statistically independent of one another. It would take a much larger dataset than most scientists deal with for there to be 2000 different ways of analyzing it, and even then they would not be statistically independent.
So the problem is not as bad as you suggest, but it is real. If I compare 20 different statistically independent measurements, one is expected to meet the p < 0.05 criterion by pure random chance. There are ways of correcting for this bias, by requiring a higher criterion of statistical significance (say p < 0.0025), but that also reduces the power of my study to detect a real difference.
Which is appropriate really depends upon the nature of the experiment and the question being asked. If I do 20 measurements and half of them are statistically significant, I may not much care if one of them is by chance.
If I want to minimize the likelihood of reporting an incorrect result, while maximizing the power of my study, my best bet is to decide in advance on a very few measurements and statistical tests, and stick with them. That's good for me, but it doesn't really help the reader who is looking at a bunch of different studies, because each individual test at the p < 0.05 threshold still has one chance in 20 of flagging a difference that isn't there. Added to that is an unknown magnitude of publication bias, because studies with significant findings are more likely to be published than those that find nothing of statistical significance.
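For what it's worth, the correction described above (requiring p < alpha/m for m comparisons) is the standard Bonferroni adjustment; a minimal sketch with made-up p-values:

```python
# Bonferroni correction: with m comparisons, require p < alpha / m
alpha, m = 0.05, 20
p_values = [0.001, 0.004, 0.012, 0.03, 0.04]   # five hypothetical findings out of 20 tests

threshold = alpha / m                          # 0.05 / 20 = 0.0025
for p in p_values:
    verdict = "survives" if p < threshold else "does not survive"
    print(f"p = {p:<6} {verdict} correction (threshold {threshold})")
```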
Re: (Score:3)
When a scientist has some hypothesis he wants to test and goes and looks into the data, that's great. The problem is that there are far fewer reasonable hypotheses than there are ways to examine the data...
Re: (Score:3)
Yikes, I don't know what industry your phd biochemists are going into, but if they go work for a drug company they can make a solid 200K+.
Links on problems with peer review (Score:3)
Also: http://www.google.com/#q=peer+review+as+censorship [google.com]
http://www.counterpunch.org/mazur02262010.html [counterpunch.org]
http://www.suppressedscience.net/censorship-medicine.html [suppressedscience.net]
A key point being that keeping information from the public is not the same as modding up (or revising interactively) information like on slashdot. What would slashdot be like if every comment needed "peer review" before it was posted? Instead, slashdot uses after the fact moderation. (Nothing is perfect, of course.)
In general:
http://www.suppressedsci [suppressedscience.net]