Study: More Than Half of Psychological Results Can't Be Reproduced
Bruce66423 writes: A new study trying to replicate results reported in allegedly high-quality journals failed to do so in over 50% of cases. Those of us from a hard science background always had our doubts about this sort of stuff; it's interesting to see it demonstrated, or rather, as the man says: 'Psychology has nothing to be proud of when it comes to replication,' said Charles Gallistel, president of the Association for Psychological Science. Back in June, a crowd-sourced effort to replicate 100 psychology studies had a 39% success rate.
Comparison? (Score:3)
Does anybody know how this compares to the hard sciences? How many published math papers turn out to be incorrect? How many physics experiments cannot be reproduced?
Re:Comparison? (Score:5, Informative)
In spite of the gut feeling of the submitter, it's not much better in at least computer science: http://reproducibility.cs.ariz... [arizona.edu]
Re: (Score:3)
Re: (Score:3)
Or as low as 10% in published studies. The stories, btw, seem to have gotten significantly worse over the last few weeks. What's going on?
Probably a paid agenda by now. Dice has been trying to figure out how to monetize Slashdot... how else?
Re: (Score:3)
73% of all statistics are made up.
And the other half are poorly understood.
Re: (Score:2)
Hell, 14% of all people know that.
Re:Comparison? (Score:4, Informative)
In spite of the gut feeling of the submitter, it's not much better in at least computer science: http://reproducibility.cs.ariz... [arizona.edu]
And to clarify: they only checked for what they call "weak repeatability": was it possible to get the code from the original researchers and, if yes, was it possible to build it (or did the author at least explain how he himself managed to build it)? They did not even investigate whether they could replicate the reported results with the code that did build.
Re:Comparison? (Score:5, Insightful)
That sounds like a pretty weak test, but not a bad one. To my mind, this crosses one of my areas of expertise, since I have had this job as a professional sysadmin. I worked in a shop where, for better or worse, we decided that all free software we used on Solaris would be compiled from source.
This quickly became a huge mess as updates would sometimes bring changes, and there was always the question "who built it last time and what options did they choose?", so we quickly found a need to fix that, and I started scripting. (It's where my competence with shell really began.)
Once you have solved even the easy part, you still have to think about versions and dependencies.
In fact, later on we got involved in research computing. That wasn't my project, but one of the topics that came up was this: researchers will build this software, just like we are talking about, and use the data... and now someone wants to audit it down the road.
What happens if the libraries have changed and the old code doesn't compile? What if there is an error in a calculation that was introduced by a particular library version being used?
The reality is, you write the code, but it gets run in an environment. That entire environment has the potential to have an effect, so a full specification needs to capture at least some of that as well.
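As a rough illustration of what capturing that environment can look like in practice, here is a minimal Python sketch (the snapshot fields and file name are my own invention for illustration, not from the shop described above) that records the interpreter, OS, and installed package versions alongside a run's results:

    import json
    import platform
    import sys
    from importlib import metadata

    def environment_snapshot():
        """Record the interpreter, OS, and installed package versions so
        results can later be matched against the environment that produced them."""
        return {
            "python": sys.version,
            "platform": platform.platform(),
            "packages": {
                dist.metadata["Name"]: dist.version
                for dist in metadata.distributions()
            },
        }

    if __name__ == "__main__":
        # Write the snapshot next to the results so an auditor years later can
        # see exactly which library versions were in play.
        with open("environment_snapshot.json", "w") as fh:
            json.dump(environment_snapshot(), fh, indent=2, sort_keys=True)

Pairing a snapshot like that with the published results at least tells a later auditor which library versions to chase when the numbers don't match.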
Re: (Score:2)
What happens if the libraries have changed and the old code doesn't compile?
Then you read the f***ing error, and fix the f***ing code. This isn't like building IKEA furniture. A certain level of competency is required to compile software.
What if there is an error in a calculation that was introduced by a particular library version being used?
Let's ignore for a minute the fact that you're asking why changing the parameters of the experiment yields a different result, and address the obvious fact that an error is an error. If the results of an experiment are due to an error, then the conclusion of the experiment is invalid.
Re: (Score:2)
Then you read the f***ing error, and fix the f***ing code.
You completely missed his point.
The question was, "What if the original code can no longer be used?" You can modify the code and run a different experiment, but you are not reproducing the original.
Re: (Score:2)
Also there are two different questions:
First: Did or could this code have produced this data?
Second: Is it in error?
You can't fully answer the first without fully specifying the environment. It's not just that you can't prove it wrong; you can't defend yourself either. "Oh, you used a buggy version of this library" is an entirely different accusation from "Oh, you fabricated these results and didn't publish the real code".
Both are potentially valid, but both are very very different in implication. Also, what if there is a bug
Re: (Score:2)
Replication and reproducibility are not the same. Simply getting the source code and re-running the results is just replicating the study. It doesn't tell you anything about how reproducible the results are.
To be reproducible, someone should be able to use similar methods and get the same results. If a result is completely dependent on a specific build of the software, it's not robust enough to be considered reproducible.
Publications should require a concise written description of the method and solution th
Re: (Score:2)
I'm dismayed that in CS the academic community is putting so much emphasis on replication and not enough on robust reproducibility.
From my experience, the academic community puts no emphasis on either. I think it would be neat to study a paper and attempt to reproduce the results, but that doesn't get me a journal paper in today's academic landscape.
Re: (Score:2)
The code is not the experiment, it's the tool that was used in the experiment; see the difference? You don't need the exact model of beaker produced in the same year by the same manufacturer to prove a chemistry experiment true or false; similarly, you don't need a line-by-line copy of the original code to test the theory.
Re: (Score:2)
So, how does one go about proving that the ORIGINAL results were the result of an error, as opposed to the later results meant to verify the original results?
C'mon, it's just as likely that an error was introduced in newer code than that one was fixed....
Re: (Score:2)
Oh of course, I don't deny that at all but... realize these results will be used for years. Someone 10 years from now might dust off the study being written today and want to validate it. If discrepancies arise....then what?
It's a serious issue, a bit unlike what you see elsewhere. There is seldom a case for installing software a decade obsolete, it's not always easily performed, and that's if you have even captured all the versions of everything.
But more crucially to the point, this issue is something beyond
Re: (Score:3)
While I approve of making the code available, that has nothing to do with repeatability of the experiment. If the paper describes what the code is doing in sufficient detail, somebody verifying the work can write their own code. This is much better for verification because it makes it much less likely that the results will be due to a hard-to-find bug. It's much more expensive to do that, of course.
(If the paper doesn't describe what's going on in sufficient detail for replication, it's crap no matter
Re:Comparison? (Score:5, Insightful)
Yo Bruce66423, RTFA sometime.
Re: (Score:2, Interesting)
Re: (Score:2)
I am not really sure that biology counts as a hard science either. I know of eminent biologists in the life sciences who consider the idea that "if it ain't reproducible it ain't real" to be nonsense.
In my view anyone who does not consider that to be the cornerstone of experimental study is *NOT* a scientist.
Re: (Score:2)
"if it ain't reproducible it ain't real"
Exactly. Every time my users meet an intermittent fault in my systems, I point out to them that this is not scientific, and they must be imagining it. Hold on, their appears to be a crowd of angry users outside. Let me get the door, I am sure it is *&%^*@))*+CARRIER LOST
Re: (Score:3)
Before I get dumped on, yes - "there". I come back from the dead to correct my own typos...
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Depends on how you define a hard science. The distinction is murky and ultimately set by the level of empiricism in the field and how practical it is to test something.
Take cosmology... hard science, right? It's just physics and astronomy. But how do you run an experiment on a galaxy? Is that a harder science than neuroscience or various fields of bioscience where they can actually test things right now?
In cosmology we have Dark Matter... matter we can't detect except by inference from how far our gravitational ca
Re: (Score:3)
So is medical science not "real science" because we've had quite a few stories over the last few years that a ton of results from medical research and drug trials can't be reproduced?
Re:Comparison? (Score:5, Interesting)
Re:Comparison? (Score:5, Insightful)
So is medical science not "real science" because we've had quite a few stories over the last few years that a ton of results from medical research and drug trials can't be reproduced?
A large percentage of medical studies are funded by manufacturers, and it's fairly well understood that most of those don't get published unless they produce the "right" results. And those that are published are often really "preliminary", based on too little data to be considered reliable. But if a test on 10 or 20 patients gives the "right" results, there is a lot of marketing pressure to get the paper published right away.
This easily explains the growing problem of medical products that are found to be worthless (or even harmful) to the patients, after years of heavy marketing has produced large profits.
There's also the age-old problem that studies with "negative" results usually don't get published at all. As usual, there's a good xkcd comic [xkcd.com] that explains the methodology in a way that even the minimally numerate reader can understand.
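For anyone who hasn't seen the comic, the mechanism is easy to demonstrate. Here is a purely illustrative Python simulation (all numbers made up) of what happens when many comparisons are run on pure noise and only the 'significant' ones get written up:

    import random
    from statistics import NormalDist, mean, stdev

    def two_sample_p(a, b):
        # Approximate two-sided p-value for a difference in means,
        # using a normal approximation (fine for large samples).
        se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
        z = (mean(a) - mean(b)) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    random.seed(0)
    n_tests, n_per_group, alpha = 100, 50, 0.05
    false_positives = 0
    for _ in range(n_tests):
        # Both groups come from the same population: any "effect" is noise.
        group_a = [random.gauss(0, 1) for _ in range(n_per_group)]
        group_b = [random.gauss(0, 1) for _ in range(n_per_group)]
        if two_sample_p(group_a, group_b) < alpha:
            false_positives += 1

    # Roughly alpha * n_tests comparisons come out "significant" by chance
    # alone; if only those get published, the literature fills with noise.
    print(f"{false_positives} of {n_tests} null comparisons were 'significant'")

Roughly 5 of the 100 null comparisons will clear p < 0.05, and those are exactly the ones with a publishable "story".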
Re: (Score:2)
You also don't have to prove that a drug is safe or efficacious to sell a derivative of it. All you have to prove is that it doesn't kill many more people than the last version. You not only don't have to prove that it works better, you don't have to prove that it works at all.
Re: (Score:2)
Re: (Score:2)
So is medical science not "real science" because we've had quite a few stories over the last few years that a ton of results from medical research and drug trials can't be reproduced?
The anti-science crowd sees numbers like this, and thinks Fraud! Stupid Scientists! When in fact, this is science being science.
There can be any number of reasons a result isn't repeatable and gets rejected, from bad data to bad conclusions to insufficient control to fraud of one sort or another. Sometimes it just needs to be tweaked and redone.
Case in point: some Einstein posting above has used this article about psych as evidence/proof refuting AGW.
With logic and reasoning like that, I'm certain he'll
Cargo Cult Science (Score:2)
That is pretty much the charitable explanation that Feynman offered in his notorious public talk on this subject.
That and confirmation bias. The classic example is Millikan's initial but not-quite-accurate measurement of the charge of an electron followed by subsequent results that "drifted" slowly but surely to the more modern measurement value.
Gee, Millikan is way off, but I can't publish this, I need to go over my apparatus and procedures to find what is wrong.
Re: (Score:2)
Medicine is more of a "fuzzy" science, where its findings, theories and models -- i.e. its "truth" -- have much lower confidence value assigned to them compared to physics, but higher than psychology/sociology. (Journalists and people who yell "the science is settled!" ignore this confidence value.)
In my view that's a natural reflection of how patterns are fewer in the physical world and thus easier to repeat, compared to living organisms, especially higher orders like mammals, and compared to mind/society stuf
Re:Comparison? (Score:5, Interesting)
It's interesting nonetheless seeing what studies come up as bunk and which get confirmed. For example, I opened up their data file and started pulling up random entries about gender differences for fun:
"Sex differences in mate preferences revisited: Do people know what they initially desire in a romantic partner?" - The original study claimed that while men often self-report having their selection criteria for a partner being a lot more hinged around appearance than women do, that in practice this isn't the case, and more to the point, people's self-reporting for what they want most in a partner has little bearing on what they actually find most important in partner selection in practice.
The re-analysis confirmed this study.
"Perceptual mechanisms that characterize gender differences in decoding women's sexual intent" - This was a followup study to an earlier study that claimed that women often perceive men's sexual interest as friendliness while men often perceive women's friendliness as sexual interest. This study found, by contrast, that while men often misperceive friendliness as sexual interest, they also often misperceive sexual interest as friendliness - that they're just worse in general than reading sexual interest than women.
The re-analaysis was thus in a way responding to both the original and the followup. And found neither to be true. They found no difference between men and women in ability to read sexual interest vs. friendliness.
"Loving those who justify inequality: the effects of system threat on attraction to women who embody benevolent sexist ideals." - this study was to test - and reported confirmation - of the hypothesis that men who don't trust the government will also tend to find attractive women who embody "benevolent sexist" stereotypes - that is, that women are vulnerable, need to be saved, belong in the house, are there to complete men, etc, vs. women who have interest in careers or activities outside of the family, expect to be seen as equals, etc.
The reanalysis showed no correlation at all.
"The Best Men Are (Not Always) Already Taken: Female Preference for Single Versus Attached Males Depends on Conception Risk" - this study claimed that women in relationships find single men more attractive when they're ovulating and partnered men when they're not, but that single women show no preference. They argued that this result is expected given selective factors.
The reanalysis showed no correlation at all in any of the above cases.
Re: (Score:2)
You know nothing of Psychological research.
It helps to be crazy - we do know that.
Re: (Score:2)
Re: (Score:2)
Like computer science...
Consider that most people I know who have an undergraduate degree in "computer science" are as close to a scientist as a blog is to journalism. Computer Science degrees can mean as little as "more computer credits than liberal arts credits." But maybe that is endemic to popular degree programs.
Re: (Score:2)
In other words... (Score:3)
Re: (Score:2)
Despair poster
Re: (Score:3)
Psychology more scientific than cancer studies? (Score:4, Interesting)
Re:Psychology more scientific than cancer studies? (Score:5, Interesting)
Or this one [slashdot.org] from just under 2 weeks ago:
The requirement that medical researchers register in detail the methods they intend to use in their clinical trials, both to record their data as well as document their outcomes, caused a significant drop in trials producing positive results. From Nature: "The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000.
Re: (Score:3)
Psychology is an offensive science: people don't believe psychology is anything more than voodoo. This lets them stay quite comfortable in their control over their mind, instead of admitting it may have some uncontrollable science behind it.
10% of cancer studies are reproducible? Well that's just science. 50% of psychology studies are reproducible? Psychology is no more real than chance; every study is a coin toss, and nothing is real.
Climate science papers are probably way wonkier than psychology
Re: (Score:2)
Explain what you are referring to about this stuff on TV. I can't recall one instance of psychology being invoked in RWBY, Inuyasha, or Deep Space 9.
What I said was generated on-the-fly from an understanding of psychology (along with politics and a few other things).
Re: (Score:2)
It's hard to believe psychology studies are more reproducible than cancer studies (11% reproducible): http://www.nature.com/nature/j... [nature.com]
It seems you don't understand what it is you linked to, or you're trolling; the key here is preclinical. That is, there's an 11% chance we can reproduce lab results on actual people in clinical trials, so if you're in the first round of an experimental drug, 9 out of 10 times it won't work. That sucks, but our understanding of the body and cancer isn't better, so we have no choice but to experiment in practice. It says nothing about how reproducible the clinical results are, but before it's through all the rou
Re: (Score:3, Insightful)
I think 538 covered this better. (Score:4, Informative)
Yes there is a problem, and yes there needs to be a solution.
http://fivethirtyeight.com/dat... [fivethirtyeight.com]
Interesting FTA (Score:2)
A study by more than 270 researchers from around the world has found that just 39 per cent of the claims made in psychology papers published in three prominent journals could be reproduced unambiguously – and even then they were found to be less significant statistically than the original findings.
The non-reproducible research includes studies into what factors influence men's and women's choice of romantic partners, whether people's ability to identify an object is slowed down if it is wrongly labelled, and whether people show any racial bias when asked to identify different kinds of weapons.
Article gives the wrong impression (Score:2)
Re: (Score:3)
39% without secondary false-positives. (Score:4, Informative)
Re: (Score:2)
It's a structural issue inherent in what a cynic could call the publication/tenure industrial complex. Our education and tenure systems demand ever increasing and ever more impressive publications and publication rates, and our journal systems are obsessed with the "story" of the prescient scientist who confirmed a theory conceived in a vacuum. This kind of fuckery is the natural end result.
Re: (Score:3)
It's a structural issue inherent in what a cynic could call the publication/tenure industrial complex. Our education and tenure systems demand ever increasing and ever more impressive publications and publication rates, and our journal systems are obsessed with the "story" of the prescient scientist who confirmed a theory conceived in a vacuum. This kind of fuckery is the natural end result.
But the cynic in me would compare people's reactions to this to the idea that a person being arrested, getting a fair trial, and being sent to prison is somehow proof that the legal justice system is broken.
Finding non-reproducible results is exactly the purpose of repeating experiments.
Now the scientist in me says "I'm a little concerned about those numbers, let's do an analysis and maybe we'll find out why."
While the cynic in me says The denier and anti-science culture will just trot out their fa
Re: (Score:2)
Sound outlandish? We already have an AGW denier in here claiming psychology result replication numbers is just that proof.
Well, they genuinely have a psychological problem, but people throw out babies with bathwater constantly just because they're too lazy to fish out the baby, so it's a widespread one.
Re: (Score:2)
Sound outlandish? We already have an AGW denier in here claiming psychology result replication numbers is just that proof.
Well, they genuinely have a psychological problem, but people throw out babies with bathwater constantly just because they're too lazy to fish out the baby, so it's a widespread one.
It's too bad that the social conservatives won't allow critical thinking to be taught in schools. Damn near as bad as the great Satan Darwin.
oh boy - I think I just earned my agitation engineer chops for the day...
Re: (Score:2)
I'm not talking about the replication efforts, I'm talking about the overall system and how it leads to bad science and excessive publication in the first place.
Re: (Score:2)
Careers and status make major contributions. If you got onto the "dietary cholesterol is bad for you" theory early as a scientist, your entire career and standing are built around that theory.
As you gain influence and status over research proposals and funding, you're more likely to approve proposals that advance the theories you're invested in and reject proposals that might disprove your career-invested theories.
The entire process, not just individual studies or their results, becomes a victim of both ego a
Re: (Score:2)
I think the inherent scarcity in research resources means that you will pretty much always have a kind of gatekeeper who decides which projects, and whose projects, get funded and which don't.
It'd be great if that gatekeeper was a neutral party without a vested interest, but I'd wager it likely takes someone inherently knowledgeable in the field to be able to intelligently understand the research requests.
You could have a committee to hopefully limit individual vested interests, but ultimately there are infl
Re: (Score:2)
Yeah that's what I noticed too (my minor was in cognitive psychology - like trying to figure out AI from the opposite direction). When Fleischmann and Pons posted their cold fusion paper, every physics lab in every school grabbed it and tried to replicate it, just because of how cool it would be if it actually worked. I never saw the same ze
Get over yourself (Score:2)
Those of us from a hard science background always had our doubts about this sort of stuff
Maybe you should have been using those doubts for introspection? I can easily find numerous retractions from "hard science" journals plus it's easy to find a number of similar studies showing reproducibility in "hard" science medical research and drug trials.
Re: (Score:2)
reproducibility *problems* in "hard" science medical research and drug trials
Fixed for myself.
Re: (Score:2)
Just out of curiosity ... (Score:4, Funny)
Has this study been replicated?
Or is it perhaps a replication of an earlier study?
Please tell me at least Dunning Kruger is real (Score:3)
That would be... (Score:2)
...why it's not actually called a 'science' by anyone who understands what science is.
Re: (Score:2)
Double-blind experiments are an integral part of science. Why do experiments need to be double-blind?
Re: (Score:2)
Re: (Score:2)
You went through a lot to try to not say "because of sound psychological principles which tell us that the human mind will see what it wants to see."
It seems to me double-blind experiments exist because of a scientific understanding of psychology.
Re: (Score:2)
And you clearly don't understand what science is, nor, despite what you may think, do you hang out or listen to people who do.
Let me help you. Science isn't about WHAT you study. It's about HOW you study it.
Re: (Score:2)
I'm going to guess that you don't have a formal science background.
About as hard as environmental "science" (Score:2, Troll)
Can't be reproduced, data has to be doctored and cooked to within an inch of its life to stay consistent, ideological agenda overrides any pretense of scientific method.
Yep, that's modern science alright!
Which is why (Score:3)
... you don't make any important decisions based on a single paper. That's true for hard sciences as well as social sciences.
Science by its very nature deals in contradictory evidence. I'd argue that openness to contradictory evidence is the distinguishing characteristic of science. A and not A cannot be true at the same time, but there can be, and normally is, evidence for both positions. So that means science often generates contradictory papers.
What you need to do is read the literature in a field widely so you can see the pattern of evidence, not just a single point. Or, if you aren't willing to invest the time for that, you can find what's called a review paper in a high-impact-factor journal. A review paper is supposed to be a fair summary of recent evidence on a question by someone working in the field. For bonus points, read the follow-up letters to that paper. Review papers are not infallible, but they're a heck of a lot more comprehensive than any other source of information.
Re:Which is why (Score:4, Interesting)
"Despite the rather gloomy results, the new paper pointed out that this kind of verification is precisely what scientists are supposed to do: “Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should.”"
This is the kind of stuff that needs to be done.
Re: (Score:2)
... you don't make any important decisions based on a single paper. That's true for hard sciences as well as social sciences.
OK, tell that to the NIH [nytimes.com]. Or hey, tell the APA. The DSM might need a little love. Those traitorous fucks.
The truth is that the course of our lives is often decided, by fiat, on the basis of a single study
Re: (Score:2)
Well, as far as Atkins is concerned, diet research is really, really hard and expensive to do right. I know because when I was an MIT student one of my jobs was office boy in the Food and Nutrition group, and I saw how hard it was. In one of the studies, research subjects were given a duffle bag from which all the input to their digestive systems came, and into which all the output from the same went, for six bloody months.
Of course not every study needs to be that rigorous, but diet is one of those areas wh
Re: (Score:2)
By the way, the current state of research seems to be that carbohydrate restricted diets work well in the short term but have only modest success in the long term.
Right, I linked that story because it speaks directly to the deception from the NIH and the USDA. It wasn't about the diet — I had great success the first time, and less weight loss the second time, so now I'm just watching what I eat — but about the deliberate attempts to deceive by our government. Shock, amazement, I know. At best they were trying to look busy. More likely, they were working on behalf of the rising processed foods industry.
Re: (Score:2)
That's true for hard sciences as well as social sciences.
Much more so for social sciences. It's much more difficult to account for variations caused by subtle differences in the test and control populations of individual experiments. It is often best to run a series of experiments, each with its own test groups, and then examine the results to 'average out' the effect of conditions that were not properly accounted for. This requires a statistically complex analysis to determine whether the individual tests give the same results. And as we all know, 62% of all scientist
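As a rough sketch of what that combined analysis can look like, here is a simple inverse-variance (fixed-effect) pooling of per-experiment effect estimates in Python; the numbers are invented for illustration, and a real meta-analysis involves more care (heterogeneity checks, random effects, and so on):

    from math import sqrt

    def combine_effects(estimates, std_errors):
        # Inverse-variance (fixed-effect) pooling: precise experiments get
        # more weight, and the pooled standard error shrinks as experiments
        # are added.
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
        pooled_se = sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Three hypothetical experiments measuring the same effect with
    # different precision.
    estimates = [0.30, 0.10, 0.22]
    std_errors = [0.15, 0.08, 0.12]
    effect, se = combine_effects(estimates, std_errors)
    print(f"pooled effect = {effect:.3f} +/- {se:.3f}")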
Feynman and Crichton (Score:5, Informative)
Feynman talked about this fairly often, most notably in Cargo Cult Science [columbia.edu]. The problem seems to be most common in soft sciences, such as the rat running example he gives from psychology. See also his commentary on science education in Brazil [v.cx].
The Brazil report appears to be unrelated, but hear me out.
Brazil's problem was cultural. Their textbooks included all of the right information, but it wasn't presented as things that the student could learn about the real world, just as facts to be memorized. Scientists without the culture of science will make lousy experiments because they don't understand what they want to do or why, or how or why they need to keep themselves honest.
The culture of physics in the US was very good, but they were unable to export that culture to Brazil when they tried.
In the same way, other branches of science were unable to duplicate the physics culture. The rat runners in the example given didn't understand what they were trying to do, so they didn't pay attention to Young's work, which would have helped keep them honest.
Crichton's Aliens Cause Global Warming [s8int.com] lecture was given nearly 30 years after Feynman's Cargo Cult Science, and it shows a creeping degeneration of the culture of science.
I go a step further, and say that the decline of the science culture has been part of a general cultural decline. There has been no great art or literature or music in decades.
The good news is that people are waking up. The internet is connecting people to each other, to science, and to culture. We are pissed about the decline of the past century, the decline that we've allowed, or at least failed to prevent, and are steeling our resolve to do the hard work to restore our greatness.
Articles like this show the stirring of the cultural revival. Keep them coming, please.
Re: (Score:3)
Science has never been about consensus. Science is about truth, about reality. Science is about testing an idea by comparing it to nature, and about having the honesty to accept what nature has to say, even when you don't like the answer.
Consensus is the domain of Scidolatry. The paradigm shift is usually nothing more than the demographic wave. The people that have calcified around the previous consensus gradually die off, and the next generation gets to look at the data fresh, without a lifetime of wor
That's odd... (Score:3)
Study: More Than Half of Psychological Results Can't Be Reproduced
That's not what my study said.
An ANTI-SCIENCE attack paid for by Koch brothers! (Score:2, Troll)
I denounce this propaganda attack piece paid for by Koch brothers seeking to destroy the planet and drown the poor for profit!!!!
Oh, this is not about Climate science? Never mind...
Perhaps not surprising (Score:2)
I didn't have the patience to read through the whole article in detail, but I didn't see anything about how far back the study had checked - this may be important for the result. Experiments in psychology must be particularly difficult to set up and evaluate rigorously, and I suspect we weren't too good at it in the early years. Even in modern, physical medicine, where there are now good practices, it can be very difficult to get strong data.
You can't study psychology in a vacuum (Score:3)
I would rather we have studies disproven or adjusted by additional work than have those studies not published at all. The human brain is a very complicated thing that we actually know very little about; if we discard psychology entirely out of hand we will then do very little to further our understanding of how it actually works.
A well-respected physician explained it this way: (Score:3)
Psychology is today where Medicine was 200 years ago.
There's nothing inherently wrong with treating behavioral maladies, and such treatment could eventually be classified as medicine.
It's just not there yet.
(He also didn't have a very high opinion of chiropractors...)
Re: (Score:2)
Is anyone surprised? (Score:2)
Re: (Score:2)
Prove it.
Re: (Score:2)
Re: (Score:3)
We're talking about the demarcation problem here. Still, I'll answer your silliness.
Well when more then 1/2 of your studies and research can't be repeated, you lose.
Then I guess we should toss out the whole of modern medicine [slashdot.org] as well, eh? Don't be foolish. This is how science is supposed to work. What concerns me most is that when faced with a failed replication, your first reaction is to reject the original research. It could have easily been a failure on the part of the second experimenters. To sort that out, you need more replications. Science is riddled with contradictory
Re: (Score:2)
Your point about Ty Cobb is meaningless because that's not a controlled experiment, it's a random, fluctuating experiment, where the variables are always changing. Psychology is not and never will be science; it's highly educated nut
Re: (Score:2)
why that is (Score:4, Interesting)
We are right to hold the discoveries of science and the scientific method in high regard. But that approval is distinct from respecting scientists as a class. The problem of non-reproducibility is no fault of the scientific method but instead an indication of the rotten state of modern academia.
Earlier in my career I worked at universities writing software used for psychology and neuroscience experiments. On the basis of that experience I can offer an explanation for why about 1/2 of experiments are not reproducible: A lot of psychology faculty are terrible liars. While some demonstrate perfect integrity, others, probably the ones generating all those irreproducible results, lied whenever it suited their purposes. Still others were habitual liars who lied not to achieve some specific outcome but out of habit or compulsion. The center director of one research group confided to me, after a dispute with the faculty, that he had not been able to control his compulsion to lie. And when I claim that faculty "lie", I do not mean what could, by any stretch, be characterized as errors, oversights, or honest differences of opinion. I mean abusive, sociopathic, evident and deliberate lying. Like being told that the inconvenient evidence which you have in hand, "does not exist."
The lying is enforced by implicit threat. One time I responded to an email message, correcting an error, and then immediately after that a prominent member of the faculty, somewhat creepily, followed me into the restroom, stood too close to me while I was using the urinal, and explained to me in a threatening tone the error of my reasoning, which according to him was that "it would not do that because it would not do that." The dean imposed a disciplinary penalty on me for objecting to that. Though that was unusual, typically challenging lies elicited a yelling, screaming fit from a faculty member. So it's not just lying, but lying backed up by the threatening, thuggish behavior of the faculty and university administration. This was a highly-regarded department with generous NIH funding, which makes me think that lying in that field is kind of a mainstream thing.
The root cause here has little to do with science, per se, and more to do with the rotten management of colleges and universities. Regardless of what the employee handbook states, there are few de facto restrictions on faculty conduct, and university administrations act to cover up problems by disciplining and threatening the whistle-blowers. Jerry Sandusky was not a scientist, he was a football coach, but if you look at the way Penn State concealed child molestation and protected him, that is typical of the way universities respond to faculty misconduct as well, and explains why academic dishonesty is tolerated. One full-time faculty member in the department in which I worked had not set foot in the department in over five years nor ever appeared in any of the classes which she "taught." According to the department chairman, every time she was contacted to encourage her retirement, she was "drunk off her ass in the middle of the day." It was tolerated and covered up.
I am not claiming that all scientists, fields, or academic departments are full of liars. I have never worked closely with physicists, computer scientists or mathematicians on a daily basis, but none whom I know personally have behaved like that.
To sum it all up: psychology has a problem with poor reproducibility of published results, and many of the psychology faculty I knew were terrible liars; there might be a causal connection between the two.
Re: (Score:2)
Humans will never effectively study themselves (Score:2)
We're inherently too biased. Only an AI with scalable human-like intelligence will give us real answers about ourselves.
This may be why one will never be built.
so Scientology is Correct? (Score:2)
Very interesting. Xenu would be proud.
At the University of Alberta (Score:3)
(...as of about 8-9 years ago) The psych department had its own stats class, taught by a psych professor. You couldn't get an exemption if you had a high-level statistics course under your belt already; they insisted that psych stats were 'special' somehow, and needed to be taught differently.
If by 'special', you mean 'less rigorous' and 'taught by people that literally don't understand the definition of a function', then yes, the classes were special, and failed to prepare the students in any significant way for good statistical analysis.
I'm sure the story is the same at many universities.
Science has disproved itself (Score:2)
Re: (Score:2)
Re: (Score:2)