Study: More Than Half of Psychological Results Can't Be Reproduced

Bruce66423 writes: A new study trying to replicate results reported in allegedly high quality journals failed to do so in over 50% of cases. Those of us from a hard science background always had our doubts about this sort of stuff, so it's interesting to see it demonstrated. Or rather, as Charles Gallistel, president of the Association for Psychological Science, puts it: 'Psychology has nothing to be proud of when it comes to replication.' Back in June, a crowd-sourced effort to replicate 100 psychology studies had a 39% success rate.
  • by mwvdlee ( 775178 ) on Friday August 28, 2015 @07:18AM (#50408361) Homepage

    Does anybody know how this compares to the hard sciences? How many published math papers turn out to be incorrect? How many physics experiments cannot be reproduced?

    • Re:Comparison? (Score:5, Informative)

      by Halo1 ( 136547 ) on Friday August 28, 2015 @07:23AM (#50408387)

      In spite of the gut feeling of the submitter, it's not much better in at least computer science: http://reproducibility.cs.ariz... [arizona.edu]

      • Or as low as 10% [jove.com] in published studies. The stories, btw, seem to have gotten significantly worse over the last few weeks. What's going on?
        • Or as low as 10% in published studies. The stories, btw, seem to have gotten significantly worse over the last few weeks. What's going on?

          Probably a paid agenda by now. Dice has been trying to figure out how to monetize Slashdot... how else?

      • Re:Comparison? (Score:4, Informative)

        by Halo1 ( 136547 ) on Friday August 28, 2015 @07:44AM (#50408519)

        In spite of the gut feeling of the submitter, it's not much better in at least computer science: http://reproducibility.cs.ariz... [arizona.edu]

        And to clarify: they only checked for what they call "weak repeatability": was it possible to get the code from the original researchers and, if yes, was it possible to build it (or did the author at least explain how he himself managed to build it)? They did not even investigate whether they could replicate the reported results with the code that did build.

        • Re:Comparison? (Score:5, Insightful)

          by TheCarp ( 96830 ) <sjc@carpa[ ].net ['net' in gap]> on Friday August 28, 2015 @08:10AM (#50408663) Homepage

          That sounds like a pretty weak test, but not a bad one. This crosses one of my areas of expertise, since I have had this job as a professional sysadmin. I worked in a shop where, for better or worse, we decided that all free software we used on Solaris would be compiled from source.

          This quickly became a huge mess, as updates would sometimes bring changes and there was always the question "who built it last time and what options did they choose?", so we quickly found a need to fix that, and I started scripting. (It's where my competence with shell scripting really began.)

          And once you have solved even that easy part, you then have to think about versions and dependencies.

          In fact, later on we got involved in research computing. That wasn't my project, but one of the topics that came up was: researchers will build this software, just like we are talking about, and use the data... and now someone wants to audit it down the road....

          What happens if the libraries have changed and the old code doesn't compile? What if there is an error in a calculation that was introduced by a particular library version being used?

          The reality is, you write the code, but it gets run in an environment. That entire environment has the potential to have an effect, so a full specification needs to capture at least some of that as well. A rough sketch of what capturing it might look like is below.
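          A minimal sketch of what capturing that environment could look like, assuming a Python-based pipeline; the helper name and manifest fields are illustrative, not from any project discussed in this thread:

          ```python
          import json
          import platform
          import subprocess
          import sys

          def capture_environment(manifest_path="build_manifest.json"):
              """Record the toolchain and OS a result was produced under.

              Hypothetical helper: a real pipeline would also pin exact
              library versions (e.g. via a lockfile) and record build flags.
              """
              manifest = {
                  "os": platform.platform(),
                  "machine": platform.machine(),
                  "python": sys.version,
                  # Exact package versions this run depended on.
                  "packages": subprocess.run(
                      [sys.executable, "-m", "pip", "freeze"],
                      capture_output=True, text=True, check=False,
                  ).stdout.splitlines(),
              }
              with open(manifest_path, "w") as f:
                  json.dump(manifest, f, indent=2)
              return manifest

          if __name__ == "__main__":
              capture_environment()
          ```

          Archiving a manifest like this next to the data at least lets a later auditor tell "wrong library version" apart from "wrong code".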

          • What happens if the libraries have changed and the old code doesn't compile?

            Then you read the f***ing error, and fix the f***ing code. This isn't like building IKEA furniture. A certain level of competency is required to compile software.

            What if there is an error in a calculation that was introduced by a particular library version being used?

            Let's ignore for a minute the fact that you're asking why changing the parameters of the experiment yields a different result, and address the obvious fact that an error is an error. If the results of an experiment are due to an error, then the conclusion of the experiment is invalid.

            • by tomhath ( 637240 )

              Then you read the f***ing error, and fix the f***ing code.

              You completely missed his point.

              The question was, "What if the original code can no longer be used?" You can modify the code and run a different experiment, but you are not reproducing the original.

              • by TheCarp ( 96830 )

                Also there are two different questions:

                First: Did, or could, this code have produced this data?
                Second: Is it in error?

                You can't fully answer the first without fully specifying the environment. It's not just that nobody can prove you wrong; you can't defend yourself either. "Oh, you used a buggy version of this library" is an entirely different accusation from "Oh, you fabricated these results and didn't publish the real code".

                Both are potentially valid, but both are very, very different in implication. Also, what if there is a bug

              • Replication and reproducibility are not the same. Simply getting the source code and re-running the results is just replicating the study. It doesn't tell you anything about how reproducible the results are.

                To be reproducible, someone should be able to use similar methods and get the same results. If a result is completely dependent on a specific build of the software, it's not robust enough to be considered reproducible.

                Publications should require a concise written description of the method and solution th

                • by bondsbw ( 888959 )

                  I'm dismayed that in CS the academic community is putting so much emphasis on replication and not enough on robust reproducibility.

                  From my experience, the academic community puts no emphasis on either. I think it would be neat to study a paper and attempt to reproduce the results, but that doesn't get me a journal paper in today's academic landscape.

              • The code is not the experiment, it's the tool that was used in the experiment; see the difference? You don't need the exact model beaker produced in the same year from the same manufacturer to prove a chemistry experiment true or false, similarly you don't need a line by line copy of the original code to test the theory.

            • If the results of an experiment are due to an error, then the conclusion of the experiment is invalid.

              So, how does one go about proving that the ORIGINAL results were the result of an error, as opposed to the later results meant to verify the original results?

              C'mon, it's just as likely that an error was introduced in newer code than that one was fixed....

        • While I approve of making the code available, that has nothing to do with repeatability of the experiment. If the paper describes what the code is doing in sufficient detail, somebody verifying the work can write their own code. This is much better for verification because it makes it much less likely that the results will be due to a hard-to-find bug. It's much more expensive to do that, of course.

          (If the paper doesn't describe what's going on in sufficient detail for replication, it's crap no matter

      • Re:Comparison? (Score:5, Insightful)

        by oh_my_080980980 ( 773867 ) on Friday August 28, 2015 @08:15AM (#50408707)
        Exactly. This applies to "hard" sciences too. The jackass submitter couldn't get his head out of his ass long enough to read the article and understand the implication. One of the problems the article mentioned was journals not being willing to publish null results or research that just replicates other research. That's rather important, because you need more than one researcher publishing similar findings to feel confident about the results. The other point is that some of the results, though valid, were valid for very narrowly defined scenarios and therefore not generalizable.

        Yo Bruce66423, RTFA sometime.
      • Re: (Score:2, Interesting)

        by aaronb1138 ( 2035478 )
        Let's be mindful that academically, CS is not considered one of the hard sciences. It might look and smell like one, but nobody in physics, math, biology, engineering, or chemistry takes CS seriously. There is a pretty sharp intellectual and attitude divide at the engineering-oriented schools between the EE majors and professors and the CS majors and professors. A much sharper divide than you'd expect between two fields with so much in common and a decent number of overlapping classes. There's a reason why IT/IS is taught at
        • by jabuzz ( 182671 )

          I am not really sure that biology counts as a hard science either. I know of eminent biologists in the life science field that consider the idea that "if it ain't reproducible it ain't real" to be nonsense.

          In my view anyone who does not consider that to be the cornerstone of experimental study is *NOT* a scientist.

          • "if it ain't reproducible it ain't real"

            Exactly. Every time my users meet an intermittent fault in my systems, I point out to them that this is not scientific, and they must be imagining it. Hold on, there appears to be a crowd of angry users outside. Let me get the door, I am sure it is *&%^*@))*+CARRIER LOST

      • I was going to comment that I'd expect some variation depending on the quality of the venue, but then I looked at the list. Most of the places that they looked at are top-tier publications, so it's pretty depressing. That said, they are focussing on the wrong aspect of reproducibility. The real metric should be, given the paper, can someone else recreate your work. And I suspect that even more papers fail on that. At the ASPLOS panel discussion this year, there was a proposal that PhD students should s
      • I am not sure I understand. What even is a paper in computer science? CompSci is pure math, in a very specific field. So what is a non-reproducible study in CompSci, just math that you cannot follow?
    • Depends on how you define a hard science. The distinction is murky and ultimately set by the level of empiricism in the field and how practical it is to test something.

      Take cosmology... hard science, right? It's just physics and astronomy. But how do you run an experiment on a galaxy? Is that a harder science than neuroscience or various fields of bioscience where they can actually test it right now?

      In cosmology we have Dark Matter... matter we can't detect except by inference with how far our gravitational ca

  • by msauve ( 701917 ) on Friday August 28, 2015 @07:20AM (#50408373)
    You can fool all of the people, some of the time, or some of the people, all of the time.
    • "You can fool all of the people all of the time if your effects budget is large enough”
      Despair poster
      • by rastos1 ( 601318 )
        "You can fool some of the people all the time, and those are the ones you want to concentrate on." - George W. Bush
  • by mcelrath ( 8027 ) on Friday August 28, 2015 @07:23AM (#50408383) Homepage
    It's hard to believe psychology studies are more reproducible than cancer studies (11% reproducible): http://www.nature.com/nature/j... [nature.com]
    • by Lunix Nutcase ( 1092239 ) on Friday August 28, 2015 @07:35AM (#50408465)

      Or this one [slashdot.org] from just under 2 weeks ago:

      The requirement that medical researchers register in detail the methods they intend to use in their clinical trials, both to record their data as well as document their outcomes, caused a significant drop in trials producing positive results. From Nature: "The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000."

    • Psychology is an offensive science: people don't believe psychology is anything more than voodoo. This lets them stay quite comfortable in their control over their mind, instead of admitting it may have some uncontrollable science behind it.

      10% of cancer studies are reproducible? Well that's just science. 50% of psychology studies are reproducible? Psychology is no more real than chance; every study is a coin toss, and nothing is real.

      Climate science papers are probably way wonkier than psychology

    • by Kjella ( 173770 )

      It's hard to believe psychology studies are more reproducible than cancer studies (11% reproducible): http://www.nature.com/nature/j... [nature.com]

      It seems you don't understand what it is you linked to, or you're trolling. The key here is preclinical. That is, there's an 11% chance we can reproduce lab results on actual people in clinical trials, so if you're in the first round of an experimental drug, 9 out of 10 times it won't work. That sucks, but our understanding of the body and cancer isn't better, so we have no choice but to experiment in practice. It says nothing about how reproducible the clinical results are, but before it's through all the rou

  • by nucrash ( 549705 ) on Friday August 28, 2015 @07:23AM (#50408385)

    Yes, there is a problem, and yes, there needs to be a solution.

    http://fivethirtyeight.com/dat... [fivethirtyeight.com]

  • Two interesting paragraphs that might have been good for the summary as well:

    A study by more than 270 researchers from around the world has found that just 39 per cent of the claims made in psychology papers published in three prominent journals could be reproduced unambiguously – and even then they were found to be less significant statistically than the original findings.

    The non-reproducible research includes studies into what factors influence men's and women's choice of romantic partners, whether people's ability to identify an object is slowed down if it is wrongly labelled, and whether people show any racial bias when asked to identify different kinds of weapons.

  • The idea that this means "Psychology is wrong" is the opposite of what should be gleaned here. 1. This looked at a journal from 2008. That's pretty recent in scientific terms. 2. What is the replicability of new scientific findings in "hard" science fields like physics, chemistry, and medicine? Well, we don't know very well, because there hasn't been a concerted effort to explore that like there has been in psychology (At least not that I was able to find; I thought there was an effort. Please link if yo
    • The consequence of the "publish or perish" model is that most scientific papers just aren't very useful. Proper science is still being done, though it's drowned out by scientists who have nothing more useful to add at the moment but have to publish in order to get grants. And there's not much difference here between psychology and "hard" sciences.
  • by JoshuaZ ( 1134087 ) on Friday August 28, 2015 @07:28AM (#50408435) Homepage
    If you have 100 studies you are replicating, by sheer chance you are likely still going to have a few that you successfully replicate but that aren't real, so the problem may be (slightly) worse than that; a quick simulation below makes the point. Psychology isn't the only field with these issues. There have been a lot of problems in medicine also. See https://www.newscientist.com/article/mg21528826-000-is-medical-science-built-on-shaky-foundations/ [newscientist.com]. Part of the problem is one of incentives: the incentive to do a study which simply replicates a pre-existing study is low, and many journals won't even publish them. This also combines in bad ways with post-hoc analysis, where you look at your data and find a pattern in it that is worth publishing; the worst offender here is medicine, where people use different statistical tests and different subgroup analyses until they get a positive result.
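    To make the replicated-by-chance point concrete, here is a minimal Monte Carlo sketch; the fraction of true effects, the power, and the significance threshold are assumed for illustration and are not taken from any of the studies discussed:

    ```python
    import random

    def replication_false_positives(n_studies=100, frac_true=0.3,
                                    power=0.8, alpha=0.05,
                                    trials=10_000, seed=1):
        """Estimate what share of 'successful' replications are
        actually null effects that passed by chance."""
        rng = random.Random(seed)
        false_hits, total_hits = 0, 0
        for _ in range(trials):
            for _ in range(n_studies):
                is_real = rng.random() < frac_true
                # A real effect replicates with probability = power;
                # a null effect "replicates" with probability = alpha.
                if rng.random() < (power if is_real else alpha):
                    total_hits += 1
                    false_hits += not is_real
        return false_hits / total_hits

    # With these assumed numbers, roughly one in eight successful
    # replications is a null effect that got lucky.
    print(f"{replication_false_positives():.1%}")
    ```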
    • It's a structural issue inherent in what a cynic could call the publication/tenure industrial complex. Our education and tenure systems demand ever increasing and ever more impressive publications and publication rates, and our journal systems are obsessed with the "story" of the prescient scientist who confirmed a theory conceived in a vacuum. This kind of fuckery is the natural end result.

      • It's a structural issue inherent in what a cynic could call the publication/tenure industrial complex. Our education and tenure systems demand ever increasing and ever more impressive publications and publication rates, and our journal systems are obsessed with the "story" of the prescient scientist who confirmed a theory conceived in a vacuum. This kind of fuckery is the natural end result.

        But the cynic in me would compare people's reactions to this to the idea that if a person is arrested, gets a fair trial, and is sent to prison, that is somehow proof that the legal justice system is broken.

        Finding non-reproducible results is exactly the purpose of repeating experiments.

        Now the scientist in me says "I'm a little concerned about those numbers, let's do an analysis and maybe we'll find out why."

        While the cynic in me says The denier and anti-science culture will just trot out their fa

        • Sound outlandish? We already have an AGW denier in here claiming psychology's result-replication numbers are just that proof.

          Well, they genuinely have a psychological problem, but people throw out babies with bathwater constantly just because they're too lazy to fish out the baby, so it's a widespread one.

          • Sound outlandish? We already have an AGW denier in here claiming psychology's result-replication numbers are just that proof.

            Well, they genuinely have a psychological problem, but people throw out babies with bathwater constantly just because they're too lazy to fish out the baby, so it's a widespread one.

            It's too bad that the social conservatives won't allow critical thinking to be taught in schools. Damn near as bad as the great Satan Darwin.

            oh boy - I think I just earned my agitation engineer chops for the day...

        • I'm not talking about the replication efforts, I'm talking about the overall system and how it leads to bad science and excessive publication in the first place.

      • by swb ( 14022 )

        Careers and status make major contributions. If you got onto the "dietary cholesterol is bad for you" theory early as a scientist, your entire career and standing is built around that theory.

        As you gain influence and status over research proposals and funding, you're more likely to approve proposals that advance the theories you're invested in and reject proposals that might disprove your career-invested theories.

        The entire process, not just individual studies or their results, becomes a victim of both ego a

    • Part of the problem is one of incentives: the incentive to do a study which simply replicates a pre-existing study is low, and many journals won't even publish them.

      Yeah that's what I noticed too (my minor was in cognitive psychology - like trying to figure out AI from the opposite direction). When Fleischmann and Pons posted their cold fusion paper, every physics lab in every school grabbed it and tried to replicate it, just because of how cool it would be if it actually worked. I never saw the same ze

  • Those of us from a hard science background always had our doubts about this sort of stuff

    Maybe you should have been using those doubts for introspection? I can easily find numerous retractions from "hard science" journals, plus it's easy to find a number of similar studies showing reproducibility in "hard" science medical research and drug trials.

    • reproducibility *problems* in "hard" science medical research and drug trials

      Fixed for myself.

    • Retraction in "hard science" journals is a good thing: it shows that experiments were scrutinized and replicated. That's what publishing does: it allows others unrelated to the original study to verify the study.
  • by jc42 ( 318812 ) on Friday August 28, 2015 @07:34AM (#50408461) Homepage Journal

    Has this study been replicated?

    Or is it perhaps a replication of an earlier study?

  • I mean it just seems too true to not be real https://en.wikipedia.org/wiki/... [wikipedia.org]
  • ...why it's not actually called a 'science' by anyone who understands what science is.

    • Double-blind experiments are an integral part of science. Why do experiments need to be double-blind?

      • To avoid researcher biases creeping in and affecting the results. Even with double-blind studies, there can be issues of researcher bias. http://centerforopenscience.gi... [github.io] This is exactly why what we're seeing here is science at work, not evidence of its failure. We must constantly review the findings of new (and even old) science to fully distinguish what is real from what is false. The world is a complex place, and it is often the case that things are true in one context and not true in others. (A toy sketch of the blinding bookkeeping follows this sub-thread.)
        • You went through a lot to try to not say "because of sound psychological principles which tell us that the human mind will see what it wants to see."

          It seems to me double-blind experiments exist because of a scientific understanding of psychology.
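          For what it's worth, the bookkeeping behind blinding is simple enough to sketch; this is a toy illustration with made-up helper names, not a clinical-grade protocol:

          ```python
          import random

          def blind_assignment(participants, seed=None):
              """Toy double-blind bookkeeping: experimenters see only
              opaque subject codes; the code-to-arm key stays sealed
              (held by a third party) until the analysis is locked in."""
              rng = random.Random(seed)
              shuffled = list(participants)
              rng.shuffle(shuffled)
              half = len(shuffled) // 2
              codes = {p: f"subject-{i:03d}" for i, p in enumerate(shuffled)}
              # Sealed key mapping codes to arms; withheld from experimenters.
              sealed_key = {codes[p]: ("treatment" if i < half else "control")
                            for i, p in enumerate(shuffled)}
              return codes, sealed_key

          codes, key = blind_assignment(["ann", "bob", "cho", "dee"], seed=42)
          print(codes)  # what the experimenters and subjects see
          ```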

    • And you clearly don't understand what science is, nor, despite what you may think, do you hang out or listen to people who do.

      Let me help you. Science isn't about WHAT you study. It's about HOW you study it.

    • by narcc ( 412956 )

      I'm going to guess that you don't have a formal science background.

  • Can't be reproduced, data has to be doctored and cooked to within an inch of its life to stay consistent, ideological agenda overrides any pretense of scientific method.

    Yep, that's modern science alright!

  • by hey! ( 33014 ) on Friday August 28, 2015 @08:03AM (#50408629) Homepage Journal

    ... you don't make any important decisions based on a single paper. That's true for hard sciences as well as social sciences.

    Science by its very nature deals in contradictory evidence. I'd argue that openness to contradictory evidence is the distinguishing characteristic of science. A and not-A cannot be true at the same time, but there can be, and normally is, evidence for both positions. So that means science often generates contradictory papers.

    What you need to do is read the literature in a field widely so you can see the pattern of evidence, not just a single point. Or, if you aren't willing to invest the time for that you can find what's called a review paper in a high-impact factor journal. A review paper is supposed to be a fair summary of recent evidence on a question by someone working in the field. For bonus points read follow-up letters to that paper. Review papers are not infallible, but they're a heck of a lot more comprehensive than any other source of information.

    • Re:Which is why (Score:4, Interesting)

      by oh_my_080980980 ( 773867 ) on Friday August 28, 2015 @08:23AM (#50408771)
      Excellent point. From the article http://www.washingtonpost.com/... [washingtonpost.com]

      "Despite the rather gloomy results, the new paper pointed out that this kind of verification is precisely what scientists are supposed to do: “Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should.”"

      This is the kind of stuff that needs to be done.
    • ... you don't make any important decisions based on a single paper. That's true for hard sciences as well as social sciences.

      OK, tell that to the NIH [nytimes.com]. Or hey, tell the APA. The DSM might need a little love. Those traitorous fucks.

      The truth is that the course of our lives is often decided, by fiat, on the basis of a single study

      • by hey! ( 33014 )

        Well, as far as Atkins is concerned, diet research is really, really hard and expensive to do right. I know because when I was an MIT student one of my jobs was office boy in the Food and Nutrition group, and I saw how hard it was. In one of the studies, research subjects were given a duffle bag from which all the input to their digestive systems came, and into which all the output from the same went, for six bloody months.

        Of course not every study needs to be that rigorous, but diet is one of those areas wh

        • By the way, the current state of research seems to be that carbohydrate restricted diets work well in the short term but have only modest success in the long term.

          Right, I linked that story because it speaks directly to the deception from the NIH and the USDA. It wasn't about the diet — I had great success the first time, and less weight loss the second time, so now I'm just watching what I eat — but about the deliberate attempts to deceive by our government. Shock, amazement, I know. At best they were trying to look busy. More likely, they were working on behalf of the rising processed foods industry.

    • by PPH ( 736903 )

      That's true for hard sciences as well as social sciences.

      Much more so for social sciences. It's much more difficult to account for variations caused by subtle differences in the test and control populations of individual experiments. It is often best to run a series of experiments, each with its own test groups, and then examine the results to 'average out' the effect of conditions that were not properly accounted for. This requires a statistically complex analysis to determine whether the individual tests give the same results (a minimal pooling sketch is below). And as we all know, 62% of all scientist
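      A minimal sketch of one standard way to pool such a series (fixed-effect, inverse-variance weighting); the numbers are made up for illustration. A random-effects model would be the better fit when the true effect varies across groups, which is exactly the unaccounted-for-conditions worry above:

      ```python
      import math

      def pooled_effect(effects, std_errors):
          """Inverse-variance (fixed-effect) pooling: each experiment's
          estimate is weighted by 1/SE^2, which 'averages out' noise
          across independent test groups."""
          weights = [1.0 / se ** 2 for se in std_errors]
          pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
          pooled_se = math.sqrt(1.0 / sum(weights))
          return pooled, pooled_se

      # Three hypothetical experiments estimating the same effect.
      effect, se = pooled_effect([0.42, 0.15, 0.30], [0.20, 0.25, 0.15])
      print(f"pooled effect = {effect:.2f} +/- {se:.2f}")
      ```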

  • Feynman and Crichton (Score:5, Informative)

    by Orgasmatron ( 8103 ) on Friday August 28, 2015 @08:32AM (#50408833)

    Feynman talked about this fairly often, most notably in Cargo Cult Science [columbia.edu]. The problem seems to be most common in soft sciences, such as the rat running example he gives from psychology. See also his commentary on science education in Brazil [v.cx].

    The Brazil report appears to be unrelated, but hear me out.

    Brazil's problem was cultural. Their textbooks included all of the right information, but it wasn't presented as things the student could learn about the real world, just as facts to be memorized. Scientists without the culture of science will run lousy experiments, because they don't understand what they want to do or why, or how or why they need to keep themselves honest.

    The culture of physics in the US was very good, but they were unable to export that culture to Brazil when they tried.

    In the same way, other branches of science were unable to duplicate the physics culture. The rat runners in the example given didn't understand what they were trying to do, so they didn't pay attention to Young's work, which would have helped keep them honest.

    Crichton's Aliens Cause Global Warming [s8int.com] lecture was given nearly 30 years after Feynman's Cargo Cult Science, and it shows a creeping degeneration of the culture of science.

    I go a step further, and say that the decline of the science culture has been part of a general cultural decline. There has been no great art or literature or music in decades.

    The good news is that people are waking up. The internet is connecting people to each other, to science, and to culture. We are pissed about the decline of the past century, the decline that we've allowed, or at least failed to prevent, and are steeling our resolve to do the hard work to restore our greatness.

    Articles like this show the stirring of the cultural revival. Keep them coming, please.

  • by wonkey_monkey ( 2592601 ) on Friday August 28, 2015 @08:34AM (#50408843) Homepage

    Study: More Than Half of Psychological Results Can't Be Reproduced

    That's not what my study said.

  • A new study trying to replicate results reported in allegedly high quality journals failed to do so in over 50% of cases.

    I denounce this propaganda attack piece paid for by Koch brothers seeking to destroy the planet and drown the poor for profit!!!!

    Oh, this is not about Climate science? Never mind...

  • I didn't have the patience to read through the whole article in detail, but I didn't see anything about how far back the study had checked - this may be important for the results. Experiments in psychology must be particularly difficult to set up and evaluate rigorously, and I suspect we weren't too good at it in the early years. Even in modern, physical medicine, where there are now good practices, it can be very difficult to get strong data.

  • Psychology is complicated. Even identical twins don't have minds that function exactly the same under the closest of possible circumstances. Failure to reproduce the results of a study doesn't necessarily mean it was a bad study; it just means that our understanding of the study is incomplete.

    I would rather we have studies disproven or adjusted by additional work than have those studies not published at all. The human brain is a very complicated thing that we actually know very little about; if we discard psychology entirely out of hand we will then do very little to further our understanding of how it actually works.
  • As a physician once put it: psychology is today where medicine was 200 years ago.
    There's nothing inherently wrong with treating behavioral maladies, and such treatment could eventually be classified as medicine.
    It's just not there yet.

    (He also didn't have a very high opinion of chiropractors...)

    • The physician must not have been very familiar with psychology. Psychology is not medicine, nor is it like medicine 200 years ago. Further, psychology is far more than the study and treatment of mental illness; it is the study of behavior. Proper functioning is a far broader field than malfunctioning. Likewise, the body of psychological scientific literature extends far beyond mental disorders. He's totally right about chiropractors.
  • Psychology is not science.
    • by narcc ( 412956 )

      Prove it.

      • Well, when more than 1/2 of your studies and research can't be repeated, you lose. For instance, if I call myself the world's best engineer and yet over 1/2 of everything I do is wrong and fails, that would totally contradict the claim.
        • by narcc ( 412956 )

          We're talking about the demarcation problem here. Still, I'll answer your silliness.

          Well, when more than 1/2 of your studies and research can't be repeated, you lose.

          Then I guess we should toss out the whole of modern medicine [slashdot.org] as well, eh? Don't be foolish. This is how science is supposed to work. What concerns me most is that when faced with a failed replication, your first reaction is to reject the original research. It could just as easily have been a failure on the part of the second experimenters. To sort that out, what you need are more replications. Science is riddled with contradictory

          • When you start getting contradictory results from tested research, at that moment you need to stop what you're doing, go back to the original papers, and start over. You keep starting over until you either always agree with one set of results or can never replicate the original test.

            Your point about Ty Cobb is meaningless because that's not a controlled experiment; it's a random, fluctuating experiment, where the variables are always changing. Psychology is not and never will be science, it's highly educated nut
  • why that is (Score:4, Interesting)

    by Jodka ( 520060 ) on Friday August 28, 2015 @11:13AM (#50410147)

    We are right to hold the discoveries of science and the scientific method in high regard. But that approval is distinct from respecting scientists as a class. The problem of non-reproducibility is no fault in the scientific method, but instead an indication of the rotten state of modern academia.

    Earlier in my career I worked at universities writing software used for psychology and neuroscience experiments. On the basis of that experience I can offer an explanation for why about 1/2 of experiments are not reproducible: A lot of psychology faculty are terrible liars. While some demonstrate perfect integrity, others, probably the ones generating all those irreproducible results, lied whenever it suited their purposes. Still others were habitual liars who lied not to achieve some specific outcome but out of habit or compulsion. The center director of one research group confided to me, after a dispute with the faculty, that he had not been able to control his compulsion to lie. And when I claim that faculty "lie", I do not mean what could, by any stretch, be characterized as errors, oversights, or honest differences of opinion. I mean abusive, sociopathic, evident and deliberate lying. Like being told that the inconvenient evidence which you have in hand, "does not exist."

    The lying is enforced by implicit threat. One time I responded to an email message, correcting an error, and immediately afterwards a prominent member of the faculty, somewhat creepily, followed me into the restroom, stood too close to me while I was using the urinal, and explained to me in a threatening tone the error of my reasoning, which according to him was that "it would not do that because it would not do that." The dean imposed a disciplinary penalty on me for objecting to that. Though that was unusual, typically challenging lies elicited a yelling, screaming fit from a faculty member. So it's not just lying, but lying backed up by the threatening, thuggish behavior of the faculty and university administration. This was a highly-regarded department with generous NIH funding, which makes me think that lying in that field is kind of a mainstream thing.

    The root cause here has little to do with science per se, and more to do with the rotten management of colleges and universities. Regardless of what the employee handbook states, there are few de facto restrictions on faculty conduct, and university administrations act to cover up problems by disciplining and threatening the whistle-blowers. Jerry Sandusky was not a scientist, he was a football coach, but if you look at the way Penn State concealed child molestation and protected him, that is typical of the way universities respond to faculty misconduct as well, and explains why academic dishonesty is tolerated. One full-time faculty member in the department in which I worked had not set foot in the department in over five years, nor ever appeared in any of the classes which she "taught." According to the department chairman, every time she was contacted to encourage her retirement, she was "drunk off her ass in the middle of the day." It was tolerated and covered up.

    I am not claiming that all scientists, fields, or academic departments are full of liars. I have never worked closely with physicists, computer scientists or mathematicians on a daily basis, but none whom I know personally have behaved like that.

    To sum it all up: psychology has a problem with poor reproducibility of published results, and many of the psychology faculty I knew were terrible liars; there might be a causal connection between the two.

    • But if science can be corrupted, how is it safe? Say you are transported a thousand years into the future. You no longer know the people and establishments doing science. Do you trust the scientific facts and results, or not? If you need to respect, trust, and know the scientists conducting the research to trust the results, that means that scientific findings are irrelevant. They are just as useless as someone's hunch or theory. Science is set up specifically so that the science stands on its own, so that if we f
  • We're inherently too biased. Only an AI with scalable human-like intelligence will give us real answers about ourselves.

    This may be why one will never be built.

  • Very interesting. Xenu would be proud.

  • (...as of about 8-9 years ago) The psych department had its own stats class, taught by a psych professor. You couldn't get an exemption even if you had a high-level statistics course under your belt already; they insisted that psych stats were 'special' somehow, and needed to be taught differently.

    If by 'special', you mean 'less rigorous' and 'taught by people that literally don't understand the definition of a function', then yes, the classes were special, and failed to prepare the students in any significant way for good statistical analysis.

    I'm sure the story is the same at many universities.

  • Science has disproved itself. It is not even internally consistent. Actual hard sciences get the same results. If you have no reason to trust that a result returned from science is more likely to be true than 50-50, then the entire system is just worthless. I think we need to rethink the whole process.
