Science

The Neuroscience of Screwing Up

resistant writes "As the evocative title from Wired magazine implies, Kevin Dunbar of the University of Toronto has taken an in-depth and fascinating look at scientific error, the scientists who cope with it, and sometimes transcend it to find new lines of inquiry. From the article: 'Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) "The scientists had these elaborate theories about what was supposed to happen," Dunbar says. "But the results kept contradicting their theories. It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."'"
  • by xmas2003 ( 739875 ) * on Wednesday December 30, 2009 @07:51PM (#30601542) Homepage
    The WIRED piece weaves what is written in the summary around the story of how Arno Penzias and Robert Wilson at Bell Labs discovered the cosmic background radiation [wikipedia.org] after being puzzled for a year about background noise on their radio telescope ... even scraping pigeon poop off their gear as a possible source until they realized the signal was real - Homer Simpson would have said D'OH! ;-) [komar.org]
    • Two Relevant Quotes (Score:5, Informative)

      by bill_mcgonigle ( 4333 ) * on Thursday December 31, 2009 @12:19AM (#30603056) Homepage Journal

      "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
        - Richard Feynman

      "The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! but rather, "hmm.... that's funny...."
        - Isaac Asimov

      • Re: (Score:3, Insightful)

        "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."

        Correct, as long as you take that to mean "experiments in general". It is possible to make mistakes in experiments, and you shouldn't throw out General Relativity because some lab newbie got the setup wrong.

        As I said in my other comment [slashdot.org] on this issue, you have to decide which is more likely: that you got the experiment wrong, or the hypothesis is wrong, and this depends on your confidence in both. But you should never hide the result, or else you can get an informational cascade that leads to conformism t

    • by shoor ( 33382 )
      When I first heard about the background radiation, I thought to myself: didn't George Gamow predict that in one of his Mr Tompkins books, which I read either in high school or junior high school (and those books were written for juveniles)? In fact Gamow did predict it. I read years later in Timothy Ferris's book "The Red Limit: The Search for the Edge of the Universe" that Gamow was astonished that the discoverers of the background radiation did not credit his insight. The book also mentions various scient
  • Ridiculous (Score:4, Interesting)

    by MrMista_B ( 891430 ) on Wednesday December 30, 2009 @07:54PM (#30601566)

    "It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."

    That doesn't mean the data is wrong, it means the /hypothesis/ was wrong, if not the theory, and needs to be modified.

    If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.

    • Re:Ridiculous (Score:5, Insightful)

      by wizardforce ( 1005805 ) on Wednesday December 30, 2009 @08:05PM (#30601636) Journal

      If your equipment is malfunctioning, you may end up with data that is fairly random where there should be some pattern, or your control measurements may come nowhere near the values they should. For example, a standardized solution tests at a markedly different concentration than it should: a good sign that something is wrong. Things go wrong occasionally. That is why it is imperative that experiments be repeatable and have good experimental design.
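      As a minimal illustration of that kind of control check (the tolerance, concentrations, and helper name below are all hypothetical, not from any real assay):

      # Sketch of a control check: flag a run whose standard solution reads too
      # far from its certified value.  All numbers are invented.

      def control_ok(measured, certified, tolerance_pct=2.0):
          """True if the control measurement is within tolerance of the certified value."""
          return abs(measured - certified) / certified * 100.0 <= tolerance_pct

      certified_conc = 0.100   # mol/L, what the standard solution should read
      measured_conc = 0.083    # mol/L, what the instrument actually reported

      if not control_ok(measured_conc, certified_conc):
          print("Control out of tolerance: suspect the equipment or procedure, not the theory.")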

      • by Rich0 ( 548339 )

        Yup - good controls are critical to good science.

        They're especially important when the work is more routine, because that's when you're more tempted to just believe results because they look like what you expect. Or, perhaps you're talking about quality control testing and have a financial interest in having the results pass.

        The only way to know that your results are actually right is to have a controlled analytical process. There are lots of ways to do that, and there is a whole field of pursuit around m

    • Re:Ridiculous (Score:4, Interesting)

      by MyLongNickName ( 822545 ) on Wednesday December 30, 2009 @08:06PM (#30601648) Journal

      And this is what bothers me. If you are willing to run an experiment enough times, you will eventually get data to support your assertions. Report a statistical 90% certainty, and it could be that you ran the scenario 100 times and threw out the 99 runs that did not give you that certainty. The scientific process is bullet proof. The folks who "do science" not necessarily so.
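      (A toy simulation of that effect, with made-up numbers rather than anything from TFA: test a nonexistent effect repeatedly, keep only the run that clears the bar, and a "positive" result appears by pure chance.)

      import random

      # Toy demonstration of "run it until it works": under a true null effect,
      # p-values are uniform on [0, 1], so at a 90% confidence level roughly one
      # run in ten will look "significant" by chance alone.

      def run_null_experiment(rng):
          """Simulated p-value for an experiment where nothing real is going on."""
          return rng.random()

      rng = random.Random(42)
      alpha = 0.10  # 90% confidence level
      tries = 0
      while run_null_experiment(rng) >= alpha:
          tries += 1
      print(f"'Significant' result obtained on try {tries + 1}, despite no real effect.")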

      • Re: (Score:3, Interesting)

        by TapeCutter ( 624760 ) *
        "If you are willing to run an experiment enough times, you will eventually get data to support your assertions."

        Yes, I believe Edison tried over 5000 different handmade bulb/filament combinations before he found one that supported his assertion.

        Throwing out data is not about proving pet theories; it's about admitting you cocked up the experiment. E.g.: Prof Sumner Miller [abc.net.au] never edited out failed demonstrations from his TV show, nor did he claim the failed demo proved accepted theories of physics were wron
        • Edison didn't believe in Ohm's law, so it isn't surprising it took him that long to get his labmonkeys to make something useful.

          • Re: (Score:3, Insightful)

            by ibsteve2u ( 1184603 )
            Sorta kinda. Per Harold Evans' book The Spark of Genius as quoted in U.S. News & World Report [usnews.com]

            The real secret, Edison found, arguing it out with Charles Batchelor, was to raise the voltage to push a small amount of current through a thin wire to a high-resistance filament. It was an application of the law propounded in 1827 by the German physicist George Ohm, but it was still imperfectly understood. Edison himself said later, "At the time I experimented I did not understand Ohm's law. Moreover, I do not want to understand Ohm's law. It would stop me experimenting." This is Edison in his folksy genius mode. Understanding the relationship linking voltage, current, and resistance was crucial to the development of the incandescent lamp, and he understood it intuitively even if he did not express it in a mathematical formula.
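            For anyone who wants the arithmetic spelled out, here is an illustrative sketch of why that combination (higher voltage, smaller current, high-resistance filament) cuts losses in the supply wiring; the voltages and resistances below are invented, not Edison's actual figures.

            # Illustrative only (made-up numbers): Ohm's law V = I*R and the power
            # relation P = I^2 * R show why delivering the same lamp power at a
            # higher voltage, hence lower current, wastes less power in the wires.

            def line_loss_fraction(power_delivered, voltage, line_resistance):
                """Fraction of the delivered power dissipated in the supply wiring."""
                current = power_delivered / voltage        # I = P / V
                loss = current ** 2 * line_resistance      # P_loss = I^2 * R_line
                return loss / power_delivered

            for volts in (20.0, 110.0):
                frac = line_loss_fraction(power_delivered=100.0, voltage=volts, line_resistance=1.0)
                print(f"{volts:5.0f} V supply: {frac:.1%} of the lamp power lost in a 1-ohm line")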

      • If there's nothing wrong with your equipment or procedures and your experiment fails to find X, then that indicates your hypothesis is wrong, and any group that repeats your experiment is going to know about it immediately.

      • by Culture20 ( 968837 ) on Wednesday December 30, 2009 @09:14PM (#30602108)

        The scientific process is bullet proof. The folks who "do science" not necessarily so.

        What exactly are you advocating?

        • Re: (Score:3, Funny)

          by Thing 1 ( 178996 )

          The scientific process is bullet proof. The folks who "do science" not necessarily so.

          What exactly are you advocating?

          That, in battle against an opponent armed with a firearm of some sort, one should adorn one's self with the scientific process, and not the folks who are following it, for greatest effect?

        • by ArcherB ( 796902 )

          The scientific process is bullet proof. The folks who "do science" not necessarily so.

          What exactly are you advocating?

          Maybe he is saying that as long as scientists stand behind science, they'll be fine when the bullets start flying. When they ignore science and try to do their own thing, they get a cap in their asses.

      • Ok I'm really not clear on what you're trying to illustrate here.

        Sure you can get a minority result from an experiment but the less likely the outcome, the more costly it is to get those results and the less likely anyone else who repeats that experiment will see the same result set.

        In other words, if you tried to rig a test with a 95% confidence level, you would have to run the same test about twenty times just to expect a result outside your confidence interval, *but* that doesn't guarantee that the res
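        (A quick back-of-the-envelope check of those figures, assuming independent runs and a 5% false-positive rate:)

        # Rough arithmetic behind "about twenty runs": with independent runs and a
        # 5% false-positive rate, the expected wait for the first false positive is
        # 1/0.05 = 20 runs, but 20 runs still don't guarantee one.

        alpha = 0.05
        expected_runs = 1 / alpha                      # mean of a geometric distribution
        p_at_least_one = 1 - (1 - alpha) ** 20         # chance of >= 1 false positive in 20 runs
        print(f"Expected runs until the first false positive: {expected_runs:.0f}")
        print(f"Chance of at least one false positive in 20 runs: {p_at_least_one:.0%}")  # ~64%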
      • Re: (Score:3, Informative)

        by dcollins ( 135727 )

        This is well known and called the "file drawer effect" (or publication bias).

        http://en.wikipedia.org/wiki/Publication_bias [wikipedia.org]

        "In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start.[11] In this way, negative results should no l

    • Re:Ridiculous (Score:4, Informative)

      by Shadow of Eternity ( 795165 ) on Wednesday December 30, 2009 @08:09PM (#30601660)

      Not always, sometimes your data doesn't make sense because you made a mistake somewhere that wound up turning your results into garbage.

      • Sounds like they're in need of engineers to design those experiments.

      • Re: (Score:3, Interesting)

        Very true. Sometimes the data really are wrong. (The Minimum Discrimination Information criterion [wikipedia.org] is a way of rigorously answering the question of whether your data or your theory is in error.)

        But simply throwing out the data is the 100% wrong thing to do. That destroys the information that would have eventually told you if you were really doing something wrong in your experiment, or if you've discovered something new.

        It also creates an information cascade-type situation where everyone culls any non-conf
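        (Very roughly, the criterion linked above compares how far the observed distribution sits from the one the theory predicts, via the Kullback-Leibler divergence. The snippet below is only a sketch of that comparison with invented counts, not the formal MDI procedure.)

        import math

        # Sketch of the idea behind the Minimum Discrimination Information criterion:
        # quantify the discrepancy between observed frequencies and the theory's
        # predicted frequencies with the Kullback-Leibler divergence.  Counts invented.

        def kl_divergence(observed, expected):
            """KL divergence (nats) between two discrete probability distributions."""
            return sum(p * math.log(p / q) for p, q in zip(observed, expected) if p > 0)

        theory = [0.25, 0.25, 0.25, 0.25]      # the model says four equally likely outcomes
        counts = [180, 210, 400, 210]          # what the experiment actually produced
        total = sum(counts)
        data = [c / total for c in counts]

        kl = kl_divergence(data, theory)
        # 2*N*KL is the G-test statistic, approximately chi-squared under the theory,
        # so a large value points at a broken theory or a broken experiment.
        print(f"KL(data || theory) = {kl:.4f} nats, G = 2*N*KL = {2 * total * kl:.1f} (3 dof)")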

    • by labnet ( 457441 )

      Often the data is crap because the measurements are so hard to make.
      For example, you would think measuring temperature is easy. Not so.
      Let's say you wish to determine the cooling capacity of an air conditioner.
      How do you measure the temperature and air velocity gradients across both the return and supply air streams? Do you use 1 sensor, 10 sensors, or 100 sensors? Do you create turbulence or laminar flow? How accurate is the humidity measurement?
      The point is, the data is often crap because measurements are hard.
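      To make that concrete, here is a toy, sensible-heat-only capacity estimate (every name and number is invented, and a real rating would also need the latent/humidity term, which is exactly the point):

      # Toy sensible-heat estimate of cooling capacity from a handful of probes.
      # Q = rho * V_dot * c_p * (T_return - T_supply).  All numbers are invented,
      # and averaging the probes quietly hides the gradient/turbulence problems above.

      RHO_AIR = 1.2     # kg/m^3, approximate air density
      CP_AIR = 1005.0   # J/(kg*K), approximate specific heat of air

      def mean(values):
          return sum(values) / len(values)

      return_temps = [24.1, 23.8, 24.5, 24.0]   # deg C, probes in the return stream
      supply_temps = [13.2, 12.7, 13.9, 13.4]   # deg C, probes in the supply stream
      volume_flow = 0.35                        # m^3/s, from equally uncertain velocity probes

      q_watts = RHO_AIR * volume_flow * CP_AIR * (mean(return_temps) - mean(supply_temps))
      print(f"Sensible cooling capacity ~ {q_watts / 1000:.1f} kW (latent load not included)")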

    • It can also be that their methodology is wrong, their equipment is wrong, or they are simply incompetent.

      I suspect that the main thing that differentiates scientists who make a major lasting contribution to science from those who just push knowledge slowly forward is whether they treat such problems as "Oh... that is interesting" or "Crap, must have screwed it up, better start it over".

      Of course I'm sure those in the first category spend a lot of their time chasing ghosts because most of the time they did in fa

    • Re: (Score:2, Insightful)

      by rbannon ( 512814 )

      Not religion, but federally funded dogma. More than 20 years ago I became aware of how dogma gets grounded in fundamental research: you need to write grants that fit the dogma. One hapless soul actually stood up during a big AIDS conference and suggested that the researchers were mere lemmings. He, of course, was shouted down, but he was only trying to tell the lemmings to keep an open mind. Fast forward 20+ years and the lemmings are still in control.

      Our educational system is totally broken when the educat

      • Re: (Score:3, Informative)

        by Simetrical ( 1047518 )

        Our educational system is totally broken when the educated just want things to fit. Even in mathematics, we're promoting a crop of "just tell me what to do!"

        As a grad student in pure mathematics, I'm curious: do you mean in low-level math education, or mathematical research? Basic math education is often just about giving you the tools you need to do your job, so there's nothing wrong with just telling people what to do. Higher-level courses (meaning the kinds only pure-math majors typically take) do require you to actually understand the material and be able to prove things from first principles, probably far more so than any other field.

    • Re:Ridiculous (Score:5, Insightful)

      by BlueParrot ( 965239 ) on Wednesday December 30, 2009 @09:21PM (#30602140)

      "It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."

      That doesn't mean the data is wrong, it means the /hypothesis/ was wrong, if not the theory, and needs to be modified.

      If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.

      a) You've clearly never done any real research, or you would be well aware of the hundreds of millions of ways you can screw up an experiment and get nonsense data (bad machinery, you wired up a detector wrong, the cell lines you were feeding vitamin K happened to get contaminated by bacteria halfway through, etc.)

      b) There is almost never a clear difference between data and theory. The only raw data you have is a bunch of numbers on a piece of paper, in order to determine if they correspond to your theory or not you need to interpret the numbers somehow, and it may just as well be the interpretation that is wrong as is the theory you were trying to test using the interpreted data.

      c) Because you are often restricted by cost and time, it's often not feasible to do a full analysis of why your experiment did not work. Hence if you did not get any useful results (uncertainty was too large, it seems obvious you must have messed up somewhere, etc.) then frequently the only sane option is to conclude your experiment was a failure.

      d) If scientists followed your advice we would never have got the electronic equipment you used to make your post.

      Basically your ideas about what science is or should be are extremely naive and to anybody who has done even a high school chemistry experiment it should be clear you have no idea what you're talking about.

      • Re:Ridiculous (Score:4, Insightful)

        by honkycat ( 249849 ) on Wednesday December 30, 2009 @11:41PM (#30602900) Homepage Journal

        Your post is spot on. I'd mod up, but I wanted to clarify (I think you'd agree) that there's a difference between a successful experiment that is inconsistent with a theory and a failed experiment. The purpose of an experiment is not to prove a hypothesis, it's to TEST a hypothesis (or to gather data toward that end). Success means you make a useful statement that aids in the test. Failure means the data were not useful. It has nothing to do with the correctness of the theory or hypothesis.

        In the specific quote mentioned, the data "not making sense" doesn't mean that they disagreed with what the experimenter was expecting, it means that they came back in a way that "couldn't happen." That is, that something had gone wrong making the experiment a failure. For example, in some tests I was doing a couple years ago with a prototype radio receiver, I needed to measure its noise level. As a signal, I would sweep a resistive load up and down in temperature---the load outputs noise with intensity that depends on its physical temperature. In this case, as a check, I would start with the load at a low temperature, then heat it past the point of interest, and then cool it back to the starting temperature. I would measure twice, once on the way up and once on the way down. What I found was that the results disagreed between the two measurements. That "does not make sense" in the sense of the article---the testing method was flawed.

        In a sense, it was a successful test of a hypothesis. The hypothesis was that the receiver behaves in a particular way (which is what you'd consider the REAL hypothesis under test) AND that the test setup was a valid way to measure that. I disproved the joint hypothesis. In this case, it was the latter part that was invalid---the test was invalid---and I could say nothing about the receiver. This was simply a failed experiment. There is no religion involved in my declining to claim that receivers don't behave as we think they do just because I discarded my results.

        Every now and then, the reason for a failure might be interesting. This is rare, but when it happens can be responsible for amazing discoveries. In my case, it was a problem of thermal equilibrium. My devices were operating in a vacuum at very low temperatures (about 20 Kelvin) and it can be difficult to affix a heater or a thermometer to just the part of a device that you want to heat or measure....

        The OP's statements mirror the general misunderstanding of the scientific method that is rampant in the non-scientific community. We need to help people understand this.
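        (For readers unfamiliar with that kind of check, the sketch below shows the shape of the analysis under an assumed linear radiometer model with made-up numbers; it is not the poster's actual setup. Fit the receiver output against load temperature separately for the up-sweep and the down-sweep, and treat disagreement between the fits as a sign that the measurement, not the receiver, is the problem.)

        # Sketch with invented numbers.  Assumed model: output ~ gain * (T_load + T_rx),
        # so a straight-line fit of output power vs. load temperature gives the
        # receiver noise temperature from intercept/slope.  If the heating sweep and
        # the cooling sweep disagree, the load temperature wasn't what the
        # thermometer claimed, and the data say nothing about the receiver.

        def fit_line(xs, ys):
            """Least-squares slope and intercept for y = a*x + b."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
            return a, my - a * mx

        temps = [20.0, 30.0, 40.0, 50.0]          # K, nominal load temperatures
        power_up = [0.95, 1.08, 1.22, 1.35]       # arbitrary units, heating sweep
        power_down = [1.02, 1.13, 1.24, 1.36]     # same nominal temps, cooling sweep

        for label, powers in (("up", power_up), ("down", power_down)):
            slope, intercept = fit_line(temps, powers)
            print(f"{label:>4}-sweep fit: receiver noise temperature ~ {intercept / slope:.0f} K")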

    • Re: (Score:3, Informative)

      by RobertF ( 892444 )
      Anyone who has spent time working in a lab knows that not all data is equal. You can get useless results if something isn't quite as clean as necessary, or perhaps you were in a bit of a rush and didn't connect everything perfectly. Any interesting experiment usually has numerous points at which humans can mess things up. Errant data is usually a sign that you have improperly set up the experiment, so you'll spend most of your time reviewing and fixing procedures until you get what you expected.
    • Re: (Score:3, Informative)

      by Arancaytar ( 966377 )

      It is possible to end up with crap data because the premise of your experiment is wrong. You can ignore a variable that should have been controlled or kept equal, or you can measure the wrong variable.

      You can also end up with data that neither confirms nor denies your hypothesis, because it allows no statistically significant conclusion.

  • by techno-vampire ( 666512 ) on Wednesday December 30, 2009 @07:56PM (#30601586) Homepage
    If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them. TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.
    • Re: (Score:3, Insightful)

      Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.

      It's just a result of how science is performed. Science doesn't have low-hanging fruit anymore; consequently, any problem that someone investigates takes dedication, because it's intellectually hard or takes lots of effort or both. Most people aren't going to be motivated enough to put that much effort into it without already having an axe to grind, a point

      • Re: (Score:2, Insightful)

        by dwguenther ( 1100987 )
        The people who are not motivated enough to put in the effort are not scientists - they are pundits. Researchers who are truly interested in their work - and that would be most of them - put in decades of observation and analysis looking for some truth, because simply grinding an axe would never be personally satisfying. It is lazy and disrespectful of you and other armchair commentators to simply dismiss all that work with a three-line opinion.
        • It is lazy and disrespectful of you and other armchair commentators to simply dismiss all that work with a three-line opinion.

          Doing precisely this is one of the more distasteful parts of most scientists' jobs.

    • by caramelcarrot ( 778148 ) on Wednesday December 30, 2009 @08:32PM (#30601850)
      As other people have pointed out - sometimes the data is just crap due to the difficulty of making measurements. Sometimes you've measured something other than what you actually need to compare to theory, sometimes there's too much noise. The skill of a great experimentalist is being able to take good enough data that you can't justify ignoring it if it comes out different to what you expected.
    • by bcrowell ( 177657 ) on Wednesday December 30, 2009 @08:34PM (#30601872) Homepage

      If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them. TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.

      This is a beautiful explanation of how science is supposed to work. In reality, science doesn't really work this way. It doesn't work this way in my experience as a scientist, and it doesn't work this way if you read the history of science.

      For some good historical examples, see Microbe Hunters, by de Kruif (one of the best science books of all time, although you have to look past the racism in some places -- de Kruif was born in 1890). A good example from physics is the Millikan oil-drop experiment, where he threw out all the data that didn't fit what he was trying to prove -- but then claimed in his paper that he'd never thrown out any data. Galileo described lots of experiments as if he'd done them, even though he didn't actually do them, or they wouldn't have actually come out the way he described.

      Michelson and Morley set out to prove the existence of the aether, and published their results believing they must be wrong. Nobody else believed them, either. Various people then spent the next 30 years trying to fix the experiment by doing things like taking the apparatus up to the top of a mountain, or doing the experiment in a tent, so that the aether wouldn't be pulled along with the earth or the walls of a building. By the time Einstein published special relativity in 1905, most physicists had either never heard of the MM experiment, or considered it inconclusive.

      When your results come out goofy, 99.9% of the time it's because you screwed up. You don't publish it, you go back and fix it. If every scientist published every result he didn't believe himself, the results would be disastrous. If you try over and over again to fix it, and you still fail, only then do you have to make a complicated judgment about whether to publish it or not.

      The way science really works is not that scientists are disinterested. Scientists generally have extremely strong opinions that they set out to prove are true using experiments. The motivation is often that scientist A dislikes scientist B and wants to prove him wrong, or something similarly irrational, personal, or emotional. The reason this doesn't cause the downfall of science as an enterprise is that there are checks and balances built in. If A and B are enemies (and if you think the word "enemies" is too strong, you haven't spent much time around academics), and A publishes something, B may decide just to see if he can screw that sonofabitch A over by reproducing his work and finding something wrong with it. It's just like the adversarial system of justice. Society doesn't fall apart just because there are lawyers willing to represent nasty criminals. Einstein was famously asked what he would do if a certain experiment didn't come out consistent with relativity; his reply was that then the experiment would be wrong. Einstein fought against Bohr's quantum mechanics for decades. Bohr fought against Einstein's photons for decades. They were bitter rivals (and also good friends). It didn't matter that they were intensely prejudiced, and wrong 50% of the time; in the end, things sorted themselves out.

      • If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them. TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.

        This is a beautiful explanation of how science is supposed to work. In reality, scien

        • Re: (Score:3, Insightful)

          by bcrowell ( 177657 )

          Feyerabend and many other philosophers of science take a complementary stand to this by stressing the theory-ladenness of "facts."

          Yep, they're totally right. For example, this 2003 paper [arxiv.org] claimed to have empirically verified the prediction of general relativity that gravitational forces propagate at the speed of light. The authors made some technical errors, which were rapidly pointed out by others in the field. The final answer is that actually nobody has the faintest clue how to test this specific, centu

    • If the data don't make sense according to your theory, you don't discard the data, you discard the theory

      Not really, assuming the theory is something well-established and tested. Popper oversimplified things - experimental data is rarely so unambiguous that you can outright discard a reliable theory. It's much more likely that you messed up than that you proved it wrong, or maybe the theory needs a fairly minor modification rather than complete rejection.

      That's no reason to discard data though - not until you und

    • Most of the time that the data doesn't make sense it's because the scientist fucked up and didn't calibrate the sensor, or mislabeled a sample, or made a tpyo.

      But yes, discarding it is the wrong thing to do. Repeating it again (and if you now get what you were expecting, doing it a third time) is usually the thing to do.

      Of course if it is expensive, then just write down the results the theory predicts with a fudge factor for error. Better to retard humanity's knowledge of the world than to maybe look silly.

    • by syousef ( 465911 )

      If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.

      If it's a well established theory, you want to eliminate sources of the error before trying to overthrow it. For every Einstein that moves us to the next level from a well established theory there are 3 million cranks that just can't set up a well controlled experiment to save themselves. If you've conducted the experiment sufficiently b

    • It depends. If most of your data is noise, it's fairly worthless anyway, and you are better off limiting the sources of error and trying again.
      For example, consider seismic data. You've got 50 Hz or thereabouts induced in the cables near power lines, wind blowing on the geophones, passing cars or trains, differences in soil above the rock, and other sources of noise. A lot of seismic data processing seems to be about throwing away the noisy data and stacking up what is left to limit the effect of
    • Yes, but that pre-supposes the ability to invent a new theory, because scientists are very unwilling to discard a theory if they have no alternative. After all, having no theoretical framework at all is very uncomfortable.

      And there I do think that Dunbar makes a perfectly valid point: Any group of specialists who are all of the same mind is very bad at thinking "out of the box" and inventing a new theory. To be able to do that, you need a healthy mixture of different backgrounds, and enough dissent to stimul

  • Good! (Score:5, Interesting)

    by RyanFenton ( 230700 ) on Wednesday December 30, 2009 @08:10PM (#30601662)

    If problems occur as you postulate elaborate hypotheses, then stop piling up the elaborate hypotheses! But be sure to still make available your existing (complex) hypotheses, methodology and unexpected data - preventing others from going down the same path with the same methodology is still highly valuable!

    Let's say you're looking at a production and consumption cycle involving neurotransmitters and neuroreceptors of some sort, and the various channels of input and output involved. The starting presumption on which you base your hypothesis is that a buildup triggers an electrical signal to stop consumption and clear the channel. The only evidence you can realistically gather for now is protein density at a certain output channel - but others have worked to ensure this is a reliable approach specifically under these circumstances.

    So, you do the specific experiment, trigger the signal, but you get a wildly different result - the stop in consumption occurs, but the protein density does not change at all in the output channel. What actually happened is still unknown; you simply haven't verified any correlation with your hypothesis. You still have valuable data, but no mechanism to verify under the circumstances. Either your methodology failed, or you misunderstood what was happening - and the world of knowledge is made larger by either... even if your paymasters won't be happy about the result.

    Science is often like throwing pebbles in complete darkness - it takes a lot of stones and close listening to make out a mental picture of the scene - especially when there's a lot of noise already around. Everyone would love it if we could just flip the lights on - but we have yet to invent a light that can see into the inner workings of the functioning brain very well. Gotta keep throwing those pebbles for now.

    Ryan Fenton

    • Re: (Score:2, Funny)

      by dcmoebius ( 1527443 )
      Neurotransmitter consumption cycles? Come on, if you want the +5 Insightful, you really need to couch your hypotheticals in terms of cars.
    • by Thing 1 ( 178996 )

      [...] preventing others from going down the same path with the same methodology is still highly valuable!

      Exactly. Thomas Edison "discovered" over 5,000 ways how not to create a light bulb. Had he published each and every one of them, perhaps the light bulb would have been invented sooner -- perhaps by someone else, or perhaps by him, collaborating with someone else who had read his published accounts of "how not to create a light bulb."

  • Is it just me or does this sound like an explanation for some of the Climategate science... But in that case they just massaged or ignored data that didn't agree with their conceptual framework of CO2 causing global warming.

    Not that the skeptics are all that immune. They seem to cherry pick data almost as well (just not quite as successfully from the POV of selling their story to the media and political left ..)

    • Re: (Score:3, Informative)

      by dwguenther ( 1100987 )
      Yeah, it's just you. AP News found no evidence of massaged or ignored data (http://news.yahoo.com/s/ap/20091212/ap_on_sc/climate_e_mails). So climate science is a poor example of this thesis.
      • by khallow ( 566160 )
        Where's the AP story for the computer code?
      • Re: (Score:3, Insightful)

        The Associated Press are scientists now? Oy vey, what is the world coming to when commenters on science.slashdot.org quote the media as an authoritative source?
        • As opposed to a random blogger? Really? Be careful throwing around the authority argument, it might come back to bite you.

    • by Rising Ape ( 1620461 ) on Wednesday December 30, 2009 @08:58PM (#30602002)

      By "almost as well" I assume you mean "all the time". The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.

      And there's no real evidence of the proper scientists massaging or ignoring anything. Just because a detailed, written account of everything doesn't exist in stolen, incomplete private documents doesn't mean it doesn't exist at all.

      • by J Story ( 30227 ) on Wednesday December 30, 2009 @09:50PM (#30602298) Homepage

        And there's no real evidence of the proper scientists massaging or ignoring anything. Just because a detailed, written account of everything doesn't exist in stolen, incomplete private documents doesn't mean it doesn't exist at all.

        The behaviour surrounding the data is certainly indicative of a lack of confidence in the findings. Refusing FOI requests and claiming that "the dog ate it" do not show a group filled with the belief that their research is unassailable.

      • The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.

        Only if you don't actually look around. Richard Lindzen [wsj.com] is a climate researcher at MIT, and has investigated it well (he was one of the authors of the IPCC report). His argument is that there is no strong evidence linking anthropogenic CO2 and a global crisis.

        And he is right. Check out the evidence for yourself [www.ipcc.ch]. Look at it critically, and try to see if they can establish a link. They can't.

  • by pieisgood ( 841871 ) on Wednesday December 30, 2009 @08:55PM (#30601988) Journal
    I can't help but think that Neuroscience needs to calm down, sit back, and take a deep breath. We are examining a system and we are trying to reverse engineer it. We can't start out by trying to create elaborate hypotheses for large systems; we need to go low level and examine the simpler systems. I really think they should hold on to the higher cognitive models for a later time because we can't even completely model C. elegans, and it has the fewest neurons of any current, living organism. The way I see it, I totally expect their hypotheses to be wrong, because they don't thoroughly understand the low end of the system.
    • Why not look at data wherever we can find it? If they find certain patterns that the human brain tends to go through, why not observe them and record them and understand them as well as possible?

      It isn't always necessary to know the underlying, simpler systems before we get useful information. I can calculate the resultant change in velocity of two objects after a collision, even though I don't understand the full quantum-mechanical underpinnings of the collision. I know what a computer will do when I ca
  • Anonymous Coward (Score:2, Insightful)

    by Anonymous Coward

    As a researcher myself, I certainly hope they don't throw out data too often. There is occasion to do so... sometimes, when trying to establish correlations (admittedly the weakest form of describing a phenomenon, etc.), you learn that there is not one. There are times you obtain data that simply says, "These two phenomena do not strongly affect each other" or "Something we do not know about or have not accounted for is happening all over this mess."

    This data could be kept forever in the unlikely event it

  • Bugs (Score:5, Insightful)

    by graft ( 556969 ) on Wednesday December 30, 2009 @09:36PM (#30602208) Homepage

    If the data doesn't fit your theory, the problem is most likely neither with the data (which is fine) nor with your theory (which may also be fine) but with the method you used to produce your data. You probably wired in an incorrect resistor, forgot to close a parenthesis in your Perl code, forgot to add the correct amount of EDTA to your reaction, etc. Then your results ended up looking like shit, and not surprisingly. Doing science is hard.

    There's no need to postulate any grand conspiracies or take pot-shots at science in general. This paper is examining real people doing real shit. Most of the time we fuck up, and we're not smart enough to figure out where we made the error.

  • Exceedingly common (Score:2, Interesting)

    by pigwiggle ( 882643 )

    in my experience - I'm a physical chemist doing atomic resolution condensed phase computer modeling. It's so common that I am troubled when the first analysis gives the answer I expected. I likely spend more time looking for errors when the answer makes sense the first go through. Really.

    • in my experience - I'm a physical chemist doing atomic resolution condensed phase computer modeling. It's so common that I am troubled when the first analysis gives the answer I expected. I likely spend more time looking for errors when the answer makes sense the first go through. Really.

      This. Getting everything right the first time is like winning the lottery - you don't believe it, and you shouldn't. People doing experiments is a messy thing. Isolating variables is difficult, and much more difficult than just making something happen.

      • by jamesh ( 87723 )

        winning the lottery

        Nobody ever wins the lottery anyway. I took a quick sample of the subset of the population that participated in lotteries and none of them had ever won, and by extrapolating that data I was able to prove that nobody had ever won a lottery ever.

        Some other studies have been done that came to different conclusions but I believe that their data collection methodology was flawed as their results didn't agree with mine, so I think they can be safely excluded.

  • by JohnFluxx ( 413620 ) on Thursday December 31, 2009 @12:13AM (#30603018)

    It's pretty rare for everything to go right.

    I work with holography. I shine a laser at a piece of film, then develop the film. And presto, I get no image. Do I throw out the theory that exposing film to light should produce an image? No, I assume that I screwed up and go back and start again. It's not uncommon for me to spend 3 months of cleaning, aligning, measuring and so on until I produce a proper image. I then throw away all the "bad" data. Maybe, theoretically, that data could be useful, but there are too many parameters to account for.

  • by dcollins ( 135727 ) on Thursday December 31, 2009 @01:59AM (#30603492) Homepage

    I don't think the summary quite found the central point of TFA.

    "Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn't the presentation -- it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they'd previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work."

    "I saw this happen all the time," Dunbar says. "A scientist would be trying to describe their approach, and they'd be getting a little defensive, and then they'd get this quizzical look on their face. It was like they'd finally understood what was important."

    So that's it: The keys are multiple viewpoints, skepticism, and intellectual competitiveness.

  • by DynaSoar ( 714234 ) on Thursday December 31, 2009 @02:59AM (#30603670) Journal

    I am calling this "neuroscience" because it has nothing to do with how the nervous system operates. In this sense I am following the lead of WIRED and/or Dunbar, who can't tell a neuro from a social. From TFA: "Kevin Dunbar is a researcher who studies how scientists study things". OK, he studies things called scientists. Scientists are people. The study of people and how they behave is psychology. Science is a social activity. Investigations of social activities are sociology when taken as a whole, or social psychology when considered in terms of the activities of individuals operating within a social group. Dunbar studied social psychology, not neuroscience. There's not a speck of neuroscience cereal in it anywhere. There's very little if any actual social psychology, or psychology, or any science at all. There's talking about science, there's talking to scientists about doing science, and there's watching them do science. There's watching and talking about getting good results and not getting good results, and what people do in the latter case. If Dunbar thinks he's doing neuroscience, I suspect he's not even very clear on science itself, much less the various branches. And it does say he's "a researcher in", not that he's a scientist. I do research in curry recipes from different countries and cultures. I'm a researcher, but not a cultural curriology scientist.

    In fact I'll go so far as to say he's a researcher because he knows precious little and is trying to find out basic things, not, as is the case with most scientists, someone who knows a fair amount and is trying to build on that with new knowledge. He is apparently not clear on the difference between 'screwing up' and not getting good and/or clean results. This may well be because he was unclear himself as to what it was he was looking at and talking about, and he thought he was just not getting good or clean results, when actually, guess what?

    He doesn't let loose any secrets. Anyone can talk to scientists and ask what happens if and when things don't turn out as expected. If you get an honest (i.e., less concerned with appearances than truth) scientist, anyone would get the same answers. Or one could simply read work from real social psychologists and others who study science and scientists and learn the same things. I myself always recommend Collins and Pinch's "The Golem" as an illuminating, instructive and entertaining starting point.

    And a technical point on methodology: a study that does not find a difference between groups, treatments, whatever, 'fails to reject the null hypothesis' (the assertion that there is no observable difference). It does not prove there is no difference, it merely fails to find one. It fails, but only to find a difference, not to produce a result. It can't say there is no difference, it can only say that it couldn't find one. And, it fails to find a difference, no matter how nicely or haphazardly the data come out. The only studies that "fail" produce no data. Scientists may further fail to find an interpretation, but there's no limitation on trying to figure this out, and it applies to both 'results' (reject null hypothesis) and 'no results' (fail to reject null). Studies that produce data that 'makes no sense' produce data that fails to reject the null. The 'making no sense' is a post hoc evaluation of the data based on an incomplete understanding of the design, collection, analysis or interpretation. Such evaluations are done in science, but they are not part of the scientific process. Therefore when this occurs, it is not a "scientific" result and cannot be taken to reflect on the nature or quality of the work done. If you can't figure out what it means, you can't figure it out. You cannot say that since you cannot figure it out, then you figure out that it fails. If you think you can take something that 'doesn't make sense' and then say that it makes sense in that it represents a failure, then you've contradicted the assertion that it makes no sense. All you can say is that you don't understand it, and since you d
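    (The 'fail to reject' versus 'prove no difference' distinction can be made concrete with a standard two-sample test. The snippet below is just an illustration on simulated data, not anything from TFA.)

    import random
    from scipy import stats

    # Two groups drawn from the SAME distribution, compared with a two-sample t-test.
    # A large p-value does not prove the groups are identical; it only means no
    # difference was detected at this sample size.

    rng = random.Random(7)
    group_a = [rng.gauss(10.0, 2.0) for _ in range(30)]
    group_b = [rng.gauss(10.0, 2.0) for _ in range(30)]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        print(f"p = {p_value:.3f}: reject the null hypothesis of equal means")
    else:
        print(f"p = {p_value:.3f}: fail to reject the null (which is not proof of 'no difference')")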

    • by DingerX ( 847589 )
      A few things about your post:

      A. It's not as readable as Mr. Lehrer's article.
      B. Mr. Lehrer is not the same as Dr. Dunbar, so what a journalist says about a scientist is not what scientist says.
      C. I gather the discussion of using fMRI on test subjects and noting ACC and DLPFC firing at the same time is, by your analysis, in the realm of cognitive psychology, and not neuroscience.
      D. So, according to your analysis, neuroscientists use the terms "Cognitive Dissonance" and "Confirmation Bias", while psychologis
  • About sex with the woman on top? (Yeah, I know bad joke.)
