Bloggers Put Scientific Method To the Test

ananyo writes "Scrounging chemicals and equipment in their spare time, a team of chemistry bloggers is trying to replicate published protocols for making molecules. The researchers want to check how easy it is to repeat the recipes that scientists report in papers — and are inviting fellow chemists to join them. Blogger See Arr Oh, chemistry graduate student Matt Katcher from Princeton, New Jersey, and two bloggers called Organometallica and BRSM, have together launched Blog Syn, in which they report their progress online. Among the frustrations that led the team to set up Blog Syn are claims that reactions yield products in greater amounts than seems reasonable, and scanty detail about specific conditions in which to run reactions. In some cases, reactions are reported which seem too good to be true — such as a 2009 paper which was corrected within 24 hours by web-savvy chemists live-blogging the experiment; an episode which partially inspired Blog Syn. According to chemist Peter Scott of the University of Warwick in Coventry, UK, synthetic chemists spend most of their time getting published reactions to work. 'That is the elephant in the room of synthetic chemistry.'"
  • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Monday January 21, 2013 @09:15PM (#42652853) Homepage Journal
    The bloggers are not testing the scientific method, they are testing methods that are scientific. Those are two vastly different concepts. Their work is important, but not epic.
    • Re: (Score:2, Interesting)

      The bloggers are not testing the scientific method, they are testing methods that are scientific

      Putting your ignorance in boldface type is amusing. The most basic promise of the scientific method is that results can be replicated by anyone with the proper equipment repeatedly and reliably. This is accomplished by describing an experiment to the detail level necessary to reproduce the result. If the result cannot be reproduced from a description of the experiment, it has failed this test.

      They are testing the scientific method insofar as asking whether professional and peer-reviewed scientific work actually meets this basic test.

      • by gman003 ( 1693318 ) on Monday January 21, 2013 @09:56PM (#42653069)

        They are testing whether scientific papers meet the scientific method (ie. the results are reproducible). They are not testing the validity of the scientific method itself (myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question).

        That is the point GP was attempting to make.

        • by c0lo ( 1497653 ) on Monday January 21, 2013 @10:07PM (#42653147)

          myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question

          So little faith you have...

          (large grin)

          • There are protocols which are published and then there are protocols that remain unpublished

            How about those protocols which have remained unpublished?

            Any way to test those?

            • Sure, there is. All you have to do is send me $500 and I will test them according to some protocols which I have developed. I'd like to share them with you, but unfortunately due to proprietary business secrets which must remain undisclosed, I will be unable to publish them.
            • There are protocols which are published and then there are protocols that remain unpublished

              How about those protocols which have remained unpublished?

              Any way to test those?

              That's not science. That's grandma's secret recipe.

              • That which cannot be patented or copyrighted remains secret by necessity. Lest someone else profit from your work, which cannot be permitted.

                It's all about money. All of it. All the time. For everyone. Accept this, and you will do more than survive, perhaps. Ignore this, and you will suffer. Reject this, and you will fail, either in your philosophy or in your efforts. You don't have to like it nor do you have to perpetuate it, but it is what it is.

            • by c0lo ( 1497653 )

              How about those protocols which have remained unpublished?

              Ok... for you, let me be as explicit as possible: put your faith in God and pray!

              Is it clearer now?
              If not, do consider the context (or else [xkcd.com]).

              Still confused? I hope this will clear the matter: whoosh!

            • by dkf ( 304284 )

              How about those protocols which have remained unpublished?

              You need to test "Just make it up so that results give the answer we want in order to get more funding"? Really?

              Not that good scientists ever use that method. Alas, not all scientists are good, and that's why it is important to check published results. (It's a good task for grad students...)

            • The journal for those is Tetrahedron Letters, and chemists have tried in vain to replicate them for a century.
            • Well, there are the unpublished protocols, and the published unpublished protocols, and finally the unpublished unpublished protocols.

              (Just ask Rummy)

        • They are testing whether scientific papers meet the scientific method (ie. the results are reproducible). They are not testing the validity of the scientific method itself (myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question).

          Epistemology [wikipedia.org] would have something to say about your sight. Yes, the scientific method can be evaluated and tested. Anyway, if that's the point you say the GP is making, then it's a really stupid and pedantic one. The scientific method isn't just an abstract concept, it's also a means to an end. If we find that many people are making the same mistakes in the process itself, then yes, it is a statement about the scientific method.

          It's like saying a lake is a body of water and ignoring the fact that you

          • No, I have not studied philosophy with any serious effort, but if the scientific method cannot be used to prove itself, then perhaps it is lacking?

            And I don't reject the 'scientific method'. I think it is self-evident and self-revealing.

            The real question of these papers is not can the results be shown to be reproducible, but did the papers sufficiently disclose the methods? And if not, why not? there is, naturally, the possibility that the results were not even reproducible by the authors, and so they eit

            • No, I have not studied philosophy with any serious effort, but if the scientific method cannot be used to prove itself, then perhaps it is lacking?

              It's more a matter of logic.

              Assume, for the sake of argument, that the scientific method is flawed. Trying to test the scientific method could produce myriad results. Tests could show that it always fails. Tests could give random results. Or tests could correlate with some random thing - imagine if the scientific method only worked when a waxing gibbous moon is ascendant in Leo. Or, and this case is significant, tests could always show that it does work (but it then fails when you try to test other theories).

              • Lines that intersect are not, by definition, parallel. Lines either intersect or they do not. Lines that do not intersect are parallel.

                We define non-intersecting lines as parallel. We don't prove they are parallel, we describe them as such.

                It has been a long time since I've employed geometry. How that relates to proving the Scientific Method is beyond me.

      • This speaks to the failings of the participants' implementation of the scientific method, not to a failing of The Scientific Method.

        We have discussed here before the problem of too much content being generated and not enough people to peer review it all. Still not a failing of The Scientific Method.

        And I prefer my ignorance in courier font.

        • by __aaltlg1547 ( 2541114 ) on Tuesday January 22, 2013 @01:32AM (#42654225)
          No, it means that the original experimenters didn't describe their experiment correctly. Or worse, may have never done it at all...
          • Not describing the experiment correctly will not bother their corporate sponsors, who will either employ them directly or get the full description upon payment. Never doing it at all? No problem if they get more grants to build upon the nonexistent foundation they fabricated - unless they change direction with every grant and never evolve their previous work, which is nice work if you can get it.

            • Not describing the experiment correctly will not bother their corporate sponsors, who will either employ them directly or get the full description upon payment. Never doing it at all? No problem if they get more grants to build upon the nonexistent foundation they fabricated - unless they change direction with every grant and never evolve their previous work, which is nice work if you can get it.

              Honestly, I think the most probable reason is poor record taking and poor writing.

              • Sadly, you are quite possibly correct.

                So much for the scientific method. Even the experts suck at it.

                • Sadly, you are quite possibly correct.

                  So much for the scientific method. Even the experts suck at it.

                  Nobody said science was easy.

        • by mysidia ( 191772 )

          We have discussed here before the problem of too much content being generated and not enough people to peer review it all. Still not a failing of The Scientific Method.

          The article's headline text is a demonstration of that issue of too much content and not enough peer review :)

      • The bloggers are not testing the scientific method, they are testing methods that are scientific

        Putting your ignorance in boldface type is amusing. The most basic promise of the scientific method is that results can be replicated by anyone with the proper equipment repeatedly and reliably

        And I am sorry that you struggle so greatly to understand what I have written.

        They are testing the scientific method insofar as asking whether professional and peer-reviewed scientific work actually meets this basic test.

        Do you not understand the purpose of peer-review? If results that were peer-reviewed are not reproducible, that is not a failing of the scientific method itself, nor is it a failing of peer review. Peer review does not exist to validate methods as that would be quite nearly an impossible task for the majority of all scientific papers that are published currently unless the journal sent an editor to the lab that submitted said paper to rerun the work themselves - which would be so absurdly expensive that nobody would ever pay to publish. Peer review is intended to make sure that work published is scientifically rigorous and well written.

        Hell, if you go back and actually read my comment - I would say re-read but it does not appear you read it successfully for a first time yet - you will find that I did say this work is important. I also said that it is not testing the scientific method itself, which is correct.

        • Do you not understand the purpose of peer-review? If results that were peer-reviewed are not reproducible, that is not a failing of the scientific method itself, nor is it a failing of peer review. Peer review does not exist to validate methods as that would be quite nearly an impossible task for the majority of all scientific papers that are published currently unless the journal sent an editor to the lab that submitted said paper to rerun the work themselves - which would be so absurdly expensive that nobody would ever pay to publish. Peer review is intended to make sure that work published is scientifically rigorous and well written.

          Many of the published results and methods being verified are ones with questionable results - such as yields of product that seem too high, or reactions that don't look as though they should work at all. Those are papers for which peer review has failed to provide adequate review. If the paper had truly been read in depth by other equally qualified scientists, these issues would have been noticed and the paper (published or not) would have been called into question.

          The caveat to this would be papers that are published with the sole purpose of seeking peer review and inviting others to validate the results, for example many of the cold fusion papers and the experiment which implied neutrinos were traveling faster than light.

          • Do you not understand the purpose of peer-review? If results that were peer-reviewed are not reproducible, that is not a failing of the scientific method itself, nor is it a failing of peer review. Peer review does not exist to validate methods as that would be quite nearly an impossible task for the majority of all scientific papers that are published currently unless the journal sent an editor to the lab that submitted said paper to rerun the work themselves - which would be so absurdly expensive that nobody would ever pay to publish. Peer review is intended to make sure that work published is scientifically rigorous and well written.

            Many of the published results and methods being verified are ones with questionable results - such as yields of product that seem too high, or reactions that don't look as though they should work at all. Those are papers for which peer review has failed to provide adequate review. If the paper had truly been read in depth by other equally qualified scientists, these issues would have been noticed and the paper (published or not) would have been called into question.

            The caveat to this would be papers that are published with the sole purpose of seeking peer review and inviting others to validate the results, for example many of the cold fusion papers and the experiment which implied neutrinos were traveling faster than light.

            I also recognize that peer review happens both before and after publication, and in fact the bloggers are part of the peer-review process.

            Also frankly, there's a ton of papers out there which do work but which omit crucial details of how they work. Cue 2 years of my life discovering that someone's "simple robust synthesis" can't possibly have worked the way they said it did, and gradually reverse engineering that they were doing something they didn't report in any papers which had an important effect (they didn't quite seal up the reaction vial, but didn't leave it open, so coupled with water and condensation they were setting up a highly u

      • That's not testing the scientific method. Testing the scientific method would involve some test as to whether the scientific method itself works. Determining if some published experiment is actually described in a way that is reproducible says nothing about whether the scientific method itself works or is useful.

        Asking whether professional and peer-reviewed scientific work is actually using the scientific method is also not asking whether the scientific method itself works or is useful.

        I agree it's a useful

    • they are testing methods that are scientific.

      If a lot of those experiments can't be reproduced, then those methods weren't scientific to begin with.

      • by flink ( 18449 ) on Monday January 21, 2013 @09:59PM (#42653097)

        Right, but they are utilizing the scientific method to test the quality of published papers, not attempting to verify the utility of the scientific method itself.

        The headline should read "Bloggers apply scientific method to validate published findings".

        • by phantomfive ( 622387 ) on Monday January 21, 2013 @10:06PM (#42653137) Journal

          "Bloggers apply scientific method to validate published findings".

          A much better headline.

        • by jedidiah ( 1196 )

          ...or alternately: The Half Blood Prince tweaks the results.

        • Could as well say "Bloggers do what tens of thousands of grad students do every day."
          • by jo_ham ( 604554 )

            Could as well say "Bloggers do what tens of thousands of grad students do every day."

            That's about the size of it. The only difference is that they're blogging their progress, rather than simply trying to get an intermediate step to work because they want to make something on the route to what they're really studying and just need a decent prep for it.

            I think the general rule of thumb is "subtract at least 15% from the expected yield and assume at least one crucial step has been inadequately documented" and "ignore esoteric and exotic purification methods and just recrystallise it".

      • they are testing methods that are scientific.

        If a lot of those experiments can't be reproduced, then those methods weren't scientific to begin with.

        Not necessarily. Sure, irreproducible results can be the result of shoddy, missing, or fabricated work, and unfortunately often are. There are also times when those results can come about through no fault of the experimenter. If the scientist was running an overnight synthesis and was not notified that the HVAC failed for two hours starting at 3am, is that his fault? Sure, he should check afterwards to make sure that the environmental conditions were properly controlled, but that isn't always immediately apparent.

        • A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results. If you can't repeat your results you can't possibly give others instructions on how they can repeat them. Not knowing that the HVAC failed for a couple hours during one run out of a dozen should result in outlier results that can be investigated or discarded.
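
          A minimal Python sketch of the kind of check described above (the yield numbers and the cutoff are made up for illustration): flag an anomalous run, such as the HVAC-failure night, for investigation rather than silently averaging it in or discarding it.

              from statistics import median

              def flag_outliers(yields, cutoff=3.5):
                  # Modified z-score based on the median absolute deviation (MAD),
                  # which copes with small samples better than mean/stddev.
                  med = median(yields)
                  mad = median(abs(y - med) for y in yields) or 1e-9  # avoid divide-by-zero
                  flagged = [y for y in yields if 0.6745 * abs(y - med) / mad > cutoff]
                  kept = [y for y in yields if y not in flagged]
                  return kept, flagged

              # Eleven ordinary runs plus the run from the night the HVAC failed (all made up):
              runs = [78, 81, 79, 80, 77, 82, 79, 78, 80, 81, 79, 41]
              kept, flagged = flag_outliers(runs)
              print("report:", kept, "investigate:", flagged)  # the 41% run gets flagged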

          • A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results. If you can't repeat your results you can't possibly give others instructions on how they can repeat them. Not knowing that the HVAC failed for a couple hours during one run out of a dozen should result in outlier results that can be investigated or discarded.

            Unfortunately, sometimes the experiments do get run multiple times and the data that didn't fit the expected results was thrown out and not included in the final data. I've seen far too many scientists who automatically throw out the highest and lowest samples and only average the data that grouped nicely, without making much effort to find out why some samples deviated.

          • by paiute ( 550198 )

            A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results.

            No time. Many if not most synthetic methodology papers will test a new reaction on a range of homologous compounds, say 20. Few of those are repeated twice, let alone multiple times. A lot of this work comes out of graduate school groups where the emphasis is on publishing rapidly. There is one Nobel Prize winner in particular whose publications in Tetrahedron Letters (usually a two page paper) are notorious for sacrificing accuracy for rapidity.

            • by sFurbo ( 1361249 )
              I thought Tetrahedron Letters in general was notorious for allowing very low quality articles to be printed. There must be a reason my professor in organic chemistry insisted on calling it "tetrahedron liars".
              • by paiute ( 550198 )

                I thought Tetrahedron Letters in general was notorious for allowing very low quality articles to be printed. There must be a reason my professor in organic chemistry insisted on calling it "tetrahedron liars".

                Tet Lett is for rapid publications. The methods are not always fully worked out and there is no detailed experimental. The function of Tet Lett is to give the chemical world a heads-up about a new way of doing something. Where I expect a paper from the Journal of Organic Chemistry to be detailed, I don't expect the same of Tet Lett.

          • by vlm ( 69642 ) on Tuesday January 22, 2013 @09:05AM (#42655871)

            A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results.

            Won't help. I studied to be a chemist (admittedly a long time ago) and by far the biggest non-ethical problem out there is contamination.

            So it turns out that the peculiar reaction you're studying is iron catalyzed, in fact it's incredibly sensitive to iron, but no one in the world knows that yet. And your reagents are contaminated. Or your glassware, which you thought was brand new and/or well cleaned, is contaminated. Or your lab is downwind of a hematite ore processor and the room dust is contaminated.

            Sure, you say, test everything. Well there isn't time/money for that, but for the sake of argument we'll assume there is. What if dust from the hematite ore processor is far larger than the filter paper pores in the filtration stage of your overall process? Test the reagents and product all you want but you'll never find iron anywhere except room dust (which you already knew about) and the debris in the filter paper (which you assume was contaminated by room dust AFTER removal from the apparatus).

            The most important thing is that this is the norm in chemistry, not an outlier. Chemistry is not math or CS; sometimes stuff just doesn't work, or just works, for no apparent reason. Unlike some technologies, detailed modeling of "why" and "how" often doesn't happen for years, decades, or centuries after the ChemEng team has been selling product / papers have been written.

            A very important lesson is analysis paralysis. So you live downwind of a hematite ore crusher. And you know it. And periodic tests of your lab show iron-enriched house dust. But you can't go around testing everything, because you're surrounded by millions of things to test for. You're a gardener, god only knows what's on your hands. Skin oil of certain blood types is a contaminant? Your breath has a tinge of ethanol in it from last night? Maybe it's your perfume / cologne / antiperspirant / nail polish? The point of discovery is that it's literally unknown... maybe wearing nitrile gloves instead of old-fashioned latex "does something" good or bad to the reaction.

            I think the main thing "slashdotter IT people" need to understand is that most chemistry and most chemical engineering runs at somewhat less than 6 sig figs. This is incomprehensible to IT people... if your T1 or whatever LAN had a BER worse than 1e-5 you'd call it in for repair... If you got one thousand read errors when you read a 1 gig DVD, you'd throw out that DVD. If your processor runs at 1500 MIPS then at a six sig fig error rate it would crash about 1500 times per second. The bad news is six sig figs is actually pretty good work for a chemistry lab. Certainly undergrads could never aspire to that level; it takes skill, experience and specialized equipment....
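
            For anyone who wants the arithmetic spelled out, a rough Python sketch of the comparison above, reading "six sig figs" as an error rate of one in a million and taking the DVD and MIPS figures at face value:

                # Back-of-the-envelope numbers for the comparison above (illustrative only).
                # "Six sig figs" is read here as an error rate of 1 in 10**6 operations.
                error_rate = 1e-6

                # Reading a 1 GB DVD at that error rate gives on the order of 1000 errors.
                dvd_bytes = 1_000_000_000
                print("errors per 1 GB read:", dvd_bytes * error_rate)  # ~1000.0

                # A 1500 MIPS processor executes 1.5e9 instructions per second, so a
                # one-in-a-million failure rate means roughly 1500 faults every second.
                instructions_per_second = 1500 * 1_000_000
                print("faults per second:", instructions_per_second * error_rate)  # ~1500.0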

            Please spare me the details of one peculiar quantitative analysis technique that in one weird anecdote measured once in tiny fractions of a ppt. The overall system cannot be "cleaner" than the filthiest link in the entire system. Is the hand soap in your lab spectrographically pure? The unused toilet paper in the bathroom? All of your hoods and benches and storage cabinets are in a verified and tested cleanroom environment? Seriously? The water cooler and lab fridge also only hold spectrographically pure substances? Please no anecdotes.

    • Right, I tagged the story "crapheadline" as soon as I RTFS.

    • It might be epic (Score:5, Interesting)

      by Okian Warrior ( 537106 ) on Monday January 21, 2013 @10:33PM (#42653301) Homepage Journal

      The bloggers are not testing the scientific method, they are testing methods that are scientific. Those are two vastly different concepts. Their work is important, but not epic.

      I'm not so sure about that.

      We believe in a scientific method founded on observation and reproducible results, but for a great number of papers the results are not reproduced.

      Taking soft sciences into consideration (psychology, social sciences, medicine), most papers hinge on a 95% confidence level [explainxkcd.com]. This means that, even in the best case, about 1 out of every 20 positive results arises from chance alone, and no one bothers to check.

      Recent reports tell us depression meds are no better than chance [www.cwcc.us] and scientists can only replicate 11% of cancer studies [nbcnews.com], so perhaps the ratio is higher than 1 in 20. And no one bothers to check.
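
      The "1 in 20" figure is really a best case. A purely illustrative Python sketch (the base rates and the 0.8 statistical power are invented for the example) of how the share of chance findings among published positives grows when only a small fraction of tested hypotheses are true:

          # Illustrative only: how a p < 0.05 threshold interacts with the base rate
          # of true hypotheses. Base rates and power are invented for the example.
          def false_positive_share(base_rate, power=0.8, alpha=0.05):
              # Fraction of "significant" findings that are actually false positives.
              true_hits = base_rate * power          # real effects correctly detected
              false_hits = (1 - base_rate) * alpha   # null effects slipping past p < 0.05
              return false_hits / (true_hits + false_hits)

          for base_rate in (0.5, 0.2, 0.05):
              share = false_positive_share(base_rate)
              print(f"{base_rate:.0%} of tested hypotheses true -> {share:.0%} of positives are chance")
          # 50% -> ~6%, 20% -> 20%, 5% -> ~54%: the lower the base rate, the further
          # past "1 in 20" the share of chance findings climbs.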

      I've read many follow-on studies in behavioral psychology where the researchers didn't bother to check the original results, and it all seems 'kinda fishy to me. Perhaps wide swaths of behavioral psychology have no foundation; or not, we can't really tell because the studies haven't been reproduced.

      And finally, each of us has an "ontology" (ie - a representation of knowledge) which is used to convey information. If I tell you a recipe, I'm actually calling out bits of your ontology by name: add 3 cups of flour, mix, bake at 400 degrees, &c.

      This assumes that your ontology is the same as mine, or similar enough that the differences are not relevant. If I say "mix", I assume that your mental image of "mix" is the same as mine. ...but people screw up recipes, don't understand assembly instructions, and are confused by small nuanced differences in documentation.

      Does this happen in chemistry?

      (Ignoring the view that reactions can depend on aspects that the researchers were unaware of, or didn't think were relevant. One researcher told me that one of her assistants could always make the reaction work but no one else could. Turns out that the assistant didn't rinse the glassware very well after washing, leaving behind a tiny bit of soap.)

      It's good that people are reproducing studies. Undergrads and post-grads should reproduce results as part of their training, and successful attempts should be published - if only as a footnote to the original paper ("this result was reproduced by the following 5 teams..."). It's good practice for them, it will hold the original research to a higher standard, and eliminate the 1 out of 20 irreproducible results.

      Also, reproducing the results might add insight into descriptive weaknesses, and might inform better descriptions. Perhaps results should be kept "Wikipedia" style, where people can annotate and comment on the descriptions for better clarity.

      But then again, that's a lot of work. What was the goal, again?

      • I have mod points, but stupid Slashdot isn't showing me the moderation option on posts. So I will snark instead.

        But then again, that's a lot of work. What was the goal, again?

        Uhm. Publish or perish, I think it was... Full speed ahead and damn the torpedoes. Verification studies don't count on your CV.

        Good idea to harness the slave labor though. That's what grad students are for.

        And how many grad students will actually be willing to do this verification work? None, who can think politically enough to stay in academia. What are you asking them to do? Verify pu

      • Re:It might be epic (Score:4, Interesting)

        by Skippy_kangaroo ( 850507 ) on Tuesday January 22, 2013 @07:53AM (#42655547)

        Many commenters seem to be confusing some idealised thing called the Scientific Method with what is actually practiced in the real world when they claim that the scientific method is not being tested. Damn straight this is testing the scientific method - warts and all. If the scientific method as practiced in the real world does not deliver papers that provide sufficient detail to be reproduced then it is not working properly and is broken. In fact, I'm sure that most people who actually publish papers will acknowledge that the peer review process is broken and does not approach this idealised notion of what peer review is in many people's minds. If peer review is broken then a critical element of the scientific method is also broken.

        Reproducing results is hard every time I have done it - and that includes times when I have had access to the exact data used by the authors. (And sometimes even the exact code used by the authors - because I was using a later version of the statistical package and results were not consistent between versions.)
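
        Version drift of that kind is easier to diagnose when the environment is recorded alongside the results. A minimal Python sketch of such a stamp; the library list and seed are arbitrary examples, not anything from the papers discussed here:

            import json
            import platform
            import random
            import sys
            from importlib import metadata

            def environment_stamp(libraries=("numpy", "scipy"), seed=12345):
                # Fix the seed so any stochastic steps are repeatable later.
                random.seed(seed)
                versions = {}
                for name in libraries:
                    try:
                        versions[name] = metadata.version(name)
                    except metadata.PackageNotFoundError:
                        versions[name] = "not installed"
                return {
                    "python": sys.version.split()[0],
                    "platform": platform.platform(),
                    "seed": seed,
                    "libraries": versions,
                }

            # Write this next to every results file so a failed replication can at
            # least rule environment mismatches in or out.
            print(json.dumps(environment_stamp(), indent=2))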

        So, if people want to claim that the Scientific Method is perfect and this is not a test of it - it would be interesting if they could point to anywhere this idealised Scientific Method is actually practiced (as opposed to the flawed implementation that seems to be practiced in pretty much any field I have ever become acquainted with).

      • by jo_ham ( 604554 )

        And finally, each of us has an "ontology" (ie - a representation of knowledge) which is used to convey information. If I tell you a recipe, I'm actually calling out bits of your ontology by name: add 3 cups of flour, mix, bake at 400 degrees, &c.

        This assumes that your ontology is the same as mine, or similar enough that the differences are not relevant. If I say "mix", I assume that your mental image of "mix" is the same as mine. ...but people screw up recipes, don't understand assembly instructions, and are confused by small nuanced differences in documentation.

        Does this happen in chemistry?

        It absolutely happens - chemistry is like cooking in that respect. Especially some of the trickier synthetic chemistry where your technique really matters. Chemists of different skill levels are going to get vastly different results from following the same prep.

        What these guys are doing is no different to what grad students do all the time - papers are a great source of preps for compounds that you want to use as building blocks for other things you want to do if your starting materials are either not readi

      • It's good that people are reproducing studies. Undergrads and post-grads should reproduce results as part of their training, and successful attempts should be published

        Dang, you were doing so well too. How about we publish both the successes and the failures, rather than just the successes. Just publishing the successes is how we get stuff like the 11% reproducibility rates you talked about earlier. So a footnote in the original paper: "this procedure was replicated by 23 teams, those results supported the results of this paper in only 3 cases."

    • Since testing IS the scientific method, if there is something wrong with the scientific method, this test will fail!

  • by Anonymous Coward on Monday January 21, 2013 @09:23PM (#42652883)

    If you try to repeat an experiment and fail then it is almost impossible to get published. Failed experiments, though critical for advancing science, aren't sexy and editors prefer their journals to be full of positives. So scientists don't even bother trying anymore. This is a problem in medicine and probably all sciences. There is a movement in medicine to report all trials [alltrials.net] so they can be found by researchers doing meta-studies.

    • If you try to repeat an experiment and fail then it is almost impossible to get published

      Tell that to Fleischmann and Pons.

    • by WrongMonkey ( 1027334 ) on Monday January 21, 2013 @09:33PM (#42652941)
      As a research chemist, I've published a couple of papers that were motivated because I didn't believe a paper's results to be true. The trick to get it past reviewers is to not only prove that they are wrong, but to come up with an alternative that is demonstrably superior.
      • That doesn't sound like a very good system.

        Watching scientists work should be like hearing one guy say, "Hey, check this out!" and another guy saying, "wow, that's cool, I can't get it." and another saying, "oh, you need to do it like this!"

        If someone can't say, "that doesn't work like you said it did," then scientific progress is going to be hampered.
        • That doesn't sound like a very good system.

          It's one that works, and is another part of the scientific method, albeit a step along from establishing reproducibility of an experiment. If prior research appears to set a hypothesis, then that should be tested - i.e. someone should make an attempt to knock it down by whatever means are available. You never really "prove" something to be the case, but if nothing can be done to disprove it, that's pretty nearly as good.

          • It's not a problem with the scientific method, it's a problem of communicating results. Clearly it isn't working optimally if this article is correct. If people can only tell each other when something works, but can't discuss when things don't work, then the communication channels are severely broken.
            • It's not a problem with the scientific method, it's a problem of communicating results. Clearly it isn't working optimally if this article is correct. If people can only tell each other when something works, but can't discuss when things don't work, then the communication channels are severely broken.

              This is actually a huge problem.

              Science really needs something like Freenode for different disciplines. A lot of the time you simply want to be able to talk to some of the people dealing with the work in the original group, since they're likely to be able to give you the small details which turn out to matter.

        • by jo_ham ( 604554 )

          That doesn't sound like a very good system.

          Watching scientists work should be like hearing one guy say, "Hey, check this out!" and another guy saying, "wow, that's cool, I can't get it." and another saying, "oh, you need to do it like this!"

          If someone can't say, "that doesn't work like you said it did," then scientific progress is going to be hampered.

          It's how it works though. Usually it's "hey look at this, and this happens because of x reason" and you come back with "I don't think that works the way you think because of y systematic error or z error in your technique". I've personally done this before with published papers, and it doesn't necessarily invalidate them so much as add more information to the table.

  • They got explosions at least?
  • Okay, how is this NOT common knowledge? Of course chemistry is hard, of course it's hard to replicate results--if it weren't, why in the hell do you think it took this long in the first place? If chemistry were easy, there'd be hundreds of reports of planes downed by terrorists, instead of the 3-4 ATTEMPTED explosions there have been. Chemistry is HARD, good luck doing it with limited ingredients and improvised equipment.
    • Could this be why the general public doesn't trust science and instead relies on ancient mystical texts to make sense of the world they live in? Maybe a push to show the "hoi polloi" are perfectly capable of replicating the results researchers have observed would advance the cause of science?

      • Maybe a push to show the "hoi polloi" are perfectly capable of replicating the results researchers have observed would advance the cause of science?

        How many of the "hoi polloi" would have any idea what the paper is even trying to demonstrate? Or how to test it?

      • by jo_ham ( 604554 )

        Could this be why the general public doesn't trust science and instead relies on ancient mystical texts to make sense of the world they live in? Maybe a push to show the "hoi polloi" are perfectly capable of replicating the results researchers have observed would advance the cause of science?

        Not really. Also, it wouldn't matter - being a skilled synthetic chemist is a lot more than simply following the instructions, no matter how well they are written out.

        Perhaps you could make it easy enough for an undergrad with some reasonable lab experience to do it, but the problem with public perception of science is not optimistic yield reports and poor experimental write up in research papers.

  • by WrongMonkey ( 1027334 ) on Monday January 21, 2013 @09:41PM (#42652979)
    It's not a secret that about half of published synthesis methods are garbage and yield values are wildly creative. Reviewers don't have the means to verify these, so anything that seems plausible gets published. Then researchers are left to sort out the best methods based on which ones get the most citations.
    • It's not a secret that about half of published synthesis methods are garbage and yield values are wildly creative. Reviewers don't have the means to verify these, so anything that seems plausible gets published. Then researchers are left to sort out the best methods based on which ones get the most citations.

      It's not just synthesis methods. I remember taking a graduate control theory class in which the final project was for the class to replicate the results of a paper with a particular control algorithm. It just...wouldn't...work. Not a single person managed to replicate the results, which simply led to the inevitable conclusion they were fudged.

      I'm not and would never defend anyone who publishes any data that has been tampered with, but I still find it annoying that we've set up a system where there's an i

  • calling out BS, exaggerated, wrongly calibrated, and/or embellished results. This makes perfect sense to me. If you're publishing a paper on a subject, then it should be a repeatable recipe.

  • Are they testing the tried and true scientific method that *real* scientists used for centuries to arrive at the cumulative knowledge of mankind, or are they testing the modern scientific method that involves drawing a conclusion and then trying to find data that fits your model, discarding any data that doesn't?

  • It's the major that people who can't handle physics switch to.

    (I kid, I kid)

  • http://openscienceframework.org/project/EZcUj/wiki/home [openscienceframework.org]

    The Open Science Framework focuses on psychology instead of chemistry but I would imagine they could both utilize similar frameworks and methodologies. I'm not a chemist or a psychologist so there may be some major incompatibility I don't know about.
  • I suppose reviewers are supposed to catch these things but they're probably too busy with their own 'discoveries' to give more than a cursory glance. Hopefully the bloggers will get enough recognition from their efforts to spur others, especially if the trend towards publishing directly online and away from peer-reviewed journals continues.

  • Replicating published experiments is a worthwhile effort, and there should be more of it. However, it's already pretty well known that scientific papers have a high rate of irreproducible results, due to both fraud and error. If they fail to replicate particular experiments, it's not an indictment of the scientific method, but of the particular scientists.

    (Also, some experiments in the natural sciences are tricky to do and require experience to work, but they can still be replicated.)

  • by aussersterne ( 212916 ) on Monday January 21, 2013 @11:49PM (#42653697) Homepage

    Ethnomethodology is instructive (though still not very readable) here.

    The founding ethnomethodologist, Garfinkel, argued that much of science depends on practical assumptions and habits of which researchers are only vaguely aware, leading to the "loss" of the phenomenon.

    This is both good and bad. On the one hand, it means that a phenomenon is real, with real implications (useless theory and tautology are marked by the difficulty of losing the phenomenon). On the other hand, it means that what is said about the phenomenon is often missing the most critical bits of information, unbeknownst to the PIs themselves, because they are unaware of practical (embedded in the practice of) assumptions and habits. This makes it seem likely that many scientific truths are either solidified far later than they might otherwise have been, or incorrectly lost to falsification rather than pursued.

  • Another Fun Chemistry Blog: Things I Won't Work With http://pipeline.corante.com/archives/things_i_wont_work_with/ [corante.com]

    It's by Derek Lowe, a pharmaceutical chemist who blogs about chemical compounds that are way too dangerous. His position is that the closest you want to get to any of these things is reading about them. The closest I want to get is reading what he has to say about them.

    Take FOOF, aka F2O2, aka Dioxygen Difluoride. Lowe calls it "Satan's kimchi".

    The latest addition to the long list of chemical

  • by decora ( 1710862 ) on Tuesday January 22, 2013 @12:40AM (#42653987) Journal

    Many, many decades ago I went to a "top 5 STEM school" and enrolled in Chemistry. While all of the students had Very Expensive Machines to play with, they also had No Time To Do Careful Work.

    "Flubbing" of results was considered kind of, you know, ordinary, and if you didn't get what was expected, well, you were expected to just kind of ignore it.

    People like to harsh on the 'soft' subjects like history ... but I can assure you that many a history teacher would chew you up one side and down the other for saying things that are patently false and blatantly contradict the evidence - while many a Chemistry teacher would simply tell you "well, it should have worked. and you understand the basic ideas. so lets move on."

    I'm not sure what the issue is - if it's just too expensive to do experiments, or maybe the point is not to learn 'experimenting' but rather to learn 'theories', with some hands-on experimenting to give you a flavor of it?

    • by mysidia ( 191772 )

      "Flubbing" of results was considered kind of, you know, ordinary, and if you didn't get what was expected, well, you were expected to just kind of ignore it.

      See... research teams should have separation of duties, to avoid the temptation to cheat. The scientist does the experiment, someone else takes the measurements, someone else records the results and enters them into digital format, which quickly becomes immutable, digitally signed, and timestamped; with no one taking or recording measurements allowed t
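
      A toy Python sketch of the tamper-evident record-keeping described above, with invented field names and deliberately simplistic key handling (a real system would need proper key management and a trusted timestamping service): each entry is signed and chained to the previous one, so quietly rewriting a result later breaks the chain.

          import hashlib
          import hmac
          import json
          import time

          SIGNING_KEY = b"example-key-held-by-the-record-keeper"  # illustrative only

          def append_measurement(log, payload):
              # Each entry references the hash of the previous one, so deleting or
              # editing an earlier run silently is detectable.
              prev_hash = log[-1]["entry_hash"] if log else "genesis"
              body = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
              serialized = json.dumps(body, sort_keys=True).encode()
              entry = dict(body,
                           signature=hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest(),
                           entry_hash=hashlib.sha256(serialized).hexdigest())
              log.append(entry)
              return entry

          log = []
          append_measurement(log, {"run": 1, "yield_percent": 78.2})
          append_measurement(log, {"run": 2, "yield_percent": 41.0})  # the awkward run stays on record
          print(json.dumps(log, indent=2))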

    • That reminds me of (aeons ago) my 8th grade science project. While most of the other kids were testing "Which battery lasts the longest?" I decided to test the effect of humidity on the speed of sound. Seemed relevant to my 13 year old mind, I couldn't find a lot of information on it, and I had 3 possible outcomes: H0 was that higher relative humidity has no effect; H1 was that higher relative humidity made the speed of sound faster; H2 was that higher relative humidity made the speed of sound slower.

      The e
      • by vlm ( 69642 )

        Did he bust you for error bars / sig figs? How many sigma was your "signal" above the noise? "pointed toward" doesn't sound like a firm result, so I can see the old prof getting kinda wound up if you wrote a relatively strong conclusion in English language prose, but provided math showing a very weak conclusion. That's "TV commercial logic" not scientific logic. Still the guy was a bastard about it, because he didn't explain his (likely) reasoning to you, or at least he didn't do it very well.

        Narrowing

        • 8th grade. I honestly don't remember if we had the math to figure out things like confidence or error intervals at that point in our education. Probably should have done, but I have no idea. But the prof wasn't even in the range of "That's wrong, but within the error range given $Factor." It was simply, "I don't care what results you got, you are wrong."

          And looking back, the methodology wasn't great -- tube physics do funky things to sound, so measuring through the PVC wasn't a good thing; the 1 meter leng
    • Many, many decades ago I went to a "top 5 STEM school" and enrolled in Chemistry. While all of the students had Very Expensive Machines to play with, they also had No Time To Do Careful Work.

      "Flubbing" of results was considered kind of, you know, ordinary, and if you didn't get what was expected, well, you were expected to just kind of ignore it.

      People like to harsh on the 'soft' subjects like history ... but I can assure you that many a history teacher would chew you up one side and down the other for saying things that are patently false and blatantly contradict the evidence - while many a Chemistry teacher would simply tell you "well, it should have worked. and you understand the basic ideas. so lets move on."

      I'm not sure what the issue is - if it's just too expensive to do experiments, or maybe the point is not to learn 'experimenting' but rather to learn 'theories', with some hands-on experimenting to give you a flavor of it?

      Don't conflate education and research. If an experiment goes wrong in a classroom setting, then there simply isn't time/resources/patience to repeat it. For instance, when a student uses the wrong solvent and starts a fire, you grab the extinguisher, deal with the fire, and move on. Sometimes it is the student's fault, sometimes the professor's, other times the TA's. Some people are just not meant to do chemistry--they are afraid of chemicals and/or they just don't "get it." Other people are just unlucky. It sim

  • computer science too (Score:3, Interesting)

    by Anonymous Coward on Tuesday January 22, 2013 @03:11AM (#42654607)

    The sad truth is that this happens everywhere, including in computer science. I once implemented a software pipelining algorithm from a scientific paper, but the algorithm turned out to be broken and had to be fixed first. In this case, it probably wasn't malice either. But the requirements to get something published are different from getting it to actually run. Brevity and readability may get in the way of correctness, which may easily go unnoted.

  • Not reproducible (Score:3, Insightful)

    by jotajota1968 ( 2666561 ) on Tuesday January 22, 2013 @08:51AM (#42655821)
    I am a physicist. I do believe that a large percentage (let's say 50%) of scientific publications do not meet basic quality standards, let alone the scientific method. It depends also on the level of the journal. But even in the best journals you can find articles and results that cannot be reproduced. The pressure to publish is too strong. Anyway, 70% or more of the articles are only academic exercises, or do not have robust statistics, or do not receive more than one or two citations (including a couple of my own). Only the best of the articles (should) prevail with time (Darwinism).
  • Oooh, oooh, oooh I get to synthesize the MDMA!!! Rave anyone?
