
Bloggers Put Scientific Method To the Test

ananyo writes "Scrounging chemicals and equipment in their spare time, a team of chemistry bloggers is trying to replicate published protocols for making molecules. The researchers want to check how easy it is to repeat the recipes that scientists report in papers — and are inviting fellow chemists to join them. Blogger See Arr Oh, chemistry graduate student Matt Katcher from Princeton, New Jersey, and two bloggers called Organometallica and BRSM, have together launched Blog Syn, in which they report their progress online. Among the frustrations that led the team to set up Blog Syn are claims that reactions yield products in greater amounts than seems reasonable, and scanty detail about specific conditions in which to run reactions. In some cases, reactions are reported which seem too good to be true — such as a 2009 paper which was corrected within 24 hours by web-savvy chemists live-blogging the experiment; an episode which partially inspired Blog Syn. According to chemist Peter Scott of the University of Warwick in Coventry, UK, synthetic chemists spend most of their time getting published reactions to work. 'That is the elephant in the room of synthetic chemistry.'"
  • by WrongMonkey ( 1027334 ) on Monday January 21, 2013 @09:33PM (#42652941)
    As a research chemist, I've published a couple of papers that were motivated because I didn't believe a paper's results to be true. The trick to get it past reviewers is to not only prove that they are wrong, but to come up with an alternative that is demonstrably superior.
  • by girlintraining ( 1395911 ) on Monday January 21, 2013 @09:37PM (#42652963)

    The bloggers are not testing the scientific method, they are testing methods that are scientific.

    Putting your ignorance in boldface type is amusing. The most basic promise of the scientific method is that results can be replicated by anyone with the proper equipment repeatedly and reliably. This is accomplished by describing an experiment to the detail level necessary to reproduce the result. If the result cannot be reproduced from a description of the experiment, it has failed this test.

    They are testing the scientific method insofar as they are asking whether professional, peer-reviewed scientific work actually meets this basic test. And in many cases it doesn't. Science only works if it is built on a firm foundation: their work isn't just important, it's critical. It may not be fun, but explorations like this prevent us from assembling a body of knowledge and understanding based on flawed experiments... and that has happened many times in the history of science, especially in medicine.

  • It might be epic (Score:5, Interesting)

    by Okian Warrior ( 537106 ) on Monday January 21, 2013 @10:33PM (#42653301) Homepage Journal

    The bloggers are not testing the scientific method, they are testing methods that are scientific. Those are two vastly different concepts. Their work is important, but not epic.

    I'm not so sure about that.

    We believe in a scientific method founded on observation and reproducible results, but for a great number of papers the results are not reproduced.

    Taking the soft sciences into consideration (psychology, the social sciences, medicine), most papers hinge on a 95% confidence level. That means that in experiments where there is no real effect, about 1 in 20 will still produce a statistically significant result purely by chance, and no one bothers to check.

    Recent reports tell us depression meds are no better than chance and scientists can only replicate 11% of cancer studies, so perhaps the ratio is higher than 1 in 20. And no one bothers to check.

    I've read many follow-on studies in behavioral psychology where the researchers didn't bother to check the original results, and it all seems kinda fishy to me. Perhaps wide swaths of behavioral psychology have no foundation; or not, we can't really tell, because the studies haven't been reproduced.

    And finally, each of us has an "ontology" (i.e., a representation of knowledge) which is used to convey information. If I tell you a recipe, I'm actually calling out bits of your ontology by name: add 3 cups of flour, mix, bake at 400 degrees, etc.

    This assumes that your ontology is the same as mine, or similar enough that the differences are not relevant. If I say "mix", I assume that your mental image of "mix" is the same as mine. But in practice, people screw up recipes, don't understand assembly instructions, and are confused by small nuanced differences in documentation.

    Does this happen in chemistry?

    (Ignoring the view that reactions can depend on aspects that the researchers were unaware of, or didn't think were relevant. One researcher told me that one of her assistants could always make the reaction work but no one else could. Turns out that the assistant didn't rinse the glassware very well after washing, leaving behind a tiny bit of soap.)

    It's good that people are reproducing studies. Undergrads and post-grads should reproduce results as part of their training, and successful attempts should be published - if only as a footnote to the original paper ("this result was reproduced by the following 5 teams..."). It's good practice for them, it will hold the original research to a higher standard, and it will help weed out the 1-in-20 results that arise by chance.

    Also, reproducing the results might add insight into descriptive weaknesses, and might inform better descriptions. Perhaps results should be kept "Wikipedia" style, where people can annotate and comment on the descriptions for better clarity.

    But then again, that's a lot of work. What was the goal, again?
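    The 1-in-20 figure quoted above is easy to sanity-check. The sketch below (a hypothetical simulation, not from the thread) runs many experiments in which the null hypothesis is true by construction (a fair coin) and counts how often a two-sided test at the 95% confidence level still flags a "significant" result:

```python
import random

random.seed(42)

def false_positive_rate(trials=10_000, flips=100, z_crit=1.96):
    """Fraction of null-true experiments that look 'significant'.

    Each experiment flips a fair coin `flips` times and applies a
    two-sided z-test of the observed heads count against p = 0.5.
    """
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        # Normal approximation: mean = n*p, sd = sqrt(n*p*(1-p))
        z = (heads - flips * 0.5) / (flips * 0.25) ** 0.5
        if abs(z) > z_crit:
            hits += 1
    return hits / trials

rate = false_positive_rate()
print(f"false-positive rate: {rate:.3f}")
```

    With these settings the printed rate comes out near 0.05: roughly 1 experiment in 20 "succeeds" even though there is nothing to find, which is the commenter's point about what accumulates when no one bothers to replicate.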

  • computer science too (Score:3, Interesting)

    by Anonymous Coward on Tuesday January 22, 2013 @03:11AM (#42654607)

    The sad truth is that this happens everywhere, including computer science. I once implemented a software pipelining algorithm based on a scientific paper, but once implemented, the algorithm turned out to be broken and had to be fixed first. In this case, it probably wasn't malice either: the requirements for getting something published are different from those for getting it to actually run. Brevity and readability may get in the way of correctness, which can easily go unnoticed.

  • Re:It might be epic (Score:4, Interesting)

    by Skippy_kangaroo ( 850507 ) on Tuesday January 22, 2013 @07:53AM (#42655547)

    Many commenters who claim that the scientific method is not being tested seem to be mistaking some idealised thing called the Scientific Method for what is actually practiced in the real world. Damn straight this is testing the scientific method - warts and all. If the scientific method as practiced in the real world does not deliver papers that provide sufficient detail to be reproduced, then it is not working properly and is broken.

    In fact, I'm sure that most people who actually publish papers will acknowledge that the peer review process does not approach the idealised notion of peer review that lives in many people's minds. If peer review is broken, then a critical element of the scientific method is broken too.

    Reproducing results is hard every time I have done it - and that includes times when I have had access to the exact data used by the authors. (And sometimes even the exact code used by the authors - because I was using a later version of the statistical package and results were not consistent between versions.)

    So, if people want to claim that the Scientific Method is perfect and this is not a test of it, it would be interesting if they could point to anywhere this idealised Scientific Method is actually practiced (as opposed to the flawed implementation that seems to be practiced in pretty much every field I have ever become acquainted with).
