The Journal of Serendipitous and Unexpected Results

SilverTooth writes "Often, when watching a science documentary or reading an article, it seems that the scientists were executing a well-laid-out plan that led to their discovery. Anyone familiar with the process of scientific discovery realizes that is a far cry from reality. Scientific discovery is fraught with false starts and blind alleys. As a result, labs accumulate vast amounts of valuable knowledge on what not to do and what does not work. Trouble is, this knowledge is not shared through the usual method of scientific communication: the peer-reviewed article. It remains within the lab, or at most is shared informally among close colleagues. As it stands, the scientific culture discourages sharing negative results. Byte Size Biology reports on a forthcoming journal whose aim is to change this: the Journal of Serendipitous and Unexpected Results. Hopefully, scientists will be able to better share and learn from each other's experience and mistakes."


This discussion has been archived. No new comments can be posted.

  • A great idea (Score:5, Interesting)

    by al0ha ( 1262684 ) on Thursday February 04, 2010 @01:29AM (#31019164) Journal
    but the obstacles are immense. Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.
  • Re:A great idea (Score:5, Interesting)

    by Cassius Corodes ( 1084513 ) on Thursday February 04, 2010 @01:47AM (#31019248)
    I've published a paper with negative results before - there is no great pressure against it - and sometimes failing to re-create claimed results is big news. Perhaps the reason why people think negative results are not published as often is because you don't write "why my study was a big fat failure" - you report on the results you did get - why they are not conclusive / their limitations and what you think future researchers can do to improve on it. I.e. you turn what is ostensibly a failure into a win for science (not to mention a paper for you). I have read many such papers - so they are hardly uncommon.
  • Re:A great idea (Score:4, Interesting)

    by T Murphy ( 1054674 ) on Thursday February 04, 2010 @01:51AM (#31019264) Journal
    Given failed results wouldn't need as much verification, it may be possible for researchers to submit under pseudonyms to avoid embarrassment, and I should think not all researchers are so full of themselves to fear helping others. I agree we won't see the best stories reach this journal, but if nothing else it will be a good way for the honest, cooperative researchers to know they aren't alone.
  • Re:A great idea (Score:5, Interesting)

    by electrons_are_brave ( 1344423 ) on Thursday February 04, 2010 @01:55AM (#31019290)
    I agree - I have on occasion partially replicated previous research and failed to find anything significant. In psychology, at least, this is needed because so many people claim significant results on relatively small correlations (i.e. many psychs are bad at stats).

    Repeating the study on a different population and failing to find a significant result can also show that the results don't generalise to that population.
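The point about small correlations being claimed as significant comes down to sample size: with enough participants, even a tiny effect clears the p < .05 bar while explaining almost no variance. A minimal sketch of this (the function name is mine, and it uses the large-sample normal approximation rather than the exact t distribution, which is fine for n in the hundreds):

```python
import math

def corr_significance(r, n):
    """Approximate two-sided p-value for a Pearson correlation r
    observed in a sample of size n, via the standard t statistic
    t = r * sqrt((n - 2) / (1 - r^2)) and a normal approximation
    to the t distribution (adequate for large n)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    # Two-sided tail probability under the standard normal
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# r = 0.1 explains only 1% of the variance, yet with n = 1000
# it comes out "statistically significant" (p well under .05);
# the same r with n = 50 does not come close.
print(corr_significance(0.1, 1000))
print(corr_significance(0.1, 50))
```

This is why a failed replication on a different or smaller sample carries real information: the original "significant" result may reflect the sample size more than the effect.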

  • Re:A great idea (Score:5, Interesting)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Thursday February 04, 2010 @02:04AM (#31019326)

    I agree (my first paper was a negative-results paper), but I think there are some kinds of negative results that are relatively hard to get published. Papers along the lines of: "here's an approach you might have thought would work, but it turned out that it didn't, and in retrospect we can see why, which this paper will explain". If you try to submit a paper like that, you often get push-back of, "oh well, yeah it's obvious why that wouldn't work, dunno why you didn't see it earlier". And of course it often is obvious once you've read why it doesn't work.

    As you point out, it's quite a bit easier to get negative results published if someone else had already claimed them as positive results. In that case, you're not both proposing and shooting down the idea simultaneously, but shooting down (or failing to confirm) someone else's idea, which has the advantages that: 1) you have evidence that at least one presumably smart person really didn't think it was obviously a bad idea (in fact, they thought it was a good one, and even that it worked); and 2) you're positioned as correcting an error in the literature, rather than as introducing a correction for a hypothetical error nobody has yet made.

    It's a bit tricky to fix, because some negative results really are obvious: it does nobody in the field any good to publish "we tried X on Y, and it didn't work", if genuinely nobody who was competent in the field would've thought X would work on Y, and the reason was exactly the reason you discovered.

    Incidentally, here's [uottawa.ca] one previous attempt to start such a journal that didn't really get off the ground. Their one published article, which is quite good, is of the form I mention: the authors of a system called Swordfish recounted an idea they had for an improvement, Swordfish2, that in the end turned out to be no better than the original Swordfish. It was hard to get published elsewhere, because it wasn't correcting an existing result (nobody had previously claimed that what they tried in Swordfish2 was a good idea), but it's interesting (to me, at least) because it really does seem like a plausible idea, and I feel I learned something in reading why it didn't work.

  • Re:Fantastic idea (Score:3, Interesting)

    by complete loony ( 663508 ) <Jeremy@Lakeman.gmail@com> on Thursday February 04, 2010 @02:38AM (#31019454)

    using statistical analysis developed by economists

    Funny, given recent events I would be more worried about the economists' models.

  • Re:A great idea (Score:4, Interesting)

    by irp ( 260932 ) on Thursday February 04, 2010 @03:09AM (#31019558)

    In my experience it has nothing to do with egos or competition.

    But it is damn hard to publish something that doesn't work!

    I was recently involved in developing a microfluidic system for diagnostics. Every milestone and sub-problem was solved. But when the final injection-molded devices were tested, they failed due to an interesting, non-obvious combination of factors. There were two issues with publishing this: the problems were very specific to our system, and the conclusion could be written in 5 lines of text.

    It would have been like a movie with a huge setup, but within the first 3 minutes the hero stumbles, breaks his neck, and dies. End credits. It was an EU-funded research project; no more money, no more time. You can't get funding to continue a failed project. End of story.

    But my point is, in all my experience as a scientist, I've never seen one of my colleagues say "we should hide this", but I've often heard "I would like to tell about this, but I don't know of a paper that would accept it".

    Also, when something fails we need to carry on, but now we're behind schedule...

  • Re:A great idea (Score:3, Interesting)

    by thrawn_aj ( 1073100 ) on Thursday February 04, 2010 @03:30AM (#31019630)
    3 wonderfully candid and informative posts in a row. It's a pity that won't stop the idiots crying "OMG conspiracy of silence by egotistical scientists!" :P
  • by Yvanhoe ( 564877 ) on Thursday February 04, 2010 @06:03AM (#31020258) Journal
    I can't help but remember Sony's founder explaining how they were looking for ways of making efficient small transistors with various materials, and that they had learned from Bell Labs that silicon gave very poor results, so they spent minimal resources on it.

    I also can't help wondering whether this is a good use of peer review, which I've heard is already in short supply.
  • by davros-too ( 987732 ) on Thursday February 04, 2010 @06:10AM (#31020312) Homepage
    Not all knowledge is in formal publications; a heck of a lot of information that falls short of the publication threshold is shared at conferences and through informal communication. While rivalries can sometimes reduce communication, a lot of information is shared between colleagues.

    In addition, there is often a lot of benefit in working things out for yourself: this provides the in-depth understanding on which to base deeper work, which can be lacking if you merely follow instructions...
