The Journal of Serendipitous and Unexpected Results

SilverTooth writes "Often, when watching a science documentary or reading an article, it seems that the scientists were executing a well-laid-out plan that led to their discovery. Anyone familiar with the process of scientific discovery realizes that this is a far cry from reality. Scientific discovery is fraught with false starts and blind alleys. As a result, labs accumulate vast amounts of valuable knowledge on what not to do and what does not work. Trouble is, this knowledge is not shared using the usual method of scientific communication: the peer-reviewed article. It remains within the lab, or at most is shared informally among close colleagues. As it stands, the scientific culture discourages sharing negative results. Byte Size Biology reports on a forthcoming journal whose aim is to change this: the Journal of Serendipitous and Unexpected Results. Hopefully, scientists will be able to better share and learn from each other's experiences and mistakes."

  • So... (Score:5, Funny)

    by Biff Stu ( 654099 ) on Thursday February 04, 2010 @12:28AM (#31019158)

    If the LHC generates an Earth-eating black hole, will it be published here?

  • A great idea (Score:5, Interesting)

    by al0ha ( 1262684 ) on Thursday February 04, 2010 @12:29AM (#31019164) Journal
    but the obstacles are immense. Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.
    • Re:A great idea (Score:5, Interesting)

      by Cassius Corodes ( 1084513 ) on Thursday February 04, 2010 @12:47AM (#31019248)
      I've published a paper with negative results before - there is no great pressure against it - and sometimes failing to re-create claimed results is big news. Perhaps the reason why people think negative results are not published as often is because you don't write "why my study was a big fat failure" - you report on the results you did get - why they are not conclusive / their limitations and what you think future researchers can do to improve on it. I.e. you turn what is ostensibly a failure into a win for science (not to mention a paper for you). I have read many such papers - so they are hardly uncommon.
      • Re:A great idea (Score:5, Interesting)

        by electrons_are_brave ( 1344423 ) on Thursday February 04, 2010 @12:55AM (#31019290)
        I agree - I have on occasion partially replicated previous research and failed to find anything significant. In psychology, at least, this is needed because so many people claim significant results based on relatively small correlations (i.e. many psychs are bad at stats).

        Repeating the study on a different population and failing to find a significant result can also show that the results don't generalise to that population.
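        To make that stats point concrete, here's a minimal sketch (hypothetical numbers, and assuming Python with scipy is available) of how a weak correlation of r = 0.10 -- one that explains just 1% of the variance -- still comes out "statistically significant" once the sample gets large:

            # Two-tailed p-value for a Pearson correlation r over n samples,
            # using the standard t-statistic t = r * sqrt((n - 2) / (1 - r^2)).
            import math
            from scipy.stats import t as t_dist

            def pearson_p_value(r: float, n: int) -> float:
                t_stat = r * math.sqrt((n - 2) / (1 - r * r))
                return 2 * t_dist.sf(abs(t_stat), df=n - 2)

            for n in (30, 100, 1000):
                print(f"r=0.10, n={n}: p = {pearson_p_value(0.10, n):.3f}")
            # n=30 and n=100 come nowhere near p < 0.05; by n=1000 the
            # p-value drops below 0.01, even though the effect is just as tiny.

        Which is exactly why a "significant" result on a small correlation says little until someone tries to replicate it.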

      • Re:A great idea (Score:5, Interesting)

        by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Thursday February 04, 2010 @01:04AM (#31019326)

        I agree (my first paper was a negative-results paper), but I think there are some kinds of negative results that are relatively hard to get published. Papers along the lines of: "here's an approach you might have thought would work, but it turned out that it didn't, and in retrospect we can see why, which this paper will explain". If you try to submit a paper like that, you often get push-back of, "oh well, yeah it's obvious why that wouldn't work, dunno why you didn't see it earlier". And of course it often is obvious once you've read why it doesn't work.

        As you point out, it's quite a bit easier to get negative results published if someone else had already claimed them as positive results. In that case, you're not both proposing and shooting down the idea simultaneously, but shooting down (or failing to confirm) someone else's idea, which has the advantages that: 1) you have evidence that at least one presumably smart person really didn't think it was obviously a bad idea (in fact, they thought it was a good one, and even that it worked); and 2) you're positioned as correcting an error in the literature, rather than as introducing a correction for a hypothetical error nobody has yet made.

        It's a bit tricky to fix, because some negative results really are obvious: it does nobody in the field any good to publish "we tried X on Y, and it didn't work", if genuinely nobody who was competent in the field would've thought X would work on Y, and the reason was exactly the reason you discovered.

        Incidentally, here's [uottawa.ca] one previous attempt to start such a journal that didn't really get off the ground. Their one published article, which is quite good, is of the form I mention: the authors of a system called Swordfish recounted an idea they had to produce an improvement, Swordfish2, that in the end turned out to do no better than the original Swordfish. It was hard to get published elsewhere, because it wasn't correcting an existing result---nobody had previously proposed that what they tried in Swordfish2 was actually a good idea---but it's interesting (to me, at least) because it really does seem like a plausible idea, and I feel I learned something in reading why it didn't work.

        • Re: (Score:3, Interesting)

          by thrawn_aj ( 1073100 )
          3 wonderfully candid and informative posts in a row. It's a pity that that won't stop the idiots crying "OMG conspiracy of silence by egotistical scientists!" :P
          • Re: (Score:3, Funny)

            There had been 17 -- but some overlord deleted the others before anyone got a chance to see them. These three escaped censorship because they had already been seen.

            Go ahead -- prove me wrong!

          • by Rostin ( 691447 )

            Their stories aren't exactly what the article is talking about, imo. It sounds like they all did studies critical of existing positive results. They didn't just write a paper out of the blue that said, "We thought it would be interesting and important to show X. We tried, A, B, and C, but none of those things did what we expected. We still haven't shown X." The exception might be if A, B, and C are somehow exhaustive and so their failure to show X actually disproves X.

            I think part of the reason papers

            • by st0nes ( 1120305 )

              We thought it would be interesting and important to show X. We tried, A, B, and C, but none of those things did what we expected. We still haven't shown X

              I can see why journals might be hesitant to publish a paper like this. It's a little like that newspaper headline: "Fairly Small Earthquake in Andes--Not Very Many Killed"; not exactly an eyeball grabber. Science journals are also dependent on sales for advertising revenue, and those sales depend on papers showing positive results.

        • Not all knowledge is in formal publications; a heck of a lot of information that falls short of the publication threshold is shared at conferences and through informal communication. While rivalries can sometimes reduce communication, there is a lot of information shared between colleagues.

          In addition there is often a lot of benefit in working things out for yourself - this provides the in-depth understanding on which to base deeper work, which can be lacking if you're merely following instructions...

          • by Trepidity ( 597 )

            Yeah, I agree with that, though it's nice to have things recorded also. As far as informal communication goes, I'm increasingly finding blogs to be a good source for that. They're more informal (and timely) than journal or conference papers, but still written, sometimes at length, and usually remain online for a while. Not a permanent record, but more permanent than a chance hallway conversation at a conference.

        • As you point out, it's quite a bit easier to get negative results published if someone else had already claimed them as positive results.

          This one is my favorite:

          G. Hathaway, B. Cleveland, Y. Bao, Gravity modification experiment using a rotating superconducting disk and radio frequency fields, Physica C: Superconductivity, Volume 385, Issue 4, 1 April 2003, Pages 488-500, ISSN 0921-4534, DOI: 10.1016/S0921-4534(02)02284-0.

        • by radtea ( 464814 )

          Papers along the lines of: "here's an approach you might have thought would work, but it turned out that it didn't, and in retrospect we can see why, which this paper will explain".

          In experimental papers I usually try to have a section entitled (really) "Things that did not work so well" in which I mention approaches that seemed like good ideas but didn't work out. If I were an editor of an experimental journal I would make this part of the standard format, as any experiment that doesn't lead to the disc

      • In biology this is unfortunately not generally the case. Since I came from physics, this took just a little getting used to.
      • Re: (Score:3, Insightful)

        by Hognoxious ( 631665 )

        you report on the results you did get - why they are not conclusive / their limitations and what you think future researchers can do to improve on it.

        And why you should get funding to do a follow-up study.

      • by dpilot ( 134227 )

        I went to a University whose perhaps biggest contribution to science and mankind was a failure. Case Western Reserve University was where the Michelson-Morley interferometer experiments took place - the failure to find the aether that launched Special Relativity. Einstein once came there, "to see where it all started."

    • Re:A great idea (Score:4, Interesting)

      by T Murphy ( 1054674 ) on Thursday February 04, 2010 @12:51AM (#31019264) Journal
      Given failed results wouldn't need as much verification, it may be possible for researchers to submit under pseudonyms to avoid embarrassment, and I should think not all researchers are so full of themselves to fear helping others. I agree we won't see the best stories reach this journal, but if nothing else it will be a good way for the honest, cooperative researchers to know they aren't alone.
      • WTF? What “embarrassment”?

        Sorry, but real scientists don’t care if what happened was what they expected. (Because it’s always cool new knowledge. Most of the time, the unexpected results are way cooler anyway.)
        Or even whether they got new information about what they studied. (Because they still gained the knowledge that this method does not give any new information.)

        I really don’t get in what twisted mindset one can see that as bad. It boggles my mind...

    • Re: (Score:3, Insightful)

      by Jurily ( 900488 )

      Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.

      The funny thing is, discoveries are not "I told you!". They're "That's interesting...".

      • The problem is the implication that a lack of discovery is "That's not interesting...". There is certainly very strong evidence that publication bias [wikipedia.org] exists and is a tremendous problem in the academic world. While plenty of negative papers do get published, the vast majority of published papers have some positive results. As has been discussed already, when you get negative results and you want them published you write them up as if they were positive - there is a reason for this, and it's just how people wo

    • Re:A great idea (Score:4, Interesting)

      by irp ( 260932 ) on Thursday February 04, 2010 @02:09AM (#31019558)

      In my experience it has nothing to do with egos or competition.

      But it is damn hard to publish something that doesn't work!

      I was recently involved in developing a microfluidic system for diagnostics. Every milestone and sub-problem was solved. But when the final injection-molded devices were tested, they failed due to a sort of interesting, non-obvious combination of factors. There were two issues with publishing this: the problems were very specific to our system, and the conclusion could be written in 5 lines of text.

      It would have been like a movie with a huge setup, but within the first 3 minutes the hero stumbles, breaks his neck, and dies. End credits. It was an EU-funded research project; no more money, no more time. You can't get funding to continue a failed project. End of story.

      But my point is, in all my experience as a scientist, I've never seen one of my colleagues say "we should hide this", but I've often heard "I would like to tell about this, but I don't know of a paper that would accept it".

      Also when something fails we need to carry on, but now we're behind schedule...

      • Some of the most useful publications I've read were people admitting stuff doesn't work. In computer science, you never see these in peer reviewed journals, but you often see them on researchers' blogs. When you read a paper that says 'we tried this and it works' it often comes with the small print 'in these very narrow conditions that aren't applicable to any real-world cases'. When you see an idea that someone tried and didn't work, quite often you can see a way of changing it slightly so that it will
      • by bkr1_2k ( 237627 )

        But my point is, in all my experience as a scientist, I've never seen one of my colleagues say "we should hide this", but I've often heard "I would like to tell about this, but I don't know of a paper that would accept it".

        Isn't that the point of this journal? To be a place for exactly that type of publication?

    • Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.

      This must depend on the field if it's true anywhere. In biology, you'd have to be the world's biggest ass to act like you've never had an unexpected result. From my limited experience, if you suggest that -most- of your results are completely what you were expecting, I'd suspect you were lying. It seems like on average, every other research presentation I see, by heads of labs included, the presenter admits some of the most interesting data was not what they expected.

      The discovery of penicillin was a mon

    • This is exactly right.

      A complete failure to grasp three fundamental human behaviors:

      1. The tendency to rewrite history so that we don't look like idiots.
      2. The tendency to keep competitive advantages to ourselves.
      3. The tendency to reinvent stuff because of an NIH (Not Invented Here) mentality.
    • Wrong perspective.
      Why do you call it a mistake? What crazy idea is that, to call something good (gaining useful new information) “bad”?
      It’s not bad, so you don’t have to “admit” anything.

      That’s the great thing about science: The worst thing that can happen is that you don’t learn something new. (= 0)
      Everything else (= +x | -x) is a success.
      Who cares if it was expected.

      Frankly, I find the unexpected results to be far cooler than the expected ones. :)
      A scientis

  • This could be good (Score:2, Insightful)

    by Kitkoan ( 1719118 )
    No longer having to remake the broken wheel each time. Or it could lead to a bad side effect having a positive outcome, like Viagra and Zyban: both of these were not what was planned but had amazing results. Hell, penicillin saves millions and, if I remember right, was a total mistake at the beginning.
  • Fantastic idea (Score:5, Insightful)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Thursday February 04, 2010 @12:43AM (#31019230) Homepage

    Sometimes talking to people with very pro-science views, you get the idea that "science" is either an accumulated set of known facts or a perfect method which, because of peer review, is infallible at learning absolute truth.

    In reality, it's just a set of processes that we've developed and which have been generally more successful at producing helpful results than other methods. There's no reason to think that the way we go about it couldn't be improved. I can't imagine that failing to share the results of failed experiments doesn't sometimes result in the loss of important information.

    Coincidentally, I just saw this talk [ted.com], which raises the question of whether helpful data can be gathered even if it's not gathered through conventional rigorous scientific methods. It seems like an interesting idea-- they're essentially gathering lots of data from various sources and using statistical analysis developed by economists to try to draw conclusions. My biggest concern would be purposeful manipulation by someone with an agenda.

    But anyway, all of this is to say that this has gotten me thinking about how the scientific process may still be open to some innovation.

    • Re: (Score:3, Interesting)

      using statistical analysis developed by economists

      Funny, given recent events I would be more worried about the economists' models.

    • Re: (Score:3, Funny)

      "using statistical analysis developed by economists to try to draw conclusions"

      this sounds promising
      /deadpan

    • Re: (Score:3, Informative)

      by phantomfive ( 622387 )
      Stripped to its bare ideological minimum, science is nothing more than observation. You can extrapolate the implications of those observations, but in the end everything we know in science can be traced down to an observation. That is why intelligent design fails at being a science: while it technically might be true, it is not an observation, it is a guess. The FSM is not an observation, it is a (silly) guess.

      All the trappings of science, the double-blind experiments, the peer review, etc. are merely way
      • Re: (Score:3, Informative)

        by TapeCutter ( 624760 ) *
        "Stripped to its bare, ideological minimum, science is nothing more than observation."

        You went too far, you stripped off the meat. Science uses observation to find models that accurately predict new observations. The guts of the philosophy is that the utility of reliable predictive models is self evident.
        • I think I included that in my original post actually, when I talked about extrapolating the implications of those observations. Further observation will verify whether the extrapolations are correct or not.

          I think what you are saying is that if a model accurately predicts something, then the model is accurate; in reality all you know is that it accurately predicts something. The rest of the model may be completely in error, it hasn't been verified yet. A simple example of this is when Columbus presente
          • "I think what you are saying is that if a model accurately predicts something, then the model is accurate"

              Nope, I'm saying it's useful. The Earth-centric model lasted so long because it accurately predicted the motion of the moon and planets.
            • True, I cannot dispute that usefulness is a good measurement of usefulness.
              • You have a talent for over simplifying.
                • Oh, I'm not really trying to simplify, more trying to find common ground. I like looking at models as being derived from observation, and you like to look at them in a slightly different light. In practice, doing real science, I think it makes absolutely no difference whatsoever, but I think my way is pretty. :) Of course having a good model is useful, even if it is not entirely accurate, I can agree with that. And I think you can agree that observation is the 'bones' of science. The exact way they fit
                  • I can agree that observation is a necessary but insufficient foundation. I like the Asimov quote - "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but 'That's funny'"
      • Stripped to its bare ideological minimum, science is nothing more than observation. You can extrapolate the implications of those observations, but in the end everything we know in science can be traced down to an observation. That is why intelligent design fails at being a science: while it technically might be true, it is not an observation, it is a guess. The FSM is not an observation, it is a (silly) guess.

        All the trappings of science, the double-blind experiments, the peer review, etc. are merely ways to improve the accuracy of our observations. It is really beautiful, actually, to realize that for any fact in science you can say, "how do we know this?" and get back to the original observations that show it to be true. This is not something you can do with religion, or philosophy, or literary criticism.

        Without philosophy, observation is only historical cataloging; mere correlation. Recognition of causation requires something outside of observation: "silly" guesses. Even after doing hundreds of observations, we'll never know if any of the silly guesses are true, but we'll know which ones turn out false. But pure philosophy has one up on science: certain things can be proved true with mere thought experiment.
        Oblig. XKCD: http://xkcd.com/435/ [xkcd.com] But he forgot the logician on the far right telling the mathemat

        • Thought experiments are always based on observation. They are mental extrapolations of things thought to be true.
          • "I cannot be deceived into thinking that I don't exist, for by thinking, I must exist, else who does the thinking?" Cogito ergo sum. Yes, I suppose distinction between oneself and the rest of reality might be an observation, but probably not the variety you meant.
            • Yeah, I would say that an observation of one's self is very much an observation, although it is not a double-blind experiment, which would give you more accurate results; in some cases a double-blind experiment is not possible.

              In this particular case, I've always thought Descartes was making a rather large assumption, not an observation.....how does he know he is thinking and not someone else's dream?
    • >>But anyway, all of this is to say that this has gotten me thinking about how the scientific process may still be open to some innovation.

      The sad fact is, there's a lot of work being done under the name of science that isn't really science, or perhaps, science-lite. Do you think "Climate Scientists" have the ability to run scientific experiments? (Let's start with 1,000 earths, add 100ppm of CO2 to half of them, and measure the difference in average temperature across 100 years.) No, of course not. B

      • I think you're a little too picky about your definition of science.

        You don't need to do a double blind experiment for your observations to become science. Geology, astronomy, and evolutionary biology are definitely science even if we can't try out two different methods of creating granite, initiating the big bang, or applying selective pressures to dinosaurs. Likewise with climate science. You're dealt a problem with a long history and a series of facts. Now try to figure out what will happen if variabl

        • If you go with that definition, then economics and psychology are science.

          In other words, if you're going to admit anyone that uses logic, observation, and rigorous statistics (but no experimentation) into the club of scientific disciplines, then you either have to admit nearly every discipline at a university, or nearly none of them.

          As I said in my previous post, our current definitions are severely flawed. People who get a degree in Math at my university, get a Bachelor or Master of Arts, not science. But

  • Problem is (Score:5, Insightful)

    by JanneM ( 7445 ) on Thursday February 04, 2010 @12:53AM (#31019276) Homepage

    The problem with any change or reform of the publishing system is that publications are so important for the individual scientist. A paper isn't just a neat way to disseminate results. Papers are your work evaluation and your CV; they are keeping the score, as it were. Where you publish and how often you publish directly determines where - or if - you work another year or two down the road.

    And even a short paper takes a lot of time and effort to write. For an informal "don't do that; we tried and it didn't work" email to a colleague, you could just jot down three or four paragraphs after lunch. Make a paper out of it and you have weeks or more of work ahead of you - looking up other previously published reports on the same kind of experiment; doing your best to figure out and explain the exact causes; squaring your (lack of) results with the apparent success of other groups that did something similar; making neat, clear graphs and illustrations as needed; getting formal permission from your lab and your funding agency (and your co-authors' labs and funding sources) to actually publish the thing. Then revise and edit the paper multiple times after comments from your co-authors and reviewers.

    So, getting good publications is vital for your ability to make rent and buy food for your family. Writing publications takes a lot of time and effort - time that is pretty limited. So, even though the will to spread the word on a negative result may be there, chances are, writing it up will be relegated to the "when I've got a bit of spare time" pile, where it will likely sit until well after retirement.

  • by Anonymous Coward

    An article for the first issue? It covers all the questions.

    So are we talking:

    Technique X fails on problem Y.

    Suing all music downloaders fails to stop piracy?

    Hypothesis X can't be proven using method Y.

    All music downloaders can't be proven to be pirates

    Protocol X performs poorly for task Y.

    Suing all music downloaders performs poorly for stopping music piracy

    Method X has unexpected fundamental limitations.

    Forcing people to buy music only on CD / Tape / Vinyl doesn't appeal to all customers

    While investigating X, you disco

  • by Fantastic Lad ( 198284 ) on Thursday February 04, 2010 @01:00AM (#31019312)

    This is a fantastic idea! It takes a great deal of strength to do this; one has to learn how to have fun and ignore the pangs of the ego.

    James Burke's Connections [wikipedia.org] was based on a similar philosophy. Non-linear thinking is a very powerful method of moving through time. Many geeks live in the clutches of an obsessive desire to control everything so that they don't get hurt by being wrong. If they could just relax and roll with the ups and downs and not be so hard on themselves, not care if they are laughed at, then they would find their power and perhaps start living lives of consequence.

    One university professor described an enormously powerful way of doing research: when you're up against a wall, seeking fruitlessly to find a specific title to continue your line of thinking, just pull out some random book nearby. It doesn't even have to be from the same shelf or Dewey code. It will have the answer. -But only if you're tuned to your inner Jedi.

    Those who deny their inner Jedi are forever lost. But the upside, I guess, is that nobody will laugh at them.

    -FL

  • Most failed results have no useful knowledge in them. Having a huge amount of them is less useful than it sounds.
  • by Dr_Ish ( 639005 ) on Thursday February 04, 2010 @01:15AM (#31019364) Homepage
    One of the things that many scientists lack is a good grounding in the Philosophy of Science. The public version of science, largely pushed by science teachers, has its origin in the Vienna Circle of Logical Positivists. This is now largely known to be problematic, but is still the prevailing view. Folks should read Feyerabend's *Against Method* [google.com] or Ravetz's *Scientific Knowledge and Its Social Problems* [amazon.com] for a more realistic view.

    As a scientist, I can also tell tales about how the scientific method gets distorted by ideology. When I was in grad school, I was working on a complex set of problems that were a horror -- a week spent doing eight hours a day pumping numbers into a scientific calculator is not my idea of fun. However, back then, it was a necessary evil. So, I was about to have to do another horror week with the calculator, which I did not want to do, so I was wasting time and did something silly. It turned out to be a great idea. It gave a whole new method to solve the problem type at hand. A number of other people had a hand in the final paper, but I got to be first author -- unfortunately, as only one author amongst many. The paper made claims about the hypotheses that were being tested; I objected very strongly to this -- there was no hypothesis, we just got lucky. However, there is a paper with my name on it, published in the 20th century, that contains claims about what we discovered which are false, at least with respect to hypotheses and all that stuff, in order to ensure that we were following someone's idea of the scientific method. It irks me even today. Fortunately, a book about the issue now gives a more accurate account. However, there is no doubt that scientific ideology can drive out the truth. Thus, what is proposed here is a good idea. Telling the truth (even if it does not conform to the ideologically driven official method) is something I teach my grad students even today.
    • Re: (Score:3, Insightful)

      One of the things that many scientists lack is a good grounding in the Philosophy of Science. The public version of science, largely pushed by science teachers, has its origin in the Vienna Circle of Logical Positivists. This is now largely known to be problematic, but is still the prevailing view.

      Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.

      Is it really that complicated to understand that "falsifiability" is only a useful concept (and a somewhat limited one at that) for describing the process of testing hypotheses that are already formulated, but it gives almost no guidance about how to come up with such hypot

      • Yeah, we've all seen the cartoon [magicanimation.com], but the scientific method does not claim to offer a method for creativity; to do so would be tautological. What the scientific method offers is a useful way to test the fruits of creativity. The bad fruit tends to complain that there are no instructions for creativity [youtube.com].
      • by khallow ( 566160 )

        Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.

        An alternate hypothesis here is that you are wrong in some important way. Given that Popperian epistemology isn't ultimately useless, I'd start with that.

        • Given that Popperian epistemology isn't ultimately useless, I'd start with that.

          Have you read the philosophy of science stuff mentioned by the GP (or other similar stuff)? If not, take some time to do that before assuming I'm just a troll.

          I'm not saying that Popper's falsifiability criterion isn't a helpful simplification, or that there aren't other aspects of Popper with great insight. What I'm saying is that if you try to use Popper as the sole basis for your epistemology, i.e., the complete theory of how you get your knowledge about the world, you'll find his theori

          • by khallow ( 566160 )

            Have you read the philosophy of science stuff mentioned by the GP (or other similar stuff)? If not, take some time to do that before assuming I'm just a troll.

            No, but I will.

            I'm not saying that Popper's falsifiability criterion isn't a helpful simplification, or that there aren't other aspects of Popper with great insight. What I'm saying is that if you try to use Popper as the sole basis for your epistemology, i.e., the complete theory of how you get your knowledge about the world, you'll find his theories are hopelessly incomplete... not to mention (as I said) an inaccurate fit when you look at the way science actually develops in practice.

            I guess what bugs me here is the rhetorical exaggeration. An "inaccurate fit" seems an appropriate description. "Hopelessly incomplete" does not. Nor does the phrase "ultimately useless" which triggered my original complaint. I'm not claiming that Popper doesn't have problems, I'm merely pointing out that his ideas remain useful. I imagine any serious discussion of knowledge-seeking descendants of the scientific method, millennia from now, will still include Popperian ideas in there.

            There have been hundreds, if not thousands, of books written on the philosophy of science in the past century, most of them since Popper. Obviously if the issues were all resolved by one simple idea from Karl Popper, people would have stopped writing. They haven't.

            There hav

      • >>Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.

        Yeah, I've never understood why people prefer Popper's fetishistic focus on falsifiability to define what is "scientific", especially given how large a percentage of our knowledge is not falsifiable.

        Consider:
        I observe a comet through my telescope and I see it explode

        • Re: (Score:3, Informative)

          by AndersOSU ( 873247 )

          Watching something isn't scientific observation.

          Saying I saw a comet explode isn't science.

          Saying I saw a comet explode as it neared the sun is getting close because now you're hypothesizing that the sun had something to do with it.

          Saying, "the comet exploded due to the melting of water ice as it neared the sun, similar comets should explode as they near the sun as water ice appears to be a fundamental structural element" is science because now you're making testable and falsifiable statements about the com

          • Thanks for a very good explanation of a lot of these things.

            The hallmark of science is the development of models that yield useful information, but the only way to know if the model is right is to test it - which is why Popper and everyone else is so obsessed with falsifiability.

            Exactly. I think people misunderstood my previous post as saying that falsifiability is useless. (Because that's all people tend to know about Popper.) It's not. It's a simple model, and the way things actually work is more complicated, but it's not a bad place to begin.

            Instead, what I said is that Popper's epistemology isn't ultimately enough for us to understand the world. It isn't enough to make scientific advances or to predict which the

          • >>Saying I saw a comet explode isn't science.
            >>Saying, "the comet exploded due to the melting of water ice as it neared the sun, similar comets should explode as they near the sun as water ice appears to be a fundamental structural element"

            And this is where the general public would disagree with you, and (I'm guessing), most of the scientific community. Experimentation and observation are key components of what most people would define to be "science". If you go back to the grade school model of

    • by Shipud ( 685171 ) *
      "Philosophers, incidentally, say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong. For example, some philosopher or other said it is fundamental to the scientific effort that if an experiment is performed in, say, Stockholm, and then the same experiment is done in, say, Quito, the same results must occur. That is quite false. It is not necessary that science do that; it may be a fact of experience, but it is not necessary.
  • by syousef ( 465911 ) on Thursday February 04, 2010 @01:42AM (#31019476) Journal

    Einstein wasted the last half of his life on wishful thinking: "God does not play dice". Well, it turns out we're pretty sure he does. See Bell's theorem, which shows that it can't just be hidden variables. And by all accounts, for a theoretical physicist he sucked at advanced math.

    Isaac Newton was a horrible little man. Ill tempered, neurotic, and did wild experiments that he was lucky didn't blind him. Let's not forget the nastiness with Leibniz.

    Galileo had the social skills of a village idiot, which led to the suppression of his work and his imprisonment by the authorities that he angered. (They were idiots too, but that's beside the point.)

    They're three of the greatest but I could go on.

    We like to pretend our scientists are great men with a couple of eccentricities, way too smart to socialise or tolerate fools, but the fact is their thinking isn't so superior OR logical OR scientific EXCEPT in their areas of expertise. THAT is why they are remembered. Not because they were above being unscientific.

      >>See Bell's theorem, which shows that it can't just be hidden variables.

      No, it means there are (probably) no LOCAL hidden variables. Given that nobody really understands what happens during wavefunction collapse (or rather, the mechanism behind it), it's hard to say that he's necessarily wrong. Quantum mechanics is deeply weird, and science has been getting by describing how it works, rather than why.

      I'm not sure why you're trying to claim that scientists can't use intuition in science - if that w

      • by syousef ( 465911 )

        No, it means there are (probably) no LOCAL hidden variables. Given that nobody really understands what happens during wavefunction collapse (or rather, the mechanism behind it), it's hard to say that he's necessarily wrong. Quantum mechanics is deeply weird, and science has been getting by describing how it works, rather than why.

        Why should quantum mechanics - something well beyond our everyday experience - not be weird? You think Relativity is intuitive?

        I'm not sure why you're trying to claim that scien

        • >>How very unscientific of you. Everything has to do with something. Now put your petty sarcasm away.

          I think you're confusing "unscientific" with "everything that annoys syousef". If a scientist getting into an argument with another scientist over priority (and who published first IS related to science, whether you like it or not) annoys syousef, then, by definition, it is not science. Even though it is. Scientists working on their own are not scientists! Syousef says so!

          Do you realize how uninforme

    • Well I might argue that the people you're talking about weren't even "scientists" in the modern sense. What they practiced might be better described as natural philosophy [wikipedia.org]. It's not as though Einstein was remembered for his lab experiments. Essentially his innovation was that he re-imagined what it meant to "measure" something.

      • by syousef ( 465911 )

        Well I might argue that the people you're talking about weren't even "scientists" in the modern sense. What they practiced might be better described as natural philosophy. It's not as though Einstein was remembered for his lab experiments. Essentially his innovation was that he re-imagined what it meant to "measure" something.

        Since when is a theoretician not a scientist? Only an experimental scientist is going to be remembered for his experiments. Most of the time modern experiments are too complex for lay

        • Since when is a theoretician not a scientist?

          Depends on the level that they're theorizing. Are they working within the framework of "modern science working the way we think it works"? Or are they completely rewriting the way we think it works on a fundamental level?

          I'm not saying a scientist's work all takes place in a lab, but Einstein's most famous achievement wasn't so much in his reworking of a mathematical equation, tweaking an existing theory, working in a lab, or going through anything that we would call "the scientific method". His achieve

          • by syousef ( 465911 )

            Now you can call that science, but really it fits more into a particular branch of philosophy.

            You can call it rock climbing for all I care. You're wrong, and public opinion certainly isn't with you. Very few people would agree with you that Einstein wasn't a scientist. By the way his work made specific testable predictions and if that isn't part of the scientific method I don't know what is.

    • Your argument might have smelled more honest if you had included at least one counterexample, for instance Richard Feynman. Beyond his sheer brilliance, the man was committed to thinking scientifically in every one of his life endeavors (except maybe his divergences into art) and worked diligently to communicate the principles of scientific thinking to every student he ever came in contact with.
      • by syousef ( 465911 )

        Enough of your Feynman worship. He was very good at what he did, but he was a womaniser and a drunk - certainly after he lost his first wife. Have you read one of the biographies of Feynman? I have. He also took unnecessary risks with his career for a basic ego trip (like showing up the security theatre at the military base where he worked on the atom bomb). Furthermore, his lectures aren't universally accepted as great. Some people believe them to be long-winded and confusing.

        Personally I think he was as good a

  • by InlawBiker ( 1124825 ) on Thursday February 04, 2010 @01:49AM (#31019494)
    Don't date Wendy from the admissions office. Spectacular failure.
    • by jamesh ( 87723 ) on Thursday February 04, 2010 @03:59AM (#31020028)

      And you are basing this on one datum? Have you learned NOTHING??? Go back and try again and see if you get the same outcome.

      • Sadly, past failures have forever tainted the sample.

      • Since we haven’t invented a time machine yet (working on it ;)... since Wendy’s state is not resettable, unless there was a massive amount of alcohol involved the first time, which already reset it (wish I had that option the day after the date with her!)... and since there is only one Wendy just like her (oh thank god for that one!)... going back is not an option. You can only retry with the accumulated state.

  • Redundant.

      Well, duh.

    SB

  • Maybe on that list of things that "don't work" are things that never worked because the experiment was not well designed.

    It is my understanding that the democratic world is better off because we don't firmly control what people experiment with. So people are free to try things that "don't work".

  • Old Problem (Score:3, Insightful)

    by RobinEggs ( 1453925 ) on Thursday February 04, 2010 @02:35AM (#31019652)

    Trouble is, this knowledge is not shared using the usual method of scientific communication: the peer-reviewed article. It remains within the lab, or at the most shared informally among close colleagues. As it stands, the scientific culture discourages sharing negative results.

    This sort of complaint goes back a very long way, and it's certainly as good a time as any to address it head-on.

    We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover up all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although there has been, in these days, some interest in this kind of thing.

    - Richard Feynman, Nobel Prize Acceptance Lecture, 1965

  • If it lives up to its title, there'll be too much fun stuff and too much mystery in this one journal. All the experiments that don't turn out as predicted will end up being documented in the JoSaUR. If we're unlucky, these strange results will get buried in this journal and no one will bother to try to reproduce the unexpected. The serendipitous and unexpected is, of course, exactly what moves science forward, so I hope experiments that end up in JoSaUR do get looked at again, and hard.


    • I don't know if it is such a good idea to bundle unexpected facts from so many disciplines in one journal... I mean, if you're in computer science, would you like reading that such and such method for manipulating the DNA of a fruit-fly produces an anomaly in its social behavior? I think not...

  • never attribute to serendipity what can be explained by science

  • by Yvanhoe ( 564877 ) on Thursday February 04, 2010 @05:03AM (#31020258) Journal
    I can't help but remember Sony's founder explaining how they were looking for ways of making efficient small transistors with various materials, and that they had learned from Bell Labs that silicon gave very poor results, so they spent minimal resources on it.

    I also can't help wondering if this is a good use of peer reviewing, which is in short supply, or so I've heard.
  • I love how people often point out "you can't prove a negative" or "you can't publish negative results". It turns out you're very wrong if you think either is true.

    At first sight it appears that the idea behind this journal is to share failed attempts. But look at the kind of examples the website would like you to prove: "Prove that method X does not work for problem Y." This is *not* a failed attempt. You succeeded in proving something. Some great papers dealing with the P?=NP problem prove exactly this

  • This idea was already executed a while ago by the Journal of Negative Results in Ecology and Evolutionary Biology [jnr-eeb.org], the Journal of Negative Results in BioMedicine [jnrbm.com], the Journal of Negative Results in Speech and Audio Sciences [haikya.us], and probably a few others that Google will help you find, just as it helped me find them. But, as I recall, even PLoS [plos.org] had publishing negative results in its charter, and specifically PLoS ONE [plosone.org] encouraged them, being all-inclusive.

    The problem? Most of them (except for PLoS and PLoS ONE) have a

  • ... that explains how the whole Mentos+soda thing was actually a failed attempt at cold fusion?

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"

Working...