The Journal of Serendipitous and Unexpected Results
SilverTooth writes "Often, when watching a science documentary or reading an article, it seems that the scientists were executing a well-laid-out plan that led to their discovery. Anyone familiar with the process of scientific discovery realizes that is a far cry from reality. Scientific discovery is fraught with false starts and blind alleys. As a result, labs accumulate vast amounts of valuable knowledge about what not to do and what does not work. Trouble is, this knowledge is not shared using the usual method of scientific communication: the peer-reviewed article. It remains within the lab, or at most is shared informally among close colleagues. As it stands, the scientific culture discourages sharing negative results. Byte Size Biology reports on a forthcoming journal whose aim is to change this: the Journal of Serendipitous and Unexpected Results. Hopefully, scientists will be able to better share and learn more from each other's experience and mistakes."
So... (Score:5, Funny)
If the LHC generates an Earth-eating black hole, will it be published here?
Re:So... (Score:5, Funny)
Re: (Score:2)
I don't think so. This Journal will not publish any results anymore. Ever.
There. Fixed that for ya.
A great idea (Score:5, Interesting)
Re:A great idea (Score:5, Interesting)
Re:A great idea (Score:5, Interesting)
Repeating the study on a different population and failing to find a significant result can also show that the results don't generalise to that population.
Exhibit N: (Score:2)
Re:A great idea (Score:5, Interesting)
I agree (my first paper was a negative-results paper), but I think there are some kinds of negative results that are relatively hard to get published. Papers along the lines of: "here's an approach you might have thought would work, but it turned out that it didn't, and in retrospect we can see why, which this paper will explain". If you try to submit a paper like that, you often get push-back of, "oh well, yeah it's obvious why that wouldn't work, dunno why you didn't see it earlier". And of course it often is obvious once you've read why it doesn't work.
As you point out, it's quite a bit easier to get negative results published if someone else had already claimed them as positive results. In that case, you're not both proposing and shooting down the idea simultaneously, but shooting down (or failing to confirm) someone else's idea, which has the advantages that: 1) you have evidence that at least one presumably smart person really didn't think it was obviously a bad idea (in fact, they thought it was a good one, and even that it worked); and 2) you're positioned as correcting an error in the literature, rather than as introducing a correction for a hypothetical error nobody has yet made.
It's a bit tricky to fix, because some negative results really are obvious: it does nobody in the field any good to publish "we tried X on Y, and it didn't work", if genuinely nobody who was competent in the field would've thought X would work on Y, and the reason was exactly the reason you discovered.
Incidentally, here's [uottawa.ca] one previous attempt to start such a journal that didn't really get off the ground. Their one published article, which is quite good, is of the form I mention: the authors of a system called Swordfish recount an idea they had to produce an improvement, Swordfish2, which in the end turned out to be no better than the original Swordfish. It was hard to get published elsewhere, because it wasn't correcting an existing result (nobody had previously proposed that what they tried in Swordfish2 was actually a good idea), but it's interesting (to me, at least) because it really does seem like a plausible idea, and I feel I learned something in reading why it didn't work.
Re: (Score:3, Interesting)
Re: (Score:3, Funny)
There had been 17 -- but some overlord deleted the others before anyone got a chance to see them. These three escaped censorship because they had already been seen.
Go ahead -- prove me wrong!
Re: (Score:2)
Their stories aren't exactly what the article is talking about, imo. It sounds like they all did studies critical of existing positive results. They didn't just write a paper out of the blue that said, "We thought it would be interesting and important to show X. We tried A, B, and C, but none of those things did what we expected. We still haven't shown X." The exception might be if A, B, and C are somehow exhaustive, so that their failure to show X actually disproves X.
I think part of the reason papers
Re: (Score:2)
I can see why journals might be hesitant to publish a paper like this. It's a little like that newspaper headline: "Fairly Small Earthquake in Andes--Not Very Many Killed", not exactly an eyeball grabber. Science journals are also dependent on sales for advertising revenue, and those sales depend on papers showing positive results.
conferences and informal communication help (Score:3, Interesting)
In addition, there is often a lot of benefit in working things out for yourself - this provides the in-depth understanding on which to base deeper work, which can be lacking if you are merely following instructions...
Re: (Score:2)
Yeah, I agree with that, though it's nice to have things recorded also. As far as informal communication goes, I'm increasingly finding blogs to be a good source for that. They're more informal (and timely) than journal or conference papers, but still written, sometimes at length, and usually remain online for a while. Not a permanent record, but more permanent than a chance hallway conversation at a conference.
Re: (Score:2)
As you point out, it's quite a bit easier to get negative results published if someone else had already claimed them as positive results.
This one is my favorite:
G. Hathaway, B. Cleveland, Y. Bao, Gravity modification experiment using a rotating superconducting disk and radio frequency fields, Physica C: Superconductivity, Volume 385, Issue 4, 1 April 2003, Pages 488-500, ISSN 0921-4534, DOI: 10.1016/S0921-4534(02)02284-0.
Re: (Score:2)
Papers along the lines of: "here's an approach you might have thought would work, but it turned out that it didn't, and in retrospect we can see why, which this paper will explain".
In experimental papers I usually try to have a section entitled (really) "Things that did not work so well" in which I mention approaches that seemed like good ideas that didn't work out. If I were an editor of an experimental journal I would make this part of the standard format, as any experiment that doesn't lead to the disc
Re: (Score:2)
Re: (Score:3, Insightful)
And why you should get funding to do a follow-up study.
Re: (Score:2)
I went to a University whose perhaps biggest contribution to science and mankind was a failure. Case Western Reserve University was where the Michelson-Morley interferometer experiments took place - the failure to find the aether that launched Special Relativity. Einstein once came there, "to see where it all started."
Re:A great idea (Score:4, Interesting)
Re: (Score:2)
WTF? What “embarrassment”?
Sorry, but real scientists don’t care if what happened was what they expected. (Because it’s always cool new knowledge. Most of the time, the unexpected results are way cooler anyway.)
Or even whether they got new information about what they studied. (Because they still gained the knowledge that this method does not give any new information.)
I really don’t get in what twisted mindset one can see that as bad. It boggles my mind...
Re: (Score:3, Insightful)
Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.
The funny thing is, discoveries are not "I told you!". They're "That's interesting...".
Re: (Score:2)
The problem is the implication that a lack of discovery is "That's not interesting...". There is certainly very strong evidence that publication bias [wikipedia.org] exists and is a tremendous problem in the academic world. While plenty of negative papers do get published, the vast majority of published papers have some positive results. As has been discussed already, when you get negative results and you want them published, you write them up as if they were positive - there is a reason for this, and it's just how people wo
Re:A great idea (Score:4, Interesting)
In my experience it has nothing to do with egos or competition.
But it is damn hard to publish something that doesn't work!
I was recently involved in developing a microfluidic system for diagnostics. Every milestone and sub-problem was solved. But when the final injection-molded devices were tested, they failed due to a sort of interesting, non-obvious combination of factors. There were two issues with publishing this: the problems were very specific to our system, and the conclusion could be written in 5 lines of text.
It would have been like a movie with a huge setup, but within the first 3 minutes the hero stumbles, breaks his neck, and dies. End credits. It was an EU-funded research project; no more money, no more time. You can't get funding to continue a failed project. End of story.
But my point is, in all my experience as a scientist, I've never seen one of my colleagues say "we should hide this", but I've often heard "I would like to tell about this, but I don't know of a journal that would accept it".
Also when something fails we need to carry on, but now we're behind schedule...
Re: (Score:2)
Re: (Score:2)
But my point is, in all my experience as a scientist, I've never seen one of my colleagues say "we should hide this", but I've often heard "I would like to tell about this, but I don't know of a journal that would accept it".
Isn't that the point of this journal? To be a place for exactly that type of publication?
Re: (Score:2)
Egos are massive and competition is fierce, so asking researchers to admit a mistake or give the competition a short cut is a tall order.
This must depend on the field if it's true anywhere. In biology, you'd have to be the world's biggest ass to act like you've never had an unexpected result. From my limited experience, if you suggest that -most- of your results are completely what you were expecting, I'd suspect you were lying. It seems like on average, every other research presentation I see, by heads of labs included, the presenter admits some of the most interesting data was not what they expected.
The discovery of penicillin was a mon
Re: (Score:2)
This is exactly right.
A complete failure to grasp three fundamental human behaviors:
Re: (Score:2)
Wrong perspective.
Why do you call it a mistake? What crazy idea is that, to call something good (gaining useful new information) “bad”?
It’s not bad, so you don’t have to “admit” anything.
That’s the great thing about science: The worst thing that can happen, is that you don’t learn something new. (= 0)
Everything else (= +x | -x) is a success.
Who cares if it was expected.
Frankly, I find the unexpected results to be far cooler than the expected ones. :)
A scientis
This could be good (Score:2, Insightful)
Re:This could be good (Score:5, Informative)
Penicillin? Not a mistake so much as general messiness. The guy stacked up some bacterial cultures and went on vacation. One of them grew mold. He noticed that the bacteria near the mold were dead.
Re: (Score:2)
Penicillin? Not a mistake so much as general messiness.
OBLIG: If that's the case, then I've probably got the cure for cancer in my room somewhere.
Fantastic idea (Score:5, Insightful)
Sometimes talking to people with very pro-science views, you get the idea that "science" is either an accumulated set of known facts or a perfect method which, because of peer review, is infallible at learning absolute truth.
In reality, it's just a set of processes that we've developed and which has been generally more successful at producing helpful results than other methods. There is no reason to think that the way we go about it couldn't be improved. I can't imagine that failing to share the results of failed experiments doesn't sometimes result in the loss of important information.
Coincidentally I just saw this talk [ted.com] which raises the question whether helpful data can be gathered even if it's not gathered through conventional rigorous scientific methods. It seems like an interesting idea-- they're essentially gathering lots of data from various sources and using statistical analysis developed by economists to try to draw conclusions. My biggest concern would be purposeful manipulation by someone with an agenda.
But anyway, all of this is to say that this has gotten me thinking about how the scientific process may still be open to some innovation.
Re: (Score:3, Interesting)
using statistical analysis developed by economists
Funny, given recent events I would be more worried about the economists models.
Re: (Score:3, Funny)
"using statistical analysis developed by economists to try to draw conclusions"
this sounds promising
/deadpan
Re: (Score:3, Informative)
All the trappings of science, the double-blind experiments, the peer review, etc. are merely way
Re: (Score:3, Informative)
You went too far, you stripped off the meat. Science uses observation to find models that accurately predict new observations. The guts of the philosophy is that the utility of reliable predictive models is self evident.
Re: (Score:2)
I think what you are saying is that if a model accurately predicts something, then the model is accurate; in reality all you know is that it accurately predicts something. The rest of the model may be completely in error, it hasn't been verified yet. A simple example of this is when Columbus presente
Re: (Score:2)
Nope, I'm saying it's useful. The Earth-centric model lasted so long because it accurately predicted the motion of the moon and planets.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Stripped to its bare, ideological minimum, science is nothing more than observation. You can extrapolate the implications of those observations, but in the end everything we know in science can be traced down to an observation. That is why intelligent design fails at being a science: while it technically might be true, it is not an observation, it is a guess. FSM is not an observation it is a (silly) guess.
All the trappings of science, the double-blind experiments, the peer review, etc. are merely ways to improve the accuracy of our observations. It is really beautiful, actually, to realize that for any fact in science you can say, "how do we know this?" and get back to the original observations that show it to be true. This is not something you can do with religion, or philosophy, or literary criticism.
Without philosophy, observation is only historical cataloging; mere correlation. Recognition of causation requires something outside of observation: "silly" guesses. Even after doing hundreds of observations, we'll never know if any of the silly guesses are true, but we'll know which ones turn out false. But pure philosophy has one up on science: certain things can be proved true with mere thought experiment.
Oblig. XKCD: http://xkcd.com/435/ [xkcd.com] But he forgot the logician on the far right telling the mathemat
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
In this particular case, I've always thought Descartes was making a rather large assumption, not an observation.....how does he know he is thinking and not someone else's dream?
Re: (Score:2)
>>But anyway, all of this is to say that this has gotten me thinking about how the scientific process may still be open to some innovation.
The sad fact is, there's a lot of work being done under the name of science that isn't really science, or perhaps, science-lite. Do you think "Climate Scientists" have the ability to run scientific experiments? (Let's start with 1,000 earths, add 100ppm of CO2 to half of them, and measure the difference in average temperature across 100 years.) No, of course not. B
Re: (Score:2)
I think you're a little too picky about your definition of science.
You don't need to do a double blind experiment for your observations to become science. Geology, astronomy, and evolutionary biology are definitely science even if we can't try out two different methods of creating granite, initiating the big bang, or applying selective pressures to dinosaurs. Likewise with climate science. You're dealt a problem with a long history and a series of facts. Now try to figure out what will happen if variabl
Re: (Score:2)
If you go with that definition, then economics and psychology are science.
In other words, if you're going to admit anyone that uses logic, observation, and rigorous statistics (but no experimentation) into the club of scientific disciplines, then you either have to admit nearly every discipline at a university, or nearly none of them.
As I said in my previous post, our current definitions are severely flawed. People who get a degree in Math at my university, get a Bachelor or Master of Arts, not science. But
Re: (Score:2)
No, but neither do astronomers, cosmologists, epidemiologists, paleontologists...
The ability to run experiments is not a necessary condition for a field to be a science.
Alternative explanation: they are not science, but are science-y, and so people label them science. And I wouldn't even give epidemiology that much credit.
Personally, I think our current definitions are insufficient. Perhaps we need a new term, like "Statistical Science" to label things like climate science and epidemiology. There's indistin
Re: (Score:3, Insightful)
This is because statistical error -- the error you make because of the limits of your sample size -- goes down as 1/sqrt(N), but systematic error -- the error you make because of imperfect knowledge of your model, biases you can't estimate and control for, and so on -- does not.
A large and difficult part of the field of computational physics I do, at least, is accurately estimating systematic errors. Statistical errors are easy, you just do the probability shit. But honestly estimating systematic errors is
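The 1/sqrt(N) contrast between statistical and systematic error is easy to see in a quick simulation. This is a minimal, hypothetical Python sketch (not from the comment): a made-up "instrument" adds a fixed +0.02 bias to a noisy measurement, and we watch what happens to each error as the sample size N grows.

```python
import math
import random

random.seed(0)

# Hypothetical measurement: the true value is 0.5, but the instrument
# carries a fixed bias of +0.02 that the experimenter doesn't know about.
TRUE_VALUE = 0.5
BIAS = 0.02

def measure():
    # Uniform noise with mean 0.5, shifted by the systematic bias.
    return random.random() + BIAS

def run_experiment(n, trials=1000):
    """Repeat an N-sample experiment many times; return the average offset of
    the sample mean from the truth (systematic error) and the spread of the
    sample means across repeats (statistical error)."""
    means = [sum(measure() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    stat_err = math.sqrt(sum((m - mu) ** 2 for m in means) / trials)
    return mu - TRUE_VALUE, stat_err

for n in (100, 400, 1600):
    sys_err, stat_err = run_experiment(n)
    print(f"N={n:5d}  systematic error ~ {sys_err:+.3f}  statistical error ~ {stat_err:.4f}")
```

Each quadrupling of N roughly halves the statistical error (the 1/sqrt(N) term), while the +0.02 systematic error stays put, which is the comment's point: more data alone cannot fix an imperfect model or an uncontrolled bias.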
Re: (Score:2)
In the social sciences and medicine, I think it's an even deeper problem than models. In complex interconnected systems like "human psychology" or "human societies" or "the cardiovascular system", there are very few variables that turn out to be either: 1) absolutely unrelated to each other; or 2) exactly identical. Everything is slightly related, and slightly different, so given large enough datasets, you can always show statistical significance, because it really is a real effect. That's one reason there'
Problem is (Score:5, Insightful)
The problem with any change or reform of the publishing system is that publications are so important for the individual scientist. A paper isn't just a neat way to disseminate results. They are your work evaluation and your CV; they are keeping the score as it were. Where you publish and how often you publish directly determines where - or if - you work another year or two down the road.
And even a short paper takes a lot of time and effort to write. For an informal "don't do that; we tried and it didn't work" email to a colleague, you could just jot down three or four paragraphs after lunch. Make a paper out of it and you have weeks or more of work ahead of you - looking up other previously published reports on the same kind of experiment; doing your best to figure out and explain the exact causes; squaring your (lack of) results with the apparent success of other groups that did something similar; making neat, clear graphs and illustrations as needed; getting formal permission from your lab and your funding agency (and your co-authors' labs and funding sources) to actually publish the thing. Then revising and editing the paper multiple times after comments from your co-authors and reviewers.
So, getting good publications is vital for your ability to make rent and buy food for your family. Writing publications takes a lot of time and effort - time that is pretty limited. So, even though the will to spread the word on a negative result may be there, chances are writing it up will be relegated to the "when I've got a bit of spare time" pile, where it will likely sit until well after retirement.
Technique X fails on problem Y. (Score:2, Funny)
an article for the first issue? covers all questions
So are we talking:
Technique X fails on problem Y.
Suing all music downloaders fails to stop piracy?
Hypothesis X can't be proven using method Y.
All music downloaders can't be proven to be pirates
Protocol X performs poorly for task Y.
Suing all music downloaders performs poorly for stopping music piracy
Method X has unexpected fundamental limitations.
Forcing people to buy music only on CD / Tape / Vinyl doesn't appeal to all customers
While investigating X, you disco
Awesome! It's about time! (Score:4, Insightful)
This is a fantastic idea! It takes a great deal of strength to do this; one has to learn how to have fun and ignore the pangs of the ego.
James Burke's Connections [wikipedia.org] was based on similar philosophy. Non-linear thinking is a very powerful method of moving through time. Many geeks live in the clutches of an obsessive desire to control everything so that they don't get hurt by being wrong. If they could just relax and roll with the ups and downs and not be so hard on themselves, not care if they are laughed at, then they would find their power and perhaps start living lives of consequence.
One university professor described an enormously powerful way of doing research: when you're up against a wall, seeking fruitlessly to find a specific title to continue your line of thinking, instead just pull out some random book nearby. It doesn't even have to be from the same shelf or Dewey code. It will have the answer. -But only if you're tuned to your inner Jedi.
Those who deny their inner Jedi are forever lost. But the upside, I guess, is that nobody will laugh at them.
-FL
Re: (Score:2)
Signal to noise (Score:2)
Re: (Score:2)
Just like your post, then. ;)
This is a great idea! (Score:5, Insightful)
As a scientist, I can also tell tales about how the scientific method gets distorted by ideology. When I was in grad school, I was working on a complex set of problems that were a horror -- a week doing eight hours a day pumping numbers into a scientific calculator is not my idea of fun. However, back then, it was a necessary evil. So, I was about to have to do another horror week with the calculator, which I did not want to do, so I was wasting time and did something silly. It turned out to be a great idea. It gave a whole new method to solve the problem type at hand. A number of other people had a hand in the final paper, but I got to be first author -- unfortunately, as only one author amongst many. The paper made claims about the hypotheses that were being tested; I objected very strongly to this -- there was no hypothesis, we just got lucky. However, there is a paper with my name on it, published in the 20th century, that contains claims about what we discovered which are false, at least with respect to hypotheses and all that stuff, in order to ensure that we were following someone's idea of the scientific method. It irks me even today. Fortunately, a book about the issue now gives a more accurate account. However, there is no doubt that scientific ideology can drive out the truth. Thus, what is proposed here is a good idea. Telling the truth (even if it does not conform to the ideologically driven official method) is something I teach my grad students even today.
Re: (Score:3, Insightful)
One of the things that many scientists lack is a good grounding in the philosophy of science. The public version of science, largely pushed by science teachers, has an origin in the Vienna Circle of Logical Positivists. This is now largely known to be problematic, but is still the prevailing view.
Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.
Is it really that complicated to understand that "falsifiability" is only a useful concept (and a somewhat limited one at that) for describing the process of testing hypotheses that are already formulated, but it gives almost no guidance about how to come up with such hypot
And then a miracle occurs (Score:2)
Re: (Score:2)
Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.
An alternate hypothesis here is that you are wrong in some important way. Given that Popperian epistemology isn't ultimately useless, I'd start with that.
Re: (Score:2)
Given that Popperian epistemology isn't ultimately useless, I'd start with that.
Have you read the philosophy of science stuff mentioned by the GP (or other similar stuff)? If not, take some time to do that before assuming I'm just a troll.
I'm not saying that Popper's falsifiability criterion isn't a helpful simplification, or that there aren't other aspects of Popper that don't have great insight. What I'm saying is that if you try to use Popper as the sole basis for your epistemology, i.e., the complete theory of how you get your knowledge about the world, you'll find his theori
Re: (Score:2)
Have you read the philosophy of science stuff mentioned by the GP (or other similar stuff)? If not, take some time to do that before assuming I'm just a troll.
No, but I will.
I'm not saying that Popper's falsifiability criterion isn't a helpful simplification, or that there aren't other aspects of Popper that don't have great insight. What I'm saying is that if you try to use Popper as the sole basis for your epistemology, i.e., the complete theory of how you get your knowledge about the world, you'll find his theories are hopelessly incomplete... not to mention (as I said) an inaccurate fit when you look at the way science actually develops in practice.
I guess what bugs me here is the rhetorical exaggeration. An "inaccurate fit" seems an appropriate description. "Hopelessly incomplete" does not. Nor does the phrase "ultimately useless" which triggered my original complaint. I'm not claiming that Popper doesn't have problems, I'm merely pointing out that his ideas remain useful. I imagine any serious discussion of knowledge-seeking descendants of the scientific method, millennia from now, will still include Popperian ideas in there.
There have been hundreds, if not thousands, of books written on the philosophy of science in the past century, most of them since Popper. Obviously if the issues were all resolved by one simple idea from Karl Popper, people would have stopped writing. They haven't.
There hav
Re: (Score:2)
>>Thanks so much for pointing this out. It never ceases to amaze me how many scientists seem to believe blindly in some sort of simplified method taught in middle school or the interesting, but ultimately useless, Popperian epistemology.
Yeah, I've never understood why people prefer Popper's fetishistic focus on falsifiability to define what is "scientific", especially given what a large percentage of our knowledge is not falsifiable.
Consider:
I observe a comet through my telescope and I see it explode
Re: (Score:3, Informative)
Watching something isn't scientific observation.
Saying I saw a comet explode isn't science.
Saying I saw a comet explode as it neared the sun is getting close because now you're hypothesizing that the sun had something to do with it.
Saying, "the comet exploded due to the melting of water ice as it neared the sun, similar comets should explode as they near the sun as water ice appears to be a fundamental structural element" is science because now you're making testable and falsifiable statements about the com
Re: (Score:2)
The hallmark of science is the development of models that yield useful information, but the only way to know if the model is right is to test it - which is why Popper and everyone else is so obsessed with falsifiability.
Exactly. I think people misunderstood my previous post as saying that falsifiability is useless. (Because that's all people tend to know about Popper.) It's not. It's a simple model, and the way things actually work is more complicated, but it's not a bad place to begin.
Instead, what I said is that Popper's epistemology isn't ultimately enough for us to understand the world. It isn't enough to make scientific advances or to predict which the
Re: (Score:2)
>>Saying I saw a comet explode isn't science.
>>Saying, "the comet exploded due to the melting of water ice as it neared the sun, similar comets should explode as they near the sun as water ice appears to be a fundamental structural element"
And this is where the general public would disagree with you, and (I'm guessing), most of the scientific community. Experimentation and observation are key components of what most people would define to be "science". If you go back to the grade school model of
Re: (Score:2)
Great scientists weren't very scientific (Score:5, Insightful)
Einstein wasted the last half of his life on wishful thinking: "God does not play dice". Well, it turns out we're pretty sure he does. See Bell's theorem, which shows that it can't just be hidden variables. And by all accounts, for a theoretical physicist, he sucked at advanced math.
Isaac Newton was a horrible little man. Ill tempered, neurotic, and did wild experiments that he was lucky didn't blind him. Let's not forget the nastiness with Leibniz.
Galileo had the social skills of a village idiot which led to the suppression of his work and his imprisonment by the authorities that he angered. (They were idiots too but that's beside the point)
They're three of the greatest but I could go on.
We like to pretend our scientists are great men with a couple of eccentricities that are way too smart to socialise or tolerate fools but the fact is their thinking isn't so superior OR logical OR scientific EXCEPT in their areas of expertise. THAT is why they are remembered. Not because they were above being unscientific.
Re: (Score:2)
>>See Bell's theorem which shows that it can't just be hidden variables.
No, it means there's (probably) no NON-LOCAL hidden variables. Given that nobody really understands what happens during wavefunction collapse (or rather, the mechanism behind it), it's hard to say that he's necessarily wrong. Quantum Mechanics are deeply weird, and science has been getting by describing how they work, rather than why.
I'm not sure why you're trying to claim that scientists can't use intuition in science - if that w
Re: (Score:2)
No, it means there's (probably) no NON-LOCAL hidden variables. Given that nobody really understands what happens during wavefunction collapse (or rather, the mechanism behind it), it's hard to say that he's necessarily wrong. Quantum Mechanics are deeply weird, and science has been getting by describing how they work, rather than why.
Why should quantum mechanics - something well beyond our everyday experience - not be weird? You think relativity is intuitive?
I'm not sure why you're trying to claim that scien
Re: (Score:2)
>>How very unscientific of you. Everything has to do with something. Now put your petty sarcasm away.
I think you're confusing "unscientific" with "everything that annoys syousef". If a scientist getting into an argument with another scientist over priority (and who published first IS related to science, whether you like it or not) annoys syousef, then, by definition, it is not science. Even though it is. Scientists working on their own are not scientists! Syousef says so!
Do you realize how uninforme
Re: (Score:2)
Well I might argue that the people you're talking about weren't even "scientists" in the modern sense. What they practiced might be better described as natural philosophy [wikipedia.org]. It's not as though Einstein was remembered for his lab experiments. Essentially his innovation was that he re-imagined what it meant to "measure" something.
Re: (Score:2)
Well I might argue that the people you're talking about weren't even "scientists" in the modern sense. What they practiced might be better described as natural philosophy. It's not as though Einstein was remembered for his lab experiments. Essentially his innovation was that he re-imagined what it meant to "measure" something.
Since when is a theoretician not a scientist? Only an experimental scientist is going to be remembered for his experiments. Most of the time modern experiments are too complex for lay
Re: (Score:2)
Since when is a theoretician not a scientist?
Depends on the level that they're theorizing. Are they working within the framework of "modern science working the way we think it works"? Or are they completely rewriting the way we think it works on a fundamental level?
I'm not saying a scientist's work all takes place in a lab, but Einstein's most famous achievement wasn't so much in his reworking of a mathematical equation, tweaking an existing theory, working in a lab, or going through anything that we would call "the scientific method". His achieve
Re: (Score:2)
Now you can call that science, but really it fits more into a particular branch of philosophy.
You can call it rock climbing for all I care. You're wrong, and public opinion certainly isn't with you. Very few people would agree with you that Einstein wasn't a scientist. By the way, his work made specific, testable predictions, and if that isn't part of the scientific method I don't know what is.
Re: (Score:2)
Re: (Score:2)
Enough of your Feynman worship. He was very good at what he did, but was a womaniser and a drunk - certainly after he lost his first wife. Have you read one of the biographies of Feynman? I have. He also took unnecessary risks with his career for a basic ego trip (like showing up the security theatre on the military base where he worked on the atom bomb). Furthermore, his lectures aren't universally accepted as great. Some people believe them to be long-winded and confusing.
Personally I think he was as good a
Re: (Score:2)
Einstein may still have been correct. Bell's Theorem proved that the effects of quantum physics cannot be both deterministic (hidden variables) AND adhere to the Principle of Locality. There are indications that the Principle of Locality is incorrect.
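For what it's worth, the constraint Bell derived is usually stated as the CHSH inequality: any local hidden-variable theory obeys |S| <= 2, while quantum mechanics predicts values up to 2*sqrt(2) for entangled pairs. A minimal Python sketch of the quantum prediction (the measurement angles are the standard choice for maximal violation, and E(a, b) = -cos(a - b) is the textbook correlation for a spin-singlet pair):

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin-singlet measurements
    # at detector angles a and b: E(a, b) = -cos(a - b)
    return -math.cos(a - b)

# Standard CHSH settings that maximise the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: local hidden variables require |S| <= 2
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828, i.e. 2*sqrt(2), exceeding the local bound of 2
```

Experiments agree with the 2*sqrt(2) value, which is exactly why only the *local* hidden-variable option is ruled out.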
It doesn't matter if he's right or not. His belief was not based on science. It was based on him being unable to look at other possibilities. Ironic for a man who revolutionised physics.
Re: (Score:2)
It doesn't matter if he's right or not. His belief was not based on science.
Well how on earth is your belief supposed to be "based on science" when there isn't adequate scientific proof?
Re: (Score:2)
Well how on earth is your belief supposed to be "based on science" when there isn't adequate scientific proof?
Your belief shouldn't blind you to other possibilities, and should be consistent with the known facts.
Re: (Score:2)
Known facts? You just acknowledged that Einstein may have been right. The facts still aren't known, so I don't see how his view was supposed to be "based on science".
Good scientists base their views on what they think makes sense. Evidence and scientific studies serve to educate that sensibility of "what makes sense," but scientists aren't supposed to go, "Oh, this makes no sense to me, but someone proposed a theory that hasn't been proven yet, so I have to accept it."
Saying "God does not play dice," i
failed experiment (Score:5, Funny)
Re:failed experiment (Score:4, Funny)
And you are basing this on one datum? Have you learned NOTHING??? Go back and try again and see if you get the same outcome.
Re: (Score:2)
Sadly, past failures have forever tainted the sample.
Re: (Score:2)
Since we haven’t invented a time machine yet (working on it ;)... since Wendy’s state is not resettable, unless there was a massive amount of alcohol involved the first time, which already reset it (wish I had that option the day after the date with her!)... and since there is only one Wendy just like her (oh thank god for that one!)... going back is not an option. You can only try with the accumulated state.
Mod humanity (Score:2)
Redundant.
Well, duh.
SB
"what does not work". (Score:2)
Maybe on that list of things that "do not work" are things that never worked because the experiment was not well designed.
It is my understanding that the democratic world is better off because we don't firmly control what people experiment on. So people are free to try things that "don't work".
Old Problem (Score:3, Insightful)
Trouble is, this knowledge is not shared using the usual method of scientific communication: the peer-reviewed article. It remains within the lab, or at the most shared informally among close colleagues. As it stands, the scientific culture discourages sharing negative results.
This sort of complaint goes back a very long way, and it's certainly as good a time as any to address it head on.
We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover up all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing.
- Richard Feynman, Nobel Prize Acceptance Lecture, 1965
All the mystery goes to one journal (Score:2)
Re: (Score:2)
I don't know if it is such a good idea to bundle unexpected facts from so many disciplines in one journal... I mean, if you're in computer science, would you want to read that such-and-such a method for manipulating the DNA of a fruit fly produces an anomaly in its social behavior? I think not...
some advice (Score:2)
never attribute to serendipity what can be explained by science
Is that such a good idea? (Score:3, Interesting)
I can't help but also wonder if this is a good use of "peer reviewing", which is in short supply, or so I heard.
Semantic games? (Score:2)
I love how people often point out "you can't prove a negative" or "you can't publish negative results". Turns out that you are very wrong if you think that either is true.
At first sight it appears that the idea behind this journal is to share failed attempts. But look at the kind of examples the website would like you to prove: "Prove that method X does not work for problem Y." This is *not* a failed attempt. You succeeded in proving something. Some great papers dealing with the P?=NP problem prove exactly this
Good Idea, Not the First (Score:2)
This idea was already executed a while ago by the Journal of Negative Results in Ecology and Evolutionary Biology, the [jnr-eeb.org] Journal of Negative Results in BioMedicine [jnrbm.com], the Journal of Negative Results in Speech and Audio Sciences [haikya.us] and probably a few others that Google will help you find, just as it helped me find. But, as I recall, even PLoS [plos.org] had publishing negative results in its charter and specifically PLoS ONE [plosone.org] encouraged them, being all-inclusive.
The problem? Most of them (except for PLoS and PLoS ONE) have a
When will they publish the article... (Score:2)
... that explains how the whole Mentos+soda thing was actually a failed attempt at cold fusion?
Re: (Score:2)
If we ONLY publish and peer review our successes, our failures and errata are discarded.
In this data could be a LOT of really amazing potential, given peer review and continuance.
No wonder we don't have a theory of everything yet, we're not looking at nearly all the data.
Hmm. With the additions of Facebook and Twitter, the web is now essentially a compendium of 99% failures and errata. And according to your hypothesis, we should still find some amazing potential in it. I think you're on to something.
Re: (Score:3, Insightful)