Journal Article On Precognition Sparks Outrage
thomst writes "The New York Times has an article (cookies and free subscription required) about the protests generated by The Journal of Personality and Social Psychology's decision to accept for publication later this year an article (PDF format) on precognition (the Times erroneously calls it ESP). Complaints center around the peer reviewers, none of whom is an expert in statistical analysis."
Re:Research Funding (Score:5, Insightful)
Any evidence for precognition instantly takes precognition into the scientific realm and out of the paranormal, supernatural and occult.
Re:Research Funding (Score:4, Insightful)
I'm not sure they're that confident in their evidence. Nor should they be - they did a study, they publish their findings, lots of other scientists either put down rebuttals (as has already happened), or repeat the study and see if it's accurate enough to be true. That's the way science is supposed to work.
What's not supposed to happen is "Scientist A does an apparently sound study that appears to demonstrate something that scientists B,C, and D consider silly, and scientists B, C, and D stop scientist A's work from ever seeing the light of day."
Re:Research Funding (Score:2, Insightful)
something to the effect of 'since there are no rich fortune-tellers, we have to assume that they are all fakes.'
the answer is simple, really. they are all fakes.
life isn't all that complex. humans being fakers and liars is one constant we can count on in the universe.
Re:Great response paper (Score:5, Insightful)
Ouch. Taken down by two Bayesian tests on whether it's more likely that the paper is true or not. They didn't even need to get out of bed or dig out a big maths book to basically disprove the entire premise of the original paper using its own data.
As they hint at in that rebuttal - as a mathematician and someone of a scientific mind, I would just like to see *ONE* good test that conclusively shuts people up. Trouble is, no good test will report a false positive, and thus you'll never get the psychic / UFO / religion factions to even participate, let alone agree on the method of testing, because they would have to accept its findings.
Never dabble in statistics - the experts will roundly berate and correct you even if you *THINK* you're doing everything right. When PhDs can't even work out things like the Monty Hall Problem properly, you just know it's not something you can turn an amateur loose on.
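For anyone who distrusts the algebra behind Monty Hall, a quick Monte Carlo sketch settles it empirically. This is my own toy simulation, not anything from the paper or the rebuttals:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # hovers around 1/3
print(monty_hall(switch=True))   # hovers around 2/3
```

Run it a few times: switching wins about twice as often as staying, exactly as the (counterintuitive) analysis says.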
Re:Why Is It Wrong to Call This ESP? (Score:4, Insightful)
They fell at that one of your hurdles - they didn't do a proper analysis of their data. Basically, their data shows that the likelihood of pre-cog existing COMPARED TO it not existing is extremely minimal (actually quite strongly in favour of it not existing). But they specifically chose only certain analyses to conclude that it *did* exist.
There are many rebuttals at the moment, most linked to in these comments, that you can read, but basically - to remove all statistical jargon - they didn't bother to account for how probable their data was by pure chance. Their "error margin" is actually vastly larger than the effect their data shows, so they can't really draw any firm conclusions, and certainly NOT in the direction they did. Statistics is a dangerous field, and whoever wrote and reviewed that paper had only a passing grasp of it, not a DEEP one.
If you calculate the *chance* that their paper is correct versus their paper being absolute nonsense - not even taking into account anything to do with their methods or that their data might be biased - their data can ONLY mathematically support a vague conclusion that the paper is nonsense. To do the test properly and get a statistically significant result (not even a *conclusive* result, just one that makes people go "Oh, that's odd") they would have to run 20 times as many experiments (and then prove that they were fair, unbiased, etc.).
It's like rolling three sixes on a die and concluding that the particular die you rolled can only possibly roll a six. It's nearly as bad as claiming that so can every other die on the planet.
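To put rough numbers on that die analogy (my own illustration, not from any of the rebuttals): compare how likely three straight sixes are under "fair die" versus "die that only rolls sixes", then fold in a sensible prior, which is exactly the Bayesian move the rebuttal paper makes:

```python
from fractions import Fraction

# Likelihood of observing three sixes in a row under each hypothesis
p_fair = Fraction(1, 6) ** 3     # fair die: 1/216
p_loaded = Fraction(1, 1) ** 3   # "only rolls sixes" die: 1

# Likelihood ratio -- a crude Bayes factor, with no prior applied yet
ratio = p_loaded / p_fair
print(ratio)  # 216: the data alone favours the loaded die 216-to-1

# But with any sensible prior (dice that only roll sixes are rare),
# the posterior odds are prior_odds * ratio. For a one-in-a-million
# prior, the fair die still wins overwhelmingly.
prior_odds = Fraction(1, 1_000_000)
posterior_odds = prior_odds * ratio
print(posterior_odds)  # 27/125000 -- far below 1, so "fair" still wins
```

That's the whole trick: an extraordinary hypothesis needs far more than three lucky rolls to overcome its prior, which is why the Bem data can look "significant" in isolation and still be weak evidence overall.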
Re:Great response paper (Score:5, Insightful)
Re:Why Is It Wrong to Call This ESP? (Score:5, Insightful)
In one sense, it is a historically familiar pattern. For more than a century, researchers have conducted hundreds of tests to detect ESP, telekinesis and other such things, and when such studies have surfaced, skeptics have been quick to shoot holes in them.
I always thought the hole-shooting was an essential step in the scientific process. If you can't patch up the holes, you aren't really doing "science."
In science, you tell a story. Then everyone says, "no, that's wrong because..." Then you say, "I'm afraid I'm right, because..." It is an imperfect process because people have biases that are hard to overcome. But, if the empirical evidence is strong enough, you will overcome these imperfections. In the end, you build a consensus by presenting enough evidence that no one can argue with. I'm not sure what is more democratic than that.
Re:Why Is It Wrong to Call This ESP? (Score:5, Insightful)
I don't understand this. If the researchers did a proper experiment, respected the rules, followed proper procedures, and did a proper analysis of the data they collected, in a scientific way, why is it a problem to publish?
Shouldn't be any. However, this case isn't relevant to your question, since they didn't do a proper analysis. (Or at least that's what the rebuttals say; I haven't read the paper.)
One of the most glaring problems is that they (reportedly) went fishing for statistical significance in their results, without making the correction that is required for rigor when you do that. When you find significance at the traditional 95% confidence level, there's a 5% chance that you're finding meaning in noise. If you test for 20 different effects and then go fishing to see whether *any* of them show significance at the 95% confidence level, you have a 1 - .95^20 = 64% chance of finding "significance" in noise.
There are simple ways to fix that problem, e.g. Tukey's HSD ("honestly significant difference") test. Apparently the authors were either ignorant or dishonest, and the reviewers were either ignorant, dishonest, or careless.
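The parent's 64% figure is easy to verify yourself, and the simplest standard fix (a Bonferroni correction, even cruder than Tukey's HSD) just divides the per-test threshold by the number of tests. A quick sketch, nothing to do with the paper's actual numbers:

```python
# Familywise error rate: chance of at least one false positive
# when running m independent tests at significance level alpha.
def familywise_error(alpha, m):
    return 1 - (1 - alpha) ** m

print(round(familywise_error(0.05, 20), 2))  # 0.64 -- the parent's figure

# Bonferroni correction: test each of the m effects at alpha / m instead.
def bonferroni_alpha(alpha, m):
    return alpha / m

corrected = bonferroni_alpha(0.05, 20)       # 0.0025 per test
print(round(familywise_error(corrected, 20), 3))  # ~0.049, back under 0.05
```

Bonferroni is conservative (it sacrifices power), which is exactly why methods like Tukey's HSD exist, but even this blunt version would have killed the "fished" significance.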
Re:Research Funding (Score:4, Insightful)
But when A doesn't do a sound study, then B,C,D should say something.
Even if it was a good study, their findings are the same as you would expect from random chance, i.e. well within the margin of error.
Re:Why Is It Wrong to Call This ESP? (Score:5, Insightful)
Wait, you've lost me here... what about human beings can't be explained with classical physics?
2 girls 1 cup