Science

Journal Article On Precognition Sparks Outrage

thomst writes "The New York Times has an article (cookies and free subscription required) about the protests generated by The Journal of Personality and Social Psychology's decision to accept for publication later this year an article (PDF format) on precognition (the Times erroneously calls it ESP). Complaints center around the peer reviewers, none of whom is an expert in statistical analysis."
  • Prediction (Score:5, Funny)

    by Anonymous Coward on Thursday January 06, 2011 @09:52AM (#34775490)

    I predicted this would happen.

  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday January 06, 2011 @09:53AM (#34775502) Journal

    on precognition (the Times erroneously calls it ESP).

    Why is that erroneous? Precognition and premonition are two facets of extrasensory perception. From its Wikipedia article [wikipedia.org]:

    Extrasensory perception (ESP) involves reception of information not gained through the recognized physical senses but sensed with the mind. The term was coined by Sir Richard Burton,[citation needed] and adopted by Duke University psychologist J. B. Rhine to denote psychic abilities such as telepathy and clairvoyance, and their trans-temporal operation as precognition or retrocognition.

    So if you were dealing with any of the above, or anything else external to our normal senses, I think that qualifies as ESP, and calling it ESP is fair. Sure, that acronym has a lot of baggage, but from the study itself:

    ... this is an experiment that tests for ESP (Extrasensory Perception).

    That's what the test subjects were told, and I don't think the article is erroneous.

    • I agree (Score:3, Funny)

      by Anonymous Coward

      There are many types of ESP, covering different aspects of life. Each is differentiated by a letter.

      For example, the 14th type of ESP covers sports. It is called ESPN.

    • I *knew* this would happen.
    • I don't understand this. If the researchers did a proper experiment, respected the rules, followed proper procedures, and did a proper analysis of the data they collected, in a scientific way, why is it a problem to publish? So now we should bar publications that don't agree with our general conception? If it was done following the specific guidelines set by the scientific community for how to do things, screw them. Science is not a democratic process, nor should it be politically correct. Science is scie
      • by AlecC ( 512609 ) <aleccawley@gmail.com> on Thursday January 06, 2011 @10:35AM (#34775898)

        From the summary, the implication is that the data analysis was not proper - or at least, not shown to be proper. Since the claimed effect is a fairly small artifact only detectable by sophisticated statistics, it seems reasonable that the reviewers should include those who have a deep understanding of such statistics - which, it is claimed, they did not.

      • by ledow ( 319597 ) on Thursday January 06, 2011 @10:56AM (#34776190) Homepage

        They failed one of your hurdles - they didn't do a proper analysis of their data. Basically, their data shows that the chance of precog existing, COMPARED TO it not existing, is extremely minimal (actually quite strongly in favour of it not existing). But they specifically chose only certain analyses to conclude that it *did* exist.

        There are many rebuttals at the moment, most linked to in these comments, that you can read, but basically - to remove all statistical jargon - they didn't bother to take into account how probable their data was by pure chance. Their "error margin" is actually vastly larger than the effect in their data, so they can't really draw any firm conclusions, and certainly NOT in the direction they did. Statistics is a dangerous field, and whoever wrote and reviewed that paper didn't have a DEEP grasp of it, just a passing one.

        If you calculate the *chance* that their paper is correct versus their paper being absolute nonsense - not even taking into account anything to do with their methods or that their data might be biased - their data can ONLY mathematically support a vague conclusion that their paper is nonsense. To do the test properly and get a statistically significant result (not even a *conclusive* result, just one that makes people go "Oh, that's odd"), they would have to do 20 times as many experiments (and then prove that they were fair, unbiased, etc.).

        It's like rolling three sixes on a die and concluding that the particular die you rolled can only possibly roll a six. It's nearly as bad as claiming that so can every other die on the planet.

      • by jason.sweet ( 1272826 ) on Thursday January 06, 2011 @11:03AM (#34776282)
        FTA:

        In one sense, it is a historically familiar pattern. For more than a century, researchers have conducted hundreds of tests to detect ESP, telekinesis and other such things, and when such studies have surfaced, skeptics have been quick to shoot holes in them.

        I always thought the hole-shooting was an essential step in the scientific process. If you can't patch up the holes, you aren't really doing "science."

        In science, you tell a story. Then everyone says, "no, that's wrong because..." Then you say, "I'm afraid I'm right, because..." It is an imperfect process because people have biases that are hard to overcome. But, if the empirical evidence is strong enough, you will overcome these imperfections. In the end, you build a consensus by presenting enough evidence that no one can argue with. I'm not sure what is more democratic than that.

      • by Black Parrot ( 19622 ) on Thursday January 06, 2011 @11:28AM (#34776646)

        I don't understand this. If the researchers did a proper experiment, respected the rules, followed proper procedures, and did a proper analysis of the data they collected, in a scientific way. Why is it a problem to publish ?

        Shouldn't be any. However, this case isn't relevant to your question, since they didn't do a proper analysis. (Or at least that's what the rebuttals say; I haven't read the paper.)

        One of the most glaring problems is that they (reportedly) went fishing for statistical significance in their results without making the correction that rigor requires when you do that. When you find significance at the traditional 95% confidence level, there's a 5% chance that you're finding meaning in noise. If you test for 20 different effects and then go fishing to see whether *any* of them show significance at the 95% confidence level, you have a 1 - .95^20 = 64% chance of finding "significance" in noise.

        There are simple ways to fix that problem, e.g. the Tukey HSD ("honestly significant difference") test. Apparently the authors were either ignorant or dishonest, and the reviewers were either ignorant, dishonest, or careless.
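
        A minimal sketch of that fishing arithmetic, using the Bonferroni correction (a cruder fix than Tukey's HSD, but it shows the idea; the 20 tests are the hypothetical from above):

            # Family-wise error rate when fishing across m independent tests,
            # each run at significance level alpha:
            def fwer(m, alpha=0.05):
                return 1 - (1 - alpha) ** m

            print(fwer(20))             # ~0.64: at least one false "hit" is likely
            # The Bonferroni fix: run each individual test at alpha / m instead.
            print(fwer(20, 0.05 / 20))  # ~0.049: back near the nominal 5% level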

    • Because it's really not, as the article clearly shows. Anything that purports to be X & uses lies, half-truths, and slung statistics to present its case clearly is not X.
    • by mcgrew ( 92797 ) *

      What would you call it if someone had extra perception through normal senses, say, the eyes of an eagle or the ears of a dog? Not extra senses, but extra perception?

      Seems that in some cases, the results may be the same.

  • Great response paper (Score:5, Informative)

    by 246o1 ( 914193 ) on Thursday January 06, 2011 @09:56AM (#34775554)

    For those who haven't seen it, here's a pretty sharp takedown of this paper, as well as some notes on statistical significance in social sciences in general: www.ruudwetzels.com/articles/Wagenmakersetal_subm.pdf

    • by Anonymous Coward on Thursday January 06, 2011 @10:11AM (#34775672)
      And for those who still haven't seen it, here's a proper link [ruudwetzels.com].
    • by ledow ( 319597 ) on Thursday January 06, 2011 @10:40AM (#34775992) Homepage

      Ouch. Taken down by two Bayesian tests on whether it's more likely that the paper is true or not. They didn't even need to get out of bed or dig out a big maths book to basically disprove the entire premise of the original paper using its own data.

      As they hint at in that rebuttal - as a mathematician and someone of a scientific mind, I would just like to see *ONE* good test that conclusively shuts people up. Trouble is, no good test will report a false result, and thus you'll never get the psychic / UFO / religion factions to even participate, let alone agree on the method of testing, because they would have to accept its findings.

      Never dabble in statistics - the experts will roundly berate you and correct you even if you *THINK* you're doing everything right. When PhDs can't even work out things like the Monty Hall Problem properly, you just know it's not something you can throw an amateur at.
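
      For the curious, the flavour of such a Bayesian takedown fits in a few lines. This is a minimal sketch with a simple binomial model and a uniform prior, not the default Bayesian t test the rebuttal actually uses, and the trial counts below are hypothetical:

          from math import lgamma, log, exp

          def bf01(k, n):
              """Bayes factor for H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1).
              Under the uniform prior, the marginal likelihood of k hits in
              n trials integrates to exactly 1 / (n + 1)."""
              log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
              log_p_h0 = log_choose + n * log(0.5)  # binomial pmf at theta = 0.5
              log_p_h1 = -log(n + 1)
              return exp(log_p_h0 - log_p_h1)

          # Hypothetical: 531 hits in 1000 trials, i.e. a 53.1% hit rate.
          # A classical test gives p ~ .05, yet the data favour the null:
          print(bf01(531, 1000))  # ~3.7x more likely under "no effect"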

      • by wonkavader ( 605434 ) on Thursday January 06, 2011 @11:04AM (#34776292)

        Dabbling is fine when the results are good. He had 53%. If he'd had 65%, dabbling would have worked. But dabbling is just the start, and that's not just the nature of peer review, it's the nature of collaboration in a university setting. You find something neat by dabbling, and you walk down the hall to visit someone with more stats experience to get some clarity before you publish.

        He had 53%. He knew that if he walked down the hall, he'd get told he had squat. So he didn't walk down the hall.

        There's dabbling initially, and that's fine. And there's dabbling (ONLY) and calling it done. That's not.

        Seems like the paper was written by a dabbler, then reviewed by a respected team of dabblers. And not one of them looked at 53% and walked down the hall. Bubbleheads.

    • by Burnhard ( 1031106 ) on Thursday January 06, 2011 @10:58AM (#34776212)
      But isn't that simply applying the maxim: extraordinary claims require extraordinary evidence? I.e. it is not asserting the evidence doesn't exist, only that the evidence isn't strong enough to make the assertion that the hypothesis is true with any confidence. There's a subtle difference.
      • But isn't that simply applying the maxim: extraordinary claims require extraordinary evidence? I.e. it is not asserting the evidence doesn't exist, only that the evidence isn't strong enough to make the assertion that the hypothesis is true with any confidence. There's a subtle difference.

        Not all that subtle, since you can say "the evidence isn't strong enough" about any claim you care to pull out of your 455.

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Thursday January 06, 2011 @09:57AM (#34775558)
    Comment removed based on user account deletion
    • Why settle for just 1M? Play the lottery for a few weeks and you don't even have to bother writing an article to be rich.

      • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday January 06, 2011 @10:31AM (#34775868) Journal

        Why settle for just 1M? Play the lottery for a few weeks and you don't even have to bother writing an article to be rich.

        So I realize a lot of people aren't going to read the article, but here are the meaty parts for you statistics snobs (and really, the Bayes folks are going to be all over this one):

        Across all 100 sessions, participants correctly identified the future position of the erotic pictures significantly more frequently than the 50% hit rate expected by chance: 53.1%, t(99) = 2.51, p = .01, d = 0.25. In contrast, their hit rate on the nonerotic pictures did not differ significantly from chance: 49.8%, t(99) = -0.15, p = .56. This was true across all types of nonerotic pictures: neutral pictures, 49.6%; negative pictures, 51.3%; positive pictures, 49.4%; and romantic but nonerotic pictures, 50.2%. (All t values < 1.) The difference between erotic and nonerotic trials was itself significant, tdiff(99) = 1.85, p = .031, d = 0.19.

        There's a lot more about eliminating random number generators (by using this little guy [araneus.fi]) leading to prediction as well as running more tests where they are asked to pick a preference of two identical images. The most interesting part is that these results seemed to hinge on pornography. The individuals only exhibited this "precognition or premonition" when they were picking erotic images or rewarded with erotic images (albeit from the International Affective Picture System).

        The skeptic in me is very pleased and excited about this part of the paper:

        Accordingly, the experiments have been designed to be as simple and transparent as possible, drawing participants from the general population, requiring no instrumentation beyond a desktop computer, taking fewer than 30 minutes per session, and requiring statistical analyses no more complex than a t test across sessions or participants.

        Grad students across the country: get to work!

        But you would have to have the lottery involve some sort of erotic pictures containing the winning numbers in order for this edge to come into play. Which would be impossible unless the lotteries changed how they worked. Maybe play blackjack with a set of Playboy cards? :-)
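
        For any grad students who do take him up on it, the headline analysis really is that small. A minimal sketch with simulated session data standing in for the real thing (the 36 trials per session and the true 53.1% rate are assumptions for illustration):

            import random
            from statistics import mean, stdev
            from math import sqrt

            def one_sample_t(hit_rates, mu0=50.0):
                """One-sample t test of per-session hit rates (in percent)
                against the 50% chance level, plus Cohen's d."""
                n = len(hit_rates)
                m, s = mean(hit_rates), stdev(hit_rates)
                return (m - mu0) / (s / sqrt(n)), (m - mu0) / s

            random.seed(3)
            # Hypothetical: 100 sessions of 36 trials at a true 53.1% hit rate.
            rates = [100 * sum(random.random() < 0.531 for _ in range(36)) / 36
                     for _ in range(100)]
            t, d = one_sample_t(rates)
            print(round(t, 2), round(d, 2))  # compare: reported t = 2.51, d = 0.25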

        • Re: (Score:2, Offtopic)

          by TaoPhoenix ( 980487 )

          There are only seventeen types of pornographic troll comments here in the recent history of Slashdot, and most of them are goatse. Therefore I predict that the third pornographic troll comment to appear seven stories from now will be a goatse. Further, there seems to be a rise in a goatse bot that can't spell which is supplanting classical troll phrasing.*

          * Statistics not properly peer reviewed. However, I predict that it is too much work for anyone to properly disprove this comment. Therefore I will get so

        • Haven't read the paper, don't know the sample size involved and experiment details, and don't have the time to run the math right now, but it looks like the numbers shown in your post would still support the null hypothesis that there is no precognition effect, so entirely opposite conclusions should be drawn from the same data.

      • Re: (Score:2, Insightful)

        something to the effect of 'since there are no rich fortune-tellers, we have to assume that they are all fakes.'

        the answer is simple, really. they are all fakes.

        life isn't all complex. humans being fakers and liars is one constant we can count on in the universe.

        • by wytcld ( 179112 )

          something to the effect of 'since there are no rich fortune-tellers, we have to assume that they are all fakes.'

          That's just brilliant. If there is someone who has precognition working well enough to become quite rich from it, do you think they'll be announcing that publicly, or just quietly getting rich?

          The general rule of thumb: Those who announce publicly that they've got such an ability working well for them probably don't - they're trying to make money not from the ability itself (which they don't have)

      • Maybe someone can correct me, here.

        I suspect (strongly) that if you have a 3% edge over everyone else, you'll still lose at the lottery. I think the odds in the lottery are so badly tilted against you that even a real, solid 3% edge would leave you a loser.

        That's NOT true of casino games. Take a 3% edge to a casino and you'd leave a millionaire in short order. (No, wait. Actually you'd get bounced in short order and barred from the casino.)
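
        Back-of-the-envelope arithmetic supports both halves of that hunch; the numbers are rough assumptions (a lottery paying back about 50 cents on the dollar, European roulette at 48.65%):

            # Lottery: ~50% of the take is paid back out; even a 3% (relative)
            # boost to your win probability barely dents the expected loss.
            lottery_ev = -1 + 0.50 * 1.03        # ~ -0.485 per dollar wagered
            # Even-money casino bet: a 48.65% fair rate plus 3 percentage
            # points of edge flips the expectation positive.
            casino_ev = (0.4865 + 0.03) * 2 - 1  # ~ +0.033 per dollar wagered
            print(lottery_ev, casino_ev)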

    • by Rob Kaper ( 5960 ) on Thursday January 06, 2011 @10:11AM (#34775674) Homepage

      Any evidence for precognition instantly takes precognition into the scientific realm and out of the paranormal, supernatural and occult.

    • That'd be remarkably stupid, since they could just visit casinos and pull in a million dollars every couple of days. At least until the casinos run out of money.

      • by TheCarp ( 96830 )

        That assumes perfect precognition. The effects I saw claimed were more like 3% better than random.

        For a number of casino games, with "perfect play" (perfect not including blackjack card counting, even though it should), the casino advantage is usually in that range. In fact, I believe the payout on slot machines is often close to 98%, again depending on the play. (Some of the really pathologically bad bets give the house much better odds.)

        So, maybe these people DO hit up casinos, and... don't even know

        • by mangu ( 126918 )

          The effects I saw claimed were more like 3% better than random.

          Which should be more than good enough to make a fortune.

          At European casinos, if you play red vs. black at roulette, you have an 18/37 chance, that is 48.65%, of winning double your bet.

          A 3% better than random precognition rate would let you get rich in a few hours, while still being random enough to be considered pure luck, so you could get filthy rich before they banned you from all the casinos in the world.
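
          A quick Monte Carlo of that claim, assuming flat one-unit bets and a 3-point bump on the 18/37 base rate (so a 51.65% win probability at even money; the bet count is arbitrary):

              import random

              def profit(p_win=0.5165, bets=10_000):
                  """Flat even-money bets with a 3-point 'precognitive' edge
                  over the 48.65% fair rate; EV = (2*p_win - 1)*bets = +330."""
                  return sum(1 if random.random() < p_win else -1
                             for _ in range(bets))

              random.seed(42)
              print([profit() for _ in range(5)])  # reliably positive at this scale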

        • That assumes perfect precognition. The effects I saw claimed were more like 3% better than random.

          In the PDF rebuttal posted by 246o1, the authors show that with a mere 53.1% hit rate at predicting the color a roulette wheel turns up, a psychic could "bankrupt all casinos on the planet before anybody realized what was going on".

          This is a no-brainer. If merely one person in a million could predict the future, teleport, move objects with their mind, or any of the other stuff usually claimed, our society simply couldn't operate as it does. There would be new layers of safeguards on darn near everything,

      • According to this study, people reacted instinctively (arousal) to what would happen (porn photo) mere seconds beforehand. There's nothing to indicate that anyone could use that information to change the future. Imagine that the random picture game was a choice game: they will choose only one or the other photo, and the arousal/non-arousal will let them know which they'll receive, but not which they'll choose.
        They'd have to find someone who is aroused by watching a specific someone else win, and place
    • by dkleinsc ( 563838 ) on Thursday January 06, 2011 @10:21AM (#34775768) Homepage

      I'm not sure they're that confident in their evidence. Nor should they be - they did a study, they published their findings, and lots of other scientists either publish rebuttals (as has already happened) or repeat the study and see if it holds up. That's the way science is supposed to work.

      What's not supposed to happen is "Scientist A does an apparently sound study that appears to demonstrate something that scientists B,C, and D consider silly, and scientists B, C, and D stop scientist A's work from ever seeing the light of day."

      • by 246o1 ( 914193 )

        I'm not sure they're that confident in their evidence. Nor should they be - they did a study, they published their findings, and lots of other scientists either publish rebuttals (as has already happened) or repeat the study and see if it holds up. That's the way science is supposed to work.

        What's not supposed to happen is "Scientist A does an apparently sound study that appears to demonstrate something that scientists B,C, and D consider silly, and scientists B, C, and D stop scientist A's work from ever seeing the light of day."

        They shouldn't be confident in their evidence - you are right about the way science should be done, but I think in this case the rebuttals are easy to find because this paper seems to be the result of significance-chasing, with enough simultaneous 'experiments' going on within each individual experiment (i.e., the 'experiment' to determine whether men were affected by Thing Type A was a subset of the experiment of whether anyone was affected by Things Type A,B,C, or D) that it would be surprising if signifi

      • by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Thursday January 06, 2011 @12:52PM (#34778096) Homepage Journal

        But when A doesn't do a sound study, then B,C,D should say something.

        Even if it was a good study, their findings are what you would expect from random chance, i.e. well within the margin of error.

    • by sycodon ( 149926 )

      I believe that they have to tell him what is in an envelope locked in a safe to get the money. Unless he's doubled down and made additional challenges.

    • by elrous0 ( 869638 ) * on Thursday January 06, 2011 @10:28AM (#34775836)

      If they exist, I want them arrested immediately for aiding and abetting terrorists on September 10, 2001. Obviously, they all willfully stayed silent.

    • If precognition worked, they would make billions on the stock exchange. The $1M would only be a tiny bonus.

    • No problem. Given the news that just came out about scientific reproducibility [google.com], they just have to do the experiment again in order to not verify (disprove) it.

  • "Extraordinary claims require extraordinary proof." - Marcello Truzzi

    • by paiute ( 550198 )

      "Extraordinary claims require extraordinary proof." - Marcello Truzzi

      "Extraordinary claims require a shiny object in my left hand to distract you from my right." - S. A. Scoggin

  • I don't know...just a bunch of squiggly lines?

  • but i saw this coming

  • I predict this article will spark a bunch of kvetching about having to register and obligatory links to bugmenot like sites. Do I win?
    • I agree with you completely except for the kvetching part. I find it difficult to agree because I don't know what it means. I suspect it is not an English word.

  • by LordNacho ( 1909280 ) on Thursday January 06, 2011 @10:10AM (#34775662)

    http://en.wikipedia.org/wiki/Sokal_affair [wikipedia.org]

    Basically, a physicist made up some BS and got it published in a journal called Social Text about postmodern cultural studies. He then came out later and revealed the hoax, embarrassing the reviewers and the journal. Lack of intellectual rigour seemed to be the target. This time, it seems to be more specifically aimed at the lack of understanding of statistics in certain subjects.

  • by Anonymous Coward

    I'm a Ph.D. candidate in EE, and I'm sometimes invited to review papers for IEEE journals.

    I always read the paper carefully at least 3 times, read the important parts of references that are new to me, check all the math and sometimes even reproduce some simpler simulations.

    Most reviewers aren't this careful. They either don't have the time or don't have the expertise to find some flaws. Keep in mind that reviewers aren't paid, and are anonymous. Also, the best reviewers are the best researchers, who are usu

    • by gweihir ( 88907 ) on Thursday January 06, 2011 @10:52AM (#34776134)

      I have a PhD in CS.

      You can also have good science rejected by getting three incompetent reviewers. Happened to me several times, the worst one when the program committee attached a note that showed they had not read or understood their own call for papers. I suspect a direct lie to keep me out. Published it later unchanged somewhere else and those people were surprised it got rejected earlier.

      In addition to incompetent reviewers, there are also those that are envious or want to steal your ideas. Peer review is fundamentally broken. One friend who has a PhD in a different CS area thinks 70% of researchers are corrupt, reviewing things positively when they know the authors, no matter the quality, and negatively otherwise. Lying in grant applications is also quite common. The final result is that good researchers have trouble working and often leave research altogether, which may be an explanation for how glacially slow some fields move.

      • Who was it who said, "science progresses one funeral at a time"?
      • I have a PhD in CS.

        You can also have good science rejected by getting three incompetent reviewers. Happened to me several times, the worst one when the program committee attached a note that showed they had not read or understood their own call for papers. I suspect a direct lie to keep me out. Published it later unchanged somewhere else and those people were surprised it got rejected earlier.

        In addition to incompetent reviewers, there are also those that are envious or want to steal your ideas. Peer review is fundamentally broken. One friend who has a PhD in a different CS area thinks 70% of researchers are corrupt, reviewing things positively when they know the authors, no matter the quality, and negatively otherwise. Lying in grant applications is also quite common. The final result is that good researchers have trouble working and often leave research altogether, which may be an explanation for how glacially slow some fields move.

        I am a researcher in micro and nanotech, and I can confirm this trend in my field, as well. In fact, one journal in particular has been especially bad in rejecting my articles with some awful refereeing, which I will save for posterity. I am tempted to rub my published articles under the nose of the (probably equally incompetent or corrupt) editor of that journal.

  • The word "precognition" should fall into the same kind of internal contradiction as the word "almighty", at least for future events whose outcome you can still affect. Predicting the future is OK, but really knowing the future should violate some physics rules, like going faster than the speed of light.
  • When I was 10 years old, there was a commercial that said that if you roll a 6-sided die 60 times and correctly guess the results more than 10 times, you are precognitive. Well, I guessed correctly 11 times, so there are your scientific results. Now I move into my new career as a stock market analyst.

  • Most people I encounter even have trouble with "postcognition". Yeah, I'm looking at you, Sarah Palin!
    • Most people I encounter even have trouble with "postcognition". Yeah, I'm looking at you, Sarah Palin!

      I was thinking, actually, that all of the people (including Palin) who predicted that Obama would turn out to be less, or different, or worse than the imaginary character millions of people thought they were voting for ... that shows a certain level of cognition, while a whole lot of post-election cognition is allowing other people to realize they'd done something silly.
  • Write an article with a "forbidden word" or a "forbidden topic", or even something a little bit different from what "everybody knows" (e.g. gastric ulcers), and it's immediately wrong.

    Remember guys, next time a crackpot says "oh but they laughed at Einstein as well", IT'S YOUR FAULT

  • http://science.slashdot.org/story/11/01/02/1244210/Why-Published-Research-Findings-Are-Often-False [slashdot.org] If you read this piece, it talks about a "decline effect." Basically, research that gets published has positive results to report and conforms to established opinion. However, further study shows that the effects aren't as strong as before. It talks about research showing pre-cognition before, but later being disproven. I think it's just fine to publish work in journals on pre-cognition, if the work was done i
  • Is this a dupe? I'd swear I had read this article before.

  • maybe... (Score:4, Interesting)

    by TheCarp ( 96830 ) <sjc.carpanet@net> on Thursday January 06, 2011 @10:30AM (#34775852) Homepage

    Maybe this is a hack. They say he has a sense of humor.... think of this... he did design his studies well, at least the ones that I have read about. The effects of this "time leaking" are fairly small. Perhaps the entire point...is to make a point about statistics.

    Added bonus? Put the ESP issue to bed. Him doing this, and specifically doing it so publicly and getting it past peer review and publication, ENSURES that these studies are going to be replicated by numerous people for the next several years. That, in and of itself, could produce enough evidence against ESP to really put the issue to bed :)

    Say what you want about his paper, the effects reported are as large as many "well accepted" study results. Which may be the scariest part of all.

    That said, I am no ESP believer (that may be obvious), but some of the statements that are made against it are ridiculous too. "Why aren't people winning the lottery with their perfect precognition?" The effects he is talking about here are on the order of a few percentage points better than random... which is more than the house advantage at many casino games (assuming optimal play)

    -Steve

  • Across all 100 sessions, participants correctly identified the future position of the erotic pictures significantly more frequently than the 50% hit rate expected by chance: 53.1%

    It's pretty easy to come up with significant results in this field: Just do a sufficiently large number of experiments, and you will inevitably come across some significant results. This works for any definition of significance, though of course it's easier for low standards.

    • by Hatta ( 162192 )

      Exactly. If there is no real effect, one out of every twenty tests will still come up "statistically significant" (p value = .05) by random chance.
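
      A quick simulation of that base rate, assuming 10,000 pure-noise "studies" of 100 coin flips each, each tested with a two-sided z test:

          import random

          def null_study(n=100):
              """One study of pure noise: n coin flips tested against 50%."""
              hits = sum(random.random() < 0.5 for _ in range(n))
              z = (hits - n * 0.5) / (n * 0.25) ** 0.5
              return abs(z) > 1.96  # "significant" at the 5% level

          random.seed(0)
          studies = [null_study() for _ in range(10_000)]
          # Roughly one in twenty null studies "finds" an effect (slightly
          # more here, since 100 flips make the test a bit coarse):
          print(sum(studies) / len(studies))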

  • by knewter ( 62953 ) on Thursday January 06, 2011 @10:46AM (#34776058)

    The headline SO should have been:

    Journal Article On Precognition Sparks Outrage BEFORE IT'S PUBLISHED.

    That is all. I expect more out of an editor.

  • Like retrocognition - the ability to perceive that something has already happened.
    • Someone with such superpowers might be able to predict things like stock market crashes, resource-driven wars, despotism and oligarchy.

      If we have learnt anything from history, it's that we don't learn from history

      or something like that.

  • I wonder if there is a correlation between posters who defend ESP and posters who post without first reading the article.

  • On a topological statistical equivalency...

    Last week I was explaining to my wife why it was her cousin always did so well at the casino. It’s not that she’s “a positive person, which attracts success in gambling” or any other warm-fuzzy explanation. Consider that over a long run of gambling, someone will have probabilistic periods of “good runs” and “bad runs”; likewise, among multiple people statistics dictate that there will be some who have a lifetime of
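
    That distribution is easy to simulate: give 1,000 hypothetical gamblers 5,000 even-money roulette bets each at the house's 48.65%, and a lucky few still finish ahead:

        import random

        def lifetime(bets=5_000, p_win=0.4865):
            """One gambler's lifetime result, house edge intact (EV = -135)."""
            return sum(1 if random.random() < p_win else -1 for _ in range(bets))

        random.seed(7)
        results = sorted(lifetime() for _ in range(1_000))
        print(results[:3], results[-3:])  # most lose; the luckiest few are typically ahead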

  • by t2t10 ( 1909766 ) on Thursday January 06, 2011 @02:09PM (#34779638)

    I looked at the paper. I don't believe the conclusions. But they seem to present all the necessary data so that you can do your own statistical analyses, and they offer to give you the software.

    I don't see any reason for people to get "outraged" over this. Publication in a peer reviewed journal is not a guarantee of correctness (in fact, probably the majority of peer reviewed publications contain significant errors), it merely means that the paper meets basic scientific standards in terms of approach and analysis.

    If there is an error in the methodology or results, then people should respond by publishing a paper pointing those out. That way, everybody can benefit from the discussion. So, that's where all those people who are "outraged" should channel their energy.
