


Research Data: Share Early, Share Often

Shipud writes "Holland was recently in the news when a psychology professor at Tilburg University was found to have committed large-scale fraud over several years. Now another Dutch psychologist is suggesting a way to avert this sort of problem, namely by 'sharing early and sharing often,' since fraud may start with small indiscretions driven by career-related pressure to publish. In Wicherts's study, he requested raw data from the authors of some 49 papers. He found that the authors' reluctance to share data was associated with 'more errors in the reporting of statistical results and with relatively weaker evidence (against the null hypothesis). The documented errors are arguably the tip of the iceberg of potential errors and biases in statistical analyses and the reporting of statistical results. It is rather disconcerting that roughly 50% of published papers in psychology contain reporting errors and that the unwillingness to share data was most pronounced when the errors concerned statistical significance.'"
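A "reporting error" in this sense usually means a published p-value that doesn't match the published test statistic and degrees of freedom, which is exactly the kind of thing recomputation catches. A minimal sketch of such a check (the reported numbers and the tolerance are made up for illustration; the normal approximation to the t distribution is only reasonable for large df):

```python
import math

def p_from_t(t: float) -> float:
    """Two-sided p-value for a t statistic via the normal
    approximation (adequate for large df, say df > 30)."""
    return math.erfc(abs(t) / math.sqrt(2))

def consistent(t: float, df: int, p_reported: float, tol: float = 0.01) -> bool:
    """True if the reported p roughly matches the recomputed p.
    (df is kept for the reported triple, but the normal
    approximation above ignores it.)"""
    return abs(p_from_t(t) - p_reported) <= tol

# Hypothetical reported result: t(120) = 2.10, p = .04 -- plausible
print(consistent(2.10, 120, 0.04))
# The same statistic reported with p = .001 would be flagged
print(consistent(2.10, 120, 0.001))
```

A check like this only needs the numbers already printed in a paper, which is why reluctance to share raw data is so telling: even the surface-level arithmetic often doesn't add up.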


  • Lie or Die (Score:2, Interesting)

    by Chemisor ( 97276 ) on Tuesday December 06, 2011 @01:39PM (#38282126)

    It is very difficult to make a man understand something when his job depends on not understanding it. If psychology research were made to adhere to any kind of stringent scientific standard, there would be no psychology research.

  • by DBCubix ( 1027232 ) on Tuesday December 06, 2011 @01:49PM (#38282254)
    I do research in textual web mining, and from time to time other researchers ask me for my collections, which I spider myself from copyrighted web sources. While my work is purely academic and covered by fair use, US intellectual property laws are (imho) obtuse and overbearing, so I cannot take the risk of sharing my collections: once a collection is out of my hands I can't control what is done with it, and I have no way to know it will be used in a manner consistent with fair use. So the reluctance to share may be less about statistical fudging and more about not wanting to become a target of copyright lawyers.
  • Re:Lie or Die (Score:3, Interesting)

    by Toonol ( 1057698 ) on Tuesday December 06, 2011 @02:34PM (#38282830)
    I wonder if we just haven't quite mastered the techniques necessary to deal scientifically with highly complex systems. Psychology, economics, climatology, etc., all are theoretically understandable, but are so chaotic that our standard scientific methodology can't be applied... you can't, for instance, repeat an experiment. You can't isolate one changing variable.
  • Re:Psychology (Score:4, Interesting)

    by rgbatduke ( 1231380 ) <rgbNO@SPAMphy.duke.edu> on Tuesday December 06, 2011 @04:53PM (#38284610) Homepage
    Hmmm, you really do need to read the climategate 2 letters, don't you.

    From message 4241.txt, a communication from Rob Wilson to Ed Cook (and others):

    I first generated 1000 random time-series in Excel – I did not try and approximate the persistence structure in tree-ring data. The autocorrelation therefore of the time-series was close to zero, although it did vary between each time-series. Playing around therefore with the AR persistent structure of these time-series would make a difference. However, as these series are generally random white noise processes, I thought this would be a conservative test of any potential bias.

    I then screened the time-series against NH mean annual temperatures and retained those series that correlated at the 90% C.L.

    48 series passed this screening process.

    Using three different methods, I developed a NH temperature reconstruction from these data:

    1. simple mean of all 48 series after they had been normalised to their common period

    2. Stepwise multiple regression

    3. Principle component regression using a stepwise selection process.

    The results are attached.

    Interestingly, the averaging method produced the best results, although for each method there is a linear trend in the model residuals – perhaps an end-effect problem of over-fitting.

    The reconstructions clearly show a ‘hockey-stick’ trend. I guess this is precisely the phenomenon that Macintyre has been going on about.

    Surely this vindicates Mann -- by proving that his methodology does indeed turn white noise into hockey sticks! Not only is Mann wrong, but the hockey team knows it perfectly well. There are letters where people openly lament being involved with the hockey-stick-type reconstructions (and with other episodes, e.g. where they "hid the decline" in tree-ring data) because they are terrible science, and because they are openly worried that sooner or later people will catch on. As indeed they have. Yet the team has won the PR war (another great Mann quote) to such an extent that, even though they themselves know the hockey stick is bogus and that white noise fit with Mann's cherry-picking methodology produces nothing but hockey sticks, it just won't die, will it? Thanks to people like you!
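    The selection effect Wilson describes is easy to reproduce. A minimal sketch (the target series, its length, the number of candidate series, and the ~1.29/sqrt(T) one-sided 90% critical value for r are my own assumptions for illustration, not Wilson's actual setup): screen pure white noise against a series that rises at the end, keep the "significant" correlates, and average them.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 150, 1000   # years, candidate "proxy" series
# Hypothetical target: flat "temperature" with a rise over the last 50 years
target = np.concatenate([np.zeros(T - 50), np.linspace(0.0, 1.0, 50)])

# Step 1: pure white noise, no autocorrelation (as in the quoted email)
proxies = rng.standard_normal((N, T))

# Step 2: screen against the target, retaining series whose correlation
# exceeds the one-sided 90% critical value (roughly 1.29/sqrt(T))
r = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
retained = proxies[r > 1.29 / np.sqrt(T)]

# Step 3: "reconstruction" = simple mean of the retained series
recon = retained.mean(axis=0)

# The screened average of pure noise now tracks the target's upturn
print(f"{len(retained)} of {N} retained, "
      f"r(recon, target) = {np.corrcoef(recon, target)[0, 1]:.2f}")
```

    Because only the noise that happens to co-vary with the target survives screening, the averaged "reconstruction" inherits the target's shape, and the effect gets stronger the more candidate series you screen.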

    We could review the specific Climategate 2 letters where Jones talks about deliberately withholding data from the people who requested it (something I would call "stonewalling," except that the circumstance in question is an FOIA request that was only a missed deadline away from being "a crime" when the CG emails were released), or the points where it turns out he kept such lousy records (problems with Excel spreadsheets) that he can no longer reproduce his own results because he doesn't know what data he used, if you like.

    Or we could look at the many, many other places where internal communications show that the hockey team is well aware of many problems with their own results and consistently chooses not to let the general public know about them, lest we be led to doubt their conclusion. Then we could read Feynman's lovely article on "Cargo Cult Science": http://www.lhup.edu/~DSIMANEK/cargocul.htm [lhup.edu]. See how close you think the hockey team comes to Feynman's fairly modest standard for good, honest science, while reading Mann going on about the importance of winning the PR war, getting journal editors fired, and generally doing his very best to eliminate all challenge to his papers, or, if he can't manage that, to eliminate the challengers themselves.

    But really, read them yourself. Don't accept what people tell you about them, read them! Then tell me that this is honest science, well done.

