Open v. Closed Source Climate Change Research

theidocles writes "The ongoing debate over the 'hockey stick' climate graph has an interesting side note. McKitrick & McIntyre (M&M), the critics, have published their complete source code and it's written using the well-known R statistics package (covered by the GPL). Mann, Bradley & Hughes, the defenders, described their algorithm but have only released part of their source code, and refuse to divulge the rest, which really makes it look like they have some errors/omissions to hide (they did publish the data they used). There's an issue of open source vs closed source as well as how much publicly-funded researchers should be required to disclose - should they be allowed to generate 'closed-source' solutions at the taxpayers' expense?"

  • Short answer, no. (Score:4, Insightful)

    by maotx ( 765127 ) <{maotx} {at} {yahoo.com}> on Tuesday March 22, 2005 @08:53AM (#12010677)
    should they be allowed to generate 'closed-source' solutions at the taxpayers' expense?

    No. I paid for it I want to see it. How else will we know if it works the way they say it works?
    • Hmm... isn't the government one of Microsoft's biggest customers? Do they pay them with taxpayer money? Are taxpayers then allowed to "see" the source of products from Microsoft?

      It's always bugged me that our elected officials hand our money to any vendor they'd like, but then on the flipside, one could argue, that's why they're elected officials.
      • An important difference is that the Microsoft tools the government uses are just tools. They were not developed with taxpayer money.
        The government buys licenses for Microsoft Windows, Office, etc. just like it buys toilet paper or doorknobs.
      • Under the heading of conflicts of interest, one wonders about the correlation between

        shares of MSFT in the portfolios of government decision makers, and

        selection of Microsoft products to support new projects.
        No real cures for this hypothetical problem that wouldn't be far worse than the disease, alas...

    • by ilikejam ( 762039 )
      Wasn't this sort of thing the whole reason the BSD license came about?
    • You also funded Microsoft if you purchased anything from them. It does not mean you should be able to see the source for anything at all.

      Same goes with any government function. Even with the freedom of information act, there is still classified information and the like. If someone doesn't want to give you their research... it's their research no matter who funds it.

      They have no legal obligation to give *you* anything.

      That's the way the world works and the way it *should* work. Deal with it.
      • You also funded Microsoft if you purchased anything from them. It does not mean you should be able to see the source for anything at all.

        Why not?

        There is a bill before Congress right now that says basically that, in relation to automobiles: it says people have a right to fix their own autos, and manufacturers do NOT have the right to make you go to a dealer for repairs by hiding the source for automotive computer systems.

        Now living in a country where so many people can
  • by mfh ( 56 ) on Tuesday March 22, 2005 @08:53AM (#12010679) Homepage Journal
    how much publicly-funded researchers should be required to disclose

    All of it, baby. We're paying for it -- we should have the right to:

    a) Know what you're spending our money on
    b) Have the right to make it better ourselves
    c) Learn of security flaws early so we can correct them

    Especially when there is some doubt [slashdot.org] about the nature of the results in the closed source model from Mann et al.
    • Comment removed based on user account deletion
    • by R.Caley ( 126968 ) on Tuesday March 22, 2005 @09:27AM (#12010865)
      We're paying for it

      Of course, often you will only be paying for part of it. It is common for research to come out of a combination of `projects' funded from different sources.

      What should happen if, for instance, a drug company funds a project into developing statistical theory and signal analysis and so on to improve analysis of early candidate drug screening data, and then the researchers use the prototype implementation in a publicly funded project they are involved in on climate data and find something significant?

      Or what happens for part-state, part-commercially funded projects?

      I think one thing which could be done is to give companies a (bigger) tax break on money put into research (internal or when they give grants) if they sign up to give out not only all the data, but things like source of programs and detailed design of prototypes and experimental setups.

      Another thing would be to set up some kind of peer review process and then treat published source as a publication for the researcher. If your peers sign off to say that you have produced and documented the code to the level where it is a useful resource for other researchers, then it should count towards departmental and personal evaluations just as a journal article would. The formalised review process is important -- the average bit of lashed-together-to-get-the-data research code is more equivalent to a scribbled note on a whiteboard than to a journal article.

      Perhaps all that is needed is an online journal set up and run as a properly organised academic journal, but specialising in publishing code. Imagine an infrastructure not a million miles away from SourceForge, but with a peer review process to decide what gets counted as a release.

      • Hear, hear!

        I think the idea of establishing incentives for fuller release of data and methods is a great one. Not only can it speed work, but it can finally start to break down the "build-it-yourself" mentality that seems to pervade science (or physics at least), and get people to think in terms of platform compatibility when they build software to solve a particular problem. The amount of repeated software work is simply staggering.

        On the other, there are legitimate reasons to want to withhold your cod
    • Funny how that argument looks if the research is being done by the NSA....
  • by Anonymous Coward on Tuesday March 22, 2005 @08:55AM (#12010686)
    I hate projects with names like R. I used R a while back, and it's a great program, but try searching for "R" plugins on Google. Not fun.
  • is in the interest of the science in this case.
  • You might want to ask to see the model behind the CIA data [flight800.org] which proved conclusively that a 747, deprived of its forward fuselage, can convince over 600 witnesses that said 747 was shot down by a SAM.
    • by Travis Fisher ( 141842 ) on Tuesday March 22, 2005 @10:45AM (#12011456)
      If you want to read someone knowledgeable, levelheaded, and intelligent about the TWA flight 800 investigation, and the actual physical evidence in that crash, check out the testimony of metallurgist William Tobin [100megspop3.com] in the congressional hearing on the matter. Mr. Tobin was one of the lead scientific investigators of the recovered wreckage. A sample quote:
      • Senator Grassley. What were some of the characteristics which negated the missile theory?
      • Mr. Tobin. Well, probably the most prominent--actually, there were two main areas negating the missile theory. One, of course, again, is the absence of impulsive loading, or very high-speed fracture and failure mechanisms.

        But second was there were serious issues with every theory, or almost every theory, as to access of an external missile to the fuel, to the fuel tank. Even with, as I indicated earlier, if one would focus on an area where we did not recover all of the fuel tank, there were components nearby that would have blocked or at least recorded passage of any externally penetrating object. And if that were not the case, there were many layers, including the external underbelly of the aircraft, and that was recovered almost--a huge portion of that was recovered.

        So that, basically, the only plausible theory for some of the missiles to have occurred would have been if there were missiles such that could maybe get through a 1- or 2-inch opening, make an immediate left, go 90 degrees through a seam, and then maybe take another 90-degree right, and then maybe reverse itself and come back over. But those were some of the considerations.

      This is the voice of reason in a case where reason is ignored...
      • But then there's all that very troubling evidence of missile propellant on seats, etc etc.

        The point I was making is not that TWA800 was shot down, just that the CIA and NTSB released a 'closed-source' animation purporting to totally refute the many, many, many eyewitness accounts of a "streaking light" intercepting the aircraft. The animation is hugely flawed but they refuse to let anyone subject it to analysis.
  • by Anonymous Coward on Tuesday March 22, 2005 @09:03AM (#12010730)
    The problem with most of these studies is that they refuse to release the raw data.

    A lot of times they select subsets of the data and then normalize or otherwise massage the data.

    Thanks ... but no thanks !!!

    • Yes. Did you know humans have an average of about 1.8 legs each (and eyes, and arms, etc.)? Anything can be done with a few figures, even more so when the way it is done is hidden (governments use similar maths wizardry).
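      To make the point concrete, a toy R sketch (purely made-up numbers, not anyone's real data) of how an average can be true of nobody in particular:

        # Hypothetical sample: 90 people with two legs, 10 with none
        legs <- c(rep(2, 90), rep(0, 10))
        mean(legs)   # 1.8 -- true "on average", true of no individual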
    • Well, my objection to the /. 'we want to see the source, it's our religion' posters is that they *did* publish the data. The question that arises for me is: did anyone run it through the competing model and get different results? Here the openness of the code is not really the issue; it's the model, and especially the predictive power of that model, that is important. If the same data gives comparable results then we can conclude that the models are comparable (and then really test them by predicting things).
  • The debate (Score:3, Insightful)

    by Mr_Silver ( 213637 ) on Tuesday March 22, 2005 @09:04AM (#12010735)
    So there is a debate going on? If that is the case, a link to where it is going on so we can see the arguments would be nice.

    For all we know, there could be a very valid reason why they haven't released all of it. I'm not sure what that reason could be, but given that we don't have anything to go on, we're stuck to just guessing.

  • ...and I agree with it -- of course anything paid for by public funding should show a return on public interest. It seems way too obvious.

    What isn't quite so obvious is employers owning works of an employee. It seems obvious that it should be restricted to stuff that is currently job related and developed on company time, but we all know of scenarios where companies reach too far. So without looking too deeply, I wonder if the other side considers some aspects of their work not relevant to that which wa
    • What isn't quite so obvious is employers owning works of an employee.

      That's correct under English common law. But at least the "open source" group is from the Netherlands, and they work under the maxims of the Berne Convention. German law (§ 43 UrhG) states explicitly:

      § 43 Authors in employment or service relationships

      The provisions of this subsection also apply if the author created the work in fulfilment of obligations arising from an employment or service relationship

  • No brainer... (Score:5, Insightful)

    by wileynet ( 779280 ) on Tuesday March 22, 2005 @09:08AM (#12010751)
    Science, like government, should be transparent. The public should be able to see and evaluate every part. Any science, or government, that hides its implementation is inherently open to suspicions of corruption.
    Closed science is half a step from religion. You are expected to have faith in the researcher's methodologies, analysis, assumptions, and motives. Sorry, but good science does not rely on faith.
    • The fallacy of your argument is the assumption that the public will review the work. The same goes for OSS: just because the code is open doesn't mean people are going to examine it in enough depth to find all of the flaws.
      • Re:No brainer... (Score:4, Interesting)

        by dpilot ( 134227 ) on Tuesday March 22, 2005 @09:43AM (#12010981) Homepage Journal
        It's not necessary that a large number of people in the public review the work. It's only necessary that SOME people in the public review it, and that those be self-selected people.

        I run quite a selection of software on my machines, and to be honest, I've never done a security review of ANY of it. To be equally honest, I'm not really competent to do a security review of the code, though with effort I could well become so. But by and large, I pay attention to OSS community discussions, and know that others who appear to be competent have reviewed that software. Note that I use the term "appear to be competent," since I have no personal knowledge of their qualifications. However, when enough people who "appear to be competent" reach a consensus, either:
        1: They're all incompetent in the same way.
        2: They're all email aliases of the same guy hunched over the keyboard in his parents' basement.
        3: They're the techno-incarnation of the Club of Rome, bent on World Domination.
        4: They're a variety of informed backgrounds and opinions who have come to a rough consensus.
        I submit that 4 is most likely. 1 is possible, given that there are common misconceptions, but the larger the group, the less likely 1 becomes. 2 and 3 are strictly for the tin-foil hat club.

        I argue that the work needs to be open for the self-selection of reviewers. If the reviewers are selected by the authors, no matter how hard they try to find 'fair and neutral' parties or even antagonists, something will be missed.
    • Re:No brainer... (Score:2, Insightful)

      by provolt ( 54870 )
      Closed science is half a step from religion.


      But to many in the environmental movement it is a religion. Orthodox Environmentalism is just as strict as any other orthodox religion, and just as faith-based and closed to new ideas.

    • Re:No brainer... (Score:2, Insightful)

      by Stalus ( 646102 )
      Any science, or government, that hides its implementation is inherently open to suspicions of corruption. Closed science is half a step from religion. You are expected to have faith in the researcher's methodologies, analysis, assumptions, and motives. Sorry, but good science does not rely on faith.

      No brainer. I think that means you didn't really think about the problem.

      In science, as in industry, there is a necessity to maintain a competitive advantage. The competition isn't over sales, it's over papers. Pape

  • by asciiRider ( 154712 ) on Tuesday March 22, 2005 @09:08AM (#12010752)
    The pharmaceutical industry receives huge subsidies from us - they don't produce "open" drugs - why should this be any different? I know it's apples and oranges - but one should be really careful about the idea of withholding funds from -good- research just because of licensing issues. Lesser of two evils? Would we rather have -no- research?

    complicated...
    • I work for a pharma company. When we publish, we have to publish the structure of the compound used. You or a skilled chemist could cook it up and reproduce my work. That makes it science. Even if it's patented, you can do this under the freedom to research clause.
    • by mccalli ( 323026 ) on Tuesday March 22, 2005 @09:21AM (#12010838) Homepage
      The pharmaceutical industry receives huge subsidies from us - they don't produce "open" drugs - why should this be any different?

      It shouldn't. But of course there are two ways to resolve this inconsistency:

      1. Allow publicly funded closed-source climate models
      2. Require drug companies to open up the portion of their research that was carried out using public funds.

      Option 2 please.

      Cheers,
      Ian

    • Speaking from my ivory tower (which is actually a nice glass fronted building overlooking the sea - which is much more fun when it's not raining than it is today) in academia, I feel strongly that the research community should justify its existence by releasing full results. The number of times I've come across papers which basically say `look at us! We did something clever! Be impressed!' without actually giving enough details of what they did to reproduce it is sickening. Society funds academics for t
  • McKitrick & McIntyre (M&M), the critics, have published their complete source code


    Uhrm ... where? I haven't been able to find any code on any of the pages mentioned. I agree it's essential to disclose all data and source code ...


    and it's written using the well-known R statistics package
    ... especially since R can be such a pain (sorry, struggling right now)
    • Re:Where? (Score:3, Informative)

      by kippy ( 416183 )
      Uhrm ... where? I haven't been able to find any code on any of the pages mentioned. I agree it's essential to disclose all data and source code ...

      Unless I'm mistaken...

      Source [climate2003.com] Data [climate2003.com]
  • by G4from128k ( 686170 ) on Tuesday March 22, 2005 @09:12AM (#12010784)
    Arguments for open-source science:
    1. Replication: Science thrives on replicable results. If researchers don't publish everything, others cannot replicate the results and the original findings become suspect.
    2. Knowledge Sharing: If the point of publicly funded academic research is to advance human understanding of the world, then open access to methods (code) is a key part of that.
    3. Reduce/Eliminate Stealth Patents: Releasing knowledge into the public domain will help nip patents in the bud.
    4. Preserve Fair Use: Universities' trend toward turning research into money is threatening the basis of fair use for researchers. How long will it take intellectual property owners to get regulations on "fair use" because academic research has turned into another big, for-profit, corporate enterprise?

    Arguments against open-source science:
    1. Replication: Science thrives on replicable results. But the key is independent verification. If one scientist simply reuses another scientist's code, there is a chance (high, some would argue) that faults in that original code would corrupt both scientists' results. Closed-source forces independent verification.
    2. Commercialization: If the point of publicly funded academic research is to create widely-used products and services, then the system needs some scheme for protecting the value of intellectual assets. Where the cost of bringing the product to market is very high (e.g., pharma), the company/investors needs some assurance that another company can't just copy the results when the product comes out.

    I'm sure there are arguments on both sides.
    • Few experiments are directly replicated unless the original experiment found something really unusual. The reason for this is that it is difficult enough to get funding for novel research, trying to get funding to replicate what someone else has already done is even more difficult. Sure, there are some experiments that you can throw a grad student at - but for the most part, faculties and sponsors want you to find out something new with their dollars. A lot of scientists as well aren't willing to spend year
    • Well, we're talking about publicly funded research. Commercialization is a valid concern only if the research was done using private funds.

      The "value of intellectual assets" is paid by the government, and thus, is a property of the government (and thus goes into Public Domain).

      If you want a monopoly on the fruits of your research, sure, go ahead! Just use your own funds for it. We deserve the results of the research we paid for with our taxes.
    • Where the cost of bringing the product to market is very high (e.g., pharma), the company/investors needs some assurance that another company can't just copy the results when the product comes out.

      You mean a patent? Anyone care to comment on a related note about why publicly funded research (like at a university) should be able to secure patents for the researchers? I have seen many an experiment hampered by a patent/NDA/legal nightmare when collaborating with other universities. The university pushes for

    • by goodmanj ( 234846 ) on Tuesday March 22, 2005 @10:32AM (#12011358)
      To frame replication of scientific results as an "open source" debate is both a no-brainer and misleading. A no-brainer because if an investigator does not provide enough information to allow their colleagues to replicate their work, they are not doing science -- in that sense, all science is "open source". Misleading because scientific ethics do not require totally open sharing of source code: it is sufficient to verbally describe the algorithms and data used in enough detail that someone else can repeat the experiment. In practice, journal article page limits often require that this description happens on a person-to-person level, rather than in published literature.

      Most of the "arguments against open-source science" mentioned here are not about science at all. The secrecy surrounding commericial and national-security "science" is good only in a financial or political sense: they do not help science, per se, at all. And personality conflicts are a factor as well: I suspect that Mann et al's reluctance to release source stems from an extreme personal frustration at McKitrick et al's persistent and (in my view) not always well-supported attacks.

  • by climb_no_fear ( 572210 ) on Tuesday March 22, 2005 @09:13AM (#12010786)
    I'm of the opinion that anything that gets published should be published in its entirety, at least at some point. For example, people who publish protein structures can put the coordinates "on hold" for up to 18 months.

    And to say because the research is done with "taxpayer's money" is missing the point: If you can't reproduce every step, it's voodoo, not science. And we make policy decisions based on science, not voodoo (I hope).
  • by MyRuger ( 443241 ) on Tuesday March 22, 2005 @09:13AM (#12010787)
    Obviously you have never written an SBIR or BAA. When you do research "at the taxpayers' expense", you need to show your plans to commercialize the results of the research. The government wants you to create IP toward a commercial product which will spur the economy, not to contribute to the scientific community as a whole. Take it as you will, but I think that most research would not get funded if your commercialization plan was to release it on SourceForge.
  • Not in all cases. (Score:4, Insightful)

    by Shivetya ( 243324 ) on Tuesday March 22, 2005 @09:13AM (#12010795) Homepage Journal
    While I would like all works performed for the government that are not of national-security importance to be more open, I don't think it is necessary.

    A lot of work performed for government agencies is contracted to businesses. These same businesses employ tricks of the trade and such to deliver what is required. To have them detail how the work is done would be suicidal. The same goes for software they develop for use by the government. Unless specifically addressed in the contract, I do not believe there is a right to disclose the code, let alone make it available to the public.

    That last part is key. Even if they disclose the source to the government there is no obligation on either party to make it public.

    This argument that they have something to hide is childish. It is designed to provide no leeway. Simply put, once labeled as such, what option other than disclosure exists? You might as well say "You have to release it, it's for the children" and then proceed to use the whole "hates kids, wants kids to die" guilt trip that is far too common in politics today.

    Summary: release it only if it's an upfront requirement of the project, agreed upon by both parties. In the future, a legal requirement that all government projects be fully disclosed, including the source of any software, might be nice, but I bet it would have so many exceptions written into it that it would result only in a "feel-good" law.

    • There's a difference between "work for the government" and fundamental research. If I'm building a missile guidance system, or a database application to manage government carpools, or a light rail control system, there's no reason to let the code out. On the other hand, if these guys are telling us their model is the whole of the argument, that the model says the ice caps are melting and it's CO2 doing the damage, we damn well better have that code.
    • Actually, THIS code is not only of National Security importance. It's of Global Security importance, and NOT disclosing it may endanger the whole planet. Sure, disclosing it CAN endanger several US businesses, therefore impacting the US economy -> National Security, but PLEASE set your priorities straight!
  • taxpayers vs boffins (Score:4, Informative)

    by dos_dude ( 521098 ) on Tuesday March 22, 2005 @09:15AM (#12010810) Homepage

    This is an extremely difficult issue, although it sounds pretty trivial.

    For one thing, the taxpayer is rarely participating in discussions like this one. Moreover, the success of scientific institutions is often measured in terms of number of patents, successfully launched businesses by former students/researchers, etc. So not only is there little or no opposition to closed-source software (or scientific articles!), there are also good reasons for researchers to go the closed-source road.

    Some researchers have a tendency towards secrecy. Some even seem a little paranoid when it comes to their data and methods. You could compare this to the tendency of the OSS zealot to suspect bugs, glitches, and omissions in any piece of closed-source software.

    And as a German side-note: There are laws over here that require you to have the patentability of any piece of software you develop checked by university lawyers. GPLing something is technically illegal for a researcher. I have no idea how this is regulated in other countries.

  • by Uksi ( 68751 ) on Tuesday March 22, 2005 @09:33AM (#12010904) Homepage
    The blurb author attempts to paint one side as having something to hide, since they only released part of their source code. Never mind that both papers' data can be independently validated; no, no, one side is bad for only describing the algorithm and not releasing its source code!

    So a team of real scientists (that is, folks who work in climate science, not reporters or pundits) wrote a Dummies Guide [realclimate.org] to the latest controversy. Click on the link for a nice question-by-question breakdown, but I'll spoil the conclusion for you:

    (MBH98 is the old paper with "closed" source, MM05 is the new "open source" paper)

    7) Basically then the MM05 criticism is simply about whether selected N. American tree rings should have been included, not that there was a mathematical flaw?

    Yes. Their argument since the beginning has essentially not been about methodological issues at all, but about 'source data' issues. Particular concerns with the "bristlecone pine" data were addressed in the followup paper MBH99 but the fact remains that including these data improves the statistical validation over the 19th Century period and they therefore should be included.

    8) So does this all matter?

    No. If you use the MM05 convention and include all the significant PCs, you get the same answer. If you don't use any PCA at all, you get the same answer. If you use a completely different methodology (i.e. Rutherford et al, 2005), you get basically the same answer. Only if you remove significant portions of the data do you get a different (and worse) answer.

    9) Was MBH98 the final word on the climate of last millennium?

    Not at all. There has been significant progress on many aspects of climate reconstructions since MBH98. Firstly, there are more and better quality proxy data available. There are new methodologies such as described in Rutherford et al (2005) [realclimate.org] or Moberg et al (2005) [realclimate.org] that address recognised problems with incomplete data series and the challenge of incorporating lower resolution data into the mix. Progress is likely to continue on all these fronts. As of now, all of the 'Hockey Team' reconstructions (shown left) agree that the late 20th century is anomalous in the context of last millennium, and possibly the last two millennia.

    Read the rest [realclimate.org] for more explanation.
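    As a side note on point (8): here is a minimal R sketch (toy synthetic data of my own, nothing to do with the actual MBH98 or MM05 code) of the sense in which skipping PCA entirely "gives the same answer" as keeping all the components, since the PCs span the same space as the raw proxies:

      # Toy data: 5 synthetic "proxy" series over 100 years, plus a synthetic target
      set.seed(2)
      X <- matrix(rnorm(100 * 5), 100, 5)
      y <- drop(X %*% rnorm(5)) + rnorm(100)

      fit.raw <- lm(y ~ X)              # regress the target on the raw proxies
      fit.pcs <- lm(y ~ prcomp(X)$x)    # regress it on all 5 principal components

      all.equal(fitted(fit.raw), fitted(fit.pcs))   # TRUE: identical fitted values

    The substantive dispute is then about which proxies and PCs should be retained, not about this algebra.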
    • I think the important conclusion of this guide is that if you take all of the original Mann, Bradley and Hughes data and run it using the same fully open-source algorithms of McKitrick & McIntyre [uoguelph.ca], you get the same results.

      Which is reasonable since MM's argument is about source data and not methodology (as per this guide [realclimate.org]).
    • by ajs ( 35943 ) <ajs.ajs@com> on Tuesday March 22, 2005 @11:29AM (#12011941) Homepage Journal
      "Their argument since the beginning has essentially not been about methodological issues at all, but about 'source data' issues [...] Only if you remove significant portions of the data do you get a different (and worse) answer."

      You're over-trivializing a DRAMATICALLY IMPORTANT POINT. The original study is focused on North American data almost exclusively for certain time periods. That data (from a single species of tree) skews the results in such a way as to make the current trend seem unique and drastic. On the other hand, if you treat that data source in such a way as to balance it with the other data that is available, you see a VERY DIFFERENT TREND!

      The response has been to claim that weighting the data in this way reduces the number of data points unacceptably (I would agree, but that doesn't make MBH98 right).

      That's the whole point here, and the other side continues to say, "you're throwing away data" when any competent researcher would have thrown it out in the first place (note: there's an exception. if you produced a report that was specific to N. America, MBH98 would be your model, and it seems to be a fine model for that... N. America is seeing record warming as compared with the last few centuries, and that's all you can extract from MBH98).

      Also keep some perspective in mind here. We're in a period where temperatures could rise MORE than ANYONE is predicting and not make a dent in the graph of the last 10 million years. If you graph out the last 10 million years, you see that temperatures over the last 10,000 years have been part of a huge, cyclical spike in temperatures. We're at what is likely the peak of a drastic temperature swing, and it WILL plummet again into a new ice age (unless we decide to, and are capable of, coming up with a way to prevent it). I'm not drawing any conclusions from that, just pointing out that there are natural forces at work here, capable of making temperature changes that we a) cannot yet conclusively explain and b) the likes of which no human has ever experienced.

      It's important to keep a sense of perspective and to remember that we have very impressive climate models... all of which might be wrong.
    • by jnaujok ( 804613 ) on Tuesday March 22, 2005 @12:38PM (#12012754) Homepage Journal
      You do know that Mann writes this website, right? You do realize that the source of your argument (http://www.realclimate.org/ [realclimate.org]) is a shill for Mann and his cronies?

      Second of all, there was a flaw in the original algorithm that was pointed out by McIntyre and McKitrick before they even got to the bad data being put into the equation.

      And, to top it off, Mann's equation always produces hockey-stick graphs [newsweekly.com.au], even with randomly distributed data.

      Don't point at Mann's own site as a defense of Mann.
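      For anyone who wants to poke at the "hockey sticks from random data" claim themselves, here is a rough R sketch (my own toy setup, not Mann's code and not M&M's; the AR coefficient, series count and window length are arbitrary choices) of the "short-centring" question being argued about, i.e. centring proxies on the calibration window rather than the full period before taking principal components:

        # 70 pseudo-proxies of AR(1) "red noise", 581 "years" each
        set.seed(1)
        n.years  <- 581
        n.series <- 70
        calib    <- (n.years - 78):n.years   # last 79 "years" = calibration window
        proxies  <- replicate(n.series, as.numeric(arima.sim(list(ar = 0.9), n = n.years)))

        # Conventional PCA: centre each series on its full-period mean
        pc1.full  <- prcomp(proxies, center = TRUE)$x[, 1]

        # "Short-centred" PCA: centre on the calibration-window mean only
        shifted   <- sweep(proxies, 2, colMeans(proxies[calib, ]))
        pc1.short <- svd(shifted)$u[, 1]

        # The short-centred PC1 tends to concentrate its excursion in the
        # calibration window (the disputed "hockey stick" artefact); how much
        # that matters for the real reconstruction is exactly what MM and MBH
        # disagree about.
        matplot(cbind(scale(pc1.full), scale(pc1.short)), type = "l", lty = 1,
                xlab = "year index", ylab = "standardised PC1")
        legend("topleft", legend = c("full centring", "short centring"), col = 1:2, lty = 1)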
      • by Silburn_Luke ( 672738 ) on Tuesday March 22, 2005 @02:54PM (#12014266)
        You do know that Mann writes this website, right? You do realize that the source of your argument (http://www.realclimate.org/) is a shill for Mann and his cronies?
        I'll just note that Mann's 'cronies' (all eight of them) are climate scientists of one sort or another doing relevant, current work in the field in question, and that it's a stretch (and how) to call the site a shill for Mann when his name is on the front page as a contributor.

        However there was a link to McIntyre and McKitrick's website [uoguelph.ca] in the topic summary. Why was it relevant for Timothy to include that link, but not include a link to the matching item on RealClimate.org [realclimate.org]? Is it just non-scientists who are allowed to have weblogs about this stuff?

        Regards
        Luke
      • Shill out [realclimate.org].

        Here, however, we choose to focus on some curious additional related assertions made by MM holding that (1) use of non-centered PCA (as by MBH98) is somehow not statistically valid, and (2) that "Hockey Stick" patterns arise naturally from application of non-centered PCA to purely random "red noise". Both claims, which are of course false, were made in a comment on MBH98 by MM that was rejected by Nature,

        and subsequently parroted by astronomer Richard Muller in a non peer-reviewed setting--se

  • should they be allowed to generate 'closed-source' solutions at the taxpayers' expense?

    I think it's not fair for a publicly funded project to do something like create a product and sell it without source code, or patent their work. But that doesn't mean that every artifact of the research project needs to be made public, necessarily. In this case, the end product that the grants paid for is the scholarly paper, not a computer program. Just as we don't demand their notebooks, time cards, e-mails, and meeting
    • In this case, the end product that the grants paid for is the scholarly paper, not a computer program.

      But the program is part of the authority of the paper. Put it like this: if the program was revealed to be a random number generator, do you think it would reflect badly on the value for money the taxpayers got? In which case, don't researchers have a duty to publish the program in order to help show the validity of their research, in all cases, not just this one?

      TWW

  • Replication (Score:3, Informative)

    by mwvdlee ( 775178 ) on Tuesday March 22, 2005 @09:44AM (#12010985) Homepage
    Significant research data is generally replicated independently of the original researchers for verification of the results. Without a description of the method of research used (in this case, the computer model), how can the data be replicated and thus verified? Indeed, the methods themselves are commonly scrutinized in the scientific world and, IMHO, any scientist that does not approve of this is not looking for truth but for something else (personal agendas, fame, etc.).

    Not detailing the methods used (in this case, giving the entire algorithms, either as source or as a 100% complete and unambiguous description) basically reduces the resultant data to mere speculation, not proof nor even theory.

    If I remember correctly, the computer model in this case is known to include a rather lacking model of rainfall, which seems like a pretty big omission in a climate model to me.
  • by operon ( 688118 ) on Tuesday March 22, 2005 @09:48AM (#12011003) Homepage
    Today biology heavily depends on specific software to analyse lab-generated data. However, even academic, publicly funded software is not open source. It's a sad situation, but there are efforts like Bioinformatics.Org [bioinformatics.org] trying to change the situation.
  • Besides paying for the research, how can anyone else check the accuracy or repeat the results without the original code?
  • by Nick Barnes ( 11927 ) on Tuesday March 22, 2005 @10:01AM (#12011108)
    See this debunking [unsw.edu.au] of McKitrick's work, showing, among other things, how he:
    • denies that average temperature is meaningful,
    • confuses degrees with radians (a quick illustration follows below),
    • invents a whole new temperature scale,
    • replaces missing data with zeroes
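    On the degrees-versus-radians item: I'm assuming the usual cosine-of-latitude weighting context here (the linked debunking has the details), but the nature of the slip is easy to show in R, where cos() expects radians:

      lat <- 45                # latitude in degrees
      cos(lat)                 # 0.525...  wrong: 45 silently treated as radians
      cos(lat * pi / 180)      # 0.707...  the intended weight for 45 degrees N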
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday March 22, 2005 @10:03AM (#12011124)
    Comment removed based on user account deletion
  • How much trust should we put in a study where the computer simulation code does not come under peer review (closed source) versus one where it does (open source)? Seems to me that the code is as much a part of the "study" as the results and data are, especially considering how much finagling can go on in source code.

    Also, since the results have to be reproducible by ANYBODY, without the source you cannot guarantee that the program is doing what it is said to do. Af
  • Shouldn't another scientist be able to replicate that experiment? Source code is an integral part and they won't let you know how they did that?

    That's BS, and all the more so because of the political implications of such research.
  • by kisak ( 524062 ) on Tuesday March 22, 2005 @11:04AM (#12011640) Homepage Journal
    It is an interesting debate whether scientists should publish their source or not. But the fact that two economists with little understanding of the climate have published some source code is not very interesting. I guess they want to sell more of their book, which I suspect goes in the politics and economy section and not in the science section.

    I think there is no reason to demand that scientists publish their source code, since scientists usually reuse their code and share it with the people they work with, but they should not be obliged to help other scientists, with whom they are competing for funding, to get their own simulation programs.

    The demands on scientists are clear, though: they should give enough information in their publications that anyone interested (or anyone who wants to refute their results) can reproduce what they have done. So any statistical or mathematical methods used should be mentioned. And if they use commercial packages (usually closed source for all parties), mentioning which packages they use would be wise, so that if bugs are found in those programs, any influence on the results can be taken into consideration. If enough information is given, then any scientist who can program can check the literature for how to implement the numerical algorithms and write their own program. Often they can buy (fairly expensive) commercial packages, or even find open source libraries that have already implemented these algorithms, and then reproduce the results.

    If these two economists were able to reproduce the results of some major climate scientists, then those climate scientists have given enough information to their fellow scientists and the general public. So let's forget about these two guys, or buy their book if you want to believe they know better about climate change than the general scientific community.

  • Obviously YES (Score:5, Insightful)

    by radtea ( 464814 ) on Tuesday March 22, 2005 @12:10PM (#12012429)
    How many times have you asked someone, "What does your code do to solve this problem?" and got a description of an algorithm which, when you finally get to see the source, does not match the code?

    In my case, the answer to that question is, "Lots." I have had it happen in pure science (neutrino physics), applied science (medical physics) and software development (database programming, data analysis, etc.).

    I am painfully aware that my own published descriptions of algorithms have often left out minor details that may be critical in some applications, omissions that the page limits of peer-reviewed journals necessitate. It is not uncommon to get a call from someone doing similar work asking for details about what you've done, how you've done it, and in some cases, asking to look at source code.

    In contentious areas of science such requests are not always met with full disclosure, which is a sign that the people involved are no longer doing science. They are doing politics. This happens a lot, and it brings the scientific process to a halt on the question at issue.

    In the case at hand, the original authors have done a very poor job of describing what they have done, and an extremely poor job of defending their work. Their refusal to publish their source code for their analysis gives credibility to their critics.

    There are certainly legitimate cases where code ought not be published. If a lab has spent many, many years developing a framework for solving a certain type of problem and wants to get the most advantage out of that framework before releasing it, they may reasonably want to limit its dissemination for a while. But those sorts of reasons don't apply in this case, and the source should be made available to anyone who wants to reproduce the actual results. That would just be good science.

    --Tom
  • Replication (Score:3, Informative)

    by cvdwl ( 642180 ) <cvdwl someplace around yahoo> on Tuesday March 22, 2005 @12:52PM (#12012917)
    As an academic computer programmer in ocean modeling, let me just say it HAS TO be open. Yes, my work is open source, though why anyone would WANT my code is beyond me. Most of what I do is quick, short-lived, badly coded, inefficient data processing and visualization scripts. Still, feel free to email me and I'll send you a tarball of any code on my machine or a link to the developer's page.

    1) Science functions only on open review. If you can't duplicate someone's results, they are useless (cf. Pons and Fleischmann). A scientific result is only of value if it describes a consistent, replicable process. This is why I consider the closed-source work to be completely meaningless. It may be perfect, it may be bug-ridden garbage; we'll never know!

    2) Every tax paying American has paid for my code and work. While I regularly feel they're not getting their money's worth, I definitely don't feel they're paying me to enrich me. They are, in a very real sense, my bosses, and I AM obligated to report to them, if they care. Think of it as a company requiring rights to your work.

    3) As an academic working on a fairly limited budget, open source and free software have been a godsend for me and everyone else I know. We run Linux because it's more efficient, secure and FREE; we use free or open-source compilers; and we cobble together high-performance computers and Beowulf clusters out of miscellaneous bare metal and lots of googling. The only piece of software I routinely have to pay for is MATLAB.

  • by jmason ( 16123 ) on Tuesday March 22, 2005 @03:23PM (#12014658) Homepage
    how much publicly-funded researchers should be required to disclose - should they be allowed to generate 'closed-source' solutions at the taxpayers' expense?

    It's worth noting that, while it makes sense that taxpayer-funded research should generate 'open-source' solutions, federal law dictates otherwise.

    The Bayh-Dole Act [uc.edu], passed 25 years ago, dictates:

    Universities were encouraged to collaborate with commercial concerns to promote the utilization of inventions arising from federal funding.

    It was clearly stated that universities may elect to retain title to inventions developed through government funding.

    Universities must file patents on inventions they elect to own.

    So, in other words, since 1980 the government has dictated that government-funded research need not produce open-source solutions, as the results of research are to be considered private-sector profit-generating centers for the host universities. (The implications for the 'next BSD4.3 TCP/IP stack', or similar advanced research, are obvious.)

    Anyway, regarding the 'hockey stick' controversy, Tim Lambert's weblog [unsw.edu.au] is worth a read.
