
Call For Scientific Research Code To Be Released

Pentagram writes "Professor Ince, writing in the Guardian, has issued a call for scientists to make the code they use in the course of their research publicly available. He focuses specifically on the topical controversies in climate science, and concludes with the view that researchers who are able but unwilling to release programs they use should not be regarded as scientists. Quoting: 'There is enough evidence for us to regard a lot of scientific software with worry. For example Professor Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies. For example, interface inconsistencies between software modules which pass data from one part of a program to another occurred at the rate of one in every seven interfaces on average in the programming language Fortran, and one in every 37 interfaces in the language C. This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program. What he also discovered, even more worryingly, is that the accuracy of results declined from six significant figures to one significant figure during the running of programs.'"
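For illustration only: the kind of interface inconsistency described above is easy to reproduce in C with a hypothetical two-file program (this is not code from Hatton's study). The caller's out-of-date declaration disagrees with the definition, the compiler typically accepts it, and garbage is passed across the interface:

    /* physics.c -- the definition takes a double */
    double scale(double x) { return 2.0 * x; }

    /* driver.c -- an old-style declaration (legal in C89/C99) omits the
       parameter type, so the call below passes an int where the definition
       expects a double: undefined behaviour, usually with no warning. */
    double scale();

    int main(void) {
        double y = scale(3);   /* int passed, double read: garbage */
        (void)y;
        return 0;
    }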

  • Seems reasonable (Score:4, Insightful)

    by NathanE ( 3144 ) on Tuesday February 09, 2010 @10:44AM (#31071878)

    Particularly if the research is publicly funded.

  • great! (Score:4, Insightful)

    by StripedCow ( 776465 ) on Tuesday February 09, 2010 @10:47AM (#31071920)

    Great!

    I'm getting somewhat tired of reading articles where there is little or no information regarding program accuracy, total running time, memory used, etc.
    And in some cases, I'm actually questioning whether the proposed algorithms actually work in practical situations...

  • by aussersterne ( 212916 ) on Tuesday February 09, 2010 @10:49AM (#31071940) Homepage

    seem to understand the very idea of scientific methods or processes, or the reasoning behind empiricism and careful management of precision.

    It's a failure of education, not so much in science education, I think, as in philosophy: formal and informal logic, epistemology and ontology, etc. People appear increasingly unable to understand why any of this matters, and they treat the "answer" as always "true" for any given process that can be described, so science becomes an act of creativity by which one tries to create a cohesive narrative of process that arrives at the desired result. If it has no intrinsic breaks or obvious discontinuities, it must be true.

    If another study that contradicts it also suffers from no breaks or discontinuities, they're both true! After all, everyone gets to decide what's true in their own heart!

  • Re:Why release it? (Score:3, Insightful)

    by ShadowRangerRIT ( 1301549 ) on Tuesday February 09, 2010 @10:50AM (#31071954)
    Please apply Hanlon's razor [wikipedia.org] before leaping to conspiracy theories. Or Occam's razor [wikipedia.org] might inform you that a conspiracy among thousands of scientists is a highly improbable occurrence; look for a solution that doesn't involve a perfect lid of secrecy among a group of (frequently) socially inept people.
  • by stewbacca ( 1033764 ) on Tuesday February 09, 2010 @11:00AM (#31072056)

    My bet is there is a simple explanation...namely that scientists outside of computer science are too busy in their respective fields to know anything about code, or even care. The egocentric Slashdot-worldview strikes at the heart of logic yet again.

  • by Cyberax ( 705495 ) on Tuesday February 09, 2010 @11:01AM (#31072080)

    His colleague was _sued_ (by a crank) based on released FOIA data. It might explain a certain reluctance to disclose data to known trolls.

  • by bsDaemon ( 87307 ) on Tuesday February 09, 2010 @11:05AM (#31072130)
    I think a lot of it has to do not just with failures in education, but also with the way science (in particular, but everything in general) is reported in the media. One week a study saying coffee will kill you gets reported; a couple of days later a story saying another study says coffee will make you immortal is reported, both given equal weight, neither with expert commentary or perspective. C+ students who look good on camera banter back and forth about it, laughing jocularly and, through their own dismissiveness and misunderstanding, passing that attitude on to their viewers.

    It's come to the point where many, many people just dismiss the whole business of science. "They can't even make up their minds!" they say, as if the point of science is to make up one's mind. Of course, this is where the failure of education to actually educate comes into play. Classical liberalism has been turned over, spanked and made into the servant of corporate mercantilism, and we're all just now supposed to sit down and shut up. Science is, in its essence, a libertarian (note small 'l') pursuit through which one questions all authority, up to and including the fabric of existence itself -- all assumptions are out the window and any that cannot pass muster are done away with.

    But, just like socio-political anarchism (libertarian socialism), the spirit of rebellion and anti-authoritarianism inherent in science has been packaged and sold in a watered-down, safe-for-children package at the local shopping mall, only to be taken out of the box when the powers that be feel they can use it for their own purposes. Not to be a downer or anything; it's just that I really do think this is bigger than just science. It's to do with people willingly leading themselves as sheep to the slaughter on behalf of the farmer to make the dog's job easier.
  • Conspiracy? (Score:3, Insightful)

    by Coolhand2120 ( 1001761 ) on Tuesday February 09, 2010 @11:06AM (#31072148)
    Nobody said conspiracy, just plain crappy code. You don't need a conspiracy: if you are "trying to prove" something, your crappy code spits out what you want to see and you run with it. You just need plain old incompetence.
  • by Idiot with a gun ( 1081749 ) on Tuesday February 09, 2010 @11:10AM (#31072190)
    Irrelevant. If you can't take some trolls, maybe you shouldn't be in such a controversial topic. The accuracy of your data is far more significant than your petty emotions, especially if your data will be affecting trillions of dollars worldwide.
  • by Anonymous Coward on Tuesday February 09, 2010 @11:11AM (#31072196)

    As it is written, the editorial is saying that if there is any error at all in a scientific computer program, the science is usually invalid. What a lot of bull hunky! If this were true, then scientific computing would be impossible, especially with regards to programs that run on Windows.

    Scientists have been doing great science with software for decades. The editorial is full of it.

    Not that it would be bad for scientists to make their software open source. And not that it would be bad for scientists to benefit from some extra QA.

  • Re:Conspiracy? (Score:5, Insightful)

    by obarthelemy ( 160321 ) on Tuesday February 09, 2010 @11:13AM (#31072238)

    Yes and no. Which assertion do you think more probable:

    1- "These are not the desired results. Check your code".

    2- "These are the desired results. Check your code".

    No conspiracy, but a conspiracy-like end result.

  • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday February 09, 2010 @11:19AM (#31072286) Journal
    The "The public deserves access to the research it pays for" position seems so self-evidently reasonable that further debate is simply unnecessary(though, unfortunately, the journal publishers have a strong financial interest in arguing the contrary, so the "debate" actually continues, against all reason). Similarly, the idea that software falls somewhere in the "methods" section and is as deserving of peer review as any other part of the research seems wholly reasonable. Again, I suspect that getting at the bits written by scientists, with the possible exception of the ones working in fields(oil geology, drug development, etc.) that also have lucrative commercial applications, will mainly be a matter of developing norms and mechanisms around releasing it. Academic scientists are judged, promoted, and respected largely according to how much(and where) they publish. Getting them to publish more probably won't be the world's hardest problem. The more awkward bit will be the fact that large amounts of modern scientific instrumentation, and some analysis packages, include giant chunks of closed source software; but are also worth serious cash. You can absolutely forget getting a BSD/GPL release, and even a "No commercial use, all rights reserved, for review only, mine, not yours." code release will be like pulling teeth.

    On the other hand, I suspect some of this hand-wringing is little more than special pleading. "This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program." Right. I know that I definitely live in the world where all my important stuff: financial transactions, recordkeeping, product design, and so forth are carried out by zero-defect programs, delivered to me over the internet by routers with zero-defect firmware, and rendered by a variety of endpoint devices running zero-defect software on zero-defect OSes. Yup, that's exactly how it works. Outside of hyper-expensive embedded stuff, military avionics, landing gear firmware, and FDA-approved embedded medical widgets (that still manage to Therac people from time to time), zero-defect is pure fantasy. A very pleasant pure fantasy, to be sure; but still fantasy. The revelation that several million lines of code, in a mixture of Fortran and C, most likely written under time and budget constraints, isn't exactly a paragon of code quality seems utterly unsurprising, and utterly unrestricted to scientific areas. Code quality is definitely important, and science has to deal with the fact that software errors have the potential to make a hash of their data; but science seems to attract a whole lot more hand-wringing when its conclusions are undesirable...
  • all (Score:2, Insightful)

    by rossdee ( 243626 ) on Tuesday February 09, 2010 @11:19AM (#31072296)

    So if scientists use MS Excel for part of their data analysis, MS should release the source code of Excel to prove that there are no bugs in it (that may favour one conclusion over another)?
    Sounds fair to me.

    And if MS doesn't comply, then all scientists have to switch to OO.org?

  • Re:Conspiracy? (Score:3, Insightful)

    by bunratty ( 545641 ) on Tuesday February 09, 2010 @11:23AM (#31072358)
    Let's think through what would really happen if scientists released their code. The code has bugs, as all code does. People with an ulterior motive would point to the bugs and say "Look here! A bug! The science cannot be trusted!" And millions of sheeple would repeat "Yes! The code has bugs! And therefore I refuse to believe it!" It won't matter whether the bugs are relevant to the science; the fact that there are any bugs at all will cause people who want to disagree to say there's doubt about the results. Meanwhile, they will go about their business using computer systems that are riddled with bugs, but that function well enough that the vast majority of the time they're not even aware of the bugs.
  • Not a good idea (Score:5, Insightful)

    by petes_PoV ( 912422 ) on Tuesday February 09, 2010 @11:23AM (#31072364)
    The point of reproducible experiments is not to provide your peers with the exact same equipment you used - then they'd get (probably / hopefully) the exact same results. The idea is to provide them with enough information so that they can design their own experiments to measure the same things, and then to analyze their results to confirm or disprove your conclusions.

    If all scientists run their results through the same analytical software, using the same code as the first researcher, they are not providing confirmation, they are merely cloning the results. That gives the original results neither the confidence of independent validation nor the possibility of refutation.

    What you end up with is no one having any confidence in the results - as they have only ever been produced in one way - and arguments that descend into a slanging match between individuals and groups with vested interests, each trying to "prove" that the same results show they are right and everyone else is wrong.

  • by FlyingBishop ( 1293238 ) on Tuesday February 09, 2010 @11:26AM (#31072396)

    What's your point? If a biologist has no understanding of code, they have no business running a simulation of an ecological system. If a physicist has no understanding of code, they have no business writing software to simulate atomic processes. If a geneticist has no understanding of code, they have no business writing software that does pattern matching across genes.

    Those who don't want to write software to aid in their research may continue not to do so (and continue to lose relevance). But if they're going to use software, they have to use best practices. To do otherwise likewise makes their work quickly fade in relevance.

  • by Anonymous Coward on Tuesday February 09, 2010 @11:26AM (#31072400)

    ...so science becomes an act of creativity by which one tries to create a cohesive narrative of process that arrives at the desired result.

    As someone who listens to talk radio on occasion, I can tell you that sounds like describing the creation of a work of fiction. Rush and Hannity would have a whole week of shows based on that statement.

    I would put it more like "piecing the narrative from the evidence" or "from facts" or something like that.

    Scientists need to realize that if they're going to get public support, they really need to be very careful with their choice of wording. Like it or not, the scaremongers (and I mean people who are trying to scare folks into believing that global warming is some sort of socialist wealth-redistribution scheme) are going to use any hint, real or not, that scientists are making up their findings.

  • by nten ( 709128 ) on Tuesday February 09, 2010 @11:39AM (#31072570)

    I am suspicious of the interface claim. Are they counting things where an enumeration got used as an int, or where there was an implicit cast from a 32-bit float to a 64-bit one? From a recent TV show: "A difference that makes no difference is no difference." Stepping back a bit, there will be howls from OO/functional/FSM zealots who look at a program and declare that its inferior architecture, lack of maintainability, etc. indicate its results are wrong. These are programs written to be run once, to turn one set of data into a more understandable and concise one. Running a known-truth test set through them is good enough; they don't need ISO-compliant, triply refactored, perfectly architected code to get the right answer. I don't think any of my CS profs would have cared about such inane drivel; they barely paid attention to what language we each picked to solve the assignment in. My software engineering prof would have yelled about comment density and coding-standards compliance, but I consider that a different discipline, primarily applicable to widely used and/or safety-critical code.

    *However*
    Keeping track of digit precision through a calculation isn't CS, it's fundamental grade-school science. It is only one step from forgetting to do unit analysis as a sanity check. If they are forgetting that, they are probably also not looking at numerical conditioning, and may be trying to get by with doubles when they need bignums. None of this is CS egocentrism; it's stuff we learn in math and science courses.
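    As a tiny illustration of the digit-precision point (a made-up calculation, not taken from any of the codes under discussion), subtracting two nearly equal doubles throws away most of the significant figures both inputs carried:

        #include <stdio.h>

        int main(void) {
            /* both inputs carry roughly 16 significant decimal digits... */
            double a = 1.0000000123456789;
            double b = 1.0000000000000000;
            /* ...but the difference retains only about half of them */
            printf("%.17g\n", a - b);
            return 0;
        }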

  • Re:Conspiracy? (Score:5, Insightful)

    by crmarvin42 ( 652893 ) on Tuesday February 09, 2010 @11:41AM (#31072608)
    And then they fix the bug and either...

    A. The results change, thus indicating that the bug was important in some way. In this case, fixing the bug not only silences the critics but improves our understanding.

    or

    B. The results don't change, thus indicating that the bug, while still a bug, was not important to the final result. In this case, we've fixed a bug that the critics were using as a banner, and shown that they were mistaken about its importance. We don't get the improved understanding, but we do get a chance to politely say STFU to the more vocal/less qualified critics.

    Either way looks like win/win to me.
  • by jgtg32a ( 1173373 ) on Tuesday February 09, 2010 @11:50AM (#31072742)

    Shit like this is why I'm hesitant about going along with Climate Change. I'm in no way qualified to review scientific data, but I can tell when someone is shady, and I don't trust shady people.

  • by PhilipPeake ( 711883 ) on Tuesday February 09, 2010 @11:54AM (#31072804)

    ... and this is the problem: the move from direct government grants for research to "industry partnerships".

    Well, (IMHO) if industry wants to make use of the resources of academic institutions, they need to understand the price: all the work becomes public property. I would go one step further, and say that one penny of public money in a project means it all becomes publicly available.

    Those that want to keep their toys to themselves are free to do so, but not with public money.

  • by Wardish ( 699865 ) on Tuesday February 09, 2010 @11:55AM (#31072826) Journal

    As part of publication and peer review, all data and the provenance of the data, as well as any additional formulas, algorithms, and the exact code that was used to process the data, should be placed online in a neutral holding area.

    The neutral area needs to be independent and needs to show any updates and changes, preserving the original content in the process.

    If your data and code (readable and compilable by other researchers) aren't available, then peer review and reproduction of results are hollow. If you can't look in the black box then you can't trust it.

  • by Rising Ape ( 1620461 ) on Tuesday February 09, 2010 @12:05PM (#31072966)

    Nonsense, they're not trying to produce code, they're trying to produce science. It doesn't matter how ugly the code is, or how inefficient, as long as it produces correct answers. Since software engineering "best practices" seem to change every week (and do not prove program correctness in any case), what are they supposed to do, spend huge amounts of time learning as much as a professional software engineer would? Do you do that for all the tools you use?

    Does anyone have any evidence that the code is *wrong*? I.e. does it actually produce significantly wrong answers? I suspect not - this is just the latest FUD-spreading trick.

    This is just typical programmer "when your tool's a hammer" mentality. Software's not the most important thing in the world, and science has better ways to verify correctness - have several independent analyses of the same thing for example, or different ways of measuring the same thing to check for consistency.

  • by John Hasler ( 414242 ) on Tuesday February 09, 2010 @12:06PM (#31072984) Homepage

    > This raises the question in what programming language the scientific code
    > should be published.

    The one it was written in. What should be published is the exact code that was compiled and run to generate the data. Think of it as similar to making the raw data available.

  • Re:Conspiracy? (Score:3, Insightful)

    by bunratty ( 545641 ) on Tuesday February 09, 2010 @12:10PM (#31073042)
    From recent events, I think both A and B are wrong. When an error is pointed out in research that shows AGW is happening, people use that error as an excuse not to believe any research that AGW is happening, even years after the error is corrected. When an error is pointed out in the IPCC report about a minor effect of climate change, people use that error to doubt all effects of climate change. Correcting the errors or pointing out they don't change the results will not silence the critics. It will only make the critics claim that their opinion is being suppressed even though the science has been indisputably proven to be flawed and therefore cannot be trusted!
  • by apoc.famine ( 621563 ) <apoc.famine@NOSPAM.gmail.com> on Tuesday February 09, 2010 @12:12PM (#31073080) Journal

    As someone doing a PhD in a climate related area, I can see both sides of the issue. The code I work with is freely and openly available. However, 99.9% or more of the people in the world wouldn't be able to do a damn thing with it. I look at my classmates - we're all in the same degree program, yet probably only 5% of them would really be able to understand and do anything meaningful with the code I'm using.
     
    Why? We're that specialized. Here, I'm talking 5% of people studying atmospheric and oceanic sciences being able to make use of my code without taking several years to get up to speed. What's the incentive to release it? Why bother with the effort, when the audience is soooo small?
     
    Release the code, and if some dumbass decides to dig into it, you either are in the position of having to waste time answering ignorant questions, or you ignore them, giving them ammo for "teh code is BOGUS!!!!" Far easier to just keep the code in-house, and hand it out to the few qualified researchers who might be interested. Unsurprisingly, a lot of scientific code is handled this way.
     
    However, I do very much believe in completely transparent discourse. My research group has two major comparison studies of different climate models. We pulled in data from seven models from seven different universities, and analyzed the differences in CO2 predictions, among other things. The data was freely and openly given to us by these other research groups, and they happily contributed information about the inner workings of their models. This, in my book, is what it's all about. The relevant information was shared with people in a position to understand it and analyze it.
     
    It'd be a whole different story if the public wasn't filled with a bunch of ignorant whack-jobs, trying to smear scientists. When we're trying to do science, we'd rather do science than defend ourselves against hacks with a public soapbox. If you want access to the data and the code, go to a school and study the stuff. All the doors are open then. The price of admission is just having some vague idea wtf you're talking about.

  • Re:I concur (Score:3, Insightful)

    by Rising Ape ( 1620461 ) on Tuesday February 09, 2010 @12:13PM (#31073112)

    > 600 lines of code in the main, no functions, no comments

    Does that make it function incorrectly?

    Looking pretty and being correct are orthogonal issues. Code can be well-structured but wrong, after all.

  • Re:I concur (Score:4, Insightful)

    by Rising Ape ( 1620461 ) on Tuesday February 09, 2010 @12:23PM (#31073248)

    > So, while it is perfectly understandable that, say, physicists can't spend 5 years learning CS, at the very least they should be made aware that it requires trained people to write sane code and that they must hand the job to specialists, and spend their valuable time doing what they're skilled at.

    And where will they get these specialists, and who will pay for them?

    Add the overhead of explaining exactly what the code is supposed to do, and the fact that the specialist won't know the physics purpose of it all, and I wouldn't be surprised if there were more errors this way, not fewer. Most science code is fairly short, so all the fuss about "structured programming" (or is it OOP these days?) isn't as important.

  • by acoustix ( 123925 ) on Tuesday February 09, 2010 @12:27PM (#31073330)

    "Why should I make the data available to you, when your aim is to find something wrong with it?"

    That used to be what Science was. Of course, that was when truth was the goal.

  • Re:Observations... (Score:1, Insightful)

    by Anonymous Coward on Tuesday February 09, 2010 @12:28PM (#31073332)

    Exactly this.
    In my field (hydrology and statistics), writing the code in its final form is a long process of experimentation with different approaches and tests. Publishing it before one is truly done with a subject is the same as inviting other people to "scoop" you.

    Also, many people seem to fail to understand the protectionism of scientists. Of course we like to build on other people's results, and see the field grow. However, if we make this too easy (like handing them our code), they just might scoop us. Hence, we make it a little bit harder to ensure that we can feed ourselves as well.

    Finally, about the errors: I have yet to see a single piece of error-free scientific code; however, the results are rigorously tested with an array of tests. The chances of these all coming up the same because of a coding error are small.

  • by TheTurtlesMoves ( 1442727 ) on Tuesday February 09, 2010 @12:30PM (#31073360)
    You're not the F***** pope. You don't get to tell people they are not worthy enough to look at your code/data. If you don't like it, don't do science. But this attitude of only cooperating with a "vetted" group of people is causing far more problems than you think you are solving by doing it. You are not as smart as you think you are.

    If you want to make a claim/suggestion that has very real economic and political ramifications for everyone, you provide the data/models to everyone. Otherwise, have a nice hot cup of shut the frak up.
  • by mjwalshe ( 1680392 ) on Tuesday February 09, 2010 @12:36PM (#31073526)
    Well, up to a point - however, it's the model you have to validate. Years ago I helped write some code to model the behavior of pumps, and one of the tests we did was to run the model and compare it to real life, and also to run the model in reverse to see if we got back to the same point we started from (a toy sketch of that kind of check follows below). Without knowing a ton about CS/mathematics and the modeling methods used, and without access to the original data, a non-specialist is not going to get very far.
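    A toy sketch of that kind of forward/reverse consistency check, using a hypothetical one-line decay model standing in for the pump code described above:

        #include <math.h>
        #include <stdio.h>

        /* hypothetical stand-in model: exponential decay, stepped with forward Euler */
        double step(double y, double k, double dt) { return y + dt * (-k * y); }

        int main(void) {
            double k = 0.3, dt = 1e-4, y0 = 1.0, y = y0;
            for (int i = 0; i < 10000; i++) y = step(y, k, dt);   /* run forward       */
            for (int i = 0; i < 10000; i++) y = step(y, k, -dt);  /* run it in reverse */
            /* a self-consistent implementation should land close to where it started */
            printf("round-trip error: %g\n", fabs(y - y0));
            return 0;
        }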
  • by apoc.famine ( 621563 ) <apoc.famine@NOSPAM.gmail.com> on Tuesday February 09, 2010 @12:42PM (#31073614) Journal

    Of all the stuff that's important in scientific computing, the code is probably one of the more minor parts. The science behind the code is drastically more important. If the code is solid and the science is crap, it's useless. Likewise, the source data that's used to initialize a model is far more important than the code. If that's bogus, the entire thing is bogus.
     
    Sure, you could audit it and find shit that's not done properly. At the same time, you wouldn't have a damn clue what it's supposed to be doing. Suppose I'm adding a floating-point value to an integer. Is that a problem? Does it ruin everything? Or is it just sloppy coding that doesn't make a difference in the long run? (See the small sketch at the end of this comment.) Understanding what the code is doing is required for you to do an audit which will produce any useful results.
     
    Unless you're working under the fallacy that all code must be perfect and bug-free. Nobody gives a shit if you audit software and produce a list of bugs. What's important is that you be able to quantify how important those bugs are. And you can't do that without knowing what the software is supposed to be doing. When it's something as complicated as fluid dynamics or biological systems, a code audit by a CS person is pretty much worthless.
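    (To make the float-plus-integer question above concrete, here is a made-up fragment, not from any real model: whether the mix matters depends entirely on the values involved.)

        #include <stdio.h>

        int main(void) {
            int small = 3;
            float f = 0.5f + small;   /* harmless: 3.5 is represented exactly */
            int big = 16777217;       /* 2^24 + 1 */
            float g = (float)big;     /* not harmless: a float's 24-bit significand
                                         silently rounds this to 16777216 */
            double d = big;           /* a double still holds it exactly */
            printf("%f %f %.1f\n", f, g, d);
            return 0;
        }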

  • by ae1294 ( 1547521 ) on Tuesday February 09, 2010 @12:52PM (#31073784) Journal

    1) Do you seriously think that the whole climate science depends on one scientist's data?

    Irrelevant, if you use public money to do your research your boss gets all that work.

    2) CRU was trolled by FOIA requests. They are a nuisance to deal with, as far as I was told.

    Irrelevant, FOIA requests are part of the deal when you take public money. Don't like it? Don't take public money. The whole idea that FOIA requests can be labeled troll sounds like a very bad idea. I for one don't want to start hearing the government claim that the EFF are trolls and thus are ignoring their FOIA requests.

    3) Scientists are people, people have emotions. That's why peer review is used.

    Irrelevant, ???

  • by Troed ( 102527 ) on Tuesday February 09, 2010 @01:00PM (#31073930) Homepage Journal

    Your argument is void. A bug is a bug. Either it affects the outcome of the program run or it doesn't - and I still don't need to know anything about what it's supposed to do to verify that. You just need to re-run the program with a specified set of inputs and check the output - also known as verifying it against its own test suite (a minimal sketch of such a check follows below).

    (Yes, I'm a Software Engineer by education)
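    For illustration, a minimal sketch of that kind of check, with a made-up model function and a made-up reference value standing in for the output of an earlier, trusted run:

        #include <math.h>
        #include <stdio.h>

        /* hypothetical stand-in for the analysis being re-run */
        static double model(double x) { return 3.0 * x * x + 1.0; }

        int main(void) {
            const double input     = 2.0;
            const double reference = 13.0;   /* recorded from a trusted earlier run */
            const double tolerance = 1e-9;
            double out = model(input);
            if (fabs(out - reference) > tolerance) {
                printf("FAIL: got %.12g, expected %.12g\n", out, reference);
                return 1;
            }
            printf("OK\n");
            return 0;
        }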

  • Precisely (Score:4, Insightful)

    by Sycraft-fu ( 314770 ) on Tuesday February 09, 2010 @01:02PM (#31073968)

    The more important the research, the larger the item under study, the more rigorous the investigation should be, the more carefully the data should be checked. This isn't just for public policy reasons but for general scientific understanding reasons. If your theory is one that would change the way we understand particle physics, well then it needs to be very thoroughly verified before we say "Yes, indeed this is how particles probably work, we now need to reevaluate tons of other theories."

    So something like this, both because of the public policy/economic implications and because of our general understanding of the climate, should be subject to extreme scrutiny. Now please note that doesn't mean saying "Look, this one thing is wrong, so it all goes away and you can't ever propose a similar theory again!" However, it means carefully examining all the data, all the assumptions, all the models, and finding all the problems with them. It means verifying everything multiple times, looking at any errors or deviations, figuring out why they are there and whether they impact the result, and so on.

    Really, that is how science should be done period. The idea of strong empiricism is more or less trying to prove your theory wrong over and over again, and through that process becoming convinced it is the correct one. You look at your data and say "Well ok, maybe THIS could explain it instead," and test that. Or you say "Well my theory predicts if X happens Y will happen, so let's try X and if Y doesn't happen, it's wrong." You show your theory is bulletproof not by making sure it is never shot at, but by shooting at it yourself over and over and showing that nothing damages it.

    However, making sure that this process is done right becomes more important the bigger the issue is. If you aren't right on a theory that relates to the migratory habits of a subspecies of bird in a single state, well, that probably doesn't have a whole lot of wider implications for scientific understanding, or for the way the world is run. However, if you are wrong in your theory of how the climate works, that has a much wider impact.

    Scrutiny is critical to science; it is why science works. Science is all about rejecting the idea that because someone in authority said it, it must be true, or that a single rigged demonstration is enough to hang your hat on. It is all about testing things carefully and figuring out what works and what doesn't.

  • Only if... (Score:3, Insightful)

    by captainpanic ( 1173915 ) on Tuesday February 09, 2010 @01:03PM (#31073982)

    Only if the real programmers out there promise to be nice to us scientists.

    Most scientists will know a lot about, well, science... but not much about writing code or optimizing code.

    Like my scripts. All correct, all working... lots of formulas... but probably a horribly inefficient way to calculate what I need. :-)

    The last thing I need is someone coming to me and telling me that the outcome is correct but that my code sucks.
    (And no, I am not interested in a course to learn coding - unless it's a 1-week crash course.)

  • by joocemann ( 1273720 ) on Tuesday February 09, 2010 @01:09PM (#31074084)

    Your argument is void. A bug is a bug. Either it affects the outcome of the program run or it doesn't - and I still don't need to know anything about what it's supposed to do to verify that. You just need to re-run the program with a specified set of inputs and check the output - also known as verifying it against its own test suite.

    (Yes, I'm a Software Engineer by education)

    You assume far too much. I don't trust an analysis of anything, by anyone, who doesn't know what they are actually looking at. In your example you can look and analyze but you don't need to understand what it is....

    I'm seeing a pretty clear parallel between your view of how the code can be analyzed and the AGW ignoramus skeptic view of AGW science as a whole. I don't trust arguments for or against AGW that aren't by people with educations to demonstrate they at least *might* know what they are talking about.

    You're basically saying you're qualified to analyze and discuss a topic you do not understand simply because you know a language. That is just B.S.

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 09, 2010 @01:13PM (#31074140)
    Comment removed based on user account deletion
  • by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Tuesday February 09, 2010 @01:13PM (#31074152) Homepage Journal

    Both are issues. If your code is buggy, the output may also be buggy. If the code is bug-free but the algorithms are buggy, the output will also be buggy.

    The whole purpose of publishing in the scientific method is repeatability. If the software itself is just re-used without someone looking at how it works - or, even better, writing their own for the same purpose - you're invalidating a whole portion of the method itself.

    As a vastly simplified example, I could posit that 1 + 2 = 4. I could say I ran my numbers through a program as such:

    sub f { my ($x, $y) = @_; return $y + $y; }   # bug: adds $y to itself instead of $x + $y
    print f(1, 2), "\n";                          # prints 4, "confirming" that 1 + 2 = 4

    If you re-ran my numbers yourself through MY software without validating it, you'd see that I'm right. Validating what the software does, and HOW it does it, is very much an important part of science, and an unfortunately overlooked one. While in this example anyone might pick out the error, in a complex system it's quite likely most people would miss one.

    To the original argument: just because very few people would understand the software doesn't mean it doesn't need validating. Lots of peer-reviewed papers are truly understood by only a very small segment of the scientific population, but they still deserve that review.

  • by bmajik ( 96670 ) <matt@mattevans.org> on Tuesday February 09, 2010 @01:18PM (#31074234) Homepage Journal

    However, 99.9% or more of the people in the world wouldn't be able to do a damn thing with it. I look at my classmates - we're all in the same degree program, yet probably only 5% of them would really be able to understand and do anything meaningful with the code I'm using.

    I think the world is very lucky that Linus Torvalds wasn't as narrow-sighted and conceited as you are.

    Why? We're that specialized. Here, I'm talking 5% of people studying atmospheric and oceanic sciences being able to make use of my code without taking several years to get up to speed. What's the incentive to release it? Why bother with the effort, when the audience is soooo small?

    Release the code, and if some dumbass decides to dig into it, you either are in the position of having to waste time answering ignorant questions, or you ignore them, giving them ammo for "teh code is BOGUS!!!!" Far easier to just keep the code in-house, and hand it out to the few qualified researchers who might be interested. Unsurprisingly, a lot of scientific code is handled this way.

    However, I do very much believe in completely transparent discourse. My research group has two major comparison studies of different climate models. We pulled in data from seven models from seven different universities, and analyzed the differences in CO2 predictions, among other things. The data was freely and openly given to us by these other research groups, and they happily contributed information about the inner workings of their models. This, in my book, is what it's all about. The relevant information was shared with people in a position to understand it and analyze it.

    It'd be a whole different story if the public wasn't filled with a bunch of ignorant whack-jobs, trying to smear scientists. When we're trying to do science, we'd rather do science than defend ourselves against hacks with a public soapbox. If you want access to the data and the code, go to a school and study the stuff. All the doors are open then. The price of admission is just having some vague idea wtf you're talking about.

    Have you heard of "ivory tower"? You're it.

    Your position basically boils down to this: "unless you read all the same things I read, talked to all the same people I talked to, went to all the same schools I did... you're not qualified to talk to me".

    That is _the_ definition of monocultural isolationism... i.e. the Ivory Tower of Academia problem.

    Here's the problem: if your requirement is that anyone you consider a "peer" must have had all of the same inputs and conditionings that you had... what basis do you have for allowing them to come out of the other side of that machine with a non-tainted point of view?

    As a specific counterpoint to your way of thinking:

    My dad is an actuary... one of the best in the world. He regularly meets with the top handful of insurance regulators in foreign governments. He manages the risk of _billions_ of dollars. The maths involved in actuarial science embarrass nearly any other branch of applied mathematics. I have an undergraduate math degree, and I could only understand his problem domain in the crudest, rough-bounding-box sort of fashion. Furthermore, he's been a programmer since the System/360 days.

    Yet his code, while there is a lot of it, is something I am definitely able to help him with. We talk about software engineering and specific technical problems he is having on a frequent basis.

    You don't need to be a problem domain expert in order to demonstrate value when auditing software.

    Furthermore, as a professional software tester, I happen to find that occasionally, not over-familiarizing myself with the design docs and implementation details too early allows me to ask better "reset" questions when doing design and code reviews. "Why are you doing this?" And as the developer talks me through it, they understand how shaky their assumptions are. If I had been "travelling" with them in lock step

  • by Starlet Monroe ( 512664 ) on Tuesday February 09, 2010 @01:38PM (#31074500) Journal

    This is a conundrum for me. My research is in the world of radiation physics, where results can definitely be life-changing. I absolutely respect the amount of impact small discrepancies can have on outcomes, but I also struggle to find a balance. The project I'm on right now is a retrospective analysis, so the results we report won't directly affect anyone. If policy changes are made from what we determine, the results will.

    My role is to conduct some fairly complex calculations against a data set, for which I've built some custom software and a database. The software isn't great software...it's good enough to get the job done. I validate the input...a little bit. Just enough to make sure we're using the right file. I confirm that the data I need exists in our input, but I don't do any boundary checking on it. Why should I? There's only one data file that gets analyzed, and as we collect more data, we run it again. I'll probably use this code in "production" four times in the course of the study. Are there stupid bugs that crop up if strings show up in the data instead of floats? Sure. But there won't ever be strings in the data, and the code won't ever be used after we run the data through. We don't have the budget for me to spend the time to write it "right", the way I would if it was for enterprise use. And we sure can't afford to QA it, too.

    I respect the idea that all code should go through a complete development cycle before use in production, and I think it's certainly important for that to happen in science, but I think there have to be limits. Sometimes the object is to get something done, and the difference between doing it "best" and "good enough" doesn't mean the difference between "right" and "wrong."

  • by drooling-dog ( 189103 ) on Tuesday February 09, 2010 @01:39PM (#31074522)

    It seems to me that what's important is the theory being modeled, the algorithms used to model it, and of course the data. The code itself isn't really useful for replicating an experiment, because it's just a particular - and possibly faulty - implementation of the model and as such is akin to the particular lab bench practices that might implement a standard protocol. Replicating a modeling experiment should involve using - and writing, if necessary - code that implements the model the original investigators intended to implement, but distinct from that which they actually used.

    Running the same code on the same data demonstrates very little, and finding bugs in the original code tells you nothing about what results would/should have been achieved had the model been implemented correctly. But of course it's great for throwing stones and "discrediting" a result without actually adding anything constructive to the issue at hand.

  • by Shannon Love ( 705240 ) on Tuesday February 09, 2010 @01:50PM (#31074694) Homepage

    I hate to break it to you but all programming is highly specialized. Climatology is in no way special in this regard.

    Neither do programmers have to understand the abstract model of the program to write it or evaluate it. The vast majority of professional programmers do not understand the abstract model of the code they create. You do not have to be a high-level accountant to write corporate accounting software and you don't have to be a doctor to write medical software. Most programmers spend most of their time implementing models created by non-programmers from fields of which the programmers have no detailed knowledge.

    Does that mean that programmers can't spot crappy code just because they don't understand the details of the model? No, it does not. Most software errors don't arise from the model but from sloppy practices in the management of the software project itself. An experienced programmer doesn't even have to know the language of a project to see that its creation and maintenance were incompetently handled.

    You don't have to be a climatologist to know that the CRU software was utter crap that would produce sound outputs only by divine intervention. For any experienced programmer, it was immediately obvious that it was a great reeking gob of amateur coding with no structure, no plan and no standards. In my experience, most scientific software is like the CRU software. It evolves in an ad hoc manner over many years with no governing organizational structure.

    Commercial software developers have created a wide range of tools and procedures to manage large, vital projects. In the main, scientists use none of these tools, and most of them appear unaware that they even exist, much less why and when they are needed. As a result, most scientific software project management is completely amateurish. If most scientific software were written for commercial applications, the developers would be sued or imprisoned for fraud.

    Scientists tend to be arrogant and dismissive of the work of others, especially those who work in the commercial sector. You believe that because you understand climatology you therefore understand all the tools you are using. Well, you don't. You think that because no one can understand your abstract model they cannot find significant errors in your code. Well, they can. You think we should reengineer our entire civilization based on your unquestioned and unexamined computerized ivory-tower auguries.

    Well, we won't.

    You're just going to have to suck it up and withstand at least the same scrutiny we give important commercial software.

  • Re:Not a good idea (Score:3, Insightful)

    by petes_PoV ( 912422 ) on Tuesday February 09, 2010 @02:00PM (#31074878)

    Experiments produce results

    Errrm, experiments produce data. It's the analysis of that data plus the insight and knowledge of the analysts and scientists that turn it into results. The problem is that if everyone uses the same software they'll never notice any systemic failures in the processing it performs.

  • by Anonymous Coward on Tuesday February 09, 2010 @02:13PM (#31075078)

    You seem to have completely missed the point of the GP. Scientists are often more than willing to listen to software engineers, but no one pays for software engineers to talk to them. If a scientist isn't "researching" and is instead handling first-level support on code he wrote five years ago (or risks being slated for not doing so), then they're not going to remain in post very long.

    If you add to this the controversial areas of science (i.e. not Stephen Hawking's area, but climate change), then there are well-funded lobby groups and others with too much time on their hands looking for ANYTHING that is wrong. As various people have commented, codes have bugs in them. Most of them don't matter, and you can check the results a posteriori to detect any critical ones.

  • by Troed ( 102527 ) on Tuesday February 09, 2010 @02:17PM (#31075172) Homepage Journal

    You're not doing science if you're not performing work that can be falsified (and replicability is a cornerstone in that).

    I'd rather have you do science.

  • by bmajik ( 96670 ) <matt@mattevans.org> on Tuesday February 09, 2010 @02:29PM (#31075404) Homepage Journal

    there are well funded lobby groups and others with too much time on their hand looking for ANYTHING that is wrong.

    Errors are only errors if they are reported by the "right" people?

    Do you want to know how many questions Linus Torvalds has answered for me? Zero.

    I actually _have_ gotten personal responses from Theo de Raadt on some OpenBSD issues, but they all have the general form of "you're not interesting, don't waste my time".

    Nevertheless, I rely on OpenBSD. The fact that Theo has neither the time nor the interest in having a deep meaningful conversation with me about his code neither changes the quality of his code nor prevents him from releasing every 6 months, on schedule.

    I don't think that there is an expectation that scientists stop doing their day jobs to do software support for people. I think there is an expectation that publicly funded research used to set public policy be easily available to all comers.

    I'm a bit frustrated by the apparent contradiction. For the first time perhaps in history in the USA, you have armchair folks trying to do technical audits of scientific tools, research, and publications -- for free.

    I thought the "normal" problem in America is that the population is too apathetic to care and too stupid to provide any critical analysis. And yet we see this happening more and more frequently and the climate-science establishment is circling the wagons instead of celebrating the fact that there are a handful of people that for once give a damn about interesting research tools and methods.

    I must concede that there are some downsides to discussing your opinions and findings with others: When people disagree with you, it ends up taking some of your time.

  • by bdwlangm ( 1436151 ) on Tuesday February 09, 2010 @02:45PM (#31075680)

    If I find something that will cause heap corruption in your code (e.g. you wrote past the end of an array in C; a minimal example follows below), then there is a bug in that code whether you do fluid simulations or make 3D games. I worked as an undergraduate RA under some guys doing ocean modelling, and found several small bugs before I had the foggiest idea what most of the code was meant to do. Yes, there will be many problems someone without your background can't find in your model, but that is not an argument for closed-source science.

    A more important concern is that someone else who does have your background should have access to your code. That would be part of "peer review". Otherwise they're taking your computations on faith, with no way to reproduce them.
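    For instance, the kind of out-of-bounds write mentioned above, in a hypothetical fragment (not from any real model): the loop writes one element past the end of the buffer and corrupts the heap, regardless of what the surrounding program is simulating.

        #include <stdlib.h>

        int main(void) {
            double *v = malloc(10 * sizeof *v);
            if (!v) return 1;
            for (int i = 0; i <= 10; i++)   /* bug: should be i < 10 */
                v[i] = 0.0;                 /* i == 10 writes past the end of v */
            free(v);
            return 0;
        }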

  • by Pentagram ( 40862 ) on Tuesday February 09, 2010 @02:53PM (#31075830) Homepage

    You assume far too much. I don't trust an analysis of anything, by anyone, who doesn't know what they are actually looking at. In your example you can look and analyze but you don't need to understand what it is....

    If the code is freely available and so are the data used, what is stopping you rerunning the experiment with the same data if you find a bug? No analysis comes into it: if the results are significantly different, you can show that the program is running incorrectly.

    I'm seeing a pretty clear parallel between your view of how the code can be analyzed and the AGW ignoramus skeptic view of AGW science as a whole. I don't trust arguments for or against AGW that aren't by people with educations to demonstrate they at least *might* know what they are talking about.

    A mathematician could point out flaws in the calculations of climate science, a physicist could point out problems with the understanding of the physics, a chemist could point out issues with the understanding of the chemistry... you don't have to understand an entire issue to notice problems with a subset of the science. I speak as someone who accepts the majority expert view of climate change.

  • by bdwlangm ( 1436151 ) on Tuesday February 09, 2010 @02:59PM (#31075930)

    just wait until some amateur gets a hold of the code, runs it, and claims that all global warming data is questionable because this model has a bug or produces weird output

    The onus is on the researcher to demonstrate/argue that for the inputs given the code produces meaningful results. If you don't like that then stop doing research with computations? Idiots can always misrepresent you, no matter how you publish. Most of us understand that simulations are limited.

    Second, it will waste the researchers time releasing the code and then responding to questions when people are like "lolz this code blows"

    What makes you think that there will be more people trying out that code and not understanding it, than currently there are people reading the paper and not understanding it? Personally I'm not going to waste my spare time downloading complex simulations that I know nothing about and try to invalidate them.

    That being said, it should definitely be available as a part of the peer review process if something is really called into question.

    So make it available and reference it in your paper. No one's asking you to tell everyone on the planet about it.

  • by Pentagram ( 40862 ) on Tuesday February 09, 2010 @03:05PM (#31076046) Homepage

    Maybe it's a bug that only pops up on certain inputs. Maybe the researcher knows this and avoids those inputs (or wrote the program without intending to go anywhere near the input range where the code fails). This seems fine to me... the researcher needs a one-off set of statistics and writes some quick and dirty code that does it, even if it isn't robust or even efficient.

    Sorry, but I wouldn't trust any code that fails on certain inputs!

    I can accept code that isn't efficient; efficiency just isn't necessary. I can accept bugs in peripheral code (such as an added-on GUI), but the code that actually does the science really should be as good as the scientist can write. If it has known bugs, they should be fixed before any research based on the code is published.

    I speak as someone who has written code for scientific research.

    Releasing this code is probably bad for two reasons. If the researcher is not aware of bugs outside of the exact inputs they used, they probably aren't going to disclose them--just wait until some amateur gets a hold of the code, runs it, and claims that all global warming data is questionable because this model has a bug or produces weird output.

    Good. That means researchers will be more careful about the code they are writing, and we can all have more confidence in the science.

    I don't expect researchers to write great code for everything...it may be repetitive or inefficient but they can usually tell from the result (and comparing it to other models) whether or not something went wrong.

    Comparing it to other models? What if they are wrong too? Perhaps that's how they verified their results. Trying to tell if the program is correct from the results is even worse. You end up fixing bugs until the code produces the result you want.

    I know that I write code at work (IANAClimate Researcher) that is quite sloppy or wasteful because I just want to see what the result looks like (and will never run the program again)

    That's exploratory programming, and is quite fair enough (in fact I think people should do it more), but you shouldn't use such code to do anything important. Throw it away and start again.

  • by philipgar ( 595691 ) <pcg2 AT lehigh DOT edu> on Tuesday February 09, 2010 @03:18PM (#31076300) Homepage

    Actually, I'm pretty sure everyone keeps fairly close hold of the current data they're generating, to prevent other groups from beating them out the door with their idea. The exceptions to this rule are when professors trust one another, and know that the other wouldn't use the information being supplied to do the same research they are already working on.

    As a graduate student, you definitely don't want to share code you've developed immediately. You may spend 2 or 3 years of a PhD writing code, and get a couple papers out of it, but with the code base in place you plan on getting a handful more. More to the point, these papers become relatively easy to generate, because you spent those years developing the program that allows you to do it. Writing papers, and generating results, analyzing them etc takes time, so you can't do everything at once. Releasing your code too early means other groups can do these other experiments, and you, the grad student who spent so many years setting up the code or experiment for them, still wouldn't be able to graduate, because you have not produced enough original research, and instead only developed the tools others used to pump out results.

    As a student nears graduation, they might be more willing to release their code, as competition is then less of a concern. Someone won't pick up your code and release a paper based on it in 2 or 3 months; it just takes too long to get up to speed. However, the BIGGEST impediment to releasing software in academia is the support that you have to give to your software if anyone is going to use it. You first need to audit and clean up your code, a non-trivial task. You have to supply documentation on how to use the software, another non-trivial task, and then provide documentation on the basics of how it works, etc. All of this stuff takes a lot of time, and doesn't tend to help a student graduate. Also, once code is released, there's an expectation that you'll be providing some level of help with questions. Granted, that normally rarely happens (as the author has gone on to do other things, and hasn't touched the code in years). It just becomes a difficult thing to do.

    Phil

  • by Urkki ( 668283 ) on Tuesday February 09, 2010 @03:48PM (#31076702)

    You assume far too much. I don't trust an analysis of anything, by anyone, who doesn't know what they are actually looking at. In your example you can look and analyze but you don't need to understand what it is....

    I'm seeing a pretty clear parallel between your view of how the code can be analyzed and the AGW ignoramus skeptic view of AGW science as a whole. I don't trust arguments for or against AGW that aren't by people with educations to demonstrate they at least *might* know what they are talking about.

    You're basically saying you're qualified to analyze and discuss a topic you do not understand simply because you know a language. That is just B.S.

    So if the scientist who wrote the computer model isn't a qualified software engineer and doesn't have intimate knowledge of the workings of processor architectures, computer languages and all that, then any results he gets using a computer program of his own making are not to be trusted?

    I think you just threw out a significant portion of the latest science...

  • by Xyrus ( 755017 ) on Tuesday February 09, 2010 @04:06PM (#31077018) Journal

    I think the world is very lucky that Linus Torvalds wasn't as narrow-sighted and conceited as you are.

    Linus Torvalds was writing an operating system, software intended for a general computing audience. Something like a climate model has a very exclusive audience and requires that its users have a deep understanding of the subject. You are comparing apples to oranges.

    Have you heard of "ivory tower"? You're it.

    Your position basically boils down to this: "unless you read all the same things I read, talked to all the same people I talked to, went to all the same schools I did... you're not qualified to talk to me".

    That is _the_ definition of monocultural isolationism, i.e. the Ivory Tower of Academia problem.

    No, what he's saying is that unless you have an education in the subject material, you're not going to understand what is going on. Have you ever had to explain to anyone that those programming montages in certain movies really are not accurate? It's sort of like that.

    Sure, you can decipher the code if you're a programmer, but you may not know WHY they are doing the things they are doing. Naively going through a code base with an engineering ax without understanding what the code is doing is a sure way to seriously screw things up.

    As a specific counterpoint to your way of thinking...

    That's not a counterpoint. You're proving his point. You're a software engineer working hand in hand with the "expert". But that's not the issue. The problem is having every idiot who thinks they're $DEITY's gift to programming coming through a scientific codebase and expecting said scientist to do tech support, which they would have to, as they may be the only one who understands what they did. That is a waste of time for said scientist, whose primary job is research, not teaching Fluid Dynamics 101. And a lot of the codes written are one-time solutions for a particular bit of research.

    Seriously, if you want to invalidate some results, read the papers you want to attack and duplicate their algorithms in whatever programming language you want to use to prove that they're wrong. Most code used in scientific papers is fairly short. Plus, doing an independent implementation helps further validate the research. What's the point of using their code? It's going to give you the same answers. Write your own, then you can be sure it was done "right".
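
    For what it's worth, here's a minimal sketch of what that kind of independent cross-check amounts to (toy data and a hypothetical quantity; C chosen only because TFA singles out C and Fortran): compute the same number two different ways and demand that they agree.

        /* Toy cross-check: estimate the same least-squares slope two
         * independent ways and verify they agree.  Data and tolerance
         * are made up purely for illustration. */
        #include <stdio.h>
        #include <math.h>

        #define N 10

        /* Method 1: textbook closed-form slope. */
        static double slope_closed_form(const double *x, const double *y, int n)
        {
            double sx = 0, sy = 0, sxy = 0, sxx = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i]; sy += y[i];
                sxy += x[i] * y[i]; sxx += x[i] * x[i];
            }
            return (n * sxy - sx * sy) / (n * sxx - sx * sx);
        }

        /* Method 2: covariance over variance of mean-centred data. */
        static double slope_centred(const double *x, const double *y, int n)
        {
            double mx = 0, my = 0, cov = 0, var = 0;
            for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
            mx /= n; my /= n;
            for (int i = 0; i < n; i++) {
                cov += (x[i] - mx) * (y[i] - my);
                var += (x[i] - mx) * (x[i] - mx);
            }
            return cov / var;
        }

        int main(void)
        {
            double x[N], y[N];
            for (int i = 0; i < N; i++) { x[i] = i; y[i] = 0.5 * i + 2.0; }

            double s1 = slope_closed_form(x, y, N);
            double s2 = slope_centred(x, y, N);
            printf("slope, method 1: %f\n", s1);
            printf("slope, method 2: %f\n", s2);
            printf("agree within 1e-9: %s\n", fabs(s1 - s2) < 1e-9 ? "yes" : "no");
            return 0;
        }

    If the two disagree, at least one implementation has a bug; if they agree, you've added confidence to the result without ever touching the original author's code.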

    ~X~

  • by Xyrus ( 755017 ) on Tuesday February 09, 2010 @04:12PM (#31077122) Journal

    The discussion in scientific circles is constructive.

    The quasi/anti-science mad-dog drivel that makes up almost all the rest of the discussion is what they're circling the wagons about. It's like having Joe Sixpack come into your workplace screaming that your professional work is bullshit and you should be fired.

    And actually having people in power listen to him.

    ~X~

  • Re:Not a good idea (Score:3, Insightful)

    by Rising Ape ( 1620461 ) on Tuesday February 09, 2010 @04:24PM (#31077282)

    >Do it the same way you do it in industry: document the model number of the oscilloscope, the firmware revision and every important setting you can get your hands on.

    There's no reason on earth why you'd even do that. Just say "the voltage was measured to be x +- y". Results from science experiments should *not* depend on specifics of equipment any more than they should depend on a specific scientist. In fact, the wider the variety of equipment, code and analysis methods used to measure the same thing, the better - it makes the result more robust.

    In your example, both people should recheck their results independently, perhaps try different methods, even do another experiment.

    There are some situations where seeing the code is useful, but only after all other methods to reproduce the result have failed. Sharing code is just inviting common errors.

    In your hypothetical scenario below, the result could be reproduced by writing a new program to do the same thing.

  • by Troed ( 102527 ) on Tuesday February 09, 2010 @05:06PM (#31077886) Homepage Journal

    Yes - but the fact that there are classes of errors (especially those pertaining to the construction of the model) that would be hard to find without domain knowledge does not invalidate the fact that you'll be able to find other classes of errors.

    Errors such as those detailed in the article.

  • by pod ( 1103 ) on Tuesday February 09, 2010 @05:14PM (#31078000) Homepage

    Exactly, although I echo the sentiment that the presentation could have been better.

    Everywhere we turn there are people who think they are smart telling us what to do and what to think, because they know what is best for us. They're the experts with years of training, and we know nothing. Do not question the high priests, do not pay attention to the man behind the curtain.

    This is just following the general trend of late, culminating in "this time, it's different, trust us". We think we're smarter, we're better, we have more tools, we have more knowledge, we have more insight, and that things are somehow fundamentally different, and that today we can fix all the problems that our predecessors have been unable to fix in centuries past. In the end, the more we "fix", the more we break.

    As a lay person, I know we cannot predict what the weather will be like next week, and all I see around me is global climate hysteria. I don't see science, I don't see deliberation, I don't see openness, I don't see debate. I see politics and dogma. Enough of this "you're not smart enough to understand so just trust me" nonsense. Enough of this "science by consensus". It doesn't exist, and it wouldn't be scientific even if it did.

    Show everyone the science, open up the process, accept opposing data (heck, accept ALL legitimate data to begin with), interpretations and views, so we can all see why it is that we need to undertake a complete reorganization of economy, society and personal life, at a cost of trillions of dollars and undoubtedly much resulting misery and suffering.

    It was global cooling and visions of frozen wastelands and a new ice age. Where did that go? Then it was the ozone hole that would fry anyone not wearing SPF1000 sunblock. Where did that go? Then it was global warming and sea level rise that would make disaster movies seem like documentaries. Where did that go? Now we have the amorphous all-encompassing "climate change".

    But THIS TIME, it's different. Really. This time, we're smarter, and we have better science, and we've learned, and we know better, we know for sure. Trust us.

    Well, sorry. You're gonna have to do better than that.

  • by wealthychef ( 584778 ) on Tuesday February 09, 2010 @05:21PM (#31078108)
    Just release your god damned code and don't worry about it. What are you afraid of? The sky will not fall. Your reputation will not crumble. Of course it's not perfect, duh. The point of releasing it is not to have people check for perfection, it's to see if there is a bug that could explain your surprising results. It's part of defending your results. Deal with it. I don't trust you.
  • by Thiez ( 1281866 ) on Tuesday February 09, 2010 @07:31PM (#31079920)

    > Then it was the ozone hole that would fry anyone not wearing SPF1000 sunblock. Where did that go?

    We stopped using the CFCs that were identified as a major contributor to the problem and it appears that is working. Oh sorry, I don't think that supports your argument.

  • by ChrisMaple ( 607946 ) on Tuesday February 09, 2010 @10:21PM (#31081348)

    Something like a climate model has a very exclusive audience

    The final audience of a climate model is (economically) every person alive. If the models are as good as some climatologists claim, the final audience is every living thing on earth.

    Making their code public doesn't mean they have to answer their phone. But they're going to have to answer to someone if it can be shown that their code deliberately produced false results, as was the case with the "hockey stick" scandal.

  • by apoc.famine ( 621563 ) <apoc.famine@NOSPAM.gmail.com> on Wednesday February 10, 2010 @02:37AM (#31082596) Journal
    Spoken like a Software Engineer!

    A bug isn't just a bug. Either it affects the outcome of the program run or it doesn't. The issue is that if you don't know what the outcome should be, you won't be able to tell. Nobody in scientific computing just "re-run(s) the program with a specified set of inputs and check(s) the output". The input is 80% of the battle. We just ran across a paper which showed that the input can often explain 80%+ of the variance in the output of models similar to the one we use.

    So there's our dilemma - what we feed the model is very, very, VERY limited. If something crashes or returns an anomalous result when fed a string instead of an integer, we'll never notice. Why? Because we'll NEVER feed it a string. If all the climatological data we get to feed the model is from NCAR reanalysis, we'll make damn sure the model can handle that data input. Might there be serious issues if another format is fed it? Sure. But that will probably never happen.

    Scientific programming is garbage, by and large. Perform a code audit on it, and you'll find a lot of bugs. But largely, the parts that are in active use are relatively bug free. Why? Because we compare our output with that of other modeling groups. In my office there are two posters comparing seven models from seven different universities. I can tell you who treats oceanic uptake of carbon the same as our group does, and who treats it differently. If one model was a major outlier, we'd have identified that, and asked them what code they use to calculate oceanic carbon uptake.

    This is science, not Software Engineering. We troubleshoot and find bugs by comparing OUTPUT, not CODE. It's only when we find that output is significantly different that we look to code to figure out why. It's akin to having 7 browsers all try to render a page. If 5 of them render the same thing, one is close, and one doesn't look anything like the others, your first guess is to take a look at what that one oddball is doing. The same goes for scientific code.

    The people writing it aren't software engineers, by a long shot. But if they really screw up, everybody knows. It's not through a code audit - it's because their output doesn't match either what's observed in nature, or what other models output. Would rigorous code audits make our code better? Sure. Is CS volunteering to come do it for us? No. Would we have the time to deal with their nit-picking? No. We validate output, not code. And largely, it works.
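
    To make the output-comparison idea concrete, here is a minimal sketch of that kind of ensemble sanity check (the numbers are invented and the diagnostic is hypothetical; a real comparison runs over full gridded fields, not a single scalar):

        /* Sketch of an ensemble sanity check: flag any model whose value of
         * some diagnostic sits far from the ensemble mean.  Values are
         * invented; a real check compares whole fields, not one number. */
        #include <stdio.h>
        #include <math.h>

        #define NMODELS 7

        int main(void)
        {
            /* Hypothetical diagnostic (say, carbon uptake in GtC/yr) from
             * seven modelling groups. */
            double uptake[NMODELS] = {2.1, 2.3, 2.2, 2.0, 2.4, 2.2, 3.9};

            double mean = 0.0, var = 0.0;
            for (int i = 0; i < NMODELS; i++)
                mean += uptake[i];
            mean /= NMODELS;

            for (int i = 0; i < NMODELS; i++)
                var += (uptake[i] - mean) * (uptake[i] - mean);
            double sd = sqrt(var / (NMODELS - 1));

            /* Any model more than two standard deviations out is the
             * "oddball browser" whose code gets looked at first. */
            for (int i = 0; i < NMODELS; i++)
                if (fabs(uptake[i] - mean) > 2.0 * sd)
                    printf("model %d is an outlier: %.2f vs ensemble mean %.2f\n",
                           i, uptake[i], mean);
            return 0;
        }

    The model that gets flagged is the one whose group gets the email asking how they calculate that quantity; only at that point does anyone start reading code.
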
  • by Troed ( 102527 ) on Wednesday February 10, 2010 @05:59AM (#31083552) Homepage Journal

    Sorry, no. You're just displaying your ignorance above. You cannot look at the output and say that just because it fits with your preconceived notions it's therefore correct. You do not know if you have problems in a Fahrenheit-to-Celsius conversion, a truncation when casting between units, etc. (yes, examples chosen on purpose). You might get a result that's in the right ballpark. You might believe you have four significant digits when you only have three. Your homebrewed statistical package might not have been audited by a statistician, etc.

    You simply do not know all the things you claim above that you do know.
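
    To make those error classes concrete, here's a minimal C sketch (variable names and values are purely illustrative, not taken from any actual model code): an integer-division bug hiding inside a unit conversion, and a silent loss of significant digits in a narrowing cast.

        /* Two silent error classes: integer division inside a unit
         * conversion, and precision lost in a float narrowing cast. */
        #include <stdio.h>

        int main(void)
        {
            double temp_f = 98.6;

            /* Buggy: 5 / 9 is integer division and evaluates to 0, so the
             * "converted" value is always 0.0 regardless of the input. */
            double bad_c  = (temp_f - 32.0) * (5 / 9);

            /* Correct: force floating-point division. */
            double good_c = (temp_f - 32.0) * (5.0 / 9.0);

            /* Silent truncation: storing an intermediate in a float throws
             * away digits that later code may assume are still there. */
            float  narrowed = 1.0 / 3.0;   /* ~7 significant digits kept   */
            double widened  = narrowed;    /* the lost digits never return */

            printf("buggy conversion:   %.4f C\n", bad_c);    /*  0.0000 */
            printf("correct conversion: %.4f C\n", good_c);   /* 37.0000 */
            printf("1/3 through float:  %.15f\n", widened);
            printf("1/3 as double:      %.15f\n", 1.0 / 3.0);
            return 0;
        }

    Neither of these shows up as an obviously wrong map or a crash; the output just quietly carries fewer correct digits than you think it has, which is exactly the point.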

  • by Troed ( 102527 ) on Wednesday February 10, 2010 @10:53AM (#31085684) Homepage Journal

    If I write a program to model ocean currents, and it spits out a map of oceans very, very similar to what's been well observed in the ocean, I can assume my code is good enough.

    No. As long as you believe that, you're not doing science.
