Science

Open Source Experiment Management Software?

Alea asks: "I do a lot of empirical computer science, running new algorithms on hundreds of datasets, trying many combinations of parameters, and with several versions of many pieces of software. Keeping track of these experiments is turning into a nightmare and I spend an unreasonable amount of time writing code to smooth the way. Rather than investing this effort over and over again, I have been toying with writing a framework to manage everything, but don't want to reinvent the wheel. I can find commercial solutions (often specific to a particular domain) but does anyone know of an open source effort? Failing that, does anyone have any thoughts on such a beast?"

"The features I would want would be:

  • management of all details of an experiment, including parameter sets, datasets, and the resulting data
  • ability to "execute" experiments and report their status
  • an API for obtaining parameter values and writing out results (available to multiple languages)
  • additionally (alternatively?) a standard format for transferring data (XDF might be good)
  • ability to extract selected results from experimental data
  • ability to add notes
  • ability to differentiate versions of software
In my dreamworld, it would also (via plugin architecture?) provide these:
  • automatically run experiments over several parameter values
  • distribute jobs and data over a cluster
  • output to various formats (spreadsheets, Matlab, LaTeX tables, etc.)
Things I don't think it needs to do:
  • provide a fancy front-end (that can be done separately - I'm thinking mainly in terms of libraries)
  • visualize data
  • statistical analysis (although some basic stats would be handy)
The amount of output data I'm dealing with doesn't necessitate database software (some sort of structured markup is ok for me), but some people would probably like more powerful storage backends. I can see it as experiment management 'middleware'. There's no reason such software should be limited to computer science (nothing I'm contemplating is very domain specific). I can imagine many disciplines that would benefit."
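
For concreteness, here is a rough sketch in Python of the sort of 'middleware' being described. Everything in it is an illustrative assumption (the run_experiment name, the JSON record layout, the git call used for version tracking); it is only meant to make the feature list above concrete, not to describe an existing package.

import json
import subprocess
import time
from pathlib import Path

def run_experiment(name, command, params, notes="", root="experiments"):
    """Execute one run and keep its full description next to its output."""
    outdir = Path(root) / name
    outdir.mkdir(parents=True, exist_ok=True)

    argv = [arg.format(**params) for arg in command]   # e.g. "--k={k}" -> "--k=3"
    version = subprocess.run(["git", "rev-parse", "HEAD"],   # differentiate software versions
                             capture_output=True, text=True).stdout.strip()
    started = time.strftime("%Y-%m-%d %H:%M:%S")
    result = subprocess.run(argv, capture_output=True, text=True)

    record = {"name": name, "started": started, "params": params,
              "software_version": version, "notes": notes,
              "returncode": result.returncode, "stdout": result.stdout}
    (outdir / "record.json").write_text(json.dumps(record, indent=2))
    return record

A call like run_experiment("knn_k3", ["./myalgo", "--k={k}", "{dataset}"], {"k": 3, "dataset": "iris.data"}) would leave experiments/knn_k3/record.json behind; sweeping parameters or extracting selected results is then a matter of iterating over those files.
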
  • Experience (Score:5, Insightful)

    by robbyjo ( 315601 ) on Saturday April 19, 2003 @08:40PM (#5766613) Homepage

    I've also done lots of comp sci empirical experiments. My experience is that the tools used for the experiments themselves are very ad hoc and not easily scriptable. Most of the time we are required to tend the hour-long experiments to see what happened in the output and decide what to do next. And... the decision is often not clear-cut; some sort of heuristic is needed. Not to mention the frustration when errors occur (especially when the tool is buggy, which is very often the case in research settings). So, considering this, what I would do is construct a script and do the experiments in phases. Run it and see the result several days later.

    I also noticed that one experiment is sometimes so radically different from the next that I doubt it is easily manageable.

    • Re:Experience (Score:3, Interesting)

      by jkauzlar ( 596349 )
      I agree with the parent post after giving the problem a little thought. There may be tools available, but I think what you need is to set up scripts for your experiments.

      What comes to mind when I think about experiment management software is unit testing software. Correct me if I'm wrong, but when you run empirical software experiments, you are essentially unit testing the software.

      Something like Python, Perl, or TCL (probably Python-- powerful, easy to read) should suit you ideally. Other options include Ma

      • Writing scripts is my current solution and my desired solution would probably require scripts as glue to bind various applications to the management system. However, this would lead to much less work dedicated to each project.
      • Re:Experience (Score:4, Interesting)

        by robbyjo ( 315601 ) on Saturday April 19, 2003 @09:52PM (#5766876) Homepage

        Sorry, but I must disagree. Most of the time, research experiments != unit testing.

        To illustrate: take, for example, a data mining project. The first phase is data preparation -- which is easily scriptable. But how to prepare the data is a different story. We must examine the raw data case by case to decide how to treat it. For example: when to discretize and using what method (linear scale, log scale, etc.), when to reduce dimensionality, and so on. This requires human supervision.

        Even after we do the data prep, we look at the result. If the cooked data contains too much loss of information due to the prep stage, we have to do it again using different parameters. This is painful.

        Then, next in the pipeline: what algorithm to use. This, again, depends on the characteristics of the cooked data. You know, some "experts" (read: grad students) will determine it using some "random" heuristics of their mind, given some reasonable explanations.

        If the result, once it is out, is not desirable, we might go back for a different algorithm or choose different data prep parameters, and so forth...

        Given these settings, I doubt that there is a silver bullet for this problem...

      • SpecTCL was developed for use at the National Superconducting Cyclotron Laboratory. It is apparently available through SourceForge. While I'm not intimately familiar with it (I wrote my own code from scratch, but it is hardly general purpose), SpecTCL allows objects (1D & 2D histograms, gates, etc.) and procedures to be defined using the Tcl scripting language or C++ while providing a consistent display.

        The high energy folks also have a similar set of packages (as other nuclear labs probably do).

  • by Anonymous Coward on Saturday April 19, 2003 @08:43PM (#5766624)
    Take a look at the Object Modeling System. It is currently being developed by the Agricultural Research Service, but many other agencies are cooperating.

    http://oms.ars.usda.gov/
  • You could look into providing some kind of web-services feel to the computation, and then use an open-source provenance server.

    A provenance server might handle the recording of queries, results etc. Not sure how many good open source ones there are.
  • by use_compress ( 627082 ) on Saturday April 19, 2003 @08:45PM (#5766634) Journal
    1. You cannot (?) afford commercial software.
    2. It is impractical for you to continue writing your own software.
    3. You cannot find open source software.
    -------
    Conclusion: Steal commercial software! ;-)
    • must...resist....

      4. Profit!!!!

      Sorry, I've been reading slashdot too much and must append such an item to all lists I encounter. :P

      And it's not stealing, it's copyright infringement. ;)

      Seriously, though, I think using commercial software still won't cover all the bases. Alea said, "I can find commercial solutions (often specific to a particular domain)..." which I would assume means that there don't appear to be any general-purpose experiment packages.

      As some others have already posted, 'experiments' c

    • 1. ...
      2. ...
      3. ...

      Conclusion: share your software, start a new project, and see if other people are willing to help out.
  • Perl (Score:1, Insightful)

    by Anonymous Coward
    Sounds like you need to use Perl.
    I find it to be an excellent language for maintaining data.
  • by Anonymous Coward on Saturday April 19, 2003 @08:49PM (#5766650)
    I'm also an empirical computer scientist, and another aspect I would look for is handling dependencies. Make is the standard tool for doing this, but it's not up to this task.

    Ideally, I'd type make paper and it would start from the beginning stages of the experiment and go all the way through creating the paper. Moreover, if anything died along the way, I could fix the problem, type make again, and it would more or less pick up where it left off, not re-running things it had already done (unless they were affected by my fix).

    But after playing with this for a few days, I became convinced that make wasn't up to snuff for what I wanted. I have these sorts of `attribute-value' dependency constraints. From one raw initial dataset, I create several cross-validation folds, each of which contains one training set and a couple of varieties of test set. The filenames might look like:

    base.fold0.testA.test base.fold0.testB.test base.fold0.train
    Now suppose that the way I actually run an experiment involves passing a test set and the corresponding training set to the model I'm testing, a command like:
    modelX base.fold0.testA.test base.fold0.train > base.modelX.fold0.testA.run
    Since, however, I have to run this over several folds (and other variations that I'm glossing over), I'd like to write an 'implicit rule' in the Makefile. This involves pattern-matching the filenames. But it's a very simple pattern-matching: you get to insert one .* (spelled %) in each string, which corresponds to the same thing. Given that, there's no way I can specify the command I have above.

    You might be thinking, you could do

    %.modelX.testA.run : %.testA.test %.train
    but then I have to copy this rule several times for each sort of test set, even if the command they run is the same.

    The underlying problem, I think, is that the pattern-matching in make's implicit rules is too simple. What I would rather have is some kind of attribute-value thing, so I could say something like

    { fileid=$1 model=modelX test=$2 filetype=run } : {fileid=$1 test=$2 filetype=test } { fileid=$1 filetype=train }
    where fileid corresponds to 'base.fold0' and whatever other file identifying information is needed.

    This notation is sort of based on a natural language attribute-value grammar.

    Anyway, if anyone has any suggestions as to this aspect of the problem, I would be grateful.
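
    One possible reading of that attribute-value rule, sketched in Python (this is hypothetical, not an existing make replacement; a reply below takes a similar approach in Perl): the target filename is parsed into attributes once, and a single rule yields the dependencies and the command for every model, fold, and test-set variety.

    import re
    import subprocess
    from pathlib import Path

    # One rule covers every model, fold, and test-set variety.
    RUN = re.compile(r"^(?P<base>\w+)\.(?P<model>model\w+)\.(?P<fold>fold\d+)\.(?P<test>test\w+)\.run$")

    def rule(target):
        """Map base.modelX.fold0.testA.run to its inputs and the command that builds it."""
        m = RUN.match(target)
        if m is None:
            raise ValueError(f"no rule matches {target}")
        a = m.groupdict()
        deps = [f"{a['base']}.{a['fold']}.{a['test']}.test",
                f"{a['base']}.{a['fold']}.train"]
        return deps, [a["model"], *deps]   # e.g. modelX base.fold0.testA.test base.fold0.train

    def build(target):
        """Rebuild the target only if it is missing or older than a dependency."""
        deps, cmd = rule(target)
        out = Path(target)
        if not out.exists() or any(out.stat().st_mtime < Path(d).stat().st_mtime for d in deps):
            with out.open("w") as f:
                subprocess.run(cmd, stdout=f, check=True)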


    • Have you tried automake? (autotut [seul.org], autobook [redhat.com])
    • by Anonymous Coward on Sunday April 20, 2003 @12:00AM (#5767351)
      I ran into this problem when I was in graduate school, too. What I eventually did was to abandon make because of the limitations you are running into, and construct a special-purpose experiment running utility that would know about all the predecessors, etc. It turned out not to be too hard, actually. However, if you don't know perl or another language that gives you good pattern matching and substring extraction capability, then this will be very hard to do.

      I just wrote two functions. (I wrote them in the shell, but if I were doing it again, I'd probably do it in perl.) construct() simply makes a file if it is out of date (see example below). construct() is where all of your rules go: it knows how to transform a target filename into a list of dependencies and a command.

      It uses a function called up_to_date() which simply calls construct() for each dependency, then returns false if the target is not up to date with respect to each dependency. If you don't do anything very sophisticated here, up_to_date will only be a few lines of code.

      "construct" will basically replace your makefile. For example, if you did it in perl, you could write it something like this:

      sub construct {
          local $_ = $_[0];    # Access the argument (the target filename).

          if (/^base\.model(.)\.fold(\d+)\.test(.)\.run$/) {
              my @dependencies = ("base.fold$2.test$3.test",
                                  "base.fold$2.train");
              if (!up_to_date($_,              # Output file.
                              @dependencies,   # Input files.
                              "model$1")) {    # Rerun if the program changed, too.
                  system("model$1 @dependencies > $_");
              }
          }
          elsif (/^....$/) {    # ... Check other patterns. ...
          }
      }

      What you've gained from this is a much, much more powerful way of constructing the rule and the dependencies from the target filename. Of course, your file will be a little harder to read than a Makefile--that's what you pay for the extra power. But instead of having many duplicate rules in a makefile, you can use regular expressions or whatever kind of pattern matching capability you want to construct the rules.
  • R? (Score:4, Informative)

    by Elektroschock ( 659467 ) on Saturday April 19, 2003 @08:49PM (#5766652)
    Did you consider R, an S-Plus clone? It's a very flexible solution for scientific statistics. http://www.r-project.org
  • by Xerithane ( 13482 ) <xerithane AT nerdfarm DOT org> on Saturday April 19, 2003 @08:50PM (#5766653) Homepage Journal
    We do something that almost parallels this, and we still haven't had the time to complete the Ant setup. The basic gist of it is that Ant has properties files that can contain any number of parameters, along with embedded XSLT functionality. This allows Ant to generate new build.xml files (The Ant build file) and run it, on the fly, given a set of user-entered commands, environment variables, or file parameters. The parameter files are easy to modify and update, and combined with CVS you can even do version control on the different experiments.

    What I would end up doing is to set up an Ant build file for each experiment, under each algorithm.

    Algorithm/experiment_dataset1.properties
    Algorithm/experiment_dataset2.properties

    And then you can update the property files using a quick shell script, or something along those lines, at the end of each data set, as well as having build/run times that Ant can retrieve for you. It's a good solution, and you aren't reinventing the wheel.

    Requires Java, which depending upon your ideology is either a good thing or a curse. :)
  • A testing tool like JUnit might be a good place to start. I suspect you'll be writing most of your solution. Put it on SourceForge if you do; this sounds useful.

    Good Luck

  • AppLeS? (Score:2, Informative)

    by kst ( 168867 )
    Something like the AppLeS Parameter Sweep Template [sdsc.edu] software might suit your needs. I've never used it myself, but it looks like it might be close to what you're looking for.

    See here [sdsc.edu] for other projects from the GRAIL lab at SDSC and UCSD.
  • Uh-huh (Score:5, Funny)

    by Ryvar ( 122400 ) on Saturday April 19, 2003 @08:54PM (#5766675) Homepage
    I don't mean to sound cynical, but this seems to come across to me as a very nicely written:

    Ne3D H3lp WIt M4H H4x0RiN!!!!!

    I mean, let's face it: much of what modern hacking of closed-source software consists of is throwing a variety of shit against a variety of programs in a variety of configurations, seeing what breaks, and then following up to make an exploit out of it.

    While this probably isn't the case here, it's very hard to read that note and not snicker just a tiny, tiny bit . . .
    • What? Re-read the original story. You are wayyyyy off the mark.
    • I don't think so...

      Or at least, I read the story and immediately thought of applications to my own projects (I'm a Research Assistant at my University, and I'm a little tired of writing Perl scripts to batch long jobs with combinatorial arguments).

      If such a tool exists, I too would be interested in it. I think it rash to assume that the poster is looking for exploit automation.
  • by Anonymous Coward on Saturday April 19, 2003 @08:55PM (#5766682)
    The Academic Community, especially those strange AI people, have long sought complicated programs and machinery that could automate all of their work and projects, keep track of complicated "parameter sets, datasets, etc....".

    But what you are looking for, sir, is the cheap labor commonly known as a Graduate Student
    • Many of these "grads" [as they are commonly known] have INDEED been able to " 'execute' experiments and report their status", as well as "writing out results (available in multiple languages)".
    • The Graduate Student is often known for their abilities to create and distribute notes in lieu of bringing that onerous burden upon more high-ranking academic officials
    • ...you don't even have to dream about doing "clustered work" or "outputting results to spreadsheets, Matlab, LaTeX tables, etc....". These fancy machines can definitely do that...
    • Of course, there are several "graduate students" that provide a fancy front end (and rear end, for that matter). I think that I would agree with your assessment that they do not need to have that feature, although it might make your days a bit more... ermm... *pleasant* :-)
    • As well, most graduate students have the capability of performing "basic stats", although most don't have an extensive faculty for performing such calculations...
    • And don't you even worry about the price -- you'll see that they're quite affordable.
    To conclude, you say that "There's no reason such software should be limited to computer science (nothing I'm contemplating is very domain specific). I can imagine many disciplines that would benefit". I would wholeheartedly have to agree with you: just about every discipline can do more and see farther by standing on the backs of their graduate students.
    In fact, I'm afraid to report that you are a bit behind the times in this department as these "Graduate Student" devices are quite common at universities and research labs.
    • Ah, you see... there's the problem... I am, in fact, a cheap alternative to the much vaunted "Graduate Student". I'm a "Lazy Graduate Student" (TM), with slow update rates, poor accuracy, and long downtimes. Eventually, I'll probably break down completely into a "Professor", in which case someone will have to find some "Graduate Students" to get the work done...
    • Be sure to get a fun new head to go with your grad student:

      http://catalog.com/hopkins/text/head.html [catalog.com]

    • by quantaman ( 517394 ) on Saturday April 19, 2003 @10:09PM (#5766941)
      Of course, there are several "graduate students" that provide a fancy front end (and rear end, for that matter). I think that I would agree with your assessment that they do not need to have that feature, although it might make your days a bit more... ermm... *pleasant* :-)

      That does have its advantages, though you should be cautious. In my experience those models often have a large number of bugs in their systems and tend to be a lot more likely to pick up viruses as well.
      This shouldn't be a problem for most operations, but occasionally if you try to interface them with your other components you may find your other systems becoming infected as well. In extreme cases you may also find interfacing with these systems can cause additional child processes to be created. These child processes are extremely hard to get rid of; early on you may be able to simply kill them, but this command becomes extremely impractical after a few months of operation. These processes are known to take up huge amounts of resources and maintenance and often take the better part of 2 decades to subside (they're still present, but resource demands drop considerably). Of course, many of these risks can be alleviated by using a proper wrapper class while working with these "graduate student" systems.
    • You forget the primary advantage of graduate students: they can be programmed in natural language.
  • ROOT (Score:5, Informative)

    by kenthorvath ( 225950 ) on Saturday April 19, 2003 @09:20PM (#5766783)
    http://root.cern.ch/

    We experimental high-energy physics folk have been using it (and PAW) for some time. It offers scripting and histogramming and analysis and a bunch of other features. And it's open source. Check it out.

    • BTW, I posted a link to an addition to ROOT that probably makes it a little more suitable (http://roofit.sourceforge.net/).
  • by john_heidemann ( 104993 ) on Saturday April 19, 2003 @09:21PM (#5766787) Homepage

    I've been very happy using jdb (see below) to handle individual experiments, and directories and shell scripts to handle sets of experiments.

    JDB is a package of commands for manipulating flat-ASCII databases from shell scripts. JDB is useful to process medium amounts of data (with very little data you'd do it by hand, with megabytes you might want a real database). JDB is very good at doing things like:

    • extracting measurements from experimental output
    • re-examining data to address different hypotheses
    • joining data from different experiments
    • eliminating/detecting outliers
    • computing statistics on data (mean, confidence intervals, histograms, correlations)
    • reformatting data for graphing programs

    For more details, see http://www.isi.edu/~johnh/SOFTWARE/JDB/.
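
    If JDB itself isn't to hand, the same flat-ASCII habit is easy to approximate in a few lines of Python (this is not JDB's own syntax; the file name and column names here are assumptions): whitespace-delimited columns with a header row, filtered and summarized.

    import statistics

    def read_table(path):
        """Read a whitespace-delimited flat-ASCII table with a header row."""
        with open(path) as f:
            header = f.readline().split()
            return [dict(zip(header, line.split())) for line in f if line.strip()]

    rows = read_table("results.txt")                      # columns: dataset k accuracy
    accs = [float(r["accuracy"]) for r in rows if r["dataset"] == "iris"]
    print("mean accuracy on iris:", statistics.mean(accs))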

  • by Anonymous Coward

    http://sourceforge.net/projects/pythonlabtools/
  • With the slight re-grouping of the title phrases as above, I think we can all agree the answer is:

    FBI's Carnivore.

    (Well, that's the way the headline parsed out for me the first time I glanced at it...)
  • You seem to suggest that the specifics of the software used in the experiments themselves are too varied and engineered to respond to object management within the native environment.

    OK, so you take the management piece into a meta-environment like web e-commerce. Each iteration produces a transaction, essentially a line in a table containing the common meta-elements, and then you perform your management via linked queries on this data set, à la Napster.

    If all of your data engines are connected (Intranet), the o
  • tcltest (Score:3, Informative)

    by trb ( 8509 ) on Saturday April 19, 2003 @09:58PM (#5766900)
    It might not satisfy all your requirements out of the box, but could you put something together with tcltest [www.tcl.tk]?
  • It might take some work, but Eclipse from IBM has improved a great deal towards becoming a good environment for project management. It's geared towards projects written in Java, but there is a C/C++ Perspective plugin if you prefer...

    It's a good platform for managing a collection of custom Ant build scripts if you decide to go that direction (assuming you're in Java, of course...)

    If you'd prefer something more specialized, the plugin architecture isn't bad and could save some time with interface work. Especi
  • I was able to do almost everything on my thesis using open source tools, LaTeX on Linux, except when it came to data reduction - I was forced to use the crippled student version of SPSS. I would love to see a GNU clone of this functionality the way they have cloned Matlab.

  • by Anonymous Coward on Saturday April 19, 2003 @10:43PM (#5767074)
    What you describe does indeed sound like High Energy Physics.

    And the "middleware" you need are the GNU tools gluing together the specialized programs that do the specific things you want.

    We have been using unix for a long time, and many of us prefer the philosophy of combining small, targeted tools rather than a single monolithic package.

    I will repeat, and you can stop reading now if you want. The GNU tools, unix, and specialized scriptable programs are already the "middleware" you seek.

    If you are just missing some of the tools in the middle, here are the ones used in HEP. You might find more appropriate ones closer to whatever discipline you work in.

    All the basic unix text processing tools and shells.
    bash. csh. Perl. grep. sed. and so on.

    Filename schemes ranging from appropriate to clever to bizarre.
    (See other posts here)

    Make it so that all the inputs you want to change can be set on the command line or with an input steering text file.
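
    As a sketch of that steering-file idea (in Python rather than shell; the flag names and the [run] section are invented for illustration), defaults come from the steering text file and anything given on the command line wins:

    import argparse
    import configparser

    def get_settings(argv=None):
        """Defaults come from an INI-style steering file; command-line flags override them."""
        pre = argparse.ArgumentParser(add_help=False)
        pre.add_argument("--steering", help="path to a steering text file")
        known, _ = pre.parse_known_args(argv)

        defaults = {"cut": 2.5, "nevents": 10000}
        if known.steering:
            ini = configparser.ConfigParser()
            ini.read(known.steering)
            if ini.has_section("run"):
                defaults.update(ini["run"])          # values arrive as strings

        parser = argparse.ArgumentParser(parents=[pre], description="toy analysis driver")
        parser.add_argument("--cut", type=float, default=float(defaults["cut"]))
        parser.add_argument("--nevents", type=int, default=int(defaults["nevents"]))
        return parser.parse_args(argv)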

    The same tools, combined with some simple C code, produce formats for spreadsheets or PAW or ROOT or whatever visualization or post-processing thing you need done. That gets you ntuple and histogram support automatically, which might be all you need.

    Almost always I choose space delimited text for simple output to push into PAW, ROOT, or spreadsheets. I keep a directory of templates to help me out here.

    Some people use full blown databases to manage output. For a long time there have been databases specific to the HEP needs. I recently have started using XML-style data formats to encapsulate such things in text files if the resulting output is more complicated than a single line. You mention XDF, sure, that sounds like the same idea.

    CONDOR (U Wisconsin) has worked nicely for me for clustering and batch job submission when I need to tool through 100 data files or 100 different parameter lists on tens of computers. The standard unix "at" is good enough in a pinch if you play on only 5 computers or so.

    HEP folks use things like PAW and ROOT (find them at CERN), which contain many statistical analysis tools and monstrous computation algorithms. Or at least ntuples, histograms, averages, and standard deviations. You could go commercial or use the GSL here if you prefer such things.

    CVS or similar to take care of code versions.
    Don't forget to comment your code.

    We write our own code and compile from fortran or c or c++ for most everything else.

    Output all plots to postscript or eps.

    LaTeX is scriptable.

    And use shells, grep, perl to glue it all together. Did I mention those already?
    I get a good night's sleep more often than not.

    And decide what to do next after coffee the following morning.
    This is where you put your brain, and if you have done the above well enough, this is where you spend most of your time.
    The answer I get each morning (as another post suggests) is always so surprising that I need to start from scratch anyway.

    I bet that is what you are doing already. Probably no monolithic software will be as efficient as that in a dynamic research environment.

    What did I miss from your question?

    Oh, yes. Get a ten-pack of computation notebooks with 11 3/4 x 9 1/4 inch pages (if you print things with standard US letter paper). And lots of pens. And scotch tape to tape plots into that notebook. Laser printer and photocopier. Post-it notes to remind yourself what you wanted to do next (or e-mail memos to yourself). Maybe I should have listed this first.

    Good luck.
    • I wish I could mod the above up; great advice. I might not have posted below if I'd seen it.

      I'd emphasize that using a scriptable graphing/postprocessing program (I used to use gnuplot and octave, there are many interesting options more widely documented now) is really key.

      Nothing like starting a script and being able to walk away from it for the afternoon, or the night, or the weekend...
    • Heh... you've pretty much described what I'm using now. I'm most interested in what you say about using XML for output data. Are you using any particular DTD/Schema or just cooking up something for each occasion?

      I agree that a "monolithic" solution isn't going to do much for you. That's why I'm thinking more in terms of middleware, something that will help me bind these tools together. I envision that some scripting would be necessary to bind in new tools and experimental software for each new project,
      • The really killer advantage to good middleware for this kind of thing would be improvements in the user experience and relief from the drudgery of learning yet another arcane sub-language to get the results you want.

        I think you're asking for a very powerful, very well-designed IDE with good integration with configuration management, software instrumentation, etc.
    • I wonder if I should post this here again... oh well, I will. http://roofit.sourceforge.net/ is based on ROOT and adds some of what is wished for.
  • You want the computer to run the experiment, catalog all the results and present them in a nice format. Maybe when it's done it can put your name on the results and publish it for you too.

    Just Kidding ;o)

    But if you're determined to let the computer do the work, perhaps some form of genetic algorithm could be applied here. If you can define your domain into something that can be broken down well enough and tested for selection criteria, there are lots of tools and research available. If you have an API to
  • schema (Score:2, Informative)

    by Tablizer ( 95088 )
    Draft relational schema:

    Table: experiments
    ----
    exprmntID
    exprmntWhen // date-time stamp
    exprmntDescr // description
    outcome

    Table: params
    ----
    paramID // auto-num
    exprmntRef // foreign key to experiments table
    paramName
    paramValue

    Table: dataSet
    ----
    dataSetID // auto-num
    filePath
    datasetDescr
    isGenerated // "True" if from experiment
    CRC // ASCII check-sum to make sure not changed

    Table: dataSetUsed
    ----
    exprmntRef // foreign key to experiments table
    dataSetRef // foreign key to dataSet table

    Table: softwareVers
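
    A sketch of the draft above as SQLite, in Python's standard library (the truncated softwareVers table is left out rather than guessed at):

    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS experiments (
        exprmntID    INTEGER PRIMARY KEY,
        exprmntWhen  TEXT,   -- date-time stamp
        exprmntDescr TEXT,   -- description
        outcome      TEXT
    );
    CREATE TABLE IF NOT EXISTS params (
        paramID    INTEGER PRIMARY KEY,   -- auto-num
        exprmntRef INTEGER REFERENCES experiments(exprmntID),
        paramName  TEXT,
        paramValue TEXT
    );
    CREATE TABLE IF NOT EXISTS dataSet (
        dataSetID    INTEGER PRIMARY KEY, -- auto-num
        filePath     TEXT,
        datasetDescr TEXT,
        isGenerated  INTEGER,             -- 1 if produced by an experiment
        CRC          TEXT                 -- checksum to make sure it hasn't changed
    );
    CREATE TABLE IF NOT EXISTS dataSetUsed (
        exprmntRef INTEGER REFERENCES experiments(exprmntID),
        dataSetRef INTEGER REFERENCES dataSet(dataSetID)
    );
    """

    def open_db(path="experiments.db"):
        conn = sqlite3.connect(path)
        conn.executescript(SCHEMA)
        return conn
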
    • I agree that defining the schema is a good place to start, and that a db backend is the "right thing" to do. For this app though, Postgresql offers some attractive features such as inheritance, stored procedures, and an eclectic set of datatypes.
      • Postgresql offers some attractive features such as inheritance, stored procedures, and an eclectic set of datatypes.

        I never was much of a fan of schema inheritance. Most examples I have seen were based on bad designs IMO. And funky datatypes decrease porting and sharing of the data to other DBs.

        Plus, I think they wanted something "lite" in the DB department based on one comment, and Postgres has a bit more of a learning curve.
  • It looks like you need - da da da da! - [b]EXTREME[/b] PROGRAMMING!
  • The features I would want would be:

    management of all details of an experiment, including parameter sets, datasets, and the resulting data


    This can be handled by an ad-hoc database, a flat file in most cases. If you were a Windows power user, you'd spend an hour or two putting together something in Access for it.

    ability to "execute" experiments and report their status

    make with a little scripting, or whatever you use as a build system.

    an API for obtaining parameter values and writing out results (a
  • by g4dget ( 579145 ) on Sunday April 20, 2003 @01:35AM (#5767626)
    Managing and organizing really huge amounts of data is one of the big strengths of UNIX--you just have to learn how to use it well:
    • Consider using "make" or "mk" for automating complex processing steps. "make" also lets you parallelize complex experiments (by figuring out which jobs can be run safely in parallel), and some versions of "make" are capable of dealing with compute clusters. If you need to try something with multiple parameter values, write make rules and put the parameter values in there as dependencies.
    • Organize your data into directory hierarchies; pick meaningful and self-explanatory names. Don't let directories become too big. Keep related data files and results together in the same directory, and keep different data files in different directories.
    • Keep scripts and programs along with the data, not in completely separate source trees.
    • Write scripts that summarize the data and give them obvious names; you can figure out later from that what needs looking at and what it means.
    • Use textual data files as much as possible and have your programs add information to those files as comments that document what they did.
    • If you generate an important result, keep a snapshot of the sources that generated it along with it.
    • Leave copious README files everywhere, containing notes to yourself, so that you can figure out what you did.
    • If you generate junk during some trial runs, delete it, or at least rename it to something like "results.junk", otherwise you'll trip over it later.
    • Back things up.
    • Learn the core UNIX command line tools, tools like "sort", "uniq", "awk", "cut", "paste", "find", "xargs", etc.; they are really powerful. You probably also want to learn Perl, but don't get into the habit of trying to do everything in Perl--the traditional UNIX tools are often simpler.
    • If you are using Windows, switch to UNIX. Windows may be good for starting up MS Office, but it is no good for this sort of thing. If you absolutely must use Windows for data analysis, stick your data into a relational database or Excel spreadsheets.
    • Learn to use environment variables.
    • Learn to use the Bourne/Korn/Bash shell; the C-shell is no good for this sort of thing.
    • For certain kinds of automation, expect is also very handy.
    • For visualizing data, write scripts that analyze your data and automatically generate the plots/graphs--you will run them again and again.

    Distribution of jobs, running things with multiple parameter values, etc., all can be handled smoothly from the shell. This is really the sort of thing that UNIX was designed for, and the entire UNIX environment is your "experiment management software".
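
    As an illustration of that last point, here is a sketch in Python rather than shell (the program name and flags are invented): it generates every parameter combination and runs the jobs in parallel, one output file per run.

    import itertools
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    # Every combination of these values becomes one job.
    GRID = {"--alpha": ["0.1", "0.5", "1.0"], "--dataset": ["spam.data", "news.data"]}

    def one_run(pairs):
        out = "run_" + "_".join(v.replace(".", "") for _, v in pairs) + ".out"
        with open(out, "w") as f:
            subprocess.run(["./my_algorithm", *itertools.chain.from_iterable(pairs)],
                           stdout=f, check=False)
        return out

    if __name__ == "__main__":
        combos = [list(zip(GRID, values)) for values in itertools.product(*GRID.values())]
        with ProcessPoolExecutor() as pool:       # swap in a cluster submitter here if needed
            for outfile in pool.map(one_run, combos):
                print("finished", outfile)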

    • If you are using Windows, switch to UNIX. Windows may be good for starting up MS Office, but it is no good for this sort of thing. If you absolutely must use Windows for data analysis, stick your data into a relational database or Excel spreadsheets.

      What is it intrinsically about Windows that makes it "no good for this sort of thing"? Windows provides all the system services you need to do these tasks, and all the tools you mention are available natively for Windows. Come to think of it, they're avai

      • all the tools you mention are available natively for Windows

        Yes, but are they built-in?

        GNU just works out of the box. Windows, UNIX, and any other commercial solution just seem to miss the whole point of having a computer: processing data. Which is why they forget to include a spreadsheet and a database and a compiler and scripting languages and various other tools that are a requirement for this data processing we all love to do so much.
        • As for working out-of-the-box, I will go ahead and answer your rhetorical question: no. Windows comes with nearly none of those tools installed. But you're changing the subject. We were not discussing convenience, we were discussing ability. They said (or certainly strongly implied) that Windows was not capable of managing data in the method he suggested. I completely disagree.

          Yes, it's inconvenient for a Windows user to download and install the extra applications (a better shell, all the command-li

            • I agree with you, but I'm still playing devil's advocate here :)

              Windows standard edition alone cannot do what you're talking about. You have to have their professional edition with SQL, Excel, etc. This is several hundred dollars in additional software on top of an expensive "professional" version so you have a chance at getting stability. You wouldn't dare attempt to manage important data on a home version of a Microsoft OS. When your standard RedHat 9 download includes EVERYTHING you need and is
      • What is it intrinsically about Windows that makes it "no good for this sort of thing"? Windows provides all the system services you need to do these tasks, and all the tools you mention are available natively for Windows.

        So, you are saying that you have never actually run large scale computational experiments, and you don't actually have any recommendations for how to run them on Windows, but based on a list of Windows "system services" you think it should be pretty good for that.

        Well, I have run large

        • I didn't say one way or the other, but no, I have never actually run large scale computational experiments. And, no, I didn't make any recommendations on how to run them on Windows. That's because your original posting did a good job of describing one approach to solving this problem. My point was to correct your error in saying Windows was "no good for this sort of thing". It is entirely capable of doing "this sort of thing".

          What I meant by "system services" was services offered to user programs by

        • In my experience, GUIs apply fundamental restrictions to data manipulation due to the graphically-dominated input and output systems. There are lots of neat programs like SAS for doing data manipulation through a GUI, but every worthwhile tool I've ever used on Windows also offered a CLI. When I've encountered experts with these applications, they've always preferred the CLI. Perhaps they prefer the CLI because that's what they learned with, but I expect it is because it works better for them. Matlab, SAS,
        • For what it's worth:
          • Windows never had anything called "Presentation Manager"--that was the GUI shell for OS/2. Windows Explorer is the default GUI file/directory navigation tool, equivalent to GMC or Nautilus under GNOME, or Konqueror under KDE. The Windows GUI is simply referred to as USER, after the implementing library (originally user.dll, these days user32.dll).
          • Focus is the same in Windows as it ever was. There is the Microsoft Power Tool "Xmouse", which makes the mouse behave in a vaguely
          • Are you sure that Windows never had a Presentation Manager? I'm glad you knew what I was getting at. But I thought Windows (maybe 3.0, 3.1, or 3.11) had a pm process running somewhere.

            I played with Xmouse when I was using Windows, and I agree with your assessment. My wife has messed with various virtual-screen switchers in Windows, and all of them seem to have problems with some "too clever" apps, just like Xmouse. I've seen a lot of folks at my university play with various X "servers" (well, mostly Hum
            • I bet you're thinking of "Program Manager", the Windows 3.x (and before) program launcher application... precursor to the Start Menu.

              As for messing with Windows to make it like UNIX, that depends on what you mean. Trying to make the GUI behave more like X is a fruitless endeavor. But I use zsh and Python every day. Windows' glass-TTY sucks for cut and paste, but it works just fine for everything else.

    • Word!

      Make is fantastic for organizational purposes. Makefiles are basically a language for describing dependencies. If you have a directory-based structure, it works wonderfully. You just have to make sure your stuff is orthogonal :-)
  • I've been working on this, though it's not yet released as open source. If you'd like to try my system out it might speed up the release date. It's a web application written in python with a postgresql backend. Give me a ring at jmichelz at mail dot com for more details.

    John
  • I do the same kind of thing with computational experiments and I'd like to agree with those who've suggested using a variety of tools.

    It's not strange, for example, for me to use Python to generate the actual program runs, the shell to actually manage the runs and move the input/output data files, and then any of several graphics programs to handle the output (and often output graphs are done automatically as the programs run).

    This gives me a pile of flexibility which is often useful. For instance, when doin

    • XML and XSLT are also becoming increasingly useful in describing input data, recording results and keeping track of things done.

      Those of you that are finding XML useful for this sort of thing, what tools and ideas are you using?

      • I'm increasingly using xml to save information for a specific run of the program. Usually this is ad-hoc and the markup tends to be simple enough so that the program itself, or the driver will handle building it.

        Then I use xslt (again, ad hoc) to digest this and produce output (often in html these days) that I can look at.

        I could do it all in other ways - the advantage of xml is that I can describe the markup somewhere so later on I'll not forget what the data actually was, and that I can use XSLT wit
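
        For what it's worth, a minimal version of that ad-hoc per-run XML using only the Python standard library (the element names are invented; this is only the idea, not any particular DTD or Schema):

        import xml.etree.ElementTree as ET

        def write_run_record(path, params, results):
            """Save one run's parameters and results as a small ad-hoc XML file."""
            run = ET.Element("run")
            params_el = ET.SubElement(run, "parameters")
            for name, value in params.items():
                ET.SubElement(params_el, "param", name=name).text = str(value)
            results_el = ET.SubElement(run, "results")
            for name, value in results.items():
                ET.SubElement(results_el, "result", name=name).text = str(value)
            ET.ElementTree(run).write(path, encoding="utf-8", xml_declaration=True)

        For example, write_run_record("run42.xml", {"alpha": 0.1}, {"accuracy": 0.91}) leaves a small record that an XSLT stylesheet can later digest into HTML.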

  • Experimental tools (Score:2, Informative)

    by clusterboy ( 667260 )
    Have you considered using Python? Researchers at DIKU [www.diku.dk] have created a benchmark tool [cphstl.dk]. It seems useful. Python is also excellent for gluing modules together.
    A well-thought-out example of how to set up your code for experimental work is the Lemur [cmu.edu] toolkit from CMU. This toolkit has a concept of "parameter" files that is very handy. :) Generally I would recommend against exporting directly to spreadsheet formats - I tend to export to flat files (with field separators). Postgres' COPY command is extremely h [cmu.edu]
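
    A tiny sketch of that flat-file habit in Python (the file and column names are placeholders): append one tab-separated row per run, writing the header only when the file is new.

    import csv
    import os

    def append_result(path, fields, row):
        """Append one run's numbers; write the header only if the file is new."""
        new_file = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields, delimiter="\t")
            if new_file:
                writer.writeheader()
            writer.writerow(row)

    For example, append_result("results.tsv", ["k", "accuracy"], {"k": 5, "accuracy": 0.91}) adds one line that a spreadsheet, R, or gnuplot can read directly.
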
  • SMIRP (Score:2, Informative)

    by sco08y ( 615665 )
    I'm one of the principal designers of a system called SMIRP [drexel.edu].

    It started out as a very simple system that didn't act as much more than a set of tables with some simple linking structures. On top of that is an alerting system (so you can track new experiments being done), a full-text index, bots for automating certain procedures, and a system for transferring data to Excel.

    What's surprising is that for the most part, the underlying structure stayed exactly the same even though we've been running all the oper
  • Funding agencies in the USA (NSF, NIH) and Europe have recently decided to target the construction of such software, and many competing projects have been given grants, most of which involve the production of open source software.

    Relevant keywords are "eScience", "Experimental Data Management", "Experimental Metadata", and to some extent "Grid Computing".

    Here is a paper which lays out the program of research [semanticgrid.org].

    I work for one such NSF & NIH funded project [fmridc.org] at Dartmouth College. We're developing such a tool [fmridc.org]: Java-based, completely open, available at sourceforge, currently in alpha, to be released for fMRI use in July, but designed from the start to be generalizable for all of experimental science. This is built on top of a pre-existing framework [stanford.edu] for semantic data management and modeling from Stanford.

    I'll try to list some of the features relevant to your needs:

    • the thing will organize all your data across all experiments and sports a nice Java API, annotations, a set of interchangeable & sophisticated query engines, and Java plugins for supporting, among other things, application-specific tasks, application-specific rendering widgets for data, and new backend data formats.
    • currently supported backend formats include: RDF, DAML+OIL, XML, text files, and SQL databases.
    • we should have cluster job submission support integrated in by July, but it depends on your cluster set-up. Currently this is presented to the user by way of executing "processing pipelines" for data. If this metaphor doesn't work for you, you may have to write some additional code for us!
    • since the experimental designs are represented in a prolog-style knowledge-base, it would be very simple to put some intelligence in about how to "run" or "execute" a given class of experimental designs and do a lot of automatic reasoning or planning re: dependencies. In fact, I think that someone at Stanford has already done this, but I'd have to look into it.

    Finally, I would like to stress that our project is one of many, and that if it doesn't meet your needs, within a year there will be many competing "eScience" toolkits.

    You may contact me for more information by reversing the following string: "ude.htuomtrad@exj".

  • by spirality ( 188417 ) on Sunday April 20, 2003 @11:26AM (#5768733) Homepage

    The Computer Aided Engineering (CAE) world has much the same problem you do.

    They model their products with several different analysis codes, each with its own input and output format. This generates gobs of data, which is currently managed in ad hoc ways, is not easy to integrate with other results, and wastes the time of lots of engineers.

    The product we've come up with to manage the models, the process for executing the models, and the data generated by running the models is a software framework called CoMeT [cometsolutions.com] (Computational Modeling Toolkit).

    We are also capable of managing different versions of the model, parameter studies, and some basic data mining. The whole thing is scriptable with Scheme.

    Unfortunately, we are a commercial software company, and the software is still under development, although everything I mentioned above can currently be done. We are mostly working on a front end now, although we still need to make a few improvements to the framework and add support for many analysis codes.

    The reason I'm replying to this is that your list of requirements is a perfect subset of ours. We are aiming our product at CAE in the mechanical and electrical domains (Mechatronics).

    I know, it's not free, but we feel we've done some very innovative things and it has taken several people many years of low pay to get this far. We really want to make some money off it eventually.... :)

    If you want more information check out the web-site or email me here. We're in need of proving this technology in a production environment so maybe we can work something out.

    -Craig.

  • Might be suitable? (Score:2, Informative)

    by gowdy ( 135717 )
    http://roofit.sourceforge.net/
  • by Anonymous Coward
    more experienced programmers.

    I think someone told all the computer scientists that there's a theoretical way to write a program that does everything. I'm a computer scientist, and it's clearly impossible to generalize that greatly.

    In fact, it is so dangerous that the general-purpose OS is the number one cause of our downward spiral in software quality. It took BeOS a short time to write their general purpose OS. I think it's silly to think it's that monolithic of a project tha
  • ExpLab (Score:2, Informative)

    I'm in precisely the same situation as Alea, so I read the suggestions here with considerable interest.

    I'd like to mention ExpLab [sourceforge.net].

    Though I haven't used ExpLab yet, these folks have been associated with other very high quality work (CGAL) so I expect good things. Here are three goals they list for the project:

    • to provide a simple way to set up and run computational experiments;
    • to provide a means of automatically documenting the environment in which an experiment is run so the experiment can be easil
  • You seem to be talking about a process automation tool, aka workflow management. If you're doing research you might have a look at DSTC [dstc.edu.au]'s Breeze [dstc.edu.au], free for non-commercial/academic use.

    Ralf

  • Stop! Don't re-invent that wheel! The work has already been done... The system you desire was developed at UC Berkeley nearly ten years ago and has been in production at places like JPL and the Langley Research Center since 1997. This software contains all the features you desire and a lot more, including a fully distributed processing system, a collaborative distribution system, archive hooks, and even robust security features. And, conveniently, it doesn't contain any of the features you said you
