Open Source And Genetics 81

UnanimousCoward writes "SFGate has an article about some researchers pushing for the open sourcing of genetic research software. Of course, the pros and cons are debated." It's the age-old debate; what follows the heart of the scientific method more? Peer review, or getting the information out as fast as possible?
  • Redundant? (Score:2, Interesting)

    Didn't we see this article yesterday right here []?

  • for dummies... (Score:2, Insightful)

    If it gets open sourced can we expect to find a book on our bookshelves titled "Genetic Engineering for Dummies..." and "Learn Genetic Engineering in 21 days"?
    • expect to find a book on our bookshelves titled "Genetic Engineering for Dummies

      Yes, accompanied by, "Buying 1.5 Million Dollar Genetic Sequencing Equipment on Which to Run Open Source Sequencing Software for Dummies"

      It's not like you run the software on your PC while sucking on a serial cable.

    • Re:for dummies... (Score:2, Informative)

      by cbare ( 313467 )
      Well, we already have these:
    • Re:for dummies... (Score:1, Informative)

      by Anonymous Coward
      There are already plenty of "for dummies" books on the subject []

      For a summary, look at []
  • Scientific Method (Score:2, Interesting)

    by tbone1 ( 309237 )
    It's the age-old debate; what follows the heart of the scientific method more? Peer review, or getting the information out as fast as possible?

    This is just my opinion (YMMV, IANAS, etc), but I would argue for peer review, insofar as it relates to the ability to duplicate results and experiments. My thinking here is that the ability to duplicate results is at the heart of science. If others can't duplicate them, the results aren't accepted.

    A grain of salt is provided for those who disagree.

    • by Bikku ( 531345 )
      Hear, hear.

      Speed is of no interest wrt scientific objectives (although it may be of some value in engineering - the application of scientific knowledge for practical benefit). The biggest problem in trying to figure out how the world works, and get some reliable "knowledge", is that nature is extremely subtle and we humans are very good at fooling ourselves into believing our latest theories are actually true descriptions of nature. Hence the rigorous insistence on peer review as just one more mechanism to try to ensure researchers are not inadvertently deluding themselves.

      I can't understand why anyone would think "It's the age-old debate". Seems crystal clear to me that the value of science as a method of generating knowledge lies in not making errors (and does not lie in making errors quickly).

      Re-read The Feynman Lectures for a refresher on this?

  • Redundancy... (Score:3, Insightful)

    by metlin ( 258108 ) on Monday November 26, 2001 @03:55PM (#2615109) Journal
    Despite all the arguments favouring IP rights et al, I'd say that most of the research has to be open-sourced, so to speak.

    There is no point in re-inventing the wheel. It becomes ridiculous when 10 different companies independently pour enormous investments into the same project, when they could have tackled 10 different projects to begin with.

    IP has its place, no doubt, but overdoing it leads to the commercialization of science and defeats its very purpose. Engaged in silly patent and copyright wars, we have no doubt postponed many a useful invention by at least a few decades, IMHO.
    • Re:Redundancy... (Score:2, Insightful)

      by Anonymous Coward
      I think IP rights HELP the dissemination of knowledge.

      As you mentioned, property rights (basically the right to exclude others from making, using, or selling your property) in knowledge do seem at first to be a problem. Everyone should be allowed to benefit, and widespread knowledge leads to more discoveries using that knowledge.

      But unfortunately, today lots of "knowledge" comes at a high investment cost. To get many of the products we want, we have to let the maker get a little money out of the deal. So we grant them property rights and tell them they can profit exclusively from this idea for a finite period of time, both to recoup the investment they made (which benefited us all but which the maker alone paid for) and to profit (to create an incentive for the maker to make more useful things in the future).

      In fact, granting IP rights allows inventors, etc. to publish their ideas when they often would be forced to keep them secret for fear of others appropriating their ideas. The vast majority of trade secrets are ideas that do not meet the threshold for patent protection. So the only way the maker can protect it is to keep it secret.
      • Re:Redundancy... (Score:1, Insightful)

        by Anonymous Coward
        no. most ideas are patent protected AND kept a trade secret. look at all the patents out there -- how many actually give the necessary info to duplicate the invention ? not many. and this is a BAD THING.
        • ___ "no. most ideas are patent protected AND kept a trade secret. look at all the patents out there -- how many actually give the necessary info to duplicate the invention ? not many. and this is a BAD THING." ___

          I must point out that your above post is completely incorrect. One of the requirements of obtaining a patent is to disclose the invention to the public. Patents are published; you can go to [] and find them. To get a patent, the applicant must disclose enough information so that a person of ordinary skill in that particular "art" can build the invention. So if you invent and try to patent a device that uses knowledge of semiconductor processing, those who understand the current science and technology of semiconductor processing must be able to read your patent and build your invention. This is called "enablement," and is a requirement of the patent law statute (35 USC, sec. 112, I think). If you don't disclose enough to allow someone else to build your invention, you cannot legally obtain a patent for it.

          This is part of the quid pro quo of patent law. The inventor gives something to society by creating something that did not previously exist. In return, the people (via the government) give the inventor property rights to the invention (for a limited time--17 years, I think). This is to provide a profit incentive for the inventor to create inventions and recoup investment costs. With property rights, an inventor (more often a corporate assignee who paid the inventor to invent) can exclude others from making, using, or selling the patented idea.

          The point of all this is that patents most definitely are NOT secret. Part of that quid pro quo is that the inventor must tell everyone, up front, exactly how to build the thing and make it work. In fact, it is a legal requirement that an inventor disclose the "best mode" of practicing the invention.
    • Good research is best accomplished by having access to all information, so that you don't have to re-invent the wheel or redo the experiments.

      So why do some companies reproduce research that 10 other groups/companies are doing?
      1. To see if the work can actually be reproduced. Patents are notoriously unreliable for their reproducibility in the chemical sector.
      2. To learn more about the technology, to either improve it or use it for another project. By doing the experiments yourself, you get a lot of knowledge that is never written down on paper.
      3. Having a completely different group of scientists working on a research project can lead to a completely different way of interpreting the results. Therefore, the conclusions and applications of the research are different. (Diversity of thought!)
      4. The idiots forgot to do a literature search before starting their project. This unfortunately happens more often than it should.

      At first glance, it does look like reproducing previously done work is a waste of time. However, given the advantages of the first 3 points, it will continue to happen, and for the benefit of science, it probably should.
  • Macs are used extensively in some areas of genetic research, to put some perspective on the open source involvement.
    • Yeah, but we [] use Open Source software extensively. We use Linux quite a bit (though not as much as Solaris), Perl for damn near everything, and Apache for our web servers.

      We use Macs too. For image processing. And one particular gene-specific application. But that totals maybe 10 machines. We have more Linux desktops. And more Penguins (4) in the data center than Apples (0).

  • One of the major arguments around open source software is the question of "free speech or free beer". The distinction is frequently drawn by using the words libre to indicate the freedom of action in OSS (e.g. to distribute modified copies) and gratis to indicate freedom from cost.

    LSS is very much libre. Users of LSS packages are free to distribute copies without reference to the authors, make modifications and distribute the modified copies. They may even execute the software without charge, provided that they do it for a purpose for which the authors do not require payment, and the definition of LSS ensures that any LSS package can be executed for experimental purposes. However LSS is not always gratis. Some uses of the software will require payment.

    The word "liberal" is derived from "libre". It denotes freedom of action. Hence "Liberal Source Software". The source is available, and you can do what you like with it.

  • If a researcher at a university is being funded privately, there is no problem with the research and software staying closed. If public tax money is used, I think a BSD-style license should be used. I do think there is a valid point in not using the GPL in these cases and using a BSD-style license instead.
  • Right now with the boom in genetics this software is top-dollar in some cases. I doubt these emerging companies would be willing to part with their profit margins in the name of open source.
  • It's important to get information out as quickly as possible, especially medical or scientific knowledge that can be quickly applied for our betterment. That said, peer review is incredibly important to ensure that the information being disseminated isn't MISinformation.

    That's my brief 2 cents.
  • by phranking ( 134197 ) on Monday November 26, 2001 @04:08PM (#2615192)
    Be more clear in the writeup: the subject of the article is a petition to open-source any code that results from federal grant money to researchers in a university setting. It is *not* about open-sourcing all bioinformatics software, everywhere. In other words, code developed in a private setting, independent of university grant money - who cares? (at least in this case). Alternatively, grad students who strike it rich on code they wrote while on the government dole? Open source *their* work, since it was already paid for by the taxpayers. I don't see anything wrong with this particular distinction.
  • In our times people need to understand that information is always most useful when free. Only when some piece of the mind's work is released for anyone to look at can that information achieve its full potential of changing society. Every single time that someone, somewhere, is hiding information, that person is hurting everyone else in this world. Sure, I don't mean personal information and the kind of stuff that should not be known to others (because it would not be useful), but scientific information... that's what makes our world go forward, really.

    The true beauty of the free software and open source movements lies precisely in that kind of freedom.

    This genetic software must be released as soon as possible. Not doing so is a way of harming society.
  • If someone patents your DNA sequence, and you are a surrogate mother, do you have to pay royalties on the child? (Since you are giving birth for profit.)
  • I don't think genetic code and source code should be treated the same. After all, malicious source code only hurts computers and inconveniences users. Genetic code affects us on a much more real level.

    I can release code that has a bug in it and fix it pretty quickly. If it causes problems, they are only computer problems. Meddling with life is not so trivial, and the consequences can be fatal. It is for this reason that I am against open source genetics.
    • Releasing genetic code into the public domain won't help bioterrorists; just knowing the sequence of a piece of DNA doesn't tell a person what it does or how that code can be used to alter other organisms, pathogenic or otherwise. The majority of academic genetics is already open source; just look at [], [], and []. Also, software and DNA sequences cannot be equated: computer code and the genetic code are not the same thing. We by no means understand the genetic code well enough yet for that sort of thing to be possible, and it is certainly unlikely that we will in the near future.
  • That are given to offer an incentive to produce a product. I have no problem with universities patenting research for funding. There's no way my paltry tuition can pay for the SONET ring around my campus, or the high-end multiprocessing computers that seem to be in abundance (BYU). I do, however, take exception to the idea that the HUMAN genetic code can be licensed and sold. I don't see a problem with that researcher going off on his own in his spare time and developing an alternate free source for his genetics program. However, if it is the work of many interested parties (corporations, individual students, and university admin), then it's just up to each person to do what they are able on their own. No one in open source makes software on company time and expects to give it away for free. In general, a researcher is employed by the university; in this specific case, however, he has them by the hoary beard, because they agreed to a contract that his work would be open - a contract as solid as if he had been required to patent all his work.
  • "It's the age-old debate; what follows the heart of the scientific method more? Peer review, or getting the information out as fast as possible? "

    Peer review and getting the information out there are both important. Honesty in publication is most important. If your work has not been peer reviewed, then you should just say so.

    There is also a danger in peer reviewed work being too easily accepted by people even though the work itself is shoddy.
  • Why not both? Quick publication, with results clearly marked as unconfirmed or waiting to be replicated, would accomplish both. It would also provide a quick source of worthwhile experiments: repeating previous successes and confirming them.
  • This (repetition) has happened a number of times since I started reading slashdot.

    I don't get the meaning. I know the faq says that this can happen, but sometimes I just wonder if d'man wants us to read and give more feedback, perhaps? What seems worse, is that then the moderators waste a bunch of points punishing people who complain.

    Perhaps we are being "herded" towards a goal? While that's cool, I mean, you are d'man, after all, I wish you'd just say, "Hey, could y'all take another look at HERE and give some additional feedback? We need more profile data on some of you."

  • by rice_burners_suck ( 243660 ) on Monday November 26, 2001 @04:23PM (#2615315)


    Software for research in genetics, biology, fluid dynamics, astronomy and any other subject that requires such colossal amounts of computations should, in my opinion, be open sourced. This way, several things can take place:

    • While some folks are using the software, others can optimize it for increased speed. One of the things that bothers me when I read about some great new supercomputer is wondering how many of those bajillion cycles per second go to waste because of nonoptimal computation loops. A small change here will make a large difference in the amount of time it takes to achieve results.
    • The software can be analysed and broken up into distributed programs which can be run on millions of computers worldwide, a la SETI@home.
    • Distributed programs can be analysed and combined into one monolithic program that runs on a supercomputer.
    • Scientists (or programmers) can add features to these programs, making them serve multiple similar purposes. Or, scientists can pick out a specific computation that they need done on huge amounts of data, and save countless hours, days, or even months that would have been spent computing unnecessary data.

    Well, you get the picture. All of this becomes incredibly expensive with closed source software. Of course, nobody said it has to be free software. Obtaining the source code could very well require an NDA, if that's what will float the developer's boat.
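    The loop-tuning point can be made concrete with a toy sketch. This is purely illustrative (the function names and data are invented, not taken from any real genetics package): the same reduction written naively, then restructured so the inner loop runs in optimized built-ins.

```python
# Hypothetical example: two ways to write the same inner-loop reduction.

def dot_naive(a, b):
    # Indexes both lists on every iteration; all work happens in
    # interpreted bytecode.
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_tuned(a, b):
    # zip() removes the repeated indexing and sum() drives the loop
    # from C; same arithmetic, fewer wasted cycles.
    return sum(x * y for x, y in zip(a, b))

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
assert dot_naive(a, b) == dot_tuned(a, b) == 32.0
```

    Both versions compute the same result; the second simply leaves less of the machine idle, which is exactly the kind of change outside contributors can make once the source is open.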


    • Some things don't work so well hugely distributed. Your example, fluid dynamics, for instance.

      Most fluid dynamics simulations (at least the ones that I helped do) involve lots of I/O. Hugely distributed schemes don't work for anything requiring huge amounts of I/O. They might work in a beowulf-type setting (our fluid dynamics stuff split up across nodes fairly well), but there is a really large amount of traffic, which won't work when people are on the end of a 56k modem. For projects like [], there is very little traffic compared to the amount of computation. Similar for seti@home.

      Most things that require large amounts of computation, though, also require large amounts of bandwidth to move through large amounts of data.
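      A quick back-of-the-envelope sketch captures the distinction being drawn here. All of the numbers below are invented assumptions for illustration, not measurements:

```python
# Estimate whether a workload distributes well over slow links by
# comparing time spent moving data to time spent computing.

def comm_to_compute_ratio(bytes_moved, flops, link_bps, flops_per_sec):
    """Ratio of communication time to computation time.
    Values well below 1.0 suggest the job farms out well."""
    t_comm = (bytes_moved * 8) / link_bps      # seconds on the wire
    t_comp = flops / flops_per_sec             # seconds of number-crunching
    return t_comm / t_comp

# A SETI@home-style work unit: a few hundred KB of data, hours of math.
seti_like = comm_to_compute_ratio(350_000, 3e12, 56_000, 1e9)

# A CFD-style timestep: big boundary exchanges, comparatively little math.
cfd_like = comm_to_compute_ratio(50_000_000, 1e9, 56_000, 1e9)

assert seti_like < 1 < cfd_like
```

      On these assumed numbers, the SETI-style job spends far more time computing than communicating, while the fluid-dynamics-style job is dominated by data movement over a 56k link.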

  • If public funds were used to finance the research, then it should be public information. The only exceptions should be driven by security concerns. If private monies are put into the mix, let us thank the donor for their donation.

    We are the public and our tax dollars are used to generate this information. We own it.

    Give it to us at the bare cost of distribution.
  • by DaoudaW ( 533025 ) on Monday November 26, 2001 @04:33PM (#2615395)
    Unlike scientists who keep research secret until it is published in a peer-review journal, some software developers -- who get little credit when their code leads to a genetic breakthrough -- want to share their work as soon as it leaves their keyboards.

    It's an old debate in the world of computing -- and a new culture clash in bioinformatics...

    It seems like there is some confusion here. Sure there is some time-lag between the research and its publication, but peer-reviewed journals are in fact similar to bugzilla.

    The problem being addressed by the petition isn't what is published in peer-reviewed journals; it's what isn't being published. Making scientific techniques proprietary not only slows the advance of science, but also takes technology which could benefit the public and allows a few people to benefit disproportionately to their contribution.
    • "The problem being addressed by the petition isn't what is published in peer-reviewed journals, its what isn't being published."

      There are several reasons for scientific research not being published. Sometimes it's because it is tied up in IP issues. Other times it's because the work did not pass peer review and the original researchers decided not to press the issue. Or the original researchers decided not to do additional research to answer the reviewers' comments. Finally, it may be because the experiments did not work. The sheer amount of information that is not made public because the experiment was a failure is staggering. Going through all of it, though, may or may not be of benefit.

      All that being said, I do agree that making scientific techniques proprietary does slow the advance of science. However, there are cases where you don't want the technique to become common knowledge. The Manhattan Project is a perfect example of this.
  • I once obtained RNA secondary structure prediction software from Washington University. Before we could obtain it, we had to sign waivers that we would not redistribute it outside of our own research university. Once we got the software, though, it came in source form. So AFAIK the source for academic software developed for the life sciences is probably available upon request; you just can't redistribute it at will. As for publications, the algorithms for predicting the RNA secondary structures were published, but not the code itself. Hope this is informative.
  • Look at it this way... It starts with patented genetic software, which leads to some sort of discovery that could cure some disease. Now the researcher that made the genetic discovery patents the actual base-pair code (genetic code). Now they have the ability to charge whatever they want for the "treatment". That is a terrifying thought. This is a real Pandora's Box.
  • by atheist666 ( 525252 ) on Monday November 26, 2001 @05:34PM (#2615750)
    What should be open source and free (as in beer) is the specialized software used by many researchers for processing DNA and protein sequence information. For instance, take the very nice software package Vector NTI. It does many things researchers want. It also costs $5000. Instead of NIH (the government science funding agency) money being spent many times over (once per funded lab), what they should do is earmark some money to get programmers to write free-usage software like Vector NTI. It would cut down on the redundant purchase of software by many government-funded laboratories. Don't even get me started on how much research money is being spent on Windows, Word, Excel, PowerPoint, and Adobe Acrobat...
    • "what they should do is earmark some money to get
      programmers to write for free usage software like Vector NTI. "

      The same argument of course applies to pretty much all software, not just bioinformatics software.

      Actually I think that there is a more convincing argument here. I would argue that the process which is used to generate data is in fact a material part of that data. Without access to the software that operates on the primary data to produce the final result, it is not possible to replicate that result independently. If bioinformatics wants to be taken seriously as a science, then it needs to be able to address the issue of repeatability. Code is part of the methodology; methodology is part of the result. If the code is not freely available, then the result is not either.

  • Open source it all! (Score:2, Informative)

    by atomicgirl ( 517460 )
    I work in genetics, and particularly with the computing aspect of it. My lab has a tight budget, and open source analysis software would be a tremendous help to us.

    Of course, we have some OSS tools already, namely EMBOSS and others, but the big hitters in the bio-computing field (VectorNTI, MacVector, DNAstar, etc) run in the $5000-10000 range per license. Right now, they are much more usable than their open sourced cousins. I'd love to see genetics software continue to be developed open source. We could give less taxpayer money to software corporations.
  • "It's the age-old debate; what follows the heart of the scientific method more? Peer review, or getting the information out as fast as possible?"

    The scientific method is both peer review and the fast, free flow of information. Further, the two are so intertwined that it is impossible to separate them or say one is more important than the other. Without the free flow of information, one cannot give a proper peer review of the work. Without good peer review, the information you get is garbage, clouding the review and making the knowledge suspect.

    The work should be made open source after good review, not before. That way, you build upon that previous work such that the next scientific advancement is upon solid ground.
  • by Anonymous Coward
    Getting info out ASAP for review is the only REAL peer review. What journals call ``peer review'' is just editorial censorship of any real advance in science; any revolutionary science is deemed ``not suitable for publication''. Or sent to referees chosen by the *editor*. Any paper that does not conform to the prevailing views is NEVER published in a major journal---want to check? Only more-of-the-same.

    So, as a scientist myself, I vote for fast release: let ME (a real peer) do the review. Editorial censorship? Thank you very much, but no, I stopped needing paternalism after my early teens.

    Gustavo A. Concheiro Perez
  • []

    They host a large number of open-source bio software packages. Really worth a look if you're interested in the topic.
  • Being that by the end of the week I'll actually be working for a biotech software company, I sure hope that the scientific community doesn't force mandatory open source. While the argument can be made that keeping the software restricted is hindering the biotechnology industry, compared to the evils of gene patenting, software has a very small role in the big picture.

    Now even though I would love to find my own gene, and would probably patent it and start a company like everyone else, I do believe that patenting genes that occur naturally is wrong. It's akin to patenting heart failure, and then forcing everyone to pay you dues whenever they try to see if someone's heart is at risk of failing, or when they try to revive the heart. What's even worse is that gene patents are used to create a monopoly where only the patent holder and those who have enough cash are allowed to try to fix the gene. It's like the hypothetical heart-failure patent holder suing over the AbioCor for being an artificial heart.

    So to recap: forcing companies to make their software open source - not very important. Preventing companies from patenting naturally occurring genes - much more important.
    • I don't think you can get a patent on a naturally occurring thing. Genes are naturally occurring, so you can't "own" a particular gene sequence. I think the patents in this area are for methods of isolating/modifying/etc. the genes that are already there. If I come up with an innovative way to identify or isolate or modify a particular gene, then I can patent my method. I can patent the new gene created by my innovative method. But I can't find a gene and then say, "I found it, I want to patent it and own every occurrence of this gene in nature." That is not possible with our current patent laws.

      I think this system is OK for now. Without being allowed to obtain patents on the results of bio-research, would you spend millions (billions?) of dollars doing the research? (Answer: no, not if you are smart.) Neither would the companies currently doing this research. We need to provide some incentive to them, let them recoup their investment costs, and even allow profit (gasp!). Patent law is the best friend of innovation and the dissemination of scientific information. It provides a motive to create. It allows monumental investment in something that, once disclosed, can be used by anyone (since it is only an idea).
  • I'm sure you can ask for the code to be open sourced from now until doomsday with no result. It's like asking God to let you know how to create life. I mean, these people are God, right?


    Seriously, with the amounts of money to be made in tomorrow's genetics industry at stake, corporations are going to protect the research they fund and will do their own debugging, thank you very much.

  • The whole point of what is called "the Scientific Method" (check out any 1st year university-level science textbook for what this means) is that any research has to be peer-reviewed and results have to be reproducible. In this sense, the scientific community has been "open-sourcing" their work for centuries. Keeping work secret invalidates it as serious research. Open-sourcing the tools with which the research is carried out only makes sense.
  • There was an interesting discussion on this topic last year over on [], with contributions from several authors of significant bioinformatics software (search for 'open source' in the group at []). There's also a good interview with Ewan Birney (of BioPerl and EnsEMBL fame) about open source bioinformatics here [].

    Like some of the other contributors to this thread, the argument I find most convincing is that peer-reviewed scientific publications usually (and for very good reasons) require full disclosure of methods - why should software be exempt from this? Similarly valid criticisms were made about _Science_'s decision to publish the Celera human genome paper with only conditional access to the primary data. Biotech companies are of course free to conceal both their data and methodology, but the scientists involved should not then expect the right to contribute to the peer-reviewed literature.

    There are also sound economic reasons for an open source approach to significant scientific software: it's ironic that software developed with public funding has in some cases been commercialized by the host institution of the programmers, leading to a situation where other laboratories that receive grants from the same funding body end up paying substantial license fees! (some funding agreements are now savvy enough to include a clause preventing this). In fields like molecular biology, there's also the risk that initially reasonable license fees can rocket when the software proves to be useful. This is pretty much what happened in the case of GCG, a set of sequence manipulation utilities originally developed by a university department, but later acquired by a large biotech company. The increasingly restrictive and expensive license was one of the factors that led to the creation of EMBOSS:

    a GPL/LGPL'd alternative that in many respects now surpasses GCG, and runs on a wider range of Unix-like platforms (including Linux, Cygwin and MacOS X).
  • I recently gave a presentation for the Central Valley Bioinformatics Users Group ( []) titled "Clustering BLAST with Linux, Perl, PHP, and Postgres". Some of the slides are available here []. I say some, because I just started the process of open-sourcing my work. Management at Sagres Discovery (my employer) has been very receptive to releasing my code. And why not? My involvement in the open source community (and CVBIG in particular) taps the company into a large reservoir of skills and experience (not to mention potential employees). Besides, open-sourcing the tools does not mean open-sourcing discoveries. Bioinformatics will continue to benefit from the open source movement. Fortunately, many in academia and industry recognize its value.
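    The "clustering BLAST" pattern described above usually boils down to splitting the query set and farming chunks out to nodes. Here is a minimal, hypothetical sketch of just the splitting step (the record handling is deliberately simplified; real pipelines also balance by sequence length, and the original work used Perl rather than Python):

```python
# Split FASTA text into n roughly equal chunks of records, round-robin,
# so each cluster node can run BLAST on its own chunk in parallel.

def split_fasta(text, n_chunks):
    records, current = [], []
    for line in text.splitlines():
        if line.startswith(">"):        # a header line starts a new record
            if current:
                records.append("\n".join(current))
            current = [line]
        elif line:
            current.append(line)
    if current:
        records.append("\n".join(current))
    # Deal records out round-robin across the chunks.
    chunks = [[] for _ in range(n_chunks)]
    for i, record in enumerate(records):
        chunks[i % n_chunks].append(record)
    return chunks

fasta = ">seq1\nACGT\n>seq2\nGGCC\n>seq3\nTTAA\n"
chunks = split_fasta(fasta, 2)
assert [len(c) for c in chunks] == [2, 1]
```

    Each chunk would then be written to its own file and dispatched to a node; the per-node BLAST invocation and the merging of results are left out here.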
  • As long as we're repeating ourselves [] today:

    I received the following email message from the CFO of a company called LabBook [], about my Bull Shit Markup Language (BSML) web page [].

    Apparently, they would prefer that people searching for "BSML []" did not turn up my web page. I wonder if they've tried to get the Boston School for Modern Languages [] to change their name, too?

    Now isn't the whole point of properly using XML and namespaces to disambiguate coincidental name clashes like this? If LabBook thinks there's a problem with more than one language named BSML, then they obviously have no understanding of XML, and aren't qualified to be using it to define any kind of a standard.

    Maybe LabBook should put some meta-tags on their web pages to decrease their relevance when people are searching for "Bull Shit" or "Modern Language".



    From: "Gene Van Slyke" <>
    To: <>; <>
    Sent: Monday, November 12, 2001 10:36 AM
    Subject: BSML Trademark


    While reviewing the internet for uses of BSML, we noted your use of BSML on [].
    While we find your use humorous, we have registered the BSML name with the United States Patent and Trademark Office and would appreciate you removing the reference to BSML from your website.

    Thanks for your cooperation,

    Gene Van Slyke
    CFO LabBook


    Here's the page I published years ago at []:


    BSML: Bull Shit Markup Language

    Bull Shit Markup Language is designed to meet the needs of commerce, advertising, and blatant self promotion on the World Wide Web.

    New BSML Markup Tags

    CRONKITE Extension

    This tag marks authoritative text that the reader should believe without question.

    SALE Extension

    This tag marks advertisements for products that are on sale. The browser will do everything it can to bring this to the attention of the user.

    COLORMAP Extension

    This tag allows the html writer complete control over the user's colormap. It supports writing RGB values into the system colormap, plus all the usual crowd pleasers like rotating, flashing, fading and degaussing, as well as changing screen depth and resolution.

    BLINK Extension

    The blinking text tag has been extended to apply to client side image maps, so image regions as well as individual pixels can now be blinked arbitrarily.

    The RAINBOW parameter allows you to specify a sequence of up to 48 colors or image texture maps to apply to the blinking text in sequence.

    The FREQ and PHASE parameters allow you to precisely control the frequency and phase of blinking text. Browsers using Apple's QuickBlink technology or Microsoft's TrueFlicker can support up to 65536 independently blinking items per page.

    Java applets can be downloaded into the individual blinkers, to blink text and graphics in arbitrarily programmable patterns.

    See the Las Vegas and Times Square home pages for some excellent examples.

  • This is slightly off topic [in that I deal with Open Source Science, rather than the open-sourcing of a scientific project], but relevant.

    Science and Open Source are different animals.

    Scientists may appear to be independent agents, but that does not mean Science itself lacks cohesion.

    Open Source was to some extent initiated by published listings, including source code for the early Unixen. When this was closed, the attempt was to develop "free" copies of the utilities, and then, the ultimate operating system.

    Open Source is now moving into more advanced applications, replicating larger commercial offerings in scope.

    To some extent, web browsers were the first open-source application that was a class leader. Netscape made a browser that rolled in the email client and ftp client, but Mosaic was there first.

    But what we need is some sort of open-source idea development.

    Science, especially as practiced in the West, is based more on a "religion" that has won its place through argument, rather than cooperation.

    In essence, an idea that moves into a spot becomes entrenched, and more difficult to move. So a new idea needs to arrive as more powerful, to displace it through argument.

    For this reason, Scientists tend to be more cautious about letting ideas gain a foothold.

    In the book "The Cathedral and the Bazaar", Eric S. Raymond contrasts the worlds of academia and open source, the former competitive and the latter cooperative.

    Academia places fully developed ideas in competition, and your status in academia depends on your publications. The Open Source community is more open to the development of someone else's idea, and overlooks the flaws in the supplied implementation.

    So whereas Scientists rejected, say, Velikovsky's ideas as crazy [because the presented package as a whole does not work], the open source people accepted Linus' alpha kernel as an idea to be extended into Linux.

    We could have an open source Science that looks into alternate views. An Enthusiast's science, not guided by academia, so to speak. But we disparage such enthusiasts as cranks, and build no alternative to academia.

    de Bono offered the Six Thinking Hats as an alternative. Maybe we should look at this.
