Biotech

Convergence of Biology and Computers? 388

Pankaj Arora asks: "This summer I am working on both Bioinformatics and Molecular Biology research projects at the Mayo Clinic Rochester. Being an MIS major with a heavy CS background, I've been learning biochemistry, performing polymerase chain reactions (PCRs) and RNA translation experiments, among other things. I've learned that biology works a lot like computers: binary has 1s and 0s; DNA has the nucleotides A, T, C, and G. Binary has 8 bits to a byte; DNA has 3 nucleotides to a codon. Computers and biology seem to have a natural fit; information is encoded and represented 'digitally' in a sense. I was wondering what people thought about the future of biology-based and genetics-based computing, given the immense efficiencies that lie in nature. This has been discussed to an extent here, but there were some specific aspects that I feel are quite important and were not discussed thoroughly, so I have a few questions to pose to the Slashdot community."
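
The submitter's binary/DNA analogy can be made concrete: with four nucleotides, each position carries exactly two bits, so a three-nucleotide codon spans a six-bit code space. A minimal sketch (the particular bit assignment below is arbitrary, not anything from biology):

```python
from itertools import product

# Arbitrary two-bit assignment for the four nucleotides (any pairing works).
BITS = {"A": "00", "T": "01", "C": "10", "G": "11"}

def encode(seq):
    """Encode a DNA sequence as a bit string, two bits per nucleotide."""
    return "".join(BITS[n] for n in seq)

print(encode("ATG"))                         # the start codon -> '000111'
print(len(list(product("ATCG", repeat=3))))  # 64 possible codons
```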

"The aspects I would like discussed are as follows:

  • In the long run, will biology rewrite computing, or will modern day technology concepts and theory be integrated into biology? If both are true, which will have the greater effect? I understand "long run" is ambiguous in this question, but I'm interested in all thoughts using any applicable definition.
  • Tied to the first question: How will the nature of computing, and how we perceive it, change due to biology integration? More to the point, how much of the theory we learn today may change?
  • What will be the biggest issue determining the success of the adoption of biology-integrated computing? Will it be technology factors or will it be societal factors (e.g., rebellion by the Right Wing), or something else? What things must hold true to make the idea succeed?
  • And perhaps the hottest issue of all: Is there anything inherently wrong with pursuing this avenue? What may be some of the consequences?
I'll have some experts from Mayo Clinic contribute some of their expertise to this discussion."
This discussion has been archived. No new comments can be posted.

  • by ne0nex ( 612727 ) on Monday June 16, 2003 @12:24PM (#6214024)
    What needs to happen is a happy medium: biology changing the face of computing, and computing doing the same for biology. Advances will be gained in both fields this way.
  • What I say (Score:3, Funny)

    by GreenJeepMan ( 398443 ) * <josowski@tybi o . c om> on Monday June 16, 2003 @12:25PM (#6214029) Homepage Journal
    "biology rewrite computing or will modern day technology concepts and theory be integrated into biology"

    Modern day technology concepts and biology will both one day become so advanced that they are... indistinguishable. .eom
    • Re:What I say (Score:2, Insightful)

      by bain_online ( 580036 )
      Modern day technology concepts and biology will both one day become so advanced that they are... indistinguishable. .eom

      Yes it will be called _Physics_

  • Speaking as a cyborg (Score:5, Interesting)

    by Anonymous Coward on Monday June 16, 2003 @12:25PM (#6214035)
    (I have an insulin pump.) It really hasn't changed my life much yet. Still have to program the thing, refill it, etc. Maybe one day when it's internal and self-regulating, but for now it's a fancy needle/PDA.
    • by Anonymous Coward
      What if we were to use our modern technology and computers to discover the proteins that could cause some adult stem cells in your body to regrow a nice internal, self-regulating pancreas? Sounds like convergence to me. Would you still be a cyborg, despite the fact that your man-made (or man-initiated) pancreas is now biological?
    • maybe you should look at this- http://mitsloan.mit.edu/news/archives/2003/50K-03.html
  • equation (Score:5, Interesting)

    by chef_raekwon ( 411401 ) on Monday June 16, 2003 @12:26PM (#6214048) Homepage
    it all boils down to this:

    binary + DNA = phi

    (try and figure that one out ;)
    • When I first read your post, I read it as:

      binary + DNA = pi

      Somehow, I envisioned a future where we all took electric drills to our heads.
    • phi is better defined in the three-dimensional Schrödinger equation... :p
    • Re:equation (Score:5, Informative)

      by pookybum ( 672238 ) on Monday June 16, 2003 @01:05PM (#6214488)
      Phi = (1+sqrt(5))/2, about 1.618. This number appears all over the place in nature, and, most interestingly, in the structure of DNA: one rung of the DNA ladder has two golden mean pentagrams, two hexagons, and a golden mean rectangle in the middle, more or less. Also, the helix of the DNA molecule advances by a vertical increment of 1.618 per turn. How's that?
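
For readers checking the parent's arithmetic, phi is easy to verify numerically (this says nothing about the geometric claims, only the number itself):

```python
import math

phi = (1 + math.sqrt(5)) / 2
print(round(phi, 3))                    # 1.618, as stated above
print(math.isclose(phi ** 2, phi + 1))  # True: phi is the root of x^2 = x + 1
```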
    • Re:equation (Score:3, Interesting)

      by dubious9 ( 580994 )
      Since you bring up binary and DNA, let me ask a question. We use a binary system for our computers because it is easy, efficient, and fast to express things in terms of combinations of on and off. Furthermore, we haven't run into any large problems using binary in bigger, faster systems, so it seems that binary is nearly infinitely scalable.

      If binary makes so much sense for representing information and doing useful work with it, why is it that the fundamental building block in our body uses four base pairs?
      • Re:equation (Score:5, Informative)

        by mikerich ( 120257 ) on Monday June 16, 2003 @02:09PM (#6215245)
        If not, why didn't nature choose a binary system?

        Binary isn't THAT efficient if you want to store information in a small space. Quaternary systems (like DNA) are more efficient space-wise.

        (Simplifying wildly) DNA stores three-nucleotide 'words' called codons. Each codon either codes for an amino acid (each amino acid is coded by more than one codon) - such as the sequence ATG, which codes for the amino acid methionine - or represents a 'start gene' or 'stop gene' switch.

        With three-letter sequences for a codon and four possible letters for each position, you end up with 64 possible codons (there are just 20 amino acids); to store the same amount in binary you would need six bits.

        So DNA is actually very efficient at what it does.

        Best wishes,
        Mike.
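
The counting in the comment above checks out and can be sketched directly (4 letters, 3 positions, versus the number of binary digits needed for the same code space):

```python
import math

codons = 4 ** 3                             # three quaternary positions
bits_needed = math.ceil(math.log2(codons))  # binary digits for the same space
print(codons)       # 64 possible codons
print(bits_needed)  # 6 -> six binary symbols vs. three quaternary ones
```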

  • Speed (Score:4, Interesting)

    by snitty ( 308387 ) * on Monday June 16, 2003 @12:27PM (#6214053) Homepage
    The major advantage and disadvantage of biological computing right now is speed. While it can solve some problems much faster than normal computers (due to its massive parallel computing capabilities), making the DNA to solve the problem and finding the answer take a long time as well. While both of those are speeding up, it will be some time before it is economically sound to do DNA calculations in anything other than a laboratory environment.
    • Re:Speed (Score:5, Interesting)

      by FuegoFuerte ( 247200 ) on Monday June 16, 2003 @12:53PM (#6214390)
      DNA in and of itself can't do calculations (well, that I know of... show me how and I'll believe you). The brain can do massively parallel computations (think facial/object/voice recognition in microseconds, and at the same time). Here's one big problem in taking advantage of that kind of thing (other than ethical issues). Say you have taken a head and hooked it up to a computer. It may recognize things just fine, and many times faster than any normal computer, but how do you get that information back to the computer? Sure, you can interface with specific neurons, etc., but which ones? Do you tap into Wernicke's or Broca's area (the parts responsible for speech/word comprehension)? How do you interpret the signals coming from that area? If you interface anywhere else, you likely wouldn't have any kind of word/name/etc., because Wernicke's is responsible for all speech comprehension (without it there's no giving meaning to the words one hears or reads, and no putting words to actions, feelings, or anything else).

      So what biological computing has to offer in speed is basically countered by the difficulty in gaining access to the information, unless MAJOR advances are made. And for simple math-type computing problems, biological processing would probably never catch up to what we have now in electronic computers.

      Just my 2 bits worth.
    • by Digital_Quartz ( 75366 ) on Monday June 16, 2003 @03:30PM (#6216071) Homepage

      It was Leonard Adleman (of RSA fame) who first proposed the idea of using DNA to perform simple computations, in a 1994 paper entitled "Molecular computation of solutions to combinatorial problems" (you can find it here [usc.edu]).


      Adleman's DNA computer computed the answer to the Hamiltonian Path problem for a small graph. The Hamiltonian Path problem is solvable on a conventional computer; however, it is an "NP-complete" problem, which means that all known deterministic algorithms have a running time which is exponential with respect to the problem size.


      Adleman's solution was to encode random paths through the graph in billions of DNA strands, then use custom-engineered enzymes to eliminate those strands that were not a Hamiltonian path. Essentially, he simulated a non-deterministic machine through massive parallelism.


      While this is incredibly clever, and very interesting, it isn't necessarily practical; at least, not for NP-complete problems. Adleman achieved linear execution time for an NP-complete problem, but he did so at the expense of requiring an exponential number of DNA strands with respect to his problem size. A small graph with only a few hundred nodes would require more strands of DNA than there are atoms in the universe.


      This is not to say that DNA computers are of purely academic interest; Adleman's computer was merely a "proof of concept". I'm sure there exist problems in P which would benefit immensely from massively parallel computing. It's just a question of finding problems which are cost effective to implement.


      Perhaps many of these "distributed" computing efforts that are underway now would better be served by a DNA computer.
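
Adleman's generate-and-filter strategy can be mimicked in silicon on a toy graph (this example graph is invented for illustration, not the instance from the 1994 paper): enumerate candidate vertex orderings, then keep only the orderings whose consecutive pairs are all edges. The n! candidate pool is the in-silicon stand-in for the exponential number of DNA strands.

```python
from itertools import permutations

# A small undirected graph, given as a set of edges.
edges = {(0, 1), (1, 2), (1, 3), (2, 3), (3, 4)}
n = 5

def is_hamiltonian_path(order):
    """True if every consecutive pair in the ordering is an edge."""
    return all((a, b) in edges or (b, a) in edges
               for a, b in zip(order, order[1:]))

# "Generate" all orderings, then "filter" the non-paths away.
paths = [p for p in permutations(range(n)) if is_hamiltonian_path(p)]
print(paths)   # the path 0-1-2-3-4 and its reverse
```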


  • by lysander ( 31017 ) on Monday June 16, 2003 @12:27PM (#6214057)
    Gödel, Escher, Bach talks all about the overlaps and comparisons between biology and computers. In particular, Hofstadter details a one-to-one correspondence between the Central Dogma and Gödel's Incompleteness Theorem. It's dense, but it's great stuff.
  • Wow ! (Score:5, Funny)

    by doru ( 541245 ) on Monday June 16, 2003 @12:27PM (#6214060) Homepage
    ...there were some specific aspects that I feel are quite important and were not discussed thoroughly, thus I have a few questions to pose to the Slashdot community.

    You must be new around here...

  • by Lord_Slepnir ( 585350 ) on Monday June 16, 2003 @12:28PM (#6214066) Journal
    What will be the biggest issue determining the success of the adoption of biology-integrated computing?

    If we can finally assimilate that pesky planet at sector 001, then we will consider ourselves to be a success.

  • Already there (Score:5, Informative)

    by Telastyn ( 206146 ) on Monday June 16, 2003 @12:29PM (#6214079)
    I don't remember the article, or the location of the reference [ http://www.nature.com/nsu/000113/000113-10.html thanks google]...

    Well anyways, the travelling salesman problem was solved using specially crafted DNA sequences.
  • by CVaneg ( 521492 ) on Monday June 16, 2003 @12:30PM (#6214089)
    Actually, there has already been a large scale integration of biology and computing. You can see a summary of the work here [imdb.com]. In fact, they've already done a follow up experiment [imdb.com], and I hear that there's a third project [imdb.com] in the works.
  • Mmmm.. (Score:4, Funny)

    by ahkbarr ( 259594 ) * on Monday June 16, 2003 @12:30PM (#6214097)
    Binary has 8 bits to a byte, DNA has 3 nucleotides to a codon.

    I got a big codon while I was reading the linux kernel source.
  • by antarctican ( 301636 ) on Monday June 16, 2003 @12:31PM (#6214110) Homepage
    As someone who works in bioinformatics research coming from the computer side, I think you're mixing issues.

    There's using computing to advance and analyze biological questions; that's one field (and the one I'm in).

    The other is using biology to build things like nanotech and other molecular circuitry.

    Both of these are using one as a tool to forward the other; it's not a straight integration like putting chocolate and peanut butter together, and never will be.

    Each field will simply adapt and use tools from other fields. Just as in molecular biology physics and chemistry concepts are used to help understand biological mechanisms. Don't look for a Unifying Theory for all these fields.

    Anyhow, that's my opinion, my boss will probably say I'm completely wrong ;-)
    • I have to say I agree. Your PB and choco analogy is good, and what I was trying to thunk up. Perhaps the author is thinking we will move to wetware, maybe that we'll have semi-sentient biological computers doing our processing.
    • by catbutt ( 469582 ) on Monday June 16, 2003 @12:46PM (#6214315)

      Both of these are using one as a tool to forward the other; it's not a straight integration like putting chocolate and peanut butter together, and never will be.

      Each field will simply adapt and use tools from other fields. Just as in molecular biology physics and chemistry concepts are used to help understand biological mechanisms. Don't look for a Unifying Theory for all these fields.


      That seems rather short-sighted... never is an awfully long time.

      Maybe not a Unifying Theory, but a blurring of the lines until they no longer exist? I think so.

      For instance, one current bleeding edge, analyzing genomes, hasn't yet resulted in a lot of building of completely new gene structures, because we don't understand them very well, and because our tools for assembling genes and creating new organisms based on our created genes are still very crude.

      Jump forward 50 years (or 150 years, or a thousand years!), and I'm willing to bet that won't be the case any more.

      At that point, I think you will see a complete meshing of information technology and biology.
      And certainly the two issues you mention (analyzing vs. building) will have long since integrated into one....much like long ago the study of the phenomenon of electricity (think flying kites in thunderstorms) integrated into the building of useful devices using electricity. There is no need to think of them as two separate issues.
  • digital media (Score:2, Insightful)

    by mikeee ( 137160 )
    It's easy to see why DNA is digital; it means that copies can be made with 100% fidelity. You don't want random mutations every time a cell divides.

    This forces some processes to be essentially digital, but most of biology is an unbelievably messy analogue nightmare for anybody trying to figure out what's going on.
    • Re:digital media (Score:3, Informative)

      by Anonymous Coward
      Bzzt! Thanks for playing.

      Mutation is the grist for the mill of natural selection. Were it not for mutation, Earth would be a swamp of highly advanced algae right now. What you want is a balance between mutation and error correction - enough correction so that organisms can survive and breed, but enough mutation so that you can have variation that will allow adaptation to new niches.
    • Re:digital media (Score:2, Informative)

      there are random mutations every time a cell divides; it's called evolution.

    • DNA encoding (Score:2, Informative)

      by geirhe ( 587392 )

      It's easy to see why DNA is digital; it means that copies can be made with 100% fidelity. You don't want random mutations every time a cell divides.

      This forces some processes to be essentially digital,(...)

      It certainly forces humans to think of them as essentially digital - thus the digital model of the output of a couple of million years of iterating one or more biological learning algorithms.

      We didn't understand the evolved FPGA pattern implementing an XOR either, although we think of it as a bit patt

  • by jeeves99 ( 187755 ) on Monday June 16, 2003 @12:34PM (#6214143)
    As one of the chosen few attempting to understand the fundamentals of protein folding, I can say that we are still a long way off from understanding how these "few" 20 amino acids fold into highly-specific structures. There are people with access to supercomputing centers (a la the UCSD supercomputing center, IBM's Blue Gene) who still cannot devise a simulation that accurately reproduces biological systems. The number of atomic and subatomic properties that must be taken into account is just overwhelming. It can take a 64-CPU cluster of computers a week to reproduce what nature does in 1 nanosecond!

    So how can we restructure our current computing system to a model that is based upon something that we understand only at a basic level? We can't. While I agree that a biologically-derived computing architecture could be quite powerful indeed, we are still a LONG way off from the level of understanding needed to even put this idea on the drawing board.
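
Taking the parent's figures at face value (a week of wall-clock time on a 64-CPU cluster to simulate one nanosecond of folding), the gap works out to roughly fourteen orders of magnitude:

```python
week_seconds = 7 * 24 * 3600   # wall-clock compute time: 604800 s
simulated_seconds = 1e-9       # biological time simulated: 1 ns

slowdown = week_seconds / simulated_seconds
print(f"{slowdown:.1e}")       # ~6.0e+14 times slower than nature
```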
  • Question tho... (Score:3, Informative)

    by anzha ( 138288 ) on Monday June 16, 2003 @12:34PM (#6214151) Homepage Journal

    The first things that come to mind are, "What time frames are you speaking in for this technology?" and "What application are you talking about?" Each of these is very important.

    If you are talking raw number crunching, it might end up having some problems competing with rival technologies. The High Productivity Computing Systems effort at DARPA is intended to bridge the gap between current supercomputers and quantum computers in capability. If the realistic expectations for quantum computers are realized, and not the hype, then it might end up making the biological tech a case of an 'also-ran', much like gallium arsenide seems to have become. Unless there is something that biotech processors do better than the traditional architectures and the projected quantum machines, it might remain a lab curiosity.

    On the other hand, if you mean something else, like revolutionary computer-human interfaces, or AI work, or something I'm not thinking of, then we might see something generated from this indeed.

    If you could be more specific about what you intend this technology to be applied to...

  • I, for one, look forward to the day when Microsoft tries running their wonderful code in my DNA. I mean, imagine all the potential:

    "I'm sorry, your DNA has just crashed. You're experiencing the blue goop of death."

    Of course, all the geeks would run their DNA on Linux. They'd be capable of doing many things faster, they'd live forever compared to their Microsoft brethren, and the vast majority of society would never, ever want to interact with them. So no change there, then.
    • > Of course, all the geeks would run their DNA on
      > Linux. They'd be capable of doing many things
      > faster, they'd live forever compared to their
      > Microsoft brethren, and the vast majority of
      > society would never, ever want to interact with
      > them. So no change there, then.

      Except that the Linux geeks would forget everything they learned as soon as they went to sleep. To get around this problem, they'd start keeping "Journals" that they'd have to read before they started their day. Eventually
    • Actually, to put that into the proper view of the typical home-based computer user, it would be more like:

      People running their DNA from Microsoft will experience periodic downtime, intentional or not, and will occasionally have to have a friend or relative come by, wipe everything clean and replace it with a new copy.

      People running their Linux brand DNA would have much less downtime than those using Microsoft's brand, however they would have to go extinct and re-evolve from the primordial soup every so of
  • by Brown Eggs ( 650559 ) on Monday June 16, 2003 @12:35PM (#6214164)
    If I can do a slightly different interpretation of the questions being asked - can biology inspire changes in computing? The answer is yes - it already has. Many of our ideas of artificial intelligence or computer learning have come from neural network-type studies of brain structures. At some point, the equivalent circuit in silicon may precisely reproduce what the neuron is doing. Aside from the time issue (nerve conduction is blazingly fast), you would serve your function staying in silicon.
    • Aside from the time issue (nerve conduction is blazingly fast), you would serve your function staying in silicon.

      Blazingly fast? WTF? I think you meant to say that silicon conduction is blazingly fast.

      Nerve impulses can be measured in tens or hundreds of miles per hour; pulses over wire or silicon are measured in tens of thousands of miles per second.

      This page [washington.edu] is aimed at kids but happens to have a good chart of various speeds of various nerves; the top speed they show is about 225 mph, and they compare
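
The comparison in this thread is easy to put in numbers (assumed figures: ~225 mph for the fastest myelinated nerves, and roughly two-thirds of the speed of light for signals in wire):

```python
MPH_TO_MPS = 0.44704          # miles per hour -> metres per second

nerve = 225 * MPH_TO_MPS      # fastest nerve conduction, ~100 m/s
wire = 2e8                    # signal speed in copper, ~2/3 c, in m/s

print(round(nerve))           # ~101 m/s
print(f"{wire / nerve:.0e}")  # wire is ~2e+06 times faster
```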
  • by RevAaron ( 125240 ) <revaaron@hotmail. c o m> on Monday June 16, 2003 @12:36PM (#6214172) Homepage
    I'm an aspiring computational ecologist, majoring in biology, minoring in CS. (for the uninformed- ecology != environmentalism or anything of that sort) I'm in Minnesota, although at the other end of the state.

    I don't think biology will rewrite CS. It will influence it, for sure, but there isn't anything fundamentally different between a biological solution and a technological one. I think as we learn more of the bigger picture in various biological fields, when we truly understand it, we will integrate that knowledge into applied CS. We've been reading the book for some time now, but we really don't know enough about the subject matter to really apply it.

    I think there is a lot of use for biomimicry in computing. I think integration of biological elements into our computers is quite a bit far off and perhaps a bit sci-fi-ish for now, but taking ideas (algorithm would often be an understatement) that work well in biological systems and using them in computing is something we can do now with some success.
  • Some thoughts (Score:5, Interesting)

    by Otter ( 3800 ) on Monday June 16, 2003 @12:36PM (#6214176) Journal
    As a molecular biologist who is relatively knowledgeable about computing, here are my impressions:
    • The demonstrations of DNA-based computing that have been made are extremely clever and elegant. But they involve spending enormous amounts of money and effort on primer synthesis before, and sequencing after, the very quick "calculation" step that they hype.
    • Who knows? Maybe as genomics technology gets cheaper, DNA-based methods will have practical value in occasional applications that would require enormous brute force for a traditional solution. But I'll be astonished if they become common in general computing.
    • No offense, but your bit about "rebellion by the Right Wing" comes across more as ignorant prejudice on your part than as any realistic understanding of the concerns of people unlike you.
    • No offense, but his concern about "rebellion by the right wing" is a legitimate one, as in the U.S. this group is indebted to evangelical Christians and appears to cater to their agenda to maintain their support. Not that there is anything wrong with that - but you should not dismiss his concern out of hand and without some sort of disclaimer on your part.
  • by dfn5 ( 524972 ) on Monday June 16, 2003 @12:37PM (#6214180) Journal
    binary has 1s and 0s, DNA has nucleotides: A, T, C, and G.
    The ARMY has live soldiers and dead soldiers

    Binary has 8 bits to a byte, DNA has 3 nucleotides to a codon.
    The ARMY has 8 to 10 soldiers to a squad.

    Computers and biology seem to have a natural fit;
    The ARMY also seems to fit the computer model using the same criteria. Does that make it a computer?

  • by 1337G ( 563665 )

    Tied to the first question: How will the nature of computing, and how we perceive it, change due to biology integration? More to the point, how much of the theory we learn today may change?

    IANACS (I'm a biologist), but most of today's models of computing still go back to Turing. New models of computing will most likely spring up in the future that are based on the function of biomolecules, metabolic pathways, et al. Work has already been done showing how gene regulation is analogous to boolean expressi
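
The gene-regulation-as-boolean-logic idea the comment gestures at can be sketched as a tiny Boolean network (the genes and regulatory rules below are invented for illustration): each gene is on or off, and its next state is a Boolean function of the current states.

```python
def step(state):
    """One synchronous update of a three-gene Boolean network."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,      # gene C represses A
        "B": a,          # gene A activates B
        "C": a and b,    # A and B jointly activate C
    }

state = {"A": True, "B": False, "C": False}
for _ in range(4):
    state = step(state)
print(state)   # after four updates every gene is off
```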

  • The mere speed and size of biological communication and information storage give modern computing technology much to reach for. We are talking about nanometer-size particles that store an incomparable amount of information, and when something needs to be done it isn't more than a phosphorylation (occurring in what, 10^-14 seconds?) away.

    All technology attempts to mimic the natural world - take your monitor, for example. It is a visual representation of the world and the information therein.
  • by under_score ( 65824 ) <mishkin@berteig. c o m> on Monday June 16, 2003 @12:39PM (#6214216) Homepage

    Technology is not morally or societally neutral, despite what we would like to think. A very simple example of this is the car: cars, in order to maximize their utility, require a vast network of roads, parking spaces, and gas stations. This network is expensive to society for environmental reasons and has definite social and economic effects (such as time lost in commuting and traffic jams). These are unavoidable if we wish to use the technology of cars.

    I have an essay in progress on this topic: The Analysis of Technologies [berteig.org] - it's got some stuff that is quite out of date, since I started working on the essay eight years ago :-)

    A really great book on the subject of analyzing the future effects of technology is "In the Absence of the Sacred" by Jerry Mander. This book is very much slanted politically to the "small/simple is beautiful" outlook, but provides a very substantial wealth of logical arguments and academic studies to demonstrate some of the necessary principles of analyzing technologies.

    As for your specific questions, one obvious effect will be that in our commercial environment, not everyone will have equal access to the benefits that may be provided by the integration of computational and biological technologies.

    Since it will not be genetic engineering in the "traditional" sense, this technology may be used as a backdoor for creating designer babies without actually modifying a zygote's genetic material.

  • by goldspider ( 445116 ) on Monday June 16, 2003 @12:39PM (#6214222) Homepage
    Who wants to put some money down on a wager that the first significant merger of biology and computers will be accomplished by the pr0n industry?
  • by chunkwhite86 ( 593696 ) on Monday June 16, 2003 @12:40PM (#6214231)
    We already know about [wtop.com] the convergence of computing and biology. ;-)
  • Tech isn't there (Score:5, Insightful)

    by AKAImBatman ( 238306 ) <<moc.liamg> <ta> <namtabmiaka>> on Monday June 16, 2003 @12:41PM (#6214240) Homepage Journal
    The biggest problem right now is that the technology isn't there yet. Simply decoding a single strand of DNA is a long process, fraught with the use of various enzymes and chemicals to find out what the actual composition is. If and when we develop better ways of dealing with bio material (nano-bots?), biological computers could be a very good choice. The advantages in parallel computing alone would completely revolutionize computing as we know it.

    Of course, there is a downside. Massive parallelism means that programming will become orders of magnitude more difficult. People today can barely wrap their heads around out-of-order instructions and code that works well with superscalar architectures. What happens when we increase this complexity a millionfold?! I'm thinking that bio-computing could produce some rather interesting advances in the way we communicate with/program computers.
  • Well, bioinformatics is certainly the hot field right now (for those who want a little background, I wrote a little introduction to bioinformatics here) [applelust.com], (although it is biased towards Macs in bioinformatics).

    To answer part of your question, there are many parallels between biology and computers, however some biological systems are much more complex and can only be modeled to a limited extent right now. Some systems are more easily examined in terms of circuitry, but we are still only half way to knowing
  • Regarding your first question, some applications combining our knowledge of computing and biology are already being considered. See the following link: DNA Computing [rsasecurity.com]
  • If a DNA program that checks the stock ticker happens to be a really deadly virus...

    'Course, the pr0n industry will love the crossover applications...

  • As far as I can see, computing and biology are exactly the same thing; only taken at different timescales do they appear to be different things. Computing is about applying logical steps to solve problems. Biology is about applying logical steps to solve problems. The difference is that we don't expect or design our computers to take millions of years to come up with solutions. Biology is analogue... well, so is computing. Binary zeroes and ones are a convention; the matter of a computer is as analogue as you
  • by sporty ( 27564 )
    Am I the only one who was reminded of that old 'toon, 'Mask'? "Part man, part machine!"
  • My computer is ill! (Score:2, Interesting)

    by amalcon ( 472105 )
    What will be the biggest issue determining the success of the adoption of biology-integrated computing?

    Well, lifeforms have certain weaknesses that rocks and electrons alone do not. Among them are:
    - A lifespan
    - Virus vulnerability (no pun intended)
    - Nutrition requirements (your typical cell needs things that are harder to mass-transport than electrons; water comes to mind)
  • by Tackhead ( 54550 ) on Monday June 16, 2003 @12:48PM (#6214330)
    As a guy who cut his teeth disassembling 6502 and 6809 code way back in the day (we're talking old-school, run the disassembler and walk out with 100-200 pages of paper), I still get a laugh out of the idea that 99% of our genome is "junk DNA".

    The first hour or two of disassembling was figuring out where the code was, and where the data was.

    The next day or so of poring over those printouts was spent mapping out where the entry/exit points for subroutines were.

    I got to the point where I could guess where game graphics were, just by looking for oddly repeating patterns in the "data" areas. (Yup, in binary, those 8-byte sequences make up the bitmaps for the characters "A", "B", "C"...")

    "Oh, XX AA XX BB XX CC, somewhere near XXDD in memory space. Must be a list of pointers to something."

    "Oh, XXAA, XXBB and XXCC all start with the same byte, and the difference is XXBB minus XXAA (or XXCC minus XXBB). Now I know how big each element in the structure is."

    And so on. The first day or two of hacking would result in me figuring out about 10% of what the data was for.
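
The pointer-table heuristic described above - entries sharing a high byte and separated by a constant stride - can be sketched in a few lines (the byte values here are invented):

```python
def find_stride(pointers):
    """Return the constant difference between consecutive pointers, if any."""
    diffs = {b - a for a, b in zip(pointers, pointers[1:])}
    return diffs.pop() if len(diffs) == 1 else None

# Four 16-bit pointers sharing the high byte 0x40, eight bytes apart:
table = [0x40AA, 0x40B2, 0x40BA, 0x40C2]
print(find_stride(table))   # 8 -> each record is probably 8 bytes long
```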

    The other 90% was the hard part, typically requiring running some code, poking at the data, and running the code again to see what changed. "Maze wall moved here, then things crashed when I tried to walk through it."

    Sure, 99% of our genome might be junk. There were plenty of areas of address space that contained "data" that was never accessed, even with the tight code written in the 8-bit days.

    But when I found a string of bytes I didn't understand, the working assumption that usually went better for me was that "I don't know what this stuff does", not "these bytes are random".

    I'll bet that 90% of the genome is never executed nor referenced as data. (Evolution's a messier programmer, and there's 4.5 billion years of cruft!) But I'll bet that a lot of that "junk" is just code we haven't reverse-engineered yet.

    Ramblings over - to the poster, all of the ideas in this post are probably ancient history (and poorly-written at that - you can tell I have no bio background), but it's nice to see I'm not completely off my rocker.

    I went the CS route because when they taught biology in high school, it was seen as preparation for "become a doctor". Nothing wrong with doctors, but I was more interested in hacking, and figured it would be a long time, if ever, before we could manipulate DNA the way I could manipulate bits on a machine. (I've been pleasantly surprised with the way things turned out, though! :)

    CS grads are a dime a dozen in the job market; I like my job, but career-wise, the field's been played out. If you're about to go into college, and especially if you like to reverse-engineer stuff "because it's fun", get into bioinformatics, computational biology, and do your CS as a minor. At least, that's what I'd do if I were gonna start over.

    • The other 90% was the hard part, typically requiring running some code, poking at the data, and running the code again to see what changed. "Maze wall moved here, then things crashed when I tried to walk through it."

      Just like women, 90% of the places I poke at cause me to crash face first into the ground. Amazing!

    • by Jerf ( 17166 ) on Monday June 16, 2003 @01:21PM (#6214647) Journal
      It's junk DNA not because we don't know what it does, but because it's never accessed at all.

      The equivalent in computer science would be code that, no matter what route you plot through the program, could never conceivably be executed: that's the equivalent of "junk DNA". Even if you went into the machine language and replaced it with random values, the program would still never crash, because it never executes that code.

      In the computer world we tend to call that "dead code".
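      In compiler terms, "dead code" is just unreachability over the control-flow graph. A toy sketch (a made-up graph, not any real compiler's IR):

```python
def reachable(cfg, entry):
    """Return every block reachable from the entry of a control-flow graph.

    cfg maps a block name to the list of blocks it can jump to.
    """
    seen, stack = set(), [entry]
    while stack:
        block = stack.pop()
        if block in seen:
            continue
        seen.add(block)
        stack.extend(cfg.get(block, []))
    return seen

cfg = {
    "entry": ["loop"],
    "loop": ["loop", "exit"],  # a self-loop plus a way out
    "orphan": ["exit"],        # nothing ever jumps here: dead code
    "exit": [],
}
print(set(cfg) - reachable(cfg, "entry"))  # {'orphan'}
```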

      Thus, we do know that the "junk" is truly junk. The debate on its usefulness centers around the other physical implications of the existence of such DNA, and where it might have come from, but "computationally" (in biological terms, "is it ever used to produce a protein?") it is indeed junk.

      Please consult any elementary (but up to date... the understanding of junk DNA has progressed a lot in the last decade) textbook on genetics.
      • > It's junk DNA not because we don't know what it does, but because it's never accessed at all.

        Cool. I didn't know we had that level of ability when it came to profiling DNA code. (In fact, I almost exposed more of my biotech non-cluefulness by saying "It'd be way cool if someone wrote a DNA profiler, so that we knew what chunks of code were/weren't accessed during a typical organism's lifespan".)

        I retract my "Junk ain't necessarily junk" statement.

        I also agree that the other interesting thing is

      • In the computer world we tend to call that "dead code".

        I agree with the parent post. We don't actually know what this "junk" is for. In my own projects, I have certain things turned off and on by dummy variables and seemingly unnecessary if-then statements. You could very well go into my stuff and say "ten percent of this stuff is unnecessary. You could erase it or replace it and nothing will go wrong."

        And you would be right. Except that you killed off some functions not meant for today. You killed

        • by Jerf ( 17166 )
          From my original post: "The debate on its usefulness centers around the other physical implications of the existence of such DNA, and where it might have come from, but 'computationally' (in biological terms 'is it ever used to produce a protein?') it is indeed junk."

          Nothing you say contradicts that.

          Also, don't forget Nature is not purposeful. Putting useful stuff in the junk is not useful, because you're no more likely to mutate such that the formerly useful code is expressed than you are to mutate such
    • While I agree that probably 90% of our DNA isn't "junk" DNA, there is a massive difference between tightly-written ASM and DNA. DNA wasn't created, it simply evolved over millennia. Genes are routinely duplicated (sometimes many, many times), transferred, and mutated. Every once in a while, something useful would pop up, but still a lot of "junk" material is necessary in order to allow for evolution. It actually makes sense that a very large portion of our DNA isn't used for anything immediate (ie, never exp
    • Interesting, but understanding other people's assembler has nothing whatsoever to do with understanding DNA. I'm not flaming you here, but suppose that I would confront you with an assembler program where something that at one point was data was suddenly executed, but only after some repair algorithm in some other part of the program made some seemingly nonsensical changes to it? You can understand assembler because you have a pretty good model of how other people are taught to write assembler programs
  • Answers to questions (Score:4, Interesting)

    by 56ker ( 566853 ) on Monday June 16, 2003 @12:48PM (#6214335) Homepage Journal
    Q. In the long run, will biology rewrite computing or will modern day technology concepts and theory be integrated into biology? If both are true, which will have the greater effect? I understand long run is ambiguous in this question, but I'm interested in all thoughts using any applicable definition.

    Biology will (extremely slowly) be integrated into modern day technology. There will be some technology ---> biology transition too. However biology is far more adaptable. It's not a case of rewriting - it's just a case of historical progression.

    In answer to your second question - technology concepts, computing etc as they're designed by biology are already in mainstream use eg:-

    computer
    phone
    automobile etc

    Biology affecting technology has had less of an effect eg Velcro - however the balance will change over the next few decades. Biotech is already advancing in great strides.

    There isn't any definition as such - predicting the future is all guesswork. You can use statistics - all kinds of methods - in the end it comes down to a gut reaction.

    Q. How will the nature of computing, and how we perceive it, change due to biology integration?

    It'll become easier for biology to use eg:-

    handwriting recognition
    voice recognition
    etc etc etc (all fifth-generation tasks - read up on sixth-generation if you like)

    This is due to technology "evolving" to become more like biology, though. The change'll happen too slowly to perceive.

    Q. More to the point, how much of the theory we learn today may change?

    The fundamentals still remain the same - like mathematics though - it just gets more complicated. ;o) If we jumped forward a hundred years, what we know now would be seen as primitive, childlike dabbling. Look at how old-fashioned 1903 seems now (when cars were "modern technology").

    Q. What will be the biggest issue determining the success of the adoption of biology-integrated computing?

    Economics. When computers cost millions of dollars only governments and large organisations could afford them. The second problem is marketing (read persuading people they need them). It'd take years though - look at the computer mouse as an example.

    Q. Will it be technology factors or will it be societal factors (e.g., rebellion by the Right Wing), or something else?

    It'll just happen - although factors will influence how slowly/ quickly certain parts of it do. Technology in the end comes down to ideas + money.

    Q. What things must hold true to make the idea succeed?

    That we can understand biology & manipulate it to serve us (probably other things too).

    Q. And perhaps the hottest issue of all: Is there anything inherently wrong with pursuing this avenue?

    Not in my opinion - although all technological advances bring ethical dilemmas - who do you sell it to etc? What (out of many) uses do you put it to?

    Q. What may be some of the consequences?

    A lot of them have already happened or are in the process of happening. ;o)

    A society that suffers from greater obesity, global communication, increasing reliance on power production etc etc etc
  • Related reading (Score:4, Informative)

    by ccarr.com ( 262540 ) <chris_carr@NospaM.slashdot.ccarr.com> on Monday June 16, 2003 @12:49PM (#6214348) Homepage
    An excellent book discussing some of the isomorphisms between computers and biology is Godel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter. I can't recommend it highly enough.
  • by Jasin Natael ( 14968 ) on Monday June 16, 2003 @12:51PM (#6214365)
    It's all about I/O speed, not the raw pace of calculations. Programming DNA chemically in a lab is very time-consuming, and the hassle overcomes the utility of doing so for all but the very most specific applications. A computer without a usable interface is hardly a computer at all, is it? Until we've programmed biological computers to be sufficiently complex, they'll need to rely on a lot of things that silicon does better.

    I have to think that both technologies will come to a point where they can't advance without the other, at least in the medium-term. We know (or think we know) that silicon will reach barriers it can't overcome. And at this point, we don't have a way to create complex biological computers without using existing complex organisms and therefore shooting ourselves in the foot politically. Before real-world interfaces to biological computers can be developed, we need an efficient way to interface with the biology at a low level. Traditional computers will have to provide this for us.

    We may even see a true, permanent mesh of the technologies. Silicon is extremely good at some things (communications; providing an interface to mechanical items -- keyboards & mice, monitors, speakers, solar panels, servos, etc.), while it's hard to imagine really good natural language processing, learning, and nonlinear problem solving, much less a modicum of emotion to enhance usability, occurring without biology.

    Who knows? My prediction is as follows:
    1. Machines give us a way to program biological computers, and develop enough utility to make them commercially viable.
    2. Products are released that use simple biological computers to enhance existing mechanical products, such as auxiliary processors for supercomputers.
    3. Biological components gradually take a more active role in defining the behavior of gadgets and make it to consumers, in things like electronic pets and home security systems.
    4. Biological elements become avatars for their integrated electronic functionality. Instead of an electronic pet with a biological brain, you have a real pet that accesses the internet and communicates with your home via its built-in Wi-Fi and stores your finances on removable storage.
    5. If social attitude allows, humans become the avatars for their own integrated electronic functionality.

    Just a little fantastic speculation...

    --Jasin Natael
  • by enkidu ( 13673 ) on Monday June 16, 2003 @12:52PM (#6214382) Homepage Journal
    I'd like to confront your basic thesis, that computing and genetic biology are similar enough to influence each other. Sure the basic building blocks may look similar as you have pointed out, but there is no comparing a modern cell to anything we consider a "computer". We may know some of the basics of how a cell works, but we're still a long way from coding anything in DNA. Genetic code is massively parallel and distributed (and operates in both the genetic and bio-chemical realm simultaneously) and (through evolution) has been both obfuscated and optimized. Most, if not all, of our current state of genetic knowledge consists of "let's break this piece and see what happens" and "this stuff over here looks like that stuff over there" comparison. Call me an old stick-in-the-mud, but having "decoded" the human genome doesn't mean squat until we know what all of the instructions do (and we don't, because we are only looking at the genetic side of things, not the bio-chemical operations which result from the genetic code). Progress will be made, but it will be made through hard slogging over trenches, marshes and mountains, not on a high speed railroad.

    I think that biology will push computing into interesting directions, not through application of any biological principles we discover, but through the demands of biological investigation. Biological systems are too interconnected to be adapted to building software or computers. I take that back, the details of biological systems are too interconnected to be adapted to building software or computers, but the gross principles (e.g. the immune system: T-cells, B-cells etc.) will be increasingly copied in software and computer design.

    I believe that eventually we will be able to write complex organisms from scratch. These may not be as robust as what nature produces, but will be useful to us in many fields. Starting with the medical and spreading through the agricultural and even industrial area. I dream of trees which produce a sap, which is easily refined into methane or natural gas. But it's going to take much longer than most people seem to think.

  • by blach ( 25515 ) on Monday June 16, 2003 @12:53PM (#6214391)
    Hi there,

    As a medical student with an undergraduate degree in Mathematics, I'm really pleased to see that other scientists are getting excited about the convergence of Mathematics/Computation and molecular genetics.

    First let me correct the slight error in your Ask Slashdot submission: we say that there are three nucleotide bases in an mRNA codon (not DNA codon). If you want a review of how DNA becomes RNA becomes proteins, you can check out the intro to my undergraduate thesis paper (link below).
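    That flow (DNA to mRNA to protein) fits in a few lines of Python. This is a deliberately tiny sketch: only four of the 64 codons are in the table, and real translation involves far more machinery than this.

```python
# A tiny fragment of the standard genetic code; real cells use all 64 codons.
CODON_TABLE = {
    "AUG": "M",   # methionine (the usual start codon)
    "UUU": "F",   # phenylalanine
    "AAA": "K",   # lysine
    "UAA": None,  # one of the three stop codons
}

def translate(dna_coding_strand):
    """Transcribe a DNA coding strand to mRNA, then read it codon by codon."""
    mrna = dna_coding_strand.replace("T", "U")  # transcription: T becomes U
    protein = []
    for i in range(0, len(mrna) - 2, 3):        # one codon = 3 nucleotides
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino is None:                       # stop codon: translation ends
            break
        protein.append(amino)
    return "".join(protein)

print(translate("ATGTTTAAATAA"))  # MFK
```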

    In fact, I would encourage you to read through my paper in any case, as it may stimulate your thinking or open you up to new areas of bioinformatics research. The paper focuses mostly on a survey of analytic techniques of gene-expression microarrays, but is highly accessible to well-read / intelligent persons (it is light on technical mathematics by design).

    Please let me know what you think of it (my email address should be easily inferrable from my website address), and you get a high-five from me if you can find the glaring mathematical error that I didn't get fixed before my defense.

    http://blachly.net/james/documents/thesis.html [blachly.net]

    The best,
    James

  • by Tackhead ( 54550 ) on Monday June 16, 2003 @12:53PM (#6214393)
    > And perhaps the hottest issue of all: Is there anything inherently wrong with pursuing this avenue? What may be some of the consequences?

    To hell with the luddites. Hack the genome.

    With apologies to Steven Levy:

    1) Access to the genome, and anything which might teach you something about the way life works, should be unlimited and total. Always yield to the Hands-On Imperative.

    2) All information should be free.

    3) Mistrust authority- promote de-centralization.

    4) Hackers should be judged by their Hacking, not bogus criteria such as degrees, age, race, or position.

    5) You can create art, beauty and even life by hacking DNA.

    6) Genetic hacking can change your life for the better.

  • First off, these ideas have been around and been discussed ever since Watson and Crick cracked the DNA code in the fifties, so there has been a significant amount of literature and thought devoted to the subject. Here's my own thoughts on your questions...

    --In the long run, will biology rewrite computing or will modern day technology concepts and theory be integrated into biology? If both are true, which will have the greater effect? I understand long run is ambiguous in this question, but I'm

  • by zubernerd ( 518077 ) on Monday June 16, 2003 @12:58PM (#6214432)
    First, there is a difference between bioinformatics and DNA computing. Bioinformatics is the application of computer algorithms and statistical techniques to figure out how a biological system works. DNA computing is more of an engineering project, since you are adapting DNA to do your computational bidding (e.g. a DNA based microprocessor)
    I myself am in the field of bioinformatics/molecular biology [uwp.edu], with my primary interest being RNA regulation and regulatory elements. I am trying to find and figure out how RNA regulation works in model systems.
    Now for your questions...
    >In the long run, will biology rewrite computing or will modern day technology concepts and theory be integrated into biology?
    Both will happen...
    >If both are true, which will have the greater effect?
    I don't know about biology rewriting computing. First, yes, DNA encodes information 'like' binary 1's and 0's, but we are still figuring out how the system works. We know how to find some genes by just looking at the sequences, but we still have problems with predicting genes in a sequence (e.g. gene splicing, post-transcriptional events, etc.).
    I think it would be more sane to use the modern day technological concepts and theory, but with an emphasis on parallel computing.
    >I understand long run is ambiguous in this question, but I'm interested in all thoughts using any applicable definition.
    >Tied to the first question: How will the nature of computing, and how we perceive it, change due to biology integration?
    Well we can have those clean computers powered by photosynthesis... ok, all kidding aside, it'll change computing for those tasks DNA would excel at: A DNA computer is a type of non-deterministic computer [dna2z.com]. We have to overcome some of the problems imposed by DNA... it's a chemical in an aqueous environment that tends to mutate over time; also, the DNA computers I have seen work in a test tube, and you have to sequence the DNA to get a result. That should hopefully change in time.
    >More to the point, how much of the theory we learn today may change?
    In biology - a sh*t load most likely; like I said above, we are still trying to understand biological systems and how they interact with each other, including DNA and how it codes for life.
    >What will be the biggest issue determining the success of the adoption of biology-integrated computing?
    Get it out of the test tube first... place it on a chip, like a microprocessor. Also the energy source... I don't want to share my doritos with my desktop...
    >Will it be technology factors or will it be societal factors (e.g., rebellion by the Right Wing), or something else?
    Don't like the right wing, eh? Well, as a card-carrying member of the vast right-wing conspiracy, you have just as much to worry about from the left... those environuts who think we are tampering with nature (like we haven't been doing that for the last 10000+ years, e.g. agriculture). Both extremes muzzle science... get used to it.
    If we start to integrate computers into our selves... yeah I think society will have some issues to face about what it means to be human. (I'll go with David Hume with this gem "I'm human because my parents were human")
    >What things must hold true to make the idea succeed?


    1. Perfect DNA computing
    2. ???
    3. Profit -- of course!

    Ok, seriously -- there needs to be interest in the scientific community, we need to figure out how DNA works in living beings... how it encodes all its data (and how about that junk DNA?). We need to get it on a chip (not a microarray chip... those are sometimes called DNA chips). And there needs to be a profit motive.
    >And perhaps the hottest issue of all: Is there anything inherently wrong with pursuing this avenue? What may be some of the consequences?
    Hell no! But if you are interested in DNA computing, the bioinformatic
  • Bio is catching up (Score:3, Interesting)

    by thogard ( 43403 ) on Monday June 16, 2003 @12:58PM (#6214433) Homepage
    They figured out the code segments in DNA. Now they need to figure out the data segments, and maybe in time they can figure out how data segments in DNA manage to make their way into a creature's memory. That's a few levels of indirection that have to be figured out.

    DNA decoding is starting to pick up on some of the debugging concepts that have been in the digital world for 50 years. There are ways of iterating over code so it looks like single-stepping is going places. It's just hard to pull off on a multithreaded cluster and understand what's going on.

    Of course, what they're having a real problem with is the DRM stuff that's making it hard to build replacement brains out of stem cells.
  • ... will computing stimulate a revolution in biology, or will biology stimulate a revolution in computer science?

    Computing has accelerated biological research, but I'm unconvinced that it will fundamentally alter the prevalent paradigm in the biological sciences. OTOH, biology may provide the concepts that will push a change in computer science.

    Not being a computer scientist, I can't say this for sure, but I think one of the places to look would be in membrane potentials & how that might be applied

  • Ethics (Score:3, Insightful)

    by Kafka_Canada ( 106443 ) on Monday June 16, 2003 @01:09PM (#6214531)
    I have no specific technical background from which to address your questions (I know, I know... this is Slashdot), but your moral questions are interesting:

    * What will be the biggest issue determining the success of the adoption of biology-integrated computing? Will it be technology factors or will it be societal factors (e.g., rebellion by the Right Wing), or something else? What things must hold true to make the idea succeed?

    First, I know it's only an example you've given (lit., "exempli gratia"), but the "societal" factors as you call it -- more political, really, but let's compromise on socio-political -- are not an exclusively "Right Wing" threat. The modern Left holds many central beliefs contrary to the integration of technology and biology, especially concerning human biology, for instance the primogeniture of society over the individual and (partially by extension) the malleable, ahistorical understanding of the human mind (a notion commonly referred to as "tabula rasa"). Under this view, attempting to "improve" or in any way alter humans as conscious beings by improving or altering us as biological beings will seem either immoral or, more likely, futile. This mostly to point out that limiting factors for the progress in your field don't come exclusively from conservative ideology.
    In general there seems to be a growing trend in intellectual/ethicist circles toward acknowledging the massive (though far from exclusive) importance of our evolutionary past, which in simple political terms is more or less centrist or apolitical, though it could be interpreted as slightly "right wing" (more libertarian or classical liberal than conservative). That view suitably allows you scientists to carry forth your apolitical and almost-amoral research, leaving as the likely culprit for "most likely to impede the progress of biology-integrated computing" common economic factors: what innovations will ultimately create the most value, and therefore what innovations will proximately be most likely to succeed (in getting funded, in getting researched, etc.).

    And if you take exception to my "almost-amoral" comment (which you shouldn't), I mean it compared to people who spend their lives sweating over the ones and zeroes of right and wrong -- not that you value ethical behaviour any less than they do, only that you likely (likely) pay less attention to the nuances of what makes ethical behaviour ethical; my guess is you probably subscribe to a simplistic (and ages-old and approximately, though probably not absolutely, right) axiom like the biblical (New and Old Testament) reciprocating Golden Rule or the commission-of-harm-avoiding Hippocratic Oath -- good on you.

    * And perhaps the hottest issue of all: Is there anything inherently wrong with pursuing this avenue? What may be some of the consequences?

    There are numerous criteria for wrongness, and in the case that you mean moral wrongness there are numerous defensible moral systems. Also, if you mean specifically moral wrongness, most moral systems take into consideration not just the consequences of actions, but the intents behind them as well (brick-in-the-head obvious example: Western legal tradition's distinction between premeditated and non-premeditated murder, or either of those and accidental homicide) -- if you meant to imply the connection I've understood between inherent wrongness and consequences.
    I don't see anything one could construe as inherently wrong with the research you propose, if you don't believe in God or make intuitive essentialist ascriptions to the human form or subscribe to the aforementioned primogeniture of society over the individual and all that entails. In other words, assuming of course otherwise ethical behaviour, if you're a modern freedom-loving humanist (or, in the trivial case if you're a nihilist), it seems to me there's no basis for having qualms about the philosophical nature of what you're doing -- but it's healthy of you to be wary of slipping into less-than-savoury situations, and to constantly question yourself to defend against aforementioned slipping and to ameliorate yourself -- no doubt the skeptical scientist in you.
  • A striking aspect of this analogy is how poorly functional units are separated from one another in organisms. The largely distinct functions of storage and computation appear to overlap at molecular and cellular levels. I feel that a coming big revolution in computers is a blurring of these distinctions, but I admit that idea is vague futurism on my part.
  • Computing is already helping biology, like with protein folding. This is only going to get stronger.

    Biology may help build better computers, either by "growing" things like media, or with nanotechnology indistinguishable from biology being used to grow chips.

    However, the "ultimate" convergence of a biological computer is not going to happen, except perhaps in an isolated sense where it can be made cheaper to grow a computer. The problem with biological computing is that generally we want to compute, not b
  • "How can I turn this into a thesis?"
  • You do know (Score:3, Funny)

    by Azureflare ( 645778 ) on Monday June 16, 2003 @01:26PM (#6214714)
    You do know that we're just part of a 10 billion year computer program on Earth, the greatest computer ever built in space and time, and commissioned by mice?
  • by Laxitive ( 10360 ) on Monday June 16, 2003 @01:28PM (#6214733) Journal
    I recently started working at a bioinformatics position as well, coming from a pure CS background. I haven't learned enough of the biology side of things to really get into much more than tool support for distributed sequence analysis toolchains, but what the hell, might as well comment.

    One thing I want to say before responding to your points: nature is _NOT_ "efficient" like computers are "efficient". Natural systems are enormous, ad-hoc, kludges. They work extremely well, and have tons of redundancy and fault-tolerance, but that's mainly due to about 4-billion years of slow, brutal, optimisation by the evolutionary process. Natural systems do certain things faster than computer systems because:

    1. They've been optimised for a hell of a long time, and they've found ways to engineer and construct extremely complicated structures and processes that are still "small" (compared to modern human-engineered technology).

    2. They've been allowed to search through a much larger solution space than what computers have searched through. Computers are inherently limited by the fact that they are tools which can still be reconciled for a large part with human reason - they were constructed using models that humans can understand and reason about, and explain fully from the start. Evolution, on the other hand, is much more of a blind search.

    Another thing to note is that natural systems all try to solve one problem: existence and self-perpetuation. All the natural systems we are able to observe today exist because they are structured such that they can fulfill these basic requirements. Now, in the process of solving this single-minded problem, nature has managed to come up with solutions for many other problems - many of which can be borrowed and applied to human problems. But it's erroneous to think of nature as "god's textbook of problem-solving", or anything like that.
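    To make the "blind search" point concrete, here's a minimal evolutionary loop in Python. Nothing in it knows *why* ones are good; it just mutates one bit at a time and keeps whatever scores at least as well (the all-ones fitness target is a toy assumption, obviously):

```python
import random

def evolve(length=20, generations=300, seed=1):
    """Evolve a bitstring toward all-ones by blind mutation and selection."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    start_fitness = sum(genome)
    for _ in range(generations):
        child = genome[:]
        child[rng.randrange(length)] ^= 1  # one random point mutation
        if sum(child) >= sum(genome):      # keep the child if no worse
            genome = child
    return start_fitness, sum(genome)

start, final = evolve()
print(start, "->", final)  # fitness never decreases under this selection rule
```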

    > In the long run, will biology rewrite computing or will
    > modern day technology concepts and theory be
    > integrated into biology? If both are true, which will have
    > the greater effect? I understand long run is ambiguous in
    > this question, but I'm interested in all thoughts using any
    > applicable definition.

    There are two aspects to this - borrowing ideas from biology (i.e. reimplementation), and borrowing biological structures themselves (e.g. using bacteria to make enzymes, viruses as delivery vectors for drugs, growing muscle tissue for robot-locomotion, etc.). Both are happening to a certain extent.

    I think it'll be a while yet before we will be able to jump into biological systems and "change the code to do what we want". We do it in really primitive, crude ways right now, but the level of complexity of biological systems, I think, will mean that it'll take time before we are able to fully control them.

    >Tied to the first question: How will the nature of
    > computing, and how we perceive it, change due to
    > biology integration? More to the point, how much of the
    > theory we learn today may change?

    I don't think biology will change theory that much. CS theory comes from the human reasoning process. I don't think there are that many abstract concepts that we can extract out of biological systems. I think the real impact will be in engineering aspects - mimicking, or reusing wholesale, biological structures to achieve the properties that we want.

    > What will be the biggest issue determining the success
    > of the adoption of biology-integrated computing? Will it
    > be technology factors or will it be societal factors (e.g.,
    > rebellion by the Right Wing), or something else? What
    > things must hold true to make the idea succeed?

    Forget the right wing. They make a lot of noise, but ultimately they are not that powerful, especially in the capitalist west. The religious conservatives are used as a tool to get votes, by pandering to their pet causes, but once people figure out a w
  • by argStyopa ( 232550 ) on Monday June 16, 2003 @01:32PM (#6214778) Journal
    I think that ultimately biology will contribute more to CS than the other way 'round.

    Presuming you're not a creationist, there are MILLIONS of generations worth of Darwinism at work in even a simple worm - weeding out the inefficient in times of stress, etc.

    Granted, the process in biology is neither linear nor even relatively efficient, but there are tremendous lessons in autonomous operation, fault-tolerance (HUGE), adaptability, etc that bio systems can teach or implement in computer situations - what can bio-systems get from computers? It just seems natural (ha!) that the more we learn from bio-systems, the more we'll apply it to computer paradigms. Until now, it's been too complex for us to really understand.
  • by rocketman74 ( 681910 ) on Monday June 16, 2003 @01:46PM (#6214959)
    You've asked some very broad questions which delve into both technical and social issues. I'm not much of a social theorist, but I do know something about computing and biotechnology. I'm a postdoc in a lab that studies genomics and biological regulatory networks using computational methods. There are two basic approaches to merging bio and computing: 1) you try to improve computing by using ideas or techniques from bio, and 2) you try to do something interesting in bio by using ideas from computing.

    Examples of (1), trying to improve computing by using bio, would be such things as DNA computing or doing massive combinatorial searches in chemical solutions. In DNA computing, you use various enzymes or chemical agents to modify a DNA string. Think of it as a Turing machine acting on a strip, except the strip is now a piece of DNA. Since the DNA strip is modified over the procedure, the "state function" is partially encoded in the data strip, not just internally in the chemical agent. The great advantage of DNA as a computing medium is that there are methods for selectively replicating DNA based on its "state". So you can run your chemical procedure over many different possible DNA sequences simultaneously and then only replicate the particular sequence with the desired state, which gives your answer.

    At the moment, DNA computing is most useful for search problems. For example, several years ago, the traveling salesman problem was tackled in a DNA system. There is a lot of research now into new operations that can be performed on DNA strings (e.g. ways of doing multiplication or addition using various enzymes and data encodings) to broaden the types of problems that can be tackled. Anyway, this is one way people are using bio to improve computing, broadly defined. In a lot of ways, this isn't really bio anymore. Scientists discovered DNA and enzymes in cells, but now we're just using them as materials for computation.
    People also use similar search techniques with non-biological molecules. Some similar search and amplification procedures are used to make synthetic organic compounds in drug discovery. DNA, however, is particularly useful because it's a long molecule, so a lot of operations can be performed on it. As far as how DNA will affect computing in the long run, I don't know. We're still very far from making a DNA computer that can achieve anything like what silicon-based systems can. But there could be big technological advances eventually. I don't know of any ways that bio systems have affected our ideas about computing at a software level -- except to perhaps funnel more interest towards massive parallelism. Again, I don't want to imply pessimism about what could be invented.

    As for (2), how computing could affect biology, this is much less concrete. I'll interpret this to mean that one is trying to program biological systems to do something, i.e. if we give a well-defined instruction set, can we get a cell, organ, or organism to yield a particular output? This to me is just the basic problem of science -- trying to understand how stuff works. We'll be able to "program" cells, organs, or organisms if we understand them as well as we now understand the chemical properties of DNA, or even better, as well as we understand silicon-based semiconductors.
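    The generate-and-filter idea behind that kind of DNA search is easy to mimic in silicon. Here's a toy brute-force sketch (the function name and example graph are invented for illustration): every vertex ordering plays the role of a candidate DNA strand, and the filtering step plays the role of selective amplification. Adleman's 1994 experiment solved a directed Hamiltonian-path instance of roughly this shape, with each vertex and edge encoded as a short oligonucleotide.

```python
from itertools import permutations

def hamiltonian_paths(edges, start, end):
    """Brute-force analogue of a DNA search: enumerate all vertex
    orderings ("strands"), keep only those that walk existing directed
    edges from start to end ("selective amplification")."""
    vertices = {v for edge in edges for v in edge}
    valid = []
    for perm in permutations(vertices):
        if perm[0] != start or perm[-1] != end:
            continue  # wrong endpoints: this strand is washed away
        if all((a, b) in edges for a, b in zip(perm, perm[1:])):
            valid.append(perm)  # every step follows a real edge
    return valid

# Toy directed graph with a single Hamiltonian path from 0 to 3.
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
print(hamiltonian_paths(edges, 0, 3))  # [(0, 1, 2, 3)]
```

    The punchline of doing this in chemistry instead is that the "for perm in permutations" loop happens all at once, in parallel, in a test tube - which is also why it only pays off for search problems.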
  • I forgot to feed the computer!
  • When the biology folks and the computing folks get together, it always gets really theoretical and really useless. All of the real-world-benefit folks are busy working on problems of a fundamental chemical nature: it is enzymes that shape, edit, and cut DNA. It is enzymes that create drugs and ARE drugs. You cannot learn too much about protein chemistry or structural biology. These disciplines are under-appreciated and there are very real biology and computational problems involved in them for those who
  • As a person who trained as a Biochemist and Computational Chemist and then went on to develop business systems, I was struck by the similarities between how computer scientists and biological systems handle complexity.

    When I was learning the concepts of EJB's and remote processing, I began to see patterns between how Cells intercommunicate and the way Clients/Servers work. I began to see similarities between how cells set up "firewalls" and open firewalls and the way we do it in computing.

    As an OO p

  • I remember learning about RNA translation and PCRs in 9th grade biology. I thought the link between DNA and computers was interesting so I did a search and found a Howstuffworks.com article entitled "how dna computers will work" http://computer.howstuffworks.com/dna-computer.htm
  • http://news.com.com/2030-6679_3-998622.html
  • by fasta ( 301231 )

    There are important similarities between the information processing and transfer in living organisms and mathematical computation, which have been recognized for more than 50 years (see Gunther Stent's "Paradoxes of Progress" for some essays on the nature of genes and biological information transfer as the central dogma was emerging). But there are critical differences as well, which are often misunderstood.

    The fundamental difference between computing in biology and computing with man-made computers is t

  • > In the long run, will biology rewrite computing
    > or will modern day technology concepts and
    > theory be integrated into biology? If both
    > are true, which will have the greater effect?
    > I understand long run is ambiguous in this
    > question, but I'm interested in all thoughts
    > using any applicable definition.

    These aren't really "either-or" but "both"

    Computing will probably evolve to DNA-based biological computation to some extent. There will remain an "inorganic" el

  • by Salis ( 52373 ) on Monday June 16, 2003 @09:55PM (#6219583) Journal
    I'm a graduate student in chemical engineering at the University of Minnesota and this is my field of research...
    (Sorta strange how Minnesota is a big center for medical devices / chemical engineering)

    I'm in the process of designing systems of genes that interact to perform specific functions, like switches, oscillators, filters, etc. I won't go into a long harangue over how it's done or the detailed specifics, because if you're really interested you can read my paper in 'Computers in Chemical Engineering', to be published sometime in November/December. (Yes, shameless self-promotion.)

    Very briefly, systems of biological reactions occur in such small volumes and in such small concentrations that the traditional mathematics of describing chemical reactions breaks down. One requires probability theory and the use of Markov processes, a type of stochastic process, to accurately describe what's really going on inside cells. One does this with a very handy algorithm developed by a guy named Daniel Gillespie (search the literature if you're interested) and big freakin computers. (I'm going to gloat: I'm getting access to the 54th fastest computer in the world. Oh, fellow Slashdotters, it brings a tear to my eyes...)
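    Gillespie's direct method is short enough to sketch for the simplest possible case. Here's a minimal, hypothetical birth-death example (one gene product, constant-rate production, first-order degradation; the rate constants are invented for illustration, not taken from any real system):

```python
import random

def gillespie_birth_death(k_produce=10.0, k_degrade=0.1, t_end=100.0, seed=1):
    """Gillespie's direct method for a single species: production fires
    at constant rate k_produce, degradation at rate k_degrade * n."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    trajectory = [(t, n)]
    while t < t_end:
        a_prod = k_produce              # propensity of production
        a_deg = k_degrade * n           # propensity of degradation
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)   # waiting time to next event ~ Exp(a_total)
        if rng.random() < a_prod / a_total:
            n += 1                      # a production event fired
        else:
            n -= 1                      # a degradation event fired
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death()
# the copy number fluctuates around k_produce / k_degrade = 100
```

    Real gene networks have dozens of coupled reactions, so the propensity list grows and each step samples which reaction fires from the full discrete distribution - which is where the big freakin computers come in.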

    Here's my two bits on the subject of integrating biology and computers...

    You have two distinct areas of computational biology (as Slashdotters know it) that will probably go into different directions. One can use computers to design biological systems in order to perform certain functions (medical, industrial, etc). This is entirely analogous to an engineer using a computer to design a factory before building it...and knowing exactly (or almost) how it will all turn out _prior_ to building it. This is also why buildings don't regularly fall down.

    Then you have the Cyborg fantasy... I.e. putting computers in your body to somehow enhance performance. Well, I would say that is numerous decades away because we currently lack the understanding of our brains...and the enhancement of our brains' computational speed is the only area in which digital computers can enhance human performance significantly (I discount super strength as novelty rather than enhancement.)

    But, there is a useful aspect to the 'cyborg' fantasy: Using designed cells to enhance the performance of humans. Cures to many of our current diseases require significant changes to our DNA and/or microscopic structure of our cells. Currently, the approach has been to design (or discover randomly...) molecules that interact with our cells in a way that improves our health.

    Now extend that thinking further... What about designing whole cells to interact with our cells in order to improve health. Here's some examples that may come true in the next twenty years:

    A cell (of human origin) that lives benignly in one's body until it detects a protein that is only produced (in large quantity) by a cancerous cell. When it detects large numbers of that protein, it may do the following actions:

    --Replicate itself quickly (in a controlled fashion, unlike cancerous cells, however)
    --Warn the person by producing a visible indicator (ie. make the person urinate blue (har har))
    --Recruit the person's immune system to attack the cancerous cell
    --Attack the cancerous cell itself (phagocytosis, etc)
    --Produce a molecule (a drug) that is known to kill that cancerous cell

    Here's another example:

    Someone designs a microbe that detects one or more specific chemicals in order to alert humans of its presence...a biosensor.

    When the microbe (or its ten+ million neighbors) detects a specific chemical (Anthrax, ricin, smallpox, influenza, etc, etc), it produces a green fluorescent protein (GFP)..and tells all of its neighbors to produce GFP too. Thus one has a very sensitive, very specific biosensor. Place 'em in every airport and seaport in the world and one now has an (almost) instant indicator of the presence of such toxins...
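    That neighbor-to-neighbor amplification can be caricatured as a spreading process on a grid of cells. A deliberately crude sketch (real quorum-style signaling is diffusive and noisy, not a lockstep broadcast; the function and grid here are made up purely to illustrate the amplification idea):

```python
from collections import deque

def gfp_spread(width, height, sensor):
    """One cell at `sensor` detects the toxin and expresses GFP; each
    glowing cell then signals its 4 grid neighbors. Returns the number
    of signaling rounds until the whole colony fluoresces (BFS depth)."""
    lit = {sensor}                      # cells currently expressing GFP
    frontier = deque([(sensor, 0)])
    rounds = 0
    while frontier:
        (x, y), t = frontier.popleft()
        rounds = max(rounds, t)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in lit:
                lit.add((nx, ny))       # neighbor switches on GFP
                frontier.append(((nx, ny), t + 1))
    return rounds

# a single sensor cell in the corner of a 10x10 colony
print(gfp_spread(10, 10, (0, 0)))  # 18 rounds: Manhattan distance to (9, 9)
```

    The point of the toy model is the sensitivity argument: one detection event is amplified into a colony-wide, easily read signal in a number of rounds that grows only with the colony's diameter.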

    So, to answer one o
