
IBM Researchers Working Toward Cheap, Fast DNA Reader

nk497 writes "IBM scientists are working on ambitious research where nano-sized holes will be drilled into computer chips and DNA passed through to create a 'genetic code reader.' A DNA molecule would be passed through a hole just three nanometers wide, while an electrical sensor 'reads' the DNA. The challenge of the silicon-based 'DNA Transistor' would be to slow and control the motion of the DNA through the hole so the reader could decode what is inside it. IBM claimed that if the project was successful it could make personalised genome analysis as cheap as $100 to $1,000, and compared it to the first-ever sequencing done for the Human Genome Project, which cost $3 billion."
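A minimal sketch of the readout idea, not IBM's actual design: the per-base current signatures, the noise model, and the one-base-per-cycle ratcheting below are all invented for illustration. The point is that if the strand's motion through the pore can be controlled, base calling reduces to matching each current sample to the nearest known signature.

    import random

    # Hypothetical per-base current signatures (arbitrary units). Real nanopore
    # signals depend on several bases at once and are far noisier than this.
    SIGNATURE = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}

    def translocate(strand, noise=0.2):
        """Yield one noisy current reading per base as the strand ratchets through."""
        for base in strand:
            yield SIGNATURE[base] + random.gauss(0, noise)

    def decode(readings):
        """Call each base by nearest-signature matching."""
        return "".join(
            min(SIGNATURE, key=lambda b: abs(SIGNATURE[b] - r)) for r in readings
        )

    strand = "".join(random.choice("ACGT") for _ in range(40))
    called = decode(translocate(strand))
    print(strand)
    print(called)
    print(f"accuracy: {sum(a == b for a, b in zip(strand, called)) / len(strand):.0%}")

Crank `noise` up, the electrical analogue of the strand moving too fast to sample cleanly, and accuracy collapses; that is exactly the motion-control problem the summary describes.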
  • Re:Amazing! (Score:5, Informative)

    by toppavak ( 943659 ) on Tuesday October 06, 2009 @12:12PM (#29658273)
    Oh, it's been done. [google.com] In fact, ordering custom DNA sequences is pretty cheap [idtdna.com].
  • Re:Other uses (Score:3, Informative)

    by Taibhsear ( 1286214 ) on Tuesday October 06, 2009 @12:12PM (#29658275)

    Insurance companies will use it to deny health insurance outright or label any diseases that this thing finds as "pre-existing conditions".

    There is already a law banning them from doing this.

  • by Michael G. Kaplan ( 1517611 ) on Tuesday October 06, 2009 @12:17PM (#29658363)

    The New York Times published an article in August about a technology that decoded a human genome for less than $50,000 [nytimes.com]. The inventor speculates that the technology will be able to decode a genome for just $1,000 in 2-3 years.

    That being said, it will be amazing to see the IBM project succeed. Either way, the cost of decoding a genome is dropping so quickly that it puts Moore's Law to shame.
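    A rough back-of-envelope supports that claim, using approximate figures from this thread: ~$3 billion for the Human Genome Project (finished around 2003) versus ~$50,000 in 2009. Both dates are rough.

        import math

        cost_then, cost_now = 3e9, 5e4
        months = (2009 - 2003) * 12

        halvings = math.log2(cost_then / cost_now)          # ~15.9 halvings of cost
        print(f"cost halves every {months / halvings:.1f} months")  # ~4.5 months
        print("Moore's Law: transistor counts double roughly every 18-24 months")

    A cost halving every ~4.5 months against Moore's ~18-24 month doubling really does put it to shame.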

  • by dAzED1 ( 33635 ) on Tuesday October 06, 2009 @01:05PM (#29659063) Journal

    From the article: Dr. Quake's DNA sequencing machine, about the size of a refrigerator, works by splitting the double helix of DNA into single strands and breaking the strands into small fragments that on average are 32 DNA units in length.

    That's not terribly different from what happens now; we cut things into chunks of X units (say, 400 base pairs), and then use all sorts of tools to guess how to put it all back together. The major problem is something mentioned elsewhere in that article: A computer program then matches the billions of 32-unit fragments to the completed human genomes already on file...

    There are many situations where this is simply not ideal, not the least of which is when someone wants to sequence a species that has yet to be sequenced. Additionally, you're basing your alignment on the alignment of the previous set, without actually knowing whether the previous set is properly aligned. More to the point, it is well accepted that it is not properly aligned, so you're testing against a known bad. This is (imo) the biggest problem in bioinformatics right now (see the toy example below).

    IBM's machine doesn't seem to be taking that same approach, and instead appears to be cutting the DNA into much larger chunks (from what I've read about it). This isn't something where Moore's Law applies; it's a paradigm shift, not an incremental improvement.
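    A toy illustration of the circularity described above. The reference is a random string, the "novel insertion" is made up, and the aligner is exact matching (real aligners tolerate mismatches, but the failure mode is the same):

        import random

        random.seed(1)
        reference = "".join(random.choice("ACGT") for _ in range(300))

        # A "sample" genome that differs from the reference by a novel insertion.
        sample = reference[:150] + "TTTTGGGG" * 4 + reference[150:]

        # Chop the sample into 32-unit reads and place each one by matching
        # against the reference -- the step the NYT article describes.
        reads = [sample[i:i + 32] for i in range(0, len(sample) - 31, 8)]
        placed = [r for r in reads if reference.find(r) >= 0]
        print(f"{len(placed)} of {len(reads)} reads aligned; the rest are discarded")

    Every read spanning the insertion fails to align and is silently dropped: if something isn't already in the reference, it can't show up in a reference-guided assembly. And if the reference itself is misassembled, reads get placed so as to reproduce its errors.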

  • by reverseengineer ( 580922 ) on Tuesday October 06, 2009 @01:21PM (#29659391)
    There are a few different ways of doing DNA sequencing now, but most automated sequencing uses what's called the dye-terminator method, which is an advancement on the Sanger (aka dideoxy) method. If the sample needs to be amplified, you can use the polymerase chain reaction, which uses heat to separate strands of DNA, then uses a heat-stable DNA polymerase to make copies, a process that can be cycled to exponentially increase the sample. Once again, PCR is only necessary if you need more material to work with.

    The sequencing itself starts by unzipping your DNA sample into single strands, and then making copies of each strand in an environment where a small fraction of the available deoxynucleotides used for building the copied strands are replaced with labeled dideoxynucleotides, which are added to the growing strand but then terminate further growth. This produces a series of DNA fragments of differing lengths, each with a tag on the last base added. You can use electrophoresis to separate the fragments by size, which creates a map of where your tagged dideoxy bases are located. If you have managed to tag each location at least once, then you know the entire sequence.

    The original Sanger method used radioactive tags, and you had to run the reaction 4 times- with labeled A, T, C, and G separately. Modern automated sequencers usually use four fluorescent tags at different wavelengths, so all four can be run in the same pot.
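    A toy simulation of the workflow just described, with a made-up template, a made-up termination probability, and strand complementarity ignored entirely (`synthesize` and `p_terminate` are invented names, not any real library's API):

        import random

        random.seed(0)
        template = "GATTACACCGGTTAGC"  # made-up template

        # (Upstream PCR is just doubling: 30 ideal cycles turn one molecule
        #  into 2**30, about a billion copies. Real reactions plateau sooner.)

        def synthesize(template, p_terminate=0.15):
            """Extend a copy base by base; with probability p_terminate the
            base incorporated is a labeled dideoxy terminator, which ends
            the fragment there."""
            for i, base in enumerate(template):
                if random.random() < p_terminate:
                    return i + 1, base  # (fragment length, label on last base)
            return None                 # strand ran to completion unterminated

        # Run many strands in the same pot, four-color style.
        fragments = {f for f in (synthesize(template) for _ in range(2000)) if f}

        # "Electrophoresis": sort fragments by length and read off each label.
        called = "".join(base for length, base in sorted(fragments))
        print(template)
        print(called)  # matches, provided every length terminated at least once

    If some fragment length never shows up, the read has a gap at that position, which is why you want many strands in the pot.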
  • Re:Amazing! (Score:1, Informative)

    by Anonymous Coward on Tuesday October 06, 2009 @01:44PM (#29659835)

    If by longer reads you mean reading the whole thing at once, then yes. Doing away with shotgun sequencing would remove much of the cost, I would think.

  • Re:Other uses (Score:2, Informative)

    by Taibhsear ( 1286214 ) on Tuesday October 06, 2009 @02:03PM (#29660139)

    Challenge accepted. [wikipedia.org]
