Sequencing a Human Genome In a Week
blackbearnh writes "The Human Genome Project took 13 years to sequence a single human's genetic information in full. At Washington University's Genome Center, they can now do one in a week. But when you're generating that much data, just keeping track of it can become a major challenge. David Dooling is in charge of managing the massive output of the Center's herd of gene sequencing machines, and making it available to researchers inside the Center and around the world. He'll be talking about his work at OSCON, and gave O'Reilly Radar a sense of where the state of the art in genome sequencing is heading. 'Now we can run these instruments. We can generate a lot of data. We can align it to the human reference. We can detect the variants. We can determine which variants exist in one genome versus another genome. Those variants that are cancerous, specific to the cancer genome, we can annotate those and say these are in genes. ... Now the difficulty is following up on all of those and figuring out what they mean for the cancer. ... We know that they exist in the cancer genome, but which ones are drivers and which ones are passengers? ... [F]inding which ones are actually causative is becoming more and more the challenge now.'"
Re:Here's what I want to know... (Score:4, Informative)
Typically they sequence every base at least 30 times.
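To get a rough sense of why 30x coverage turns into a data-management problem, here is a back-of-the-envelope Python sketch. The genome size and read length are assumed round numbers for illustration, not the Center's actual figures.

# Illustrative only: how much raw sequence ~30x coverage of a human genome implies.
GENOME_SIZE = 3.2e9   # haploid human genome, roughly 3.2 billion bases (assumed)
COVERAGE = 30         # each base sequenced ~30 times on average
READ_LENGTH = 100     # assumed short-read length in bases

total_bases = GENOME_SIZE * COVERAGE
reads_needed = total_bases / READ_LENGTH

print(f"raw bases per genome: {total_bases:.2e}")   # ~9.6e10, i.e. roughly 100 gigabases
print(f"short reads needed:   {reads_needed:.2e}")  # on the order of a billion reads

And that is before you store quality scores, alignments, and intermediate files, which multiply the footprint further.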
Data analysis is a rapidly growing problem in Biology (Score:5, Informative)
One of the big problems studying expression patterns in cancer specifically is the paucity of samples. The genetic differences between individuals (and tissues within individuals) mean there's a lot of noise underlying the "signal" of the putative cancer signatures. This is especially true because there are usually several genetic pathways by which a given tissue can become cancerous: you might only need mutations in a small subset of a long list of genes, which is difficult to spot by sheer data mining. While cancer is very common, each type of cancer is much less so; the paucity of available samples of a given cancer type at a given stage therefore makes reaching statistical significance very difficult. There are some huge projects underway at the moment to collate samples from many cancer labs for meta-analysis, dramatically increasing the statistical power of the studies. A good example of this is the Pancreas Expression Database [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.
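To make the statistical-power point concrete, here is a minimal Python simulation sketch. The effect size, noise level, and sample counts are invented for illustration rather than taken from real expression data; it simply estimates how often a plain two-sample t-test detects a fixed expression shift as the number of pooled samples per group grows.

# Illustrative power estimate: detecting an assumed expression shift between
# normal and tumour samples with a two-sample t-test, for growing sample sizes.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
EFFECT = 0.5    # assumed mean shift in the "cancer signature" gene (in noise units)
ALPHA = 0.05    # significance threshold
TRIALS = 2000   # simulated experiments per sample size

for n in (10, 30, 100, 300):  # samples per group
    hits = 0
    for _ in range(TRIALS):
        normal = rng.normal(0.0, 1.0, n)
        tumour = rng.normal(EFFECT, 1.0, n)
        if ttest_ind(normal, tumour).pvalue < ALPHA:
            hits += 1
    print(f"n={n:4d} per group -> estimated power {hits / TRIALS:.2f}")

With a modest effect buried in biological noise, power climbs steeply with sample size, which is exactly what pooling many labs' samples into one database buys you.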
DNA is digital (Score:2, Informative)
Re:Moore's law at work? (Score:2, Informative)
Fingerprinting doesn't rely on DNA sequencing, but it does rely on the DNA sequence differing between people. Everyone's DNA contains subtle differences (particularly in the non-coding regions). Various laboratory techniques exploit these differences to produce small pieces of DNA whose sizes vary from person to person. When these fragments are run down a suitable gel (usually agarose, a substance derived from seaweed) under an electric current, they separate by size, and the resulting pattern of fragments is unique to each individual.
Several fingerprinting techniques rely on what most programmers would best recognise as regular expression matching. For example, there are enzymes which recognise certain DNA sequences but not others, and will cut the DNA in two wherever that sequence occurs; in Perl terms, matching the pattern /GAATTC/ is essentially what an enzyme called EcoRI [wikipedia.org] does. Not everyone will have the same number of copies of this sequence in their DNA, nor will the copies be in the same places, so the number and size of the fragments will differ from person to person. By using a suitable range of such enzymes you can generate a pattern of DNA fragments sufficiently unique to identify a single person within a population of several billion. A sketch of the idea is below.
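Here is a small Python sketch of the regex analogy. The recognition sites are real (EcoRI cuts at GAATTC, HindIII at AAGCTT), but the two "people" are made-up toy sequences, and splitting on the site is a simplification of how the enzymes actually cut.

# Toy restriction-fragment "fingerprints": cut two sequences wherever an
# enzyme's recognition site matches and compare the fragment-length patterns.
import re

ENZYMES = {
    "EcoRI": "GAATTC",
    "HindIII": "AAGCTT",
}

def fragment_lengths(dna, site):
    # Simplification: re.split consumes the site; real enzymes cut within it.
    return sorted(len(f) for f in re.split(site, dna))

person_a = "AATGAATTCGGCTTAAGCTTGGAATTCCTA"
person_b = "AATGGATTCGGCTTAAGCTTGGAATTCCTA"  # one-base change destroys an EcoRI site

for name, site in ENZYMES.items():
    print(f"{name:8s} A: {fragment_lengths(person_a, site)}  "
          f"B: {fragment_lengths(person_b, site)}")

Person A and person B give identical HindIII patterns but different EcoRI patterns, so combining several enzymes quickly narrows a match down to one individual.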
For more information, take a look at DNA Profiling [wikipedia.org] on Wikipedia.