Sequencing a Human Genome In a Week 101
blackbearnh writes "The Human Genome Project took 13 years to sequence a single human's genetic information in full. At Washington University's Genome Center, they can now do one in a week. But when you're generating that much data, just keeping track of it can become a major challenge. David Dooling is in charge of managing the massive output of the Center's herd of gene sequencing machines, and making it available to researchers inside the Center and around the world. He'll be talking about his work at OSCON, and gave O'Reilly Radar a sense of where the state of the art in genome sequencing is heading. 'Now we can run these instruments. We can generate a lot of data. We can align it to the human reference. We can detect the variants. We can determine which variants exist in one genome versus another genome. Those variants that are specific to the cancer genome, we can annotate those and say these are in genes. ... Now the difficulty is following up on all of those and figuring out what they mean for the cancer. ... We know that they exist in the cancer genome, but which ones are drivers and which ones are passengers? ... [F]inding which ones are actually causative is becoming more and more the challenge now.'"
Re: (Score:2)
Illumina will sequence your genome for $48,000.
http://scienceblogs.com/geneticfuture/2009/06/illumina_launches_personal_gen.php [scienceblogs.com]
Details.
Re: (Score:1)
Illumina will sequence your genome for $48,000.
http://scienceblogs.com/geneticfuture/2009/06/illumina_launches_personal_gen.php [scienceblogs.com]
Details.
Helluva lot of details. After wasting some perfectly useful clicks, all I could come up with was:
Illumina's technology is extremely well-established, and serves as the backbone for most large-scale genome sequencing projects currently underway (including the majority of the samples sequenced as part of the 1000 Genomes Project); that gives it an edge over the more experimental technology employed by competing sequence provider Complete Genomics.
That's so many details that I really want my clicks back. You should be ashamed of yourself, Sir.
Re: (Score:3, Interesting)
A whole human genome will fit on a CD.
If you just transmit the diffs from a generic human reference, you could put it in an e-mail.
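A quick back-of-the-envelope check of both claims (all round numbers assumed, not taken from TFA):

    # Rough sanity check; every figure here is an order-of-magnitude
    # assumption, not a measurement from any particular pipeline.
    GENOME_BASES = 3e9         # ~3 billion bases, haploid human genome
    BITS_PER_BASE = 2          # A, C, G, T pack into 2 bits each
    SNPS_VS_REFERENCE = 3.5e6  # typical single-base differences between
                               # two people (order of magnitude)
    BYTES_PER_SNP = 9          # assumed record: 4-byte position, 1-byte
                               # allele, 4 bytes chromosome/flags

    raw_mib = GENOME_BASES * BITS_PER_BASE / 8 / 2**20
    diff_mib = SNPS_VS_REFERENCE * BYTES_PER_SNP / 2**20

    # ~715 MiB: CD-sized once you compress the repeats even slightly
    print(f"2-bit packed genome: ~{raw_mib:.0f} MiB")
    # ~30 MiB raw; gzip gets it down to e-mail-attachment territory
    print(f"SNP-only diff: ~{diff_mib:.0f} MiB")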
Re:Passing this data back to the scientist (Score:4, Insightful)
I suppose it's worth noting that the intermediate (raw) data sets can get pretty large. They are actually getting larger as the trend goes toward shorter, less informative "reads" that require more of them to recover the connective information and to recover from errors and duplications. However, that's a trend that has a stopping point. While more reads are better, at some point there is almost no added value from additional reads. So at that point, that's the maximum amount of data you need to collect; it won't ever increase. Meanwhile, hard drive and network speeds will go up by factors of ten.
Thus the storage issues here are well tolerated at present and will soon become trivial.
Re: (Score:2)
This actually suggests that perhaps we should start transmitting into space, or carrying on spacecraft, the sequences of all the genes ever sequenced, even the ones hauled out of the ocean that we don't know what organism they belong to. You send that, plus the molecular composition of DNA, and the molecular structure of the ribosome and tRNA.
While there's more to a cell than just that, it's well known that in vitro you can get transcription of the DNA from just that. It won't be too long I suspect before you could co
Re: (Score:2)
yes, let's give those aliens something to experiment on, so they can figure out what bugs to send our way to exterminate us :)
Re: (Score:2)
This actually suggests that perhaps we should start transmitting into space
In an infinite universe, our DNA already exists out there. It's just a question of how far. Philosophically, there is nothing to be gained by pumping it out to our possible neighbours. It could only be used against us.
Re: (Score:1)
A single run on a Solexa next-gen sequencer can generate over 200 GB of data and half a million files. And that is for only 8 samples. You get into the terabyte range very quickly.
That's why data is delivered on hard drives.
Re: (Score:2)
I'm curious how you figure 200 GB of data. A Solexa 1G only produces tens of millions of reads per run, each read being about 36 bases.
Re: (Score:1)
There are raw intensity files which are used for base calling. The output of the base calling is used to generate alignments, quality scores, etc. You don't just get the short reads at the end. Most people are just going to use the short reads, but that can still be 30 GB of data for a run.
Re: (Score:2)
8 × 300 × 4 × 37 × 2 = 710,400
On a GA1 those files are 2 MB each, giving you around a terabyte and a half of primary data to process. Image analysis processes those files into "intensity files". Those are further processed into corrected intensities, t
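Putting that arithmetic into a quick script (what each factor means is my guess; lane/tile/cycle counts vary by run):

    # Reconstruction of the file-count arithmetic above. The
    # interpretation of each factor is an assumption, not gospel.
    lanes = 8        # flowcell lanes
    tiles = 300      # imaged tiles per lane
    channels = 4     # one image per base: A, C, G, T
    cycles = 37      # sequencing cycles (one per read position)
    passes = 2       # the final factor in the post; hypothetical label

    images = lanes * tiles * channels * cycles * passes
    terabytes = images * 2 / 1024**2   # at ~2 MB per image file

    print(f"{images:,} image files, ~{terabytes:.2f} TB per run")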
DNA GATC (Score:5, Funny)
Functions that don't do anything, no comments, worst piece of code ever!
I say we fork and refactor the entire project.
Re:DNA GATC (Score:5, Interesting)
'I say we fork and refactor the entire project.'
You mean like this?:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=16729053 [nih.gov]
How do you know it's NOT comments? (Score:2)
Functions that don't do anything, no comments, worst piece of code ever!
Most of it doesn't code proteins or any of the other things that have been reverse-engineered so far. How do you know it's NOT comments?
(And if terrestrial life was engineered and it IS comments, do they qualify as "holy writ"?)
Re: (Score:2)
There was some SF book I read where it was explained that the comments were "made by a demo version of creature editor" and that was the reason humans die after 100 years. Some hacker then found a way to reset the demo counter and thus make people live forever.
Re: (Score:2)
Like this girl [go.com]?
Re: (Score:2)
Bullshit. The average Western life expectancy is over 80 nowadays, and even third-world countries manage 60 years; and that average is only that low because lots of people die young (cancer, car accidents and so on).
Re: (Score:1)
There was some SF book I read where it was explained that the comments were "made by a demo version of creature editor" and that was the reason humans die after 100 years. Some hacker then found a way to reset the demo counter and thus make people live forever.
Hey, what if that's how DNA works? Would that not be like awesome and stuff?
Re: (Score:2)
How do you know it's NOT comments?
Come on, how many programmers do you know that write comments, meaningful or not? I personally have a massive descriptive dialogue running down the side; "real" programmers have told me that is excessive. Looking at their code I find one comment every 20 to 50 lines, and descriptive identifiers like i, x or y. The genome will be just like that. (Also, any big project ends up with lots of dead code. Yes, I know the compiler identifies that, but ...)
Re: (Score:2)
Come on, how many programmers do you know that write comments, meaningful or not?
Plenty. And the ones that do tend to have more functional programs, too. B-)
(My own code is heavily commented - to the point of providing a second full description of the design. And a colleague once said I'm the only person he'd trust to program his pacemaker. B-) )
Re: (Score:1)
I would like to announce publicly that my genome is released under the GPL
So you'll only allow your children to mate with other GPL'ed people?
Re: (Score:2)
At least it's backed up well. 3 backups of almost everything ain't bad.
Two strands on each chromosome... I'm probably in the wrong crowd of nerds...
Re: (Score:1)
You thought God can't spell "job security"? Mind you, he's omnipotent!
and also dead! So go know.
Re: (Score:2)
Actually it's just the arrogance of some scientists, who later found out that all those parts that seemingly did not do anything were in fact just as relevant. Just in a different way. Whoops!
Re: (Score:2)
Another problem they never mention is artifacts from the chemical protocol; just the other day we found a very unusual anomaly indicating that the first 1/3 of all our reads was absolute crap (usually only the last few bases are unreliable); it turned out our slight modification of the Illumina protocol, to tailor it to studying epigenomic effects, had quite large effects on the sequencing reactions later on.
Do you have any more details about this? I'm working on solexa sequencing of ChIP DNA with (modified
Re:DNA GATC ... G-GNO-ME (Score:1)
Splice too much of that bad, useless, convoluted code into a "new" human and we might end up with a G-Gnome or GNOME (Gratuitous, Nascent, Ogreous, Mechanised Entity). Call it... "G-UNIT", and give it a uniform and a mission. Or, give it a script and a part and call it Smeegul/Smigel...
Re:Here's what I want to know... (Score:4, Informative)
Typically they sequence every base at least 30 times.
Re: (Score:2)
It can get even more complicated too: if you have 10x coverage of a position and 9 reads say T while 1 says G, it may be an allelic variation. There's a real chance this'll happen randomly instead of the 5 Ts and 5 Gs you'd expect.
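Under the simplest model the odds are easy to compute exactly (a sketch, assuming unbiased sampling of the two alleles and no read errors; real variant callers use error-aware versions of this):

    # Exact odds for 10 reads at a heterozygous site, each read drawing
    # either allele with probability 1/2 and no sequencing error.
    from math import comb

    n = 10
    p_9_1 = 2 * comb(n, 9) / 2**n  # a 9:1 split, in either direction
    p_5_5 = comb(n, 5) / 2**n      # the "expected" even split

    print(f"P(9:1 split) = {p_9_1:.3f}")  # ~0.020
    print(f"P(5:5 split) = {p_5_5:.3f}")  # ~0.246, itself not that likely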
Re: (Score:3, Interesting)
"Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?"
Not [wikipedia.org] necessarily [wikipedia.org]. ;-)
Re: (Score:1)
"Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?"
;-)
Not [wikipedia.org] necessarily [wikipedia.org].
[Reference needed]
Re: (Score:2)
Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?
They should be. An individual's genome does not change over time. Gene expression can change, which can itself lead to significant problems such as cancer.
Re: (Score:1)
The genome sure as hell changes, with lots of mutations happening all the time in probably every cell of our body. The usual cause of cancer is certain genetic mutations getting beyond the scope of the body's mechanisms for dealing with them. That in turn causes your gene expression to change, since it's also, to a large extent, controlled by the genome. Cancer caused purely, or mostly, by changes in gene expression is apparently also possible; however, it's not the only cause of cancer.
Re: (Score:2)
The genome sure as hell changes
Not necessarily. The genome refers specifically to the genes encoded by DNA; mutations can also occur in the non-coding regions. Indeed the non-coding regions are often the most critical for gene expression.
Hence a non-genomic mutation can have a profound effect on gene expression.
lots of mutations happening all the time in probably every cell of our body
Also not necessarily true. For example, a non-dividing cell has no reason to duplicate its own genome, hence it has almost no chance to acquire mutations.
That in turn causes your gene expression to change, since it's also, to a large extent, controlled by the genome.
As I already described, much of gene expression is regulated by non c
Re: (Score:2)
The genome refers specifically to the genes encoded by DNA;
Genome (gene + [chromos]ome) refers to all the genetic material, coding and non-coding, gene or not. It has meant that for the past 90 years (since before non-coding regions were even thought of, I'm guessing), and last I checked it hasn't been redefined. You have very nicely explained why having it refer to only the coding regions would be stupid. If I'm wrong then I'd love to know, but you'll need to provide me a reference.
The definition of a gene on the other hand may be changing to include both coding and non-coding regions
Re: (Score:2)
Just to add, so we're all clear on definitions: as I understand it we're talking about these regions of DNA here:
a) rna/protein coding region
b) transcribed non-coding regions (introns)
c) regulatory region
d) unknown/junk/other non-coding region
I suspect there's some additional stuff I missed but it's been a while since I cared too much about this.
In my molecular biology classes "gene" was used to refer to a, b and c as they relate to a protein. To be honest that someone would t
Re: (Score:2)
The Non-Homologous End Joining (NHEJ) DNA double strand break repair process can produce mutagenic deletions (and sometimes insertions) in the DNA sequence. Both the Werner's Syndrome (WRN) and Artemis (DCLRE1) proteins involved in that process have exonuclease activity in order to process the DNA ends into forms which can be reunited. The Homologous Recombination (HR) pathway, which is more active during cell replication, is more likely to produce "gene conversion" which can involve copying of formerly m
Re: (Score:2)
I would suggest that you spend some time studying the topic in more detail before you make comments on /.
At all the genome conferences I've been to the "genome" includes everything -- the chromosome number and architecture, the coding, regulatory & non-coding regions (tRNA, rRNA, miRNA/siRNA, telomere length, etc.). But the non-coding, highly variable parts of the genome can be considered part of the "big picture" because the amount of *really* junk DNA may function as a free radical "sink" which prot
Re: (Score:2)
I would suggest that you spend some time studying the topic in more detail before you make comments
Starting your response by insulting the other person? I ordinarily wouldn't bother responding, but since you took the time to provide a peer-reviewed article as a reference I'll give you a chance.
The DNA composition and ultrastructure may also affect things like gene expression (DNA unwinding temperature, variable access to genes, etc.).
Actually if you had read what I said earlier about gene regulation you would have seen that I already said that. Non-coding regions are where transcription factors for gene expression often bind.
If you knew about the 5+ types of DNA repair (BER, NER, MMR, HR, NHEJ) involving 150+ proteins or had some knowledge
You could be less arrogant and presumptuous in your statements. You seem to have taken what you wanted to see in my
Re: (Score:1)
Except cells do undergo the occasional survivable mutation, and then there are the people that integrated what would have been a twin, and so on.
Re: (Score:2)
Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?
You're talking about different individuals? There will be differences, yes, but most of that difference should be in non-coding regions. The actual regions making proteins should be nearly identical. I only work with a few DNA sequences that code for proteins, so that's all I'd be interested in, but there are other applications in medicine for which the variation in non-coding regions would be important.
Re: (Score:2)
Each person has a different sequence, and the first time around they sequenced just one of the billions of "human genomes". Doing different people could help find what makes one person different from another and, on the other hand, what makes us similar. :-)
How about storing it in analog format? (Score:5, Funny)
Just store all that data as a chemical compound. Maybe a nucleic acid of some kind? Using two long polymers made of sugars and phosphates? I bet the whole thing could be squeezed into something smaller than the head of a pin!
Re: (Score:1, Funny)
I bet the whole thing could be squeezed into something smaller than the head of a pin!
That's what she said!
DNA is digital (Score:2, Informative)
Re: (Score:2)
Plus histones, methylation, imprinting, a few thousand proteins, and a few pieces of RNA to bootstrap the transcription/translation.
Re: (Score:1)
You're using a very liberal definition of DNA.
I'm telling you, liberals are destroying this great country of ours.
Re: (Score:1)
I bet the whole thing could be squeezed into something smaller than the head of a pin!
or, indeed, smaller than the point of a pin
Money well spent (Score:5, Insightful)
We pissed away $3 billion dollars and 13 years of time, when we could have waited a few more years and got it done in a week, and much, much cheaper. What a waste of time and money that was....
I know I'm being trolled, but you're an idiot. It's pretty obvious that the ability to sequence the genome in a week could only result from techniques developed and information gathered in the original Human Genome Project.
Re: (Score:2)
It doesn't really look like a troll, more a facetious back-handed compliment.
Re: (Score:2)
We pissed away $3 billion dollars and 13 years of time, when we could have waited ...
It's pretty obvious that the ability to sequence the genome in a week could only result from techniques developed and information gathered in the original Human Genome Project.
But that doesn't mean you needed to spend $3b! Maybe only $1b was spent on the vital R&D, and the other $2b spent on building the massively parallel infrastructure and processes to implement it at full scale. Plus patent lawyers.
Re: (Score:2)
The problem with any real-world project is that you can't tell a priori what the "vital" part is and what isn't. If that were the case, we as humans wouldn't waste time on trial and error, and it would instead be trial and success.
Re:We pissed away $3 billion dollars (Score:5, Insightful)
What's funny is that there are actually people who think like that. Apparently if we just sit around and wait, things will get better. I call this the dark side of the "invisible hand" of the market: because it is invisible, people forget how it comes about. In order to get improvement in technology you need a market for that technology. And, typically, you need some loss-leader to create the market in the first place. Government funding serves this purpose well.
Re: (Score:2)
What's funny is that there are actually people who think like that. Apparently if we just sit around and wait, things will get better. I call this the dark side of the "invisible hand" of the market: because it is invisible, people forget how it comes about. In order to get improvement in technology you need a market for that technology. And, typically, you need some loss-leader to create the market in the first place. Government funding serves this purpose well.
The sad thing is that this seems to be pretty much par for the course. If only we wait just a little while and skip all those annoying intermediate steps, we will soon have fantastically good rockets / fusion reactors / whatever else without having to pay anything...
Re: (Score:1)
Well, EENterestingly, that's pretty much what people are saying when they complain that early paleontologists ruined priceless artifacts.
You learn as you go, like when you're learning to play Ghosts 'n' Goblins and you keep getting killed by the red gargoyle, but then you eventually learn that you have to jump away from him as he swoops and fire frantically towards him. I know other people have made similar responses, but I only understand things in terms of analogies. Particularly ones related to throwing
Re: (Score:2)
What does "sequence a genome" actually mean. The name "sequence" suggests that it has something to do with the "order" of something. Your post makes it sound like sequencing is something done before the computer gets ahold of the data. Can you explain for us genetics laypersons what the heck "sequencing" is? Tnx.
Very simply: Your DNA is stored in chromosomes. Each chromosome contains DNA in tight bundles with lots of weird secondary and tertiary structure. Suppose that you took all the chromosomes from o
Re: (Score:2)
If it has taken 13 years until recently to properly sequence the genome of a single human, how has it been possible to do DNA "fingerprinting" e.g. for crime investigations? Is the actual sequencing not required there?
Re: (Score:2, Informative)
Fingerprinting doesn't rely on DNA sequencing, but does rely on the DNA sequence being different between people. Everyone's DNA contains subtle differences (particularly in the non-coding DNA regions). These differences can be exploited by various laboratory techniques to produce small pieces of DNA which will be of different sizes because of these differences. When these fragments of DNA are run down a suitable gel (usually agarose, a substance derived from seaweed) under an electric current the fragments
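The comparison step at the end is conceptually simple. A toy sketch, with made-up locus names and repeat counts:

    # Toy DNA-fingerprint comparison: a "profile" is the pair of allele
    # sizes (repeat counts) at a few short-tandem-repeat loci. All
    # names and numbers are invented for illustration.
    crime_scene = {"locus_A": (12, 14), "locus_B": (9, 9), "locus_C": (15, 17)}
    suspect     = {"locus_A": (14, 12), "locus_B": (9, 9), "locus_C": (15, 17)}

    def profiles_match(a, b):
        """Match if every locus shows the same (unordered) allele pair."""
        return all(sorted(a[locus]) == sorted(b[locus]) for locus in a)

    print("match" if profiles_match(crime_scene, suspect) else "no match")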
Data analysis a rapidly growing problem in Biology (Score:5, Informative)
One of the big problems in studying expression patterns in cancer specifically is the paucity of samples. The genetic differences between individuals (and tissues within individuals) mean there's a lot of noise underlying the "signal" of the putative cancer signatures. This is especially true because there are usually several genetic pathways by which a given tissue can become cancerous: you might only need mutations in a small subset of a long list of genes, which is difficult to spot by sheer data mining. While cancer is very common, each type of cancer is much less so; the scarcity of available samples of a given cancer type in a given stage therefore makes reaching statistical significance very difficult. There are some huge projects underway at the moment to collate all cancer labs' samples for meta-analysis, dramatically increasing the statistical power of the studies. A good example of this is the Pancreas Expression Database [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.
Re: (Score:1)
For example, when looking at duplications/expansions in cancer, an expansion of a locus results in about a 50% correlation between DNA-level change and expression-level change. Protein and gene expression levels correlate 50 to 60% of the time (or less, depending on whose data you look at). So, being gracious and assuming a 60% correlation at the two levels, you are already belo
Re: (Score:1)
A good example of this is the Pancreas Expression Database [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.
Kim Jong-il [yahoo.com] will be ecstatic to hear that. Dear Leader can't very well put the Grim Reaper into political prison....
Re:Data analysis a rapidly growing problem in Biol (Score:2)
The vast majority of genes only have effects when translated into protein
That depends on your definition. If you define a gene as "stretch of DNA that is translated into protein," which until fairly recently was the going definition, then of course your statement is tautologically true (replacing "the vast majority of" with "all.") But if you define it as "a stretch of DNA that does something biologically interesting," then it's no longer at all clear. Given the number of regulatory elements not directly
Buttload of data (Score:2, Interesting)
Re: (Score:1)
The next thing to hit is supposed to be 3D DNA modeling, where the interactions within the genome are mapped. Meaning: if x and y are present, b will be produced, but it will only be active if x is at position #100 or position #105; if it's at another position, c will be created, etc. It differs from normal mapping because the code AND the position within the code are taken into account, so there are conditions ad
Humans have ~810.6 MiB of DNA (Score:2, Interesting)
Re: (Score:2)
'The new method sounds like they're doing a microarray or something and just storing high resolution jpegs. I could see why that would require oodles of image processing power. It does seem like an odd storage format for what's essentially linear data.'
There's a good summary of the technology here:
http://seqanswers.com/forums/showthread.php?t=21 [seqanswers.com]
Millions of short sequences are analysed in a massively parallel way, and you need to take a new high resolution image for every cycle to see which 'polonies' the ne
Re: (Score:2)
So, what's going on here? Are the file formats used to store this data *that* bloated?
<genome species="human">... ;-)
I also manage a Next-gen Sequencing Machine (Score:3, Interesting)
Next-gen sequencing eats up huge amounts of space. Every run on our Illumina Genome Analyzer II machine takes up 4 terabytes of intermediate data, most of which comes from the something like 100,000+ 20 MB bitmap picture files taken from the flowcells. That much data is an assload of work to process. Just today I got a little lazy with my Perl programming and let the program go unsupervised... and it ate up 32 GB of RAM and froze up the server. It took Red Hat 3 full hours to decide it had had enough of the swapping and kill the process.
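(The standard fix for that failure mode is to stream the file record by record instead of slurping it all into memory. A minimal sketch, in Python rather than our Perl, with a made-up filename:

    # Stream a huge reads file one line at a time; memory use stays
    # flat no matter how big the run is.
    from collections import Counter

    gc_histogram = Counter()
    with open("run042_reads.txt") as reads:  # hypothetical: one read per line
        for line in reads:
            seq = line.strip()
            if not seq:
                continue
            gc = sum(base in "GC" for base in seq)
            gc_histogram[round(10 * gc / len(seq))] += 1  # GC% bucket, 0-10

    print(gc_histogram)

The same pattern works in Perl, of course; the point is never to hold the whole run in RAM.)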
For people not familiar with current-generation sequencing machines: they can scan 30-80 bp reads and use alignment programs to match the reads against species databases. The reaction/imaging takes 2 days, prep takes about a week, processing images takes another 2 days, and alignment takes about 4. The Illumina machine achieves higher throughput than the ABI ones but gives shorter reads; we get about 4 billion nt per run if we do everything right. Keep in mind, though, that the 4 billion they mention in the summary is misleading: the read coverage distribution is not uniform (i.e., you do not cover every nucleotide of the human's 3 billion nt genome). To ensure 95%+ coverage, you'd have to use 20-40 runs on the Illumina machine... in other words, about 6-10 months of non-stop work to get a reasonable degree of coverage over the entire human genome (at which point you can use programs to "assemble" the reads into a contiguous genome). WashU is very wealthy, so they have quite a few of these machines available to work at any given time.
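For the curious, the idealized version of that coverage arithmetic (uniform random reads, the Lander-Waterman model) looks like this; the ~30x depth target mentioned elsewhere in the thread is what drives the run count:

    # Idealized coverage arithmetic using the figures above. Real
    # coverage is lumpier, as the post notes, so treat this as a floor.
    from math import exp

    genome = 3e9                       # ~3 billion nt human genome
    per_run = 4e9                      # ~4 billion nt per run, per above

    depth_per_run = per_run / genome   # ~1.33x mean depth per run
    runs_for_30x = 30 / depth_per_run  # ~22 runs for ~30x mean depth

    print(f"{depth_per_run:.2f}x mean depth per run")
    print(f"~{runs_for_30x:.0f} runs for 30x, inside the 20-40 quoted")
    # Even ideally, one run leaves gaps: fraction covered at least once
    print(f"ideal single-run breadth: {1 - exp(-depth_per_run):.1%}")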
The main problem these days is that processing that much data requires a huge amount of computer know-how (writing software and algorithms, installing software, using other people's poorly documented programs) and a good understanding of statistics and algorithms, especially when it comes to efficiency. Another problem they never mention is artifacts from the chemical protocol; just the other day we found a very unusual anomaly indicating that the first 1/3 of all our reads was absolute crap (usually only the last few bases are unreliable); it turned out our slight modification of the Illumina protocol, to tailor it to studying epigenomic effects, had quite large effects on the sequencing reactions later on. Even for good reads, a lot of the bases can be suspect, so you have to do a huge amount of averaging, filtering, and statistical analysis to make sure your results/graphs are accurate.
Genome as a cause? (Score:2)
Well, how about pollution, processed food, and all that trash being the main reason we get cancer?
Cancer was not even a known disease a century ago, because nobody had it. (And if people get cancer now, well before what was the average age of death a century ago, then it can't just be because we now get older.)
But I guess there is no money in that. Right?
wow. read a book. (Score:3, Insightful)
First, kinds of cancers were known to exist a century ago. Tumors and growths were not unheard of. Most childhood cancers killed quickly and were undiagnosed as a specific disease other than "wasting away". When the average lifespan was 30-40 years, a great many other cancers were not present because people didn't live long enough to die from them.
As we cure "other" diseases, cancers become more likely causes of death. Cells fail to divide perfectly; some may go cancerous, others simply don't produce as
Re: (Score:2)
Cancer has become more common over the last hundred years or so. A huge part of that is simply the fact that we're living much longer, meaning that the odds of a given person developing cancer are much higher.
I want to copyright my dna. Then, it can't be.... (Score:2)
...used against me for anything without violating the DMCA. The act of decoding it, by some forensics lab, paternity test, or future insurance company medical-cost profiler, would become unlawful, and I'm sure the RIAA would help me with the cost of prosecuting the lawsuit.
Where's Nedry? (Score:2)
Check the vending machines !
Diff? (Score:2)
While a single human genome is a lot of information, storing thousands shouldn't add much to the requirements: one can simply store a diff against the first.
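A minimal sketch of the idea, substitutions only (real genomes also need insertions, deletions, and rearrangements, so this only shows the principle):

    # Keep one full reference; store each additional genome as a list
    # of single-base substitutions against it. Toy data for illustration.
    reference = "ACGTACGTAC"                      # stand-in for a 3 Gb sequence
    person_diff = [(2, "G", "T"), (7, "T", "C")]  # (position, ref base, alt base)

    def apply_diff(ref, variants):
        """Rebuild an individual's sequence from reference + substitutions."""
        seq = list(ref)
        for pos, expected, alt in variants:
            assert seq[pos] == expected, "diff does not match this reference"
            seq[pos] = alt
        return "".join(seq)

    print(apply_diff(reference, person_diff))     # -> ACTTACGCAC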