Biotech

The DNA Data Deluge (138 comments)

the_newsbeagle writes "Fast, cheap genetic sequencing machines have the potential to revolutionize science and medicine--but only if geneticists can figure out how to deal with the floods of data their machines are producing. That's where computer scientists can save the day. In this article from IEEE Spectrum, two computational biologists explain how they're borrowing big data solutions from companies like Google and Amazon to meet the challenge. An explanation of the scope of the problem, from the article: 'The roughly 2000 sequencing instruments in labs and hospitals around the world can collectively generate about 15 petabytes of compressed genetic data each year. To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall. And with sequencing capacity increasing at a rate of around three- to fivefold per year, next year the stack would be around 6 to 10 miles tall. At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station.'"
This discussion has been archived. No new comments can be posted.

  • by The_Wilschon ( 782534 ) on Thursday June 27, 2013 @09:10PM (#44128807) Homepage
    In high energy physics, we rolled our own big data solutions (mostly because there was no big data other than us when we did so). It turned out to be terrible.
    • You must have learned that early.

      http://en.wikipedia.org/wiki/IBM_1360 [wikipedia.org]

    • by Anonymous Coward

      I rolled my own but forgot what the results were...

    • by stox ( 131684 )

      Being the wake in front of the Bleeding Edge, HEP gets to learn all sorts of lessons before everyone else. As a result, you get to make all the mistakes that everyone else gets to learn from.

    • I'm no techie. I programmed a bit in BASIC as a kid thanks to 3-2-1 Contact, and the last thing I did of note was to put the TI calculator of a girl I liked in math class into an infinite loop printing 'I got drunk last weekend and couldn't derive' or some such. I've been running Linux because I inherited a netbook with no disc drive and couldn't get Windows to install from USB, I can't afford a new computer, and I've been reading Slashdot for years and read about USB installs here. My question is, is there any movement to use compute cycles
      • by Samantha Wright ( 1324923 ) on Thursday June 27, 2013 @10:59PM (#44129343) Homepage Journal

        I can't comment on the physics data, but in the case of the bio data that the article discusses, we honestly have no idea what to do with it. Most sequencing projects collect an enormous amount of useless information, a little like saving an image of your hard drive every time you screw up grub's boot.lst. We keep it around on the off chance that some of it might be useful in some other way eventually, although there are ongoing concerns that much of the data just won't be of high enough quality for some applications.

        On the other hand, a lot of the specialised datasets (like the ones being stored in the article) are meant as baselines, so researchers studying specific problems or populations don't have to go out and get their own information. Researchers working with such data usually have access to various clusters or supercomputers through their institutions; for example, my university gives me access to SciNet [scinethpc.ca]. There's still some vying for access when someone wants to run a really big job, but there are practical alternatives in many cases (such as GPGPU computing).

        Also, I'm pretty sure the Utah data centre is kept pretty busy with its NSA business.

      • Cycles are rarely the issue for us in HEP, and when they are, all we need is more nodes to split the problem into smaller pieces (wiki: embarrassingly parallel problem). The actual computational needs are (typically) pretty small. The main bottleneck is usually data throughput. We discard enormous amounts of data (that may or may not be useful, depending on who you ask) simply because we can't store it anywhere close to as fast as we can make it (many orders of magnitude difference between the data produc
    • by msevior ( 145103 )

      But it (mostly) works...

    • In high energy physics, we rolled our own big data solutions (mostly because there was no big data other than us when we did so). It turned out to be terrible.

      But genetic data isn't particle physics data. It makes perfect sense to roll out a custom "big data" (whatever that crap means) solution because of the very nature of the data stored (at the very least, you will want DNA-specific compression algorithms because there's huge redundancy in the data spread horizontally across the sequenced individuals).

    • by delt0r ( 999393 )
      Well, in your defense, you [high energy physics] process far more data in an hour than they are talking about producing in years.

      I shouldn't say "they." I work with next-gen DNA data on a daily basis. The main problem is that everyone in biology uses awful flat ASCII files for so many things. And databases... well, most are so badly done because they are literally built by someone reading the "SQL for Dummies" book as he goes.

      The last but not least of the problems is experimental design. Too often things
  • by Anonymous Coward

    don't store it all on DVDs, then

  • Bogus units (Score:5, Insightful)

    by Marcelo Vanzin ( 2819807 ) on Thursday June 27, 2013 @09:17PM (#44128835)
    Everybody knows we should measure the pile height in Libraries of Congress. Or VW Beetles.
  • by Anonymous Coward

    Why aren't they storing it in digital DNA format? Seems like a pretty efficient data storage format to me! A couple of grams of the stuff should suffice.

    • by Anonymous Coward
      That brings up an interesting point. I wonder how they ARE storing it? With 4 possible bases, you should only need two bits per base. So, 4 bases per byte, with no compression. I hope they aren't just writing out ASCII files or something ...
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Actually, ASCII files are the easiest to process. And since we generally use a handful of ambiguity codes, it's more like ATGCNX. Due to repetitive segments, GZIP actually works out better than your proposed 2-bit scheme. We do a lot of UNIX piping through GZIP, which is still faster than a magnetic hard drive can retrieve data.

        • Re: (Score:3, Interesting)

          by wezelboy ( 521844 )
          When I had to get the first draft of the human genome onto CD, I used 2-bit substitution and run-length encoding on repeats. gzip definitely did not cut it.
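Since this sub-thread weighs 2-bit packing against plain text piped through gzip, here is a toy Python sketch of the packing idea (purely illustrative, not from the article). It handles only unambiguous A/C/G/T; real pipelines also need ambiguity codes (N, etc.) and per-base quality scores, which is part of why text formats plus gzip remain so common.

# Toy 2-bit packing of a DNA string: 4 bases per byte, as discussed above.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte, most significant pair first."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))  # left-align a short final group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    """Invert pack(); `length` is the original base count."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

assert unpack(pack("GATTACA"), 7) == "GATTACA"

As the posters above note, generic gzip on the text form often wins in practice anyway, because it also exploits long repeats and keeps the data easy to pipe through standard tools.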
    • by the gnat ( 153162 ) on Thursday June 27, 2013 @11:30PM (#44129461)

      why aren't they storing it in digital DNA format

      Because they need to be able to read it back quickly, and error-free. Add to that, it's actually quite expensive to synthesize that much DNA; hard drives are relatively cheap by comparison.

  • by Krishnoid ( 984597 ) * on Thursday June 27, 2013 @09:39PM (#44128931) Journal

    To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

    Once that happens, they'll be able to stop storing it on DVDs and move it into the cloud.

    • by c0lo ( 1497653 )

      To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

      Once that happens, they'll be able to stop storing it on DVDs and move it into the cloud.

      And before anyone knows, we would have a space elevator within the next 5 years instead of the eternal +25 [xkcd.com].

    • Please, do continue measuring the massless sizeless thing in units of things with mass and size. It makes lots of sense.

  • by Gavin Scott ( 15916 ) on Thursday June 27, 2013 @09:42PM (#44128945)

    ...what a shitty storage medium DVDs are these days.

    A cheap 3TB disk drive burned to DVDs will produce a rather unwieldy tower of disks as well.

    G.

    • by fonske ( 1224340 )
      I remember the picture of Bruce Dickinson (of heavy metal fame) taking a big bite out of two compact discs filled like a (big) sandwich with all the things that make you fat, meant to illustrate the robustness of CDs.
      I also remember the feeling when I had to face the fact that I had lost data beyond repair on a CD "backup".
    • by delt0r ( 999393 )
      It does not sound that impressive when you say a "box of hard drives" after a year, and a whopping 5 boxes after 5 years.

      Of course it would sound more impressive if they used a stack of punched cards....
  • by Anonymous Coward on Thursday June 27, 2013 @09:46PM (#44128963)

    Publish a scientific paper stating that potential terrorists or other subversives can be identified via DNA sequencing. The NSA will then covertly collect DNA samples from the entire population, and store everyone's genetic profiles in massive databases. Government will spend the trillions of dollars necessary without question. After all, if you are against it, you want another 9/11 to happen.

  • by VortexCortex ( 1117377 ) <VortexCortex@Nos ... t-retrograde.com> on Thursday June 27, 2013 @09:58PM (#44129015)

    Bit rot is also a big problem with data. So the data has to be duplicated to keep entropy from destroying it, which means self-correcting metadata must be used. If only there were a highly compact, self-correcting, self-replicating data storage system with 1's and 0's the size of small molecules...

    My greatest fear is that when we meet the aliens, they'll laugh, stick us in a holographic projector, and gather around to watch the vintage porn encoded in our DNA.

    • by __aasqbs9791 ( 1402899 ) on Thursday June 27, 2013 @10:23PM (#44129137)

      I propose we call this new data method Data Neutral Assembly.

    • by phriot ( 2736421 )

      If only there were a highly compact, self-correcting, self-replicating data storage system with 1's and 0's the size of small molecules...

      In the future, if sequencing becomes extremely fast and cheap, it might make sense to discard sequencing data after analysis and leave DNA in its original format for storage. That said, if the colony of (bacteria/yeast/whatever you are maintaining your library in) that you happen to pick when you grow up a new batch to maintain the cell line happened to pick up a mutation in your gene of interest, you won't know until you sequence it again. I'm a graduate student in a small academic lab and if I want to "

    • by c0lo ( 1497653 )

      Bit rot is also a big problem with data.

      Take a whiff of a piece of meat after 2 weeks at room temperature and compare it with how a DVD smells after the same time.
      Complaints about bit rot accepted only after the experiment.

      • by chihowa ( 366380 )

        I know you were going for funny, but much of what you will be smelling in your experiment is from bacteria eating the protein and polysaccharides in the meat. The DNA is remarkably stable and even if some of it is fragmented, you have a massively redundant set in your pile of meat.

        We've sequenced DNA from nearly a million years ago and I regularly store DNA dried out and stuck to a piece of paper. DVDs won't last nearly that long before the dyes start to break down. For a long term archival system, we could

  • by hawguy ( 1600213 ) on Thursday June 27, 2013 @10:07PM (#44129057)

    It seems a little overly sensationalist to aggregate all the devices together when determining the storage size, just to make such a dramatic 2-mile-high tower of DVDs... If you look at them individually, it's not that much data:

    (15 x 10^15 bytes) / (2000 devices) / (10^9 bytes/GB) = 7500 GB per device per year, or 7.5 TB

    That's a stack of 4 TB hard drives 2 inches high. Or if you must use DVDs, that's a stack of about 1600 DVDs 2 meters high.
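For what it's worth, a quick Python sanity check of the arithmetic above (the 15 PB and 2000-machine figures come from the summary; 4.7 GB and 1.2 mm are standard single-layer DVD capacity and thickness):

# Back-of-the-envelope check of the per-machine numbers quoted above.
total_bytes = 15e15          # ~15 PB/year across all sequencers
machines = 2000
dvd_bytes = 4.7e9            # single-layer DVD capacity
dvd_thickness_m = 0.0012     # 1.2 mm per disc

per_machine = total_bytes / machines
print(per_machine / 1e12, "TB per machine per year")            # 7.5
print(per_machine / dvd_bytes, "DVDs per machine")              # ~1600
print(per_machine / dvd_bytes * dvd_thickness_m, "m of discs")  # ~1.9 m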

  • The LHC generates a shitton of data as well, but from what I've seen (something like this [youtube.com]) they use extremely fast integrated circuits to skim the data. Perhaps geneticists could use a similar technique.
    • pronounced shi-TAWN ?
    • by delt0r ( 999393 )
      We don't in fact produce that much data. The reason they have a lot of data they don't know what to do with is that they didn't do any experimental design in the first place. And it's cheap to keep intermediate data around that you probably don't need to keep anyway.
  • by esten ( 1024885 ) on Thursday June 27, 2013 @10:12PM (#44129089)

    Storage is not the problem. Computational power is.

    Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB. This means you could store a sequence for everybody in the world in ~132 PB, or 0.05% of total worldwide data storage (295 exabytes).

    Now the real challenge is more about having enough computational power to read and process the 3-billion-letter genetic sequence, and about designing effective algorithms to process this data.

    More info on compression of genomic sequences
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074166/ [nih.gov]

    • by Anonymous Coward

      Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB.

      That's true, but the problem is that as a good scientist you are required by most journals and universities to keep the original sequence data coming off these high-throughput sequencers (aka the FASTQ files) so that you can show your work, so to speak, if it ever comes into question. These files often contain 30-40x coverage of your 3 Gb reference sequence and even compressed are still several GB in size. Additionally, because these large-scale sequencing projects are costing millions of dollars, the NIH

    • That's your own germline DNA. But it would be cool to get the distinct sequences of all the cells in your body. Most of those cells (by count, not mass) are various microorganisms, lots in your gut, or infections that are making you sick or wearing down your immune system, or latent conditions like HIV or HPV, and you would see a few strains of precancerous/cancerous cells evolving too. Taken altogether, that would be a huge amount of DNA. But I guess a lot of the distinct genomes ar
    • by B1ackDragon ( 543470 ) on Thursday June 27, 2013 @11:27PM (#44129449)
      This is very much the case. I work as a bioinformatician at a sequencing center, and I would say we see around 50-100G of sequence data for the average run/experiment, which isn't really so bad, certainly not compared to the high energy physics crowd, and given a decent network. The trick is what we want to do with the data: some of the processes are embarrassingly parallel, but many algorithms don't lend themselves to that sort of thing. We have a few 1 TB RAM machines, and even those are limiting in some cases. Many of the problems are NP-hard, and even for the heuristics we'd ideally use superlinear algorithms, but we can't have that either; it's near-linear time (and memory) or bust, which sucks.

      I'm actually really looking forward to a vast reduction in dataset size and cost in the life sciences, so we can make use of and design better algorithmic methods and get back to answering questions. That's up to the engineers designing the sequencing machines, though.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      A single finished genome is not the problem. It is the raw data.

      The problem is that any time you sequence a new individual's genome for a species that already has a genome assembly, you need a minimum of 5x coverage across the genome to reliably find variation. Because of variation in coverage, that means you may have to shoot for >20x coverage to find all the variation. The problem is more complex when you are trying to de novo assemble a genome for a species that does NOT have a genome assembly. In this

    • by Kjella ( 173770 )

      Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB. This means you could store a sequence for everybody in the world in ~132 PB, or 0.05% of total worldwide data storage (295 exabytes).

      For a single delta to a reference, yes, but there's probably lots of redundancy in the deltas. If you have a tree/set of variations (base human + "typical" Asian + "typical" Japanese + "typical" Okinawan + encoding the diff), you can probably bring the world estimate down by a few orders of magnitude, depending on how much is systematic and how much is unique to the individual.

    • Does it say anything about whether this approach also handles internal repeats?
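A minimal Python sketch of the "store only the differences from a reference" idea discussed in this sub-thread. It assumes the sample is already aligned base-for-base to the reference with no insertions or deletions, which real data never guarantees; production formats such as CRAM and VCF (and the paper linked above) handle the general case.

# Reference-based delta storage, reduced to its simplest possible form.
def diff(reference: str, sample: str) -> list[tuple[int, str]]:
    """Record (position, sample_base) wherever the sample differs."""
    return [(i, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

def restore(reference: str, delta: list[tuple[int, str]]) -> str:
    """Rebuild the sample from the reference plus its recorded differences."""
    seq = list(reference)
    for pos, base in delta:
        seq[pos] = base
    return "".join(seq)

ref = "ACGTACGTAC"
smp = "ACGAACGTTC"
delta = diff(ref, smp)            # [(3, 'A'), (8, 'T')]
assert restore(ref, delta) == smp
print(f"{len(delta)} differences stored instead of {len(smp)} bases")

# Rough scale: a few million single-base differences per person at a few
# bytes each lands in the tens of MB the parent comment describes.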

  • by plopez ( 54068 ) on Thursday June 27, 2013 @10:15PM (#44129097) Journal

    They should use a NoSQL multi-shard vertically integrated stack with a RESTful Rails-driven in-memory virtual multi-parallel JPython-enabled solution.

    Bingo!

    • by Tablizer ( 95088 )

      They should use a NoSQL multi-shard vertically integrated stack with a RESTful Rails-driven in-memory virtual multi-parallel JPython-enabled solution.

      Brog, that tech stack is like soooo month-ago

    • Wait, I've heard this one before [howfuckedi...tabase.com].
    • They should use a NoSQL multi-shard vertically integrated stack with a RESTful Rails-driven in-memory virtual multi-parallel JPython-enabled solution.

      Sounds like the technological equivalent of the human body => sounds about right!

    • Come on, those are yesterday's buzzwords! You can do better, I'm sure.

  • "...At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station."

    And another 10 years after that, the number of DVDs used will have almost reached the number of AOL CDs sitting in landfills.

    Sorry, couldn't help myself with the use of such an absurd metric. Not like we haven't moved on to other forms of storage the size of a human thumbnail that offer 15x the density of a DVD...

  • If there were only some way to store the information encoded in DNA in a molecular level storage device... oh wait, face palm.

  • by WaywardGeek ( 1480513 ) on Thursday June 27, 2013 @10:35PM (#44129211) Journal

    Please... entire DNA genomes are tiny... on the order of 1Gb, with no compression. Taking into account the huge similarities to published genomes, we can compress that by at least 1000X. What they are talking about is the huge amount of data spit out by the sequencing machines in order to determine your genome. Once determined, it's tiny.

    That said, what I need is the raw machine data. I'm having to do my own little exome research project. My family has a very rare form of X-linked color blindness that is most likely caused by a single gene defect on our X chromosome. It's no big deal, but now I'm losing central vision, with symptoms most similar to late-onset Stargardt's disease. My UNC ophthalmologist beat the experts at Johns Hopkins and Jacksonville's hospital and made the correct call, directly refuting the other doctor's diagnosis of Stargardt's. She thought I had something else and that my DNA would prove it. She gave me the opportunity to have my exome sequenced, and she was right.

    So, I've got something pretty horrible, and my ophthalmologist thinks it's most likely related to my unusual form of color blindness. My daughter carries this gene, as do my cousin and one of her sons. Gene research to the rescue?!? Unfortunately... no. There are simply too few people like us. So... being a Slashdot sort of geek who refuses to give up, I'm running my own study. Actually, the UNC researchers wanted to work with me... all I'd have to do is bring my extended family from California to Chapel Hill a couple of times over a couple of years and have them see doctors at UNC. There's simply no way I could make that happen.

    Innovative companies to the rescue... This morning, Axeq, a company headquartered in MD, received my family's DNA for exome sequencing at their Korean lab. They ran an exome sequencing special in April: $600 per exome, with a minimum order size of six. They have been great to work with, and accepted my order for only four. Bioserve, also in MD, did the DNA extraction from whole blood, and they have been even more helpful. The blood extraction labs were also incredibly helpful, once we found the right places (very emphatically not Labcorp or Quest Diagnostics). The Stanford clinic lab manager was unbelievably helpful, and in LA, the lab director at the San Antonio Hospital lab went way overboard. So far, I have to give Axeq and Bioserve five stars out of five, and the blood draw labs deserve a six.

    Assuming I get what I'm expecting, I'll get a library of matched genes, and also all the raw machine output data, for four relatives. The output data is what I really need, since our particular mutation is not currently in the gene database. Once I get all the data, I'll need to do a bit of coding to see if I can identify the mutation. Unfortunately, there are several ways this could turn out to be impossible. For example, "copy number variations", or CNVs, can't be detected with current technology if they go on for more than a few hundred base pairs. Ah... the life of a geek. This is yet another field I have to get familiar with...

    • CNVs actually can be detected if you have enough read depth; it's just that most assemblers are too stupid (or, in computer science terms, "algorithmically beautiful") to account for them. SAMTools can generate a coverage/pileup graph without too much hassle, and it should be obvious where significant differences in copy number occur.

      (Also, the human genome is about 3.1 gigabases, so about 3.1 GB in FASTA format. De novo assemblies will tend to be smaller because they can't deal with duplications.)

      • I agree, CNVs are really easy to detect if you have the read depth. I've been using the samtools pileup output to show CNVs in my study organism. However, to make the results mean anything to most people, I've got to do a few more steps of processing to get all that data in a nice visual format.

        If you don't have the read depth, you lose the ability to discriminate small CNVs from noise. Large CNVs, such as for whole chromosomes, are readily observed even in datasets with minimal coverage.

        • Thanks, guys, for the CNV info. I'm doing only 30-deep sequencing, but I will get 3 exome sequences all probably having the same defect on the X chromosome. Combining the data should give me some reasonable CNV detection ability.
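For readers following along, here is a rough Python sketch of the windowed read-depth approach described in this sub-thread. It assumes samtools is on the PATH and that "sample.bam" (a placeholder name) is a coordinate-sorted BAM; the window size and fold-change cutoffs are illustrative, not calibrated, and real CNV callers do considerably more (GC correction, mappability filtering, segmentation).

# Flag windows whose mean read depth deviates strongly from the sample mean,
# a crude indicator of possible copy-number variation.
import subprocess
from collections import defaultdict

WINDOW = 10_000          # window size in bp
GAIN, LOSS = 1.5, 0.5    # fold-change cutoffs vs. the sample-wide mean depth

def window_depths(bam="sample.bam"):
    """Yield (chrom, window_start, mean_depth) from `samtools depth` output."""
    sums, counts = defaultdict(int), defaultdict(int)
    proc = subprocess.Popen(["samtools", "depth", "-a", bam],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        chrom, pos, depth = line.split("\t")
        key = (chrom, (int(pos) - 1) // WINDOW * WINDOW)
        sums[key] += int(depth)
        counts[key] += 1
    proc.wait()
    for key in sorted(sums):
        yield key[0], key[1], sums[key] / counts[key]

if __name__ == "__main__":
    windows = list(window_depths())
    overall = sum(d for _, _, d in windows) / len(windows)
    for chrom, start, depth in windows:
        ratio = depth / overall
        if ratio >= GAIN or ratio <= LOSS:
            print(f"{chrom}:{start}-{start + WINDOW}\tdepth={depth:.1f}\tratio={ratio:.2f}")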

    • Dang, this is cool. In a post above I mentioned I work as a bioinformatician at a sequencing center--if you need guidance or advice, contact me and I'll see if I can point you in the right direction!
      • Thanks! I certainly will need some guidance, so if you don't mind, I'll ping you when I get the data. Same thing for the guy below who also offered to help.

        • by heuermh ( 948407 )

          I also do this for my day job, from the side of downstream variant analysis and population genetics, so please feel free to contact me as well.

          • Will do! I wasn't expecting so much generosity in reply to my post, but thanks! I'm no dummy, but all I have is a Wikipedia level of knowledge of genetics, so any help I can get will be very much appreciated.

    • As this sort of thing is my day job [stationxinc.com], I find this sort of thing really cool, and I'd be happy to help if I can. I'd recommend looking into snpEff [sourceforge.net], it's pretty straightforward to use, and is available on SourceForge. (Feel free to track me down and message me, I think we overlapped at Cal [berkeley.edu].)
      • Thanks! I will check out snpEff. I certainly will need some help, so if you don't mind, I will contact you when I get the data. Same thing for the guy above who offered to help.

    • >I need is raw machine data

      Too bad genome centers disagree with you (I, au contraire, agree with you). We need raw NMR data for structures as well.

  • To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

    NO. This does not put anything "into perspective"; it just means "a lot of data" to the average Joe.

    To put it into useful perspective, we should compare it with the large data volumes encountered in other sciences, such as the 25 PB per year from the LHC. And that's after aggressively discarding collisions that don't look promising in the first pass; it would be orders of magnitude bigger otherwise.

    But now just 15PB per year doesn't look that newsworthy, eh?

    • Well, if you really need to have that kind of contest...

      The data files being discussed are text files generated as summaries of the raw sensor data from the sequencing machine. In the case of Illumina systems, the raw data consists of a huge high-resolution image; different colours in the image are interpreted as different nucleotides, and each pixel is interpreted as the location of a short fragment of DNA. (Think embarrassingly parallel multithreading.)

      If we were to keep and store all of this raw data, th

      • by khchung ( 462899 )

        Not really trying to turn it into a contest, just "to put this into perspective". More or less, the point is that other science projects have been dealing with similar data volumes for a few years already; if there is anything newsworthy about this "DNA Data Deluge", it had better be something more than just the data volume.

        • Even within biology this is pretty stale news. I'm pretty sure this story is technically a shill piece for the products mentioned: Hadoop and Amazon EC2.
  • "At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station."

    Use more than one stack. You're welcome.

  • If you have the genomes of your parents and your own genome, yours is about 70 new spot mutations, about 60 crossovers, and you have to specify who your parents were. That's about 600 bytes of new information per person. You could store the genomes of the entire human race in a couple of terabytes if you knew the family trees well enough. I tried to nail down the statistics for that in http://burtleburtle.net/bob/future/geninfo.html [burtleburtle.net] .
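A quick back-of-the-envelope in Python for the ~600-byte figure above (the mutation and crossover counts come from the comment and the linked page; the per-item byte costs are rough assumptions of mine):

# Rough size of one person's genome expressed as a delta against the parents.
new_mutations = 70        # de novo point mutations per person (from the comment)
crossovers = 60           # recombination breakpoints per person (from the comment)

bytes_per_mutation = 5    # assumed: ~32-bit genome position + new base
bytes_per_crossover = 5   # assumed: chromosome id + ~32-bit breakpoint position
parent_ids = 2 * 8        # assumed: two 64-bit identifiers for the parents

total = (new_mutations * bytes_per_mutation
         + crossovers * bytes_per_crossover
         + parent_ids)
print(total, "bytes per person, roughly")   # ~666, in line with "about 600 bytes"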

  • by Charles Jo ( 2946783 ) on Friday June 28, 2013 @02:10AM (#44129955)
    Scientists who viewed this sequence also viewed these sequences...
  • The amount of biological data doubles every 9 months; processing power doubles every 18 months. We have already reached the crossover point.
  • Why not have a reference genome? For everyone else, simply store deviations from the reference. Seems a possibility.
  • The sequence read archives (such as the one hosted by NCBI), as repositories for this sequencing data, use "compression by reference," a highly efficient way to compress and store a lot of the data. The raw data that comes off these sequencers is often >99% homologous to the reference genome (human, etc.), so the most efficient way to compress and store it is to record only what is different between the sequence output and the reference genome.

  • ... to store all this data.

    I suggest storing it in molecular form, as pairs of four different bases (guanine, thymine, adenine, cytosine) combined in an aesthetically pleasing, double-helical molecule!

  • To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

    This doesn't put it into perspective at all. What is a DVD and why would I put data on it?

    • For God's sake! Give it to me in a useful unit that is normally provided by journalists... Give it to me in Library of Congresses!

  • It's not like they can use the data, it all is or soon will be patented! Even the patent holders are SOL because anything their bit of patented gene interacts with is patented by someone else. What a lovely system we have!

  • In the future, DVDs will be made much thinner and won't stack up as high.

  • The NSA could get to do something nice. They already know how to deal with mountains of useless data. Scanning for cancer and potential terrorists? Mutations of genes and behaviour? The possibilities are endless.
