Biotech Science

700,000-Year-Old Horse Becomes Oldest Creature With Sequenced Genome 69

sciencehabit writes "Scientists have sequenced the oldest genome to date—and shaken up the horse family tree in the process. Ancient DNA derived from a horse fossil that's between 560,000 and 780,000 years old suggests that all living equids—members of the family that includes horses, donkeys, and zebras—shared a common ancestor that lived at least 4 million years ago, approximately 2 million years earlier than most previous estimates. The discovery offers new insights into equine evolution and raises the prospect of recovering and exploring older DNA than previously thought possible."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by plover (150551) on Thursday June 27, 2013 @12:10AM (#44119671) Homepage Journal

    We may never get a dinosaur theme park, but we've got a decent shot at a carousel full of ancient horses, saber-tooth tigers, and wooly mammoths. What could possibly go wrong?

  • by danbert8 (1024253) on Thursday June 27, 2013 @12:11AM (#44119677)

    I'm glad we now understand the genetic base of the jackass. Maybe with this knowledge medical science can remove those genes from human DNA.

    • by Anonymous Coward

      What? And leave an empty shell?

  • by cold fjord (826450) on Thursday June 27, 2013 @12:11AM (#44119679)

    I've read some articles on attempts to extract and sequence old DNA in this sort of range, and I'm surprised they've been able to do this given the half-life of DNA.

    I wonder how many other researchers are making claims of extracting DNA this old? It seems improbable, but maybe the state of the art has greatly improved.

    DNA has a 521-year half-life [nature.com]

    The team predicts that even in a bone at an ideal preservation temperature of 5 C, effectively every bond would be destroyed after a maximum of 6.8 million years. The DNA would cease to be readable much earlier — perhaps after roughly 1.5 million years, when the remaining strands would be too short to give meaningful information.
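A back-of-the-envelope sketch of what that half-life implies (not from the article: the per-bond rate here is derived from the quoted 521-year half-life of a ~242 bp fragment, and bonds are assumed to break independently):

```python
import math

# Per-bond decay rate, derived from the quoted 521-year half-life of a
# ~242 bp fragment: the fragment survives only while all 241 internal
# bonds survive, so exp(-k * 241 * 521) = 1/2.
k = math.log(2) / (241 * 521)  # ~5.5e-6 breaks per bond per year

def expected_fragment_length(t_years, rate=k):
    """Expected run of consecutive intact bonds after t years.

    Each bond survives with probability p = exp(-rate * t); runs of
    intact bonds are then geometric with mean p / (1 - p).
    """
    p = math.exp(-rate * t_years)
    return p / (1.0 - p)

print(expected_fragment_length(521))      # ~347 intact bonds
print(expected_fragment_length(700_000))  # ~0.02 -- essentially nothing
```

At this (warm-burial) rate a 700,000-year-old bone would yield nothing readable, which is why cold preservation — the horse fossil came from permafrost — matters so much; colder burial means a much smaller rate constant.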

    • by Anonymous Coward

      If that research states that DNA can possibly be recovered if it is less than 6.8 million years old, why is 10% of that time a surprise? Moreover, what you quoted did not specify the sample size. Assuming their DNA half-life figure is correct, larger samples will be able to produce meaningful information at dramatically older ages. The 6.8-million-year limit may only be relevant for a specific amount of genetic material from a specific source. What if there are more strands in each cell, more cells per gram, or

      • by cold fjord (826450) on Thursday June 27, 2013 @01:12AM (#44119855)

        You have to not only recover it but read it as well. And the fine article from the post indicates they were able to actually conduct genetic analysis on it. That pulls the maximum viability date in quite a bit. The jump in age over previous finds from which they've been able to extract viable information is pretty significant: from 130,000 years to between 560,000 and 780,000. And note that the figures from the story I quoted assume ideal preservation. Maybe it is all correct, but it seems a bit of a longshot to pull that information from such an old bone. I suppose they could just have been lucky.

        • by julesh (229690)

          You have to not only recover it, but to read it as well. And the fine article from the post indicates they were able to actually conduct genetic analysis on it. That pulls the maximum viability date in quite a bit.

          Which is why the article you cited goes on to state "[t]he DNA would cease to be readable much earlier — perhaps after roughly 1.5 million years, when the remaining strands would be too short to give meaningful information." Given that 1.5 Myr figure, why is 700 kyr surprising? It's not like they're expecting a technological breakthrough to make that 1.5 Myr figure possible: we can already sequence pretty much any single DNA strand we want, and reconstruction from short fragments is also an existing

        • It looks like it was a difficult accomplishment:
          They also combined DNA sequencing techniques to get maximum DNA coverage — using routine next-generation sequencing along with single-molecule sequencing, in which a machine directly reads the DNA without the need to amplify it, which can lose some DNA sequences. [1]
          Such genetic puzzle assembly generally includes multiple samples from each part of the genome, sometimes as many as five or 10. In this case, the so-called coverage was just 1.12. [2]
          "We sequenced
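For a sense of what 1.12x coverage means, the standard Lander-Waterman approximation (not from the article; it assumes reads land at random along the genome) puts the fraction of positions seen at least once at 1 - e^(-c):

```python
import math

def fraction_covered(coverage):
    """Lander-Waterman estimate: with reads scattered at random to a mean
    depth c, the expected fraction of positions covered at least once is
    1 - e^(-c)."""
    return 1.0 - math.exp(-coverage)

print(round(fraction_covered(1.12), 2))  # ~0.67 -- a third of the genome unseen
print(round(fraction_covered(5.0), 2))   # ~0.99 -- why 5-10x is the usual target
```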

      • by csirac (574795) on Thursday June 27, 2013 @02:31AM (#44120061) Homepage

        To cut a long story short, at "6.8 million years old" I assume they mean the longest read (maximum number of consecutive GATC 'letters' in a row) you're possibly going to get is one. Imagine having a pile of letters which were once arranged into the collected works of William Shakespeare: could you re-assemble the original work? No. But what if you had 4-letter fragments? You might be able to learn something about the English language, indirectly, but you probably won't be able to reverse-engineer the complete original work. Now what if you had slightly longer fragments? That would help.

        What if the garbled pile of letters/fragments actually consisted of multiple, similarly (randomly!) shredded copies of Shakespeare? Well, as long as they're randomly fragmented in different ways, then wherever we guess two fragments might join, if we have a fragment from that same region of another copy which spans that join, we can become more and more confident about forming a plausible assembly. So we can take advantage of this redundancy and randomized fragmentation to attempt recovery of the original work.

        In other words, the more degraded the DNA, the shorter the fragments and the harder it is to come up with an assembly. At some point the fragmentation might be so bad that the only way you can attempt to achieve anything is to use a relevant, well-understood reference sequence from a modern-day specimen/consensus for comparison (or clues, or to fill in the blanks)... if one exists. I'm no geneticist, but I think in those circumstances the confidence in the results starts to go from "hey, that's cool!" to "interesting" to, eventually, an artist's rendition of what an ancient genome might have looked like - drawing on long-lost cousins which are still alive today.

        Happily, re-assembling short, fragmented DNA happens to be how commodity high-speed, high-throughput, low-cost sequencing works these days [wikipedia.org] - DNA is split into small lengths, e.g. 500-ish basepairs, and then, depending on the experiment/purpose/targets etc., it's all (or partially) re-assembled by finding enough overlapping bits (hopefully beginning and ending with proprietary markers used in the splitting process), with statistical tricks to assess whether the data is sufficient, which areas are problematic in coverage/confidence, etc... and it helps enormously if you're working on an organism that's already been sequenced to death, for comparison.

        So there are many well advanced tools for coming up with contiguous DNA from a pile of short reads.

        IIRC, the other trick with ancient DNA is - first of all, extracting enough useful material to begin with, without damage. As reads get shorter, increased redundancy helps - more randomly overlapping regions can ease the task of re-assembly - but very short reads might mean that a number of different assemblages are possible. Not to mention delicate amplification methods which might increase the noise as well as the signal...
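The shredded-Shakespeare idea above can be sketched as a toy greedy overlap assembler (an illustration only: the text, fragment length, and minimum-overlap threshold are arbitrary choices, and real assemblers are far more sophisticated):

```python
import random

TEXT = "horse genome puzzle"  # toy "genome": every 5-letter substring is unique

def shred(text, k=6):
    """Every overlapping length-k fragment, shuffled to mimic a pile of reads."""
    frags = [text[i:i + k] for i in range(len(text) - k + 1)]
    random.shuffle(frags)
    return frags

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(frags, min_overlap=5):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(frags)
    while len(frags) > 1:
        best_n, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and overlap(a, b) > best_n:
                    best_n, best_i, best_j = overlap(a, b), i, j
        if best_n < min_overlap:
            break  # no join is confident enough
        merged = frags[best_i] + frags[best_j][best_n:]
        frags = [f for x, f in enumerate(frags) if x not in (best_i, best_j)]
        frags.append(merged)
    return frags

print(greedy_assemble(shred(TEXT)))  # ['horse genome puzzle']
```

Greedy merging works here because every 5-letter substring of the toy text is unique, so every confident join is correct; with repeats (the real headache in genomes), ambiguous joins appear and the redundancy from multiple shredded copies described above becomes essential.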

        • by ledow (319597)

          Kind of like the way that one photo of Jupiter through a telescope will be blurry and useless.

          But take 10,000, choose the best 5%, layer them over each other, centre them, then form an overlaid image, and you can get some STUNNING results from even a blurry, horrible, 10,000 image source.

          (For reference - google "Registax").

          I'd go for that. It doesn't seem implausible at all, and DNA is much more simple in construction than you might think - which gives fewer combinations but more tricky fitting together.
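The stacking trick is just averaging away zero-mean noise: the error of the mean shrinks as 1/sqrt(N). A minimal single-"pixel" sketch (the brightness and noise values are made up):

```python
import random
import statistics

random.seed(42)  # reproducible noise

TRUE_VALUE = 100.0  # the "real" brightness of one pixel

def noisy_frame():
    """One exposure: the true value plus zero-mean Gaussian noise (sigma = 10)."""
    return TRUE_VALUE + random.gauss(0, 10)

single = noisy_frame()
stacked = statistics.fmean(noisy_frame() for _ in range(10_000))

# A single frame is off by ~10 on average; the stack of 10,000 frames is
# off by only ~10 / sqrt(10_000) = 0.1.
print(abs(single - TRUE_VALUE), abs(stacked - TRUE_VALUE))
```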

          • by csirac (574795)

            I'd go for that. It doesn't seem implausible at all, and DNA is much more simple in construction than you might think - which gives fewer combinations but more tricky fitting together. Get enough fragments, though, and you can throw it through a computer and get something useful out of the other end.

            But that's the whole problem! Doesn't matter if you image a lonely letter 'A' on a shred of paper in 72dpi, 300dpi, 60000dpi - it's still a letter A, and you're never going to know what its neighbours were :-)

            • by ledow (319597)

              Yes, it's a little closer to, say, reassembling a shredded document. If you shred it enough, of course it's just a bundle of characters that are really hard to piece together.

              But if DNA really takes as long as stated to decompose even under ideal conditions, this find sits at roughly half the time it takes, under those conditions, to reach the point where reassembly probably isn't possible. So we aren't stuffed, even without ideal conditions. And tying in with what we know of horse DNA, we

              • by csirac (574795)

                I only mention the contamination issue because, at one of the seminars run by the Australian Centre for Ancient DNA in Adelaide, it was highlighted as a significant problem early on in their research, one which resulted in detailed and rigorous sampling and processing protocols to get any worthwhile results at all. I seem to recall that early ancient-DNA efforts had several false successes which later turned out to be contamination - it's non-trivial. Even the act of using bare hands to wash an old bone in water overwhelms

              • by csirac (574795)

                And putting huge amount of computation and DNA from the same animal through it, and we're even less stuffed. Seems to me to be a pretty damn useful technique, overall, even if it's only "statistically" correct.

                Which technique are we discussing? Next-gen contig/alignment is quite mature, as is the understanding of its limitations. Older, slower, more expensive tech is still in use for some lesser-studied critters which, for want of a better word, aren't entirely validated on the next-gen stuff, and some experi

          • by csirac (574795)
            Not to mention - imaging a planet doesn't affect the planet. Extracting DNA without contamination is a huge challenge for ancient DNA. It's hilarious how many NCBI sequences of mammal specimens turn out to be matches for fish or insects (lab assistant's lunch? Did a fly get smooshed into a vial?) etc. Even if you do successfully extract, isolate and amplify some ancient DNA, how do you know you amplified actual DNA of the specimen and not something living in it (nematode etc.)? In any case, I was just specula
    • by Gr8Apes (679165)

      I read your link - I don't know why you're surprised. The article itself states 500K years as the oldest known successful extraction and reading of DNA.

      Furthermore, all the assumptions and theories are based on the analysis of 3,000- to 8,000-year-old bird bones. The most significant factors in DNA degradation are stated to be exposure to water, followed by oxygen and micro-organism activity. Enzymes will only take you so far, and are listed as the starting point. I'd say that this particular theory will undergo

  • You can't beat a dead horse.

  • by antifoidulus (807088) on Thursday June 27, 2013 @01:23AM (#44119867) Homepage Journal
    So they sequenced a 700,000-year-old horse? In other words, a Tesco hamburger? *rimshot*
  • by K. S. Kyosuke (729550) on Thursday June 27, 2013 @03:34AM (#44120257)
    If there's DNA inside, how can that be a fossil? I always thought that this would make it a subfossil, by definition.
  • My horse is amazing.
    http://www.youtube.com/watch?v=etm3kN6VBv8 [youtube.com]
