
Study Suggests Genome Instability Hotspots

Dr. Eggman writes "Ars Technica reports on a new study suggesting not only that certain areas of the mouse genome undergo more changes than others, but that changes to those areas are better tolerated by the organism than changes elsewhere. Recently published in Nature Genetics, the study examined copy number variations in mice of the C57BL/6 strain that have been diverging for fewer than 1,000 generations, and found a surprising number of variations. While the study does not address it, Ars Technica goes on to recount suggestions that genomes have evolved to the point where they work well with evolution."
  • Genome Hotspots (Score:5, Informative)

    by mauthbaux ( 652274 ) on Monday November 05, 2007 @02:08AM (#21238413) Homepage
    It's been known for quite a while that certain sections of the genome mutate faster than others. Areas where the genes are less likely to mutate are typically referred to as 'conserved' regions, and most genome browsers will even indicate which regions they are. The UCSC genome browser is great for checking things like this; with it, you can look up genes and compare them to the coding sequences of other animals.
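    To make the 'conserved region' idea concrete, here's a toy sketch (in Python, with invented sequences -- not real genes) of the per-column conservation score that a genome browser's conservation track is based on: the fraction of aligned species sharing the most common base at each position.

```python
def conservation_scores(aligned_seqs):
    """Fraction of sequences sharing the most common base at each column."""
    n = len(aligned_seqs)
    scores = []
    for column in zip(*aligned_seqs):
        most_common = max(set(column), key=column.count)
        scores.append(column.count(most_common) / n)
    return scores

# Toy 'orthologous' sequences from three species (invented data).
alignment = [
    "ATGGCGTTT",
    "ATGGCGTAT",
    "ATGGCATTT",
]

print(conservation_scores(alignment))
```

    Columns where every species agrees score 1.0; a highly conserved gene like a hox sequence would score near 1.0 almost everywhere.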

    For very highly conserved genes such as the homeobox sequences, the degree of conservation is enormous. Nearly everything has the homeobox, or 'hox', sequence, and the sequence itself hasn't changed significantly (in comparison to most other genes). tRNA sequences likewise don't change significantly; neither do ribosomal genes. Some stuff you simply can't change without lethal (or at least highly detrimental) results.

    Other regions such as non-coding regions, and introns to a lesser extent, can be mutated significantly without any change to the phenotype of the organism. In fact, this is what a lot of DNA fingerprinting is based on - big variations in sequence lengths and other polymorphisms between individuals. These variations don't occur frequently enough within coding sequences to be of any use in identification. Rather, they check the non-coding areas and other mutational hotspots for differences. Conversely, changes in the protein-coding regions can be used to determine the relatedness between species (say, human and chimp differences, or rat and mouse) on a much longer scale.
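    As a toy illustration of the length-polymorphism idea behind DNA fingerprinting, the sketch below (Python; the motif choice and sequences are invented) counts how many times a short repeat motif occurs in a non-coding stretch -- the kind of repeat-count difference that distinguishes individuals.

```python
import re

def repeat_count(seq, motif="CA"):
    """Number of repeats in the longest uninterrupted run of `motif` in seq."""
    runs = re.findall(f"(?:{motif})+", seq)
    return max((len(run) // len(motif) for run in runs), default=0)

# Invented non-coding sequences: two individuals differing at a CA-repeat locus.
person_a = "TTGACACACACACAGGT"  # 5 CA repeats
person_b = "TTGACACACAGGT"      # 3 CA repeats

print(repeat_count(person_a), repeat_count(person_b))
```

    A real fingerprint compares repeat counts like these at a dozen or so loci at once.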

    Now, having said that, there are always exceptions. Some organisms have entirely novel mutation patterns. The influenza virus (admittedly, not an organism in the traditional sense) mutates almost exclusively in the coding areas of its envelope proteins. Even stranger, only one strain of the virus seems to survive each year to propagate the next. (See the 2001 article by Bull and Wichman entitled "Applied Evolution" in the journal Annual Review of Ecology and Systematics.)

    Basically, what I'm saying is that the fact that some parts of the genome mutate faster than others is something we already know. This isn't necessarily news. The only way I can see this being significant is that lab mice are generally thought to be basically genetically identical. They're normally inbred for about 20 generations (most lines don't survive past 7) to ensure the homozygosity of the mice. Inbred mice like this are valuable because the way they react is consistent and reproducible (traits that are mainstays of science). If they're mutating faster than we expected, it may have an effect on the reliability of the studies done with these mice.
  • Re:heh (Score:3, Informative)

    by wizardforce ( 1005805 ) on Monday November 05, 2007 @02:50AM (#21238567) Journal

    "...evolved to the point where they work well with evolution" ya think?
    It doesn't mean what you think it means. What the article was talking about in this regard is that a genome will tend to evolve in such a way that its mutation rate is good for the organism: not so high as to cause irreversible damage to the genetic line, and not so low as to cripple the organism's ability to adapt genetically to its environment.
  • Re:The Next Step (Score:5, Informative)

    by mauthbaux ( 652274 ) on Monday November 05, 2007 @02:59AM (#21238599) Homepage
    I find it strange that organisms would allow *any* viruses etc. to tinker with its DNA.

    That's kinda what viruses do. The virus by itself cannot reproduce (that's why it's normally not considered to be 'alive') - it has to hijack a cell's reproduction machinery to do the reproducing for it. In order to hijack the cell, it inserts its own viral DNA (or RNA - depends on the virus) into the cell's genomic DNA, and reprograms the cell to make more viruses.

    Often, if the cell doesn't die from the infection, it passes on the viral genes as well when the cell reproduces. Our own human genome has a significant amount of viral DNA in it; most of it has been inactivated, but we still produce some viral proteins in very small amounts (reverse transcriptase for instance). I once heard the estimate that a full 15% of our genome has viral origins, but cannot find any reference to verify this claim at the moment - take it with a large grain of salt.

    Now, cells do have several mechanisms that they use to defend against viral attacks. Most notably, restriction endonucleases. These are enzymes that chop up the DNA at certain sites. We use these enzymes all the time in genetics work. If you've seen images of agarose or acrylamide gels with patterns of lines on them, that's usually DNA that's been chopped into pieces by some of these endonucleases, and then separated by size. Restriction endonucleases are commonly found in bacteria, but can also be found in lower eukaryotes like yeasts.
    Another method for defending against viral attacks is RNases (enzymes that chew up RNA). This works primarily against viruses that use RNA as their genetic material. There's also the trick of marking your own genes with methyl groups so that you can tell the difference between them and foreign DNA (if it's not marked, destroy it). Eukaryotes typically destroy any DNA found in the cytoplasm. So yeah, the cell does have several methods of defending against viral attack.
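    As a rough sketch of what a restriction endonuclease does, here's a toy digestion in Python. EcoRI really does recognize GAATTC and cut after the G; the 'plasmid' sequence here is invented, and a real digest also leaves sticky ends that this sketch ignores.

```python
def digest(seq, site="GAATTC", cut_offset=1):
    """Cut `seq` after `cut_offset` bases of each occurrence of `site`."""
    fragments = []
    start = 0
    pos = seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + cut_offset])
        start = pos + cut_offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])
    return fragments

plasmid = "AAAGAATTCTTTGAATTCCCC"  # invented sequence with two EcoRI sites
print(digest(plasmid))  # three fragments, like three bands on a gel
```

    Separating those fragments by size is exactly what the agarose gel images show.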

    But I suppose it may spend more energy to defend the sensitive areas such that those areas that are more flexible to mutations are not as well protected; meaning they get hit more.

    Once the virus genes have been inserted, removing them is quite difficult. Generally, viruses don't have a specific site that they insert to either, it's typically inserted at random. The reason that our own genes don't get significantly interrupted is that the majority of our genome doesn't code for anything; viruses insert themselves into areas we aren't using anyhow.
  • Re:heh (Score:3, Informative)

    by stranger_to_himself ( 1132241 ) on Monday November 05, 2007 @05:15AM (#21239077) Journal

    What the article was talking about in this regard is that a genome will tend to evolve in such a way that mutation rates will be at a good rate for the organism.

    Indeed. Which is kind of related to this previous study, Rate of Evolution Metrics Observed, which showed that the optimum rate of evolution varied between small, fast-reproducing animals and larger, slower-reproducing ones.

    What this adds is the news (or further evidence if it was already known) that the optimal rate might even vary across different parts of the genome in the same organism.

  • Re:The Next Step (Score:5, Informative)

    by semiotec ( 948062 ) on Monday November 05, 2007 @10:56AM (#21241135)

    The reason that our own genes don't get significantly interrupted is that the majority of our genome doesn't code for anything; viruses insert themselves into areas we aren't using anyhow.
    This is an outdated idea. It would be more correct to say that we don't know what the majority of our genome encodes. Currently, it is estimated that around 40-60% of our genomic DNA is transcribed into RNA. Only a small fraction of these transcripts are messenger RNAs, which encode proteins. We have no idea why (or often even how, as not all have apparent signals for transcription to begin with) these sequences are expressed. One of the newish ideas in evolution is that many novel micro/small RNAs rapidly (on the evolutionary scale) evolve into and out of the genome.
  • Misinterpretation (Score:4, Informative)

    by protobion ( 870000 ) on Monday November 05, 2007 @12:08PM (#21241939) Homepage
    Unfortunately, Ars Technica, and by consequence Slashdot, have completely misinterpreted the original paper, at least regarding the headline used. As many people have stated, there is no wonder in finding that there are genome instability hotspots. This has been known for years. What was not obvious is the existence of hotspots leading to a specific kind of mutation, i.e., copy number variation (CNV). Even though CNVs are mutations in the classical sense, modern molecular biology reserves the term 'mutation' for single nucleotide or codon changes. Drastic changes at the genomic, chromosomal, or transcript level are generally called by their specific names, such as deletion, truncation, transposition, duplication, etc.

    What this study seems to suggest is that certain regions of the genome (irrespective, it seems, of whether these regions are genes or have a known biological function) have a fluctuating copy number, with the rate of fluctuation much higher than expected from a random process -- suggesting the existence of a mechanism that allows this fluctuation to occur. It implies that evolution has uncoupled these particular regions from the potential lethality or drastic abnormality that arises when similar variations occur in other regions (for example, variation in sex-chromosome number leads to Turner or Klinefelter's syndrome).

    The interesting question I see is whether there is a mechanism that allows this "tolerance" of variation in these particular regions, and if there is such a mechanism, whether it can be tailored to allow changes in other regions -- leading to the possibility of creating strains of organisms specially suited to particular scientific experiments (with multiple copies of a gene, etc.), animals that are currently simply impossible to create because those changes are lethal. A far shot would be therapeutics.

    There are certain diseases that arise simply because of a cell's inability to tolerate certain changes in the genome, irrespective of whether those changes are themselves the cause of the lethality. In other words, the cell's defense system, rather than the genetic change, is the cause of the disease. This might be the case in several autoimmune or developmental diseases where, upon sensing a genetic change, cells undergo apoptosis -- irrespective of whether the genetic change is detrimental during the natural life of the cell. So, if one reads the Nature article, there really is some news there.
  • Re:heh (Score:5, Informative)

    by Rei ( 128717 ) on Monday November 05, 2007 @01:08PM (#21242859) Homepage
    The thing is, certain changes are more likely to be advantageous than others, so it only makes sense that certain parts of the genome would adapt more quickly. This isn't anything new. For example, bacterial plastids tend to evolve many times faster than the main bacterial genome. This helps them adapt more readily to changing food sources, threats, etc, without posing the higher risk of lethal mutations that changing arbitrary genes carries.

    An extreme example of segments of DNA mutating faster than the rest comes from the mammalian immune system. Picture this: a mouse has fewer than 100k genes, but can make more than a million different antibodies. How? Each of the millions of B lymphocytes circulating in the bloodstream can make only one antibody, just one amino acid sequence. When the mouse is attacked by a particular disease, almost all of them will be useless against it. But the few that display antibodies with any sort of ability to bind the pathogen do so, and this triggers those cells to undergo rapid mitosis. This produces many clones that can attack the disease; however, they're not exact clones. The gene that codes for the antibody has "C" (constant) regions and "V" (variable) regions, and each antibody uses two identical "heavy" chains and two identical "light" chains. There are three parts to the "heavy" variable region -- VH, DH, and JH -- and two to the "light" variable region -- VL and JL. There are about 50 possibilities for VH, 23 for DH, 6 for JH, 57 for VL, and 9 for JL, so doing the math, the combinatorial joining of segments alone comes out to a staggering 3 1/2 million antibody possibilities. On top of that, a molecule called AID changes a cytosine to a uracil, which isn't normally found in DNA; the body's DNA repair machinery attempts to correct it, changing the gene in the process. The more effectively the B lymphocyte binds the pathogen, the more its reproduction is activated, and the more copies of itself -- both identical and with slightly changed V regions -- it makes. This latter process is called "somatic hypermutation", and we couldn't survive without it.
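    The combinatorial arithmetic above is easy to check; the segment counts below are the approximate ones quoted in the comment.

```python
from math import prod

heavy_segments = {"VH": 50, "DH": 23, "JH": 6}
light_segments = {"VL": 57, "JL": 9}

# Each antibody pairs one heavy-chain combination with one light-chain combination.
combinations = prod(heavy_segments.values()) * prod(light_segments.values())
print(f"{combinations:,}")  # 3,539,700 -- the 'staggering 3 1/2 million'
```

    And that's before somatic hypermutation multiplies the diversity further.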
