GZipping Life Forms: Deflate Reveals Bare-Bones 245

An anonymous reader writes "To distinguish images derived from living vs. non-living sources, USC and NASA JPL researchers report today using the standard gzip compression utility. As a measure of overall pattern complexity, they find that the inherent pixel content of biologically generated fossils produces higher image compression ratios [more data redundancy], compared to their non-biological counterparts. The more the file shrinks, the more likely it is that a living process was involved. A test is live online here. This extends the simple, but powerful, uses of gzip to biogenic fossil detectors, in addition to spam cop filters, DNA sequence comparisons, digital camera image crunchers, etc. In nine months, the two Mars rovers will send back the first microscopic-scale images of Mars rocks, which should be amenable to some of these same techniques: thus gzipping is apparently pretty zippy."
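The trick described in the summary is easy to reproduce at home. A minimal sketch of the idea, with synthetic byte strings standing in for real fossil images (the "biogenic" texture here is hypothetical, not the researchers' data):

```python
import gzip
import os

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: lower means more redundancy."""
    return len(gzip.compress(data)) / len(data)

# Hypothetical stand-ins for image pixel data: a highly patterned
# "biogenic" texture versus incompressible noise.
patterned = bytes(range(256)) * 256   # strong repeating structure
noise = os.urandom(256 * 256)         # essentially no structure

# The patterned data shrinks dramatically; the noise barely shrinks at all.
print(compression_ratio(patterned), compression_ratio(noise))
```

Real images would of course sit between these two extremes; the paper's claim is only that biogenic textures sit measurably toward the redundant end.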
  • I'd assume (Score:3, Interesting)

    by Omkar ( 618823 ) on Monday March 31, 2003 @10:37AM (#5631049) Homepage Journal
    that this has something to do with patterns and image continuity. If so (enlighten me!), then it would be a decent filtering tool, but reliability would be a major problem. Geological (or whatever) patterns could fool the algorithm. Finally, the most compressible image is pure monochrome - is it alive?

    (Mods: the last line was a joke, intended to point out a particularly simple example of a problem - not a troll)
  • horsefeathers. (Score:1, Interesting)

    by Anonymous Coward on Monday March 31, 2003 @10:37AM (#5631050)
    It is true that many pictures of life forms compress better or worse than inanimate objects. But just because a picture of something compresses similarly to a life form doesn't mean it is a life form. That may simply be coincidence.
  • uhhh.. huh? (Score:2, Interesting)

    by SamBeckett ( 96685 ) on Monday March 31, 2003 @10:37AM (#5631059)
    Doesn't gzip only look for patterns in one dimension? Assuming they are using these for pictures, they are missing the boat on at least one more area of complexity!
  • bzip2? (Score:3, Interesting)

    by maxwell demon ( 590494 ) on Monday March 31, 2003 @10:43AM (#5631093) Journal
    Has anyone checked if bzip2 is better or worse in detecting biological products?

    After all, they have quite different compression characteristics (on one hand, compression of a megabyte of zeroes is much better with bzip2; OTOH, appending a copy of a file to itself and then compressing adds much less compressed size with gzip than with bzip2 - tested with /usr/src/linux/kernel/sys.c, 24957 bytes uncompressed).
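The parent's zeroes experiment is quick to check with Python's stdlib bindings for both codecs:

```python
import bz2
import gzip

zeros = b"\x00" * (1024 * 1024)  # a megabyte of zeroes, as in the comment

gz = len(gzip.compress(zeros))
bz = len(bz2.compress(zeros))

# DEFLATE must emit a long chain of 258-byte matches; bzip2's run-length
# and block-sorting stages collapse the run almost entirely.
print(gz, bz)
```

Which codec's bias is "right" for biogenicity detection is exactly the open question the parent raises.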
  • by RNG ( 35225 ) on Monday March 31, 2003 @10:43AM (#5631095)
    Although I'm certainly no compression expert, I think this makes sense. Many (most?) natural systems have fractal structures on some level, so it only makes sense for them to compress better (i.e., have more self-similar features) than systems which don't have this feature.

    Then again, what do I know? Maybe someone more immersed in this field can tell us whether there's a seed of truth to my ramblings ...

    Greetings
    --> R
  • by Anonymous Coward on Monday March 31, 2003 @10:45AM (#5631113)
    Companies like Image Metrics use a mathematical translation into n-dimensional space similar to a compression algorithm to perform some interesting kinds of image recognition and processing. Examples are medical diagnosis, facial recognition, crystal growth monitoring and the like.

    http://www.image-metrics.com/pages/technology.asp
  • by 16977 ( 525687 ) on Monday March 31, 2003 @10:53AM (#5631157)
    One of the posters brings up an interesting point. Although meaningful data has more information than a blank signal, it also has less than pure noise. When you download pictures, regardless of the "meaning" they have to you, their compression can vary a considerable amount. And you've probably heard the statistic that the English language is about 50 percent redundant. That figure may vary a bit too, but the point is that English's meaning to us is largely independent of its information content. And the probability that an image of a life form with more information will also have more "meaning" is probably just as uncertain.
  • by MarkWatson ( 189759 ) on Monday March 31, 2003 @10:53AM (#5631158) Homepage
    This seems like a "sort of" restatement of Kolmogorov Complexity.

    Roughly, Kolmogorov Complexity is a measure of randomness - the measure is how long a computer program needs to be to reproduce the data (pardon the oversimplification).

    -Mark
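Kolmogorov complexity itself is uncomputable, but any compressor gives a cheap, computable upper bound, which is presumably what the researchers are leaning on. A sketch:

```python
import gzip
import os

def k_upper_bound(data: bytes) -> int:
    """gzip output length as a crude, computable upper bound on the
    Kolmogorov complexity of `data` (the true value is uncomputable)."""
    return len(gzip.compress(data))

simple = b"a" * 1000           # describable by a tiny program
random_ish = os.urandom(1000)  # almost surely has no short description

print(k_upper_bound(simple), k_upper_bound(random_ish))
```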

  • ahh, but the picture of your wife contains a lot of inanimate objects. I'm sure if you cropped the picture down to just her (or reasonably close) she would fare better in this comparison.
  • by dpbsmith ( 263124 ) on Monday March 31, 2003 @10:59AM (#5631186) Homepage
    zip is a fine thing, but it's not a pattern-recognition program!

    This is the loopiest thing I've heard of since Rosenblatt reported that his Perceptrons could distinguish between music composed by Bach and music composed in imitation of Bach.

    Good heavens, any picture that's slightly out of focus will now be declared to be evidence of "biological processes."

    I'm guessing that the researchers are not as nutty as they sound and that they've done more than is being reported, but still...

    Reminds me of the researchers in the sixties who were publishing analyses of data that supposedly showed "biological clocks." It turned out that they were using smoothing algorithms that, basically, were filters with a 24-hour peak in the frequency domain--so their analysis was creating the patterns they claimed to be detecting. A debunking article was published in Science in which another researcher used data from a random number table (the "unicorn" data) and showed that the same analysis techniques proved the unicorn had a biological clock.

  • Slightly Dodgy (Score:5, Interesting)

    by jolyonr ( 560227 ) on Monday March 31, 2003 @11:10AM (#5631238) Homepage
    This whole thing is slightly dodgy, and I begin to wonder whether it was released a day early by mistake.

    The big problem is the use of JPEG source images. Unless you've turned the quality setting up to maximum, the JPEG artifacting (which in effect repeats blocks of image data after transitions) will probably mask any hidden level of complexity in the images - the human brain is a much better pattern-recognition tool than most computer algorithms (especially those not designed for the task!).

    Throw high-resolution bitmap files at it, and I'd be more persuaded that there is a genuine effect. Until then, I suspect it's more of a happy coincidence that the files they've thrown at it give results they are excited about.

    Jolyon
  • Re:why no bzip2 ? (Score:5, Interesting)

    by bill_mcgonigle ( 4333 ) on Monday March 31, 2003 @11:10AM (#5631242) Homepage Journal
    Doesn't bzip2 outperform gzip?

    gzip might be preferable because it works more locally. It only keeps track of the last n bytes of data and does substitutions based on patterns seen in those n bytes.

    bzip2 uses a Burrows-Wheeler block sort over blocks far larger than gzip's window, so the compression is much less local. That's great if you're going for compression, but for this work it might be misleading.

    That said, gzip doesn't know about image formats, so I wonder if these guys are getting some false positives on scanline wraps and other non-image data.
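The locality point is concrete: DEFLATE back-references can only reach 32 KB into the past. A quick demonstration (the sizes here are arbitrary, chosen only to straddle the window):

```python
import gzip
import os

block = os.urandom(10_000)  # an incompressible 10 KB chunk
pad = os.urandom(40_000)    # 40 KB of filler

# Second copy starts 50 KB after the first: outside DEFLATE's 32 KB
# window, so gzip must spell the whole block out again as literals.
far = len(gzip.compress(block + pad + block))

# Second copy starts 10 KB after the first: inside the window, so gzip
# encodes it as cheap back-references.
near = len(gzip.compress(block + block + pad))

print(far, near)
```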
  • by KingRamsis ( 595828 ) <kingramsis&gmail,com> on Monday March 31, 2003 @11:20AM (#5631279)
    It was an interesting coffee-break discussion with one of my professors: we were arguing whether there is a neat way to estimate the semantic content of a neural network after training it. I recall suggesting compressing the values of the weights of all layers - the less compressible, the more trained the network.
  • by AugustMoon ( 593085 ) on Monday March 31, 2003 @11:21AM (#5631283)

    Your DNA is only sufficient to create another state machine with the same rules you had at birth.

    It will not re-create your complexity, because our DNA-state machines are designed to create brains that are 'genetically memoryless', capable of self-modification, and possessed of incredible data collection and storage capacity.

    Think of your DNA as the graphics engine for Quake. It is relatively small (space-wise) compared to the textures and levels. Add different data, and you still have a first-person game, but a completely different one.

  • Re:Makes sense... (Score:5, Interesting)

    by jolyonr ( 560227 ) on Monday March 31, 2003 @11:27AM (#5631306) Homepage
    Unfortunately it's not that simple: inorganic systems can have as much visual complexity as organic things. For example.. um.. (looks out of window here in Toronto).. a snowflake! Fractal complexity, such as that seen in the branches of a tree, is frequently mirrored in the inorganic world - the snowflake is one example; another, less well known, is manganese dendrites, which look just like fossil plants but are totally inorganic, such as these [vic.gov.au] [Victoria Museum]. The patterns of frost on a frozen windscreen are another example. I can't see how a computer program can distinguish whether such complex patterns are signs of life or not. Still, if it helps NASA get more funding, then who am I to argue! Jolyon
  • Pattern Recognition (Score:3, Interesting)

    by cyber_rigger ( 527103 ) on Monday March 31, 2003 @11:30AM (#5631330) Homepage Journal

    I envision a whole array of compression algorithms.

    Each algorithm could be fine-tuned for a particular type of pattern.

    Is that an elephant or a giraffe?
    Does it compress better with the elephant algorithm or the giraffe algorithm?
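That per-class idea is essentially compression-based classification: compress the sample appended to a reference corpus for each class, and see which reference helps most. A toy sketch (the "elephant" and "giraffe" corpora below are made up for illustration):

```python
import gzip

def extra_cost(reference: bytes, sample: bytes) -> int:
    """Bytes the sample adds when compressed after the reference;
    smaller means the sample shares more structure with that class."""
    return len(gzip.compress(reference + sample)) - len(gzip.compress(reference))

# Hypothetical per-class reference corpora.
elephant_ref = b"stomp trumpet flap " * 200
giraffe_ref = b"nibble stretch blink " * 200

sample = b"stomp trumpet flap stomp trumpet flap "
label = ("elephant"
         if extra_cost(elephant_ref, sample) < extra_cost(giraffe_ref, sample)
         else "giraffe")
print(label)
```

With gzip the references must fit inside the 32 KB window for this to work; larger corpora would need a block-based compressor.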
  • Separate the chaff (Score:2, Interesting)

    by Anonymous Struct ( 660658 ) on Monday March 31, 2003 @11:38AM (#5631385)
    I doubt this is very accurate for marking photos as hits or misses directly. This kind of thing may be useful more for detecting the lack of life rather than the presence of it. If compression rates are low, maybe you don't have to look at this photo so much. If they're high, maybe you want to examine it more closely. If you're dealing with truck loads of data and you're looking for a needle in a haystack, a mechanism for ruling out uninteresting data is invaluable.

    That having been said, it sounds good in theory that 'organisms are highly patterned and therefore compress better', but then why would you use gzip? Why not take that theory and build something a little more adept at locating particular types of patterns you're interested in, or ruling out the ones you know are going to create false positives?

    So, THAT having been said, I'm forced to wonder if somebody forgot that March has 31 days. Lord knows I can never keep track.
  • hidden markov models (Score:3, Interesting)

    by nounderscores ( 246517 ) on Monday March 31, 2003 @11:38AM (#5631389)
    Interesting. For genome analysis Hidden Markov Models have been used in a lot of software. [wustl.edu]

    Maybe if you could have an image recognition system do the hard machine-vision problem of generating a schematic of the picture, and then fed the "leg bone is connected to the hip bone" kind of data into an HMM, you could work out which fossils are ancient Cambrian crustaceans and which ones are Trogdor the Burninator.
  • viruses? (Score:2, Interesting)

    by Mentally_Overclocked ( 311288 ) on Monday March 31, 2003 @11:40AM (#5631403)
    I wonder if viruses (sorry - didn't RTFA) would compress like living life forms or if they would be more similar to nonliving.

    Just a thought.
  • Re:Slightly Dodgy (Score:3, Interesting)

    by kris_lang ( 466170 ) on Monday March 31, 2003 @11:48AM (#5631437)
    I've seen similar errors made by vision science (note that I did not say "image processing") researchers trying to analyze natural scene statistics and come up with interesting patterns. They created "basis functions" and did principal component analysis on sets of images and came up with a basis set that looks curiously like the base images of the DCT (discrete cosine transform), the underlying calculations of the JPEG image format. This is to be expected when you start with a set of images that are JPEG compressed.

    This was actually published in a (barely) peer-reviewed journal, Vision Research. I didn't say "image processing" above because a lot of these vision scientists seem to be psychologists doing visual psychophysics without having a strong background in math, or optics, or (it seems at times) the fundamentals of science.

    The other thing to take into consideration is that gzip is "pseudolinear". It does not take into account the 2-dimensional correlations that exist in image data. Even fax compression takes advantage of it. (and yes, I do realize that gzip can account for runs from previous regions regardless of length or location, but I am trying to point out that there is a specific 2-dimensional set of correlations extant in 2-d image data).

    In these cases being cited that use GZIP, the major function of GZIP seems to be as an indicator of the presence or absence of high-frequency components in the signal stream. Lots of irregular high frequency -> Low compressibility, very little irregular high frequency --> High compressibility factors.

  • Re:Cool (Score:3, Interesting)

    by tijnbraun ( 226978 ) on Monday March 31, 2003 @01:12PM (#5631837)
    A similar technique has been used by Italian mathematicians to differentiate pages by various authors using zip. A Nature article can be found here [nature.com]. After a request from a Dutch newspaper, they were able to identify one debut author (Marek van der Jagt) as an already well-known author (Arnon Grunberg).
  • Re:I compress.. (Score:3, Interesting)

    by 4of12 ( 97621 ) on Monday March 31, 2003 @01:18PM (#5631872) Homepage Journal

    Not only are you, but you are uniquely Mr Methane, because each individual author has unique and identifying characteristics that can be measured using - guess what - compression algorithms.

    Given enough samples, individual authors can be identified and graphs of language relationships [economist.com], too.

    I think it's interesting because it raises the bar on preserving anonymity if you publish widely.

    Add some entropy to your life; write drunk.
