
Mapping Planets and Moons In 3D With Stereophotoclinometry

Posted by Soulskill
from the for-some-reason-that-word-isn't-in-the-spellchecker dept.
subcomdtaco writes with this snippet from a story in the NYTimes: "Dr. [Robert] Gaskell, with software he developed over a quarter-century of trial and error, can process hundreds of images in a few hours, slap them atop one another electronically like coats of paint and produce a topographical map so detailed that you often need a pair of 3-D glasses to appreciate what he has done. At 63, Dr. Gaskell has become the Captain Cook of space. Dr. Gaskell calls what he does 'stereophotoclinometry.' [PDF] Ideally he needs at least three images of the target landscape, usually taken by an orbiting spacecraft or a probe on a flyby to another destination. Only in rare cases can telescope images provide enough detail. The sun angle must be different for each exposure so each image shows different shadows. By comparing the shadows, the software calculates slopes, which yield the altitudes of target features. The computer solves the equation in three dimensions, producing a patchlike topographical maplet."
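The slope-from-shading idea in the snippet can be sketched in a few lines. This is a toy 1-D illustration under a Lambertian-surface assumption, not Gaskell's actual code: brightness gives a local slope relative to the sun angle, and integrating slopes yields altitudes.

```python
# Minimal 1-D photoclinometry sketch: for a Lambertian surface lit
# in the profile plane, pixel brightness I = cos(sun_zenith - slope),
# so each brightness value yields a local slope, and integrating the
# slopes along the profile gives relative altitudes.
import numpy as np

def profile_from_brightness(intensity, sun_zenith_rad, dx=1.0):
    """Recover a relative height profile from normalized brightness values."""
    intensity = np.clip(intensity, 0.0, 1.0)
    slope = sun_zenith_rad - np.arccos(intensity)  # slope angle per pixel
    heights = np.cumsum(np.tan(slope)) * dx        # integrate slope -> altitude
    return heights

# Sanity check: a flat surface lit 30 degrees off vertical is uniformly
# bright at cos(30 deg) and should integrate to zero relief.
flat = np.full(100, np.cos(np.radians(30)))
relief = profile_from_brightness(flat, np.radians(30))
```

Real stereophotoclinometry combines many images with different sun angles and solves for slopes in two dimensions; this only shows the single-profile core of the idea.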
  • Sounds like Photosynth for geeks.
  • about this topic to say if it's "Stuff That Matters", but it's definitely "News For Nerds."
  • Stereophotoclinometer, ya okay another stereotype. what next..... Stereophoto'clit'ometer!......hmm sounds nice.
  • This is new? (Score:4, Interesting)

    by pongo000 (97357) on Thursday December 25, 2008 @04:17PM (#26231329)

    We were doing this sort of thing when I worked for Raytheon in the late 90s, using overlapping satellite imagery from IKONOS [wikipedia.org] to generate DTED (digital terrain elevation data) through some rather complex projections. The advantage of this method over Gaskell's is that there is no dependency on sunlight-based data. I fail to see what is new here...
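For context, the first-order parallax relation behind that kind of stereo DEM extraction fits in one line. This is an illustrative textbook formula, not Raytheon's actual pipeline, and the numbers below are made up:

```python
# For two overlapping images taken from altitude H with baseline B,
# a parallax difference dp (in ground units) maps, to first order,
# to a height difference dh = dp * H / B (the inverse of the
# base-to-height ratio scales parallax into elevation).
def height_from_parallax(dp_ground_units, altitude_m, baseline_m):
    return dp_ground_units * altitude_m / baseline_m

# e.g. a 2 m parallax difference at 680 km altitude with a 400 km baseline
dh = height_from_parallax(2.0, 680_000.0, 400_000.0)  # -> 3.4 m
```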

And before that, Harris Corp was using similar methods (using image metadata) as far back as the early 80s to generate accurate geolocation for any given pixel as well as to generate DTED. Definitely not new.

    • Re:This is new? (Score:4, Insightful)

      by node 3 (115640) on Thursday December 25, 2008 @05:05PM (#26231523)

      The advantage of this method over Gaskell's is that there is no dependency on sunlight-based data. I fail to see what is new here...

You stated one of the things that is new: he used shadows.

It need not be new, or first, or best, to be interesting. People have been making accurate measurements at a distance for thousands of years. That doesn't make this any less cool.

      If you're feeling left out, maybe you should have published a paper.

      • Re:This is new? (Score:4, Interesting)

        by mikael (484) on Thursday December 25, 2008 @07:13PM (#26232133)

There is also a technique called 'photometric stereo'. It takes three images of a rough surface photographed from the same camera position, but with the light source at three different angles. Assuming a basic Lambertian lighting model, it is possible to determine the slope at each pixel of the image (gradients across and down the image) by converting intensity values back into gradient angles for each axis of the image. Using some discrete FFT calculations, it is possible to convert these gradients back into a heightmap. Shadows aren't really much help, as they don't convey any gradient information, and have to be ignored or interpolated.

The only problem is that the light source angles have to be sufficiently far apart in order to get accurate readings. But this technique can be applied to anything from microscope slides to satellite images.
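The per-pixel linear system at the heart of photometric stereo is compact enough to sketch. This is a toy illustration of the setup described above (Lambertian model assumed; the arrays and light directions are invented for the example, and the FFT integration step is left out):

```python
# Three images under three known light directions give, per pixel,
# I = L @ n: a 3x3 linear system whose solution is the (albedo-scaled)
# surface normal. Gradients p = dz/dx and q = dz/dy follow from the
# normal; a discrete FFT method (e.g. Frankot-Chellappa) can then
# integrate (p, q) into a heightmap.
import numpy as np

def normals_from_three_lights(images, lights):
    """images: (3, H, W) intensities; lights: (3, 3) unit light directions."""
    h, w = images.shape[1:]
    I = images.reshape(3, -1)                       # (3, H*W) intensity columns
    n = np.linalg.solve(lights, I)                  # per-pixel albedo * normal
    n /= np.linalg.norm(n, axis=0, keepdims=True)   # normalize away albedo
    p = -n[0] / n[2]                                # dz/dx
    q = -n[1] / n[2]                                # dz/dy
    return p.reshape(h, w), q.reshape(h, w)

# Sanity check: a flat surface (normal straight up) under three lights
# should come back with zero gradients everywhere.
lights = np.array([[ 0.5, 0.0, 0.866],
                   [ 0.0, 0.5, 0.866],
                   [-0.5, 0.0, 0.866]])
normal = np.array([0.0, 0.0, 1.0])
imgs = (lights @ normal)[:, None, None] * np.ones((3, 4, 4))
p, q = normals_from_three_lights(imgs, lights)
```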

        • by beckerist (985855)
          I'd assume this could apply to intra (inter?*) stellar images too. That'd be sweet to see Titan or Enceladus!

*After pondering that thought, I bet this could be applied to any sort of electromagnetic spectrum (visual or otherwise). Point some cameras at the center of the galaxy. That'd be sweet too.
    • by ben2umbc (1090351) *
      You fail to see what is new?

      ...and produce a topographical map so detailed that you often need a pair of 3-D glasses to appreciate what he has done.

      Did yours require 3D glasses?

    • by RockDoctor (15477)

      We were doing this sort of thing when I worked for Raytheon in the late 90s, using overlapping satellite imagery

I was taught to do this sort of thing in the mid-1980s, using aerial photography (this was before there was significant publicly available satellite imagery of any sort). The university lecturer who was teaching us had himself been taught in the late 1940s or early 1950s on his National Service (that's compulsory military service) as part of photo-interpretation for surveillance and/or bomb-da

  • Thanks for posting this type of stuff ! Please keep it coming. I love this site.
  • re (Score:2, Informative)

    by JohnVanVliet (945577)
Nothing that new. I have been making maps for Celestia for quite a few years, and for the most part the imaging is just NOT available to be able to do this. The Japanese craft to the Moon can: http://www.selene.jaxa.jp/index_e.htm [selene.jaxa.jp] uses a forward- and backward-looking camera to do this. The "Mars Express High Resolution Stereo Camera" can: http://www.esa.int/esaSC/SEM8Q2PR4CF_index_0.html [esa.int]. The Mars HiRISE can: http://hirise.lpl.arizona.edu/anaglyph/ [arizona.edu]. But in all cases so far only a small patch of ground is able to b
  • by SiliconFiend (106910) on Thursday December 25, 2008 @09:58PM (#26232771) Homepage

    If his code (or even the compiled software) is available, it might be useful for creating a digital elevation model which could be used to fill in the voids in the SRTM elevation data.
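One simple way such a model could patch SRTM voids is direct substitution from a secondary DEM after removing the bias between the two datasets. This is a sketch of the general idea, not the poster's proposal; the sentinel -32768 is the void value used in SRTM .hgt tiles:

```python
# Fill void cells in an SRTM tile with values from a secondary DEM,
# after correcting for any constant offset between the two datasets
# measured over the cells where both are valid.
import numpy as np

def fill_voids(srtm, secondary, void_value=-32768):
    """Return a copy of `srtm` with void cells filled from `secondary`."""
    srtm = srtm.astype(float)
    void = srtm == void_value
    offset = np.mean(srtm[~void] - secondary[~void])  # bias between DEMs
    filled = srtm.copy()
    filled[void] = secondary[void] + offset
    return filled

# Tiny example: one void cell; the valid cells show a constant +2 m bias,
# so the void is filled with the secondary value plus that offset.
srtm = np.array([[100.0, -32768.0],
                 [102.0,    104.0]])
secondary = np.array([[98.0, 101.0],
                      [100.0, 102.0]])
filled = fill_voids(srtm, secondary)  # void cell -> 101 + 2 = 103
```

A real void-filling pipeline would blend edges and handle spatially varying bias, but the offset-corrected substitution above is the core of the simplest approaches.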

Years back, we were agog at the quality of images - depth, detail - from synthetic aperture radar. Visible-spectrum processing is very cool and very different from radar, but what new problem is really being solved here?

    I'm not trolling, I'm seriously asking. Hundreds of pulse returns per second to a vehicle in flight (where the pulses themselves provide the shadowing) as opposed to three passive returns of separately-sourced shadowing.... I'm just not getting the improvement from a conceptual point of
