Graphics Software Science Technology

Camera that Sees through Smoke and Fog Underway

Posted by CowboyNeal
from the looking-glasses dept.
tomschuring writes "The Age has a story about IATIA, who have been given $2.7 million by the Defence Department to fund development of a military spy camera capable of seeing through fog, smoke and dust storms. The technology uses a highly sophisticated camera that captures three images simultaneously through a single lens. Images thus resolved from between the particles making up fog, smoke, and dust storms are formed into a single picture of the hidden target."
This discussion has been archived. No new comments can be posted.
  • by RKBA (622932) * on Thursday September 23, 2004 @11:22PM (#10337385)
    BugMeNot [bugmenot.com] username and password:
    Username: registrationsucks1 Password: asdoestheage

  • density (Score:5, Insightful)

    by Coneasfast (690509) on Thursday September 23, 2004 @11:28PM (#10337412)
how dense can the fog particles be? this camera would have to have extremely high resolution to do this kind of thing. anyone have any specs on this?

    the uses for this are endless, e.g., if the technology becomes cheap enough, we can have this in cars to help with driving during foggy weather.
    • Re:density (Score:5, Informative)

      by ajna (151852) on Friday September 24, 2004 @12:06AM (#10337581) Homepage Journal
This system does not rely on resolution. You might be imagining it as taking two (or more) pictures shifted horizontally, perhaps, and somehow subtracting the intervening particles' optical effects, leaving only the subject matter. This is not how the system works, however: instead, as the summary briefly but correctly stated, it relies on three images being taken, one focused in the plane being studied and the other two focused before and after that plane. Quantitative Phase Microscopy [google.com] is the process of extracting additional data about the subject in the plane from the data in all three images. It doesn't rely on the resolution of the sensor because the additional information is derived from the optical properties of the light passing through/reflected off the surface, not from sensor trickery.

      I guess this could be used on cars given enough processor speed, but it's really not applicable in this case, as it yields additional information about something in a plane (parallel to the sensor of the imaging device -- imagine a brick wall ahead of you when driving). When driving, the plane, say, 50m ahead of the car is moving just as fast as you are, and seeing ultra-crisp images of that plane for the instant that it is 50m ahead would be of dubious utility imo.
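The three-exposure capture described above boils down to estimating how intensity changes along the optical axis. A toy numpy sketch of that first step, with entirely made-up values (nothing here reflects IATIA's actual hardware or data):

```python
import numpy as np

# Hypothetical through-focus stack standing in for the three captures
# described above: one image in the plane under study, one focused just
# before it, one just after. Values are invented for illustration.
dz = 2e-6  # defocus step in metres (invented)
shape = (64, 64)
I_minus = np.full(shape, 0.99)  # focused dz before the plane
I_focus = np.full(shape, 1.00)  # focused in the plane
I_plus = np.full(shape, 1.01)   # focused dz past the plane

# Central-difference estimate of the axial intensity derivative dI/dz,
# the raw ingredient quantitative phase retrieval works from.
dI_dz = (I_plus - I_minus) / (2.0 * dz)
```

The phase-retrieval step then consumes this derivative together with the in-focus image; the derivative is why two extra exposures are needed at all.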
      • Re:density (Score:4, Interesting)

        by dew-genen-ny (617738) on Friday September 24, 2004 @01:41AM (#10337861) Homepage
        Regarding cars -

        surely we could just sweep a range of values....

        or are you of the mind that a TV is impossible because we can only draw one line of dots?

        just a case of enough processing power, surely?
        • Re:density (Score:3, Informative)

          by ajna (151852)
There is the possibility, yes. TV generally features items in a single plane of focus (enhanced by depth of field) and the raster sweep is in this plane. The problem in applying this technology to cars, for example, lies in that the "sweep" is of the plane itself. Of course it's possible in theory, but it's a different problem than what has already been worked out so thoroughly in television.
Could be useful on telescopes, if a bit of mist threatens to spoil the evening. At least for imaging planets, I don't think long exposures are feasible.
        • I don't believe that it would be useful on telescopes, as they are focused at essentially infinity. Thus one would not be able to capture the required image focused slightly past infinity with any precision. I'd love to be proven wrong -- if someone knows of an application of QPM at infinity focus please post it.
  • by physicsphairy (720718) on Thursday September 23, 2004 @11:28PM (#10337416) Homepage
    "that captures three images simultaneously through a single lens." There is also a Kodak version, where one set of pictures is lost, another is misdeveloped, and the third is inadvertently sent to your ex with the same middle initial.
  • by tqft (619476) <[ianburrows_au] [at] [yahoo.com]> on Thursday September 23, 2004 @11:30PM (#10337428) Homepage Journal
    Some detailed links on how it works

    http://www.iatia.com.au/technology/insideQpi.asp

http://www.iatia.com.au/technology/applicationNotes.asp

The algorithm has a number of key advantages, including:

    * Returns phase and intensity information independently
    * Provides quantitative, absolute phase (with DC offset)
    * Is a rapid, stable, non-iterative solution
* Works with non-uniform and partially coherent illumination
    * Offers relaxed beam conditioning
    * Solves the twin image problem of holography
    * Has been experimentally applied to a number of radiations

You can find their list of patents on their site. Digging into these should give you more detail.

I don't care, I am going on holidays for 3 weeks in 3 hours

  • by telly333 (659241) on Thursday September 23, 2004 @11:31PM (#10337432)
    Just in time for politics.slashdot.org!


    telly
  • 3D? (Score:4, Funny)

    by Nathdot (465087) on Thursday September 23, 2004 @11:32PM (#10337438)
    The technology uses a highly sophisticated camera that captures three images simultaneously through a single lens

    Unfortunately the image cannot be viewed without Red+Blue 3D glasses.
    • by Alsee (515537)
      >three images simultaneously
      Unfortunately the image cannot be viewed without Red+Blue 3D glasses.


      Your count is off, Red+Blue is only two. This uses THREE images.

      The image cannot be viewed without Red+Blue+Green 4D glasses.
Oh, and you'd have to be a three-eyed transdimensional being to wear them.

      -
  • by vectra14 (470008) on Thursday September 23, 2004 @11:32PM (#10337441)
These guys at Stanford have done some really amazing stuff that's directly related. Except that they have literally dozens of cameras (as seen in their ppt), and their research seems to concentrate on multifocal image reconstruction (see ppt slides, presentation is quite good)

    Link [stanford.edu] (has cool results links)
    • by ajna (151852) on Thursday September 23, 2004 @11:48PM (#10337512) Homepage Journal
      The Stanford work is actually entirely different. They utilize parallax -- in other words, their cameras are in physically distinct locations and see the scene with different perspectives. The IATIA work utilizes a single point of view, with images captured with the focal plane at the desired location and then slightly fore and aft. Read more here, at a Columbia site [columbia.edu].

      Quantitative phase microscopy is a relatively new technique that can generate phase images and phase-amplitude images. In practice, to obtain a quantitative phase image one collects an in-focus image and very slightly positively and negatively defocused images, and uses these data to estimate the differential with respect to the defocus of the image. These images (a through-focal series) can be easily obtained in our system with our z-motion nano-positioner. The resulting data can be solved to yield the phase distribution by Fourier-transform methods. Results are obtained by essentially solving an optical transport equation. Significantly, the phase that is obtained does not have to be unwrapped, as is required for interferometry.


      I'd be lying if I told you I completely understand the quoted paragraph, specifically what "essentially solving an optical transport equation" refers to, but I'm sure some cursory googling will lead the curious to specifics, certainly more than googling on terms in the article summary would yield.
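The "Fourier-transform methods" step in the quoted paragraph can be sketched concretely. Under the simplifying assumption of near-uniform intensity, the transport-of-intensity equation reduces to a Poisson equation for the phase, which a pair of FFTs solves directly. This is a textbook toy, not IATIA's algorithm; the function name and parameters are invented for illustration:

```python
import numpy as np

def tie_phase(dI_dz, I0, wavelength, pixel_size):
    """Recover phase from the axial intensity derivative dI/dz via the
    Fourier (Poisson-equation) solution of the transport-of-intensity
    equation, assuming near-uniform intensity I0.

    All names and parameter choices are invented for illustration."""
    ny, nx = dI_dz.shape
    # Angular spatial frequencies for each FFT bin.
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0  # dodge divide-by-zero at the DC bin
    # TIE with uniform intensity: laplacian(phi) = -(2*pi/(lambda*I0)) dI/dz
    rhs = -(2 * np.pi / (wavelength * I0)) * dI_dz
    phi_hat = np.fft.fft2(rhs) / (-k2)  # inverse Laplacian in Fourier space
    phi_hat[0, 0] = 0.0  # the DC offset of the phase is arbitrary
    return np.real(np.fft.ifft2(phi_hat))
```

Note there is no unwrapping step, which matches the quoted claim that the recovered phase "does not have to be unwrapped" the way interferometric phase does.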
      • by TiggertheMad (556308) on Friday September 24, 2004 @12:38AM (#10337695) Homepage Journal
What they are saying is this: They take three pictures. On a camera there is a point somewhere in front of the lens that is the 'focus point'. The distance it sits from the camera will vary with the lenses and their spacing, but it is basically a fixed distance for any given setting. In the first picture that point is set behind the subject, in the second right on the subject (in focus), and in the third in front of the subject. Because you know how the lenses were made, you can do some math and figure out how far away each element in the picture is by how the focus changes between the shots, and get a (quasi) 3d model of everything in the picture. The concept is simple enough, although having a proc that can do that in real time could be a challenge.

The real challenge is this: You are building a 3d model by interpolating data from a scene, but you are only doing it in one dimension. I bet a 3d picture would look like a scene from Doom1. You can create flat sprites and position them, but you can't capture any depth information without parallax interpolation, either via lateral movement and reshooting or via additional cameras.
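The per-pixel version of this idea is the classic depth-from-focus heuristic: for each pixel, pick the focal plane in which the image is locally sharpest, which yields exactly the "flat sprites at assigned depths" result described above. A rough sketch (the function, its window size, and the sharpness measure are invented, not from any real system):

```python
import numpy as np

def depth_from_focus(stack, plane_depths, win=5):
    """Assign each pixel the depth of the focal plane in which it looks
    sharpest (highest local high-frequency energy).

    'stack' is a list of images focused at the distances listed in
    'plane_depths'. Names and window size are invented."""
    scores = []
    k = np.ones(win) / win
    for img in stack:
        # High-pass response via a discrete (wrap-around) Laplacian.
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        e = lap ** 2
        # Box-smooth the squared response so single-pixel noise
        # doesn't decide the winner.
        e = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, e)
        e = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, e)
        scores.append(e)
    best = np.argmax(np.stack(scores), axis=0)  # index of sharpest plane
    return np.asarray(plane_depths)[best]
```

The output is quantized to the captured focal planes, which is the sprite-like depth map the comment predicts: no information between planes, and none about occluded surfaces.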
  • Keith Nugent (Score:5, Interesting)

    by metlin (258108) * on Thursday September 23, 2004 @11:33PM (#10337445) Journal
Hmm, Keith Nugent [iatia.com.au] is fairly well known in some niche areas of optics. If I remember right, his initial work on the use of x-rays and the like to compensate for normal visible hindrances was met with some opposition, but he is quite famous otherwise.

    That was because, ironically, this was developed as a method to visualize biological stuff, and some felt that his methods would not quite be suitable for such a task. His ideas were to use various parameters such as phase, intensity and angle of vision to extract information which could be correlated and converge to recreate images with minimal amount of information, which later gained acceptance.

I guess he built on that technique, and it later evolved to the point where the military took notice. Interesting nevertheless.
  • by shoolz (752000) on Thursday September 23, 2004 @11:35PM (#10337462) Homepage
    Much is left to the imagination in the article

I am imagining that since it is not possible to "see" "through" an object, these three images must be of various wavelengths (visible light, ultraviolet light, and infrared) and then are run through an interpolation process to get a probable image of what is behind the obstacle.

    Am I out to lunch? Can anybody shed more light on how this works?
    • No, it doesn't use special wavelengths. It uses normal lighting.

The system is based on "phase", which roughly translates as "distance". Before anyone objects, I am knowingly and intentionally glossing over the distinction between them for purposes of an easy, clear explanation.

Smoke/dust/fog scramble brightness information, but apparently they do not fully scramble distance information hidden within the lightwaves. When you take an out-of-focus photo, part of the blurring is controlled by / encodes that distance information.
  • Hi-res TV stills (Score:2, Interesting)

    by whereiswaldo (459052)

    I was recently thinking about a technique which might be used for creating high definition stills of television programs.

    The principle goes like this: you can get a view of an entire room with only a slit to look through. All you need to do is move back and forth to get the extra details.

So with the TV stills, you let the camera pan around a bit on a subject and capture all of the detail for each distinct area of the picture (eyes, whatever) since each of the raster lines on the tv are like the slit through which you view the scene.
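The pan-and-accumulate idea is essentially what shift-and-add frame stacking does: register each frame against a reference, then average, so scene detail persists while independent noise averages down. A minimal sketch assuming whole-pixel shifts and circular wrap-around (all names invented; real tools like ALE are far more sophisticated):

```python
import numpy as np

def shift_and_add(frames, ref=0):
    """Register each frame to frames[ref] by the integer shift that
    maximises circular cross-correlation (computed via FFT), then
    average. Averaging N aligned frames cuts independent noise by
    roughly sqrt(N). Sketch only: assumes integer shifts and wraps
    at the borders."""
    base = np.asarray(frames[ref], dtype=float)
    F_base = np.fft.fft2(base)
    acc = np.zeros_like(base)
    for f in frames:
        # Cross-correlation peak marks the shift that best aligns f to base.
        corr = np.real(np.fft.ifft2(F_base * np.conj(np.fft.fft2(f))))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Rolling by the peak location undoes the frame's shift.
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Sub-pixel registration is what buys actual extra resolution rather than just noise reduction, but that is a longer story.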
    • Re:Hi-res TV stills (Score:3, Interesting)

      by swb (14022)
Doesn't changing angles on the subject (a result of moving the camera) cause you to collect not more but different data on the subject, resulting in an image that's richer in angular/dimensional data, but not in 2D resolution?

      It seems like you'd end up with a David Hockney-like image, not a higher resolution image.
      • That sounds reasonable, but don't you think that could be accounted for in the final image?
        • If the object was a flat, 2-D surface you could probably correct for it relative to some start position. But a 3D object? You'd have to know its complete 3D geometry to do the perspective transformations.

          If you had to scan the object with a laser to get 3D data on it, you might as well accept a lower resolution but 3-D image instead.
Not entirely sure what you want, but it sounds a bit like ALE [dyndns.org]. I've played around with it a bit [blog.tc.dk] and it's quite cool. Does have its limitations, though...
    • I can not for the life of me find an online reference to it, but I watched a crime documentary about a case where they did almost exactly that.

      They had a very grainy security camera tape of a suspect with a tattoo. Through a process (I think) of analyzing the deltas between the frames were able to create a shockingly detailed 'close-up' of the tattoo.

      This led to the suspect's identity being discovered, and he was eventually convicted of whatever heinous crime it stemmed from.
  • by Stormwatch (703920) <rodrigogirao@hotmai[ ]om ['l.c' in gap]> on Thursday September 23, 2004 @11:41PM (#10337484) Homepage
    Here's what you want, a camera that sees through clothes [wired.com]. Sheesh...
  • I fought the law... (Score:3, Interesting)

    by Rendition (816193) on Thursday September 23, 2004 @11:49PM (#10337514) Homepage
    Hope they can't make this work for speed cameras...
  • vaporware (Score:2, Insightful)

    by Doc Ruby (173196)
    Wake me when journalists have a camera that can see through the fog of war, where the first casualty is the truth.
  • by Anonymous Coward
    ... now I can stop feeling guilty for turning off fog of war in games!
  • Already exists (Score:5, Interesting)

    by leabre (304234) on Friday September 24, 2004 @12:19AM (#10337627)
I was in the US Navy from 1994-1996 and the damage control teams already have a special camera (forget what it is called) that can see through dense smoke (the type you would expect from a jet fuel fire or ammunition fire on a ship) and help you to see clearly through the smoke.

    Wonder what makes the camera in this article so different from the technology the Navy already uses... I'm sure the current navy breed is much more advanced than it was 10 years ago.

    Thanks,
    Leabre
    • Re:Already exists (Score:2, Interesting)

      by Anonymous Coward
      Walked through one of the carriers docked in Newport around that same time and they let us use a device (looked like old style night vision binocs) that sounds like what you are talking about. It worked off heat, and wasn't color though. It was cool to see the heat from a lighter swirling around for 2 min after the flame was extinguished.
    • Re:Already exists (Score:4, Insightful)

      by GooberToo (74388) on Friday September 24, 2004 @03:56AM (#10338246)
I'm guessing that it's an IR type camera, which is more commonly being used by fire fighters these days. They use these to quickly search burning houses (etc). Without this gear, they must search the same way rescue workers have for the last several hundred years (blind belly crawl), if not longer. Even these cameras have their limitations, based on the ambient temp and the number/size of particulates in the air. These generally work well for rescue because the air on the floor is cool and a warm body still makes for good contrast.

      I can't remember the brand of this system, but IIRC, they are super expensive. Expect to have to pay something like $40-60k per camera, per fire fighter. This is why communities often do fund raisers to get these for their local firehouses. Bluntly, most firehouses can't afford one, let alone the multiples, which are considered ideal.

      At any rate, after all that, there is a huge difference between IR and what this article is talking about.
      • Re:Already exists (Score:2, Informative)

        by Phoebus0 (446231)
        Actually, they are called "Thermal Imaging Cameras", and they are cheaper than that. Multiple companies make them, ours are made by MSA. My department just purchased one for around $12k, so they are cheap enough to carry one per apparatus. You can get them even cheaper if you want fewer features. Almost every volunteer department I am aware of has at least one.

        Generally they aren't restricted by particles in the air until the particles get large enough to block the infrared. In some situations the we
Thanks for the pricing update. I should have mentioned that my pricing was based on information from about 8 years ago. Wow. I hadn't realized it was that long ago until I stopped to think about it. I guess prices have fallen!

          Thanks again!
    • Re:Already exists (Score:2, Interesting)

      by Stormshadow (41368)
      You're thinking about the Navy Firefighter's Thermal Imager AKA NFTI. It's an infrared camera, nothing really special. Just got my Surface pin today, so I could tell you all sorts of useless knowledge about it... runs off 12 AA batteries, can't see through glass, lasts about 90 minutes on a full charge.
      -ET3(SW)
  • Parallax (Score:2, Interesting)

    by Anonymous Coward
    "The technology uses a highly sophisticated camera that captures three images simultaneously through a single lens. Images thus resolved from between the particles making up fog, smoke, and dust storms are formed into a single picture of the hidden target."

If it uses the concept of parallax, how can it possibly do this both using the same lens AND at the same time? Isn't parallax based on the concept of different images of overlapping fields of view? I.e., two or more eyes/lenses, or two or more images slightly offset
    • Re:Parallax (Score:2, Informative)

      by Anonymous Coward
      It doesn't use the concept of parallax, and it doesn't take multiple copies of the same original image. Rather, it takes three images focused at different planes, and uses the phase differences in light from traveling different distances to reconstruct the original.
  • my thoughts (Score:5, Interesting)

    by Large Bogon Collider (815523) on Friday September 24, 2004 @12:25AM (#10337646)
I'm not 100% sure, but the technique involves phase shift. As light of a single frequency passes through a medium, its phase is altered and light propagation is delayed. If you can computationally filter out all of the out-of-phase information caused by fog, for example, you can "see" what the hidden object looked like. This process is quite CPU intensive. It seemed that a grayscale SVGA-sized image (0.41 MP) took about 1.5 secs on a P4 2.4GHz to calculate. This should improve with algorithm tweaking and using FPGAs.

This may also have medical applications in terms of optical imaging - see through the patient (arms and legs only, probably). Shine a bright light at the patient. Capture the earliest photons that emerge (the ones that had a direct path to the camera). Ignore slow photons (ones that were absorbed and re-emitted, or bounced around). Voila, instant imaging without x-rays. IIRC, this was in development years ago.
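The "ignore slow photons" step is a simple time gate: anything arriving much later than the straight-line flight time must have scattered along a detour. A toy sketch of that selection, with made-up numbers and an invented function name:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def early_photon_gate(arrival_times, path_length, window=50e-12):
    """Keep only photons arriving within 'window' seconds of the
    ballistic (straight-line) flight time over 'path_length' metres.
    Scattered photons take longer, detoured paths and are rejected.
    All numbers are illustrative, not from the article."""
    t_ballistic = path_length / C
    arrival_times = np.asarray(arrival_times)
    return arrival_times[arrival_times <= t_ballistic + window]

# e.g. a 10 cm limb: ballistic photons arrive at ~0.33 ns, so the
# nanosecond-scale stragglers below get gated out.
kept = early_photon_gate([3.4e-10, 1.0e-9, 2.0e-9], 0.1)
```

The practical catch is that very few photons survive a ballistic path through tissue, which is why this needs bright sources and fast, sensitive detectors.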

  • Better solution (Score:2, Interesting)

    by huge colin (528073)
    Just use an uncooled microbolometer-based infrared thermal imager. BAE Systems has been producing these for years. They're low-power, lightweight, and efficient.

    When receiving this wavelength of IR, you can see through smoke, fog, some plastics (regardless of opacity to visible light), and independent of visible light levels. And seeing radiated heat is, of course, an obvious benefit. A fraction of a degree F is all that's needed to note a difference -- you can even see where things used to be because of
    • Re:Better solution (Score:2, Informative)

      by KD5YPT (714783)
The problem is that those cameras only work well in areas with low temperature (night sea, night sky, moderately cool weather). In deserts where sandstorms kick up, or in picking out idle targets (stationary artillery, for example, that has not moved for a long time), infrared sensors are pretty useless.
  • Alternate uses... (Score:2, Interesting)

    by d474 (695126)
Here are some good targets to test this camera out on, after all, these have some thick fog around them. Maybe we can get a clearer picture of:
    1. CBS - the "true" source of the forged documents
    2. Dick Cheney's secret Energy group (who are the members)
    3. CIA - Tenet's "slam dunk" intelligence source on Iraq's WMD (who fabricated that intelligence - after all, it wasn't real)
    4. White House - who outed the CIA agent
    5. FBI & John Ashcroft - why is Sibel Edmonds' testimony being "re-classified" after 2 years of being in p
  • Finally! (Score:4, Funny)

    by lewp (95638) on Friday September 24, 2004 @01:32AM (#10337844) Journal
    Every day for the last six months.

    Where's my spy camera?
    Where's my spy camera?
    Where's my spy camera?

    Here's your stupid spy camera!
  • by NathanM412 (750512) <[nathanm412] [at] [gmail.com]> on Friday September 24, 2004 @01:43AM (#10337869)
    Sounds like vaporware to me!
  • OASys (Score:5, Informative)

    by philipsblows (180703) on Friday September 24, 2004 @01:54AM (#10337897) Homepage

In college my clinic team worked with Northrop Electronic Systems on their OASys project, or Obstacle Avoidance System. It was a laser + computer navigation system that would scan the horizon through smoke or other aerosols and generate a "safe passage" navigation image for the helicopter pilot using it. Supposedly it worked pretty well (they were still working on it after our 9 months on our piece of the project). It was basically a rotating laser optics assembly that would trace a cone in space, and the assembly would scan in the horizontal plane to yield the lozenge shape (they used that term).

Here's a funny little twist. When we went to the site to visit the developers of the project at Northrop, we stopped off in a meeting room that had on one of the walls a poster for the OASys project, featuring a helicopter with a lozenge-shaped window of visibility depicted against some trees with some smoke and other debris in the air.

Nearby on the same wall was another poster for a weapon system, the name of which escapes me. It was the same poster, but in the middle of the lozenge-shaped window of visibility was a little gunsight, and I think the helicopter had some weapons slung.

We asked our liaison person whether the two projects were related, and he assured us they were completely different as we were brought to another area.

Our professor on the project was a Yugoslav national, and this was in 1992, so you can imagine how fun the rest of our visit was when they found that out....

  • by io333 (574963) on Friday September 24, 2004 @02:04AM (#10337930)
1. Car in fog. It would be nice to have a heads-up display on my windshield, kind of like Cadillac did with night vision some years ago... Whatever happened to that anyway?

    2. Airplanes! No more grounding because of fog.
    • Airplanes! No more grounding because of fog.

As long as there is ground visibility, taking off is the easy part. It's landing that'll kill ya if you're not careful. In other words, as long as you have some visibility to taxi and roll down a runway, you can easily get into the air. The problem is getting safely back on the ground. ;)
  • Hey, I'm on Chopper Four! I can see through fog!
  • ..that rids them of the last excuse for not finding WMD..
  • by thrill12 (711899) * on Friday September 24, 2004 @04:44AM (#10338350) Journal
    ..[name your favourite war-game here] - it's called:

    Turn fog of war off

  • This sounds cool and all - but one that can see through bullshit would be infinitely more useful...
  • Quick, ship this sucker to the Utah District Court, attention Judge Dale Kimball.
  • But, will it see through clothes? That's really the key thing (subject mis-spelling intentional)
that the terahertz cameras which were also being developed, while being capable of filming through walls, are not good for fog, smoke and dust?
It's called Synthetic Aperture Radar (SAR). It's used to get images through the clouds of Venus, etc.

    What's the fascination with the VISIBLE spectrum, anyways?
  • Here are some videos from range-gated imagers. [laseroptronix.se] These are active devices which illuminate the scene with a laser pulse, then turn the imager on for only a few nanoseconds to cover the range of interest. These see through fog well, because you can gate out all the stuff nearer or further than the specified distance.

    They're active devices, though, and the military prefers passives. Active sensors make you a target.
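The gate timing behind these devices is simple round-trip arithmetic: light returning from distance d arrives 2d/c after the pulse. A sketch of converting a distance band to a gate (function name and distances are made-up examples):

```python
# Range-gated imaging timing: fire a laser pulse at t=0, then open the
# imager only while light returning from the band of interest arrives.
C = 3.0e8  # speed of light, m/s

def gate_for_range(d_near, d_far):
    """Gate delay and duration (seconds) to pass only returns from
    between d_near and d_far metres. Round trip, hence the factor 2."""
    delay = 2.0 * d_near / C
    width = 2.0 * (d_far - d_near) / C
    return delay, width

# Image only the band 100-115 m out, gating out nearer fog returns.
delay, width = gate_for_range(100.0, 115.0)
```

A 15 m deep slice corresponds to a gate only 100 ns long, which is why the imager's shutter has to be electronic rather than mechanical.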
