Graphics Software Science Technology

The Nonphotorealistic Camera 233

An anonymous reader writes "This article on Photo.Net describes a new imaging technique that finds depth discontinuities in real-world scenes using multiple flashes added to an ordinary digital camera. As depth discontinuities correspond to real 3D object boundaries, the resulting images look like line drawings. The same technique was used at this year's SIGGRAPH to create a live A-ha 'Take On Me' demo."
This discussion has been archived. No new comments can be posted.

  • by mirko ( 198274 ) on Wednesday December 01, 2004 @05:53AM (#10961745) Journal
    It's interesting to see people finally trying to get from their hardware what they'd usually expect Photoshop filters to do.
    I am, for example, very happy with my Motorola v550 cell phone camera, which takes the trashiest but also most colorful nunrealistic photos.
    • by citog ( 206365 ) on Wednesday December 01, 2004 @06:14AM (#10961816)
      Tell me more about this realistic nun photography hobby of yours. Are you into priests as well?
    • Re:Creative uses (Score:5, Interesting)

      by Random_Goblin ( 781985 ) on Wednesday December 01, 2004 @06:15AM (#10961819)
      I think this is a quantum level above the Photoshop filters on an ordinary photo.

      In a standard photo, what is light and what is dark is only an approximation of the 3D properties from a specific angle.

      The use of multiple flashes gives a much more complete picture of depth.

      The real question is what is the cost of this process, and how does it compare with laser modeling techniques?

      Unless the cost is very low and it's easy to use, I would say most of the uses of this technology would be better served by laser scanners' ability to produce a high-resolution digital 3D model of an object, rather than a 2D representation of a 3D object.

      I know which one I would rather my surgeon was using, I know that much!
      • Re:Creative uses (Score:2, Interesting)

        by mirko ( 198274 )
        from a specific angle

        Reminds me of a Calvin & Hobbes strip where Calvin is in a perspectiveless world.
        BTW, you can also play on the camera's limitations and move it while it's still busy capturing the pic... some kind of artistic fuzz... When you shoot some colorful lights, you always get funny results after equalizing the whole pic with your favourite pic-processing soft.
      • by Rufus88 ( 748752 ) on Wednesday December 01, 2004 @07:49AM (#10962110)
        I think this is a quantum level above the Photoshop filters

        So, you mean, this is the tiniest possible improvement over Photoshop filters?
      • The cost should be minimal. Using a digital camera, a tripod, and just walking around with a remote flash, you should be able to achieve the same results. The hard part is the image processing software that turns the differences in shadow between the images into the outline image. And, while it may help make 3D models, you will still need multiple images. I'm not sure this buys you anything over making images of something at specified levels of rotation and picking common points in the views.
        • Re:Creative uses (Score:3, Informative)

          by cei ( 107343 )
          The hard part is the image processing software that turns the differences in shadow between the images into the outline image.

          That's where Photoshop comes in. It seems like most of the math tools required are built in as layer modes...

          From the article:

          The shadows of an image are detected by first computing a shadow-free image, which is approximated with the MAX composite image. The MAX composite image is assembled by choosing from each pixel the maximum intensity value from the image set.

          OK, this is stac
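
          For the curious, the MAX-composite step the article describes can be sketched in a few lines of numpy. This is just a toy version: the shadow threshold is made up, and where the paper traverses each shadow along the epipolar line from its flash, this sketch simply takes shadow-region boundaries as candidate depth edges.

            import numpy as np

            def rough_depth_edges(flash_images, shadow_thresh=0.85):
                # flash_images: list of grayscale float arrays in [0, 1], one per flash position
                stack = np.stack(flash_images)              # shape (n_flashes, H, W)
                max_composite = stack.max(axis=0)           # per-pixel max ~= shadow-free image

                edges = np.zeros(max_composite.shape, dtype=bool)
                for img in flash_images:
                    # Ratio image: near 1 where this flash lights the pixel, much darker in its cast shadow
                    ratio = img / np.maximum(max_composite, 1e-6)
                    shadow = ratio < shadow_thresh
                    # Crude stand-in for the paper's epipolar traversal: take shadow-region boundaries
                    boundary = (shadow ^ np.roll(shadow, 1, axis=0)) | (shadow ^ np.roll(shadow, 1, axis=1))
                    edges |= boundary
                return edges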

      • Re:Creative uses (Score:3, Informative)

        by peacefinder ( 469349 )
        If you want a 3D model, then this isn't going to be a big help to you. But oftentimes you don't need a full model, you just need a really good image from one or two POVs.

        In my previous life in manufacturing, this would have been a godsend for creating as-built drawings of custom work and for making assembly drawings for the customer.

        For its designed purpose, this is brilliant.
      • Re:Creative uses (Score:2, Interesting)

        by alw53 ( 702722 )

        It seems like this could lend itself to some image compression techniques. Especially for web image downloads, you could send the line drawing first and then fill in the interiors more quickly, because the colors of the interiors are likely to be homogeneous. This would be a good alternative to the current technique of sending a low-res image first and then overwriting it.
  • by nhaines ( 622289 ) <nhaines@ubuntu.cCOFFEEom minus caffeine> on Wednesday December 01, 2004 @05:55AM (#10961750) Homepage
    A live "Take On Me" video?

    People always ask how we'll know when technology will go too far, and I think we've just found out. :P
  • Demo Video (Score:5, Informative)

    by HogynCymraeg ( 624823 ) on Wednesday December 01, 2004 @05:58AM (#10961763)
    Is here [merl.com]
  • Since the site (Score:3, Informative)

    by lachlan76 ( 770870 ) on Wednesday December 01, 2004 @05:58AM (#10961764)
    has slowed to a crawl, Here's [nyud.net] the cache.
  • by Anonymous Coward
    I wonder if this technology could be extended to allow one to quickly take a picture of a real-world object and turn it into a 3D model (for use in 3D Studio etc.). Obviously one would have to take multiple pictures (six?) to get a proper all-round representation of the object.

    Just a thought.
    • by Stween ( 322349 ) on Wednesday December 01, 2004 @06:33AM (#10961876)
      3D cameras do exist ... though the one that I saw was a fairly substantial beast. About the size of a phone booth, you stand in the middle and well-calibrated cameras all around you take pictures, generating a 3D model of whatever's in there.

      It was strange seeing a surprisingly high resolution 3D model of me on screen seconds after I'd stepped out of the thing.
    • It only detects edges: differences in depth sharp enough to cast shadows.

      3D analysis requires a stereo pair of images, like this [goi-bal.spb.ru]. An alternative would be to use some kind of radar or sonar, measuring time differences of bounced signals, etc. Those and other methods [isdale.com] exist for 3D digitizing.

    • The cameras I have seen, low end, that are used for 3-D in jewelry CAM (cameos, brooches, rings, busts, etc.) project a grid on the object and then photograph multiple views. I've done a little engineering work for a company that sells these as a sideline to their table-top CNC milling machines. If you're interested in jewelry and small model making, SHAMELESS PLUG WARNING, have a look at the modelmaster [nyud.net] web site (I don't run this web site so don't bitch at me :-) Talk to Mike. Tell him Bob sent you (for
    • Sony do a video camera with an extra channel for depth information.

      Real-world models need to be shot from 360 degrees and thus are usually recorded while static; the camera revolves around them.

      we used such a system for our Boo Radleys video [couk.com]

      The band members were scanned in, two of whom you can see piloting the plane in this shot [couk.com]

      However, the tiger in this one [couk.com] I rotated and scanned in by hand on a flatbed scanner, then used Photoshop to build profiles and the extruder to make it 3D; took me a
    • by G4from128k ( 686170 ) on Wednesday December 01, 2004 @09:04AM (#10962521)
      This technology is a long way from 3-D. First, this camera can only estimate relative depth, not absolute depth. Thus, it might determine that the foreground object is half as far from the camera as the background object, but have no estimate of the numerical distance of either object - the foreground could be 3 feet from the camera and the background 6 feet, or the foreground could be 5 feet from the camera and the background 10 feet.

      Furthermore, this technology only sees edge discontinuities where a foreground object sits in front of a background object. Thus it cannot tell the difference between a circular disk and a sphere in the foreground. Actually it is worse than that, because the rounded edge of the sphere will cause errors in the estimation of the relative depth of the sphere vs. the background.

      Even with these limitations, the technology could be quite useful in robotics. Combining multiple edge images using optical flow and knowledge of the robot's motion would yield a more accurate 3-D depth map at least for the purposes of navigation.

      As for extending the technology, a second camera would do wonders for pinning down the distances to each observed edge. The system would still need separate software magic for mapping the front surfaces of objects (e.g. discerning the difference between a 3-D sphere and a 2-D disk).
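
      To illustrate the parent's point about a second camera: with a calibrated stereo pair, the absolute distance to each matched edge falls out of its disparity as Z = f * B / d. A toy example (all numbers below are made up for illustration):

        # Pinhole stereo relation: depth = focal_length * baseline / disparity
        focal_length_px = 1200.0   # focal length expressed in pixels (assumed)
        baseline_m = 0.10          # separation between the two cameras, in metres (assumed)

        def depth_from_disparity(disparity_px):
            return focal_length_px * baseline_m / disparity_px

        print(depth_from_disparity(40.0))   # an edge shifted 40 px between views -> 3.0 m away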
  • 3D applications (Score:5, Interesting)

    by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Wednesday December 01, 2004 @06:01AM (#10961773) Homepage
    How about having a camcorder with several differently coloured light sources? By analyzing the correspondingly differently coloured shadows one could create depth information in real time.

    Combine this with moving around a room while filming it, and it should be possible to create an accurate 3D representation even with today's technology.

    If the colours of the light sources were properly matched, any discoloration could probably be eliminated as well.

    Food for thought.
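
    As a rough sketch of how that might look in code: if the three lamps were close to pure red, green and blue and the scene were roughly grey, each colour channel of a single frame would behave like a separate "flash image", and the same MAX-composite trick could run on every frame. The threshold and names below are mine, not from the article:

      import numpy as np

      def depth_edges_from_coloured_lights(frame, shadow_thresh=0.8):
          # frame: float RGB array (H, W, 3) in [0, 1], lit by one red, one green and one blue lamp
          max_composite = frame.max(axis=2)                     # shadow-free approximation
          edges = np.zeros(max_composite.shape, dtype=bool)
          for c in range(3):                                    # one pseudo-flash per lamp colour
              ratio = frame[..., c] / np.maximum(max_composite, 1e-6)
              edges |= ratio < shadow_thresh                    # crude per-lamp shadow mask
          return edges

    Coloured objects would break the grey-scene assumption, though, as a reply below points out.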
    • How about having a camcorder with several differently coloured light sources? By analyzing the correspondingly differently coloured shadows one could create depth information in real time.

      Isn't this how the coloured-glasses 3D filming that was briefly popular in the mid 80s worked?
      • Re:3D applications (Score:2, Informative)

        by Mr2001 ( 90979 )
        Isn't this how the coloured-glasses 3D filming that was briefly popular in the mid 80s worked?

        Doubtful. All you need to make a red-blue 3D movie is two cameras a certain distance apart. Apply a red filter to one and a blue filter to the other, and voila. This multiple-flash technique uses a single camera, as would the parent's suggestion.
        • Re:3D applications (Score:4, Interesting)

          by djmurdoch ( 306849 ) on Wednesday December 01, 2004 @08:15AM (#10962218)
          All you need to make a red-blue 3D movie is two cameras a certain distance apart. Apply a red filter to one and a blue filter to the other, and voila. This multiple-flash technique uses a single camera, as would the parent's suggestion.

          Actually, you don't want to apply the filters to the camera, you want to apply them after the image has been captured, when you combine the two images onto one piece of film.
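
          Concretely, something like this minimal sketch of the usual red-cyan combine (left and right are the two unfiltered captures, shot a few inches apart; nothing here is specific to any one film process):

            import numpy as np

            def make_anaglyph(left, right):
                # left/right: float RGB arrays of identical shape
                out = np.zeros_like(left)
                out[..., 0] = left[..., 0]       # red channel taken from the left-eye image
                out[..., 1:] = right[..., 1:]    # green and blue taken from the right-eye image
                return out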
    • Re:3D applications (Score:2, Informative)

      by jeps ( 700879 )
      Actually, there exist several different solutions to this. One of them is the HDTV Axi-Vision Camera [opticsexpress.org] - doing real-time depth capture with 2.4mm depth resolution - in 720p HD (1280x720 - not 1920x1080). Look at the links at the bottom of the page for video.

      I've seen something similar to this being done before by sending out very short but wide-angle pulses from a laser. By capturing an image with a high speed camera, only a thin slice (in the z-axis) of your scene will be illuminated at any time. By adjust


    • How about having a camcorder with several differently coloured light sources? By analyzing the correspondingly differently coloured shadows one could create depth information in real time.

      Frickin' brilliant.
      • Wouldn't it require a really tight, tight resolution, though, to notice the relevant shift? The technique might work on something close-up that you photograph, but the farther items would probably be tough to resolve. (The shift you get from this two-flashbulbs technique is further apart.)
    • Simply use lights in the IR range - then use a half-silvered mirror design to pick up the flickering IR light (different frequency and flicker rate for each spotlight) and record the cues separately from the visible image. This way you wouldn't ruin the main image just to be able to make it 3D.
    • How about having a camcorder with several differently coloured light sources? By analyzing the correspondingly differently coloured shadows one could create depth information in real time.

      I'm afraid that this would get very complicated very fast if you want to photograph scenes containing coloured objects. (Is that a green object illuminated with white light, or a white object illuminated by green light? Or, for that matter, a green object illuminated with green light?)

      It might be easier to use multip

  • This could be very useful when you need to postprocess an image - like applying a segmentation algorithm.
    Several segmentation algorithms exist. Usually, they look at the color/brightness of an area and use that to do the segmentation. Adding knowledge of spatial position to an image will help segmentation immensely. I'm not sure that 3 small flashes are enough. The examples provided are not exceptional - the same results could be obtained without that special camera. Nevertheless, the idea is good.
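
    As a hedged sketch of how the depth edges could plug into a segmenter: treat them as barriers and label the connected regions between them (scipy is used purely for illustration; this isn't from the article):

      from scipy import ndimage

      def segment_between_depth_edges(depth_edges):
          # depth_edges: boolean array, True on detected depth discontinuities
          interior = ~depth_edges                        # everything that is not an edge pixel
          labels, n_regions = ndimage.label(interior)    # connected regions bounded by the edges
          return labels, n_regions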
    • This could be very useful when you need to postprocess an image

      Not really, since it's specifically an in-process technique. A normal single 2D image doesn't contain the directional information that you get by combining the multiple multi-flash images, so it can't tell you where the actual 3D shapes are.
  • I know it ain't really a 3D mapper, but is it a quick way to grab info that could later be given a more in-depth scan?
    Could this technology be modified to produce a good 3D mapper?
    What's its claim to fame? Shadow comparison, right? Length of shadow = height of object, yes?
    • It seems like an excellent way to do some basic 3d mapping; not of terrain but of obstacles. It would be a quick way to skim tall three-dimensional objects off the top so you don't have to do all the processing. "This thing is too tall to go over, and I'm not allowed to knock anything over, so I can stop thinking about what it is, and worry about going around it." Or of course, "this looks like a human, I'd better shoot it/go around it/offer it a glass of water", et cetera. I think sonar and radar height fi
  • You see, edge detection is funny.

    Real-world discontinuities mean, in effect, that you are running an edge-detection algorithm on the distance signal.

    This will not find edges in newspaper print.

    No edge-detection system is perfect - even this one, which uses spatial edges.

    There is no real new technology here; still, the multiple-flash cameras are amazing and beat any faked edge detection hands down.

    I do think they have awesome capabilities to allow computers to do what our eyes do, which is segment and label areas of our vision, a
  • by taylorius ( 221419 ) on Wednesday December 01, 2004 @06:08AM (#10961796) Homepage
    This technique sounds like it could be useful for 3D reconstruction problems. The main issue in, for example, shape-from-stereo algorithms is accurately finding depth discontinuities, and it can be nigh on impossible with a textureless, evenly lit surface.

    Having said that, I'm not sure whether it would be better than existing solutions for that sort of thing, for example structured light.

  • robot vision (Score:4, Interesting)

    by Tropaios ( 244000 ) <tropaios&yahoo,com> on Wednesday December 01, 2004 @06:49AM (#10961923)
    Could this tech be used to help robots, or really any computer, better understand its environment visually? As I understand it, one of the problems facing robot optics is the lack of depth perception and identifying object boundaries. If they used optics in the nonvisible spectrum and basically walked around with their flashes strobing happily along, would that help these problems? The only problem I see with that is multiple robots' flashes interfering with each other, so maybe it would only be used sparingly when absolutely needed? Or is this technology completely inappropriate for this application?
    • Re:robot vision (Score:4, Interesting)

      by TheRaven64 ( 641858 ) on Wednesday December 01, 2004 @07:16AM (#10961992) Journal
      One possible implementation would be to use 4 single-wavelength searchlights in different places on the robot. If these were outside the visible spectrum, then they would not be distracting to humans (as multiple flashes would be), and could be used to build an object-overlay. By using the flashes intermittently, the robot could subtract the ambient image from the flash image to remove the effects of other robots' flashes.
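
      A toy sketch of that ambient-subtraction step (this only cancels illumination that is steady across the two frames; other robots' own strobes would still need some timing or coding scheme on top of it):

        import numpy as np

        def own_light_component(frame_lamp_on, frame_lamp_off):
            # Difference of a lamp-on and a lamp-off frame: steady light from the room
            # or from other robots' searchlights cancels out, leaving this robot's contribution.
            diff = frame_lamp_on.astype(float) - frame_lamp_off.astype(float)
            return np.clip(diff, 0.0, None)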
    • Another question.. is there any reason we can't use this technique with RF imaging? That would prevent needing an annoying flash, and robots could auto-detect conflicting frequencies and change them...

    • The only problem I see with that is multiple robots flashes interfering with each other
      That, and of course the people with photo sensitive epilepsy thrashing away on the floor in front of them.
  • by Bazman ( 4849 ) on Wednesday December 01, 2004 @06:57AM (#10961951) Journal
    With four flashes, the first thing you better do before any fancy schmancy edge detection algorithm is run the red-eye removal filter!

    • by Trillan ( 597339 ) on Wednesday December 01, 2004 @07:05AM (#10961967) Homepage Journal
      Pfft. Red eye? That's two flashes. With four flashes, you need to run the forked tail and horn remover, too.
      • Re:Four flashes? (Score:3, Informative)

        Pfft. Red eye? That's two flashes. With four flashes, you need to run the forked tail and horn remover, too.

        Well, technically, red eye is avoided with two flashes. A single flash surprises the eye and reflects light off the retina before the pupil has a chance to shrink. Red-eye reduction basically fires a "pre-flash" to prepare onlookers for the real picture.

        Joking aside, this 4-flash thing does make me think that it's not usable on any targets that are moving at all.

        • Yes, my Sony P71 (or whatever, I am bad with numbers) does a series of flashes. It's really blinding, but there's usually no redeye.

          I can't say "never" redeye, though.

          I agree it probably wouldn't work with moving targets. The blur is going to be almost as bad as a long exposure taken for the duration from the first flash to the last flash. No matter how short a time that is, it's going to be at least as long as the second highest light exposure level on my camera (like I said, bad with numbers...).

    • Nah, just make sure you only use the eyes from the last picture. "Red-eye" happens because, if it's dark enough to need a flash, your pupils are going to be dilated; and the camera will take a photo of your retina rather than your iris.

      If you can give someone a good bright flash before the one with which the picture is taken, that will contract their pupils and you won't see "red-eye". Modern cameras do this already, with greater or lesser degrees of success {technically it's quite difficult; you need
  • manuals (Score:3, Insightful)

    by millahtime ( 710421 ) on Wednesday December 01, 2004 @07:09AM (#10961973) Homepage Journal
    This would be great for technical manual writing. It would help you take pictures of the mechanical interfaces.
  • Biometrics (Score:3, Interesting)

    by TheLoneCabbage ( 323135 ) on Wednesday December 01, 2004 @07:22AM (#10962012) Homepage
    I wonder if such a technology could be used for biometric facial recognition. Since the light sources are internal, it would be relatively simple to get consistent reference points from it.

    Also, it would not be *AS* processor intensive, so you could take more photos from more angles.

    Using autofocus and a short depth of focus, you could isolate figures even in crowds. Isolate the target from multiple photos, so you have more than one angle for a biometric.

    If we can track the target in motion, we can assume that FRONT is approximately the direction they are traveling. Use an IR flash so that people don't get all paranoid (not saying they don't have a reason).

    Even with glasses and a beard change it would be tough to fool the system.

    • Re:Biometrics (Score:3, Interesting)

      by LucidBeast ( 601749 )
      I read the book "Phantoms in the Brain" by neuroscientist V. S. Ramachandran a while back, and he described a case where a person with a brain injury saw everything in part of her visual field as "cartoons". He went on to speculate, if I remember correctly, that we all have this cartoon vision under our "real vision".

      This came to mind when, after looking at these pictures, I read your post. Perhaps our brain needs some sort of caricature or simplified image of faces for us to recognize them, but this layer of vision i

    • Even with glasses and a beard change it would be tough to fool the system.

      But this begs the question (no, it really does!) of whether or not it would be tough enough to fool the system. All you're talking about doing here is edge detection. You're going to have to do it awfully fast if you want to get enough outlines (it's not like this technique generates wireframes on its own) to get a good idea of their shape.

      It might provide a small enhancement over current face recognition systems but the few

      • A) Use IR instead; it's cheap and easy for CCDs and it's invisible to humans (therefore not disorienting).

        B) To get the basic data, I could have someone walk down a hallway with these cameras (or just one high-rez camera with a fish-eye lens).

        I now have images of you from 360 degrees, and I know from what angle each of those photos was taken.

        It's not just edge detection, it's a simplification of the process that allows you to process it from 3 dozen angles in the same amount of time. So I don't have to choose
  • Finally! (Score:5, Funny)

    by LabRat007 ( 765435 ) on Wednesday December 01, 2004 @07:28AM (#10962030) Homepage
    A nonphotorealistic camera for Lexmark's entire line of nonphotorealistic printers!!
  • In one of my to-do notebooks I've got a 3-camera setup sketched out, plus some maths. This will get you radial depth on a moving object, as the shots are taken all at once. Also handy if you're moving! It's not a new idea: stereo cameras and viewers were used for remote sensing from the early days of the camera. I used to look at aerial 2-D photos of national forests in my dad's office back in the early '60s.
    You can do the same thing with a 'normal' digital camera if you are taking digital photos while moving an
  • by Anonymous Coward on Wednesday December 01, 2004 @07:54AM (#10962140)
    For those who are uneducated in graphics, the engine photos show two comparative methods:

    The TOP row shows how the camera output is good enough to be used as a technical drawing - it requires very little modification or touch-up.

    The BOTTOM row shows how Photoshop filters butcher the image and the result is completely useless. No amount of touch-up could help that image.

    Furthermore... NO, THIS CAMERA CANNOT BE USED ON MONOCHROME IMAGES. It can't be applied to existing images of any kind, and it isn't a post-filter. There isn't any edge detection involved.

    The 4 flashes cause shadows to be cast in 4 different directions, and the camera creates a composite from the differences. If the subject DOESN'T cast a shadow, then the camera won't work.

    I assume this camera cannot be used to photograph outdoor scenes, simply because the flashes will not cast visible shadows at such distances.

    This is a brilliant method though, and the results are excellent (look at how the details in the spine pop out).
    • The 4 flashes cause shadows to be cast in 4 different directions and creates a composite from the difference. If the subject DOESN'T cast a shadow, then the camera won't work.



      So... You couldn't use this on a vampire. It's the vampires who have that no-shadow thing, right?

  • NPR Quake (Score:3, Interesting)

    by Dan East ( 318230 ) on Wednesday December 01, 2004 @08:13AM (#10962210) Journal
    Speaking of non-photorealistic and real-time, this reminds me of NPR Quake [wisc.edu].

    Dan East
  • All these years, people were trying to make cameras more photorealistic, when what they really should have been doing is making cameras non-photorealistic. I guess my 2-megapixel camera is light-years beyond all those newfangled cameras you see in the store.
  • Imaging techniques that control the illumination are not new. This is a nice and simple application of them, though.
  • by SethJohnson ( 112166 ) on Wednesday December 01, 2004 @09:17AM (#10962605) Homepage Journal


    Had to mention this for those who didn't catch it in 2001. Some students in Wisconsin created a Quake II mod that converts the Open GL rendering engine output to non-photorealistic sketches. Looks like the A-ha video in realtime. I'd really like to see someone bring this to more modern 1st-person-shooters like Doom 3 or Quake 3.

    NPR Quake [wisc.edu].
    • Actually, NPR Quake is not Quake 2, it's Quake, and it's based on GLQuake. Apparently the modifications were not all that extensive. Playing in any of the sketch modes is pretty challenging; it's hard to see anything at a distance except on the most uncomplicated maps. Anyway, in the other direction, tenebrae quake [sourceforge.net] brings (more) realistic shadow/lighting effects to the original Quake.
    • Great link -- it hadn't occurred to me, but 3D modelling with simple polygons like those in earlier FPS games is probably the easiest application to apply a sketch filter to. Nifty.

      Also, there's good news for you -- the page you linked connects to this one [wisc.edu], which is a rough replacement OpenGL driver to postprocess any application's OpenGL calls with any sort of filter ... *very* cool stuff, though the page isn't dated, and there's no source, so it's hard to tell if it's still alive. Does have a screencap from
  • Funny that you should mention A-ha, because "ah ha!" is pretty much what I said to myself when I read TFA. The offset flash on most cameras is usually viewed as a liability that screws up your photos, but these guys have turned around and taken advantage of the effect.

    Simple idea, well executed. Ah ha!
    • by slim ( 1652 )
      The offset flash on most cameras is usually viewed as a liability that screws up your photos, but these guys have turned around and taken advantage of the effect.

      Interesting opinion. My problem with built-in flashes is that they are too close to the lens, meaning there is little to no shadowing, flattening everything out.

      You'll notice a lot of pro photographers have devices to move the flash further from the lens: either tall stalks with the flash at the end, handheld flash units on wires (to be held arm-outstretched in the non-camera hand), or even RC flash units on tripods several metres from the camera.
      • You'll notice a lot of pro photographers have devices to move the flash further from the lens: either tall stalks with the flash at the end, handheld flash units on wires (to be held arm-outstretched in the non-camera hand), or even RC flash units on tripods several metres from the camera.

        Sure, and they usually also have some sort of diffuser or umbrella with their flash. Or they'll bounce their flash off the ceiling for the same effect. Or multiple flashes are set up so that each one fills in the shadows
  • This method replaces value-based edge detection with depth-based edge detection, but to get a "proper" line drawing, you'd want to combine the two. That's because line art usually draws both kinds of discontinuities. I.e., we draw a line at the outer edge of an object, regardless of value/color change (which is what this technique does), but we also draw a line between the red and green stripes on an ugly Christmas sweater, despite the fact that there's no depth difference (which is what traditional edge detection does).
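
    A minimal sketch of that combination, assuming you already have the depth-edge mask from the multi-flash step (the intensity side here is just a gradient-magnitude threshold standing in for a proper edge detector):

      import numpy as np

      def combined_line_drawing(gray, depth_edges, grad_thresh=0.1):
          # gray: float grayscale image in [0, 1]; depth_edges: boolean mask from the multi-flash step
          gy, gx = np.gradient(gray.astype(float))
          intensity_edges = np.hypot(gx, gy) > grad_thresh   # value/colour discontinuities
          return depth_edges | intensity_edges               # draw a line where either kind occurs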
  • Win $10,000 (Score:3, Interesting)

    by Isomorph ( 760856 ) on Wednesday December 01, 2004 @10:04AM (#10963016)
    This story made me revisit some old bookmarks.

    One of them is Canesta [canesta.com], which makes photo sensors that can take pictures that include depth maps.

    To my surprise I see that they are running a contest where you can win $10,000.

    But I don't have time to participate myself, because I am writing my master's thesis. So enjoy the contest [canesta.com].

  • by Spoing ( 152917 ) on Wednesday December 01, 2004 @10:11AM (#10963091) Homepage
    With this, why have only one object in focus? Here's what I mean:

    If autofocus (or any other method) from different angles allows for this enhancement, this technique can be used to 'cut' the image into different focus layers.

    Piece the layers together, and you get a photo that has depth of field and is much sharper at each level.

    The layer information could be stored separately for later processing, or combined with only a little fudging to give a weighted blur to the non-primary layer(s). Keeping the layers separate and doing a comparison would also allow editing tricks such as cutting out objects at a specific depth or performing color enhancements on each level.
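
    The "piece the layers together" step is essentially focus stacking; here is a rough sketch of one way to do it (the sharpness measure and window size are arbitrary choices of mine, not anything from the article):

      import numpy as np
      from scipy import ndimage

      def focus_stack(frames):
          # frames: list of grayscale float arrays shot at different focus distances
          sharpness = []
          for f in frames:
              lap = ndimage.laplace(f)                                   # strong response where in focus
              sharpness.append(ndimage.uniform_filter(np.abs(lap), size=9))
          best = np.argmax(np.stack(sharpness), axis=0)                  # index of sharpest frame per pixel
          stacked = np.take_along_axis(np.stack(frames), best[None, ...], axis=0)[0]
          return stacked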

  • This technique makes outlines for real-life objects, thus providing an important step toward turning real-life objects into cartoons.
  • My degree was in Technical Illustration. I knew then (15 years ago) that it was inevitable that computers were going to make the field obsolete (which this technology appears to *finally* achieve, to a large degree). So I got into computers.

    However, I *do* believe that this statement:
    Additionally, an endoscopic camera enhanced with the multi-flash technology promises to enhance internal anatomical visualization for researchers and medical doctors.

    Won't work out. This flash technology relies on the
