The Nonphotorealistic Camera 233
An anonymous reader writes "This article on Photo.Net describes a new type of imaging technique that finds depth discontinuities in real-world scenes with multiple flashes added to ordinary digital cameras. As depth discontinuities correspond to real 3D object boundaries, the resulting images look like line drawings. The same technique was used at this year's SIGGRAPH to create a live A-ha 'Take On Me' demo."
Creative uses (Score:5, Funny)
I am for example very happy with my Motorola v550 cell phone camera which takes the trashiest but also most colorful nunrealistic photos.
Re:Creative uses (Score:5, Funny)
Don't encourage him (Score:5, Funny)
Jolyon
Re:Don't encourage him (Score:2, Funny)
Using an app that came installed on my phone (photoBase by arcsoft.com), I can snap a picture of anyone's face surrounded by a nun's habit and veil.
Although I doubt that's what the poster meant, I thought I'd let you know.
I tried it once, it was not habit-forming.
The phone itself is not really what I expected...it's a little too bulky. In fact, it may be a form of 'excommunication' as I intend on getting a new one.
Ok, time for coffee.
Re:Creative uses (Score:2)
Re:Creative uses (Score:2)
Re:Creative uses (Score:5, Interesting)
In a standard photo, the pattern of light and dark is only an approximation of the scene's 3D properties from a single angle.
The use of multiple flashes gives a much more complete picture of depth.
The real question is what is the cost of this process, and how does it compare with laser modeling techniques?
Unless the cost is very low and it is very easy to use, I would say most uses of this technology would be better served by a laser scanner's ability to produce a high-resolution digital 3D model of an object, rather than a 2D representation of a 3D object.
I know which one I would rather my surgeon was using, I know that much!
Re:Creative uses (Score:2, Interesting)
Reminds me of a Calvin & Hobbes strip where Calvin is in a perspectiveless world.
BTW, you can also play on the camera's limitations and move it while it's still busy capturing the picture... some kind of artistic fuzz... With some colorful lights, you always get funny results after equalizing the whole pic with your favourite pic-processing software.
Quanta of image processing capability (Score:5, Funny)
So, you mean, this is the tiniest possible improvement over Photoshop filters?
Re:Quanta of image processing capability (Score:2, Insightful)
Re:Quanta of image processing capability (Score:5, Funny)
Re:Quanta of image processing capability (Score:5, Funny)
--
Evan
Re:Creative uses (Score:2)
Re:Creative uses (Score:3, Informative)
That's where Photoshop comes in. It seems like most of the math tools required are built in as layer modes...
From the article:
OK, this is stac
Re:Creative uses (Score:3, Informative)
In my previous life in manufacturing, this would have been a godsend for creating as-built drawings of custom work and for making assembly drawings for the customer.
For its designed purpose, this is brilliant.
Re:Creative uses (Score:2, Interesting)
It seems like this could lend itself to some image compression techniques. Especially for web
image downloads, you could send the line drawing first and then fill in the interiors more quickly
because the colors of the interiors are likely to be homogenous. This would be a good alternative to the current technique of sending a low-res image first and then overwriting it.
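The parent's edges-first idea can be mocked up in a few lines. This is a toy sketch, not an actual codec: numpy arrays stand in for the transmitted data, and a per-block average color stands in for whatever cheap interior encoding you would actually send.

```python
import numpy as np

def edge_first_preview(img, edges, block=8):
    """Toy mock-up of progressive edge-first transmission: show the crisp
    line-drawing layer immediately, and fill the interiors with coarse
    block-average colour (cheap to send, since interiors are assumed to
    be roughly homogeneous)."""
    h, w = img.shape[:2]
    preview = np.zeros_like(img)
    # Coarse interior fill: one average colour per block.
    for y in range(0, h, block):
        for x in range(0, w, block):
            preview[y:y+block, x:x+block] = img[y:y+block, x:x+block].mean(axis=(0, 1))
    # Draw the (already received) line-art layer on top.
    preview[edges] = 0
    return preview
```

The same loop, run with progressively smaller `block` values, would give the coarse-to-fine refinement the parent describes as an alternative to low-res-first loading.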
Technology runs wild! (Score:5, Funny)
People always ask how we'll know when technology will go too far, and I think we've just found out.
Re:Technology runs wild! (Score:5, Funny)
Or maybe ASCII Starwars & Matrix movies... Oh wait...
Re:Technology runs wild! (Score:2)
Laugh, but with the amount of makeup and lighting that goes into a Vivid video production, to more or less wash out any shred of detail ... er blemish ... from the actors, you might as well shoot it in rotoscope.
Re:Technology runs wild! (Score:2)
Another misleading article title! I thought maybe Wal-Mart was selling defective cameras. Now I have to go look elsewhere for my Christmas shopping...
Eric
JavaScript is not Java! [ericgiguere.com]
Demo Video (Score:5, Informative)
Dijjer link (Score:2)
Since the site (Score:3, Informative)
Re:Since the site (Score:2)
3D Modelling Applications? (Score:2, Interesting)
Just a thought.
Re:3D Modelling Applications? (Score:4, Informative)
It was strange seeing a surprisingly high resolution 3D model of me on screen seconds after I'd stepped out of the thing.
Re:3D Modelling Applications? (Score:3, Informative)
And yes, it's pretty damn cool to see. They have lots of computer graphics/computer vision stuff going on here, some of it's pretty funky.
No, this does (depth) edge detection. (Score:2, Informative)
3D analysis requires a stereo pair of images, like this [goi-bal.spb.ru]. An alternative would be to use some kind of radar or sonar, measuring time differences of bounced signals, etc. See those and other methods [isdale.com] for 3D digitizing.
Re:3D Modelling Applications? (Score:3, Interesting)
Re:3D Modelling Applications? (Score:2)
Real-world models need to be shot from 360 degrees, and thus are usually recorded while static; the camera revolves around them.
we used such a system for our Boo Radleys video [couk.com]
The band members were scanned in, two of which you can see pilotting the plane in this shot [couk.com]
however the tiger in this one [couk.com] I rotated and scanned in by hand on a flatbed scanner and then used photoshop to build profiles and then used the extruder to make it 3d, took me a
No, this cannot estimate depth & sees only edg (Score:5, Interesting)
Furthermore, this technology only sees edge discontinuities where a foreground object sits in front of a background object. Thus it cannot tell the difference between a circular disk and a sphere in the foreground. Actually it is worse than that, because the rounded edge of the sphere will cause errors in the estimation of the relative depth of the sphere vs. the background.
Even with these limitations, the technology could be quite useful in robotics. Combining multiple edge images using optical flow and knowledge of the robot's motion would yield a more accurate 3-D depth map at least for the purposes of navigation.
As for extending the technology, a second camera would do wonders for pinning down the distances to each observed edge. The system would still need separate software magic for mapping the front surfaces of objects (e.g. discerning the difference between a 3-D sphere and a 2-D disk).
3D applications (Score:5, Interesting)
Add this to moving around a room while filming it. It should be possible to create an accurate 3D-representation even with today's technology.
If the colours of the light sources were properly matched, any discoloration could probably be eliminated as well.
Food for thought.
Re:3D applications (Score:2)
Isn't this how the coloured-glasses 3D filming that was briefly popular in the mid 80s worked?
Re:3D applications (Score:2, Informative)
Doubtful. All you need to make a red-blue 3D movie is two cameras a certain distance apart. Apply a red filter to one and a blue filter to the other, and voila. This multiple-flash technique uses a single camera, as would the parent's suggestion.
Re:3D applications (Score:4, Interesting)
Actually, you don't want to apply the filters to the camera, you want to apply them after the image has been captured, when you combine the two images onto one piece of film.
Re:3D applications (Score:2, Informative)
I've seen something similar to this being done before by sending out very short but wide-angle pulses from a laser. By capturing an image with a high speed camera, only a thin slice (in the z-axis) of your scene will be illuminated at any time. By adjust
Re:3D applications (Score:2)
How about having a camcorder with several differently coloured light sources? By analyzing the correspondingly differently coloured shadows one could create depth information in real time.
Frickin' brilliant.
Re:3D applications (Score:2)
Re:3D applications (Score:2)
Re:3D applications (Score:2)
I'm afraid that this would get very complicated very fast if you want to photograph scenes containing coloured objects. (Is that a green object illuminated with white light, or a white object illuminated by green light? Or, for that matter, a green object illuminated with green light?)
It might be easier to use multip
Re:3D applications (Score:3, Informative)
Yes, but the whole point of this system is to use one camera which is much cheaper and simpler to implement than two.
Additionally, if you RTA, it is mentioned that this system copes better with surfaces which would look uniform (white object on white background) than a stereo-based system, as different directions of light are more likely to expose object borders than light from a single direction.
Where is -MY- 3D camera? (Score:5, Interesting)
It bothers me a lot that stereo photography has been around for so long yet still isn't ubiquitous. Modern digicams don't do this. You said it's been around for ages; I hope most people know you mean more than decades. A quick Google search tells me 1839 at the latest. What is stopping it?
Putting 2 sensors on a digi cam (photo or video) is not a difficult trick. You store the images in a format that supports 2 channels (left/right) and you can view them on any monitor with a simple pair of USB controlled glasses that flicks back and forth blacking out each eye. Also there are already 3d monitors out there that work without glasses.
Print out one channel for a 2d image or use photoshop filters to create red/blue 3d prints. Or even send images to a printer and get back those wheels used in those orange stereoscope toys.
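The red/blue print step the parent mentions is simple enough to do outside Photoshop too. A minimal anaglyph sketch (assuming numpy and same-sized RGB arrays; real anaglyph pipelines usually weight and mix the channels more carefully to reduce ghosting):

```python
import numpy as np

def make_anaglyph(left, right):
    """Naive red/cyan anaglyph from a stereo pair (H x W x 3 RGB arrays):
    the red channel comes from the left-eye image, green and blue from
    the right-eye image, so red/cyan glasses route each view to one eye."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out
```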
If I had this ALL my pictures would be 3d. For that matter all movies should be 3D. IMAX has a workable solution but I think every movie should be shown this way. People would even buy their own personal polarized glasses that are more comfortable than the pairs handed out at the show.
I've been eyeing a digital-SLR for quite some time, for the cost of one of those I'd gladly turn my attention to a 3D capable camera with lower quality. And if the grandparent post is right something similar should be possible for SLR cameras without using 2 huge lenses. Although I'd submit that you can't always control the lighting.
Every now and then a red/blue 3D image comes up on APOD [nasa.gov] or elsewhere and I kick myself for not having a cheap pair red/blue glasses.
ViewMaster (Score:3, Informative)
FYI, they're called "View-Master [fisher-price.com]," and apparently they're no longer available in the vertical-wheel red/orange style I had as a kid.
Re:Where is -MY- 3D camera? (Score:2)
Glad to know I'm not the only one who feels that way..
Most people seem to feel that depth in images is just some kind of cheap gimmick, only really of interest to kids.. I beg to differ - it's a *major* part of the informati
Segmentation (Score:2)
Several segmentation algorithms exist. Usually, they look at the color/brightness of an area and use that to do the segmentation. Adding knowledge of spatial position to an image will help segmentation immensely. I'm not sure that 3 small flashes is enough. The examples provided are not exceptional; the same results could be obtained without that special camera. Nevertheless, the idea is good.
Re:Segmentation (Score:2)
Not really, since it's specifically an in-process technique. A normal single 2D image doesn't contain the directional information that you get by combining the multiple multi-flash images, so it can't tell you where the actual 3D shapes are.
Does this have machine-vision applications? (Score:2, Insightful)
Could this technology be modified to produce a good 3d mapper?
What's its claim to fame? Shadow comparison, right? Length-of-shadow = height-of-object, yes?
Re:Does this have machine-vision applications? (Score:2)
The old 'edge detection' bites again (Score:2, Insightful)
"Real-world discontinuities": what they mean is, you run an edge-detection algorithm on the distance signal.
This will not find edges in newspaper print.
No edge detection system is perfect, even this one, which uses spatial edges.
There is no fundamentally new technology here, but the multiple-flash camera is amazing and beats any faked edge detection hands down.
I do think they have awesome capabilities to allow computers to do what our eyes do, which is segment and label areas of our vision, a
Phase congruency (Score:5, Informative)
Image Partitioning (Score:3, Informative)
Could be very useful (Score:3, Insightful)
Having said that, I'm not sure whether it would be better than existing solutions for that sort of thing, for example structured light.
Re:Could be very useful (Score:5, Interesting)
But I don't think it will be useful for 3d reconstruction, since the algorithm doesn't have information about the depth of the shadows/borders.
robot vision (Score:4, Interesting)
Re:robot vision (Score:4, Interesting)
Re:robot vision (Score:2)
Re:robot vision (Score:2, Funny)
Re:robot vision (Score:5, Funny)
Re:robot vision (Score:2)
> you probably wouldn't notice problems until your robot farm population became unmanageable
Well, assuming my robots need to "look" at least once a second, I'd say a robot population of 60 would have problems.
Four flashes? (Score:3, Funny)
Re:Four flashes? (Score:5, Funny)
Re:Four flashes? (Score:3, Informative)
Pfft. Red eye? That's two flashes. With four flashes, you need to run the forked tail and horn remover, too.
Well, technically, red-eye is avoided with two flashes. A single flash reflects light off the eye before the pupil has a chance to shrink; red-eye reduction basically fires a "pre-flash" to prepare onlookers' pupils for the real picture.
Joking aside, this 4-flash thing does make me think that it's not usable on any targets that are moving at all.
Re:Four flashes? (Score:2)
Yes, my Sony P71 (or whatever, I am bad with numbers) does a series of flashes. It's really blinding, but there's usually no redeye.
I can't say "never" redeye, though.
I agree it probably wouldn't work with moving targets. The blur is going to be almost as bad as a long exposure taken for the duration from the first flash to the last flash. No matter how short a time that is, it's going to be at least as long as the second highest light exposure level on my camera (like I said, bad with numbers...).
Re:Four flashes? (Score:2)
Thanks. :) I thought about posting a serious explanation, but to use a baseball analogy it was way too good a pitch to let it by without swinging!
Re:Four flashes? (Score:2)
If you can give someone a good bright flash before the one with which the picture is taken, that will contract their pupils and you won't see "red-eye". Modern cameras do this already, with greater or lesser degrees of success {technically it's quite difficult; you need
manuals (Score:3, Insightful)
Biometrics (Score:3, Interesting)
Also, it would not be *AS* processor intensive, so you could take more photos from more angles.
Using autofocus and a short depth of focus, you isolate figures even in crowds. Isolate the target from multiple photos, so you have more than one angle for a biometric.
If we can track the target in motion, we can assume that FRONT is approximately the direction they are traveling. Use an IR flash so that people don't get all paranoid (not saying they don't have a reason).
Even with glasses and a beard change it would be tough to fool the system.
Re:Biometrics (Score:3, Interesting)
This came to mind when, after looking at these pictures, I read your post. Perhaps our brain needs some sort of caricature or simplified image of faces for us to recognize them, but this layer of vision i
Re:Biometrics (Score:2)
But this begs the question (no, it really does!) of whether or not it would be tough enough to fool the system. All you're talking about doing here is edge detection. You're going to have to do it awfully fast if you want to get enough outlines (it's not like this technique generates wireframes on its own) to get a good idea of their shape.
It might provide a small enhancement over current face recognition systems but the few
Re:Biometrics (Score:2)
B) To get the basic data, I could have someone walk down a hallway with these cameras (or just one high-res with a fish-eye lens).
I now have images of you from 360 degrees, and I know from what angle each of those photos was taken.
It's not just edge detection, it's a simplification of the process that allows you to process it from 3 dozen angles in the same amount of time. So I don't have to chose
Finnally! (Score:5, Funny)
Re:Finnally! (Score:2)
Re:Finnally! (Score:2)
Interesting idea to use 4 flashes; 3 would work (Score:2)
You can do the same thing with a 'normal' digital camera if you are taking digital photos while moving an
geek speculation is way off here. (Score:3, Informative)
The TOP row shows how the camera output is good enough to be used as a technical drawing- it requires very little modification or touch-up.
The BOTTOM row shows how Photoshop filters butcher the image; the result is completely useless. No amount of touch-up could help that image.
Furthermore... NO, THIS CAMERA CANNOT BE USED ON MONOCHROME IMAGES. It can't be applied to existing images of any kind; it isn't a post-filter. There isn't any edge detection involved.
The 4 flashes cause shadows to be cast in 4 different directions, and the camera creates a composite from the differences. If the subject DOESN'T cast a shadow, then the camera won't work.
I assume this camera cannot be used to photograph outdoor scenes, simply because the flashes will not render shadows at that great a distance.
This is a brilliant method though, and the results are excellent (look at how the details in the spine pop out).
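For the curious, the shadow-composite idea can be mocked up in a few lines. This is a toy sketch, not the authors' actual algorithm: the real method traverses each ratio image along the direction away from its flash and keeps only the shadow transitions, whereas this simplification just takes the union of the per-flash shadow masks.

```python
import numpy as np

def depth_edges(flash_images, thresh=0.5):
    """Toy sketch of the multi-flash idea: each flash casts shadows on the
    far side of depth discontinuities. The max-composite over all flashes
    is (nearly) shadow-free, so pixels whose ratio I_k / I_max drops well
    below 1 lie in some flash's shadow; the union of those shadow regions
    traces a band around the depth edges."""
    stack = np.stack(flash_images).astype(float)   # (K, H, W) grayscale
    i_max = stack.max(axis=0) + 1e-8               # shadow-free composite
    ratios = stack / i_max                         # shadowed pixels -> ratio << 1
    shadow = ratios < thresh                       # per-flash shadow masks
    return shadow.any(axis=0)                      # union ~ depth-edge band
```

Note this also explains the poster's point: with no cast shadows (distant outdoor scenes, or a shadowless vampire), every ratio stays near 1 and nothing is detected.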
Re:geek speculation is way off here. (Score:2)
So... You couldn't use this on a vampire. It's the vampires who have that no-shadow thing, right?
NPR Quake (Score:3, Interesting)
Dan East
Ok, Now I Get It (Score:2)
active illumination (Score:2)
Nonphotorealistic Quake Mod (Score:3, Interesting)
Had to mention this for those who didn't catch it in 2001. Some students in Wisconsin created a Quake II mod that converts the Open GL rendering engine output to non-photorealistic sketches. Looks like the A-ha video in realtime. I'd really like to see someone bring this to more modern 1st-person-shooters like Doom 3 or Quake 3.
NPR Quake [wisc.edu].
Re:Nonphotorealistic Quake Mod (Score:2)
Re:Nonphotorealistic Quake Mod (Score:3, Interesting)
Also, there's good news for you -- the page you linked connects to this one [wisc.edu], which is a rough replacement OpenGL driver to postprocess any application's OpenGL calls with any sort of filter
A-ha (Score:2)
Simple idea, well executed. Ah ha!
Re:A-ha (Score:2)
Interesting opinion. My problem with built in flashes is that they are too close to the lens, meaning there is little to no shadowing, flattening out everything.
You'll notice a lot of pro photographers have devices to move the flash further from the lens: either tall stalks with the flash at the end, handheld flash units on wires (to be held arm
Re:A-ha (Score:3)
Sure, and they usually also have some sort of diffuser or umbrella with their flash. Or they'll bounce their flash off the ceiling for the same effect. Or multiple flashes are set up so that each one fills in the shadows
combine with traditional edge detection (Score:2)
Win $10,000 (Score:3, Interesting)
One of them is Canesta [canesta.com], which makes photo sensors that can capture pictures that include depth maps.
To my surprise I see that they are running a contest where you can win $10,000.
But I don't have time to participate myself, because I am writing my master's thesis. So enjoy the contest [canesta.com].
Focus across the whole field of view (Score:3, Insightful)
If autofocus (or any other method) from different angles allows for this enhancement, this technique can be used to 'cut' the image into different focus layers.
Piece the layers together, and you get a photo that has depth of field and is much sharper at each level.
The layer information could be stored separately for later processing, or combined with only a little fudging to give a weighted blur to the non-primary layer(s). Keeping the layers separate and doing a comparison would also allow editing tricks such as cutting out objects at a specific depth or performing color enhancements on each level.
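The "piece the layers together" step could be sketched like this (a hypothetical helper, assuming numpy; a crude gradient magnitude stands in for a proper focus measure, and real focus stacking blends rather than hard-selecting):

```python
import numpy as np

def focus_stack(layers):
    """Sketch of layered-focus merging: for each pixel, keep the value from
    whichever exposure is locally sharpest, using gradient magnitude as a
    simple per-pixel focus proxy. Also returns the per-pixel layer index,
    which is exactly the 'depth layer' map the parent wants to keep around
    for later editing tricks."""
    stack = np.stack(layers).astype(float)           # (K, H, W) grayscale
    gy = np.abs(np.gradient(stack, axis=1))          # vertical detail
    gx = np.abs(np.gradient(stack, axis=2))          # horizontal detail
    sharpness = gx + gy                              # crude focus measure
    best = sharpness.argmax(axis=0)                  # sharpest layer per pixel
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best
```

Cutting out objects at a given depth then reduces to masking on `best == k`.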
Cartoons (Score:2)
Technical Illustration (Score:2)
However, I *do* believe that this statement:
Additionally, an endoscopic camera enhanced with the multi-flash technology promises to enhance internal anatomical visualization for researchers and medical doctors.
Won't work out. This flash technology relies on the
Re:Very interesting, but stupid (Score:5, Informative)
Re:Very interesting, but stupid (Score:4, Insightful)
Re:Very interesting, but stupid (Score:4, Insightful)
But look at the second image in the final set, it's clearly able to detect the edges of things. I'm not even sure what the filter in the last image is for.
And I'm not sure what you mean by "reproducing what can already be produced". There are other multiple-image processing engines that can do line drawings and even 3D from multiple sources, but the thing is, they all require multiple cameras and calculating the slight offset of objects between sources.
What's interesting about this new technique is that it uses the shadows from the flashes to determine edges and depth. Doing it entirely with lighting without multiple cameras is a really neat hack, imho.
Re:Very interesting, but stupid (Score:2)
Re:Very interesting, but stupid (Score:2, Interesting)
However, if you read carefully you see that the nifty aspect is that it gives depth information to images, even monochrome images. Just to start with, this has applications for internal medicine (i.e. laparoscopy). This is cool.
Re:Robot uses? (Score:2)
Re:Taaaake oooon meeee... (Score:2)
Thanks for getting that song stuck in my head.
No kidding! Of course, it's probably meaningless to most of the readers, who are clueless about the eighties. (As opposed to us, who were clueless in the eighties.)
This brings up an interesting point, though. I've often wondered if the number of unique words in a song's refrain had any correlation with its popularity ranking. I mean, look at songs like "One More Night" or "Sussudio"... No! More songs of the 80s running through my head!
Eric
Re:Robot vision... (Score:2)
The crew of the Galactica must have made it here, because ever since the show went off the air, we've been inundated with coffee shops named "Starbuck's".
Re:Robot vision... (Score:2)
Re:Place obligatory girl comments here. (Score:2)
But where is the resulting image, like they did for the picture of the bones?
BTW, I bet this would have saved the tracers countless hours on the 1978 rotoscoped version of Lord of the Rings. Maybe enough time for them to have figured out if the wizard was named Aruman or Saruman.