3D pics made using visible light
Danny Rathjens writes "David Brady of the University of Illinois at Urbana-Champaign and colleagues combined two kinds of technology -- computed tomography (CT), which is used to scan the inside of the body, and interferometry, which makes it possible to see an image without focusing on it -- to make 3D pictures using visible light."
Re:Don't worry about what DARPA said... (Score:1)
What is 3D (Score:1)
Russ
Re:one step closer to getting my holodeck! (Score:2)
[closeup of you, 10 years from now, you're at your desk maintaining the www.microsoft.mil website. Suddenly the phone rings..]
"Hello? .. Hi! What's up? .. .. uhh. .. hmm .. but .. .. .. Honey, she was only a few beams of light to me. Honest! .. .. No, no.. There's no need for the magnet.. Consider her deleted. Today. I know. Love you, bye.. *click*" .. Damn you David Brady! Damn you University of Illinois at Urbana-Champaign! aahhhh!
[you run out of your office and into the street where you are hit by a bus]
Moral: Good technology also has a bad side.
Re:What is 3D (Score:2)
At least that's how I'm interpreting the complete lack of useful detail in the article...
Re:it gets better (Score:2)
Re:How I think it works (Score:1)
They were shining light through skin and measuring some parameter X that was correlated with sugar level... Could the kind of interferometry described here give better results? I.e., better correlation?
Another $5e6 idea... nah! nobody cares ;)
---
Re:not really (Score:1)
Oh well, replace $5e6 by $5e-6 in my previous post ;)
---
Can't be anything like X-ray CT (Score:1)
--
Employ me! Unix,Linux,crypto/security,Perl,C/C++,distance work. Edinburgh UK.
3D Projectors (Score:1)
Re:What is 3D (Score:1)
And there's another step... (Score:1)
Step 1 was photography.
Step 2 was holography on those little silver things.
Step 3 was the "matter laser" (so far working but still experimental; it works like a laser but with matter waves)
Step 4 was the 3-D camera.
Step 5 is 3-D television; getting it working without glasses and whatnot.
Step 6 is synthesizing matter with the matter laser.
The last step is doing 3-D TV with the matter laser, allowing for solid holograms.
So we still have a way to go, but nevertheless this is a significant advance.
Pretty cool (Score:1)
Well, doesn't this make you feel secure... (Score:1)
Brady's research was funded in part by the Department of Defense Advanced Research Projects Agency (DARPA), which he said would like to use it for military applications.
A camera that worked without having to focus would be "smarter," he said. "They have cameras spread throughout the world -- a lot more cameras than people," he said. These include cameras viewing from satellites.
Got that warm and fuzzy feeling, uh-huh.
---
What it would be like (Score:1)
Then, you'd run the happy little proprietary number-crunching software and it'd build the 3D models for you. Probably not in real time on a desktop machine.
One difficult thing, though, is that you need to know the relative locations of all the sensors at recording time to some decent level of precision (i.e. they need to figure out where they all are relative to each other, or you need to indicate their positions somehow, in 3D yet), or the data is pretty worthless. You could, I suppose, lay them out on a gridded mat or similar, but that's not convenient for filming in a lot of settings.
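Here's a rough sketch in Python of why the geometry matters: triangulate a point from two sensors' viewing directions, then repeat with one sensor's assumed position slightly off. The positions and the 5 cm error are invented for illustration; this isn't the actual system's math, just the basic idea.

import numpy as np

def triangulate(p1, d1, p2, d2):
    # Least-squares "intersection" of two rays p + t*d.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimising |(p1 + t1*d1) - (p2 + t2*d2)|^2.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

target = np.array([0.0, 0.0, 5.0])   # true point being imaged
s1 = np.array([-1.0, 0.0, 0.0])      # true sensor positions
s2 = np.array([1.0, 0.0, 0.0])
d1, d2 = target - s1, target - s2    # viewing directions each sensor measures

print("true geometry :", triangulate(s1, d1, s2, d2))
# Pretend we mis-recorded sensor 2's position by 5 cm: the point lands in the wrong place.
print("wrong geometry:", triangulate(s1, d1, s2 + np.array([0.05, 0.0, 0.0]), d2))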
By the way, if USB were still involved, while USB is kind of complex to hack, the groundwork for Linux generic USB support is pretty much laid now. It might not be as bad as you think.
Not like it matters; most of the magic here is in the user-land signal-processing software, not the device drivers. (don't think the algorithms aren't patented; fat chance seeing libre software doing this anytime in your lifetime)
---
Um... no. (Score:1)
Read my speculations on how it works:
Hrm; I just discovered these people's site on a post below, too... it looks like my conjectures were pretty dead on:
http://www.phs.uiuc.edu/4Is/ [uiuc.edu]
---
not really (Score:1)
They were shining light through skin and measuring some parameter X that was correlated with sugar level...
Could the kind of interferometry described here give better results? I.e., better correlation?
No, just more accurate measurements. Which might improve results, but not the correlation you're measuring. The actual degree of correlation doesn't change. There's an analogy in applied statistics: more careful measurements don't help if your samples aren't any good.
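A quick numpy toy to illustrate it (the 0.6 "true" correlation and the noise levels are made up): cleaner measurements of X recover more of the underlying correlation with sugar level, but they can never push it above what's actually there.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

sugar = rng.normal(size=n)                       # what you actually care about
true_x = 0.6 * sugar + 0.8 * rng.normal(size=n)  # parameter X, true correlation ~0.6

for noise in (2.0, 0.5, 0.0):                    # worse -> better instrument
    measured_x = true_x + noise * rng.normal(size=n)
    r = np.corrcoef(sugar, measured_x)[0, 1]
    print(f"measurement noise {noise}: observed correlation {r:.2f}")

# The observed correlation climbs toward ~0.6 as the noise shrinks, and stops there.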
---
it gets better (Score:1)
One shot, and they've got my whole house mapped out, right down to the ant crawling on the floor. When they come to bust me for hacking my grades on the school's system, I'll have nowhere to hide.
Screw petty computer criminals; think of the potential this has for suppressing those nasty political dissidents!
---
How I think it works (Score:2)
You'd have to apply algorithms similar to those used for CT to reconstruct a 3D model from the data from the sensors, and interferometry comes into play to compensate for the low spatial resolution of the individual sensors (as well as removing the need for them to be "focused" -- the more sensors, the sharper the detail).
Of course you'd still have problems with opacity, but with enough sensors scattered around that wouldn't be such an intractable problem. You just couldn't see the insides of stuff that you didn't have a window of some kind into and couldn't put a sensor inside of.
Stuff wouldn't need to be in full view of all of the sensors, either -- just, the fewer sensors that can see a feature, the fuzzier ("out of focus") it'd be.
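For the curious, here's a toy unfiltered backprojection in 2D along those lines; the real system is cone-beam and 3D and presumably uses fancier (filtered, interferometric) algorithms, and the phantom and angle count below are invented.

import numpy as np
from scipy.ndimage import rotate

def project(image, angles_deg):
    # Parallel-beam projections: rotate the image and sum along columns.
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sinogram, angles_deg, size):
    # Smear each projection back across the plane and rotate it into place.
    recon = np.zeros((size, size))
    for a, proj in zip(angles_deg, sinogram):
        smear = np.tile(proj, (size, 1))   # constant along the ray direction
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

size = 64
phantom = np.zeros((size, size))
phantom[20:30, 35:45] = 1.0                # a small bright block as the "object"
angles = np.linspace(0.0, 180.0, 60, endpoint=False)

recon = backproject(project(phantom, angles), angles, size)
print("reconstruction peak near the block:", recon[20:30, 35:45].mean())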
---
Re:3-D Quickcam (Score:1)
*obvious flamebait*
Re:Pretty cool : 3D scanning using a Pencil shadow (Score:1)
http://www.vision.caltech.edu/bouguetj/ICCV98/.
Oooh, and there's code as well. MSoft Boo!!!
Re:No No No... (Score:1)
This is because the pinhole is very small, while the camera lens is relatively wide. It gives great depth of field.
And to all those saying that IR cameras can see through walls, you must have awfully thin walls!
Re:The Logical Next Step (Score:1)
3D Widgets (Score:1)
Re:Don't worry about what DARPA said... (Score:1)
Hate to freak you out even more, but they can already do that using infrared cameras. Your local police force probably has a few.
Re:Hmm, would this violate search and seizure laws (Score:1)
Re:Don't worry about what DARPA said... (Score:1)
the web? I want to make some huge solar furnaces this summer.
The Logical Next Step (Score:1)
http://www.phs.uiuc.edu/4Is/ (Score:3)
The group has a site at http://www.phs.uiuc.edu/4Is/ [uiuc.edu]
It includes a pretty spiffy mpeg of one of their scans [uiuc.edu]. Cool.
How to kill a camera (Score:1)
Variation. Switch the audio and video cables.
Must include Pentium processor with Floating Point Division bug.
Visible light?!?!? (Score:1)
I know what they mean, but it could have been stated _much_ better.
Re:And there's another step... (Score:1)
Dreamweaver
Re:What is 3D (Score:1)
Hard-core VR-heads have been doing that for years now. Saner folks can use shutter glasses, as mentioned - but some people like to go whole hog.
Re:What it would be like (Score:1)
3-D Quickcam (Score:1)
Tell a man that there are 400 Billion stars and he'll believe you
Re:it gets better (Score:1)
But think about the possible good it could do. A SWAT team can get an up-to-the-second 3D image of the inside of the drug lab they're about to raid, including the guy in the closet with a gun and the booby traps on the door to the third-floor office (for example).
Hmm, would this violate search and seizure laws? (Score:1)
Hmm. Ah, everyone needs a big brother!!!
Re:Hmm, would this violate search and seizure laws (Score:1)
Ahh, but they are supposed to obtain a warrant to do that as well; that is why the wiretap laws become ever more dilute!
3D Imaging (Score:1)
The news article motivating this thread arises from this article in yesterday's issue of Science: "Visible Cone-Beam Tomography With a Lensless Interferometric Camera," by D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson Jr., and R. B. Brady, Science Jun 25 1999: 2164-2166. There is a better news article describing the work in Science itself: "3D Camera Has No Lens, Great Depth of Field," by Daniel Radov, Science Jun 25 1999: 2066-2067. These are available through Science's web site [science.com], but a subscription is required. Science offers a one-day subscription to the web site.
The paper uses a combination of interferometric imaging algorithms (which image with infinite depth of field) and computer tomography algorithms (which combine infinite depth of field images to produce 3D models) to produce a 3D image of a plastic toy. Opacity is not a problem due to the linearity of the imaging process.
As several posters have noted at this site, pinhole cameras also have infinite depth of field. We wrote a paper about using pinhole cameras for tomography [uiuc.edu] in Optics Letters last year. Unfortunately, the depth of field of a pinhole camera comes at the expense of resolution. This is not true of interferometric cameras.
"Interferometric" refers to measurements of cross-correlation functions to isolate intensity contributions from different points in the object space. The algorithms used are very similar to those used in radio astronomy.
"Tomography" means slice (tomo) plotting. "Computer" is the C in CAT scan or CT. Current usage applies tomography to most 3D imaging schemes. "Coherence tomography" is a point by point scanning scheme which, ironically, is not tomographic at all. Tomography allows parallel data acquisition, which ultimately leads to real-time 3D video and holodecks. Real-time 3D was not demonstrated in the Science article, however, because it requires a dense sensor array. See http://www.phs.uiuc.edu/Beowulf [uiuc.edu] for progress on this front.
Concern about DARPA and big brother issues is unnecessary. A sensor array has no better chance of seeing inside opaque objects than a single camera. Anyway, why should big brother waste a lot of effort to get information people will volunteer in exchange for supermarket discount cards?