
Automatic 3D Reconstruction of Scenes

Neil Halelamien writes "New Scientist reports on a piece of software by MDRobotics called instant Scene modeler (iSM), which automatically generates 3D reconstructions of scenes, using a few hundred frames from a pair of ordinary video cameras. The software uses David Lowe's SIFT vision algorithm to quickly locate common features between sequential images, for use in the reconstruction; SIFT has also been useful for generating panoramas and object recognition. MDRobotics has a demo page showing the software being used for crime scene reconstruction, along with animated GIFs of input video and the resulting 3D model."
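The feature-matching step the summary attributes to SIFT can be sketched in a few lines. Everything below is a toy: the two-dimensional "descriptors" are made-up stand-ins for SIFT's real 128-dimensional gradient histograms, but the nearest-versus-second-nearest acceptance logic is Lowe's actual ratio test for deciding that two features in different frames are the same point.

```python
# Toy illustration of the matching step SIFT performs between frames:
# each feature gets a descriptor vector, and a feature in image A is
# matched to its nearest neighbour in image B only if that neighbour is
# clearly better than the second-nearest (Lowe's ratio test).
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(desc_a, desc_b, ratio=0.8):
    """Return (index_a, index_b) pairs that pass the ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Two small sets of fake descriptors: feature 0 in A has one clear match
# in B, while feature 1 in A sees two near-identical candidates and is
# rejected as ambiguous.
a = [(0.0, 1.0), (5.0, 5.0)]
b = [(9.0, 9.0), (0.1, 1.1), (5.1, 5.0), (5.0, 5.1)]
print(match_features(a, b))  # [(0, 1)]
```

Rejecting ambiguous matches is what keeps repetitive texture (leaves, carpet, brick) from producing false correspondences between frames.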
This discussion has been archived. No new comments can be posted.

  • by Anti Frozt ( 655515 ) <chris@buffett.gmail@com> on Sunday March 13, 2005 @11:03AM (#11926186)
    CSI: Crime Scene Reconstruction
  • by Lioner ( 19663 ) on Sunday March 13, 2005 @11:05AM (#11926196)
    Finally we can do what Harrison Ford can do... :-)
    • by Anonymous Coward
      I thought the same thing. But on reflection, Deckard only used the reflection of the flat, still image bouncing off of the mirror. It was a great scene, however. And in the immortal words of Bob Hope... Thanks for the memories.
  • wow, that's really neat.

    Kind of reminds me of the Mars Rovers' Autonomous Rover Navigation [nasa.gov] (QT video)...

  • by n0dalus ( 807994 ) on Sunday March 13, 2005 @11:10AM (#11926222) Journal
    Wow, I bet it's just like in CSI: they will be able to zoom in 10000% on the digital image, 'sharpen' it, and read all sorts of interesting things off the backs of objects after rotating them virtually.
    • I'm glad I'm not the only one who hates how they pretend that that's possible. There are a lot of pictures I've downloaded from the net that I'd love to get a better zoom on :(
  • Awesome! (Score:2, Interesting)

    by ylsul ( 94641 )
    I love taking panoramic shots with my camera, but the stinkiest part has always been stitching all the images back together. I haven't seen any package like this before... too bad it's not open source :)
    • Yes, I would love to see an open source implementation of this program. Does anyone know of anything like this, or similar software?
      • by YrWrstNtmr ( 564987 ) on Sunday March 13, 2005 @11:22AM (#11926273)
        Here [unimelb.edu.au], and here [fh-furtwangen.de]. Lin/Win/Mac versions.
        • I would recommend that you look at Hugin (http://hugin.sourceforge.net/) which is a frontend for Panorama tools. Also Autopano (http://autopano.kolor.com/) is a great tool for panorama creations. It does the stitching automatically, and Hugin can be used with it.

          Available for Linux/Win32 and OSX.
      • by imroy ( 755 ) <imroykun@gmail.com> on Sunday March 13, 2005 @11:32AM (#11926312) Homepage Journal
        Autopano [kolor.com] and Autopano-sift [tu-berlin.de]. I haven't had good experiences with the SIFT-based software. They always tend to pick the most inappropriate points, like trees/leaves (that move between shots) and the middle of objects (where there aren't many features). I almost always have to go through and remove the bad points, adding in my own reliable ones (corners, unique features, etc). I just don't use them anymore, because I actually spend less time if I do all the points myself manually. The GUI of Hugin [sf.net] usually saves me plenty of time already. It does a good job of picking the matching point when I click on one photo. That's all I need anymore.
        • Doing a bit more research, you will find [tu-berlin.de] that the University of British Columbia has applied for a patent on the SIFT algorithm in the United States, although it's happy to allow non-commercial applications of the algorithm, and happy to use other open source projects in its implementation.

          Although, as it's from a university and the patenter allows free-as-in-beer non-commercial use, this looks like a defensive patent to me; it still immediately puts the kibosh on any GNU GPL/LGPL project using it.
        • I've had much better experience with Autopano-SIFT. While it does pick points that aren't what I would pick manually, it does produce excellent panoramas with little manual cleanup to do afterwards. It has significantly reduced the manual labor necessary to produce quality panoramas.

          Now if only I had a wider angle lens and a pano head tripod...

    • Re:Awesome! (Score:4, Informative)

      by YrWrstNtmr ( 564987 ) on Sunday March 13, 2005 @11:30AM (#11926301)
      Olympus has had this functionality in their Camedia cameras for a few years now. The panorama stitching is actually pretty easy. You put it into pano mode, and the camera then uses the initial exposure/shutter setting for the entire series and puts little bars on the sides of the LCD to tell you where to do the overlap. The (otherwise crappy) Camedia Master software recognizes that it is a series, and stitches pretty quickly and seamlessly. That function only works with original Olympus SM cards, though.
      • Wow, I am amazed someone has used panoramic mode. My 3020z came with a 16 meg card, which was immediately replaced by two 128 meg cards. I read about that 'feature', but never bothered.

        A great camera otherwise, but the I hope the idiot that came up with that idea rots in marketing hell.
    • This private company just created a useful product (this is something that even you acknowledge) and justly wants to profit from the cost of risking development of such a tool. Then the first thing you ask is how somebody can clone it and steal the idea by rewriting the code as open source, since you cannot seriously expect them to just open source their code? This is capitalism at its worst.

      If this practice becomes more commonplace, all that will happen is that the tiny companies get screwed, while t
      • No, that is capitalism AT ITS HEART. Competition is capitalism. Capitalism in its purest form grants no guaranteed return. What, do you think governmental enforcement of a monopoly based on the premise of earliest discovery would be a better idea? If I start up a business I have no guarantees it will succeed, and that's the way it should be. Why should discoveries be an exception?
        • What, do you think governmental enforcement of a monopoly based on the premise of earliest discovery would be a better idea?

          And what about copyright and patent law? That is exactly what the capitalism-based American government tries to do in such matters. While I agree that it is absurd to forbid cloning software, it is even more ridiculous to pass off a clone of expensive, research-intensive software as a virtue of capitalism. It is a question of development versus application. If these people devel
          • Well, copyright I understand, and copyright applies to this as well. A written work is a large well defined entity. Patents, at least as far as software is concerned, make little sense as they usually cover fairly abstract building blocks necessary to define many larger concepts. Granted, this particular patent would seem to be very specific, and benign by nature but....

            Let's postulate for a second that software patents were never enacted as a recognized "invention", and software was still under the doma
    • Re:Awesome! (Score:2, Informative)

      by modecx ( 130548 )
      There is an open source equivalent.

      It's called Autopano, and IIRC it also uses the SIFT algorithm. Try Hugin (Windows, Linux, OS X); it's able to call autopano and automatically get tons of stitching points, and it does quite a good job without tweaking... especially if you took care in taking the pictures.
    • Re:Awesome! (Score:3, Informative)

      by FleaPlus ( 6935 )
      Did you happen to see the link to autopano-sift [tu-berlin.de]? That's GPL'd, I believe.
  • This could be applied to real estate and used to give "virtual tours" of homes on the market.

    However, I plan on figuring out how I can embed this into one of my Mindstorms...
  • Coral Cache (Score:4, Informative)

    by PxM ( 855264 ) on Sunday March 13, 2005 @11:14AM (#11926243)
    Since slashdot will probably burn out the web server hosting images: http://www.mdrobotics.ca.nyud.net:8090/ism/behind.htm [nyud.net]

  • by fcolari ( 699389 ) on Sunday March 13, 2005 @11:16AM (#11926250)
    Educationally, people could truly "walk around" in a virtual museum. This is light-years ahead of QuickTime VR(?), where one can simply rotate about one point and zoom in or out.

    It's only going to work on stationary scenes, as that sleeping fellow showed us. Basically, anything from the Real World you want to "import" into VR will be much easier to do.

    If anyone likes FPS, you could model a map based on real scenery.

    Most inventions and technologies came into being before people found a use for them. It just seems pretty darn cool if nothing else.
    • Virtual museums are overrated. When VRML became The Next Big thing about 10 years ago, I looked into creating virtual museums, and we're still not at the tech level that we can pull them off. 2D objects look bad when they're distorted into a 3d projection on such a small screen. The best way to view virtual paintings is just as a normal bitmap on a large enough screen. 3D objects like vases and small sculptures do work well in VRML since you can rotate them and view them from any angle. Large (with respect
      • I agree nothing beats the Real Thing. "Museums" was a poor choice of word. Any space a person cannot normally get to, whether because of security or fragility or locality, would be an ideal candidate to be "shared". The realtor idea was a good one. Of course, once the 3D model is generated and becomes something I view, it becomes an output system to me. My space is not interesting enough to generate a 3D model...
    • "Educationally, people could truly "walk around" in a virtual museum."

      True, though I don't think this is the right technology for that. Don't get me wrong, I think this is impressive and interesting, though it isn't the first or only 3D model from video motion software. However, for hi-fidelity virtualization of museums or artifacts I'm far more impressed with other [unile.it] approaches [nrc-cnrc.gc.ca].

      • by Dashing Leech ( 688077 ) on Sunday March 13, 2005 @01:56PM (#11927060)
        Actually, now I'm even less impressed. The MDRobotics iSM uses a stereo camera system. I had thought it was a 3D-from-video [unc.edu] method (particularly since it uses SIFT). I find those much more useful because I can use it with my home video cam. Making 3D models from a stereo-cam requires special equipment and has been done by everyone and their dog. I'm not so clear on why this is new or that interesting.
        • I agree with you that 3D from uncalibrated images (which is basically what you're looking for) would have been cooler, but science is not quite there yet. Lots of research is done on autocalibration, and lots of research is done on reconstruction based on calibrated images (I've just finished a master's thesis in 3D vision on that topic :) ), but as far as I know no one has been able to put two and two together with convincing results yet. And just because something is stereoscopy-based doesn't mean it's wort
          • "...but science is not quite there yet"

            Actually, it [unc.edu] is [ucla.edu]. Well, since you said "uncalibrated" I suppose you are mostly correct, though I don't mind calibrating my camera, that's relatively easy to do. But as far as 3D-from-motion (single camera) I have (and have read) the literature and examples from both of the referenced sources, and just 3 weeks ago we got a demo here of working 3D-from-motion system from one of our research partners. It's actually quite impressive, about on par with the stereo system

    • as that sleeping fellow showed us

      Since this is demonstrating crime-scene usefulness, I think that guy is, uh, supposed to be dead.

      I've worked some long shifts (24+ hours)... sometimes I've taken a cat-nap on the floor... but that position looks mighty uncomfortable.
  • Sounds Bogus (Score:4, Insightful)

    by menace3society ( 768451 ) on Sunday March 13, 2005 @11:16AM (#11926251)
    This reminds me of the bit in Enemy of the State where the government operatives take the lingerie store security footage, and then use their computer to "rotate the camera 90 degrees." And on top of that, they then see something in the bag that wouldn't have been at all detectable from the actual camera's angle.

    It's pretty silly to suppose that this thing will be able to generate a 3-D representation of a scene without getting highly-detailed footage of everything from every angle. Otherwise, it would just be a completely bogus modelling system to pull a fast one on people who don't know any better. "Ladies and gentlemen of the jury, although the original crime scene evidence photos don't show it, when you rotate the angle and look at the far side of the desk, the defendant's fingerprints are clearly visible"
    • It looks like everything in the scene is a real photograph, which is also why there are weird black blotches of no data.
    • Re:Sounds Bogus (Score:2, Insightful)

      by noidentity ( 188756 )
      This reminds me of the bit in Enemy of the State where the government operatives take the lingerie store security footage, and then use their computer to "rotate the camera 90 degrees." And on top of that, they then see something in the bag that wouldn't have been at all detectable from the actual camera's angle.

      The technology is basically giving a computer the same information you are able to get by looking at a scene and moving slightly. Unless it's something really subtle, if you can't pick up the infor
    • Re:Sounds Bogus (Score:5, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday March 13, 2005 @12:55PM (#11926717) Homepage Journal
      People have been working on this particular problem for a lot longer than you'd think, and there have been methods for creating 3D meshes from 2D video for quite some time now, shown at SIGGRAPH and so on. There are at least tens of books on this and related subjects. Anyway, you are exaggerating the problem. Obviously it will not be able to get information from anything it can't see, and without moving around and through a scene all you're going to have is a bunch of contour maps with empty back sides. This is still a huge improvement over having no 3D data at all. It will immediately be a gigantic boon to compositing animators, who will be able to get depth cues from 2D video.
      • Most of these methods are bogus. Researchers who present at SIGGRAPH have often worked with one dataset and publish results that have been finely tuned to work with it, and no other. Techniques from books quite simply don't work on real scenes. It's funny - academics talk as if computer vision works, but it mostly doesn't. However - the SIFT algorithm is pretty damn cool. I think it's the first feature matching algorithm I've come across that actually works (plus the one we have at work...but that's another
    • With your reasoning, computers couldn't work at all, because '70s SF movies had ridiculous things done by computers.

      Software like this, only in a more basic form, has been around for at least 10 years. There was a retail product aimed at VRML designers 5 years or so ago that did the same, but needed user input (the user had to mark corresponding edges in the different frames).

      Now this software can use the fact that the input frames are not isolated photos but video frames, so it can use normal motion search algorith
    • One could imagine a time in the future when the first thing done on a crime scene would be to deploy the room imager, a portable grid covering the ceiling (or as high up as you wanted imaging) from which would descend threads at the end of which would be little stereo cameras rotating in 360 degrees, recording as they descend, in a variety of spectrums, including xray.

      Following that, some detail work could be done to capture areas that might have been missed (for example, the interior of a lead lined safe

      • One could imagine a time in the future when a few frames of video (including IR), sonar, or radar from a moving object can be translated to a 3d scene and then matched against a 3d model of the world or a part of it, allowing calculation of the position and path of the moving object.

        One could imagine military, and various other, applications for such technology in a future where almost everything that moves is outfitted with video, sonar, or radar and broadcasts this data somewhere to be processed.

        The amo
    • Re:Sounds Bogus (Score:4, Insightful)

      by Solder Fumes ( 797270 ) on Sunday March 13, 2005 @04:03PM (#11927829)
      Actually, you're using an example which I usually think of as the least ridiculous from Hollywood. If you remember the scene correctly, you will notice that what they did was observe a small shift in the portion of the bag that was visible, and then had the computer project what kind of deformation would have caused that shift. And resolution enhancement is funny when they show it being done to a still image, but with video it is possible to analyze successive frames and use time-domain information to filter out pixelation and noise effects.
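The time-domain filtering mentioned in the comment above can be sketched minimally: for a static, aligned scene, averaging N noisy frames shrinks zero-mean noise by roughly √N while leaving the signal untouched. The tiny "frames" and pixel values below are invented purely for illustration.

```python
# Minimal sketch of multi-frame temporal averaging: if the same static,
# aligned scene appears in several noisy video frames, averaging the
# frames cancels zero-mean noise while the underlying signal stays put.
import random

random.seed(1)
truth = [[10, 200], [60, 120]]  # the "real" 2x2 grayscale scene

def noisy_frame():
    # Each capture adds independent Gaussian noise to every pixel.
    return [[p + random.gauss(0, 20) for p in row] for row in truth]

def average(frames):
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]

one = noisy_frame()
many = average([noisy_frame() for _ in range(200)])

# Worst-case pixel error of a single frame vs. the 200-frame average:
err = lambda img: max(abs(img[y][x] - truth[y][x]) for y in range(2) for x in range(2))
print(err(one), ">", err(many))
```

This only works for the static parts of the scene; anything that moves between frames has to be registered (aligned) first, which is where the feature matching comes back in.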
    • The NFL was playing around with something similar 2-3 years ago. They would take a freeze frame of a great catch or similar play from a couple different angles, then use the frames to construct a rough 3-D model of the instant. John Madden had some fun showing the replays. They would start from one camera angle, then "fly" around to the other camera angle, moving quickly enough that the roughness of the model wasn't really apparent. It was kind of cool and I'm sure they must have spent some dough developing
    • You can see the holes in their data gathering demonstration of the cubicle. If you don't take a picture of it, the surface just doesn't get mapped.
    • It's pretty silly to suppose that this thing will be able to generate a 3-D representation of a scene without getting highly-detailed footage of everything from every angle. Otherwise, it would just be a completely bogus modelling system to pull a fast one on people who don't know any better.

      I think the point is that it can create a 3D representation of the surfaces that it can see. And perhaps some intelligent guesswork to complete shapes, etc. But even if it didn't do that it could still be ve

  • i cant wait (Score:4, Funny)

    by Idimmu Xul ( 204345 ) on Sunday March 13, 2005 @11:20AM (#11926268) Homepage Journal
    until they start using this technology for porn
  • That would be cool if someone could add something like that to Blender



    http://www.blender3d.org/ [blender3d.org]
  • I'm pretty sure videoconferencing of the future will involve automatically creating 3d models and detailed textures of the participants and their surroundings.

    Even though increase of bandwidth will have made the old method of sending bitmap diffs smoother than today, one doesn't have to stretch the imagination very far to see the amazing advantages of going realtime 3d...
  • A 3D reconstruction of a 3D animation.

    Bleh
  • Damn patents. (Score:4, Informative)

    by PxM ( 855264 ) on Sunday March 13, 2005 @11:28AM (#11926291)
    From one of the links: The SIFT algorithm is restricted by patents in the United States and hence this software is not completely free to use. For details see the LICENSE file included in the distribution, before you start to use this software.

    Hopefully, they're liberal about the patent and will let noncommercial nonresearch applications use the algorithm. Otherwise, we would have to wait for the really interesting software to come out.

    A C# implementation with support for Mono is available to play with for anyone who is interested: http://user.cs.tu-berlin.de/~nowozin/libsift/ [tu-berlin.de]
    • Hopefully, they're liberal about the patent and will let noncommercial nonresearch applications use the algorithm.

      Supposedly it's free for "non-commercial" use. But that eliminates most OSS licenses in the US. It's really disgusting that universities try to patent their research today. Whatever happened to science and mathematics for the greater good? Whatever happened to the notion that tuition money went partly to patronize the furthering of the arts and sciences through research? Students might a
  • Impressive (Score:3, Informative)

    by Pan T. Hose ( 707794 ) on Sunday March 13, 2005 @11:29AM (#11926293) Homepage Journal
    I have been waiting for the results for quite some time and they surely look impressive. I might add that the underlying concept is not very hard to understand and one could even make a simple 3-D model of distant objects (like e.g. buildings in your city) using only two eyes, paper, pencil and some basic trigonometry [wikipedia.org].

    Look at this model:

    A---B
    |\ /|
    | C |
    |/ \|
    D---E

    Where D and E are your two eyes, two cameras, or two positions from which you look at the object C that appears to be eclipsing A and B respectively. The distance between any of those points and their relative 3-D positions can be calculated when you know some of the distances (e.g. DE and AD) with very high precision.

    Recommended Wikipædia reading for anyone interested: Parallax [wikipedia.org], Triangulation [wikipedia.org], Stationary point [wikipedia.org], Pythagorean theorem [wikipedia.org], Euclidean geometry [wikipedia.org], Astrometry [wikipedia.org], Binocular vision [wikipedia.org], Stereoscopy [wikipedia.org]. Have fun.
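The two-eyes geometry in the comment above reduces, in the standard rectified-stereo setup, to one formula: depth Z = f·B/d, where f is the focal length, B is the baseline between the two cameras, and d is the disparity (the horizontal shift of the feature between the two images). A minimal sketch; all the numbers are made up for illustration:

```python
# A worked version of the triangulation sketched above, in the standard
# rectified-stereo form: two cameras separated by a baseline B see the
# same point at slightly different horizontal image positions, and the
# shift (disparity) gives the depth.

def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth Z = f * B / d, with disparity d = x_left - x_right (pixels)."""
    d = x_left_px - x_right_px
    if d <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_m / d

# Example: a 700 px focal length, a 12 cm baseline, and a feature that
# shifts by 14 pixels between the two views:
z = depth_from_disparity(focal_px=700, baseline_m=0.12, x_left_px=320, x_right_px=306)
print(f"{z:.2f} m")  # 700 * 0.12 / 14 = 6.00 m
```

Note how depth precision falls off with distance: a one-pixel disparity error matters little for nearby points with large d, but swings the estimate wildly for far points where d is only a few pixels.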
  • Cool applications... (Score:3, Interesting)

    by catdevnull ( 531283 ) on Sunday March 13, 2005 @11:29AM (#11926295)
    I can see this sort of thing being useful for space exploration.
    • "The Planet appears to be round!"
    • I was cut short...(my wife made me get off the computer). Meow. Whip-crack.

      What I wanted to say was this is good tech to aid extraterrestrial exploration and extra vehicular work done remotely. Some of this is already in place with the Mars rovers, but I imagine this sort of application could help the folks who navigate those type of drones. Models created by surveillance cameras could offer a good virtual environment for planning movements--esp. when there is a significant delay for communications.
  • A long time ago I thought about this (the general idea, not the detailed algorithmics) as a nice way to build very realistic scenarios for racing games.

    Just think about a street circuit you could imagine nicely put into a game. I already did with our riverside avenues :-)
  • by Anonymous Coward on Sunday March 13, 2005 @11:36AM (#11926331)
    Though the technology would need some additional improvements, it might be interesting to apply it to tracking shots in old movies (like Casablanca); in addition to reconstructing the sets, one could also replay a scene from a slightly different angle.

    The other slight modification would be to combine the possible modification (getting a slightly different angle from an existing tracking shot) and build a stereo 3D image of the shot or film segment.
    • Though the technology would need some additional improvements, it might be interesting to apply it to tracking shots in old movies (like Casablanca) and in addition to reconstructing the sets one could also replay a scene from a slightly different angle.
      And significantly easing development of the Casablanca FPS game!
    • The MDR product is a stereo camera (a PGR Bumblebee), making the reconstruction significantly easier (but not necessarily easy). Also, all the intrinsic camera parameters are constant, whereas they're dynamic in most movie footage (changing focal length, etc). So, the reconstruction from movies problem remains pretty hard. That said, lots of people are working on it...
      • Also, the sets probably changed quite a bit between shots. When they move the camera, they don't necessarily keep the lighting the same, so you would get a whole different set of problems compounding each other. You are right about the focal length. Plus, a lot of times backgrounds are blurred (to put the focus on the actors), so it would be hard to reconstruct that.
  • I recall watching the special features for the original Matrix and seeing how a 3d subway environment for the final fight was created from the set by taking a panoramic set of pictures. Likely that process wasn't realtime and could have required a lot of hand tweaking, but that is probably true of this software as well (slashdotted servers means I can't RTFA).
  • fyi MD Robotics (Score:1, Interesting)

    by Anonymous Coward
    MD Robotics are the makers of the Canadarm and Canadarm2.

    For those that didn't know.
  • Home video (Score:4, Insightful)

    by Lord_Dweomer ( 648696 ) on Sunday March 13, 2005 @11:49AM (#11926390) Homepage
    You know, I've seen all these tools come out that will be a great boon to the home movie producer. This seems like the next step to letting them use more CG in their movies. As that improves, the limits on what home movie producers can create will be as big as their imagination.

  • by Anonymous Coward
    The best 3D reconstruction method known so far is described in: http://www.wisdom.weizmann.ac.il/~wexler/papers/iccv03.html [weizmann.ac.il]
  • Cool. Heh, maybe it's just me, but this reminded me a lot of the episode of TNG when Geordi got infected and became an invisible man. There was a scene where they reconstructed something on the holodeck, much like this.

    I suppose using zoom on the camera could cause trouble, but maybe specialized cameras could be used where the zoom is encoded within the video (wouldn't be hard with modern digital stuff -- I'm sure some do it already).

    Neat to see this automated, though certain aspects of this have been p
    • You're right, this even has the shadow areas (look behind the closet door).

      I recently visited Stonehenge and basically filled up my camera's memory card with high-res images from all around, hoping to reconstruct the space.
      Maybe now I will get cracking again.
  • by shapr ( 723522 ) on Sunday March 13, 2005 @11:58AM (#11926427) Homepage Journal
    This is not limited to static scenes as one comment says. It could be used to reconstruct moving objects just as well, with a bit more software.
    You could very accurately construct physical models of criminals from security tapes.
    You could also construct an accurate model of how they walk. Since every person has a unique walk, that would be more difficult to disguise than physical appearance alone.
    You could discover identifying details of the cars they drive, like a small dent in the fender.
    This would be perfect for eBay, you could send them a short film of the object you're selling and they would post a 3D model of the item.
    This heralds the end of both motion capture and the existing hours long '3D scanning' of clay models used in films like LoTR. Instead of requiring a mechanical stylus to touch every point of a model, you just film it.
    Once the software has the ability to turn multiple 2D viewpoints into a single 3D image, this will be the perfect replacement for VR gloves as well. You could have a camera on either shoulder and your hands would be your 3D mice. That sounds like a nicely intuitive interface.
    Moving companies could find this useful. They could film the objects, the moving truck, and in return get an optimal packing order. You could also film the stairway up to an apartment and the software could figure out how to get through any of the particularly tight spots, if it's possible.
    This would be good for the sort of augmented reality that washington.edu has researched. When the software can recognize the separate parts in a machine, it can display directions for disassembly on a heads-up display.
    Oh, I can think of lots more uses, but better to get hold of the code and try to implement some of the random ideas above.
  • by null etc. ( 524767 ) on Sunday March 13, 2005 @12:11PM (#11926488)
    I just hope they patent it before anyone else can use it. I'd hate to see them give up their ability to innovate.
    • I haven't read the article but I've certainly read at least half a dozen patents for exactly what is described in the blurb.

      Huh, looks like knowing what prior art exists isn't always trivial. Just a thought.

  • On slashdot about three months ago.
  • by gojrocknyc ( 861306 ) on Sunday March 13, 2005 @01:15PM (#11926836)
    Can't wait til i can stroll thru my old high school with a pair of cameras, take that info, load it into the computer, algorithmize it, and load that data into Splinter Cell / Halo / Unreal Tournament. Ah man, think of all the cool new maps that'd be popping up every day if this system ever gets widespread!
  • Is anyone else reminded of the ESPER system from the movie Blade Runner?
  • 3D scene reconstruction from images has been the subject of research for decades, with some impressive successes even in the 1980's.

    With newer hardware, much higher resolutions, and more training data, these systems were bound to get better, and they are going to get better still.
  • Damn. (Score:3, Informative)

    by St. Arbirix ( 218306 ) <matthew.townsendNO@SPAMgmail.com> on Sunday March 13, 2005 @02:09PM (#11927128) Homepage Journal
    That is the third time an article on Slashdot has described something I had hoped I could do my masters thesis on.

    First was the decentralized bittorrent network.
    Then was the method by which angle changes in camera shots can be deduced by comparing the images (the jigsaw puzzle solver). That was probably an infantile pursuit anyway.
    And now this.

    I will go ahead and spill the next idea along the same lines as the last two projects:

    What if you were to put a light collector that could detect the angle and intensity of light on top of a camera? Then, while your camera is filming your scene, you are also recording the manner in which ambient light is reaching it. Later, using your 3D scene reconstruction, you can throw in new objects such as creatures and whatnot, and use the data from your light collector to apply correct lighting to the new objects you introduce.

    I would imagine someone is about to release this technology and we will see an article about it on Slashdot in a couple weeks. Look forward to it.
    • Well it would already exist if it was that simple. Your reasoning is flawed, as you suppose the light reaching the camera (or in your idea, the light collector) is the same everywhere in your scene. This isn't a simple problem, and it cannot be solved with such simple solutions.
      I eagerly await a solution to this problem, not requiring a very complicated setup.
    • Re:Damn. (Score:3, Interesting)

      by imroy ( 755 )
      Paul Debevec [debevec.org] has probably already done something like that. He's done a lot of work with "image based rendering", including reconstructing scenes from photos and extracting "light probes" from photos of shiny spheres. He's got a lot of papers and demos on his home page there.
      Sorry bud. I know how you feel! I've had similar experiences myself.
  • With graphics cards (and soon Physics Processing Units) constantly improving, game developers are faced with a problem: creating environments with appropriate levels of detail is becoming increasingly expensive. Creating the models, texturing, lighting, etc takes time.

    A technology like this could be used to use a movie set approach to developing games, allowing miniscule details to be included in scenes without the prohibitive cost of a human modelling every item in a room.
  • From TFA:
    iSM processed all images and created photorealistic three dimensional surface models of the dressing room. Come for the virtual tour of the Natasha's dressing room:
    Who else is excited to take a tour of Natasha's dressing room?
  • This imagery to 3d is a primary component to artificial intelligence [geocities.com]

    Also if you could video tape a building and port it to 3d, you could make some quick FPS levels.
  • I thought of this 5 years ago in grad school. At the time I thought it would be cool for football refs to be able to reconstruct a play in 3D, then zoom in to see if the player really did have his knee down or not, etc.

    At least my good ideas aren't 10 years old anymore. :)

  • Autostitch impressive (Score:2, Interesting)

    by foxalopex ( 522681 )
    Wow, I actually tried out that autostitch program. It works extremely well. For carefully shot pictures, it will stitch more or less perfectly. For recklessly shot ones, less than perfect, but much better than what I can do with Canon PhotoStitch. Considering it was automatically stitching stuff better than what I was doing with Canon PhotoStitch with a lot of manual tweaking, it's impressive. I hope this guy's development work becomes a commercial product.
