
Searching with Images instead of Words

johnsee writes "A computer vision researcher by the name of Hartmut Neven is developing ingenious new technology that allows a database to be searched by submitting an image, for example from a mobile phone camera. Imagine taking a photo of a street corner to find out where you are, or a photo of a city building to see its history."
  • by fembots ( 753724 ) on Thursday January 13, 2005 @03:30PM (#11352225) Homepage
    Tell me, which is easier? Upload this image [iclod.com] and try to find out where you are via this Visual Google, or enter the street name (street sign in the photo says "Queen Street") in Text Google?

    The article also mentions this thing should start small, like a movie guide, so is it easier to upload a 2K "I, Robot" billboard photo, or just enter "I, Robot" into Google on your cell phone?

    As long as human input is still required (i.e. you need to submit something), I don't think this is going to be popular. However, if you have an Oakley that automatically takes photos of what you see and feeds you the location details, that'll be something.
    • Actually, what I look forward to is searching with a picture of Waldo. Maybe I can finally find him.
    • It does seem to have some functionality though. Let's say, for example, this holiday I received a Thing(tm) as a present. I could take a picture or two of the Thing, and it might be easier to figure out what it is.

      Of course, for some reason I think it would be difficult to make Visual Google function that well... the only way I would get results for my Thing would be if someone already knew what it was, and defined it for the search engine.
    • Pretty nifty for the functionally illiterate ... except that if they can't read the signs, they won't be able to read the answer either.

      Of course, it could just return the picture with a big red arrow saying "you are here" (or should it be "U R HERE"?)

      Seems to me that just offering a mapping function via cell locators would be more popular.

    • The real killer app will be taking out hotornot's MeetMe funding (you know, if you want to actually talk to someone you click on, you have to pay money); you just google for their picture, and find them that way :)
    • Imgseek. (Score:5, Informative)

      by Adhemar ( 679794 ) on Thursday January 13, 2005 @04:14PM (#11352552)

      Imagine you're a photographer. Professional or hobbyist, I don't care. You have taken thousands of pictures; they're all on your hard drive.

      Imagine you're lazy. (Maybe you don't have to imagine that.) You don't want to describe your photos, you don't want to label them. The only metadata associated with your photos is date and time.

      Imagine you're looking for a particular photo. You know where you've taken it, you know what is on it, you can remember the subject, the color shades, etc. You just can't remember exactly when you took that picture. How do you search for it?

      Well, you quickly make a drawing in which you try to (sort of) replicate colors and shapes. And you let your computer search for "similar" graphics.

      Such software has existed for quite some time. There's a beta free software project (GNU-licensed) called imgseek [sourceforge.net]. Current version: 0.8.4. I haven't tried it, so I don't know how good it is. But this screenshot [sourceforge.net] looks impressive.
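
      For the curious, here is a bare-bones sketch (in Python, using Pillow and NumPy) of how this kind of "query by drawing" search can work. It's only a toy: every photo, and the query sketch itself, gets reduced to a tiny normalized luminance grid, and candidates are ranked by distance to the query. imgseek reportedly keeps Haar wavelet signatures instead, which survive rough, blobby drawings far better than raw pixel distances do; the file names below are just placeholders.

        # Toy "query by drawing" search: reduce every image (and the sketch)
        # to a small normalized grayscale grid and rank by Euclidean distance.
        # A crude stand-in for the wavelet signatures a real tool would use.
        from pathlib import Path

        import numpy as np
        from PIL import Image

        GRID = 16  # signature resolution: 16x16 luminance values

        def signature(path: Path) -> np.ndarray:
            """Downscale to GRID x GRID grayscale, normalize brightness and contrast."""
            img = Image.open(path).convert("L").resize((GRID, GRID))
            sig = np.asarray(img, dtype=np.float32)
            return (sig - sig.mean()) / (sig.std() + 1e-6)

        def rank_by_similarity(sketch: Path, photo_dir: Path, top_k: int = 10):
            """Return the top_k photos whose signatures are closest to the sketch."""
            query = signature(sketch)
            scored = []
            for photo in photo_dir.glob("*.jpg"):
                dist = float(np.linalg.norm(signature(photo) - query))
                scored.append((dist, photo.name))
            return sorted(scored)[:top_k]

        if __name__ == "__main__":
            # Placeholder paths: your drawing and your photo folder.
            for dist, name in rank_by_similarity(Path("sketch.png"), Path("photos")):
                print(f"{dist:8.2f}  {name}")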

      • Very cool, but completely different. You take your picture of the street corner that you want identified from a slightly different angle, and all the lines will be wrong. 3d messes with things like that; imgseek gets to work only in 2d. And it even has trouble with that - it only gives you a rough approximation.

        All imgseek does is compare the wavelets - not even remotely applicable to identifying a 3d scene with a non-predetermined 2d image.

        Still, a neat program. :)
      • There are lots of choices, really. I wrote a perl script [kudla.org] (using Image::Magick) about 5 years ago that did the same thing as this, and the awesome GQView [sf.net] image manager has had visual image matching for nearly as long. On the Windows side, I'm pretty sure ThumbsPlus has a feature like that, and I'm surprised they didn't build it into XP since they were supposed to be all digital-photo-friendly.
    • by Rei ( 128717 ) on Thursday January 13, 2005 @04:25PM (#11352641) Homepage
      Also, there's the whole "vaporware" issue. The scale of this programming task is staggering; it's not only image recognition, but image *searching*. Just look at how poorly OCR does with handwriting (and sometimes even pre-printed text). Generalized image recognition is orders of magnitude harder than recognizing a small set of print characters lined up in nice rows and clustered into words, and image searching is beyond that.

      He can claim he's developing whatever he wants, but I'll believe it when I see it. It reminds me too much of how many AI researchers in the 60s were convinced that by the 90s computers would regularly converse with humans and be able to reason like them.
      • I agree. I can't imagine how this can be pulled off in the near term. How will the image processor find that image in the database? How will data be entered into the database to begin with? How will the software understand what you're referring to in a particular picture, when it happens to have a car, a building, a person, a billboard, and other stuff in the same shot? Either I'm too stupid or I'm underestimating the technology that can make this available soon.

        • That reminds me of a story I heard once about some military system (using a neural net) that looked at satellite or aerial photos and selected the ones that had tanks. They fed it some photos with and without tanks. It scored almost perfectly. They decided to run another test and went out and took more pictures. This time it failed miserably. They eventually found out in the original set of pictures with tanks, it had been either sunny or cloudy (can't remember which). In the pictures without tanks it
      • Also, there's the whole "vaporware" issue. The scale of this programming task is staggering; it's not only image recognition, but image *searching*. Just look at how poorly OCR does with handwriting (and sometimes even pre-printed text). Generalized image recognition is orders of magnitude harder than recognizing a small set of print characters lined up in nice rows and clustered into words, and image searching is beyond that.

        Yeah, but this doesn't have to be any more accurate than, say... Google. And the
        • Forgive me for replying to myself, but I hadn't realized that the subject of this article was talking about searching for TEXT with images. I must admit that's a harder problem.

          My method really only encompasses searching for images with images. You could add text by searching through ALT tags, and processing the text on a page which contains a given image.

          A lot of pages with a picture of the Eiffel Tower would have the word "Paris" on them, for example.
          • But that's not at all what this person is claiming that his software will do. He's claiming that *you*, the user, can take a picture of anything, and it will find information about whatever you take the picture of. That's an almost unimaginably hard task to even come close to. Unless of course, by taking a picture of Barney, you want information on eggplant, E. coli, and Neptune. ;)

            Distinguishing one street corner from the next, regardless of angle, requires full understanding of the 3d geometry underlying t
      • I disagree. It's 'pretty easy' to find flat surfaces in a picture, and almost all buildings are made of those. Then 'all' you have to do is search a database for a matching set of planes. Not absurdly difficult, and prolly much easier than a general handwriting reader or face recognition.

        J.
          • Bullshit. There's nothing "easy" about extracting 3D plane data from a photograph. In fact, I can't claim to know of any techniques that allow you to do that without either manual interaction or multiple photographs (using stuff like stereo matching, etc.), and I work in the field. Even if you had a laser scan from a single perspective, it could be rather challenging to search a large database of laser scans and find the ones that match the original scanned object, and that's when you have depth data to play w
          • Please note the inverted commas: 'easy'. I was speaking relatively.

            I meant in comparison to things like faces, which the parent poster was commenting (and I agree) are very hard.

            Here's how you do it: Look for slightly curved lines. Examine curvature (due to lens) to infer distance. Convert to 3d model of lines. Look for lines which are parallel. Infer planes.

            [I'm a physicist and mathematician by the way. Your terminology may vary.]

            Justin.
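
            For what it's worth, the first step of that recipe (finding candidate straight lines) is the only routine part today; here is a minimal sketch of just that step in Python with OpenCV. Everything after it -- inferring distance from lens curvature, grouping lines into planes, matching plane sets against a database -- is exactly the part being called hard above, and is not attempted. The input file name is a placeholder.

              # Step one only: find candidate straight line segments in a photo.
              # Turning these into depth, planes, and database matches is the
              # hard part and is not attempted here.
              import cv2
              import numpy as np

              def find_lines(image_path: str):
                  """Return detected line segments as (x1, y1, x2, y2) tuples."""
                  gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
                  if gray is None:
                      raise FileNotFoundError(image_path)
                  edges = cv2.Canny(gray, threshold1=50, threshold2=150)
                  segments = cv2.HoughLinesP(
                      edges,
                      rho=1,              # distance resolution in pixels
                      theta=np.pi / 180,  # angular resolution in radians
                      threshold=80,       # votes needed to accept a line
                      minLineLength=40,
                      maxLineGap=5,
                  )
                  return [] if segments is None else [tuple(seg[0]) for seg in segments]

              if __name__ == "__main__":
                  for x1, y1, x2, y2 in find_lines("street_corner.jpg"):
                      print(f"line from ({x1},{y1}) to ({x2},{y2})")
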
    • Actually, I think I read once (might have even been on /.) about another potential application for a similar technology, which seemed much more useful than this. The idea involved using images to search, say, a parts database. If you were holding some unidentified doohickey in your hand, and you needed to know what it was so you could find a replacement, you could sketch a rough outline of the object, and the sketch would be used to search through the design information in the database (say, CAD drawings an
      • Interesting idea, but what's missing is context. Where did this doohickey come from? Is it an alternator arm from a Ford, or a tape transport arm from a VCR? Same basic shape, vastly different scale.
        Alternator arm from a Ford or a Chevy? 1/4" difference in length would negate its use on one or the other.

        If I know the context (VCR model 110FC, or 2001 Ford Focus), then it would be much easier to just go directly to an illustrated parts breakdown or CAD repository for that model. If I don't know the context,

    • If all you have is a cameraphone, with its crappy alphanumeric "keypad", it's easier to point and click the phone than to enter the text. Especially since text entry will often require reentry after typos, while point/click pix will probably work every time and be minimally frustrating to reshoot if needed. I commonly snap pix with my Treo 600 rather than "type" reminders, even though its QWERTY keypad is superior to most phones. I'll be even happier when I can just point a finger, snap another finger, and
    • Well, maybe not more efficient for you - but it could be a HELL of a lot more efficient for visual AI.

      The human mind is already exactly this - a visual Google. We have a DB of contextual knowledge that is accessed and triggered through a few sensory inputs - smell, touch, sight, etc.

      Here we could have a robot that can 'see' and get contextual information based on what it is looking at.

      Say you have it walk into a room filled with a bunch of objects; the AI could scan the room, then de
    • Tell me, which is easier? Upload this image and try to find out where you are via this Visual Google, or enter the street name (street sign in the photo says "Queen Street") in Text Google?

      My office is on Queen Street. What Google query will tell you what city I am in?
  • by punkass ( 70637 ) on Thursday January 13, 2005 @03:30PM (#11352229)
    ...imagine being the guy who has to photograph EVERY STREET CORNER IN THE WORLD.
  • by 2advanced.net ( 849238 ) on Thursday January 13, 2005 @03:31PM (#11352237) Homepage
    So when do you combine this with Fleck's nude recognition [psu.edu] algorithms to provide a service that can identify a person by partial nude picture?

    The possibilities are endless!
  • by xmas2003 ( 739875 ) * on Thursday January 13, 2005 @03:31PM (#11352243) Homepage
    While this looks pretty cool, I'm confused by the examples provided in the writeup - "Imagine taking a photo of a street corner to find out where you are, or the photo of a city building to see its history" since GPS technology would probably be a better enabler for those specific applications.
    • Or why not just look at the street signs to find out where you are? If the street corner is in a database it is probably in an area that is developed enough to have street signs.
    • by nacturation ( 646836 ) <nacturation@gmai l . c om> on Thursday January 13, 2005 @03:39PM (#11352377) Journal
      Imagine you're going through photos of your latest vacation and you find one of a street corner which you snapped while on a drinking binge. Since, in your drunken stupor, you don't remember where it was, you can just submit it to find out the history of the building and perhaps discover other famous people who have similarly vomited in that vicinity.
    • or the photo of a city building to see its history"

      Substitute "person" for "building". (Is that a cameraphone, officer?)

      I forgot to add to my post above that GPS encoding as part of the JPEG EXIF header has been a standard for some time, and a handful of high-end DSLRs have this capability today - this will become more prevalent. Heck, as part of E911, your cell phone camera already has GPS info; I don't know firsthand whether those add the tags to the EXIF header, but it would be trivial to do.

      alek

    • I thought of that too. Why not use GPS instead? It's a problem of machine learning and classification. Given a picture of a street corner, what features make that street corner, in any light and weather conditions, different from hundreds of thousands of other corners? Also, what about the angle at which the image is taken?

      For the database to work, it will have to understand what 3D objects are (at least in the specific domain) and have an idea of what features of the object are important (like signs for exam

    • GPS would be useful in some situations (if you want to know about a general area), but for the example of taking a "photo of a city building to see its history", GPS itself would not be sufficient.

      GPS can provide a location, but it can't pinpoint what you are looking at. This is the case even with compass data indicating which direction you are pointing your device--what if there are two things in your line of sight from that perspective? (Do you want information about the building, or do you want inform

    • ... you could always write a Perl program to let people think they are controlling the city building's lights, when in fact they're just controlling pictures in a database. ;-)
      • Ummmmm ... now I wonder who the heck could do something like that ... and if they did (and bazillions of people were turning those lights on and off), do you think anyone walking by the building (including reporters) would wonder why the lights never change?!? ;-)
  • Man on man (Score:3, Insightful)

    by sulli ( 195030 ) * on Thursday January 13, 2005 @03:31PM (#11352244) Journal
    the pr0n industry is going to love this.
    • by 2advanced.net ( 849238 ) on Thursday January 13, 2005 @03:33PM (#11352270) Homepage
      "Man on man"? Man oh man, Freud would love this!
    • Stalkers... (Score:3, Insightful)

      by Chordonblue ( 585047 )
      ...are gonna love this too. Take a picture of the girl you like and do a search. This has some scary connotations I'm afraid.

      • ...are gonna love this too. Take a picture of the girl you like and do a search. This has some scary connotations I'm afraid.

        Straw-man.

        Stalkers already use Google. It's a lot easier to stalk someone with text than with pictures. What are the chances your image search would actually turn up anything for your average Jane Q Public?
        • Re:Stalkers... (Score:3, Insightful)

          by kaustik ( 574490 )
          Not in the near future, of course. I can't imagine that taking a picture of almost anything would produce valid results in the near future. However, this type of photographic facial recognition [cnn.com] is already being researched and developed for things like bank robberies and terrorism. I can picture this taking off to the point where it applies to the general public...
        • Where is the straw-man argument here?
        • Are you really part of the Slashdot community and DON'T believe this will eventually happen?

          Remember: at some point, when exabyte storage is commonplace, this will be all too easy. And I don't think it will be the gov't who will do it. It will be a result of the same kind of marketing genius that makes you use discount cards at the supermarket to get a deal. "Just give us a picture of your face - you'll get a deal..." You watch.

          One thing builds upon another - don't believe this sort of thing doesn't have org

    • Being a geek, locked away at your desk staring at a computer, what exactly are you going to be taking pictures of to search for?! ... wait. Don't answer that.
    • I assume the title of this was supposed to read "Man oh man" -- unless you were suggesting a new type of porn. In which case, I'm pretty sure they're way ahead of you.
    • The porn industry is, as usual, one step ahead. MiltonSoft's Thumbnail Gallery Finder [miltonsoft.com] lets you search a large database of porn galleries for copies of an image you have. It recognizes images even when they are cropped or resized, and it sometimes recognizes that two photos from the same set are "similar" even though they are not the same photo. It's a great solution for the "incomplete photoset" problem. It even comes with a Firefox extension [squarefree.com] (which I wrote) that lets you right-click an image to find mo
  • Imagine taking a photo of a street corner to find out where you are, or the photo of a city building to see its history
    or a photo of your wife to see...
    oh never mind.
  • by Tablizer ( 95088 ) on Thursday January 13, 2005 @03:32PM (#11352252) Journal
    Enter search criteria: (.)(.)
  • by The_Rippa ( 181699 ) * on Thursday January 13, 2005 @03:32PM (#11352261)
    I'm a beta tester for this product and have gotten some scary results. For instance, I was on vacation in Yellowstone and took a photo of Old Faithful with my camera phone. I submitted it and it gave me back search results for tubgirl!
  • It's going to be difficult for Slashdotters to find p0rn with this search engine. Not only must you have sex, but you've got to take a picture of the act too...
  • How much harder is it to just use a regular text search for the restaurant, movie, building, etc. that you want info on? It's like voice dialing on a cell phone: a good idea, but it's about ten times faster and more effective to either dial or scroll to the name you want to call manually.
  • Isn't it more efficient to use the GPS in it to tell me where I am than to submit an image?

    Seriously, it's a good thing, but the uses are somewhat limited. What if you don't have a digital image or a photograph to be scanned? How would you translate the image in your mind's eye into something searchable by the PC? (yah, mindlink... I know...)
    • I believe a larger percentage of phones have cameras than GPS. But yes, for the examples given GPS would be more useful. For some other uses though (like menu translation) GPS wouldn't help at all.
  • by Nom du Keyboard ( 633989 ) on Thursday January 13, 2005 @03:34PM (#11352292)
    Everyone is going to try searching on a pair of boobs to see what they get. I can see it now:

    1: Take picture of current date's frontside architecture.
    2: Submit to search.
    3: Reply: You can do better than that. Try her older sister.

  • I'm going to rush out today and buy a Nokia phone so that I can have this functionality the instant it is available in 2038.
    I mean, really. Isn't this one of the main roadblocks to having a robot that can operate independently -- object recognition? I mean, if spam bots (the world's most advanced robots) get tripped up trying to read the word 'cat' behind some wavy lines while they're signing up for a Hotmail account, do you really think I'll be able to photograph a car in 5 years and get info about it?
    • Image recognition is immensely more difficult than people seem to think. And yet every few weeks someone is claiming to be just around the corner from a system that can easily identify the contents of an image.

      When your brain 'recognises' what it is looking at, it is doing a lot more than just comparing two images (as in the street-corner example from the article). Your brain simply doesn't operate in terms of bitmaps.

      The fact that he is basing his hyper-vaporous product on facial-recognition software should

  • OPTION 1:
    Take picture of street corner with camera phone. Connect cameraphone to Internet connection. Upload to wherami.cooldatabases.com. Wait 30 seconds for processing. Get location.

    OPTION 2:
    Pull out $400 GPS with map software.

    OPTION 3:
    Read street signs. Check index of road atlas.

    Yeah. Option 1 sounds awesome...
  • by Momoru ( 837801 ) on Thursday January 13, 2005 @03:36PM (#11352322) Homepage Journal
    Sounds like we may have a winner for Wired's 2005-2010 Vaporware awards.
  • Visual Google? Try visual semantic web. It identifies something and figures out where to go to find answers to it.

    Also, I think the building and street corner thing would work a lot better with GPS than a camera.

    The most interesting thing I saw in this article is that he plans to roll out a first version in about a year. Besides that, it's interesting research, but stuff we've already heard about.

    I would definitely like to experiment with this sort of system.

  • I've heard of more than one service in development for returning music titles against a song clip you play or hum into a phone or microphone.

    Even though satellite/digital radio will reduce the market for this kind of thing, because each displays the artist name and track title, there are still plenty of opportunities to get a song stuck in your head that you don't recognize. A surprising number of people find out about music when it's used as the background tune in TV commercials, for example.

  • by sometwo ( 53041 )
    How could taking a picture of a street corner possibly tell you where you are? Don't most look alike? There are other, better technologies for telling you where you are, such as GPS or even just looking at the nearest street sign and typing the name of the corner into the map application on your phone.
  • iDating (Score:5, Insightful)

    by El_Smack ( 267329 ) on Thursday January 13, 2005 @03:38PM (#11352353)
    Or taking a picture of someone and finding out their history.
    click
    "Whoa Dude!, she's been on 4 amature Pr0n sites!"
  • On a run the other day I found a waterfowl in the lake that I'd never seen before. It took some doing to figure out what it was. If I could have snapped its picture on my cell camera and gotten an identification (a hooded merganser, it turned out, after some digging), that would have been cool.

    I can't imagine how well this would work, since orientation fools things pretty easily. But I imagine that if it were available, I'd find a lot of unidentified objects to look up. (A cooking magazine I read has a
  • by Nom du Keyboard ( 633989 ) on Thursday January 13, 2005 @03:39PM (#11352368)
    Imagine taking a photo of a street corner to find out where you are,

    Yes, imagine that.

    1: Take picture with ultra-modern all-features camera phone of building while lost in city.
    2: Submit to search system.
    3: Search system queries phone's built-in GPS for position information.
    4: Search system sends back retrieved GPS location.
    5: Customer is absolutely blown away and immediately sends back picture of self signing virtual 10-year contract at Early Adopter prices.
    6: Profit!

  • The most searched term is 'sex' or something like that. It'd be interesting to see the "search queries" for a picture-based search tool. I bet there would be a lot of stick figures behaving badly.
  • Robot potential? (Score:2, Interesting)

    by CaptRespect ( 586610 )
    Maybe this can be used for robots to recognise stuff or something like that.
  • If I need to find a specific location, I'll send a text message to 46645 [google.com] (GOOGL). Then, I'll use the street signs or the navigation system in my car.

    If I'm completely lost, the only way object recognition would work is if I'm in an area with a lot of recognizable features...like a city...in which case I'd just ask somebody. I doubt taking a picture of a bush when I'm lost in the middle of nowhere will be helpful (see: car navigation system).
  • by dioscaido ( 541037 ) on Thursday January 13, 2005 @03:42PM (#11352422)
    This seems less like a technology article and more like an advertisement for Hartmut Neven himself. Yes, he's built a 'google for images'... But how does it perform? How exactly is it 'ingenious'? What sets his project apart from the handful of people tackling the problem at almost every university with a computer vision research department? The problem of matching images is well known, and very difficult to solve. Even in my grad school (BU), which has a small number of computer vision grad students, there are two different research projects on this very topic.
  • Can my Powerbook be prompted to surf porn sites when the iSight catches me pulling down my pants?

  • Way to figure out the name of the chick you saw on flashyourrack.com!!!
  • Back in the 1990s, I used to develop kiosk software using IBM's Audio-Visual Connection language. The next generation of that software was called Ultim-Media Builder. At the same time, IBM came out with an extension to DB2 that would allow you to do queries against a database of images... you could say "Find all pictures with a red ball and a tree", and it would find them in the database by "looking" at the pictures, not because of any captions or notes. I never heard if that tech became well known, or if it
  • Duh (Score:3, Funny)

    by dfj225 ( 587560 ) on Thursday January 13, 2005 @03:44PM (#11352458) Homepage Journal
    "Imagine taking a photo of a street corner to find out where you are..."

    Imagine reading that street sign you just took a photo of to find out where you are.
  • That IMHO is the real prize in imaging right now.

    I show an image of a car, and the computer knows the make and model.

    I show a screenshot of a TV show where they remove the product name/brand from the product... it can ID the product.

    Facial recognition is not too bad at this point (though it seems lots of the pioneers are going under). Nobody seems to have successfully applied it to objects.

    I think that has much more use... think about it:
    1. Indexing and searching images/video
    2. Explaining TV to the bli
    • I agree. Object recognition is where the human computer (i.e. the brain) is FAR superior to any silicon-based computer at this point. It seems people in this thread are thinking pretty small. Visual Google will have its uses, but the technology will revolutionize the way computers and humans interact someday.

      This image searching would make computing far more intuitive. In the far future, it may also lead to computers able to learn/recall in the same way as humans (via visual memory). I believe that when th
  • This of course assumes that every location on the planet is stored somewhere from multiple angles. I guess you can scale in software.
  • [...]is developing ingenious new technology that allow[...]

    Sorry, can somebody please explain to me what is so ingeniously novel about this technology? In the digital image compression, pattern recognition, and indexing research areas, answering example-based queries on an image or video database is anything but novel. And that includes color-, histogram-, texture-, object-, depth-, even motion-based indexing and query/retrieval (no secrets here, many conferences every year).

    Don't get me wrong, I have nothing again
  • ...for the pr0n directory, by uploading the pics of various parts of the body... to identify the actress...
  • So I am at work in downtown Fort Worth, TX USA.

    I take a photograph of a building that is under construction here and submit it to the "search engine". So what might I be wondering?
    - what street corner am I at?
    - who owns/is building/will occupy?
    - what materials is it made of?
    - how is it being constructed?

    Same goes for anything. A flower, a person, an object. Without context, search results will be all over the map.

    Huh, now that you mention it, same problem with Google today. Just do a search for "building
  • It seems unlikely we've discovered the algorithm that will compare two scenes and not get fooled by different angles, perspectives, saturation, angle of sun, street lighting on/off, fog, rain, snow cover, traffic changes, etc. It may be quite a few decades before it's cheaper to use a computer for this than to have a real human being look at the picture and go "hmmm, looks like an Asian scene, those mopeds look Laotian, and that guy has a bag from Taco Bell -- must be the capital!"
  • Jeez, everyone's jumping all over this. "Why not read the street signs?" "What about GPS? This is stupid!" So the submitter cited some lame examples. Join the 0.05% of Slashdotters and RTFA. He cites ideas like taking a picture of a cafe and getting a food review, or taking a picture of a French menu and getting a translation.

    Maybe it could never work, but in principle, this isn't a bad idea.
  • The only technical challenges he discussed were database size/scope and what resolution to use vs. relevant details.

    I was hoping to find out how, exactly, images could be normalized, and how the normalized images could be indexed in a way that avoids O(N) searches.

    Differences in perspective seem to me to be an enormous hurdle, not to mention shadows that change from hour to hour. Nothing about these challenges was even mentioned in the article.

    Looks to me like this guy is trolling patent sharks.
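
    For what it's worth, the usual way to dodge an O(N) scan is to normalize each image down to a compact signature and use part of that signature as a lookup key. Below is a deliberately simplified, hypothetical sketch in Python: a 64-bit "average hash" computed from an 8x8 thumbnail, bucketed on its leading 16 bits, so a query is only compared (by Hamming distance) against the images in its own bucket. Real systems use far more robust features and multiple tables (or locality-sensitive hashing) so that near-matches whose leading bits differ aren't missed -- this toy version will miss them.

      # Toy index to avoid O(N) scans: each image becomes a 64-bit "average
      # hash"; images are bucketed by the top 16 bits of that hash, and a
      # query only compares Hamming distances within its own bucket.
      from collections import defaultdict
      from pathlib import Path

      import numpy as np
      from PIL import Image

      def average_hash(path: Path) -> int:
          """8x8 grayscale thumbnail -> 64-bit hash: 1 where a pixel is above the mean."""
          pixels = np.asarray(Image.open(path).convert("L").resize((8, 8)), dtype=np.float32)
          bits = (pixels > pixels.mean()).flatten()
          return int("".join("1" if b else "0" for b in bits), 2)

      class ImageIndex:
          def __init__(self):
              self.buckets = defaultdict(list)  # 16-bit prefix -> [(hash, name), ...]

          def add(self, path: Path):
              h = average_hash(path)
              self.buckets[h >> 48].append((h, path.name))

          def query(self, path: Path, max_distance: int = 10):
              """Images in the query's bucket, ordered by Hamming distance to the query."""
              h = average_hash(path)
              hits = [(bin(h ^ other).count("1"), name)
                      for other, name in self.buckets.get(h >> 48, [])]
              return sorted(hit for hit in hits if hit[0] <= max_distance)
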
  • Didn't IBM have this ability several years ago? I remember seeing ads in magazines that let you search for images based on their shape.
  • And what kind of search results do you get back when you submit this picture [goatse.cx]?
  • Imagine taking a photo of a street corner to find out where you are, or the photo of a city building to see its history.

    Lame. Better yet:

    1. Identify edible as opposed to poisonous plants when hiking.

    2. Identify a part you took off your car during a doomed DIY tune-up session.

    3. Identify a part that was lying on the ground after you closed up the computer case that you just *swear* wasn't there when you started.

    4. Identify the odd substance in the bowl from the back of the fridge...

    5. ID your blind d
  • While this is quite a cool idea, what we really need is an audio search, where someone could whistle or hum a few bars of that tune that is stuck in their head, and the search results could tell you what it is, where to buy or download it, the lyrics, etc.
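
    Query-by-humming services along these lines have been built. One classic trick is to throw away absolute pitch and keep only the melody's contour -- the "Parsons code" of up/down/repeat steps, which survives out-of-tune humming -- and then match that string against a song database by edit distance. A toy sketch in Python, assuming a pitch tracker has already turned the humming into a note sequence; the two-song database is a placeholder.

      # Toy query-by-humming matcher: reduce a melody to its Parsons code
      # (U=up, D=down, R=repeat) and rank songs by edit distance to the query.
      # Assumes pitch tracking has already turned the humming into MIDI notes.

      def parsons(notes: list[int]) -> str:
          """Contour string for a note sequence, e.g. [60, 62, 62, 59] -> 'URD'."""
          return "".join(
              "R" if cur == prev else "U" if cur > prev else "D"
              for prev, cur in zip(notes, notes[1:])
          )

      def edit_distance(a: str, b: str) -> int:
          """Levenshtein distance between two contour strings."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      # Placeholder database: song title -> note sequence of its opening theme.
      SONGS = {
          "Ode to Joy": [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64],
          "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60],
      }

      def best_matches(hummed_notes: list[int], top_k: int = 3):
          """Rank the database songs by how closely their contour matches the hum."""
          query = parsons(hummed_notes)
          scored = [(edit_distance(query, parsons(n)), title) for title, n in SONGS.items()]
          return sorted(scored)[:top_k]
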
  • I don't know how well it was done, but I remember many demonstrations and other interesting presentations from Informix (now owned by IBM) covering this as - IIRC - a blade plugin to their 9.x series Universal Server. You could submit a picture of, say, a wheel and it would figure out what index picture it was matching and then, of course, retrieve other related information. Pretty cool, but I don't know of anyone who came up with a production case for it.

    One of the big goals of this kind of technology w
  • This needs to be hooked up into a service that archives product manuals. Take a picture of your TV/cell phone/microwave/etc. and this service would be able to give you a PDF of your lost manual.

    No more hunting for model numbers (which I've found are often not included anywhere on the actual product).

  • muwahahahaaa

    writing is speech
    pictures are speech (this)
    code is tshirt is music is speech (decss)
    insert numerous other examples (my laziness)

    Good luck continuing to legislate speech as property based upon the medium; the inevitability is that they're all communication and increasingly interchangeable. I'd start looking for another way to designate what can and can't be proprietary NOW.
  • Something pretty similar has already been done here [ox.ac.uk]. They have an impressive online demo to play with.
    It's called Video Google - you give it a picture, and it returns all the frames in a movie that contain the picture. The clever bit is that it uses a kind of text-retrieval-inspired method to do it, so the processing time is essentially zero (just an inverted index lookup, with "visual words").
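
    For anyone wondering what "visual words" and an inverted index amount to, here is a heavily reduced, hypothetical sketch in Python. Local descriptors (raw pixel patches here, where a real system would use something like SIFT) get snapped to their nearest entry in a small codebook; each image becomes a set of word IDs; and the inverted index maps each word to the images that contain it, so a query just counts shared words instead of scanning every image. The codebook would normally come from clustering descriptors over a training set (k-means); the payoff is that query cost scales with how many images share words with the query, not with the size of the whole collection.

      # Toy bag-of-visual-words index. Descriptors are raw 8x8 grayscale patches;
      # a real system would use something like SIFT and a much larger codebook.
      from collections import Counter, defaultdict

      import numpy as np

      def patches(image: np.ndarray, size: int = 8, step: int = 8) -> np.ndarray:
          """Cut a grayscale image into flattened size x size patches."""
          h, w = image.shape
          out = [
              image[y : y + size, x : x + size].flatten()
              for y in range(0, h - size + 1, step)
              for x in range(0, w - size + 1, step)
          ]
          return np.asarray(out, dtype=np.float32)

      class VisualWordIndex:
          def __init__(self, codebook: np.ndarray):
              # codebook: (num_words, patch_dim) array, e.g. k-means centroids.
              self.codebook = codebook
              self.inverted = defaultdict(set)  # word id -> names of images containing it

          def _words(self, image: np.ndarray) -> set:
              """Quantize every patch to its nearest codebook entry (a 'visual word')."""
              descs = patches(image)
              dists = ((descs[:, None, :] - self.codebook[None, :, :]) ** 2).sum(axis=2)
              return set(int(w) for w in dists.argmin(axis=1))

          def add(self, name: str, image: np.ndarray):
              for word in self._words(image):
                  self.inverted[word].add(name)

          def query(self, image: np.ndarray, top_k: int = 5):
              """Rank indexed images by how many visual words they share with the query."""
              votes = Counter()
              for word in self._words(image):
                  votes.update(self.inverted[word])
              return votes.most_common(top_k)
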
  • So we can find that special picture with all the curvy bits in the right places.
  • http://shape.cs.princeton.edu/search.html [princeton.edu] allows you to search a database of 3D models by submitting text, 2D [orthogonal] sketches, a 3D [isometric] sketch, or even a 3D model file. Note that at this time these sketches are drawn by the user at search time, but there is nothing to say they can't use a border-detection algorithm to accept image input. Also, once you have some results, you can select 'find similar shape'.
  • IconSurf [iconsurf.com] also searches with images. It is a collection of the favicons of lots of sites. You choose whichever you fancy.
  • Washington U has an interesting software project [washington.edu] along similar lines. It can index thousands of pictures and then recall them based on your crude drawings. Very cool, and I think this tech is already appearing in one open source image management tool.
