Microsoft Science

Tag Images With Your Mind

blee37 writes "Researchers at Microsoft have invented a system for tagging images by reading brain scans from an electroencephalograph (EEG). Tagging images is an important task because many images on the web are unlabeled and have no semantic information. This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image. The person need only view the image for as little as 500 ms. Other current methods for generating tags include flat out paying people to do it manually, putting the task on Amazon Mechanical Turk, or using Google Image Labeler."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday November 26, 2009 @08:56AM (#30236518)

    Honestly, this is nice, but seriously, if my mind were tagging my images, the result would be something like the following list of tags:
    idhitit, awesome, ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop
    and that's just keeping it within PG-13.

    • Re: (Score:3, Funny)

      by dominious ( 1077089 )
      if my mind was tagging my images there would be something like:

      porn
    • Re: (Score:2, Funny)

      by Anonymous Coward

      So you frequent 4chan's /b/, eh?

    • by MrKaos ( 858439 )

      idhitit, awesome, ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop

      I wonder what Steve Ballmer would think seeing a picture of him throwing a chair.

  • by Anonymous Coward

    No, really, what could go wrong with using your unconscious animal nature to tag every photo with a (decent) girl in a bikini as "To Do"?

    • Using an EEG scan of a person's brain while they view an image could yield very different results for an image of a naked woman depending on the viewer's sex or sexual persuasion. Also, for images of objects and images of people in general - each viewer would have a different set of associations for a given image. For example, imagine the EEG of a person with arachnophobia when presented with a picture of a spider, etc.

  • From the summary:

    This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image.

    That's true, as long as "appropriate" means it was either X or Y, as the system really only works on discriminating between things like "a face" and "not a face". It's an interesting piece of research, sure, but it sure as hell won't replace good old fashioned tagging using a keyboard.

    • You're not fast enough.

      Get it to play 20 questions for you.
      "Person/NotPerson ... Place/Thing ... StaplesButton/DolphinPanicButton"

    • It's not going to replace keyboard tagging now.

      But in the future more advanced versions might.

      Then you could use "thought macros" to control wearable computers.

      The measurements of thought patterns are likely to be specific to each person. So devices that use thought input would have to be trained.

      But after that, you could be thinking of stuff like "purple green striped elephant" as the escape sequence to tell the computer to start listening in and doing stuff based on the thought patterns it recognizes (whi
      • by TheLink ( 130905 )
        Oh yah, in case it wasn't obvious, virtual telekinesis is also part of the human augmentation I'm talking about.

        Thinking of something could cause your wearable computer to perform an action. Which could include visiting a "favorite" url (which could result in turning the lights on or off, or setting the temperature of the airconditioning etc). Or running a script/program.

        FWIW, I'm not sure how many would bother to take the trouble to train their wearable (and their minds[1]) to "virtually type" stuff quickl
      • Then you could use "thought macros" to control wearable computers.

        What, like this [ocztechnology.com] product?

        • by TheLink ( 130905 )
          Yep. The tech is getting there. Very slowly though IMO.

          But in the near future I think the legal crap will be the thing that is slowing progress.
  • One can imagine a system that tags images by reading your mind as you surf the web. If Google Image Search needed to tag an image, it could just pop it up in a window for 500 ms and read your thoughts to get the tag.

    Wouldn't work with teenage males for example....

  • Yes, Microsoft can read your minds now! Next year, you'll have to plug this device in and wear it to run Windows.

    MSFT: "Is his Windows and Office license legit? Let's read his mind and find out."

    "Does he also run Linux?"
    "Yep. Crank up the juice and reprogram him."

    The other thing I'd like to mention, being only on my second cup this morning: those guys in the graphic, glanced at quickly, looked like they were wearing thigh-high stockings.

      The other thing I'd like to mention, being only on my second cup this morning: those guys in the graphic, glanced at quickly, looked like they were wearing thigh-high stockings.

      I thought the same and yet the image isn't tagged as such. Clearly vaporware then.

    • They're not gettin' their mind probes through my freakin tin foil barrier

  • Cool! (Score:3, Funny)

    by Rik Sweeney ( 471717 ) on Thursday November 26, 2009 @09:07AM (#30236596) Homepage

    Just don't let Rorschach tag any images of ink blot tests.

  • by erroneus ( 253617 ) on Thursday November 26, 2009 @09:11AM (#30236628) Homepage

    Microsoft, be warned. Some people have a limited scope in terms of what they are thinking about at any given time.

    • by ozbird ( 127571 )

      Microsoft, be warned. Some people have a limited scope in terms of what they are thinking about at any given time.

      I think they know that already. "Developers developers developers!"

  • ...what could possibly go wrong? ^^

    • We could make registering minimal brain activity a requirement to access the internet, that should cut net population by about 90%.

  • An EEG is not the same thing as a "brain scan". An EEG is an analog, point-to-point system which is very good at "reading" the parts of the generalized electrical field that reach the scalp from the brain. Using EEG output to control stuff is a fun sideline which is almost exactly as old as EEG technology itself.

    It doesn't work very well, and it very probably never will. The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.
    • Re: (Score:3, Interesting)

      by smitty777 ( 1612557 )

      Actually, the variance isn't the problem. That comes out statistically in the wash - you can see that with a large enough N, patterns emerge across the different stimuli types, which allows them to do the tagging. The real problem is interpreting the complex interactions between the different regions of the brain. However, that doesn't really matter for an experiment like this, as the patterns don't actually need to be interpreted, just recognized by the algorithm. It's a similar concept to the way the MMP

    • by TheLink ( 130905 )
      > The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.

      You shouldn't directly use the "thought pattern" of a person to tag the data.

      As you said, the thought patterns are likely to be different from person to person.

      What you do though is, for each tagging participant, you get the person's thought patterns for a whole bunch of tags.

      Then they can tag stuff really quickly just by looking at them. The advanced people
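
The calibration scheme sketched in that comment can be illustrated with a toy nearest-template classifier. Everything here is hypothetical (synthetic feature vectors, a made-up tag set, and a stand-in for each tag's "true" EEG signature); real EEG decoding is far noisier:

```python
import math
import random

random.seed(0)
DIM = 8  # hypothetical number of EEG-derived features per epoch
TAGS = ["face", "animal", "inanimate"]

# Stand-in "true" per-tag signatures; in reality these are unknown and
# differ from person to person, which is why per-user calibration is needed.
CENTERS = {tag: [i * 3.0] * DIM for i, tag in enumerate(TAGS)}

def record_epoch(tag):
    """Simulate one noisy feature vector recorded while viewing an image."""
    return [c + random.gauss(0, 0.5) for c in CENTERS[tag]]

def mean(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

# Calibration phase: average 20 epochs per tag into a per-person template.
templates = {tag: mean([record_epoch(tag) for _ in range(20)]) for tag in TAGS}

def classify(epoch):
    """Tag a new image by the nearest template (Euclidean distance)."""
    return min(templates, key=lambda tag: math.dist(epoch, templates[tag]))
```

Once the templates exist, tagging a new image is a single distance comparison per tag, which is consistent with the claim that the viewer "need only view the image for as little as 500 ms."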
  • Honey, I wasn't looking at her breasts; I was just tagging the image using Microsoft's new mind tagging, honest!
  • Fun and Easy to Use (Score:4, Interesting)

    by smitty777 ( 1612557 ) on Thursday November 26, 2009 @09:42AM (#30236848) Journal

    This is typical of MS: thinking that something like this would be easy for the average user. FTA: "However, the mind reading approach has the advantage that it does not require any work at all from the user."

    So, in order to use this system, we should all strap on EEG caps while we're surfing the web. Sounds real practical to me - I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear. After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp. It takes a few cycles to rinse that stuff out too.

    Way to go MS for making productivity so much easier.

    • Still, it is an improvement over Outlook.
    • by Tim C ( 15259 )

      You used to work in a lab, so you ought to be familiar with how research works, and how often it produces actual products.

      Forget the practicalities of people doing this in their homes; in principle, I think it's pretty damn cool.

    • And as we all know, no technology that was slightly inconvenient in a lab has ever had any value or practical use.

    • by DynaSoar ( 714234 ) on Thursday November 26, 2009 @01:16PM (#30238508) Journal

      I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear. After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp. It takes a few cycles to rinse that stuff out too.

      Smitty, we've come a long way from those caps. There are now "caps" that are essentially nets of elastic cord with plastic cups containing pieces of sponge, the electrodes embedded in the sponge. Dip it in mild salt water for conduction, shake it out so there are no drips running together and bridging the electrode sites, and pull it on. I could get a good signal on 128 channels in less than 10 minutes from the time they walked in to data collection start.

      There is also a European company selling a similar get up, but the preamps are built into the cups on the net, making impedance matching irrelevant and signal balancing automatic on the fly. These are so stable that they can be used ambulatory.

      And nobody ever has to get goop or glue stuck on/into them any more.

  • by Interoperable ( 1651953 ) on Thursday November 26, 2009 @09:55AM (#30236966)
    Tags.
  • by davidwr ( 791652 ) on Thursday November 26, 2009 @10:23AM (#30237180) Homepage Journal

    unlabeled and have no computer-readable semantic information.

    There, fixed that for you.

    Seriously, the old saying "an image is worth 1,000 words" implies that images frequently have semantic information, at least in the sense that anything on paper can have semantic information. It's just that computers can't parse it, catalog it, search on it, etc. Not well, anyways, not yet. They are getting there though.

    • by Eevee ( 535658 )
      Except it's not an old saying, it's advertising copy [uregina.ca] from back in the 1920s that was falsely attributed to ancient Japanese/Chinese philosophers (yes, it flip-flops between ads) to make it sound more impressive. And the point behind this saying? It's to get people to buy advertising campaigns involving images--which bring in more revenue for the advertising firm.
  • Just one minor nitnoid: the title of this article should be "Tagging Images With Your Brain", not Mind. Electrical impulses are used - using the word mind implies that some conscious effort is involved. This is strictly identifying patterns using machine algorithms independent of the user's thought process.

  • 3-class (Score:3, Interesting)

    by Metasquares ( 555685 ) <slashdot AT metasquared DOT com> on Thursday November 26, 2009 @11:09AM (#30237544) Homepage
    Useful, but real-world tagging is much more specific than "person", "animal", or "inanimate". The number of classes required in the classification task is thus far greater and one would expect the accuracy to be proportionally lower. OTOH, it could be a great preprocessing step for further manual analysis, or a step in a hierarchical clustering algorithm. Or maybe 3 classes suffice for certain specific situations.
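
The "preprocessing step" idea from that comment can be sketched in a few lines: a coarse three-class EEG decision merely narrows a tag vocabulary (entirely hypothetical here) before any finer manual or automatic tagging takes over:

```python
# Hypothetical tag vocabulary, partitioned by the three coarse classes.
VOCAB = {
    "person":    ["face", "portrait", "crowd", "selfie"],
    "animal":    ["dog", "cat", "spider", "bird"],
    "inanimate": ["car", "building", "chair", "wallpaper"],
}

def candidate_tags(coarse_class):
    """Use the coarse EEG classification to shrink the search space."""
    return VOCAB[coarse_class]

# A downstream tagger now chooses among 4 tags instead of 12.
print(candidate_tags("animal"))
```

The point is exactly the accuracy trade-off the comment raises: a 3-class decision can be reliable where a 12-class (or 12,000-class) one cannot, so it is better cast as a filter than as the tagger itself.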
  • Focus (Score:2, Interesting)

    What happens when I'm tagging a photo but listening to music at the same time?

    Or I run the photo tagging software in a small window and watch a movie (or some porn) instead?

    So they can create tags from brain waves, but there's no way to tell what a user is actually focussing on.

    • What happens when I'm tagging a photo but listening to music at the same time?

      Or I run the photo tagging software in a small window and watch a movie (or some porn) instead?

      So they can create tags from brain waves, but there's no way to tell what a user is actually focussing on.

      If you were in a sensory deprivation tank with the only perceptual cue the target image, you'd still have hundreds to thousands of competing cognitions boiling away, fighting for processing space on their way to awareness, few of them making it but all taking up some resources and generating some signal. But these, as well as any ongoing stimulus like music, are not locked in time with the stimulus presentation and so are random as compared to the stimuli. When processed the signal is cut at the same point
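
      The time-locking argument above is the classic event-related-potential averaging trick. A toy simulation (synthetic data, made-up amplitudes) shows it: a stimulus-locked bump survives averaging across trials while random-phase background activity cancels toward zero.

```python
import math
import random

random.seed(1)
SAMPLES, TRIALS = 64, 200

def trial():
    """One epoch: a stimulus-locked bump at sample 32, plus background
    activity (music, stray cognitions) at a random phase per trial."""
    phase = random.uniform(0, 2 * math.pi)
    return [math.exp(-((i - 32) ** 2) / 8)   # time-locked response
            + math.sin(0.5 * i + phase)      # non-locked background
            for i in range(SAMPLES)]

trials = [trial() for _ in range(TRIALS)]
avg = [sum(t[i] for t in trials) / TRIALS for i in range(SAMPLES)]
peak = max(range(SAMPLES), key=lambda i: avg[i])
# The random-phase background averages toward zero, so the peak of the
# averaged signal lands at the stimulus-locked latency (near sample 32).
```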

  • Any image that stimulates brain activity not close to the surface is unclassifiable?
  • Are brain scans really so cheap that it's cheaper to set up an EEG than to pay someone in a third-world country to do it?

  • So we'll now have automatic 'Like', 'Dislike' and 'Eeeeeeagh my visual cortex where is the brain soap' responses?

  • Maybe if this were integrated with an intelligent tag vocabulary, such as the one at http://annotator.imense.com/faq-annotator/ [imense.com], the lives of those poor people manually tagging these images can be improved. Not to mention the potential increase in accuracy.
  • Tagging images is an important task because many images on the web are unlabeled and have no semantic information.

    And yet somehow we've managed to survive. I've never really seen the point behind "tagging" much of anything. In every implementation, it just amounts to a mostly random bunch of words that a mostly random person or group thought vaguely described the item at that time. It's never been useful for finding more of the same because tags are so absurdly broad, and it's never been useful for n
  • How about, with or without the brain sensors, a Recaptcha-like system where multiple people tag each photo and each person has to tag several photos, one already heavily processed and one as-yet-untagged? This might prove to be a lot more difficult for machines to do than text systems. On the other hand, if the determined spammers out there figure out how to get a machine to do it, then they've done our work and we now have a machine that can tag photos.
