Science

$6 System-On-A-Chip Mimics Human Vision

Brian McLaughlin writes "This article in TechWeb describes a Visual Perception Processor (costing $6) that can automatically detect objects and track their movement in real time, according to Bureau d'Etudes Vision (BEV). They claim that a full-blown vision processing system/application could be built for less than $50 that rivals current state-of-the-art $10,000 systems. Sounds pretty cool. " Heck, with my vision, I could tear my eyeballs out and simply use these, at a fraction of the cost of new glasses.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I wonder if this is covered under my HMO. :-) Seriously, though, I wonder about the insurance industry and how they are going to handle things like this coming up. LASIK is getting more and more popular, but it is still expensive. If getting a processor like this is cheaper than LASIK and cheaper than conventional glasses -- what's the future of vision plans? At what point do they spring for the implants/glasses/lasers to fix you up? Of course, with all the genome stuff going on, maybe they'll make you fix it before the child is even born.
  • Uh oh. If AIBO implements this technology, I might have to buy one after all!

    ICQ: 49636524
    snowphoton@mindspring.com
  • Heck, with my vision, I could tear my eyeballs out and simply use these, at a fraction of the cost of new glasses.

    Hrm... I don't know, but for some reason that just doesn't sound like a good idea to me...

    [ c h a d o k e r e ] [dhs.org]
  • by georgeha ( 43752 ) on Friday March 10, 2000 @06:45AM (#1212100) Homepage
    $50 for a reactive vision processing system? Couple that with a cheap (or free) reliable operating system, and cheap networking that has lots of addresses (IPv6), and you could put watching devices on every street corner, heck in every house (you know, for the safety of the children!).

    Goodbye privacy.

    George
  • A) does it run Linux and
    B) if so, will a beowulf cluster of these things make bugs infinitely shallow?
  • by Anonymous Coward
    Great, now if I poke out his eyes, he can replace them and still find me. Plus, he will be pissed off at me for poking out his eyes, so he will eat me very slowly.
  • ...if you'll excuse the awful pun.

    Seriously, though, this has some seriously cool implications if it actually works. It reminds me of a story posted a while back on Slashdot, about a blind man who'd had prototype cranial vision implants back in the 70's. He's currently got some semblance of vision (truly achromatic, poor resolution, and about the size of a 3x5 card held at arm's length, but vision nonetheless). It's had some upgrades over the years, but it's still ancient technology by modern standards. What would happen if they could somehow retrofit that implant with one of these, assuming it worked? While I doubt it'd be anything near human vision, it'd certainly be better (both in terms of vision quality and, probably, physical bulk) than the system he has now.

    It'd certainly be an interesting one to try.
  • Driving down highway
    Blue screen of death suddenly
    I crash into tree
  • by SnatMandu ( 15204 ) on Friday March 10, 2000 @06:48AM (#1212105) Homepage
    If this is half of what it's cracked up to be, I'm pretty impressed. I played around with mobile robots at the university, and doing anything based on vision was very difficult. Most of the time it was easier to solve a problem with sonar. Sonar works great for finding walls and stuff, but as soon as you introduce moving objects into the environment, it gets less useful.

    If this chip is really as capable as it's made out to be, it will mean a great deal to people who are primarily interested in autonomous mobile robots, as opposed to computer vision.

    I could imagine hooking something like this up to a pioneer [activrobots.com] and solving a bunch of problems.

    Sort of makes me wish I were still a student, with the time and resources to play with robots...

  • by Anonymous Coward
    I could well believe it. Several months ago I had lost my glasses. Now, I didn't think that my vision was THAT bad, and I thought that maybe I could function without them. However, not fifteen minutes after leaving my house I walked straight into a truck. I mean, it's a common mistake, those trucks are pretty tricky. You can't notice them unless you look REEEEAAALLLY carefully. And it wasn't my first incident with a truck either. I can't count the number of times I've been stalked by Log Trucks.

    Now, you might think that's a joke, but Log Trucks are a SERIOUS problem. Once one has its mind to get you, get you it will. They are VERY ruthless, and will stop at nothing to leap out of the woods when you're not looking and have their way with you. But don't take my word for it, read the archives at the alt.fear.logtrucks newsgroup.

    At any rate, I fell onto a rake after I walked into the truck. Poked out both of my eyes. But on the slightly less bleak side, at least I didn't cry (I couldn't; I had no eyes, or tear ducts. Rakes suck). Anyway, once I got a hold of myself, I asked a passing mime to direct me to the nearest hospital, and I hooked up with a vision unit that cost me $15,874.97 plus tax. To be fair, that DID include the surgery, so I don't think it was that bad of a deal.

    Anyway, the vision isn't really that great. I mean, what I see is clear, and I have a nifty zoom option. However, I have lost the ability to see clothing. EVERYONE is naked. Oh, yeah, make your perverted little jokes. And yeah, it was neat at first, but let me tell you, you watch ONE episode of the Jerry Springer show and you'll change your tune REAL quick. Let alone going to grocery stores. I don't even want to THINK about going to a thrift store now.

    Which brings up another issue: wearing clothes. I can feel them, so I know when I am. And just because I look naked to me, doesn't mean I want to look naked to everyone else. However, because I can't see what I'm putting on, I end up with some pretty strange outfits. I can't really tell if I'm wearing a three piece suit or a "Hello Kitty" sundress. The only indication I get is the trail of giggles. Do I look THAT weird in a suit?!?!! I guess so.

    Anyway, I hope whoever is developing this system has read my story, and makes sure this bug does not persist. Oh, and will someone tell me where the next Linux convention is so I can steer right the fuck clear of it.
  • Oooh. How about having something like this on a remote controlled car? That would be sweet! Have a lil' wireless connection that sends the image to your laptop computer....*drool*
  • If I understood the article correctly (and I rarely do), then someone can use this $50 or so chip to build devices such as motion detectors, better robot controls, new spacecraft for NASA to crash, and, the one I really like, a car that drives itself without getting blood on my bumper.
  • "Gait identification"?!?

    This is possible? I guess it would work on Fat Albert & the gang, but the Minister of Silly Walks would fool this thing faster than your machine can go "bing"!
  • This guy already has a functional Visual Cortex; what would be the point of replacing it? What this guy needs is better/more connectors in his brain, not a new one.

    [ c h a d o k e r e ] [dhs.org]
  • Kinda makes you wonder about all those sci-fi stories you read as a kid, doesn't it? It will be interesting to see what happens when these things get put in robots. Who knows, maybe we will have inhouse robots and self driving cars before 2010?
    -
  • I wonder if this is covered under my HMO. :-) Seriously, though, I wonder about the insurance industry and how they are going to handle things like this coming up. LASIK is getting more and more popular, but it is still
    expensive. If getting a processor like this is cheaper than LASIK and cheaper than conventional glasses -- what's the future of vision plans? At what point do they spring for the implants/glasses/lasers to fix you up? Of course, with all
    the genome stuff going on, maybe they'll make you fix it before the child is even born.


    What about the already living? I think that one of the major stumbling blocks for better lives is that there are already people who were born before all this little stuff came along. I really would like to have the ability to alter my life for the better. The question is, does this actually work in, say, a lab rat or a human? Maybe take a blind person who has really nothing to lose and put one of these in and see if it works.
  • Ok, now before I get kinda pissed about this, think about what you're saying first guys. This isn't some awesome new vision system that will allow blind people to see. It is just a method of emulating the way the brain processes signals from your eyes, and using that information to track objects.

    The brain-eye system uses layers of parallel-processing neurons that pass the signal through a series of preprocessing steps, resulting in real-time tracking of multiple moving objects within a visual scene.

    Get it now? So please guys, don't get any *SMART* ideas and jab your eyes out so you can buy a nice new $6 replacement. (I know no one was actually serious about that, but you guys did still miss the point. It's not the camera that's important, and it would do no good to put X-ray or a zoom lens on there, because you're not looking at the output of what it's seeing. It's a TRACKING device. So it's not used for monitoring either... geez.) \rant

  • Who knows, maybe we will have inhouse robots and self driving cars before 2010?

    Most likely some people will. But they'll probably suck. Robotics SOTA historically advances *very* slowly (it's HARD!).

  • by spiralx ( 97066 ) on Friday March 10, 2000 @06:56AM (#1212119)

    The GVPP's major performance strength over current $10,000 vision systems is its automatic adaptation to varying lighting conditions. Today's vision systems dictate uniform, shadowless illumination, and even next-generation prototype systems, designed to work under "normal" lighting conditions, can be used only from dawn to dusk. The GVPP, on the other hand, adapts to real-time changes in lighting without recalibration, day or night.

    Okay, so what they're claiming is that their brand-new, $6 ($50 in total) device can do things which long-standing scientific projects costing $10,000 cannot? Am I the only one who thinks that this sounds somewhat fishy?

    The GVPP was invented in 1992, when BEV founder Patric Pirim saw it would be relatively simple for a CMOS chip to implement in hardware the separate contributions of temporal and spatial processing in the brain.

    Again, don't you think the many computer scientists and neuropsychologists working on machine vision would have thought of this themselves? I've read a fair bit on the theory of vision processing and pattern recognition, and it's a hugely complex subject. And now a small research company has cracked it? I don't think so. If you read the list of things they say it can be used for, it comes across as a huge gimmick - they seem to have listed everything they could think of that might be worth money.
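
    For reference, here is a toy sketch of what "separate temporal and spatial processing" could mean in software - my own illustration in Python/numpy, with arbitrary thresholds, and nothing to do with BEV's actual hardware:

    import numpy as np

    def temporal_map(prev_frame, frame):
        """Per-pixel change between consecutive grayscale frames (motion evidence)."""
        return np.abs(frame.astype(int) - prev_frame.astype(int))

    def spatial_map(frame):
        """Crude edge strength: horizontal plus vertical intensity gradients."""
        f = frame.astype(int)
        gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
        gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
        return gx + gy

    def moving_edges(prev_frame, frame, t_thresh=20, s_thresh=30):
        """Flag pixels that are both changing over time and sitting on an edge."""
        return (temporal_map(prev_frame, frame) > t_thresh) & \
               (spatial_map(frame) > s_thresh)

    Even this trivial split shows the flavour of the idea; the hard part the article glosses over is everything downstream of it.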

  • If they eventually turn this into a 'vision replacement' system, economics won't be a problem.

    All you need is one extra chip to hold the banner ads.

  • automatically detect objects and track their movement in real time, according to Bureau d'Etudes Vision (BEV). They claim that a full-blown vision processing system/application could be built for less than $50 that rivals current state-of-the-art $10,000 systems. Sounds pretty cool

    It looks like we're in for a slew of good-and-bad things to come. This silicon eye is so cheap that we'll end up seeing it installed in the oddest of places. Everything from the (1984-inspired) eyeball-in-the-TV (gives new meaning to the CBS logo) to anti-collision systems in vehicles. For $50 you can put one on your kid and see what they do (Here, Honey, it's a pendant from Aunt Huxley). Worried about which neighbor's pooch is pooping on your lawn? A few modules can be used to watch and track it home. I guess I'm a pessimist. I see more bad things, particularly in the area of privacy loss, than I see good.
  • Er... I think it's more like a CCD camera. Nothing terribly new, it just fits all on one chip and it's cheap as heck.

    there is a little bit of discussion [kuro5hin.org] on kuro5hin [kuro5hin.org] already. ...

    /joeyo

  • These chips process the visual information, not generate it. They identify and track objects in a video stream. They'd be good for a robot that wanted to play baseball, or as a cheap add-on so that your Aibo could recognise you on sight, or as part of an automatic weaponry targeting system. (I doubt we'd have seen it if this had been a US company.. Defense would have snatched it..)

    Hell, it's cheap enough we could all have auto-aim paintball guns, and semi-intelligent autocannon. Look out World, my rocketlauncher is going self-aim!

    Back to the real subject.. The poor fellow has no use for these, unless he also wants his visual cortex cut out in the name of Borgification..
  • I wonder if we could add in a zoom lens, or maybe x-ray vision (Like in the latest Bond movie).

    Unfortunately, if you are emitting X-rays you could damage the living tissue inside your eye or your brain. Also, I don't think all those people would like to be exposed to random amounts of such radiation anyway.
  • ...and then imagine it if it were implanted in people, and somehow they were able to record. Privacy would be a thing of the past.
  • Has anyone noticed that this might make implementing a visual system for robots easier? Heck, maybe in a few years I will have that robot maid who can take care of all my needs. :-)
  • Thanks for making this point. It's still damned cool that they can pull out all this visual information, especially in such diverse lighting conditions. A university pal of mine tried to do some vision stuff a while back - I helped a little - and it was very difficult to identify objects in a scene, even when it's, e.g., a red Coke can in a white/light scene. To have such a cheap chip be capable of identifying and tracking 8 objects in a scene (presumably the sensor is outputting a pixelmap) is incredible.

  • ...and then imagine it if it were implanted in people, and somehow they were able to record. Privacy would be a thing of the past.

    Unfortunately you have to somehow dupe millions of people into putting these things inside their bodies, which would be difficult. Couple that with the trillions and trillions of terabytes that would accumulate from each and every person who had the device and you have a very untenable situation.
  • The processor sees its environment as a stream of histograms regarding the location and velocity of objects. Those objects could be the white lines on a highway, the football in a televised game or the annotated movement of enemy ground forces from satellite telemetry.

    This system does not see the same way that humans do, nor is there any mention in the article that you could actually get a picture out of one. It's designed to locate movement within its field of vision...nothing more. It doesn't notice, and probably can't 'see', things that do not move.
    I wouldn't want to replace my eyes with these things. I'd be able to see the animated banner ads, but not read the rest of a static web page!
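
    As a rough illustration of that "stream of histograms" idea (purely my own sketch in Python/numpy - the GVPP's real output format isn't described in the article), each frame pair can be reduced to histograms of where change happened and how strong it was:

    import numpy as np

    def motion_histograms(prev_frame, frame, bins=16, thresh=25):
        """Summarize a frame pair as location and change-magnitude histograms."""
        diff = np.abs(frame.astype(int) - prev_frame.astype(int))
        ys, xs = np.nonzero(diff > thresh)        # coordinates of changed pixels
        h, w = frame.shape
        # Location histograms: how much motion falls in each horizontal/vertical band.
        x_hist, _ = np.histogram(xs, bins=bins, range=(0, w))
        y_hist, _ = np.histogram(ys, bins=bins, range=(0, h))
        # Crude stand-in for "velocity": how large the per-pixel change was.
        v_hist, _ = np.histogram(diff[ys, xs], bins=bins, range=(0, 255))
        return x_hist, y_hist, v_hist

    Note that nothing in such a summary lets you reconstruct a picture - which is exactly the point made above.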

  • Yes. However, I'm not sure that's practical given the current system. Have you seen what this thing looks like? He has a large plastic panel on the side of his head, into which runs a large bundle of wires. These are, in turn, attached to a large device that fits on a pair of glasses (making him look not unlike a Borg, actually, but that's not the point).

    The problem is, it doesn't look like there's that much room left in either the panel or the device on his head. If you were to fit many more connections in using the current system, I'd guess you'd have to make the system into a large Vaderesque helmet, with connectors going into all sorts of other places in the head simply for there to be room for all of them. Far better to go for something more compact. You can certainly leverage the interface that's already in his head to start, but the external system's going to have to go if you want to fit many more connectors into his brain. It's a matter of physical size more than anything else.
  • Actually having an OS built into your vision could be fun...

    Throw Nextstep/Litestep/whatever on there with a blood-red theme...instant TerminatorVision(tm). =)

    Just as long as the thing doesn't look like a golden hair clip over my eyes, I'll be fine. =)

    ------
  • Ironically, you can 'cluster' these things just by feeding them all the same stream. It won't find the bugs in your code, but it will find the cockroaches in your apartment. You may very well have to cluster them if you have as many native species as some of the bachelor apartments I've seen and/or lived in. Also, you can use it to guide the robot vacuum cleaner to suck them up.
  • I think its more like a CCD camera. Nothing terribly new, it just fits all on one chip and it's cheap as heck.

    If you think that, either you or the people at kuro5hin.org are dumbasses. Next time, at least read the story before jumping to conclusions.

    Ask yourself, why would a device that's 'more like a CCD camera' require a CCD camera or other video source to operate? geez.

    What this thing does is mimic the human visual cortex in hardware; it can be used to track and recognize up to 8 objects in real time...

    [ c h a d o k e r e ] [dhs.org]
  • by aav ( 117550 ) on Friday March 10, 2000 @07:11AM (#1212139)
    Actually it's not as impressive as it looks.
    They say that "The $6 Generic Visual Perception Processor (GVPP) can automatically detect objects and track their movement in real-time," according to Bureau d'Etudes Vision (BEV).
    This can be easily accomplished by a technique called blob tracking - the coarsest computer vision technique there is (a rough software version is sketched at the end of this comment). A similar project (impressive too) was developed at a Japanese company (although I don't remember exactly where). It was some sort of interactive game where a pet played with you in a projected image. You moved, played with it, and it seemed to understand if you touched it, petted it, etc. The catch was that the room was filled with cameras, and they detected the movement of your hands. Given the speed, direction, etc., the system could work out what you were trying to do.
    Cute, but nothing interesting from a research point of view.
    I assume that they are doing the same. It is very easy to identify movement in an image (you can do it in real time even on a Pentium). Check any computer vision book for details.
    Probably they built some chip that works at the speed of a controller (i.e. very fast) but, like any controller, performs very few operations.
    Still, they don't say anything about actual image understanding.
    And this is where the commercial part comes in. They are not actually saying their chip can understand an image; they can simply track motion. That system wouldn't have a clue whether that movement was a fighter jet or a flying orange.
    It may be useful in a computer used in a vision lab, but we're quite far from industrial pattern recognition or image understanding.
    So please don't take commercial ads as truth.
    If you are French, please don't read what follows.
    After all, they are French, and they are the best sellers in the world. Everyone believes that French wines are great and French women are beautiful. Have you ever tasted those sour poisons? Or ever gone to France to watch their women?
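
    (For the curious, here is a minimal software version of the blob tracking mentioned above - frame differencing plus connected components. It assumes numpy and scipy and is only an illustration of the generic technique, not of BEV's chip.)

    import numpy as np
    from scipy import ndimage

    def find_moving_blobs(prev_frame, frame, thresh=25, min_pixels=20):
        """Return (row, col) centroids of regions that changed between frames."""
        moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
        labels, n = ndimage.label(moving)         # group changed pixels into blobs
        centroids = []
        for i in range(1, n + 1):
            blob = labels == i
            if blob.sum() >= min_pixels:          # ignore tiny specks of noise
                centroids.append(ndimage.center_of_mass(blob))
        return centroids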
  • Just did a quick search, and found out that GVPP isn't exactly new (the article mentions that it was invented in 1992):

    http://www.techweb.com/wire/news/1997/09/0913vision.html [techweb.com]

    Seems the price has gone down "a bit" since '97 though:

    The modules measure 40 mm2, have 100 pins, and can handle 20-MHz video signals. The chip is priced at $960. On a card with a socketed GVPP and 64 kilobytes of Flash RAM, the price comes to $1,500.

    $6 sounds much better to me :)

  • This sort of thing is good news for people with seriously degrad(ed/ing) sight like me. Right now I spend nearly $400 on lenses before I even start looking at frames.

    The technology isn't good enough for full vision replacement yet, but it's only going to improve. What I'd like is 'augmentation' applications:
    - 360deg vision
    - a microscope extension that can be put into a machine and give a good view of the motherboard or components
    - direct output from a video game or computer

    *heh*

    Sign me up.

    --Ruhk

  • No doubt, the governments wouldn't stop until there were twice as many of these optical/electronic 'eyes' as there are people on earth! Those bastards!

    [ c h a d o k e r e ] [dhs.org]
  • $50 for a reactive vision processing system? Couple that with a cheap (or free) reliable operating system, and cheap networking that has lots of addresses (IPv6), and you could put watching devices on every street corner, heck in every house (you know, for the safety of the children!).

    You are thinking in the right direction, but this particular piece of hardware just detects movement and tracks moving objects very cheaply. That's not such a big deal for human surveillance. What you should REALLY be worried about is automated face recognition that feeds into a big-ass backend database. Once the street cameras + database system is able to identify you by your face and track you from camera to camera as you walk the streets, life suddenly becomes much more interesting. In particular, Darth Vader-ish helmets start looking very attractive.

    And yes, I fully expect such systems to be operational within the next few years. Of course, they may forget to tell the public about it...

    Kaa
  • Unfortunately you have to somehow dupe millions of people into putting these things inside their bodies which would be difficult.

    Not really. All they'll have to do is start mass genetically engineering people to produce these electro/chemical 'eyes' while they're still in the womb. They could then connect them to the brain in order to get them to record. The hard part then would be convincing people to take them out... Oh god, maybe they've started already!

    [ c h a d o k e r e ] [dhs.org]
  • You're more or less correct in your analysis of the chips, but come on, lighten up! Part of the fun of reading /. is coming across off-the-wall musings people post.

    Try to think out of the box a bit more; some of the most innovative inventions can come from seemingly stupid ideas - why do you think that nothing is out of bounds in a brainstorming session - ideas spark ideas.

  • The problem is, it doesn't look like there's that much room left in either the panel or the device on his head.

    His implant was bulky because it was old-fashioned. If they were to do the implant now it would be much smaller. The reason the implant was old-fashioned is that they have been testing it for a long time (something like ten years).

  • Notice that they talk very carefully about "up to eight user defined objects". This appears to be a front-end processor, using straightforward (and well-known) signal processing algorithms. For instance, background compensation is a well-known technology, as are techniques for tracking user-defined objects.

    They aren't talking about doing object detection or segmentation, much less tracking, in a rich optical environment. Doing segmentation and tracking in a sparse and controlled visual environment might be useful in a factory setting, but it is going to be of very little value outside of that realm. That means that this chip is much less than it appears, and, frankly, isn't even all that new. Go look at Carver Mead's work in the early and middle eighties, if you don't believe me; it could do the same thing for the same price. Heck, go look at Eric Schwartz's work in the late eighties; it could do much more...for much less.
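
    To make "background compensation" concrete, here is one textbook trick - a running-average background model - sketched in Python/numpy. It's a generic illustration with made-up parameters, not a description of this particular chip:

    import numpy as np

    class BackgroundSubtractor:
        """Toy running-average background model with a fixed threshold."""

        def __init__(self, alpha=0.05, thresh=25):
            self.alpha = alpha            # how quickly the background adapts
            self.thresh = thresh
            self.background = None

        def foreground_mask(self, frame):
            f = frame.astype(float)
            if self.background is None:
                self.background = f.copy()
            mask = np.abs(f - self.background) > self.thresh
            # Fold the current frame into the background slowly, so gradual
            # lighting changes are absorbed rather than flagged as motion.
            self.background = (1 - self.alpha) * self.background + self.alpha * f
            return mask

    Feeding successive frames to foreground_mask() gives a per-pixel "something moved here" map that tolerates slow lighting drift, which is roughly the capability the article is selling.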

  • It didn't replace his visual cortex. The implant stimulated the neurons in his visual cortex, which weren't receiving much input from his eyes.
    Having more neurons in the visual cortex wouldn't help either, because, again, the eyes have to send some usable info to the visual cortex. Nor would adding more supportive glial cells for the visual cortex, or even extra-thick myelin sheaths on the neurons in the visual cortex.


  • Sigh. I stopped reading after I saw
    "...the devices are modelled
    exactly on the way the human eye works, based on the work of thousands of previous researchers into how all human eyes function. 'It was remarkably easy to implement,' say the inventors, 'once others laid the groundwork.'
    Every aspect of the new devices are completely covered by freshly issued patents, there being no known prior examples of this science. Some novel aspects of these patents cover the many conceivable uses of the new eyes, especially in the area of e-commerce. 'We were blind to the possibilities,' wrote one of the inventors, 'but when we deliver eyeballs to a website, we expect to be compensated. And that's just to buy a book... wait till you try to read it: one-blink to accept the EULA.
    Tim was initially skeptical till he saw that the devices will require extensive 'in a nutshell' how-to books and he's already written an encouraging letter [fuckingsucks.net]...'"
    Will it ever stop?
  • Why is everybody talking about curing sight defects with these? Nowhere in the article does it mention interfacing these things with humans, despite Hemos' eye gouging joke in the post.

    This is basically a revolution in hardware-accelerated shape/object recognition. With this you could build cars that automatically avoid accidents, robots that navigate through the world like normal people (seeing-eye droids?), and security systems that track every person in a building.

    Maybe in the future we will be able to link devices directly to the human brain and solve vision problems, but you wouldn't need anything as esoteric as this... you just need the right type of camera to replace an eyeball, not an object recognition chip.

  • Seems no one noticed your clever haiku yet.

    I knew I shouldn't have squandered all my moderator points yesterday...
  • Try to think out of the box a bit more; some of the most innovative inventions can come from seemingly stupid ideas - why do you think that nothing is out of bounds in a brainstorming session - ideas spark ideas.

    Hey, I'm all for creativity, but this really has nothing to do with human vision. It's kinda deceptive if you didn't fully understand the article, because it states that it emulates the human visual process. But that really has nothing to do with helping people see better. All this does is track where things go. What good would that do for a human? Our brain already tracks things well enough. For blind people, it's just learning how to see again (or for the first time in a lot of cases) with the equipment provided, not to mention actually interfacing well enough with the human brain to stimulate it exactly the same way the visual cortex is stimulated by the eyes. I seriously hope that vision systems for blind people are developed some day, but I think this is a totally unrelated field of study.

  • Mea Culpa...

    I misread the techweb article. If anyone is interested there is another (slightly more technical) article here [eetimes.com]

  • Okay, so what they're claiming is that their brand-new, $6 ($50 in total) device can do things which long-standing scientific projects costing $10,000 cannot? Am I the only one who thinks that this sounds somewhat fishy?

    Apparently they have concentrated on a subset of vision problems. The device detects moving objects under varying lighting conditions, which is quite a feat. But there is more to a generic vision system than that. For instance, you often want to know what kind of object you are tracking. Is it a ball or a bird?

  • Hrmm, for robotics enthusiasts, has anybody tried a fisheye vision system? Could be some interesting stuff :-)
  • You can find more information about the chip at http://www.eetimes.com/news/97/971news/vision.html [eetimes.com]. This tells a little more of how the chip actually works.

    provolt

    "I joined the giant collective brain and all I got was this lousy post."
  • This sure sounds like a technological device that could be used to circumvent access control.

    If you live in the USA, don't hold your breath waiting for these.


    ---
  • intelligent air bags that monitor passenger size and traffic congestion monitors; pedestrian detection, license plate recognition, electronic toll collection, automatic parking management, and automatic inspection; and medical uses, including disease identification. The chip could also prove useful in unmanned aircraft, miniature smart weapons, ground reconnaissance and other military applications, as well as in security access using facial, iris, fingerprint, or height and gait identification.

    A hundred proposed applications in 10 industries? I don't know, sounds pretty versatile to me. Of course the chip doesn't do all of that stuff natively - are you insane? But it is the crucial, missing component in an enormous variety of applications, and they're selling a powerful chip they've been developing since the early nineties for $6, and you guys are skeptical because you think they just "stumbled across it"?

  • by SEWilco ( 27983 ) on Friday March 10, 2000 @07:48AM (#1212161) Journal
    Actually the general way in which this level of processing is done has been known for a while. Many years ago signals from frog eyes were being decoded -- signals only from fly-sized moving objects. All this guy did was actually make similar circuitry (several versions, undoubtedly) and figure out how to analyze and use the signals. They've been working on it for a while; 1992 is mentioned, and they first announced devices in 1997.
  • That's right kids, when you wear the new PokeTiara you can find all of the PokeTrainers in the area. Then you can have big Pokefights!

    Never mind the fact that one of these imaging things is in the Pokecrystal on the PokeTiara telling Nintendo whether Pokemon is played more in little groups or big groups. It can even be modified to tell teachers if Pokemon is being played at nap time.

    The idea may be crazy, but I'm sure real company executives (the gov't wouldn't try it when it's cheaper to buy the information from the megacorps; especially after the lawsuits) could come up with crazier ideas.
  • Sigh. I stopped reading after I saw

    "...the devices are modelled exactly on the way the human eye works, based on the work of thousands of previous researchers into how all human eyes function. 'It was remarkably easy to implement,' say the inventors, 'once others laid the groundwork.'

    It's always a good idea to stop reading when you see things that are not there. This is what the article really said:

    ...models the human perceptual process at the hardware level by mimicking the separate temporal and spatial functions of the eye-to-brain system.

    AFAIK "models by mimicking" and "models exactly" are not the same. To write your own interpretation as a citation is really bad manners, BTW.

  • $50 for a reactive vision processing system? Couple that with a cheap (or free) reliable operating system, and cheap networking that has lots of addresses (IPv6), and you could put watching devices on every street corner, heck in every house (you know, for the safety of the children!).

    Why would this be done? What purpose would it serve?

    slightly offtopic:
    If you are fearing the Orwellian nightmare, then I think you can relax. 1984 and the like were books written about the Soviet Union, not our world today. There are far too many ways for people to communicate and get information out that simply weren't present in Orwell's day. And that is exactly why we do not have to fear it.

    provolt
  • "...Couple that with the trillions and trillions of terabytes that would accumuliate from each and every person who had the device..."

    I don't know about you, but I can remember things that happened to me when I was 5 like they just happened yesterday. Granted that humans have a selective memory (mostly subconscious), the human brain would probably be more than enough organic storage to hold the information if it were retrieved by a set time limit......just try to keep those HERFs and EMPs away from your head....

    I can hear it now "WILL YOU STOP TRYING TO DEGAUSS YOUR BROTHER!!!!"

    /ramble

  • ...if the general-purpose hardware wasn't so stupid. Of the millions of transistors on a modern chip, most of them are wasted in maintaining the illusion of sequential operation, while the OS writers go to considerable trouble to create the illusion of parallel operation.

    Furthermore, there are the huge (in terms of transistor count) banks of flip-flops which just sit around most of the time, and the costly layers of cache all working their hardest to maintain the illusion that it is RAM. Meanwhile, software optimizers make sure to access memory sequentially to avoid upsetting this illusion, which would ruin the performance.

    You can justify all this nonsense with the argument that software is written for sequential machines with RAM. It's a circular problem. If somebody would just release a cheap massively parallel system, the programmers would learn to use it efficiently.

    You can make a complete processor in a few thousand transistors (as this guy [ultratechnology.com] has done, though he goes a bit off the deep end...), and you can add a bit (a few K) of high-speed RAM and network them easily enough to make a (dare I say it?) Beowulf cluster on a chip. Each might only run at one tenth the speed of a modern CPU, but you could have hundreds of them for the same cost, giving you bips and gflops for the price of mips.

    It would also make the whole design process a lot easier and faster. One simple processor, repeated hundreds or thousands of times. Every advance in production would bring a direct and proportional improvement in performance, with a tiny added design cost. Forget special graphics or sound processors, just plug in more processor banks like you would add memory today and watch your system fly.

    C'mon hardware guys, we software guys aren't that stupid! We don't need your illusion of a 386!
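
    To put the "lots of simple processors" idea in toy form - illustrative only, since a multiprocessing pool on a PC is obviously nothing like custom parallel silicon - split a frame into strips and let independent workers each run a tiny filter:

    import numpy as np
    from multiprocessing import Pool

    def process_strip(strip):
        # Stand-in for the work one simple "processor" would do:
        # horizontal intensity gradient over its own strip.
        s = strip.astype(int)
        return np.abs(np.diff(s, axis=1, prepend=s[:, :1]))

    def parallel_filter(frame, workers=8):
        strips = np.array_split(frame, workers, axis=0)   # one strip per worker
        with Pool(workers) as pool:
            return np.vstack(pool.map(process_strip, strips))

    if __name__ == "__main__":
        frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        print(parallel_filter(frame).shape)               # (480, 640)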
  • What you should REALLY be worried about is automated face recognition that feeds into big-ass backend database.

    It sounds like your Orwellian nightmare could be just a few years away. Just this morning, Public Radio International's [pri.org] Marketplace [marketplace.org] described a prototype face-recognition system made by TRW and designed for use in cars to allow only "authorized users" to start the engine.

    It doesn't take much of a leap to imagine this system being used for surveillance -- the NSA must be wetting themselves over this.
  • This is actually low-grade tracking hardware by military standards. You gotta think, if they tell us about having a plane with a radar cross-section the size of a bird, they can most likely track a marble-sized object at a couple hundred or thousand miles.

    So just being able to sense movement at shitty resolution and distance wouldn't be that attractive.

  • Apparently they have concentrated on a subset of vision problems.

    Yes, but the list given at the end lists things like face recognition which is even more tricky than general object recognition. AFAIK even the best systems today have a lot of trouble with faces that are at an angle of even 30 degrees from the viewpoint.

    I'm not saying that they don't have a working chip, I'm saying that the article is filled with marketing hype and very little in the way of facts. The reality of the chip's potential is probably nowhere near what the article hints at, even though I'm sure it has a lot of genuine applications.

  • Good points, but sometimes the greatest breakthroughs are made not by those who have been working on something for decades, but by those who just started and try something no one has ever done. Why was this small company able to do it? They simply didn't know it couldn't be done the way they tried.
  • It is possible. Sometimes the best, cheapest solutions are the ones found by a person who has not been stuck in a money-hungry research project. Not that some things don't require a lot of money to develop, but sometimes people get stuck thinking in one way because they have a lot of money riding on it. They are often afraid to try anything different.

    It reminds me of the story in Hackers by Steven Levy of the kids who made a ping pong playing robotic arm when the AI students thought it was impossible. Sometimes it takes a different approach to realize something isn't as hard (or expensive) as people believe.
  • by Anonymous Coward
    What you should REALLY be worried about is automated face recognition that feeds into big-ass backend database. Once the street cameras + database system will be able to identify you by your face and track you from camera to camera as you walk the streets, life suddenly becomes much more interesting. In particular, Darth Vader-ish helmets start looking very attractive.

    The Darth Vader helmets will not help. Face recognition is just a passing phase. The real tracking will be done on your individual style of body movements / body language as you walk, move your arms/legs, bob your head, etc. Everyone will need to ride around in a motorized wheelchair and keep their body rigid, but then they'll require license numbers on such wheelchairs. You cannot beat the loss of privacy. We're all SOL and screwed.
  • by spiralx ( 97066 ) on Friday March 10, 2000 @09:05AM (#1212175)

    I'm sure they've done some great work but my personal belief is that it's been subject to a marketing department's hype machine. The vision system in a primate is one of the most complex parts of the brain and is still not entirely understood by neuropsychologists.

    There are three main pathways from the eye - the magnocellular (connected to the rods mainly and used for brightness and motion detection), the parvocellular (connected to the cones mainly and used for colour determination) and the koniocellular (whose function is less well known). These three pathways feed into the V1 area which acts as a feature detector, then into the V2 area, which detects colour features and movement and then into a variety of different areas including the V3 (shapes), V4 (colour) and V5 (motion and positioning) areas, the parietal lobe, the thalamus and the various inferotemporal and interparietal areas among others still being found.

    All of these different areas seem to have some bearing on vision in its entirety, and show just how complex vision is. As such I think that any claim that a company has suddenly perfected a chip which allows complex visual capabilities is suspect until hard facts and experimentation can prove or disprove the claims.

  • Are you saying that using this technology to try and cure blindness/improve sight is a waste? Try telling that to a blind person.

    Don't get me wrong, I think it would be great for "cars that automatically avoid accidents, robots that navigate through the world like normal people..." too, but I think the concept of restoring sight is much more important (especially to those who aren't lucky enough to be able to see).

    Maybe they could get it to interface with the human nervous system and at least supply limited vision to the unsighted. I remember reading an article in Wired a month or so ago about a guy who was putting an implant into his arm that was going to both send and receive impulses to/from the nervous system. He was going to try and record different emotions to a computer and then attempt to play them back to see if it gave him the same feeling.

    I'm sure all of this is some time off, but I think it'll be very exciting to see where something like this goes. The fact that it is so cheap is great too. Although, I'm sure once the insurance companies and doctors get a hold of it we'll be paying $20,000 for a $6 implant!

    _________________________________________

  • While this won't be letting us get new eyes, one neat thing would be to add this onto a wearable so that one could automatically track things in your field of view(anybody for a game of cups?). Besides defeating sleight of hand, it could let you do the "glowing trail" effect that is used in hockey games and the like(automatic bullet tracking for the SWAT team as well).

    And of course, my favorite, Predator-style targeting reticles ;-)

  • There is a much better article here [eetimes.com] from September, 11 1997. Does the 1999 article say anything new about this chip?
    (There's also a blurb on the media demonstration here [japantimes.co.jp] from the Nov. 17, 1997 Japan Times.)

    -ac
  • by srussell ( 39342 ) on Friday March 10, 2000 @09:49AM (#1212181) Homepage Journal
    This is the sort of technology which we both dread and anticipate. Self-driven cars, (more) intelligent houses, home security, smarter traffic signals... all of the spin-off products from something like this would be great to have around.

    On the other hand, this could also be the basis for technology that tracks where you go and what you do. Under the auspices of controlling crime, criminals could be "flagged" and watched, traffic policing could be automated, etc. Where it gets scary is in who determines what suspicious behavior is, or who qualifies as needing to be watched, or the fact that you are removing the human element from the decision-making process of evaluating a crime.

    In the end, both the citizens and the government want this kind of pervasive, intelligent monitoring technology to be ubiquitous. The difference is that citizens want to be able to turn it off.

  • You can make a complete processor in a few thousand transistors (as this guy (Moore) has done, though he goes a bit off the deep end...),
    (Off topic, but...) Oh, the FORTH chip guy. He's weird, but very competent. He's into things like generating video signals in real time, in software, to avoid needing a video chip. He used to use an interface with only three pushbuttons as input (no keyboard), with which, by suitable manipulation, you could not only operate a menu system, but program in FORTH. It's worth a look just to see how far minimalism can go. It's not all that useful, but if you architect systems, it's worth seeing his approach.
  • Everyone will need to ride around in a motorized wheelchair and keep their body rigid, but then they'll require license numbers on such wheelchairs. You cannot beat the loss of privacy. We're all SOL and screwed.

    Eventually, people will be encased in conical, wheeled vehicles from the moment of birth, for protection reasons.

    The mind will eventually rebel against such treatment, causing irrational hatred and fear of non encased humanoids, and desiring to exterminate them.

    Particular Dr. types.

    George
  • Am I the only one who thinks that this sounds somewhat fishy?

    Absolutely. There are several projects underway (and completed) to do this, and all of them are well into the multi-thousand-dollar range, AFAIK. The silicon visual system constructed at, IIRC, Georgia Tech is supposed to be the most complex VLSI device ever created (or was when their page was last updated ;-). I suspect that you won't be buying one for $50.

    (Incidentally, my favorite hardware vision project is here [umd.edu]. This guy is building most of a primate visual system (retina to cortex) to model attention and tracking. Who needs a silicon retina when you can get a silicon cortex? ;-)

  • Check out the failure of the Connection Machine and you'll see that actually us software guys are too dumb to write good massively parallel software. I do it, and it's hard, especially the debugging, and that's just parallelizing to 64 processors. Perhaps there are paradigms other than the standard imperative programming languages that would make using these architectures more efficient, but as it stands now, it is very hard to use parallel architectures effectively. With the introduction of Intel's Merced, a VLIW architecture, we will see what happens when excellent compiler writers take a crack at a somewhat parallel architecture. The problem is not as easy as you make it out to be, or it would have been done already.
  • i like the way you think.
  • He stuck a whole computer in a mouse [ultratechnology.com], apparently just for the hell of it. Knowing him, it probably wouldn't cost significantly more than an ordinary mouse, either.
  • I believe that's been patented -- look up Bokanovsky's Process.
  • I'm beginning to imagine one HELL of a porn hookup in those eyes!!

    Mike Roberto
    - roberto@apk.net
    -- AOL IM: MicroBerto
  • But, there won't be much improvement in voice synthesis technology, so we'll all sound like we're talking through garden hoses. :)
  • Can I get mine outfitted by the same designer who built that Sony camcorder that looked through clothes?
  • It sounds to me like they're doing analysis about as complex as the frog's eye (see the classic 1959 paper [stanford.edu]). Not enough to equal our processing, but a nice package for an intelligent perceptual peripheral.

    Feel free to buy an evaluation kit and see what experimentation shows. They've had them for two years, and there's mention on the net of a 1998 video describing and showing their technology. If you really like the tech, the article says they're auctioning it soon so you can get the whole package...

  • You can already do that with a video cam and some parts from Radio Shack. No vision processing chip necessary. I had a friend who did this. With the vision processing chip you could have your toy car follow someone automatically, or program it to find your shoes under your bed. Now that would be cool!
    ---
    Zardoz has spoken!
  • by Anonymous Coward
    Simple solution: Adopt a number of different silly walks.
  • actually, I meant it as pure fiction, a satire, so I don't think the quoting rule would apply. I could tell it wasn't structured properly to be totally funny by Slashdot standards: Slashdot abhors subtlety, so I shoulda sprinkled in some smileys and not mixed in a serious point, but I was tired and in a hurry. Also, I was treading on thin ice mentioning one of Slashdot's heroes disparagingly, but I think it was a timely criticism so... sorry it sent you documenting the details of the seeming error.

    I do maintain that it was clever just the same, but I won't hide behind the artist's excuse "if it made you angry, the art was working" because I was hoping to make you laugh. sorry.

  • it was done in the 1970s

    [ c h a d o k e r e ] [dhs.org]
  • Yeah, I know. When I said "functional visual cortex" I meant the one he was born with. And when I said "he needs more connectors in his brain" I meant connectors to the hardware, not additional neurons in his mind...

    [ c h a d o k e r e ] [dhs.org]
  • It seems very dubious. Perhaps their parallel system will allow tracking to be done more cheaply and quickly than existing systems, but tracking and vision in general are still, at a fundamental level, very much unsolved problems.

    The human visual system uses countless contextual, knowledge-based cues to make sense of the world (e.g. take a look at some optical illusions); even a simple task like tracking requires a lot of background knowledge. To claim that their chip mimics the human visual system is, frankly, unbelievable.
  • I never said "using this technology to try and cure blindness/improve sight is a waste" I said it's NOT APPLICABLE! It's a microchip that takes in a video stream, identifies and tracks programmed objects, and outputs numerical data on their motion. Sure, you could conceivably find a way to feed that information in a comprehensible way directly to the human brain, which I guess would be the equivalent of giving somebody "visual sonar" or something, but if you had the technology to interface directly with the brain like that, you'd just have to feed it video.. you don't need a processor in there spitting out motion data... the brain does that. This new chip mimics the part of the brain that does this, not the actual sight organs.

  • I'm very excited at this bit of news, especially given my interests in the fields of computers and bionics. It's about time these fields started developing true breakthroughs such as this "near-to-human" vision system.

    I believe this is one of the first steps to a world similar to what William Gibson describes in his novels, where humans and technology interface in a whole new way. With these types of advances a whole slew of ethical and privacy issues will develop, as they do with any revolutionary technology, but I believe that in the end the benefits will far outweigh the drawbacks.

    Now all that is needed is for some other emerging technologies from varied fields, such as nerve re-growth and computer miniaturization, to combine to produce things like replacement/upgrade body parts.

    Personally, I can't wait for the first brain-embedded computers to emerge. I'd be willing to be one of the first human testers of such technology.

    Any comments/ideas?

  • If it is really capable of what the designers are claiming, it will be really cool.
    Let's just hope the military or the government don't go and buy it up; if they did, it would disappear from the public domain completely, and forever.
  • I am not sure this system would contribute to loss of privacy. From what I can make of the article, it is just a motion tracker. It won't pick your face out of a crowd and alert someone.
    It is already possible to monitor people with hidden cameras. This system could only track someone's movements.
    What happens when the target turns around, obscuring his/her face, or whatever part was being tracked? Will it compensate by locking onto a different part of the object/person? What happens if the target becomes obscured?
    I guess it could be used to follow someone walking down an empty street.
    The article makes this system sound very capable, but it would be desirable for it to be able to compensate for the above problems (I doubt it would be able to deal with all of those mentioned) for it to be useful to follow someone automatically, anywhere they go.
  • Gotta wonder, a couple of these operating in tandem would give you a great depth perception, visual coverage of movement... wonder if it would make for more accurate smart bombs and ABM devices... Ronny Raygun Rules!
  • From the nerd point of view, yes. But from the jock view, I think it would be cool to go to a bally club or something and look through all the clothes of the hot-looking chicks. The wonders of technology.
  • This Yahoo story [yahoo.com] says the auction of the technology is under way now.
