Robotics Science

Humanoid Robots For the Next DARPA Grand Challenge?

HizookRobotics writes "The official announcement should be out very soon, but for now here are the unofficial, preliminary details, based on notes from Dr. Gill Pratt's talk at DTRA Industry Day: The new Grand Challenge is for a humanoid robot (with a bias toward bipedal designs) that can be used in rough terrain and for industrial disasters. The robot will be required to maneuver into and drive an open-frame vehicle (e.g., a tractor), proceed to a building and dismount, enter through a locked door using a key, traverse a 100-meter rubble-strewn hallway, climb a ladder, locate a leaking pipe and seal it by closing a nearby valve, and then replace a faulty pump to resume normal operations — all semi-autonomously with just 'supervisory teleoperation.' It looks like there will be six hardware teams developing new robots and twelve software teams using a common platform."


Comments Filter:
  • Re:That's ambitious (Score:4, Informative)

    by Animats ( 122034 ) on Friday April 06, 2012 @12:25PM (#39598439) Homepage

    Same goes for vision processing. The reason most robots today use LIDARs as their primary "vision" sensor is that image processing is very slow and not very good. With a 3D LIDAR you can do mapping, object detection and recognition, people detection, grasp planning, obstacle avoidance, etc., all very easily in polynomial time. Most of these tasks are computationally intractable using images, if they're possible at all (for example, nighttime operations).

    10 years ago, that was true. Not any longer. Take a look at DARPA's current robot manipulation project. [youtube.com] Watch a vision-guided robot with two three-fingered hands put a key in a lock, turn the key, and open a door. That's being done with stereo vision and force feedback. The gold-colored device with two lenses is a Point Grey stereo vision device.
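For context on how a stereo unit like the one he mentions recovers range: depth follows from triangulation on the disparity between the two lenses, Z = f·B/d. A minimal sketch; the focal length, baseline, and disparity numbers below are made up for illustration, not specs of the Point Grey device:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo triangulation: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: lens separation in meters;
    disparity_px: horizontal pixel shift of a feature between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 12 cm baseline, 20 px disparity
print(stereo_depth(700, 0.12, 20))  # roughly 4.2 meters
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why small matching errors hurt most at long range.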

    Much of early vision processing is done in custom hardware or GPUs now. It's not "computationally intractable". It's fairly expensive computationally, because the amount of work per pixel is high. But it's not exponential.
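The complexity claim is easy to make concrete: a filtering pass touches each pixel a fixed number of times, so cost scales linearly with pixel count (times the kernel area), not exponentially. A minimal sketch in plain Python; the 3x3 blur kernel and toy image are illustrative only:

```python
# A k x k filter pass costs O(W * H * k * k): linear in pixel count,
# with a constant factor set by the kernel size -- expensive, not intractable.
def convolve(img, kernel):
    h, w = len(img), len(img[0])
    k = len(kernel)
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):          # every interior pixel visited once
        for x in range(r, w - r):
            acc = 0.0
            for dy in range(k):        # fixed k*k work per pixel
                for dx in range(k):
                    acc += kernel[dy][dx] * img[y + dy - r][x + dx - r]
            out[y][x] = acc
    return out

blur = [[1 / 9.0] * 3 for _ in range(3)]                # 3x3 box blur
img = [[float(x) for x in range(5)] for _ in range(5)]  # toy 5x5 gradient
print(convolve(img, blur)[2][2])  # mean of the 3x3 patch centered at (2,2)
```

The same per-pixel independence is what makes this kind of early vision map so well onto GPUs: every output pixel can be computed in parallel.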

    When I ran a DARPA Grand Challenge team a decade ago, we relied on LIDAR too much and on vision not enough. The winning teams relied almost entirely on vision when they were going fast - the LIDARs didn't have the range or angular accuracy to read the terrain out to the stopping distance. What they got from vision was basically "distant part of road looks like near part of road - OK to go fast if the near part is good," obtained from a classifier system. So they could out-drive their LIDARs. We couldn't exceed 17 mph.
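The "near road predicts far road" trick he describes can be sketched as a self-supervised classifier: fit a color model to a nearby patch already verified as drivable (e.g., by LIDAR), then score distant pixels against it. This is a hedged illustration; the per-channel mean/variance model and the threshold are assumptions for clarity, not what any Grand Challenge team actually shipped:

```python
# Fit a simple per-channel mean/variance color model to pixels from the
# near-road region (assumed safe because short-range sensors verified it).
def fit_road_model(near_pixels):
    n = len(near_pixels)
    mean = [sum(p[c] for p in near_pixels) / n for c in range(3)]
    var = [sum((p[c] - mean[c]) ** 2 for p in near_pixels) / n for c in range(3)]
    return mean, var

# Score a distant pixel: accept if every channel lies within k standard
# deviations of the near-road mean (k = 3.0 is an illustrative threshold).
def looks_like_road(pixel, model, k=3.0):
    mean, var = model
    return all((pixel[c] - mean[c]) ** 2 <= k * k * (var[c] + 1e-6)
               for c in range(3))

near = [(120, 118, 115), (122, 119, 116), (118, 117, 114)]  # grayish asphalt
model = fit_road_model(near)
print(looks_like_road((121, 118, 115), model))  # similar color -> True
print(looks_like_road((40, 160, 40), model))    # green vegetation -> False
```

The payoff is exactly what the comment states: the camera extends the "safe to drive" judgment well beyond LIDAR range, so speed is no longer capped by LIDAR stopping distance.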
