A Robot Learns To Fly

jerkychew writes: "For those of you who read my last post about the robot escaping its captors, there's more news regarding robots and AI. According to this Reuters article, scientists in Sweden created a robot that essentially 'learned to fly' in just three hours. The robot had no preprogrammed instructions on how to achieve lift; it had to deduce everything through trial and error. Very interesting stuff."
This discussion has been archived. No new comments can be posted.

  • by min0r_threat ( 260613 ) on Friday August 16, 2002 @05:57AM (#4081531)

    Not only do we have to watch out for bird crap raining down on us, we now have robot excrement to worry about as well.

    • bird crap raining down on us, we now have robot excrement to worry about as well.

      With a few genetic algorithms, perhaps it can be trained to make realistic bird-poop also.

      Come to think of it, I vaguely remember a story about a robot garden slug-eater that could actually digest slugs for fuel. I am sure it had "byproducts". A little merger here, and voilà, your dream (or nightmare) comes true thanks to modern science.
  • by Anonymous Coward
    They say the robot had no concept of lift or how to achieve it, but given that it could only 'twitch' its wings, it isn't really an AI-related feat to twitch them faster and faster until... hey, I'm flying!

    It just seems to me like AI through logical progression, which I'd be tempted to not call AI...

  • Somehow... (Score:5, Funny)

    by madajb ( 89253 ) on Friday August 16, 2002 @06:01AM (#4081541)

    The fact that it "cheats" somehow restores my faith in robotkind....

  • Well.. (Score:5, Funny)

    by squaretorus ( 459130 ) on Friday August 16, 2002 @06:01AM (#4081542) Homepage Journal
    A robot has taught itself the principles of flying -- learning in just three hours what evolution took millions of years to achieve

    Well. Assuming the birds were TRYING to fly, knew what lift was, and already had the equipment (i.e. wings) to achieve this.

    This brings an image of stupid birds sitting around flapping randomly thinking "FUCK - I'm SURE this should fucking WORK! - Bastards - OOps, I just fell over to the left - does that mean my right wing was flapped right???? - Hey - John! WHAT DID I DO THEN????"
    • Re:Well.. (Score:5, Funny)

      by madajb ( 89253 ) on Friday August 16, 2002 @06:04AM (#4081548)

      Do you think it would have learned faster if they'd taken it up to the roof, and thrown it off?

      " sensors indicate that I am falling at a rapid rate. Maybe I ought to do something about that. I'll try flapping this thing. Nope. How about together..that seems to be wor...."

    • That reminded me of a quote from "Chicken Run":

      Rocky: You see, flying takes three things: Hard work, perseverance and... hard work.
      Fowler: You said "hard work" twice!
      Rocky: That's because it takes twice as much work as perseverance.
    • A robot has taught itself the principles of flying -- learning in just three hours what evolution took millions of years to achieve

      I guess that the "Special Creation" theories no longer fly (ah-thankyou).

      Seriously... it took _humans_ a pretty long time to figure out flight, heck, even gravity (and for some reason we want AI to be like us?).

      While I'm amazed at anything that learns, which isn't carbon based, I wouldn't start comparing this to actual life. When robots actually take over, smelt metals for more robots and develop interstellar travel you'll get a wow from me.

      (BTW if this sort of thing scares you remember that the commies want to purify your precious bodily fluids!)
      • Re:Well.. (Score:3, Insightful)

        This evolution claim is so much bullshit. The robot already had wings, and was given the instructions on how to move them. A more accurate comparison would be when a bird finally decides to leave the nest--how long does it take to figure out how to fly then? Certainly not 3 million years. I don't know exactly how long it takes, but I'd guess that a bird does in a matter of hours what this machine did.

        If the scientists threw together a bunch of spare parts, and watched as a robot magically constructed itself, decided a useful thing to do would be learning to fly, and then took off--well, that could be compared to millions of years of evolution. And you know what? It'd never happen. Not without some "divine" intervention on the part of the scientists.
      • A robot has taught itself the principles of flying -- learning in just three hours what evolution took millions of years to achieve
        I guess that the "Special Creation" theories no longer fly (ah-thankyou).
        This research has absolutely no relevancy to evolution. Besides, if it did, it would actually help design proponents. The researchers designed the robot and software, gave it the necessary physical tools for flight (or at least flapping), gave it a goal to produce maximum lift, and provided feedback whether its actions were progressing towards the goal or not.
    • Re:Well.. (Score:4, Insightful)

      by Xaoswolf ( 524554 ) on Friday August 16, 2002 @07:43AM (#4081738) Homepage Journal
      "This tells us that this kind of evolution is capable of coming up with flying motion,"
      However, the robot could not actually fly because it was too heavy for its electrical motor.

      This thing didn't even learn to fly, it just flapped its wings. And what kind of evolution did it go through? It didn't pass on different genetic information until a new trait emerged to form a new race; it just flapped its wings.

      • Re:Well.. (Score:4, Insightful)

        by tlotoxl ( 552580 ) on Friday August 16, 2002 @08:37AM (#4081862) Homepage
        It may not have physically passed on its traits to any offspring, but from the sounds of it the program did internally pass on traits to the next generation (ie iteration of the program) when those traits proved to be successful. That's how an evolutionary/genetic algorithm works, and while it may not be evolution in the biological sense of the word, it clearly models the biological process.
        • Re:Well.. (Score:3, Insightful)

          by Xaoswolf ( 524554 )
          I see that as simply learning: the robot learned, changed how it thought. When I learn a new math equation, I don't say I underwent evolution, I say that I learned a new math equation. Neither the purpose nor the form of the robot changed during the experiment, and evolution is exactly that, a change. The robot had one goal programmed into it, to obtain maximum lift, and it had one form, a box with wings and legs. Had the robot changed its programming to where it could drive a car, or had it actually altered its physical form, then I could see calling it evolution.
      • Re:Well.. (Score:2, Informative)

        by audiophilia ( 516688 )
        The article is very light on details, but I assume that this robot is of a variety known as "living robots" or BEAM robots. These robots do not use digital computer components like most people would probably assume. They use simple logic circuits to achieve their goal. And they DO learn in a very limited sense. They have a specific goal in mind (some learn to walk, some learn to seek out light to power their solar cell), and through trial and error they achieve that goal.
  • Hmm (Score:2, Funny)

    by af_robot ( 553885 )
    "However, the robot could not actually fly because it was too heavy for its electrical motor."

    One small step for robot, one giant leap for robotkind
  • very interesting (Score:4, Interesting)

    by shd99004 ( 317968 ) on Friday August 16, 2002 @06:09AM (#4081557) Homepage
    I especially liked that it tried to cheat by standing on its wingtips and the like. I would like to see something else though. What if we build lots of small generic robots, say with only wheels to move around? Then on the floor there could be more components that the robots can attach to themselves, giving them legs, wings, arms, eyes, ears etc., and then we give them all different objectives, for example to survive, escape, learn from others, etc. Could be interesting to see if it would evolve into some kind of robot society where they all evolve different abilities and so on.
    • by Alan Partridge ( 516639 ) on Friday August 16, 2002 @06:39AM (#4081622) Journal
      that's rubbish - what we really need is to give robots the ability to turn into cars and F-14s and then join together into a kind of super, Optimus-Prime type of device.
    • Re:very interesting (Score:2, Interesting)

      by AlecC ( 512609 )
      The interesting question about your proposal is the goal setting. In the Swedish research, they set the system a very simple goal - generate lift using the hardware provided. And they showed that an evolutionary algorithm actually achieved that, including exploring unexpected pathways (the cheats). But it is a long, long way from such simple, one-dimensional goal seeking to the multi-dimensional goal seeking required to make a working community/society. Particularly important, in my opinion, and unexplored in this scenario, is finding good compromises between conflicting goals, and particularly between long-term and short-term goals.

      Actually, I think research of this sort has gone a lot further in simulated environments than these Swedes have gone. The different thing about this research is that they have done it with an object in the physical world. This should please those who distrust simulation, but for the average /.er it probably only confirms what we have known for a while - genetic algorithms are a nifty solution to a certain class of problem.
      • Yeah, basically they showed us it's possible in yet another application. It could of course be used for many purposes in many different physical objects. I was just now thinking of planetary exploration on, say, Mars. We could send rovers, aeroplanes, balloons, and relaying satellites to Mars to operate in that environment. They could program them with everything we know about that environment. These bots would still encounter new situations, "marsquakes" (if that happens there), power failure, sand storms... so they would have to be self-learning in a way, to handle it better next time. Also, every time one of them learns something new, it would relay the info to the others. Maybe they could even help each other out or cooperate if needed. I don't know how much more research would have to be done to do this, but the idea makes sense to me...
    • Re:very interesting (Score:2, Interesting)

      by JPriest ( 547211 )
      Or sort of a robowars unleashed, where you place them in a room full of weapons that they are programmed to use, then let them fight it out quake style.
      • That is something I would watch, definitely :)
        I'm watching these RobotWars shows sometimes, and I'm always imagining something similar but with cooler weapons and AI instead. I am pretty sure it will come as soon as it's possible...
  • The moment the robot asks for a hamburger, hooks up to the net and orders a ticket for the next flight to wherever, then they are getting somewhere with AI and simulated evolution..
  • Cool (Score:3, Funny)

    by TheCrunch ( 179188 ) on Friday August 16, 2002 @06:13AM (#4081566) Homepage
    Imagine a day where engineers build cool robots, upload the generic learn-to-do-stuff-with-your-limbs program, leave it for a week or so to train up and get optimum calibration, then have it copy its program onto subsequent batches.

    I picture a robot aerobics class.. heh. But if anybody asks, I picture a robot boot camp.
    • Re:Cool (Score:3, Informative)

      by mshiltonj ( 220311 )
      Imagine a day where engineers build cool robots, upload the generic learn-to-do-stuff-with-your-limbs program, leave it for a week or so to train up and get optimum calibration, then have it copy its program onto subsequent batches.

      Read _The Practice Effect_ by David Brin. Sci-Fi. It's not a deep read, but entertaining. In an alternate universe where physics are different, the more you do something, the better you get at it. For instance, if you tie a stone to the tip of a stick and pound it against a tree, eventually the stick-stone will turn into a diamond-tipped axe.

      It's a stretch, yes, but it's a fun read. You'll love it when the robot (from our world) reappears at the end of the book, after having 'practiced' what it was told to do, unseen, for most of the story.
  • Sensationalism (Score:5, Insightful)

    by Mika_Lindman ( 571372 ) on Friday August 16, 2002 @06:14AM (#4081570)
    LONDON (Reuters) - A robot has taught itself the principles of flying -- learning in just three hours what evolution took millions of years to achieve, according to research by Swedish scientists published on Wednesday.

    Ridiculous to compare a prebuilt robot to the evolution from some dinosaur to flying dinosaur (also known as bird). This really is tabloid headlining at its purest.
    And the robot didn't even fly, it just generated some lift!
    It's like saying humans can fly when they generate 1 N of lift flapping their arms.

    But it's great to see how self-learning robots and programs will start evolving now. I guess pretty soon computers and robots will be able to evolve faster on their own than when developed by humans.
    • You missed the best part. The robot did diddly squat except test inputs using sensors.

      Krister Wolff and Peter Nordin of Chalmers University of Technology built a robot with wings and then gave it random instructions through a computer at the rate of 20 per second.

      So they built a robot, gave it sensors, then said, which works best, this? how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?how about this?

      Big whoop. So they built a robot that can do a bubble sort.
    • But it's great to see how self-learning robots and programs will start evolving now. I guess pretty soon computers and robots will be able to evolve faster on their own than when developed by humans.
      Did you even read the article? In no way was this robot self-learning. They provided random instructions to it, a goal of producing maximum lift, and feedback whether it was progressing towards the goal or not. This robot would not have "evolved" at all without human input.
  • It's fed a set of instructions, apparently 20/sec, and is asked to remember which one got it the highest.

    Execute instruction
    Lift higher than others?
    YES - Remember this instruction
    NO - Get next instruction
    Repeat until no instructions
    keep repeating successful instruction

    Seems pretty basic to me and hardly learning, just a new spin on analysing the efficiency of algorithms.
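    A minimal sketch of that greedy remember-the-best loop, with a made-up lift function standing in for the robot's sensors (the article gives no details on either, so both are assumptions):

```python
import random

def measure_lift(instruction):
    # Hypothetical stand-in for the robot's lift sensor: lift peaks at
    # command 7 and falls off on either side. Purely an invented fitness.
    return 10 - abs(instruction - 7)

def best_instruction(stream):
    best, best_lift = None, float("-inf")
    for instruction in stream:                    # Execute instruction
        lift = measure_lift(instruction)
        if lift > best_lift:                      # Lift higher than others?
            best, best_lift = instruction, lift   # Remember this instruction
    return best                                   # Keep repeating the winner

# Feed it a stream of random wing commands, as the experimenters did:
stream = [random.randrange(20) for _ in range(200)]
winner = best_instruction(stream)
```

    Whether this counts as "learning" is exactly what the thread is arguing about.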
    • by Jugalator ( 259273 )
      hardly learning, just a new spin on analysing the efficiency of algorithms

      Well, analysing the efficiency of algorithms and discarding the bad ones seems pretty much like "learning" to me.

      Sure, humans aren't built to work efficiently with algorithms like robots do, but we learn from mistakes, which one could call "poor algorithms with an undesired result". Humans don't exactly choose randomly between ways to do things - we do things the way that succeeded earlier.
  • Next thing we know, they'll be controlling the nukes, building Skynet, and killing all humans with Schwarzenegger lookalikes.

    You're all doomed, I warned you!

    I'll just get to packing my stuff, moving to a remote cabin in Montana and keeping a close eye on my refrigerator (I know it hates me, it keeps melting my ice cream).

  • what evolution took millions of years to achieve
    Well, at least evolution succeeded in making birds that weren't too heavy for their own wings...

    Seems to me that this project was not really as exciting as they would like us to believe...
    • Exactly. It never actually flew. From the article -
      However, the robot could not actually fly because it was too heavy for its electrical motor.

      It merely succeeded in figuring out the best series of motions to get maximum lift. In any case, that's all the robot had to do - try to fly. It didn't have to worry about predator avoidance, finding food, defending a territory, or mating.
      • It merely succeeded in figuring out the best series of motions to get maximum lift.

        It's even worse. It didn't even try to learn to fly. It tried to get to the best combination of movements that its creators thought was "flying".

        See the difference? A real evolution would consist of a robot that was actually light enough to be able to fly. And then it could measure its own success by how much lift it got.

        Since the different instruction lists that were fed and tested inside this robot weren't checked against "How high will it let it fly" but only against "How close does it look to what we scientists know that flying should look like", this experiment is rather worthless.

        Maybe the evolutionary algorithm found a better way to fly with its wings than the standard way birds do. And it was just thrown away by the control program because it thought "this doesn't look to me like flying is supposed to look", since it wasn't tested in real life with a robot body that could have proved that these new movement combinations actually work.

        Really a shame. Could have made a really interesting study.

        • Yes, unfortunately, to simulate REAL evolution would require far more powerful computers than what we have now.

          So how about we just simulate parts of it for now, ok?
    • That's the next step. The damn thing figuring out that it's too heavy and:

      1) doing everything it can to lose weight so as to be able to do what society is asking of it.
      2) after long anorexic periods jumping off a bridge, inventing the concept of gliding half way, slowly setting down on the water and then taking digital anti-depressants until it shuts down.
    • No, of course it would be much more impressive if the robot started exchanging its metal parts with feathery wings, perhaps hunting some birds to get them. But also much more unrealistic.
    • Well, at least evolution succeeded in making birds that weren't too heavy for their own wings...

      You can consider the poor bot some kind of turkey :o)

    • Well, at least evolution succeeded in making birds that weren't too heavy for their own wings...

      Apart from Penguins, Emus, Ostriches, Kakapos, Cassowaries, Kiwis, etc...
  • Impressive, but... (Score:5, Insightful)

    by altgrr ( 593057 ) on Friday August 16, 2002 @06:26AM (#4081597)
    Rather than comparing this to millions of years of evolution, perhaps it would be better to compare it to a bird just old enough to physically be able to fly.

    The robot was physically equipped with all it needed to 'fly'; it was also equipped with all the wires in the right places. The fundamental difference between robots and living organisms is in the thinking: a newborn bird has to forge new synapses in its brain; this robot was designed with the purpose of 'learning to fly', so was given all the appropriate connections; it is just a matter of working out what sequence of events is required. Robots inherently have some form of co-ordination; birds, on the other hand, just like any other animal, have to develop such skills.
    • by lovebyte ( 81275 )
      birds, on the other hand, just like any other animal, have to develop such skills.

      I don't think you are quite correct here. Evolution has done wonders with the brain and pre-wired some instructions. Birds, for instance, learn very quickly how not to crash! And there must be some pre-wiring describing how to use air currents.
      • Exactly - birds are born with certain knowledge about such things. Just like they're born with an anatomy they can use to fly with.

        Many "lower" animals are born with such knowledge required for their survival.
      • Some animals can walk immediately after birth as well (otherwise they would be doomed). I don't think birds can fly right away, so they may have to learn some things before they can fly. Just as humans aren't born knowing how to walk.
        • Don't forget that when birds hatch they still have a fair amount of physical development (muscle strength, bone strength, flight feathers) before they are physically capable of flying.

    • I think that it is currently believed that birds are born knowing how to fly already. They don't fly right away because their wing muscles aren't strong enough.

      They do need to learn to fine-tune their flying, however; every bird's body is going to be a bit different, of course.

  • As everyone knows, it all depends on what software there was in the system. If the starting point was a program that contained instructions for trying to move the "wings", seeing which instruction caused the most lift, and tuning the algorithm based on that, I don't think there's anything fancy in it. If that's the case, this could have been done back when the moonlander game was first written :) I mean, it all depends on how dedicated to this exact "learning purpose" the SW in that robot was - or was it just a self-optimizing algorithm? Are there any more details on the software somewhere?
  • by Kredal ( 566494 ) on Friday August 16, 2002 @06:38AM (#4081619) Homepage Journal
    "Hey guys, look! We stood on really tall stilts, does this mean we're flying?"

    That would have been something to see.

    The robot stands proudly on its wings, and tells the scientists "Look at me, I generated maximum lift, and I don't have to exert any force at all. Oh, and from here, I can see the mouse is climbing over walls to get to the cheese without going through the maze. You humans are so stupid!"
  • I am not sure if you can call what the robot was doing 'flying'. It was essentially just flapping its arms in the most effective way possible with whatever wing-like appendages it was given.

    Now the cheating - that is the interesting part. When they have the algorithm down so that the bot hobbles out the door and purchases a ticket at the airport, then they will have a winner.
  • check out AUVSI's Aerial Robotics Competition
  • Finally! (Score:5, Funny)

    by jstockdale ( 258118 ) on Friday August 16, 2002 @06:41AM (#4081627) Homepage Journal
    Cheating was one strategy tried and rejected during the process of artificial evolution -- at one point the robot simply stood on its wing tips and later it climbed up on some objects that had been accidentally left nearby.
    But after three hours the robot discovered a flapping technique
    However, the robot could not actually fly because it was too heavy for its electrical motor.
    "There's only so much that evolution can do," Bentley said.

    Finally we understand the dodo's place in evolution.
  • by Ben Jackson ( 30284 ) on Friday August 16, 2002 @06:42AM (#4081630) Homepage
    Here's what I did to play around with breeding algorithms from small building blocks:

    Define a very simple stack-based language. The stack only holds boolean values; when empty it pops an endless supply of "false" values, and when full it discards pushes. Choose some control flow opcodes:

    NOP, SKIP (pop; if true, skip ahead a fixed amount), REPEAT (pop; if true, skip back a fixed amount), NOT, RESET (clear stack, back to beginning)

    and some opcodes related to your environment (mine was a rectangular arena):

    GO (try to move forward one step, push boolean success), TURN (90 degrees clockwise), LOOK (push boolean "do I see food ahead?"), EAT (try to eat, push boolean success)

    Pick a stack size (this has interesting consequences, as some of my organisms learned to count by filling the stack with TRUE values and consuming them until they hit the endless supply of FALSE when empty) and a code size. Force all organisms to end in your RESET op. Generate them randomly and run them in your simulator (I did 20-50 at once letting each one run a few hundred instructions in a row). Evaluate fitness (in my case, how well fed they were) and breed them. You can combine the functions in lots of ways. Randomly choose opcodes (or groups of opcodes) from each, possibly with reordering or shifting. Introduce some mutations.

    Once you get something interesting, try to figure out how it works. This can be the hardest part -- my description above produced many variations that were only 8-10 instructions long before an unavoidable RESET opcode, and they could search a grid with obstacles for food!
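    A rough, cut-down interpreter for the language described above might look like the following sketch. The arena and food are reduced to a step counter (only GO of the environment opcodes is modelled), and the stack size and jump distance are assumed values:

```python
STACK_SIZE = 8   # assumed; the original post leaves this as a parameter
JUMP = 2         # the "fixed amount" for SKIP/REPEAT (assumed value)

class Machine:
    """Interpreter for the tiny boolean-stack language described above."""

    def __init__(self):
        self.stack = []
        self.steps = 0   # stands in for moves made in the arena

    def pop(self):
        # An empty stack pops an endless supply of False values.
        return self.stack.pop() if self.stack else False

    def push(self, value):
        # A full stack silently discards pushes.
        if len(self.stack) < STACK_SIZE:
            self.stack.append(value)

    def run(self, program, fuel=100):
        pc = 0
        while fuel > 0 and pc < len(program):
            fuel -= 1
            op = program[pc]
            if op == "NOT":
                self.push(not self.pop())
            elif op == "SKIP" and self.pop():
                pc += JUMP                 # skip ahead a fixed amount
            elif op == "REPEAT" and self.pop():
                pc = max(pc - JUMP, 0)     # skip back a fixed amount
                continue
            elif op == "RESET":
                self.stack.clear()         # clear stack, back to the start
                pc = 0
                continue
            elif op == "GO":
                self.steps += 1            # movement always succeeds here
                self.push(True)
            pc += 1                        # NOP and failed pops fall through
        return self.steps
```

    Forcing every program to end in RESET, as described, makes each organism loop until its instruction budget runs out, which is what the `fuel` counter bounds here.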

  • MAIN
    target = 72;

    guess = rand();
    while (guess != target)
        guess = rand();

    print "GOT IT!"


    Artificial Intelligence researcher creates computer program that comes up with the number 72.

  • The objective of the learning algorithm was to achieve maximum lift while attached to two vertical poles. So the headline should be: 'Robot learns to achieve maximum lift by flapping wings while attached to two poles'. I think keeping balance, avoiding stall, etc. are much harder to achieve.
  • Seem to be a lot of folks who aren't very impressed by this. I'll admit that the headline is a little over the top, but the story is still interesting -- and fraught with interesting potential.

    For example:

    StarBot: How long before it learns to make a grande latte half-skim/half 30 weight?

    Bouncebot: How long before it learns not to turn its back on the loud drunk in the corner?

    Lobot: How long before it decides it really doesn't want to learn anything, just sit around and smile.

    Congressbot: how long before it learns that working tirelessly for your constituency is its own reward, whereas lying for assorted interest groups is money in the bank? Note: This may be a special case of the Lobot.
  • ... did the robot feel happy about its achievement?

  • by clickety6 ( 141178 ) on Friday August 16, 2002 @08:20AM (#4081804)
    "It was amazing," said Dr. Heinrich Hienrichson, "Before I knew it, the robot had stolen my credit card, set up an account on Orbitz and booked two airline tickets to Mexico. Now the robot has escaped and my toaster appears to have gone misisng as well..."
  • by CompVisGuy ( 587118 ) on Friday August 16, 2002 @08:43AM (#4081883)
    There are lots of posts from people who don't really get what these guys did. I don't think they made a particularly amazing achievement, but many slashdotters out there don't seem to understand the science behind it (the Reuters article was awful, secondhand from New Scientist, which is often poor at presenting the basics).

    What the researchers did was to build a robot that had wings and motors for manipulating them. These could be controlled by a computer. But instead of writing an explicit program telling the robot how to fly, they got the robot to learn how to fly. They did this using some sort of Genetic Algorithm.

    Basically, what a GA does is to generate a large population of possible solutions to the problem, then evaluate how good each one is (i.e. measure the lift each one creates in this example) and then to breed good solutions to create successive generations of possible solutions which are (hopefully) better than the previous generations.

    Then, once some criterion is met (for example, once the average fitness of your population doesn't change much for several generations), you then select the best solution found so far as being your answer.

    In mathematical terms, GAs are stochastic methods of optimising a function; they are typically used when solving the problem with an analytic method would be problematic (e.g. it would take too long).

    So it's not really surprising the robot learned to 'fly' -- the researchers just managed to find an optimal sequence of instructions to send to the wings.

    The next step would be to get a robot to learn how to hover without the aid of the stabilising poles; then fly from one location to the other; then fly in a straight line in the presence of varying wind etc.
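    The generate/evaluate/breed loop described above can be sketched in a few lines. Everything concrete here is invented for illustration: individuals are bit strings of wing commands, and "lift" simply rewards alternating up/down strokes, whereas the real experiment scored instruction sequences against a physical lift sensor:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def lift(seq):
    # Toy fitness: one point per adjacent pair of differing strokes,
    # so a perfectly alternating sequence scores len(seq) - 1.
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def evolve(pop_size=40, length=12, generations=60):
    # Generate a random initial population of candidate wing programs.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lift, reverse=True)
        parents = pop[:pop_size // 2]            # breed the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # occasional mutation
                child[random.randrange(length)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lift)

best = evolve()  # fitness climbs towards the maximum of length - 1
```

    Keeping the fitter half unchanged each generation (elitism) is one design choice among many; the point is only that selection plus crossover plus mutation steadily improves the best candidate without anyone programming the answer.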

    What the research does do is to lend credence to the argument that insects and birds could have evolved, rather than having been 'designed' by some sort of a God.

    • You forget... The robot had wings... In this case, the robot's god put them there.

      And if the robot were to have built wings from available parts, that wouldn't count either, as even we humans have learned to work around the limitations of our bodies.

      Trial and error is an excellent learning tool; look at how much toddlers rely on it... I cry, I get food, etc.

    • Well, I strongly disagree that just because they used a GA this is not an impressive achievement. When I first read this article and saw the keyword "learned", I started scanning the article for the fitness function... ahh, "generate maximum lift".

      This is impressive because it demonstrates a particularly successful marriage between a design and a fitness function for the design.

      Finally, re: evolution vs. intelligent design; who specified the fitness function ;)?
    • You are right about the GA, but this is closer to Genetic Programming than GA. GA evolves the 'answer', but GP evolves the 'solution' to the answer.

      There's a difference: GA is much easier to program than GP and is usually much faster. An example of a good candidate problem for GA would be the travelling salesman problem, while GP would evolve a method to solve a problem.

      While fundamentally pretty similar, GP is slightly more complicated. You have to deal with issues such as program overgrowth during evolution, which doesn't happen in GA.

      I took a class on GA and GP while in college, very interesting stuff. :)
    • GAs are one approach among many for searching a space of possible solutions for what will hopefully be a global optimum. The space is typically a set of N-tuples of integers (though it could be reals) where N is often large. A fitness function maps N-tuples to some sort of real-valued figure of merit.

      In biological evolution, the figure of merit is approximately reproductive success. Evolution works by favoring genes that get themselves copied a lot.

      Exhaustive search would be the obvious way to find a true global maximum in this sort of problem, but often the cost-per-tuple of evaluating the fitness function is non-trivial. In cryptographic key searches, the fitness function is a Dirac impulse which is one at the correct key and zero everywhere else, so near-misses don't help you to find the global maximum. (This isn't strictly true; many crypto algorithms have classes of weak keys, but that's a diversion for another time.)

      Another partial search algorithm, aside from GAs, is simulated annealing. Yet another is the backpropagation algorithm used to find good sets of weights for neural nets.

      As the crypto example illustrates, these partial search algorithms rely on gradients of the fitness function, where near-misses have higher fitnesses than wild-ass misses. One of the things one does when using a GA as a design tool is therefore to try to select fitness functions with gently sloping gradients throughout the space of solution tuples.
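      For contrast with GAs, a minimal simulated-annealing sketch over the same kind of bit-string space might look like this. The one-max fitness (count the 1 bits) is a toy choice, picked precisely because it has the gently sloping gradient described above:

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def fitness(bits):
    # Smooth toy fitness: every near-miss scores close to its neighbours,
    # unlike the Dirac-impulse fitness of a cryptographic key search.
    return sum(bits)

def anneal(n=16, steps=2000, t0=2.0):
    state = [random.randint(0, 1) for _ in range(n)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # linear cooling schedule
        neighbour = state[:]
        neighbour[random.randrange(n)] ^= 1      # flip one random bit
        delta = fitness(neighbour) - fitness(state)
        # Always accept improvements; accept downhill moves with
        # probability exp(delta / t), which shrinks as the system cools.
        if delta >= 0 or random.random() < math.exp(delta / t):
            state = neighbour
    return state

best = anneal()  # typically ends at or near the all-ones optimum
```

      Swap in a needle-in-a-haystack fitness and the same loop degenerates to random guessing, which is the crypto point made above.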

  • i really dig the idea of genetic "learning" simulations. they start with nothing, and eventually can come up with all the same things animals do - including different gaits for walking and running, etc.

    this is especially cool, in that they've not only done this in a simulation, but with a real nuts and bolts 'bot. how easy it seems to me now to ship out robots with very little programming, but a quick learning curve. the owner puts the 'bot in its home, punches in a few things it would like the bot to do, and lets it explore a little. after some training, it's perfectly suited to its new job and new environment...

    oh yeah. i grabbed a screensaver a while back from this guy [] that simulates a simple creature learning to walk. pretty spiffy, and you don't have to worry about it ambling off to the parking lot...

  • The really cool thing about the application of GP/EA here isn't that it learned, so much as the solution it came to. Oftentimes, especially when we're thinking about ways to do stuff in hardware rather than some simple software algorithm, evolutionary computation results in solutions that we humans wouldn't have thought of.

    A system that evaluates the effectiveness of a solution and refines it from there, using real-world fitness, ends up including factors that an engineer wouldn't have thought of, perhaps because those variables are unknown or seemingly insignificant.

    A while back, you'll remember, /. had a story about using EAs to program an array of FPGAs to distinguish between "Yes" and "No" (or something to that effect, perhaps on/off). After many generations, the solution it came to not only worked, but worked for reasons the scientists didn't understand. The best known human solution for something like that would've taken twice as many circuits. It included factors and variables (like magnetic resonance given off by excluded FPGA chips that weren't part of the circuit but were still receiving power! When removed, the circuit didn't work) that humans wouldn't normally consider.

    Man, I love this stuff.
  • "This tells us that this kind of evolution is capable of coming up with flying motion," said Peter Bentley, an evolutionary computer expert at University College, London.

    Birdwatchers around the world were SHOCKED at this finding.

    However, the robot could not actually fly because it was too heavy for its electrical motor.

    "There's only so much that evolution can do," Bentley said.

    Using these definitions, this robot's achievement pales in comparison to my evolutionary process, which led my body to ingest a jelly donut covered in sprinkles this morning. Since I never had one of those before, the only explanation is that I naturally evolved to the point where I wanted to attempt the ingestion of such an object.

    Seriously, guys, this is nothing but cheap heat for a worthless techie. Move on.
  • I think the robot simply had a laugh sensor and just made a variety of different motions until the laughter of the audience reached a certain threshold. The Moon Walk was a runner-up.
  • Interesting article. It reminds me of the fascinating simulations [] done by Karl Sims [] which were inspired by work done by Chris Langton [] which is summarized here []. There are a bunch of articles on alife here [].

    This article is interesting, however, because it moves the agents into the physical world where it isn't possible to obtain the same kind of idealized environments that are possible in silico.

  • When jerkychew takes over the world, he will be twitching and muttering, "I tried to warn them! I did! They didn't listen..."
  • "How Robots Took Over the World," circa 2051.

    Electronic edition, of course.
  • by kc0dxh ( 115594 )
    This is neither scientific nor logical. Evolution of flight is presumed to have begun with creatures incapable of flight.

    1. Equipping a test subject with wings short-circuits the most intriguing part of the experiment.

    2. Equipping a winged test subject with a motor too heavy to keep it aloft is stupidity at work.

    3. Thrust is not lift. Flight requires both, but this was thrust. The robot recreated 19th- and 20th-century flying machines. They didn't work either.

    4. Horizontal stabilizers (vertical rods) are not considered to have been available during the evolution of flight.

    The test is intriguing, for sure. But to bill this as AI-learned flight is either poor press coverage, or a scientist seeking funding through an uninformed press.

  • This is very much like Demetri Terzopoulos's work on artificial fish. [] His simulated fish learn to swim. []

    It's not really that hard to learn locomotion in a continuous environment, where you can improve a little bit at a time. Swimming and crawling were done years ago. Basically, you formulate the problem in terms of a measurement of how well you're doing, and provide a control algorithm with lots of scalar parameters that can be tweaked. You then apply a learning algorithm which tweaks them, trying to increase the success metric. Any of the algorithms for solving hill-climbing problems in moderately bumpy spaces (genetic algorithms, neural nets, simulated annealing, adaptive fuzzy control, etc.) will work.

    There are limits to how far you can go with this technique, and they're fairly low. Gaits that require balance, such as biped walking and running, require more powerful techniques. Any task that requires even a small amount of prediction needs a different approach. Rod Brooks at MIT has explored the no-prediction no-model approach to control thoroughly, so we have a good idea now what its limits are.
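    Stripped of the physics, the tweak-and-measure loop described above is just stochastic hill-climbing. A minimal sketch, with a made-up smooth 2-D function standing in for a real success metric such as distance traveled per second:

```python
import random

def success_metric(params):
    # Stand-in for the measurement of "how well you're doing" --
    # here just a smooth bump with its maximum at (1.0, -2.0).
    x, y = params
    return -(x - 1.0) ** 2 - (y + 2.0) ** 2

def hill_climb(params, steps=2000, step_size=0.1):
    best = success_metric(params)
    for _ in range(steps):
        # tweak every scalar parameter by a small random amount
        candidate = [p + random.gauss(0.0, step_size) for p in params]
        score = success_metric(candidate)
        if score > best:              # keep the tweak only if it helped
            params, best = candidate, score
    return params, best

params, score = hill_climb([0.0, 0.0])
```

    This works because each small tweak can improve the metric a little at a time; it's exactly the kind of loop that breaks down on tasks like biped balance, where there's no smooth path of incremental improvements to follow.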

  • I am trying to teach a robot to go, "Bok Bok Bwaaaahk Bok Bok."

    If I succeed, do I get a slashdot story?
  • yesterday's item, "Scientist Crushed To Death By Falling Robot In Failed Flight Attempt"

