Science

Machines That Emulate The Human Brain

prostoalex writes "Discover magazine provides an interesting insight into future technologies that may emulate the human brain. While artificial intelligence researchers have traditionally considered direct emulation of brain functions too complex, preferring the top-down approach, some people are now researching the ways the human brain processes data. One of the interesting discoveries mentioned in the article is the brain's ability to re-architect its links as new information is added."

  • by spike666 ( 170947 ) on Thursday December 26, 2002 @09:19AM (#4960121) Journal
    If we want to analyse how alcohol impairs our thinking, how do we get these things drunk?
  • Period (Score:1, Funny)

    by peu ( 163472 )
    Can a male emulation understand female emulation wishes?
  • by watzinaneihm ( 627119 ) on Thursday December 26, 2002 @10:00AM (#4960297) Journal
    What would the machine do if it were intelligent enough? Men and women, at least, do things that make 'em happy. What can a machine be happy about?
    Also, if this rewiring theory is true, we could just put nano-radios into the heads of a zillion bees, add a nice enough routing algorithm, and the bee colony would be an intelligent being, right? (Assuming a bee's brain is equivalent to a small cluster of human brain cells.) Now, would the bees just look for honey more intelligently, or would they find the grand unified theory?
  • or are the four facial expressions analyzed:

    smirk, smirk, smirk, smirk

    ?
  • i.e., at least up to this point in time, there is no "self animating" force in any electronic entity that I am aware of.

    Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".

    The best example I can think of is: "A WTC tower is falling down. What do I do? Do I run? And how far? When do I try to turn a corner to escape the dust blast? Do I altruistically tackle the person in front of me because I can tell that the only way they can survive is if I cover them at peril of my own existence?" What about UA Flight 93? Or any of the other thousands, perhaps millions, of heroic acts we know of from just 9/11? How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

    Not in our lifetime, methinks.

    • Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies there is still no true intelligence in the circuitry that is as flexible as the mind...

      That makes no sense. Basically you said "Even if we could replicate it, we couldn't do it with our technology."

      How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

      It's called neural networks... a-life (artificial life). They simulate the neuron connections in a brain... If you replicate a brain well enough, it will have the same function as the original.

      ...didn't you read anything?
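
      For what it's worth, the core idea fits in a few lines of Python. A minimal sketch of a feedforward net -- the weights here are random and untrained, so it computes nothing useful yet; a real one would also need a training rule:

        import math
        import random

        # Toy feedforward net: each unit sums its weighted inputs and
        # squashes the result, loosely like a neuron's firing rate.
        def make_layer(n_in, n_out):
            return [[random.uniform(-1, 1) for _ in range(n_in)]
                    for _ in range(n_out)]

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def forward(layer, inputs):
            return [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
                    for weights in layer]

        # Three inputs -> four hidden units -> one output.
        hidden = make_layer(3, 4)
        output = make_layer(4, 1)
        print(forward(output, forward(hidden, [0.5, -0.2, 0.9])))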
      • Bad grammar/wording on my part. Try it this way:

        "Even if we could replicate the sheer processing power of the human brain at similar power levels, and make it mobile, our current circuitry and programming paradigms still don't offer a technological basis for the kind of split-second decisions...", followed by the remainder of my post.

        BTW, I am very aware of neural networks, and even of some chip technologies based on modeling neurons. But there's nothing in the silicon that implicitly decides which particular version of a neuronal circuit might be useful, or what weird alternate connections it should make. Nor does it implicitly attempt to self-heal around a known bad connection, etc., like the biological computer does. All of that has to be developed fairly explicitly by an intelligent entity, also known as the human brain.

        • Nor does it implicitly attempt to self-heal around a known bad connection, etc., like the biological computer does.

          Starbridge Systems. They developed a hypercomputer that was self-healing: you could shoot a bullet through any one of its motherboards and it would reconfigure itself to work around the damage. I've been looking on their site (www.starbridgesystems.com) for the writeup, but they've remodeled the site over the years, and technical descriptions like that aren't as easy to find anymore.
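
          Their actual mechanism isn't documented anywhere I can find now, but the route-around-the-damage idea itself is simple to sketch. Hypothetical four-node topology, nothing to do with Starbridge's real design:

            from collections import deque

            # Toy "self-healing" routing: breadth-first search for a path
            # between units, skipping any that are marked dead.
            links = {
                "A": ["B", "C"],
                "B": ["A", "D"],
                "C": ["A", "D"],
                "D": ["B", "C"],
            }

            def route(src, dst, dead):
                queue, seen = deque([[src]]), {src}
                while queue:
                    path = queue.popleft()
                    for nxt in links[path[-1]]:
                        if nxt in dead or nxt in seen:
                            continue
                        if nxt == dst:
                            return path + [nxt]
                        seen.add(nxt)
                        queue.append(path + [nxt])
                return None

            print(route("A", "D", dead=set()))   # ['A', 'B', 'D']
            print(route("A", "D", dead={"B"}))   # reroutes: ['A', 'C', 'D']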
    • If I'm feeding the troll, too bad; at least it's a philosophical one.

      "which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision"."

      That's an obvious contradiction in terms, isn't it? ...

      "How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?"

      Well, see, we've got these things called stored-program computers... =)
      Do I have to finish that sentence, or do you see what I'm getting at?

      You're awfully good at jumping up and down and telling us that you don't know the answer, but what's the point of that?

      Anyway, is there some hidden meaning here I'm not getting?
    • i.e., at least up to this point in time, there is no "self animating" force in any electronic entity that I am aware of.

      You're a machine (or at least, a physical mechanism) made of hydrocarbons. Presumably, you think you have a "self-animating" force. I'm not sure I agree, but let's say you do.

      How do we detect the "ghost" in you? Where is it? How do we tell? If you can't find it in you, but you think it exists, how can you, in all fairness, know it's not in an "electronic entity"?

      Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies

      But we're nowhere near the brain's storage capacity with today's technologies! The usual figures are about 10^11 neurons, with about 10^14 synaptic connections (and they can, and do, "re-wire" themselves within minutes). It's been a few years since I talked to a neuroscientist, but those are roughly the figures I recall.

      It's hard to compare bytes to neurons. If, for the sake of argument, we say that one connection (not neuron) encodes one byte of information, then we need about 10^14 bytes. Giga is 10^9 bytes and tera is 10^12, so that's around a hundred terabytes. Supposing we get terabytes on our desktops in the next five to ten years, we're most of the way through the exponents we need. :-)

      there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".

      I'm not sure humans always make the "best decision". We just make decisions, and sometimes, they're bad. Sometimes, they're good. Chess playing programs can examine thousands of factors in a split-second, too -- and they quite often make a better decision than their human opponents.
      I'm not that impressed that humans can make decisions "without full data": machines can make probabilistic guesses as well, albeit with far less memory capacity than the human brain. But what happens with a trillion times more memory? I'd say even current algorithms might be able to compete with humans in certain areas, though it would probably depend on what you were comparing.
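
      For instance, "decide from partial data" can be as simple as taking the best expected payoff over whatever samples you happen to have. Toy sketch, numbers invented:

        # Decision-making without a "full data set": average the (noisy,
        # incomplete) outcomes seen so far and pick the best-looking action.
        observed = {
            "run":    [0.9, 0.7, 0.8],
            "hide":   [0.4, 0.6],
            "freeze": [0.1],
        }

        def expected(samples):
            return sum(samples) / len(samples)

        best = max(observed, key=lambda action: expected(observed[action]))
        print(best)  # "run", given these made-up samples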

      How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

      There are a number of approaches that can be developed: if the brain is the (huge and complex) finite state machine that it appears to be, the approach taken by the researchers in the article may provide some good clues: scan brains, find out how they're built, set up a state machine in software that is similar, and see if it does similar things.
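
      A crude sketch of the "set up a state machine and step it" part, assuming you could ever pull a wiring diagram out of a scan (this three-neuron loop is invented):

        # Treat a (tiny, made-up) wiring diagram as a state machine: a
        # neuron fires next step if at least one of its inputs fired.
        wiring = {            # neuron -> neurons feeding into it
            "n1": ["n3"],
            "n2": ["n1"],
            "n3": ["n2"],
        }
        state = {"n1": True, "n2": False, "n3": False}

        def step(state):
            return {n: sum(state[src] for src in inputs) >= 1
                    for n, inputs in wiring.items()}

        for _ in range(3):
            state = step(state)
            print(state)      # the single spike circulates around the loop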

      There are some robots which are very simple, but which do some very interesting things. Mark Tilden made a robot which taught itself to walk every time it was turned on -- he didn't have to program in how each leg worked; instead, he let it recognize forward motion and evolve its own solution. Similarly, we may not have to program human intelligence; we just have to build a machine that can work towards it and recognize progress. That's still a very daunting task, but it's far more workable.
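
      The same trick in miniature: a hill-climber that is only ever told how far forward it got. The distance function below is a stand-in for Tilden's forward-motion sensing, and the "gait" is just three invented timing numbers:

        import random

        # Don't program the gait; mutate it and keep whatever walks further.
        def distance(gait):
            target = [0.3, 0.7, 0.5]   # pretend this leg timing walks best
            return -sum((g - t) ** 2 for g, t in zip(gait, target))

        gait = [random.random() for _ in range(3)]
        for _ in range(1000):
            mutant = [g + random.gauss(0, 0.05) for g in gait]
            if distance(mutant) > distance(gait):
                gait = mutant          # keep the change only if it helped

        print([round(g, 2) for g in gait])  # ends up near the "good" gait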

      And you still have to define "effective action". That's a whole grey area that's worth a topic on its own. :-)

      The best example I can think of is: "A WTC tower is falling down. What do I do?"

      I don't think that's a good example for human intelligence. Most animals have a reasonably intelligent response to a large object approaching at a rapid speed: they get out of the way as fast as they can! Humans who don't learn this tend to die at highway crossings. :-)

      Different lifeforms have typical responses to a given type of crisis situation: they may or may not be "effective action", depending upon the situation. Rabbits run, then suddenly "freeze", trying to hide. This works well in brush and thickets, but badly on roadways. Moose tend to flee along the clearest path. This might sound reasonable, but they often get killed by trains, because the train tracks often run along the clearest path in the forest, and the moose is afraid to slow down to turn, because the train is "chasing" it. Humans build train tracks where moose roam, and often get killed when the train derails after hitting a moose. ;-)

      Not in our lifetime, methinks

      You may be right, but I think great progress can be made. We can build machines that "learn": that adapt their programming based upon their environment. Given that it takes 18 years or so to "program" a human being to function in society, I think making a chess-playing program that can compete with a grandmaster was a good accomplishment for just 50 years or so of modern computing. We're making progress in visual processing, and speech recognition software has quietly crept into cellphone dialers and children's toys. If hardware advances can keep up, I think we can expect great things in the future.
      --
      Ytrew
  • Did anyone else go all cross-eyed when they thought about the implications of running the Turing test with one of these?
  • Idly thinking... (Score:3, Interesting)

    by Hubert_Shrump ( 256081 ) <[cobranet] [at] [gmail.com]> on Thursday December 26, 2002 @01:12PM (#4961067) Journal
    Is there an inverse Turing test?

    Maybe we can make machines that fail it.

  • by Fly ( 18255 ) on Thursday December 26, 2002 @01:57PM (#4961272) Homepage
    This article talks to people who are doing mostly or solely bottom-up work to emulate brain activity, and pooh-poohs the work that has been done on top-down approaches. While I don't think anyone can refute the need for low-level, bottom-up reverse engineering to understand how our brains work, I agree with Hofstadter et al. in _Fluid_Concepts_&_Creative_Analogies_ that there needs to be a fundamental change in the direction of top-down approaches.

    I am intrigued by his work combining the top-down and bottom-up approaches in his "codelets" design, which relies on probabilistic results from a bottom-up process that are weighted and driven in a more top-down manner. This higher-level approach is meant to simulate "mind" rather than "brain", but I'm eager to see just how far towards "mind" the neural approach in the article can be taken.

    [youmaynotcare]It's cool to see McCormick in an AI article. My first course from him was AI, and it fascinated me.[/youmaynotcare]

      I am intrigued by his work combining the top-down and bottom-up approaches in his "codelets" design, which relies on probabilistic results from a bottom-up process that are weighted and driven in a more top-down manner.

      I always thought something like that would be the best way to go, at least as far as managing such a complex beast. You have a bunch of "services", and a central selector/coordinator of those services. If a service fails to deliver good results, such as making bad predictions or building models that don't match observed results, then it gets less attention.

      The drawback is integration: the different services also need some way to cooperate and share info. But a competitive, GA-like approach may be interesting too. I am sure our brain does this to some extent: more attention is given to internal models (guessing techniques) that deliver the best results, for example, like ignoring info from a bad eye and using the good one instead.
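
      Something like this, maybe: each "service" makes a prediction, and the coordinator shrinks the attention (weight) of whichever ones keep being wrong. A toy sketch of the flavor, not Hofstadter's actual codelet machinery:

        # Toy coordinator: services guess a value; weights shrink with error,
        # so the reliable "eye" ends up with nearly all of the attention.
        services = {
            "good_eye": lambda truth: truth + 0.1,   # small error
            "bad_eye":  lambda truth: truth + 5.0,   # large error
        }
        weights = {name: 1.0 for name in services}

        for truth in [1.0, 2.0, 3.0]:
            for name, guess in services.items():
                error = abs(guess(truth) - truth)
                weights[name] *= 1.0 / (1.0 + error)
            total = sum(weights.values())
            weights = {n: w / total for n, w in weights.items()}

        print(weights)  # almost all weight on "good_eye"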
  • by budalite ( 454527 ) on Thursday December 26, 2002 @04:27PM (#4962424)
    Geez, surely we can do better! Most of the ones currently in production seem to be defective!!
    (Get me that guy that figured out what was wrong with HAL...)
  • Oh great! (Score:3, Funny)

    by Anonymous Coward on Thursday December 26, 2002 @04:59PM (#4962650)
    First it was the H-1B's taking our jobs, and now artificial brains. I should've been a dancer.
    • I should've been a dancer.

      Your legs are too fat.

      -
    The approach mentioned of dissecting a mouse brain neuron by neuron and modeling it is very interesting. I don't think anyone has ever approached it from such a reductionist angle before. Even doing this for a small section of brain (say, the visual cortex) would probably provide some interesting information.

    It seems this could even be capable of showing whether there's more than just the neurons involved.

  • ... has quoted the Orange Catholic Bible? "Thou shalt not make a machine in the likeness of a human mind."

    I'm ashamed of you people.
