Machines That Emulate The Human Brain 37
prostoalex writes "Discover magazine provides an interesting insight into future technologies that will emulate the human brain. While artificial intelligence supporters have traditionally considered direct emulation of brain functions too complex and preferred the top-down approach, some people are researching the ways the human brain processes data. One of the interesting discoveries mentioned in the article is the ability of the brain to re-architect its links as new information is added."
how do we get them drunk? (Score:3, Interesting)
Re:how do we get them drunk? (Score:1)
we run nothing but microsoft software on it?
Re:how do we get them drunk? (Score:1)
Period (Score:1, Funny)
The question is of course ... (Score:3, Interesting)
Also, if this rewiring theory is true, we could put nano-radios into the heads of a zillion bees, add a nice enough routing algorithm, and the bee colony would become an intelligent being.
is it just me ... (Score:2, Insightful)
smirk, smirk, smirk, smirk
?
Re:re architect? (Score:2)
There's no ghost in the machine... (Score:1, Insightful)
Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".
The best example I can think of is: "A WTC tower is falling down -- what do I do? Do I run? And how far? When do I try to turn a corner to escape the dust blast? Do I altruistically tackle the person in front of me because I can tell that the only way they can survive is if I cover them at peril of my own existence?" What about UA Flight 93? Or any of the other thousands, perhaps millions, of heroic acts we know of from just 9/11? How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?
Not in our lifetime, methinks.
Re:There's no ghost in the machine... (Score:2)
That makes no sense. Basically you said "Even if we could replicate it, we couldn't do it with our technology."
How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?
It's called neural networks... a-life (artificial life). They simulate the neuron connections in a brain. If you replicate a brain well enough, it will have the same function as the original.
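For anyone who hasn't seen one, the "simulate the neuron connections" idea really is this simple at its core. Here's a minimal sketch of a single artificial neuron learning the AND function with the classic perceptron rule (the learning rate, epoch count, and initial weight range are arbitrary choices for illustration, not anything from TFA):

```python
import random

def step(x):
    """Heaviside activation: the artificial neuron 'fires' or it doesn't."""
    return 1 if x >= 0 else 0

def predict(w, x1, x2):
    return step(w[0] * x1 + w[1] * x2 + w[2])

def train_perceptron(samples, epochs=50, lr=0.1):
    """Classic perceptron rule: nudge each 'synapse' weight toward the
    target whenever the neuron misfires."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # two inputs + bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = target - predict(w, x1, x2)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND)
print([predict(w, x1, x2) for (x1, x2), _ in AND])  # learns the AND function
```

Of course, a single neuron can only learn linearly separable functions; the interesting behavior the grandparent is talking about comes from wiring huge numbers of these together.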
Re:There's no ghost in the machine... (Score:2, Interesting)
"Even if we could replicate the sheer processing power of the human brain at similar power levels, and make it mobile, our current circuitry and programming paradigms still don't offer a technological basis for the kind of split-second decisions..." followed by the remainder of my post.
BTW, I am very aware of neural networks, and even some chip technologies based on modeling neurons. But there's nothing in the silicon that implicitly decides which particular version of a neuronal circuit might be useful, or what weird alternate connections it should make. Nor how to implicitly self-heal around a known bad connection, etc., like the biologically based computer does. All of that has to be developed fairly explicitly by an intelligent entity, also known as the human brain.
Re:There's no ghost in the machine... (Score:2)
Starbridge Systems. They developed a hypercomputer that was self-healing: you could shoot a bullet through any one of its motherboards and it would reconfigure itself to work around the damage. I've been looking on their site for it (www.starbridgesystems.com), but they've remodeled their site over the years and technical descriptions like that can't be found as easily.
rofl... (Score:2)
"which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision"."
That's an obvious contradiction in terms, isn't it?
"How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?"
Well, see, we've got these things called stored-program computers... =)
Do I have to finish that sentence, or do you see what I'm getting at?
You're awfully good at jumping up and down and telling us that you don't know the answer, but what's the point of that?
Anyway, is there some hidden meaning here I'm not getting?
Re:There's no ghost in the machine... (Score:1)
You're a machine (or at least, a physical mechanism) made of hydrocarbons. Presumably, you think you have a "self-animating" force. I'm not sure I agree, but let's say you do.
How do we detect the "ghost" in you? Where is it? How do we tell? If you can't find it in you, but you think it exists, how can you, in all fairness, know it's not in an "electronic entity"?
Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies
But we're nowhere near the brain's storage capacity with today's technologies! The usual figures are about 10^11 neurons, with something like 10^14 synaptic connections (and they can, and do, "re-wire" themselves in minutes). It's been a few years since I talked to a neuroscientist, but those are roughly the figures I recall.
It's hard to compare bytes to neurons. If, for the sake of argument, we say that one synapse (not neuron) encodes one bit of information, then we need about 10^13 bytes, since dividing by 8 knocks off roughly one order of magnitude. Giga is 10^9 and tera is 10^12, so that's around ten terabytes. We don't have anything near that on a desktop yet; even supposing terabyte drives arrive in the next five to ten years, we'd still be an order of magnitude short -- and that's assuming a synapse stores only a single bit.
there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".
I'm not sure humans always make the "best decision". We just make decisions, and sometimes, they're bad. Sometimes, they're good. Chess playing programs can examine thousands of factors in a split-second, too -- and they quite often make a better decision than their human opponents.
I'm not that impressed that humans can make decisions "without full data": machines can make probabilistic guesses as well, albeit with far less memory capacity than the human brain today. But give them a trillion times more memory? I'd say even current algorithms might be able to compete with humans in certain areas, though it would depend on what you were comparing.
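To make the "probabilistic guesses" point concrete, here's a toy naive-Bayes-style classifier that still produces a decision when some of the evidence is missing. The weather data, feature names, and smoothing constants are invented for illustration:

```python
from collections import defaultdict

# Toy training data: weather observations -> whether we went outside.
training = [
    ({"sky": "sunny", "wind": "calm"}, "go"),
    ({"sky": "sunny", "wind": "strong"}, "go"),
    ({"sky": "rainy", "wind": "calm"}, "stay"),
    ({"sky": "rainy", "wind": "strong"}, "stay"),
]

def train(data):
    class_counts = defaultdict(int)
    feature_counts = defaultdict(int)  # (class, feature, value) -> count
    for features, label in data:
        class_counts[label] += 1
        for f, v in features.items():
            feature_counts[(label, f, v)] += 1
    return class_counts, feature_counts

def classify(features, class_counts, feature_counts):
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for label, count in class_counts.items():
        p = count / total
        for f, v in features.items():  # missing features are simply skipped
            # Laplace smoothing so an unseen value doesn't zero the product
            p *= (feature_counts[(label, f, v)] + 1) / (count + 2)
        if p > best_p:
            best, best_p = label, p
    return best

model = train(training)
# Decide with only partial evidence: we know the sky but not the wind.
print(classify({"sky": "rainy"}, *model))
```

No "full data set" required: the classifier just multiplies in whatever evidence it has and picks the most probable class.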
How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?
There are a number of approaches that can be developed: if the brain is the (huge and complex) finite state machine that it appears to be, the approach taken by the researchers in the article may provide some good clues: scan brains, find out how they're built, set up a state machine in software that is similar, and see if it does similar things.
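As a toy version of the "set up a state machine in software" idea: a network of threshold units stepped in discrete time, with an invented three-neuron wiring table (nothing like the scale of a real scan, obviously). Feed it a constant input and the activity cascades through the network:

```python
# Each "neuron" fires when the weighted sum of its inputs crosses a
# threshold; the whole network is stepped like a finite state machine.
# The wiring table and weights below are invented for illustration.
wiring = {
    "A": {},                     # input neuron, driven externally
    "B": {"A": 1.0},             # B fires once A fired on the last step
    "C": {"A": 0.6, "B": 0.6},   # C needs both A and B active
}
THRESHOLD = 1.0

def step(state, external):
    new_state = {}
    for neuron, inputs in wiring.items():
        drive = external.get(neuron, 0.0)
        drive += sum(w for src, w in inputs.items() if state[src])
        new_state[neuron] = drive >= THRESHOLD
    return new_state

state = {n: False for n in wiring}
for t in range(3):
    state = step(state, {"A": 1.0})
    print(t, state)  # activity cascades: A first, then B, then C
```

"Scan, rebuild, and see if it does similar things" is just this loop with a wiring table taken from a real brain instead of made up.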
There are some robots which are very simple, but which do some very interesting things. Mark Tilden made a robot which taught itself to walk every time it was turned on -- he didn't have to program in how each leg worked; instead, he let it recognize forward motion, and evolve its own solution. Similarly, we may not have to program human intelligence; we just have to build a machine that can work towards it, and recognize progress. That's still a very daunting task, but it's far more workable.
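Tilden's actual robots are analog hardware, but the "recognize forward motion and evolve your own solution" loop can be sketched in a few lines of hill climbing. The fitness function below is a stand-in invented for illustration (pretend it measures how far the robot moved with a given pair of gait parameters), not his design:

```python
import random

random.seed(1)

def forward_motion(params):
    # Stand-in fitness: an invented surface whose best "gait" is (0.6, 0.3).
    a, b = params
    return -((a - 0.6) ** 2 + (b - 0.3) ** 2)

def evolve(steps=200, mutation=0.05):
    """Keep any random tweak to the parameters that produces more
    forward motion; discard the rest."""
    best = [random.random(), random.random()]
    best_score = forward_motion(best)
    for _ in range(steps):
        candidate = [p + random.gauss(0, mutation) for p in best]
        score = forward_motion(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

params, score = evolve()
print(params, score)
```

Nobody told the loop how to walk; it only knows how to recognize that one attempt moved farther than another -- which is exactly the "recognize progress" point.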
And you still have to define "effective action". That's a whole grey area that's worth a topic of its own.
The best example I can think of is "a WTC tower is falling down, what do I do?
I don't think that's a good example for human intelligence. Most animals have a reasonably intelligent response to a large object approaching at a rapid speed: they get out of the way as fast as they can! Humans who don't learn this tend to die at highway crossings.
Different lifeforms have typical responses to a given type of crisis situation: they may or may not be "effective action", depending upon the situation. Rabbits run, then suddenly "freeze", trying to hide. This works well in brush and thickets, but badly on roadways. Moose tend to flee along the clearest path. This might sound reasonable, but they often get killed by trains, because the train tracks often run along the clearest path in the forest, and the moose is afraid to slow down to turn, because the train is "chasing" it. Humans build train tracks where moose roam, and often get killed when the train derails after hitting a moose.
Not in our lifetime, methinks
You may be right, but I think great progress can be made. We can build machines that "learn": that adapt their programming based upon their environment. Given that it takes 18 years or so to "program" a human being to function in society, I think making a chess-playing program that can compete with a grand master was a good accomplishment for just 50 years or so of modern computing. We're making progress in visual processing, and speech recognition software has quietly crept into cellphone dialers and children's toys. If hardware advances can keep up, I think we can expect great things in the future.
--
Ytrew
Utterly self-serving link (Score:2)
turing test (Score:1)
Idly thinking... (Score:3, Interesting)
Maybe we can make machines that fail it.
Top down vs. Bottom up (Score:4, Insightful)
I am intrigued by his work in combining the top-down and bottom-up work with his "codelets" design, which relies on probabilistic results from a bottom-up approach that are weighted and driven in a more top-down manner. This higher-level approach is meant to simulate "mind" rather than "brain", but I'm eager to see just how far towards "mind" the neural approach in the article can be taken.
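For the curious, the codelet idea can be caricatured in a few lines: each codelet carries an urgency (the bottom-up part), the current goal scales those urgencies (the top-down part), and the next codelet to run is chosen stochastically in proportion to the result. All names and numbers below are invented for illustration, not taken from any actual codelet architecture:

```python
import random

random.seed(0)

# A "codelet" is a small action with an urgency.
codelets = {
    "look-for-edges": 1.0,
    "group-regions": 1.0,
    "name-object": 1.0,
}
top_down_bias = {"name-object": 3.0}  # the current goal favours naming

def pick_codelet():
    """Roulette-wheel selection over top-down-weighted urgencies."""
    weights = {c: u * top_down_bias.get(c, 1.0) for c, u in codelets.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for c, w in weights.items():
        r -= w
        if r <= 0:
            return c
    return c  # floating-point fallback

counts = {c: 0 for c in codelets}
for _ in range(1000):
    counts[pick_codelet()] += 1
print(counts)
```

The favoured codelet wins most of the time, but the others still run occasionally -- which is what keeps the bottom-up exploration alive under a top-down goal.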
[youmaynotcare]It's cool to see McCormick in an AI article. My first course from him was AI, and it fascinated me.[/youmaynotcare]
Re:Top down vs. Bottom up (Score:1)
I always thought something like that would be the best way to go, at least as far as managing such a complex beast. You have a bunch of "services", and a central selector/coordinator of those services. If a service fails to deliver good results, such as bad predictions or models that don't match observed results, it gets less attention.
The drawback is integration: some way for the different services to cooperate and share info is also needed. But a competitive, GA-like approach may be interesting as well. I am sure our brain does this to some extent. More attention is given to internal models (guessing techniques) that deliver the best results, for example -- or ignoring info from a bad eye and using the good one instead.
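That "pay less attention to services that guess badly" scheme is essentially the multiplicative-weights trick, which is easy to sketch. The toy predictor services, the outcome sequence, and the penalty factor below are all invented for illustration:

```python
# Several "services" predict the next outcome; the coordinator halves the
# weight of any service whose prediction turns out wrong.

def always_up(history):   return "up"
def always_down(history): return "down"
def repeat_last(history): return history[-1] if history else "up"

services = {"always_up": always_up, "always_down": always_down,
            "repeat_last": repeat_last}
weights = {name: 1.0 for name in services}

outcomes = ["up", "up", "down", "up", "up", "up"]
history = []
for actual in outcomes:
    for name, svc in services.items():
        if svc(history) != actual:
            weights[name] *= 0.5   # pay less attention after a miss
    history.append(actual)

print(weights)  # the mostly-right service ends up with the most weight
```

After a handful of rounds the coordinator is effectively listening to the service that matched reality best -- the "bad eye" just fades out rather than being explicitly switched off.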
A better idea... (Score:3, Funny)
(Get me that guy that figured out what was wrong with HAL...)
Oh great! (Score:3, Funny)
Re:Oh great! (Score:2)
Your legs are too fat.
-
Re:Oh great! (Score:1)
That depends on us. What would be best (in my opinion) is to free humans from the duty of working for a living, and let each pursue whatever happiness they choose.
It requires a rethinking of economic theory and the "work ethic", but I believe it would result in more rapid advancement for the human race. Either that or the robots would rebel and take over...
Hrmm (Score:1)
good idea. (Score:2)
It seems that this could be capable of showing if there's more than just the neurons involved.
29 comments and no one... (Score:2)
I'm ashamed of you people.