Building Brainlike Computers 251
newtronic clues us in to an article in IEEE Spectrum by Jeff Hawkins (founder of Palm Computing), titled Why can't a computer be more like a brain? Hawkins brings us up to date on his latest endeavor, Numenta. He covers progress since his book On Intelligence and gives details on Hierarchical Temporal Memory (HTM), a platform for simulating neocortical activity. Programming HTMs is different: you essentially feed them sensory data rather than writing explicit rules. Numenta has created a framework and tools, free in a "research release," that let anyone build and program HTMs.
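To make the "feed it data instead of programming it" idea concrete, here is a deliberately tiny Python sketch. It is not Numenta's framework or API: the class and method names (`ToySequenceMemory`, `feed`, `predict`) are invented for illustration, and a real HTM learns sparse distributed temporal patterns, not simple transition counts. The sketch only shows the training paradigm, where "programming" consists of exposing the system to a sensory stream.

```python
# Toy sketch (NOT Numenta's actual API): a first-order sequence memory
# that learns by being fed a stream of "sensory" tokens, in the spirit
# of training an HTM rather than programming it with explicit rules.
from collections import defaultdict, Counter


class ToySequenceMemory:
    """Counts observed transitions and predicts the likeliest next token."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # token -> Counter of successors
        self.prev = None

    def feed(self, token):
        # Learning is just exposure: each observation updates transition counts.
        if self.prev is not None:
            self.transitions[self.prev][token] += 1
        self.prev = token

    def predict(self, token):
        # Return the most frequently observed successor, or None if unseen.
        followers = self.transitions.get(token)
        if not followers:
            return None
        return followers.most_common(1)[0][0]


if __name__ == "__main__":
    mem = ToySequenceMemory()
    # The "sensory" stream: a repeating pattern the memory should pick up.
    for tok in ["A", "B", "C"] * 10:
        mem.feed(tok)
    print(mem.predict("A"))  # B
    print(mem.predict("C"))  # A
```

Note that no rule about the A-B-C pattern appears anywhere in the code; the behavior comes entirely from the data it was fed, which is the contrast with conventional programming that the summary is pointing at.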
Can't build what you don't understand (Score:4, Insightful)
Re:this is stupid (Score:2, Insightful)
The question isn't "will we?"; the question in reality is "should we?" Do we have the right to dissect the creations of God and duplicate them? Sure, I see no reason not to. There are certainly hazards (as most famous sci-fi movies absolutely love to point out), but there are hazards to driving in the morning. Sure, one day we may be responsible for the annihilation of all mankind, but hey, we had a good run ;)
But I think there are some good aspects to trying to replicate the brain. The best reason of all is understanding how we work. To duplicate something, you need to know how it works first (or at least know how in general). If we understand the brain, that could help us
Oh, and one last thing: have you ever programmed an email client? They may be fun to design if you're a hard-core coder, but they're not easy.
Re:Mod parent UP! (Score:2, Insightful)
Re:this is stupid (Score:3, Insightful)
Alchemy (Score:5, Insightful)
Medievals didn't understand the atom or crystalline structures, but they still made carbonized steel for armour. They had the wrong ideas about exactly how metal became properly carbonized and tempered, but they still came up with correctly tempered spring-like steels (IIRC similar to tempered 1050) without getting any of the "why" of it right.
I think someday we will be viewed as the medievals of AI. We occasionally make progress even though we really don't know what we're doing. Yet.
Re:Can't build what you don't understand (Score:2, Insightful)
Not that brain researchers are literally just making gray globs out of Play-Doh, but that doesn't rule out similar errors at a deeper level.
a "full understanding" isn't necessary (Score:5, Insightful)
Hawkins and the people he's working with have come up with an approach that lets people explore possible uses of allowing a machine to learn in a way that's inspired by a process that may be part of how humans learn. They don't need a "full understanding" of how the human brain works to do that.
Actually, computer brains will be far superior (Score:5, Insightful)
It's worth stating that unless you believe that the human brain contains magic (which 99% of your religious brethren don't), it is no more than a very complex arrangement of perfectly ordinary physical components, namely atoms and molecules. And if you don't think that we will in due course be able to arrange atoms and molecules as we wish, then you're very blinkered to the direction in which science and engineering are heading.
That said, the recreation of human brains is merely an interesting challenge as far as practical engineers are concerned, not a practical approach. The vast majority of us have no intention of actually taking that route, because protein is such an inferior building material. Your alleged god (a.k.a. blind evolution) only "chose" it because carbon is so damn versatile in conjunction with O, N, and H, so a million different reactions occurred in the mess of the primordial soup. And one of them happened to work.
Well, we don't rely on blind chance; we coerce the reactions in the direction we want, which gives us the chance to choose our materials more strategically. And we will.
There's not a chance in hell (trying to use your frame of reference here) of us producing "brains" that are *MERELY* as good as the ones nature created in humans, because the equations that underpin ordinary physics and chemistry (and therefore molecular nanotechnology) say otherwise. Instead, you can expect "brains" with a billion times our mental capacity and a trillion times our mental speed in due course. We know that it's possible (from theory, and by observing protein nanomachines doing it very poorly), but we lack the infrastructure to do it ourselves at present. It's many decades away, but hey, we're working on it.
You'd have to contradict the maths and physics of materials and biotech that say MNT is possible before you can validly say that it's not. And given the intellectual depth of your contribution above, my guess is that you won't.
Because human brains can give wrong answers... (Score:1, Insightful)
Why do you think that people are going bonkers over the offshore outsourcing trend? It's like Artificial Intelligence... You ask a question over the phone, or via chat window, and your question is magically answered by the thinking thing inside the box. It doesn't matter what the thing is on the other end of the line... all that matters is that it gives you a reasonably good answer that helps you make progress in your business or personal life.
We already have seen the face of Artificial Intelligence... it is staring back at us in the mirror.
Re:this is stupid (Score:5, Insightful)
Regardless of there being a God, brains, humans, birds, or diamonds, to be honest, we don't want to create a brainlike computer.
Human brains can do amazing things, but one thing we like about computers over human brains is that human brains, even the best ones, are simply wrong from time to time, and our goal with "brainlike computers" is not to recreate these mistakes, but rather to overcome them.
With respect to our senses, again, they are amazing, but then again they are fooled much of the time. There are perceptual errors, optical illusions, selective memories (ask 10 eyewitnesses and get 10 different accounts), and so on.
Today, computers are great at being calculators, and at storing and retrieving digital data. They suck at making "decisions": even a seemingly trivial one, like telling the difference between an apple and an orange, is difficult for a computer today.
Take a look at much more mature technologies, like flight. For ages, humans tried to make flying machines like birds, and now we have a handful of flying technologies that can go faster than the speed of sound and beyond the earth's atmosphere. But we still can't fly like a bird with flapping wings, and I don't remember a time in my life when I saw a headline saying "Building Birdlike Planes".
stuff that matters (Score:1, Insightful)
This is the first piece of "stuff that matters" that I have read here on
Re:this is stupid (Score:3, Insightful)
The thing is, computers can already do lots of things that brains are bad at. Making brainlike software that lets computers do the things brains are good at is something we want, because we'd often like our computers to handle tasks that mix things brains are bad at with things brains are good at, while our actual brains are off doing something completely unrelated, rather than being interrupted every time the computer needs a human to do the part brains are good at.
Obviously, it would ultimately be good to make computers that do the things brains are good at even better than brains do them, but since our computers are still far from as good as brains in many areas, we have even more distance to cover before we get to better-than-brains. In the short term, we're aiming for "close enough to brains," so that for tasks which are hard for computers but trivial for brains, we can reduce the amount of human involvement needed to get the task done.
That's not entirely true [wikipedia.org].
Re:In Defence of Hawkins (Score:3, Insightful)
Re:In Defence of Hawkins (Score:3, Insightful)
I agree. I doubt that Hawkins can use his HTM to recognize (let alone understand the meaning of) full sentences. For that, you need a hippocampus, i.e., the ability to hold things in short-term memory (and play them back internally) and to parse events using a variable time-scale mechanism. You also need a mechanism of attention which, IMO, requires motor control. I think that Hawkins underestimates the necessity of having a motor control/coordination mechanism (basal ganglia). These are essential to reasoning.