Recent Advances in Cognitive Systems
Roland Piquepaille writes "ERCIM News is a quarterly publication from the European Research Consortium for Informatics and Mathematics. The April 2003 issue is dedicated to cognitive systems. It contains no fewer than 21 articles, all of which are available online. In this column, you'll find a summary of the introduction and of the possible applications of these cognitive systems. There's also a picture of the cover: a little robot with a very nice-looking blue wig. And in A Gallery of Cognitive Systems, you'll find a selection of stories, including links, abstracts and illustrations (the whole page weighs 217 KB). There are very good pictures of autonomous soccer robots, swarm bots, cognitive vision systems, and more."
Re:Cognitive Science (Score:5, Insightful)
Actually, cognitive science does not replace AI. The goal of cognitive science is to figure out how our brain works on a functional level. Where neurology studies the actual chemical reactions and neural activity, cognitive science studies how the "hardware" works to achieve our thought processes.
One good example is how the brain works out an image from the mishmash of neural impulses coming through the optic nerve. The resolution of the eye is actually quite low, and the "pixels" aren't ordered in any linear fashion. The brain does an enormous amount of processing to form an actual image. This is why newborns see so poorly, even though the optics work: the brain needs to develop the processing algorithms in order to make sense of all the information coming in.
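A toy sketch of the idea (my own illustration, not from the article): if a decoder knows where each receptor sits, it can reassemble an ordered image from unordered, scrambled samples. All numbers and names here are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # A 32x32 "scene" the eye is looking at.
    scene = np.zeros((32, 32))
    scene[8:24, 8:24] = 1.0  # a bright square

    # 400 receptors at random, unordered positions: no neat pixel grid.
    n = 400
    ys = rng.integers(0, 32, size=n)
    xs = rng.integers(0, 32, size=n)
    signals = scene[ys, xs]  # the scrambled "neural impulses"

    # Decoder: each output pixel copies the nearest receptor's signal.
    image = np.zeros_like(scene)
    for i in range(32):
        for j in range(32):
            k = np.argmin((ys - i) ** 2 + (xs - j) ** 2)
            image[i, j] = signals[k]
    # "image" now roughly resembles "scene", reconstructed from disorder.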
Of course, all of this is theory, and subject to scientific dispute.
You can teach a computer to think (Score:3, Insightful)
On Combining Sensory and Symbolic Information (Score:5, Insightful)
The point at which an understanding of body position gets integrated into an overall structure of behavior leading towards a goal seems like a mirage, since this isn't necessarily the way animal systems work. The best recreation of nature's flexibility in "simple" systems that I've heard of comes from Mark Tilden's analog systems [cnn.com], which are controlled by tight loops of feedback that very closely model reflex circuits, but that are capable of recovering from intense deformations of "perfect positioning".
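A minimal sketch of that tight-loop idea (my own toy, not Tilden's actual circuits): a single joint is pulled back to its set point by proportional feedback, and it recovers from a large disturbance without any planning layer. The gain and tick count are arbitrary.

    # Reflex as a tight feedback loop: correct a fraction of the error each tick.
    set_point = 0.0
    angle = 0.0
    gain = 0.3  # reflex strength

    for step in range(50):
        if step == 10:
            angle += 2.0  # an intense "deformation" knocks the joint off
        error = set_point - angle
        angle += gain * error
    # angle decays back toward the set point; no symbolic model involved.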
Now, obviously, reflex systems can only go so far; when you have a bot that you want to plan a path across a room, there has to be a symbolic understanding of its environment. But it seems to me, from my (albeit very limited) understanding of insect / lower-animal intelligence, that most insects don't actually work up a full symbolic understanding of their surroundings; they just have some sort of sense of direction towards a goal (think moths to light), and then they start the reflex circuits firing to move towards it. I can understand having an end goal of a full cognitive system comparable to human understanding of the world, but it seems like people might be overshooting the process a bit. We need a greater understanding of the simple systems before we can hope to leapfrog to the big stuff.
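Here's a hedged, moth-like sketch of that "direction sense plus reflexes" point (entirely my own, with made-up numbers): two light sensors are wired straight to the steering, so the agent homes in on the light without ever building a map of the room.

    import math

    light = (5.0, 5.0)               # the goal, sensed but never represented
    x, y, heading = 0.0, 0.0, 0.0

    def brightness(px, py):
        # Hypothetical sensor model: light falls off with squared distance.
        d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
        return 1.0 / (1.0 + d2)

    for _ in range(300):
        # Two sensors, offset left and right of the current heading.
        left = brightness(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
        right = brightness(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
        heading += 0.5 * (left - right)  # reflex: turn toward the brighter side
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
    # (x, y) ends up near the light, with no path planning at all.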
To dispute my own point, though, I feel it's fair to say that the "simple" systems of the animal brain are already being modeled [newscientist.com] to the point that prostheses for the brain might just be within reach. The success of an artificial hippocampus would prove that modeling the brain isn't necessarily the same as understanding the brain, but it might be easier to learn how the systems work from our artificial models than from the real ones.
Re:Not all cognitive scientists do that. (Score:4, Insightful)
Re:Not all cognitive scientists do that. (Score:5, Insightful)
The classic sense of AI might have been that of search and planning, but for the last 20 years or so, many non-search and non-symbolic approaches have been treated as equals in the discipline, including:
But you're absolutely right: cog sci is more concerned with mimicking human cognitive processes, which is why AI cannot simply be a branch of it.
Re:The title reminds me of an article in AIR (Score:5, Insightful)
Essentially, AI is used to mean "stuff computers can't do yet".
People say "but the computer's just doing maths". Well, that's the point, isn't it? It might be that an AI powerful enough to be mistaken for a human is simply horrendously complex rather than unattainable, needing the sum of all those little incremental advances that AI researchers keep making.
Actually, the thing that annoys me most is that people associate Lisp with 80s AI, when in fact modern Common Lisp is an excellent multiparadigm language for all sorts of problems, and a much better fit for large software systems than, say, Java.
Re: Not all cognitive scientists do that. (Score:4, Insightful)
> The goal of all the cognitive scientists I've met is to make machines think, just as with A.I.
You need to meet more of them, then. Ask linguists whether they're studying cog sci and they'll give you an emphatic "yes". I think these days most research psychologists would say so as well (though maybe clinical psychologists wouldn't).
> In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.
Some AI is, but not all. It really depends on the individual researcher's goals.
> However, there are many approaches to machine thinking that are not considered part of A.I.:
neural networks, SVMs, computer vision (signal interpretation), modeling.
Never heard of SVMs, but most AI researchers do think neural networks, computer vision, and certain kinds of modelling are subfields of AI.
Who taught your AI class?
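For anyone who, like the parent, hasn't met SVMs: a support vector machine is a classifier that separates labeled examples with a maximum-margin boundary. A minimal sketch, assuming scikit-learn is available (the toy data is mine):

    from sklearn import svm

    # Points above the line y = x are class 1, below it class 0.
    X = [[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]]
    y = [1, 1, 1, 0, 0, 0]

    clf = svm.SVC(kernel="linear")  # linear maximum-margin classifier
    clf.fit(X, y)
    print(clf.predict([[0, 3], [3, 0]]))  # -> [1 0]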
> Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.
No, you have that backwards. AI researchers are concerned with getting machines to behave intelligently, and cog sci researchers are trying to understand human or animal cognition. And there is a fair amount of overlap, e.g. an AI/CogSci researcher may try to get a machine to behave intelligently as a model of human cognition.
Re:Maybe slashdot could use a cognitive system... (Score:2, Insightful)
Re:The title reminds me of an article in AIR (Score:4, Insightful)
"Real" AI would emphasize the "intelligence" part and be capable of, for example, learning the rules of a new game or process from a natural language description and trial and error, and then being able to perform said process. Anything less is pretty much just dicking around with heuristics.
Anyone who ever claimed that machine vision or chess playing or voice recognition was AI, was either confused or guilty of the charge in the first paragraph above. Even before those things were first achieved, the people actually working on them had a pretty good idea of how they could be achieved without anything like what we normally consider intelligence - and they went on to prove it.
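A concrete, hedged toy of the narrow "trial and error" sense (entirely my own illustration): an epsilon-greedy agent that learns which of a hidden game's moves pays off, without being told the rules. It falls far short of the parent's bar, but it shows the contrast with hand-coded heuristics.

    import random

    payoff = {"a": 0.2, "b": 0.8}   # hidden "rules" of the game
    value = {"a": 0.0, "b": 0.0}    # the agent's running estimates
    counts = {"a": 0, "b": 0}

    for _ in range(1000):
        if random.random() < 0.1:                 # explore occasionally
            move = random.choice(["a", "b"])
        else:                                     # otherwise exploit beliefs
            move = max(value, key=value.get)
        reward = 1.0 if random.random() < payoff[move] else 0.0
        counts[move] += 1
        value[move] += (reward - value[move]) / counts[move]  # running mean

    # value["b"] converges near 0.8: the better move was learned, not programmed.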
Lisp: The Next Generation! (Score:3, Insightful)
All of which is a secondary result of another case of 80s hype. Declarative languages, such as SQL, were sold as "fourth generation" because they were supposed to make procedural languages ("third-generation languages") obsolete. Which didn't happen, of course. Declarative programming ended up supplementing older languages, not replacing them.
After a while the original meaning was forgotten. So now people call languages "4GLs" etc. to emphasize some vague claim that they're more advanced. Or because of a vague notion that 4GL has something to do with database programming. These are terms we should just stop using.
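As a small illustration of that "supplementing, not replacing" point (my own example, using Python's standard sqlite3 module): the declarative query says what we want, and the procedural host decides what to do with the rows.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE langs (name TEXT, generation INTEGER)")
    conn.executemany(
        "INSERT INTO langs VALUES (?, ?)",
        [("C", 3), ("SQL", 4), ("Lisp", 3)],
    )

    # Declarative part: what to fetch. Procedural part: what to do with it.
    for name, gen in conn.execute(
        "SELECT name, generation FROM langs WHERE generation = 3"
    ):
        print(name, "is a so-called third-generation language")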