Recent Advances in Cognitive Systems
Roland Piquepaille writes "ERCIM News is a quarterly publication from the European Research Consortium for Informatics and Mathematics. The April 2003 issue is dedicated to cognitive systems. It contains no fewer than 21 articles, all available online. In this column, you'll find a summary of the introduction and of the possible applications of these cognitive systems. There's also a picture of the cover, a little robot with a very nice-looking blue wig. And in A Gallery of Cognitive Systems, you'll find a selection of stories, including links, abstracts and illustrations (the whole page weighs 217 KB). There are very good pictures of autonomous soccer robots, swarm bots, cognitive vision systems, and more."
Cognitive Science (Score:1, Flamebait)
Re:Cognitive Science (Score:1)
*It's a joke. Laugh.*
Re:Cognitive Science (Score:5, Insightful)
Actually, cognitive science does not replace AI. The goal of cognitive science is to figure out how our brain works on a functional level. Where neurology studies the actual chemical reactions and neural activity, cognitive science studies how the "hardware" works to achieve our thought processes.
One good example is how the brain works out an image from the mishmash of neural impulses going through the retinal nerves. The resolution of the eye is actually quite low, and the "pixels" aren't ordered in any linear fashion. The brain does an enormous amount of processing to form an actual image. This is why babies can't see, even though the optics work. The brain needs to develop the processing algorithms in order to make sense of all the information coming in.
Of course, all of this is theory, and subject to scientific dispute.
Not all cognitive scientists do that. (Score:5, Interesting)
The goal of all the cognitive scientists I've met is to make machines think, just as with A.I. In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.
However, there are many approaches to machine thinking that are not considered part of A.I.:
neural networks, SVMs (support vector machines), computer vision (signal interpretation), and modeling (a toy SVM sketch below).
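For anyone who hasn't run into SVMs, here's a toy sketch, assuming scikit-learn is available; the dataset and parameters are just illustrative, not anything from the article:

    # Toy SVM classifier: fit on a stock demo dataset, score on held-out data.
    from sklearn import datasets, svm
    from sklearn.model_selection import train_test_split

    X, y = datasets.load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = svm.SVC(kernel="rbf", C=1.0)   # RBF kernel; C controls margin softness
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))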
So what does A.I. cover then? Well, it's not exactly well defined. If you read A.I. textbooks, you'll find them full of lots of different things. Some would even go so far as to include those things I mentioned that aren't normally considered part of A.I. In general, though, I would say that A.I. is the field that is concerned with (a minimal search sketch follows this list):
1) Solving the search problem (searching for a solution in a large set of possibilities)
2) Doing it with heuristics.
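To make that concrete, here's a minimal sketch of the kind of heuristic search I mean (best-first / A*), in Python; the graph and heuristic are hypothetical placeholders, not any particular system:

    # Best-first (A*) search: explore a large space of possibilities,
    # guided by a heuristic estimate of remaining cost.
    import heapq
    import itertools

    def a_star(start, goal, neighbors, heuristic):
        """neighbors(s) yields (next_state, step_cost); heuristic(s) estimates cost to goal."""
        tie = itertools.count()   # tie-breaker so states themselves never get compared
        frontier = [(heuristic(start), next(tie), 0, start, [start])]
        seen = set()
        while frontier:
            _, _, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path       # cheapest path, if the heuristic is admissible
            if state in seen:
                continue
            seen.add(state)
            for nxt, step in neighbors(state):
                if nxt not in seen:
                    heapq.heappush(frontier, (g + step + heuristic(nxt),
                                              next(tie), g + step, nxt, path + [nxt]))
        return None               # goal unreachable

    # Toy usage: shortest path on a 5x5 grid, Manhattan distance as heuristic.
    grid_neighbors = lambda s: (((s[0]+dx, s[1]+dy), 1)
                                for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
                                if 0 <= s[0]+dx < 5 and 0 <= s[1]+dy < 5)
    manhattan = lambda s: abs(s[0]-4) + abs(s[1]-4)
    print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))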
I'd like to take a moment to note that a famous computer vision paper came out in the '80s documenting a method called Marr-Hildreth for finding edges in images. They created it using the same technique that eyes use (a Laplacian of a Gaussian for edge detection; they studied cats to find this out).
A few years later someone improved upon it by throwing out the model completely and NOT doing it the way that people do (Canny).
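For the curious, here's a hedged sketch of the contrast in Python, assuming OpenCV and SciPy are available; "input.png" is a hypothetical grayscale image path:

    # Marr-Hildreth-style Laplacian of Gaussian with zero crossings, vs. Canny.
    import cv2
    import numpy as np
    from scipy import ndimage

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(float)

    # Marr-Hildreth: smooth with a Gaussian, take the Laplacian, mark zero crossings.
    log = ndimage.gaussian_laplace(img, sigma=2.0)
    s = np.sign(log)
    zero_cross = (np.diff(s, axis=0, prepend=s[:1]) != 0) | \
                 (np.diff(s, axis=1, prepend=s[:, :1]) != 0)

    # Canny: gradient magnitude, non-maximum suppression, hysteresis thresholds.
    edges = cv2.Canny(img.astype(np.uint8), 100, 200)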
Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.
Re:Not all cognitive scientists do that. (Score:4, Insightful)
Re:Not all cognitive scientists do that. (Score:1)
Re:Not all cognitive scientists do that. (Score:2)
His metaphor stuff is really good -- when you get it, it's like a new way of looking at all of the old language and thinking you've ever used before... (I'm sure you know what I mean).
Good luck with your work!
Re:Not all cognitive scientists do that. (Score:1)
Yeah, Lakoff does some very interesting work with metaphor. Here's something recent [dpingles.ugr.es].
It was his research that contributed a lot to the idea that since we (humans) process metaphor and figurative language at the same rate we do literal, non-figurative language, computers should do the same. Big implications for NLP...
Re:Not all cognitive scientists do that. (Score:4, Informative)
I have never met any cognitive scientists, but I've read books on the subject by Daniel Dennett (who is arguably a philosopher, not a scientist) and Steven Pinker (a cognitive scientist). The works of both are highly recommended.
Anyway, neither of them is focused on making machines think, but rather on understanding what makes humans think.
Re:Not all cognitive scientists do that. (Score:2)
> The goal of all the cognitive scientists I've met is to make machines think, just as with A.I. In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.
The goal of cognitive science is to find out how humans think, especially how they process information and reach decisions.
AI, as a branch of cognitive science, tries to model human thought with computers, basically testing theories about how the human brain achieves what it does.
AI in general
Re:Not all cognitive scientists do that. (Score:5, Insightful)
The classic sense of AI might have been that of search and planning, but for the last 20 years or so many non-search and non-symbolic approaches have been treated as equals in the discipline, including:
But you're absolutely right, cog sci is more concerned with mimicking human cognitive processes, which is why AI cannot simply be a branch of it.
Re:Not all cognitive scientists do that. (Score:2)
Lisp: Getting there is half defun.
Re: Not all cognitive scientists do that. (Score:4, Insightful)
> The goal of all the cognitive scientists I've met is to make machines think, just as with A.I.
You need to meet more then. Ask linguists whether they're studying cog sci and they'll give you an emphatic "yes". I think these days most research psychologists would say so as well (though maybe clinical psychologists wouldn't).
> In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.
Some AI is, but not all. It really depends on the individual researcher's goals.
> However, there are many approaches to machine thinking that are not considered part of A.I.:
> neural networks, SVMs, computer vision (signal interpretation), modeling.
Never heard of SVMs, but most AI researchers do think neural networks, computer vision, and certain kinds of modelling are subfields of AI.
Who taught your AI class?
> Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.
No, you have that backwards. AI researchers are concerned with getting machines to behave intelligently, and cog sci researchers are trying to understand human or animal cognition. And there is a fair amount of overlap, e.g. an AI/CogSci researcher may try to get a machine to behave intelligently as a model of human cognition.
Re:Cognitive Science (Score:2, Informative)
You *almost* got it. Cog Sci approaches the mind as an information-processing device and seeks to understand the algorithms (mental representations and processes) operating on the incoming data. Thus, Cog Sci is the study of the mind as software, not "hardware".
> This is why babies can't see, even though the optics work.
Actually, newborn babies ca
Re:Cognitive Science (Score:1)
But what a physical brain "does" determines what "computation" or
Re:Cognitive Science (Score:2)
> Noun ; 1. The current scientist scam
You don't think cognition is a legitimate subject for scientists to study?
> which has replaced the older artificial intelligence scam
Not the same thing at all; AI will still be around, plodding along, though they may eventually get a boost from the results of cognitive scientists.
> with its more robust resistance to criticism
How so?
> and even more byzantine theories.
Sorry, but the theories have to go wherever the facts lead. General relativity an
Re:Cognitive Science (Score:2, Interesting)
Reading this reminds me of my cognitive neuroscience/AI prof Lev Goldfarb. He began our course by telling us that very, very little has been accomplished in the fields of Cog Sci and AI, and that he is possibly the only one who has brought a real contribution to the table: a formal language ("real science") for working in this field. His "Evolving Transformation System" or ETS provides methods for measuring symbols and the differences between them, and lays the groundwork for modelling cognitive processes
Robots (Score:2)
Rus
My vision (Score:1)
When you're that old I think it's your right to be lazy... right?
Re:doh! (Score:2)
For the humour impaired (Score:2)
Here it is
You can teach a computer to think (Score:3, Insightful)
Re:You can teach a computer to think (Score:1, Offtopic)
The title reminds me of an article in AIR (Score:5, Funny)
Re:The title reminds me of an article in AIR (Score:5, Insightful)
Essentially, AI is used to mean "stuff computers can't do yet".
People say "but the computer's just doing maths". Well, that's the point, isn't it? It might be that an AI powerful enough to be mistaken for human is simply horrendously complex, not unattainable, needing the sum of all those little incremental advances that AI researchers keep making.
Actually, the thing that annoys me most is that people associate Lisp with '80s AI, when in fact modern Common Lisp is an excellent multiparadigm language for all sorts of problems, and a much better fit for large software systems than, say, Java.
Re:The title reminds me of an article in AIR (Score:2)
Certainly, we don't label as intelligent certain insects which behave like machines. In fact, everything that resembles machine-like behaviou
Re:The title reminds me of an article in AIR (Score:4, Insightful)
"Real" AI would emphasize the "intelligence" part and be capable of, for example, learning the rules of a new game or process from a natural language description and trial and error, and then being able to perform said process. Anything less is pretty much just dicking around with heuristics.
Anyone who ever claimed that machine vision or chess playing or voice recognition was AI, was either confused or guilty of the charge in the first paragraph above. Even before those things were first achieved, the people actually working on them had a pretty good idea of how they could be achieved without anything like what we normally consider intelligence - and they went on to prove it.
Re:The title reminds me of an article in AIR (Score:2, Informative)
But I'd like to bring to your attention a research project going on at my school (Michigan State University) which I think is different from other "AI". I didn't see it mentioned when glancing through the article.
The attempt is to create a robot that learns and develops as a baby would.
Re:The title reminds me of an article in AIR (Score:2, Interesting)
I don't mean to slight the progress made, and I also didn't mean to criticize all AI researchers.
Perhaps a better way to describe what I was getting at is that there's an unfortunate feedback effect that happens with these advanced applications, where: researchers say things which excite the general public because they describe things that sound amazing and desirable; researchers notice said excitement and connect that with increased funding; researchers exploit excitement by attach
Intelligence and learning (Score:1)
Some people had a better idea than others, though. I don't think much fundamental has changed in terms of our general understanding of what does and doesn't constitute intelligence, since at least about the late '70s, but there've still been questionable AI-related claims in that timeframe.
And personally, as far as I can see, most of th
Lisp: The Next Generation! (Score:3, Insightful)
What annoys me is that people refer to Lisp as a "fifth generation language", even though it's the second oldest high-level language (after Fortran). But that's not as annoying as calling Visual Basic a "fourth generation language" because of its database features.
All of which is a secondary result of another case of 80s hype. Declarative languages, such as SQL, were sold as "fourth generation" because they were supposed t
Real world problems and neuroscience (Score:5, Interesting)
The great thing about the recent developments in so-called cognitive systems is that they are starting to address more realistic problems. The time of toy problems is over. It is not enough to just follow a line. Only the challenge from the real world can make algorithms in any way "clever" or meaningful.
This is why I find it truly inspiring that so much research is going into these systems these days.
Sadly, however, most of neuroscience these days is still far from these questions. Most electrophysiologists who study, for example, the visual system present it with trivial stimuli such as bars or gratings. In some sense a system can only show its capability when the stimuli are rich enough.
Nevertheless, there is clearly a move these days towards larger, more interesting problems even in neuroscience. We should be inspired by the work of the roboticists.
Comment removed (Score:5, Interesting)
Re:Real world problems and neuroscience (Score:1)
There is a prissy, "language must remain static" camp that refuses to acknowledge the validity of "irregardless". However, it is commonly understood to connote a more emphatic version of "regardless".
Re:Real world problems and neuroscience (Score:1)
I believe it's actually commonly understood to connote a writer unable to parse the very words he uses. Of course language evolves. But this is not a valid mutation unless the meaning of the prefix "ir" also changes.
Inspired (Score:4, Funny)
Now THAT's a goal.
Maybe we'll see humanoid robot referees in sports. That should stop any dissent from the players
Player: C'mon ref, that was never in a million years a f**king penalty!!
Ref: You have 3 seconds to comply..
Re:Inspired (Score:2)
Re:Inspired (Score:1)
Actually, they should make the robo-ref semi-fragile. No better way to boost ratings than to let John McEnroe or Shaq beat the chips out of a ref. You can't do that with human refs. They could beat the stuffing out of robo-mascots, too.
Re:Real world problems and neuroscience (Score:4, Interesting)
This is interesting to me, for several reasons. I'm working on robotics in my free time, mainly not cognitive stuff but lower level autonomous muscular control and feedback loop stuff. But anyway, my girlfriend's studying neuroscience and she, like many (too many) of her peers, finds absolutely NOTHING interesting in cognitive research.
All they care about is the mechanics (which is important) but I think they consider cognition to be a peculiar but unimportant side effect of the rest of the complex process.
So, as a fellow who's spent years writing code to try to do intelligent stuff, and more recently robots to carry these actions out, it's somewhat frustrating to be in a bar with a bunch of neuroscientists and hear them dismiss cognition as irrelevant.
Maybe slashdot could use a cognitive system... (Score:4, Funny)
Re:Maybe slashdot could use a cognitive system... (Score:1)
I know... maybe once the cognitive system is all figured out (if it ever is
We love you slashdot
Re:Maybe slashdot could use a cognitive system... (Score:2, Insightful)
Slashdot *does* use a cognitive system.... (Score:3, Funny)
But don't get your hopes up - when they attempted t
On Combining Sensory and Symbolic Information (Score:5, Insightful)
The point at which an understanding of body position is integrated with an overall structure of behavior leading towards a goal seems a mirage, since this isn't necessarily the way animal systems work. The best recreation of nature's flexibility in "simple" systems that I've heard of comes from Mark Tilden's analog systems [cnn.com], which are controlled by tight loops of feedback that very closely model reflex circuits, but which are capable of recovering from intense deformations of "perfect positioning".
Now, obviously, reflex systems can only go so far; when you have a bot that you want to decide on a path across a room, there has to be a symbolic understanding of its environment. But it seems to me, from my (albeit very limited) understanding of insect / lower-animal intelligence, that most insects don't actually work up a full symbolic understanding of their surroundings; they just have some sort of sense of direction towards a goal (think moths to light) and then they start the reflex circuits firing to move towards it. I can understand having an end goal of a full cognitive system comparable to human understanding of the world, but it seems like people might be overshooting the process a bit. We need a greater understanding of the simple systems before we can hope to leapfrog to the big stuff.
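To make the idea concrete, here's a minimal sketch of that moth-style "sense of direction plus reflexes" wiring, with no symbolic world model at all; the two-sensor robot interface is hypothetical, and the crossed wiring is the classic Braitenberg arrangement:

    # One tight-loop reflex step: steer toward the brighter side, no planning.
    def reflex_step(left_light, right_light, base_speed=1.0, gain=0.5):
        left_motor = base_speed + gain * right_light    # brighter right -> left wheel speeds up
        right_motor = base_speed + gain * left_light    # brighter left -> right wheel speeds up
        return left_motor, right_motor

    # The bot veers toward the light purely through the feedback loop:
    print(reflex_step(left_light=0.2, right_light=0.9))  # left wheel faster -> turn right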
To dispute my own point, though, I feel it's fair to say that the "simple" systems of the animal brain are already being modeled [newscientist.com] to the point that prostheses for the brain might just be within reach. The success of an artificial hippocampus will prove that modeling the brain isn't necessarily understanding the brain, but it might be easier to learn the systems from our artificial models than from the real ones.
Re:On Combining Sensory and Symbolic Information (Score:1)
But I can almost swear that is how some managers function.
can't resist (Score:3, Funny)
Somewhat Relevant Plug... (Score:5, Informative)
A more detailed summary is available here [osforge.com] and this [greatmindsworking.com] is the project web site.
Compared with proprietary systems such as Ai's HAL [bbc.co.uk], Meaningful Machines' Knowledge Engine [prnewswire.com], and Lobal Technologies' LAD [silicon.com], EBLA is the only system to incorporate a grounded/perceptual understanding of language.
What I'd like to hear more about (Score:3, Interesting)
Cogsci? (Score:1)
Not a great read (Score:5, Informative)
Basically each one boiled down to: our lab does the XYZAB project and we're studying this system.
bad science (Score:1)
AI is a vague misleading term (Score:1)
All I ever hear about when folks brag about "advances in AI" are things like some new algorithm which can interpret some form of input which it previously could not, or new theories of machine learning, etc.
No one yet has effectively defined the mechanics which make up "the mind". Fol