Recent Advances in Cognitive Systems
Roland Piquepaille writes "ERCIM News is a quarterly publication from the European Research Consortium for Informatics and Mathematics. The April 2003 issue is dedicated to cognitive systems. It contains no fewer than 21 articles, all of which are available online. In this column, you'll find a summary of the introduction and of the possible applications of these cognitive systems. There's also a picture of the cover, a little robot with a very nice-looking blue wig. And in A Gallery of Cognitive Systems, you'll find a selection of stories, including links, abstracts and illustrations (the whole page weighs 217 KB). There are very good pictures of autonomous soccer robots, swarm bots, cognitive vision systems, and more."
Re:Not all cognitive scientists do that. (Score:4, Informative)
I have never met any cognitive scientists, but I've read books on the subject by Daniel Dennett (who is arguably a philosopher, not a scientist) and Steven Pinker (a cognitive scientist). The works of both are highly recommended.
Anyway, neither of them is focused on making machines think, but rather on understanding what makes humans think.
Somewhat Relevant Plug... (Score:5, Informative)
A more detailed summary is available here [osforge.com] and this [greatmindsworking.com] is the project web site.
Compared to proprietary systems such as Ai's HAL [bbc.co.uk], Meaningful Machines' Knowledge Engine [prnewswire.com], and Lobal Technologies' LAD [silicon.com], EBLA is the only system to incorporate grounded/perceptual understanding of language.
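For the curious, here's a rough Python sketch of the kind of "grounded" learning I mean: a cross-situational learner that associates each word with the perceptual attributes it co-occurs with across scenes. To be clear, this is not EBLA's actual code (EBLA is a Java system that works from video); the situations and attribute names below are invented for illustration.

```python
# Toy illustration of grounded (cross-situational) word learning:
# associate each word with the perceptual attributes it co-occurs with.
# NOT EBLA's actual algorithm -- just a sketch of the idea.
from collections import defaultdict

# Each "situation" pairs an utterance with the attributes a vision
# system (hypothetically) extracted from the scene at the same time.
situations = [
    ("ball rolls", {"round", "moving", "on-table"}),
    ("ball drops", {"round", "falling"}),
    ("cup drops",  {"concave", "falling"}),
]

# Count word/attribute co-occurrences across all situations.
counts = defaultdict(lambda: defaultdict(int))
for utterance, attributes in situations:
    for word in utterance.split():
        for attr in attributes:
            counts[word][attr] += 1

# A word's "meaning" is the attribute it co-occurs with most reliably
# (ties broken arbitrarily in this toy version).
for word, attrs in counts.items():
    best = max(attrs, key=attrs.get)
    print("%r -> %s (seen together %d time(s))" % (word, best, attrs[best]))
```

Run it and "ball" pairs off with "round" while "drops" pairs off with "falling" -- meaning emerges from perception, not from a hand-coded dictionary, which is the point of grounding.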
Re:Cognitive Science (Score:2, Informative)
You *almost* got it. Cog Sci approaches the mind as an information-processing device and seeks to understand the algorithms (mental representations and processes) operating on the incoming data. Thus, Cog Sci is the study of the mind as software, not "hardware".
This is why babies can't see, even though the optics work.
Actually, newborn babies can do more sophisticated visual processing than you might think. Within the first day of life, they prefer looking at faces over other stimuli. And if you put up two TV screens showing two different people talking, with a speaker in the middle playing the soundtrack of one of them but not the other, babies prefer to look at the screen that matches the sound. So babies are wired to perform some fairly sophisticated cross-modal perceptual processing from the beginning.
Not to say that babies can see THAT well-- the myelination of neurons (kinda like insulation on an electrical wire) in the brain isn't finished until years after birth, which limits the conduction speed of neural signals and therefore the babies' perceptual and motor repertoire.
The perceptual system comes pre-wired for some basic things, and then self-organizes the rest based on the statistics of visual input from natural scenes. For instance, researchers have raised kittens in environments containing nothing but vertical stripes, and after a while the kittens lose the ability to perceive horizontal stripes. (Sick experiments, but informative.)
Here, kitty kitty...
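If you want a feel for how "self-organizes based on the statistics of visual input" can work, here's a toy Python sketch (my own invention, not the actual kitten study or any real cortical model): a single simulated neuron trained with Oja's Hebbian learning rule on nothing but vertical stripes ends up nearly blind to horizontal ones.

```python
# Toy "kitten reared among vertical stripes": a single Hebbian unit
# trained with Oja's rule on vertical-stripe patches only ends up
# responding strongly to vertical and barely at all to horizontal.
# Purely illustrative -- real cortical development is far messier.
import numpy as np

rng = np.random.default_rng(0)
N = 8  # patch size: N x N pixels

def stripes(vertical, phase):
    """An N x N patch of sinusoidal stripes, flattened to a vector."""
    wave = np.sin(2 * np.pi * np.arange(N) / 4 + phase)
    patch = np.tile(wave, (N, 1)) if vertical else np.tile(wave[:, None], (1, N))
    return patch.ravel()

w = rng.normal(scale=0.1, size=N * N)   # random initial "synapses"
eta = 0.01                              # learning rate
for _ in range(2000):                   # rearing phase: vertical input only
    x = stripes(vertical=True, phase=rng.uniform(0, 2 * np.pi))
    x = x + rng.normal(scale=0.1, size=x.shape)  # a little sensory noise
    y = w @ x                           # unit's response
    w += eta * y * (x - y * w)          # Oja's rule: Hebbian growth + decay

def peak_response(vertical):
    """Best response over stripe phases for one orientation."""
    return max(abs(w @ stripes(vertical, p))
               for p in np.linspace(0, 2 * np.pi, 16))

print("response to vertical stripes:   %.2f" % peak_response(True))
print("response to horizontal stripes: %.2f" % peak_response(False))
```

The weights converge toward the principal component of the input, and since the "environment" only ever contains vertical structure, horizontal sensitivity simply decays away -- a cartoon version of what happened to the kittens.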
Not a great read (Score:5, Informative)
Basically, each one boiled down to: "our lab does the XYZAB project, and we're studying this system."
Re:The title reminds me of an article in AIR (Score:2, Informative)
But I'd like to bring to your attention a research project going on at my school (Michigan State University) which I think is different from other "AI". I didn't see it mentioned in my glance through the article.
The attempt is to create a robot that learns and develops the way a baby would. A key point is that it develops its own representation of the world (see the toy sketch below the links). I disagree with the professor on some issues, but I think he has the right general idea.
Here's a link [msu.edu] to slides explaining the approach and another link [msu.edu] to the main research page.
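In case the slides don't make it concrete, here's a toy Python sketch of what "develops its own representation" can mean: an agent that grows and refines prototypes from raw sensor data online, with no programmer-defined categories. This is not the MSU group's actual algorithm (they use much more sophisticated incremental methods); the class name, thresholds, and sensor stream are all invented.

```python
# Toy sketch of an agent that "develops its own representation":
# it grows prototypes from raw sensor vectors as they arrive, instead
# of matching against programmer-defined categories. NOT the actual
# MSU algorithm; names and thresholds here are invented.
import random
import numpy as np

class GrowingPrototypes:
    def __init__(self, novelty_threshold=1.0, lr=0.1):
        self.prototypes = []           # learned "concepts", initially none
        self.threshold = novelty_threshold
        self.lr = lr                   # how fast familiar concepts refine

    def observe(self, x):
        """Assimilate one sensory vector; return the index of its concept."""
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Familiar input: nudge the existing prototype toward it.
                self.prototypes[best] += self.lr * (x - self.prototypes[best])
                return best
        # Novel input: grow a brand-new prototype.
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1

# Simulated sensor stream: readings cluster around two unknown "things".
rng = np.random.default_rng(1)
stream = [rng.normal(loc, 0.2, size=3) for loc in (0.0, 5.0) for _ in range(50)]
random.shuffle(stream)

agent = GrowingPrototypes()
for reading in stream:
    agent.observe(reading)
print("learned %d concepts:" % len(agent.prototypes),
      [p.round(1).tolist() for p in agent.prototypes])
```

Nobody told the agent there were two things in its world; it carved that structure out of the data itself, which is the spirit (if not remotely the sophistication) of the developmental approach.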