Recent Advances in Cognitive Systems

Posted by timothy
from the smarter-than-me dept.
Roland Piquepaille writes "ERCIM News is a quarterly publication from the European Research Consortium for Informatics and Mathematics. The April 2003 issue is dedicated to cognitive systems. It contains no fewer than 21 articles, all of which are available online. In this column, you'll find a summary of the introduction and of the possible applications of these cognitive systems. There's also a picture of the cover, a little robot with a very nice-looking blue wig. And in A Gallery of Cognitive Systems, you'll find a selection of stories, including links, abstracts and illustrations (the whole page weighs 217 KB). There are very good pictures of autonomous soccer robots, swarm bots, cognitive vision systems, and more."
  • by rf0 (159958)
It's nice to see robotics coming along and finally making good on all the sci-fi books.

    Rus
  • When I'm 85, I'm going to have a nice little holographic butler keeping track of my appointments and stuff like that... Never tiring, never forgetting.

    When you're that old I think it's your right to be lazy... right? ;)
  • by ObviousGuy (578567) <ObviousGuy@hotmail.com> on Tuesday April 29, 2003 @07:25AM (#5832695) Homepage Journal
    But doing so doesn't relieve you of your responsibility to think too.
  • by Jonathan (5011) on Tuesday April 29, 2003 @07:28AM (#5832702) Homepage
    The Annals of Improbable Research [improb.com], the humor magazine for scientists, once had an article entitled "Advances in Artificial Intelligence". After the title and author affiliations, the page was appropriately completely blank...
    • by Anonymous Coward on Tuesday April 29, 2003 @08:21AM (#5832851)
The problem is, every time an advance is made in the field of AI, that advance is immediately redefined as "not AI". Voice recognition, which you can now walk into a shop and buy software for, is now "not AI". Chess playing, "not AI".

      Essentially, AI is used to mean "stuff computers can't do yet".

      People say "but the computer's just doing maths". Well, that's the point, isn't it? It might be that an AI powerful enough to be mistaken for human is simply horrendously complex, not unattainable, needing the sum of all those little incremental advances that AI researchers keep making.

Actually, the thing that annoys me most is that people associate Lisp with 80s AI, when in fact modern Common Lisp is an excellent multiparadigm language for all sorts of problems, and a much better fit for large software systems than, say, Java.

It's Humanlike Intelligence now. I don't know why they don't just ditch the name and replace it with HI. Anyway, I seem to be complaining, but I'm not really; I don't think chess programs have AI inside them, not anything that could be labeled AI today. This happens because most people, including me, think only the human way of learning and acting can be labeled as "intelligent".

Certainly, we don't label certain insects which behave as machines as intelligent. In fact, everything that resembles machine-like behaviou
      • by alienmole (15522) on Tuesday April 29, 2003 @03:40PM (#5837065)
The problem is actually largely self-inflicted by AI researchers, who have at various times used AI as a gee-whiz hook to justify all sorts of research that are, at best, peripheral applications of a weak form of AI. Even scientists pay a price for overuse of hype.

        "Real" AI would emphasize the "intelligence" part and be capable of, for example, learning the rules of a new game or process from a natural language description and trial and error, and then being able to perform said process. Anything less is pretty much just dicking around with heuristics.

        Anyone who ever claimed that machine vision or chess playing or voice recognition was AI, was either confused or guilty of the charge in the first paragraph above. Even before those things were first achieved, the people actually working on them had a pretty good idea of how they could be achieved without anything like what we normally consider intelligence - and they went on to prove it.

I think you slight the progress made, underestimate the difficulty of the task, and overgeneralize about the AI community (about the purposes and approaches of various people). You also speak of "intelligence" with a clarity the concept tends not to have.

But I'd like to bring to your attention a research project going on at my school (Michigan State University) which I think is different from other "AI". I didn't see it mentioned from glancing through the article.

The attempt is to create a robot that learns and develops as a baby would.

          • Thanks for the links.

            I don't mean to slight the progress made, and I also didn't mean to criticize all AI researchers.

Perhaps a better way to describe what I was getting at is that there's an unfortunate feedback effect with these advanced applications: researchers say things which excite the general public because they describe things that sound amazing and desirable; researchers notice said excitement and connect it with increased funding; researchers exploit excitement by attach

      • by fm6 (162816)

Actually, the thing that annoys me most is that people associate Lisp with 80s AI,

        What annoys me is that people refer to Lisp as a "fifth generation language", even though it's the second oldest high-level language (after Fortran). But that's not as annoying as calling Visual Basic a "fourth generation language" because of its database features.

        All of which is a secondary result of another case of 80s hype. Declarative languages, such as SQL, were sold as "fourth generation" because they were supposed t

  • by Neuronerd (594981) <konrad@@@koerding...de> on Tuesday April 29, 2003 @07:31AM (#5832710) Homepage

The great thing about the recent developments in so-called cognitive systems is that they are starting to address more real problems. The time of toy problems is over. It is not enough to just follow a line. Only the challenge from the real world can make algorithms in any way "clever" or meaningful.

    This is why I find it truly inspiring that so much research is going into these systems these days.

Sadly, however, most of neuroscience these days is still far from these questions. Most electrophysiologists who, for example, study the visual system show it trivial stimuli such as bars or gratings. In some sense, a system can only show its capability when the stimuli are rich enough.

Nevertheless, there is clearly a move these days towards larger, more interesting problems, even in neuroscience. We should be inspired by the work of the roboticists.

    • by PWBDecker (452734) on Tuesday April 29, 2003 @08:05AM (#5832791)
I don't know if you intend to be rhetorical, but regardless I'm going to reply to clarify the absurdity of what you just said. Of course a system will truly flourish and develop when exposed to more complex stimuli, but the purpose and value of exposing such a system (such as our own visual or cognitive systems) to SIMPLE stimuli such as basic geometric shapes or simple patterns is to help develop an understanding of how it processes such stimuli, in order to build a more complex model. I don't think very much progress would be made in the field of vision recognition if we simply attached as many electrodes as we could to a person's head and monitored the feedback while they read a book.

The problem with neural systems, as anyone who knows how a basic neural net works will recognize, is that they are fully distributed information processing systems: no particular physical location is responsible for any particular processing task. In fact, research is being conducted to allow a large number of neural nets to develop their own internal relay systems, entirely beyond human control, to see whether an entire cognitive framework can develop on its own (much as it does in nature). For those interested, check out http://www.genobyte.com/; their CAM-Brain machine is, in my opinion, one of the best-developed attempts to build a natural intelligence system.
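The "fully distributed" point can be illustrated with a toy feedforward net (the weights below are arbitrary, hand-picked values for illustration, not anything from the CAM-Brain architecture): lesioning any single hidden unit shifts the output, but no one unit carries the whole computation, so the function degrades gracefully rather than failing outright.

```python
# Toy sketch of distributed representation: knock out any one hidden
# unit and the output changes, but the rest of the net still carries
# most of the computation.
import math

W1 = [0.9, -1.1, 0.7, 1.3]   # input -> hidden weights (arbitrary)
W2 = [0.5, -0.6, 0.8, 0.4]   # hidden -> output weights (arbitrary)

def forward(x, ablate=None):
    hidden = [math.tanh(w * x) for w in W1]
    if ablate is not None:
        hidden[ablate] = 0.0  # "lesion" one hidden unit
    return sum(w2 * h for w2, h in zip(W2, hidden))

full = forward(1.0)
lesioned = [forward(1.0, ablate=i) for i in range(4)]
# each lesioned output differs from `full`, yet all stay in the same
# ballpark: the function is spread across every weight
```

This is the opposite of a lookup table, where deleting one entry destroys one answer completely and leaves the rest untouched.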

      Decker
      Time to lose myself again.
    • Inspired (Score:4, Funny)

      by maharg (182366) on Tuesday April 29, 2003 @08:09AM (#5832800) Homepage Journal
The ultimate goal of the RoboCup project is, by 2050, to develop a team of fully autonomous humanoid robots that can win against the human world champion team in soccer.

      Now THAT's a goal.

      Maybe we'll see humanoid robot referees in sports. That should stop any dissent from the players ,-}

      Player: C'mon ref, that was never in a million years a f**king penalty !!
      Ref: You have 3 seconds to comply..
      • Definitely wouldn't see many assaults on Robo-Ref!
        • Definitely wouldn't see many assaults on Robo-Ref!

Actually, they should make the robo-ref semi-fragile. No better way to boost ratings than to let John McEnroe or Shaq beat the chips out of a ref. You can't do that with human refs. They could also beat the stuffing out of robo-mascots.
    • by TomorrowPlusX (571956) on Tuesday April 29, 2003 @11:13AM (#5834202)
      Sadly however most of neuroscience these days is still far from these questions.

      This is interesting to me, for several reasons. I'm working on robotics in my free time, mainly not cognitive stuff but lower level autonomous muscular control and feedback loop stuff. But anyway, my girlfriend's studying neuroscience and she, like many (too many) of her peers, finds absolutely NOTHING interesting in cognitive research.

      All they care about is the mechanics (which is important) but I think they consider cognition to be a peculiar but unimportant side effect of the rest of the complex process.

      So, as a fellow who's spent years writing code to try to do intelligent stuff, and more recently robots to carry these actions out, it's somewhat frustrating to be in a bar with a bunch of neuroscientists and hear them dismiss cognition as irrelevant.

  • by 1337_h4x0r (643377) on Tuesday April 29, 2003 @07:49AM (#5832747)
    to detect dupes!
    • Why would slashdot want to detect dupes? If I'm not mistaken - most authors are proud to publish dupes in the true slashdot style :)

      I know... maybe once the cognitive system is all figured out (if it ever is :) - we can use it to post dupes! With different names related to the topic!

      We love you slashdot
Both things - detecting dupes and creating dupes - should be simple today. For the first, you could easily use a spam filter [paulgraham.com], modify it, and run it over all new posts. Whenever something bears a high similarity to a former /. article, the system should print out a dupe warning. Of course, sometimes there is intended similarity between different posts (like Mozilla 1.0 is out, Mozilla 1.1 is out, Mozilla 1.2 is out, Mozilla 1.3 is out... and, guess what? Mozilla 1.4 alpha is out!)
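The similarity check described above can be sketched in a few lines. This is a minimal bag-of-words cosine-similarity version, not the Bayesian spam filter from the linked Paul Graham essay (which would weight tokens by how strongly they predict "dupe"); the function names and the 0.8 threshold are my own choices.

```python
# Minimal dupe-detection sketch: flag a new post if it is too similar
# to any archived post, using bag-of-words cosine similarity.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

def cosine_similarity(a, b):
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def is_dupe(new_post, archive, threshold=0.8):
    return any(cosine_similarity(new_post, old) >= threshold
               for old in archive)

archive = ["Mozilla 1.0 is out", "Mozilla 1.1 is out"]
print(is_dupe("Mozilla 1.1 is out!", archive))  # prints True
```

Note this also illustrates the commenter's caveat: the legitimately similar "Mozilla 1.x is out" posts would all trip the filter, so a real system would need a whitelist or a lower-weighted title field.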
    • It's called CmdrTaco. It's quite a sophisticated model in some respects, but it contains all sorts of spelling bugs and a strange fixation on anime and pr0n, sometimes at the same time (although only when MrsKathleenTaco isn't watching). Perhaps due in part to these irrelevant fixations, CmdrTaco's dupe-detection capability is notoriously flaky. This new cognitive research may finally allow CmdrTaco's pattern recognition systems to be upgraded, though.

      But don't get your hopes up - when they attempted t

  • by slinted (374) on Tuesday April 29, 2003 @07:52AM (#5832755)
Having a system combine both symbolic logic systems and sensory systems is mentioned in the article as a major focus of research today, but I wonder why the two have been split so specifically... maybe someone can help me understand.

The point at which an understanding of body position is integrated with an overall structure of behavior leading towards a goal seems a mirage, since this isn't necessarily the way animal systems work. The best recreation of nature's flexibility in "simple" systems that I've heard of comes from Mark Tilden's analog systems [cnn.com], which are controlled by tight loops of feedback that very closely model reflex circuits, but that are capable of recovering from intense deformations of "perfect positioning".

Now, obviously, reflex systems can only go so far: when you have a bot that you want to decide on a path across a room, there has to be a symbolic understanding of its environment. But it seems to me, from my (albeit very limited) understanding of insect / lower-animal intelligence, that most insects don't actually work up a full symbolic understanding of their surroundings; they just have some sort of sense of direction towards a goal (think moths to light) and then start the reflex circuits firing to move towards it. I can understand having an end goal of a full cognitive system comparable to human understanding of the world, but it seems like people might be overshooting the process a bit. We need a greater understanding of the simple systems before we can hope to leapfrog to the big stuff.
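The moth-to-light reflex loop can be sketched as a Braitenberg-style vehicle (my own illustration, not Tilden's actual analog circuits; the sensor geometry, gains, and function names are all made up): each wheel is driven directly by the opposite side's light sensor, so the bot steers toward the light with no symbolic map of its surroundings at all.

```python
# Braitenberg-style "moth to light" reflex: cross-wired sensors drive
# differential wheels, producing goal-seeking without planning.
import math

def light_intensity(sensor_pos, light_pos):
    # Inverse-square-ish falloff with distance.
    d2 = (sensor_pos[0] - light_pos[0])**2 + (sensor_pos[1] - light_pos[1])**2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1, gain=4.0, base=0.5):
    # Two sensors mounted at +/- 45 degrees from the heading.
    left = (x + math.cos(heading + 0.785), y + math.sin(heading + 0.785))
    right = (x + math.cos(heading - 0.785), y + math.sin(heading - 0.785))
    # Cross-wired reflex: left sensor drives right wheel and vice versa,
    # so the brighter side pulls the bot toward itself.
    right_wheel = base + gain * light_intensity(left, light)
    left_wheel = base + gain * light_intensity(right, light)
    heading += (right_wheel - left_wheel) * dt   # differential steering
    speed = (left_wheel + right_wheel) / 2.0
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# One step with the light off to the bot's left: the heading immediately
# swings left, toward the light, purely from the reflex wiring.
_, _, h = step(0.0, 0.0, 0.0, light=(0.0, 5.0))
print(h > 0)  # prints True
```

The point matches the comment: there is no path planner and no world model here, just a sensory gradient feeding a motor loop, which is roughly all an insect-level system needs.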

To dispute my own point, though, I feel it's fair to say that the "simple" systems of the animal brain are already being modeled [newscientist.com] to the point that prostheses for the brain might just be within reach. The success of an artificial hippocampus will prove that modeling the brain isn't necessarily understanding the brain, but it might be easier to learn the systems from our artificial models than from the real ones.
  • by StuartFreeman (624419) on Tuesday April 29, 2003 @08:50AM (#5833003) Homepage
    Johnny five is alive!
  • by Yoda2 (522522) on Tuesday April 29, 2003 @09:11AM (#5833134)
    Experience-Based Language Acquisition (EBLA) is an open source software system written in Java that enables a computer to learn simple language from scratch based on visual perception. It is the first "grounded" language system capable of learning both nouns and verbs. Moreover, once EBLA has established a vocabulary, it can perform basic scene analysis to generate descriptions of novel videos.

    A more detailed summary is available here [osforge.com] and this [greatmindsworking.com] is the project web site.

    Compared to proprietary systems such as Ai's HAL [bbc.co.uk], Meaningful Machines Knowledge Engine [prnewswire.com], and Lobal Technologies LAD [silicon.com], EBLA is the only system to incorporate grounded/perceptual understanding of language.

  • by truthsearch (249536) on Tuesday April 29, 2003 @09:15AM (#5833152) Homepage Journal
    While this is all very interesting and becoming more practical for everyday use, we don't hear enough about the stuff that's related but not quite bleeding edge. We know there are people trying to create intelligent systems such as for language understanding and intelligent web searching, but it seems we don't hear much about them. I'm wondering if it's because most of that is being done within corporations while much of this bleeding edge research is done by universities.
"My experience with cogsci is that it's really about understanding thought, not about making machines. I think it really depends on where you are. If you're at MIT, it's probably machines." Ahem, not at all. Using MIT as an example, the Dept. of Brain and Cognitive Sciences is more concerned with human cognition. Granted, there are some people who are more in the AI area, but many are not (I was one of them). Most schools that have an official cognitive science dept. or program mostly have cognitive psycho
  • Not a great read (Score:5, Informative)

    by Illserve (56215) on Tuesday April 29, 2003 @11:12AM (#5834191)
I was disappointed by the 5 articles I read, and stopped reading. It basically reads like a catalog of projects and techno-terms, with very little actual content.

    Basically each one boiled down to: our lab does the XYZAB project and we're studying this system.

I don't see what this has to do with mathematics; the algorithms are all heuristic. My experience is that such things tend not to work outside a narrowly defined environment. I think it's a shame that good engineering work in robotics gets ignored because it doesn't look as sexy to the public as this stuff. Like rigid-body contact: it's really hard to have a robot pick something up. Now there is actual mathematics.
  • AI has yet to even define itself, and hence, is troubled by the most fundamental disagreements about just exactly what its questions are, leaving doubt about whether the answers could even be found given the state of the art.

    All I ever hear about when folks brag about "advances in AI" are things like some new algorithm which can interpret some form of input which it previously could not, or new theories of machine learning, etc.

    No one yet has effectively defined the mechanics which make up "the mind". Fol
