MIT Finds 'Grand Unified Theory of AI'

aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"
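To make the "tell it a few things and infer the consequences" idea concrete, here is a minimal sketch of probabilistic inference by rejection sampling. It is not the MIT/Church system (Church embeds this kind of inference in Scheme); the model, the variable names, and the numbers are invented purely for illustration.

```python
import random

# A toy generative model: state a few probabilistic facts, then let
# sampling work out their consequences. Hypothetical numbers throughout;
# this is not the Church system, just a hand-rolled illustration.

def sample_person():
    has_flu = random.random() < 0.05                        # flu is fairly rare
    fever = random.random() < (0.80 if has_flu else 0.10)   # flu usually brings fever
    return has_flu, fever

def p_flu_given_fever(samples=200_000):
    """Estimate P(flu | fever) by rejection sampling: keep only samples
    that match the observed evidence (fever) and count flu among them."""
    flu_count = fever_count = 0
    for _ in range(samples):
        has_flu, fever = sample_person()
        if fever:
            fever_count += 1
            flu_count += has_flu
    return flu_count / fever_count

# New information (a fever) pushes P(flu) from the 0.05 prior to roughly 0.30.
print(p_flu_given_fever())
```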
  • Endless vs. infinite (Score:2, Interesting)

    by MarkoNo5 ( 139955 ) <MarkovanDooren@@@gmail...com> on Wednesday March 31, 2010 @12:04PM (#31688842)
    What is the difference between an endless task and an infinite task?
  • by HungryHobo ( 1314109 ) on Wednesday March 31, 2010 @12:08PM (#31688904)

    The comments on TFA are a bit depressing though...

    axilmar - You MIT guys don't realize how simple AI is. 2010-03-31 04:57:47
    Until you MIT guys realize how simple the AI problem is, you'll never solve it.

    AI is simply pattern matching. There is nothing else to it. There are no mathematics behind it, or languages, or anything else.

    You'd think people who are so certain that AI is easy would be making millions selling AIs to big business, but no....

    I'd be interested if this approach to AI allows for any new approaches to strategy.

  • by xtracto ( 837672 ) on Wednesday March 31, 2010 @12:09PM (#31688916) Journal

    This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.

    I would be interested to know how this approach compares to BN and to the Transferable Belief Model (or Dempster–Shafer theory [wikipedia.org]), which itself addresses some shortcomings of BN.
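For comparison, here is what the "evidence in, consequences out" loop looks like in a toy Bayesian network with the classic rain/sprinkler/wet-grass structure, using brute-force enumeration over the joint distribution. The network and its probabilities are invented for illustration; this is neither the Church system nor a Dempster-Shafer implementation.

```python
from itertools import product

# Toy Bayesian network: Rain -> WetGrass <- Sprinkler.
# All numbers are made up for illustration.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(wet grass | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.01,
}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) from the factored model."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1.0 - p_w)

def posterior_rain(wet_observed):
    """P(rain | wet) by brute-force enumeration over the joint."""
    numerator = sum(joint(True, s, wet_observed) for s in (True, False))
    denominator = sum(joint(r, s, wet_observed)
                      for r, s in product((True, False), repeat=2))
    return numerator / denominator

print(P_rain[True])            # prior belief in rain: 0.20
print(posterior_rain(True))    # after the evidence "grass is wet": about 0.72
```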

  • Re:The real summary (Score:3, Interesting)

    by JerryLove ( 1158461 ) on Wednesday March 31, 2010 @12:36PM (#31689292)

    Helicopters can fly, but not because they have wings.

    The license you get that allows you to pilot a helicopter is for "rotary wing aircraft".

    Those blades are indeed wings (to the same extent wings on a plane are).

    Not that this is related to the actual topic at all.

  • by kikito ( 971480 ) on Wednesday March 31, 2010 @12:53PM (#31689534) Homepage

    This is just a library for Scheme. It does the same things that have been done before. In Scheme.

    Move along.

  • Re:Interesting Idea (Score:3, Interesting)

    by eldavojohn ( 898314 ) * <eldavojohn.gmail@com> on Wednesday March 31, 2010 @01:01PM (#31689644) Journal
    Everyone is exhibiting the very thing I was talking about, which is that they have a more complete rule base, so they get to come up with great, seemingly "common logic" defenses for the AI.

    In an example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person who can see that you've just given me an example where the program magically drums up the rule, or probability-based rule, that "if something weighs almost 200 lbs it probably cannot fly"? Does that rule apply only to birds, or to things in general? Does the probability get affected by the thing being a bird, a plane, or a piece of granite? Each of those needs to be defined, either through observation or by axiom!

    If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?

    Hey, I'm not saying I or the AI would believe that one way or the other. All I'm saying is that you have to explicitly code or develop a way so that it can give you an answer one way or the other. I don't care if it's a stated rule, an a priori probability or a little of both! It still needs to be developed!

    To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.

    Well then you've already set your sights far below the Turing Test.
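The parent's complaint can be made concrete in a couple of lines: the "heavy things probably don't fly" behaviour only appears if someone actually writes that dependency down, whether as a rule or as a conditional probability. The thresholds and numbers below are invented for illustration.

```python
# The weight/flight dependency has to be stated somewhere; it doesn't
# emerge by magic. Numbers and the 40 lb cutoff are invented.

def p_flies(is_bird, weight_lbs):
    """Probability of flight from two explicitly encoded axioms."""
    base = 0.90 if is_bird else 0.01    # axiom 1: most birds fly
    if weight_lbs > 40:                 # axiom 2: heavy things rarely fly
        base *= 0.02
    return base

print(p_flies(is_bird=True, weight_lbs=3))     # a typical bird: 0.9
print(p_flies(is_bird=True, weight_lbs=200))   # cassowary-sized: 0.018
```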

  • by technofix ( 588726 ) on Wednesday March 31, 2010 @01:12PM (#31689792)

    Actually, axilmar hit the nail on the head. There's more than one nail here, but that's not bad at all. The next nail is "Which patterns are *salient*?" This is the billion-dollar question in AI. We hit *that* nail around 2003. In fact, we're several nails further along....

    I'm part of the crowd that thinks AI is much simpler than most people think. It's still not trivial.

    But there's a *big* difference between a project to "tell the computer everything about the world in first order predicate calculus" and one for "figuring out how learning in the brain might work, implementing that in a computer to test it, figuring out what might be wrong, and repeating the process until we have something that is capable of learning anything we tell it, roughly the way humans do".

    The first approach is doomed to fail for reasons explained on my website below, including the simple reason that the everyday mundane world is more complex than we think. Any ontology- or semantic-web-based project is thus doomed to fail.

    The latter is "only" hampered by the fact that we haven't tried it yet. Attacking it from the
    neuroscience angle is one way, but it's actually *easier* to attack it from the Epistemology
    angle. "How is it possible to learn *anything*? What is it possible to learn at all? How *might*
    the brain go about doing what it does? How could we duplicate it in a computer to see if
    the theory is correct?" Repeat until we succeed.

    A million man-years have been wasted on 20th-century-style AI. We have so far put 10 person-years into 21st-century AI. To wit:

    I (and my company) have been working on our idea of how this is supposed to be done since 2001, and though we have some interesting results and many insights, we haven't been able to demonstrate effects that are stronger than what you can do with regular programming. We have good benchmarks, but we're currently at 80-85% on tasks where regular programming can do 95% and humans 99.99%, though we're slowly improving. And as opposed to *many* AI projects, we are writing code, running experiments daily (and overnight), have built our own extra-large computers (32 GB RAM Linux systems), etc. We are attempting to learn human languages (any language) through unsupervised training, simply by reading books (Jane Austen, in our case). We have good semantic-level reading-comprehension tests that can be completely automated and that work at *very low levels of IQ and reading comprehension*.

    We've funded all of this work ourselves and hope to leverage this effort (once we get it to work :-) into a market-leading position in various semantic technologies, including web-search support technologies, true semantic search, and superior speech understanding. Ask me for a Use Cases and Markets document.

    When comparing the ideas in my company to those of almost all other AI research, *including TFA*, I'd like to think that *we* at least got the Most Significant Bit correct. And we feel sad that most people entering the field of AI today are being taught the wrong things, perpetuating the old myths and mistakes and thereby guaranteeing we won't get decent AI any time soon.

    I have an unpublished article, which I'm trying to get into some mainstream magazine, at http://syntience.com/AIResearchInThe21stCentury.pdf [syntience.com] - feel free to peek at it. It's not a direct response to the MIT article, but it argues a different angle and aims at roughly the same audience (you!).

    If you want more info beyond that, then check out our other online resources:

    Theory and motivational site (2 years old): http://artificial-intuition.com/ [artificial-intuition.com]
    Video site (latest insights, more detailed info): http://videos.syntience.com/ [syntience.com] (or go to Vimeo.com and search for "syntience"). Axilmar will enjoy the "Models vs. Patterns" video.
    Blog:

  • Interesting timing (Score:3, Interesting)

    by BlueBoxSW.com ( 745855 ) on Wednesday March 31, 2010 @01:17PM (#31689860) Homepage

    I've always enjoyed reading about AI, and like many here have done some experiments on my own time.

    This week I've been looking for a simple state modeling language, for use in fairly simple processes, that would tie into some AI.

    I wasn't really that impressed with anything I found, so when I saw the headline, I jumped to read the article.

    Unfortunately, while this is a step in the right direction, it's not all that clear to write or maintain, and it's probably too complex for what I need to do.

    The cleanest model I've found for doing these types of things is the 1896 edition of Lewis Carroll's Symbolic Logic. (Yes, the same Lewis Carroll who wrote Alice in Wonderland.)
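For the curious, Carroll-style syllogism chains reduce naturally to a few forward-chaining rules. Below is a minimal sketch based on his well-known "babies cannot manage crocodiles" sorites; the encoding and the predicate names are my own.

```python
# Tiny forward-chaining sketch of a Carroll-style sorites.
rules = [
    ({"is_baby"}, "is_illogical"),       # "Babies are illogical."
    ({"is_illogical"}, "is_despised"),   # "Illogical persons are despised."
    # The third premise, "Nobody is despised who can manage a crocodile,"
    # would be applied contrapositively to conclude a baby can't manage one.
]

def derive(facts):
    """Apply rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(derive({"is_baby"}))  # {'is_baby', 'is_illogical', 'is_despised'}
```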

  • by hey ( 83763 ) on Wednesday March 31, 2010 @03:09PM (#31691538) Journal

    I have a child. When I watch her learn, it's totally rules-based. Also, very importantly, when she is told that her existing rules don't quite describe reality, she is quick to make a new exception (rule). Since she's young, her mind is flexible, and she doesn't get angry when it's necessary to make an exception. The new rule stands until a new exception comes up.

    E.g., in English she wrote "there toy", since she wasn't familiar with the other "there"s. She was corrected to "their toy". But of course, there is still "they're".
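The "default rule plus exceptions" learning story in this comment maps neatly onto code: start with one default and patch in exceptions as corrections arrive. The lookup-by-next-word heuristic below is invented and far too crude for real spelling correction; it is only meant to mirror the there/their/they're anecdote.

```python
# Default rule with learned exceptions, mirroring the there/their/they're
# anecdote. The next-word lookup is a made-up illustration.

def choose_homophone(next_word, exceptions):
    """Use the default spelling unless a correction has created an exception."""
    return exceptions.get(next_word, "there")   # default rule: write "there"

exceptions = {}
print(choose_homophone("toy", exceptions))      # "there" (the original mistake)

exceptions["toy"] = "their"                     # correction: "their toy"
print(choose_homophone("toy", exceptions))      # "their"

exceptions["going"] = "they're"                 # later correction: "they're going"
print(choose_homophone("going", exceptions))    # "they're"
print(choose_homophone("once", exceptions))     # "there" (default still stands)
```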

  • by Mashdar ( 876825 ) on Wednesday March 31, 2010 @03:13PM (#31691596)
    Expert systems hardly seem like a waste ;) And what you are proposing sounds a lot like NLU (natural language understanding). NLU harks back to the good old days of LISP :) They, of course, didn't have nearly the computing power we do, but NLU is alive and well, with word-context analysis and adaptive algorithms in conversational settings. Do you somehow distinguish yourself from this? -andy
