Science

MIT Finds 'Grand Unified Theory of AI'

aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"
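For readers who haven't met Church-style probabilistic programming: the "few things" in Chater's quote are the pieces of a generative model, and the "churning out" is inference by conditioning on evidence. Below is a minimal sketch of that pattern in Python using rejection sampling; Church itself is a Scheme dialect, and the variables and numbers here are invented purely for illustration.

    import random

    # Toy generative model: describe how facts are produced, then
    # condition on what we were told and read off the consequences.
    def model():
        # Hypothetical priors, invented for illustration.
        is_bird = random.random() < 0.5
        # Most birds fly; most non-birds don't.
        flies = random.random() < (0.9 if is_bird else 0.05)
        # Flying things tend to have hollow bones.
        hollow_bones = random.random() < (0.8 if flies else 0.1)
        return is_bird, flies, hollow_bones

    # Rejection sampling: keep only the sampled worlds consistent with
    # the one stated fact ("it's a bird"), then query a consequence
    # that was never stated explicitly.
    samples = [model() for _ in range(100_000)]
    birds = [s for s in samples if s[0]]
    p_hollow = sum(s[2] for s in birds) / len(birds)
    print(f"P(hollow bones | bird) ~= {p_hollow:.2f}")

Nobody told this system anything about birds and bones directly; the dependency structure plus probability theory produced the inference.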
  • Interesting Idea (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Wednesday March 31, 2010 @12:06PM (#31688882) Journal
    But it's from 2008 [google.com]. In addition to that, it faces some of the same problems as the other two models. Their example:

    Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

    But you just introduced a bunch of rules I didn't know were in your system: that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, so we need to know that animals over 200 lbs rarely have the ability to fly. Unless the cassowary is an extinct dinosaur, in which case there might have been a flying one ... again, creativity and human analysis present quite the barrier to AI. (A back-of-the-envelope version of this kind of updating appears right after this comment.)

    Chater cautions that, while Church programs perform well on such targeted tasks, they’re currently too computationally intensive to serve as general-purpose mind simulators. “It’s a serious issue if you’re going to wheel it out to solve every problem under the sun,” Chater says. “But it’s just been built, and these things are always very poorly optimized when they’ve just been built.” And Chater emphasizes that getting the system to work at all is an achievement in itself: “It’s the kind of thing that somebody might produce as a theoretical suggestion, and you’d think, ‘Wow, that’s fantastically clever, but I’m sure you’ll never make it run, really.’ And the miracle is that it does run, and it works.”

    That sounds familiar ... in both rule-based and probability-based AI, they say you need a large rule corpus or many accurately precomputed probabilities to make the system work. The problem is that you never scratch the surface of a human mind's lifetime of experience. And this new method, I suspect, is similarly stunted.

    I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
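The parent's objection is easier to see with numbers. Here is a back-of-the-envelope version of TFA's cassowary example as a single Bayes'-rule update; the likelihoods are invented, which is exactly the point: somebody has to put them into the system.

    # Told it's a bird: assume a prior that most birds fly.
    p_flies = 0.90

    # New fact: it weighs ~200 lbs. Invented likelihoods encoding the
    # hidden rule "animals that heavy almost never fly":
    p_heavy_given_flies = 0.001
    p_heavy_given_not_flies = 0.30

    # Bayes' rule: P(flies | heavy) is proportional to
    # P(heavy | flies) * P(flies).
    num = p_heavy_given_flies * p_flies
    den = num + p_heavy_given_not_flies * (1 - p_flies)
    p_flies_given_heavy = num / den
    print(f"P(flies | bird, ~200 lbs) ~= {p_flies_given_heavy:.3f}")  # ~0.029

The posterior drops from 0.90 to about 0.03, matching TFA's "probably can't fly" conclusion, but only because the two likelihoods above were put into the system by hand.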

  • by Lord Grey ( 463613 ) * on Wednesday March 31, 2010 @12:08PM (#31688902)
    1) New rule: "Colorless green ideas sleep furiously."
    2) ...
    3) Profit!
  • by linhares ( 1241614 ) on Wednesday March 31, 2010 @12:09PM (#31688924)
    HYPE. More grand unified hype. The "grand unified theory" is just a mashup of old-school rule-and-inference engines thrown in with probabilistic models. Hyperbole at its finest, to call it a grand unified theory of AI. Where are connotations and framing effects? How does working short-term memory interact with LTM, and how does Miller's magic number show up? How can the system understand that "john is a wolf with the ladies" without thinking that john is hairy and likes to bark at the moon? I could go on, but feel free to fill in the blanks. So long and thanks for all the fish, MIT.
  • by zero_out ( 1705074 ) on Wednesday March 31, 2010 @12:10PM (#31688930)

    My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

    An infinite task is one that, at any point in time, has no bounds. An infinite task cannot "grow," since growing would require some finite state that then becomes larger.

  • Terrible Summary (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 31, 2010 @12:10PM (#31688938)

    When you use the phrase "Grand Unified Theory," you'd better have something impressive to show me.

  • by Anonymous Coward on Wednesday March 31, 2010 @12:22PM (#31689100)

    You'd think people who were so certain that AI is easy would be making millions selling AIs to big business, but no....

    Come now, when The Man is out there suppressing all knowledge of your discovery, you have to spend all your time on blogs and news commentary sites explaining how stupid The Academic Community is. Even though you could build The Ultimate AI in QBasic in maybe 5 minutes (tops) if you really wanted to, you just don't have the energy for it after a long day of blog-ranting.

    Side note: I like how pattern matching doesn't involve math in that commenter's universe.

  • by viking099 ( 70446 ) on Wednesday March 31, 2010 @12:25PM (#31689156)

    My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

    Much like copyright terms then, I guess?

  • by ericvids ( 227598 ) on Wednesday March 31, 2010 @12:26PM (#31689168)

    The way the author wrote the article, it seems like nothing different from an expert system straight from the '70s, e.g. MYCIN. That one also used probabilities and rules; the difference is that it diagnosed illnesses, but the approach can be extended to almost anything.

    Probably the only contribution is a new language. Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable on Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct). (A sketch of MYCIN's certainty-factor arithmetic appears after this comment.)

    AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims. Just look up AAAI and its many conferences and subconferences. I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many. That's FOUR branches removed.
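For comparison with the parent's MYCIN reference: MYCIN attached certainty factors (CFs) in [-1, 1] to rules and combined them with a fixed formula rather than with probability theory proper, which is part of what distinguishes it from a Church-style model. A rough sketch of that combination rule; the example rule strengths are invented:

    def combine_cf(cf1: float, cf2: float) -> float:
        """Combine two MYCIN-style certainty factors in [-1, 1]."""
        if cf1 >= 0 and cf2 >= 0:               # both confirm
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:                 # both disconfirm
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two invented rules, each partially supporting the same diagnosis:
    print(combine_cf(0.6, 0.4))   # 0.76: evidence accumulates toward 1.0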

  • by Fnkmaster ( 89084 ) on Wednesday March 31, 2010 @12:30PM (#31689222)

    AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly. Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.

    These days, I'm not sure if AI is even that. But maybe some of this stuff will prove to be useful. You just have to put on your hype filter whenever "AI" is involved.

  • by digitaldrunkenmonk ( 1778496 ) on Wednesday March 31, 2010 @12:31PM (#31689236)

    The first time I saw an airplane, I didn't think the damn thing could fly. I mean, hell, look at it! It's huge! By the same token, how can a ship float? Before I took some basic physics, it was impossible in my mind, yet it occurred. An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind. If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky.

    If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?

    Humans have a problem dealing with that. Heavy things fall. Heavy things sink. To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.

  • by aaaaaaargh! ( 1150173 ) on Wednesday March 31, 2010 @12:47PM (#31689440)

    Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-promotion. For a start, (1) uncertain reasoning, and expert systems using it, is hardly new; this is a well-established research domain and certainly not the holy grail of AI. Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured. And for this reason, (3) albeit a useful tool and an important research area, probabilistic reasoning and uncertain inference are definitely not the basis of human reasoning. The way we draw inferences is much more heuristic, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate. (One of them, for example, is that untrained humans are incapable of correctly making even the simplest calculations in probability theory, because it is harder than it might seem at first glance.) Finally, (4) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, through different choices that all seem somehow rational (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected, alternative decision theories, different rules for dealing with conflicting evidence, etc.), to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.). (A quick illustration of the combinatorial blow-up in point (2) appears after this comment.)

    In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.
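The intractability in the parent's point (2) is easy to quantify: a full joint distribution over n binary variables needs 2^n - 1 parameters unless independence assumptions let it factor, in which case n marginals suffice. A quick illustration:

    # Parameters needed for a joint distribution over n binary variables.
    for n in (10, 20, 30, 40):
        full_joint = 2**n - 1   # one probability per joint outcome, minus one
        independent = n         # one marginal probability per variable
        print(f"n={n:2d}  full joint: {full_joint:>17,}  independent: {independent}")

Real domains sit between the two extremes, which is why conditional-independence structure (and the cost of inference when it is absent) dominates this field.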

  • by nomadic ( 141991 ) <`nomadicworld' `at' `gmail.com'> on Wednesday March 31, 2010 @01:09PM (#31689760) Homepage
    Hey, but it's MIT!! It's freaking cool!!!

    Yes, the world leaders in failing at AI. "In from three to eight years we will have a machine with the general intelligence of an average human being." -- Marvin Minsky, 1970.
  • by Animats ( 122034 ) on Wednesday March 31, 2010 @01:12PM (#31689786) Homepage

    This is embarrassing. MIT needs to get their PR department under control. They're inflating small advances into major breakthroughs. That's bad for MIT's reputation. When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.

    Stanford and CMU seem to generate more results and less hype.

  • by astar ( 203020 ) <max.stalnaker@gmail.com> on Wednesday March 31, 2010 @01:17PM (#31689858) Homepage

    A common treatment of the universe is that it is finite but unbounded.

  • by thedonger ( 1317951 ) on Wednesday March 31, 2010 @02:02PM (#31690508)
    Maybe the real problem is that we haven't given a program/computer a reason to learn. We (and the rest of the animal kingdom) are motivated by necessity. Learning = survival. It is part of the design.
  • by Ksevio ( 865461 ) on Wednesday March 31, 2010 @02:35PM (#31691048) Homepage
    Do a search for articles with MIT in the title and you'll find that's a pretty common story here.
  • by kdemetter ( 965669 ) on Wednesday March 31, 2010 @02:42PM (#31691128)

    I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

    I admit "MIT Finds Theory of AI" does sound a lot less interesting, though it's probably closer to the truth.

  • Re:Basically... (Score:3, Insightful)

    by Chris Burke ( 6130 ) on Wednesday March 31, 2010 @02:46PM (#31691216) Homepage

    Pretty much.

    The pragmatic answer to the Chinese Room [wikipedia.org] problem is "Who gives a fuck? There's no way to prove that our own brains aren't basically Chinese Rooms, so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works, why does it matter?"

    But really, identifying patterns, and then inferring further information from the rules those patterns imply, is a pretty good behavior.

  • by nebosuke ( 1012041 ) on Wednesday March 31, 2010 @03:02PM (#31691442)

    On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    --Charles Babbage

  • Re:Basically... (Score:3, Insightful)

    by Raffaello ( 230287 ) on Wednesday March 31, 2010 @07:50PM (#31695352)

    The pragmatic answer to the Chinese Room is that the non-Chinese-speaking person in the room, in combination with the book of algorithmic instructions, considered together as a system, does understand Chinese.

    Searle's mistake is an identity error: the failure to see that a computer with no software is not the same identity as a computer with software loaded into it. The latter quite possibly could understand Chinese (or some other domain), while the former most definitely does not.
