MIT Finds 'Grand Unified Theory of AI'

aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"
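
To make the "probabilistic calculation" idea concrete, here is a rough sketch of the approach in plain Python rather than in Church itself (Church is a Scheme-derived language, and the species, attributes, and probabilities below are invented purely for illustration): define a small generative model, then condition on new information by rejection sampling and read off the inferred probabilities.

import random

def generative_model():
    """Invented toy world: sample a creature and a few attributes."""
    species = random.choice(["sparrow", "penguin", "ostrich", "bat"])
    is_bird = species != "bat"
    heavy = species in ("penguin", "ostrich")
    # Flight depends (noisily) on being a bird and not being heavy.
    flies = random.random() < (0.95 if is_bird and not heavy else 0.05)
    return {"species": species, "is_bird": is_bird, "heavy": heavy, "flies": flies}

def infer(condition, query, samples=100_000):
    """Crude rejection sampling: estimate P(query | condition)."""
    kept = [s for s in (generative_model() for _ in range(samples)) if condition(s)]
    return sum(query(s) for s in kept) / len(kept)

# "Just tell me a few things" -- condition on new information, read off consequences.
print(infer(lambda s: s["is_bird"], lambda s: s["flies"]))                 # most birds fly
print(infer(lambda s: s["is_bird"] and s["heavy"], lambda s: s["flies"]))  # heavy birds mostly don't

The real Church language expresses this generative-model-plus-conditioning pattern directly, with far more sophisticated inference than rejection sampling; the sketch above only mimics the flavor.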


  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday March 31, 2010 @11:02AM (#31688818)

    Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before.

    • by HungryHobo ( 1314109 ) on Wednesday March 31, 2010 @11:08AM (#31688904)

      The comments on TFA are a bit depressing though...

      axilmar - You MIT guys don't realize how simple AI is. 2010-03-31 04:57:47
      Until you MIT guys realize how simple the AI problem is, you'll never solve it.

      AI is simply pattern matching. There is nothing else to it. There are no mathematics behind it, or languages, or anything else.

      You'd think people who are so certain that AI is easy would be making millions selling AIs to big business, but no....

      I'd be interested if this approach to AI allows for any new approaches to strategy.

      • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday March 31, 2010 @11:19AM (#31689062)

        Why do you think you'd be interested if this approach to AI allows for any new approaches to strategy?

      • Re: (Score:3, Interesting)

        by technofix ( 588726 )

        Actually, axilmar hit the nail on the head. There's more than one nail here, but that's not bad at all.
        The next nail is "What patterns are *salient*?" This is the billion-dollar question in AI.
        We hit *that* nail around 2003. In fact, we're several nails further along....

        I'm part of the crowd that thinks AI is much simpler than most people think. It's still not trivial.

        But there's a *big* difference between a project to "tell the computer everything about the world
        in first order predicate calculus" and "Figuring out

        • Re: (Score:3, Insightful)

          by thedonger ( 1317951 )
          Maybe the real problem is that we haven't given a program/computer a reason to learn. We (and the rest of the animal kingdom) are motivated by necessity. Learning = survival. It is part of the design.
        • Re: (Score:3, Interesting)

          by Mashdar ( 876825 )
          Expert systems hardly seem like a waste ;) And what you are proposing sounds a lot like NLU (natural language understanding). NLU hearkens back to the good old days of LISP :) They, of course, did not have nearly the computing power, but NLU is alive and well, with word-context analysis and adaptive algorithms in conversational settings. Do you somehow distinguish yourself from this? -andy
      • It looks like Church is a little Fuzzy, maybe.
      • It's human intelligence that I'm unsure about.

    • Re: (Score:2, Funny)

      Possibly some problems in your childhood are related to this.
  • Endless vs. infinite (Score:2, Interesting)

    by MarkoNo5 ( 139955 )
    What is the difference between an endless task and an infinite task?
    • Re: (Score:3, Funny)

      by Bat Dude ( 1449125 )
      Simple: an endless task never ends, but an infinite task! the end is just not in sight :)
    • by zero_out ( 1705074 ) on Wednesday March 31, 2010 @11:10AM (#31688930)

      My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

      An infinite task is one that, at any point in time, has no bounds. An infinite task cannot "grow" since it would need a finite state to then become larger than it.

      • by viking099 ( 70446 ) on Wednesday March 31, 2010 @11:25AM (#31689156)

        My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

        Much like copyright terms then, I guess?

      • Re: (Score:3, Insightful)

        by astar ( 203020 )

        A common treatment of the universe is that it's finite but unbounded.

        • Re: (Score:3, Informative)

          by Khashishi ( 775369 )

          yet, there's scant evidence that the universe is finite. Only WMAP quadrupole data... one number... suggests such a thing.

      • Re: (Score:3, Informative)

        by jd ( 1658 )

        There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.

    • by geekoid ( 135745 )

      An endless task is just the same thing over and over again. An infinite task goes on because of changes in variables and growing experience.

      So you can just write down a list of things and say 'go through this list', but if the list changes because you are working on the list, then it's infinite.

      At least that's how it reads in the context he used it.

    • by Monkeedude1212 ( 1560403 ) on Wednesday March 31, 2010 @11:28AM (#31689202) Journal

      Simple. One doesn't end and the other goes on forever.

  • by Bat Dude ( 1449125 ) on Wednesday March 31, 2010 @11:04AM (#31688850)
    Sounds a bit like a journalist's brain to me ... NO NO let me make up the rest of the Story
  • Interesting Idea (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@gm a i l . com> on Wednesday March 31, 2010 @11:06AM (#31688882) Journal
    But from 2008 [google.com]. In addition to that, it faces problems similar to the other two models. Their example:

    Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

    But you just induced a bunch of rules I didn't know were in your system. That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability to fly. Unless the cassowary is an extinct dinosaur, in which case there might have been one ... again, creativity and human analysis present quite the barrier to AI.

    Chater cautions that, while Church programs perform well on such targeted tasks, they’re currently too computationally intensive to serve as general-purpose mind simulators. “It’s a serious issue if you’re going to wheel it out to solve every problem under the sun,” Chater says. “But it’s just been built, and these things are always very poorly optimized when they’ve just been built.” And Chater emphasizes that getting the system to work at all is an achievement in itself: “It’s the kind of thing that somebody might produce as a theoretical suggestion, and you’d think, ‘Wow, that’s fantastically clever, but I’m sure you’ll never make it run, really.’ And the miracle is that it does run, and it works.”

    That sounds familiar ... in both the rule-based and probability-based AI, they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work. The problem is that you never scratch the surface of a human mind's lifetime of experience, though. And Goodman's method, I suspect, is similarly stunted.

    I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

    • Re: (Score:2, Informative)

      by Anonymous Coward
      My broken url should read: http://www.mit.edu/~ndg/papers/churchUAI08_rev2.pdf [mit.edu]
      Google quick view didn't work for some reason.
    • Re: (Score:3, Informative)

      by geekoid ( 135745 )

      "That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability of flight

      what? He specifically stated birds. Not Animals, or inanimate objects.

      It looks like this system can change as it is used, effectivly creating a 'lifetime' experience.

      This is very promising. In fact, it may be the first step in creating primitive house hold AI.

      OR robotic systems used in manufacturing able to adjust the process as it goes. Using i

      • by Chris Burke ( 6130 ) on Wednesday March 31, 2010 @12:09PM (#31689756) Homepage

        What? He specifically stated birds. Not animals, or inanimate objects.

        What if I tell it that a 747 is a bird?

        This is very promising. In fact, it may be the first step in creating primitive household AI.

        Very, very promising indeed.

        Now, I can mess with the AI's mind by feeding it false information, instead of messing with my child's mind. I was worried that I wouldn't be able to stop myself (because it's so fun), despite the negative consequences for the kid. But now that I have an AI to screw with, my child can grow up healthy and well adjusted!

        BTW, when the robot revolution comes, it's probably my fault.

      • My point was that he gave an example and the system conveniently already knew that when you assign something a weight of 200 lbs then it should reduce your assumption that it flies. He didn't say by how much or how this information was ever gleaned, just that it was conveniently there and adjusted the answer in the right direction!

        A cassowary is a thing and an animal and a bird. Sometimes people call airplanes 'birds.' So if you learned blindly from literature, you could run into all sorts of proble
    • Re: (Score:3, Funny)

      by psnyder ( 1326089 )

      "...these things are always very poorly optimized when they’ve just been built."

      XKCD #720 [xkcd.com]

    • by digitaldrunkenmonk ( 1778496 ) on Wednesday March 31, 2010 @11:31AM (#31689236)

      The first time I saw an airplane, I didn't think the damn thing could fly. I mean, hell, look at it! It's huge! By the same token, how can a ship float? Before I took some basic physics, it was impossible in my mind, yet it occurred. AI doesn't mean the machine comes equipped with the sum of human knowledge; it means it simulates the human mind. If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky.

      If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?

      Humans have a problem dealing with that. Heavy things fall. Heavy things sink. To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.

      • Re: (Score:3, Interesting)

        by eldavojohn ( 898314 ) *
        Everyone is exhibiting the very thing I was talking about: they have a more complete rule base, so they get to come up with great, seemingly "common logic" defenses for the AI.

        In an example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person that can see that you've just given me an example where the program magically drums up the rule or p
        • Re:Interesting Idea (Score:4, Informative)

          by Chris Burke ( 6130 ) on Wednesday March 31, 2010 @01:31PM (#31690978) Homepage

          In an example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person that can see that you've just given me an example where the program magically drums up the rule or probability based rule that "if something weighs almost 200 lbs it probably cannot fly"?

          For fucks sake, it was just an example of the kind of inferences a logical rule system can make, not a dump of the AI's knowledge and successful inference databases. I mean you might as well complain that the example given was not written in Church and ergo not understandable by the AI whatsoever.

          As the article explains, just not explicitly in the context of that example, it devises these rules from being fed information and using the probabilistic approach to figure out patterns and to infer rules, and that it does this better than other

          So in the actual version of the Cassowary problem, you would have first fed it a bunch of data about other birds, their flying capabilities, and their weights. The AI would then look at the data, and infer based on the Emu and the Ostrich that heavy birds can't fly and light birds can, unless they're the mascots of open source operating systems (that was a joke). Then you tell it about the cassowary, but not whether or not it can fly, and it infers based on its rules that the cassowary probably can't fly.

          In a sense it does "magically drum up the rule". Yes, you still have to feed it data, but the point is that you do not have to manually specify every rule, because it can infer the rules from the data, and then create further inferences from those rules, combining the abilities of a rule-based system with the pattern-recognizing power of probabilistic systems.

          So the point is it takes less training and a relatively small number of explicitly specified rules.
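
          To make that concrete with a toy example (plain Python, entirely made-up data, and nothing to do with the actual Church implementation), the sketch below "learns" the heavy-birds-probably-don't-fly pattern from a handful of example birds instead of having it hand-coded, then applies it to a cassowary it has never seen:

          import math

          # Hypothetical training data: (name, weight in lbs, can fly). Numbers are rough guesses.
          birds = [
              ("sparrow", 0.07, True), ("pigeon", 0.8, True), ("eagle", 10, True),
              ("swan", 25, True), ("emu", 90, False), ("ostrich", 230, False),
          ]

          def p_flies_given_weight(weight, data, bandwidth=1.0):
              """Estimate P(flies | weight) by weighting each known bird by how close
              its log-weight is to the query's log-weight (a crude kernel estimate)."""
              def kernel(w1, w2):
                  return math.exp(-((math.log(w1) - math.log(w2)) ** 2) / (2 * bandwidth ** 2))
              num = sum(kernel(weight, w) for _, w, flies in data if flies)
              den = sum(kernel(weight, w) for _, w, _ in data)
              return num / den

          print(p_flies_given_weight(0.1, birds))   # light, unseen bird: close to 1
          print(p_flies_given_weight(190, birds))   # ~190 lb cassowary: drops below 0.1

          The real system infers such regularities with proper probabilistic machinery rather than a kernel hack, but the point stands: the rule is drummed up from data, not typed in.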

      • by PPH ( 736903 ) on Wednesday March 31, 2010 @12:13PM (#31689804)

        The first time I saw an airplane, I didn't think the damn thing could fly.

        The first time I saw an airplane, I was just a kid. Physics and aerodynamics didn't mean much to me, so airplanes flying wasn't that much of a stretch of the imagination.

        I didn't develop the "airplanes can't fly" concept until I'd worked for Boeing for a few years.

      • You've hit the nail on the head. It seems like the human mind has to take some leaps of faith when it comes to everyday tasks. In your example, when a person gets on an airplane for the first time, there is a certain level of doubt. When it's the fifth time, your confidence is probably not based on your knowledge of aerodynamics, but simply your trust in the pilot and the folks at Boeing in addition to the first four successes. Could an abstraction of faith be a requirement for "good" AI?
      • by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday March 31, 2010 @12:54PM (#31690384) Homepage Journal

        Ships float because wood floats, and you make a ship from wood. Once you have made a ship from wood, then logically ALL ships can float. So then you can make them out of steel.
        Q.E.D.

    • "that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that."

      But as a GENERAL RULE, most things _that cannot fly on their own_ don't fly without an understanding of aerodynamics and the ability to make them fly (i.e. engines, jet fuel, understanding of lift, etc.). A 747 didn't just appear one day; it was a gradual process of testing and figuring out the principles of flight. Birds existed prior to 747s.

    • I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

      You haven't been around /. much lately then...

    • Any AI must be able to learn. A 5-year-old wouldn't know about a 747 or a flightless bird, but a 12-year-old probably would. Presumably, the AI is trying to model an intelligent, reasonably-educated adult.
    • by astar ( 203020 )

      This AI model seems to be nothing more than something out of the Hilbert and Whitehead program, really lousy as demonstrated by Gödel, but so very attractive to the common positivist.

      For a little deeper treatment, I guess Goethe on Euler is appropriate.

    • Re: (Score:3, Insightful)

      by kdemetter ( 965669 )

      I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

      I admit "MIT Finds Theory of AI" does sound a lot less interesting, though it's probably closer to the truth.

  • by Myji Humoz ( 1535565 ) on Wednesday March 31, 2010 @11:06AM (#31688888)
    Since the actual summary seems to involve a fluff-filled sound clip without anything useful, here's the rundown of the article (a rough code sketch of steps 3-5 follows the list).
    1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.
    2) It turns out that teaching AIs to infer new ideas is really freaking hard. (Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)
    3) We turned to probability based AI creation: you feed the AI a ton of data (training sets) and it can go "based on training data, most helicopters can fly."

    4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go
    "100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly"
    "Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.
    5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.

    6) ???
    7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.
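
    A very rough sketch of what steps 3-5 amount to, in Python rather than Church, with invented birds and a +1/+2 (Laplace) smoothing so the early estimates aren't a flat 0% or 100% (the real system does full probabilistic inference, not just frequency counting):

    from dataclasses import dataclass

    @dataclass
    class Bird:
        name: str
        flies: bool
        weight_to_wingspan: float  # the invented feature from step 4

    observations = []

    def p_flies(predicate=lambda b: True):
        """Estimate P(flies | predicate) from the birds observed so far."""
        relevant = [b for b in observations if predicate(b)]
        return (sum(b.flies for b in relevant) + 1) / (len(relevant) + 2)

    # Step 3/4: feed it training data and watch the estimate move.
    observations += [Bird("sparrow", True, 0.5), Bird("robin", True, 0.6)]
    print(p_flies())                                     # every observed bird flies: estimate is high
    observations += [Bird("penguin", False, 8.0)]
    print(p_flies())                                     # "oh ok, penguins can't fly": estimate drops
    print(p_flies(lambda b: b.weight_to_wingspan <= 5))  # conditioned on a low ratio: back up again
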
    • by wiredog ( 43288 )

      helicopters can... what??

      Not to be a pedant... Well, actually, yeah, it's pedantic. But helicopters do have wings, or airfoils, anyway.

    • Re:The real summary (Score:5, Informative)

      by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Wednesday March 31, 2010 @11:23AM (#31689126)

      Mostly, he or his university are just really good at overselling. There are dozens of attempts to combine something like probabilistic inference with something more like logical inference, many of which have associated languages, and it's not clear this one solves any of the problems they have any better.

      • Re:The real summary (Score:5, Informative)

        by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Wednesday March 31, 2010 @11:40AM (#31689358)

        I should add that this is interesting research from a legitimate AI researcher, not some kooky fringe AI type. I suspect his PR department may be more to blame than he is; his actual academic papers make no similarly overblown claims and provide pretty fair positioning of how his work relates to existing work.

      • You kind of have to feel sorry for the guy [mit.edu]: he just got a new job as a researcher studying human cognition, so he's under pressure to come up with something. Second, it's not an easy field to come up with something in. It's not like nutritional science, where you can easily design an experiment to measure the effects of high-vitamin-C diets and suddenly have a publishable paper, even if the result is "absolutely no effect." To add to the difficulty, the group he is with [mit.edu] seems to think that Bayesian statist
    • 4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go "100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly" "Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth. 5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.

      In my mind, you don't get to call it "AI" until, after you feed the computer information on thousands of birds and ask it whether penguins can fly, it responds, "I guess, probably. But look, I don't care about birds. What makes you think I care about birds? Tell me about that sexy printer you have over there. I'd like to plug into her USB port."

      You think I'm joking. You hope I'm joking. I'm not joking.

  • by Lord Grey ( 463613 ) * on Wednesday March 31, 2010 @11:08AM (#31688902)
    1) New rule: "Colorless green ideas sleep furiously."
    2) ...
    3) Profit!
  • This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.

    I would be interested in knowing how this approach compares to BN and the Transferable Belief Model (or Dempster–Shafer theory [wikipedia.org]), which itself addresses some shortcomings of BN.
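
    For comparison, the Bayesian-network version of the same kind of question can be sketched as a tiny inference-by-enumeration exercise (the two-node network and its numbers are invented; a real BN or Transferable Belief Model implementation is considerably more involved):

    from itertools import product

    # A two-node network, Heavy -> Flies, with made-up probabilities.
    P_HEAVY = {True: 0.1, False: 0.9}
    P_FLIES_GIVEN_HEAVY = {True: 0.05, False: 0.95}

    def joint(heavy, flies):
        p_f = P_FLIES_GIVEN_HEAVY[heavy]
        return P_HEAVY[heavy] * (p_f if flies else 1 - p_f)

    def posterior(query, evidence):
        """P(query | evidence) by brute-force enumeration over the joint distribution."""
        states = list(product([True, False], repeat=2))
        num = sum(joint(h, f) for h, f in states if evidence(h, f) and query(h, f))
        den = sum(joint(h, f) for h, f in states if evidence(h, f))
        return num / den

    # New evidence ("this bird does not fly") revises the belief that it is heavy.
    print(posterior(query=lambda h, f: h, evidence=lambda h, f: not f))  # ~0.68, up from the 0.1 prior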

  • by linhares ( 1241614 ) on Wednesday March 31, 2010 @11:09AM (#31688924)
    HYPE. More grand unified hype. The "grand unified theory" is just a mashup of old-days rules & inference engines thrown in with probabilistic models. Hyperbole at its finest, to call it a grand unified theory of AI. Where are connotations and framing effects? How does working short-term memory interact with LTM, and how does Miller's magic number show up? How can the system understand that "John is a wolf with the ladies" without thinking that John is hairy and likes to bark at the moon? I could go on, but feel free to fill in the blanks. So long and thanks for all the fish, MIT.
    • Correct me if I'm wrong, but a child would presumably understand the wolf statement literally, with hair and everything. Presumably, as the list of rules grows (just as a child learns), the A.I.'s definition of what John is would change.

      My question is, how do you expect to list all these rules when we can probably define hundreds of rules from a paragraph of information alone?

      Would it also create a very racist A.I. that tends to use stereotypes to define everything?
      Maybe until so many rules are learnt, it's ve

    • Re: (Score:3, Insightful)

      by Fnkmaster ( 89084 )

      AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly. Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.

      These days, I'm not sure if AI is even that. But maybe some of this stuff will prove to be useful. You just have to put on your hype filter whenever "AI" is involved.

    • See, that's not an AI problem, that's a semantics problem. The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from its capacity to solve problems.

      A perfect AI does not need to be omniscient, it needs to solve a problem correctly considering what it knows.

  • This looks familiar (Score:5, Informative)

    by Meditato ( 1613545 ) on Wednesday March 31, 2010 @11:23AM (#31689128)
    I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more. This is neither news nor a revolutionary discovery.
    • Re: (Score:3, Funny)

      by godrik ( 1287354 )

      Hey, but it's MIT!! It's freaking cool!!!

      My conclusion from reading MIT's stuff: "I am not sure they are better scientists than anywhere else. What I am sure about MIT is that they are freaking good at marketing!"

      • Re: (Score:2, Insightful)

        by nomadic ( 141991 )
        Hey, but it's MIT!! It's freaking cool!!!

        Yes, the world leaders in failing at AI. "In from three to eight years we will have a machine with the general intelligence of an average human being." -- Marvin Minsky, 1970.
    • Re: (Score:3, Funny)

      by Angst Badger ( 8636 )

      I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more.

      Not only that, but more recent languages support actual syntax so that the user does not have to provide the parse tree himself.

  • by ericvids ( 227598 ) on Wednesday March 31, 2010 @11:26AM (#31689168)

    The way the author wrote the article, it seems no different from an expert system straight from the '70s, e.g. MYCIN (a rough certainty-factor sketch follows at the end of this comment). That one also uses probabilities and rules; the only difference is that it diagnoses illnesses, but that can be extended to almost anything.

    Probably the only contribution is a new language, which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct).

    AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims. Just look up AAAI and its many conferences and subconferences. I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many. That's FOUR branches removed.
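
    For reference, the MYCIN-style machinery mentioned above boils down to rules plus certainty factors. Here is a stripped-down sketch in Python (the rules and numbers are invented, and MYCIN's actual certainty-factor calculus has more cases, e.g. for conflicting evidence):

    def combine_cf(cf1, cf2):
        """Combine two positive certainty factors: two weak pieces of
        supporting evidence yield a somewhat stronger belief."""
        return cf1 + cf2 * (1 - cf1)

    # Toy rule base: if all the conditions hold, conclude something with some certainty.
    RULES = [
        ({"has_wings"}, ("can_fly", 0.6)),
        ({"is_bird", "is_light"}, ("can_fly", 0.7)),
    ]

    def infer(facts):
        belief = {}
        for conditions, (conclusion, cf) in RULES:
            if conditions <= facts:  # every condition is among the known facts
                belief[conclusion] = combine_cf(belief.get(conclusion, 0.0), cf)
        return belief

    print(infer({"has_wings", "is_bird", "is_light"}))  # can_fly combines to roughly 0.88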

  • The key to Artificial Intelligence is to ignore the "intelligence" part and just think of it as Artificial Behavior.
    • Re: (Score:3, Insightful)

      by Chris Burke ( 6130 )

      Pretty much.

      The pragmatic answer to the Chinese Room [wikipedia.org] problem is "Who gives a fuck? There's no way to prove that our own brains aren't basically Chinese Rooms, so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works, why does it matter?"

      But really, identifying patterns, and then inferring further information from the rules those patterns imply, is a pretty good behavior.

      • Re: (Score:3, Insightful)

        by Raffaello ( 230287 )

        The pragmatic answer to the Chinese room is that the non-Chinese-speaking person in the room, in combination with the book of algorithmic instructions, considered together as a system, does understand Chinese.

        Searle's mistake is an identity error - the failure to see that a computer with no software is not the same identity as a computer with software loaded into it. The latter quite possibly could understand Chinese (or some other domain) while the former most definitely does not.

  • by aaaaaaargh! ( 1150173 ) on Wednesday March 31, 2010 @11:47AM (#31689440)

    Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising. For a start, (1) uncertain reasoning, and expert systems that use it, are hardly new. This is a well-established research domain and certainly not the holy grail of AI. Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured. And for this reason, (3) albeit a useful tool and important research area, probabilistic reasoning and uncertain inference are definitely not the basis of human reasoning. The way we draw inferences is much more heuristic, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate. (One of them, for example, is that untrained humans are incapable of making even the simplest calculations in probability theory correctly, because it is harder than it might seem at first glance.) Finally, (5) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, through different choices that all seem to be rational somehow (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected with each other, alternative decision theories, different rules for dealing with conflicting evidence, etc.), to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.).

    In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.
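
    Point (2) is easy to make concrete: with no independence assumptions, a joint distribution over n binary facts needs on the order of 2^n numbers, versus n if everything is independent. A back-of-the-envelope sketch:

    # Parameters needed to specify a distribution over n binary facts:
    # a full joint distribution (no independence assumptions) versus fully independent facts.
    def joint_params(n):
        return 2 ** n - 1

    def independent_params(n):
        return n

    for n in (10, 30, 100):
        print(f"{n} facts: joint = {joint_params(n):,}  independent = {independent_params(n)}")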

    • by godrik ( 1287354 )

      untrained humans are incapable of making even the simplest calculations in probability theory correctly

      And obviously does not know how to count to 4. :)

  • by kikito ( 971480 )

    This is just a library for Scheme. It does the same things that have been done before. In Scheme.

    Move along.

  • by kenp2002 ( 545495 ) on Wednesday March 31, 2010 @11:59AM (#31689628) Homepage Journal

    Again, as I bring up often with AI researchers, we as humans evolved over millions of years (or were created, doesn't matter) from simple organisms that encoded information that built up simple systems into complex systems. AI, true AI, must be grown, not created. Asking the AI "if a bat is a mammal and can fly, can a squirrel?" ignores a foundation of the development of intelligence: our brains were built to react to various inputs and store, not to store and then react.

    Ask an AI if the stove is hot. It should respond, "I don't know, where is the stove?" Instead, the AI would try to make an inference based on known data. Since there isn't any, the AI, on a probabilistic measure, would say that blah blah stoves are in use at any given time and there is a blah blah blah. A human would put their hand (a sensor) near the stove, measure the change in temperature, if any, and reply yes or no accordingly. If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort. The brain isn't wired to answer a specific question; it is wired to correlate independent inputs, draw conclusions based on the assembly and interaction of data, and infer and deduce answers.

    Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus, say, a lamp, determine the people are engaged in action (versus a lamp just sitting there), making that relevant, hear the sound coming from the people, then infer they are talking (making the link). Then, in parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION". The rest of the information is stored, and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation, then the TTL for the new threads and links should be short. When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information (that was gathering in the background) that travels with the CONVERSATION thread.

    Now the conversation appears to be about the lamp and whether it goes with the room's decor. Again, links should be built, retroactively adding the room's information into the CONVERSATION thread (again expiring irrelevant information to a short-term memory buffer), and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, this should result in the AI blurting out, "I love Lamp."

    In case you missed it, this was one long Lamp joke...

    • by astar ( 203020 )

      If I thought I could build an AI, I would start by giving a computer system control of the world's physical production. I would observe, say, electricity getting short and then see if the computer system builds a fusion reactor. The AI is not going to be Skynet.

  • by Animats ( 122034 ) on Wednesday March 31, 2010 @12:12PM (#31689786) Homepage

    This is embarrassing. MIT needs to get their PR department under control. They're inflating small advances into major breakthroughs. That's bad for MIT's reputation. When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.

    Stanford and CMU seem to generate more results and less hype.

  • FTA: "In the 1950s and 60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought. But those rules turned out to be way more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely, instead, on probabilities -- statistical patterns that computers can learn from large sets of training data."

    From the viewpoint of Jaynes and many Bayesians, probability IS simply the rules of thought.
  • Interesting timing (Score:3, Interesting)

    by BlueBoxSW.com ( 745855 ) on Wednesday March 31, 2010 @12:17PM (#31689860) Homepage

    I've always enjoyed reading about AI, and like many here have done some experiments on my own time.

    This week I've been looking for a simple state modeling language, for use in fairly simple processes, that would tie into some AI.

    I wasn't really that impressed with anything I found, so when I saw the headline, I jumped to read the article.

    This is a step in the right direction, but unfortunately it's not all that clear to write or maintain, and probably too complex for what I need to do.

    The cleanest model I've found for doing these types of things is the 1896 edition of Lewis Carroll's Symbolic Logic. (Yes, the same Lewis Carroll who wrote Alice in Wonderland.)

  • by hey ( 83763 ) on Wednesday March 31, 2010 @02:09PM (#31691538) Journal

    I have a child. When I watch her learn, it's totally rule-based. Also, very importantly, when she is told that her existing rules don't quite describe reality, she is quick to make a new exception (rule). Since she's young, her mind is flexible, and she doesn't get angry when it's necessary to make an exception. The new rule stands until a new exception comes up.

    E.g., in English she wrote "there toy" since she wasn't familiar with the other there's. She was corrected to "their toy". But of course, there is still "they're".
