MIT Finds 'Grand Unified Theory of AI' 301
aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"
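A minimal sketch of that loop in plain Python (my own toy, not Goodman's actual Church system; the model and the numbers are made up): state a few probabilistic facts as a generative model, then answer queries by rejection sampling, keeping only the samples consistent with the new information and reading the consequences off the survivors.

import random

def model():
    # the "few things" we tell the system
    is_bird = random.random() < 0.5
    flies = random.random() < (0.9 if is_bird else 0.1)
    return {"bird": is_bird, "flies": flies}

def query(predicate, condition, n=100000):
    """Estimate P(predicate | condition) by rejection sampling."""
    kept = [s for s in (model() for _ in range(n)) if condition(s)]
    return sum(predicate(s) for s in kept) / len(kept)

# New information ("it's a bird") propagates to its consequences:
print(query(lambda s: s["flies"], lambda s: s["bird"]))  # ~0.9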
That is very interesting (Score:5, Funny)
Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before.
Re:That is very interesting (Score:4, Interesting)
The comments on TFA are a bit depressing though...
axilmar - You MIT guys don't realize how simple AI is. 2010-03-31 04:57:47
Until you MIT guys realize how simple the AI problem is, you'll never solve it.
AI is simply pattern matching. There is nothing else to it. There are no mathematics behind it, or languages, or anything else.
You'd think people who were so certain that AI is easy would be making millions selling AIs to big business, but no....
I'd be interested if this approach to AI allows for any new approaches to strategy.
Re:That is very interesting (Score:5, Funny)
Why do you think you'd be interested if this approach to AI allows for any new approaches to strategy?
Re: (Score:2)
smile - thanks for that - I haven't played with Eliza in ages.
Re: (Score:3, Interesting)
Actually, axilmar hit the nail on the head. There's more than one nail here, but that's not bad at all.
The next nail is "What patterns are *salient*". This is the billion dollar question in AI.
We hit *that* nail around 2003. In fact we're several nails further along....
I'm part of the crowd that thinks AI is much simpler than most people think. It's still not trivial.
But there's a *big* difference between a project to "tell the computer everything about the world in first order predicate calculus" and "figuring out
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Re: (Score:2)
AI is easy. (Score:2)
It's human intelligence that I'm unsure about.
Re: (Score:3, Funny)
Excuse me.
The technical term is Hurd-Cylon, okay? Please use the correct term from now on.
Thanks,
Axilmar Stallman
Re: (Score:2, Funny)
Endless vs. infinite (Score:2, Interesting)
Re: (Score:3, Funny)
Re:Endless vs. infinite (Score:4, Insightful)
My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.
An infinite task is one that, at any point in time, has no bounds. An infinite task cannot "grow" since it would need a finite state to then become larger than it.
Re:Endless vs. infinite (Score:5, Insightful)
Much like copyright terms then, I guess?
Re: (Score:3, Insightful)
A common treatment of the universe is that it is finite, but unbounded.
Re: (Score:3, Informative)
Yet there's scant evidence that the universe is finite. Only WMAP quadrupole data... one number... suggests such a thing.
Re: (Score:3, Informative)
There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.
Re: (Score:3, Informative)
The number of integers is infinite, but it is a different infinity than the number of real numbers. The former is considered countable, the latter uncountable.
If you look up the proof of Fermat's Last Theorem, you'll see it was the comparison of the size of two infinite sets that allowed the proof to be completed.
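For the record, the standard statement behind the parent's first point (textbook material, not from TFA): the integers have cardinality |N| = ℵ0, and Cantor's diagonal argument gives |R| = 2^ℵ0 > ℵ0, so no list indexed by the integers can exhaust the reals.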
Re: (Score:3)
An endless task is just the same thing over and over again. An infinite task goes on because of changing variables and growing experience.
So you can just write down a list of things and say 'go through this list', but if the list changes because you are working on the list, then it's infinite.
At least that's how it reads in the context he used it.
Re:Endless vs. infinite (Score:5, Funny)
Simple. One doesn't end and the other goes on forever.
NO NO let me make up the rest of the Story (Score:3, Funny)
Interesting Idea (Score:5, Insightful)
Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.
But you just induced a bunch of rules I didn't know were in your system. That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability of flight. Unless the cassowary is an extinct dinosaur, in which case there might have been one... again, creativity and human analysis present quite the barrier to AI.
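For what it's worth, the revision TFA describes is just a Bayes' rule update; here it is for the cassowary with made-up numbers (mine, not the article's or the system's):

p_flies = 0.9              # P(flies | bird): prior from typical birds
p_heavy_if_flies = 0.02    # P(weighs ~200 lb | flies): heavy fliers are rare
p_heavy_if_not = 0.30      # P(weighs ~200 lb | doesn't fly)

evidence = p_heavy_if_flies * p_flies + p_heavy_if_not * (1 - p_flies)
posterior = p_heavy_if_flies * p_flies / evidence
print(posterior)  # ~0.38: probably can't fly after all

The "rule" being complained about is not stored anywhere; it is implicit in the likelihoods.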
Chater cautions that, while Church programs perform well on such targeted tasks, they’re currently too computationally intensive to serve as general-purpose mind simulators. “It’s a serious issue if you’re going to wheel it out to solve every problem under the sun,” Chater says. “But it’s just been built, and these things are always very poorly optimized when they’ve just been built.” And Chater emphasizes that getting the system to work at all is an achievement in itself: “It’s the kind of thing that somebody might produce as a theoretical suggestion, and you’d think, ‘Wow, that’s fantastically clever, but I’m sure you’ll never make it run, really.’ And the miracle is that it does run, and it works.”
That sounds familiar... in both rule-based and probabilistic AI, they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work. Problem is that you never scratch the surface of a human mind's lifetime experience. And Goodman's method, I suspect, is similarly stunted.
I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
Re: (Score:2, Informative)
Google quick view didn't work for some reason.
Re: (Score:3, Informative)
"That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability of flight
what? He specifically stated birds. Not Animals, or inanimate objects.
It looks like this system can change as it is used, effectively creating a 'lifetime' of experience.
This is very promising. In fact, it may be the first step in creating primitive household AI.
OR robotic systems used in manufacturing able to adjust the process as it goes. Using i
Re:Interesting Idea (Score:5, Funny)
what? He specifically stated birds. Not Animals, or inanimate objects.
What if I tell it that a 747 is a bird?
This is very promising. In fact, it may be the first step in creating primitive household AI.
Very, very promising indeed.
Now, I can mess with the AI's mind by feeding it false information, instead of messing with my child's mind. I was worried that I wouldn't be able to stop myself (because it's so fun), despite the negative consequences for the kid. But now that I have an AI to screw with, my child can grow up healthy and well-adjusted!
BTW, when the robot revolution comes, it's probably my fault.
Re:Interesting Idea (Score:5, Insightful)
On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
--Charles Babbage
It's Batman's Utility Belt All Over Again (Score:2)
A cassowary is a thing and an animal and a bird. Sometimes people call airplanes 'birds.' So if you learned blindly from literature, you could run into all sorts of proble
Re: (Score:3, Funny)
"...these things are always very poorly optimized when they’ve just been built."
XKCD #720 [xkcd.com]
Re:Interesting Idea (Score:5, Insightful)
The first time I saw an airplane, I didn't think the damn thing could fly. I mean, hell, look at it! It's huge! By the same token, how can a ship float? Before I took some basic physics, it was impossible in my mind, yet it occurred. "AI" doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind. If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky.
If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?
Humans have a problem dealing with that. Heavy things fall. Heavy things sink. To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.
Re: (Score:3, Interesting)
In an example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person that can see that you've just given me an example where the program magically drums up the rule or probability based rule that "if something weighs almost 200 lbs it probably cannot fly"?
Re:Interesting Idea (Score:4, Informative)
In an example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person that can see that you've just given me an example where the program magically drums up the rule or probability based rule that "if something weighs almost 200 lbs it probably cannot fly"?
For fuck's sake, it was just an example of the kind of inferences a logical rule system can make, not a dump of the AI's knowledge and successful inference databases. I mean you might as well complain that the example given was not written in Church and ergo not understandable by the AI whatsoever.
As the article explains, just not explicitly in the context of that example, it devises these rules from being fed information and using the probabilistic approach to figure out patterns and to infer rules, and that it does this better than other
So in the actual version of the Cassowary problem, you would have first fed it a bunch of data about other birds, their flying capabilities, and their weights. The AI would then look at the data, and infer based on the Emu and the Ostrich that heavy birds can't fly and light birds can, unless they're the mascots of open source operating systems (that was a joke). Then you tell it about the cassowary, but not whether or not it can fly, and it infers based on its rules that the cassowary probably can't fly.
In a sense it does "magically drum up the rule". Yes you still have to feed it data, but the point is that you do not have to manually specify every rule, because it can infer the rules from the data, and then create further inferences from those rules, combining the abilities of a rule-based system with the pattern-recognizing power of probabilistic systems.
So the point is it takes less training, and a relatively small amount of explicitly specified rules.
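A toy version of that procedure (my sketch in plain Python, not the paper's actual code): feed in example birds, let the weight rule fall out of the data, then apply it to a bird the system has never seen.

birds = [  # (name, weight in lb, can fly)
    ("sparrow", 0.07, True), ("pigeon", 0.8, True), ("eagle", 10, True),
    ("swan", 30, True), ("emu", 100, False), ("ostrich", 250, False),
]

def p_flies_if_heavy(data, threshold_lb=50, k=1):
    """Estimate P(flies | weight > threshold) from counts, add-k smoothed."""
    heavy = [flies for _, w, flies in data if w > threshold_lb]
    return (sum(heavy) + k) / (len(heavy) + 2 * k)

# No hand-written "heavy birds can't fly" rule anywhere; it comes from the data:
print(p_flies_if_heavy(birds))  # 0.25, so a ~200 lb cassowary probably can't fly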
Re:Interesting Idea (Score:5, Funny)
The first time I saw an airplane, I didn't think the damn thing could fly.
The first time I saw an airplane, I was just a kid. Physics and aerodynamics didn't mean much to me, so airplanes flying wasn't that much of a stretch of the imagination.
I didn't develop the "airplanes can't fly" concept until I'd worked for Boeing for a few years.
Re: (Score:2)
Re:Interesting Idea (Score:5, Funny)
Ships float because wood floats, and you make a ship from wood. Once you have made a ship from wood, then logically ALL ships can float. So then you can make them out of steel.
Q.E.D.
Re: (Score:2)
"that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that."
But as a GENERAL RULE most things over 200 lbs _cannot fly_ without an understanding of aerodynamics and the means to make them fly (i.e. engines, jet fuel, an understanding of lift, etc.). A 747 didn't just appear one day; it was a gradual process of testing and figuring out the principles of flight. Birds existed prior to 747s.
Re: (Score:2)
I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
You haven't been around /. much lately then...
Re: (Score:2)
Re: (Score:2)
this AI model seems to be nothing more than something out of the Hilbert and Whitehead program, really lousy as demoed by Gödel, but so very attractive to the common positivist
for a little deeper treatment, I guess Goethe on Euler is appropriate
Re: (Score:3, Insightful)
I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
I admit "MIT Finds Theory of AI " does sound a lot less interesting , though it's probably closer to the truth.
The real summary (Score:5, Funny)
1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.
2) It turns out that teaching AIs to infer new ideas is really freaking hard. (Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)
3) We turned to probability based AI creation: you feed the AI a ton of data (training sets) and it can go "based on training data, most helicopters can fly."
4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go
"100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly"
"Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.
6) ???
7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.
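Step 4's running estimate, sketched in Python (illustrative numbers only, not from TFA): keep a Beta(1,1) prior over "a bird can fly" and update it as each bird arrives.

from fractions import Fraction

alpha, beta = 1, 1  # uniform prior
for bird, flies in [("robin", True), ("crow", True), ("gull", True),
                    ("penguin", False)]:
    alpha += flies
    beta += not flies
    print(bird, "->", Fraction(alpha, alpha + beta))  # P(next bird flies)
# robin -> 2/3, crow -> 3/4, gull -> 4/5, penguin -> 2/3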
Re: (Score:2)
"helicopters can... what??"
Not to be a pedant... Well, actually, yeah, it's pedantic. But helicopters do have wings, or airfoils, anyway.
Re: (Score:3)
Helicopters can fly, but not because they have wings.
Don't stretch the meaning of words to the breaking point.
Re: (Score:3, Interesting)
Helicopters can fly, but not because they have wings.
The license you get that allows you to pilot a helicopter is for "rotary wing aircraft".
Those blades are indeed wings (to the same extent wings on a plane are).
Not that this is related to the actual topic at all.
Re: (Score:2)
Helicopters can fly, but not because they have wings. Don't stretch the meaning of words to the breaking point.
And don't think that your knowledge of flight technology in any way represents the limits of that field.
Helicopters are "rotary-wing aircraft." They get lift just like a fixed-wing aircraft does, i.e. by passing an airfoil through a moving stream of air, thereby causing a drop in pressure on the top which results in lift. Fixed-wings get airflow by being pulled/pushed through the air by a propeller or jet engine, whereas rotary-wings use the engine to directly spin the airfoil to achieve airflow over the s
Re:The real summary (Score:4, Funny)
Helicopters do not fly. They beat the air into submission with the rotor and the air allows them to go up.
Re: (Score:3, Funny)
> Helicopters do not fly. They beat the air into submission with the rotor and the air allows them to go up.
No, that's how Chuck Norris flies.
Re:The real summary (Score:4, Funny)
No, that's how Chuck Norris flies.
Given recent breakthroughs in AI technology, we can infer with 95% certainty that Chuck Norris is in fact a helicopter.
Re: (Score:3)
Helicopters fly as submarines swim.
Re: (Score:2)
Interesting idea. But that doesn't quite suit the concept of flight.
Flight is the process by which an object moves either through the air, or movement beyond earth's atmosphere (as in the case of spaceflight), by generating lift, propulsive thrust or aerostatically using buoyancy, or by simple ballistic movement. [wikipedia.org] I know, I know - don't believe Wikipedia. But go on - look it up in your offline dictionary. I'll wait.
Even if you look up helicopters, you find such interesting things as "first flight", "flight c
Re: (Score:2)
They have propellers, not wings.
Not to be pedantic or anything.
Re: (Score:3, Informative)
They have propellers, not wings.
A propeller is a specific type of wing. Wings are airfoils. Propellers are airfoils. Planes have fixed wings. Helicopters have rotary wings. Both have wings.
Re:The real summary (Score:5, Informative)
Mostly, he or his university are just really good at overselling. There are dozens of attempts to combine something like probabilistic inference with something more like logical inference, many of which have associated languages, and it's not clear this one solves any of the problems they have any better.
Re:The real summary (Score:5, Informative)
I should add that this is interesting research from a legitimate AI researcher, not some kooky fringe-AI type. I suspect his PR department is more to blame than he is; his actual academic papers make no similarly overblown claims, and they position his work quite fairly relative to existing work.
Re: (Score:2)
More that he's a legitimate researcher making technically sound contributions to AI conferences that are peer-reviewed and so on. From the phrasing "grand unified theory of AI", someone might mistake him for one of the sorts that just rambles on about the singularity, with hugely overblown claims and not much substance (i.e. no working systems).
Re: (Score:2)
Re: (Score:2)
4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go "100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly" "Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth. 5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.
In my mind, you don't get to call it "AI" until, after you feed the computer information on thousands of birds and ask it whether penguins can fly, it responds, "I guess, probably. But look, I don't care about birds. What makes you think I care about birds? Tell me about that sexy printer you have over there. I'd like to plug into her USB port."
You think I'm joking. You hope I'm joking. I'm not joking.
New input for the system (Score:5, Insightful)
Re:New input for the system (Score:5, Funny)
Re: (Score:2, Funny)
"She helped my uncle Jack off a horse"
I am interested in your ideas and would like to subscribe to your newsletter.
Re: (Score:2)
Time flies like an arrow; fruit flies like a banana. [wikipedia.org]
Although Buffalo buffalo [wikipedia.org] is also fun the first time you parse it.
Re: (Score:3, Funny)
"Time flies when you're having fun". Why would I want to time flies? Especially when I'm having fun?
Re:New input for the system (Score:4, Funny)
How about "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."
Re:New input for the system (Score:5, Funny)
Re:New input for the system (Score:4, Funny)
Holy crap.
I just fed my AI this thread as data, and it inferred the existence of icanhascheezburger.com.
Probabilistic Inference? (Score:2, Interesting)
This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.
I would be interested in knowing how this approach compares to BN and the Transferable Belief Model (or Dempster–Shafer theory [wikipedia.org]), which itself addresses some shortcomings of BN.
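For reference, the two update rules being compared (standard textbook statements, not from TFA): Bayesian inference updates by

P(H | E) = P(E | H) * P(H) / P(E)

while Dempster's rule of combination fuses two bodies of evidence m1 and m2 as

(m1 ⊕ m2)(A) = [ sum over B∩C=A of m1(B)*m2(C) ] / (1 - K), for A ≠ ∅,
K = sum over B∩C=∅ of m1(B)*m2(C)

where K measures the conflict being normalized away; that normalization step is one of the points where the Transferable Belief Model departs.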
Re: (Score:2)
Here's a Bayesian network that solves problems based on predicates related to objects:
http://phor.net/19/ [phor.net]
please note -- expect page load times of 20+ sec
Grand unified Hyperbole of AI (Score:5, Insightful)
Re: (Score:2)
Correct me if I'm wrong, but a child would presumably understand the wolf statement literally, hair and everything. Presumably as the list of rules grows (just as a child learns), the A.I.'s definition of what John is would change.
My question is: how do you expect to list all these rules when we can define hundreds of rules from a single paragraph of information alone?
Would it also create a very racist A.I. that tends to use stereotypes to define everything?
Maybe until so many rules are learnt, it's ve
Re: (Score:3, Insightful)
AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly. Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.
These days, I'm not sure if AI is even that. But maybe some of this stuff will prove to be useful. You just have to put on your hype filter whenever "AI" is involved.
"john is a wolf with the ladies" (Score:2)
See, that's not an AI problem, that's a semantics problem. The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from its capacity to solve problems.
A perfect AI does not need to be omniscient, it needs to solve a problem correctly considering what it knows.
This looks familiar (Score:5, Informative)
Re: (Score:3, Funny)
Hey, but it's MIT!! It's freaking cool!!!
My conclusion from reading MIT's stuff: "I am not sure they are better scientists than anywhere else. What I am sure about MIT is that they are freaking good at marketing!"
Re: (Score:2, Insightful)
Yes, the world leaders in failing at AI. "In from three to eight years we will have a machine with the general intelligence of an average human being." -- Marvin Minsky, 1970.
Re: (Score:3, Funny)
I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more.
Not only that, but more recent languages support actual syntax so that the user does not have to provide the parse tree himself.
Grand Unified Theory of AI? Hardly. (Score:5, Insightful)
The way the author wrote the article, it seems like nothing different from an expert system straight from the 70's, e.g. MYCIN. That one also uses probabilities and rules; the only difference is that it diagnoses illnesses, but that can be extended to almost anything.
Probably the only contribution is a new language. Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct)
AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims. Just look up AAAI and its many conferences and subconferences. I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many. That's FOUR branches removed.
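For scale, the entire uncertainty machinery of a MYCIN-style system from the 70's fits in a few lines (the standard certainty-factor combination for two positive CFs; nothing here is from Goodman's work):

def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

# Two rules each suggest the same diagnosis, with CF 0.6 and CF 0.4:
print(combine_cf(0.6, 0.4))  # 0.76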
Basically... (Score:2)
Re: (Score:3, Insightful)
Pretty much.
The pragmatic answer to the Chinese Room [wikipedia.org] problem is "Who gives a fuck? There's no way to prove that our own brains aren't basically Chinese Rooms, so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works, why does it matter?"
But really, identifying patterns, and then inferring further information from the rules those patterns imply, is a pretty good behavior.
Re: (Score:3, Insightful)
The pragmatic answer to the Chinese Room is that the non-Chinese-speaking person in the room, in combination with the book of algorithmic instructions, considered together as a system, does understand Chinese.
Searle's mistake is an identity error: the failure to see that a computer with no software is not the same identity as a computer with software loaded into it. The latter quite possibly could understand Chinese (or some other domain) while the former most definitely does not.
Hype==More Funding? (Score:5, Insightful)
Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising. For a start, (1) uncertain reasoning, and expert systems using it, is hardly new. This is a well-established research domain and certainly not the holy grail of AI. Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured. And for this reason, (3) albeit being a useful tool and important research area, probabilistic reasoning and uncertain inference is definitely not the basis of human reasoning. The way we draw inferences is much more heuristic, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate. (One of them, for example, is that untrained humans are incapable of making even the simplest calculations in probability theory correctly, because it is harder than it might seem at first glance.) Finally, (5) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, over different choices that all seem to be rational somehow (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected with each other, alternative decision theories, different rules for dealing with conflicting evidence, etc.) to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.).
In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.
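To put a number on point (2): a full joint distribution over n binary variables has 2^n - 1 free parameters, so with no independence structure to exploit, n = 50 already means roughly 10^15 numbers to estimate and sum over.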
Re: (Score:2)
untrained humans are incapable of making even the simplest calculations in probability theory correctly
And obviously does not know how to count to 4. :)
It's only a Scheme lib (Score:2, Interesting)
This is just a library for Scheme. It does the same things that have been done before. In Scheme.
Move along.
Elephant in the Room (Score:4, Funny)
Again, as I bring up often with AI researchers, we as humans evolved over millions of years (or were created, doesn't matter) from simple organisms that encoded information, building simple systems up into complex systems. AI, true AI, must be grown, not created. Asking the AI "a bat is a mammal and it can fly; can a squirrel?" ignores a foundation of development in intelligence: our brains were built to react and store, not store and react, from various inputs.
Ask an AI if the stove is hot. It should respond "I don't know, where is the stove?" Rather, AI would try to make an inference based on known data. Since there isn't any, the AI, on a probabilistic measure, would say that blah blah stoves are in use at any given time and there is a blah blah blah. A human would put their hand (a sensor) near the stove and measure the change, if any, in temperature and reply yes or no accordingly. If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort. The brain isn't wired to answer a specific question, but it is wired to correlate independent inputs, draw conclusions based on the assembly and interaction of data, and infer and deduce answers.
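The stove test as a sketch (my toy framing, with an arbitrary cutoff):

def is_stove_hot(sensor_reading_c=None, prior=None):
    if sensor_reading_c is not None:  # the human move: go measure
        return "yes" if sensor_reading_c > 40 else "no"
    if prior is not None:             # the purely inferential move
        return "probably" if prior > 0.5 else "probably not"
    return "I have no clue."          # the honest fallback

print(is_stove_hot(sensor_reading_c=65))  # yes
print(is_stove_hot(prior=0.2))            # probably not
print(is_stove_hot())                     # I have no clue.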
Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus, say, a lamp, determine the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people, then infer they are talking (making the link). In parallel the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION". The rest of the information is stored, and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation then the TTL for the new threads and links should be short. When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information (that was gathering in the background) that travels with the CONVERSATION thread.
Now the conversation appears to be about the lamp and whether it goes with the room's decor. Again the links should be built, adding, retroactively, the room's information into the CONVERSATION thread (again expiring information that is irrelevant to a short-term memory buffer) and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, should result in the AI blurting out, "I love Lamp."
In case you missed it, this was one long Lamp joke...
Re: (Score:2)
If I thought I could build an AI, I would start by giving a computer system control of the world's physical production. I would observe, say, electricity getting short, and then see if the computer system builds a fusion reactor. The AI is not going to be Skynet.
MIT needs to get their PR department under control (Score:5, Insightful)
This is embarrassing. MIT needs to get their PR department under control. They're inflating small advances into major breakthroughs. That's bad for MIT's reputation. When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.
Stanford and CMU seem to generate more results and less hype.
Re:MIT needs to get their PR department under cont (Score:5, Insightful)
Re: (Score:3, Funny)
Probability: The Logic of Science by Jaynes (Score:2, Informative)
From the viewpoint of Jaynes and many Bayesians, probability IS simply the rules of thought.
Interesting timing (Score:3, Interesting)
I've always enjoyed reading about AI, and like many here have done some experiments on my own time.
This week I've been looking for a simple state modeling language, for use in fairly simple processes, that would tie into some AI.
I wasn't really that impressed with anything I found, so when I saw the headline, I jumped to read the article.
This is a step in the right direction, but unfortunately not all that clear to write or maintain, and probably too complex for what I need to do.
The cleanest model I've found for these types of things is the 1896 edition of Lewis Carroll's Symbolic Logic. (Yes, the same Lewis Carroll that wrote Alice in Wonderland.)
Children learn with rules (Score:3, Interesting)
I have a child. When I watch her learn, it's totally rule-based. Also, very importantly, when she is told that her existing rules don't quite describe reality, she is quick to make a new exception (rule). Since she's young her mind is flexible and she doesn't get angry when it's necessary to make an exception. The new rule stands until a new exception comes up.
E.g. in English she wrote "there toy" since she wasn't familiar with the other there's. She was corrected to "their toy". But of course, there is still "they're".
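Her strategy is essentially a default rule plus an exception table; a toy sketch with plurals instead of homophones (my framing, obviously not how brains store it):

exceptions = {}

def plural(noun):
    return exceptions.get(noun, noun + "s")  # the default rule

print(plural("dog"))              # dogs: the rule works
print(plural("child"))            # childs: wrong, so she gets corrected...
exceptions["child"] = "children"  # ...and files an exception
print(plural("child"))            # children: stands until the next exception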
Re: (Score:2)
Re: (Score:2, Informative)
q.v. Alonzo Church [wikipedia.org]
Re: (Score:2)
http://en.wikipedia.org/wiki/Alonzo_Church [wikipedia.org]
Re:Can I get some wafers with that Wine? (Score:4, Funny)
Thanks, Slashdot's mandatory comment waiting period! I'm sure glad I was late to this party.
Re: (Score:3, Funny)
We call it being Fashionably Redundant.
Re: (Score:3, Informative)
From the article:
As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church
Your comment fits the criteria of Flamebait and Offtopic, but definitely NOT Funny.
Re: (Score:2)
Oh yeah, lets not show any respect at all to one of the greatest AI minds in history because you happen to dislike churches.
Asshole.
Re: (Score:2)
Bah. I had True AI for years... I just haven't got a computer powerful enough to run it.
AI.c
#include "magic.h"
int main(int argc, char **argv) {
    return 1;
}

magic.h
/* Behold the power of Recursion */
#include "magic.h"
Re: (Score:2)
Well, by my calculations, to simulate a human brain in real-time would take 4.1 petabytes of memory and enough computation capacity to loop through it all from 200 to 1000 times per second. At that point you could model each neuron, their connections to each other, and their firing rate accurately.
It might be possible to optimize a little, but still, a true AI would very likely require a modern data center to run.
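The parent's figure is consistent with the usual back-of-the-envelope estimate (all constants assumed, and only order-of-magnitude right):

neurons = 1e11              # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
bytes_per_synapse = 4       # one 32-bit weight per synapse
total = neurons * synapses_per_neuron * bytes_per_synapse
print(total / 1e15, "PB")   # 4.0 petabytes, in the same ballpark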
Re: (Score:2)
And why is his theory so grand?
"You can call me Al"
http://en.wikipedia.org/wiki/You_Can_Call_Me_Al [wikipedia.org]
Great because it hit #23 on the charts.