Wolfram Promises Computing That Answers Questions
An anonymous reader writes "Computer scientist Stephen Wolfram feels that he has put together at least the initial version of a computer that actually answers factual questions, a la Star Trek's ship computers. His version will be found in his Web-based application, Wolfram Alpha. What does this mean? Well, instead of returning links to pages that may (or may not) contain the answer to your questions, Wolfram will respond with the actual answer. Just imagine typing in 'How many bones are in the human body?' and getting the answer." Right now, though the search entry field is in place, Alpha is not yet generally available -- only "to a few select individuals."
Anyone remember AskJeeves? (Score:4, Informative)
Been there, done that. [ask.com]
All that is old is new again.
A.I. (Score:3, Informative)
Google already does this. Type a question like "What is one plus one?" and you will get an answer. It's artificial intelligence.
like this? (Score:3, Informative)
How many hogshead is a litre? [google.com]
How many rods in a furlong? [google.com]
What's MLXII + XIV? [google.com]
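The Roman-numeral query above is the easiest of the three to sketch. A rough Python toy in the spirit of Google's calculator (the helper names are made up for illustration, not any real API):

```python
# Roman-numeral arithmetic, as in the "MLXII + XIV" query above.
# Hypothetical helper names; a sketch, not a real calculator API.

VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def from_roman(s: str) -> int:
    """Parse a Roman numeral, subtracting any digit that precedes a larger one."""
    digits = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = digits[ch]
        total += -v if nxt != " " and digits[nxt] > v else v
    return total

def to_roman(n: int) -> str:
    """Greedily emit the largest symbol that still fits."""
    out = []
    for value, symbol in VALUES:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

print(to_roman(from_roman("MLXII") + from_roman("XIV")))  # MLXXVI
```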
Re:How many bones (Score:3, Informative)
Wasn't this already done? answers.com, askjeeves.com (now ask.com).
The answer to your question is yes.
There's also the pathetic Powerset [powerset.com], which was sold to Microsoft for 100 million bucks [venturebeat.com]. Very pleasing to see MS burning money on such hyped shit.
Re:Lojban (Score:3, Informative)
Firstly, you've got Goedel incompleteness to worry about (which stems from statements that are fundamentally ambiguous as to their interpretation, such as "this statement is false"). Secondly, languages are there for people to communicate with, and people seem to prefer ambiguity
What exactly does Godel's theorem have to do with what you just said? The incompleteness theorem deals with axiomatized systems. This leads me to think that you might be confusing the popular meaning of "language" with the mathematical definition. People (at least normal people) do not communicate with mathematical languages.
Re:Anyone remember AskJeeves? (Score:4, Informative)
Bicorns do exist. Napoleon's hat was a bicorn [answers.com].
Sci-Tech Dictionary: bicorn (bī′kôrn)
(mathematics) A plane curve whose equation in Cartesian coordinates x and y is (x^2 + 2ay - a^2)^2 = y^2(a^2 - x^2), where a is a constant.
WordNet: bicorn
The noun has one meaning: a cocked hat with the brim turned up to form two points
Synonym: bicorne
The adjective bicorn has one meaning: having two horns or horn-shaped parts
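For what it's worth, the curve equation in the dictionary entry above is easy to sanity-check numerically: at x = 0 it reduces to (2ay - a^2)^2 = a^2 y^2, whose solutions are y = a and y = a/3. A quick sketch (the helper name is made up):

```python
# Numeric check of the bicorn equation (x^2 + 2ay - a^2)^2 = y^2 (a^2 - x^2).

def on_bicorn(x, y, a, eps=1e-9):
    """True when (x, y) satisfies the bicorn equation for parameter a."""
    lhs = (x**2 + 2 * a * y - a**2) ** 2
    rhs = y**2 * (a**2 - x**2)
    return abs(lhs - rhs) < eps

a = 2.0
print(on_bicorn(0.0, a, a), on_bicorn(0.0, a / 3, a))  # True True
```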
Re:Lojban (Score:2, Informative)
Just a couple of days ago I wanted to figure out how crustaceans are related to other animals. (Don't ask...)
I'd have to think for five minutes to formulate the question, instead of just typing 'crustecean', which I don't even know how to spell.
On the rare occasions when I only need a factual answer, like how much an elephant weighs, the answer is more complex than a simple number.
Anyway, after talking to a number of 'AI' voice recognition systems, I don't believe in machine intelligence in any form :)
Re:People are special. (Score:5, Informative)
Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is Turing-complete. It isn't, and cannot be - not in this universe, anyway.
Of course the brain is Turing-complete. You can prove it the same way you prove any other machine is Turing-complete: it has the ability to simulate a Turing machine. I can simulate a tape-driven Turing machine pretty damn easily with a sheet of paper and a pencil. I think you're confused as to what "Turing-complete" means. Solving the halting problem is not a requirement; in fact, you can prove that a Turing machine cannot solve the halting problem. So the brain's inability to do so has no bearing on whether it's Turing-complete.
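The paper-and-pencil simulation the parent describes is only a few lines when written out. A minimal sketch (the interpreter and the toy machine below are my own illustration, not any standard library):

```python
# A minimal Turing machine interpreter: exactly the bookkeeping you would
# do by hand with paper and pencil. The example table is a toy machine
# that flips every bit and halts at the first blank.

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """table maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))          # sparse tape; blank elsewhere
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip, "1011"))  # 0100
```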
Re:Lojban (Score:5, Informative)
The Chinese Room is misdirection, pure and simple. We're supposed to conclude that because the person in the room doesn't have the subjective experience of understanding Chinese, the system as a whole (the person, the data tables, the rules) doesn't "really" understand Chinese.
But there's no logical reason to assume a specific part of the system should have a subjective experience of understanding something that the system as a whole understands. This becomes obvious if you follow the logic a few more steps. Do you believe each specific part of your brain subjectively experiences understanding? How about individual neurons? How about the atoms that comprise the neurons in your brain? If you don't believe these things have the subjective experience of understanding the things that your brain as a whole understands, then your brain is incapable of "really" understanding anything, according to the logic of the Chinese Room.
Re:People are special. (Score:4, Informative)
> Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is Turing-complete. It isn't, and cannot be - not in this universe, anyway.
The halting problem is undecidable over Turing machines: no Turing machine can decide it either. So claiming 'the brain is not Turing-complete because it cannot solve the halting problem' makes no sense.
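For anyone who hasn't seen it, the diagonal argument behind this point fits in a few lines. A sketch, assuming a (necessarily hypothetical) halts() predicate:

```python
# Why no halts() predicate can exist: build a program that does the
# opposite of whatever halts() predicts about it. `halts` is hypothetical.

def contrarian(halts):
    def troll():
        if halts(troll):
            while True:        # predictor said "halts", so loop forever
                pass
        return "halted"        # predictor said "loops", so halt at once
    return troll

# Whatever a candidate predictor answers, it is wrong about troll:
always_says_loops = lambda f: False
t = contrarian(always_says_loops)
print(t())  # "halted" -- contradicting the prediction that it loops
# (A predictor answering True would make t() loop forever: wrong again.)
```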
Re:Lojban (Score:3, Informative)
No, Turing's argument was that the question "can X really think, or is it merely perfectly simulating thinking?" is impossible to answer for any value of X other than yourself. For this reason, we have a polite convention of treating things which appear to think as really thinking, so we should simply extend this courtesy to any seemingly sentient computer we might ever produce and be done with it.
If anything, the Chinese Room and all other arguments from incredulity ("How could it think? It's just a room!") reinforce Turing's point about the pointlessness of such arguments.
We have no evidence... that the laws of physics exist? Lul wut?
Gotta hand it to you, you certainly take your scepticism seriously ;).
This doesn't address the main problem in Searle's philosophy, namely his assumption that "brains cause minds" in virtue of their biology rather than their functionality. In other words, Searle assumes that an algorithm being executed on arbitrary hardware isn't conscious, but that only a (biological) brain can be. He never once proves or shows any evidence for this assumption, yet without it one has no reason to assume that the Chinese Room isn't conscious or doesn't "really" understand Chinese.
The Chinese Room isn't a thought experiment; it's an argument along the lines of: "If an algorithm executing on an arbitrary platform can be a mind, then an algorithm being executed by someone by hand might be a mind, and that's incredible!"