Wolfram Promises Computing That Answers Questions 369
An anonymous reader writes "Computer scientist Stephen Wolfram feels that he has put together at least the initial version of a computer that actually answers factual questions, a la Star Trek's ship computers. His version will be found in his company's Web-based application, Wolfram Alpha. What does this mean? Well, instead of returning links to pages that may (or may not) contain the answer to your question, Wolfram Alpha will respond with the actual answer. Just imagine typing in 'How many bones are in the human body?' and getting the answer." Right now, though the search entry field is in place, Alpha is not yet generally available -- only "to a few select individuals."
Lojban (Score:2, Interesting)
I don't think this can be examined without language issues. Lojban attempts to make a parsable constructed language (currently undergoing a few grammar issues, but mostly locked down). As we get closer to the Singularity, with regards to infant-style general AI and perhaps even transhuman implants (thought detector or such), we'll see perhaps a myriad of unambiguous languages.
Re: (Score:3, Funny)
Re:Lojban (Score:4, Insightful)
is the answer to this question "no"?
If you want to answer a question without understanding the question then how do you know when the question can be answered?
Re: (Score:2)
is the answer to this question "no"?
[snip]
five.
Re:Lojban (Score:5, Funny)
three, sir
Re:Lojban (Score:5, Funny)
You look for an answer until you find it or give up.
Oh, so (i) you don't understand the question, then (ii) you look for an answer, (iii, A) you find it (how? how will you know you found it?), or (iii, B) you give up.
Just wanted to make sure that this thread was really about this. Here's a new low, even for slashdot.
Re:Lojban (Score:5, Funny)
You look for an answer until you find it or give up.
Oh, so (i) you don't understand the question, then (ii) you look for an answer, (iii, A) you find it (how? how will you know you found it?)
Once you have found the answer, only then will you understand the question, grasshopper.
Re: (Score:2)
Re:Lojban (Score:4, Insightful)
Don't sweat it ... these are the same people who believe that a computer can "think" about chess, instead of just searching through N plies in T time, then offering the best solution it has found within those constraints, without ever having to "understand" chess on any level.
This can be applied to ANY problem, provided you want to invest the design and testing time.
This whole question was answered decades ago (1970s) with the "foreigner in a sealed room" turing thought experiment. It showed that the person in the sealed room doesn't have to understand english, or even know the answer to questions, provided they are given some simple rules to link words together in a response depending on what words are in the original statement.
Re:Lojban (Score:5, Insightful)
Who says that this is insufficient for "thinking"?
I think understanding the Chinese room paradox as having provided a solution to this question is a misinterpretation. The best thing to take away is that "thinking" is not well defined.
Not quite correct (Score:3, Interesting)
This whole question was answered decades ago (1970s) with the "foreigner in a sealed room" turing thought experiment. It showed that the person in the sealed room doesn't have to understand english, or even know the answer to questions, provided they are given some simple rules to link words together in a response depending on what words are in the original statement.
That's not quite correct--the Chinese Room thought experiment does not depend on "simple rules"--it imagines a Turing-Test-passing program converted to book form, which is then run manually by an English speaker, responding to Chinese inputs. But there's nothing in it that implies that the Turing-Test-passing routine is simple.
In fact there's nothing that says such a routine is even possible. The Chinese Room thought experiment has always struck me as begging the question. It starts by assuming that a routi
Re: (Score:3, Interesting)
The "thinking" can be reduced to a computing machine made out of tinkertoys, or punch-card readers. It requires no understanding, no "learning", no insight - just rote mechanical responses to inputs. That's not "thinking" any more than instinct is - it's just hard-wired responses.
The real question is "what is sufficient for thinking", and I believe the answer is pretty easy - free will. For those who argue against free will, all thought is predetermined, and therefore mechanistic. But of course, t
Re:People are special. (Score:5, Informative)
Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is turing-complete. It isn't, and cannot be - not in this universe, anyway.
Of course the brain is turing complete. You can prove it the same way you prove any other machine is turing complete: it has the ability to simulate a turing machine. I can simulate a tape driven turing machine pretty damn easily with a sheet of paper and a pencil. I think you're confused as to what "turing-complete" means. Solving the halting problem is not a requirement. In fact, you can prove that a turing machine cannot solve the halting problem. So the brain's inability to do so doesn't have any bearing on whether it's turing complete.
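To make the pencil-and-paper point concrete, here is a toy Turing machine simulator of the kind you could run by hand. The machine, its rules table, and the blank symbol `_` are all invented for illustration; any real TM could be plugged in the same way.

```python
# A toy Turing machine simulator. This particular machine flips every
# bit of its input and halts at the first blank ('_').
def run_tm(tape, rules, state="start", head=0):
    cells = dict(enumerate(tape))          # sparse tape
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

rules = {
    ("start", "0"): ("1", "R", "start"),   # flip 0 -> 1, move right
    ("start", "1"): ("0", "R", "start"),   # flip 1 -> 0, move right
    ("start", "_"): ("_", "R", "halt"),    # blank: halt
}
print(run_tm("0110", rules))  # prints 1001
```

Running the table by hand with pencil and paper is exactly the same computation, just slower - which is the parent's point.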
Re:Lojban (Score:5, Informative)
The Chinese Room is misdirection, pure and simple. We're supposed to conclude that because the person in the room doesn't have the subjective experience of understanding Chinese, the system as a whole (the person, the data tables, the rules) doesn't "really" understand Chinese.
But there's no logical reason to assume a specific part of the system should have a subjective experience of understanding something that the system as a whole understands. This becomes obvious if you follow the logic a few more steps. Do you believe each specific part of your brain subjectively experiences understanding? How about individual neurons? How about the atoms that comprise the neurons in your brain? If you don't believe these things have the subjective experience of understanding the things that your brain as a whole understands, then your brain is incapable of "really" understanding anything, according to the logic of the Chinese Room.
Re:Lojban (Score:4, Interesting)
I have always hated Searle's Chinese room "paradox", since it is just playing a semantic game with the definition of the system. The claim that the person in the room doesn't understand things is no different from saying that a neuron doesn't understand things, or that 1/4 of my brain alone doesn't understand things. The "rules" in the box are part of the system, and I would claim that if it passes the test, the person+rules do demonstrate understanding. We have no evidence that human thought somehow transcends the model of executed rules anyway; at some level it is all chemistry and physics.
A modern example would be that my CPU (::person in box) doesn't know how to behave as a web browser. While true, my computer does know how to be a web browser when you add the software (::rules), and an input and output system (::box interface). The Chinese room paradox is just yanking out the CPU and saying that it doesn't know how to be a web browser. Nice trick.
The other thing the "paradox" does is try to evoke imagery of a very simple ruleset, because it is a person executing rules on paper, which would be very slow. The person executing paper rules is slow enough to have the computational power of a few neurons at best, while the brain has ~100 billion. So the equivalent rules in the paradox's imagined transformation would never fit in a room and could not be executed to completion by a person before their death. While it is supposed to be a thought experiment, the relative scale is so incredibly different that it makes imagining it difficult, and I wonder if it was chosen for that purpose. I will cut Searle some slack though, since Turing's guess about how much computing power would be needed to pass the Turing test was ridiculously low (~50 MB of storage) compared to what we now know of human brain capabilities.
I think the appeal of the paradox is that deep down many people want to believe that we are qualitatively different from computers, rather than quantitatively so. As for me, I'm happy enough knowing that atop my shoulders sits a computer with more raw processing power than the largest supercomputer, with rules/programming far beyond anything we can create now, or perhaps for hundreds of years.
Re:People are special. (Score:4, Informative)
> Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is turing-complete. It isn't, and cannot be - not in this universe, anyway.
The halting problem is undecidable over Turing machines. Claiming 'the brain is not turing-complete because it cannot solve the halting problem' makes no sense.
Re: (Score:3, Informative)
No, Turing's argument was that the question "can X really think or is it merely perfectly simulating thinking" is impossible to answer for any value of X other than yourself. For this reason, we have a polite convention of treating things which appear to think as rea
Re:Lojban (Score:4, Interesting)
From this, we can see that if we can build a reasoning engine that can determine if a given answer is correct for a question, hypothetically we can iterate over a large set of answers and apply our filter to each one. This provides us with a machine to answer questions (although depending on the size of the set of answers, "I don't know" might be a frequent response) which (by my definition, at least) 'understands' the question.
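The iterate-and-filter idea above can be sketched in a few lines. Here `is_correct` is a stand-in for the hypothetical reasoning engine, backed by a made-up one-entry fact table:

```python
# Sketch of generate-and-test question answering: the verifier only
# has to CHECK an answer against a question, not produce one.
FACTS = {"How many bones are in the adult human body?": 206}

def is_correct(question, candidate):
    return FACTS.get(question) == candidate

def answer(question, candidates):
    for candidate in candidates:
        if is_correct(question, candidate):
            return candidate
    return "I don't know"

print(answer("How many bones are in the adult human body?", range(1000)))
# prints 206
```

As the parent notes, if the candidate set doesn't contain the answer (or is too large to search), "I don't know" is the honest output.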
Re: (Score:3, Insightful)
More to the point, I'd argue that true reasoning would be the ability to provide a factual answer to a subjective question.
For example, "Does food taste good?"
The machine would have to take into account the vast bits of information at its disposal. For example, found statements like 'This food tastes good' and 'this food does not taste good' would both have to be considered, and then qualifiers added to the answer to make it correct, such as 'Some food tastes good'.
Otherwise, it's just fancy regurgitation of facts u
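A minimal sketch of the qualifier idea from the comment above; the statement list and the all-or-nothing thresholds are invented for illustration:

```python
# Toy version of adding qualifiers to contradictory evidence: count
# positive vs. negative statements and pick a quantifier accordingly.
def qualified_answer(statements):
    positive = sum("not" not in s for s in statements)
    ratio = positive / len(statements)
    if ratio == 1.0:
        return "Yes, food tastes good."
    if ratio == 0.0:
        return "No, food does not taste good."
    return "Some food tastes good."

statements = [
    "this food tastes good",
    "this food does not taste good",
]
print(qualified_answer(statements))  # prints: Some food tastes good.
```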
Re: (Score:3, Interesting)
Otherwise, it's just fancy regurgitation of facts using a complex language parser.
It's more than regurgitation. It can make inferences from the set of facts available to generate new propositions. Some of these new propositions may not be obvious to a human looking at the same set of source facts.
Re: (Score:2)
But asking the question is so much easier than coming up with the magical query which will return the right "I'm feeling lucky" result, which is really the answer you want.
Re: (Score:3, Insightful)
Ahh, but how many bones does a one-armed midget with three fused vertebrae who just swallowed a whole parakeet have?
Re: (Score:3, Insightful)
Re:Lojban (Score:5, Funny)
I don't think this can be examined without language issues. Lojban attempts to make a parsable constructed language (currently undergoing a few grammar issues, but mostly locked down). As we get closer to the Singularity, with regards to infant-style general AI and perhaps even transhuman implants (thought detector or such), we'll see perhaps a myriad of unambiguous languages.
Your cautiousness and pragmatism in the first two sentences were noted and admired. Then you used the word Singularity in the Vinge sense, and my woo-detector pegged.
Re:Lojban (Score:4, Interesting)
I don't think this can be examined without language issues. Lojban attempts to make a parsable constructed language (currently undergoing a few grammar issues, but mostly locked down). As we get closer to the Singularity, with regards to infant-style general AI and perhaps even transhuman implants (thought detector or such), we'll see perhaps a myriad of unambiguous languages.
Any language that is truly unambiguous is uninteresting. Firstly, you've got Goedel incompleteness to worry about (which stems from statements that are fundamentally ambiguous as to their interpretation, such as "this statement is false"). Secondly, languages are there for people to communicate with, and people seem to prefer ambiguity. Ask a poet if you need proof of that.
Re: (Score:3, Informative)
Firstly, you've got Goedel incompleteness to worry about (which stems from statements that are fundamentally ambiguous as to their interpretation, such as "this statement is false"). Secondly, languages are there for people to communicate with, and people seem to prefer ambiguity
What exactly does Godel's theorem have to do with what you just said? The incompleteness theorem deals with axiomatized systems. This leads me to think that you might be confusing the popular meaning of "language" with the mathematical definition. People (at least normal people) do not communicate with mathematical languages.
Re: (Score:3, Insightful)
What exactly does Godel's theorem have to do with what you just said? The incompleteness theorem deals with axiomatized systems. This leads me to think that you might be confusing the popular meaning of "language" with the mathematical definition. People (at least normal people) do not communicate with mathematical languages.
If you have an unambiguous system, that means it must be possible to give an exact translation from it into mathematics. After all, that's what mathematical notation really is, a way of unambiguously saying things.
But wait! Goedel says that there are no interesting complete mathematical systems (yeah, I do know what axiomatization is, thankyouverymuch). From that we can then deduce that the language that is being translated from must either be able to describe paradoxical entities whose interpretation/valua
Re: (Score:3, Insightful)
"When are you going to crash?" (Score:4, Funny)
1. Windows version of program crashes without answering
2. Mac version of program says "after your next question, smartass"
3. Linux version of program says never, 'cos it can't even drive a car
Re:"When are you going to crash?" (Score:4, Funny)
"Is 'no' the answer to this question?"
Re: (Score:3, Funny)
Re:Lojban (Score:4, Interesting)
Are you limited to yes/no answers?
Why are people presuming that the program will be limited to yes/no answers?
Q: Will you answer no to this question?
A: It's rather unlikely.
(Or, "I doubt it" or any of several different answers.)
There are enough legitimate paradoxes that you don't need to construct such obvious losers.
How about:
Is "This statement is false." false?
It's still easy enough to handle (in several different ways), but at least it's a valid challenge.
Re: (Score:3, Funny)
Why are people presuming that the program will be limited to yes/no answers? Q: Will you answer no to this question? A: It's rather unlikely. (Or, "I doubt it" or any of several different answers.)
Q: What made you think it's rather unlikely? kernel panic
Re: (Score:2)
Are you implying that the computer hasn't been programmed to lie?
How many bones (Score:5, Interesting)
Q: How many bones are in the human body
A: Did you mean cumulatively or at any point in time?
Re:How many bones (Score:5, Funny)
Q: How many bones are in the human body?
A: Did the human in question eat fish recently?
Re: (Score:2)
Re: (Score:2)
Actually a good point; bones fuse over time so children have more bones than adults.
Then again, this kind of system will fall out of favour as soon as it delivers incorrect answers, especially when there is a clear context.
I like answers like SAL in 2010...
Re: (Score:3, Funny)
Re:How many bones (Score:5, Funny)
Q: How many bones are in the human body?
A: How does bones are in the human body make you feel?
Re: (Score:2)
Re: (Score:2, Interesting)
Wasn't this done? answers.com, askjeeves.com (now ask.com)
Re: (Score:3, Informative)
Wasn't this done? answers.com, askjeeves.com (now ask.com)
the answer to your question is yes.
There's also the pathetic Powerset [powerset.com], which was sold to Microsoft for 100 million bucks [venturebeat.com]. Very pleasing to see MS burning money on such hyped shit.
Re: (Score:2, Funny)
My buddy Guido can reduce the number.
Re: (Score:2)
melelaswe@localhost ~ $ python
Python 2.5.2 (r252:60911, Dec 11 2008, 09:55:54)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> reduce(the,number)
42
Re: (Score:2)
Oblig (Score:2)
Q: How many bones are in the human body
A: What do you mean, an African or European human?
Re: (Score:2)
"where is the source" (Score:2)
Odd, i didn't get an answer.
Simple: (Score:4, Funny)
public class Alpha {
    public static void main(String[] args) {
        System.out.println("42");
    }
}
Re: (Score:2)
Six minutes for the inevitable Deep Thought joke?
We're slacking.
Re:Simple: (Score:4, Funny)
Anyone remember AskJeeves? (Score:4, Informative)
Been there, done that. [ask.com]
All that is old is new again.
Re: (Score:2)
Re:Anyone remember AskJeeves? (Score:5, Insightful)
Circumference of the earth [google.com]?
Number of horns on a unicorn [google.com]?
Google already does this. It's giving you the answer and linking to the page that has it. All Google needs is to be able to use these things in the calculator ("circumference of the earth in furlongs").
Oh and related to your "rupees in a dollar". "1 dollar in indian rupees [google.com]" will tell you.
Re:Anyone remember AskJeeves? (Score:4, Interesting)
Try checking the number of horns on a bicorn [google.com] and you'll see that the google engine is not intelligent, artificial or otherwise. Or would you like to argue that bicorns are not real, and therefore don't count?
Re:Anyone remember AskJeeves? (Score:4, Informative)
Bicorns do exist. Napoleon's hat was a bicorn [answers.com].
Sci-Tech Dictionary: bicorn
(mathematics) A plane curve whose equation in cartesian coordinates x and y is (x² + 2ay - a²)² = y²(a² - x²), where a is a constant.
WordNet: bicorn
The noun has one meaning: a cocked hat with the brim turned up to form two points
Synonym: bicorne
The adjective bicorn has one meaning: having two horns or horn-shaped parts
Re:Nope. (Score:5, Informative)
Re: (Score:2)
I guess you can call that "detail". I call it a fawning press release.
Still, from what I read it's a proprietary summary of a lot of expert knowledge with a token cellular automaton thrown in somewhere to satisfy Wolfram's ego. Prediction: fades from prominence within a year of release, as 1) the summarized knowledge goes out of date and can't be maintained in realtime (human labor required too expensive); and 2) knockoffs emerge.
Re:Nope. (Score:5, Insightful)
Good points, but this is still just a different (better perhaps?) implementation of the same concept. The big issue with the implementation is that it will only "know" what you tell it, the same as any other computer. Further, it will only be able to tell you about what you want to know based on the system's ability to parse your question and return what it "thinks" you want to know.
Look, I'm not saying it isn't a cool idea, I'm just saying that it isn't as shiny and new as the creator would lead you to believe. I'm also not inclined to be impressed, considering that it hasn't even been released yet.
Re:Nope. (Score:4, Funny)
I'd be impressed if it could answer "Could you explain your previous answer using a car analogy?"
Re: (Score:3, Interesting)
Good points, but this is still just a different (better perhaps?) implementation of the same concept. The big issue with the implementation is that it will only "know" what you tell it, the same as any other computer.
Or human, for that matter.
The big difference is inference. If I tell you the facts "John married Jane in 1981" and "Frank is John's son, he's 15", you will probably conclude that Jane is very likely Frank's mother, at least until you get conflicting information. Computers so far could not. AI research has been working on giving them that ability for almost 20 years now. After lots and lots of failures, they've also made some progress. The big issue hasn't been the collection of facts for years now, but how to
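The John/Jane/Frank example can be sketched as one tiny forward-chaining rule. The triple format and the rule itself are made up for illustration; real systems (Prolog, OWL reasoners) do this far more generally.

```python
# Tiny forward-chaining sketch: from "John married Jane" and "Frank is
# John's son", derive that Jane is likely Frank's mother. Conclusions
# are defeasible: they hold only until conflicting information arrives.
facts = {
    ("married", "John", "Jane"),
    ("son_of", "Frank", "John"),
}

def infer(facts):
    derived = set()
    for rel, child, father in facts:
        if rel != "son_of":
            continue
        for rel2, husband, wife in facts:
            if rel2 == "married" and husband == father:
                derived.add(("likely_mother_of", wife, child))
    return derived

print(infer(facts))  # prints {('likely_mother_of', 'Jane', 'Frank')}
```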
A.I. (Score:3, Informative)
Google already does this. Type a question like "What is one plus one?" and you will get an answer. It's artificial intelligence.
Re:A.I. (Score:4, Interesting)
Re: (Score:3, Interesting)
That seems to be hardcoded though, it already fails at "how old is Steve Jobs".
Re: (Score:2, Interesting)
It also fails on This [google.co.uk] - seriously...
Re: (Score:2)
Re:A.I. (Score:5, Interesting)
It goes further than that. Try Googling "how old is Britney Spears" and "what is the population of iceland" (without quotes). The answer appears at the top, separately from the search results.
Google them together, it returns your post!
Re: (Score:3)
Now try googling "what is the population of states that border Spain and only Spain".
If I understand TFA, the application in question will actually be able to answer that question.
Can he do this for politicians, as well? (Score:2, Insightful)
a computer that actually answers factual questions
I've never seen a politician who has been able to do that. But I guess they don't want to either.
Wolfram = Tungsten (Score:2)
Tungsten has the symbol "W" from its original name, "Wolfram" (which comes from wolframite, one of the ores from which it is extracted.)
http://en.wikipedia.org/wiki/Tungsten [wikipedia.org]
Re: (Score:2)
Yeah, but... (Score:3, Funny)
Why would someone brand something that was supposed to be an intelligent machine as "W"?
Computers are useless... (Score:5, Insightful)
...they only give you answers.
Re: (Score:3, Insightful)
Re: (Score:2)
Wow, I like that. Is that yours? Mine was Picasso I think, probably paraphrased.
A new kind of askjeeves... (Score:3, Funny)
Re: (Score:2)
Deep Thought version 0.1? (Score:2, Insightful)
"Hmm. Tricky."
We miss you, Mr. Adams.
mostly, this system already exists (Score:2)
like this? (Score:3, Informative)
How many hogshead is a litre? [google.com]
How many rods in a furlong? [google.com]
What's MLXII + XIV? [google.com]
Re: (Score:3, Informative)
I hope this is what I think it is (Score:3, Interesting)
Trying to find mathematics/physics information is often pretty terrible. I mean, if you are just looking for a topic you can generally pull up related papers, but that is about the depth of complexity you are capable of searching for.
Unfortunately there is no convenient (or universal) plaintext notation. If you are doing anything serious you probably use latex markup (e.g., \Psi^{*}\Psi) or something similar to render images of your equations. That's well and good for people who just want to read your paper, but for people who want to do a complex search to find very specific bits of contextual information, it is just about useless.
So if I can hope that Wolfram's goal is to make his company's math and science knowledge base searchable by some sort of contextual framework, then that could be pretty awesome for those of us who would like to penetrate particular aspects of independent fields without having to become experts on the fields first.
Who is the greatest scientist? (Score:2)
I'm just wonderin' ...
Re: (Score:2)
Re: (Score:3, Interesting)
There were many editors on the book, but unfortunately, they weren't actually allowed to do anything. I mean nothing.
If an editor is denied write access to the book, is he still an editor?
All the Wolfram promises (Score:2, Insightful)
Circularity (Score:2)
What happens if I try to ask it when it will be available to the rest of us mere mortals? Does the web site or my head asplode?
OK, I've got a question for it. (Score:2)
Just Words (Score:3, Insightful)
As long as they are not showing the tool to the public, I do not believe they have built a system that delivers on that promise. However, there has been lots of research in this area, and there are methods to convert queries into Horn clauses so you can query knowledge bases. I designed a method in my master's thesis which does similar things; however, it was laid out to be performed by humans.
As ingredients for such a system you need
- a knowledge base filled with facts (you can use OWL for it if you want or a rule based approach)
- a reasoner (e.g. something like pellet)
- a rule engine (e.g. something like Jess)
- a method which understands simple English query sentences.
The really hard part is the knowledge base, because it is a lot of work. An automated approach that can understand written documents and classify them correctly would be great, but I doubt that they have found a solution to this problem.
This problem includes:
- How to handle uncertainty?
- What to do with contradicting knowledge?
- What to do with temporal aspects in that knowledge?
However, if they built a tool which can answer questions in a single domain of knowledge, that is nothing new. Such machines have existed for a long time. They can be helpful, but there is nothing exciting about them.
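The ingredient list above (fact base, reasoner, rule engine, query method) can be sketched in miniature. Everything here - the facts, the single rule, and the wildcard query format - is an invented stand-in for real systems like OWL, Pellet, or Jess.

```python
# Miniature knowledge-base pipeline: store facts as triples, apply one
# horn-clause-style rule to derive new facts, then answer pattern queries.
facts = {("capital_of", "Paris", "France")}

def apply_rules(facts):
    # Rule: capital_of(X, Y) -> city_in(X, Y)
    inferred = set(facts)
    for rel, x, y in facts:
        if rel == "capital_of":
            inferred.add(("city_in", x, y))
    return inferred

def query(kb, pattern):
    # None in a pattern position acts as a wildcard variable.
    rel, x, y = pattern
    return [f for f in kb
            if f[0] == rel
            and (x is None or f[1] == x)
            and (y is None or f[2] == y)]

kb = apply_rules(facts)
print(query(kb, ("city_in", "Paris", None)))
# prints [('city_in', 'Paris', 'France')]
```

The hard problems the comment lists (uncertainty, contradiction, time) are exactly what this toy ignores: every fact here is certain, consistent, and timeless.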
True Knowledge (Score:3, Interesting)
quizbot (Score:2)
Well, quizbot from trueknowledge already does what Wolfram Alpha promises to do in May.
http://quizbot.trueknowledge.com/ [trueknowledge.com]
I Already Know the Answer (Score:2)
I asked Google... (Score:2)
...The question "how many bones are there in the human body?"
In a poorly formatted answer embedded in a preview above a URL was this text:
"there are 206 in adults and up to 350 for infants"
I did not have to click on anything to read this text - it was just there.
Methinks we are already there.
I don't know... (Score:5, Funny)
I'm not sure I really want to trust a product by Wolfram and Hart. Seems like there is a possibility of some soul loss.
The ever-decreasing depth of human knowledge (Score:3, Insightful)
Tools like this are decreasing the general ability of the population to research - resulting in a deficit of 'comprehensive knowledge' on topics.
Yes, tools like search engines enhance our ability to retrieve information faster than written documents such as manuals, dictionaries, and fiction, but they do not - 100% of the time, or even 80% of the time - lead us directly to the answers to complex questions. We are still required, as human beings, to read material, digest it, and often arrive at an answer ourselves.
People will largely lose the ability to make (effective) decisions on their own, because the critical inputs for a good decision are usually both a broad and deep understanding of the topics at hand.
Think of what kind of impact this would have on the overall problem solving ability of a population. Problem solving is often largely qualified by a person's ability to get a good picture of what the problem is. What do we do when a person can simply ask complex questions where a wealth of experience was previously required? Sure, this allows people to move on to do other things, but...
When you make it so that your analytical people - the problem solvers and those who create new things - are made irrelevant by a technology, you as a society will stop evolving socially. No, it will not happen immediately. It will happen gradually, over the period of a generation. Consider the gap between the research abilities of a previous generation and those of people graduating college today. There is a substantial difference, and the ease with which information can be acquired today has had a lot to do with this shortcoming.
Re: (Score:3, Insightful)
I suspect it will be similar to the great cultural loss of the ability to memorize long narratives that was brought about by the invention of writing.
Re: (Score:2, Interesting)
Isn't a proof of the Riemann Hypothesis necessarily non-constructive? If so, a computer can't answer your question.
Re: (Score:2)
Isn't a proof of the Riemann Hypothesis necessarily non-constructive? If so, a computer can't answer your question.
A computer can answer the question if the answer is that the hypothesis is false. A proof would indeed be non-constructive, but a counterexample is perfectly constructible (though there are theorems demonstrating that it has to be potentially computationally infeasibly large -- at least on current hardware).
Re: (Score:2)
Please note: Wolfram did not promise computing that *correctly* answers questions. Tribute to Douglas Adams: Perhaps his next endeavor should involve providing the question that goes with the answer.
Makes sense now. Thanks.