
Wolfram Promises Computing That Answers Questions 369

An anonymous reader writes "Computer scientist Stephen Wolfram feels that he has put together at least the initial version of a computer that actually answers factual questions, a la Star Trek's ship computers. His version will be available as a Web-based application, Wolfram Alpha. What does this mean? Well, instead of returning links to pages that may (or may not) contain the answer to your question, Wolfram Alpha will respond with the actual answer. Just imagine typing in 'How many bones are in the human body?' and getting the answer." Right now, though the search entry field is in place, Alpha is not yet generally available -- only "to a few select individuals."
This discussion has been archived. No new comments can be posted.

Wolfram Promises Computing That Answers Questions

Comments Filter:
  • by SirLurksAlot ( 1169039 ) on Sunday March 08, 2009 @07:04PM (#27115411)

    Been there, done that. [ask.com]

    All that is old is new again.

  • A.I. (Score:3, Informative)

    by unlametheweak ( 1102159 ) on Sunday March 08, 2009 @07:04PM (#27115413)

    Google already does this. Type a question like "What is one plus one?" and you will get an answer. It's artificial intelligence.
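    [Editor's note: a toy sketch of the kind of word-arithmetic lookup the parent describes. This is not how Google's calculator actually works; the word and operator tables are invented for illustration.]

```python
import operator

# Minimal word-to-value tables (assumed for this toy example).
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
OPS = {"plus": operator.add, "minus": operator.sub, "times": operator.mul}

def answer(question):
    """Answer questions like 'What is one plus one?' by scanning for
    number words and a single operator word."""
    tokens = question.lower().rstrip("?").split()
    nums, op = [], None
    for t in tokens:
        if t in WORDS:
            nums.append(WORDS[t])
        elif t in OPS:
            op = OPS[t]
    return op(nums[0], nums[1])

print(answer("What is one plus one?"))  # -> 2
```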

  • Re:Nope. (Score:5, Informative)

    by captainboogerhead ( 228216 ) on Sunday March 08, 2009 @07:25PM (#27115585) Journal
    Actually, the original source, TechCrunch [techcrunch.com], not the dumbed-down linked article, discusses in much greater detail what Alpha is about.
  • like this? (Score:3, Informative)

    by cvd6262 ( 180823 ) on Sunday March 08, 2009 @07:31PM (#27115635)
  • Re:How many bones (Score:3, Informative)

    by linhares ( 1241614 ) on Sunday March 08, 2009 @08:01PM (#27115909)

    Wasn't this done already? answers.com, askjeeves.com (now ask.com)...

    The answer to your question is yes.

    There's also the pathetic Powerset [powerset.com], which was sold to Microsoft for 100 million bucks [venturebeat.com]. Very pleasing to see MS burning money on such hyped shit.

  • Re:Simple: (Score:2, Informative)

    by kbrasee ( 1379057 ) on Sunday March 08, 2009 @08:58PM (#27116379)
    Sorry, I try to be diligent about these things, but just chose a bad time to leave my mom's basement for a few minutes.
  • Re:Lojban (Score:3, Informative)

    by caffeinemessiah ( 918089 ) on Sunday March 08, 2009 @09:19PM (#27116549) Journal

    Firstly, you've got Goedel incompleteness to worry about (which stems from statements that are fundamentally ambiguous as to their interpretation, such as "this statement is false"). Secondly, languages are there for people to communicate with, and people seem to prefer ambiguity

    What exactly does Godel's theorem have to do with what you just said? The incompleteness theorem deals with axiomatized systems. This leads me to think that you might be confusing the popular meaning of "language" with the mathematical definition. People (at least normal people) do not communicate with mathematical languages.

  • by Mr_Blank ( 172031 ) on Sunday March 08, 2009 @11:32PM (#27117463) Journal

    Bicorns do exist. Napoleon's hat was a bicorn [answers.com].

    Sci-Tech Dictionary: bicorn
    (mathematics) A plane curve whose equation in Cartesian coordinates x and y is (x² + 2ay - a²)² = y²(a² - x²), where a is a constant.

    WordNet: bicorn
    The noun has one meaning: a cocked hat with the brim turned up to form two points
        Synonym: bicorne

    The adjective bicorn has one meaning: having two horns or horn-shaped parts
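    [Editor's note: the bicorn equation above can be spot-checked numerically. Setting x = 0 with a = 1 reduces it to 3y² - 4y + 1 = 0, so the points (0, 1) and (0, 1/3) should lie on the curve. A quick sketch:]

```python
def bicorn_residual(x, y, a=1.0):
    """LHS minus RHS of the bicorn's implicit equation
    (x^2 + 2ay - a^2)^2 = y^2 (a^2 - x^2); zero when (x, y) is on the curve."""
    return (x**2 + 2*a*y - a**2)**2 - y**2 * (a**2 - x**2)

# Both roots of the x = 0 slice lie on the curve; an arbitrary point does not.
for pt in [(0.0, 1.0), (0.0, 1.0 / 3.0), (0.5, 0.5)]:
    print(pt, abs(bicorn_residual(*pt)) < 1e-12)
```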

  • Re:Lojban (Score:2, Informative)

    by home-electro.com ( 1284676 ) on Monday March 09, 2009 @12:04AM (#27117727)

    Just a couple of days ago I wanted to figure out how crustaceans are related to other animals. (Don't ask...)
    I'd have to think for five minutes to formulate the question, instead of just typing 'crustecean', which I don't even know how to spell.

    On the rare occasions when I only need a factual answer, like how much an elephant weighs, the answer is more complex than a simple number.

    Anyway, after talking to a number of 'AI' voice recognition systems, I don't believe in machine intelligence in any form :)

  • by TheCrazyMonkey ( 1003596 ) on Monday March 09, 2009 @01:10AM (#27117993)

    Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is turing-complete. It isn't, and cannot be - not in this universe, anyway.

    Of course the brain is Turing complete. You can prove it the same way you prove any other machine is Turing complete: it has the ability to simulate a Turing machine. I can simulate a tape-driven Turing machine pretty damn easily with a sheet of paper and a pencil. I think you're confused as to what "Turing-complete" means. Solving the halting problem is not a requirement. In fact, you can prove that a Turing machine cannot solve the halting problem. So the brain's inability to do so doesn't have any bearing on whether it's Turing complete.
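    [Editor's note: the pencil-and-paper simulation the parent describes is exactly what a few lines of code can do too. A minimal sketch of a one-tape Turing machine simulator, with an invented example machine that flips every bit:]

```python
def run_tm(rules, tape, state="run", blank="_"):
    """Simulate a one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, +1}. The machine halts when no rule applies;
    the final tape contents are returned as a string."""
    cells = dict(enumerate(tape))
    head = 0
    while (state, cells.get(head, blank)) in rules:
        state, cells[head], move = rules[(state, cells.get(head, blank))]
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: invert every bit, halt at the first blank cell.
invert = {
    ("run", "0"): ("run", "1", +1),
    ("run", "1"): ("run", "0", +1),
}
print(run_tm(invert, "0110"))  # -> 1001
```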

  • Re:Lojban (Score:5, Informative)

    by znu ( 31198 ) <znu.public@gmail.com> on Monday March 09, 2009 @01:41AM (#27118141)

    The Chinese Room is misdirection, pure and simple. We're supposed to conclude that because the person in the room doesn't have the subjective experience of understanding Chinese, the system as a whole (the person, the data tables, the rules) doesn't "really" understand Chinese.

    But there's no logical reason to assume a specific part of the system should have a subjective experience of understanding something that the system as a whole understands. This becomes obvious if you follow the logic a few more steps. Do you believe each specific part of your brain subjectively experiences understanding? How about individual neurons? How about the atoms that comprise the neurons in your brain? If you don't believe these things have the subjective experience of understanding the things that your brain as a whole understands, then your brain is incapable of "really" understanding anything, according to the logic of the Chinese Room.

  • by Kleiba ( 929721 ) on Monday March 09, 2009 @04:53AM (#27118847)
    Question answering (QA) has been around as a research track for years, and quite a lot of effort has been spent in the field. See for instance http://trec.nist.gov/data/qa.html [nist.gov] - So, is the novelty in the story that someone is trying to make a business out of it? I doubt it, because even that has been tried before, most recently with powerset.com. Of course, I assume that the business model would be "getting bought by a search giant as soon as we can", and not creating an actual competitor to google and the likes.
  • Re:How many bones (Score:2, Informative)

    by Kaitnieks ( 823909 ) on Monday March 09, 2009 @05:39AM (#27119067)
    From Powerset: Q: how many bones are there in human body? A: There are 206 bones in the adult human body and about 270 in an infant.
  • by Thiez ( 1281866 ) on Monday March 09, 2009 @06:58AM (#27119397)

    > Also, until you can claim to solve the halting problem in real life (as opposed to a "theoretical device"), don't go around claiming that the brain is turing-complete. It isn't, and cannot be - not in this universe, anyway.

    The halting problem is undecidable over Turing machines. Claiming 'the brain is not turing-complete because it cannot solve the halting problem' makes no sense.
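    [Editor's note: the undecidability the parent mentions comes from a diagonal argument, which can be sketched directly in code. Given any claimed halting decider, you can construct a program it misjudges. The decider below is a deliberately trivial stand-in:]

```python
def defeat(halts):
    """Given any claimed halting decider halts(f) -> bool for zero-argument
    programs, build a program on which that decider must answer wrongly."""
    def g():
        if halts(g):
            while True:   # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately
    return g

# Take the decider that always answers "loops": its nemesis promptly halts,
# proving the decider wrong on at least one input. The same trap catches
# every candidate decider, which is the heart of the undecidability proof.
always_no = lambda f: False
nemesis = defeat(always_no)
print(always_no(nemesis), nemesis())  # decider says False ("loops"), yet it returns
```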

  • Re:like this? (Score:3, Informative)

    by argiedot ( 1035754 ) on Monday March 09, 2009 @07:57AM (#27119661) Homepage
    Not at all. Compare this [google.co.in] with asking START [mit.edu] the same question: "How far is Los Angeles from New York?"
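    [Editor's note: a question like "How far is Los Angeles from New York?" is computable rather than merely retrievable, which is the distinction the parent is drawing. A sketch using the haversine great-circle formula, with approximate city-centre coordinates assumed for illustration:]

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

# Approximate coordinates (assumed): Los Angeles and New York City.
la = (34.05, -118.24)
ny = (40.71, -74.01)
print(round(haversine_km(*la, *ny)), "km")  # roughly 3900-4000 km
```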
  • Re:Lojban (Score:3, Informative)

    by ultranova ( 717540 ) on Monday March 09, 2009 @10:49AM (#27121205)

    The "rules" in the box are part of the system, and I would claim that if it passes the test, the person+rules do demonstrate understanding.

    Well that was Turing's argument, and what the Chinese room is arguing against (or at least questioning).

    No, Turing's argument was that the question "can X really think or is it merely perfectly simulating thinking" is impossible to answer for any value of X other than yourself. For this reason, we have a polite convention of treating things which appear to think as really thinking, so simply extend this courtesy to any seemingly sentient computer we might ever produce and be done with it.

    If anything, the Chinese room and all other arguments from incredulity ("How could it think? It's just a room!") reinforce Turing's point about the pointlessness of such arguments.

    We have no evidence that human thought somehow transcends the model of executed rules anyway; at some level it is all chemistry and physics.

    We have no evidence that it is merely executed rules either, nor that chemistry and physics follow the same.

    We have no evidence... that the laws of physics exist? Lul wut?

    Gotta hand it to you, you certainly take your scepticism seriously ;).

    Searle's response to this was to replace the man in the box with the population of India, thereby allowing for much more processing power in a reasonable time.

    This doesn't help the main problem in Searle's philosophy, namely his assumption that "brains cause minds" as opposed to their functionality. In other words, Searle assumes that an algorithm being executed on arbitrary hardware isn't conscious, but that only a (biological) brain can be. He never once proves or shows any evidence for this assumption, yet without it one has no reason to assume that the Chinese room isn't conscious or doesn't "really" understand Chinese.

    The Chinese Room isn't a thought experiment, it's an argument along the lines of: "If an algorithm executing on an arbitrary platform can be a mind, then an algorithm being executed by someone by hand might be a mind, and that's incredible!"
