Software / Science

New Algorithm for Learning Languages

An anonymous reader writes "U.S. and Israeli researchers have developed a method for enabling a computer program to scan text in any of a number of languages, including English and Chinese, and autonomously and without previous information infer the underlying rules of grammar. The rules can then be used to generate new and meaningful sentences. The method also works for such data as sheet music or protein sequences."
  • just thought.. (Score:3, Interesting)

    by thegoogler ( 792786 ) on Wednesday August 31, 2005 @11:06PM (#13451517)
    What if this could be integrated into a small plugin for your browser (or any program) of choice, that would then generate its own dictionary in your language?

    That would probably help with the problem of having to choose between downloading a small, incomplete dictionary, a dictionary with errors, or a massive dictionary file.

    • Re:just thought.. (Score:5, Insightful)

      by Bogtha ( 906264 ) on Wednesday August 31, 2005 @11:49PM (#13451726)

      This algorithm works with sample data. Where is the sample data going to come from? If you have to download it, then that negates the whole point of using it. If you use what you see online, well, that's just ridiculous, for obvious reasons :).

      • Re:just thought.. (Score:2, Insightful)

        by Hikaru79 ( 832891 )
        This algorithm works with sample data. Where is the sample data going to come from? If you have to download it, then that negates the whole point of using it. If you use what you see online, well, that's just ridiculous, for obvious reasons :).

        It's going to come from large bodies of text that exist in multiple languages. Things like the Bible, the Constitution, et cetera. The whole point of this technology is that by drawing conclusions from those texts, the program infers the underlying rules of the lang
        • It's going to come from large bodies of text that exist in multiple languages.

          The parent to my comment was suggesting that this algorithm be used in lieu of a large dictionary download. I was pointing out that you'd have to download said "large bodies of text" to make it work, and so the whole exercise would be pointless.

          The whole point of this technology is that by drawing conclusions from those texts, the program infers the underlying rules of the language and can therefore translate other th

      • O(n^n^n...)????? (Score:4, Interesting)

        by mosel-saar-ruwer ( 732341 ) on Thursday September 01, 2005 @01:34AM (#13452083)

        From TFA: The algorithm discovers the patterns by repeatedly aligning sentences and looking for overlapping parts.

        If you take just a single string [of length n] and rotate it against itself in a search for matches, then you've got to do n^2 byte comparisons just to find all singleton matches, and then gosh only knows how many comparisons thereafter to find all contiguous stretches of matches.

        But if you were to take some set of embedded strings, and rotate them against a second set of global strings [where, in a worst case scenario, the set of embedded strings would consist of the set of all substrings of the set of global strings], then you would need to perform a staggeringly large [for all intents and purposes, infinite] number of byte comparisons.

        What did they do to shorten the total number of comparisons? [I've got some ideas of my own in that regard, but I'm curious as to their approach.]

        PS: Many languages are read backwards, and I assume they re-oriented those languages before feeding them to the algorithm [it would be damned impressive if the algorithm could learn the forwards grammar by reading backwards].

        • by psmears ( 629712 )

          If you take just a single string [of length n] and rotate it against itself in a search for matches, then you've got to do n^2 byte comparisons just to find all singleton matches,...

          No you don't :-)

          If you want to find all singleton matches, it's enough to sort the string into ascending order (order n.log(n)), and then scan through for adjacent matches (order n). For example, sorting "the cat sat on the mat" gives "cat mat on sat the the"—where the two "the"s are now adjacent and so easily discover
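
          A minimal sketch of that sort-then-scan idea (plain Python, nothing from TFA): sort the words once, then a single linear pass picks out the duplicates.

              # Find repeated words in O(n log n) instead of comparing every pair.
              def repeated_words(text):
                  words = sorted(text.split())               # O(n log n)
                  repeats = set()
                  for prev, cur in zip(words, words[1:]):    # one O(n) scan
                      if prev == cur:
                          repeats.add(cur)
                  return repeats

              print(repeated_words("the cat sat on the mat"))   # {'the'}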

    • Re:just thought.. (Score:5, Informative)

      by Mac Degger ( 576336 ) on Thursday September 01, 2005 @12:46AM (#13451931) Journal
      What they've developed is something which interprets grammar: the ruleset behind the organisation of building blocks, apparently building-block agnostic.

      A dictionary is just words. This algorithm can't assign meaning to the building blocks; it can only decide how and in what order the building blocks go together.
  • by HeLLFiRe1151 ( 743468 ) on Wednesday August 31, 2005 @11:07PM (#13451521)
    Their jobs be outsourced to computers.
  • by powerline22 ( 515356 ) * <thecapitalizt AT gmail DOT com> on Wednesday August 31, 2005 @11:08PM (#13451528) Homepage
    Google apparently has a system like this in their labs, and entered it into some national competition, where it pwned everyone else. Apparently, the system learned how to translate to/from Chinese extremely well, without any of the people working on the project knowing the language.
    • by Anonymous Coward
      Here's the link [blogspot.com] at Google's blog.
    • No they didn't (Score:5, Interesting)

      by Ogemaniac ( 841129 ) on Wednesday August 31, 2005 @11:53PM (#13451740)
      I played around with the Google translator for a while. I work in Japan and am half-way fluent. Google couldn't even turn my most basic Japanese emails into comprehensible English. Same is true for the other translation programs I have seen.

      I will believe this new program when I see it.

      Translation, especially from extremely different languages, is absurdly difficult. For example, I was out with a Japanese woman the other night, and she said "aitakatta". Literally translated, this means "wanted to meet". Translated into native English, it means "I really wanted to see you tonight". It is going to take one hell of a computer program to figure that out from statistical BS. I barely could with my enormous meat-computer and a whole lot of knowledge of the language.
      • Re:No they didn't (Score:3, Interesting)

        by lawpoop ( 604919 )
        The example you are using is from conversation, which contains a lot of mutually shared assumptions and information. Take this example from Steven Pinker:

        "I'm leaving you."

        "Who is she?"

        However, in written text, where the author can assume that the reader brings no shared assumptions, nor can the author rely on any feedback, 'speakers' usually do a good job of including all necessary information in one way or another -- especially in texts meant to convince or promote a particular viewpoint. I'll bet thes

        • by mbius ( 890083 )
          It works the other way too:

          "I'm leaving you."

          What?

          "I'm leaving you, Alice."

          I don't understand what you're trying to do.

          "I've met someone."

          What do you mean 'met'?

          "Look...just read the pamphlet."

          I don't have the pamphlet.

          "I have to go."

          Which way do you want to go?

          "Uh...west."

          You would need a machete to head further west.



          I can't tell you how many of my break-ups have ended with needing a machete.
      • Re:No they didn't (Score:5, Informative)

        by superpulpsicle ( 533373 ) on Thursday September 01, 2005 @12:25AM (#13451855)
        Try this free website out. http://www.freetranslation.com/ [freetranslation.com]

        I know it is fairly accurate because I have fooled my Spanish-speaking friends once in an IM conversation. I told them I learned Spanish via hypnosis and basically just copy/pasted everything Spanish into IM. The conversation went on for like 15 minutes full Spanish before I told them I was using the website. They were pissing their pants.

        • by Mostly a lurker ( 634878 ) on Thursday September 01, 2005 @02:34AM (#13452229)
          I know it is fairly accurate because I have fooled my Spanish-speaking friends once in an IM conversation. I told them I learned Spanish via hypnosis and basically just copy/pasted everything Spanish into IM. The conversation went on for like 15 minutes full Spanish before I told them I was using the website. They were pissing their pants.
          English to German produces:
          Ich weiß, dass es ziemlich genau ist, weil ich mein Spanisch getäuscht habe, Freunde einmal in einer IM Konversation zu sprechen. Ich habe sie erzählt, dass ich Spanisch über Hypnose und im Grunde nur Kopie gelernt habe/hat eingefügt alles Spanisch in IM. Die Konversation ist weitergegangen für wie 15 Minuten volles Spanisch, bevor ich sie erzählt habe, dass ich die Website benutzte. Sie pissten ihre Hose
          Then, German to English:
          I know that it rather exactly is, because I deceived my Spanish to speak friends once in one IN THE conversation. I told it, learned would have inserted that I Spanish over hypnosis and in the reason only copy all Spanish in IN THAT. The conversation is gone on for Spanish full like 15 minutes before I told it, that I the websites used. You pissten its pair of pants
          My conclusion is that there is still a place for human translators.
      • Or was it "chinko wo nametakatta"? It's just as easy for me to believe, you hot Slashdot nerd, you.

        Being more serious, how do you think humans learn the rudiments of language? It's pattern analysis, i.e. precisely the technique this algorithm tries to replicate. It is true that the algorithm won't then progress onto the next stage, which is using that rudimentary grasp of the language to be taught its finer points, but if you genuinely doubt the capacity of this method to produce an understanding of
        • Re:No they didn't (Score:3, Interesting)

          by krunk4ever ( 856261 )
          Being more serious, how do you think humans learn the rudiments of language? It's pattern analysis, i.e. precisely the technique this algorithm tries to replicate. It is true that the algorithm won't then progress onto the next stage, which is using that rudimentary grasp of the language to be taught its finer points, but if you genuinely doubt the capacity of this method to produce an understanding of language you are contesting the experiences of every human on the planet.

          There's one flaw in your analysis
      • I played around with the Google translator for a while. I work in Japan and am half-way fluent. Google couldn't even turn my most basic Japanese emails into comprehensible English. Same is true for the other translation programs I have seen.

        You haven't seen the Google translator he's talking about. It isn't public yet, I don't believe.

        Here [outer-court.com] was the original article on it.

        Old: "Alpine white new presence tape registered for coffee confirms Laden"
        New: "The White House Confirmed the Existence of a new Bin Laden
      • Re:No they didn't (Score:4, Interesting)

        by burns210 ( 572621 ) <maburns@gmail.com> on Thursday September 01, 2005 @12:49AM (#13451939) Homepage Journal
        There was a program that tried to use the language of Esperanto (a made-up language designed specifically to be very consistent and guessable with regards to how syntax and words are used, very easy to learn and understand quickly) to be a middleman for translation.

        The idea being that you take any input language, Japanese for instance, and get a working Japanese-to-Esperanto translator. Since Esperanto is so consistent and reliable in how it is designed, this should be easier to build than a straight Japanese-to-English translator.

        To finish, you write an Esperanto-to-English translator. By leveraging the consistent language of Esperanto, researchers thought they could write a true universal translator of sorts.

        Don't know what ever came of it, but it was an interesting idea.
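
        For what it's worth, the arithmetic behind a pivot language is easy to sketch (a toy in Python; the little word tables are invented, and real translation is obviously not word-for-word lookup). The point is that with a pivot you maintain two tables per language instead of one per language pair -- 2N translators rather than N*(N-1):

            # Toy pivot translation through an intermediate language.
            ja_to_eo = {"neko": "kato", "inu": "hundo"}    # made-up mini-dictionaries
            eo_to_en = {"kato": "cat", "hundo": "dog"}

            def translate(word, into_pivot, out_of_pivot):
                # Compose the two lookups; fall back to the original word if unknown.
                return out_of_pivot.get(into_pivot.get(word, word), word)

            print(translate("neko", ja_to_eo, eo_to_en))   # cat
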
      • by Sycraft-fu ( 314770 ) on Thursday September 01, 2005 @01:27AM (#13452063)
        It's called pragmatics. Somewhat oversimplified, it's the study of how context affects meaning: figuring out what we really mean, as opposed to what we say.

        For example, a classical Pragmatics scenario:

        John is interested in a co-worker, Anna, but is shy and doesn't want to ask her out if she's taken. He asks his friend Dave if he knows whether Anna is available, to which Dave replies "Anna has two kids."

        Now, taken literally, Dave did not answer John's question. What he literally said is that Anna has at least two children, and presumably exactly two children. That says nothing of her availability for dating. However, there's nobody who reads that scenario who doesn't get what Dave actually meant to communicate: that Anna is married, with children.

        So that's a major problem computers hit when trying to really understand natural language. You can write a set of rules that completely describes all the syntax and grammar. However, that doesn't do it; it doesn't get you to meaning, because meaning occurs at a higher level than that. Even when we are speaking literally and directly, there's still a whole lot of context that comes into play. Since we are quite often speaking at least partially indirectly, it gets to be a real mess.

        Your example is a great one of just how bad it gets between languages. The literal meaning in Japanese was not the same as the intended meaning. So first you need to decode that; however, even if you know that, a literal translation of the intended meaning may not come out right in another language. To really translate well you need to be able to decode the intended meaning of a literal phrase, translate that into an appropriate meaning in the other language, and then encode that in a phrase that conveys that intended meaning accurately, and in the appropriate way.

        It's a bitch, and not something computers are even near capable of.
        • I'd say this is the first step to it though. Let's forget about natural language for a second and look at computer algebra systems, proof generators, etc. How is the inference that you talk about any different from a computerized proof system proving something based on bits of information it has stored away? I think it's pretty similar really, except for the part about knowing what thing you want to prove/confirm.

          So how does that sort of thing work? Well, in mathematics you can have something like y=f(x) and
    • by spisska ( 796395 ) on Thursday September 01, 2005 @12:19AM (#13451834)

      IIRC, Google's translator works from a source of documents from the UN. By cross-referencing the same set of documents in all kinds of different languages, it is able to do a pretty solid translation built on the work of goodness knows how many professional translators.

      What is a little more confusing to me is how machine translation can deal with finer points in language, like different words in a target language where the source language has only one. English for example has the word "to know" but many languages use different words depending on whether it is a thing or a person that is known. Or words that relate to the same physical object but carry very different cultural connotations -- the word for female dog is not derogatory in every language, for example, but some other animals can be extremely profane depending on who you talk to.

      Or situations where two entirely different real-world concepts mean similar things in their respective language -- in English, for example, you're up shit creek, but in Slavic languages you're in the pussy.

      I've done translation work before (Slovak -> English), and there's much more going on than differences in words and grammar. There are whole conceptual frameworks in languages that just don't translate, and this is frustrating for anyone learning a language, let alone trying to translate. English is very precise (when used as directed) in matters of time and sequence -- we have more than 20 verb tenses where most languages get away with three.

      Consider this:

      I was having breakfast when my sister, whom I hadn't seen in five years, called and asked if I was going to the county fair this weekend. I told her I wasn't because I'm having the painters come on Saturday. They'll have finished by 5:00, I told her, so we can get together afterwards.

      These three sentences use six different tenses: past continuous, past perfect, past simple, present continuous, future perfect, and present simple, and are further complicated by the fact that you have past tenses referring to the future, present tenses referring to the future, and the wonderful future perfect tense that refers to something that will be in the past from an arbitrary future perspective, but which hasn't actually happened yet. Still following?

      On the other hand, English is much less precise in things like prepositions and objects, and utterly inexplicable when it comes to things like articles, phrasal verbs, and required word order -- try explaining why:

      I'll pick you up after work

      I'll pick the kids up after work

      I'll pick up the kids after work

      are all OK, but

      I'll pick up you after work

      is not.

      Machine translation will be a wonderful thing for a lot of reasons, but because of these kinds of differences in languages, it will be limited to certain types of writing. You may be able to get a computer to translate the words of Shakespeare, but a rose, by whatever name, is not equally sweet in every language.
      • but

        I'll pick up you after work

        is not.


        It can be, depending on context or emphasis. "I'll pick up the kids after lunch. I'll pick up you after work."
      • by ericbg05 ( 808406 ) on Thursday September 01, 2005 @02:27AM (#13452213)
        I've done translation work before (Slovak -> English), and there's much more going on than differences in words and grammar. There are whole conceptual frameworks in languages that just don't translate, and this is frustrating for anyone learning a language, let alone trying to translate.

        Yes! I'd have thrown a mod point at you just for this paragraph if I could.

        English is very precise (when used as directed) in matters of time and sequence -- we have more than 20 verb tenses where most languages get away with three.

        Not really. Firstly, English only has two or three tenses. (Depending upon which linguist you ask, English either has a past/non-past distinction or past/present/future distinctions. See [1], [2]. The general consensus seems to be in favor of the former, although I humbly disagree with the general consensus.) It maintains a variety of aspect [wikipedia.org] distinctions (perfective vs imperfective, habitual vs continuous, nonprogressive vs progressive). See [3]. Its verbs also interact with modality [wikipedia.org], albeit slightly less strongly.

        It's a very common mistake to count the combinations of tense, aspect, and modality in a language and arrive at some astronomical number of "tenses". It's an even more common mistake (for native English speakers, anyway) to think that English is special or different or strange compared to other languages. In most cases, it's not -- especially when compared with other Indo-European languages.

        Secondly, and more interestingly IMHO, most languages do not have three distinct tenses. The most common cases are either to have a future/non-future distinction or a past/non-past distinction. In any case, the future tense, if it exists, is normally derived from modal or aspectual markers and is diachronically weak (which is linguist-babble meaning "future tense forms don't stick around for very long"). See [3].

        English is a perfect example: will, of course, used to refer to the agent's desire (his or her will) to do something. Only recently has it shifted to have a more temporal sense, and it still maintains some of its modal flavor. In fact, the least marked way of making the future (in the US, at least) is to use either gonna or a present progressive form: I'm having dinner with my boss tonight. I'm gonna ask him for a raise. See Comrie [1] again.

        So as not to be anglo-centric, I'll give another example. Spanish has three widespread means of forming the future tense. Two of these are periphrastic and are exemplified by he de cantar 'I've gotta sing' and voy a cantar 'I'm gonna sing'. The last is the synthetic form, cantaré 'I'll sing'.

        Most high school or college Spanish teachers would tell you that the "pure" future is cantaré. Actually, it's historically derived from the phrase cantar he 'I have to sing' (from Latin cantáre habeo), and is being displaced by the other two forms all across the Spanish-speaking world. I'm told, for example, that cantaré has been largely lost in Argentina and southern Chile (see [4]).

        In any case, the parent's main point still holds. It's a bitch to deal with cross-linguistic differences in major semantic systems computationally. But good lord, it's fun to try. :)

        References:

        1. Comrie, Bernard. Tense. [amazon.com] Cambridge, UK: Cambridge University Press, 1985.
        2. Davidsen-Nielsen, Niels. "Has English a Future?" Acta Linguistica Hafniensia 21 (1987): 5-20.
        3. Frawley, William.
  • SCIgen (Score:5, Interesting)

    by OverlordQ ( 264228 ) on Wednesday August 31, 2005 @11:09PM (#13451533) Journal
    SCIgen [mit.edu] anyone?
  • PDF of paper (Score:5, Informative)

    by mattjb0010 ( 724744 ) on Wednesday August 31, 2005 @11:10PM (#13451536) Homepage
    Paper here [pnas.org] for those who have PNAS access.
    • by dmaduram ( 790744 ) on Wednesday August 31, 2005 @11:33PM (#13451664) Homepage
      Unsupervised learning of natural languages

      Zach Solan, David Horn, Eytan Ruppin and Shimon Edelman
      School of Physics and Astronomy and School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel; and Department of Psychology, Cornell University, Ithaca, NY 14853

      We address the problem, fundamental to linguistics, bioinformatics, and certain other disciplines, of using corpora of raw symbolic sequential data to infer underlying rules that govern their production. Given a corpus of strings (such as text, transcribed speech, chromosome or protein sequence data, sheet music, etc.), our unsupervised algorithm recursively distills from it hierarchically structured patterns. The ADIOS (automatic distillation of structure) algorithm relies on a statistical method for pattern extraction and on structured generalization, two processes that have been implicated in language acquisition. It has been evaluated on artificial context-free grammars with thousands of rules, on natural languages as diverse as English and Chinese, and on protein data correlating sequence with function. This unsupervised algorithm is capable of learning complex syntax, generating grammatical novel sentences, and proving useful in other fields that call for structure discovery from raw data, such as bioinformatics.

      Many types of sequential symbolic data possess structure that is (i) hierarchical and (ii) context-sensitive. Natural-language text and transcribed speech are prime examples of such data: a corpus of language consists of sentences defined over a finite lexicon of symbols such as words. Linguists traditionally analyze the sentences into recursively structured phrasal constituents (1); at the same time, a distributional analysis of partially aligned sentential contexts (2) reveals in the lexicon clusters that are said to correspond to various syntactic categories (such as nouns or verbs). Such structure, however, is not limited to the natural languages; recurring motifs are found, on a level of description that is common to all life on earth, in the base sequences of DNA that constitute the genome. We introduce an unsupervised algorithm that discovers hierarchical structure in any sequence data, on the basis of the minimal assumption that the corpus at hand contains partially overlapping strings at multiple levels of organization. In the linguistic domain, our algorithm has been successfully tested both on artificial-grammar output and on natural-language corpora such as ATIS (3), CHILDES (4), and the Bible (5). In bioinformatics, the algorithm has been shown to extract from protein sequences syntactic structures that are highly correlated with the functional properties of these proteins.

      The ADIOS Algorithm for Grammar-Like Rule Induction

      In a machine learning paradigm for grammar induction, a teacher produces a sequence of strings generated by a grammar G0, and a learner uses the resulting corpus to construct a grammar G, aiming to approximate G0 in some sense (6). Recent evidence suggests that natural language acquisition involves both statistical computation (e.g., in speech segmentation) and rule-like algebraic processes (e.g., in structured generalization) (7-11). Modern computational approaches to grammar induction integrate statistical and rule-based methods (12, 13). Statistical information that can be learned along with the rules may be Markov (14) or variable-order Markov (15) structure for finite state (16) grammars, in which case the EM algorithm can be used to maximize the likelihood of the observed data. Likewise, stochastic annotation for context-free grammars (CFGs) can be learned by using methods such as the Inside-Outside algorithm (14, 17).

      We have developed a method that, like some of those just mentioned, combines statistics and rules: our algorithm, ADIOS (for automatic distillation of structure) uses statistical information present in raw sequential data to identify significant segments and to distill rule-like regularities that support structured generalization. Unlike
    • by ksw2 ( 520093 ) <obeyeater&gmail,com> on Wednesday August 31, 2005 @11:36PM (#13451673) Homepage
      Paper here for those who have PNAS access.

      HEH! funniest meant-to-be-serious acronym ever.

    • Re:PDF of paper (Score:3, Informative)

      by downbad ( 793562 )
      The project also has a website [tau.ac.il] where you can download crippled implementations of the algorithm for Linux and Cygwin.
    • Better link for PDF (Score:2, Informative)

      by Anonymous Coward

      PNAS wants you to subscribe to download the PDF.

      Or you could just go to the authors' page and download it for free: http://www.cs.tau.ac.il/~ruppin/pnas_adios.pdf [tau.ac.il]

  • Woah (Score:4, Funny)

    by SpartanVII ( 838669 ) on Wednesday August 31, 2005 @11:11PM (#13451540)
    Imagine if the editors started using this, what would everyone have to bitch about on Slashdot?
  • Noam Chomsky (Score:2, Informative)

    This is a perfect opportunity to remind everyone that it's Chomsky's contribution to linguistics which enabled this amazing (if true) achievement. For those of you who don't know Chomsky, he is the father of modern linguistics. Many would also know him as a political activist. A very amazing character. http://www.sk.com.br/sk-chom.html [sk.com.br]
    • What will be really interesting will be when it's exposed to actual natural languages as used by actual, normal human beings (as opposed to pedants and linguists).

      Many English speakers (and writers) appear to actively avoid using the actual *official* rules of English grammar (and I'm sure this is true of other languages and their native speakers too).

      I've always assumed that natural language comprehension (as it happens in the human brain) is mostly massively parallel guesswork based on context since (hav
      • Presumably, this 'gadget' will barf if there really are no rules in such natural language usage...


        There must be some rules in natural language, otherwise how would anyone be able to understand what anyone else was saying? The rules used may not be the "official" rules of the language, and they may not even be clearly/consciously understood by the speakers/listeners themselves, but that doesn't mean they aren't rules.

      • Re:Noam Chomsky (Score:5, Insightful)

        by hunterx11 ( 778171 ) <hunterx11@NOSpAm.gmail.com> on Thursday September 01, 2005 @12:29AM (#13451865) Homepage Journal
        Linguistics has nothing to do with prescriptive grammar, except perhaps studying what influence it has on language. Something like "don't split infinitives" is not a rule in linguistics. Something like "size descriptors come before color descriptors in English" is a rule, because it's how people actually speak. Incidentally, most people are not even aware of these rules in their native language, despite obviously having mastery over them.

        If there were no rules, I could write a post using random letters for random sounds in a random order, or just using a bunch of non-letters. That wouldn't convey anything. Saying "I'm writing on slashdot" is more effective than writing "(*&$@(&^$)(#*$&"

    • Re:Noam Chomsky (Score:5, Insightful)

      by venicebeach ( 702856 ) on Thursday September 01, 2005 @12:18AM (#13451829) Homepage Journal
      Perhaps a linguist could weigh in on this, but it seems to me that this kind of research is quite contrary to the Chomskian view of linguistics.

      Instead of a language module with specialized abilities tuned to learn rule-based grammar, we have an unsupervised learning system that has surmised the grammar of the language merely from the patterns inherent in the data it is given. That a system can do this is evidence against the notion that an innate grammar module in the brain is necessary for language.

      • IANAL either, but I had the same reaction.
      • Re:Noam Chomsky (Score:3, Interesting)

        Actually, this fits very tidily in a Chomskian context. The program has an internal, predetermined notion of "what a grammar looks like" (i.e. a class of allowable grammars sharing certain properties), and adapts that to the source text. The way all this is presented makes it seem like unsupervised learning that can find any pattern, but the best you can hope to do with a method like this is capture an arbitrary (possibly probabilistic) context free grammar (CFG).

        Even then, Gold showed a long, long time a
      • Re:Noam Chomsky (Score:3, Interesting)

        This won't disprove Chomsky's theories; at most it will serve as evidence that language can be learned through statistical means. The reason it won't disprove anything is that we're ultimately interested in the way that *humans* learn language. Whether or not it's possible to learn a language solely through statistical means doesn't change the fact of the matter for humans, who may or may not have a genetic endowment for learning language. It's entirely possible that it's possible in principle to learn
    • Re:Noam Chomsky (Score:5, Insightful)

      by SparksMcGee ( 812424 ) on Thursday September 01, 2005 @12:53AM (#13451954)
      I took a linguistics class this past year with a professor who absolutely disagreed with the Chomskyan view of linguistics (though she did acknowledge that he had contributed a great deal to the field). Some of the arguments against Chomsky include objections to the Chomskyan view of "universal grammar": essentially, that a series of neural "switches" determines what language a person knows, and that these in turn are purely grammatical in nature (the lexicon of different languages qualifying as "superficial", in and of itself a somewhat tenable argument). While this holds reasonably well for English and closely related languages (English grammar in particular depends a tremendous amount upon word order and syntax, and thus lends itself well to this sort of computational model), in many languages the lines between nominally "superficial" categories, e.g. phonology, lexicon, and syntax, become blurred, especially in case languages, for instance. Whereas you can break down the grammatical elements of an English sentence fairly easily into "verb phrases", "noun phrases", and so on, this is largely because of English syntactical conventions. When a system of prefixes and suffixes can turn a base morpheme from a noun phrase into a verb phrase or any of various parts of speech, the kinds of categories to which English morphemes and phrases lend themselves become much harder to apply. Add to this the fact that there exist languages (e.g. Chinese) in which categories that are grammatically superficial in English, like phonology, become syntactically and grammatically significant, and the sheer variety of linguistic grammars either seriously undermines the theory in general or forces upon one the Socratic assumption that everyone knows every language and every possible grammar from birth, and simply needs to be exposed to the rules of their native language and to pick up superficialities like the lexicon to become a fluent speaker. It's not all complete nonsense, but if it were truly correct, then presumably computerized translation software (with the aid of large dictionary files for lexicons) would have been perfected some time ago.


      Sorry about the rant, but like I said, my prof did *not* like the Chomskyan view of linguistics.

      Oh, and as far as the notion of the "language module" goes, it might be premature to call it a module, but there *is* neurophysiological evidence to suggest that humans are physically predisposed towards learning language from birth, so that much at the very least is tenable.

  • by OO7david ( 159677 ) on Wednesday August 31, 2005 @11:12PM (#13451552) Homepage Journal
    IAALinguist doing computational things and my BA focused mainly on syntax and language acquisition, so here're my thoughts on the matter.

    It's not going to be right. The algorithm is stated as being statistically based, which, while similar to the way children learn languages, is not exactly it. Children learn by hearing correct native language from their parents, teachers, friends, etc. The statistics come in when children produce utterances that either do not conform to the speech they hear or when people correct them. However, statistics does not come in at all with what they hear.

    With respect to the algorithm learning the underlying grammar of a language, I am dubious enough to call it a grand, untrue claim. Basically all modern views of syntax are unscientific and we're not going to get anywhere until Chomsky dies. Think about the word "do" in English. No view of syntax describes where that comes from. Rather, languages are shoehorned into our constructs.

    So, either they're using a flawed view of syntax or they have a new view of syntax and for some reason aren't releasing it in any linguistics journal as far as I know.
    • However, statistics does not come in at all with what they hear.

      Utterance in pattern A is heard more often than utterance in pattern B; utterances in patterns C and D are not heard at all. How is that not statistics?

      • Insofar as only utterance A is heard. A kid will always hear "Are you hungry" but never "Am you hungry" or "Are he hungry".

        Native speakers by definition speak correctly, and that is all the child is hearing.


    • However, have you read the paper? Looked at the data? It seems a bit early for you to dismiss their conclusions based on a blog entry.
    • Basically all modern views of syntax are unscientific and we're not going to get anywhere until Chomsky dies.

      I really don't understand that. How are modern views of syntax unscientific? Also, if Chomsky is such an influence on linguistics, then maybe he's right about it. Aren't you essentially saying that we have no way of arguing with him so let's wait til he dies so he can't argue back? I would think the correct view should win out regardless of the speaker.

      Other than what I've studied in cogniti

      • by OO7david ( 159677 ) on Wednesday August 31, 2005 @11:55PM (#13451749) Homepage Journal
        There are in effect two parts to it:

        Chomsky is to linguistics as Freud is to psych. He had great ideas for the time (many still stand), and the science would be nowhere close to where it is without him. However, A) he's backed off a lot from supporting his own theories, and B) he's published papers contradicting his original ideas, so there is some question as to their veracity. Since so many linguistics undergrads hold him up as the pinnacle of syntax, none are really deviating drastically from him.

        WRT the unscientificness: to make his view fit English, there has to be "do-support", which basically means that when forming an interrogative, "do" just comes in to make things work, without any explanation. In other words, it is in our grammar, but our view of syntax does not account for it.
    • It's not just statistical. From the article (emphasis mine),

      ADIOS relies on a statistical method for pattern extraction and on structured generalization -- two processes that have been implicated in language acquisition.

      And while their paper is not being published in a linguistic journal, it is being published in the Proceedings of the National Academy of Sciences (PNAS, Vol. 102, No. 33), which is a well respected cross-discipline journal.

      Although I, along with you, am skeptical of this, it sounds like

      • Right, I think that it will be a fascinating read and will ultimately help the project I'm currently doing, but my claim is that if it is linguistic then I highly doubt that it will be fully correct given the flawed (IMO) assumptions.

        My argument is based solely upon this blog entry, and what it says doesn't quite seem to add up to me.
    • Almost all of modern NLP relies on statistical approaches.

      I'm not going to chime in and start a flame-war, but since your view is rather iconoclastic, I think it only fair to point this out to the Slashdot audience, who are probably not as informed on the topic as you or I.
    • IIRC, the part of Chomsky's theory that is relevant to this application is that universal grammar is a series of choices about grammar -- i.e. adjectives either come before or after nouns, there are or are not postpositions, etc. I think the actual 'choices' are more obscure, but I'm trying to make this understandable ;)

      According to the theory, children come with this universal grammar built-in to their mind (for some reason, Chomsky seems against genetic arguments, but good luck understanding his reasoni

    • by PurpleBob ( 63566 ) on Thursday September 01, 2005 @12:08AM (#13451794)
      You're right about Chomsky holding back linguistics. (There are all kinds of counterarguments against his Universal Grammar, but people defend it because Chomsky Is Always Right, and Chomsky himself defends it with vitriolic, circular arguments that sound alarmingly like he believes in intelligent design.)

      And I agree that this algorithm doesn't seem that it would be entirely successful in learning grammar. But this is not because it's statistical. I don't understand how you can look at something as complicated as the human brain and say "statistics does not come in at all".

      If this algorithm worked, then it could be statistical, symbolic, Chomskyan, or magic voodoo and I wouldn't care. There's no reason that computers have to do things the same way the brain does, and I doubt they'll have enough computational power to do so for a long time anyway.

      No, the flaws in this algorithm are that it is greedy (so a grammar rule it discovers can never be falsified by new evidence), and it seems not to discover recursive rules, which are a critical part of grammar. Perhaps it's learning a better approximation to a grammar than we've seen before, but it's not really doing the amazing, adaptive, recursive thing we call language.
  • Wow! (Score:5, Funny)

    by the_skywise ( 189793 ) on Wednesday August 31, 2005 @11:13PM (#13451557)
    They've rediscovered the Eliza program!

    Input: "For example, the sentences I would like to book a first-class flight to Chicago, I want to book a first-class flight to Boston and Book a first-class flight for me, please may give rise to the pattern book a first-class flight -- if this candidate pattern passes the novel statistical significance test that is the core of the algorithm."

    How does it feel to "book a first-class flight"?
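
    For the curious, the alignment step that quote describes can be sketched in a few lines of Python with difflib (just an illustration, not the authors' ADIOS code): align two sentences, keep the parts they share, and turn the parts that differ into a slot.

        from difflib import SequenceMatcher

        def shared_pattern(a, b):
            a, b = a.lower().split(), b.lower().split()
            out = []
            for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
                if tag == "equal":
                    out.extend(a[i1:i2])     # words both sentences share
                else:
                    out.append("<X>")        # differing span becomes a slot
            return " ".join(out)

        s1 = "I would like to book a first-class flight to Chicago"
        s2 = "I want to book a first-class flight to Boston"
        print(shared_pattern(s1, s2))
        # i <X> to book a first-class flight to <X>
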

  • by Tsaac ( 904191 ) on Wednesday August 31, 2005 @11:14PM (#13451562) Homepage
    If it's fed with a heap of decent grammar, what happens when it's then fed with bad grammar and spelling? Will it learn, and incorporate, the tripe or reject it? That's the sort of problem with natural language apps; it's quite hard to sort the good from the bad when it's learning. Take the MegaHAL library (http://megahal.alioth.debian.org/) for example. Although possibly not as complex, it does a decent job at learning, but when fed with rubbish it will output rubbish. I don't think it's the learning that will be the hard part, but rather the recognition of the good vs. the bad that will prove how good the system is.
    • The problem with this program is that you could input the most grammatically correct sentences you can into it, and it'll still spew out senseless garbage. For this to be of any worth, the computer will need to understand the meaning of each word, and how each meaning relates to what the other words in the sentence mean. And you can't teach a computer what something is just by putting words into it. Like if I tell the machine that mice squeak, it has to know what a squeak sounds like and what a mou
    • If the system works correctly (ie, it is really capable of learning language syntax), it will learn the "bad grammar" presented to it. Can you really expect an algorithm to automatically figure out the social conventions that mark one system of communication as "good" and one as "bad", and reject or correct "bad" ones?

      Actually it would be quite remarkable if this was possible, given that the reasons some dialects become privileged and others don't have nothing to do with the formal properties of those dial
    • If you take young children and expose them to rubbish for four or five years while they're learning to speak, they'll speak rubbish too. That's the problem with young children, they can't sort the good from the bad.

      But if you expose them to well-structured language, they'll learn to speak it, without being EXPLICITLY TAUGHT THE RULES. Which is exactly what this paper is about. Unsupervised natural language learning. That's what makes the system good. It's able to build equivalency classes of verbs, noun

  • I know we all feel like we've been screwed by the conspicuous lack of flying cars around these days, but at least some progress is being made on the Universal Translator front...
  • Or better yet, start feeding it images of crop circles that haven't been proven to be fakes (yet.)
    • Klingon has a simple grammar.

      How about Dolphinese? Research shows that dolphins seem to be able to scout and transfer information from one individual to his/her pod. If there's some grammar there, it would be a pretty good nut to crack.
  • Hieroglyphics? (Score:3, Interesting)

    by Hamster Of Death ( 413544 ) on Wednesday August 31, 2005 @11:21PM (#13451601)
    Can it decipher these things too?
  • A few years ago, thousands of posts like this were flooding alt.religion.scientology:

    His thirteenth unambiguity was to equate Chevy Meyer all my nations. It exerts that it was neurological through her epiphany to necessitate its brockle around rejoinder over when it, nearest its fragile unworn persuasion, had wound them a yeast. Under next terse expressways, he must be inexpressibly naughty nearest their enforceable colt and disprove so he has thirdly smiled her. Nor have you not secede beneath quite a swam

    • That's just a Markov Model [wikipedia.org] that "learned" from what looks like religious mumbo-jumbo in the first place.

      Markov models are perhaps the easiest language acquisition model to implement, but also one of the worst at coming up with valid speech or text.

      Interestingly, they do much, much better as recommender systems.
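
      A word-level Markov babbler of the sort that generates those posts fits in a few lines (a toy Python sketch; the one-line training text stands in for a real corpus):

          import random
          from collections import defaultdict

          def train(text):
              model = defaultdict(list)        # word -> observed next words
              words = text.split()
              for cur, nxt in zip(words, words[1:]):
                  model[cur].append(nxt)
              return model

          def babble(model, start, length=12):
              out = [start]
              for _ in range(length):
                  choices = model.get(out[-1])
                  if not choices:
                      break
                  out.append(random.choice(choices))
              return " ".join(out)

          model = train("his unambiguity was to equate all my nations and to necessitate its rejoinder")
          print(babble(model, "to"))
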
  • Actually, what is new about this is the unsupervised approach. Graph parsers are quite popular in Natural Language Processing applications, but they use supervised methods.
  • Dupe (Score:3, Informative)

    by fsterman ( 519061 ) on Wednesday August 31, 2005 @11:27PM (#13451629) Homepage
    We just had an article on this. There was a shootout by NIST. At least I think so; the /. search engine blows, hard. Either way, here's a link [nist.gov] to the tests. This is one that wasn't covered by the tests, so I guess it's front-page news.
  • Programming Language (Score:2, Interesting)

    by jmlsteele ( 618769 )
    How long until we see something like this applied to programming languages?
  • Finally (Score:2, Interesting)

    by Trigulus ( 781481 )
    something that can make sense of the Voynich manuscript http://www.voynich.nu/ [voynich.nu]. They should have tested their system on it.
    • Re:Finally (Score:3, Insightful)

      Just because the program can extract grammar, it doesn't mean it can extract meaning. If I give you this sentence:

      Ov brug termat akti mak lejna trovterna.

      And tell you that "termat" and "lejna" are nouns, "akti mak" is a 'composite' verb, "brug" and "trovterna" are adjectives... it still doesn't say anything about the actual meaning.

  • by mwilli ( 725214 )
    Could this be integrated into a handheld device to be used as a universal translator, much like a hearing aid?

    Electronic babelfish anyone?

  • God loves you. God will burn you in hell for all eternity. God wants more foreskins.
  • How "intricate"? (Score:2, Insightful)

    by P0ldy ( 848358 )

    Our experiments show that it can acquire intricate structures from raw data, including transcripts of parents' speech directed at 2- or 3-year-olds. This may eventually help researchers understand how children, who learn language in a similar item-by-item fashion and with very little supervision, eventually master the full complexities of their native tongue."

    In addition to child-directed language, the algorithm has been tested on the full text of the Bible in several languages

    I hardly would consider

  • While working for a nutcase [slashdot.org], I spoke with Philip Resnik about his project of building a parallel corpus [http://www.umiacs.umd.edu/users/resnik/parallel/bible.html] as a tool to build a language translation system. This seems like the next logical step.
  • by t35t0r ( 751958 ) on Thursday September 01, 2005 @12:22AM (#13451848)
    In analyzing proteins, for example, the algorithm was able to extract from amino acid sequences patterns that were highly correlated with the functional properties of the proteins.

    NCBI BlastP [nih.gov] already does this for proteins. Similarities and rules for things can be found, but if the meaning of the sequence is not known, then what good is it? In the end you need to do experiments involving biology/biochemistry/structural biology to determine the function of a protein or nucleotide sequence. Furthermore, in language as well as in biology/chemistry, things which have similar vocabulary (chemical formulas) may in the end be structurally very different (enantiomers), which leads to vastly different functionality.
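
    For what it's worth, "patterns in amino acid sequences" can be illustrated with a toy motif scan (nothing to do with BLAST or the paper's actual method; the motif and the sequences below are invented):

        import re

        motif = re.compile(r"C..C.{10,20}C..C")    # hypothetical zinc-finger-like motif
        seqs = {
            "protA": "MKVCAQCLLWPERTYKKDDASCGHCNNA",
            "protB": "GGSCTTCAAAPLIWQRSTVVKCDDCKK",
        }
        for name, seq in seqs.items():
            m = motif.search(seq)
            print(name, m.group(0) if m else "no match")
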
  • Dolphins? (Score:2, Interesting)

    by Stripsurge ( 162174 )
    Seems like that'd be a good place to test the system out. While talking with extraterrestrials would be pretty awesome, having a chat with a dolphin would be pretty cool too. Remember: "The second most intelligent [species] were of course dolphins"
  • by 2Bits ( 167227 ) on Thursday September 01, 2005 @12:36AM (#13451898)
    - translate some posts on /. into comprehensible content
    - figure out that an article is a dupe and kill it before it even appears
    - RTFA for me and just give me a good summary (by the rate of articles posted here, there's probably not much to summarize either)
    - translate "IANAL" into something else that does not make me think of ANAL things
    - figure out that articles on Google and Apple are just speculations by some dude living in his (can't be her, for sure) parents' basement, and not really news worth posting
    - translate my suggestion that good hygiene is a good thing into something acceptable to the (kernel) hackers
    - understand that I'm just ranting, and that it should not take it personally.

  • by pugugly ( 152978 ) on Thursday September 01, 2005 @12:40AM (#13451908)
    Feed it the entries in the "obfuscated C" competition - if it works for that, it oughta work for anything.

    Pug
  • Finally! (Score:2, Funny)

    by Druox ( 911165 )
    Finally! Engrish for the masses!
  • Spam filter? (Score:3, Interesting)

    by goMac2500 ( 741295 ) on Thursday September 01, 2005 @01:00AM (#13451981)
    Could this be used to make a smarter spam filter?
  • grammar isn't enough (Score:5, Informative)

    by JoeBuck ( 7947 ) on Thursday September 01, 2005 @01:32AM (#13452081) Homepage
    The classic problem example is:
    • Time flies like an arrow.
    • Fruit flies like a banana.
    There are other, similar examples. Computer systems tend to deduce either that there's a type of insect called "time flies", or that the latter sentence refers to the aerodynamic properties of fruit.
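
    The ambiguity is easy to see by handing a tiny toy grammar to a chart parser (a sketch that assumes NLTK is installed; the grammar is invented for this example and allows exactly the two readings):

        import nltk

        grammar = nltk.CFG.fromstring("""
            S  -> NP VP
            NP -> N | N N | Det N
            VP -> V NP | V PP
            PP -> P NP
            Det -> 'a' | 'an'
            N  -> 'fruit' | 'flies' | 'time' | 'banana' | 'arrow'
            V  -> 'flies' | 'like'
            P  -> 'like'
        """)

        parser = nltk.ChartParser(grammar)
        for tree in parser.parse("fruit flies like a banana".split()):
            print(tree)   # one parse per reading: insect noun phrase vs. flying fruit
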
    • by g2devi ( 898503 )
      Even better. The meaning of words can flip back and forth depending on the ever widening context.

      * The clown threw a ball.

      (Probably a tennis ball or a basketball)

      * The clown threw a ball,....for charity.

      (Okay, sorry: a ball, as in a party.)

      * The clown threw a ball,....for charity...., and hit the target.

      (Okay, sorry again: the tennis ball hit the dunking target and someone fell in the water. Got it. We're at a carnival.)

      * The clown threw a ball,....for charity...., and hit the target....of 1 million dollars.

      (Scratch th
