Programming Science IT Technology

Computer Program Learns Baby Talk in Any Language

athloi writes "Researchers have created a computer program that learns to decode sounds from different languages in the same way that a baby does. The program should help shed new light on how people learn to talk, and it has already raised questions as to how much specific information about language is hard-wired into the brain."
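The article doesn't describe the algorithm in detail, but unsupervised sound-category learning of this kind is commonly modeled by fitting a mixture of Gaussians to acoustic measurements such as vowel formant frequencies. The sketch below illustrates that general technique; the formant data are fabricated and scikit-learn stands in for whatever the researchers actually used.

    # Toy sketch: discovering vowel categories from unlabeled acoustic tokens.
    # Assumption: each token is summarized by its first two formant
    # frequencies (F1, F2) in Hz. All data here are fabricated.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Unlabeled tokens scattered around three vowel-like targets
    # (roughly /i/, /a/, and /u/).
    tokens = np.vstack([
        rng.normal([300, 2300], [40, 120], size=(200, 2)),  # /i/-like
        rng.normal([700, 1200], [60, 100], size=(200, 2)),  # /a/-like
        rng.normal([350, 800], [40, 80], size=(200, 2)),    # /u/-like
    ])

    # The learner never sees labels; it only assumes some number of
    # Gaussian-shaped categories and fits them by expectation-maximization.
    learner = GaussianMixture(n_components=3, covariance_type="full",
                              random_state=0).fit(tokens)
    print("Learned category means (F1, F2):")
    print(learner.means_.round())

Note that telling the learner n_components=3 up front already smuggles in knowledge an infant doesn't have, which is exactly the objection the first comment below raises.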
  • by zobier ( 585066 ) <zobier@NosPam.zobier.net> on Wednesday July 25, 2007 @07:34PM (#19989889)

    It has already raised questions as to how much specific information about language is hard-wired into the brain.
    Really, I'm interested in how much specific information about language is hard-wired into this program. (One answer is sketched below.)
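    In a mixture-model sketch like the one above, that question has a concrete answer: the "hard-wired" part is the hypothesis space, i.e. which acoustic features are measured, what shape a category may take, and how many categories are allowed. A hypothetical continuation of the earlier sketch shows one way to stop wiring in the category count; the function name and the BIC criterion are illustrative choices, not anything from the article.

        # Hypothetical continuation of the sketch above. Instead of hard-wiring
        # the number of sound categories, let a model-selection score (BIC)
        # choose it from the data. The feature space and the Gaussian category
        # shape remain built-in assumptions.
        from sklearn.mixture import GaussianMixture

        def fit_unknown_category_count(tokens, max_categories=8):
            """Fit 1..max_categories mixtures and keep the lowest-BIC model."""
            candidates = [
                GaussianMixture(n_components=k, random_state=0).fit(tokens)
                for k in range(1, max_categories + 1)
            ]
            return min(candidates, key=lambda m: m.bic(tokens))

        # learner = fit_unknown_category_count(tokens)  # tokens from the sketch above
        # print(learner.n_components)  # inferred from data rather than wired in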
  • by Original Replica ( 908688 ) on Wednesday July 25, 2007 @07:55PM (#19990073) Journal
    And what about this "hard wired vs soft wired" stuff?

    I get so annoyed when people talk about "hardwired" like we have some kind of genetic memory. We have great genetic potential to learn languages when compared to other animals, but we don't come with linguistic firmware. Watching a baby "discover" that they are moving their arms and hands around makes me think we may have no firmware at all. Just lots of potential, and the spark of consciousness.
  • by CastrTroy ( 595695 ) on Wednesday July 25, 2007 @08:20PM (#19990263)
    Yes and no. On one hand, I remember hearing that babies have the potential to learn any language. Take a Chinese orphan, bring them to America, and they will learn English no problem, with no accent. All babies have the potential to learn any language (or many languages). On the other hand, my laugh sounds exactly like my dad's. Not surprising, until you find out that I didn't live with my dad and didn't really spend much time with him at all. Many of our mannerisms are also the same, like the way we walk, with one hand in a pocket. The resemblance between our personalities is uncanny considering we didn't live together. So I have to ask: how much is based on what we see, and how much on our genes? The old nature vs. nurture question.
  • No it won't. (Score:3, Interesting)

    by Aetuneo ( 1130295 ) on Wednesday July 25, 2007 @10:25PM (#19991539) Homepage
    This will not shed any light on how people learn to talk. It will, however, shed light on how the programmers think people learn to talk. If you design something, it will work the way you expect it to (hopefully, anyway). Is that so hard to understand?
  • by virgil_disgr4ce ( 909068 ) on Wednesday July 25, 2007 @11:16PM (#19992011) Homepage
    Thank you. This claim in the Reuters article blows me away: "They said the finding casts doubt on theories that babies are born knowing all the possible sounds in all of the world's languages."

    What modern linguist or cognitive linguist actually thinks this??? It boggles my mind that the people fighting this tiresome "language war" are so one-sided, in either direction. Anyone seriously interested in where this field's research is heading might look into Jerome Feldman's work [amazon.com] on the Neural Theory of Language [berkeley.edu] at UC Berkeley. It's still in its early stages, but (as far as I know) he's the first to offer a genuine "bridging theory" between neuroscience and linguistics, while building on the excellent work of many others, notably George Lakoff [wikipedia.org].

    It's a breath of fresh air to deal with real research for once instead of armchair science (so sorry, Chomsky).
  • by MOBE2001 ( 263700 ) on Wednesday July 25, 2007 @11:25PM (#19992081) Homepage Journal
    "[S]pecific information about language is hard-wired into the brain." is what Chomsky's been saying all along. I think he's probably right about the other things he says too.

    Chomsky's argument is that there are specific areas of the brain (Broca's and Wernicke's areas) that are dedicated to language and are prewired for grammar. Truth is, people who are born unable to speak use other areas of their cortices to learn to communicate in sign language. I see no fundamental difference between learning motor skills (such as walking, running, reaching, and grasping) and learning how to speak. Every type of motor learning has to do with generating precisely timed sequences of motor commands. It is all in the timing. It just so happens that Broca's area is genetically prewired to control the mouth, tongue, throat, and lung muscles. It's still motor learning. No special wiring is needed other than what is available for other types of motor behavior. One man's opinion.
  • by Anonymous Coward on Wednesday July 25, 2007 @11:33PM (#19992137)
    - You're referring to a linguistic property known as Universal Grammar, at first at least.

    - Deaf children can still learn speech by feeling the vibrations of sound and watching mouth movements. This accounts for the muteness and slurred speech of many deaf individuals: beyond matching mouth movements, they lack the ability to monitor their own speech. When people become deaf after acquiring these syllables and sounds, their speech is much less affected.

    Listen to a deaf person with slurred speech speak: they move their mouth significantly more than anyone with regular hearing would, and they have trouble forming syllables.

    - This is a COMPUTER we're talking about here. It won't learn the way a baby would, although once a word is acquired, I suppose it can refer to a lexicon/dictionary in the computer to make associations, something a baby definitely doesn't have. That's beside the point, though; they want it to acquire audible speech sounds/patterns. But I do get your point and agree: this isn't exactly learning a language.
  • by Nazlfrag ( 1035012 ) on Thursday July 26, 2007 @12:06AM (#19992419) Journal
    There's also the corollary that certain language sounds, especially 'mama', 'papa', and 'dada', evolved from baby speech, a sort of wishful thinking on the parents' part. It's not just babies imitating sounds; adults also ascribe meaning to these most probably meaningless sounds from their baby.
  • by Skrynesaver ( 994435 ) on Thursday July 26, 2007 @02:03AM (#19993141) Homepage
    You are totally correct, as far as my limited knowledge of language development goes; I recently read The Language Instinct by Steven Pinker.

    It would appear that Chomsky et al. argue that there is a "grammar engine" hard-wired into the mind, which assimilates the local grammar until about the age of seven, when the brain reorders itself. Pinker presents interesting case studies of pidgin languages, where several different languages are forced together: the first generation develops a common vocabulary, but children born into this culture develop the formal grammar. Worth a read.

  • IAAL, too (Score:5, Interesting)

    by Estanislao Martínez ( 203477 ) on Thursday July 26, 2007 @02:30AM (#19993261) Homepage

    IAAL (I am a linguist), and I believe you are correct. Language is a colligation of sound and meaning, but this technology merely distinguishes sounds: it is a vastly simplified model, not of how children acquire language, but of how children pick up phones. The phone is the most basic unit of the physical (sound) aspect of language, so if this technology is to have any use at all, it has a very long way to go.

    IAAL, and although I am not a child language specialist, I will say one thing: children make plenty of meaningless sound before they start making sense, and more interestingly, they become able to tell their future native language apart from other languages quicker than they become able to understand it. (And I'll even be so daring as to suggest that it simply has to be this way; you need to be able to tell signal from noise before you can decode a signal.)

    I also think that by calling this a "technology," you're fundamentally misunderstanding it. It's a computer program being used as a test of a model of phonological learning.

    How about this: as soon as their program can distinguish allophones, I will be impressed.

    I think you've got it exactly backwards here. The whole point of this is to demonstrate a model that loses the ability to tell allophones apart, i.e., one that makes the jump from perceiving a speech stream as a continuous sequence of sounds laid out on a continuous acoustic space to perceiving it as a sequence of discretely distinct segments. (A toy illustration of such category merging follows this comment.)

    Of course, a major disclaimer: I haven't seen the actual research, so I don't know to what extent they've met these goals.
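    For readers wondering what "losing the ability to tell allophones apart" could look like computationally, here is a toy continuation of the earlier mixture sketches. The data are fabricated: two acoustically close "allophones" that never contrast, plus one clearly distinct phoneme. A learner that selects its category count by BIC will typically cover both allophones with a single category, which is the merging described in the comment above.

        # Toy illustration of allophone merging (all data fabricated).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        allophone_a = rng.normal([400, 1800], [50, 120], size=(300, 2))
        allophone_b = rng.normal([430, 1900], [50, 120], size=(300, 2))  # nearby variant
        other_phoneme = rng.normal([700, 1100], [50, 120], size=(300, 2))
        tokens = np.vstack([allophone_a, allophone_b, other_phoneme])

        best = min(
            (GaussianMixture(n_components=k, random_state=0).fit(tokens)
             for k in (1, 2, 3, 4)),
            key=lambda m: m.bic(tokens),
        )
        # With heavily overlapping allophones the winner is usually k=2: one
        # category spanning both allophones, one for the contrasting phoneme.
        print(best.n_components)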

  • by cyphercell ( 843398 ) on Thursday July 26, 2007 @02:47AM (#19993355) Homepage Journal
    So are you saying a child of the tick-clik-ick tribe of some obscure place will find it easier to call their father tick-tak-teekee-leekee-do as opposed to dada, because that's what they hear all the time? I'm sorry, but dada, mama, and papa are words that are designed for babies to learn. I taught my kids to talk very fluently at a fairly young age (a dumb thing to do, btw :) ), and the basis of their learning was that once they learned to pronounce their vowels (through imitation), they would quickly learn words, as they had already learned about 40% of the sounds in spoken English. If I had spent that whole time whistling at them, then sure, they would have learned how to whistle, but it would have taken a hell of a long time.
  • by orcrist ( 16312 ) on Thursday July 26, 2007 @04:53AM (#19993941)

    Interesting; I never thought about a "feedback loop" in that way. But now that you mention it, it makes (evolutionary) sense that important words (for a baby) would correlate with simple and consistent sounds the parents can pick out and reinforce.


    The feedback loop is essential. There is an anecdote linguists learn on the subject of language acquisition: a couple, both of whom were deaf for non-genetic reasons, had a hearing child. Since the parents could only communicate in sign language, they plopped the kid in front of the TV a lot, thinking he could pick up spoken English from it. At 3, the child had developed at a completely normal rate in acquiring... sign language; he had not learned one word of spoken English.

    As others have pointed out, this is one of the genetic aspects of learning a language. We are "hard-wired", if you will, to socialize, particularly with our parents, and are predisposed to ascribing meaning to the sounds we make to each other. This is of course a vast over-simplification, but I'll leave the detailed explanations to others in this thread; I just wanted to add that anecdote.
  • by zooblethorpe ( 686757 ) on Thursday July 26, 2007 @01:43PM (#19999461)

    It is more difficult for an adult to learn a radically different language (e.g. Asian vs. European) because the adult brain refuses to hear the different phonetics; the adult brain long ago rejected those sounds as irrelevant to language and no longer even hears them in speech.

    As someone who has gotten into other languages later in life, after also having seriously gotten into languages earlier, I think a lot of any person's ability to "hear" radically different (or even slightly different) phones [wikipedia.org] has to do at least in part with how sensitive they are to sound systems aside from that used by their mother tongue. Some sounds are radically different -- try the Russian vowel that looks a bit like "bl", or the vowel sound in the Vietnamese noodle dish name "pho" -- and some are slightly different, but notably so nonetheless -- try the Castilian Spanish "s" as heard in the movie Pan's Labyrinth or the slightly retroflex British "sh" as in Room With a View versus their American counterparts. Monoglots may well tend to interpret these sounds as the closest analog in the sound system of their one language, though they might be aware on some level that the sounds are qualitatively different in the different languages. Polyglots already have more than one sound system within the scope of their familiarity, and thus seem more apt to fully perceive when a given phone is different from those in the sound systems they know.

    Furthermore, we must recognize the social elements of language acquisition. Adult speakers of only one language, and who have similar monoglots as the core of their social community, actually face numerous disincentives against properly learning another language. For one, adults in general have notably less free time than children. For two, adults are actively discouraged from engaging in behaviours that might be deemed inappropriate, but that are vital to language learning -- such as repeating sounds until they sound "right", or experimenting with different enunciations and different ways of using one's face to make different sounds. (For example, try repetitively enunciating "ba ba ba ba ba" to work on the Chinese non-aspirated bilabial plosive, while sitting on a crowded bus, and see how others react.) Adults are also less likely to engage in conversation if their grammar might be incorrect. Adults face very strong pressures to not be wrong in speech and bearing, certainly much stronger pressures than those children are subject to.

    To provide an anecdote regarding Engrish, I've had numerous middle school students in Japan who had impeccable English (note the L) pronunciation, only to devolve into Engrish in high school due to the social pressures of not wanting to appear like they were trying to outdo their Engrish-speaking classmates, or even worse, their Engrish-speaking teachers.

    On the flip side, it can also sometimes be a very good thing to have a noticeable accent, as it serves as a cue to others that the speaker is not a native speaker. A friend of mine is Israeli born and raised, and he speaks English without an accent, despite not having studied it until university. He's actually found his native-level pronunciation to be a liability at times, as people get very confused when he mistakenly uses the wrong word, or when he does not have the expected cultural literacy (e.g. commonly known television shows, celebrities, events, etc.). If he spoke with an accent, he would be immediately identifiable as non-native, and such minor social gaffes would be much more smoothly overlooked. His case could serve as an example of why some people never quite sound like native speakers in their non-native languages: there is sometimes a benefit to being obviously foreign.

    To sum up, I really must disagree with your implied claim that the adult brain is somehow incapable of learning different sound systems. Any one particular adult may indeed have more difficulty than another in learning foreign sounds, but this is not due to any inherent neurological inability; rather, it is due to social conditioning and personal motivation.

    Cheers,

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...