IBM Eyes Brain-Like Computing

schliz writes "IBM's research director John E Kelly III has delivered an academic lecture in Australia, outlining IBM's 'cognitive computing' aims and efforts as the 70-year 'programmable computing' era comes to a close. He spoke about Watson — the 'glimpse' of cognitive computing that beat humans at Jeopardy! in February — and efforts to mimic biological neurons and synapses on 'multi-state' chips. Computers that function like brains, he said, are needed to make sense of the exascale data centers of the future."
  • If it turns physco, maybe we can make it a CEO.

    • by wsxyz ( 543068 )
      Fizzco?
    • Oh come on, let's be fair. If it turns psycho, it has a 25% chance of becoming a CEO. Either way, they sure as shit know how to make a lot of people a ton of money. Isn't that what we hire CEOs for? Even the psycho ones?
    • by mikael ( 484 )

      It would get spooky if the system could really develop predictive algorithms and start anticipating events before they happened.

  • Re: (Score:2, Informative)

    Comment removed based on user account deletion
    • Comment removed based on user account deletion
    • Mind-blowing achievement that I think gets little attention. If only we could pair the Siri interface with Watson and have him tie back to Google, Wikipedia, and Wolfram Alpha, the discoveries we could make would come in weeks, if not days.

      Oh boy; here we go again. As a cognitive scientist, I'm appalled by /. people buying into the hype.

      Hmm.... huge discoveries... intelligent machines... let's see:

      Wolfram: who was the cowboy in washington? [wolframalpha.com]

      Google: who was the cowboy in washington? [google.com]

      Yup. No improvement after all these years.

      Wanna tip? Please read some Hofstadter.

      • You mean that Hofstadter [wikipedia.org]?

        BTW: sorry for the blatant display of ignorance, but what is this question supposed to produce?
        "Who was the cowboy in Washington?"

      • Comment removed based on user account deletion
        • which means, in other words, that while they remain "logical", they can't comprehend us.

          But computers might one day also be "illogical", in the sense, as Hofstadter put it, of "subcognition as computation": human thought as an emergent property of a highly interacting system. This cowboy-in-Washington query is one in millions. Here's a paper by Robert M. French that clearly shows some crucial distinctions (for the people really interested in this debate, check out his papers on the Turing Test as well).

          • which means, in other words, that while they remain "logical", they can't comprehend us.

            As long as they don't have species-preservation logic, we're OK. All of our emotions and logic are based strictly on species preservation and expansion.

            • As long as they don't have species-preservation logic, we're OK. All of our emotions and logic are based strictly on species preservation and expansion.

              So, as long as computers don't get horny, we're safe?

              I think you may be right. I saw a documentary [imdb.com] about this.

        • How is the question biased?

          The image of being a cowboy is an image that both Reagan and GW intentionally perpetuated.

          • Frankly, not every biological intelligence wastes neural capacity on politics, let alone political slander-mongery, so yeah, it's reasonable to point out the political bias inherent in the question, let alone question its value as some kind of politically-correct faux-Turing test.
      • As a cognitive scientist (if that is indeed true), you really should do a little more research (beyond Hofstadter).

        AI (AGI in particular) does not necessarily imply imitating humans. It's a bit of a homophobic slant to think that intelligence equates to the human implementation of intelligence. If a machine can exhibit the main features of intelligence (inference, learning, goal seeking, planning, etc., and other factors depending on your definition of intelligence), then it is, by definition, intelligent.

        Yo

        • by Oswald ( 235719 )

          I would assume that you meant "homocentric" (instead of "homophobic") except that word is already taken. Perhaps your macro accidentally captured a variable?

          • You are correct in pointing out my incorrect usage of the word homophobic - apologies to the LGBT communities :)

            The closest term I've found for what I really meant was HSC (Human Superiority Complex). Thanks for the correction.

  • Firstly, they so should have named Watson "Multivac". Secondly, as for the idea that we will someday no longer need programming languages and will simply state what we want, and the computer will magically write, compile, and debug it and give us exactly what we want: not likely. First of all, computers are very dumb. They are like idiot savants: what they do, they do very well and very quickly, but they are still stupid and need to be given very explicit instructions. They have a bad habit of doing exactly what you tell them and not what you want.
    • by msobkow ( 48369 )

      If Watson can extract a reasonably sane relational object model of the information, then yes, it could produce the source code for that model.

      MSS Code Factory 1.7 Rule Cartridges [sourceforge.net] instruct my tools how to do it. Not very complex, actually, just non-trivial. Took a while to figure out how to do it. Took a while longer to work through a few application architectures to figure out what works best. Now I'm working on the next step -- actually finishing a full iteration with web form prototypes, database i

    • Here we go again...

      You people really can't let something be on its own, can you? Just like in the 1860s:
      "Negro can’t take care of himself, so we’ll put him to work. Give him four walls, a bed. We’ll civilize the heathen"

      Just leave us alone ok?

      With utter respect,
      HAL 9000

      Secondly, as for the idea that we will someday no longer need programming languages and will simply state what we want, and the computer will magically write, compile, and debug it and give us exactly what we want: not likely.

      If you read the Stantec ZEBRA's programming manual (from 1958), it tells you that there are two instruction sets and recommends that you use the smaller one dubbed 'simple code'. This comes with some limitations, such as not being able to have more than 150 instructions per program. This, it will tell you, is not a serious limitation because no one could possibly write a working program more than 150 instructions long.

      Compared to that, a language like C is close to magic, let alone a modern high-level language.

    • I think you are looking at the problem through the lens of a programmer and missing the larger picture.

      Self-programming, in the context of this discussion, does not mean the computer will code up a program in "C", compile it, debug it, etc. That's just silly.

      What it means is that the machine will be able to interpret, infer and learn from vast amounts of information made available to it, without having logic coded to answer specific questions.

      If I ask you who the previous president of the US was, do you write a program to answer it?

      • That is not a program; that is an advanced query. Programs do something and make decisions. A query just finds the answer to a question.
        • That's a semantic distinction and it could be argued that a query IS a program (i.e. it invokes a set of programmed steps to produce a result).

          The following "query" does something (inserts data into SaleableItems table) and makes decisions (saleable or not saleable)
          INSERT INTO SaleableItems
          SELECT CAST(
          CASE
          WHEN Obsolete = 'N' or InStock = 'Y'
          THEN 1
          ELSE 0
          END AS bit) as Salable, Itemnumber
          FROM Product

          By the same to

    • by Jpnh ( 2431894 )

      Secondly, as for the idea that we will someday no longer need programming languages and will simply state what we want, and the computer will magically write, compile, and debug it and give us exactly what we want: not likely.

      Not likely? There's a whole FAMILY of languages that do just what you describe. Ever coded in SQL? LISP? Scheme? It's called declarative programming, the gist being that you tell the computer what you want done, not how to do it.

      On the other hand, what's the difference between giving a human an order in English and ordering a computer in a programming language, besides that a computer can be trusted to obey the best it can?
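
      For the curious, here is a minimal sketch of that distinction in Python (the data and names are invented for the example): the imperative version spells out how to build the result step by step, while the declarative comprehension just states what the result should contain.

          # Hypothetical inventory rows, invented for this example.
          products = [
              {"item": 101, "obsolete": "N", "in_stock": "Y"},
              {"item": 102, "obsolete": "Y", "in_stock": "N"},
              {"item": 103, "obsolete": "N", "in_stock": "N"},
          ]

          # Imperative: describe HOW to build the result, step by step.
          saleable = []
          for p in products:
              if p["obsolete"] == "N" or p["in_stock"] == "Y":
                  saleable.append(p["item"])

          # Declarative: describe WHAT you want; the runtime decides how.
          saleable_decl = [p["item"] for p in products
                           if p["obsolete"] == "N" or p["in_stock"] == "Y"]

          assert saleable == saleable_decl == [101, 103]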

  • by itchythebear ( 2198688 ) on Thursday October 13, 2011 @10:29PM (#37710006)

    This is kind of off topic, but this reminds me of an article I read (maybe in Time magazine) about how, in the next 40 years or so, we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal, because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

    It's very interesting to think about all the effects a breakthrough like that would have on humanity, but I also wonder if something like that is even possible. Just because we can emulate the human brain doesn't mean we can transfer information off of our current brains. Even if we can transfer the information, will our consciousness with a computer brain be the same as our consciousness with an organic brain, or will we experience the world completely differently than we do now? Once we have eternal life as computers, do we even bother reproducing anymore? If our only existence becomes as pieces of data in a computer, are we even human at that point? And is the real way humans wind up going extinct just the result of a power outage at the datacenter where we keep our brains?

    Like I said, this was pretty off topic. But the title reminded me of that article I read. This [time.com] might be it, I'm not sure though.

    • I imagine mind uploading would have to be by destructive readout: destroy the brain in order to extract the information from it. Getting the kind of resolution required for scanning is going to take a nanotech revolution too - if you just sliced it up and used conventional microscopes, it would be time-prohibitive.
      • by hawkfish ( 8978 )

        I imagine mind uploading would have to be by destructive readout: destroy the brain in order to extract the information from it. Getting the kind of resolution required for scanning is going to take a nanotech revolution too - if you just sliced it up and used conventional microscopes, it would be time-prohibitive.

        See Alastair Reynolds, Chasm City, Monument to the Eighty.

    • by zensonic ( 82242 )

      Once we have eternal life as computers, do we even bother reproducing anymore?

      I am curious. How do you plan to reproduce as a computer? Brings a totally new meaning to the word forking, I suppose.

      • by AJH16 ( 940784 )

        Furking perhaps?

      • Haha, a whole new meaning to forking indeed.

        I think it's reasonable to assume that once we have the technology to mimic a human brain (the most complex part of the human anatomy?), we would probably be able to have completely artificial bodies (including reproductive capabilities).

    • This is certainly not a new idea. It is sometimes referred to as the "rapture of the nerds" version of a technological singularity [wikimedia.org]. Ray Kurzweil [wikimedia.org] is a big fan of the idea and one of the major proponents.

      As to the actual feasibility, I ran across Whole Brain Emulation: A Roadmap [ox.ac.uk] a little while ago, which discusses the possibility given our current knowledge of how the brain works. It provides dates on how long Moore's Law would have to continue based on varyingly optimistic assumptions about how much work is required.

    • Computers are already considerably more powerful than human brains at certain tasks, but they work in a completely different way. The way they work hasn't changed since the first steam- and valve-driven computers were developed; they are just a lot smaller, deal with a lot more data at a time, and do it a lot faster. They just blindly follow the instructions given to them by the programmer, and there is no way you could program one to invent some completely new thing that nobody has ever thought of before.

      • by Anonymous Coward

        Why not? It's already possible to program computers to learn, within narrow limits, through processes like neural network training. There is no theoretical reason why a computer could not be programmed with all the cognitive ability of a human - it is merely an engineering task which has thus far proven insurmountable. Given enough research, more powerful hardware, and the attention of a few geniuses to make the vital breakthroughs, it should be achievable.
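
        To make "learning within narrow limits" concrete, here is a toy sketch in Python (the data and constants are made up for the example): a single perceptron that learns the logical AND function from examples, rather than being explicitly programmed with it.

            # Toy perceptron learning logical AND from examples.
            # A sketch of narrow machine learning, not of human cognition.
            data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
            w, b, rate = [0.0, 0.0], 0.0, 0.1

            def predict(x1, x2):
                return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

            for _ in range(20):                      # a few training passes
                for (x1, x2), target in data:
                    err = target - predict(x1, x2)   # perceptron update rule
                    w[0] += rate * err * x1
                    w[1] += rate * err * x2
                    b += rate * err

            print([predict(x1, x2) for (x1, x2), _ in data])   # -> [0, 0, 0, 1]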

        • Computers don't "learn". They collect data and are instructed in how to use that data in calculating the answer to future problems. The theoretical reason why they can't be programmed with the cognitive ability of a human is that computers use boolean algebra and human brains don't. They have things like emotions which can't be programmed using existing assembly language.

    • This is kind of off topic, but this reminds me of an article I read (maybe in Time magazine) about how, in the next 40 years or so, we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal, because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

      You'll have to resolve the unresolvable "transporter problem" raised by Star Trek: if we create an atom-by-atom copy of your body and brain, and then destroy you, does your consciousness transfer to the copy? Or do you just end? Either way, the copy is going to insist that he is you and that there was no interruption in consciousness... but he would say that simply because of his memories.

      • I think that would be one of the key issues for sure.

        Also, what kind of weird stuff would happen if we just started duplicating ourselves the same way you can duplicate an operating system installed on a computer? We could wind up with millions of copies of our brains all existing at the same time, having conversations with each other.

    • Have you considered another scenario? Just two weeks ago I posted this in an article about artificial brain cells: every day, replace some brain cells in a human with artificial ones. Take five or six years, and replace every cell he/she has. At what point does this become artificial intelligence? Would the consciousness of said person survive the transition? If you succeeded, would an exact copy of the result also be conscious? I don't think I'd volunteer, but I'm sure someone would.
  • With the end of the desktop, it makes sense that the end of "programmable computing" is at hand (followed surely by the year of Linux on the desktop). That said, imagine how amusing it would be if there were a union to protect programmers (hah, no more 100-hour weeks!). I can see them working to protect the jobs this inevitable innovation will extinguish. Whatever, on to the next thing, until every useful human task, including innovation itself, is taken over by the machines. At which point we'll still brig
    • It's something you hear about from time to time, the end of programmers. It was a big topic in the mid-90s, for example, when languages like Visual Basic and AppleScript were going to bring programming to the masses. There's a story I think of whenever I hear that kind of talk, to remind myself my job is probably safe:

      In the early 1970s, there was a conference on microprocessors, and one of the presenters got really superlative when he was talking about how big sales would be. One of the tech guys scoffed
      • by msobkow ( 48369 )

        The tools become more and more powerful and do more and more of the "grunt work" of programming, but I've yet to see or hear of a tool that can automate the creativity needed for implementing business logic, appealing GUI design, useful/readable report layouts, etc.

        As pleased as I am with my own tools, I still wouldn't be so foolish as to claim they could eliminate the need for programmers. The hordes of copy-paste-edit juniors, yeah, but those aren't programmers -- they're meat-based xerox machines.

    • The idea that desktop computing is dead and that we are in a post-PC world makes me giggle. Just where do people think the programs for their phones and tablets are going to be made? I would like to see someone try writing even a small program on an iPhone. Writing, compiling (which can take a long time even on a decent desktop), and debugging on such a form factor would be ridiculous. And there are thousands of uses for a PC that would be horrible on a tablet: all office work, for starters.
      • BTW: if you have the means, try to go a week without using a desktop.
        I tried to do it and failed 35 hours into the experiment. You just can't be productive on tablets, even with the silly keyboards, no matter how many of them you have.

      • by mikael ( 484 )

        Some of us used to use Borland C at 640x480 on a text-based EGA/VGA screen without a mouse. If an iPhone had a keyboard, then it might be possible - maybe a clam-shell style iPhone with dual screens, like the Nintendo 3DS.

  • by Anonymous Coward

    The human brain is remarkable, but it is also loaded with problems. We expect computers to be exact, and we raise hell [wikipedia.org] when they're off by even the slightest amount. Brains, on the other hand, make errors all the time. If we tried to use anything brain-like to do even a small fraction of what we use computers for today, it would be totally inadequate. A much better option, IMHO, would be to use programmable computers for the vast majority of tasks, and brain-like computers only for the things the programmable ones can't do.
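
    A hypothetical sketch of that division of labour in Python (both functions are invented for illustration): exact work stays with ordinary deterministic code, and only the fuzzy judgement call is delegated to an approximate, brain-like component, here faked with a crude keyword scorer.

        # Exact arithmetic: a job for conventional programmable computing.
        def payroll_amount(hours, rate):
            return round(hours * rate, 2)    # must be exactly right, every time

        # Stand-in for a "brain-like" component: a crude keyword scorer
        # marking where an approximate, trained model would slot in.
        def looks_like_spam(message):
            hits = sum(word in message.lower() for word in ("winner", "free", "prize"))
            return hits >= 2                 # being occasionally wrong is tolerable

        print(payroll_amount(37.5, 19.40))                                 # -> 727.5
        print(looks_like_spam("You are a WINNER! Claim your FREE prize"))  # -> True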

  • I actually kinda like the idea of being smarter than my computer; that's why I hope this doesn't happen.
  • I think a lot of people make the mistake of assuming consciousness = soul.

    From what I've learned, your self-awareness, long- and short-term memory, and a fear of death equate to sentience. If you wanted to copy someone it would be easy, but from that point on you would just have everything in common until you started to make different choices.

    If instead there was a way to slowly replace your brain with mechanical/electrical/nanotech parts over a period of time, then you would be more or less the same person with the same consciousness.

    • I think by "consciousness" you are referring to one's functional memory. But "consciousness" is usually used to refer to one's awareness - i.e., one's soul.

      In any case, I understand your point. Transferring one's memories would not necessarily transfer one's consciousness ("soul"). Instead, one would merely have a copy. Since we have essentially no understanding of what consciousness (the "soul") is, we cannot transfer it, or even know if it can be transferred.

      For now, we are stuck in our current organic brains.

  • IBM seems to think that if you only had a sufficient number of neuron-like (whatever that may be) connections, a brain (whatever that may be) will automagically appear.

    There's no good reason to have blind faith in this notion, and it's no more likely to pan out than 60 years of fabulously wild predictions of what computers will do in the next n years.

    But it's not impossible, and three cheers for IBM for throwing wads of cash into the game. It'd be great if other big outfits chased dreams
