
New "Wet Computer" To Mimic Neurons In the Brain 132

Posted by ScuttleMonkey
from the not-so-hard-hardware dept.
A new type of "wet computer" that mimics the actions of neurons in the brain is slated to be built thanks to a €1.8M EU emerging technologies program. The goal of the project is to explore new computing environments rather than to build a computer that surpasses current performance of conventional computers. "The group's approach hinges on two critical ideas. First, individual 'cells' are surrounded by a wall made up of so-called lipids that spontaneously encapsulate the liquid innards of the cell. Recent work has shown that when two such lipid layers encounter each other as the cells come into contact, a protein can form a passage between them, allowing chemical signaling molecules to pass. Second, the cells' interiors will play host to what is known as a Belousov-Zhabotinsky or B-Z chemical reaction. Simply put, reactions of this type can be initiated by changing the concentration of the element bromine by a certain threshold amount."
  • by mcgrew (92797) * on Monday January 11, 2010 @05:28PM (#30729044) Homepage Journal

    I'll have to change my opinion that we won't ever have true artificial intelligence. A chemical based computer could possibly become intelligent. After all, thought itself is only an electrochemical process.

  • by pete-classic (75983) <hutnick@gmail.com> on Monday January 11, 2010 @05:39PM (#30729252) Homepage Journal

    Wow, talk about carbon bias!

    Can you cite a reason why silicon-based systems shouldn't be as capable as carbon-based ones? Silicon-based systems have developed at a blistering pace compared to carbon ones. (Though I admit that they have the advantage of actually having intelligent designers . . .) I mean, life has a head start of a few billion years!

    -Peter

  • by icebike (68054) on Monday January 11, 2010 @05:45PM (#30729338)

    I fail to see why you need chemical based computers in order to construct artificial intelligence.

    One could build the system out of tinker toys and achieve the same results, at different speeds and different costs.

    Nothing that signifies intelligence is provided by one construction method and absent from another. Electrical, Optical, Mechanical, Chemical, Pneumatic... They are all just a means to an end.

  • by MichaelSmith (789609) on Monday January 11, 2010 @05:47PM (#30729378) Homepage Journal

    thought itself is only an electrochemical process.

    That's what we think...

  • by Alzdran (950149) on Monday January 11, 2010 @05:52PM (#30729470)
    I can't speak for the GP, but I think we'll exploit properties we don't fully understand (say, by growing neurons on a grid that interfaces with them) much faster than we'll be able to translate those properties into other systems.
  • Re:1.8M Euro? (Score:4, Insightful)

    by blueg3 (192743) on Monday January 11, 2010 @05:53PM (#30729482)

    600k Euros / year isn't bad for an exploratory research program. The article suggests that they're funding multiple such efforts, searching for promising ideas that can be furthered by later funding -- which is often how these things are done.

  • by Chris Burke (6130) on Monday January 11, 2010 @05:55PM (#30729520) Homepage

    A chemical based computer could possibly become intelligent. After all, thought itself is only an electrochemical process.

    If you believe that thought/sentience/intelligence is only an electrochemical process, then why do you believe that the same effect can't be achieved from a purely electrical process?

    Imagine a computer no different than those we have today, but arbitrarily more powerful. On this computer is running a perfect simulation of the human brain, down to modeling every quark. Every process, chemical, electrical, and otherwise that takes place in the brain takes place in the simulation. We can already do this for small numbers of molecules; the only thing missing is the processing power. Why exactly could this simulation not exhibit the same intelligence as you or I? What is it missing, and why can what is missing not be added to the model?

  • by Monkeedude1212 (1560403) on Monday January 11, 2010 @05:59PM (#30729566) Journal

    Chemical reactions have a randomness to them that electricity through a wire can't duplicate. When a circuit isn't complete, electrons aren't moving. When two chemicals aren't reacting, their molecules still shift about in their gaseous or liquid form. They can be affected by anything that comes into contact with them, and depending on the substance they may be magnetic, making their movement subject to all sorts of influences.

    Think of the number of random events that can occur on the cellular level.

    Computer software has gained the ability to learn, to change itself, even to reproduce itself.

    What it hasn't gained is the ability for abstract thought, which I attribute to the incredible amount of random events that go on inside our brain.

    Though I could just be blowing smoke, I'm no physicist or chemist or biologist.

  • by sznupi (719324) on Monday January 11, 2010 @06:07PM (#30729692) Homepage

    That mostly signifies how efficient the processes our bodies rely on are, in terms of density of information and computation. Much more so than our current silicon-based systems.

    But both are far from the maximum theoretical densities and efficiencies. And you have a lot of "randomness" deep down...

  • by sznupi (719324) on Monday January 11, 2010 @06:12PM (#30729774) Homepage

    Soul, naturally ;)

  • by Chris Burke (6130) on Monday January 11, 2010 @06:29PM (#30730010) Homepage

    Soul, naturally ;)

    James Brownatron, Godfatherbot of Soul, disagrees! He stays on the scene like a sex machine with 99.99999999% uptime!

    I doubt however that's what someone saying "thought is just an electrochemical process" was going for. I mean, I even believe in souls, but I think they're a cop-out for explaining how brains produce intelligence. The question is what does produce intelligence, and I have a hard time believing it's the constituent components, and not the emergent pattern. The issue with AI is one of algorithms, not something that only chemicals can do and computers can't.

  • by Chris Burke (6130) on Monday January 11, 2010 @06:49PM (#30730286) Homepage

    It's probably somewhere in between (if we understand "constituent components" as relatively simple blocks providing particular functionality, each "generation" building on simpler blocks).

    Take it more to mean the functionality itself, as opposed to the molecules that provide that functionality. Basically, however neurons etc work together to make a working brain, why is it so essential that it be physical potassium ions and physical neurotransmitters to create that functionality, rather than electronic equivalents? If you simulate the flow of ions, what's the difference? That's the basic question I'm asking here.

    And...our current evidence suggests it's most likely incorrect. :(

    I don't really understand... How can intelligence not be an emergent property? A neuron, though fairly complex, isn't intelligent. 2 tied together isn't either...

  • by phantomfive (622387) on Monday January 11, 2010 @06:51PM (#30730306) Journal
    Biological brain processes are so agonizingly slow, though. It's not just the electrochemical signal, it's the process of learning new things, which involves not only making an electrical link, but actually growing new physical connections, extending dendrites and growing synapses. This takes time, energy, and nutrients for the body. Doing it in electricity is so much easier, although we pay for it in energy costs. Compared to a 150 watt computer power supply, the average human body burns around 2000 kilocalories a day, which works out to about 2.3 kilowatt-hours, or roughly 100 watts of continuous power. Our brains use significantly less energy than that, but they are so slow!

    I am not against research, I am in favor of studying everything in the world, but I don't think it is reasonable to assume that only a chemical-based computer could produce artificial intelligence. Chemical-based computing has significant disadvantages compared to silicon.
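The energy comparison in the comment above can be checked with quick arithmetic; the 20% brain-share figure below is a commonly cited approximation, not a number from the comment.

```python
# Back-of-the-envelope check of the body-vs-power-supply comparison.
KCAL_TO_J = 4184          # joules per food calorie (kilocalorie)
SECONDS_PER_DAY = 86_400

daily_kcal = 2000                          # typical adult intake
daily_joules = daily_kcal * KCAL_TO_J      # ~8.4 MJ per day
body_watts = daily_joules / SECONDS_PER_DAY
brain_watts = body_watts * 0.20            # brain uses roughly 20%

print(f"whole body: {body_watts:.0f} W, brain: {brain_watts:.0f} W")
```

So the whole body runs on roughly 100 W, the brain on about 20 W, well under the 150 W power supply the comment mentions.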
  • by jameskojiro (705701) on Monday January 11, 2010 @07:03PM (#30730480) Journal

    That's a lot like building an abacus in 3-D software and then manipulating your 3-D abacus to add 1 plus 3 to get 4, chewing away millions of computational cycles...

    We need a better way to simulate the effect of a neuron without having to re-create everything down to the last protein and lipid in a nerve cell....

  • by LUH 3418 (1429407) <maximechevalierb@@@gmail...com> on Monday January 11, 2010 @08:43PM (#30731526)
    Many people bring forward this idea. I think it stems from the fact that "traditional" AI (which has only really been around for 60 years or so) has not yet yielded a "sentient" computer. People feel this somehow means traditional AI, and even our whole computational model, can't yield sentience. They attribute intelligence to the fabric rather than the logic it implements. I think these people fail to realize that whatever computation biological brains implement, we could simulate it on traditional computers, *if we even knew what is being computed*. The problem is that, so far, beyond the first layers of our visual system and some very simple circuits, we don't know much about how the brain is connected. However, from what's been discovered in neuroscience, it seems pretty clear that the early layers of the visual cortex perform simple convolutional operations that do not involve quantum physics, or fancy shmancy things we couldn't do *more efficiently* with silicon.

    The human brain is very complex, but given enough time, we very well might get to understand what makes us sentient and be able to replicate it in a computer. My personal opinion is that the brain is full of specialized hardware that has evolved over a very long time and helps us with specific tasks (e.g. facial recognition, hand-eye coordination, obstacle avoidance, language decoding), with a very powerful abstraction logic built on top (the stuff that "makes us sentient"). This abstraction logic is possibly very complex, and perhaps too difficult for us to conceive of at this time. If we are to learn anything from the rest of the brain, most of this logic probably focuses on transforming perceptual information into a form that makes it easy to reason with. On top of this, we probably again have specialized mechanisms to do things like deduce causal relationships and generate hypotheses or semi-random associations of concepts (creativity).

    The reason the "traditional AI" camp hasn't succeeded at making sentient machines are multiple, but I would sum them up as follows:
    1) They have mostly given up. You probably can't get funding for claiming you'll come up with HAL9000, you'll sound like a wacko. Current AI research focuses simple learning problems (i.e.: supervised learning, reinforcement learning).
    2) The approaches tried in the past focused purely on formal logic, which, as we now know, works badly in open-ended environments. For it to work well, the properties of the environment have to be simple, restricted and well-defined.
    3) Supervised learning, unsupervised learning, etc., will not yield sentience. These approaches, which may actually exist in the brain, are good at solving problems of limited scope only. Our brains are not big wads of neurons performing a single computation. They are much more intricate and integrate many specialized components.

    The "right" approach to AI is probably an overall approach, integrating many existing techniques into one system. Perhaps an "engineering" approach to AI would work better. Focus on constructing it and then refining it, as opposed to developing an overall theory of how it will work first and trying to reduce it to its simplest component. We already have computer systems that do speech synthesis, speech recognition, facial recognition, depth perception, 3D model reconstruction, etc. We also have unsupervised learning, supervised learning, reinforcement learning, fuzzy logic, knowledge bases, automatic theorem provers, etc. It should be possible to build a non-completely stupid AI, if one combined all these techniques in the appropriate way. How to connect them, however, is probably where the true AI problem resides.
  • by pete-classic (75983) <hutnick@gmail.com> on Monday January 11, 2010 @08:49PM (#30731596) Homepage Journal

    Again, billions of years of head start! If something like human intelligence arises from silicon in forty million years (the clock started, what? Sixty years ago?) then it will have happened in only one percent of the time it took carbon.

    I think that you grossly underestimate our understanding of the chemical building blocks of cognition. But, putting that aside, I think your argument recommends our efforts on the silicon front. Again, in only a few short decades we have gone from purely sequential (serial) designs, to very fast serial designs that can usefully mimic parallel designs, to today's shaky steps toward significant parallelism.

    -Peter

  • by pete-classic (75983) <hutnick@gmail.com> on Tuesday January 12, 2010 @02:11AM (#30733890) Homepage Journal

    Because there's no evidence thus far for consciousness and cognition in anything other than carbon-based wetware.

    I'll go further and stipulate that such a thing doesn't exist. But that in no way answers my question.

    But there are no such systems at the moment, and there's no particular evidence that the given hypothesis is correct. It may well be that self-aware intelligence is tied to the particular mix of phenomena that take place inside carbon brains.

    Sure. And we can "maybe" away any idea that hasn't yet been proven in practice. But that doesn't seem like a useful pursuit to me.

    But we have been steadily progressing at developing more complex and capable silicon/electronic systems for the last few decades, and there's no particular reason to expect that progress to slow or stop.

    On the other hand, we've been working on biology and chemistry in earnest for a couple of hundred years, and the capabilities of those disciplines don't seem to be developing toward any sort of artificial intelligence. We do know for a fact that biochemistry can give rise to intelligence, but we seem much further from creating it artificially with those tools.

    Your argument forcibly brings to mind those who insisted that, while heavier-than-air flight was possible for birds, it was utterly unachievable for man. Their only evidence being that man hadn't yet achieved it. I freely admit that it may not be possible for intelligence to arise from non-carbon systems, I'm merely pointing out that we've hardly begun the attempt.

    I doubt that either of us will be proven correct in our lifetimes, but that in no way negates my argument.

    -Peter
