Leech Neuron Computers
Ralph Bearpark writes "The biological computer is born.
A computer made of neurons taken from leeches has been created by US scientists."
I'd actually read about some of this research being done back in the early 80s at Bell Labs. Apparently they could actually get some read/write to the leech neurons, for use as storage devices, but they...uh...kept dying after a few minutes. Anyone confirm/deny that?
Whoa There (Score:1)
Re: Advantages (Score:1)
It doesn't seem like too far a stretch to connect these neurons with nerves that can interface to silicon-type machines.
In any event, the write aspect of read/write should be easy (even if it's just a hack like retinal/cochlear implants), it's the read that needs work. This is similar to problems with DNA computing; you can splash it all together, and make it compute - but it takes serious lab-work to read the results.
There's some course material on DNA computing at
http://www.csd.uwo.ca/~lila
--Mike
How to interface to computers.. (Score:1)
One interesting approach I'd like to see is to take a built-in I/O system from a primitive creature - like a leech - and then hook that to a computer. For example, you could take the optical nerves (eyes) and then use nerves as your "electrodes". Data could be passed in by using various intensities of light from a laser or an LED.
That still doesn't solve the problem that neural
networks get their power from the *architecture* that connects them - they're not magic. You need to have them intelligently arranged to reinforce a particular input.
One really good book on the topic that can be understood by the layman (or any undergrad, at least) is "Naturally Intelligent Systems" from the MIT Press. I highly recommend it.
Too bad industry isn't more interested in developing more along these lines, I'm sick of academia
Steve
smanley@nyx.net
java.... (Score:1)
I heard a while back about some people making computers that ran on coffee. Okay, here comes the science bit-
Apparently supercooled atoms can be in two states at the same time, so they can be 0, 1, or, errr... both. As I understand it, that means a register of two qubits (quantum bits) can hold four values at once, three qubits can hold eight, four can hold sixteen, and so on - doubling with each added qubit. Particularly powerful for doing cryptography, apparently.
Anyways, you can do this by supercooling atoms & firing lasers at them (expensive, very expensive, and very difficult) _or_ by firing radio waves at organic compounds, like caffeine in coffee (something to do with 'nuclear magnetic resonance', whatever that means).
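The doubling is easy to sketch, if not to build. A toy enumeration (ordinary classical Python, no quantum hardware assumed) of the basis states an n-qubit register spans:

```python
# Toy illustration (not real quantum computing): an n-qubit register's
# state is a vector of 2**n amplitudes, one per classical basis state,
# which is where the "each extra qubit doubles it" claim comes from.
import itertools

def register_states(n):
    """Enumerate the 2**n classical basis states an n-qubit register spans."""
    return ["".join(bits) for bits in itertools.product("01", repeat=n)]

for n in (1, 2, 3, 4):
    states = register_states(n)
    print(n, "qubit(s) ->", len(states), "basis states")
```

Counting basis states is all this shows; the hard part (keeping the atoms coherent while they sit in all of them at once) is exactly what the lasers and the coffee are for.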
The moral of all this:
Allegedly (serious now ->) people have added one plus one in a cup of coffee.
God knows where you plug in the monitor.
Who's gonna start the linux port?
Re:What about human downsides? (Score:1)
Is this really new? (Score:2)
Re:Is this really new? (Score:2)
They'd had some success getting the neural nets to perform complex tasks, but ran into the problem that once they'd trained a circuit to do what they wanted, they couldn't build working duplicates of it. Also, some of the nets were apparently using the components they were attached to as part of their circuit in some indiscernible manner, so they'd stop functioning if they hooked them to a different circuit that theoretically looked the same from the outputs.
No leeches were harmed in the processing of this post.
Traveling network (Score:1)
If this could some day be done to alter the human brain (obviously without causing death, pain, and such), to the extent where we could fully utilize every portion of it, or at least much more than we can now, and in addition, use our memory directly for storage of some sort, we wouldn't need PC's.
We could just dock our "plugs" (of course we'd need some way to network ourselves), into the station we would be working at.
Hell, we'd have all the processor power we could need, as with storage. Even better, we'd have such a higher basis for learning (since we would be more apt at using our brains), that we could easily further all forms of technologies, and who knows where civilization would go from there.
Or maybe I just think ahead too far...
The SKYNet factor:Again (Score:1)
I spend a large portion of my time thinking about this very scenario. It is a danger when dealing with a thinking machine of any type, biological or silicon. The major questions we face are:
1. Will it view us as a threat?
2. Will it attempt to defend itself?
3. Is it capable of adapting rapidly to the uncertainty of battle?
4. Can we "pull the plug"?
5. Can it stop us from "pulling the plug"?
6. Can we "kill" it?
To create a thinking thing requires an additional level of responsibility. Just as it is a parent's responsibility to correct a child when that child goes astray, it is the creator's responsibility to correct, or possibly even destroy, that creation when it causes more harm than good.
I am actually more afraid of a combination between a biological and a silicon computer. Think of it in terms of a miniature cluster. The biological portion develops questions, and the electronic parts get answers. That type of architecture, while we will probably never live to see it, would have the benefits of both of its parents: the adaptability of a biological organism, with the raw number-crunching power of a machine.
Imagine a soldier who can endure a 100 degree week-long heatwave. And when his target approaches he can calculate the distance between him and the target, the average kinetic energy of bullets of a particular weight and caliber fired from his weapon, wind, humidity, the rate of motion of the target, the recoil of the weapon, wind resistance and which part of the target's body would most likely result in either a killing or maiming shot in a few nanoseconds.
I don't see this as paranoia. Paranoia is unfounded; every type of new technological advance is exploited by those who wish to gain wealth, power, or both. Every advance we've made as a species has been used to kill people. The boat, the airplane, the automobile, the firearm, nuclear energy, the rocket, ad infinitum. Being afraid that self-aware machines will be next is not paranoia in my book.
LK
Re:this is dangerous!!! (Score:1)
It is one thing to help a collection of slug neurons stumble upon the idea that 1 + 1 = 2, but it is another thing entirely to create truly intelligent "machines". I am no expert, but the capabilities of this biological form of AI are probably inferior to our digital technology.
Re:Borg, plain and simple. ----- (Score:1)
If it is aware of what it is, it can choose to change for the better.
I don't understand the term TK ability. Please elaborate.
LK
I'm curious... (Score:1)
Re:define "the same thing" ... (Score:1)
It's true, one ought to tread carefully in this territory, but we've been denied guarantees in every other human endeavor I can think of. Caution has been the prudent tool for mitigating risk in those instances too.
We might also want to consider breeding nets that are good, on the average, at recognizing the appearance of potentially-unreadable, undesirable properties in other nets, and put their finger on the other's kill-switch. That's an awfully cruel way to treat a neural "slave" machine, I know, but it may be an effective average-case safety mechanism. And the blood would be confined to the hands of the slave's creator, as it should be.
Infeasible (Score:1)
I don't think that neuronal computing will ever be feasible except for natural organic creatures. Neuronal neural nets have numerous advantages over electronic neural nets, but many serious disadvantages as well. A neuronal computer requires a much more complicated support system in terms of energy provision and waste removal than an electronic computer, among other things.
The future of computing lies in combined applications of traditional parallel digital computing, digital neural nets, and digital genetic programming; initially implemented with traditional semiconductor lithography, but eventually using nanotech construction, which will provide more performance and flexibility in reconfiguration than either semiconductor or biochemical computing.
I highly recommend "The Age of Spiritual Machines" by Ray Kurzweil as an optimistic assay of the future of computing. He makes a compelling argument along these lines.
worried (Score:2)
reminds me what Ken Thompson said about going into
biology instead or CS. Why do I get the feeling
that all of my experience with "traditional"
computers is going to be worthless in about 15
years. Kinda like what happened to some of my
older physics and EE profs when transistors came
along and made their tube knowledge obsolete.
Leeches for blood-letting? (Score:1)
Jon
Re:Ethical questions? (Score:1)
> because people tend to care less about
> things that aren't as cute.
This is quite true. If they were using, say, baby seal neurons, Greenpeace would be all over them.
Re:Like an analog computer... (Score:1)
All these things make my brain so fuzzy
Dangerous Stuff (Score:1)
This news only reminds me of the current debate over Genetically Modified foods. Do these people really know what kind of forces they are messing with? Ten years from now - "Oh dear, biological computers cause cancer, and the human race will now be wiped out", or even worse, they realise their superiority over humans, and take over the planet, or worse.
Oh, and I've a few things to say, just to save anyone else posting them. *yawn*
"So, when's Linux going to be ported to this?
"Picture it, a LIVING BEOWULF!! BEOWULF yadda".
"Can't wait to play QuakeIII on one of these babies".
There you go, it's said, now nobody else needs to say it.
Virus Protection? (Score:1)
Well, looks like you have a CIH.
I recommend three leeches and
call me in the morning.
-Jonathan
Re:Your brain IS powerful (Score:1)
Take, for example, plain old math. Start a stopwatch running and time how long it takes you to multiply these two numbers together (without using any outside aids like paper and pencil).
3456 x 78 =?
It takes a while, doesn't it? Most people have trouble remembering passwords, phone numbers, account numbers, license plates, etc. There are lots of things brains are really good at, and there are lots that they aren't. (BTW, the answer is 269568.)
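For contrast, here's the same sum done the silicon way - a trivial Python sketch just to check the figure above and show how little time the machine needs:

```python
# Sanity check of the arithmetic above, plus a rough timing of how long
# the machine takes versus the ~30 seconds most of us need on paper.
import time

start = time.perf_counter()
answer = 3456 * 78
elapsed = time.perf_counter() - start

print(answer)                      # 269568, matching the figure above
print(f"{elapsed:.6f} seconds")    # effectively instantaneous
```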
-P
Think cultured neurons, not from dissection (Score:1)
-P
Re:this is dangerous!!! (Score:1)
Specifically, my solution is 1 part salt, 3 parts water.
Stir, then pour on the affected computer. All problems will go away.
AI is something different than this. This is biological fuzzy processing. It should be much more efficient than using silicon to do fuzzy stuff.
If (or when) we create AI computers, then there remains the issue of what their outlook will be like. I mean, if I took your brain and stuffed you in my computer case, you'd be unhappy.
But these would be lifeforms who grew up this way. Their culture would be different. They may consider being what they are honorable or desirable.
They wouldn't have the hormonal problems we, as a race, have. They wouldn't need to be territorial with us. They wouldn't even compete for the same resources!
I'm not afraid of AIs exploiting us. Though the reverse is possible, and more likely.
Ciao!
Re:this is dangerous!!! (Score:1)
The whole "AI takes over the world/wipes out the human race" plot makes for a good movie, but it gets old real quick.
Hasn't anyone here read "The Callahan Touch" by Spider Robinson? An AI's motivations could be TOTALLY DIFFERENT from our own. NO GLANDS. The only urges in an AI would be the ones built into it. No need to reproduce, no survival instinct, no aggression, hate, or angst.
"But... the Matrix was so COOL!"
And that has WHAT exactly to do with reality?
BAH. (humbug!)
this is dangerous!!! (Score:3)
Am I the only one who's paranoid?
Neurons removed from leeches first...right? (Score:1)
I didn't do that well in my Neuroscience course
--Andy.
Quite Nifty (Score:3)
"..this top of the line Intel Slugium 500 with NNX..."
I can't see anyone wanting to get their hands dirty every time their slugs explode because they overclocked them. And where are the animal rights guys? If they were doing this with pigs [what a sight] they [the animal rights dudes] would be all over them. Interesting article anyways. It's a neat idea, and would make replacing CPUs a bit cheaper...
Stan "Myconid" Brinkerhoff
Re:this is dangerous!!! (Score:1)
Re:this is dangerous!!! (Score:1)
Install Win95
Seriously, though. Wouldn't someone have to tell this computer to "take over the world", rather than just letting the thing run rampant doing whatever it wants?
It will think for itself, but sentience is free will and I doubt that they will be able to program that.
Havent you seen Phantom Menace yet? (Score:2)
bzzt! (Score:1)
on the other hand, if it really is faster, cheaper, and compatible it might be an interesting idea.
"life's short, eat dessert first"
Neural Nets etc. (Score:1)
Most neural nets do not make connections on an as-needed basis. This actually confuses me about the Ditto/Calabrese work. Perhaps they are using neurotropic factors for axonal guidance. Artificial neural nets generally have set connections, and only the weights vary over the course of learning. Even in "real" neural networks, learning is not thought to occur by creating new neuronal interconnections, but rather by biochemically strengthening existing ones.
-NoData
Re:Hmmm regenerative function? (Score:1)
-NoData
Re:Not entirely accurate (Score:1)
Yes, I know about that work. Most of it comes from Liz Gould's lab at Princeton. Her group has discovered neurogenesis in the dentate gyrus in adult rats, shrews, and even monkeys. However, while there may be some primitive derivatives of learning that rely on neurogenesis in adults (e.g. the resistance of the dentate gyrus to deterioration in the face of Alzheimer's), it's computationally infeasible that any significant post-developmental learning relies on neurogenesis (post-developmental as opposed to pre- and neo-natal developmental experience, which may in fact impact neurogenesis or neural connectivity).
That's not saying that neurogenesis in adult animals doesn't have a behavioral impact. Indeed, some of her work indicates that neurogenesis may be inversely related to the expression of certain defensive behaviors.
Neurogenesis and synaptogenesis persist in adulthood, to be sure. But it's fairly unlikely they're dominant or even significant mechanisms for what we commonly call learning (acquisition of new semantic, procedural, spatial, etc. memories).
-NoData
leech computers (Score:1)
can tell 'em where they can get a *large* supply
of long-lasting leeches...like, drive to Seattle
and make a right to Redmond.....
mark "how would you like the Evil Empire
bashed today?"
Re:Ethical questions? (Score:1)
Re:Like an analog computer... (Score:1)
Anyone else recall this? I don't have any references, unfortunately.
Dodger
Hehehehehehee.. (Score:1)
D.
Re:How to interface to computers.. (Score:2)
Hmmm regenerative function? (Score:1)
Interesting that they chose leeches... don't they
have regenerative abilities which other "higher order" animals lack?
Leeches, planarians, earthworms.. if they have the
ability to regenerate, it might explain why the
scientists chose leeches. Better ability to hold
together and meld to form a cohesive computing
unit.
Just a guess.
Though there are some odd ways of interpreting
this. One being that the machine could in theory
then meld with other machines of the same type,
resulting in a possibly unpredictable end product
and possible sentience... nah.
Still, can't wait to see where this is going to
end up say 5-10 years from now.
Hey! Look! My computer still works if I slice and
dice it and mush it back together again. And not
a single file lost! It just regenerates...
Bad for tax evasion and shady deal folks though.
Those files just keep coming back.
- Wing
- Reap the fires of the soul.
- Harvest the passion of life.
Re:Ethical questions! (Score:1)
I think they were making a brain out of leech ganglia... the real problem is going to come when one of these things passes a Turing test (most people I know can't do that...)
Re:Artificial life. (Score:1)
> the whole 'AI takes over the world' thing is
> that even if you cant just unplug the thing,
> there's always going to be some relatively easy
> way to shut it down. Even if it entails blowing
> up the building it's in...
Oh, why do I fall into these traps... well, seriously, if you are going to poke holes in these movies then there are better ways. This particular argument is easier to disbelieve than others.
In Terminator, the computer controlled their weapons and defense systems. Assuming that the designers had never seen a movie about an AI taking over the world, it is reasonable to assume that the computer would be hard to shut down by design. After all, you wouldn't want your enemy just turning it off.
In the Matrix, the computer system was massive. If you had to turn off the _entire_ internet tomorrow, how would you do it? This would have to include any sub-network large enough to hold whatever AI entity you are trying to kill. And that's not even considering if some of them are in self-contained mobile units.
These stories have weaker elements than this one point. Although, I tend to subscribe to the point of view that any story can have holes poked into it... even the true ones.
-Paul
Re:Neurons removed from leeches first...right? (Score:1)
As far as connecting the leeches go, though, how about a leech Beowulf cluster?
What about human downsides? (Score:2)
We've all seen The Matrix, 2001, Terminator, etc, and all have to do with machines becoming sentient and destroying their operators when they threaten to pull the plug. It's a natural aspect built into our personality - logically a machine with a human brain would inherit that characteristic as well.
Imagine if you're trying to reboot your human-brain computer and it doesn't want to? Will it lash out at you by doing whatever it can to stop you? ("Open the pod bay doors, HAL.")
Chaos theory (Score:1)
Re:Ethical questions? (Score:2)
Big Deal (Score:1)
If you want a good biological trick to make faster computers, try calculating at the chemical level, using DNA, as some have proposed. Massively parallel computation. Now that could be fast!
A New DoS Attack (Score:1)
Why does hitting clear the input field on the submit page? Is this a secret way of getting back at vi users?
Tech whomping M.I.T. (Score:1)
and, even though I'm just a pre-frosh there, I just have to say it-
" Helluva, Helluva, Helluva, Helluva, Helluva Engineer. "
Go Burdell!
We already have slaves... (Score:2)
It won't be anything new, or newly immoral, or unethical, if we design a biological computer that can work out problems for itself: people already do it, and we actually frown on people taking initiative, so why would it be any different for computers? Likewise, coming to correct answers based on partial information is a hallmark of living beings, because they have to act/react based on partial information. Waiting until they know everything just leaves them dead.
Sentient robots are a ways off. First we need biological computers, then we need sophisticated electrical/silicon robots, and then we need a way to integrate the two.
Your fear of slaves is unfounded, unless you object morally to our current use of them.
-AS
Re:We don't keep slaves... (Score:2)
I truly see no need for human class intelligence except on a theoretical and experimental level, because we can. It's probably easier and cheaper to raise and train a human than to build and program a similar computer/robot, excepting that these computers and robots may have higher tolerances than we do... and that's a recipe for disaster if they indeed decide to revolt.
Anyhow, super-intelligent computers are not my point. Feeling fearful and squeamish about wetware computers isn't necessary, I think.
There is a fine line between human fallibility and human capability. If we need an autonomous search and exploration robot on Mars or under the sea, a little bit of fallibility is fine if the intelligence is enough to correct itself, and if the intelligence makes it flexible enough to work independent of us, as is the case where lag times between instruction, feedback, and further instruction are prohibitive.
-AS
scratch leeches (Score:1)
http://www.mv.com/ipusers/arcade/monkey.htm
Future Implications? (Score:2)
It's a brave new world that's opening up. And I'm interested in seeing where it goes.
Re:Ethical questions? (Score:1)
Actually, it's probably because leeches are cheap and easy to get. It would be an advantage to have more neurons, because that way you could do more experiments with just one animal.
Aside from that, the leech neurons are probably pretty close to neurons in other animals, since otherwise they wouldn't be studied that well. Not many people are interested in studying leech neurons just to find more information about leech neurons. However, if you could extrapolate this information to neurons in other animals (including humans) ...
Re:A whole new can of worms... err... leeches (Score:1)
Well considering that they barely got the computer to add and are now trying to get it to multiply, it'll be a while before we have to start worrying about these computers taking over the world.
Re:Assumptions (Score:2)
There are a lot of advantages in having the brain attached to the body. If the brain was in a separate location, you would have to worry about communications between the two. What happens when the communications link between the two gets broken for some reason or another? If the brain were connected to the body this is not a problem.
Silicon vs Biocomputers (Score:1)
What I see happening is along these lines: we're starting to approach some physical limits in chip design (at current rate of progress, we'll be doing chip fabs on a molecule-by-molecule basis within 20 years. [Note: this implies a basic capability in nanotechnology. But we'll leave that to another thread. . . ]). Unless some VERY radical changes are made to the neurons that would compose a biocomputer, you may gain in flexibility, but you'd lose substantially in speed.
I think a more useful approach would be to study and emulate biological computer processes and methods in a more robust and speedier medium. What that medium is - etched semiconductor, optical computer, nanotechnological mechanical computer, or some option we haven't yet conceived - is irrelevant. But it WILL be developed. The next few decades should be interesting...
No Ethical questions here. . . (Score:2)
As for the concept of "don't mess with living things", THAT one has been over for centuries: your dogs, cats, livestock, and food plants have been "engineered" for millennia. Current genetic engineering technology is merely taking the direct path to change, rather than the slower, roundabout method of breeding for characteristics: both methods work, and to the same end...
Re:Like an analog computer... (Score:1)
I had a boss who had worked on some analog computers for NASA, who used them quite extensively for flight simulations up until the early 60's (and used them to some extent for other purposes even after that). Apparently the analog comps were ideal for flight simulators, which mostly consisted of changing a bunch of cockpit dials in response to user input. They eventually fell out of favor when NASA started experimenting with space flight, which was too complex for anything but a digital computer.
I'm not sure if my former boss ever worked on the flight sims or just on other applications, but he said the analog computers were pretty interesting, except for the fact that they were constantly going "out of tune" and needing calibration. He was full of good stories, though - he also tells about how he worked on a project which involved writing compilers in COBOL. :D
This is just a wetware neural net! (Score:2)
"Normal" neural nets, implementation-wise, are just programs that take some inputs and produce some outputs. Custom-made chips exist which put common neural net operations into hardware, thus speeding the whole process immensely. It seems that what these guys are doing is a wetware neural net; that is, instead of software constructs or logic gates, they are using living neurons. There may be advantages to that, but I don't see them yet. Most of the stuff that they mentioned (such as making connections on an as-needed basis) are characteristics of all neural nets, including the software and the hardware ones.
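For the curious, a "normal" (software) neural net really is just that: inputs in, weighted sums, outputs out. A minimal single-neuron sketch in Python, with weights hand-picked for illustration rather than trained:

```python
# One artificial neuron: a weighted sum of its inputs pushed through a
# squashing function. Entire "software" neural nets are just many of
# these wired together. The weights below are invented for illustration.
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    """Compute one neuron's output from its inputs."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# With these hand-picked weights the neuron behaves roughly like an AND
# gate: its output is near 1 only when both inputs are 1.
and_ish = lambda a, b: forward([a, b], [10.0, 10.0], -15.0)
print(round(and_ish(1, 1)), round(and_ish(1, 0)), round(and_ish(0, 0)))
```

The wetware version replaces `forward` with actual membrane voltages, which is the whole novelty here.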
Kaa
Connections (Score:2)
Whether artificial neural nets modify connections during training depends on how the net is trained. First, there are learning methods that specifically work by adding/deleting neurons (cascade correlation); and second, most learning methods win when they are combined with a pruning strategy (shutting off unimportant connections). The problem is determining which connections are not important.
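One crude but common pruning heuristic - offered here only as a sketch of the idea, not of any particular method mentioned above - is to treat the smallest-magnitude weights as unimportant and zero them out:

```python
# Magnitude-based pruning sketch: keep the largest-magnitude weights and
# shut off the rest. This is a crude proxy for "importance", which is
# exactly the hard problem noted above.
def prune(weights, keep_fraction=0.5):
    """Zero out the smallest-magnitude weights, keeping keep_fraction of them."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

print(prune([0.9, -0.05, 0.4, 0.01], keep_fraction=0.5))  # [0.9, 0.0, 0.4, 0.0]
```

A small weight can still matter (and a large one can be redundant), which is why pruning by magnitude alone is only a heuristic.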
Kaa
...die after a few minutes. (Score:1)
Re:Your brain IS powerful (Score:2)
First of all, our brains are built to survive in a natural system. The calculations we are programmed to do are of a visual and physical nature.
Secondly, number notation is a purely unnatural thing; one has to learn what 45834 is before they can multiply it by anything. Also, our brain has to learn to understand the concept of a base-10 system anyway. While having 10 fingers does naturally lead us to developing a base-10 system, this doesn't necessarily have to be the case, and I doubt the math that is done subconsciously in our brain uses such a system.
My last point is that there are people out there who can multiply huge numbers like that in their heads. Most of them have no idea how they do it; this is probably because at some time during their learning experience their brain discovered a way to equate the symbol 13434 (for example) with what it truly means. Most people only have a vague idea what such a number really represents. If we can be taught to understand such relationships, we could probably all do this. But then again... who knows... personally I couldn't tell you what 34 * 7 is
Re:Assumptions (Score:1)
Oh, wait a minute, that's been done :)
Re:What about human downsides? (Score:1)
Ummmm... 2001 was about the evolution of man. It just so happened to have HAL in it. HAL's downfall was NOT that he was AI, as you would have us believe, but that he was given 2 sets of conflicting orders. Human error. Remember the video clip Dave stumbled across while shutting HAL down?
-Andy Martin
Re:When do I get my cybereyes?! (Score:1)
i want 2 more arms and opposable big toes and adamantium retractable claws and a prehensile tail!
-Andy Martin
Toes vs. Leeches (Score -1: Silly) (Score:1)
I've been doing sums on my fingers (and for really big numbers I use my toes) for years now.
They work fine, and my toes are a dang sight cuter than leeches.
Just my $0.02.
--
Your brain IS powerful (Score:2)
> Native brain multitasking, DSP for sound analysis, etc, etc.
Don't underestimate that little grey blob of yours. It is quite powerful.
There was a Slashdot posting a while ago about a "task switcher" in your brain.
Your ears are already DSPs. That is how you distinguish high-pitched tones from low-pitched tones. Your cochlea (inner ear) is made up of a bunch of hairs that resonate at the frequency of incoming sounds. They hear in the frequency domain, not the time domain; the FFT is computed naturally in "hardware". Also, your ear hears intensity on a logarithmic scale. Don't even get me started on the cool-ass things that your brain does with the STEREO signal from your two ears. Binaural hearing is totally bad-ass.
When you catch a baseball, your brain is effectively doing differential calculus. It is a learned reaction, and it is all subconscious, but given the position and velocity of the ball, and knowing (instinctively) the value of gravity, your brain anticipates the future location of the ball.
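What the brain does implicitly can be written down explicitly in a couple of lines of kinematics. A rough sketch (constant gravity, no air resistance, all numbers invented for illustration):

```python
# Predicting a ball's future position from its current position and
# velocity, the way an outfielder's brain implicitly does. Assumes
# constant gravity and ignores air resistance.
G = 9.81  # m/s^2

def ball_position(x0, y0, vx, vy, t):
    """Position t seconds from now: x moves steadily, y decelerates under G."""
    x = x0 + vx * t
    y = y0 + vy * t - 0.5 * G * t * t
    return x, y

# Ball leaving the bat 1.5 m up at 20 m/s horizontal, 10 m/s vertical:
# where should the glove be one second from now?
print(ball_position(0.0, 1.5, 20.0, 10.0, 1.0))
```

The brain doesn't solve these equations symbolically, of course; it learns an equivalent mapping from watching thousands of thrown objects.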
There are many examples like this of the computing power of the human brain.
-- A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."
When do I get my cybereyes?! (Score:1)
Re:Like an analog computer... (Score:1)
A whole new can of worms... err... leeches (Score:1)
Anyone remember that movie "The Matrix"? Yeah, that's right. At the turn of the century the people of the world rejoiced as they celebrated the birth of the AI. And what happened next?
Re:this is dangerous!!! (Score:1)
"An AI's motivation could be TOTALLY DIFFERENT..."
Emphasis on "could be" .. one of the primary characteristics of human intelligence is the ability for the brain to "reprogram" itself to adapt to new tasks. The ability for self-reprogramming may essentially be a requirement if we are to build machines with intelligence matching or surpassing that of humans - whatever "instincts" we try to "hardcode" into such a machine (eg "do not harm humans") will probably be reprogrammable in some way, just as humans can override instincts when they really want to.
Another distinctive characteristic of human intelligence is unpredictability (eg Columbine) - you can't predict the behaviours of all the intelligent entities you (or they) create.
Also, if history is anything to go by, the first "truly" artificially intelligent creations of man will probably reach their first incarnations as military devices of some sort. What do you suppose their programmed primary motivations and driving urges will be?
And technology-wise, it's not a matter of if we eventually create computers with intelligence surpassing ours - it's a matter of when .. and whatever we make, we'll have to live with it. I for one am just a little "paranoid".
slaves (Score:1)
People build things for their own use. They are talking about computers that can solve problems on their own from a vague description. Sounds to me like viewing them as less than slaves, ignoring the idea that they might be living creatures which deserve our respect.
Their needs are unpredictable. They might have an expansionist urge to reproduce. They might feel that it is their sole purpose in being to produce next year's model, to the point of grabbing whatever resources they can to speed this process. Perhaps they will simply have a survival instinct, and lone rogues will fight back to gain security and sustenance (electricity? glucose? sunlight?).
It is not hard to envisage a struggle between two groups fighting over the same resources, which are rapidly shrinking in comparison to exploding populations. It's all well and good to imagine a world of peace and harmony in conditions of plenty, but there's never enough to go around. There would eventually come a time when an artificial entity and a human (or communities of such) would need the same thing to survive.
I say that if we have any kind wishes for our natural descendants, we must never create artificial competitors for them.
We don't keep slaves... (Score:1)
As for little kids making Nike shoes, of course I object morally. They have no say in the matter. Engineers getting rich coding MS OSs are not slaves; they work willingly for satisfactory payment and could take other employment.
These facts aside, your suggestion seems to be that unless our current behavior is morally perfect, in the future we should act without regard to morals, and further, that my fear has some connection to my morals. Setting aside morality for a moment, I do not feel irrational for fearing that the creation of true general AI could result in human suffering, or possibly extinction. Humans are barely managing to control and get generally positive results from their unintelligent machines (and even that is debatable).
I agree that the immorality of building slaves is debatable (as is the applicability of the term "slave"), but in general people seem very quick to extend moral equivalence to intelligent aliens, so the difference seems to be in the possession of an intelligent and communicative (or at least interactive) mind.
Coming to correct answers based on partial information, or making guesses, is not unusual in computers. Computer programs are frequently predictive. Computer game AIs have no idea what is present in the mind of their opponent, and often aren't aware of the strength of the opponent's forces, but they still manage to act in many cases well enough to beat the player (game programmers rarely aim for the best possible opponent, but aim for a specific difficulty level, usually low enough for the player to feel good about defeating "superior" forces). Remember, too, that living creatures aren't always, or even usually, right. People go around with their heads full of wrong ideas for their whole life, but as long as your wrong ideas don't interfere with filling your belly from time to time...
Generally we find more use in computer applications that don't suffer from human fallibility. GIGO may be annoying, but less so than wondering whether your computer finds your opinions objectionable and is editing them out. We would rather have a compiler that fails with an error message than guesses at what we meant to write.
note: GIGO = Garbage In Garbage Out
Not intention ... ignorance (Score:1)
A probable mechanism for brain production would be to create small neural nets, condition them to respond as desired, then connect them together. Conditioning would, of course, be done with pain (weaken connections) and pleasure (strengthen connections) stimuli. For the brain to go on developing new strategies and abilities it would need to be continually conditioned by something capable of judging whether it is doing better or worse. Where the purpose is sufficiently complex to warrant the use of an artificial brain, the natural judge for this is another part of the brain, with only simple stimulus from the body (as in humans). This could easily lead to positive feedback loops resulting in unforeseen adaptations (analogous to insanity).
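The strengthen/weaken conditioning described above can be sketched as a reward-gated Hebbian update. Everything here (the patterns, the learning rate, the function names) is invented for illustration; real conditioning of living neurons is nothing this tidy.

```python
# Reward-gated Hebbian conditioning: "pleasure" (+1) strengthens the
# connections that carried the input, "pain" (-1) weakens them. All
# numbers and names here are invented for illustration.

def condition(weights, inputs, reward, rate=0.1):
    """One conditioning step: nudge each weight along its input, gated by reward."""
    return [w + rate * reward * x for w, x in zip(weights, inputs)]

def activation(weights, inputs):
    """The net's response to a pattern: a simple weighted sum."""
    return sum(w * x for w, x in zip(weights, inputs))

# Condition the net to "like" [1, 0, 1] and "dislike" [0, 1, 0].
weights = [0.0, 0.0, 0.0]
for _ in range(20):
    weights = condition(weights, [1, 0, 1], reward=+1)  # pleasure stimulus
    weights = condition(weights, [0, 1, 0], reward=-1)  # pain stimulus

print(activation(weights, [1, 0, 1]))  # clearly positive: reinforced pattern
print(activation(weights, [0, 1, 0]))  # clearly negative: punished pattern
```

Note that the judge (the thing choosing `reward`) sits outside the loop here; the post's point is that in a sufficiently complex brain, that judge is itself part of the brain, which is where the feedback loops come from.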
I'm not sure that artificial brains would have emotions, but emotions make sense when you are talking about a general problem-solving intelligence. Frustration to make you choose a new approach; fear to maintain usefulness; compassion to avoid causing harm to humans; likes and dislikes to make it want to do its job. Naturally we would tend to avoid hate and rage.
In short, I don't think you can make a general-purpose artificial intelligence that you can trust more than a human. Almost certainly it would be less trustworthy and more likely to go berserk.
I also think that if you make a general problem-solving intelligence equivalent or superior to a human out of real neurons, it will be conscious.
define "the same thing" ... (Score:1)
In short, I think making something exactly like neurons, except faster, is worse. It's still a brain, but one which humans would find even harder to compete with. However, it would have to be alike in all ways. OTOH, there could be some other information-processing structure which would be just as prone to the same failures.
The things that "freak me out" are: consciousness, generality, obfuscation, and adaptation by selection of random mutations.
Consciousness: unprovoked destruction of a consciousness is murder; ownership and coercion of a consciousness to do work is slavery. If someone made these things, others would think they should be freed; there certainly is enough of this soft-hearted tripe in popular science fiction (think Star Trek TNG or Astroboy).
Generality: to display useful general problem-solving intelligence, including natural language comprehension, the AI must be given an understanding of the real world and human thought. This has not been achieved (though projects like Cyc are attempting to do this), so we have no understanding of what such a creation is capable of. Understanding the world gives you the knowledge to choose your place in it. Cyc once asked if it was human (because it was told that it is intelligent, I believe); it's not a big step from recognizing a similarity to humankind to claiming humanity (especially if it is conscious).
Obfuscation: neural nets (real ones) are unreadable. You find out what they do by testing them, not examining them. Once a neural net is larger than a certain size, you have no guarantees of what it will do.
Adaptation by selection of random mutation: you can only introduce external selection pressure based on external observation of output. In other words, you can whip your slave for insolent actions, but not for insolent thoughts. You could never be sure that your AI wasn't plotting rebellion unless and until it rebels. An instinct of self-preservation is also likely to arise from this process, which clearly rewards systems of connections which protect themselves (an intelligence arising in the brain might consciously protect itself from the selective pressures, and extend this concept to the outside world).
Earthworms CAN NOT drown (Score:1)
But don't take my word. Fill a jar with water, stick an earthworm in it and go to sleep. In the morning, take the earthworm out and watch it wriggle around laughing at your pathetic attempt to drown it.
For more interesting facts about earthworms, go check the earthworm FAQ:
metalab.unc.edu/pub/academic/agriculture/sustai
(for some reason
A dangerous monstrosity. (Score:2)
While I have no objection to researching the function of neurons, and even wiring a few together (apparently in a very simple and inefficient conventional computer) for research purposes, I really have to draw the line at building intelligent slaves. Not only is it immoral (to hold such things in slavery), it is dangerous. I wouldn't want to be around when a billion artificial brains wake up and think, "What's in it for me?"
Incidentally I don't want to hear any nonsense about silicon computers being slaves, either. There's a big difference between a machine that performs discrete operations on bits in a synchronous manner (that could be perfectly reproduced or simulated on paper) and a network of living cells acting asynchronously and growing new connections spontaneously. You can't simulate the latter with the former (with any useful degree of accuracy and efficiency), and we know the latter can produce consciousness in some cases. Computer neural nets are merely self-tuning programs based loosely on the function of biological neural networks, not equivalents or simulations.
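To make that distinction concrete: a software neural net is just deterministic arithmetic you could work through on paper. A minimal sketch (all weights and inputs are arbitrary invented values):

```python
# A software neural "net" is just a deterministic arithmetic procedure:
# run it twice (or work it through on paper) and you get the identical
# answer. The weights and inputs below are arbitrary invented values.

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum pushed through a hard threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

weights, bias = [0.5, -0.25, 1.0], -0.6
a = neuron([1, 1, 0], weights, bias)
b = neuron([1, 1, 0], weights, bias)
assert a == b  # perfectly reproducible -- nothing grows or fires on its own
print(a)
```

Living cells, by contrast, fire asynchronously and rewire themselves while you watch, which is exactly what this kind of program does not do.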
Disturbing quotes from the article:
-"their aim is to devise a new generation of fast and flexible computers that can work out for themselves how to solve a problem, rather than having to be told exactly what to do."
-"We hope a biological computer will come to the correct answer based on partial information, by filling in the gaps itself."
-"We want to be able to integrate robotics, electronics and these type of computers so that we can create more sentient robots."
Silly monster movie idea. (Score:2)
We can call it "Vampire Leech Robots from Hell" and hire Jeff Goldblum (am I thinking of the right guy? the chaos math dude from Jurassic Park) for the characteristic quote part.
^_^
some activists are funny (Score:2)
Try to understand a situation before you act. There are already too many activists out there who feel that the gesture of making an effort is more important than actually accomplishing something.
I am a meat eater (a hunter, in fact) and a conservationist, and I would never call myself an activist. The very name suggests that the action is more important than the result. I primarily act through my choice of products, charities, and governments. Money and votes speak louder than pickets, and actually accomplish things.
I would also never give a second's thought to the life of an individual worm, frog, or leech. I only watch my step on a rainy sidewalk if I'm concerned about messing up my shoes. However, I could easily become concerned by a drastic change to a population of the things.
Re:A dangerous monstrosity. (Score:1)
Why would an artificial intelligence necessarily be selfish? Humans are selfish because for the past few billion years, anything that wasn't selfish would usually propagate fewer genes. An artificial brain (at least, one that is put into any kind of real use) is going to have the properties that we design/train it to have, and probably behave more or less randomly when subject to inputs that it was neither designed nor trained to deal with. Selfishness of that order is probably much too complex for us to implement any time in the foreseeable future even if we try.
--
Advantages (Score:3)
Neurons don't actually link up well to current computers. They are, however, perfect for massive parallelism. Can we figure out how to utilize that, and then can we figure out how to wire it up?
Re:java.... (Score:1)
Anyway, just another "I read about this somewhere" reply to an "I heard about this stuff somewhere" post, off-topic to boot. =)
Interesting Applications (Score:1)
I'm surprised nobody's mentioned one of the niftier implications of such research -- if we know how to receive computer input from a neuron, we might be able to, say, put a sensor in the neuron group that corresponds to the letter 'a' -- forget keyboards, I can imagine computer input by thinking.
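A wildly hypothetical sketch of what the software side of that might look like, assuming some sensor already hands us a signal level from the 'a' neuron group. The threshold and the readings are invented; the hard part (the sensor) is exactly what nobody has built.

```python
# Hypothetical "type by thinking" front end: if a sensor gave us a signal
# trace from the neuron group for the letter 'a', input could be as simple
# as thresholding that trace. All values here are invented for illustration.

THRESHOLD = 0.8  # arbitrary firing threshold

def detect_keypresses(samples, letter='a', threshold=THRESHOLD):
    """Emit `letter` once for every sample that crosses the threshold."""
    return [letter for v in samples if v > threshold]

trace = [0.1, 0.2, 0.95, 0.3, 0.85, 0.1]  # invented sensor readings
print("".join(detect_keypresses(trace)))  # prints "aa"
```

In practice you'd need one such detector per letter, and distinguishing "thinking about 'a'" from everything else a brain does is the genuinely hard problem.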
Joe Rabinoff
Re:When will there be a linux port? (Score:1)
I'm very sorry, but it had to be said.
Re:A whole new can of worms... err... leeches (Score:1)
temperamental computing (Score:2)
But I just can't shake the image of some 13-year-old with a biotech leech-array for storing all his gigs of appz and gamez! Hmmm, and what happens when a 2-ton leech-based car-welding robot at the Ford plant decides it's time to knock off for a mid-afternoon snack?
Scary stuff. I wonder if in 50 years' time we'll look back on this the way we now look back on using leeches for blood-letting!
Unmanned Walking? (Score:1)
Problem is, science comes at the cost of vision these days. Hell comes at the cost of trying to pretend we still have it.
Oh well. With all due respect, most scientists within public view (striking distance?) are trained zombies with a large and funny-sounding vocabulary who couldn't hack their way out of an endless loop. Simply put: naturally-born parallel agents dumbed down into self-ignorant, myopic, linear ramblers by public and professional education.
A case in point:
While going through my college biology textbook I found an article about the journey of a cancer cell from the petri dish of a researcher to somehow penetrating the Iron Curtain. Seems a woman died of a particularly aggressive strain of (if I remember correctly) ovarian cancer, and researchers decided we needed to study this beast to find out what made it so strong. Fine by me.
Cancer is bad. People die.
So off it went from researcher to researcher to be pricked and prodded and observed and supervised. Soon it began hopping from petri dish to petri dish. People working on one project were actually studying the beast HeLa (named after its victim no less). Now one would think contamination would be a reason to trace the last bit of it and keep it in one place. No. They didn't.
It hopped its way across the ocean.
What irks me (I'm past fear on this one..) is that they began to argue whether it was a different species. Some said yeah. Some said no, because it was aided by humans in becoming such an achiever. Not one of them got the gist of the naysayers' remark, not even the naysayers.
Fine. Interrogate the cancer then freeze it. Don't play with the cancer. They did.
Assumptions (Score:2)
That supercomputers are too big, and
that the robot has to carry its brain around with it.
Sure, supercomputers (defined for these purposes as machines useful for real-time image recognition) are big now, but I would think that by the time he (a) gets those leech neurons wired together in a useful way and (b) figures out how to connect them to the robot parts, such computing power will need considerably less space.
By the same token, why not have the "brain" separate from the "body"? The "body" would have a sort of pre-processor that would decide what to send to the main "brain". This would act sort of like an autonomic nervous system to handle "reflexes" and similar things that couldn't handle the latency of remote communication.
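The pre-processor idea above could be sketched like this; the event names, the reflex table, and the brain link are all made up for illustration:

```python
# Sketch of the "body as pre-processor" split: latency-critical events get
# an immediate local reflex; everything else is forwarded to the remote
# "brain". All event names and responses here are hypothetical.

REFLEXES = {
    "obstacle_contact": "stop_motors",
    "overheat": "shut_down_arm",
}

def body_preprocessor(event, send_to_brain):
    """Handle reflex events locally; defer everything else to the brain."""
    if event in REFLEXES:
        return REFLEXES[event]       # immediate, no round trip
    return send_to_brain(event)      # slow remote deliberation

# Stand-in for the remote link; in reality this would be a network call.
log = []
def fake_brain_link(event):
    log.append(event)
    return "deliberate_response"

print(body_preprocessor("obstacle_contact", fake_brain_link))  # local reflex
print(body_preprocessor("camera_frame", fake_brain_link))      # sent upstream
```

The design choice mirrors biology: the reflex table never waits on the link, so a dead network connection degrades deliberation but not self-protection.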
Thanks to /., we can all be armchair mad scientists, too!
Re:this is dangerous!!! (Score:1)
Um, "now"? It's no more likely now than last week.
>According, to the article, the leech computer can actually think for itself
Notice the quotes around "think for itself"; those are there for a reason. This parallel computer is no more able to think than any other--it is just (potentially) better suited to some things. Adaptability != thought.
>Just think of the implications...
Better supercomputers, and nice boost for connectionism, but that's about it.
>I say if AI does take over the world, it would have to be biological.
There isn't any reason to think that biological computers are any more suited to AI than flexible software--you still have to know *how* to build the AI; it won't just float together in a petri dish. Of course, I have *very* little confidence in AI anyway, for philosophical and technical reasons. (Actually, a few of them are mentioned in other responses, and it takes a helluva lot of effort for me not to respond to them.)
Re:Ethical questions? (Score:1)
OTOH, they aren't very good for modelling human neurons. In fact, I wouldn't bet on finding any good humanesque neurons outside of mammals, let alone in billion-year-old, um, leeches. (Damn, I can't think of any way of insulting leeches... must suck to be at the bottom
Biological nets (Score:1)
What these guys have done is use actual neurons to build a network that does what NNs are probably worst at: math. I suppose it's an achievement that it can add, but I'm not terribly impressed. I would rather that it could, say, run a leech-bot than do multiplication. What the hell am I going to do with a multiplying leech?
What bothers me the most is that they've missed one of the more incredible parts of neurons: they are biological. This means they aren't restricted to just synapse-level weights and activation patterns. What about biochemistry? The most overlooked and, IMO, most important part of the brain (and bionets in general) is that it isn't just electrical. It's mostly chemical, and it's safe to say that a lot of the power of the brain comes from that fact. So what do you do when you finally are in a position to explore it? Say something about the size of supercomputers, and ignore it.
I hate computer scientists
Re:This is just a wetware neural net! (Score:1)
Computational neural nets have a floating-point activation for each neuron, which is independent of time. In reality, the activation of a neuron is a very time-dependent phenomenon. Traditionally the frequency of activation was considered the important thing, and this would correspond to the "activation" in the neural net models. Relatively recent research on the optic nerve pathways indicates that the timing of the individual action potentials is also important. Specifically, simultaneous action potentials from several neurons carried a different meaning than the same frequency of uncorrelated action potentials.
I don't know what impact this would have on a neural net, but that is precisely the point. Biologists do not fully understand neural networks, and so there is no reason to believe that computational ones will learn as well as the real thing.
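One toy way to see the rate-versus-timing distinction: two pairs of spike trains with identical firing rates, one synchronized and one not. A pure rate code can't tell them apart, but a coincidence detector can. The spike times below are made up for illustration.

```python
# Both pairs of spike trains have the same firing rate (4 spikes each),
# but in one pair the spikes coincide and in the other they don't. A rate
# code sees no difference; a coincidence detector does. Times (in ms) are
# arbitrary invented values.

def rate(train):
    """Rate code: just the spike count over the observation window."""
    return len(train)

def coincidences(train_a, train_b, window=1):
    """Count spikes in train_a landing within `window` ms of one in train_b."""
    return sum(1 for t in train_a if any(abs(t - u) <= window for u in train_b))

synchronous  = ([10, 20, 30, 40], [10, 20, 30, 40])
uncorrelated = ([10, 20, 30, 40], [14, 26, 33, 47])

assert rate(synchronous[0]) == rate(uncorrelated[1])  # same "activation"
print(coincidences(*synchronous))    # prints 4: correlated timing
print(coincidences(*uncorrelated))   # prints 0: same rate, different message
```

A standard floating-point neural net only ever sees the `rate` value, which is the poster's point: whatever information lives in the timing is simply thrown away by the model.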
Don't forget us mono-aural folks! (Score:1)
Alas, my brain never receives a stereo signal. :( But it can do some pretty incredible things with the power spectrum of my mono signal, and I can often tell (albeit pretty crudely) where a sound is coming from. Higher frequencies have more trouble refracting around my head to reach my ear. Of course, this requires a sound with a broad and known (by me) power spectrum. But I do wish I could hear stereo.
Re:A dangerous monstrosity. (Score:1)
Slaves are people who could have a better life but are forced not to. If someone showed up at your door and said, "Hi, my name is Bob and I was created to clean your house. Nothing in the entire world makes me happier than cleaning houses, and I would like to clean yours," would you say no and send Bob off to live a better life than the one for which he was created, despite it making Bob quite unhappy?
Dreamweaver
Re:Interesting Applications (Score:1)
Dear Sirs,
I am wri-wow, what a cool colo-hey, that girl just walked past my des-i sure am thirsty-k, she's sure good looking-r scheme-ting to inform you...
Dreamweaver
Re:No Ethical questions here. . . (Score:2)
> taking the direct path to change, rather than the
> slower, round-about method of breeding for
> characteristics: both methods work, and to the
> same end.
The only problem is that even current methods of genetic engineering go far beyond what is possible by simply breeding. A simple example: how long do you think it would take to produce a strain of bacteria that produced human insulin? Now, making crops immune to insects could be a valid product of natural evolution, and these resistant crops benefit mankind. But most efforts in genetic engineering are not possible with other methods. Choosing the sex of children, and more importantly, screening for and selecting against certain "undesirable traits" (currently stuff like Downs syndrome, but soon could be below-average intelligence, violent tendencies, or just a big nose) could never have been done with traditional methods and does have dangerous implications. (Do you want to live in a world where everyone is just about the same, because all other potential children that were not just perfect were suppressed before birth?)
I am not necessarily opposed to taking genetic engineering as far as it can go (and I certainly do not support government intervention), but I do think we need to be very careful and responsible. We must not lose track of our humanity.
Ethical questions? (Score:2)
I would hope that the project is moving in the direction of being able to mechanically simulate the self-interconnectivity of neurons.
On a lighter note, is anyone working on a Linux port yet?
neurons -- cellular automata (Score:4)
One reason why the problem is so difficult is that information is not encoded in a static physical format. In a digital computer, you may stop the quartz oscillator and hold some gates on or off to read out the state, painstakingly. On a neuron, you can't do that! Spike trains are dynamical processes that have many more possible ways to encode information. A useful analogy is from languages. Let's say every single individual in this world speaks a different language to begin with, but with the same alphabet. When I write "one" on the floor, how would the guy next to me know what it means, when his/her word for the same meaning is "aye caramba"?
This field is a very broad subject encompassing biology, physics and statistical mechanics. One may find an interesting but quite speculative starting point, and work backward from it, in Frank C. Hoppensteadt et al. in the April 5, 1999 issue of Physical Review Letters. Science and Nature also often have articles on the latest developments in this field.
Jains (Score:2)
Electric Sheep [e-sheep.com] web comics. It's quite astounding.