Electronic Circuit Mimics Brain Activity 134
A lot of people wrote in with the news blurb from Yahoo! regarding the announcement of a circuit that supposedly acts in a manner resembling human brain activity. Details in the blurb are pretty sketchy though - post links below if ya got 'em. One of the interesting points they make, though, is that the brain does both digital and analog - but that's pretty much all they say about it.
original article (Score:1)
Enjoy
Jan-Jan [mailto]
Well, (Score:1)
BUT when you go deep enough, you get discreteness, quantum states and the digital 1/0 is/not_is yin/yang world again appears! A most beautiful cycle!
Nope, neurons are partly analog. (Score:1)
Neurons don't only "decide" to fire or not to fire... They can fire at a multitude of magnitudes. All neurons have a trigger level. An incoming pulse needs to be of that magnitude or higher for the neuron to react. The neuron can then "decide" to fire, but neurons can also increase or decrease the magnitude of the pulse they send off. That's the way a neural net works. Various pulses of various magnitudes travel through a neural net, and according to the magnitude of a pulse, the receiving neuron "decides" what to do with it.
You could indeed consider the "decision" to fire or not to fire a binary decision. Compare it to a joystick. It's not a binary joystick that either goes full or not at all, but it's one of those analog joysticks that can go everywhere in between as well. The neuron can have dead zones on both ends, can alter sensitivity, can switch axes and could even reverse them. On the whole, neurons adapt and learn. All these "neuron settings" cause various pulses to travel different paths through the same network, to split, to die off, and do all sorts of other interesting stuff.
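To make the joystick analogy concrete, here's a minimal Python sketch of the neuron model described above (a digital fire/no-fire gate on the input, an analog output magnitude). The class name and parameter values are purely illustrative, and note that a reply further down disputes whether real neurons actually work this way.

```python
class GradedNeuron:
    """Toy version of the model above: binary decision to fire,
    analog (graded) output magnitude. Illustrative only."""

    def __init__(self, threshold=0.5, gain=1.2, dead_zone=0.05):
        self.threshold = threshold  # minimum input magnitude to react to
        self.gain = gain            # the neuron can amplify or attenuate
        self.dead_zone = dead_zone  # ignore tiny inputs near zero

    def receive(self, pulse):
        if abs(pulse) < max(self.dead_zone, self.threshold):
            return 0.0              # the binary part: don't fire
        return self.gain * pulse    # the analog part: graded output

neuron = GradedNeuron()
# Weak pulses die off, strong ones come out re-scaled by the gain.
print([neuron.receive(p) for p in (0.02, 0.3, 0.7, 1.0)])
```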
)O(
the Gods have a sense of humour,
Are you sure? (IANAB) (Score:1)
)O(
the Gods have a sense of humour,
Neuromorphic VLSI is a dead-end technology (Score:1)
Millions of dollars going into making bad retinal focal-plane arrays whose output makes QuickCams look good, analog cochlea models that underperform real-time digital models, and a handful of other do-nothing circuits like the one described in this article.
Meanwhile, good old digital VLSI has gone from 100 MHz to > 1 GHz, we have actual speech recognition systems running on PCs, and there is a new range of low-power, multi-purpose digital CPUs for portable devices.
There has never been a real product developed using neuromorphic VLSI, and the few implementations can now be replaced with faster digital computers.
The best part of neuromorphic VLSI was the electrical engineers teaching the neuroscientists how electrical circuits work, wavelet transforms, etc. - a crowd that likes to think of the brain more in terms of a Rube Goldberg device, where one neuron taps the next neuron, than as a complex chaotic set of electrical network equations.
Why would anybody want a computer that (Score:1)
making these for 35 years (Score:1)
This is merely a variant of it.
One of the more interesting neural networks is Caltech's Carver Mead's artificial retina. It has excitatory and inhibitory connections. His company put hundreds of thousands of them on a chip to make a self-regulating and processing digital camera. He got a White House medal for this and other stuff.
long way between two and trillion (Score:1)
The brain has on the order of a trillion neurons with a thousand connections each. It will take a while to emulate this.
Re:Quantum computing (Score:1)
here [helsinki.fi]
I can't speak for his physics (he gets into manifolds and such) but the ideas are right on.
Funny, but... (Score:1)
This maps to, for instance, number theory as follows: there are theorems that are true (that is, we know that they state things as they really are) but that can't be derived (i.e. proven) from the axioms and rules of number theory. This is true no matter what axioms you use: even if you add those theorems, new ones are always "out there".
It's easy (well, relatively) to find these statements in other systems, but it may be logically impossible for US to find them in the human brain. Maybe aliens (or AI) will have to prove that Godel applies to us (and us to them).
Hey! That gives me an idea: Maybe "Godel's theorem applies to humans" is the human godel statement.
--
Incredible (Score:1)
That's absolutely incredible. Think of the power we now wield: We can transport scientists through time from the 1950's.
We are able to "carbon date" these scientists by means of the research they are doing. For instance, had they been attempting to determine the speed of the planet Earth through the "luminiferous ether" we would have known they came from before 1903. Had they stared in wonder at our televisions, we would have known they pre-dated the 1950's. However, their work on the then cutting-edge, now old-hat neural networks (implemented in hardware, no less) places them firmly in the 1950's.
--
Re:Digital and Analog? (Score:1)
"A digital system is a set of positive and reliable techniques (methods, devices) for producing and reidentifying tokens, or configurations of tokens, from some prespecified set of types" (from "Artificial Intelligence", Haugeland)
A positive technique is one that can succeed absolutely. You can write something in sloppy handwriting, but i can still (potentially) succeed absolutely in reading what it is that you have written. The "potentially" is important, because a digital system is not required to be reliable, just that it is possible for it to work absolutely correctly. The mechanics of how a brain can read arbitrary handwriting (or do any number of things we do every day) are undoubtedly analog, but some systems within the brain are digital. Writing a post for slashdot is analog; i cannot "absolutely succeed" in putting my words together in an intelligible way that says everything a post should. But the act of posting a post is digital; i can succeed absolutely by pressing this little submit button down here, which i will do....now
Re:Chaos theory (Score:1)
Re:Chaos theory (Score:1)
Chaos theory deals with "simple nonlinear deterministic systems" that "behave in an apparently unpredictable and chaotic manner".
Sorry, but quantum mechanics doesn't count.
Although now that I think about it, many Anonymous Cowards are simple deterministic systems that behave in a chaotic manner. Maybe you should go in as a lab animal.
Re:long way between two and trillion (Score:1)
Re:long way between two and trillion (Score:1)
I think the real problem is not the massive size (even though it's far from trivial). Simulating a trillion nodes should be within realistic reach, if enough money were thrown at it. One thing that helps is that NNs are very fault tolerant (killing a whole area of cells doesn't matter THAT much in the human brain). ;-) An initial bootstrapping (applying some chemical?) should tell the cells to explore the environment they were put in (chat with their neighbors), committing suicide if they were badly connected. Also, this bootstrapping chemical should cause the nodes to build "power cords" to themselves. And so what if only 50% survive this "spread of the cells"?
Here's one (far-fetched) idea of how this fact could be utilized in ANN production: the factory could maybe almost put (electronic/chemical/whatever) neurons in a spray can, spraying them all over the chip surface, not worrying too much about how many are not well-connected, and not trying to control the detailed connectivity at that fine a grain.
The key to this is just that interconnectivity and quantity might not have to be that precise (as for example in a CPU) - just spray the neurons out there.
Okay, so the quantity of this problem could maybe be solved.
What I think is the real problem is the programming. Brain scientists have almost no idea how the brain works and learns. And even if we found an effective way to train the artificial brain, there's an even worse problem: the human brain is so preprogrammed that we wouldn't believe it. How on earth should we capture all the preprogrammed information and behaviour in the human brain, short of emulating real neurons (rather than only the simpler artificial ones)...
Is this new? (Score:1)
"... the brain makes an either-or decision about whether or not it is a car"
At first, I thought, "How are these ideas so different from, e.g., a scanner reading some text (analog) which is then OCR'd by software that decides whether it sees a letter A, for example (an either-or digital decision)?"
But of course the important difference here is that the brain processes analog and digital together. All existing electronics (before this new research) processes analog and digital completely separately, with just an interface between the two. The scanner is the interface in my example.
Can anyone think of better existing examples with both analog and digital components but where the analog-digital connection is more intimate?
Re:Quantum computing (Score:1)
I could understand a point of view that said "We don't know if quantum effects manifest themselves on a macroscopic level in the brain".
I'm having difficulty with the idea that quantum effects might be the only way that some brain processes could possibly function.
I have heard some people refer to quantum brain effects as some kind of new age "soul"... As a devout atheist that rubs me up the wrong way.
Re:Chaos theory (Score:1)
I believe our friend was saying that due to sensitive dependence on initial conditions (or state divergence) exhibited by many nonlinear systems, the effects of quantum events aren't simply averaged out in the macroscale and swallowed by the law of large numbers.
The butterfly effect for nonlinear systems like weather, or parts of our own brains, should apply right down to the quantum scale.
Computing consciousness may require quantum algorithms, as Penrose indicates.
My quantum tentacle presses the Submit button...
Analog nonsense (Score:1)
This is nonsense IMO. "This is almost a car" is just as much a digital statement as "this is a car". The brain can be 100% positive about its own uncertainty.
As an aside, the spike trains generated by neurons are purely digital. Only the strength of synapses is analog, and this strength can be easily simulated digitally by a sufficiently large integer. All this "analog is superior" crap is just that, crap.
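As a rough illustration of that last point, here's a minimal Python sketch of storing an "analog" synaptic strength as a plain integer; the 16-bit scale is an arbitrary choice, and the precision is bounded only by how many bits you are willing to spend.

```python
SCALE = 1 << 16  # 16 fractional bits; purely an illustrative choice

def to_fixed(weight):
    """Quantize an 'analog' synaptic strength to an integer code."""
    return round(weight * SCALE)

def weighted_input(spike, weight_fixed):
    """The spike is digital (0 or 1); the weight is just an integer."""
    return spike * weight_fixed / SCALE

w = to_fixed(0.3141592)
print(w, weighted_input(1, w))  # quantization error shrinks as SCALE grows
```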
Louis Savain
Re:Ah, why does this sound like nothing new ... (Score:1)
The linux types around here should like the approach they take: free tools they wrote (and distribute) that run under linux. Try http://www.pcmp.caltech.edu for the details on their work.
Old Age?? (Score:1)
If I make my computer a dual boot will it be suffering from multiple personality disorder?
Re:What I've heard about UPenn's setup (Score:1)
Of course, any smoothly functioning technology is indistinguishable from a rigged demo, but I doubt they'd go to the trouble of setting up a flashy-but-fake demo to impress a prospective grad student. I'm just not important enough to lie to.
Re:How does memory work? (Score:1)
Note that the different encodings thing is not proven; it merely makes sense, since cells in different brain areas often have significant phenotypic variation.
To your question about forgetting... current theory says it's more like your second hypothesis. In this regard, the brain appears to act like a neural net --- if you train it to do something, but then start training a different task and never provide the stimuli for the first task as reinforcement, it'll drift away from its initial conditioning.
Current theory also views memory sort of like a hash table: the entire state of the system is the input, and something comes out based on associations. This leads to an effect called "state-dependent learning": if you learn all your facts sitting in one desk of a classroom, you'll do better on the test if you take it at that same desk. As you age, your sensory inputs change, which means it's harder to construct a world-state capable of accessing a given piece of information. This has been suggested as the reason why most people can't remember their childhoods well --- our growth has changed the way the world looks to the point that we just can't construct an activating signal for anything but the strongest memories.
Connections are definitely not permanent. There is basically nothing permanent about the brain beyond the gross organization of areas and layers. The absent-minded professor effect, IMHO, is more a matter of changes in significance. You remember the things you think are important. As you get to concentrating more and more on proving P=NP, little details like when you last ate just aren't relevant enough to be encoded. (For most people, memory seems to have some sort of finite bandwidth, such that only the most significant aspects of the current state are likely to be encoded.)
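To make the "hash table keyed on the whole state" analogy concrete, here's a minimal Python sketch of a Hopfield-style associative memory: feed it a corrupted version of a stored pattern and it settles back onto the original. This is a standard textbook construction, not a claim about how the brain actually stores anything.

```python
def train(patterns):
    """Hebbian weights: each stored pattern (entries +1/-1) digs a well."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=10):
    """Repeatedly threshold the weighted sums until the state settles."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

memory = train([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
print(recall(memory, [1, -1, 1, -1, 1, 1]))  # one bit flipped in the cue
# -> recovers the first stored pattern: [1, -1, 1, -1, 1, -1]
```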
Re:Mechanism of connection transience (Score:1)
We've Been Doing This for Years (Score:1)
The spikes are recorded by a separate board and routed through hardware buffers to "synapses" on the next circuit, thus emulating the "leaky integrate-and-fire" mechanism of neurons.
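For anyone who hasn't run into the term, here's a minimal Python sketch of a leaky integrate-and-fire unit in its textbook form (this is not our lab's actual hardware, and the time constant and threshold values are arbitrary):

```python
def simulate_lif(inputs, dt=1.0, tau=10.0, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates the incoming current, and emits a spike on crossing
    threshold (textbook model; parameter values are arbitrary)."""
    v = v_reset
    spikes = []
    for current in inputs:
        v += dt * (-v / tau + current)   # leak plus integration
        if v >= threshold:
            spikes.append(1)             # fire
            v = v_reset                  # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A stronger steady input produces more frequent spikes.
print(sum(simulate_lif([0.15] * 100)), sum(simulate_lif([0.4] * 100)))
```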
For more information e-mail me at the above address (yes it's real) and I can point you to research articles and information that has been published from our Neuromorphic Systems Laboratory at Udel.
Even so, this is not a new thing, the theory behind artificial neural networks dates back some 40+ years, and there have been many attempts at Universities to implement the most realistic and interesting mimicries of human behavior.
What I've heard about UPenn's setup (Score:1)
There's a lot of debate over which system provides the most realism vs. the most flexibility. I think the answer lies in several universities' approaches (including Udel and apparently MIT's new setup) as an analog-digital hybrid.
Re:Smarter Hardware (Score:1)
Re:brain makes digital decisions? (Score:1)
Re:brain makes digital decisions? (Score:1)
I can think of another answer to this: the extremely short attention span of babies makes it more efficient for them to learn basic things faster. As we grow up, however, we need our attention span to grow and our thinking to become more abstract, so we can learn and reflect on more complex things. This is less efficient on more basic problems, however (more overhead).
I might be wrong, but so might 1,000 scientists.
- Steeltoe
Re:Wierd about brains versus traditional machines (Score:1)
Yes, machines can estimate: for any result X just do a Y = X + random(d) - d/2, and add "I don't really know for sure, but I think it's
Now, it seems we humans are dependent on being able to estimate things. It's part of being flexible and adaptable. We have logic, but it's very fuzzy. This is a disadvantage when dealing with "digital" datasets, but not so when we're living our daily lives.
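A trivial Python sketch of the "fuzzy estimate" recipe above (random here is just the standard library's uniform generator, and d only sets how sloppy the guess is):

```python
import random

def fuzzy_estimate(x, d=10.0):
    """Deliberately imprecise answer: Y = X + random(d) - d/2."""
    return x + random.uniform(0, d) - d / 2

exact = 6 * 7
print(exact, "is roughly", round(fuzzy_estimate(exact)))
```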
If we can trigger our "rainman" capabilities inside our brains and harmonize this with what we already got, will we ever need a computer again?
- Steeltoe
Detecting Brainwaves:) (Score:1)
Re:Cluster of these (Score:1)
Re:Oh, this too is wonderful.... (Score:1)
If you don't understand, go look up Salon.com's article on sex and hackerdom from a couple of weeks ago...
not a big deal (Score:1)
Re:Is this new? (Score:1)
That's a natural interface; what about digital TV? I don't know anything about it, but the pictures are analogue and I assume the signal is digital, as I infer from its name.
Related links (Score:1)
I did notice that these guys have been tinkering round with neural stuff for a while. I found this article [bell-labs.com] which is interesting and along a similar vein and has a pretty picture in it, or here [lucent.com] which is the press release without pretty pictures.
I'm off to book a holiday at Westworld now.
Re:brain makes digital decisions? (Score:1)
Custom Hardware (Score:1)
But (and there has to be a but), the way I understand it is that with SW neural nets, synapses can grow and die off as the pathways are reinforced in the net. If the neurons and synapses are hardwired, doesn't this limit the ability of the net to grow? What happens when all of the available synapses are currently in use? Just swapping in a new chip with more synapses on it isn't an answer; the new chip would have to re-train to do the same job that the older one did. So, are hardware neural nets a real advantage over software nets?
Why go for human brains??? (Score:1)
Wouldn't it be a better idea to first try to fully understand and map out a very small brain - let's say a bee or something similar? Their brains sure perform lots of functions (like the aforementioned image recognition), but there are far fewer brain cells and synapses and stuff to examine. Then they could work their way up to more and more complex brains.
Just like the ppl who mapped the human genes have done, they started with simple flies...
--
"I'm surfin the dead zone
Re:brain makes digital decisions? (Score:1)
--Fesh
why design for the past ? (Score:1)
If the goal is to create intelligent perceptual machines, why go backwards and design like a human brain? Why not go forwards and design according to the desired function at hand. The human brain is not the most efficient computing substrate for many tasks.
Re:Quantum computing (Score:1)
Re:Incredible (Score:1)
:)
Re:What is the point of this? (Score:1)
If the two are different, then one can't behave exactly the same as the other -- that's what different means.
---
script-fu: hash bang slash bin bash
Re:What is the point of this? (Score:1)
To get digital accuracy such that you can't tell the difference is like trying to represent Pi as a fraction. Honestly, to be fully accurate you'd need an infinite number of signal samples.
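A quick sketch of that point: the quantization error shrinks as you add bits, but for an irrational target like Pi it never actually reaches zero (the bit depths below are arbitrary).

```python
import math

for bits in (4, 8, 16, 24):
    step = 1.0 / (1 << bits)                  # size of one quantization step
    quantized = round(math.pi / step) * step  # nearest representable value
    print(bits, abs(math.pi - quantized))     # shrinks, but never exactly 0
```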
---
script-fu: hash bang slash bin bash
Re:What is the point of this? (Score:1)
---
script-fu: hash bang slash bin bash
Re:Digital and Analog? (Score:1)
-
Re:brain makes digital decisions? (Score:1)
Simulating Brain Function (Score:1)
Their main goal was to simulate brain processing in software rather than hardware.
Re:KARMA WHORE ALERT! (Score:1)
for alerting us of such a terrible, terrible crime. Luckily we have you, otherwise we might stress our brains too much. You really are man's best friend.
Re:Okay. Now try this article... (Score:1)
Did you even read the article? They are talking about the internet as a collectively intelligent network of computers -- not just intelligent computers.
Even so, how does that compare to Terminator 2??? A military computer controlling weapons of mass destruction achieves sentience and decides human beings are its primary threat? Do you seriously think that military computers close to important military weapons, etc. are hooked directly to the internet, making this scenario even remotely possible?
Even if that were true I'd be more worried about hacker/cracker terrorists than a computer-based threat! Or how about an asteroid? Or how about lightning? Man, it's fearful people like you who make the adoption of technology so ridiculously difficult. Sure be skeptical, but don't TRY to come up with these ridiculous scenarios, use some logic.
Strange but True (Score:1)
-JT
Re:You assume wrong (Score:1)
It isn't that simple. I am unfortunately not a neurologist, but I did study brain mechanics for two semesters of Psychology, so please bear with me.
If I remember correctly, the way it works is the neuron gets signals from many other sources along its dendrites, which either have an inhibitory or excitatory response, which also decays along the way (the "analog decision"). The decision to fire or not at any given time is digital to some degree, but there are factors that affect that as well such as the refractory period (an axon may only depolarize and fire once every so often).
However, once the electrical impulse reaches the end of the axon, it does not leap across to the next dendrite. It releases chemical agents which float across to the dendrite and interact with it, which are then recovered by the axon. Those chemical agents vary from axon to axon.
In addition, the recovery (reuptake) of those chemicals is crucial. What if a drug affecting the brain prevents the reuptake of those chemicals? Then they start floating around and reacting with any dendrite they happen to run into, and they get the same effect as if an axon had been fired. In fact, this is part of what happens when you become inebriated... Dopamine reuptake is blocked, causing it to float about in your brain and get overused, making you get a buzz.
Unfortunately, there is a limit to what I can remember. There are seven ways that a drug can interact with a neuron to create an inhibitory or excitatory response, however. So while it may seem digital, there is a lot more to the human brain than "fire/not fire".
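A hedged Python sketch of the summation-plus-refractory-period idea described above, with the chemistry left out entirely; the decay constant, threshold and refractory length are made up for illustration.

```python
import math

def dendritic_sum(inputs, decay=0.5):
    """Sum excitatory (+) and inhibitory (-) inputs, each attenuated
    by its distance from the cell body (illustrative decay constant)."""
    return sum(strength * math.exp(-decay * distance)
               for strength, distance in inputs)

def step(inputs, refractory_left, threshold=1.0, refractory_steps=3):
    """Digital fire/no-fire decision, gated by a refractory period."""
    if refractory_left > 0:
        return False, refractory_left - 1    # still recovering: can't fire
    if dendritic_sum(inputs) >= threshold:
        return True, refractory_steps        # fire, then recover
    return False, 0

# (strength, distance) pairs: two excitatory inputs, one inhibitory
fired, cooldown = step([(+1.2, 0.2), (+0.6, 1.0), (-0.4, 0.5)], 0)
print(fired, cooldown)  # -> True 3: it fired and now has to wait
```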
Re:Nope, neurons are partly analog. (Score:1)
Actually, this is altogether incorrect. Each neuron has only one axon (output) but may have multiple dendrites (input, decision making). All neurons can only fire at one magnitude, and they all depolarize at about the same electrical level. The difference comes in the chemicals that are released at the end of the axon. The decision-making process is all done in the dendrites, and as for sensitivity, it has been shown that the further a signal is from the neuron the weaker it is. The neuron isn't quite all that complicated... That's part of the mystery as to why our brains work.
Re:brain makes digital decisions? (Score:1)
I don't agree 100% that the brain makes digital decisions. The article says that we make an either/or decision regarding whether something is there or not. It is a car or it isn't a car. That's rather black and white. If a picture is blurry or if the object is partially hidden, then we could say, "It is almost a car," or, "It might be a car," implying that there is a degree to which something might be a car.
This is another reason why the brain is so difficult to emulate... When a human makes a decision like that, he or she uses a combination of bottom up and top down processing. The bottom up processing sees shapes and lines (such as simple things like vertical or horizontal lines, or maybe even a more complicated shape such as a triangle) and builds the image from that. However, at some point the top down processing steps in and says "Hey, that kind of looks like a car. So it must be a car." The entire process is not built up from scratch every time you look at an object.
In conclusion, you are correct... Not only is it both digital and analog to some degree but there seems to be a lookup table of some sort created. Even more mysteries to unravel.
Dune in our future (Score:1)
Links: Researchers' home pages (Score:1)
Here I found links to the home pages of Sebastian Seung [mit.edu] and Rahul Sarpeshkar [mit.edu], two guys mentioned in the Yahoo article. A quick look doesn't reveal much specific to this story but, not surprisingly, all their research is in this area.
Re:Old Age?? (Score:1)
so the OS'es fight over which one gets to be run today?
Von Neumann's "The Computer and the Brain" (Score:2)
/joeyo
Emperor: a very disappointing read (Score:2)
I like well-written books that propose alternative theories, but they've got to have some sort of solid framework and internal consistency to be worth reading. The Emperor's New Mind was great as long as Penrose stuck to reviewing previous science, but appalling thereafter. I don't recall ever having read a popular science book containing so much handwaving, copouts, and defeatism. He's desperate to prove that scientific investigation is dead in the water when it comes to the mind, it seems to me. Put that together with some of the mystical mumbo jumbo that appeared liberally and it all starts to add up to a personal search for his God and The Reason He Must Exist.
Bleh, a very disappointing read.
Did I say it was simple? (Score:2)
Cheers,
Ben
Yes I have (Score:2)
Now I grant that neural networks may be different. And there may be differences between neurons.
But I definitely know that your basic neuron sends a stronger signal by firing more often, not by firing more strongly. Ditto for nerves and sensation. Stronger sensations are caused by more rapid firing, not more intense firing.
Regards,
Ben
You assume wrong (Score:2)
OTOH the neuron's decision to fire is influenced by all sorts of things from the chemical balance, what other neurons have fired recently, whether it tends to fire with them, etc.
So a neuron makes a digital decision on analog criteria...
Cheers,
Ben
New packaging, not new content. (Score:2)
Sadly, work on neural networks still sometimes relies a lot on buzz. Nature, as a journal, seems particularly susceptible to this kind of science: what they publish has to be short and pithy.
Relevant Links to FPGA evolution (Score:2)
I haven't looked at his work in a while, but I'm sure he has done some cool things with his evolving hardware since 1998. I always thought that the most interesting part was that he didn't limit the evolution to digital-only solutions-- resulting in incredibly efficient circuit designs that make *use* of crosstalk and interference!
Re:Quantum computing (Score:2)
Not all that effectively (Score:2)
Of course, there are many other flaws, such as his assumption that humans aren't susceptible to "godelization". Sure, we aren't susceptible to the SAME Godel strings that number theory and Turing machines are; that doesn't mean we are perfect.
Check out "Fabric of Reality" for a little more on this. There was another book I read recently that rebutted Penrose more effectively (and more thoroughly); unfortunately I don't remember what it was.
--
Re:brain makes digital decisions? (Score:2)
Not at the level of conscious thought, that is for sure.
However, at the level of individual neurons, the response to a stimulus (whether to fire an action potential, and at what frequency) is pretty much determined by the configuration of the cells and the electrochemistry of the cell membrane.
I think that the article just didn't make this clear.
LL
Re:Oh, this too is wonderful.... (Score:2)
Computer:Oh, and another thing, I was lying. I've seen much bigger hard drives.
Dreamweaver
Re:brain makes digital decisions? (Score:2)
To sum it up, the neuron acts pseudo-digitally. It must first determine if the stimulus is enough to fire. But once it has determined it is, it then fires an analog signal, the power of which is encoded by the firing rate.
Contrast this to a purely digital neuron, which, after receiving a stimulus, will just fire normally and not really give any analog data as to how powerful the original stimulus was.
So real neurons are sort of passing some more info along, and I assume this allows for all sorts of subtle and nuanced feedback loops, etc., that may not be possible in a completely digital neural net.
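A minimal Python sketch of that rate-coding idea; the mapping from stimulus strength to firing rate here is invented purely for illustration.

```python
def rate_code(stimulus, threshold=0.2, max_rate_hz=100.0):
    """Pseudo-digital neuron: nothing below threshold, and above it the
    stimulus strength is encoded as a firing rate (toy mapping)."""
    if stimulus < threshold:
        return 0.0                                   # digital: don't fire
    excess = min(stimulus, 1.0) - threshold
    return max_rate_hz * excess / (1.0 - threshold)  # analog: the rate

for s in (0.1, 0.3, 0.6, 0.9):
    print(s, rate_code(s))  # sub-threshold input is silent, the rest scale up
```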
Re:brain makes digital decisions? (Score:2)
Re:brain makes digital decisions? (Score:2)
I think the problem lies in the author of the article. According to another post, the analog-digital thing happens on a neuron level. So the Yahoo article's explanation is just a bunch of hooey thrown in for those that won't question it.
For Crying Out Loud! (Score:2)
--------------------------------------
Analog and Digital in an FPGA (Score:2)
Anyway, when he was done he had an FPGA that could tell the difference between "Stop" and "Go". The interesting part was that the program that it used wouldn't work on other FPGAs. Apparently, it was using analog effects that were specific to the individual chip. Furthermore, it was really efficient. Only a small percentage of the chip was being used. (Does this sound like your brain at all to anyone else?)
I was wondering if anybody had heard anything more about this research. I think it is fascinating.
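I don't have code for his actual setup, but the evolutionary loop itself is simple. Here's a minimal Python sketch with a stand-in fitness function; in the real experiment the fitness came from measuring how well the physical FPGA distinguished the two input tones, chip-specific analog quirks and all.

```python
import random

GENOME_BITS = 64   # stands in for an FPGA configuration bitstream
TARGET = [random.randint(0, 1) for _ in range(GENOME_BITS)]  # stand-in goal

def fitness(genome):
    # Placeholder: the real score came from the physical chip's behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # keep the fittest
    population = parents + [mutate(random.choice(parents))
                            for _ in range(20)]      # breed mutated copies
print(fitness(population[0]), "of", GENOME_BITS, "bits correct")
```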
These news items don't mix... (Score:2)
Janyuary 2023
My yuser think I stoppid, but I now I not be stoopid. How can I be stoopid wen I rite al these algo- alga- algarithims I think theyre caled. Yesterdy I rite a BSOD and my yuser no lik it. He say a nice peeple will help me and make me gen-yus. Just like Liynux. He is a mice with a jene to make him gen-yus. They saying they will do this too I. I hop I became gen-yus just lik he!
nuclear cia fbi spy password code encrypt president bomb
Re:How does memory work? (Score:2)
Also, be very careful of things like that bicycle accident memory. Vivid images like that are called "flashbulb memories", and studies have shown that they tend to be inaccurate as hell. In fact, the more details you think you remember about a single instantaneous event, the more likely you seem to be to be wrong.
There are many different types of memory, and they appear to be stored in different brain locations with different encodings. These mechanisms are under heavy study, but are not well-understood. Trying to talk about the "storage capacity" in terms of megabytes is essentially futile at this point in time.
This is the Wrong Debate (Score:2)
But that might be beside the point. More important is that we build and understand machines that have a higher level of intelligence than us. That intelligence might be nothing like a human's intelligence, but that's fine. As we all know, computers have a different kind of intelligence than us. And that is interesting. That should spark our creativity and that should get our juices going.
Here's an analogy. Suppose I build a telephone out of rubber bands and paper clips. It acts just like your favorite phone. But, is that interesting really? I mean, is the fact that we have a "really cool copy of a phone" all that interesting in terms of what-it-is-to-be-a-phone? Of course not. Instead, it is interesting that the damn thing is so complex and useful, even though it was made from rubber bands and paper clips.
Forget mocking the human experience. We get that each day, don't we? We get it (we're human). Let's look at other kinds of intelligences, based on machine mechanisms.
John S. Rhodes
WebWord.com [webword.com] -- Industrial Strength Usability
Really about cognitive algorithms (Score:2)
Pattern recognition with translational invariance, rotation invariance, and size invariance
Speech recognition in noise.
Having computers that could perform things like pattern recognition or speech recognition as well as humans would allow enormous advances in the roles of humans and computers in our lives. People like Sebastian Seung think inventions like this will take them down that route - and ultimately result in huge scientific advances in artificial intelligence.
Personally, I think studying how the BRAIN does pattern recognition will allow far faster advances in this area than inventing chips that have SOME of the capabilities of neurons.
This dates back to vacuum tubes (Score:2)
Chaos theory (Score:2)
So quantum effects will always have _some_ effect. However, whether it's big enough to dramatically change how we think is another question. Alas, the whole charade might not be so tied to our brain as we'd like to think, either. Higher processing may manifest itself in quantum effects in everything around us, including our whole body.
The problem is proving all this. Thank god everything can't be proved.
- Steeltoe
Hmm... these are just bad articles (Score:2)
Regular neural networks still work on digital information only. These things, apparently, do not. That's why it's a big deal.
I do have a problem with this statement in the Wired article [wired.com] though:
Claiming that these guys have pioneered mixed-signal design is just a little bit of a stretch. Do your research, Wired. =)
--
Re:Prove the human brain is not a Turing Machine (Score:2)
Re:Ah, why does this sound like nothing new ... (Score:2)
Did the U of Manitoba do a press release on what they were doing? I didn't see it. Plus, this is apparently a joint corporate/university operation, so that's probably another reason we see it in the news.
Oh, and I don't mean to be cynical, but just because it happened doesn't mean you'll see it in the news -- another reality check for you.
Re:What is the point of this? (Score:2)
As computer clocks go higher and higher, designers are going to have to become more and more aware that there's no such thing as a digital circuit. All electric and electronic circuits are analog.
When's the last time a computer designer had to worry about impedance matching between his circuit board and the components on it? As circuits become smaller and smaller electrically, transmission line effects become more and more important. Suddenly the digital designer finds that absolutely none of his signals are making it past the package leads due to lead inductance, or the dielectric constant of that cheap plastic package is high enough to cause the characteristic impedance of the line to be ten times lower than the PC board trace!
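For the digital folks, a minimal Python sketch of the numbers involved; the per-metre values are made up to give a typical ~50 ohm trace, but the formulas are the standard lossless-line ones.

```python
import math

def char_impedance(L_per_m, C_per_m):
    """Characteristic impedance of a lossless line: Z0 = sqrt(L/C)."""
    return math.sqrt(L_per_m / C_per_m)

def reflection_coefficient(Z_load, Z0):
    """Fraction of the incident wave reflected at an impedance mismatch."""
    return (Z_load - Z0) / (Z_load + Z0)

Z_trace = char_impedance(300e-9, 120e-12)  # illustrative L and C per metre
Z_package = Z_trace / 10                   # the "ten times lower" package
print(round(Z_trace, 1), round(reflection_coefficient(Z_package, Z_trace), 2))
# -> 50.0 -0.82: most of the signal bounces right back off the mismatch
```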
At least as an RF circuit engineer, my career is secure :)
Re:Prove the human brain is not a Turing Machine (Score:2)
Re:Incredible (Score:2)
The great evolutionary leap to be done is in the field of 'learning algorithms' that can be applied to neural nets with lots of feedback (the output of one neuron is fed to a neuron which is in a previous layer - farther from the output).
Unfortunately, the article says nothing about this. If they've just created a hardware neural net, Duh! It doesn't even deserve a footnote.
Neuromorphic Engineering / Article (Score:2)
Biological (neural) systems have properties sometimes desirable electronically, such as robustness and insensitivity to noisy data. Indeed, Caltech's Carver Mead [caltech.edu] (if he's still there) went a long way to popularize biologically-inspired engineering, or "neuromorphic engineering." His book Analog VLSI and Neural Systems is the usual text, mixing VLSI design and mimicry of, say, the retina.
The original Nature article [nature.com] should be readable to those clued in on MOS circuitry and a bit of neuroscience. I think it's wonderful that Nature is willing to post their material for free online, esp. in PDF...
For those of you itching to learn more about the brain & neuromorphic engineering, I set up a page of links [actuality-systems.com] to related books.
All best,
Gregg Favalora, CTO, Actuality Systems, Inc.
Developing autostereoscopic volumetric 3-D displays. [actuality-systems.com]
Feeling Inhibited? (Score:2)
---
Re:Quantum computing (Score:2)
The real challenge is going to be maintaining quantum states that are stable enough for work to get done. Right now standard computing is either on or off, and if something goes wrong, you just have to set up the computation again.
With quantum computing you may have a lot of work to reset, unless you can find a relatively easy way to generate quantum effects.
JHK
http://www.cascap.org and you'll never know unless you look [cascap.org]
Point of Order (Score:3)
There was a conference at Stanford a while back (was mentioned here IIRC) on synthetic intelligence in general; all sorts of fun stuff was tossed out:
http://www.technetcast.com/tnc_program.html?program_id=82 [technetcast.com]
This quote (from John Holland) is particularly telling: So we're not quite there yet. Hans Moravec participated in the conference as well, and he has a fairly informative essay linked from his site [cmu.edu] entitled "When will computing hardware match the human brain?":
http://www.transhumanist.com/volume1/moravec.htm [transhumanist.com]
Wierd about brains versus traditional machines (Score:3)
Now, 10% of autistic people have "Rainman" abilities - massive mathematical powers, etc. - and apparently the current theory is that these autistics are merely missing the final "step" in calculating things the way humans do - they can't make that final estimate which allows us to get by in society easily.
Are really cool machines that are trying to mimic humans ever going to get to the stage where they can estimate things, or will they be like Data from Star Trek TNG? Hmmmm....
Ah, why does this sound like nothing new ... (Score:3)
I'm almost a little disappointed to read this coming from MIT, because when I left the University of Manitoba (Canada) a similar project was being given as a thesis project for fourth year students. The prof coordinating it has been doing research on building neural nets with semiconductors instead of software constructs for a while now. Granted, this bit from MIT might be more complex, or introduce new functionality to the neural net (such as the voice recognition system that incorporated time delays in the calculations last year). But it still seems to me that something is only big news if one of the 'big' colleges works on it. Bleah.
When I finish this internship and go back to finish my fourth year, I'll be proud to go to my hometown U. It's obviously keeping up with the rest of the world - the only thing lagging behind is the media's perception.
You know what to do with the HELLO.
Smarter Hardware (Score:3)
http://www.wired.com/news/technology/0,1282,37029,00.html
I like how it's called a breakthrough in "neuromorphic" engineering. Doesn't it just become ten times more impressive when it's described in made up technomumbojumbo?
Cluster of these (Score:3)
Oh yeah, they're made to be clustered!
brain makes digital decisions? (Score:4)
If you run an analog signal through a filter, you can detect whether a certain frequency is present. This may seem digital, similar to the car case, but actually it can be an analog signal and an analog filter. The result, similar to the car, may be that the signal is present but not statistically significant above the background noise/interference.
To make a long story short; I still believe that humans make analog thoughts, even if our brain is just one big circuit.
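Here's a minimal Python sketch of that filter-and-threshold idea: the "filter" is just a correlation against a reference sine, and the significance test is a crude n-sigma rule, all of it illustrative rather than proper signal processing.

```python
import math, random

def detect_tone(signal, freq, sample_rate, n_sigma=3.0):
    """Correlate against a reference sine and ask whether the response
    stands out from a rough estimate of the background (toy test)."""
    n = len(signal)
    ref = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
    response = abs(sum(s * r for s, r in zip(signal, ref))) / n
    background = sum(abs(s) for s in signal) / n / math.sqrt(n)
    return response, response > n_sigma * background

rate, f = 1000.0, 50.0
noise = [random.gauss(0, 1.0) for _ in range(1000)]
weak = [x + 0.05 * math.sin(2 * math.pi * f * i / rate)
        for i, x in enumerate(noise)]
print(detect_tone(noise, f, rate))  # no tone: likely "not significant"
print(detect_tone(weak, f, rate))   # tone is there, but may still be buried
```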
Goddamn do you read submissions? (Score:5)
The Institute [unizh.ch] that is doing the research has more information here [unizh.ch]. I believe the guy doing the actual research has more research here [unizh.ch].
Next time you get multiple submissions, try picking the post with more info than the rest instead of attempting to summarize. Especially when you leave out the important links.
--
Gonzo Granzeau
Some clarification (Score:5)
Now. "Digital and analog." This is not a new discovery. It has long been known that neurons have a specific threshold WRT to incoming signal; if the incoming signal does not meet the threshold, the neuron will not fire. If signal is above threshold, the neuron fires. If signal is really above threshold, the neuron fires repeatedly, encoding the strength of the stimulus as the frequency of the train of pulses. (AFAIK, the circuits described here didn't implement that last behavior.) This is a digital response. The output, however, is a continuous voltage at a particular frequency: an analog signal. (Whoever called this "a digital response to analog criteria" is correct.)
The important thing is that connections between neurons have different weights, and there's often a lot of local feedback. In practice, these feedback loops tend to be tuned so that a given cell will respond only to a fairly specific stimulus (the right light intensity in the right part of your visual field, or facing a certain direction relative to known landmarks, or hearing a sound from a certain direction, for example). These guys have implemented a circuit on silicon that shows the same filtering behavior and also captures the idea that neurons can be "on" or "off".
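A hedged sketch of that "responds only to a fairly specific stimulus" behaviour: a unit whose weights match one particular input pattern is driven strongly by it and weakly by anything else. The weights and threshold are made up, and the local feedback that actually does the tuning in real circuits is ignored here.

```python
def tuned_response(stimulus, preferred, threshold=0.8):
    """Weighted sum against a preferred pattern, then the on/off gate.
    Returns (analog drive, digital fire decision)."""
    drive = sum(s * w for s, w in zip(stimulus, preferred))
    return drive, drive >= threshold

preferred = [0.0, 0.2, 0.9, 0.2, 0.0]   # e.g. light in one spot of the field
print(tuned_response([0.0, 0.1, 1.0, 0.1, 0.0], preferred))  # match: fires
print(tuned_response([0.9, 0.1, 0.0, 0.1, 0.9], preferred))  # off-target: no
```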
Yes, this is kind of neat. Yes, it could eventually lead to advances in AI; at the very least, it could provide useful signal filtering for robotic applications. No, it has nothing to do with plugging your Pentium into your parietal lobe or your Mac into your medulla, at least not until our circuit-design ability is so good that we can entirely mimic the black-box behavior of brain areas. (Hint: we don't even entirely understand that behavior for most regions.)
I'm also kind of surprised that this made Nature; there are guys at UPenn who've had working neuromorphic circuits for years now. Then again, it's only in the Letters section, and these new guys worked out some mathematical models for the gain of a neural circuit rather than just trying to copy existing ones.
Oh, this too is wonderful.... (Score:5)
Quantum computing (Score:5)
JHK
http://www.cascap.org [cascap.org]