Mapping the Brain's Neural Network
Ponca City, We Love You writes "New technologies could soon allow scientists to generate a complete wiring diagram of a piece of brain. With an estimated 100 billion neurons and 100 trillion synapses in the human brain, creating an all-encompassing map of even a small chunk is a daunting task. Only one organism's complete wiring diagram now exists: that of the microscopic worm C. elegans, which contains a mere 302 neurons. The C. elegans mapping effort took more than a decade to complete. Research teams at MIT and at Heidelberg in Germany are experimenting with different approaches to speed up the process of mapping neural connections. The Germans start with a small block of brain tissue and bounce electrons off the top of the block to generate a cross-sectional picture of the nerve fibers. They then take a very thin slice, 30 nanometers, off the top of the block. 'Repeat this [process] thousands of times, and you can make your way through maybe the whole fly brain,' says the lead researcher. They are training an artificial neural network to emulate the human process of tracing neural connections to speed the process about 100- to 1000-fold. They estimate that they need a further factor of a million to analyze useful chunks of the human brain in reasonable times."
Missing tag (Score:2)
Re: (Score:3, Interesting)
You mean ANNs ... (Score:1)
Feedforward NNs versus biological NNs (Score:4, Interesting)
Recurrent neural networks (Score:2)
Re:Missing tag (Score:5, Informative)
A neural network (well, anything more complex than the single-layer perceptron [wikipedia.org] anyway) is an arbitrary classifier. I'm curious as to why other methods are "much better". Unless you do an exhaustive search of the feature-space, all classifier methods are subject to the same limitations - local maxima/minima (depending on the algorithm), noise effects, and data dependencies. All of the various algorithms have strengths and weaknesses - in pattern recognition (my field) NNs are pretty darn good, actually.
It's also a bit odd to just say 'neural networks' - there are many, many variants of network, from Kohonen nets through multi-layer perceptrons. But focussing on the most common (MLPs), there's a huge amount of variation (radial-basis function networks, real/imaginary space networks, hyperbolic tangent networks, bulk-synchronous parallel error correction networks, error-diffusion networks, to name some off the top of my head), and many ways of training all of these (back-prop, quick-prop, hyper-prop, batch-error-update, etc. etc.). I guess my point is that you're tarring a large branch of classification science with a very broad brush, at least IMHO.
Not to mention that this is all the single-network stuff. It gets especially interesting when you start modelling networks of networks, and using secondary feature-spaces rather than primary (direct from the image) features. Another part of my thesis was these "context" features - so you can extract a region of interest, determine the features to use to characterise that region, do the same thing for surrounding regions, and present a (primary) network with the primary region features while simultaneously(*) presenting other (secondary) networks with the features for these surrounding regions and feeding the secondary network results in at the same time as the primary network gets its raw feature data. This is a similar concept (if different implementation) to the eye's centre-surround pattern, and works very well.
If you work through the maths, there's no real difference between a large network and a network of networks, but the training-time is significantly less (and the fitness landscape is smoother), so in practice the results are better, even if in theory they ought to be the same. I was using techniques like these almost 20 years ago, and still (very successfully, I might add) use neural networks today. If it's a fad, it's a relatively long-running one.
Simon.
(*) In practice, you time-offset the secondary network processing from the primary network, so the results of the secondary networks are available when the primary network runs. Since we still run primarily-serial computers, the parallelism isn't there to run all of these simultaneously. This is just an implementation detail though...
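To make the MLP talk above concrete, here's a toy forward pass in C. It's a minimal sketch - the layer sizes, weights, and the tanh squashing are invented placeholders, not code from any real recognition system. (Compile with -lm for tanh.)

    #include <math.h>
    #include <stdio.h>

    #define N_IN  3   /* input features  */
    #define N_HID 4   /* hidden units    */
    #define N_OUT 2   /* output classes  */

    /* One forward pass through a two-layer perceptron with tanh
       activations. Real weights come from training (back-prop etc.);
       these are just illustrative constants. */
    static void forward(double in[N_IN], double w1[N_HID][N_IN],
                        double w2[N_OUT][N_HID], double out[N_OUT])
    {
        double hid[N_HID];
        for (int h = 0; h < N_HID; h++) {
            double sum = 0.0;
            for (int i = 0; i < N_IN; i++)
                sum += w1[h][i] * in[i];
            hid[h] = tanh(sum);          /* hidden-layer squashing */
        }
        for (int o = 0; o < N_OUT; o++) {
            double sum = 0.0;
            for (int h = 0; h < N_HID; h++)
                sum += w2[o][h] * hid[h];
            out[o] = tanh(sum);          /* output activation */
        }
    }

    int main(void)
    {
        double w1[N_HID][N_IN]  = {{0.2,-0.5,0.1},{0.7,0.3,-0.2},
                                   {-0.4,0.6,0.5},{0.1,-0.1,0.8}};
        double w2[N_OUT][N_HID] = {{0.5,-0.3,0.2,0.6},{-0.6,0.4,0.1,-0.2}};
        double in[N_IN] = {1.0, 0.5, -1.0}, out[N_OUT];

        forward(in, w1, w2, out);
        printf("class scores: %f %f\n", out[0], out[1]);
        return 0;
    }

A network-of-networks is then just a matter of feeding the out[] of several such nets (the secondary, "context" networks) into another forward() call alongside the primary features - which is why the maths comes out the same as one big net.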
Neural Net as LAN for nanocomputing;Re:Missing tag (Score:1, Interesting)
-- Prof. Jonathan Vos Post
Re: (Score:1)
Almost every artificial organizational technique (computers, militaries, governments, etc.) seems to partition up complexity into semi-independent units. Perhaps there's an inherent advantage to such partitioning.
Also, could you please recommend
Re: (Score:1)
Re:Missing tag (Score:4, Informative)
Actually, more recent methods don't have local maxima/minima. Something like a support vector machine optimizes a convex objective function. Of course, this is somewhat of a tangent, in that the objective function might not be a useful metric for performance, but people have shown that the minimum objective function value of an SVM does relate to its generalization performance. It's a little disconcerting that an NN has an objective function but that it can't find its minimum, or that the minimum doesn't give good performance on test data (over-fitting)...
Of course, part of the NN's problem stems from the fact that it is an arbitrary classifier. It's hard to give generalization results for an algorithm that has an infinite VC dimension. (There are techniques to restrict the size of the weights to give some guarantees.) However, this doesn't mean NNs can't perform well in practice. It probably means that the current theoretical analysis is somewhat flawed in relation to the real world.
So have you ever compared your NN algorithms with the popular algorithms of the day, such as SVMs with kernels or boosting algorithms? Also, are your NN algorithms generic, or do you heavily customize and tweak to get good performance?
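To make the local-minima point concrete, here's a toy C sketch - purely illustrative, not real SVM or NN code. Plain gradient descent on a non-convex 1-D landscape ends up in whichever valley it starts nearest, while on a convex one every start reaches the same minimum; the latter is essentially what the convex SVM objective buys you:

    #include <stdio.h>

    /* Non-convex "NN-style" landscape: f(x) = x^4 - 3x^2 + x */
    static double grad_nonconvex(double x) { return 4*x*x*x - 6*x + 1; }

    /* Convex "SVM-style" landscape: g(x) = x^2 */
    static double grad_convex(double x) { return 2*x; }

    /* Plain gradient descent with a fixed small learning rate. */
    static double descend(double x, double (*grad)(double))
    {
        for (int i = 0; i < 10000; i++)
            x -= 0.01 * grad(x);
        return x;
    }

    int main(void)
    {
        printf("non-convex from -2: %f\n", descend(-2.0, grad_nonconvex));
        printf("non-convex from +2: %f\n", descend(+2.0, grad_nonconvex));
        printf("convex from -2:     %f\n", descend(-2.0, grad_convex));
        printf("convex from +2:     %f\n", descend(+2.0, grad_convex));
        return 0;
    }

The two non-convex runs settle on different answers (roughly -1.30 and +1.13, only one of which is the global minimum); the convex runs both reach 0.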
Global optimum for more than 2 classes? (Score:1)
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:1)
Re: (Score:1)
For something that's a couple of pounds, and doesn't require a lot of energy, it's pretty damn effective if you ask me.
Re: (Score:2)
Much better for what? If, in 30 years, this technology allows full mapping of the entire human brain, I'll be quite happy... It will mean that my consciousness may live on after the mortal flesh dies.
It may take another hundred years or more for full reconstruction of a new brain (and the rest of the body) to become possible, but in the meantime I'll be preserved just as I was when I died and my brain was scanned.
Even better,
No GPS Needed (Score:2)
Did I get that right? (Score:5, Funny)
So, they're training a neural network to automate the process of mapping a neural network, in the hopes of creating an intelligence that they can train to automate other processes?
My brain hurts...
Re: (Score:1)
-Buck
Re: (Score:2, Offtopic)
What constitutes a "map" here differs from elegans (Score:5, Informative)
Re:What constitutes a "map" here differs from eleg (Score:1)
Interesting (Score:5, Insightful)
Horse, push cart. (Score:5, Informative)
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
I know visually we are looking at something around at least 24 frames per second. The eye is supposed to have a resolution of around 1000 dpi. Not sure how to measure the viewing area, but let's say the resolution gets lower and lower the wider the angle. Let's say, just to have a number, that we have a 16:9 viewing ratio at two feet distance. Let's say it's three feet wide. That shou
Re: (Score:1)
Read up on it. http://en.wikipedia.org/wiki/Blind_spot_(vision) [wikipedia.org]
And the raw video feed from your eye is not stored. It is interpreted and discarded.
Memories are the same. Not that much information. Memory is surprisingly volatile, and your brain does not retrieve information from a built-in hard d
Re: (Score:1)
I agree, if you look at my first post, about the poor eye vision. But not about it being filled in in any way. I know from my active life in the military, martial arts, racing, etc. what is real and what is not. If you drive at 150 mph or have a fast kick flying at you, you cannot deal with it on a guess, or filled-in best guesses. Not that the people who come up with these ideas are usually from that part of life anyway.
When you can discuss all sorts of details with someone who has a photogr
Re: (Score:1)
And about martial arts: humans detect motion over detail. You automatically evade something that comes your way, without actually seeing what it is. You see something is coming and do not assess whether it's dangerous or not; you evade/block just in case.
And what you describe about noticing things that should not be noticeable is explained by your subconscious. Read up on how people missing most of their bra
Re: (Score:1)
No I've never done LSD or any drugs, simply did not interest me. I've always been too interested in life and my senses to take chances of limiting them or adding vias.
Though I find it interesting how you want to explain it away with the subconscious.
Your post is so full of bullshit... (Score:2)
Repeat after me:
YOUR EYES AND BRAIN AREN'T A DIGITAL FINALSPEC1.0 DESIGN.
MOVIES use 24 fps with motion blur because it gives acceptable quality; your eyes can still notice things happening within 1/200 of a second.
In our digital graphics class the teacher mentioned something like 20 million cells in the eye that register data, but only one million nerves to the brain.
YOU HAVE SHIT-POOR QUALITY OUTSIDE OF A FEW PERCENT AT THE CENTER OF YOUR VISION.
I guess your dpi estimate may
Re:Interesting (Score:4, Informative)
Even if your (incorrect) assumptions were correct, 36" x 20" at 1000 dpi would be 36000 pixels x 20000 pixels = 720M pixels. Clue: dpi is a linear measure, not an area one.
Of course, the human eye does not work anything like that. Rather than farting numbers I spent 10 seconds on Google to find this [ndt-ed.org] which looks into the question of Visual Acuity. The "high-res" part of the eye is a very small circle with about 120 "dots" across its diameter.
As we do not resolve entire "frames" in a single go, the concept of a frame-rate is completely ludicrous. Your argument earlier in the thread about observing skipping when seeing a high-speed stimulus doesn't show evidence of a *periodic* frame rate. It just shows that there is a *minimum* temporal resolution. One does not imply the other, especially when the eye is processing asynchronous input (from rods and cones).
Although you don't believe that the brain fills in the missing images with educated guesswork, we've already established that what you believe is shit. Most (if not all) neuroscientists have accepted that the high-resolution continuous visual imagery that we see is mostly an illusion produced by the mind. There are many well-reported experiments that provide evidence of this. You should look for anything on Visual Illusions - there are far too many decent results in peer-reviewed journals for me to spend time looking for you. Change Blindness is a related phenomenon.
Finally, you've cooked up some stupid figures for the number of cells in a brain. Why do you feel the need to demonstrate how stupid you are? The actual numbers (which you get wrong by 3 fucking orders of magnitude) are in the summary of the article! How hard is it to read the 100 billion neurons at the top of the page?
So next time you feel the need to pontificate needlessly about something that you don't know anything about, don't. You, sir, are a thief of oxygen, and your pointless ramblings have made everyone reading this article collectively dumber.
PS Feel free to mod me flamebait, as I am clearly annoyed. But when you do so, remember that everything the parent poster wrote was incorrect and that I have pointed out to him where he is wrong.
Re: (Score:1)
Yes, I rather rapidly threw together numbers off the net, screwed up my math, and yet there you came and made my point stronger.
The whole subject is best guesswork, which is following the same thread of thin
Re: (Score:2)
While I don't practice neurobiology myself, my girlfriend's PhD was in psychophysics and how to exploit the "compression" in the human visual system. Her research was very much at the practical end of the field.
Where we disagree is on whether or not your point stands. You claimed that despite the lack of bandwidth between the eye and the brain, the brain was *not* responsible for synthesizing the majority of what we think that w
Re: (Score:2)
B) We don't memorize the entirety of whole scenes (not even those with photographic memories, though they're close). We use pattern recognition. That's why you can tell a coke can is a coke can, from a slightly dented coke can to a crushed one, and can tell that even a crushed coke can isn't a crushed coke bottle. You memorize the patterns that make up a particular concept, match any given object to your set of memorized patterns (re
You are a simulant (Score:2)
No, really. It's overwhelmingly probable you are a simulation.
According to the article above, there are 100 trillion synapses to simulate. Even if they were multi-state, that's approaching trivial by computational standards. And if you are willing to run the simulation at sub-real-time, you could do it now.
So according to the anthropic principle, either 1) the human race goes extinct in the near future before this level of simulation is poss
Re: (Score:1)
Going from millimeters to centimeters in 1D is a factor of 10. Going from cubic millimeters to cubic centimeters (3 dimensions) is a factor of 1,000.
So you'd have 9,350,000 cells per cubic centimeter if your numbers are correct.
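Trivial, but since unit conversions keep tripping this thread up, here's the check in code (taking the parent's per-mm^3 figure at face value):

    #include <stdio.h>

    int main(void)
    {
        double per_mm3 = 9350.0;               /* parent's figure     */
        double per_cm3 = per_mm3 * 10*10*10;   /* 1 cm = 10 mm, cubed */
        printf("%.0f cells per cubic centimeter\n", per_cm3);
        return 0;
    }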
Re: (Score:1)
Wave your hand rapidly in front of you and tell me if you see one hand moving or a series of snapshots. If you see a series of snapshots, then you have exceeded the rate at which the eye sees.
Ditto, if you know anything about resonant frequencies in objects, you'd notice how your nervous system responds to certain frequencies. It clearly runs at a frequency.
I'm not going to tell you about your vision, and I agree on the low
Re: (Score:1, Informative)
Re: (Score:1)
Tell a fencer they are guessing what is coming in. I know this is the best explanation that brain diggers can come up with. I'm just not that narrow-minded.
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
If I wanted a scientific discussion, do you really think I would have started it here? Not to say that there couldn't be people capable of it; it's the sheer number who are not. Never mind those you don't even want to talk to.
Nothing said here could seriously be considered, well... serious. You go here to read interesting links and see some arbitrary views with some occasional value thrown in. I think the best value would be that it sometimes makes people look at and t
Re: (Score:1)
I believe this is an artifact of artificial lighting acting as kind of a strobe. Because of the variance in the sample rate of the cells of our eyes, we generally see each "frame" as a blurred hand such that it is hard to see the actual b
Re: (Score:2)
Nope - it will be just the wire connections; as if computer hardware, without any software, had its electrical circuits traced in order to better understand what a program running on the screen does. About that order of magnitude, probably much higher.
The "software" in a human brain is programmed from before birth and constantly changing.
Just the computing power of keep
Weights are encoded in the structure (Score:1)
'Weight' is an abstraction used in NN simulations.
In biology, the strength of interaction between two neurons is determined primarily by the number of synapses between them -- how many of Neuron B's dendrites lie on Neuron A's axon. This is exactly the information contained in the wiring diagram.
Of course, half the fun of any advancement is learning all the new things that are *not* explained or turned out to be more complicated than previously thought. With good wiring diagrams, we can make predict
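As a crude sketch of that idea - with invented synapse counts, and a normalisation that is just one arbitrary choice among many - turning a wiring diagram into the weight matrix an NN simulation wants could look like:

    #include <stdio.h>

    #define N 4  /* neurons in this toy circuit */

    int main(void)
    {
        /* syn[a][b] = number of synapses from neuron a onto neuron b,
           i.e. the raw wiring-diagram data. Counts are invented. */
        int syn[N][N] = { {0, 3, 1, 0},
                          {0, 0, 5, 2},
                          {1, 0, 0, 4},
                          {0, 2, 0, 0} };
        double w[N][N];

        /* One possible structure-to-weight mapping: normalise each
           neuron's outgoing synapse counts so they sum to 1. */
        for (int a = 0; a < N; a++) {
            int total = 0;
            for (int b = 0; b < N; b++) total += syn[a][b];
            for (int b = 0; b < N; b++)
                w[a][b] = total ? (double)syn[a][b] / total : 0.0;
        }

        for (int a = 0; a < N; a++) {
            for (int b = 0; b < N; b++) printf("%.2f ", w[a][b]);
            printf("\n");
        }
        return 0;
    }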
Re: (Score:1)
To me, the bigger question than the "weights" and other mediating factors of the brain's network, is where do the network's outputs go? Artificial Neural Networks, like the biological kinds, are indeed grea
Re: (Score:2)
Helpful for computer technology (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:3, Interesting)
Our computer technologies have yet to achieve the complexity of most biological brains. I'd love to see this new information give rise to a new form of supercomputer. Of course... we have to watch out for iRobot scenarios...
Don't hold your breath for an iRobot.
If each of the 100 billion neurons managed its 1000 or so synapses, and say a modern-day PC with a quad processor could computationally handle say 100 neurons, you would need 1 billion PCs. Since 1 billion PCs would find it difficult to walk, the old
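Spelling out that back-of-the-envelope arithmetic in code (the 100-neurons-per-PC figure is, as above, a pure guess):

    #include <stdio.h>

    int main(void)
    {
        double neurons        = 100e9;   /* from the article summary */
        double synapses_each  = 1000.0;  /* ~100e12 synapses total   */
        double neurons_per_pc = 100.0;   /* wild guess               */

        printf("synapses to simulate: %.0e\n", neurons * synapses_each);
        printf("PCs needed: %.0e\n", neurons / neurons_per_pc);
        return 0;
    }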
Lets not "say" (NS) (Score:1)
Re: (Score:2)
That's only as long as you simulate neurons in software, which is probably as inefficient as it gets, as opposed to building and connecting artificial neurons in hardware directly. The fact that the brain manages to cram the intellectual differences between reptiles and humans into a few cubic centimeters should tell you some
Re: (Score:1)
You're assuming that a neuron is simple, which it is not. To simulate just one neuron, or any other kind of cell, you may need a billion PCs.
Re: (Score:1)
So it's reasonably probable that a neuron's functions can be simulated with useful accuracy without going into nasty details like pr
Re: (Score:2)
Are all of the functions of a neuron required to produce intelligent behavior? If not, which can we omit? How will we even know when a system is behaving intelligently? Even humans take years to learn how to communicate and rationalize. Could we provide even a perfect simulation of the human brain with the proper environment to train in to ensure these results? Once we have such a mod
Re: (Score:1)
As for the rest: As I see it, there are two possibilities: Either (1) every little detail, down to the quantum behavior of atoms inside the neuron, is _necessary_ for intelligent behavior, or (2) there is some higher-level description of a neuron that is sufficient, and the lower-level (quantum, chemical, protein-level, etc.) behavior is just implementation detail. I'm rather sure (2) is true, mostly because of th
Re: (Score:1)
The overwhelming majority of that size is due to convenience and laziness. Take your example, hello world: assuming just a single-line C printf with the stdio include, you get an executable of 4.7 KB.
Now, I just went and rewrote that in assembly, using system calls instead of the C library, and got a 725-byte binary with symbol info etc., 320 bytes when stripped. That still isn't terribly efficient conside
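For anyone who wants to reproduce the experiment, the no-stdio version means something like this - a single raw write(2) to stdout instead of printf (Unix-only; exact binary sizes will vary with your toolchain and strip options):

    #include <unistd.h>

    int main(void)
    {
        /* Bypass the C stdio machinery entirely: one write() syscall
           to file descriptor 1 (stdout). 14 bytes including '\n'. */
        (void)write(1, "Hello, world!\n", 14);
        return 0;
    }

Build and strip with something like: gcc -O2 hello.c -o hello && strip hello.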
Computational Complexity (Score:2)
It's quite interesting that these German researchers are mapping pieces of the brain; however, even if they were to map the entire human brain, first, we still do not know how to perfectly simulate the biological processes occurring in the brain. Yes, we are able to simulate a single neuron, or small clumps of neurons; however, the dynamics of simulating billions of interconnected neurons is not fully understood.
Second, even if we were able to map the entire human brain and run a perfect simulation, the
Re: (Score:2)
It would be very difficult to get something useful out of it. Answers wouldn't always be the same, due to the semi-random effects brains have a tendency to produce. You would spend a lifetime putting something in and watching it come back differently than it did a few minutes ago. It wouldn't be too unlike a network connection, if packets were voltage gradients and various neurotransmitters. Since there are only a few ways each ca
Re: (Score:3, Insightful)
Re: (Score:2)
The best idea for that would be to get the brain, cranial ner
Re: (Score:2, Funny)
We call this "raising children"
--
BMO
Re: (Score:1)
Not only should this be modded funny but insightful as well, because what he's saying is very true.
People tend to think of computers as something that should just work out of the box, but really, not all computers (and software) can be like that. A human child can be considered a computer that needs years of training. If robots ever become popular, I expect that "pet" robots will have to mimic these learning capabilities and "grow" with their masters over the years.
Re: (Score:2)
If we truly come up with computers that mimic the human brain, we're going to see the same problems that we have "programming" children, maybe even worse, because human children have inherited behaviors that make teaching easier, and elec
Re: (Score:1)
To be fair, how much of the brain's processing power is devoted to involuntary systems (heart, lungs, pain receptors), how much to voluntary motion systems (muscles), and then how much is left over for intelligence?
If you stripped away the brain needed for everything except cognitive thought, how much processing would you need? I suppose that is why it's best to st
Re: (Score:2)
CCortex (Score:3, Interesting)
An attempt to emulate a brain on a network of computers.
Plasticity (Score:3, Interesting)
Re: (Score:1)
Re: (Score:1)
Edelman showed that 1) individual circuits are not restricted to a single function, 2) the operation of any brain circuit has a propagation rate dependent upon electrolytic characteristics at the time the circuit is activated, 3) a signal through the circuit, possibly via the electrolytes, changes those electrolytes, and 4) the next loop through that circuit will have a different propagation rate than the previou
they got the complete neural map of C. elegans (Score:4, Interesting)
Re: (Score:1)
You could write some kind of rough analog of the neural network. However, an actual simulator of a C. elegans would be impossibly complex, just as a simulator of an actual cell would be impossibly complex.
Re:they got the complete neural map of C. elegans (Score:4, Informative)
The best current knowledge of C. elegans neurophysiology involves qualitative descriptions of small circuits, involving a few dozen neurons. Unfortunately, while you can do a lot of good behavioral studies and other experiments, it's impossible to directly record the activity of specific neurons. Also, it turns out that some "neural" functions are actually performed by other cells. For example, one pattern generator in the digestive tract actually resides in intestinal cells instead of neurons -- my lab is working on the genetics involved.
This shit gets complicated, fast.
IAAUCER
I am an (undergrad) C. elegans researcher
Re: (Score:1)
it's impossible to directly record the activity of specific neurons.
It appears there are definitions of impossible I am unfamiliar with. I've read a number of papers that describe not only directly measuring impulses of individual neurons, but also sending impulses to one or even multiple neurons with nanowire arrays. A qualitative description of functional circuits is unnecessary when it comes to testing its 'transfer function' with any arbitrary input, or even giving it a full simulated environment. That is, an understanding of the network is unnecessary for simula
Re: (Score:1)
These techniques tend to involve either some in vitro preparation, where you h
like Lie groups! (Score:1)
Map out the structure, determine the operators, and combine the inputs with the outputs! Bang, easy as pie!*
*joke :-)
Re: (Score:3, Funny)
while (!dead) {
    if (leftTentacleSensesFood())  { wiggle(left);  }
    if (rightTentacleSensesFood()) { wiggle(right); }
    if (frontTentacleSensesFood()) { munch(); }
    if (femaleWormEncountered())   { fuckTheWigglyMamma(); }
}
end();
Re: (Score:1, Funny)
Hey, that's *my* life! I call prior art.
Factor of a million (Score:1)
In other words, it won't take long before the actual hardware is in place for this analysis to happen. Shortly after that will be the robot insurrection. I've seen movies about this. It doesn't end well.
Re: (Score:1)
what does a neuron map look like? (Score:2)
100 trillion synapses? (Score:2)
Re: (Score:1)
Re: (Score:1)
And I still cannot remember where I put the f&&&ing car keys!
Oblig. Futurama (Score:3, Funny)
Leela: Is this some sort of brain scanner?
Farnsworth: Some sort, yes. In France, it's called a guillotine.
Leela: Professor! Can't you examine my brain without removing it?
Farnsworth: Yes, easily!
Me! (Score:2)
When I'm done with it!
Re: (Score:2)
Re: (Score:1)
Modding you waaaay down
Re: (Score:2)
Maybe it's not all that bad ... (Score:2)
Feels good (Score:1)
Yes, but ... (Score:2)
Beat this - "Brainbow" (Score:1)
Bruce McCormick (Score:2)
A professor at Texas A&M University, Bruce McCormick, was pushing for this for years.
Check out Welcome to the Brain Networks Laboratory at Texas A&M University! [tamu.edu].
The idea is to use a knife-edge scanning microscope to make images of very thin slices from brains.
I'm curious if Dr. McCormick has retired. His web page last lists courses he taught in 2002.
One has to start somewhere... (Score:1)
Re: (Score:2)
Re: (Score:2)