Biotech Networking Science

Mapping the Brain's Neural Network (143 comments)

Ponca City, We Love You writes "New technologies could soon allow scientists to generate a complete wiring diagram of a piece of brain. With an estimated 100 billion neurons and 100 trillion synapses in the human brain, creating an all-encompassing map of even a small chunk is a daunting task. Only one organism's complete wiring diagram now exists: that of the microscopic worm C. elegans, which contains a mere 302 neurons. The C. elegans mapping effort took more than a decade to complete. Research teams at MIT and at Heidelberg in Germany are experimenting with different approaches to speed up the process of mapping neural connections. The Germans start with a small block of brain tissue and bounce electrons off the top of the block to generate a cross-sectional picture of the nerve fibers. They then take a very thin slice, 30 nanometers, off the top of the block. 'Repeat this [process] thousands of times, and you can make your way through maybe the whole fly brain,' says the lead researcher. They are training an artificial neural network to emulate the human process of tracing neural connections to speed the process about 100- to 1000-fold. They estimate that they need a further factor of a million to analyze useful chunks of the human brain in reasonable times."
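
(A rough sketch, in Python, of the image-then-slice loop the summary describes. The acquire_backscatter_image() and shave_block_face() calls are hypothetical placeholders for the microscope and microtome hardware, and the block depth is an arbitrary example; only the 30 nm step comes from the article.)

SLICE_THICKNESS_NM = 30              # removed per pass (from the article)
BLOCK_DEPTH_NM = 30 * 10_000         # e.g. a 300-micron block -> 10,000 passes

def acquire_backscatter_image(depth_nm):
    """Stand-in: bounce electrons off the block face, return a cross-section."""
    return {"depth_nm": depth_nm, "pixels": None}

def shave_block_face(thickness_nm):
    """Stand-in: remove a thin layer from the top of the block."""
    pass

def image_block(block_depth_nm=BLOCK_DEPTH_NM, step_nm=SLICE_THICKNESS_NM):
    stack, depth = [], 0
    while depth < block_depth_nm:
        stack.append(acquire_backscatter_image(depth))   # cross-sectional picture
        shave_block_face(step_nm)                        # take 30 nm off the top
        depth += step_nm
    return stack                                         # image stack for tracing
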
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Vaporbiology
    • Re: (Score:3, Interesting)

      Not to mention - and I am sure I'll get modded down here - that neural networks aren't very effective. They get a lot of hype in the media, and people who don't work in optimization like them. They sound appealing and cool, but there are other methods that are much better. If you look at the literature, there is not that much hard science behind them. Some statistical mechanics people have some results, but they really are just a fad, like fractals or catastrophe theory. But bear in mind LOTS of people i
      • Dunno about your NN, but mine is pretty effective ... come to think of it, way more effective than it should be on a Saturday night :(
      • by benhocking ( 724439 ) <benjaminhocking.yahoo@com> on Saturday November 24, 2007 @03:36PM (#21465035) Homepage Journal
        Many engineers have studied feed-forward neural networks and found them to be far inferior to other solutions. Of course, our brains use recurrent neural networks, which, unfortunately for engineers, are very difficult to analyze. There are many secrets yet to be teased out of these neural networks, but much progress has already been made. Researchers in our lab, for example, have demonstrated how introducing random synaptic failures improves not only energy efficiencies, but also the cognitive abilities of simulated neural networks. I'm currently researching the effects of variable activity (as measured in a biological neural network by an EEG), and I dare say there's a lot more that we don't know about these networks than we do.
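
        (A toy sketch of the random-synaptic-failure idea mentioned above, assuming a small rate-based recurrent network in Python with NumPy; the 10% failure rate, network size, and tanh units are arbitrary illustrations, not the poster's actual model.)

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100                                       # neurons
        W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights
        P_FAIL = 0.10                                 # chance a synapse drops a spike

        def step(state):
            mask = rng.random(W.shape) > P_FAIL       # synapses that survive this step
            drive = (W * mask) @ state                # failed synapses contribute nothing
            return np.tanh(drive)                     # simple saturating activation

        state = rng.normal(size=N)
        for _ in range(50):
            state = step(state)                       # run the noisy network for a while
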
      • Re:Missing tag (Score:5, Informative)

        by Space cowboy ( 13680 ) * on Saturday November 24, 2007 @03:42PM (#21465075) Journal
        Perhaps you didn't get much out of neural networks, but my PhD thesis was on the similarities between a Kohonen network [wikipedia.org] and relaxation-labelling [ic.ac.uk] equations. Part of it is up on my blog [gornall.net] (I haven't actually got as far as that bit yet, but the groundwork is there).

        A neural network (well, anything more complex than the single-layer perceptron [wikipedia.org] anyway) is an arbitrary classifier. I'm curious as to why other methods are "much better". Unless you do an exhaustive search of the feature-space, all classifier methods are subject to the same limitations - local maxima/minima (depending on the algorithm), noise effects, and data dependencies. All of the various algorithms have strengths and weaknesses - in pattern recognition (my field) NN's are pretty darn good actually.

        It's also a bit odd to just say 'neural networks' - there are many many variants of network, from Kohonen nets through multi-layer perceptrons, but focussing on the most common (MLP's), there's a huge amount of variation (Radial-basis function networks, real/imaginary space networks, hyperbolic tangent networks, bulk-synchronous parallel error correction networks, error-diffusion networks to name some off the top of my head), and many ways of training all these (back-prop, quick-prop, hyper-prop, batch-error-update, etc. etc.) I guess my point is that you're tarring a large branch of classification science with a very broad brush, at least IMHO.

        Not to mention that this is all the single-network stuff. It gets especially interesting when you start modelling networks of networks, and using secondary feature-spaces rather than primary (direct from the image) features. Another part of my thesis was these "context" features - so you can extract a region of interest, determine the features to use to characterise that region, do the same thing for surrounding regions, and present a (primary) network with the primary region features while simultaneously(*) presenting other (secondary) networks with the features for these surrounding regions and feeding the secondary network results in at the same time as the primary network gets its raw feature data. This is a similar concept (if different implementation) to the eye's centre-surround pattern, and works very well.

        If you work through the maths, there's no real difference between a large network and a network of networks, but the training-time is significantly less (and the fitness landscape is smoother), so in practice the results are better, even if in theory they ought to be the same. I was using techniques like these almost 20 years ago, and still (very successfully, I might add) use neural networks today. If it's a fad, it's a relatively long-running one.

        Simon.

        (*) In practice, you time-offset the secondary network processing from the primary network, so the results of the secondary networks are available when the primary network runs. Since we still run primarily-serial computers, the parallelism isn't there to run all of these simultaneously. This is just an implementation detail though...
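
        (A minimal sketch of the network-of-networks arrangement described above, using scikit-learn MLPs as stand-ins: secondary networks classify the surrounding regions first, and their outputs are concatenated with the primary region's raw features. The random features and labels are placeholders, not real imagery.)

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n, d = 500, 8                                     # samples, features per region
        X_centre = rng.normal(size=(n, d))                # primary-region features
        X_context = [rng.normal(size=(n, d)) for _ in range(4)]   # surrounding regions
        y = rng.integers(0, 2, size=n)                    # toy labels

        # One secondary network per surrounding region.
        secondaries = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                     random_state=0).fit(Xc, y) for Xc in X_context]

        # The primary network sees its own raw features plus the secondaries' outputs.
        context_out = np.hstack([s.predict_proba(Xc)
                                 for s, Xc in zip(secondaries, X_context)])
        primary = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        primary.fit(np.hstack([X_centre, context_out]), y)
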
        • by Anonymous Coward
          Yours is the best comment yet on this thread. I did NN stuff at U.Massachusetts/Amherst 1973-1974 when NN was temporarily uncool, due to the Minsky-Papert Perceptrons book. I shifted to genetic algorithms 1974-1977 while beta-testing John Holland's book. One conclusion I reached was that memory and cognition are nanomolecular processes, and the neural network is merely the Local Area Net that connects distant regions of nanocomputers.

          -- Prof. Jonathan Vos Post
        • by Tablizer ( 95088 )
          If you work through the maths, there's no real difference between a large network and a network of networks, but the training-time is significantly less (and the fitness landscape is smoother), so in practice the results are better, even if in theory they ought to be the same.

          Almost every artificial organizational technique (computers, militaries, governments, etc.) seems to partition up complexity into semi-independent units. Perhaps there's an inherent advantage to such partitioning.

          Also, could you please recommend
          • by Maian ( 887886 )
            I figure the inherent advantage of partitioning up complexity is parallelism and the efficiency that results. Depending on what's being parallelized, it increases throughput, reduces execution times, etc. Heck, you could consider human productivity itself one massively parallel machine. Ever heard of the term "man-hours"?
        • Re:Missing tag (Score:4, Informative)

          by mesterha ( 110796 ) <chris.mesterharm ... com minus author> on Sunday November 25, 2007 @02:33AM (#21469157) Homepage

          A neural network (well, anything more complex than the single-layer perceptron [wikipedia.org] anyway) is an arbitrary classifier. I'm curious as to why other methods are "much better". Unless you do an exhaustive search of the feature-space, all classifier methods are subject to the same limitations - local maxima/minima (depending on the algorithm), noise effects, and data dependencies. All of the various algorithms have strengths and weaknesses - in pattern recognition (my field) NN's are pretty darn good actually.

          Actually, more recent methods don't have local maxima/minima. Something like a support vector machine optimizes an objective function. Of course, this is somewhat of a tangent, in that the objective function might not be a useful metric for performance, but people have shown that the minimum objective function value of an SVM does relate to its generalization performance. It's a little disconcerting that an NN has an objective function but that it can't find its minimum, or that the minimum doesn't give good performance on test data (over-fitting)...

          Of course, part of the NN's problem stems from the fact that it is an arbitrary classifier. It's hard to give generalization results for an algorithm that has an infinite VC dimension. (There are techniques to restrict the size of the weights to give some guarantees.) However, this doesn't mean NNs can't perform well in practice. It probably means that the current theoretical analysis is somewhat flawed in relation to the real world.

          So have you ever compared your NN algorithms with the popular algorithms of the day, such as SVMs with kernels or boosting algorithms? Also, are your NN algorithms generic, or do you heavily customize and tweak to get good performance?
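
          (A quick sketch of the kind of side-by-side comparison asked about above, with scikit-learn's off-the-shelf kernel SVM (convex objective) and a small MLP (non-convex, gradient-trained) on a toy dataset; the data and hyperparameters are illustrative only.)

          from sklearn.datasets import make_moons
          from sklearn.model_selection import train_test_split
          from sklearn.neural_network import MLPClassifier
          from sklearn.svm import SVC

          X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

          svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)          # convex objective
          mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                              random_state=0).fit(X_tr, y_tr)     # non-convex, may hit local minima

          print("SVM test accuracy:", svm.score(X_te, y_te))
          print("MLP test accuracy:", mlp.score(X_te, y_te))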

          • As far as generalisation goes, with the stuff I do there is generally an abundance of training data - it's easy to get imagery, and it's easy to generate the input data from that imagery. As such, using some of the data as training data, and the remainder as generalisation-test data is a good (and pretty standard) approach. All you do is train on (say) 2/3 of the data while simultaneously checking the results against the remaining 1/3 of the data. You'll get two error-curves, which initially approximate eac
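
            (A minimal sketch of the hold-out scheme described above, assuming scikit-learn: train on roughly 2/3 of the data, track error on the held-out 1/3 after each pass, and stop when the validation curve turns upward. The toy data and the 5% stopping tolerance are arbitrary.)

            import numpy as np
            from sklearn.datasets import make_classification
            from sklearn.model_selection import train_test_split
            from sklearn.neural_network import MLPClassifier

            X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
            X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1 / 3, random_state=0)

            net = MLPClassifier(hidden_layer_sizes=(50,), random_state=0)
            train_err, val_err = [], []
            for epoch in range(200):
                net.partial_fit(X_tr, y_tr, classes=np.unique(y))    # one training pass
                train_err.append(1 - net.score(X_tr, y_tr))          # curve 1: training error
                val_err.append(1 - net.score(X_va, y_va))            # curve 2: hold-out error
                if epoch > 10 and val_err[-1] > 1.05 * min(val_err[:-1]):
                    break                                            # validation error rising: stop
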
      • Re: (Score:2, Funny)

        The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.
      • How is it not effective? Could the problem be, like you alluded to, that there isn't much science behind it atm... and thus perhaps we don't fully understand exactly how it works, so we can't replicate it 'effectively'?

        For something that's a couple of pounds, and doesn't require a lot of energy, it's pretty damn effective if you ask me.
      • by mi ( 197448 )

        They sound appealing and cool, but there are other methods that are much better.

        Much better for what? If, in 30 years, this technology allows full mapping of the entire human brain, I'll be quite happy... It will mean that my consciousness may live on after the mortal flesh dies.

        It may take another hundred years or more for full reconstruction of a new brain (and the rest of the body) to become possible, but in the meantime I'll be preserved just as I was when I died and my brain was scanned.

        Even better,

  • In my case it'd be trivial: brain cell 1 ---> brain cell 2. Done!
  • by UbuntuDupe ( 970646 ) * on Saturday November 24, 2007 @02:46PM (#21464671) Journal
    They are training an artificial neural network to emulate the human process of tracing neural connections to speed the process about 100- to 1000-fold.

    So, they're training a neural network to automate the process of mapping a neural network, in the hopes of creating an intelligence that they can train to automate other processes?

    My brain hurts...
    • Hey, I think you might be on to something here. If we can just layer enough neural networks decoding neural networks on top of each other, we can follow the pain through our brain and map the connections that way.

      -Buck
    • Re: (Score:2, Offtopic)

      "My brain hurts..." Please hold still. It's being mapped. Thanks.
  • One reason we could do the aforementioned mapping with C. elegans is that the worm's neurons are always laid out the same way from worm to worm. This is not the case for humans, and probably not the case for any vertebrate.
  • Interesting (Score:5, Insightful)

    by rm999 ( 775449 ) on Saturday November 24, 2007 @02:50PM (#21464705)
    Sounds interesting, but is a map enough to understand the brain? I know in artificial neural networks, the actual structure isn't as important as the weights on the nodes. Will hitting the brain with electrons be enough to give us an understanding of these "weights", or just the connections between them?
    • Horse, push cart. (Score:5, Informative)

      by zippthorne ( 748122 ) on Saturday November 24, 2007 @03:02PM (#21464799) Journal
      Science is all about the baby steps. You can't talk about determining the weights before you know what the connections are.
    • Re: (Score:2, Insightful)

      by bazald ( 886779 )
      Or more to the point, is a neural network really the correct interpretation of a brain's structure? I suppose we might find out soon, if they can get the "weights" "right".
    • by melted ( 227442 )
      Weights are trained and they'll be different depending on the connections and on the training. The rough structure of long-range connections is actually fundamental to how the brain works. This is essential to explain long and short term memory, fear, involuntary reactions to stimuli, perception of the world as a coherent audio/visual picture as opposed to a set of disparate inputs, etc.
    • Re: (Score:3, Interesting)

      by unixfan ( 571579 )
      Hell, it will be fun to see what they figure out to be the capacity of the brain, as far as how much information it can store.

      I know that visually we are looking at something around at least 24 frames per second. The eye is supposed to have a resolution of around 1000 dpi. Not sure how to measure the viewing area. But let's say it is lesser and lesser resolution the higher the angle. Let's say, just to have a number, that we have a 16:9 viewing ratio at two feet distance. Let's say it's three feet wide. That shou
      • by diskis ( 221264 )
        The eye is surprisingly simple. Your brain doesn't get all that much information. A lot is filled in by your brain, based on interpolation and assumptions. You are aware that you have a huge hole in your vision, right?
        Read up on it. http://en.wikipedia.org/wiki/Blind_spot_(vision) [wikipedia.org]

        And the raw video feed from your eye is not stored. It is interpreted and discarded.

        Memories are the same. Not that much information. Memory is surprisingly volatile, and your brain does not retrieve information from a built-in hard d
        • by unixfan ( 571579 )
          "Filled in by your brain" eh?

          I agree, if you look at my first post, about the poor eye vision. But not it being filled in in any way. I know from my active life in the military, martial arts, racing, etc. what is real and not. If you drive at 150mph or have a fast kick flying at you, you cannot deal with it on a guess, or filled-in best guesses. Not that the people who come up with these ideas are usually from that part of life anyway.

          When you can discuss all sorts of details with someone who has a photogr
          • by diskis ( 221264 )
            Go look at the Wikipedia link. There is a little experiment that shows you exactly the hole in your vision.
            And about martial arts, humans detect motion over detail. You automatically evade something that comes your way, without actually seeing what it is. You see something is coming and do not assess if it's dangerous or not, you evade/block just in case.

            And what you describe, noticing things that should not be noticeable, is explained by your subconscious. Read up on how people missing most of their bra
            • by unixfan ( 571579 )
              I see you don't do martial arts. You have to see what is coming your way or you will buy the feint and be set up for another attack. The normal urge is to avoid what comes your way. But in order to be good, you train to only respond to real threats, not feints.

              No, I've never done LSD or any drugs; they simply did not interest me. I've always been too interested in life and my senses to take chances of limiting them or adding biases.

              Though I find it interesting how you want to explain it away with the subconscious.
      • .. I don't know where to start.

        Repeat after me:
        YOUR EYES AND BRAIN AREN'T A DIGITAL FINAL-SPEC 1.0 DESIGN.

        MOVIES use 24 fps with motion blur because it gets acceptable quality; your eyes can still notice things happening during 1/200 of a second.
        In our digital graphics class the teacher mentioned something like 20 million cells in the eyes which registered data, but only one million nerves to the brain.
        YOU HAVE SHIT-POOR QUALITY OUTSIDE ONLY A FEW PERCENT OF THE CENTER OF YOUR VISION.
        I guess your dpi estimate may
      • Re:Interesting (Score:4, Informative)

        by smallfries ( 601545 ) on Saturday November 24, 2007 @07:05PM (#21466315) Homepage
        When you pull figures from your arse, you don't actually add anything to the discussion. All that we've learned is that your arse is rather large, and you are used to removing things from it.

        Even if your (incorrect) assumptions were correct, 36" x 20" at 1000 dpi would be 36000 pixels x 20000 pixels = 720M pixels. Clue: dpi is a linear measure, not an area measure.

        Of course, the human eye does not work anything like that. Rather than farting numbers I spent 10 seconds on Google to find this [ndt-ed.org] which looks into the question of Visual Acuity. The "high-res" part of the eye is a very small circle with about 120 "dots" across its diameter.

        As we do not resolve entire "frames" in a single go, the concept of a frame-rate is completely ludicrous. Your argument earlier in the thread about observing skipping when seeing a high-speed stimulus doesn't show evidence of a *periodic* frame rate. It just shows that there is a *minimum* temporal resolution. One does not imply the other, especially when the eye is processing asynchronous input (from rods and cones).

        Although you don't believe that the brain fills in the missing images with educated guesswork, we've already established that what you believe is shit. Most (if not all) neuroscientists have accepted that the high-resolution, continuous visual imagery that we see is mostly an illusion produced by the mind. There are many well-reported experiments that provide evidence of this. You should look for anything on Visual Illusions - there are far too many decent results in peer-reviewed journals for me to spend time looking for you. Change Blindness is a related phenomenon.

        Finally, you've cooked up some stupid figures for the number of cells in a brain. Why do you feel the need to demonstrate how stupid you are? The actual numbers (which you get wrong by 3 fucking orders of magnitude) are in the summary of the article! How hard is it to read the 100 billion neurons at the top of the page?

        So next time you feel the need to pontificate needlessly about something that you don't know anything about: don't. You, sir, are a thief of oxygen, and your pointless ramblings have made everyone reading this article collectively dumber.

        PS Feel free to mod me flamebait, as I am clearly annoyed. But when you do so, remember that everything the parent poster wrote was incorrect and that I have pointed out to him where he is wrong.
        • by unixfan ( 571579 )
          Thank you for proving my point. There is not enough eye resolution to explain the quality of our vision. You should befriend and talk to some neurosurgeons who actually practice, vs. someone full of theory, and discover how much they profess not to understand. Some of the things that happen they simply can't account for.

          Yes, I rather rapidly threw together numbers off the net, screwed up my math, and yet there you came and made my point stronger.

          The whole subject is best guesswork, which is following the same thread of thin
          • We both agree that your numbers weakened your point as they were clearly wrong.

            While I don't practice neurobiology myself, my girlfriend's PhD was in psycho-physics and how to exploit the "compression" in the human visual system. Her research was very much at the practical end of the field.

            Where we disagree is on whether or not your point stands. You claimed that despite the lack of bandwidth between the eye and the brain, the brain was *not* responsible for synthesizing the majority of what we think that w
      • A) The eye compresses images even before they get sent to the brain.

        B) We don't memorize the entirety of whole scenes (not even those with photographic memories, though they're close). We use pattern recognition. That's why you can tell a coke can is a coke can, from a slightly dented coke can to a crushed coke can, and can tell that even a crushed coke can isn't a crushed coke bottle. You memorize the patterns that make up a particular concept, match any given object to your set of memorized patterns (re
      • Bad news. You are a simulation on a playstation 9, not a real human.

        No, really. It's overwhelmingly probable you are a simulation.

        According to the article above, there are 100 trillion synapses to simulate. Even if they were multi-state that's approaching trivial by computational standards. And if you are willing to run the simulation at sub real time you could do it now.

        So according to the anthropic principle, either 1) the human race goes extinct in the near future before this level of simulation is poss
      • I see numbers of 9,350 cells per cubic millimeter, which is 93,500 per cubic centimeter.

        Going from millimeters to centimeters in 1D is a factor of 10. Going from cubic millimeters to cubic centimeters (3 dimensions) is a factor of 1,000.

        So you'd have 9,350,000 cells per cubic centimeter if your numbers are correct.
    • by no-body ( 127863 )
      Will hitting the brain with electrons be enough to give us an understanding of these "weights", or just the connections between them?

      Nope - it will be just the wire connections, just as if computer hardware without any software had its electrical circuits traced in order to better understand what a program running on the screen does - about that order of difficulty, probably much higher.

      The "software" on a human brain is programmed from before birth and constantly changed.
      Just the computing power of keep
    • 'Weight' is an abstraction used in NN simulations.

      In biology, the strength of interaction between two neurons is determined primarily by the number of synapses between them -- how many of Neuron B's dendrites lie on Neuron A's axon. This is exactly the information contained in the wiring diagram.

      Of course, half the fun of any advancement is learning all the new things that are *not* explained or turned out to be more complicated than previously thought. With good wiring diagrams, we can make predict
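
      (A toy illustration of the point above: if the wiring diagram records how many synapses neuron A makes onto neuron B, an NN-style "weight" is just some function of that count. The counts and the column normalisation below are arbitrary choices.)

      import numpy as np

      synapse_counts = np.array([       # rows: presynaptic, cols: postsynaptic
          [0, 5, 1],
          [2, 0, 7],
          [0, 3, 0],
      ], dtype=float)

      col_totals = synapse_counts.sum(axis=0, keepdims=True)
      weights = synapse_counts / np.clip(col_totals, 1, None)
      # Each column now gives the relative drive each input exerts on that
      # postsynaptic cell -- information that is already in the wiring diagram.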

    • by E++99 ( 880734 )

      Sounds interesting, but is a map enough to understand the brain? I know in artificial neural networks, the actual structure isn't as important as the weights on the nodes. Will hitting the brain with electrons be enough to give us an understanding of these "weights", or just the connections between them?

      To me, the bigger question than the "weights" and other mediating factors of the brain's network, is where do the network's outputs go? Artificial Neural Networks, like the biological kinds, are indeed grea

    • RTFA, rm999 and moderators. This very point comes up at the end.
  • Our computer technologies have yet to achieve the complexity of most biological brains. I'd love to see this new information inspire a new form of supercomputer. Of course... we have to watch out for iRobot scenarios...
    • How the heck do you program something that complex though?
      • I'm not saying that we should program a brain, but possibly take some of the ideas of how brains work and implement them. There's been an estimate that by 2013 we will see a supercomputer that will exceed the power of the human mind.
    • Re: (Score:3, Interesting)

      by canuck57 ( 662392 )

      Our computer technologies have yet to achieve the complexity of most biological brains. I'd love to see these new informations derive a new form of super-computer. Of course...We have to watch out for iRobot scenarios...

      Don't hold your breath for an iRobot.

      If each of the 100 billion neurons managed the 1000 or so synapses, and say a modern day PC with a quad processor could computationally handle say 100 neurons, you would need 1 billion PCs. Since 1 billion PCs would find it difficult to walk, the old
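
      (A quick check of the arithmetic above; the 100-neurons-per-PC and doubling-every-5-years figures are the poster's assumptions, not measurements.)

      import math

      neurons = 100e9
      neurons_per_pc = 100
      pcs_needed = neurons / neurons_per_pc       # 1e9 PCs, as stated

      doublings = math.log2(pcs_needed)           # ~30 doublings to shrink that to one box
      years = doublings * 5                       # ~150 years at one doubling per 5 years
      print(f"{pcs_needed:.0e} PCs, roughly {years:.0f} years of doubling")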

      • Now let's say usable processing power doubles every 5 years, and it shrinks to something small enough it can walk into our living rooms. That would be at least 150 years from now.

        That's only as long as you simulate neurons in software, which is probably as inefficient as it gets, as opposed to building and connecting artificial neurons in hardware directly. The fact that the brain manages to cram the intellectual differences between reptiles and humans into a few cubic centimeters should tell you some

      • by E++99 ( 880734 )

        If each of the 100 billion neurons managed the 1000 or so synapses, and say a modern day PC with a quad processor could computationally handle say 100 neurons, you would need 1 billion PCs.

        You're assuming that a neuron is simple, which it is not. To simulate just one neuron, or any other kind of cell, you may need a billion PCs.
        • by bodan ( 619290 )
          You're assuming that the neuron's complexity must necessarily be simulated exactly. Artificial heart valves, hip replacements and even blood (this one is still in prototype phase) are much less complex than the organs they replace (they don't even have cell-scale features, let alone imitate those of the replaced organ) but can simulate their function very well in many respects.

          So it's reasonably probable that a neuron's functions can be simulated with useful accuracy without going into nasty details like pr
          • Ah, but can such an approximation truly result in intelligence or would it just be a very slow and complex classifier? Or both? :)

            Are all of the functions of a neuron required to produce intelligent behavior? If not, which can we omit? How will we even know when a system is behaving intelligently? Even humans take years to learn how to communicate and rationalize. Could we provide even a perfect simulation of the human brain the proper environment to train in to ensure these results? Once we have such a mod
            • by bodan ( 619290 )
              The "do we really want to do this?" question is for another discussion, as are the other ethical ones.

              As for the rest: As I see it, there are two possibilities: Either (1) every little detail, down to the quantum behavior of atoms inside the neuron, is _necessary_ for intelligent behavior, or (2) there is one higher-level description of a neuron that is sufficient, and the lower-level (quantum, chemical, proteic, etc) behavior is just implementation detail. I'm rather sure (2) is true, mostly because of th
      • I understand your point that things are becoming bloated etc etc, but I'm going to go off topic a bit here

        the overwhelming majority of that size is due to convenience and laziness. Your example, hello world: assuming just a single-line C printf with the stdio include, you get an executable of 4.7 KB.

        Now, I just went and rewrote that in assembly, using system calls instead of the C library, and got a 725-byte binary with symbol info etc., 320 bytes when stripped; that still isn't terribly efficient conside
  • It's quite interesting that these German researchers are mapping pieces of the brain; however, even if they were to map the entire human brain, first, we still do not know how to perfectly simulate the biological processes occurring in the brain. Yes, we are able to simulate a single neuron, or small clumps of neurons; however, the dynamics of simulating billions of interconnected neurons are not fully understood.

    Second, even if we were able to map the entire human brain and run a perfect simulation, the

    • by Renraku ( 518261 )
      Let's say that we could, in fact, emulate an entire human brain in real time.

      It would be very difficult to get something useful out of it. Answers wouldn't always be the same due to the semi-random effect brains have a tendency to produce. You would spend a lifetime putting something in and watching it come back differently than it did a few minutes ago. It wouldn't be too unlike a network connection, if packets were voltage gradients and various neurotransmitters. Since there are only a few ways each ca
      • Re: (Score:3, Insightful)

        by Fourier404 ( 1129107 )
        If you emulate a human brain in real time, as well as connections to emulated eyes, ears, and mouths, you just have to talk to it for a couple years and it'll learn English. =D
        • by Renraku ( 518261 )
          First you'd have to have the IO systems connected properly. If you just grow a brain, this doesn't happen. For all practical purposes, the spine is part of the brain. The optic nerves are part of the brain. The cranial nerves are brain too. They all grow out of the brain when the brain is given chemical signals at the right time. Or more specifically, the stem cells do. If it's not given, they don't grow. The brain will be a closed system.

          The best idea for that would be to get the brain, cranial ner
      • Re: (Score:2, Funny)

        by bmo ( 77928 )
        "It would be very difficult to get something useful out of it. Answers wouldn't always be the same due to the semi-random effect brains have a tendency to produce. You would spend a lifetime putting something in and watching it come back differently than it did a few minutes ago."

        We call this "raising children"

        --
        BMO
        • by Maian ( 887886 )

          Not only should this be modded funny but insightful as well, because what he's saying is very true.

          People tend to think of computers as something that should just work out of the box, but really, not all computers (and software) can be like that. A human child can be considered as a computer that needs years to train. If robots ever become popular, I expect that "pet" robots would have to mimic these learning capabilities and "grow" with its master over the years.

          • by bmo ( 77928 )
            Yeah, unfortunately I got modded "redundant" and "funny." I also got modded "redundant" for my Beck joke. How is it redundant when I was the only one to come up with that? What...ever. I got more karma than I know what to do with. Go ahead, waste your mod points on me.

            If we truly come up with computers that mimic the human brain, we're going to see the same problems that we have "programming" children, maybe even worse, because human children have inherited behaviors that make teaching easier, and elec
    • Second, even if we were able to map the entire human brain and run a perfect simulation, the computing power of today cannot handle this complex task in real time.

      To be fair, how much of the brain's processing power is devoted to involuntary functions (heart, lungs, pain receptors), voluntary motion systems (muscles), and then how much is left over for intelligence?

      If you stripped away the brain needed for everything except cognitive thought, how much processing would you need? I suppose that is why it's best to st
      • It may not be possible to do so without changing the behavior of the cognitive part. Investigating if that was the case or not would already be an extremely important milestone!
  • CCortex (Score:3, Interesting)

    by Colin Smith ( 2679 ) on Saturday November 24, 2007 @03:14PM (#21464893)
    http://www.ad.com/ [ad.com]

    An attempt to emulate a brain on a network of computers.
     
  • Plasticity (Score:3, Interesting)

    by DynaSoar ( 714234 ) on Saturday November 24, 2007 @03:31PM (#21464995) Journal
    That's what they call the brain's ability to change. By the time they complete a wiring diagram, it'll have changed. Also, knowing the wiring and connections is not enough. Knowing which connections are excitatory and which are inhibitory is necessary, and then tracking down loops of excitatory against excitatory resulting in inhibitory, etc. It's all fine and well to have a map, but that doesn't tell you squat about what anything does. A useful map would have to be dynamic, and the complexity of that is far more than just what they're considering for a wiring diagram.
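
    (A toy illustration of the point above: the same wiring diagram means very different things depending on which connections are excitatory (+1) and which inhibitory (-1), since the net effect of a pathway is the product of the signs along it. The signs and the path here are made up.)

    signs = {("A", "B"): +1, ("B", "C"): +1, ("C", "D"): -1}   # hypothetical A->B->C->D chain

    def path_effect(path, signs):
        effect = 1
        for pre, post in zip(path, path[1:]):
            effect *= signs[(pre, post)]
        return effect          # +1 means net excitatory, -1 means net inhibitory

    print(path_effect(["A", "B", "C", "D"], signs))   # two excitatory links plus one
                                                      # inhibitory link -> net inhibition (-1)
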
    • by Morkano ( 786068 )
      Yes, this won't solve all of our problems, but it's definitely a good thing to have. It makes it a lot easier to talk about what is excitatory and what is inhibitory if you have a base for how they're all connected.
    • Spot on, but there's more if the ideas in Neural Darwinism are true - and I think that they are.

      Edelman showed that 1) individual circuits are not restricted to a single function, 2) that the operation of any brain circuit has a propagation rate dependent upon electrolytic characteristics at the time the circuit is activated, 3) a signal through the circuit, made possible by the electrolytes, changes those electrolytes, 4) the next loop through that circuit will have a different propagation rate than the previou
  • by Z80a ( 971949 ) on Saturday November 24, 2007 @04:09PM (#21465239)
    and it's only like 302 neurons, so is it possible to write a simulator of it?
    • by E++99 ( 880734 )

      and it's only like 302 neurons, so is it possible to write a simulator of it?

      You could write some kind of rough analog of the neural network. However, an actual simulator of C. elegans would be impossibly complex, just as a simulator of an actual cell would be impossibly complex.
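
      (A sketch of what such a rough analog might look like: a 302-unit firing-rate model stepped in discrete time. The random sparse connection matrix is a stand-in for the real C. elegans wiring diagram; a serious simulator would also need synapse signs, gap junctions, sensory transduction, and muscle outputs.)

      import numpy as np

      rng = np.random.default_rng(0)
      N = 302                                               # C. elegans neuron count
      W = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 0.05)   # sparse stand-in wiring
      TAU = 5.0                                             # arbitrary time constant (steps)

      def step(rate, external_input):
          drive = W @ rate + external_input
          return rate + (np.tanh(drive) - rate) / TAU       # leaky update toward the drive

      rate = np.zeros(N)
      stim = np.zeros(N)
      stim[:10] = 1.0                                       # poke a few "sensory" neurons
      for _ in range(200):
          rate = step(rate, stim)
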
    • by lazybratsche ( 947030 ) on Saturday November 24, 2007 @09:43PM (#21467383)
      Well, that's the hope, and a major source of appeal for the humble nematode. Unfortunately, that's still far beyond what we know right now. The physical map of every neuron and their connections has been complete for decades. Still, despite a whole lot of effort, researchers are still working to piece together small functional circuits for the simplest of behaviors. A lot of complexity arises in neural circuits -- one physical circuit can contain several independent functional circuits, depending on the types of inputs.

      The best current knowledge of C. elegans neurophysiology involves qualitative descriptions of small circuits, involving a few dozen neurons. Unfortunately, while you can do a lot of good behavioral studies and other experiments, it's impossible to directly record the activity of specific neurons. Also, it turns out that some "neural" functions are actually performed by other cells. For example, one pattern generator in the digestive tract actually resides in intestinal cells instead of neurons -- my lab is working on the genetics involved.

      This shit gets complicated, fast.

      IAAUCER
      I am an (undergrad) C. Elegans researcher
      • it's impossible to directly record the activity of specific neurons.

        It appears there are definitions of impossible I am unfamiliar with. I've read a number of papers that reference not only directly measuring impulses of individual neurons, but also sending impulses to one or even multiple neurons with nanowire arrays. A qualitative description of functional circuits is unnecessary when it comes to testing its 'transfer function' with any arbitrary input, or even giving it a full simulated environment. That is, an understanding of the network is unnecessary for simula

        • I was talking about C. elegans specifically. Certainly, there have been plenty of direct recordings of neurons for decades. These have tended to be of very specific model systems however. With the giant squid axon or other preparations, you can stick tiny electrodes in different parts of the neuron to record exactly what's going on. And there are the nanowire arrays you mention -- though these are a more indirect technique.

          These techniques tend to involve either some in vitro preparation, where you h
      • Now I am starting to realize the importance of bioinformatics in neuroscience. This sort of multiple-inputs-producing-differing-behavior is reminiscent of Lie groups [wikipedia.org] in mathematics. It's basically the same thing: a large (huge?) system of inputs, operators, and outputs, but a limited set at that (though limited is used with a grain of salt here ;-)

        Map out the structure, determine the operators, and combine the inputs with the outputs! Bang, easy as pie!*

        *joke :-)

    • Re: (Score:3, Funny)

      by Tablizer ( 95088 )
      [worm has] only like 302 neurons, so is it possible to write a simulator of it?

      while (! dead) {
          if (leftTenticalSensesFood()){wiggle(left);}
          if (rightTenticalSensesFood()){wiggle(right);}
          if (frontTenticalSensesFood()){munch();}
          if (femaleWormEncountered()){fuckTheWigglyMamma();}
      }
      end();

      • Re: (Score:1, Funny)

        by Anonymous Coward
        while (! dead) {if (leftTenticalSensesFood()){wiggle(left);} if (rightTenticalSensesFood()){wiggle(right);} if (frontTenticalSensesFood()){munch();} if (femaleWormEncountered()){fuckTheWigglyMamma();}
        }


        Hey, that's *my* life! I call prior art.
             
  • That's roughly 20 to 30 years of general purpose computer dev time. They could definitely get their factor of a million much faster with more cores, or specialized hardware.

    In other words, it won't take long before the actual hardware is in place for this analysis to happen. Shortly after that will be the robot insurrection. I've seen movies about this. It doesn't end well.
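
    (A back-of-the-envelope check of the "factor of a million" figure, assuming general-purpose computing power doubles roughly every 12 to 18 months; that doubling rate is an assumption, not a law.)

    import math

    doublings = math.log2(1_000_000)                      # ~20 doublings for a millionfold gain
    print(f"{doublings * 1.0:.0f} to {doublings * 1.5:.0f} years")   # roughly 20 to 30 years
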
    • Shortly after that will be the robot insurrection. I've seen movies about this. It doesn't end well.
      Depends on your point of view :) I for one welcome our robotic overlords :)

  • The neuron map of the average human brain looks like this [venganza.org]. More evidence for the existence of him [venganza.org]!
  • That's simply not true. High estimates for the total number of cells in the human body are 100 trillion; a quick Google search yields this: "Science NetLinks, a resource for science teachers, stated that there are approximately "ten to the 14th power" (that's 100 trillion) cells in the human body." Maybe they meant 100 billion? I remember from one of my AI classes that there are up to 10,000 synapses per neuron, while others have fewer than 100.
    • by Layth ( 1090489 )
      A synapse isn't a cell.
  • by Dachannien ( 617929 ) on Saturday November 24, 2007 @05:16PM (#21465667)
    Farnsworth: Lie down here and we'll do some tests. If Fry is out there, then Leela's brain could be acting as a five-pound Ouija board.
    Leela: Is this some sort of brain scanner?
    Farnsworth: Some sort, yes. In France, it's called a guillotine.
    Leela: Professor! Can't you examine my brain without removing it?
    Farnsworth: Yes, easily!
  • by SEWilco ( 27983 )
    Map my brain!
    ...reads article...
    When I'm done with it!
  • Comment removed based on user account deletion
  • Keep in mind that huge swaths of the human brain are for data storage, motor control, sensory processing, autonomic functions and other elements not directly related to sentience. If your intent is to use this information to produce a viable synthetic human intelligence, you wouldn't necessarily need to model the whole shebang.
  • I hooked one of these up to my brain and so far its working just fine. The only side effect is a slight tingly fe
         
  • ... when the map is done, will anyone ever be able to re-fold it?
  • A professor at Texas A&M University, Bruce McCormick, was pushing for this for years.

    Check out Welcome to the Brain Networks Laboratory at Texas A&M University! [tamu.edu].

    The idea is to use a knife-edge scanning microscope to make images of very thin slices from brains.

    I'm curious if Dr. McCormick has retired. His web page last lists courses he taught in 2002.

  • Taking off one layer of tissue at a time to expose the neural net is a bit destructive. Maybe a method should be considered where the host is left functionally intact? Also, in order to "map" a human brain, one is going to need something on the order of about 2 to 4 petabytes of storage, minimum. It is wonderful that we are heading in the direction of "downloading" our brains. I can see a time when forgetting the wife's birthday will become a footnote in the history books.
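
    (Rough arithmetic behind a petabyte-scale estimate: the summary's ~100 trillion synapses times an assumed handful of bytes per synapse; the bytes-per-synapse figures are guesses, not from the article.)

    SYNAPSES = 100e12                                     # from the story summary
    for bytes_per_synapse in (4, 20, 40):                 # assumed storage cost per synapse
        petabytes = SYNAPSES * bytes_per_synapse / 1e15
        print(f"{bytes_per_synapse:>2} B/synapse -> {petabytes:.1f} PB")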
