Science

fMRI Data Reveals How Many Parallel Processes Run In the Brain

New submitter xgeorgio writes: From MIT Technology Review: "The human brain carries out many tasks at the same time, but how many? Now fMRI data has revealed just how parallel gray matter is. ... Although the analysis is complex, the outcome is simple to state. Georgiou says independent component analysis reveals that about 50 independent processes are at work in human brains performing the complex visuo-motor tasks of indicating the presence of green and red boxes. However, the brain uses fewer processes when carrying out simple tasks, like visual recognition.

That's a fascinating result that has important implications for the way computer scientists should design chips intended to mimic human performance. It implies that parallelism in the brain does not occur on the level of individual neurons but on a much higher structural and functional level, and that there are about 50 of these. 'This means that, in theory, an artificial equivalent of a brain-like cognitive structure may not require a massively parallel architecture at the level of single neurons, but rather a properly designed set of limited processes that run in parallel on a much lower scale,' he concludes." Here's a link to the full paper: "Estimating the intrinsic dimension in fMRI space via dataset fractal analysis – Counting the 'cpu cores' of the human brain."
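
To make the paper's method concrete, here is a minimal, purely illustrative sketch of independent component analysis using scikit-learn. The three synthetic sources, the eight mixed channels and every constant below are invented stand-ins, not anything taken from the paper's fMRI data.

# Toy illustration of independent component analysis (ICA), the technique
# described in the summary. All signals and dimensions are invented; real
# fMRI analysis works on tens of thousands of voxel time series.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Three hypothetical "independent processes" (sources).
sources = np.c_[np.sin(3 * t),              # rhythmic activity
                np.sign(np.sin(1.3 * t)),   # on/off activity
                rng.laplace(size=t.size)]   # noisy activity
sources /= sources.std(axis=0)

# Mix the sources into 8 observed channels (stand-ins for voxel time series).
mixing = rng.normal(size=(3, 8))
observed = sources @ mixing

# Recover statistically independent components from the mixtures alone.
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)
print(recovered.shape)  # (2000, 3) -- one column per recovered process

On mixtures like this, the recovered columns match the original sources only up to sign and ordering, which is the same ambiguity any ICA-based "process counting" has to live with.
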
This discussion has been archived. No new comments can be posted.

  • analog computer (Score:5, Informative)

    by Spazmania ( 174582 ) on Saturday November 08, 2014 @05:56PM (#48342399) Homepage

    The brain is an analog computer. The notion of parallelism is fundamentally different for an analog computer... In a sense, every single neuron is operating independently and in parallel with the rest. Describing it in terms of parallel processing with digital CPUs makes no sense.

    • On a brain cell level, but if we zoom out, so to speak, there should come into scope some system we can label, where the brain does multiple things at once reliably: keeping balance, processing sound and vision, etc.

      What interests me the most are the levels of subconscious/consciousness and where all this combines to create our singular, waking awareness.

      • What interests me the most are the levels of subconscious/consciousness and where all this combines to create our singular, waking awareness.

        Based on evidence of the effects of dissociative drugs, psychedelic drugs, and general anaesthetics, it seems likely that our 'singular, waking awareness' is primarily an effect of the information transfer between various brain regions through the posterior cingulate cortex.

        Of course, knowing that doesn't make it any less of a head-fuck to contemplate how strange it is to be anything at all.

      • What interests me the most are the levels of subconscious/consciousness and where all this combines to create our singular, waking awareness.

        Then you might be interested in reading this, [fyngyrz.com] which describes how it might all work, and how an (actual) AI could be made to work.

      • by mikael ( 484 )

        You have neurons, which are arranged into "cortical units". These in turn are arranged into wide striate layers (for increased resolution) and pyramids (for higher levels of cognition). With human vision, the neural pathways follow the topology of the retina.

        http://en.wikipedia.org/wiki/T... [wikipedia.org]

        With human audio, the neural pathways follow the frequency of sound (http://en.wikipedia.org/wiki/Tonotopy)

        This research paper covers the evolution of the human brain when compared to reptiles and other mammals:

        http://pe [sissa.it]

    • While it may seem analogue, I'd definitely call the brain digital from a functional perspective.

      The amount of neurotransmitters, strength of electrical activity, and so on are definitely analogue inputs; but due to the way that action potentials fire in cells, you're either "firing them" or "not firing them" (analogy: magnetic data on a disc is also analogue, but we only really care about the on/off state of it). Most information appears to be transferred based on the rate of firing them, and is not encoded in any special aspect of the spikes themselves.

      • by Anonymous Coward

        Digital framework, analog algorithm? Can that be a thing or am I... making myself look like an ass?

        • Sure; we have artificial neural network algorithms. Check out this letter-recognition (backpropagation) network using 80 neurons [codepen.io] that I wrote in JavaScript during a boring Christmas vacation with my parents. (And it sucks- not because it's JavaScript, but it makes embarrassing mistakes, which are the fault of the huge string literal of neuron weights at the end of the code).

          Biologically, the process of a real neuronal cell body reaching a certain (unpredictable) voltage and firing is extremely complex. Th
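
          As a rough illustration of the kind of network being described (this is not the linked codepen code; the 3x3 "letter" patterns, layer sizes and learning rate are all made up), here is a minimal backpropagation sketch in Python:

          # Minimal backpropagation sketch in the spirit of the small
          # letter-recognition network mentioned above. Everything here is a
          # toy: two 3x3 "letters" flattened to 9 inputs, 6 hidden units.
          import numpy as np

          rng = np.random.default_rng(1)
          X = np.array([[0,1,0, 1,1,1, 0,1,0],    # a '+' pattern
                        [1,0,1, 0,1,0, 1,0,1]],   # an 'x' pattern
                       dtype=float)
          Y = np.array([[1,0], [0,1]], dtype=float)  # one-hot class labels

          W1 = rng.normal(0, 0.5, (9, 6))          # input -> hidden weights
          W2 = rng.normal(0, 0.5, (6, 2))          # hidden -> output weights
          sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

          for _ in range(2000):                    # plain gradient descent
              h = sigmoid(X @ W1)                  # forward pass
              out = sigmoid(h @ W2)
              d_out = (out - Y) * out * (1 - out)  # squared-error gradient at the output
              d_h = (d_out @ W2.T) * h * (1 - h)   # error backpropagated to the hidden layer
              W2 -= 0.5 * (h.T @ d_out)
              W1 -= 0.5 * (X.T @ d_h)

          print(np.round(out, 2))                  # should approach the one-hot targets
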

          • by Bengie ( 1121981 )
            Based on what I've read, which was heavily simplified, the brain cycles quickly between a chaotic and a deterministic state. Technically, chaos is a form of determinism, but it's definitely not random, not at the macro scale anyway.
            • Yep. Microscopic processes are affected by quantum fluctuation, in both neurons and transistors. Macroscopically, a transistor behaves reliably, like a switch. Humans are less predictable, but their thoughts are more deterministic than they realize.
      • Re: (Score:3, Insightful)

        by Spazmania ( 174582 )

        You misunderstand the difference between a digital computer and an analog computer. Both are based on 1's and 0's, on and off.

        The digital computer is driven by a clock strobe. When the clock strobes, the whole set of circuits accepts and processes the next inputs. As a result, the circuit is stable at the end of each clock cycle.

        An analog computer has no clock. Inputs are processed as soon as they arrive. As a result, the circuit is never known to be in a stable state. It's continually in flux based on its

        • Re: (Score:3, Interesting)

          by Anonymous Coward

          I think that the distinction you are trying to draw is not digital versus analogue. It is synchronous versus asynchronous circuit design.

          You can build asynchronous digital logic circuits using various self-timing mechanisms. You do not have to use clocked input buffers to synchronize tiers of logic gates, that is just a convention that makes reasoning about the system a lot easier. The design process is much more difficult, as you have to consider many more combinations of signal paths much as in typical

        • by Anonymous Coward

          You misunderstand the difference between a digital computer and an analog computer. Both are based on 1's and 0's, on and off.

          Um, no. An analog computer uses a continuous range of voltages as input values. The defining property of digital circuits is that they collapse this range into two discrete states (digitizing it).

          There's no reason why you couldn't use a clock signal in an analog computer, even if neural networks generally work clockless.

          Neurons are somewhere in between analog and digital, since they take in weighted analog input values, and evaluate them in a binary fire/don't fire output.
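
          A minimal sketch of that "analog in, binary out" behaviour is a leaky integrate-and-fire model: the membrane variable integrates a continuous input current, but the output is an all-or-nothing spike. Every constant here is an arbitrary illustrative value, not a physiological one.

          # Leaky integrate-and-fire neuron: analog integration, binary output.
          import numpy as np

          dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
          threshold, reset = 1.0, 0.0    # arbitrary units
          steps = 200
          rng = np.random.default_rng(2)

          current = 0.06 + 0.02 * rng.standard_normal(steps)  # noisy analog drive
          v, spikes = 0.0, []

          for i in range(steps):
              v += dt * (-v / tau + current[i])  # continuous (analog) integration
              if v >= threshold:                 # binary decision: fire or don't
                  spikes.append(i)
                  v = reset

          print(f"{len(spikes)} spikes in {steps} ms")
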


      • by AK Marc ( 707885 )

        Most information appears to be transferred based on the rate of firing them, and is not encoded in any special aspect of the spikes themselves.

        Other than direction. Disks send information one way. Neurons fire in different ways/manners. They can even change, building new pathways. It takes lots more work to make a static model of a dynamic system.

    • The brain is an analog computer. The notion of parallelism is fundamentally different for an analog computer... In a sense, every single neuron is operating independently and in parallel with the rest. Describing it in terms of parallel processing with digital CPUs makes no sense.

      Imagine a Beowulf cluster of these analog computers...

    • The notion of parallelism is fundamentally different for an analog computer...Describing it in terms of parallel processing with digital CPUs makes no sense.

      came here to say this...

      that misunderstanding is an inherent problem in computing and "ai" i fear

      our brains are not "like computers" in how they work

      • our brains are not "like computers" in how they work

        True enough, but that says nothing about what kinds of processing can be realized in either. There are so many layers of abstraction between the brain and the mind that it doesn't make sense to say that minds are made of neurons. Minds are made of abstract things which are made of abstract things, (which are... etc, etc), which are eventually made of neurons. But they could eventually be made of transistors, what does it matter how the bottom few layers work?

    • Every neuron isn't independent of the rest. A neuron is a node in a network and its output depends very heavily on its inputs, which are from other neurons. Thus it's not independent. Nearby neurons involved in similar tasks often share the same noise, indicating that they are very tightly coupled (either because they share inputs or because they are connected).
    • Re:analog computer (Score:4, Insightful)

      by ultranova ( 717540 ) on Sunday November 09, 2014 @07:58AM (#48344627)

      The brain is an analog computer. The notion of parallelism is fundamentally different for an analog computer...

      Citation needed. Intuitively the difference between a digital and analog computer is that the former has two discrete signal levels while the latter has a continuous band. This doesn't seem to imply anything about the actual structure of the system.

      Also, it isn't certain that the brain is actually analog. Individual neurons have discrete "firing" and "not firing" states. Firing rate is often summarized as neuron activation level, since it correlates with energy usage, which is what various imaging techniques actually measure, but that doesn't prove that the timing of individual firing events doesn't matter. And if it does, we have a digital system.

      In a sense, every single neuron is operating independently and in parallel with the rest. Describing it in terms of parallel processing with digital CPUs makes no sense.

      Every single transistor is also operating independently and in parallel with the rest.
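
      To put the earlier rate-versus-timing point in concrete terms, here is a small sketch (all numbers are arbitrary examples) of two spike trains that a rate-based summary cannot tell apart even though their timing is very different:

      # Same firing rate, different timing: what a rate code keeps vs. discards.
      import numpy as np

      rng = np.random.default_rng(3)
      duration_ms, n_spikes = 1000, 20

      regular = np.linspace(0, duration_ms, n_spikes, endpoint=False)  # metronomic
      irregular = np.sort(rng.uniform(0, duration_ms, n_spikes))       # jittered

      for name, train in [("regular", regular), ("irregular", irregular)]:
          rate = 1000.0 * len(train) / duration_ms   # spikes per second
          isi = np.diff(train)                       # inter-spike intervals
          print(f"{name}: rate = {rate:.0f} Hz, ISI variability = {np.std(isi):.1f} ms")
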

      • by AK Marc ( 707885 )

        Citation needed. Intuitively the difference between a digital and analog computer is that the former has two discrete signal levels while the latter has a continuous band.

        And if the CPU I/O is digital, would that prevent the memory from being analog? There may be a strange mix. The trigger of a firing is digital. Whether there's a connection is digital. But when to fire, when to form a new connection is purely analog within the cell.

    • Mod parent up AND consider:

      a) remember that the use of Independent Component Analysis (ICA) is appropriate for linear processes and therefore must necessarily be, to an unknown degree (until you actually know the underlying distribution), an approximation; i.e. the more nonlinear the process, the less accurately ICA reflects the underlying processes; and

      b) the actual processing methodology of the brain is unknown, heck, we do not even understand the encoding used by the brain.

      So the article really rests on t

  • the screen displays either a red or green box on the left or right side. If the box is red, the subject must indicate this with their right index finger, and if the box is green, the subject indicates this with their left index finger.

    I'm color-blind you ignorant clod. Green, brown, yellow, red, whatever ...

    Typically, fMRI machines divide the brain into three-dimensional pixels called voxels, each about five cubic millimeters in size. The complete activity of the brain at any instant can be recorded using a three-dimensional grid of 60 x 60 x 30 voxels.

    Is this a fine-enough resolution? If we used 1mmx1mm, would we see more than 50 "areas of activity" at one time? Or are we assuming this because that's what we have available right now?
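
    A back-of-the-envelope check of those numbers; the 1 mm comparison is hypothetical, only to show how quickly the voxel count grows as resolution improves.

    # Voxel counts for the quoted scan geometry vs. a hypothetical finer grid.
    coarse_grid = 60 * 60 * 30          # voxels in the quoted 60 x 60 x 30 grid
    coarse_voxel_mm3 = 5                # "about five cubic millimeters" each
    volume_mm3 = coarse_grid * coarse_voxel_mm3

    fine_voxel_mm3 = 1                  # hypothetical 1 mm^3 voxels
    fine_grid = volume_mm3 // fine_voxel_mm3

    print(coarse_grid)                  # 108000 voxels as scanned
    print(fine_grid)                    # 540000 voxels at 1 mm^3, five times the data
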

  • Are well known to be massively parallel and occur at the level of individual neurons.

    It looks like this is just muddying the waters between functional units and internal parallelism

    • It has no relationship to the common usage of the term in computing; a far better way of phrasing that would be "tasks".

      • I disagree. One should be free to use the word "process" in its general but still technical sense: a series of causally related events and states. Often but not always an "organized and constrained" series of causally related events and states. Often but not always achieving, or configured/designed/arranged to achieve, some particular purpose or end result.

        If conventional, present-day computing has borrowed this common English term and specialized it further, fine, but that shouldn't prevent someone correctl

    • While you're not wrong, I do think that from the perspective of the article, it's also not really so relevant.

      'This means that, in theory, an artificial equivalent of a brain-like cognitive structure may not require a massively parallel architecture at the level of single neurons, but rather a properly designed set of limited processes that run in parallel on a much lower scale'

      Basically, from my understanding, he's saying here that if we handle the sub-systems in a more traditional manner - as in, existing edge detection and motion detection algorithms in standard computing systems - then with ~50 parallel threads, we could have something brain-like.

      It's also worth considering though that this is far less cool than it sounds at first blush simply by fact that the sub-syste
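
      As a toy sketch of that "roughly 50 coarse-grained processes" reading, here is a plain thread pool in Python; the subsystem function and the count of 50 are illustrative placeholders, not anything specified in the paper.

      # ~50 coarse-grained workers, each a stand-in for one sub-system
      # (edge detection, motion detection, ...), running concurrently.
      from concurrent.futures import ThreadPoolExecutor

      def subsystem(task_id: int) -> str:
          # Placeholder for one coarse-grained process, e.g. an edge detector.
          return f"subsystem {task_id} produced its partial result"

      with ThreadPoolExecutor(max_workers=50) as pool:
          results = list(pool.map(subsystem, range(50)))

      print(len(results))  # 50 partial results for a higher level to integrate

      The point is only that the parallelism in this reading is a handful of coarse tasks, not millions of neuron-like units.
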

      • The problem with that is, it's not particularly new or interesting. Marvin Minsky was shouting the idea from the rooftops as far back as I can recall.

        The motivation for neural nets/fuzzy logic vs traditional von Neumann architectures has always had a very big helping of "It's just too hard to program/debug massively parallel code on von Neumann architectures". This also gave us things like Data Flow Architecture, amongst other ways of viewing the problem to make it more tractable. So what the article misses i

      • Correct me if I'm wrong, but I understood that some aspects of visual processing happen either in the eye itself or in the optic nerve. And part of this fellow's experiment involved visual recognition. Would the visual pre-processing before the signals reach the brain throw off some of the 'results' as well? It is also my understanding that the spinal column also does some pre-processing, so to speak. I'm wondering if at the least, his simple experiment didn't really 'stress-test' the system, so he might be missin
        • by Bengie ( 1121981 )
          The eyes do very basic "pre-processing", like amplifying edges and other things. It's pretty much limited to tuning contrast, sharpness, etc.
        • by mikael ( 484 )

          Human retinas have a resolution of 100 million neurons each. But there are several layers to the retina that detect spots, edges, color opposition (blue vs. yellow, red vs. green, white vs. black). All of this information gets compressed down to around 1000 chunks of data which then go through 10 million neurons in each optic nerve.

    • The nature of the computational problems being solved by the brain suggests that a good processing architecture would be a hierarchy of processing (with feedback of course) in which parallel processing and serial processing take place at many levels in the hierarchy, with the leaf nodes of processing being massively parallel search/matching/associative traversal, but with serial decision / aggregation of result mixed in there. Map reduce anyone?

      But that sort of mixed parallel serial computing should happen

    • Oh Please Edge Detection and Motion Detection. Are well known to be massively parallel and occur at the level of individual neurons. It looks like this is just muddying the waters between functional units and internal parallelism

      A vision system has multiple tasks to perform, low- and high-level tasks. The edge detection and motion detection that you describe are primitive feature extraction operators, low-level tasks. Cognition, the interpretation of these features that allows the building of a model of what is being seen, is something very, very different: a high-level task.

      For example determining the magnitude of an edge and the direction of an edge by looking at a pixel and its immediate neighbors is a very simple mathematical o
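
      That neighbor-based operation is essentially the Sobel operator; here is a minimal sketch with an arbitrary 5x5 test image and an arbitrary sample pixel:

      # Edge magnitude and direction at one pixel from its 3x3 neighborhood.
      import numpy as np

      img = np.zeros((5, 5))
      img[:, 2:] = 1.0                  # vertical step edge in a toy image

      kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
      ky = kx.T                                                         # Sobel y

      def response_at(image, kernel, r, c):
          # Weighted sum over the pixel and its eight immediate neighbors.
          return float(np.sum(image[r-1:r+2, c-1:c+2] * kernel))

      gx = response_at(img, kx, 2, 2)   # horizontal gradient at the center pixel
      gy = response_at(img, ky, 2, 2)   # vertical gradient at the center pixel
      print(np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx)))  # magnitude, direction
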

      • Sorry but at what point does it become cognition?

        Object/facial recognition is also well modeled by massively parallel neural nets. It's also known to occur at multiple levels through the visual system.

        The way this "Article" plays fast and loose with B.S. is incredibly annoying. Take "Identifying the CPU Cores of the Brain". You want to show me anything in the brain that looks anything at all like a CPU Core? What's next, is he going to refer to transcranial magnetic stimulation as overclocking?

        • Sorry but at what point does it become cognition?

          Far above "the level of individual neurons". It's conceivable that an individual neuron may be triggered by the magnitude of an edge, or the orientation of an edge. This is something that can be massively parallel. Now the matching of a collection of edges to a template, that could conceivably be parallelized -- testing some number of templates in parallel, but that would be something less massive than edge detection. I think this template matching, say collection of edges == cat, is getting to the point where

        • by mikael ( 484 )

          There was the concept of the "Perceptron". You have your camera that takes live video. This feeds into the perceptron. At the lower levels, edges, arcs, corners and dots are detected. Then at a higher level, shapes like circles, squares and triangles are detected. Higher still, objects like faces, cats, and balls are detected.
          The brain seems to generate a set of hypotheses about what something could be then pick the closest match.

      • Interpreting a large number of edges over perhaps a large part of the field of view to recognize the immediate environment using a memory of stored models and templates has completely different computational requirements and an entirely different opportunity (or relative lack of it) regarding parallelization.

        Determining how well input matches a particular model is independent of how well it matches another model, and can thus be done in parallel. And of course, since neural networks don't separate memory and processing units like von Neumann architecture does, it's hard to see how such operations could avoid parallelism.

        • Interpreting a large number of edges over perhaps a large part of the field of view to recognize the immediate environment using a memory of stored models and templates has completely different computational requirements and an entirely different opportunity (or relative lack of it) regarding parallelization.

          Determining how well input matches a particular model is independent of how well it matches another model, and can thus be done in parallel. And of course, since neural networks don't separate memory and processing units like von Neumann architecture does, it's hard to see how such operations could avoid parallelism.

          I'm not saying there is no parallelism in pattern/template matching, just probably a lot less relative to low level primitives like edge detection.

    • The news report just confirms what Ray Kurzweil has been saying all along about the hierarchical structure of the mind. What makes for thought happens at both lower and higher levels of the mind. Basically, if something gets recognized at the higher level, the lower levels don't kick in. If something is difficult to recognize at a higher level, then the lower levels start working until some pattern or part of a pattern emerges.

      A rough example of how this works: suppose you see the back of a curly haired woman

    • Oh, darn, you beat me to it.

      But I just wanted to add that fMRI lacks the resolution to measure individual neurons, so I don't know how it could possibly be used to rule out neuron-level parallelism. It is like recording people's height in whole feet and concluding there are only 6 different heights of people.

  • by Hognoxious ( 631665 ) on Saturday November 08, 2014 @06:16PM (#48342479) Homepage Journal

    ... the answer is one.

    • by Guppy ( 12314 )

      ... the answer is one.

      No, no. Definitely capable of at least two threads, since when I get a boner my brain still manages to spare processing power to continue breathing. Although if I were to try chewing gum at the same time, there could be trouble.

      • Breathing isn't a function of the brain proper. The top of your spinal cord handles those responsibilities for the most part.

    • by allo ( 1728082 )

      Tits. Did i say Tits? Tits!

  • Opens list of processes

    Finds "Repeat partial song on loop indefinitely"

    Ends process.

  • 1. Yes, the brain is massively parallel and "analog" - BUT not every neuron forms a distinct cognitive function (FBNs) and neurons do fire in pulses/spikes (almost binary) rather than produce continuous (analog) outputs.

    2. No, the article does NOT identify 'cpu cores' in the brain. It uses this metaphor (stated clearly in the paper as such) to point out the level of parallelism needed to run anything remotely similar to the complete functional 'package' in the brain.

    3. The resolution of modern fMRI is at 3mm^3 (30K-

  • ...to the one using the hammer, there is a tendency for everything to look like a nail. Identifying fMRI correlates may not actually indicate the number of cognitive components in play, any more than counting the number and location of gasoline stations tells us much detail of what people in a city are doing. At most, it gives us some useful hints.

    • The scanner measures the hemodynamic response function. The brain only sends oxygen-rich blood to the regions of the brain that need it. I guess this restricts power consumption, heat sinking requirements, and so on, a bit like the power limiting circuits on a processor. It is likely that the power regulation is a lot less fine grained in the brain than the thinking process itself. So if you had two separate regions that were fed by a single blood supply, you would not be able to distinguish them. In pract

  • If you want to emulate a brain with chips, you have a major obstacle to overcome; the fact that chips don't change dramatically over time as they acquire experience with the world through a coordinated set of sensory-motor systems. You would not just need the 50 or so high-level processors that are dedicated to specific tasks, linked together very specifically, you would also need the entire system to be able to rewire itself at both microscopic and macroscopic levels based on experience. Without living or
  • While driving about a dozen years ago, I thought: "Wow, I'm thinking."
    Then I thought: "Wow, I'm thinking the thought: "Wow, I'm thinking.""
    Then I thought: "Wow, I'm thinking the thought: "Wow, I'm thinking the thought: "Wow, I'm thinking."""
    Fortunately I didn't crash.
    However, I couldn't get to the next level without my mind drifting elsewhere.

  • I'd come down hard on the last line as still relevant.
    /*
    * If the new process paused because it was
    * swapped out, set the stack level to the last call
    * to savu(u_ssav). This means that the return
    * which is executed immediately after the call to aretu
    * actually returns from the last routine which did
    * the savu.
    *
    * You are not expected to understand this.
    */

    (credit to http://cm.bell-labs.com/cm/cs/... [bell-labs.com]) And yes, let me forestall a lot of comment -- as the link above mentions, the

  • So, I'm not gonna be able to simulate the brain on an ATTiny85, then, am I? Not even at 20 MHz?
  • That's all interesting but how many things can a brain do at the same time CONSCIOUSLY? Many studies point to the same number: 1 (one).

    • Comment removed based on user account deletion
      • by tsa ( 15680 )

        I understand that, but this research does not prove that you can talk on the phone whilst driving and do both things just as well as when doing them separately. That was what I thought of when I typed my post. I should have been clearer.

        • Comment removed based on user account deletion
          • by tsa ( 15680 )

            Yet research shows that people drive nearly as badly while calling hands-free as while calling handheld. Just like in a computer, multitasking means being slower at both skills. And because the environment changes while you talk or listen to someone while driving, you will miss more and thus be surprised by unexpected things more often.

  • Brodmann already counted the CPUs of the brain. They are called Brodmann areas. BA17, for example, is primary visual cortex. BA45 is Broca's area (speech). There are about 50. They are defined by differences in the micro-cellular architecture of the area. Most areas of cortex look roughly the same, but there are many differences, for example the input layers of primary sensory areas are larger than in other areas. Some areas have large output layers, or more inhibitory cells, etc.

    The brain does have many di

    • It's certainly interesting that the PCA-like analysis in the cited paper comes up with a similar number of subsystems, although I wonder if they ended up matching the Brodmann areas. And importantly, any set of areas is more like a subsystem, in which, if my quick look over the paper serves me well, activations make a unique contribution to task solving.

      The question is, does this bring us closer to a computational understanding of how the overall processes work? Localization of function alone doesn't, I

  • by Culture20 ( 968837 ) on Sunday November 09, 2014 @09:28AM (#48344915)
    How does this apply to phrenology?
  • Comment removed based on user account deletion
  • Comment removed based on user account deletion

"Being against torture ought to be sort of a multipartisan thing." -- Karl Lehenbauer, as amended by Jeff Daiell, a Libertarian

Working...