Science

Sand in the Brain: A Fundamental Theory To Model the Mind

An anonymous reader writes "In 1999, the Danish physicist Per Bak proclaimed to a group of neuroscientists that it had taken him only 10 minutes to determine where the field had gone wrong. Perhaps the brain was less complicated than they thought, he said. Perhaps, he said, the brain worked on the same fundamental principles as a simple sand pile, in which avalanches of various sizes help keep the entire system stable overall — a process he dubbed 'self-organized criticality.'"

  • Re:Sand in our Brain (Score:5, Informative)

    by Charliemopps ( 1157495 ) on Sunday April 06, 2014 @08:56PM (#46680001)

    Ok, well... my understanding of it is that nature is made up of random events. If those events were all there were, you'd get white noise: a perfectly even randomness. However, nature also has laws. With sand there's gravity, slope, friction, and so on, and that means these randomly falling grains, on the macro scale, end up forming patterns. These patterns end up being very complex but predictable with statistics. Understanding a dune from the point of view of a grain of sand is nearly impossible. You just need to know the rules the system is following, and then you can make accurate macro-scale predictions without having to compute every grain of sand in the dune.
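
    To make that concrete, here is a minimal sketch of Bak's own sandpile model (the Bak-Tang-Wiesenfeld automaton): drop grains at random, topple any site that reaches four grains, and let topplings cascade. The grid size and drop count are illustrative defaults, not anything from TFA.

        import random

        SIZE = 20          # grid is SIZE x SIZE (illustrative)
        THRESHOLD = 4      # a site topples once it holds this many grains

        grid = [[0] * SIZE for _ in range(SIZE)]

        def topple(x, y):
            """Relax one unstable site; return the neighbours it spilled onto."""
            grid[x][y] -= THRESHOLD
            spilled = []
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < SIZE and 0 <= ny < SIZE:   # edge grains fall off the table
                    grid[nx][ny] += 1
                    spilled.append((nx, ny))
            return spilled

        def drop_grain():
            """Drop one grain at random, relax to stability, return avalanche size."""
            x, y = random.randrange(SIZE), random.randrange(SIZE)
            grid[x][y] += 1
            pending, avalanche = [(x, y)], 0
            while pending:
                sx, sy = pending.pop()
                if grid[sx][sy] >= THRESHOLD:
                    avalanche += 1
                    pending.extend(topple(sx, sy))
            return avalanche

        sizes = [drop_grain() for _ in range(100_000)]
        print("largest avalanche:", max(sizes))   # mostly tiny, occasionally huge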

    The argument has made its way into nearly every branch of science by now. Our attempts at brute-forcing nature, trying to connect the sub-atomic scale with the macro scale, have mostly failed. But it now seems that maybe nature doesn't work that way. Nature seems to work on sets of probabilities, and particles seem to behave more like "attributes" than matter. So perhaps the brain works like this too: it's a collection of chaos, bound by rules, and those rules cause the microscopic chaos to form patterns on the macro scale.

  • Re:Sand in our Brain (Score:5, Informative)

    by wanax ( 46819 ) on Sunday April 06, 2014 @09:52PM (#46680197)

    The linked article was horribly written. I'll take a shot at explaining it (or rather, a really, really simplified version of it).

    Two of the fundamental problems that neural circuits must solve are the noise-saturation dilemma and the stability-plasticity dilemma. The first is best explained in the context of vision. Our visual system can detect contrast (i.e., edges) over a massive range of brightness, spanning a factor of about 10^10. Given that neurons have limited firing rates (typically between 0 and 200 Hz), there needs to be some normalization scheme that allows useful contrast processing over massive variations in absolute input (more on this later). The stability-plasticity dilemma is that the brain needs to be flexible enough to learn from a single event (say, that touching a hot stove is a bad idea), but once learned, memories have to be stable enough to last the rest of a creature's life span.
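
    The comment doesn't name a specific mechanism, but divisive normalization is one commonly proposed answer. Here is a toy sketch (the constants are made up) of how a bounded response can preserve relative contrast across a 10^10 swing in absolute input:

        R_MAX = 200.0   # firing-rate ceiling in Hz, per the comment
        SIGMA = 1.0     # semi-saturation constant (illustrative)

        def normalized_rates(inputs):
            """Each unit reports its input relative to the whole pool."""
            pool = sum(inputs)
            return [round(R_MAX * x / (SIGMA + pool), 1) for x in inputs]

        dim    = [2.0, 1.0, 1.0]              # a faint edge, 2:1:1 pattern
        bright = [x * 1e10 for x in dim]      # same edge, 10^10 brighter

        print(normalized_rates(dim))      # [80.0, 40.0, 40.0]  -- 2:1:1 survives
        print(normalized_rates(bright))   # [100.0, 50.0, 50.0] -- 2:1:1 survives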

    The stability-plasticity dilemma implies that neural circuits must operate in at least two (as I said, very simplified) distinct states, a "resting" or "maintenance" state, and a "learning" state, and that there is a phase-transition point in between them. Furthermore, these states need to have the following properties regarding stability:
    1) the learning state must collapse into the maintenance state in the absence of input (otherwise you get epilepsy).
    2) reasonable stimulation (input) during the resting state must be able to trigger a phase change into the learning state (or you become catatonic).

    Many circuits/mechanisms have been proposed to explain how the brain solves these dilemmas. Most of them involve the definition of a recurrent neural network [scholarpedia.org] using some combination of gated diffusion and oscillatory dynamics to fit the well-known oscillatory and wave-based dynamics that have been recorded in neural circuits. Some of these models employ intrinsic learning via a learning rule (e.g., self-organizing maps), while others are fit by the researcher. One key point about this class of models (as opposed to the TFA approach) is that they have a macro-circuit architecture specified by the modeler. Typically these models are at least somewhat sensitive to parametric perturbation.
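
    As a toy example of this class of model (not any specific published circuit), here is a two-population recurrent network: one excitatory and one inhibitory pool coupled Wilson-Cowan style. All weights are assumed values, and whether it oscillates or settles depends delicately on them, which is exactly the parametric sensitivity mentioned above.

        import math

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        # Assumed coupling weights: E excites both pools, I inhibits E.
        W_EE, W_I_TO_E, W_E_TO_I = 12.0, 10.0, 10.0
        THETA_E, THETA_I = 4.0, 6.0     # firing thresholds (illustrative)
        DT, TAU = 0.05, 1.0             # time step and population time constant

        E, I = 0.1, 0.0
        for step in range(401):
            dE = (-E + sigmoid(W_EE * E - W_I_TO_E * I - THETA_E)) / TAU
            dI = (-I + sigmoid(W_E_TO_I * E - THETA_I)) / TAU
            E, I = E + DT * dE, I + DT * dI
            if step % 50 == 0:
                print(f"t={step * DT:5.2f}   E={E:.3f}   I={I:.3f}")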

    TFA describes another approach, which comes out of research on cellular automata done by Ulam, von Neumann, Conway and Wolfram. This approach posits that parametric stability and macro-circuit organization are only loosely important so long as the system obeys a certain set of rules regarding local interaction (which could also be thought of as the micro-circuit), because it will self-organize to a point of 'critical stability'. In the two-state model described above, this approach predicts that neural circuits are always at a state of 'critical stability', where maintenance occurs through frequent small perturbations or avalanches, and any new input will trigger a large avalanche, causing learning. Bak proposed this as a general model of neural circuit organization. One trademark of this type of model is 'scale-free' or 'power-law' behavior, where the frequency of an event falls off as a power of its size, so large avalanches are rare but not exponentially so. Some recent data has shown power-law dynamics in neural populations (a lot of other data doesn't).
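
    The standard toy for this in the neural-avalanche literature is a branching process: every active unit activates each of two downstream targets with probability sigma/2, so sigma is the branching ratio. At the critical point sigma = 1, avalanche sizes are heavy-tailed and asymptotically follow P(s) ~ s^(-3/2); away from it they don't. A quick sketch (trial counts are arbitrary):

        import random
        from collections import Counter

        def avalanche_size(sigma, max_size=10_000):
            """One avalanche: each active unit triggers each of its two
            targets with probability sigma/2 (mean offspring = sigma)."""
            active, size = 1, 0
            while active and size < max_size:
                size += active
                active = sum(1 for _ in range(2 * active)
                             if random.random() < sigma / 2)
            return size

        sizes = Counter(avalanche_size(sigma=1.0) for _ in range(50_000))
        total = sum(sizes.values())
        for s in (1, 2, 4, 8, 16, 32):
            print(f"P(size = {s:2d}) ~ {sizes[s] / total:.4f}")   # heavy-tailed decay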

    One big problem with the critical-stability hypothesis is that it doesn't deal well with the noise-saturation dilemma: it needs to produce the same general size of avalanche whether it's hit by one grain of sand or by 10^10 grains.

    None of this is particularly new; neural avalanches (albeit in a different context) were postulated in the early '70s. Could some systems in the brain exploit self-organized criticality? Sure, but there is a lot of data out there that's inconsistent with it being the primary method of neural organization.

  • Re:As an observer (Score:5, Informative)

    by mikael ( 484 ) on Monday April 07, 2014 @02:15AM (#46681137)

    There was an idea in computing several decades ago called "asynchronous computing". The idea was that you could get rid of the need to have all the different regions of your silicon chip clocked at exactly the same speed. Instead, data would move between different units at different speeds according to demand. If a particular circuit wasn't being used, you could put it in a low-power state; if something was being filled up with data, you boosted its clock speed. You end up with data "flowing" through the system, i.e., data-flow computing.
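
    A software caricature of the same idea (stage names and sizes invented for illustration): independent workers joined by bounded queues, with no shared clock; each stage runs whenever it has data.

        import queue
        import threading

        def stage(name, inbox, outbox, work):
            """Run until a None 'poison pill' arrives, passing it downstream."""
            while True:
                item = inbox.get()
                if item is None:
                    if outbox is not None:
                        outbox.put(None)
                    break
                result = work(item)
                if outbox is not None:
                    outbox.put(result)
                else:
                    print(f"{name}: {result}")

        # Bounded queues give backpressure: a fast producer simply blocks
        # until the slower consumer catches up -- no global clock needed.
        q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
        threading.Thread(target=stage, args=("square", q1, q2, lambda x: x * x)).start()
        threading.Thread(target=stage, args=("report", q2, None, lambda x: f"got {x}")).start()

        for i in range(5):
            q1.put(i)     # data "flows" as soon as the next stage is free
        q1.put(None)      # shut the pipeline down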

    So it's much like the brain, where different regions light up under fMRI as oxygen flow increases while they are in use. And scientists have a good idea of what different regions of the brain do, usually a high-level function like generate-muscle-motion-to-say-phrase or recognise-name-of-object-from-picture. From other kinds of MRI scans, they have identified the pathways along which different parts of the brain communicate, and are able to visualize these as "connectograms"; Phineas Gage is the best-known example.

  • Re:Sand in our Brain (Score:4, Informative)

    by mikael ( 484 ) on Monday April 07, 2014 @02:21AM (#46681157)

    Look up "boids". Each critter has a field of view and a current direction, and it only responds to what it sees in that field of view. If other critters start running, it starts running too. If they stop, it stops. With fish, the minute one turns there is a flash of light, and that instructs all the others to turn as well, provided the flash is bright enough. Maybe it takes two or more.
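
    For the curious, the core of the boids alignment rule is only a few lines. This sketch uses a simple distance check in place of a true field-of-view cone, and every number is illustrative.

        import math
        import random

        N, VIEW_RADIUS, TURN_RATE, SPEED = 30, 0.2, 0.1, 0.01

        # each boid: a position in the unit square plus a heading in radians
        boids = [{"x": random.random(), "y": random.random(),
                  "heading": random.uniform(-math.pi, math.pi)} for _ in range(N)]

        def step():
            new_headings = []
            for b in boids:
                # sum the headings of neighbours within view (a real boid
                # would also cut by angle, not just distance)
                sx = sy = 0.0
                for o in boids:
                    if o is not b and math.hypot(o["x"] - b["x"],
                                                 o["y"] - b["y"]) < VIEW_RADIUS:
                        sx += math.cos(o["heading"])
                        sy += math.sin(o["heading"])
                h = b["heading"]
                if sx or sy:
                    target = math.atan2(sy, sx)
                    diff = math.atan2(math.sin(target - h), math.cos(target - h))
                    h += TURN_RATE * diff   # nudge toward the group, don't snap
                new_headings.append(h)
            for b, h in zip(boids, new_headings):
                b["heading"] = h
                b["x"] = (b["x"] + SPEED * math.cos(h)) % 1.0   # wrap at edges
                b["y"] = (b["y"] + SPEED * math.sin(h)) % 1.0

        for _ in range(200):
            step()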
