
Our Brains Use Binary Logic, Say Neuroscientists (sciencedaily.com) 69

"The brain's basic computational algorithm is organized by power-of-two-based logic," reports Sci-News, citing a neuroscientist at Augusta University's Medical College. hackingbear writes: He and his colleagues from the U.S. and China have documented the algorithm at work in seven different brain regions involved with basics like food and fear in mice and hamsters. "Intelligence is really about dealing with uncertainty and infinite possibilities," he said. "It appears to be enabled when a group of similar neurons form a variety of cliques to handle each basic like recognizing food, shelter, friends and foes. Groups of cliques then cluster into functional connectivity motifs to handle every possibility in each of these basics. The more complex the thought, the more cliques join in."
  • by pubwvj ( 1045960 )

    2^11

  • It's pretty much obvious at first glance.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      except that patterns of firing matter.

look up, e.g., P3a and P3b (components of the P300 response) for some of the simplest examples.

    • by JoeMerchant ( 803320 ) on Saturday December 03, 2016 @06:06PM (#53416913)

      It starts out obvious, but then factors like conduction speed, receptor sensitivity, calcium channel recharge rates, etc. etc. all factor in to make a "wet" neural network quite a bit more complex and nuanced than an electronic network of NAND gates.

      One of the open questions in "brain replication" is: can you get the same end result without the delays, varying sensitivity, numbing from multiple firing, etc.?

      OP seems to be saying that they think the hierarchy is using binary structures, but not that the firing/not firing is a simple 0 or 1 condition.
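The parent's point that a "wet" network is more nuanced than NAND gates can be made concrete with a toy leaky integrate-and-fire neuron (a standard textbook model, not anything from TFA): the same total input produces different spike trains depending on timing, which no stateless gate does.

```python
# Toy leaky integrate-and-fire (LIF) neuron vs. a NAND gate.
# Standard textbook sketch; parameters are arbitrary illustrations.

def nand(a, b):
    # A NAND gate is stateless: output depends only on the current inputs.
    return 0 if (a and b) else 1

def lif_spikes(inputs, tau=0.8, threshold=1.0):
    """Membrane potential leaks by factor `tau` each step, integrates the
    incoming current, and fires (then resets) when it crosses threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * tau + current          # leak, then integrate
        if v >= threshold:
            spikes.append(1)
            v = 0.0                    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Same total input, different timing, different output:
print(lif_spikes([0.6, 0.6, 0.0, 0.0]))  # closely spaced -> fires
print(lif_spikes([0.6, 0.0, 0.6, 0.0]))  # spread out -> never crosses threshold
```

Conduction delays, receptor sensitivity, and channel recharge rates would all add further state on top of this, which is the parent's point.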

      • >OP seems to be saying that they think the hierarchy is using binary structures, but not that the firing/not firing is a simple 0 or 1 condition

        How could they tell?

        • by JoeMerchant ( 803320 ) on Sunday December 04, 2016 @12:07AM (#53418215)

There's an interesting theory going around neuroscience. (Let me tell you what a Harvard-educated neuroscientist said about virtually all neuroscience and brain studies: "they're great theories, they've got tiny experimental observations to back them up, they're basically pulling all of this out of their asses.") Anyway, the theory goes that "thoughts" are encoded as repeating firing patterns. As far as I understand the theory, and what little I read of OP, the patterns themselves are more nuanced than binary: one "idea" might be encoded as a firing pattern like ..-..- ..-..- while a competing idea is encoded as a different pattern: ...---... ...---... So there can be thousands of distinct firing patterns; continuing the Morse-code analogy, even a simple 3-letter code holds over 17,000 unique patterns, and if you make the inter-firing intervals analog, a virtual infinity of patterns can be encoded with 8 to 10 intervals. Groups of neurons start "singing" their songs based on inputs from other areas of the brain/body, usually several different patterns based on competing inputs from different "source" areas. The neurons in a "processing node" sort of fight it out, each repeating its idea of the "right" firing pattern until they reach some form of consensus. That coherent pattern then becomes a "valid" input to the next level of processing, which may itself be getting inputs from a bunch of different areas that it has to hash out until it can reach a coherent pattern to pass along. Eventually, these "ideas" fire off motor-control routines and actuate the body to move, speak, etc.
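The "fight it out until consensus" dynamic can be sketched as a toy majority game. Everything here is hypothetical illustration (the patterns, the one-switch-per-round rule), not the actual mechanism the theory proposes.

```python
import random
from collections import Counter

def reach_consensus(node, rounds=200, seed=42):
    """Each neuron repeats a candidate firing pattern; each round, one
    randomly chosen neuron switches to the currently most common pattern.
    A toy majority dynamic, not a claim about real cortical circuits."""
    rng = random.Random(seed)
    neurons = list(node)
    for _ in range(rounds):
        majority = Counter(neurons).most_common(1)[0][0]
        neurons[rng.randrange(len(neurons))] = majority  # one neuron gives in
        if len(set(neurons)) == 1:
            break                     # coherent pattern reached
    return neurons[0]

# Two competing "ideas" encoded as Morse-like patterns (hypothetical):
node = ["..-..-"] * 6 + ["...---..."] * 4
print(reach_consensus(node))  # the initially more popular pattern wins
```

The initial majority can only grow under this rule, so the node settles on the pattern with the most initial support, which then gets passed along as the "valid" input to the next level.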

          The thing that's most intriguing to me about this theory is that it's somewhat repeated in human society. We get together, repeat ideas among small groups, pass them along to "higher levels" and eventually act as societies to do things. Now, with the internet, we have billions of people acting in some ways like neurons in a brain, reaching consensus about some things and chanting in chaotic disagreement about others.

          • The thing that's most intriguing to me about this theory is that it's somewhat repeated in human society. We get together, repeat ideas among small groups, pass them along to "higher levels" and eventually act as societies to do things. Now, with the internet, we have billions of people acting in some ways like neurons in a brain, reaching consensus about some things and chanting in chaotic disagreement about others.

            What you have just described is Marvin Minsky's Society of Mind [wikipedia.org], which was the subject of his 1986 book. You stand in good company.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        factors like conduction speed, receptor sensitivity, calcium channel recharge rates, etc. etc. all factor in to make a "wet" neural network quite a bit more complex and nuanced than an electronic network of NAND gates.

        Clocks, wire delay, V-F curves and other things are all issues in electronic networks. It's all analog down there. And then it's all quantum under that.

        One of the open questions in "brain replication" is: can you get the same end result without the delays, varying sensitivity, numbing from multiple firing, etc.?

I would suspect that the processes in the brain are as robust as necessary for the survival of the biological organism and to minimize energy consumption, and as non-robust as possible for the same reasons. Digital electronics common today is also just sufficiently robust, when operating within specification, to allow it to do useful work over the whole manufacturing tolerance range.

The difference is that we actively attempt to overcome those non-idealities in electronics to get a deterministic result. The same cannot necessarily be said for neural networks. At times their core functionality works with the non-ideal features of the system.
Yes, but there are so many of them that filtering the incoming intermediate signals can yield fuzzy logic.
That's not how neurons work. They fire with different strengths. More like transistors than switches.

  • by presidenteloco ( 659168 ) on Saturday December 03, 2016 @06:04PM (#53416909)

    I only glanced at the news article summary of the research, but it seems like they're saying you need an exponential number of neuron groups compared to the amount of information in bits that is coming in or being classified. It's hard to see how ONLY this organizing principle could be efficient in terms of amount of matter and energy needed to store the info.

There must be other organizing principles at work alongside this one to make it representationally and physically efficient.
    Principles such as, perhaps:
    1) The cliques are organized in an efficient representational abstraction hierarchy.
    2) Neuron cliques are only created for those combinations of things which DO occur, rather than for the exponential number of possible combinations which do not occur in the world; i.e. a sparse distributed representation, as has been suggested in other neuroscience and computational neuroscience research.
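The gap between the two schemes is easy to quantify: enumerating every non-empty subset of i inputs gives the paper's 2^i − 1 count, while allocating a clique only per observed combination stays proportional to experience. A minimal sketch (the example combinations are made up):

```python
def dense_clique_count(i):
    """All non-empty subsets of i inputs: the 2**i - 1 of the paper's formula."""
    return 2 ** i - 1

def sparse_clique_count(observed):
    """Only allocate one clique per distinct combination actually observed."""
    return len({frozenset(combo) for combo in observed})

print(dense_clique_count(20))  # over a million cliques for just 20 inputs

# If only a handful of combinations ever occur in the world:
observed = [("food", "shelter"), ("food",), ("foe", "fear"), ("food", "shelter")]
print(sparse_clique_count(observed))  # 3 cliques suffice
```

That is the efficiency argument in miniature: the dense count explodes exponentially in i, while the sparse count grows only with the number of distinct situations encountered.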

  • Smells bullshitty (Score:2, Interesting)

    by Anonymous Coward

    I mean, maybe it's just bad reporting, but seriously:

At the heart of the Theory of Connectivity is the algorithm, N = 2^i − 1, which defines how many cliques are needed for an FCM and which enabled the authors to predict the number of cliques needed to recognize food options, for example, in their testing of the theory.

    “N is the number of neural cliques connected in different possible ways; 2 means the neurons in those cliques are receiving the input or not; i is the information they are receiving; a

    • by narcc ( 412956 )

      While I have no doubt the reporting is bad, I doubt the "science" is much better.

    • Neuroscience is entirely bullshit. It's amazing it still exists now 13 years after the human genome was decoded.

      • by tgv ( 254536 )

First: the human genome has not been decoded. We've got a string of [AGCT]*, that's it. What it means, we don't know.

        Second: neuroscience is the modern name for a combination of cognitive psychology and neurobiology. It is a serious, complex and worthy subject of research, but there are a lot of bad researchers. It's unfortunately easier to get into neuroscience than into physics.

2^i is just the total number of combinations that a binary number of i digits can represent. They subtract 1 for the number 0, because that would represent no food input presented. What they found is that the number of "cliques" (N) required to distinguish among a number of "inputs" (i) is the same as if the brain had classified those items into a binary number.

      Maybe they'll soon discover that colour is processed in 24-bit RGB.
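The parent's reading of the formula can be checked in a few lines: N = 2^i − 1 is exactly the count of non-empty binary patterns over i inputs.

```python
def cliques_needed(i):
    """Theory of Connectivity count: N = 2**i - 1 cliques for i inputs,
    i.e. one clique per non-empty subset of inputs."""
    return 2 ** i - 1

def nonempty_patterns(i):
    """Enumerate the non-empty i-bit patterns, one per clique."""
    return [format(n, f"0{i}b") for n in range(1, 2 ** i)]

print(cliques_needed(4))      # 15
print(nonempty_patterns(2))   # ['01', '10', '11'] -- the all-zeros case is dropped
```

So for, say, 4 food options the theory predicts 15 cliques, matching the subtraction of the "no food presented" case from the 2^4 combinations.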

  • by Dread_ed ( 260158 ) on Saturday December 03, 2016 @06:38PM (#53417017) Homepage

    This seems to say that by interlinking subsets of binary decision modules you can simulate (or create?) a silicon based system that will approximate the decision tree of a biological entity. Toss in an additive memory system that enhances pattern recognition based on past experience (machine learning compatible?) and you have a system that will grow in discernment the way biological systems do. The trick would seem to be in modeling the appropriate type and number of these underlying modules, designing them to revise their output based on the relevant memory experience, and then assigning priority to the outputs from those modules, giving self preservation and threat detection precedence for instance.

    (Half cocked speculation) I can see custom cores designed to evaluate input based on their own narrow realm of specialization (food, friend/foe, threat/non-threat, shelter, etc. could be analogous to other machine relevant inputs) and with their own memory stores of experiential reference material. These feeder cores would process input with regard to their own specialization and then hand off their individual result to another coordinating core designed to integrate results from the feeder cores. The coordinating core would have a prioritization system to weight the inputs and handle conflicts. The coordinating core would also build an experiential database comprised of inputs from the other core modules and the results of the decisions made from those inputs and the viability of the decisions.

    Emergent phenomena and complexity would seem to be a logical result of the combination of a large array of interacting modules provided the output space is varied and robust.
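The coordinating-core idea above can be sketched as a weighted vote over specialized feeder outputs. All the core names, confidences, and weights here are hypothetical, matching the half-cocked-speculation spirit of the parent.

```python
def coordinate(feeder_outputs, priorities):
    """Weight each specialized core's confidence by its priority and pick
    the winning concern. Names and numbers are purely illustrative."""
    scored = {core: confidence * priorities[core]
              for core, confidence in feeder_outputs.items()}
    return max(scored, key=scored.get)

# Threat detection outranks food-seeking, per the parent's suggestion:
priorities = {"threat": 3.0, "food": 1.0, "shelter": 0.5}
outputs = {"threat": 0.4, "food": 0.9, "shelter": 0.2}  # each feeder's confidence
print(coordinate(outputs, priorities))  # a weak threat still beats a strong food cue
```

Even a modest threat signal (0.4 × 3.0 = 1.2) outweighs a strong food signal (0.9 × 1.0), which is the "self-preservation takes precedence" behavior the parent describes; a learning system would additionally adjust the weights from the experiential database.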

    • Yeah I think that is also what it is saying. We should be able to upgrade our neural network models to use this method of organization and get more performance. We should also be able to mimic any single subsystem of the human brain in the relatively near future*. Mimicking a full brain - what we think of as a general purpose artificial intelligence - is still immensely harder because of all the complex relationships between subsystems and the vast amount of memory and hardware we'll need. Even the mass

  • by epine ( 68316 ) on Saturday December 03, 2016 @07:13PM (#53417211)

    I've been waiting for a good opportunity to take this new toy out for a spin. Semantic Scholar claims to have brain science almost completely covered.

    * author search [semanticscholar.org]

    Not bad.

    * topic search [semanticscholar.org]

    Not blindingly great. But the third link down is a primary hit.

    Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation [semanticscholar.org]

    There's not a lot of related material here that I'd have gone chasing after the hard way. Apparently, either this research result or this search engine is still too new.

    Nevertheless, I retain high hopes.

  • They must mean eight (numbered 0-7, of course).
  • by ShooterNeo ( 555040 ) on Saturday December 03, 2016 @09:59PM (#53417903)

Summary of TFA: the actual method of brain signaling is primarily all-or-nothing electrical impulses. The timing is analog: THEORETICALLY, a difference in an impulse's arrival time down to the Planck second could have an effect.

    This research does not change any of this. The brain is still analog at the individual synapse level, it just follows certain patterns that are related to binary math for setting up arrays of neural circuitry.

In practice, like any analog system, true resolution is finite because there is noise. So the system merely needs to be quantized down to the noise-limited resolution and you can replicate its behavior exactly in a digital equivalent. Remember analog PIDs and other simple analog computers, the ones that used vacuum tubes and were in use from WWII and for a few decades after? Those systems also had finite effective resolution, even though analog systems theoretically have infinite resolution, because of all the various forms of noise in the actual physical equipment. In practice, if you replace an analog system with a digital one you can get BETTER resolution, because the intermediate processing steps do not introduce additional noise (while each vacuum-tube op amp you solder in picks up extra noise that is added to the signal).
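The quantize-to-the-noise-floor argument can be shown numerically: if the quantization step is set to the noise RMS, the rounding error stays below half the noise itself, so digitization adds nothing the noise hadn't already destroyed. A toy sketch with made-up numbers:

```python
import random

def quantize(signal, noise_rms):
    """Quantize with a step comparable to the noise floor: any finer
    resolution is masked by the noise anyway (the parent's argument)."""
    step = noise_rms
    return [round(x / step) * step for x in signal]

rng = random.Random(0)
noise_rms = 0.05
clean = [0.1 * k for k in range(10)]                  # an arbitrary analog ramp
noisy = [x + rng.gauss(0, noise_rms) for x in clean]  # what the "wire" carries

digitized = quantize(noisy, noise_rms)
# Quantization error is bounded by half a step, i.e. below the noise floor:
max_err = max(abs(a - b) for a, b in zip(noisy, digitized))
print(max_err <= noise_rms / 2 + 1e-12)  # True
```

Whether synaptic noise floors are really large enough to make a practical digital replica tractable is exactly the open question from upthread; this only illustrates the information-theoretic half of the claim.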

  • The most shocking part of this study is that the human brain uses logic. The authors thought they could slip that through? Who's going to believe it?

    • by Tablizer ( 95088 )

      The most shocking part of this study is that the human brain uses logic.

      It is shocking for those of us shocked over the election results.

A neural network can produce logic, or lossy yet meaningful metrics; you can then combine multiple levels and have precognitive assumptions and/or forceful cognitive assumptions. Result: deterministic complexity vs. stateful and relevant guesses.

    Static logic is best left to assembly code and extrapolated guesses to trained neural matrices. Repeat after me... left, right.. left, right... left, right...

Been saying this for years. If our brain behaviour is a consequence of neuron behaviour, this is inevitable. Think of a horde of Minions, each with a different idea of what a banana looks like, and a single button each to press if they see what they think is a banana.

  • Sure we have binary circuits in the brain but logic is another issue completely. Humans will have all kinds of things going on that do not relate to the task or subject at hand. For example, looking good, retaining a posture of self, or showing that you can act up at times, may take precedence over accomplishing the goal at hand. Logic is not really a human characteristic.
