
Physicists Overturn a 100-Year-Old Assumption On How Brain Cells Work (sciencealert.com) 135

An anonymous reader quotes a report from ScienceAlert: A study published in 2017 has overturned a 100-year-old assumption on what exactly makes a neuron "fire," suggesting new mechanisms behind certain neurological disorders. To understand why this is important, we need to go back to 1907, when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell's membrane increases as a current is applied. Once it reaches a certain threshold, the neuron reacts with a spike of activity, after which the membrane's voltage resets. What this means is a neuron won't send a message unless it collects a strong enough signal. Lapicque's equations weren't the last word on the matter, far from it. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes. According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it's accurate.
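
To make the textbook picture above concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It illustrates the general idea only; the parameter values and the NumPy-based implementation are illustrative assumptions, not taken from Lapicque's work or from the new study.

import numpy as np

def integrate_and_fire(current, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current over time; emit a spike when the membrane
    voltage crosses the threshold, then reset."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Leaky integration: voltage decays toward rest and rises with input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time
            v = v_reset               # reset after firing
    return spikes

# A weak input never reaches threshold; a stronger one produces spikes.
weak = integrate_and_fire(np.full(1000, 0.05))
strong = integrate_and_fire(np.full(1000, 0.2))
print(len(weak), len(strong))  # 0 spikes vs. several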

The experiments approached the question from two angles -- one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve's firing. Their results suggest the direction of a received signal can make all the difference in how a neuron responds. A weak signal from the left arriving with a weak signal from the right won't combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message. This potentially new way of describing what's known as spatial summation could lead to a novel method of categorizing neurons, one that sorts them based on how they compute incoming signals or how fine their resolution is, based on a particular direction. Better yet, it could even lead to discoveries that explain certain neurological disorders.
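
As a toy illustration of the contrast being described, the sketch below compares classic spatial summation with a direction-aware rule. The directional rule is an invented stand-in for the effect reported in the study, not the researchers' actual model; the threshold and signal values are arbitrary.

def fires_classic(inputs, threshold=1.0):
    # Classic spatial summation: all inputs add up regardless of origin.
    return sum(strength for strength, _ in inputs) >= threshold

def fires_directional(inputs, threshold=1.0):
    # Hypothetical directional variant: each direction is summed separately,
    # and the neuron fires only if some single direction crosses threshold.
    totals = {}
    for strength, direction in inputs:
        totals[direction] = totals.get(direction, 0.0) + strength
    return max(totals.values()) >= threshold

signals = [(0.6, "left"), (0.6, "right")]
print(fires_classic(signals))              # True: 1.2 >= 1.0
print(fires_directional(signals))          # False: no single direction reaches 1.0
print(fires_directional([(1.1, "left")]))  # True: one strong directional signal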

  • For goodness sake... (Score:2, Interesting)

    by Anonymous Coward

    The signal to noise ratio here is getting woeful (granted I'm posting this on an article with only one visible reply so far, but it's the general principle of the thing...). I'm wondering if the editors in their infinite wisdom might like to try the odd article restricted to no anonymous accounts and no accounts younger than a week, say... there's sometimes some decent stuff at 0, and sometimes even at -1, depending on the subject matter, but I don't want to wade through four pages of antisemitism, three pa

    • by Anonymous Coward

      Wherever there are readers, and anonymity, there is garbage like this. Assholes write bots to spam this stuff and laugh about it. Some people actually type this shit out because their lives are empty.

      In any case, there will have to be some kind of technological solution to the problem, as no amount of shaming or whatever will stop them.

    • Re: (Score:2, Offtopic)

      > if the editors ... might like to try

      BWAHAHA. That's a good one!

      I've been reading /. for ~20 years. The running joke IS the editors doing fuck all. Why do you think we get so many dupes HOURS apart?

      But yeah the signal:noise ratio has shifted from signal to mostly noise. :-( There are a couple of ways that could be fixed but that would require work and the editors don't give a fuck about that.

      1. Add Unicode support
      2. Add the ability to edit posts within a ~5 minute timeframe.
      3. Remove the shitty lam

      • by Anonymous Coward

        Interesting. Original AC here, and the comment I replied to has been removed. So there IS some mild censoring going on, but very little? That's... even worse than doing nothing at all I reckon.... they remove a relatively yawn-worthy 'apk takes it up the bum' style post but leave the swastikas and the deranged shitposting? yeeesh.....

        And honestly, I can't see how 'no AC' articles, combined with the other, relatively well behaved parts like karma and mod points, wouldn't help protect against shitposting...

      • ...

        Every online community eventually dies as the masses move on to the latest fad and only the die hard fans remain. /. is no different.

        I'm a long term fan. Does that mean I'm going to die hard? Wait, that's not what I meant! RETRACT!
        Just thought I'd add a smile to your day.
        OMG this just isn't working.

    • I'm wondering if the editors in their infinite wisdom might like to try the odd article restricted to no anonymous accounts and no accounts younger than a week

      You can do that for yourself by simply adjusting the minimum moderation level for the articles you read.

      I never see the sewage and trolls unless someone with a logged-in account responds to them.

    • by Tablizer ( 95088 )

      I don't want to wade through four pages of antisemitism

      It's an illustration of what happens when people's neurons are not working correctly.

    • by Anonymous Coward

      Or hire some fucking mods. This isn't a quaint nerd project anymore. The site has plenty of ad revenue. It'll be dead shortly if this doesn't get any better.

    • May I congratulate the parent? This is the first Slashdot topic I can remember - and I have been reading for many years - in which none of the first dozen or so comments were irrelevant, obscene or vicious.

      Someone has moderated the parent Offtopic, which strictly speaking is correct. But from a meta point of view it hits the nail right on the head.

  • by Anonymous Coward on Wednesday August 07, 2019 @12:09AM (#59055198)

    Enhancement or suppression of individual neural signals is consistent with Jerry Lettvin's original work on retinal neurons from the 1960s. The physical layout of nerves *matters*, something most modern "hey, let's wire up human brains" approaches ignore.

    • 100% correct. What AI nutters call "neural networks" is a complete joke. Total marketing hype meant to fool idiots that computers can think like a brain.

      • by Anonymous Coward on Wednesday August 07, 2019 @12:23AM (#59055228)

        AI does NOT mean "computers that think." Not at all. Not even remotely. If it meant that, I would agree with you that it doesn't exist. But it doesn't mean that.

        The "A" in "AI" stands for "artificial". You know, as in "not real."

        By way of comparison, "Leatherette" exists, but it is not real leather, and it is not supposed to be.
        Similarly, "Artificial Intelligence" exists, but it is not real intelligence, and it is not supposed to be.

        You seem to be thinking of something like "synthetic intelligence." That is pure science fiction.

        "Artificial Intelligence" is just a loose collection of algorithms and software engineering techniques that have a common "gist." Nothing more. I Hope that helps clear things up for you.

        • by Anonymous Coward

          More to the point though, as a developer exploring ML my first thought was we need to start rethinking Neural Nets. To your point Machine Learning is not the only form of AI, but it's an important one right now. There is a lot of time and money being spent in that space right now and any improvement to existing ML tools could stand to make the owner of those improvements a lot of money.

        • Yes, yes, we're all aware of how companies are trying to redefine the term, since figuring out real AI isn't something anyone alive today will see. It's just not really taking hold outside of the people trying to sell "AI" products. And "machine learning" would be more accurate for the pedants anyway.
        • by anegg ( 1390659 ) on Wednesday August 07, 2019 @02:24PM (#59058288)

          I have always thought that the term "Artificial Intelligence" was originally coined to describe an intelligence (i.e., a conscious, thinking entity) created by artifice (i.e., artificially), this being juxtaposed against the only other form of intelligence known, what might be called a natural intelligence (i.e., one that has arisen through natural processes).

          It seems clear to me that the term "Artificial Intelligence" has been co-opted by marketers, with a sort of "grade inflation" effect whereby anything that achieves results even dimly similar to what a natural intelligence can achieve is now termed "AI," even though it is clearly decomposable into nothing more than a collection of algorithms and software engineering techniques.

          What I wonder is what the "secret sauce" will be once a true "artificial intelligence" (a conscious, thinking entity created through artifice) is brought about, assuming that is possible at all. I *do* think it is possible, because I think that our minds are conscious and thinking, and I believe that our minds operate solely on the machinery of our physical brains. So perhaps this new way of looking at how neurons actually operate might further our efforts in this direction.

          • by Anonymous Coward

            You have always thought wrong; your interpretation of the word was not the original meaning. "Artificial Intelligence" is a very old computer science term covering a class of algorithms that were cooked up many decades ago (minimaxers, various types of tree searches, heuristics, and so on). Their common thread is that they make the machine "mimic" intelligent behavior, without actually being intelligent.

            So we get something that looks kind of like thinking, without any actual thinking going on. He

      • by Anonymous Coward

        You've seen the advances in image recognition, language translation, and other classification areas, and you *could* just go read up on how these systems work, yet you don't.

        I assume you're a classic programmer and haven't yet done a course on DNNs and don't know how to code or train them. That's OK, but you're one tool short of a full toolbox and filling that hole with whiney shit is not helpful.

        They're very simple in principle:

        Layers of neurons
        Each neuron a non-linear equation connecting inputs to outp
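
        (As a hedged aside, here is a minimal NumPy sketch of that "layers of neurons, each a non-linear function of its inputs" idea; the layer sizes, random weights, and the ReLU choice are illustrative assumptions rather than anything specific to the systems mentioned above.)

        import numpy as np

        def relu(x):
            return np.maximum(0.0, x)

        def dense_layer(x, weights, bias):
            # Each artificial "neuron": a weighted sum of its inputs pushed
            # through a non-linearity.
            return relu(weights @ x + bias)

        rng = np.random.default_rng(0)
        x = rng.normal(size=4)                         # 4 input features
        w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer, 8 neurons
        w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # output layer, 2 neurons
        print(dense_layer(dense_layer(x, w1, b1), w2, b2))  # 2 output values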

        • by Anonymous Coward

          All they do is this ridiculously oversimplified vector-matrix multiplication and summation.
          No simulation of the *spiking* signal protocol, the much more complex weighting (as per TFA), the chemical processes in the synaptic gap, or the "broadcast" effects that neurotransmitters can have!

          It's a wonder it halfway works at all!

          That's also why they need ten times the "neurons"! (Or more like: matrix size.)

          • by Anonymous Coward

            You missed the activation functions.

            A system as you describe would be reducible to just linear equations. The first attempts at DNNs used awful functions like sigmoid to add non-linearity; now stupidly simple things like ReLU and maxout have been found to work much better. Maxout does spike.

            And of course any weighting is possible, including locally weighting A closer to B than C. So discoveries like this won't enhance DNNs; these cases are already possible in the AI neuron.

            Culling (often used in image cla
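
            (For readers following along, a small sketch of the activation functions being discussed; the input values and the maxout group size of two are illustrative assumptions.)

            import numpy as np

            def sigmoid(x):
                return 1.0 / (1.0 + np.exp(-x))

            def relu(x):
                return np.maximum(0.0, x)

            def maxout(x, groups=2):
                # Maxout keeps the maximum over small groups of pre-activations,
                # a piecewise-linear "winner takes the group" non-linearity.
                return x.reshape(-1, groups).max(axis=1)

            z = np.array([-2.0, -0.5, 0.5, 2.0])
            print(sigmoid(z))  # smooth and saturating
            print(relu(z))     # zero below 0, identity above
            print(maxout(z))   # max over pairs: [-0.5, 2.0]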

        • Re:Silly (Score:5, Insightful)

          by mrfaithful ( 1212510 ) on Wednesday August 07, 2019 @04:22AM (#59055736)

          I want to learn neural networks; I've ham-fisted some TensorFlow examples into some production workflows, but that's as far as I've got.

          My worry is that these trained models are really just the worst kind of code debt ever invented. Every company has that one monolithic program that the whole business is based on which no one really understands completely so they prefer to hack around its deficiencies at the edges, making the code debt worse and worse. However with a whole lot of time and effort you could completely understand every part of it and fix it properly. You might not, you might just reimplement it now that you know the business logic as a whole instead of piecemeal over 10 years, but that's a possibility you have anyway.

          A trained neural net is in my mind the equivalent of that horrible code base no one wants to dive into, except now you couldn't even if you wanted to. Changing how the NN does stuff likely involves an extremely expensive retraining so you'll try and hack at the edges a bit to try and get it more useful. And ultimately, you can't reimplement the whole logic again because you never understood the rules that make the NN work then, now or in the future.

          At first people were using NNs to do stuff that classical programming has failed to make any meaningful headway in, so some result is better than no result. Except I'm starting to see people dive onto this grenade, replacing well understood database searching + statistical weighting, for instance, with NNs trained on client behaviour. At some point someone is going to ask for a tiny change that's going to wreck their entire solution. Spread that across the industry at large and we might be looking back on all this tensor acceleration hardware and wishing we had spent the resources on something we could use for the future.

          • by Anonymous Coward

            It is a good point and I think there are things that can be done to reduce the damage without abandoning NN completely.
            For example, one could try to avoid having a large NN that solves the entire problem and instead define interfaces that allow you to use multiple smaller NNs.
            That way only part of the solution needs to be retrained or converted to a non-NN solution to adjust features.
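
            (A rough sketch of that modular idea, under the assumption that the "interface" is just a fixed-size feature vector between two smaller models; the names, shapes, and random weights are invented for illustration.)

            import numpy as np

            def relu(x):
                return np.maximum(0.0, x)

            class FeatureExtractor:
                # First, smaller model: raw input -> fixed-size feature vector
                # (the agreed interface).
                def __init__(self, rng, in_dim=16, feat_dim=8):
                    self.w = rng.normal(size=(feat_dim, in_dim))
                def __call__(self, x):
                    return relu(self.w @ x)

            class Classifier:
                # Second model: feature vector -> decision. Either half can be
                # retrained or replaced (even by a non-NN rule) on its own.
                def __init__(self, rng, feat_dim=8, n_classes=3):
                    self.w = rng.normal(size=(n_classes, feat_dim))
                def __call__(self, features):
                    return int(np.argmax(self.w @ features))

            rng = np.random.default_rng(1)
            extractor, classifier = FeatureExtractor(rng), Classifier(rng)
            print(classifier(extractor(rng.normal(size=16))))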

          • by anegg ( 1390659 )

            As others have pointed out, the technology currently called "neural networks" should more appropriately be called "artificial neural networks" (perhaps more accurately "computational neural networks"), as they are not precise, accurate models of how natural neural networks operate; they are grossly simplified approximations of how one mechanism in a natural neural network is thought to work. In other words, current "neural network" models are just inspired by actual natural neural networks, and are most lik

      • by Applehu Akbar ( 2968043 ) on Wednesday August 07, 2019 @01:18AM (#59055344)

        100% correct. What AI nutters call "neural networks" is a complete joke. Total marketing hype meant to fool idiots that computers can think like a brain.

        On the other hand, such a basic change in our model of neural net behavior might suddenly cause our neural emulations to work really well for a change.

        • by Tablizer ( 95088 )

          such a basic change in our model of neural net behavior might suddenly cause our neural emulations to work really well for a change.

          Our neural emulations do work already. They are very good at identifying the patterns they are trained for. They just lack what we call "common sense".

          I doubt this new tweak will suddenly give them common sense. I expect one of three outcomes:

          1. It will make artificial neurons more efficient: faster pattern recognition with fewer "parts" and/or less training.

          2. The added comp

      • by ShanghaiBill ( 739463 ) on Wednesday August 07, 2019 @01:27AM (#59055358)

        100% correct. What AI nutters call "neural networks" is a complete joke.

        Not at all. A modern artificial neural net is not designed to mimic a human brain any more than a 747 is designed to mimic a hummingbird.

        The basic principles may be the same, but the goal is to build something that works rather than something that is faithful to the biological inspiration.

      • 100% correct. What AI nutters call "neural networks" is a complete joke. Total marketing hype meant to fool idiots that computers can think like a brain.

        Actually, on the contrary, it somewhat brings the biological model closer to what some AI research on the matter has been suggesting for a while: that the simple "perceptron"-type model implied in the older trigger-threshold model doesn't account for the complexities of how a neural net can be configured.

        This newer research suggests neurons are capable of

      • What AI nutters call "neural networks" is a complete joke.

        Meanwhile, I have a camera app that can see better than me in the dark on a cheap sensor that was developed before the seminal paper was published.

        Reality thinks your aspersions are a complete joke.

  • by 4wdloop ( 1031398 ) on Wednesday August 07, 2019 @12:21AM (#59055224)

    So it's a weighted summation followed by a level-threshold detector? I thought that's how neurons worked to begin with?

    • by Dunbal ( 464142 ) * on Wednesday August 07, 2019 @12:29AM (#59055242)
      I'm guessing neither Na channel density nor Na/K pump density is distributed evenly throughout the cell. Stands to reason you can very easily explain why the same stimulus would have a different effect depending on where you apply it. It's convenient for us to think of cells as simple things for our own learning and teaching convenience; however, there's no reason to assume there isn't a complex layer that we're completely missing, because we usually only ever get to examine single units or a unit we've smashed to bits.
      • by Tablizer ( 95088 )

        Does somebody want to propose a new computational model of this new view of neurons? I tried, but my drafts were growing too large. I'm not smart enough to factor it nicely. It required a look-up table of "input port" distances or positions, for one. If each neuron requires the equivalent of a spreadsheet, our emulations are hosed. We'll need a shortcut, perhaps a slightly lossy one (a good-enough approximation).
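
        (Here is one rough, hypothetical stab at such a model, purely as a sketch: each input carries a position along the cell, and nearby inputs reinforce each other more than distant ones. The pairwise distance-decay rule and every parameter value are invented for illustration, not taken from the study.)

        import numpy as np

        def positional_neuron(signals, threshold=1.0, length_constant=1.0):
            # signals: list of (strength, position) pairs, position being a 1-D
            # coordinate along the cell. Contributions of input pairs decay with
            # the distance between them.
            total = 0.0
            for i, (s_i, p_i) in enumerate(signals):
                for s_j, p_j in signals[i:]:
                    total += s_i * s_j * np.exp(-abs(p_i - p_j) / length_constant)
            return total >= threshold

        print(positional_neuron([(0.7, 0.0), (0.7, 0.1)]))  # close together: fires
        print(positional_neuron([(0.7, 0.0), (0.7, 5.0)]))  # far apart: stays quiet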

    • Re: (Score:3, Interesting)

      by Tablizer ( 95088 )

      As I interpret it, the spatial relationship of inputs may matter. Two close-together inputs may have a different effect or trigger threshold than two far-apart ones even though their input signal strength is the same.

    • by phantomfive ( 622387 ) on Wednesday August 07, 2019 @01:55AM (#59055430) Journal
      Yeah, this is not really 'overturning' a 100-year-old assumption (which, as I understand it, wasn't an assumption but a tested hypothesis). It's more like adding nuance to the system.

      It's not just a weighted summation; the equation is more complicated (not surprising, given it's a physical system).
    • by raftpeople ( 844215 ) on Wednesday August 07, 2019 @11:09AM (#59057106)
      There is much more than this that is already known; this is just additional information added to the growing complexity of how our brain cells work. Some other examples:

      1 - Dendrites and axons perform localized signaling/spiking, and signals flow both forward and backward (this was detected recently-ish because the dendrites and signals were previously too fine to record)

      2 - Astrocytes (there are 10x as many astrocytes as neurons) are not just support cells; they are part of the computational process. Each one encompasses and modulates between 1 million and 2 million synapses between neurons (the synapse now has a new name, the tripartite synapse), responding to and releasing all neurotransmitters as well as their own gliotransmitters.

      3 - Calcium waves (localized and global) within astrocytes and neurons act as intracellular signaling mechanisms that are being studied to determine how they are involved in computation.

      4 - DNA and RNA activities are important for learning and their role in computation is being studied.
  • by q_e_t ( 5104099 ) on Wednesday August 07, 2019 @02:11AM (#59055466)
    Someone should tell those who moved away from this model many years ago.
  • by Anonymous Coward

    Now we have neuron deniers! Where will it all end?!

  • This research is fascinating and will lead to a better understanding of how the brain works. However, this research still suggests that brain signals and processing are very much analog and not discrete like a computer. I guess you could say a computer is technically analog in the sense that electrical signals are travelling along circuit pathways. If they aren't strong enough, they won't register in the destination but the way computers "compute" is by having a known good discrete state (registers, memo
    • This research is fascinating and will lead to a better understanding of how the brain works. However, this research still suggests that brain signals and processing are very much analog and not discrete like a computer.

      Analog and digital are merely abstractions; in nature, abstractions are fundamentally unified and bleed into one another, as you stated already. The "digital" ones and zeros are really thresholds of analog electrical voltage determining whether the computer reads the signal as a distinct bit of information.
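
      (A tiny sketch of that point: reading a "bit" is just comparing an analog voltage against thresholds. The TTL-style voltage levels used here are an illustrative assumption.)

      def read_bit(voltage, v_low=0.8, v_high=2.0):
          # Below v_low the input counts as 0, above v_high as 1; in between,
          # the underlying analog reality shows through as an undefined region.
          if voltage <= v_low:
              return 0
          if voltage >= v_high:
              return 1
          return None

      print(read_bit(0.3), read_bit(3.1), read_bit(1.4))  # 0 1 None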

    • by Tablizer ( 95088 )

      Our digital emulations don't have to be perfect, just good enough. There has to be a degree of "wiggle room" for errors or imperfections in the brain. Otherwise, a mild blow to the head or a cup of coffee would cause the equivalent of a BSOD. And we know from war and crime injuries that the brain is in general surprisingly tolerant of damage. After all, we evolved in a hostile world.

      Thus, the errors caused by using emulated approximations only have to be below the level of the brain's natural error tolerances.

  • by Kwirl ( 877607 ) <kwirlkarphys@gmail.com> on Wednesday August 07, 2019 @09:32AM (#59056540)
    Just saying... this is from December 2017...
    • And a really badly written blog post.

      "It's important not to throw out a century of wisdom on the topic on the back of a single study."

      "Wisdom",as opposed scientific research - because there hasn't been a centuries worth of research....
  • "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- Mark Twain/various [quoteinvestigator.com]

  • Not a surprise at all, if you are a Computer Scientist and happen to work with Neural Nets. We have been using tensors in neural nets for some time now, but have still been unable to really say exactly why they work so well. If "direction matters", well, that is what tensors actually do. They model force direction (e.g. the Einstein field equations), or something even more abstract like the biased direction of a simulated neuron in a larger neural net.

    https://en.wikipedia.org/wiki/... [wikipedia.org]
    https://en.wikipedi [wikipedia.org]

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...