Physicists Overturn a 100-Year-Old Assumption On How Brain Cells Work (sciencealert.com) 135
An anonymous reader quotes a report from ScienceAlert: A study published in 2017 has overturned a 100-year-old assumption about what exactly makes a neuron "fire," suggesting new mechanisms behind certain neurological disorders. To understand why this is important, we need to go back to 1907, when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell's membrane increases as a current is applied. Once it reaches a certain threshold, the neuron reacts with a spike of activity, after which the membrane's voltage resets. What this means is a neuron won't send a message unless it collects a strong enough signal. Lapicque's equations weren't the last word on the matter, not by far. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes. According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it's accurate.
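The integrate-and-fire idea the summary describes can be sketched in a few lines. This is an illustrative toy model only; the parameter values and function name are invented for this example, not taken from the study.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters only).
def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Integrate input current; emit a spike and reset when voltage crosses threshold."""
    voltage = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        voltage += dt * (current - leak * voltage)  # integrate input, with leak
        if voltage >= threshold:
            spikes.append(t)   # the neuron "fires"
            voltage = 0.0      # membrane voltage resets
    return spikes

# A weak constant input never reaches threshold; a strong one fires repeatedly.
weak = simulate_lif([0.05] * 50)    # sub-threshold: no spikes
strong = simulate_lif([0.5] * 50)   # supra-threshold: periodic spikes
```

The key property, and the one the new study complicates, is that only the *total* accumulated input matters here, not where on the cell it arrived.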
The experiments approached the question from two angles -- one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve's firing. Their results suggest the direction of a received signal can make all the difference in how a neuron responds. A weak signal from the left arriving with a weak signal from the right won't combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message. This potentially new way of describing what's known as spatial summation could lead to a novel method of categorizing neurons, one that sorts them based on how they compute incoming signals or how fine their resolution is, based on a particular direction. Better yet, it could even lead to discoveries that explain certain neurological disorders.
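One way to picture the directional-summation result is to contrast pooled summation with a per-direction threshold. This is a hypothetical toy sketch to illustrate the described behavior, not the paper's actual model or equations.

```python
# Toy contrast between classic spatial summation and a direction-aware
# variant (hypothetical model, for illustration only).
def classic_summation(inputs, threshold=1.0):
    """Classic view: all input strengths pool into one voltage."""
    return sum(strength for _, strength in inputs) >= threshold

def directional_summation(inputs, threshold=1.0):
    """Directional variant: inputs from different directions do not pool;
    a spike requires one direction to cross threshold on its own."""
    per_direction = {}
    for direction, strength in inputs:
        per_direction[direction] = per_direction.get(direction, 0.0) + strength
    return max(per_direction.values()) >= threshold

signals = [("left", 0.6), ("right", 0.6)]
classic_summation(signals)              # fires: 1.2 crosses threshold when pooled
directional_summation(signals)          # does not fire: neither side reaches 1.0
directional_summation([("left", 1.1)])  # fires: one strong directional input
```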
For goodness sake... (Score:2, Interesting)
The signal to noise ratio here is getting woeful (granted I'm posting this on an article with only one visible reply so far, but it's the general principle of the thing...). I'm wondering if the editors in their infinite wisdom might like to try the odd article restricted to no anonymous accounts and no accounts younger than a week, say... there's sometimes some decent stuff at 0, and sometimes even at -1, depending on the subject matter, but I don't want to wade through four pages of antisemitism, three pa
Re: (Score:1)
Wherever there are readers, and anonymity, there is garbage like this. Assholes write bots to spam this stuff and laugh about it. Some people actually type this shit out because their lives are empty.
In any case, there will have to be some kind of technological solution to the problem, as no amount of shaming or whatever will stop them.
Re: (Score:2)
How about, instead of knee-jerking out a simplistic "he's an asshole/loser," we actually look at what makes them do this? I mean, to them it obviously has valid reasons.
And then fix those reasons. Usually it's related to simply getting treated nicer by life, instead of everyone being a dick to them and then wondering why they're one too.
But maybe you don't want that.
We are on Slashdot after all. No reason to assume you are any different than the swastika spammer.
That's akin to curing racism (with a nice twist).
We can be nice and feed healing, but the "damaged" or "hurt" refuse to believe it's what's really happening. They think it's a trick or they're being screwed with. They respond with more back-scatter and hate.
It ain't gonna get fixed. And if we try to fix it by silencing them or restricting their ability to get their rocks off on others who have nothing to do with their real problem(s), they just get more aggressive, creative, and louder. LRR (Lather, rin
Re: (Score:2)
Oh boo hoo life is hard. Life is hard for most people. And most people still manage to learn that they get better rewards from life by being nice to others (and being reliable at their jobs, etc).
It makes sense to reward good behavior, and to abstain from rewarding bad behavior. This reinforces good behavior while not reinforcing bad behavior. What you are suggesting is that we reward bad behavior, in hopes that this will make the bad behavior stop. That makes zero sense.
BS! The newest generation have smartphones! Life is easy in that world. Unless you take the smartphone away or it's destroyed... then.. ah, fuck.... what you posted is good. :)
Re: (Score:1)
Trump
Koch(both of them)
the 6 Waltons
All "winners" thanks to evil conduct
Re: (Score:2, Offtopic)
> if the editors ... might like to try
BWAHAHA. That's a good one!
I've been reading /. for ~20 years. The running joke IS the editors doing fuck all. Why do you think we get so many dupes HOURS apart?
But yeah the signal:noise ratio has shifted from signal to mostly noise. :-( There are a couple of ways that could be fixed but that would require work and the editors don't give a fuck about that.
1. Add Unicode support
2. Add the ability to edit posts within a ~5 minute timeframe.
3. Remove the shitty lam
Re: (Score:1)
Interesting. Original AC here, and the comment I replied to has been removed. So there IS some mild censoring going on, but very little? That's... even worse than doing nothing at all I reckon.... they remove a relatively yawn-worthy 'apk takes it up the bum' style post but leave the swastikas and the deranged shitposting? yeeesh.....
And honestly, I can't see how 'no AC' articles wouldn't protect against shitposting with the other, relatively well behaved parts like karma and mod points...
Re: (Score:2)
...
Every online community eventually dies as the masses have moved onto the latest fad and only the die hard fans remain. /. is no different.
I'm a long term fan. Does that mean I'm going to die hard? Wait, that's not what I meant! RETRACT!
Just thought I'd add a smile to your day.
OMG this just isn't working.
Re: For goodness sake... (Score:5, Funny)
They could implement -2 for shit posts and -3 for ad spam posts.
No, because these mods would quickly turn into “-2, I strongly disagree” and “-3, suspected Republican.” We might as well go ahead and add “-4, Apple fan.”
The core problem is moderation without comment (Score:1)
That is what /. always got ass-backwards.
When one moderates, one should be *forced* to think first and give a good reason!
The current system instead *forces* moderators to be both cowards and anonymous. To "hit & run". Like a toddler.
That is unacceptable. If you moderate, and lash out punches, you must be able to take the backlash too!
Re: (Score:1)
The mod point system just needs some mod-points that can only be used to mod up, not down. That way you can mod up your point of view but cannot suppress others' points of view, since there will always be more up-mod points than down.
I'd also put the down-mods out for meta moderation separately. If a person misuses a down-mod (as judged by multiple meta-moderations), then they don't get down-mods anymore.
More +ve mod points would lead to more useful comments modded above zero, and so the 0 filter level would be
Re: (Score:2)
...Excellent idea. Remove spam and garbage, but make it viewable to counteract the "so THIS is how I mod something away I disagree with" group.
Mod-up only. It's such a good idea that there is no way they're going to do it. :)
Re: For goodness sake... (Score:5, Funny)
...these mods would quickly turn into "-2, I strongly disagree" and "-3, suspected Republican." We might as well go ahead and add "-4, Apple fan."
You say that like it's a bad thing.
Re: (Score:2)
"+6, I'm in a warm fuzzy safe space"
a. "-1, Space is not safe";
b. "-1, Fuzz is hot today";
c. "-1, I'm not as fuzzy as I want to be";
int undoPrevMods(a, b, c) {
return HAPPY;
}
a. "+3, I'm alive";
Re: (Score:2)
Your humor got me to follow the thread all the way back to the AC source. Quite a shock that I have to thank you for it.
So why did the original wit post as AC in the first place?
Re: (Score:2)
They could implement -2 for shit posts and -3 for ad spam posts.
No, because these mods would quickly turn into “-2, I strongly disagree” and “-3, suspected Republican.” We might as well go ahead and add “-4, Apple fan.”
That's a sick and simple thought crapped out of the mouth (channeled through the fingers).
It just might work. :)
Re: (Score:1)
I'm wondering if the editors in their infinite wisdom might like to try the odd article restricted to no anonymous accounts and no accounts younger than a week
You can do that for yourself by simply adjusting the minimum moderation level for the articles you read.
I never see the sewage and trolls unless someone with a logged-in account responds to them.
Re: (Score:2)
There's a big difference between comments that get modded down to -1 because moderators dislike their point of view, and those that get there because they are deliberately obscene, pointless, and time-wasting.
I think it would make sense to give posters with good karma more leeway. I have often had quite serious, well-documented comments modded down to -1 and labelled Troll although I had no intention of trolling (I never do) and the comments seemed reasonable.
What I quite often do is to post comments that a
Re: (Score:2)
...I have often had quite serious, well-documented comments modded down to -1 and labelled Troll although I had no intention of trolling (I never do) and the comments seemed reasonable....
You can distinguish between personal and objective thinking and feeling. You^H^H^HWe don't belong here. :)
Greatest possible level of respect in that statement, absolutely.
Re: (Score:1)
It's an illustration of what happens when people's neurons are not working correctly.
Re: For goodness sake... (Score:1)
Or hire some fucking mods. This isn't a quaint nerd project anymore. The site has plenty of ad revenue. It'll be dead shortly if this doesn't get any better.
It's like a bad horror movie (Score:2)
It'll be dead shortly if this doesn't get any better.
We're trying, we're trying! It just won't die!
Re: (Score:2)
May I congratulate the parent? This is the first Slashdot topic I can remember - and I have been reading for many years - in which none of the first dozen or so comments were irrelevant, obscene or vicious.
Someone has moderated the parent Offtopic, which strictly speaking is correct. But from a meta point of view it hits the nail right on the head.
Consistent with Jerry Lettvin's work (Score:4, Informative)
Enhancement or suppression of individual neural signals is consistent with Jerry Lettvin's original work on retinal neurons from the 1960s. The physical layout of nerves *matters*, something most modern "hey, let's wire up human brains" schemes ignore.
Re: (Score:2)
100% correct. What AI nutters call "neural networks" is a complete joke. Total marketing hype meant to fool idiots that computers can think like a brain.
Ah. no, that isn't what "AI" means. (Score:4, Interesting)
AI does NOT mean "computers that think." Not at all. Not even remotely. If it meant that, I would agree with you that it doesn't exist. But it doesn't mean that.
The "A" in "AI" stands for "artificial". You know, as in "not real."
By way of comparison, "Leatherette" exists, but it is not real leather, and it is not supposed to be.
Similarly, "Artificial Intelligence" exists, but it is not real intelligence, and it is not supposed to be.
You seem to be thinking of something like "synthetic intelligence." That is pure science fiction.
"Artificial Intelligence" is just a loose collection of algorithms and software engineering techniques that have a common "gist." Nothing more. I hope that helps clear things up for you.
Re: (Score:1)
Apparently, so did the dictionary [merriam-webster.com]. Snip:
the capability of a machine to imitate intelligent human behavior
Notice the word "imitate". What does that mean? It means it isn't the real thing. Like how imitation crab meat is not real crab meat. Imitation intelligence is not real intelligence.
The meanings of these words are clear, and you've got them wrong. You and your ilk are ignorant.
Re: (Score:1)
More to the point though, as a developer exploring ML, my first thought was that we need to start rethinking neural nets. To your point, machine learning is not the only form of AI, but it's an important one right now. There is a lot of time and money being spent in that space, and any improvement to existing ML tools could stand to make the owner of those improvements a lot of money.
Re:Ah. no, that isn't what "AI" means. (Score:5, Insightful)
I have always thought that the term "Artificial Intelligence" was originally coined to describe an intelligence (i.e., a conscious, thinking entity) created by artifice (i.e. artificially); this to be juxtaposed against the only other form of an intelligence known, what might be called a natural intelligence (i.e., one that has arisen through natural processes).
It seems clear to me that the term "Artificial Intelligence" has been suborned by the marketers and a sort of "grade inflation" effect whereby anything that achieves results that even dimly appear to be similar to what is achievable by a natural intelligence is now termed "AI" even though it is clearly decomposable into nothing more than a collection of algorithms and software engineering techniques.
What I wonder is what the "secret sauce" will be once a true "artificial intelligence" (a conscious, thinking entity created through artifice) is brought about, assuming that is possible at all. I *do* think it is possible, because I think that our minds are conscious and thinking, and I believe that our minds operate solely on the machinery of our physical brains. So perhaps this new way of looking at how neurons actually operate might further our efforts in this direction.
Re: (Score:1)
You have always thought wrong: your interpretation of the word was not the original meaning. "Artificial Intelligence" is a very old computer science term that has covered a class of algorithms cooked up many decades ago (minimaxers, various types of tree searches, heuristics, and so on). Their common thread is that they make the machine "mimic" intelligent behavior without actually being intelligent.
So we get something that looks kind of like thinking, without any actual thinking going on. He
Re: (Score:1)
He/she did not redefine it; you (and many with you) do. AI has a pretty generous definition in computer science, see here for more info: https://en.wikipedia.org/wiki/... [wikipedia.org]
The term has been abused by clueless idiots
Agreed! So please stop.
Silly (Score:2)
You've seen the advances in image recognition, language translation, and other classification areas, and you *could* just go read up on how these systems work, yet you don't.
I assume you're a classic programmer and haven't yet done a course on DNNs, and so don't know how to code or train them. That's OK, but you're one tool short of a full toolbox, and filling that hole with whiney shit is not helpful.
They're very simple in principle:
Layers of neurons
Each neuron a non-linear equation connecting inputs to outputs
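That description really does boil down to a few lines of NumPy. A generic dense layer sketch follows, not any particular framework's implementation; the names and sizes are made up for the example.

```python
import numpy as np

def dense_layer(x, weights, bias):
    """One layer: weighted sum of inputs plus bias, passed through a nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)  # ReLU activation

# A two-layer "network" is just the composition of such layers.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                                # one sample, 4 features
h = dense_layer(x, rng.normal(size=(4, 8)), np.zeros(8))   # hidden layer
y = dense_layer(h, rng.normal(size=(8, 2)), np.zeros(2))   # output layer
```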
Except they don't simulate neurons AT ALL. (Score:1)
All they do is this ridiculously oversimplified vector-matrix multiplication and summation.
No simulation of the *spiking* signal protocol, the much more complex weighting (as per TFA), the chemical processes in the synaptic gap, or the "broadcast" effects that neurotransmitters can have!
It's a wonder it halfway works at all!
That's also why they need ten times the "neurons"! (Or more like: matrix size.)
Relu, Maxout, ..... activation functions (Score:1)
You missed the activation functions.
A system as you describe would be reducible to just linear equations. The first attempts at DNNs used awful functions like sigmoid to add non-linearity; now stupidly simple things like RELU and MAXOUT have been found to work much better. MAXOUT does spike.
And of course any weighting is possible, including locally weighting A closer to B than C. So discoveries like this won't enhance DNNs; these cases are already possible in the AI-neuron.
Culling (often used in image cla
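For reference, the activation functions mentioned above look like this in plain Python. The maxout sketch here takes the max over a group of pre-computed linear pieces, a simplification of the full maxout unit.

```python
import math

def sigmoid(x):
    """Smooth squashing nonlinearity used in early networks."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: zero below 0, identity above."""
    return max(0.0, x)

def maxout(pre_activations):
    """Maxout: the unit outputs the max over several linear pieces."""
    return max(pre_activations)

relu(-2.0)                 # 0.0
relu(3.0)                  # 3.0
maxout([0.2, -1.0, 1.5])   # 1.5
```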
Re:Silly (Score:5, Insightful)
I want to learn neural networks; I've ham-fisted some tensorflow examples into some production workflows, but that's as far as I've got.
My worry is that these trained models are really just the worst kind of code debt ever invented. Every company has that one monolithic program that the whole business is based on which no one really understands completely so they prefer to hack around its deficiencies at the edges, making the code debt worse and worse. However with a whole lot of time and effort you could completely understand every part of it and fix it properly. You might not, you might just reimplement it now that you know the business logic as a whole instead of piecemeal over 10 years, but that's a possibility you have anyway.
A trained neural net is in my mind the equivalent of that horrible code base no one wants to dive into, except now you couldn't even if you wanted to. Changing how the NN does stuff likely involves an extremely expensive retraining so you'll try and hack at the edges a bit to try and get it more useful. And ultimately, you can't reimplement the whole logic again because you never understood the rules that make the NN work then, now or in the future.
At first people were using NNs to do stuff that classical programming has failed to make any meaningful headway in, so some result is better than no result. Except I'm starting to see people dive onto this grenade, replacing well understood database searching + statistical weighting, for instance, with NNs trained on client behaviour. At some point someone is going to ask for a tiny change that's going to wreck their entire solution. Spread that across the industry at large and we might be looking back on all this tensor acceleration hardware and wishing we had spent the resources on something we could use for the future.
Re: (Score:1)
It is a good point and I think there are things that can be done to reduce the damage without abandoning NN completely.
For example, one could try to avoid having a single large NN that solves the entire problem and instead define interfaces that allow you to use multiple smaller NNs.
That way only part of the solution needs to be retrained or converted to a non-NN solution to adjust features.
Re: (Score:3)
As others have pointed out, the technology currently called "neural networks" should more appropriately be called "artificial neural networks" (perhaps more accurately "computational neural networks"), as they are not precise, accurate models of how natural neural networks operate; they are grossly simplified approximations of how one mechanism in a natural neural network is thought to work. In other words, current "neural network" models are just inspired by actual natural neural networks, and are most lik
Re:Consistent with Jerry Lettvin's work (Score:4, Interesting)
100% correct. What AI nutters call "neural networks" is a complete joke. Total marketing hype meant to fool idiots that computers can think like a brain.
On the other hand, such a basic change in our model of neural net behavior might suddenly cause our neural emulations to work really well for a change.
Re: (Score:3)
Our neural emulations do work already. They are very good at identifying the patterns they are trained for. They just lack what we call "common sense".
I doubt this new tweak will suddenly give them common sense. I expect one of three outcomes:
1. It will make artificial neurons more efficient: faster pattern recognition with fewer "parts" and/or less training.
2. The added comp
Re:Consistent with Jerry Lettvin's work (Score:5, Insightful)
100% correct. What AI nutters call "neural networks" is a complete joke.
Not at all. A modern artificial neural net is not designed to mimic a human brain any more than a 747 is designed to mimic a hummingbird.
The basic principles may be the same, but the goal is to build something that works rather than something that is faithful to the biological inspiration.
Re: It works REALLY badly though. (Score:1)
We are building ornithopters (Score:1)
This is a little disingenuous, because a 747 actually flies. None of our artificial neural nets actually think in a brainy sense.
A better analogy would be that we are busy trying to get ornithopters to work. They never will because the fundamental principle is wrong.
Re: (Score:2)
Your logic is impeccable. Reality intrudes. I have in my hand right now (yes, I went and found it in my archives) the manual and software disk (5.25" floppy) for "BrainMaker v1.0", Neural Network Simulator; tagline "Simulated Biological Intelligence". Sold by California Scientific Software; manual is dated August, 1988.
The inflation of our extremely limited understanding of how to create intelligence has been around a long time. The reason I have this software package is because the CIO of the company
Re: (Score:3)
Actually, on the contrary: it somewhat brings the biological model closer to what some AI research on the matter has been suggesting for a while -- that the simple "perceptron" type model implied in the older trigger-threshold approach doesn't account for the complexities of how a neural net can be configured.
This newer research suggests neurons are capable of
Re: (Score:2)
What AI nutters call "neural networks" is a complete joke.
Meanwhile, I have a camera app that can see better than me in the dark on a cheap sensor that was developed before the seminal paper was published.
Reality thinks your aspersions are a complete joke.
analog sum + level trigger? (Score:3)
So it's a weighted summation followed by a level-threshold detector? I thought that's how neurons worked to begin with?
Re: (Score:1)
Does somebody want to propose a new computational model of this new view of neurons? I tried, but my drafts were growing too large. I'm not smart enough to factor it nicely. It required a look-up table of "input port" distances or positions, for one. If each neuron requires the equivalent of a spreadsheet, our emulations are hosed. We'll need a shortcut, perhaps a slightly lossy one (a good-enough approximation).
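One minimal way to sketch the kind of model the parent describes, with a per-input position table instead of a full spreadsheet: let inputs pool only when they land close together on the membrane. This is purely hypothetical illustration; the function name, the pooling rule, and all parameters are invented.

```python
# Hypothetical position-aware neuron sketch: each input carries a position,
# and inputs only sum toward threshold when they arrive close together.
def position_aware_fire(inputs, threshold=1.0, pooling_radius=0.5):
    """inputs: list of (position, strength) pairs. Fire if the inputs inside
    any window of pooling_radius around an input sum past threshold."""
    for center, _ in inputs:
        pooled = sum(s for p, s in inputs if abs(p - center) <= pooling_radius)
        if pooled >= threshold:
            return True
    return False

# Two weak inputs on opposite sides of the cell do not pool...
position_aware_fire([(-1.0, 0.6), (1.0, 0.6)])   # False
# ...but the same two strengths arriving close together do.
position_aware_fire([(0.1, 0.6), (0.3, 0.6)])    # True
```

Even this toy needs a distance check per input pair, which hints at why the parent's drafts kept growing; a real emulation would need a cheaper approximation.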
Re: (Score:1)
Or grow tentacles and break out of the lab
Re: (Score:3, Interesting)
As I interpret it, the spatial relationship of inputs may matter. Two close-together inputs may have a different effect or trigger threshold than two far-apart ones even though their input signal strength is the same.
Re:analog sum + level trigger? (Score:4, Interesting)
It's not just a weighted summation, the equation is more complicated (not surprising, given it's a physical system).
Re:analog sum + level trigger? (Score:4, Informative)
1 - Dendrites and axons perform localized signaling/spiking, and signals flow both forward and backward (this was detected relatively recently because the dendrites and signals were previously too fine to record)
2 - Astrocytes (there are 10x as many astrocytes as neurons) are not just support cells; they are part of the computational process. Each one encompasses and modulates between 1 million and 2 million synapses between neurons (the synapse has a new name, the tripartite synapse), responding to and releasing all neurotransmitters as well as their own gliotransmitters.
3 - Calcium waves (localized and global) within astrocytes and neurons act as an intracell signaling mechanism that is being studied to determine how they are involved in computation.
4 - DNA and RNA activities are important for learning and their role in computation is being studied.
Relatively unchallenged? (Score:3)
Re: (Score:1)
Any specific examples of such AI research?
But, but, but, it was "settled science" (Score:1, Funny)
Now we have neuron deniers! Where will it all end?!
Analog vs. Digital (Score:2)
Re: (Score:3)
This research is fascinating and will lead to a better understanding of how the brain works. However, this research still suggests that brain signals and processing are very much analog and not discrete like a computer.
Analog and digital are merely abstractions; in nature, abstractions are fundamentally unified and bleed into one another, as you stated already. The "digital" ones and zeros are really thresholds of analog electrical voltage determining whether the computer reads the signal as a distinct bit of information.
Re: (Score:1)
Our digital emulations don't have to be perfect, just good enough. There has to be a degree of "wiggle room" for errors or imperfections in the brain; otherwise, a mild blow to the head or a cup of coffee would cause the equivalent of a BSOD. And we know from war and crime injuries that the brain is in general surprisingly tolerant of damage. After all, we evolved in a hostile world.
Thus, the errors caused by using emulated approximations only have to be below the level of the brain's natural error tolerances.
Is a blog post about a 2 year old study news? (Score:4, Informative)
Re: (Score:2)
"It's important not to throw out a century of wisdom on the topic on the back of a single study."
"Wisdom", as opposed to scientific research -- because there hasn't been a century's worth of research....
Reminds me of a quote (Score:2)
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- Mark Twain/various [quoteinvestigator.com]
Not really a surprise (Score:2)
Not a surprise at all, if you are a computer scientist and happen to work with neural nets. We have been using tensors in neural nets for some time now, but have still been unable to really say exactly why they work so well. If "direction matters," well, that is what tensors actually do. They model force direction (e.g. the Einstein Field Equations), or something even more abstract, like the biased direction of a simulated neuron in a larger neural net.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3, Informative)
Which so far seems definitely to be the case. Those self-driving cars for the most part are a *lot* safer than the human-driven version.
Re:We didn't even really know how a neuron works.. (Score:5, Funny)
Which so far seems definitely to be the case. Those self-driving cars for the most part are a *lot* safer than the human-driven version.
Obligatory Dilbert: "I found the root cause of our problems. It's people. They're buggy."
Re: (Score:2)
Those self driving cars for the most part are a *lot* safer than the human driven version.
They're complementary for now. They do better at handling fast emergency situations, but still make perceptual errors on edge cases, on average every 9.5 miles (per Lex Fridman).
Tesla has their Dojo training their neural net at a supposedly exponential rate, and their crowdsourced data plus A/B shadow-testing is doing a remarkable job at learning to drive. Give it a year.
We're about to face a situation where the AI
Re: (Score:2, Insightful)
Government control is probably the biggest impediment we face to achieving better safety.
ROTFLMAO
Without government control, seatbelts wouldn't be a thing, airbags wouldn't be a thing and cars would still explode into flames when rear-ended.
Government control is the only reason a manufacturer can be held accountable for defective products.
Re: (Score:2)
Okay, I'll play along: if you can demonstrate that, somehow, orders of magnitude ahead of everyone else in the field of (so-called) 'AI', Elon Musk's scientists and software engineers have managed to develop a machine that can reason like a human brain can, then we'll have a different conversation. Because that'
Re: (Score:2)
> Again: Anything that has to come to a complete stop in the middle of a trip because it can't handle whatever is in front of it, and has to 'phone home' for a remote human operator to bail it out, is not adequate, it's incompetent.
Haven't you ever pulled over to check your directions? Or stopped upon seeing an accident, to assess if it was safe to proceed or called for emergency services? Or contacted your hosts to alert them you were running late, to see if you should stop for the night?
Re: (Score:3)
Just because some of you still indulge in Magical Thinking and believe there's a 'person' inside that box running that car, does not mean that is true or accurate in any way. We DO NOT understand how a living brain works, we will NOT understand it for quite some time to come, and the so-called 'AI' the