Teaching Physics To Neural Networks Removes 'Chaos Blindness' (phys.org)
An anonymous reader quotes a report from Phys.Org: Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
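That guess-and-adjust loop is, at its core, gradient descent on a loss. Here is a minimal sketch of one such training loop, assuming PyTorch and a hypothetical `loader` yielding (photo, is_dog) batches; it is illustrative only, not anyone's production code:

```python
# Minimal sketch of the guess-and-adjust loop described above (assumes PyTorch).
# `loader` is a hypothetical DataLoader of (photo_tensor, is_dog_label) batches.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(),
                      nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
                      nn.Linear(128, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()           # distance between guess and reality

for photos, is_dog in loader:
    guess = model(photos).squeeze(1)       # make a guess for each photo
    loss = loss_fn(guess, is_dog.float())  # see how far off it is
    optimizer.zero_grad()
    loss.backward()                        # trace the error back to each weight
    optimizer.step()                       # adjust weights and biases
```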
The drawback to this neural network training is something called "chaos blindness" -- an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State's Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to "see" chaos within a system and adapt accordingly. Simply put, the Hamiltonian embodies the complete information about a dynamic physical system -- the total amount of all the energies present, kinetic and potential. Picture a swinging pendulum, moving back and forth in space over time. Now look at a snapshot of that pendulum. The snapshot cannot tell you where that pendulum is in its arc or where it is going next. Conventional neural networks operate from a snapshot of the pendulum. Neural networks familiar with the Hamiltonian flow understand the entirety of the pendulum's movement -- where it is, where it will or could be, and the energies involved in its movement.
In a proof-of-concept project, the NAIL team incorporated Hamiltonian structure into neural networks, then applied them to a known model of stellar and molecular dynamics called the Henon-Heiles model. The Hamiltonian neural network accurately predicted the dynamics of the system, even as it moved between order and chaos. "The Hamiltonian is really the 'special sauce' that gives neural networks the ability to learn order and chaos," says John Lindner, visiting researcher at NAIL, professor of physics at The College of Wooster and corresponding author of a paper describing the work. "With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems." The work appears in the journal Physical Review E.
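For the curious, "incorporating Hamiltonian structure" can be sketched in a few lines: instead of regressing the next state directly, the network learns a single scalar H(q, p) and derives the time derivatives from Hamilton's equations, dq/dt = dH/dp and dp/dt = -dH/dq, via automatic differentiation. The sketch below assumes PyTorch and is not the NAIL team's actual code; for the Henon-Heiles test, the training trajectories would come from H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.

```python
# Minimal Hamiltonian neural network sketch (assumes PyTorch; illustrative
# only, not the paper's code). The net outputs a scalar energy H(q, p);
# Hamilton's equations then define the predicted dynamics.
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=2, hidden=200):
        super().__init__()
        self.energy = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def time_derivatives(self, x):
        # x holds (q, p); autograd supplies dH/dq and dH/dp.
        x = x.requires_grad_(True)
        H = self.energy(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH.chunk(2, dim=-1)
        return torch.cat([dHdp, -dHdq], dim=-1)  # (dq/dt, dp/dt)

# Training (sketch): regress time_derivatives(x) onto measured (dq/dt, dp/dt)
# along trajectories, then integrate the learned field to forecast motion.
```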
Re: (Score:1)
what about newtonian physics (Score:2)
Can you feed it a bunch of videos of a ball being dropped or launched in various trajectories at various speeds .. then show it a two frame video clip and see if can predict where the ball will be 20 frames later.
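For the ballistic part, at least, the prediction described above needs only two positions and constant gravity. A back-of-the-envelope sketch in plain Python, with frame rate and units as assumed values:

```python
# Estimate velocity from two consecutive frames, then extrapolate 20 frames
# ahead under constant gravity. Frame rate and pixel-to-metre scale assumed.
FPS = 30.0           # assumed frame rate
DT = 1.0 / FPS
G = 9.81             # m/s^2, downward

def predict(p0, p1, frames_ahead=20):
    """p0, p1: (x, y) positions in metres from two consecutive frames."""
    vx = (p1[0] - p0[0]) / DT
    vy = (p1[1] - p0[1]) / DT
    t = frames_ahead * DT
    x = p1[0] + vx * t
    y = p1[1] + vy * t - 0.5 * G * t * t   # y measured upward
    return (x, y)

print(predict((0.0, 2.0), (0.033, 1.995)))
```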
Re: (Score:2)
Re: (Score:2)
Re: Buzzwords (Score:2)
In the cloud? No...we ran it on somebody else's computer.
Re: (Score:3)
Physics could help in Natural Language Processing (Score:2)
Recently there was some news about Google's latest monster natural language processing neural network trained on a multigigabyte text corpus. It was pointed out that the associations one can learn this way are shallow and eventually things can drift into nonsense.
It would help, as in this article, to have at least a basic physics simulator as part of such an AI program, which would give meaning to words like "a ball is on the floor" and "it is raining outside".
Each time the user says something to the ma
Re: (Score:3)
Re: (Score:2)
Cyc is a valuable project. Yet it is still about how words are logically connected to each other, as I see it. That can be enough for some things, but not for all. Adding in physics brings a new dimension, and some statements about the physical world can be understood only after figuring out the interplay of the objects in that statement according to the laws of physics. The next step is, of course, embodied cognition, but even a basic simulator can help.
And of course Cyc is hand-crafted. Translating statem
Re: (Score:2)
"This kind of vector composition also lets us answer “King – Man + Woman = ?” question and arrive at the result “Queen” ! All of which is truly remarkable when you think that all of this knowledge simply comes from looking at lots of word in context (as we’ll see soon) with no other information provided about their semantics." - https://blog.acolyer.org/2016/... [acolyer.org]
Re: (Score:1)
Re: (Score:2)
As you say, these neural networks don't have the knowledge to really understand the text they are reading. They don't know what a cow is.
I'm pretty sure neither do physicists. They keep asking me if the cow is spherical.
Re:Physics could help in Natural Language Processi (Score:5, Insightful)
This is called the semantics problem: a machine able to assemble perfect sentences, reply perfectly and eloquently to questions, have casual conversations. But does it know what it really means? The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.
Well here's the thing. Do we know what things are? I mean *really* know? We see a bunch of sensory data and go "Oh, that's a table". But it's only a table because humans decided it's a table. In reality it's just a bunch of quantum fields coalescing into subatomic particles spinning around each other in progressively larger formations. That's what it *actually* is.
Meaning is an illusion. It's a *really useful* illusion, and without it we'd be lying on the floor frothing at the mouth, dying. Genuinely useful. But still an illusion. It's not something that actually exists. But those illusions definitely point to real things. (Kant's work on metaphysics kind of settled this one, which had been a raging debate since the early Greeks, by introducing the idea that some facts exist *despite* reality, for instance the laws of mathematics and logic. Things are real. But it all goes through the filter of human interpretation, so in effect we 'construct' the world around us.)
So the question is: if the human brain machine, and ultimately it really is just a machine, can assemble these illusions, why can't a computer? And more to the point, if it could, how would we recognize it? Is it possible that these neural networks, by using the same basic principles animal brains use, are actually in fact generating meaning, and that meaning is merely an emergent phenomenon of organized information?
My feeling is we'll never truly know, because we can't even prove other people experience meanings. But concluding "no" about an unknowable isn't as rational as one might conclude. We'll never know in the "I saw it with my own eyes" sense what's behind the Schwarzschild radius of a black hole, but that doesn't mean we can't apply logic and physics to make a reasonable guess and add the addendum "We can't falsify this, so an educated guess is as good as we'll get". Simply declaring "We don't know, so it's not true" is not useful.
Re: (Score:2)
The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.
Really? The unreachable holy grail of proof was found? The Chinese Room problem illustrates a point, much like most thought experiments. Thought experiments can't actually be performed; how could they prove anything? In fact, even experiments that can actually be performed don't prove anything, they merely increase inductive likelihood (as you allude to by bringing up falsification).
"We don't know if Santa Clause exists." So it's not useful. (Or at least, it is not science which is as true for Pansper
Re: (Score:3)
This is called the semantics problem: a machine able to assemble perfect sentences, reply perfectly and eloquently to questions, have casual conversations. But does it know what it really means? The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.
The problem with the "Chinese Room" argument is that it separates mind from process by design, and uses that separation to argue that the mind can't arise from the process.
In that thought-experiment, you have a person inside a room. That person gets Chinese characters that they cannot understand on slips of paper, they follow an algorithm designed to produce perfect responses, then they pass the result back out. The person on the outside then imagines there is a person who understands Chinese inside. The
Re: (Score:2)
In reality, there must be an actual consciousness inside that does understand and properly respond to the language. A mind is created
So, your argument against the claim that syntax, by itself, is insufficient for semantics is to claim that magic happens? Oh, okay.
Remember that when you're alone. The wall behind you has some pattern of molecule movement that is isomorphic with the WordStar program, also with a girl AI. She's judging you.
Re: (Score:2)
So, your argument against the claim that syntax, by itself, is insufficient for semantics is to claim that magic happens?
Not magic, no. In fact, it would be magic to argue otherwise: your brain is following an algorithm: individual neurons fire when very specific conditions are met. To imply that the physical processes in your brain are not sufficient to create your mind requires that there's some added magic to it. Clearly, if you could reproduce the exact same process that happens in your mind, you would reproduce your mind, because there's nothing else unless you invoke the supernatural.
What I'm implying is that there's in
Re: (Score:2)
So the question is, if the human brain machine, and ultimately it really is just a machine, can assemble these illusions, why can't a computer.
You've missed the whole point of the Chinese room. The entire point is to illustrate that syntax, by itself, is insufficient for semantics.
That fact is key to Searle's argument that, whatever it is that brains do to cause minds, it's not computation alone. He does not posit anything supernatural. The only claim is that purely computational approaches to so-called strong AI are insufficient. (Don't forget that the brain as a computer is just an analogy. We have a habit of comparing our minds to the most a
Re: (Score:2)
It's a very strong argument,
Like hell it is.
Suppose you are talking to someone. Does your tongue understand English? Do your lips?
Of course not, but the words do provide intelligent communication using the tongue and lips as a communication medium, just like the guy in the Chinese room does. The Chinese room guy is mechanically transmitting the intelligence of the people who wrote the answers to the people who read them.
which is why his critics have all focused on the illustration and not the actual argument.
Because the illustration is bullshit and demonstrates that the argument is bullshit.
This is what Searle proved: "if I
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
It will be fine until someone gives it access to everyone's name and phone numbers... oh, wait. We're screwed.
Re: (Score:2)
Re: (Score:2)
Are you talking about that "Roko's basilisk" nonsense from the LessWrong cult?
I'll never figure out how a bunch of atheists managed to invent a god complete with an imperative to worship and support under the threat of eternal damnation while not only convincing themselves it's real, but that it's also influencing us from the future to coerce us into bringing about its nightmarish reign over humanity.
"an advanced type of AI" (Score:2, Interesting)
So because you abused the term AI to mean shitty matrix-multiplication-based universal functions, neural networks are "advanced" now?
I used to think they were the most basic form of any intelligence thinkable, until you came around with your matrices and proved that you could do it even shittier.
Listen, fraudsters, we're still miles away from anything resembling AI!
Call us when you simulate actual spiking neurons with EM field direction sensitivity and neurotransmitter broadcast effects and everything! Call u
Total Nonsense (Score:5, Insightful)
Neural networks are an advanced type of AI loosely based on the way that our brains work.
This is the lie we've been telling the general public for decades to make them think this type of AI is something more than just applied statistics. It really needs to die.
Maybe we should change the name to something less deceptive? We could kill the terms AI and ML while we're at it. Funding might take a hit, but it'll be worth it to shake off the stench of deception.
Re: (Score:2, Insightful)
That's complete and total nonsense. I'm not claiming that humans are special, only that neural networks have absolutely nothing to do with how human brains work. This is not a controversial claim. That's just reality.
See, those of us with an actual academic background in ML know that there is no biological basis for NNs any more than there's a biological basis for k-means clustering. To say that the brain is nothing but "a bunch of Markov chains" is ridiculous. There is absolutely no rational or scienti
Re:Total Nonsense (Score:4, Insightful)
Marketability? No, they named them 'neural networks' because they based the manner of data processing on the then-current understanding of how primitive brains work. They examined how fruit fly, worm, and slug brains processed sensory information and modeled the process on what they were able to discern from those simple networks of neurons. IIRC it worked better than they expected. No, it doesn't model a human brain, or even the brain of a trout, but we're still decades from knowing in depth how those actually work. The tools are improving exponentially (I'm looking forward to seeing what Neuralink comes up with), but it's still going to be a while.
Claiming that it's all "marketing" because there aren't neurotransmitters in an electronic neural network is akin to saying Boston Dynamics' Spot doesn't actually walk because it doesn't have toes.
Re: (Score:2)
This is the danger. See, you're still making inappropriate comparisons to brains. You can get identical results to any NN with a completely different structure that looks nothing like a graph. Remember that a NN is nothing more than a nonlinear transform.
I'm saying it's marketing because it gives people a false impression about the nature of the technology. If we stopped drawing graphs and instead called them something accurate but boring like "nonlinear transforms" they'd lose a lot of their magic in the
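The parent's "nonlinear transform" claim fits in a few lines of NumPy; the weights below are random placeholders standing in for trained values:

```python
# A one-hidden-layer network is just matrix multiplies with a nonlinearity
# in between. Random weights here stand in for trained ones.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)

def network(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2  # the whole "neural" network

print(network(np.ones(4)))
```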
Re:Total Nonsense (Score:5, Insightful)
See, those of us with an actual academic background in ML know that there is no biological basis for NNs
As someone with an academic background in ML, I am baffled that you aren't aware that NNs were explicitly designed as a form of bio-mimicry, and some (e.g. spiking networks) are still explicitly so today. They aren't as complex as a brain, but they are inspired by biology.
Re:Total Nonsense (Score:5, Insightful)
I am baffled that you aren't aware that NNs were explicitly designed as a form of bio-mimicry
I second that. Neural networks were absolutely designed to mimic functions that were observed in biological brains. Saying they were not is like saying airplanes were not inspired by birds because their wings don't flap and they don't grow feathers or digest worms. Sure, there are a great many things the brain does that neural networks don't, and we don't even fully understand the brain yet, but it still remains the target of many of our efforts. The biggest and highest-impact conferences in neural networks also invite neuroscientists so they can cross-mingle their expertise, and numerous advances in neural networks have been made based upon analogies with the brain. Dropout and convolutional layers are the first two examples that pop into my mind.
Many people think that only spiking neural nets are actually patterned after behavior observed in the brain, but did you know that the non-spiking variant that Hopfield originally presented was reported to approximate the rate at which spikes fire? When the activations of classical neural nets are interpreted as firing rates, they really do become strong analogs for the behavior of the brain. Now perhaps you are not satisfied with the extent to which we have been able to reproduce brain-like activity. That's great! Neither are we! But the brain still remains the original objective and a significant source of inspiration for many efforts in advancing research in neural networks.
Re: (Score:2)
This doesn't preclude us from saying (quite correctly) that NNs were modeled after what people could observe in primitive neural systems.
To me, it seems that most people on both sides of this topic simply suck at expressing themselves.
Re: (Score:2)
I'm fully aware of the history.
I also know that there is no biological basis for NNs. To claim otherwise is pure nonsense. Brains don't work like NNs. NNs don't work like brains. The historic comparison is what is in question here.
I'm also not convinced that there was ever a true biological inspiration. There is every reason to believe that NNs are so named to carry on the 'electronic brain' analogy that captured the public fascination with computers. That is, I don't believe that NNs were inspired by bra
Re: (Score:2)
I'm also not convinced that there was ever a true biological inspiration.
There are many early papers that explicitly reference such inspiration.
Re: (Score:2)
Re: (Score:2)
Humans being special is a lie your brain tells itself to make itself feel better.
Depending on what "special" means, this statement can be either true or false. Define "special", then let's talk.
The ability to be self-aware (and recursively so) is unique in the animal world, and thus makes us special (making this statement of yours false). OTOH, we could define "special" as "built or evolved for a purpose", and then your statement would be true.
Long story short: define your language precisely so that you stop spouting nonsense as if it were axiomatic.
AI already passed the Turing test, and this was the same response. "It's not good enough, it's not 'real' intelligence, humans are speeeeecial, we need to keep creating new tests AI can't pass to prove it!"
See above.
The sooner you just accept you yourself are just a bunch of Markov chains run by meat electricity the sooner you can stop worrying so damned much about it.
See, more un
Re: (Score:3)
that's unique in the animal world,
Since we can't adequately communicate with animals yet, even the dogs that we've lived with for 35,000 years, we have no idea whether that's at all the case. Corvids and cetaceans both have considerably more complex brains and abilities than were imagined even a decade ago. Then there are the hive-minds that we *really* don't even begin to understand at this point. Declaring that we are the only self-aware creatures at this point is premature at best.
Re: (Score:2)
Oh, are you one of the lucky Boston Dynamics Spot early adopters? I'm jealous. My wife won't let me pay $75,000 for a Cybertruck, much less for a robotic dog.
Re: (Score:2)
Humans being special is a lie your brain tells itself to make itself feel better.
That's true.
But, what other entity has a brain that will tell itself that it is special?
Re: (Score:2)
In other words, they have gone from 0.000001% of the way to real intelligence to maybe 0.000002%.
It's still many decades away, if it can be achieved at all.
"I know nothing" (Score:5, Insightful)
Neural networks are *not* "an advanced AI". They are 'artificial', of course, but no way are they 'intelligent'. The mere fact that as humans we need a certain level of intelligence to perform a task does not make a machine intelligent when it performs that same task.
How can we even say a machine 'understands' something when we do not even understand what it takes for ourselves to 'understand' something?
There is not even a glimmer of proof that the computing machine is an adequate model for an advanced brain, human or otherwise.
Re: (Score:2)
Absolutely no one is claiming that they're "an adequate model for an advanced brain, human or otherwise". They're a model of a fruit fly or worm brain, maybe as advanced as a slug now. It will probably be most of a decade before they get as smart as a trout. After that AI models are likely to take over their own evolution, likely in directions that haven't even occurred to us.
Re: (Score:1)
I heard that prediction many years ago. Nothing much changed. And we're back at neural networks again. So I expect many, many decades.
Re: (Score:2)
Re: (Score:1)
Indeed. It is narrow in the sense you describe, but wide in its requirements for versatility.
In my view the cells you describe either adapt by calibrating themselves (which is a function within reach of their current DNA) or they evolve through random changes in their DNA until they either perish or cope with the new situation. I'd hesitate to call it 'learning'.
AI hype is getting absurd (Score:4, Informative)
So, every time there's something different hardcoded in a neural net implementation, that's some fantastic new advance?
Ah, no.
Other uses (Score:2)
It seems this could be a nice tool for artists and animators. Put in a swing set, be able to animate it immediately. Make trees sway, etc... would save a lot of time.
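A swing is just a pendulum, so the frames really do come almost for free once the physics is in. A sketch in plain Python, with swing length and frame rate as assumed values (semi-implicit Euler keeps the motion stable over long animations):

```python
# Generate per-frame bob positions for a swinging pendulum, ready to key
# onto an animated rig. Length and frame rate are assumed values.
import math

G, LENGTH, DT = 9.81, 2.0, 1.0 / 30.0   # gravity, swing length, frame step
theta, omega = math.radians(40.0), 0.0  # start 40 degrees off vertical

for frame in range(90):                 # three seconds at 30 fps
    omega += -(G / LENGTH) * math.sin(theta) * DT  # semi-implicit Euler
    theta += omega * DT
    x, y = LENGTH * math.sin(theta), -LENGTH * math.cos(theta)
    print(f"frame {frame}: bob at ({x:.2f}, {y:.2f})")
```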