AI Science

Teaching Physics To Neural Networks Removes 'Chaos Blindness' (phys.org)

An anonymous reader quotes a report from Phys.Org: Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
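To make that weight-and-bias adjustment concrete, below is a minimal sketch of such a training loop for a single artificial neuron, in Python with NumPy. Everything in it (the features, the labels, the learning rate) is invented for illustration and is not from the article or the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy "is this a dog?" classifier: each photo is reduced to a feature
# vector, and one neuron adjusts its weights and bias by gradient descent
# to shrink the gap between its guesses and the desired labels.
X = rng.normal(size=(200, 8))        # 200 fake photos, 8 features each
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)   # made-up "dog / not dog" labels

w = np.zeros(8)                      # weights, adjusted during training
b = 0.0                              # bias
lr = 0.1                             # learning rate

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # guess: probability of "dog"
    error = p - y                            # how far off each guess is
    w -= lr * (X.T @ error) / len(y)         # nudge weights toward reality
    b -= lr * error.mean()                   # nudge bias toward reality

print("training accuracy:", ((p > 0.5) == y).mean())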

The drawback to this neural network training is something called "chaos blindness" -- an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State's Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to "see" chaos within a system and adapt accordingly. Simply put, the Hamiltonian embodies the complete information about a dynamic physical system -- the total amount of all the energies present, kinetic and potential. Picture a swinging pendulum, moving back and forth in space over time. Now look at a snapshot of that pendulum. The snapshot cannot tell you where that pendulum is in its arc or where it is going next. Conventional neural networks operate from a snapshot of the pendulum. Neural networks familiar with the Hamiltonian flow understand the entirety of the pendulum's movement -- where it is, where it will or could be, and the energies involved in its movement.
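The snapshot analogy is easy to make concrete. The sketch below is an illustration of the pendulum point, not code from the paper: it writes out the Hamiltonian of an ideal pendulum and shows that two states sharing the same snapshot (the angle q) carry very different energies, and so have very different futures, once the momentum p is included.

import numpy as np

m, g, l = 1.0, 9.81, 1.0   # mass, gravity, length (illustrative values)

def hamiltonian(q, p):
    kinetic = p**2 / (2 * m * l**2)           # energy of motion
    potential = m * g * l * (1 - np.cos(q))   # energy of position
    return kinetic + potential                # total energy of the pendulum

# Same snapshot (angle), very different dynamics:
print(hamiltonian(0.5, 0.0))   # momentarily at rest at the top of its arc
print(hamiltonian(0.5, 2.0))   # same angle, but swinging hard through it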

In a proof-of-concept project, the NAIL team incorporated Hamiltonian structure into neural networks, then applied them to a known model of stellar and molecular dynamics called the Hénon-Heiles model. The Hamiltonian neural network accurately predicted the dynamics of the system, even as it moved between order and chaos. "The Hamiltonian is really the 'special sauce' that gives neural networks the ability to learn order and chaos," says John Lindner, visiting researcher at NAIL, professor of physics at The College of Wooster and corresponding author of a paper describing the work. "With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems."
The work appears in the journal Physical Review E.
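For reference, the Hénon-Heiles system is a textbook two-degree-of-freedom Hamiltonian whose motion shifts between order and chaos as the energy rises. The Python sketch below is an illustration under stated assumptions, not the paper's code: it writes out that Hamiltonian and integrates Hamilton's equations (dq/dt = ∂H/∂p, dp/dt = -∂H/∂q) with a basic leapfrog step. A Hamiltonian neural network would swap the hand-written H for a learned one and derive the same equations of motion from it.

import numpy as np

def henon_heiles(q, p):
    # H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3
    x, y = q
    return 0.5 * (p @ p) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3

def force(q):
    # dp/dt = -dH/dq for the Henon-Heiles potential
    x, y = q
    return -np.array([x + 2 * x * y, y + x**2 - y**2])

q = np.array([0.1, 0.2])    # positions (x, y)
p = np.array([0.0, 0.3])    # momenta (px, py)
dt = 0.01
e0 = henon_heiles(q, p)     # initial energy, conserved by the true flow

for _ in range(10_000):     # leapfrog: tracks the energy well over long runs
    p = p + 0.5 * dt * force(q)
    q = q + dt * p          # dq/dt = dH/dp = p (unit masses)
    p = p + 0.5 * dt * force(q)

print("energy drift:", henon_heiles(q, p) - e0)   # should stay near zero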
Comments Filter:
  • Can you feed it a bunch of videos of a ball being dropped or launched in various trajectories at various speeds, then show it a two-frame video clip and see if it can predict where the ball will be 20 frames later?
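    With real physics baked in, two frames are indeed enough for a simple trajectory, which is arguably the point of the article. A hedged sketch under constant gravity, with every number invented:

    g = 9.81     # m/s^2, gravitational acceleration (downward)
    dt = 1 / 30  # seconds per frame, assuming 30 fps video

    y0, y1 = 2.00, 1.98   # ball heights in frames 0 and 1 (made up)
    v = (y1 - y0) / dt    # vertical velocity recovered from the two frames

    t = 20 * dt           # 20 frames later
    y20 = y1 + v * t - 0.5 * g * t**2
    print(f"predicted height 20 frames on: {y20:.3f} m")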

  • Recently there was some news about Google's latest monster natural language processing neural network trained on a multigigabyte text corpus. It was pointed out that the associations one can learn this way are shallow and eventually things can drift into nonsense.

    It would help, as in this article, to have at least a basic physics simulator as part of such an AI program, which would give meaning to words like "a ball is on the floor" and "it is raining outside".

    Each time the user says something to the machine…

    • As you say, these neural networks don't have the knowledge to really understand the text they are reading. They don't know what a cow is. Giving them this kind of knowledge is the point of projects like Cyc [wikipedia.org]. Combining that kind of thing with neural networks in a hybrid approach could be powerful.
      • Cyc is a valuable project. Yet it is still about how words are logically connected to each other, as I see it. That can be enough for some things, but not for all. Adding in physics brings a new dimension, and some statements about the physical world can be understood only after figuring out the interplay of the objects in that statement according to the laws of physics. The next step is, of course, embodied cognition, but even a basic simulator can help.

        And of course Cyc is hand-crafted. Translating statements…

        • by Layzej ( 1976930 )
          AI can learn how words are logically connected and build word vectors:

          "This kind of vector composition also lets us answer “King – Man + Woman = ?” question and arrive at the result “Queen” ! All of which is truly remarkable when you think that all of this knowledge simply comes from looking at lots of word in context (as we’ll see soon) with no other information provided about their semantics." - https://blog.acolyer.org/2016/... [acolyer.org]

          • Yes, that is a well-known model. The whole point is that it has limitations. If you tell it "a glass jar fell from my hands but luckily hit the carpet" it won't know how to finish that sentence.
      • by dgatwood ( 11270 )

        As you say, these neural networks don't have the knowledge to really understand the text they are reading. They don't know what a cow is.

        I'm pretty sure neither do physicists. They keep asking me if the cow is spherical.

    • by sg_oneill ( 159032 ) on Wednesday June 24, 2020 @03:05AM (#60220890)

      This is called the Semantics problem: a machine may be able to assemble perfect sentences, reply perfectly and eloquently to questions, and have casual conversations. But does it know what it really means? The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.

      Well, here's the thing. Do we know what things are? I mean *really* know? We see a bunch of sensory data and go "Oh, that's a table." But it's only a table because humans decided it's a table. In reality it's just a bunch of quantum fields coalescing into subatomic particles spinning around each other in progressively larger formations. That's what it *actually* is.

      Meaning is an illusion. It's a *really useful* illusion, and without it we'd be lying on the floor frothing at the mouth, dying. Genuinely useful. But still an illusion. It's not something that actually exists. But those illusions definitely point to real things. (Kant's work on metaphysics more or less settled this debate, which had raged since the early Greeks, by introducing the idea that some facts exist *despite* reality, for instance the laws of mathematics and logic. Things are real. But it all goes through the filter of human interpretation, so in effect we 'construct' the world around us.)

      So the question is: if the human brain, and ultimately it really is just a machine, can assemble these illusions, why can't a computer? And more to the point, if it could, how would we recognize it? Is it possible that these neural networks, by using the same basic principles animal brains use, are in fact actually generating meaning, and that meaning is merely an emergent phenomenon of organized information?

      My feeling is we'll never truly know, because we can't even prove other people experience meanings. But concluding "no" about an unknowable isn't as rational as one might think. We'll never know in the "I saw it with my own eyes" sense what's behind the Schwarzschild radius of a black hole, but that doesn't mean we can't apply logic and physics to make a reasonable guess and add the addendum "We can't falsify this, so an educated guess is as good as we'll get." Simply declaring "We don't know, so it's not true" is not useful.

      • by ebyrob ( 165903 )

        The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.

        Really? The unreachable holy grail of proof was found? The Chinese Room problem illustrates a point, much like most thought experiments. Thought experiments can't actually be performed, so how could they prove anything? In fact, even experiments that can actually be performed don't prove anything; they merely increase inductive likelihood (as you allude to by bringing up falsification).

        "We don't know if Santa Clause exists." So it's not useful. (Or at least, it is not science which is as true for Pansper

      • This is called the Semantics problem: a machine may be able to assemble perfect sentences, reply perfectly and eloquently to questions, and have casual conversations. But does it know what it really means? The "Chinese room" problem in philosophy was designed to prove that this is an impossibility.

        The problem with the "Chinese Room" argument is that it separates mind from process by design, and uses that separation to argue that the mind can't arise from the process.

        In that thought experiment, you have a person inside a room. That person gets Chinese characters that they cannot understand on slips of paper, they follow an algorithm designed to produce perfect responses, then they pass the result back out. The person on the outside then imagines there is a person who understands Chinese inside. The…

        • by narcc ( 412956 )

          In reality, there must be an actual consciousness inside that does understand and properly respond to the language. A mind is created…

          So, your argument against the claim that syntax, by itself, is insufficient for semantics is to claim that magic happens? Oh, okay.

          Remember that when you're alone. The wall behind you has some pattern of molecule movement that is isomorphic with the WordStar program, and also with a girl AI. She's judging you.

          • So, your argument against the claim that syntax, by itself, is insufficient for semantics is to claim that magic happens?

            Not magic, no. In fact, it would be magic to argue otherwise: your brain is following an algorithm; individual neurons fire when very specific conditions are met. To imply that the physical processes in your brain are not sufficient to create your mind requires that there's some added magic to it. Clearly, if you could reproduce the exact same process that happens in your mind, you would reproduce your mind, because there's nothing else, unless you invoke the supernatural.

            What I'm implying is that there's in…

      • by narcc ( 412956 )

        So the question is: if the human brain, and ultimately it really is just a machine, can assemble these illusions, why can't a computer?

        You've missed the whole point of the Chinese room. The entire point is to illustrate that syntax, by itself, is insufficient for semantics.

        That fact is key to Searle's argument that, whatever it is that brains do to cause minds, it's not computation alone. He does not posit anything supernatural. The only claim is that purely computational approaches to so-called strong AI are insufficient. (Don't forget that the brain as a computer is just an analogy. We have a habit of comparing our minds to the most advanced technology of the day…

        • It's a very strong argument,

          Like hell it is. Suppose you are talking to someone. Does your tongue understand English? Do your lips? Of course not, but the words do provide intelligent communication using the tongue and lips as a communication medium, just like the guy in the Chinese room does. The Chinese room guy is mechanically transmitting the intelligence of the people who wrote the answers to the people who read them.

          which is why his critics have all focused on the illustration and not the actual argument.

          Because the illustration is bullshit and demonstrates that the argument is bullshit.
          This is what Searle proved: "if I…

      • You are pushing things here to a logical extreme. To understand the world reasonably well a machine would need to know a lot of concepts, their relations, and some meaning, with physics and other simulations being used for some of that. Philosophy can wait.
  • So because you abused the term AI to mean shitty matrix-multiplication-based universal functions, neural networks are "advanced" now?

    I used to think they were the most basic form of any intelligence thinkable, until you came around with your matrices and proved that you could do it even shittier.

    Listen, fraudsters, we're still miles away from anything resembling AI!
    Call us when you simulate actual spiking neurons with EM field direction sensitivity and neurotransmitter broadcast effects and everything! Call u…

  • Total Nonsense (Score:5, Insightful)

    by narcc ( 412956 ) on Wednesday June 24, 2020 @01:54AM (#60220780) Journal

    Neural networks are an advanced type of AI loosely based on the way that our brains work.

    This is the lie we've been telling the general public for decades to make them think this type of AI is something more than just applied statistics. It really needs to die.

    Maybe we should change the name to something less deceptive? We could kill the terms AI and ML while we're at it. Funding might take a hit, but it'll be worth it to shake off the stench of deception.

    • Well, they are more advanced than they were 20 years ago.
      In other words, they have gone from 0.000001% of the way to real intelligence to maybe 0.000002%.
      It's still many decades away, if it can be achieved at all.
  • "I know nothing" (Score:5, Insightful)

    by multatuli ( 740516 ) on Wednesday June 24, 2020 @04:21AM (#60220976)

    Neural networks are *not* "an advanced AI". They are 'artificial', of course, but in no way are they 'intelligent'. The mere fact that we as humans need a certain level of intelligence to perform a task does not make a machine intelligent when it performs that same task.
    How can we even say a machine 'understands' something when we do not even understand what it takes for ourselves to 'understand' something?
    There is not even a glimpse of proof the computing machine is an adequate model for an advanced brain, human or otherwise.

    • by cusco ( 717999 )

      Absolutely no one is claiming that they're "an adequate model for an advanced brain, human or otherwise". They're a model of a fruit fly or worm brain, maybe as advanced as a slug by now. It will probably be most of a decade before they get as smart as a trout. After that, AI models are likely to take over their own evolution, probably in directions that haven't even occurred to us.

      • I heard that prediction many years ago. Nothing much changed. And we're back at neural networks again. So I expect many, many decades.

    • by noodler ( 724788 )
      I think your definition of intelligence is waaay too narrow. Human-like intelligence is not the only example of intelligence. Even some cells have the capacity to learn and adapt to their environment, which is a very basic definition of intelligence. And that is done through DNA, so basically a Turing-machine-like construction. The rest of your post makes sense, though, if you specify that by 'intelligence' you're actually referring to the special case of 'intelligence as done by human brains'.
      • Indeed. It is narrow in the sense you describe, but wide in its requirements for versatility.
        In my view the cells you describe either adapt by calibrating themselves (which is a function within reach of their current DNA) or they evolve through random changes in their DNA until they either perish or cope with the new situation. I'd hesitate to call it 'learning'.

  • by moxrespawn ( 6714000 ) on Wednesday June 24, 2020 @04:56AM (#60221016)

    So, every time something different is hardcoded into a neural net implementation, that's some fantastic new advance?

    Ah, no.

  • It seems this could be a nice tool for artists and animators. Put in a swing set and be able to animate it immediately. Make trees sway, etc. It would save a lot of time.
