AI Software Science Technology

AI Shortcuts Speed Up Science Simulations By Billions of Times (sciencemag.org) 74

sciencehabit shares a report from Science Magazine: Modeling immensely complex natural phenomena such as how subatomic particles interact or how atmospheric haze affects climate can take thousands of hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence can produce accurate emulators that can accelerate simulations across all of science by billions of times. The new system automatically creates emulators that work better and faster than those designed and trained by hand. And they could be used to improve the models they mimic and help scientists make the most of their time at experimental facilities.
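
To make the mechanism concrete, here is a minimal sketch of the emulator idea in Python (the toy "simulation" and all names are illustrative stand-ins, not the DENSE system described in the paper): run the expensive code on a sample of inputs, fit a cheap model to the input/output pairs, then query the model instead.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Stand-in for an expensive simulation mapping parameters to an observable.
    # (A real code would take hours per call; this is purely illustrative.)
    def expensive_simulation(params):
        x, y = params
        return np.sin(3 * x) * np.exp(-y) + 0.1 * x * y

    rng = np.random.default_rng(0)

    # 1. Run the real simulator on a modest sample of parameter settings.
    X_train = rng.uniform(-1, 1, size=(2000, 2))
    y_train = np.array([expensive_simulation(p) for p in X_train])

    # 2. Fit an emulator to the recorded input/output pairs.
    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                            random_state=0)
    emulator.fit(X_train, y_train)

    # 3. Query the emulator instead of the simulator: microseconds, not hours.
    X_new = rng.uniform(-1, 1, size=(5, 2))
    print(emulator.predict(X_new))
    print([expensive_simulation(p) for p in X_new])  # ground truth to compare

The claimed novelty is in automating the search for a good emulator, not in the surrogate idea itself.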
  • There are a number of serious problems with, for example, current climate models.

    One of those problems is that models are only as good as the assumptions they make.

    In the case of climate models, we don't have the computing power to make the model cells small enough. And even if we did, we do not have sufficient data to populate those cells. We actually don't have enough to adequately populate the current, too-large cells.

    So estimates of factors we are not able to actually model must be introduced to stand in for them.
    • None of that is relevant. This technique is about getting the same results, faster, without having to manually write simulation algorithms at multiple levels of detail. If you can guarantee the same result as a high-detail run with a high probability of doing so more quickly, you win.

      Of course any sort of model, theory, equation, or statement in general can be wrong. That's independent of speeding up the computation.

      • by rtb61 ( 674572 )

        They are using AI to emulate intuition. Of course you do not just intuit a result; you test its validity and see if you have made a successful guess based upon the feel of the data. It can be successful, but it can also miss the best result by a long shot, simply because your intuitions were based on a series of dead-end guesses. There is continuous improvement, but it is not the best solution overall, just the best solution along the path led by intuitive guesses. You kind of should do both.

        • They are using AI to emulate intuition.

          Well ... no, not really.

          in-tu-i-tion, noun, the ability to understand something immediately, without the need for conscious reasoning.

          It's debatable whether AI "understands" something, but even allowing that, it certainly doesn't do so "immediately". It needs to be trained to recognize outputs, given various inputs.

          The kinds of detailed simulations described in TFA use known laws of physics and boundary/initial conditions in order to calculate future states of a system. The challenge is that such simulations are enormously expensive to compute.

      • It's perfectly relevant. What they are using the "AI" for, which you would know if you understood how models work, is for parameterizing the models to fit HISTORICAL data.

        When they get a good match, they then turn around and try to use the model for projecting the future.

        None of this is different from what I said earlier. The only difference here is that the "AI" is doing the parameterization, rather than humans.

        The problem remains: there are so many parameters that there are already many, many more ways to fit the historical data than there are models that will project the future correctly.
    • by cusco ( 717999 ) <brian.bixby@[ ]il.com ['gma' in gap]> on Wednesday February 12, 2020 @11:02PM (#59722284)

      historically they haven't done very well at all.

      Where? They work fine modeling climate on Earth, Venus, Mars, and Titan, where have you tried them?

      You're confusing "weather" with "climate" again. The cells don't have to be small to simulate the climate of the Northern Hemisphere or the South Atlantic, only if you are trying to predict weather.

      • Do you have a graph for Titan handy? [drroyspencer.com]

        I want to establish a baseline of what "fine" means.
        • by cusco ( 717999 )

          "I’m seeing a lot of wrangling over the recent (15+ year) pause in global average warming"...

          Holy carp, I didn't even have to read a full sentence to see how utterly moronic that source is. Damn. While not a record it's pretty close. If only it were painful to be that stupid . . .

          • Can you link your Ph.D. and NASA credentials, so I can evaluate your relative qualifications?
      • No confusion here, at all.

        Have you seen the famous comparison between an ensemble of more than 100 CMIP5 models and observed temperature?

        They do not match at all. In fact the observed temperature is lower than the low point of the uncertainty range for the whole ensemble.

        That's just one example but there are many more. Here's another example. [wiley.com]

        No, the models in fact have not matched reality very well at all.
    • They are just doing model fitting and parameter estimation where they guess at the model's terms. Compared to that, an exact computation will be slower. But just doing some sort of generalized model parameter estimation should be even faster.

      • Which was my point.

        They'll continue to make the same errors, only faster.

        I don't mean that entirely derogatorily, though. When your errors are made faster, maybe you can approach the truth faster too.
    • by Brett Buck ( 811747 ) on Thursday February 13, 2020 @01:42AM (#59722650)

      So, it's wrong, but wrong *much faster*

      • I've long held that you should fail as fast, and as early as possible!

      • So, it's wrong, but wrong *much faster*

        I would say imprecise. Wrong implies it's worthless. I imagine this technique would be useful for an algorithmic search: once an area of interest is identified, the more precise method can be used. No sense wasting CPU time on points that are obviously of no interest.
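
        A minimal sketch of that two-stage search, assuming a hypothetical cheap emulator and slow simulator (both stand-ins): screen the whole range cheaply, then spend real CPU time only on the promising region.

            import numpy as np

            rng = np.random.default_rng(0)

            def cheap_emulator(p):      # fast but imprecise (hypothetical stand-in)
                return np.sin(3 * p) + rng.normal(0, 0.05)

            def precise_simulation(p):  # slow but accurate (hypothetical stand-in)
                return np.sin(3 * p)

            # Stage 1: cheap screen over the full parameter range.
            candidates = np.linspace(0, np.pi, 10_000)
            scores = np.array([cheap_emulator(p) for p in candidates])

            # Stage 2: run the expensive code only on the 20 best candidates.
            top = candidates[np.argsort(scores)[-20:]]
            best = max(top, key=precise_simulation)
            print(best, precise_simulation(best))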

      • Yes, that was basically what I was saying. :o)
    • by gweihir ( 88907 )

      The claim is basically nonsense, just another AI fanatic fanboi article. Sure, many simulations are badly coded and there may be something to gain there, but everything else they do is _necessary_, and running this through some artificial stupidity "enhancer" just introduces hard-to-evaluate new failure modes into the simulation.

    • " And in fact, historically they haven't done very well at all. "

      So 14 of 17 models dating back to the 1970s produce results that are 'virtually indistinguishable from reality' when fed with accurate data. Sounds good to me.

      Link to paper [wiley.com]
      • So 14 of 17 models dating back to the 1970s produce results that are 'virtually indistinguishable from reality' when fed with accurate data.

        Neither the abstract [wiley.com] nor the full paper [sci-hub.se] says anything even remotely like that.

        Here's what it does say:

        When mismatches between projected and observed forcings are taken into account, a better performance is seen. Using the implied TCR metric, 14 of the 17 model projections were consistent with observations; of the three that were not, Mi70 and H88 scenario C showed higher implied TCR than observations, while RS71 showed lower implied TCR (Schneider 1975; see supplementary text S2 for a discussion of the anomalously low-ECS model used in RS71).

        In other words: after the fact, when they re-adjusted the models to account for the observed forcing, as opposed to the original forcings used in the model projections, the models closely matched observed results.

        Duh.

        That's called "hindcasting", not forecasting. Anybody can "predict" the past. It's easy.

        • It doesn't say that at all. "Forcings" in this context just means "factors" that drive the weather: https://www.accuweather.com/en... [accuweather.com]

          You seem to be implying that this means "forced to fit" but that's not the context of the word "forcings" here. What they are saying is that the models made predictions of the factors that drive the weather, and that these were close to the "observed" factors that drive the weather. Those factors are called "forcings" because they drive the weather in certain ways. It's not about forcing anything to fit.

          • I know what forcings are, no I did not mean anything in the sense of "forced to fit", and yes it does say that.

            Read the quote again:

            When mismatches between projected and observed forcings are taken into account...

            They took data after the fact, and adjusted their model accordingly to achieve a better fit.

            This is the same technique used in hindcasting. Just take known factors (forcings) and plug them in, along with whatever parameters your model needs.

            My original point remains: this paper does not say what the person who originally linked to it said it does.

            And yes, I did describe it accurately.

  • From the article:

    When they were turbocharged with specialized graphical processing chips, they were between about 100,000 and 2 billion times faster than their simulations.

    So the boost is not just due to AI but also to adapting the original algorithm to run in parallel on GPUs. It would have been more honest to quote just the speed-up due to machine learning and not include the effect of using more powerful hardware.

    Details of how this technique works for subatomic particle simulations are also completely lacking, which is a shame since we use Monte-Carlo simulations where each event is separate and the behaviour of the particle is determined randomly.

    • by Anonymous Coward on Wednesday February 12, 2020 @10:25PM (#59722194)

      I _highly_ doubt such massive speed-ups are possible for most typical use cases

      Scientists at Stanford, Lawrence Livermore, etc come up with promising new technique, guy on slashdot says it won't work. You guys are comical.

      • Scientists at Stanford, Lawrence Livermore, etc come up with promising new technique, guy on slashdot says it won't work.

        I did not say that it would not work. I merely questioned whether the claims for such a huge speed-up would be possible for particle physics Monte-Carlo simulations. Appealing to authority is also a logical fallacy and irrelevant...but if you insist on playing that game, this "random guy on slashdot" happens to be a professor of particle physics, though it would be wrong to let that persuade you of anything.

        Simply put, I don't see how you are going to get a billion times faster MC simulation with AI algorithms alone.

        • Sure it can be faster. If you have a particle subject to 1 million random rolls that each give it a different trajectory, you can train an AI to predict the outcome: it gives you a single probability distribution that it applies in a single step, rather than doing 1 million calculations, many of which may just cancel each other's effects out anyway.
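
          As a toy illustration of that collapse (the standard central-limit shortcut, not necessarily what the paper does): a million small random kicks sum to something you can draw in one step from a single distribution.

              import numpy as np

              rng = np.random.default_rng(0)
              n, sigma = 1_000_000, 0.01

              # Slow way: apply a million random deflections one at a time.
              slow = sum(rng.normal(0, sigma) for _ in range(n))

              # Fast way: the sum of n i.i.d. kicks of width sigma is a single
              # draw from a normal distribution of width sigma * sqrt(n).
              fast = rng.normal(0, sigma * np.sqrt(n))

              print(slow, fast)  # two samples from the same distribution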

    • by Anonymous Coward

      AI speedup. A 110 million speedup factor is nothing to be sneezed at, but if you want something 2 billion times faster, use a GPU.

      While the simulations presented typically run in minutes to days, the DENSE emulators can process multiple sets of input parameters in milliseconds to a few seconds with one CPU core, or even faster when using a Titan X GPU card. For the GCM simulation which takes about 1150 CPU-hours to run, the emulator speedup is a factor of 110 million on a like-for-like basis, and over 2 billion with a GPU card.

      (Source: https://arxiv.org/pdf/2001.080... [arxiv.org])
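
      The quoted numbers are self-consistent, as a quick back-of-the-envelope check (using only the figures above) shows:

          gcm_cpu_seconds = 1150 * 3600        # 1150 CPU-hours, in seconds
          speedup = 110e6                      # like-for-like CPU speedup
          print(gcm_cpu_seconds / speedup)     # ~0.038 s per emulator call

      That lands squarely in the "milliseconds to a few seconds" range the paper reports.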

    • by HiThere ( 15173 )

      If they're using AI they're depending on a heuristic search of state space, which often works quite well, but can also fail without warning. (N.B.: This is also true of evolved intelligence.)

      The reason to avoid "intelligent" solutions when you don't really understand the problem is that they are non-deterministic and not intrinsically validatable. A good example is "Goldbach's conjecture", though the "four color problem" would also work. It's easy to make a decent guess at the answer, and you're likely to be right, but that's not a proof.

      • If they're using AI they're depending on a heuristic search of state space, which often works quite well, but can also fail without warning.

        That would be my guess too, which is why I do not see how it will speed up simulations of subatomic particle interactions. Simulations like GEANT have lots of potential particle-matter interactions to consider, and which one of these happens and where produces an insanely large parameter space. I have a very hard time believing it can be populated with a sufficient density of points for an AI algorithm to accurately search it, especially since there are resonances which can produce large changes in outcomes.

  • The common sorts of simulations are subject to a variety of numerical errors, sometimes obvious, sometimes very tricky to track down (for example, if you are modeling fewer than the real number of particles in a simulation, you need to be very careful to get the correct statistical variations, since some interactions depend on that).

    With AI in the simulation there is the potential for much more subtle and difficult to detect errors. It will probably take some effort to figure out how to validate this type of simulation.
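
    For example, relative counting fluctuations scale as 1/sqrt(N), so a run with fewer macro-particles than nature overstates the noise unless it is corrected for (toy numbers, assuming simple Poisson statistics):

        import numpy as np

        rng = np.random.default_rng(0)

        for n_particles in (1_000, 1_000_000):
            counts = rng.poisson(n_particles, size=10_000)  # repeated "measurements"
            rel_fluct = counts.std() / counts.mean()
            print(n_particles, rel_fluct)  # ~0.032 vs ~0.001, i.e. 1/sqrt(N)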

    • by Anonymous Coward

      With AI in the simulation there is the potential for much more subtle and difficult to detect errors. It will probably take some effort to figure out how to validate this type of simulation.

      Ah, no. If you'd bothered to read the article, you would have seen this:

      “This is a big deal,” says Donald Lucas, who runs climate simulations at Lawrence Livermore National Laboratory and was not involved in the work. He says the new system automatically creates emulators that work better and faster than those his team designs and trains, usually by hand.

      • Presumably when they work. I guess it's possible it can't fall into the equivalent of numerical problems, but it's not clear to me why that would be the case.

  • What a load of crockshit.

    A typical computer simulation might calculate, at each time step, how physical forces affect atoms, clouds, galaxies—whatever is being modeled. Emulators, based on a form of AI called machine learning, skip the laborious reproduction of nature. Fed with the inputs and outputs of the full simulation, emulators look for patterns and learn to guess what the simulation would do with new inputs.

    So, the idea is to substitute running a full simulation with "AI" doing guesswork based on "training" for parts of it? How is this "new"? You do this kind of shit (optimizing simulations) all the time. E.g. in your Monte Carlo, the cornerstone of every revolutionary piece of PhD work, you ordinarily calculate only a few of the events in the part of the solution space that you don't care about and re-insert them back with weights at the end. It is a "simple trick that can speed your simulation up" that everybody already uses.
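
    That standard weighting trick is importance sampling. A minimal sketch (the rare-event target here is hypothetical): sample densely where it matters and re-insert the events with weights so the estimate stays unbiased.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000

        # Goal: P(X > 4) for X ~ N(0, 1), a rare event (about 3.2e-5).
        # Naive MC wastes nearly every sample on the region we don't care about.
        naive = (rng.normal(0, 1, N) > 4).mean()

        # Importance sampling: draw from N(4, 1), which lives in the tail, and
        # re-insert each event with weight p(x)/q(x) (normalizations cancel).
        x = rng.normal(4, 1, N)
        weights = np.exp(-x**2 / 2) / np.exp(-(x - 4)**2 / 2)
        importance = np.where(x > 4, weights, 0.0).mean()

        print(naive, importance)  # the weighted estimate is far less noisy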

    • What a load of crockshit.

      Another /. "expert" poo-poos the technique that has already been shown to work [arxiv.org]. Priceless.

      • by xwin ( 848234 )
        Not only that, but DeepMind already proved that you can substitute computations with AI by beating humans at Go with AlphaGo. AI is also computation, just done differently. I am sure there are a lot of areas where AI can be beneficial; we just have not discovered them all yet.
      • You must be really stupid if you think you'll impress anyone but code monkey idiots like yourself with a post on arxiv.org. The only thing you demonstrate is your ignorance.
    • I think you missed the point of this.

      Monte Carlo simulations do indeed sample the phase-space of a problem, and compensate for the sampling with appropriate weights. But the simulations can take a very long time to run, especially if you demand high-resolution answers with dense sampling of the phase-space.

      The AI does not "substitute" for the simulations. It tries to find patterns that are revealed by the simulations, with the objective of predicting outputs from inputs more rapidly, once the training is complete.

      • I think you do not understand the meaning of the words you use. "Predicting outputs" is exactly the same as "substituting results" of a simulation with something the AI generated. "Based on previous simulations" means exactly what I describe.

        But hey, I call it what it is, and I am not buzzword-compliant.

  • When a neural net can find patterns that are 99.9% predictive of a full simulation while delivering many orders of magnitude speed-up in computations, that's impressive.

  • by Ambassador Kosh ( 18352 ) on Thursday February 13, 2020 @04:53AM (#59722892)

    Using neural networks for classification is the new cool thing to do but that is not what they were first used for. Neural networks are good piecewise polynomial approximations to an unknown function.

    Most science models are a complex mix of algebraic, differential and partial differential equations that are very time-consuming to solve. However, from math there MUST exist a polynomial approximation to that same set of equations that can return the same results to arbitrary accuracy on some bounded interval. This is the ENTIRE basis for using neural networks to approximate complex simulators.

    I have been doing the same thing for my work. Sometimes we need to run a simulator millions of times and it can take a minute to run each time. However, a good network needs only about 10K samples to predict the rest, and it can predict them accurately on a normal GPU at about 100K predictions per second, which takes a set of simulations that would take a month down to less than a day in total.

    I am still working on a better network design for my problem but already the results are quite promising and almost good enough. I take the same inputs I would give to the simulator and predict the time series the simulator would create.
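
    A minimal sketch of that workflow, with a toy "simulator" standing in for the real one (all names and numbers here are placeholders, not his actual setup): train a multi-output network on ~10K runs, then predict whole time series from new inputs.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Stand-in for a slow simulator: two parameters -> a 50-step time series.
        t = np.linspace(0, 1, 50)
        def simulator(params):
            amp, decay = params
            return amp * np.exp(-decay * t) * np.cos(10 * t)

        rng = np.random.default_rng(0)

        # ~10K runs of the real simulator: the expensive part, done once.
        X = rng.uniform([0.5, 0.1], [2.0, 3.0], size=(10_000, 2))
        Y = np.array([simulator(p) for p in X])

        # Multi-output regression: one network predicts the whole series.
        net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                           random_state=0)
        net.fit(X, Y)

        # New parameter sets now cost microseconds instead of a minute each.
        print(net.predict([[1.0, 1.5]]))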

  • I have lost track of what's going on in science at the present speed. Where is the cutoff point of human comprehension if we speed the whole thing up? Additionally, just because speed is increased billions of times, does it mean results are?
    • by cusco ( 717999 )

      Knowledge is increasing exponentially; we lost the ability to know "everything" at least a century ago. That's why there really aren't any more polymaths like Thomas Jefferson or Leonardo da Vinci any longer; there's no time to do more than scrape the surface of a large number of topics. Even worse, the number of sciences has exploded as well, with new additions like medical nanotechnology, superconductivity, robotics and astrobiology. About the closest we can come to a polymath today is someone who is expert in one field and conversant in a handful of others.

  • Genius!
    I'm sure those simulations will be *totally* trustworthy!
    Not plagued by "field of sheep [slashdot.org]" illusions at all!

    • These networks are being used the way they were originally designed and in line with how the math actually works. At their core, networks are universal function approximators. They can be used for classification, but you do get strange results sometimes. All they are being used for here is to create a piecewise polynomial approximation to a high-dimensional function, and they are quite good at that.

      The sheep problem is ENTIRELY different.
