
Facebook Has a Neural Network That Can Do Advanced Math (technologyreview.com)

Guillaume Lample and Francois Charton, at Facebook AI Research in Paris, say they have developed an algorithm that can calculate integrals and solve differential equations. MIT Technology Review reports: Neural networks have become hugely accomplished at pattern-recognition tasks such as face and object recognition, certain kinds of natural language processing, and even playing games like chess, Go, and Space Invaders. But despite much effort, nobody has been able to train them to do symbolic reasoning tasks such as those involved in mathematics. The best that neural networks have achieved is the addition and multiplication of whole numbers. For neural networks and humans alike, one of the difficulties with advanced mathematical expressions is the shorthand they rely on. For example, the expression x^3 is a shorthand way of writing x multiplied by x multiplied by x. In this example, "multiplication" is shorthand for repeated addition, which is itself shorthand for the total value of two quantities combined.

Enter Lample and Charton, who have come up with an elegant way to unpack mathematical shorthand into its fundamental units. They then teach a neural network to recognize the patterns of mathematical manipulation that are equivalent to integration and differentiation. Finally, they let the neural network loose on expressions it has never seen and compare the results with the answers derived by conventional solvers like Mathematica and Matlab. The first part of this process is to break down mathematical expressions into their component parts. Lample and Charton do this by representing expressions as tree-like structures. The leaves on these trees are numbers, constants, and variables like x; the internal nodes are operators like addition, multiplication, differentiate-with-respect-to, and so on. [...] Trees are equal when they are mathematically equivalent. For example, 2 + 3 = 5 = 12 - 7 = 1 x 5 are all equivalent; therefore their trees are equivalent too. These trees can also be written as sequences, taking each node consecutively. In this form, they are ripe for processing by a neural network approach called seq2seq.
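
To make the tree-and-sequence idea concrete, here is a minimal Python sketch of the representation described above (the class and function names are illustrative, not the paper's actual code):

```python
# Minimal sketch: represent 2*x + cos(x) as an expression tree and
# serialize it in prefix order, the kind of token sequence a seq2seq
# model consumes. Node and to_prefix are illustrative names.

class Node:
    def __init__(self, value, children=()):
        self.value = value        # operator, variable, or constant
        self.children = children  # empty tuple for leaves

def to_prefix(node):
    """Walk the tree root-first, emitting one token per node."""
    tokens = [node.value]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens

# 2*x + cos(x): '+' at the root, '*' and 'cos' as internal nodes,
# numbers and variables as leaves.
expr = Node('+', (
    Node('*', (Node('2'), Node('x'))),
    Node('cos', (Node('x'),)),
))

print(to_prefix(expr))  # ['+', '*', '2', 'x', 'cos', 'x']
```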

The next stage is the training process, and this requires a huge database of examples to learn from. Lample and Charton create this database by randomly assembling mathematical expressions from a library of binary operators such as addition, multiplication, and so on; unary operators such as cos, sin, and exp; and a set of variables, integers, and constants, such as [pi] and e. They also limit the number of internal nodes to keep the equations from becoming too big. [...] Finally, Lample and Charton put their neural network through its paces by feeding it 5,000 expressions it has never seen before and comparing the results it produces in 500 cases with those from commercially available solvers, such as Maple, Matlab, and Mathematica. The comparisons between these and the neural-network approach are revealing. "On all tasks, we observe that our model significantly outperforms Mathematica," say the researchers. "On function integration, our model obtains close to 100% accuracy, while Mathematica barely reaches 85%." And the Maple and Matlab packages perform less well than Mathematica on average.
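
The random-expression step is easy to picture with a toy generator. This sketch follows the recipe in the summary (binary operators, unary operators, a pool of leaves, and a cap on internal nodes); the paper's actual sampling procedure is more careful about drawing trees uniformly:

```python
# Toy version of the training-data recipe: randomly assemble expressions
# from binary operators, unary operators, and variables/constants,
# capping the number of internal nodes. Purely illustrative.
import random

BINARY = ['+', '-', '*', '/']
UNARY = ['cos', 'sin', 'exp']
LEAVES = ['x', 'pi', 'e', '1', '2', '3', '4', '5']

def random_expr(max_internal):
    """Build a random expression string with at most max_internal operators."""
    if max_internal == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    if random.random() < 0.5:
        return f"{random.choice(UNARY)}({random_expr(max_internal - 1)})"
    budget = max_internal - 1          # binary node uses one operator,
    left = random.randint(0, budget)   # split the rest between children
    lhs = random_expr(left)
    rhs = random_expr(budget - left)
    return f"({lhs} {random.choice(BINARY)} {rhs})"

print(random_expr(5))  # e.g. '(cos(x) + (3 * pi))'
```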
The paper, called "Deep Learning For Symbolic Mathematics," can be found on arXiv.
  • Honest question here: If an AI developed by thousands of people solves a hard math problem, who gets the credit?
    • The person who correctly expresses the proof of the solution.
    • by gweihir ( 88907 )

      We can worry about that when it happens. If ever. Certainly not anytime soon.

    • A human's math skills are developed with the help of dozens of teachers, each of whom had dozens of teachers. The math theories and symbology learned by all of those teachers were developed by scores of mathematicians and philosophers over many centuries. Who gets the credit?
    • The company that used public funds to create the AI and patented it.

    • "Honest question here: If an AI developed by thousands of people solves a hard math problem, who gets the credit?"

      Nobody, because nobody will understand it, ever.

      The AIs will sort that out amongst themselves.

  • Now can it do advanced sorting so the posts are always in chronological order? >_>
  • Quick!! (Score:4, Funny)

    by eclectro ( 227083 ) on Tuesday December 17, 2019 @09:02PM (#59530578)

    Give it civil rights so it can't be forced to do homework!

  • Logical manipulation is extremely ill-suited to neural computation, which is why human abilities are so puny compared to machines (e.g. evaluating millions of individual chess positions to pick a good one - or for that matter performing long division).

    But computers, being purely logical and precise, are extremely ill-suited to neural simulation, which is inherently parallel and approximate, which is why we never got very far with neural networks until computers got very powerful.

    So, simulating logic on top of a simulation of neural networks on top of an actual logic processor is really a wonderful, glorious waste of computation!

    • I believe you missed the part about it performing faster and more accurately than Mathematica.

      When solving symbolic math problems there is definitely a large element of pattern recognition. That's why you spend so much time learning derivation patterns.

      • by raymorris ( 2726007 ) on Tuesday December 17, 2019 @10:12PM (#59530726) Journal

        The description of breaking it down into a tree of simpler operations is very similar to what some other software does. Well, it's EXACTLY what the parser part of a compiler does.

        Once again, this is a hard problem, but one that Donald Knuth and others solved in the early 1970s. By 1975, the creation of parsers was understood well enough that Stephen Johnson created yacc, software that creates parser software. That was over 40 years ago. You tell yacc that multiplication, the * symbol, means repeated addition, that the exponent symbol means repeated multiplication, etc., and yacc creates a parser, a first-stage compiler to compile mathematical expressions.

        I can't imagine that fuzzy neural networks could possibly do better IF the expressions follow a known syntax. If you want to deal with random ways of writing things WITHOUT putting that in the spec, maybe. It seems more logical to just write them in a standard form, though.
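
(The parent's parser point is easy to demonstrate without yacc: Python's standard ast module turns an expression string into exactly the kind of operator tree the article describes. A minimal example, output abbreviated:)

```python
# Parse "x**3 + 2*x" into an operator tree with Python's built-in
# ast module; yacc/bison generate equivalent parsers from a grammar.
import ast

tree = ast.parse("x**3 + 2*x", mode="eval").body
print(ast.dump(tree))
# BinOp(left=BinOp(left=Name(id='x', ...), op=Pow(), right=Constant(value=3)),
#       op=Add(),
#       right=BinOp(left=Constant(value=2), op=Mult(), right=Name(id='x', ...)))
```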

        • by ceoyoyo ( 59147 )

          Expressing equations as trees is a very old technique. I assume the journalist writing about the story didn't know much about symbolic math solvers and assumed that was also new.

          • by gTsiros ( 205624 )

            Still, beating Mathematica in symbolic integration?

            Sounds fishy. I'd expect Mathematica to have something more advanced than just the Risch algorithm, and that one can do pretty much everything.

            • by ceoyoyo ( 59147 )

              Symbolic integration is *really* hard. There are lots of integrals that almost certainly have a solution but nobody knows what it is, even after hundreds of years of trying.

              Mathematica certainly uses a graph (tree) representation for equations, and operates on that graph. But there isn't an algorithm for general symbolic integration, so if your integral doesn't happen to break down into something Mathematica knows how to deal with, you're SOL. In the paper they give an example of an integral that Mathematica can't solve.

              • by gTsiros ( 205624 )

                "*really* hard" no it's not.

                "there isn't an algorithm for general symbolic integration"

                 Doesn't the Risch algorithm that I mentioned count?

                • by ceoyoyo ( 59147 )

                  The Risch algorithm is only applicable to antiderivatives where both sides are composed of elementary functions. It's not a general algorithm, although its domain is certainly quite large and useful. The Wikipedia page on it has an example of an integral that can be solved, and a very similar integral (one coefficient is increased by 1) that cannot.

                  Technically, the Risch algorithm is a semi-algorithm. There are decidability issues. Theoretically, that means the algorithm can fail to produce an answer.
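
(For the curious: SymPy ships a partial Risch implementation for the transcendental case, and it illustrates both outcomes discussed here. The exp(x**2) behavior below follows SymPy's documented examples; the first call is assumed to resolve as an ordinary elementary case:)

```python
# SymPy's partial Risch algorithm can prove some integrals
# non-elementary rather than just giving up on them.
from sympy import exp, symbols
from sympy.integrals.risch import NonElementaryIntegral, risch_integrate

x = symbols('x')

print(risch_integrate(x * exp(x**2), x))          # exp(x**2)/2 (elementary)

result = risch_integrate(exp(x**2), x)            # provably non-elementary
print(isinstance(result, NonElementaryIntegral))  # True
```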

      • Rubi is also more accurate than Mathematica. So?
      • Also...

        When solving symbolic math problems there is definitely a large element of pattern recognition

        Yeah, that's why we've had this for decades. [github.com]

    • So, simulating logic on top of a simulation of neural networks on top of an actual logic processor is really a wonderful, glorious waste of computation!

      I do not think creating the first software able to solve arbitrary problems in calculus with a high degree of accuracy is a waste of time. Maybe, in the future, someone will come up with an improved form of AI for this kind of problem. Until then, this is an advance that may turn out to be truly important.

      • This. I think this is the most impressive news I've heard all year. It's a late contender for coolest development in 2019.

      • I do not think creating the first software able to solve arbitrary problems in calculus with a high degree of accuracy is a waste of time.

        It certainly isn't. Let's hope that someone will actually write it one day. One can hope.

    • by znrt ( 2424692 )

      So, simulating logic on top of a simulation of neural networks on top of an actual logic processor is really a wonderful, glorious waste of computation!

      It works. What's your better suggestion until we can manufacture a neocortex?

    • by ceoyoyo ( 59147 )

      Solving (non-trivial) integrals isn't about logic. It's about intuition and making weird connections with things you've seen before. Exactly what neural networks do better than many other algorithms.

      Not sure why they mentioned derivatives. Taking derivatives is straightforward.
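
("Straightforward" in the sense that differentiation is mechanical, recursive rule application. A toy differentiator over (op, a, b) tuples makes the point; the names and the tiny expression language are illustrative only:)

```python
# Differentiation as pure rule application over a tiny expression
# language: numbers, variable names, ('+', a, b), and ('*', a, b).

def diff(e, var):
    """Differentiate expression e with respect to variable var."""
    if isinstance(e, (int, float)):
        return 0
    if isinstance(e, str):            # a variable
        return 1 if e == var else 0
    op, a, b = e
    if op == '+':                     # sum rule
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                     # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x*x + 3)
print(diff(('+', ('*', 'x', 'x'), 3), 'x'))
# ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0), i.e. 2x
```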

      • Logic absolutely is a part of it. Even when you're using intuition, logic still guides you, which is why a hybrid approach would most likely be very productive.
    • Except for the fact that existing "actual logic" processing of advanced mathematics fails, and is worse than this "glorious waste of computation". The story itself compares the results with "actual logic" programs such as Mathematica, Maple, and Matlab (the big three in advanced mathematics processing), of which even the best reaches only 85% accuracy on function integration. The neural-network approach improves on that, bringing accuracy close to 100%.
  • Your AI sucks.
    My 3rd grade son can do that.
    And another 1000 things your AI will never be able to do in the upcoming century.

  • Capable of doing the hypermathematics necessary to calculate Facebook tax returns.

  • I have a calculator that can do advanced math, the 5800P!
  • E.g. to guess how to throw something.
    That is not the hard part.

    Doesn't mean it can do it *precisely*.
    Because *that* is the impressive part.
