Advanced Version of Gemini With Deep Think Officially Achieves Gold-Medal Standard at the International Mathematical Olympiad (deepmind.google)

An anonymous reader shares a blog post: The International Mathematical Olympiad is the world's most prestigious competition for young mathematicians, and has been held annually since 1959. Each country taking part is represented by six elite, pre-university mathematicians who compete to solve six exceptionally difficult problems in algebra, combinatorics, geometry, and number theory. Medals are awarded to the top half of contestants, with approximately 8% receiving a prestigious gold medal.

Recently, the IMO has also become an aspirational challenge for AI systems as a test of their advanced mathematical problem-solving and reasoning capabilities. Last year, Google DeepMind's combined AlphaProof and AlphaGeometry 2 systems achieved the silver-medal standard, solving four out of the six problems and scoring 28 points. Making use of specialist formal languages, this breakthrough demonstrated that AI was beginning to approach elite human mathematical reasoning.

This year, we were amongst an inaugural cohort to have our model results officially graded and certified by IMO coordinators using the same criteria as for student solutions. Recognizing the significant accomplishments of this year's student-participants, we're now excited to share the news of Gemini's breakthrough performance. An advanced version of Gemini Deep Think solved five out of the six IMO problems perfectly, earning 35 total points, and achieving gold-medal level performance.


  • I have to wonder. Did "Gemini Deep Think" solve the problems or simply regurgitate the answer from the billions of sucked up webpages, math research papers, etc. used to train the model? Actual competitors don't have the complete history of https://math.stackexchange.com... [stackexchange.com] at their fingertips.

    • Re:AI Training (Score:5, Informative)

      by JoshuaZ ( 1134087 ) on Monday July 21, 2025 @03:57PM (#65535218) Homepage
      I'm a mathematician, so I may have some expertise here: humans spend years training for the IMO, so they are functionally sucking all of that up too. And the IMO problems themselves vary a lot. Even extremely bright people, including professional mathematicians, would have trouble with some IMO problems. Mere regurgitation is insufficient to solve them.
      • Thank you

      • by gweihir ( 88907 )

        But what about putting them into a Computer Algebra system? We have had these for decades now. In fact, I used one when I started my CS studies 35 years ago.

        The very point of such a competition is to have a human do it, not a machine.

          • The easy IMO problems, like some P1s or P2s, could sometimes be done by a computer algebra system, but that's the exception. And sure, the point of the competition is for humans to do it. This isn't about the IMO as a competition, but about the usefulness of the IMO as a natural metric of how effective these AI systems are at difficult reasoning problems.
          • by gweihir ( 88907 )

            but the usefulness of the IMO as essentially a natural metric of how effective these AI systems are at difficult reasoning problems.

            Since LLMs have zero reasoning capability (the math does not allow it), it is obviously a failure at this.

              • Does it worry you that there's a circularity in your own reasoning? You are taking for granted that LLMs cannot reason, and therefore deciding that any metric which shows they can reason must be flawed and so cannot count as evidence against your claim. Aside from the circularity, you are also confusing whether an LLM has "reasoning capability" with whether the system can succeed at problems which humans solve via difficult reasoning. This is essentially akin to insisting that an airplane must
              • Or that watches based on quartz crystals are just an idle curiosity because they aren't built using intricate gear assemblies.

              • No, they are not designed to reason. You're arguing from ignorance of the subject just using rhetorical techniques. You're mistaking technical questions with real answers for philosophical questions where you can just blow any horseshit you want out your ass and then use rhetoric to "argue" it.

                And your rhetoric is mostly red herrings and strawmen.

                • Instead of insults and general claims of ignorance, do you want to go and explain what "red herrings and strawmen" I've engaged in?

                  You're mistaking technical questions with real answers for philosophical questions where you can just blow any horseshit you want out your ass and then use rhetoric to "argue" it.

                  What technical questions do you think I'm mistaking for philosophical questions? I must confess also that I find it somewhat strange that you make a claim about strawmen and then say "No, they are not designed to reason" as if that somehow responds to what I wrote where I explicitly said that the question about whether an LLM AI can reason is a distinct question from whether they can solve problems humans solve via reasoning. So it seems very odd to tell me that they are not designed to reason. So what straw positions are there here other than yours?

                  • You seem to be unfamiliar with Searle's Chinese Room, among other things.
                    • I'm familiar with the Chinese room. Do you want to explain how it's relevant here? Since, again, we're explicitly talking about distinguishing whether the system is reasoning from whether it is solving problems which humans use reasoning to solve, it isn't relevant. That's aside from the serious problems with the Chinese room argument itself.
                    • The point of the Chinese room is that something can appear to be intelligent without actually being intelligent.

                      You can't do that. You have to actually look at what the machine is doing.
                    • So, there are two issues here. One is that one has to actually agree that the thought experiment shows that; not all philosophers do. But second, and more importantly, this is precisely why I made the point that whether or not these systems are reasoning is a separate claim from whether or not they are solving problems that humans would apply reasoning to. So the Chinese room thought experiment isn't relevant to the question at hand, which is how effective these systems are, not
                • by gweihir ( 88907 )

                  That nicely sums it up. Most of these people (like the one you responded to) have zero clue how LLMs actually work and what limits that imposes. They just want to believe like fucking morons in a cult.

              • by gweihir ( 88907 )

                You are so utterly without insight here, it is embarrassing. If it is a benchmark for difficult reasoning questions, it must measure reasoning or it is a failure. Take chess: It is not a reasoning problem to a computer. It is, in part, to a human. But because it is not to a computer, it is not suitable as a benchmark for reasoning capabilities.

                • You are so utterly without insight here, it is embarrassing. If it is a benchmark for difficult reasoning questions, it must measure reasoning or it is a failure. Take chess: It is not a reasoning problem to a computer. It is, in part, to a human. But because it is not to a computer, it is not suitable as a benchmark for reasoning capabilities.

                  Please reread the last two sentences of my previous reply since they address exactly this sort of issue. Whether humans use reasoning to do something doesn't mean something cannot be solved any other way, hence the analogy with the birds. If you think there's a problem with that analogy, by all means please explain it. That said, given that in the other subthread you are actively refusing to test a claim you've made that should just take you a few minutes, I'm guessing that this is not going to be a terribl

                • by HiThere ( 15173 )

                  To justify that assertion you must have a very specific definition of what "reasoning" entails. Would you care to share it?

                  In the definition of reasoning that I use, e.g., alpha-beta pruning would count as reasoning. Clearly you disagree.

                  • In the definition of reasoning that I use, e.g., alpha-beta pruning would count as reasoning. Clearly you disagree.

                    That's a really good point, and I would love to hear the GP's response! Why isn't alpha-beta pruning considered reasoning? Why isn't A* search over a generated graph considered reasoning?

                    I've said it before, but I truly do think many of the most vitriolic anti-LLM voices come from a place of being worried about what LLM capabilities say about the nature of human intelligence and reasoning. Perhaps the bottom line is that human intelligence is not so special as we make it out to be?

                    Go stands out to me as an
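                    For reference, here is a minimal sketch of the alpha-beta pruning mentioned above; the toy game tree and its leaf values are invented purely for illustration:

```python
# Minimal alpha-beta pruning over a toy game tree.
# Leaves are ints (static evaluations); internal nodes are lists of children.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: return its static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:              # alpha cutoff: we would never play into this
                break
        return value

# Invented 3-branch game tree; root is a maximizing node.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

                    Branches that provably cannot change the final choice are cut off without being explored, which is why the search is cheaper than plain minimax while returning the same value.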

                • At what point do humans become unable to reason, according to your rather strict requirements?

                  • by gweihir ( 88907 )

                    Since we do not know how the human mind works (or often fails, as this AI hype nicely shows), I have no idea. But I am a PhD level CS type and I have followed AI research for 35 years now. At one point I was considering making it a career myself, but the constant lying and overpromising turned me away.

                    And no, I do not use a narrow definition. I use one that makes sense. Unless you want to sell something and then a vacuum cleaner becomes "intelligent" and borderline AGI.

      • Maybe you've seen this paper, maybe not:

        "Recent math benchmarks for large language models (LLMs) such as MathArena indicate that state-of-the-art reasoning models achieve impressive performance on mathematical competitions like AIME, with the leading model, Gemini-2.5-Pro, achieving scores comparable to top human competitors. However, these benchmarks evaluate models solely based on final numerical answers, neglecting rigorous reasoning and proof generation which are essential for real-world mathematical ta

        • In the meantime, maths and physics students are using them at masters level to explain questions, give step-by-step explanations of proofs, and create test questions.

          The whole "can it reason" is arguably a moot issue if LLMs are better at maths and physics than 99% of the population.

    • Re:AI Training (Score:4, Insightful)

      by fph il quozientatore ( 971015 ) on Monday July 21, 2025 @04:06PM (#65535238)
      You could ask the same question about the contestants. They solved these problems because they trained. They recognized techniques and ideas similar to other problems they encountered, and they applied them to a new problem.
    • by Anonymous Coward

      If you had spent even a few minutes actually learning about LLMs, you would understand that training data is not directly stored in an LLM. In fact, you don't even need to research it; you can just use your brain and look at open releases. You can load the 8B-parameter release of DeepSeek into 24 GB of RAM, but it was trained on 14.8 trillion tokens (approximately 50 TB of data). You cannot store 50 TB of data in 24 GB.
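      As a rough back-of-envelope check of those numbers (the bytes-per-parameter and bytes-per-token figures are assumptions, not specifics from the release):

```python
# Back-of-envelope: an 8B-parameter model cannot be a verbatim copy
# of a ~50 TB training corpus. Byte-size figures below are assumptions.

PARAMS = 8e9               # 8 billion parameters
BYTES_PER_PARAM = 2        # fp16 weights (assumed)
TOKENS = 14.8e12           # 14.8 trillion training tokens
BYTES_PER_TOKEN = 3.5      # rough average for BPE-tokenized text (assumed)

model_bytes = PARAMS * BYTES_PER_PARAM    # ~16 GB of weights
corpus_bytes = TOKENS * BYTES_PER_TOKEN   # ~50 TB of text

ratio = corpus_bytes / model_bytes        # thousands-to-one
print(f"model: {model_bytes/1e9:.0f} GB, corpus: {corpus_bytes/1e12:.1f} TB, "
      f"ratio: ~{ratio:.0f}x")
```

      Even with generous assumptions, the weights come out thousands of times smaller than the corpus, so verbatim storage is arithmetically impossible.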


      • You're not even wrong. Perhaps start by taking a first year course on data compression. Once you get to a point where you understand how JPEG works, you will have everything you need to conceptually understand how it is possible for an LLM to represent its training data inside the parameters.

        If you're lazy, then I'll just point you to the fact that LLM exploits have been found that make the LLM spit out its training data in clear text. [theregister.com]

        • Only some locally coherent snippets. And that only after poking it for a long time. If you claim the LLMs will puke up their entire training set, just take that to the lawyers for the authors in the case vs OpenAI. They'll be quite happy to see you.

  • What will they think of next?

  • What comes to mind: was the latest Grok 4 Heavy tested? I didn't see any information about other engines taking this test for comparison.

  • by gweihir ( 88907 )

    I have no doubt that Maple or any other decent Computer Algebra system could have done the same ... 30 years ago. Or Wolfram Alpha.

    This is a completely meaningless stunt. The only purpose is to deceive the stupid about what these systems can do, or rather cannot do.

    • If you really believe this, by all means try this. This year's problems are here https://artofproblemsolving.com/wiki/index.php/2025_IMO_Problems [artofproblemsolving.com] and you can try to see if you can get Maple, or Mathematica to do these problems. The same link has all the previous contests back to 1959 (1980 did not have a contest). Given format changes, you are more likely to see problems that these systems can solve, especially some of the real inequality problems (which Maple can certainly do), but those problems have not
      • by Anonymous Coward

        You're replying to slashdot's local resident AI retard! Don't waste time reasoning with him. It won't work. For some reason, he feels it is his duty to shit on anything related to AI. He will never give any explanation or reasoning behind his comments and doesn't care what anyone says.

      • by gweihir ( 88907 )

        Why would I waste time in what is clearly a failed technology? The stupid always need years and years to find out a hype is just a hype and never has any of the substance claimed. I can do it directly.

        • Re:So? (Score:4, Insightful)

          by JoshuaZ ( 1134087 ) on Monday July 21, 2025 @06:28PM (#65535522) Homepage
          So, your evidence is to make a claim about what Maple or another computer algebra system must be able to do, and you aren't willing to actually spend 5 minutes checking whether you can do that. Here, I'll even make the following offer. Take the 2025 IMO. If you can get Maple or any other computer algebra system to solve more than 1 problem on it, and provide screenshots of it doing so, I'll donate $100 to a charity of your choice. So, do you now want to go and do what you claim is obviously the case? I suspect you aren't going to respond to this by actually trying what you claim is possible, and I'll let interested readers of this thread draw their own conclusions about why that's the case.
          • by gweihir ( 88907 )

            Not interested. You are deep in delusion and cannot be reached.

          • Not relevant to your post, but I appreciate your insights as a mathematician on this article (says the guy who tapped out after 3rd semester calc!). For what it's worth, I don't _think_ gweihir is trolling, though he could be, but he does seem to spend a substantial number of his waking hours posting ad hominems and just outright calling people stupid if anyone suggests any possible productive usage of an LLM. He doesn't engage in good faith, so the conversations are never particularly interesting.

        • by ceoyoyo ( 59147 )

          Ah, the gaps close even tighter.

          "I'm totally sure X or Y could do that."

          "Okay, try it!"

          "Why would I waste my time?"

          Why indeed. You might find out that you're wrong.

          • by gweihir ( 88907 )

            You might find out that you're wrong.

            Or I could waste a lot of time. Which is the more likely case.

  • The IMO is pretty pissed off at Google because the results were embargoed from publication until the 28th of July.
    Apparently Google did their own grading and claimed the gold medal, and the IMO is not happy:
    https://arstechnica.com/ai/202... [arstechnica.com]

    But hey, good on Goog for demonstrating some level of success, whether the IMO gives them a gold medal or not.
    Shame on them for violating the publication embargo, but I haven't read that contract or those T&Cs, so I can
    only judge by what the IMO says and what Goog says.

  • What's worse than the self-grading is that the rules include not using calculators.

    How can an AI do math without using a calculator?

    There is no way a computer anything can score even 1% using the same rules as humans.

    • If you read the summary, you'd see it was not self-graded but graded by IMO people. Also, while it is true that AI systems functionally have in some sense access to calculators, you can go and look at the IMO problems yourself https://artofproblemsolving.com/wiki/index.php/2025_IMO [artofproblemsolving.com] and see that the assistance one would get from a standard calculator is pretty minimal.
      • I think when people talk about calculator assistance for LLMs, they are conflating a general class of approaches to bolt on reasoning capabilities into those LLMs by delegating arithmetic, and delegating logic, to specialized, provably correct software tools. These tools are not AIs themselves and do not use ML algorithms in their implementations, so should be considered traditional software. The tools are necessary though, as the Chatbot/language models alone are incapable of performing arithmetic, and th

"Never give in. Never give in. Never. Never. Never." -- Winston Churchill

Working...