Can Artificial Intelligence Solve Real Mathematical Research Problems? Scientists Put AI to the Test
Artificial Intelligence (AI) is becoming an increasingly common tool in mathematics, mirroring trends across the wider scientific community. Mathematics underpins AI development, and mathematicians in turn are using these systems for tasks such as scanning academic literature and spotting errors in draft papers. Their real interest, however, lies in a more demanding test: can AI solve authentic, high-level research problems?
The Challenge of Measuring AI's Mathematical Ability
Until now, there has been no agreed framework for realistically assessing AI's performance in advanced mathematics. In response, a group of mathematicians set out to evaluate these capabilities in a new study released on the arXiv preprint platform.
What sets this work apart is the nature of the questions posed to the AI. Rather than using familiar textbook or competition problems, the researchers drew directly from their own unpublished research, ensuring the challenges were entirely new and beyond anything the systems could have memorized.
How the First Proof Experiment Was Designed
To ensure the integrity of the test, every participating mathematician contributed an original problem and solved it beforehand, demonstrating that a solution was achievable. The solutions were then encrypted, so that they could not leak into the public web data on which AI models are trained.
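The paper's exact scheme is not specified here, but the general pattern is a commit-then-reveal protocol: publish the ciphertext immediately, withhold the key until the reveal date. The sketch below illustrates the idea in Python; the `cryptography` package's Fernet API is an illustrative assumption, not the authors' actual tooling.

```python
# Commit-then-reveal sketch (illustrative only; not the First Proof pipeline).
# The ciphertext can be posted publicly right away, showing a solution
# existed at commit time, while the key stays private until the reveal date.
from cryptography.fernet import Fernet

solution = b"Proof sketch: by induction on n, ..."  # placeholder text

key = Fernet.generate_key()                 # withheld until the reveal date
ciphertext = Fernet(key).encrypt(solution)  # safe to publish immediately

# On the reveal date, publishing `key` lets anyone recover and verify
# the committed solution:
assert Fernet(key).decrypt(ciphertext) == solution
```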
Altogether, the study featured ten problems drawn from diverse areas of mathematics, including:
- Stochastic analysis
- Spectral graph theory
- Symplectic geometry
- Algebraic topology
These challenges were posed to several state-of-the-art systems, among them GPT-5.1 Pro and Gemini 3 Pro, with each model given just one chance to respond.
No additional guidance, conversation or clues were allowed.
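In code terms, the protocol amounts to a strict one-shot loop: each model sees each problem statement exactly once, with no retries or follow-up turns. The sketch below makes that explicit; `query_model` is a hypothetical stand-in for whichever API each system exposes, since the study's actual harness is not described here, and the problem statements are placeholders (the real ones are unpublished by design).

```python
# Schematic one-shot evaluation loop mirroring the protocol described above.
# `query_model` is a hypothetical placeholder, not a real vendor API.
def query_model(model_name: str, prompt: str) -> str:
    return f"[{model_name}'s answer to: {prompt[:40]}...]"  # stub response

# Placeholder problem statements standing in for the unpublished originals.
problems = {
    "stochastic-analysis-1": "Prove that the process X_t defined by ...",
    "spectral-graph-theory-1": "Bound the second eigenvalue of ...",
}

def evaluate(model_name: str) -> dict[str, str]:
    answers = {}
    for pid, statement in problems.items():
        # Exactly one call per problem: no hints, no conversation,
        # no second attempts.
        answers[pid] = query_model(model_name, statement)
    return answers

print(evaluate("GPT-5.1 Pro"))
```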
What the Researchers Mean by "First Proof"
Named First Proof, the experiment targeted a precise stage of mathematical research. According to the authors, it concentrates on the final, well-specified step, where the problem and conceptual tools are already clearly defined.
This approach was designed to test whether AI systems could bridge the final gap between known methods and a complete, correct proof—an ability central to genuine mathematical discovery.
AI Performance on Unpublished Research Problems
The findings are likely to reassure anyone worried that artificial intelligence is on the brink of replacing mathematicians. While today's AI systems excel at summarizing established knowledge and spotting patterns in data, they consistently struggled to solve the research problems when given only a single attempt.
The researchers conclude that, for now, AI performs well on contest-style exercises but lacks the creative insight and intuition required to explore genuinely unknown mathematical territory.
What Comes Next for the First Proof Benchmark
Next, the team plans to decrypt and release the solutions on 13 February, before moving on to a second round of challenges. Their long-term goal is to develop First Proof into a more formal, enduring benchmark of AI's mathematical abilities.
Such a benchmark could help track future progress in AI reasoning, creativity and problem-solving as systems continue to evolve.
Key Takeaways for Readers
- Mathematicians tested AI using entirely new, unpublished research problems.
- The experiment removed memorization by encrypting solutions in advance.
- Leading AI models were given only one attempt, with no hints or interaction.
- AI struggled with genuine research-level mathematics.
- Researchers aim to make First Proof a long-term benchmark for AI reasoning.
