TLDR: A new research paper introduces the Generalization–and–Perturbation (GAP) framework and PutnamGAP dataset to evaluate LLM mathematical reasoning robustness. By creating mathematically equivalent but linguistically or parametrically varied versions of advanced math problems, the study found that 18 commercial and open-source LLMs experienced significant performance drops, highlighting their sensitivity to non-mathematical perturbations and a reliance on superficial cues rather than true abstract reasoning. The findings suggest a need for new training methods to improve LLM robustness in mathematical tasks.
Large Language Models, or LLMs, are increasingly being relied upon for complex tasks that demand genuine reasoning, not just pattern matching. However, a recent study highlights a critical challenge in evaluating their mathematical reasoning capabilities: existing benchmarks often suffer from data leakage, where test questions inadvertently seep into training data, leading to artificially inflated scores. This means a high score might not truly reflect an LLM’s ability to reason but rather its capacity to memorize.
To address this, researchers have introduced a novel approach called the Generalization–and–Perturbation (GAP) framework. The framework rigorously assesses an LLM’s mathematical reasoning robustness by stress-testing it with problems that are mathematically equivalent to the originals but differ in wording or in numerical parameters. By observing how LLMs perform on these transformed problems, the framework measures their sensitivity to non-mathematical changes, providing a clearer picture of their true reasoning abilities.
As an instantiation of the GAP framework, a new benchmark dataset named PutnamGAP has been created. The dataset is built on problems from the prestigious William Lowell Putnam Mathematical Competition, spanning 1938 to 2024. The researchers took 1,051 original problems and systematically generated five mathematically equivalent variants for each: four types of “surface renames” (replacing variable names with descriptive, confusing, misleading, or garbled strings) and one “kernel rewrite” (altering numerical constants while preserving the core mathematical structure). In total, PutnamGAP comprises 6,306 challenging stress-test questions: the 1,051 originals plus their 5,255 variants.
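To make the two perturbation types concrete, here is a minimal Python sketch of how such variants might be generated. The function names, the rename mapping, and the sample problem are illustrative assumptions rather than the paper’s actual pipeline; in particular, a real kernel rewrite must also re-derive the reference solution for the new constants.

```python
import re

def surface_rename(problem: str, mapping: dict) -> str:
    """Rename variables while leaving the mathematical content untouched."""
    # Word boundaries keep us from rewriting letters inside ordinary words.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], problem)

def kernel_rewrite(problem: str, scale: int = 2) -> str:
    """Alter numerical constants while keeping the problem's core structure."""
    # A real pipeline would re-derive the answer; this only illustrates the idea.
    return re.sub(r"\b\d+\b", lambda m: str(int(m.group(0)) * scale), problem)

original = "Let a and b be positive integers with a + b = 100. Find the maximum of a * b."
print(surface_rename(original, {"a": "zeta_1", "b": "xi_2"}))  # a confusing surface rename
print(kernel_rewrite(original, scale=3))                        # a kernel-style rewrite
```

Comparing a model’s accuracy on the originals with its accuracy on such variants is exactly the robustness measurement the benchmark is built around.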
The study evaluated 18 different commercial and open-source LLMs on this new benchmark. The results revealed a consistent and significant degradation in performance across almost all models when they faced the varied problems. For instance, OpenAI’s flagship reasoning model, o3, which scored 49% on the original problems, saw its accuracy drop by 4 percentage points on surface variants and by a more substantial 10.5 percentage points on core-step-based variants. Smaller models exhibited even more pronounced declines.
A key finding was that even minor changes, such as simply renaming variables, led to noticeable accuracy drops (averaging 3-5 percentage points). This suggests that LLMs might still rely on superficial textual cues, or “lexical semantic anchors,” rather than fully abstracting the underlying mathematical structures. The most severe performance declines were observed with kernel variants, where numerical constants were changed. Here, accuracy losses often ranged from 5 to 15 percentage points, indicating that understanding a solution pattern doesn’t automatically translate to reasoning that is invariant to parameter changes.
Interestingly, the types of errors the LLMs made, such as symbol confusion, step omission, arithmetic errors, and logic hallucinations, remained consistent across all variants, with logic hallucinations being the most common. This suggests that the mathematically equivalent perturbations degrade performance across the board rather than by introducing new kinds of mistakes.
The implications of these findings are significant. For evaluation, the GAP framework offers a continuous supply of contamination-resistant test items, which can help mitigate future data leakage and prevent artificial score inflation on benchmarks. For training, the results suggest that future LLM development should incorporate curriculum fine-tuning that explicitly randomizes both symbol identities and numeric parameters, rather than merely expanding pre-training corpora. From a security perspective, the observed surface-level fragility implies that LLMs could be vulnerable to prompt injection attacks using seemingly innocuous renamings, underscoring the need for integrating robustness checks into security pipelines.
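As a rough illustration of the symbol-and-parameter randomization recommended for fine-tuning, here is a hedged sketch that draws a freshly perturbed form of a problem at every epoch. The templated problem, symbol pool, and constant choices are assumptions made for illustration; the paper works with real Putnam problems rather than templates.

```python
import random

# Hypothetical templated problem: the symbols and the constant are slots to randomize.
TEMPLATE = "Let {x} and {y} be positive integers with {x} + {y} = {s}. Find the maximum of {x} * {y}."
SYMBOLS = ["a", "b", "m", "n", "p", "q", "u", "v"]

def sample_variant(rng: random.Random) -> str:
    """Draw fresh symbol names and a fresh constant so that no two epochs
    present the same surface form of the same underlying problem."""
    x, y = rng.sample(SYMBOLS, 2)          # randomize symbol identities
    s = rng.choice([50, 100, 144, 2024])   # randomize the numeric parameter
    return TEMPLATE.format(x=x, y=y, s=s)

rng = random.Random(0)
for epoch in range(3):
    # In a real curriculum, each freshly perturbed item would be fed to fine-tuning.
    print(f"epoch {epoch}:", sample_variant(rng))
```

The intent is simply that the model never sees the same pairing of surface form and constants twice, pushing it to bind to the underlying structure rather than to lexical anchors.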
The researchers recommend several practical strategies to improve LLM performance in mathematical reasoning tasks, including data augmentation (applying transformations during training), symbol binding (separating identifier tokens), and hybrid reasoning (embedding formal validators like SMT/CAS into the decoding process). This research makes it clear that true progress in mathematical AI will be measured not just by higher raw scores, but by an LLM’s ability to bridge the gap between symbols and their underlying mathematical substance. For more details, you can refer to the full research paper here.


