TLDR: A new framework called LANPO (Language-And-Numerical Policy Optimization) enhances Large Language Models (LLMs) by integrating both language and numerical feedback in reinforcement learning. It resolves issues like “information leakage” and “behavior collapse” by using “Reward-Agnostic Reflection” for self-correction and “Relevant Abstraction” to distill generalizable lessons from past experiences. This approach significantly improves LLM performance on mathematical reasoning tasks, making learning more efficient and robust.
Large Language Models (LLMs) have shown incredible potential in various reasoning tasks, often enhanced through Reinforcement Learning (RL). Traditionally, RL in LLMs relies on simple numerical rewards, which tell the model if an answer was right or wrong but discard the rich, explanatory text generated during the model’s thought process. This means LLMs often have to start from scratch with each new problem, leading to inefficient learning and repetitive mistakes.
Imagine trying to learn a new skill, but every time you fail, you’re only told “wrong” without any explanation of why or how to improve. That’s similar to how LLMs learn with only numerical feedback. However, LLMs are unique because they can also understand and generate human-like language. This opens up the possibility of using language feedback – like explanations of why a solution failed or general problem-solving strategies – to guide their learning.
Integrating this language feedback into RL training isn’t straightforward, though. Researchers faced a dilemma: if an LLM receives feedback on the exact problem it’s trying to solve (intra-sample feedback), it might simply memorize the answer, leading to inflated performance that doesn’t generalize to new problems. This is called “information leakage.” On the other hand, if it receives feedback from different, unrelated problems (inter-sample feedback), it might ignore the context altogether, a phenomenon termed “behavior collapse.” This happens because feedback from other problems is often too specific to those problems or simply irrelevant, so the model finds it easier to ignore the context and generate a solution from scratch.
Introducing LANPO: A New Approach to LLM Learning
To overcome these challenges, a new framework called Language-And-Numerical Policy Optimization (LANPO) has been proposed. LANPO cleverly separates the roles of the two types of feedback: language guides the model’s exploration and helps it understand *how* to approach problems, while numerical rewards drive the actual optimization, telling the model *which* attempts to reinforce. This synergistic approach allows LLMs to learn more efficiently and robustly.
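To make this division of labor concrete, here is a minimal, hypothetical sketch (not the paper’s code) of how language feedback could condition the rollout prompt while only numerical rewards enter a GRPO-style, group-normalized advantage. The `Rollout` fields, `build_prompt`, and the binary reward are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rollout:
    prompt: str    # problem text plus any retrieved language feedback
    response: str  # the model's attempted solution
    reward: float  # numerical signal, e.g. 1.0 if the final answer is correct

def build_prompt(problem: str, language_feedback: Optional[str]) -> str:
    """Language feedback shapes exploration by conditioning the rollout prompt,
    but it never enters the reward computation."""
    if language_feedback:
        return f"{language_feedback}\n\nProblem: {problem}"
    return f"Problem: {problem}"

def grpo_advantages(rollouts: List[Rollout]) -> List[float]:
    """Only the numerical rewards drive optimization: advantages are
    group-normalized rewards, in the spirit of GRPO."""
    rewards = [r.reward for r in rollouts]
    mean = sum(rewards) / len(rewards)
    std = (sum((x - mean) ** 2 for x in rewards) / len(rewards)) ** 0.5
    return [(x - mean) / (std + 1e-8) for x in rewards]
```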
At the heart of LANPO are two key mechanisms designed to make language feedback effective and safe (a rough sketch of both follows the list below):
- Reward-Agnostic Reflection: For intra-sample feedback, instead of giving the model the correct answer, LANPO encourages it to review its own past attempts, critique them step-by-step, and then produce a refined solution. This fosters a self-corrective learning habit without revealing the ground truth, preventing information leakage and promoting genuine reasoning improvement.
- Relevant Abstraction: For inter-sample feedback, LANPO ensures that past experiences are not just raw solutions but are distilled into generalizable “principles and pitfalls.” This involves filtering for semantically similar problems and summarizing their solutions into high-level strategies. This prevents behavior collapse by providing guidance that is both useful and transferable, rather than problem-specific.
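To illustrate the two mechanisms, here is a rough, hypothetical sketch of how the two kinds of prompts might be assembled; the function names and prompt wording are assumptions, not the paper’s implementation. The key point is structural: the reflection prompt never includes the reward or the ground-truth answer, and the abstraction prompt carries only distilled lessons from similar problems rather than raw solutions.

```python
from typing import List

def reflection_prompt(problem: str, past_attempts: List[str]) -> str:
    """Reward-Agnostic Reflection (illustrative): show the model its own
    earlier attempts and ask for a step-by-step critique and a refined
    solution. The ground-truth answer and the reward are never included,
    so there is nothing to memorize (no information leakage)."""
    attempts = "\n\n".join(
        f"Attempt {i + 1}:\n{a}" for i, a in enumerate(past_attempts)
    )
    return (
        f"Problem: {problem}\n\n"
        f"Your previous attempts:\n{attempts}\n\n"
        "Critique each attempt step by step, identify likely errors, "
        "then write a refined solution."
    )

def abstraction_prompt(problem: str, similar_lessons: List[str]) -> str:
    """Relevant Abstraction (illustrative): prepend distilled, generalizable
    lessons ('principles and pitfalls') drawn only from semantically similar
    problems, never their raw solutions, to avoid behavior collapse."""
    lessons = "\n".join(f"- {lesson}" for lesson in similar_lessons)
    return (
        "General principles and pitfalls from related problems:\n"
        f"{lessons}\n\n"
        f"Problem: {problem}"
    )
```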
LANPO operates with a dynamic “experience pool” that accumulates these distilled insights from past trials. When tackling a new problem, the model can retrieve relevant context from this pool, either for self-reflection on its own attempts or for abstract guidance from similar problems. This memory system fuels both reflection and abstraction, making the learning process more informed.
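A toy version of such an experience pool, assuming an off-the-shelf sentence-embedding function `embed` and a cosine-similarity filter (both are illustrative choices, not the paper’s implementation), might look like this:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence, Tuple

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Experience:
    problem: str
    lesson: str                 # distilled "principles and pitfalls"
    embedding: Sequence[float]  # embedding of the problem text

@dataclass
class ExperiencePool:
    embed: Callable[[str], Sequence[float]]  # any sentence-embedding function
    items: List[Experience] = field(default_factory=list)

    def add(self, problem: str, lesson: str) -> None:
        """Store a distilled lesson keyed by the problem's embedding."""
        self.items.append(Experience(problem, lesson, self.embed(problem)))

    def retrieve(self, problem: str, k: int = 3, min_sim: float = 0.7) -> List[str]:
        """Return lessons from the k most similar stored problems, dropping
        anything below a similarity threshold so that irrelevant context
        never reaches the policy."""
        query = self.embed(problem)
        scored: List[Tuple[float, str]] = [
            (cosine(query, e.embedding), e.lesson) for e in self.items
        ]
        scored.sort(reverse=True)
        return [lesson for sim, lesson in scored[:k] if sim >= min_sim]
```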
Empirical Success and Future Implications
The effectiveness of LANPO was rigorously tested on challenging mathematical reasoning benchmarks. The results showed that LANPO-trained models, specifically Qwen2.5-7B and Qwen3-14B, significantly outperformed strong baselines like GRPO in test accuracy. For instance, LANPO achieved an absolute performance improvement of up to 9.27% on the AIME25 test set after the same number of training steps. These improvements were observed not only in zero-shot inference (solving problems without explicit hints) but also when models were allowed to use their learned self-correction or retrieval mechanisms at test time.
The research also delved into the training dynamics, revealing how LANPO’s feedback mechanisms distinctly shape exploration. Reward-Agnostic Reflection encouraged broader exploration, while Relevant Abstraction helped focus the policy’s search space. The study also highlighted the critical importance of filtering mechanisms for inter-sample feedback and finding the right balance for the feedback ratio during training.
In conclusion, LANPO offers a robust method for integrating historical experiences into the LLM RL loop, creating more effective and data-efficient learning agents. By carefully structuring how language feedback is used, it transforms previously conflicting signals into complementary drivers of efficient and robust policy learning. This work provides a practical blueprint for enhancing LLM reasoning abilities, paving the way for more adaptable and sample-efficient RL systems. You can read the full paper here.


