TLDR: UR2 is a novel framework that integrates Retrieval-Augmented Generation (RAG) and Reinforcement Learning (RL) in Large Language Models (LLMs). It addresses the isolation of these two capabilities by introducing a difficulty-aware curriculum that triggers retrieval only for challenging problems, and a hybrid knowledge access strategy combining offline corpora with LLM-generated summaries. This dynamic coordination enables LLMs to adaptively leverage external knowledge, leading to significant performance improvements across diverse reasoning and QA tasks, often matching or exceeding advanced commercial models.
Large Language Models (LLMs) have grown remarkably capable, and two techniques have been central to extending what they can do: Retrieval-Augmented Generation (RAG) and Reinforcement Learning from Verifiable Rewards (RLVR). RAG lets LLMs access external knowledge, grounding their responses in facts. RLVR, on the other hand, refines the LLM's ability to perform complex reasoning, especially in areas like mathematics and logic.
However, a common challenge is that these two powerful capabilities are often developed in isolation. Existing attempts to combine them have been limited, usually focusing only on specific tasks like open-domain question answering with fixed ways of retrieving information. This separation can limit how well these methods generalize to new situations and broader applications.
To overcome this, researchers have proposed a new framework called UR2, which stands for Unified RAG and Reasoning. UR2 is designed to dynamically coordinate retrieval and reasoning using reinforcement learning. It introduces two key innovations to achieve this:
Difficulty-Aware Curriculum Training
UR2 employs a curriculum in which retrieval is invoked only for problems the model finds genuinely challenging. For easier questions, the model is encouraged to rely on its internal reasoning abilities. This not only makes retrieval more efficient by cutting unnecessary searches, but also teaches the model when it truly needs external information, improving the quality of the queries it issues for difficult problems.
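The paper's exact difficulty heuristic isn't detailed here, but the gating idea can be sketched as follows. This is a minimal illustration, assuming difficulty is proxied by the model's pass rate over several unaided attempts at a question; all names are hypothetical, not from the paper.

```python
# Hedged sketch of difficulty-aware retrieval gating.
# Assumption: difficulty is estimated from how often the policy solves
# a problem without retrieval (a common curriculum proxy).

def estimate_difficulty(pass_rate: float) -> str:
    """Bucket a problem by the model's unaided success rate."""
    if pass_rate >= 0.7:
        return "easy"      # internal reasoning suffices
    if pass_rate >= 0.3:
        return "medium"
    return "hard"          # external knowledge likely needed

def retrieval_enabled(pass_rate: float) -> bool:
    """Trigger retrieval only for genuinely challenging problems."""
    return estimate_difficulty(pass_rate) == "hard"
```

Easy questions then stay retrieval-free during training, while hard ones are allowed to issue search queries.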
Hybrid Knowledge Access
Unlike previous methods that might only use static knowledge bases like Wikipedia, UR2 combines different sources of information. It leverages domain-specific offline corpora (like curated medical knowledge bases) for accurate grounding. Additionally, it uses summaries generated by LLMs themselves, which helps with efficiency and generalization across various tasks. This hybrid approach ensures that the model has access to both precise, specialized knowledge and broader, summarized information.
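To make the hybrid idea concrete, here is a minimal sketch of routing between the two knowledge sources. The corpus lookup and the summarization step are stand-ins (naive keyword matching and a stub in place of a real LLM call); the actual retrieval pipeline in UR2 is more sophisticated.

```python
# Hedged sketch of hybrid knowledge access: a curated offline corpus
# for precise grounding, plus LLM-generated summaries for coverage.
# All function names here are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # "offline_corpus" or "llm_summary"
    text: str

def search_offline_corpus(query: str, corpus: dict) -> list:
    """Naive keyword lookup over a domain corpus (e.g. medical notes)."""
    return [Passage("offline_corpus", text)
            for key, text in corpus.items() if key in query.lower()]

def summarize_with_llm(query: str) -> Passage:
    """Stub for an LLM call that condenses broader retrieved content."""
    return Passage("llm_summary", f"summary for: {query}")

def hybrid_retrieve(query: str, corpus: dict) -> list:
    """Prefer precise corpus grounding; fall back to LLM summaries."""
    hits = search_offline_corpus(query, corpus)
    return hits if hits else [summarize_with_llm(query)]
```

A domain query hits the offline corpus directly, while anything outside the corpus falls back to the summarized route.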
During training, UR2 models spontaneously develop advanced cognitive behaviors. These include self-verification through retrieval, validating intermediate reasoning steps, and revising hypotheses based on external evidence. This means the model doesn’t just retrieve information; it actively uses it to refine its thought process.
The effectiveness of UR2 has been demonstrated through extensive experiments across a variety of tasks, including open-domain question answering, general knowledge benchmarks (like MMLU-Pro), and specialized medical and mathematical reasoning. Built on models like Qwen2.5-3/7B and LLaMA-3.1-8B, UR2 significantly outperforms existing RAG and RL methods. In some cases, its performance is even comparable to highly capable commercial models like GPT-4o-mini and GPT-4.1-mini.
The framework uses a two-stage optimization process. The first stage focuses on activating the model’s retrieval capabilities, teaching it how and when to issue search queries correctly. The second stage then refines the quality of the answers, incorporating correctness feedback while maintaining the learned retrieval behaviors. This decoupled approach ensures stable learning and clear credit assignment for complex reasoning paths.
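The two stages can be caricatured as two reward functions. The sketch below assumes search calls are marked with `<search>` tags, a convention common in retrieval-augmented RL setups; the tag format and the exact reward shaping used by UR2 are assumptions here, not taken from the paper.

```python
# Hedged sketch of decoupled two-stage rewards.
# Stage 1 rewards well-formed retrieval behavior; stage 2 adds answer
# correctness while still requiring the learned retrieval format.

def format_reward(response: str) -> float:
    """Stage 1: did the model issue search queries in the expected format?"""
    return 1.0 if "<search>" in response and "</search>" in response else 0.0

def reward(response: str, answer: str, gold: str, stage: int) -> float:
    if stage == 1:
        return format_reward(response)
    # Stage 2: correctness feedback, gated on the retrieval format so the
    # behavior activated in stage 1 is not forgotten.
    correct = 1.0 if answer.strip() == gold.strip() else 0.0
    return correct * format_reward(response)
```

Separating the two signals is what gives the "clear credit assignment" mentioned above: the model first learns *how* to search, then *what* to answer.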
Ablation studies, where specific components of UR2 were removed, confirmed the importance of each part. For instance, removing the initial retrieval activation stage led to noticeable performance drops, highlighting its necessity. The use of LLM-generated summaries was also found to be crucial, as models struggled significantly without them. The framework’s robustness was further shown by its consistent strong performance even when using different LLMs for summarization, proving its adaptability to various computational budgets.
UR2 represents a significant step towards creating more adaptive AI systems that can flexibly combine their internal knowledge with dynamic access to external information. By learning to strategically retrieve information based on problem difficulty, UR2 enhances both reasoning and knowledge utilization in LLMs. For more technical details, you can refer to the full research paper: UR2: Unify RAG and Reasoning Through Reinforcement Learning.