TL;DR: A new study investigates how different data ordering strategies, known as curriculum learning, affect the mathematical reasoning abilities of large language models (LLMs). It introduces a framework that disentangles difficulty into five dimensions and finds no universal best strategy: the optimal ordering depends on the model's capabilities and the task's complexity. Data ordering significantly shapes both model performance and internal states such as confidence and uncertainty, offering actionable guidance for designing more effective LLM training curricula.
Curriculum learning, a strategy that organizes training data from easy to hard, has become a popular method for enhancing the reasoning abilities of large language models (LLMs). However, previous research has used many different ways to measure difficulty and set up training, leaving key questions unanswered: When does curriculum learning truly help? Is it better to go from easy to hard (forward) or hard to easy (reverse)? And does the answer change depending on what we measure?
A recent study from Dartmouth College, "What Makes a Good Curriculum? Disentangling the Effects of Data Ordering on LLM Mathematical Reasoning," addresses these questions through a comprehensive evaluation framework. The researchers, Yaning Jia, Chunhui Zhang, Xingjian Diao, Xiangchi Yuan, Zhongyu Ouyang, Chiyu Ma, and Soroush Vosoughi, broke curriculum difficulty down into five distinct dimensions: Problem Difficulty, Model Surprisal, Confidence Margin, Predictive Uncertainty, and Decision Variability.
No Universal Strategy
The study conducted controlled experiments using Llama3.1-8B, Mistral-7B, and Gemma3-4B on mathematical reasoning tasks. A key finding was that no single curriculum strategy works best universally. The effectiveness of training from easy-to-hard versus hard-to-easy depends on both the model’s capabilities and the complexity of the task. For instance, stronger models often benefit from an easy-to-hard approach on simpler tasks, while weaker models or more challenging tasks might perform better with a hard-to-easy order.
The Impact of Data Ordering
The research highlights that how data is presented matters as much as what data is presented. Even when the amount and type of training data are fixed, simply changing the order can significantly alter how a model learns and its reasoning outcomes. This suggests that data ordering is a crucial signal in the learning process.
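To make the idea concrete, here is a minimal sketch of forward versus reverse ordering over a fixed dataset. It assumes each example already carries a scalar difficulty score (from any of the metrics below); the function name and example data are hypothetical, not the authors' code.

```python
import random

def build_curriculum(examples, score_fn, order="forward"):
    """Reorder a fixed training set by a difficulty score.

    order="forward": ascending score (easy-to-hard)
    order="reverse": descending score (hard-to-easy)
    order="shuffle": random baseline over the same data
    """
    if order == "shuffle":
        shuffled = list(examples)
        random.shuffle(shuffled)
        return shuffled
    return sorted(examples, key=score_fn, reverse=(order == "reverse"))

# Hypothetical data: (problem, difficulty score in [0, 1])
data = [("2 + 2 = ?", 0.05),
        ("Sum of the first n odd numbers?", 0.40),
        ("Prove AM-GM for n = 3", 0.90)]
forward = build_curriculum(data, score_fn=lambda ex: ex[1])                   # easy first
reverse = build_curriculum(data, score_fn=lambda ex: ex[1], order="reverse")  # hard first
```

Both runs see exactly the same examples and labels; only the iteration order changes, which is the single variable the study isolates.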
Understanding Difficulty Metrics
The researchers explored various metrics to define difficulty, split into two families (sketched in code after the list):
- Problem-Side Metrics: These measure the inherent complexity of a problem, such as the number of reasoning steps, symbolic complexity, comprehension difficulty, and empirical accuracy over K sampled attempts (Acc@K).
- Model-Side Metrics: These capture difficulty from the model’s perspective, including how ‘surprised’ the model is by a sequence (perplexity), how confident it is in its predictions (confidence margin), how uncertain its predictions are (predictive uncertainty), and how stable its decisions are across multiple attempts (decision variability).
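The model-side signals can all be read off a single forward pass, plus repeated sampling for decision variability. Below is a minimal PyTorch sketch; the tensor shapes, helper names, and the Acc@K definition are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def model_side_metrics(logits: torch.Tensor, targets: torch.Tensor) -> dict:
    """Per-example difficulty signals from one forward pass.

    logits:  (seq_len, vocab_size) per-token logits
    targets: (seq_len,) gold token ids
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Surprisal: perplexity = exp(mean negative log-likelihood of gold tokens);
    # higher means the sequence is more "surprising" to the model.
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    perplexity = nll.mean().exp().item()

    # Confidence margin: mean gap between the top-1 and top-2 logits;
    # a larger gap means the model prefers its top prediction more decisively.
    top2 = logits.topk(2, dim=-1).values
    margin = (top2[:, 0] - top2[:, 1]).mean().item()

    # Predictive uncertainty: mean per-token entropy of the output distribution.
    entropy = -(probs * log_probs).sum(dim=-1).mean().item()

    return {"perplexity": perplexity, "confidence_margin": margin,
            "entropy": entropy}

def decision_variability(answers: list[str]) -> float:
    # How unstable the model's final answer is across K sampled attempts:
    # 0.0 = always the same answer; values near 1.0 = answers rarely repeat.
    majority = max(answers.count(a) for a in set(answers))
    return 1.0 - majority / len(answers)

def acc_at_k(answers: list[str], gold: str) -> float:
    # Problem-side empirical accuracy (Acc@K): fraction of K sampled
    # attempts that match the reference answer.
    return sum(a == gold for a in answers) / len(answers)
```

Any one of these signals can then serve as the `score_fn` in the ordering sketch above.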
Key Insights from Experiments
The study revealed several important insights, with the directional findings condensed into a short code sketch after the list:
- Problem Difficulty: Curricula based on problem difficulty generally improved performance, especially on easier datasets. However, the benefits decreased as tasks became more complex. Reverse curriculum learning (hard-to-easy) often worked better on simpler datasets, while forward curriculum learning (easy-to-hard) was more effective on harder ones.
- Model Surprisal: Perplexity, which measures how unexpected a sequence is, proved to be an effective but sometimes unstable signal. Starting with low-perplexity samples (forward CL) often led to more stable optimization.
- Confidence Margin: The logit gap, which measures how confidently a model prefers its top prediction, was a stable and effective signal. Reverse curriculum learning (starting with high-confidence examples) consistently improved reasoning performance.
- Predictive Uncertainty: Entropy-based metrics, reflecting the dispersion of predictions, were informative but less stable. For in-distribution tasks, forward CL (low-entropy first) often led to more stable optimization.
- Decision Variability: Curricula based on decision variability showed clear patterns. Reverse CL (starting with high-variability, less stable problems) consistently outperformed forward CL, particularly on simpler reasoning tasks.
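The directional findings above can be condensed into a small lookup that pairs with the ordering helper from earlier. This mapping is an illustrative reading of the bullet points, not the authors' recommendation table, and the glue function is hypothetical.

```python
# Direction the study reports as more reliable for each model-side signal,
# using the article's convention (forward = ascending metric value,
# reverse = descending). Problem-difficulty curricula are task-dependent
# (see the first bullet) and are therefore omitted here.
PREFERRED_DIRECTION = {
    "perplexity": "forward",            # low-perplexity samples first: more stable
    "confidence_margin": "reverse",     # high-confidence examples first
    "entropy": "forward",               # low-entropy first (in-distribution tasks)
    "decision_variability": "reverse",  # high-variability problems first
}

def order_by(metric, examples, scores):
    # Hypothetical glue: reorder examples by a metric in its reported direction.
    direction = PREFERRED_DIRECTION[metric]
    paired = sorted(zip(examples, scores), key=lambda p: p[1],
                    reverse=(direction == "reverse"))
    return [ex for ex, _ in paired]
```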
Shaping Internal States and Training Dynamics
Beyond external performance, curriculum learning also influences the model's internal states, such as confidence and uncertainty. Forward CL tends to produce more cautious, uncertainty-aware models, while reverse CL helps models maintain decisiveness with higher confidence. The study also found that different curriculum strategies shape the training process itself: metrics tied to the model's internal state primarily regulate how quickly and stably the model converges, while task-aligned metrics chiefly determine the final learning outcome and how well the model generalizes to new problems.
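One way to observe these dynamics is to log the internal-state signals during fine-tuning and compare trajectories across curricula. A minimal sketch, reusing the hypothetical `model_side_metrics` helper from above; the `training_steps` generator is also an assumption.

```python
# Hypothetical training-loop excerpt: record internal-state signals per step
# so convergence speed and stability can be compared across curricula.
# training_steps() is assumed to yield (logits, targets, loss) per batch.
history = []
for step, (logits, targets, loss) in enumerate(training_steps()):
    signals = model_side_metrics(logits, targets)  # helper sketched earlier
    history.append({"step": step, "loss": float(loss), **signals})
# Per the study's findings, forward-CL runs should trend toward higher
# entropy (more cautious models), reverse-CL runs toward larger margins.
```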
In conclusion, this research challenges the idea of a one-size-fits-all curriculum for LLMs and offers practical guidance: the optimal data ordering depends heavily on the specific model and task at hand. For instance, ordering training by model-side signals, such as starting with examples the model answers with high confidence or deferring those where it is most uncertain, can further boost its reasoning capabilities.


