TLDR: This research explores how large language models perform long chain-of-thought reasoning across different languages, including French, Japanese, Latvian, and Swahili. It finds that using English as a pivot language has varied effectiveness, extensive multilingual pretraining helps but doesn’t eliminate performance gaps, and the optimal dataset size for fine-tuning depends on the language. The study provides translated datasets to advance multilingual AI reasoning.
Large Language Models (LLMs) have made incredible strides in reasoning, often by generating a series of intermediate steps, a technique known as Chain-of-Thought (CoT) prompting. While this has led to expert-level performance in areas like mathematics and programming, this advanced reasoning has largely remained focused on the English language.
A recent research paper, “Long Chain-of-Thought Reasoning Across Languages,” delves into this limitation by systematically studying how LLMs perform long CoT reasoning in a multilingual context. The researchers, Josh Barua, Seun Eisape, Kayo Yin, and Alane Suhr from the University of California, Berkeley, aimed to understand when and why long CoTs transfer across different languages.
To achieve this, the team constructed translated versions of two popular English reasoning datasets, s1k and Bespoke-Stratos-17k, in French, Japanese, Latvian, and Swahili. These translated datasets, referred to as M-s1k and M-BS17k, were produced with Gemini 2.0 Flash, a general-purpose LLM used here as the translation engine. They then fine-tuned Qwen 2.5 (7B) and Qwen 3 (8B) models, chosen for their balance of reasoning capability and computational efficiency, on these multilingual datasets.
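To make the pipeline concrete, here is a minimal sketch of what such a translation step might look like using the google-generativeai SDK and the Hugging Face datasets library. It is not the authors' released code; the dataset ID, column names, and prompt wording are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' released code): translating an English
# reasoning dataset into a target language with Gemini 2.0 Flash.
# Dataset ID, column names, and prompt wording are assumptions.
import google.generativeai as genai
from datasets import load_dataset

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

def translate(text: str, language: str) -> str:
    """Translate one field, asking the model to keep math and code verbatim."""
    prompt = (
        f"Translate the following text into {language}. "
        "Preserve all mathematical notation, numbers, and code exactly.\n\n"
        f"{text}"
    )
    return model.generate_content(prompt).text

# s1k contains roughly 1,000 examples; Bespoke-Stratos-17k would use the same loop.
ds = load_dataset("simplescaling/s1K", split="train")  # assumed Hugging Face ID

target_language = "Swahili"
translated = ds.map(
    lambda ex: {
        "question": translate(ex["question"], target_language),
        "thinking": translate(ex["thinking_trajectories"][0], target_language),
        "answer": translate(ex["solution"], target_language),
    }
)
translated.to_json(f"m-s1k_{target_language.lower()}.jsonl")
```

A real pipeline would also need to chunk very long reasoning traces and spot-check that equations and code survive the round trip, but the structure stays the same.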
Key Findings from the Study
The experiments revealed three significant insights into multilingual CoT reasoning:
First, the effectiveness of using English as a 'pivot language', where models reason in English before producing the response in the target language, varies greatly. For French, English pivoting offered no performance benefit. For Japanese and Latvian, however, routing the reasoning through English did improve performance. For Swahili, a low-resource language, English pivoting proved insufficient, with models struggling in both task comprehension and reasoning.
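As a rough illustration of the two conditions being compared, the snippet below builds a native-language prompt and an English-pivot prompt for the same question. The wording is an assumption for illustration, not the paper's exact prompt template.

```python
# Sketch of the two prompting setups: native-language reasoning vs. English
# as a pivot for the thinking portion. Prompt wording is illustrative only.

def native_prompt(question: str, language: str) -> str:
    # Model reasons and answers entirely in the target language.
    return (
        f"Answer the following question. Think step by step in {language} "
        f"and give the final answer in {language}.\n\n{question}"
    )

def english_pivot_prompt(question: str, language: str) -> str:
    # Model reasons in English, then states the final answer in the
    # target language (the "pivot" condition).
    return (
        "Answer the following question. Think step by step in English, "
        f"then state the final answer in {language}.\n\n{question}"
    )

# Example: the same Japanese math question under both conditions.
q = "2桁の素数のうち、最大のものは何ですか?"  # "What is the largest two-digit prime?"
print(native_prompt(q, "Japanese"))
print(english_pivot_prompt(q, "Japanese"))
```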
Second, extensive multilingual pretraining, as seen in the Qwen 3 models, which were trained on approximately 36 trillion tokens covering 119 languages, significantly narrows the cross-lingual performance gap. It does not eliminate it, though: a lightweight fine-tune on just 1,000 reasoning traces still yielded improvements of over 30% in Swahili, demonstrating the continued value of targeted fine-tuning for certain languages.
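For readers curious what such a lightweight fine-tune looks like in practice, here is a sketch using TRL's SFTTrainer on a translated trace file like the one produced above. The checkpoint name, file name, hyperparameters, and text formatting are assumptions, not the paper's training recipe.

```python
# Sketch of a lightweight supervised fine-tune on ~1,000 translated traces.
# Checkpoint, file name, formatting, and hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the translated reasoning traces (e.g. the Swahili split of M-s1k).
raw = load_dataset("json", data_files="m-s1k_swahili.jsonl", split="train")

def to_text(example):
    # Concatenate question, reasoning trace, and answer into one training string.
    return {
        "text": (
            f"Question: {example['question']}\n"
            f"Reasoning: {example['thinking']}\n"
            f"Answer: {example['answer']}"
        )
    }

train_ds = raw.map(to_text)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # assumed checkpoint; the paper uses Qwen 2.5 (7B) and Qwen 3 (8B)
    train_dataset=train_ds,
    args=SFTConfig(
        output_dir="qwen2.5-7b-swahili-cot",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
)
trainer.train()
```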
Third, the trade-offs between data quality and scale during fine-tuning are language-dependent. Small, carefully curated datasets were sufficient for English and French. In contrast, larger, even if noisier, corpora proved more effective for languages like Swahili and Latvian. This suggests that low-resource languages might benefit more from a greater volume of data to compensate for less pretraining exposure to specific reasoning patterns.
This systematic study provides valuable insights into the complexities of enabling advanced reasoning capabilities in LLMs across diverse languages. By releasing their translated datasets, the researchers hope to foster more equitable multilingual reasoning research in the future. You can find more details about their work in the full research paper: Long Chain-of-Thought Reasoning Across Languages.


