TL;DR: This research investigates using Speech LLMs for Automatic Speech Recognition (ASR) in low-resource languages. It finds that roughly 200 hours of data are needed to match a Whisper-only baseline, but pretraining a lightweight “projector” on high-resource languages significantly improves ASR in data-scarce settings, especially with small training sets (10-15 hours). Multilingual pretraining further enhances results, offering a promising strategy for expanding speech technology to more languages.
Large Language Models (LLMs) have shown great promise in understanding spoken language, especially for widely spoken languages. However, their effectiveness in “low-resource” settings—languages with limited available data—has been less explored. This research delves into how Speech LLMs can be used for Automatic Speech Recognition (ASR) in these challenging environments, specifically using the SLAM-ASR framework.
The SLAM-ASR framework is a system that connects a speech encoder (which processes spoken input) with an LLM (which handles language understanding and generation) through a “lightweight projector.” This projector is a small, trainable component that helps align the speech information with the LLM’s understanding. The study used the Whisper-large-v3-turbo model as the speech encoder and evaluated two open-source multilingual LLMs: EuroLLM 1.7B and Salamandra 2B.
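The paper’s exact projector configuration isn’t reproduced here, but a minimal sketch of the idea, with the downsampling factor, hidden sizes, and two-layer design as illustrative assumptions, could look like this:

```python
import torch
import torch.nn as nn

class SpeechProjector(nn.Module):
    """Lightweight projector mapping speech-encoder features into the LLM's
    embedding space (dimensions and layer sizes below are illustrative)."""
    def __init__(self, encoder_dim: int = 1280, llm_dim: int = 2048, downsample: int = 4):
        super().__init__()
        self.downsample = downsample  # stack adjacent frames to shorten the sequence
        self.proj = nn.Sequential(
            nn.Linear(encoder_dim * downsample, llm_dim),
            nn.ReLU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, encoder_dim) from the frozen speech encoder
        b, t, d = feats.shape
        t = t - t % self.downsample  # trim so frames stack evenly
        feats = feats[:, :t, :].reshape(b, t // self.downsample, d * self.downsample)
        return self.proj(feats)      # (batch, time / downsample, llm_dim)
```

The projected features are prepended to the LLM’s input embeddings, and only the projector’s parameters are updated during training; the encoder and LLM remain frozen.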
Data Requirements for Training
One of the key questions the researchers addressed was how much training data is needed to train this lightweight, linear projector effectively. They simulated low-resource conditions by using varying amounts of data from the Common Voice (CV) Italian dataset, ranging from 10 to 252 hours. Their findings indicate that roughly 200 hours of training data are required to reach performance comparable to, or slightly better than, a Whisper-only baseline. This highlights the ongoing challenge of data scarcity in low-resource language scenarios, even when only a very lightweight component is trained.
The choice of LLM also played a significant role. EuroLLM 1.7B consistently outperformed Salamandra 2B, suggesting that the underlying LLM greatly influences the ASR performance. Interestingly, the performance gap between the two LLMs narrowed as more training data became available.
The Power of Pretraining
Given that 100-200 hours of labeled data can be difficult to obtain for many languages, the study explored a transfer learning approach: pretraining the projector on a high-resource language and then fine-tuning it on a low-resource one. They pretrained projectors on English (LibriSpeech-100 and Common Voice English) and Spanish (Common Voice Spanish) data, then fine-tuned them on Italian and Galician datasets.
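In practice, this amounts to warm-starting the projector from weights trained on the high-resource data and then continuing training on the target language, with the encoder and LLM kept frozen throughout. A minimal sketch of that fine-tuning loop, where `llm_loss_fn`, the file name, and the hyperparameters are assumptions rather than the paper’s actual setup, might look like this:

```python
import torch

def finetune_projector(projector, llm_loss_fn, dataloader,
                       pretrained_path=None, epochs=3, lr=1e-4):
    """Fine-tune only the projector; the speech encoder and LLM stay frozen.

    `llm_loss_fn(projected_feats, transcript)` stands in for the frozen LLM's
    next-token loss on the reference transcript (a placeholder, not the
    paper's exact training code).
    """
    if pretrained_path is not None:
        # warm-start from a projector pretrained on a high-resource language,
        # e.g. a hypothetical "projector_es.pt" trained on Common Voice Spanish
        projector.load_state_dict(torch.load(pretrained_path))

    optimizer = torch.optim.AdamW(projector.parameters(), lr=lr)
    projector.train()
    for _ in range(epochs):
        for encoder_feats, transcript in dataloader:
            # encoder_feats come from the frozen speech encoder (e.g. Whisper)
            loss = llm_loss_fn(projector(encoder_feats), transcript)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return projector
```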
The results were compelling, especially for very limited datasets (10-15 hours). Projectors pretrained on high-resource languages significantly improved performance compared to training a projector from scratch. For instance, with just 10 hours of Italian training data, a pretrained projector drastically reduced the Word Error Rate (WER, the proportion of words transcribed incorrectly). This suggests that pretraining helps the model generalize better, making it more robust even with minimal target language data.
The study also found that the language used for pretraining matters. For Italian, pretraining on Spanish data yielded better results than English, likely due to higher acoustic similarity. Furthermore, leveraging a “multilingual” projector—one pretrained on a combination of languages like English, Spanish, and Italian—further enhanced performance and generalization capabilities, particularly in the low-resource Galician case study.
Implications for Low-Resource Languages
This research provides valuable insights into making advanced speech technologies more accessible for the thousands of languages currently underserved. While Speech LLMs, particularly within the SLAM-ASR framework, still require substantial data to reach peak performance, the strategies of pretraining and fine-tuning offer a promising path forward. These methods can significantly mitigate the impact of data scarcity, making it more feasible to develop effective ASR systems for languages with limited resources. Future work will likely focus on further optimizing these transfer learning techniques and exploring linguistic factors to improve cross-lingual transfer, especially between closely related languages. You can read the full research paper here.


