TLDR: SeaLLMs-Audio is the first large audio-language model (LALM) specifically designed for Indonesian, Thai, Vietnamese, English, and Chinese. It supports multilingual, multimodal, and multi-task functionalities, handling diverse audio analysis and voice-based interaction tasks. The researchers also created SeaBench-Audio, a benchmark for evaluating LALMs in Southeast Asian languages. Experiments show SeaLLMs-Audio performs competitively, particularly excelling in factual, life, medical, and math-related audio queries.
A new advancement in artificial intelligence, SeaLLMs-Audio, has been introduced, marking a significant step forward for audio-language models in Southeast Asia. This innovative model is the first large audio-language model (LALM) specifically designed to cater to multiple Southeast Asian (SEA) languages, including Indonesian (id), Thai (th), and Vietnamese (vi), alongside English (en) and Chinese (zh).
SeaLLMs-Audio is built to be highly versatile, offering three key features:
Multilingual Capabilities
The model primarily supports five languages: Indonesian, Thai, Vietnamese, English, and Chinese. This focus addresses a critical gap, as most existing LALMs typically support only one or two languages, often leaving multilingual and low-resource regions like Southeast Asia underserved.
Multimodal Input Flexibility
SeaLLMs-Audio can accept various input types, making it highly adaptable. Users can provide audio only, text only, or a combination of both audio and text, allowing for more natural and diverse interactions.
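To make the three input modes concrete, here is a minimal sketch of how such inputs might be assembled, using a chat-message format similar to the one Qwen2-Audio-style models consume. The exact message schema and field names for SeaLLMs-Audio are assumptions here, not the model's documented API.

```python
# Hypothetical sketch of the three input modes (audio only, text only,
# or both) as chat messages. The schema below is an assumption modeled
# on Qwen2-Audio-style conversation formats.

def build_messages(text=None, audio_url=None):
    """Build a single-turn user message from text, audio, or both."""
    if text is None and audio_url is None:
        raise ValueError("Provide text, audio, or both")
    content = []
    if audio_url is not None:
        content.append({"type": "audio", "audio_url": audio_url})
    if text is not None:
        content.append({"type": "text", "text": text})
    return [{"role": "user", "content": content}]

# Audio-only input, e.g. a spoken question in Thai:
audio_only = build_messages(audio_url="question_th.wav")

# Mixed input: an audio clip plus a text instruction:
mixed = build_messages(
    text="Summarize this clip in Vietnamese.",
    audio_url="meeting_vi.wav",
)
```

In practice, messages like these would be passed through the model's processor and tokenizer before generation; the point of the sketch is only that all three combinations reduce to the same message structure.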
Multi-task Functionality
The model supports a broad spectrum of audio-centric tasks. These include detailed audio analysis tasks such as Audio Captioning (AC), Automatic Speech Recognition (ASR), Speech-to-Text Translation (S2TT), Speech Emotion Recognition (SER), Speech Question Answering (SQA), and Speech Summarization (SS). Beyond analysis, it also facilitates voice-based dialogue, capable of answering factual, mathematical, and general knowledge queries.
The development of SeaLLMs-Audio involved training on a vast audio corpus, which enables it to perform strongly across these diverse tasks. It aims to benefit both the regional research community and industry by advancing audio LLMs in Southeast Asia.
To ensure robust evaluation for this new generation of LALMs, the researchers also introduced SeaBench-Audio. This is a comprehensive benchmark specifically designed for assessing LALMs in Southeast Asian languages. SeaBench-Audio includes 14 distinct tasks, covering scenarios with both audio and text inputs, as well as tasks with audio-only inputs. These tasks reflect real-world, multimodal language understanding challenges, and the benchmark uses an “LLM-as-a-judge” framework for consistent and scalable evaluation, showing high agreement with human assessments.
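An LLM-as-a-judge setup typically prompts a strong LLM to grade a model's answer and then parses a numeric rating from its reply. The sketch below illustrates the parsing step only; the prompt template, rating scale, and reply format are illustrative assumptions, not SeaBench-Audio's actual protocol.

```python
import re

# Illustrative sketch of one step in an "LLM-as-a-judge" pipeline:
# extracting a numeric score from the judge model's free-text reply.
# The "Rating: N" convention and 1-5 scale are assumptions.

JUDGE_PROMPT = (
    "You are grading an audio-language model's answer.\n"
    "Question: {question}\n"
    "Model answer: {answer}\n"
    "Reply with 'Rating: N' where N is an integer from 1 (poor) to 5 (excellent)."
)

def parse_rating(judge_reply, scale=(1, 5)):
    """Extract the numeric rating from a judge reply, or None if absent/invalid."""
    match = re.search(r"Rating:\s*(\d+)", judge_reply)
    if match is None:
        return None  # unparseable reply; a real pipeline might re-query the judge
    rating = int(match.group(1))
    lo, hi = scale
    return rating if lo <= rating <= hi else None
```

A reply such as "Rating: 4 - fluent and factually correct" parses to 4, while out-of-range or missing ratings are rejected. Averaging such scores over many items is what lets the framework scale while remaining consistent across languages.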
Under the hood, SeaLLMs-Audio is built upon the Qwen2-Audio-7B and Qwen2.5-7B-Instruct architectures. It leverages the effective audio encoding capabilities of Qwen2-Audio-7B and the strong multilingual features of Qwen2.5-7B-Instruct. The model underwent full-parameter fine-tuning on a newly curated, large-scale audio dataset, which includes 1.58 million conversations across various tasks and languages.
Experiments conducted on the SeaBench-Audio benchmark demonstrate that SeaLLMs-Audio achieves competitive performance compared to other LALMs in SEA languages. While other models like MERaLiON-2 showed strengths in audio comprehension tasks such as ASR, S2TT, and SER, SeaLLMs-Audio excelled in categories like factual questions, life-related queries, medical questions, and mathematical problems. This highlights the extensive scope and varied nature of its training data.
The researchers acknowledge limitations, such as the current focus on three selected Southeast Asian languages due to resource constraints, though the methodology is designed to be extensible. They also note instances of language mixing, a common behavior in LALMs, which they suggest could be improved with future work using reinforcement learning.
For more detailed information, you can read the full research paper here: SeaLLMs-Audio: Large Audio-Language Models for Southeast Asia.