TLDR: Researchers have introduced MMAO-Bench, a novel and diverse benchmark designed to thoroughly evaluate “omni models” – AI systems that unify visual, audio, and language understanding. This benchmark, featuring 1880 human-curated samples across 44 task types and innovative multi-step open-ended questions, addresses limitations of previous evaluations. A key finding is the “compositional law,” which suggests that while weaker models are limited by their least capable modality, stronger models demonstrate enhanced, synergistic performance when integrating multiple senses. Experiments highlight the superior performance of advanced models like Gemini, yet also reveal that complex reasoning remains a significant challenge, even as perception capabilities approach human levels.
As artificial intelligence continues its rapid advancement, we are witnessing the emergence of ‘omni models’ – sophisticated AI systems capable of understanding and processing information across multiple modalities, including vision, audio, and language, all at once. This represents a significant leap from earlier models that typically specialized in a single type of data, such as just text or just images. However, evaluating these complex omni models presents a unique challenge: how do we accurately measure their ability to not only understand individual modalities but also to integrate them seamlessly for a holistic comprehension of the world?
Existing evaluation methods often fall short. Many benchmarks focus on specific modalities like image or video understanding, or they might be limited to English-only contexts. Furthermore, some datasets suffer from quality issues or questions that can be solved without truly requiring cross-modal reasoning, making it difficult to gauge a model’s true ‘omni-modal’ intelligence.
To address these critical gaps, a team of researchers has introduced a groundbreaking new evaluation framework called MMAO-Bench: MultiModal All in One Benchmark. This innovative benchmark is specifically designed to provide a comprehensive and high-quality assessment of omni models, evaluating both their individual (uni-modal) and combined (omni-modal) understanding capabilities. You can delve into the full research paper for more details: MMAO-Bench: MultiModal All in One Benchmark Reveals Compositional Law between Uni-modal and Omni-modal in OmniModels.
MMAO-Bench stands out with its impressive scale and diversity, featuring 1880 meticulously human-curated samples across 44 distinct task types. A significant innovation is its inclusion of multi-step open-ended questions. Unlike traditional multiple-choice formats, these questions require models to perform complex, multi-step reasoning and provide open-ended text answers, offering a more realistic and discriminative evaluation of advanced cognitive abilities.
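To make the format concrete, here is a purely illustrative sketch of what a multi-step open-ended item might look like. The field names and content below are invented for this article and are not drawn from the released dataset:

```python
# Invented example of a multi-step open-ended item (illustrative only;
# the real MMAO-Bench schema may differ). The model must chain several
# cross-modal observations before producing a free-form answer.
sample = {
    "task_type": "cross-modal complex reasoning",  # one of the 44 task types
    "inputs": {"video": "clip_0001.mp4", "audio": "clip_0001.wav"},
    "question": ("What instrument produces the sound heard when the "
                 "person on the left stands up, and why can it not be "
                 "identified from the video alone?"),
    "answer_format": "open-ended",  # free text, not multiple choice
    "reference": "A cello; the player is off-screen, so only the audio reveals it.",
}
```

Because the answer is free text rather than a choice among options, scoring such items is harder but far more discriminative: a model cannot guess its way to credit.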
One of the most compelling findings from the research is the discovery of a ‘compositional law’ governing the relationship between uni-modal and omni-modal performance. The study suggests that an omni-modal model’s overall ability is not a simple sum of its individual modality strengths; rather, it follows a power-law relationship over the product of those strengths. For weaker models, this often manifests as a ‘short-board effect,’ where performance is capped by the weakest modality. Conversely, stronger models demonstrate an ‘emergent ability,’ where combining modalities produces synergistic gains, making the whole greater than the sum of its parts.
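The article describes this law only qualitatively. One schematic way to write it down (the notation here is ours, not the paper’s exact formulation) is a power law over the product of per-modality scores:

```latex
% Schematic form only: S_v and S_a are uni-modal scores (e.g., vision
% and audio), S_omni is the omni-modal score, and c, alpha are fitted
% constants. This is an illustration, not the paper's exact equation.
S_{\text{omni}} \;\approx\; c \,\bigl( S_{v} \cdot S_{a} \bigr)^{\alpha}
```

Under a multiplicative form like this, a near-zero factor drags the whole product down, matching the short-board effect in weak models, while uniformly high factors compound, matching the synergistic gains observed in strong ones.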
The benchmark categorizes a model’s capabilities into two primary dimensions: Perception and Reasoning. The Perception layer assesses various recognition types, including object, attribute, scenario, and spatial perception, along with cross-modal conversion and semantic understanding. It also evaluates cross-modal alignment – how well models synchronize information across different senses. The Reasoning layer goes beyond conventional categories to include spatial, temporal, and complex reasoning, which involves multi-conditional and multi-step problem-solving.
Experimental results using MMAO-Bench provided clear insights into the current state of omni models. Proprietary models, particularly the Gemini series, consistently outperformed open-source counterparts across all benchmarks, with Gemini-2.5-Pro setting a new state of the art. The multi-step open-ended questions proved particularly challenging for all models, highlighting that complex, multi-step cross-modal reasoning remains a significant hurdle, especially for open-source AI. While top models like Gemini-2.5-Pro showed perception capabilities comparable to human experts, a notable gap in reasoning ability still exists.
Ablation studies, which involved selectively removing visual or audio information, further underscored the importance of multimodal input. Providing visual or audio information, or even just captions derived from them, significantly boosted model performance, especially for the more capable models. This demonstrates how different modalities can enhance each other, leading to a richer and more accurate understanding.
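As an illustration of what such an ablation might look like in practice, here is a minimal sketch of an evaluation loop that toggles which modalities the model receives. The API (`model.answer`, the sample attributes, `scorer`) is a hypothetical placeholder, not part of any released MMAO-Bench tooling:

```python
# Minimal sketch of a modality-ablation loop (hypothetical API, not the
# official MMAO-Bench harness). Each condition feeds the model a
# different subset of the inputs so their contributions can be compared.

CONDITIONS = {
    "full":          {"use_visual": True,  "use_audio": True,  "use_captions": False},
    "no_visual":     {"use_visual": False, "use_audio": True,  "use_captions": False},
    "no_audio":      {"use_visual": True,  "use_audio": False, "use_captions": False},
    "captions_only": {"use_visual": False, "use_audio": False, "use_captions": True},
}

def run_ablation(model, samples, scorer):
    """Score one model under each ablation condition."""
    results = {}
    for name, cfg in CONDITIONS.items():
        correct = 0
        for sample in samples:
            visual = sample.visual if cfg["use_visual"] else None
            audio = sample.audio if cfg["use_audio"] else None
            # Captions stand in for the raw modality when it is withheld.
            captions = sample.captions if cfg["use_captions"] else None
            prediction = model.answer(sample.question, visual=visual,
                                      audio=audio, captions=captions)
            correct += scorer(prediction, sample.reference)  # 1 if correct, else 0
        results[name] = correct / len(samples)
    return results
```

Comparing `results["full"]` against each ablated condition then quantifies how much a given modality, or a caption standing in for it, contributes to the final score.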
In conclusion, MMAO-Bench offers a robust and highly discriminative tool for evaluating the next generation of AI. By providing a unified framework and challenging tasks, it not only reveals the current strengths and weaknesses of omni models but also paves the way for their future development, pushing them towards a more integrated and intelligent understanding of our multimodal world.


