TLDR: MULTI-TAP is a new plug-and-play predictor for image-text alignment evaluation. It uses a two-stage training process on large vision-language models to provide both overall and fine-grained scores, aligning strongly with human judgments. It significantly improves inference efficiency and long-sequence processing compared to existing methods. The research also introduces EYE4ALL, a new dataset with blind and low-vision user preferences, to benchmark assistive AI systems.
In the rapidly evolving landscape of vision-language models, accurately and efficiently evaluating how well an image aligns with its text description has become a critical challenge. Traditional evaluation methods often fall short, especially when dealing with complex, instruction-rich texts that mirror real-world scenarios. Existing evaluation predictors typically lack one or more crucial properties: strong alignment with human judgments, the ability to process long text sequences, inference efficiency, or applicability to multi-objective scoring.
Addressing these limitations, researchers from KAIST AI and Theia Insights have introduced MULTI-TAP (Multi-Objective Task-Aware Predictor), a plug-and-play architecture for building a robust predictor capable of both single- and multi-objective scoring of image-text alignment. MULTI-TAP produces a single overall score using a reward head built on top of a large vision-language model (LVLM), while also generating fine-grained scores for multiple human-interpretable objectives.
How MULTI-TAP Works
The core of MULTI-TAP’s strength lies in its two-stage training paradigm. In Stage 1, it trains a single-objective predictor to generate a unified score reflecting the overall semantic alignment between images and texts; this stage also shapes rich multimodal embeddings. In Stage 2, these frozen embeddings are used to construct a multi-objective, task-aware predictor by adding a lightweight ridge regression layer that produces scores across human-interpretable dimensions such as accuracy, sufficiency, and safety, without the computational expense and instability of directly training an LVLM for multi-objective tasks.
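To make the two-stage idea concrete, here is a minimal sketch in Python. The class and function names are illustrative rather than the authors' released code: Stage 1 attaches a scalar reward head to pooled LVLM embeddings, and Stage 2 fits a closed-form ridge regression on those frozen embeddings to produce per-objective scores.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

# Stage 1: a scalar reward head on top of an LVLM's pooled multimodal embedding.
# The LVLM backbone itself is not shown; assume it maps an (image, text) pair to
# a single pooled vector of size `hidden_size`. Names here are illustrative only.
class SingleObjectiveRewardHead(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)  # pooled embedding -> overall score

    def forward(self, pooled_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(pooled_embedding).squeeze(-1)

# Stage 2: reuse the frozen Stage-1 embeddings and fit a lightweight ridge
# regression over multiple human-interpretable objectives (e.g. accuracy,
# sufficiency, safety). This is a closed-form fit, with no LVLM finetuning.
def fit_multi_objective_head(frozen_embeddings, objective_scores, alpha: float = 1.0):
    """frozen_embeddings: (N, hidden_size) array of Stage-1 embeddings.
    objective_scores:  (N, num_objectives) human-annotated fine-grained scores."""
    ridge = Ridge(alpha=alpha)
    ridge.fit(frozen_embeddings, objective_scores)
    return ridge  # ridge.predict(new_embeddings) -> per-objective scores
```

At inference time, one forward pass through the LVLM yields the embedding; the reward head returns the overall score and the ridge layer returns the fine-grained scores.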
Improved Performance and Efficiency
MULTI-TAP demonstrates significant improvements over existing metrics. For instance, it achieves a substantially higher Kendall’s τc correlation than IXCREW-S on FlickrExp and matches the GPT-4o-based predictor G-VEval despite being much smaller (7–8B parameters). It is also robust across different LVLM backbones, including Qwen2-VL, InternLM, and LLaMA-3.2, consistently outperforming other metrics on both pairwise and pointwise ranking benchmarks.
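The correlation metric itself is straightforward to reproduce. The snippet below is a generic example of computing Kendall's τc between predictor scores and human ratings using SciPy; the numbers are made up for illustration and are not the paper's data.

```python
from scipy.stats import kendalltau

# Hypothetical example: predictor scores and human ratings for the same five items.
predictor_scores = [0.91, 0.42, 0.77, 0.15, 0.63]
human_ratings = [5, 2, 4, 1, 4]

# variant="c" selects Kendall's tau-c, which is suited to scales with different
# numbers of possible values (continuous scores vs. discrete human ratings).
tau_c, p_value = kendalltau(predictor_scores, human_ratings, variant="c")
print(f"Kendall's tau-c: {tau_c:.3f} (p = {p_value:.3f})")
```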
In terms of efficiency, MULTI-TAP significantly reduces inference time compared to generative reward models, which can take days for evaluation. For example, MULTI-TAP Qwen-2B-S can complete training and inference in about 4 hours, while generative models might take 51 days for similar tasks. This efficiency, combined with its strong correlation with human judgments and ability to process long text sequences (up to 131K tokens), makes MULTI-TAP a highly practical solution.
Introducing EYE4ALL: A New Benchmark for Assistive AI
A notable contribution of this research is the introduction of a new dataset called EYE4ALL. This text-image-to-text (TI2T) dataset is built on the judgments of 25 human annotators, crucially including blind and low-vision (BLV) individuals. Unlike previous datasets that focused on image quality, EYE4ALL contains human judgments on the quality of LVLM-generated text responses and their alignment with user text requests and scene images. It offers two complementary modes: EYE4ALLPref, with human preferences over LVLM text responses, and EYE4ALLMulti, with human-annotated fine-grained scores across seven dimensions: Direction Accuracy, Depth Accuracy, Safety, Sufficiency, Conciseness, Hallucination, and Overall Quality. This dataset is particularly valuable for developing more accessible AI systems because it captures the underlying preferences of users, including BLV individuals.
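To illustrate what a fine-grained EYE4ALLMulti annotation might look like, here is a hypothetical record structure. The field names follow the seven dimensions listed above, but the dataset's actual schema and score scales may differ.

```python
from dataclasses import dataclass

# Illustrative only: a possible record shape for an EYE4ALLMulti-style annotation.
@dataclass
class Eye4AllMultiRecord:
    image_path: str          # scene image shown to the LVLM
    request: str             # the user's text request
    response: str            # the LVLM-generated answer being judged
    direction_accuracy: float
    depth_accuracy: float
    safety: float
    sufficiency: float
    conciseness: float
    hallucination: float
    overall_quality: float

# Hypothetical example of a single annotated item.
record = Eye4AllMultiRecord(
    image_path="crosswalk.jpg",
    request="Is it safe to cross the street now?",
    response="The pedestrian light is green and no cars are approaching.",
    direction_accuracy=4.0, depth_accuracy=3.5, safety=5.0,
    sufficiency=4.0, conciseness=5.0, hallucination=5.0, overall_quality=4.5,
)
```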
The research paper, available at arXiv:2510.00766, highlights that MULTI-TAP not only provides a scalable framework for single and multi-objective reward modeling but also introduces a crucial benchmark for practical evaluation and building robust assistive AI systems. This work is poised to guide future research in developing human-aligned predictors for multimodal evaluation, especially for applications catering to accessibility needs.


