TLDR: A new AI model, the Motion-Augmented Caption Model (M-ACM), significantly improves the accuracy and detail of video captions for human motion. It uses a dual-pathway system, combining standard visual understanding with specialized motion representations derived from human mesh recovery. A key innovation, Motion Synergetic Decoding (MSD), reduces ‘hallucinations’ by integrating information from both pathways. The research also introduces the Human Motion Insight (HMI) Dataset and HMI-Bench, new resources for training and evaluating models on fine-grained human movement. M-ACM outperforms existing methods, particularly in capturing subtle movements, micro-expressions, and motivations.
Researchers have introduced a groundbreaking approach to video captioning, specifically targeting the intricate challenge of describing human actions with fine-grained detail. Current video captioning models often fall short, producing vague descriptions or even ‘hallucinations’ – semantically inconsistent or incorrect details about human movements. This new work aims to overcome these limitations by introducing a novel generative framework.
The core of this innovation is the **Motion-Augmented Caption Model (M-ACM)**. This model significantly enhances caption quality by integrating ‘motion-aware decoding.’ At its heart, M-ACM utilizes motion representations derived from human mesh recovery. This technique explicitly highlights the dynamics of the human body, which in turn helps to reduce factual errors and improve both the semantic accuracy and spatial alignment of the generated captions.
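To make the idea of mesh-derived motion representations concrete, here is a minimal sketch of one plausible way to turn the output of a human mesh recovery model into motion features. The paper does not publish this exact recipe; the choice of per-joint velocities and the 24-joint SMPL-style layout are assumptions for illustration only.

```python
import numpy as np

def motion_features_from_joints(joints):
    """Turn per-frame 3D joint positions (e.g. from an SMPL-based human
    mesh recovery model) into simple motion features: per-joint
    velocities between consecutive frames.

    joints: (n_frames, n_joints, 3) array of 3D joint positions
    returns: (n_frames - 1, n_joints * 3) flattened velocity features
    """
    velocities = np.diff(joints, axis=0)           # frame-to-frame displacement
    return velocities.reshape(len(joints) - 1, -1)

# Example: 16 frames of 24 SMPL-style body joints
feats = motion_features_from_joints(np.random.randn(16, 24, 3))
print(feats.shape)  # (15, 72)
```

Features like these make body dynamics explicit to the decoder, rather than leaving it to infer motion from raw pixels alone.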
How M-ACM Works
The M-ACM framework operates with a dual-pathway system. One pathway processes the standard visual information from a video, while the other is specialized for motion. The motion pathway employs ViTPose-based frame sampling and human mesh recovery to generate precise representations of human movement. Both visual and motion-based representations are then projected into a common embedding space and processed by a language model decoder to generate the final descriptive text.
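The following PyTorch sketch illustrates the dual-pathway layout described above. It is not the authors' implementation: the module names, feature dimensions, and the simple token concatenation are assumptions used to show how visual and motion features can be projected into a shared embedding space before reaching the language-model decoder.

```python
import torch
import torch.nn as nn

class DualPathwayCaptioner(nn.Module):
    """Illustrative two-pathway encoder: visual frame features and
    mesh-derived motion features are projected into a shared embedding
    space and concatenated as input tokens for a language-model decoder."""

    def __init__(self, visual_dim=1024, motion_dim=256, embed_dim=4096):
        super().__init__()
        # Separate projections map each modality into the decoder's space.
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.motion_proj = nn.Linear(motion_dim, embed_dim)

    def forward(self, visual_feats, motion_feats):
        # visual_feats: (batch, n_frames, visual_dim) from the video encoder
        # motion_feats: (batch, n_frames, motion_dim) from human mesh recovery
        v = self.visual_proj(visual_feats)
        m = self.motion_proj(motion_feats)
        # Concatenate along the token axis so the decoder attends to both.
        return torch.cat([v, m], dim=1)

# Example: batch of 2 videos, 8 sampled frames each
tokens = DualPathwayCaptioner()(torch.randn(2, 8, 1024), torch.randn(2, 8, 256))
print(tokens.shape)  # torch.Size([2, 16, 4096])
```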
A key innovation within M-ACM is **Motion Synergetic Decoding (MSD)**. This mechanism addresses the hallucination problem by comparing logit distributions from both the standard visual pathway and the motion-specialized pathway. By leveraging complementary information from these two modalities, MSD ensures that the model’s predictions are more robust and accurate, leading to captions that correctly identify body parts and actions, even in complex scenarios.
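One plausible reading of MSD is a decoding step that fuses the per-token logit distributions from the two pathways so that tokens both modalities support are preferred. The weighting scheme below is a hypothetical sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def motion_synergetic_decode_step(visual_logits, motion_logits, alpha=0.5):
    """Hypothetical fusion of next-token logits from the visual and
    motion pathways. Boosting tokens that both pathways agree on is one
    way to suppress motion hallucinations.

    visual_logits, motion_logits: (batch, vocab_size)
    """
    # Combine the two distributions in log space; alpha controls how
    # strongly the motion pathway steers the prediction.
    log_p_visual = F.log_softmax(visual_logits, dim=-1)
    log_p_motion = F.log_softmax(motion_logits, dim=-1)
    fused = (1 - alpha) * log_p_visual + alpha * log_p_motion
    return fused.argmax(dim=-1)  # greedy next-token choice

# Example with a toy vocabulary of size 10
next_token = motion_synergetic_decode_step(torch.randn(1, 10), torch.randn(1, 10))
```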
New Resources for Research
To further support research in this critical area, the team has also unveiled the **Human Motion Insight (HMI) Dataset**. This comprehensive resource comprises 115,000 video-description pairs, along with 1,031,000 question-answer pairs, all specifically focused on human movement. The HMI dataset was meticulously created through rigorous filtering, motion quality assessment, and dynamic frame sampling to ensure high-quality data for understanding human motion.
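For a sense of what such data might look like, here are two illustrative record layouts, one for a video-description pair and one for a question-answer pair. The field names and values are hypothetical and are not taken from the released dataset.

```python
# Hypothetical HMI-style records; field names are illustrative only.
caption_record = {
    "video_id": "clip_000001",
    "description": "A dancer lifts her left arm overhead while turning slowly.",
    "sampled_frames": [0, 12, 24, 36],   # dynamically chosen keyframes
    "motion_quality_score": 0.87,        # used to filter low-motion clips
}

qa_record = {
    "video_id": "clip_000001",
    "question": "Which arm does the dancer raise?",
    "answer": "The left arm.",
}
```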
Accompanying the dataset is **HMI-Bench**, a dedicated benchmark designed for evaluating motion-focused video captioning. This benchmark allows for the assessment of models across multiple dimensions, including detailed movement descriptions, spatial relationships, temporal dynamics, and semantic accuracy.
Impressive Results
Experimental results demonstrate that M-ACM significantly surpasses previous methods in accurately describing complex human motions and subtle temporal variations. For instance, the M-ACM Qwen2 7B model showed substantial improvements in metrics like BLEU-4 and CIDEr compared to existing baselines. Beyond standard captioning metrics, M-ACM also excelled in human motion understanding dimensions, particularly in areas requiring fine-grained analysis such as ‘Motivation and Reason’ and ‘Micro-expressions and Emotions,’ where it showed improvements of 40% and 260% respectively over the best baselines.
Ablation studies confirmed the effectiveness of each component: the HMI dataset, the Motion Synergetic Decoding method, and ViTPose-based keyframe sampling all contributed to the model’s superior performance. The M-ACM Qwen2 7B model ultimately outperformed a strong baseline (LLAVAOV-Qwen2 7B) by 48% in video captioning and achieved a 109% improvement in motion detail scores for QA tasks.
This research sets a new standard for motion-centric video captioning, offering a powerful tool for applications requiring precise human motion understanding. For more in-depth information, you can read the full research paper here.


