TLDR: CausalStep is a novel benchmark designed to evaluate explicit, step-by-step causal reasoning in videos, a capability where current AI models, including advanced LLMs, show significant limitations compared to humans. It features causally segmented videos, a strict sequential question-answer protocol, and diagnostic metrics to prevent shortcuts and precisely measure reasoning depth and robustness. Experiments reveal a substantial performance gap between AI models and human-level understanding, particularly in maintaining long, uninterrupted causal reasoning chains.
Recent advancements in large language models (LLMs) have significantly improved their ability to reason with text and images. However, extending these capabilities to complex video scenarios remains a considerable challenge. Existing video benchmarks often fall short: they primarily assess shallow understanding and let models exploit global context, so they never truly test causal, stepwise reasoning.
Introducing CausalStep: A New Benchmark for Video Reasoning
To address these limitations, researchers have introduced CausalStep, a novel benchmark specifically designed to evaluate explicit stepwise causal reasoning in videos. This benchmark aims to rigorously test how well AI models can understand and reason about the sequence of events and their causal relationships within a video, much like humans do.
CausalStep segments videos into distinct, causally linked units. It enforces a strict question-answer (QA) protocol where models must answer questions sequentially, based only on the current and previous video segments, without any access to future information. This design prevents models from using shortcuts and ensures that they truly perform step-by-step, causally dependent reasoning. Each question also includes carefully constructed incorrect options, known as distractors, which are based on a taxonomy of common error types, making the evaluation more diagnostic.
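The sequential protocol can be sketched in a few lines. This is a minimal, hypothetical harness (the `Segment` fields, `model` signature, and names are illustrative assumptions, not the benchmark's actual API): the key property is that at step *i* the model sees only segments up to and including *i*, never future ones.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    clip: str           # identifier for the video segment (hypothetical field)
    question: str       # multiple-choice question for this segment
    options: list[str]  # answer options, including taxonomy-based distractors
    answer: int         # index of the correct option

def run_protocol(segments: list[Segment], model) -> list[bool]:
    """Ask questions strictly in order; the model only ever sees the
    current segment plus everything before it, preventing shortcuts
    that rely on global or future context."""
    results = []
    for i, seg in enumerate(segments):
        visible_context = segments[: i + 1]  # no access to future segments
        pred = model(visible_context, seg.question, seg.options)
        results.append(pred == seg.answer)
    return results
```

A `model` here is any callable that maps the visible context and a question to a chosen option index, so the same harness works for proprietary APIs, open-source models, or human annotators.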
The benchmark features 100 diverse videos across six categories, including cartoons, movies, sports, and documentaries. In total, it comprises 1,852 multiple-choice QA pairs. To provide a comprehensive assessment, CausalStep introduces seven diagnostic metrics, allowing for a precise diagnosis of a model’s causal reasoning capabilities.
How CausalStep Works
At its core, CausalStep breaks down a video into a series of causal segments. For each segment, a descriptive understanding question is posed to assess basic comprehension. For all segments after the first, an explicit stepwise causal reasoning question is added, requiring the model to understand the causal link between the current segment and the one immediately preceding it.
The evaluation proceeds as a reasoning chain. If a model answers a question correctly, it moves to the next step. If an answer is incorrect, the reasoning chain is interrupted, and the model must restart from a specific point, ensuring that progress is only made through valid inferences. This scoring system rewards models for maintaining longer, uninterrupted sequences of correct reasoning, providing a fine-grained measure of their ability to perform explicit stepwise causal reasoning.
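One simple way to capture "rewarding longer, uninterrupted sequences" is to score the longest streak of consecutive correct answers, resetting the count whenever an answer is wrong. This is a sketch of that idea only; the paper's seven diagnostic metrics are more detailed, and the exact restart rule may differ.

```python
def longest_chain(correct: list[bool]) -> int:
    """Length of the longest uninterrupted run of correct answers.
    An incorrect answer breaks the chain, and counting restarts
    from the next correct answer."""
    best = current = 0
    for ok in correct:
        current = current + 1 if ok else 0  # wrong answer resets the streak
        best = max(best, current)
    return best
```

Under this scoring, a model that alternates right and wrong answers scores far lower than one with the same overall accuracy whose correct answers are consecutive, which is exactly what distinguishes genuine stepwise reasoning from isolated lucky guesses.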
The videos for CausalStep were curated from the MGIT benchmark, focusing on those that support explicit stepwise causal reasoning and discourage shortcuts. The question and answer pairs, along with the distractors, were generated using a hybrid approach combining the efficiency of large language models like GPT-4o with meticulous human review to ensure high quality and accuracy.
Key Findings: A Significant Gap Between AI and Human Reasoning
Experiments were conducted on CausalStep with a wide range of state-of-the-art proprietary and open-source multimodal models, as well as human participants. The results reveal a substantial gap between current AI models and human-level stepwise reasoning.
Proprietary models generally outperformed open-source models across all metrics, with the o4-mini model showing the best performance among all evaluated AI systems. However, even the top-performing AI models significantly lagged behind human participants, especially in their ability to maintain long, uninterrupted reasoning chains and perform accurate causal reasoning when presented with isolated segment pairs. This indicates that current models struggle with integrating context over extended video sequences and with robust causal inference.
This disparity highlights that while AI models are becoming adept at basic visual perception, they still face considerable challenges in performing the deep, multi-step causal reasoning that humans naturally employ when interpreting complex video narratives. The CausalStep benchmark, detailed further in the research paper available at arXiv.org, serves as a rigorous tool for diagnosing these limitations and will drive future research towards more robust and interpretable video reasoning systems.