TLDR: This research paper introduces a logic-based framework for generating contrastive explanations, answering “Why P but not Q?”. It defines several problems, including explaining differences between two entities, global contrasts between properties, and counterfactual explanations for single instances. The framework is shown to capture cardinality-minimal explanations and its computational complexity is analyzed. A prototypical implementation using Answer Set Programming demonstrates its practical application in explaining AI decisions on real-world datasets.
In the rapidly evolving world of Artificial Intelligence, understanding why a system makes a particular decision is as crucial as the decision itself. This is where the concept of “contrastive explanations” comes into play, addressing questions like “Why did this happen, but not that?” A recent research paper titled “Why this and not that? A Logic-based Framework for Contrastive Explanations” by Tobias Geibinger, Reijo Jaakkola, Antti Kuusisto, Xinghan Liu, and Miikka Vilander introduces a comprehensive logic-based framework to formalize and tackle these very questions.
The core idea behind contrastive explanations is to provide insights by comparing an observed outcome with a different, unobserved one. For instance, if a loan application was approved for one person but rejected for another with a seemingly similar background, a contrastive explanation would highlight the specific differences that led to the disparate outcomes. This new framework delves into several canonical problems, each designed to shed light on different facets of “why P but not Q” scenarios.
Understanding the Problems
The authors define several distinct problems within their framework:
The Contrastive Explanation Problem focuses on explaining why two seemingly similar entities possess different properties. Imagine two animals; one is classified as a cat, the other as a dog. This problem seeks to identify the minimal differences that explain their distinct classifications, while also highlighting their shared characteristics. For example, “cannot fly” might be a shared characteristic (common context) for both cats and dogs, while “meows” versus “barks” would be the differentiating factors.
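To make this concrete, here is a minimal Python sketch of contrasting two instances, assuming each entity is encoded as a dictionary of boolean attributes; the attribute names and the encoding are illustrative choices, not the paper's formalism:

```python
# Minimal sketch: split two instances' attributes into shared context
# and differing literals. The encoding is an illustrative assumption.

def contrast(a: dict, b: dict):
    """Return the shared attributes and the attributes where a and b differ."""
    common = {k: v for k, v in a.items() if b.get(k) == v}
    differences = {k: (a[k], b[k]) for k in a if k in b and a[k] != b[k]}
    return common, differences

cat = {"can_fly": False, "meows": True, "barks": False}
dog = {"can_fly": False, "meows": False, "barks": True}

common, diff = contrast(cat, dog)
print(common)  # {'can_fly': False} -- the common context
print(diff)    # {'meows': (True, False), 'barks': (False, True)}
```

The shared attributes play the role of the common context, while the differing ones are the raw material for a contrastive explanation.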
A special case of this is the Global Contrastive Explanation Problem, which aims to find the fundamental differences between two general properties or concepts, rather than specific instances. It seeks to provide a comprehensive contrast between all possible scenarios where one property holds versus where the other holds.
Related to this is the Minimal Separator Problem, which seeks to identify a single, most concise property that distinguishes one concept from another. It’s about finding the “smoking gun” difference.
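One brute-force reading of the minimal separator: enumerate conjunctions of literals in order of size and return the first one that holds on every instance of the first concept and on no instance of the second. The tuple encoding and the `minimal_separator` helper below are illustrative assumptions, not the paper's algorithm:

```python
from itertools import combinations

# Brute-force sketch of a minimal separator: the smallest conjunction of
# literals true on every P-instance and false on every Q-instance.
# Instances are tuples of booleans; purely illustrative.

def satisfies(instance, literals):
    # literals is a collection of (variable_index, expected_value) pairs
    return all(instance[i] == v for i, v in literals)

def minimal_separator(p_instances, q_instances, n_vars):
    all_literals = [(i, v) for i in range(n_vars) for v in (True, False)]
    # Any satisfiable conjunction uses at most one literal per variable,
    # so sizes beyond n_vars never help.
    for size in range(1, n_vars + 1):
        for lits in combinations(all_literals, size):
            if all(satisfies(x, lits) for x in p_instances) and \
               not any(satisfies(y, lits) for y in q_instances):
                return lits
    return None

# Toy data: variable 1 alone separates the two classes.
P = [(True, True, False), (False, True, True)]
Q = [(True, False, False), (False, False, True)]
print(minimal_separator(P, Q, 3))  # ((1, True),)
```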
Beyond comparing two instances or general properties, the framework also addresses Counterfactual Explanations, which answer “Why does a specific instance satisfy property P but not property Q?” This is particularly relevant when an expected outcome didn’t materialize. Two variations are introduced:
- The Counterfactual Contrastive Explanation Problem focuses on finding minimal reasons for the observed outcome and a similar minimal reason for the counterfactual (what could have been).
- The Counterfactual Difference Problem, on the other hand, aims to find the smallest possible change to the current situation that would lead to the alternative outcome. For example, if a bird was classified as a pelican, this problem would identify the minimal changes to its attributes that would have resulted in it being classified as a seagull. The paper illustrates this with an example: a bird with a “beak pouch” is a pelican; if it had “no beak pouch” and was “small,” it would be a seagull (a brute-force sketch of this search appears after this list). The difference between the two counterfactual problems lies in whether they prioritize minimal reasons for the classification or minimal changes to the instance itself.
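Here is a hedged sketch of the counterfactual difference search, reusing the paper's pelican/seagull example but with a made-up toy classifier: enumerate sets of attribute flips in order of size and return the first set that changes the predicted class.

```python
from itertools import combinations

# Sketch of the counterfactual-difference idea: find the smallest set of
# attribute flips that moves an instance to the target class. The
# classifier below is a toy stand-in, not the paper's ASP encoding.

def classify(bird: dict) -> str:
    if bird["beak_pouch"]:
        return "pelican"
    if bird["small"]:
        return "seagull"
    return "other"

def counterfactual_difference(instance: dict, target: str):
    attrs = list(instance)
    for size in range(1, len(attrs) + 1):
        for flips in combinations(attrs, size):
            changed = dict(instance)
            for a in flips:
                changed[a] = not changed[a]
            if classify(changed) == target:
                return flips
    return None

bird = {"beak_pouch": True, "small": False}
print(classify(bird))                              # pelican
print(counterfactual_difference(bird, "seagull"))  # ('beak_pouch', 'small')
```

Note that no single flip suffices here: removing the beak pouch alone yields “other”, so the minimal counterfactual difference is the pair of changes, just as in the paper's example.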
Key Contributions and Practical Applications
The authors rigorously investigate these problems in the setting of propositional logic, demonstrating how the definitions effectively identify contrasts and commonalities. They show that the framework captures a “cardinality-minimal” version of existing contrastive explanation methods, meaning it finds explanations that involve the fewest possible changes or elements.
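As a rough illustration of what cardinality-minimal means here, the sketch below brute-forces the smallest set of an instance's feature values that already forces a classifier's output, checking every completion of the remaining features. Reading “explanation” as such a smallest sufficient reason is an assumption made for illustration, not the paper's exact definition:

```python
from itertools import combinations, product

# Sketch: the smallest subset of an instance's feature values that fixes
# the classifier's verdict regardless of the remaining features.
# Brute force over subset sizes guarantees cardinality-minimality.

def smallest_sufficient_reason(f, instance):
    n = len(instance)
    label = f(instance)
    for size in range(n + 1):
        for kept in combinations(range(n), size):
            free = [i for i in range(n) if i not in kept]
            forced = all(
                f(tuple(instance[i] if i in kept else bits[free.index(i)]
                        for i in range(n))) == label
                for bits in product([False, True], repeat=len(free))
            )
            if forced:
                return {i: instance[i] for i in kept}

# Toy classifier: accepts iff features 0 and 2 both hold.
f = lambda x: x[0] and x[2]
print(smallest_sufficient_reason(f, (True, False, True)))  # {0: True, 2: True}
```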
The authors also provide an extensive analysis of the computational complexity of these problems, showing that most of them are computationally challenging (Σᵖ₂-complete, i.e., at the second level of the polynomial hierarchy). Despite this, they have developed a prototype implementation using Answer Set Programming (ASP), a declarative formalism well suited to such search problems. The implementation was used in case studies on real-world classification datasets such as Iris, Wine, and Glass, demonstrating the practical utility of the framework in generating meaningful explanations for decisions made by AI models.
This work represents a significant step forward in explainable AI, offering a robust and versatile logic-based approach to understanding “why this and not that” in automated decision-making. For more in-depth details, see the full research paper, “Why this and not that? A Logic-based Framework for Contrastive Explanations.”