TL;DR: Causal SHAP is a new method that improves machine learning explanations by integrating causal relationships into the widely used SHAP framework. It uses causal discovery algorithms (PC and IDA) to distinguish between features that truly cause an outcome and those that are merely correlated, leading to more accurate and trustworthy explanations, especially in critical applications like healthcare.
Understanding why a machine learning model makes a particular prediction is becoming increasingly important, especially in critical fields like healthcare and autonomous driving. While tools like SHapley Additive exPlanations (SHAP) are widely used for this purpose, they often struggle to differentiate between features that truly cause an outcome and those that are merely correlated. This can lead to misleading explanations and potentially flawed decisions.
A new research paper titled “Causal SHAP: Feature Attribution with Dependency Awareness through Causal Discovery” introduces a novel framework designed to overcome this limitation. Developed by Woon Yee Ng, Li Rong Wang, Siyuan Liu, and Xiuyi Fan from Nanyang Technological University, Singapore, Causal SHAP integrates causal relationships directly into the feature attribution process, aiming to provide more accurate and trustworthy explanations for machine learning predictions.
The Challenge with Traditional SHAP
Traditional SHAP, inspired by game theory, quantifies each feature’s contribution to a model’s prediction. However, a fundamental weakness is its assumption of feature independence. In real-world scenarios, features are rarely independent; they often influence each other in complex causal webs. For instance, in predicting lung cancer risk, SHAP might assign similar importance to “smoking,” “stress,” and “drinking coffee” if they are all correlated with the outcome. However, domain knowledge reveals that smoking and stress might directly cause lung cancer, while drinking coffee might only be correlated due to its association with smoking or stress. Misattributing importance in such cases can have serious consequences, particularly in medical diagnoses.
Introducing Causal SHAP: A Causal-Aware Approach
Causal SHAP addresses this by incorporating causal discovery and strength quantification into the SHAP framework. It employs two well-established algorithms: the Peter-Clark (PC) algorithm for identifying causal relationships (which features cause which) and the Intervention Calculus when the DAG is Absent (IDA) algorithm for quantifying the strength of these causal effects.
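To make the IDA step concrete, here is a minimal sketch of how a total causal effect can be estimated once a causal graph is available. It assumes the graph (and hence each feature's parent set) has already been discovered by PC, and uses the core idea behind IDA for linear models: regress the target on a feature together with that feature's parents, and read the feature's coefficient as its effect estimate. The function name `ida_effect_estimate` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def ida_effect_estimate(data, x, y, parents_of_x):
    """IDA-style total causal effect of feature column x on target column y:
    regress y on x plus x's parents in the (already discovered) causal graph;
    the coefficient of x is the effect estimate (linear-Gaussian setting)."""
    cols = [x] + list(parents_of_x)
    X = np.column_stack([data[:, c] for c in cols] + [np.ones(len(data))])
    coef, *_ = np.linalg.lstsq(X, data[:, y], rcond=None)
    return coef[0]  # coefficient of x

# Toy graph: Z -> X, Z -> Y, and X -> Y with true effect 2.0
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * x + 1.5 * z + rng.normal(size=n)
data = np.column_stack([x, y, z])  # columns: 0 = x, 1 = y, 2 = z

naive = ida_effect_estimate(data, x=0, y=1, parents_of_x=[])      # confounded by Z
adjusted = ida_effect_estimate(data, x=0, y=1, parents_of_x=[2])  # adjusts for Z
```

Without adjusting for the confounder Z, the naive estimate overshoots the true effect of 2.0; conditioning on X's parent recovers it. In practice the PC step itself would be run with a causal discovery library (e.g. causal-learn in Python or pcalg in R) rather than assumed.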
The framework works in two main steps. First, it introduces a “Causal Value Function” that samples out-of-coalition features in a way that respects the discovered causal graph. This prevents the generation of “impossible” data points—combinations of feature values that wouldn’t naturally occur given the causal structure. Second, it integrates “Causal Strength” by assigning different weights to features based on their total causal effect on the prediction target, as determined by the IDA algorithm. This effectively discounts the attribution scores for features that are only correlated, rather than causally linked, to the outcome.
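The causal-strength weighting can be sketched in a few lines. The helper below downweights per-feature SHAP values by the magnitude of each feature's IDA-estimated total effect on the target; this is a hypothetical illustration of the idea of discounting merely correlated features, not the paper's exact formula.

```python
import numpy as np

def causal_weighted_attributions(phi, causal_strengths):
    """Discount SHAP attributions for features with weak causal effect.
    phi: per-feature SHAP values; causal_strengths: IDA total-effect
    estimates. Illustrative weighting, not the paper's exact scheme."""
    w = np.abs(np.asarray(causal_strengths, dtype=float))
    w = w / w.max()  # strongest causal feature keeps its full attribution
    return np.asarray(phi, dtype=float) * w

# Lung-cancer example: "smoking" and "stress" are causal,
# "coffee" is only correlated with the outcome.
phi = np.array([0.30, 0.25, 0.28])        # raw SHAP values
strengths = np.array([0.9, 0.7, 0.05])    # IDA effect estimates
weighted = causal_weighted_attributions(phi, strengths)
# coffee's attribution collapses toward zero despite its high raw SHAP value
```

The causal value function is the harder half to sketch faithfully: sampling out-of-coalition features must follow the graph's topological order (parents before children) so that imputed values stay consistent with the causal structure, which is what rules out "impossible" data points.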
Theoretical Soundness and Practical Validation
The researchers demonstrate that Causal SHAP maintains the desirable theoretical properties of SHAP, including local accuracy, missingness, and consistency. This means the explanations are still faithful to the model’s prediction, assign zero importance to truly missing or irrelevant features, and respond logically to changes in model behavior.
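Local accuracy is easy to verify numerically for classical SHAP: the attributions plus the base value must sum exactly to the model's prediction. The brute-force Shapley computation below (with absent features filled from a background mean, i.e. the independence assumption Causal SHAP relaxes) is a generic textbook sketch, not code from the paper.

```python
from itertools import combinations
from math import factorial

import numpy as np

def exact_shapley(f, x, background, n_features):
    """Brute-force Shapley values: average each feature's marginal
    contribution over all coalitions, imputing absent features with
    the background mean."""
    mu = background.mean(axis=0)

    def v(S):  # value of coalition S: present features from x, rest from mu
        z = mu.copy()
        for i in S:
            z[i] = x[i]
        return f(z)

    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(n_features):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi, v(())  # attributions and base value

f = lambda z: 2 * z[0] + 3 * z[1] + 1.0   # simple linear model
background = np.random.default_rng(1).normal(size=(100, 2))
x = np.array([1.0, 2.0])
phi, base = exact_shapley(f, x, background, 2)
# local accuracy: phi.sum() + base == f(x)
```

Causal SHAP's contribution is that these properties still hold after swapping in the causal value function and strength weights.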
To validate its effectiveness, Causal SHAP was tested on both synthetic datasets with known causal structures and real-world biomedical datasets, including Irritable Bowel Syndrome (IBS) and Colorectal Cancer. On synthetic data, Causal SHAP significantly outperformed other SHAP-derived methods by accurately assigning near-zero importance to merely correlated features (like “drinking coffee” in the lung cancer example) and correctly identifying direct and indirect causal factors. For real-world datasets, Causal SHAP achieved superior “insertion scores,” a metric used to evaluate how well feature attributions align with model performance, indicating more effective leveraging of causal relationships even when the true causal graph is unknown.
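The insertion metric itself is straightforward to sketch: starting from a baseline input, features are restored one at a time in descending order of attribution, and the model's outputs along the way are averaged. A ranking that puts genuinely influential features first recovers the prediction faster and scores higher. This is a common XAI evaluation recipe; the paper's exact variant may differ.

```python
import numpy as np

def insertion_score(f, x, attributions, baseline):
    """Insert features into `baseline` in descending order of |attribution|
    and average the model output along the insertion curve."""
    order = np.argsort(-np.abs(attributions))
    z = baseline.astype(float).copy()
    outputs = [f(z)]
    for i in order:
        z[i] = x[i]
        outputs.append(f(z))
    return np.mean(outputs), outputs

f = lambda z: 3 * z[0] + 1 * z[1] + 0.1 * z[2]
x, baseline = np.ones(3), np.zeros(3)
good, _ = insertion_score(f, x, np.array([3.0, 1.0, 0.1]), baseline)  # correct ranking
bad, _ = insertion_score(f, x, np.array([0.1, 1.0, 3.0]), baseline)   # reversed ranking
# the correct ranking yields the higher insertion score
```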
Impact and Future Directions
Causal SHAP represents a significant advancement in Explainable AI (XAI) by providing a practical framework for causal-aware model explanations. Its ability to distinguish between causation and correlation makes it particularly valuable in high-stakes domains where understanding true causal relationships is paramount for informed decision-making. The authors plan to extend this work to handle cases with hidden variables, explore more efficient algorithms for high-dimensional datasets, and address causal structure uncertainty in future research.


