TLDR: This research introduces a new algorithm and column generation techniques to efficiently compute tight probability bounds for partially identifiable queries in quasi-Markovian Structural Causal Models (SCMs). The algorithm simplifies the construction of the underlying multilinear and linear programs by exploiting the observed probabilities of endogenous variables, while column generation addresses the scalability challenges posed by the large number of values the exogenous variables can take. Experiments show significant performance improvements over direct linear programming methods, particularly for single interventions.
In artificial intelligence and data science, understanding cause-and-effect relationships is paramount. Structural Causal Models (SCMs) provide a powerful framework for representing these relationships, distinguishing endogenous variables, whose values are determined by deterministic mechanisms, from exogenous variables, which are governed by marginal probability distributions. A common challenge arises because observational data only determines the distribution of the endogenous variables, leaving the distributions of the exogenous variables unspecified. This leads to what are known as ‘partially identifiable queries’ – situations where a probability of interest cannot be computed exactly, only bounded within a range of possibilities.
A recent research paper, titled “Multilinear and Linear Programs for Partially Identifiable Queries in Quasi-Markovian Structural Causal Models,” by João P. Arroyo, João G. Rodrigues, Daniel Lawand, Denis D. Mauá, Junkyu Lee, Radu Marinescu, Alex Gray, Eduardo R. Laurentino, and Fabio G. Cozman, tackles this very problem. The authors delve into the computation of ‘tight probability bounds’ – the narrowest possible range for these uncertain probabilities – specifically within a class of SCMs called quasi-Markovian models.
Understanding Quasi-Markovian SCMs
Quasi-Markovian SCMs are a special class of causal model in which each endogenous variable is affected by at most one exogenous variable, which may be shared with other endogenous variables and thus act as a confounder. This structural restriction, while still expressive enough to cover many real-world scenarios, enables more efficient computational approaches. The paper builds on previous work showing how to cast partially identifiable queries in these models as multilinear (hence nonlinear) programs, which reduce to linear programs in important cases, notably when a single confounded component is intervened upon.
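To make the setting concrete, here is a minimal sketch of a quasi-Markovian SCM. The model is invented for illustration (it is not from the paper): two binary endogenous variables X and Y share a single four-valued exogenous confounder U, and the structural equations `f_X` and `f_Y` are hypothetical deterministic mechanisms. If P(U) were known, the observational distribution P(X, Y) would follow directly; partial identifiability arises precisely because it is not.

```python
# Hypothetical quasi-Markovian SCM: X and Y are endogenous and share
# one exogenous confounder U taking values in {0, 1, 2, 3}.

def f_X(u):
    # Structural equation for X: deterministic in U.
    return u % 2

def f_Y(x, u):
    # Structural equation for Y: deterministic in its parent X and in U.
    return (x + u // 2) % 2

def observational(p_u):
    """Given a candidate distribution P(U), derive the induced P(X, Y)."""
    p_xy = {}
    for u, p in enumerate(p_u):
        x = f_X(u)
        y = f_Y(x, u)
        p_xy[(x, y)] = p_xy.get((x, y), 0.0) + p
    return p_xy

# Any candidate P(U) induces one observable distribution; the inverse
# problem (recovering P(U) from P(X, Y)) is underdetermined.
print(observational([0.25, 0.25, 0.25, 0.25]))
```

Many different choices of `p_u` can induce the same observed P(X, Y), which is exactly why causal queries over this model are only partially identifiable.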
A New Algorithm for Simplified Program Construction
The core contribution of this research is a novel algorithm that simplifies the construction of these multilinear and linear programs. The algorithm exploits the input probabilities over endogenous variables and leverages Pearl’s do-calculus to build a more succinct objective function. Instead of an explosion of terms, the algorithm can generate expressions that are exponentially smaller, making the problem far more manageable.
For instance, in scenarios with a single intervention (like observing the effect of a specific treatment), the problem can be formulated as a linear program. The new algorithm streamlines the process of setting up this program, making it more efficient to find the desired probability bounds.
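The single-intervention case can be sketched with a small linear program. This is an illustrative toy, not the paper's algorithm: it bounds P(Y=1 | do(X=1)) for binary X and Y confounded by one exogenous U, where U is encoded by standard ‘response types’ (x, y0, y1) giving the natural value of X and Y's response to each setting of X. The observed distribution `p_obs` is made up for the example.

```python
# Tight bounds on P(Y=1 | do(X=1)) as a linear program over response types.
# Toy model: binary X, Y with a single exogenous confounder U.
import itertools
from scipy.optimize import linprog

# Hypothetical observed distribution P(X, Y).
p_obs = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# Response types u = (x, y0, y1): natural X, and Y's value under X=0 / X=1.
types = list(itertools.product([0, 1], repeat=3))

# Each observed P(X=x, Y=y) constrains a sum of the unknown weights q_u.
A_eq, b_eq = [], []
for (x, y), p in p_obs.items():
    A_eq.append([1.0 if (tx == x and (y0, y1)[x] == y) else 0.0
                 for (tx, y0, y1) in types])
    b_eq.append(p)

# Objective: P(Y=1 | do(X=1)) = sum of q_u over types with y1 = 1.
c = [1.0 if y1 == 1 else 0.0 for (_, _, y1) in types]

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog([-v for v in c], A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(lo, hi)  # lower ≈ 0.4, upper ≈ 0.9 for this distribution
```

Minimizing and maximizing the same linear objective over the feasible set yields the two ends of the tight interval for the causal query.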
Tackling Scale with Column Generation
One of the practical hurdles in solving these optimization problems is the potentially vast number of possible values for exogenous variables, which can lead to extremely large linear programs. To circumvent this computational explosion, the researchers introduce ‘column generation’ techniques. Column generation is an advanced optimization method that iteratively identifies and adds only the most relevant parts (columns) of the problem to the solver, rather than trying to process the entire, massive problem at once.
By applying column generation, the authors show that probability bounds can be computed through a sequence of auxiliary linear integer programs. This approach works with a representation in which the exogenous variables have only polynomially many values, making previously intractable problems solvable. The experimental results presented in the paper show that these column generation techniques significantly outperform direct linear programming methods, with speedups of over 20,000 times in some cases.
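The mechanics can be illustrated with a toy column-generation loop on the same binary example used above (a generic sketch, not the paper's method: here the pricing step simply scans all eight response types for the most negative reduced cost, whereas the paper prices columns with integer programs). The restricted master problem starts from a small feasible set of columns and grows it only as needed.

```python
# Toy column generation for the lower bound of P(Y=1 | do(X=1)).
import itertools
from scipy.optimize import linprog

p_obs = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
keys = list(p_obs)                        # fixed constraint order
b_eq = [p_obs[k] for k in keys]

def column(t):
    # Constraint column for response type t = (x, y0, y1).
    tx, y0, y1 = t
    return [1.0 if (tx == x and (y0, y1)[x] == y) else 0.0 for (x, y) in keys]

def cost(t):
    # Objective coefficient: does this type yield Y=1 under do(X=1)?
    return 1.0 if t[2] == 1 else 0.0

all_types = list(itertools.product([0, 1], repeat=3))
# Feasible starting pool: one column per observed (x, y) configuration.
pool = [(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)]

while True:
    # Restricted master problem over the current column pool.
    A = list(map(list, zip(*[column(t) for t in pool])))
    res = linprog([cost(t) for t in pool], A_eq=A, b_eq=b_eq, bounds=(0, 1))
    duals = res.eqlin.marginals
    # Pricing: find the unused column with the most negative reduced cost.
    best = min((t for t in all_types if t not in pool),
               key=lambda t: cost(t) - sum(d * a for d, a in zip(duals, column(t))),
               default=None)
    if best is None or cost(best) - sum(
            d * a for d, a in zip(duals, column(best))) > -1e-9:
        break                             # no improving column: optimal
    pool.append(best)

print(res.fun)  # lower bound, matching the full LP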
Future Directions
This research offers a significant step forward in causal inference, particularly for situations where complete information about all causal factors is unavailable. The methods presented provide a more efficient and scalable way to quantify uncertainty in causal effects. The authors suggest that future work could extend these results to multiple interventions, characterize the algorithm’s complexity more precisely, and explore combinations with other optimization techniques. For in-depth technical details, refer to the full research paper available at arXiv:2509.03548.


