TLDR: The research paper introduces FLOP (Fast Learning of Order and Parents), a novel score-based algorithm for causal discovery in linear models. FLOP significantly improves upon existing methods by combining fast parent selection, accelerated score computations using dynamic Cholesky updates, a principled initial order construction, and an Iterated Local Search (ILS) metaheuristic. These innovations enable FLOP to perform discrete search much more efficiently and accurately, achieving state-of-the-art results across various benchmarks with substantial run-time reductions and near-perfect recovery of causal structures, even for large systems. The paper advocates for revisiting discrete search as a highly effective approach to causal discovery.
Understanding cause-and-effect relationships from data is a fundamental challenge across many scientific fields. Researchers Marcel Wienöbst, Leonard Henckel, and Sebastian Weichwald have introduced a new algorithm called FLOP (Fast Learning of Order and Parents) that significantly advances how we discover these causal structures, particularly in linear models.
Traditionally, identifying the underlying directed acyclic graph (DAG) – essentially, a map showing which variables influence others and in what direction – has been tackled by score-based algorithms. These methods assign a score to different possible causal maps, aiming to find the one with the optimal score. However, exact solutions are computationally intensive and only feasible for systems with a small number of variables (around 30). For larger systems, local search methods are used, but these can often get stuck in ‘local optima’ – solutions that are good, but not the absolute best possible.
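For intuition, the BIC score such methods use decomposes over nodes in a linear Gaussian model: each variable contributes a fit term from regressing it on its candidate parents, minus a complexity penalty. A minimal Python sketch (illustrative only; the function names and exact penalty convention here are assumptions, not the paper's implementation):

```python
import numpy as np

def local_bic(X, j, parents, penalty=1.0):
    """BIC contribution of node j given a candidate parent set.

    Assumes a linear Gaussian model; X is an (n, p) data matrix.
    Illustrative sketch -- not the paper's implementation.
    """
    n = X.shape[0]
    y = X[:, j]
    if parents:
        A = X[:, parents]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
    else:
        resid = y - y.mean()  # intercept-only model
    rss = resid @ resid
    # log-likelihood term minus a BIC penalty per fitted parameter
    return -0.5 * n * np.log(rss / n) - penalty * 0.5 * np.log(n) * (len(parents) + 1)

def bic_score(X, dag_parents):
    """Total score of a DAG given as {node: [parents]}; higher is better."""
    return sum(local_bic(X, j, ps) for j, ps in dag_parents.items())
```

Because the score decomposes over nodes, a search procedure only needs to re-score the nodes whose parent sets change, which is what makes incremental updates worthwhile.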
Addressing the Challenges of Causal Discovery
The paper highlights that while continuous optimization methods have emerged as an alternative, they often introduce their own set of problems, such as optimization complexity and convergence issues, and haven’t consistently matched the performance of the best discrete search methods. The authors argue that the perceived difficulty of discrete search for causal discovery is often overstated, especially when the underlying system can be represented by a sparse DAG.
FLOP is designed to overcome these practical challenges by fully embracing discrete search. It builds upon the principles of the BOSS algorithm, which explores the space of possible causal orders by reinserting variables at different positions. FLOP introduces four key components that make this search much more aggressive and efficient:
- Faster Parent Selection: Instead of starting from scratch, FLOP's 'grow-shrink' procedure for selecting parents (variables that directly influence another) re-initializes from the parent sets learned for the previous order. This 'warm start' significantly reduces computation and memory costs because parent sets typically change very little with small adjustments to the order.
- Accelerated Score Computations: FLOP uses efficient iterative updates of Cholesky factorizations to calculate the Bayesian Information Criterion (BIC) score. This allows scores to be updated in O(k^2) operations rather than recomputed from scratch in O(k^3), which is especially beneficial for larger and denser graphs.
- Principled Order Initialization: Random initial orders can be problematic, especially for 'path graphs' where causal influences are weak between distant variables. FLOP addresses this by building an initial order in which strongly correlated nodes are placed adjacently, helping the algorithm find a more accurate causal map from the outset.
- Iterated Local Search (ILS): To escape local optima, FLOP employs ILS. This metaheuristic runs a local search to find a good solution, slightly perturbs that solution (e.g., by swapping elements in the order), and restarts the local search. Repeating this process lets FLOP explore the search space more thoroughly and find graphs with scores closer to the global optimum.
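The ILS scheme can be sketched generically over variable orders. The toy Python implementation below (function names and the swap-based perturbation are illustrative assumptions; FLOP's actual local search uses the specialized moves and incremental score updates described above) hill-climbs with insertion moves and restarts from randomly perturbed orders:

```python
import random

def insert_neighbors(order):
    """All orders obtained by removing one element and reinserting it elsewhere."""
    for i in range(len(order)):
        rest = order[:i] + order[i + 1:]
        for j in range(len(order)):
            if j != i:  # skip the identity move
                yield rest[:j] + (order[i],) + rest[j:]

def local_search(order, score):
    """Hill-climb over insertion moves until no move improves the score."""
    best, best_s = order, score(order)
    improved = True
    while improved:
        improved = False
        for cand in insert_neighbors(best):
            s = score(cand)
            if s > best_s:
                best, best_s, improved = cand, s, True
    return best, best_s

def iterated_local_search(order, score, n_restarts=10, n_swaps=2, seed=0):
    """ILS: perturb the incumbent with random swaps, re-run local search,
    and keep the result only if it improves on the best score so far."""
    rng = random.Random(seed)
    best, best_s = local_search(tuple(order), score)
    for _ in range(n_restarts):
        cand = list(best)
        for _ in range(n_swaps):
            i, j = rng.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
        cand, s = local_search(tuple(cand), score)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s
```

The key design point is that the perturbation is just strong enough to jump out of a local optimum's basin while keeping most of the hard-won structure of the incumbent order.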
Performance and Implications
The results are compelling. FLOP demonstrates substantial run-time reductions, being more than 100 times faster than BOSS for graphs with 100 nodes and scaling effectively to 500 nodes, where BOSS often hits time limits. This speed advantage can be directly translated into higher accuracy by allowing FLOP to perform more ILS iterations.
Across various benchmarks, including Erdős–Rényi graphs, scale-free graphs, and real-world networks like the Alarm network, FLOP achieves state-of-the-art accuracy. It consistently finds graphs with better BIC scores and lower Structural Hamming Distance (SHD) – a count of the edge additions, deletions, and reversals separating the learned graph from the true one – often achieving near-perfect recovery in standard settings.
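SHD is straightforward to compute once both graphs are represented as sets of directed edges. A small sketch under one common convention, where a reversed edge counts as a single error (conventions vary across papers, so treat this as illustrative):

```python
def shd(true_edges, learned_edges):
    """Structural Hamming Distance between two DAGs given as sets of
    directed edges (u, v). Missing, extra, and reversed edges each
    count as one error; a reversal is counted once, not twice.
    """
    true_edges, learned_edges = set(true_edges), set(learned_edges)
    diff = true_edges ^ learned_edges  # edges on which the graphs disagree
    d = 0
    seen = set()
    for u, v in diff:
        if (u, v) in seen:
            continue
        if (v, u) in diff:  # same edge, opposite orientation: a reversal
            seen.add((v, u))
        d += 1
    return d
```

For example, a learned graph that reverses one true edge and adds one spurious edge has SHD 2 under this convention.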
The authors emphasize that these findings call for a renewed appreciation of discrete search methods in causal discovery. They suggest that the focus should shift from theoretical asymptotic consistency guarantees (which can often be achieved by running other algorithms in parallel) to practical finite-sample performance. The ability to efficiently explore the search space, as FLOP does, highlights the direct link between computational speed and the accuracy of the discovered causal structures.
For those interested in the full technical details, the research paper can be found here.


