TLDR: DEEPGRAPHLOG is a new neurosymbolic AI framework that extends PROBLOG with Graph Neural Predicates, enabling multi-layer, bidirectional interaction between neural networks and symbolic reasoning. Unlike previous systems with a fixed processing flow, DEEPGRAPHLOG treats symbolic representations as graphs that Graph Neural Networks (GNNs) can process directly. This enhances expressivity, supports learning from incomplete knowledge, and improves performance on tasks spanning GNN expressivity benchmarks, structure learning, knowledge graph completion with distant supervision, and multi-step planning.
Artificial Intelligence is constantly evolving, seeking ways to combine the best of different computational paradigms. One such exciting area is Neurosymbolic AI (NeSy), which aims to merge the statistical power of neural networks with the structured, interpretable reasoning of symbolic logic. However, existing NeSy frameworks often face limitations, particularly when dealing with complex, irregular data structures like graphs.
A new research paper introduces DEEPGRAPHLOG, a novel framework designed to overcome these challenges. Authored by Adem Kikaj, Giuseppe Marra, Floris Geerts, Robin Manhaeve, and Luc De Raedt, DEEPGRAPHLOG extends the probabilistic logic programming language PROBLOG by integrating Graph Neural Predicates. This innovation allows for multi-layer neural-symbolic reasoning, meaning neural and symbolic components can be layered in any order, enabling a more flexible and expressive approach to AI.
The Challenge with Existing Neurosymbolic AI
Traditional NeSy systems, like DEEPPROBLOG, typically follow a fixed flow: subsymbolic data is processed by a neural network, and the resulting symbolic output is then passed to a reasoning module. This “subsymbolic → neural → symbolic” structure limits the interaction between components. Neural networks usually operate on raw data and struggle with incomplete symbolic information, while symbolic modules cannot leverage the rich internal representations learned by neural models.
Many real-world tasks, however, require iterative reasoning where symbolic representations are continuously refined by neural components. Imagine a scenario where reasoning starts with incomplete symbolic input, which a neural module then refines into a new symbolic form, leading to a “symbolic → neural → symbolic” flow. Current systems struggle with this dynamic interaction, especially when dealing with graph-structured data, which is common in many domains.
Introducing DEEPGRAPHLOG: Layered Reasoning with Graphs
DEEPGRAPHLOG addresses these limitations by treating symbolic representations as graphs, which can then be processed by Graph Neural Networks (GNNs). This is a crucial departure from DEEPPROBLOG, whose neural components operate only on raw subsymbolic inputs and cannot reason over symbolic representations. By integrating GNNs, DEEPGRAPHLOG allows neural components to reason directly over structured, and often incomplete, symbolic data.
The framework introduces “graph neural facts” into PROBLOG. Instead of assigning a fixed probability to a logical statement, DEEPGRAPHLOG computes this probability using a GNN. The input to this GNN is itself a “random graph” derived from other probabilistic facts in the system. This means that the GNN can learn dependencies between logical statements, even when explicit rules are not fully specified.
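Concretely, a graph neural fact replaces a fixed probability annotation with the output of a GNN applied to a graph assembled from other probabilistic facts. The toy Python sketch below is purely illustrative — the function name, the one-round mean-aggregation model, and the logistic readout are our own simplifications, not the DEEPGRAPHLOG API — but it captures the idea: the probability of a query atom is computed by aggregating the expected weights of the probabilistic edges around the query node.

```python
import math

def gnn_fact_prob(nodes, prob_edges, query, w=1.5, b=-1.0):
    """Toy 'graph neural fact': probability of `query` computed by a
    one-round message-passing step over a graph of probabilistic edges.

    prob_edges maps undirected edges (u, v) to their probabilities;
    the node feature is the expected degree (sum of incident edge
    probabilities), and a logistic unit turns it into a probability.
    """
    expected_degree = {n: 0.0 for n in nodes}
    for (u, v), p in prob_edges.items():
        expected_degree[u] += p
        expected_degree[v] += p
    h = expected_degree[query]                   # aggregated message
    return 1.0 / (1.0 + math.exp(-(w * h + b)))  # probability of the fact
```

Because the input graph is itself built from probabilistic facts, raising the probability of a neighbouring edge raises the probability this toy "GNN" assigns to the query — exactly the kind of learned dependency between logical statements described above.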
Key Innovations and Capabilities
The core contributions of DEEPGRAPHLOG include:
- A new neurosymbolic language that enables bidirectional interaction between neural and symbolic components across multiple levels of an AI model.
- A novel learning setting for GNNs that supports weakly supervised learning and the integration of hard logical constraints.
- Demonstrated flexibility and advantages through various experiments combining graph-based learning with structured reasoning.
Experimental Validation: Real-World Applications
The researchers evaluated DEEPGRAPHLOG across several challenging tasks:
GNN Expressivity: Standard GNNs have limitations in distinguishing certain graph structures. DEEPGRAPHLOG, by combining GNNs with logical reasoning, significantly improved the ability to differentiate these structures, especially with limited training data. This shows that logical knowledge can enhance GNNs’ understanding of graph patterns.
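The limitation referenced here is the classic one: message-passing GNNs are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) colour-refinement test, which cannot tell apart, for example, two disjoint triangles from a single 6-cycle. The sketch below — our own illustration, not code from the paper — shows 1-WL producing identical colour multisets for the two graphs, while a one-line logical check for a triangle separates them; this is the kind of symbolic knowledge that can be layered on top of a GNN.

```python
def wl_multiset(adj, rounds=3):
    """1-WL colour refinement; returns the sorted multiset of final colours."""
    colors = {v: len(adj[v]) for v in adj}          # initial colour = degree
    for _ in range(rounds):
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
    return sorted(map(repr, colors.values()))

def has_triangle(adj):
    """Symbolic check: does the graph contain a 3-cycle?"""
    return any(w != u and w in adj[u]
               for u in adj for v in adj[u] for w in adj[v])

# two disjoint triangles vs. a single 6-cycle: both 2-regular on 6 nodes
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
             3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
hexagon = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

Here `wl_multiset(triangles) == wl_multiset(hexagon)` even though the graphs are not isomorphic, while `has_triangle` distinguishes them immediately.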
Structure Learning: DEEPGRAPHLOG can learn the importance of different graph structures (like cycles or cliques) for a given task. Instead of hardcoding these rules, the model learns probabilities for their relevance, effectively performing structure learning through parameter learning. For instance, it successfully identified a 4-cycle as highly relevant for a specific classification task.
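"Structure learning through parameter learning" can be pictured as attaching a learnable weight to a candidate structural feature and letting gradient descent decide its relevance. The toy sketch below is our own illustration — the real system learns probabilities attached to logical statements, not a bare logistic weight — but it shows the mechanism: a single weight on a "contains a 4-cycle" feature becomes decisive for a toy classification task.

```python
import math
from itertools import permutations

def has_4cycle(adj):
    """Brute-force check for a 4-cycle (fine for toy graphs)."""
    return any(b in adj[a] and c in adj[b] and d in adj[c] and a in adj[d]
               for a, b, c, d in permutations(adj, 4))

# toy dataset: label 1.0 iff the graph contains a 4-cycle
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
data = [(square, 1.0), (path, 0.0), (triangle, 0.0)]

# learn the relevance weight w of the 4-cycle feature by gradient descent
w, b = 0.0, 0.0
for _ in range(200):
    for adj, y in data:
        x = 1.0 if has_4cycle(adj) else 0.0
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= 0.5 * (p - y) * x                 # logistic-loss gradient step
        b -= 0.5 * (p - y)
```

After training, graphs containing a 4-cycle are scored above 0.5 and the others below it: the relevance of the 4-cycle structure has been recovered purely through parameter learning.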
Distant Supervision for Knowledge Graph Completion: In a task involving family trees, DEEPGRAPHLOG dramatically outperformed standard GNNs in learning intermediate relationships like “fatherOf” and “motherOf” from distant supervision (e.g., knowing who is a “grandfatherOf”). This highlights its ability to leverage partial logical knowledge and background information.
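The distant-supervision setup can be illustrated with the rule grandfatherOf(X, Z) ← fatherOf(X, Y) ∧ parentOf(Y, Z): the model never sees fatherOf labels directly, but the observed grandfatherOf facts constrain which fatherOf facts must hold. The sketch below is our own simplification — DEEPGRAPHLOG learns the intermediate relation with a graph neural predicate rather than by enumeration — showing how candidates consistent with the rule can be abduced.

```python
def abduce_fathers(parent_of, grandfather_of):
    """Return fatherOf(X, Y) candidates that would explain an observed
    grandfatherOf(X, Z) via the rule
        grandfatherOf(X, Z) <- fatherOf(X, Y), parentOf(Y, Z).
    Only parentOf edges are given; fatherOf is never directly labelled.
    """
    candidates = set()
    for (x, z) in grandfather_of:
        for (a, y) in parent_of:
            # X must be a parent of some Y who is in turn a parent of Z
            if a == x and (y, z) in parent_of:
                candidates.add((x, y))
    return candidates
```

For example, given parentOf(abe, bob) and parentOf(bob, carl) together with the distant label grandfatherOf(abe, carl), the only consistent candidate is fatherOf(abe, bob) — partial logical knowledge narrows the hypothesis space for the neural component.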
Multi-Layer Reasoning in Blocks World: In a planning task inspired by the classic Blocks World, DEEPGRAPHLOG achieved perfect accuracy. It combined two graph neural predicates with an explicit symbolic constraint (e.g., “cannot move a block onto a glass block”) to determine valid moves and tower formations. This demonstrated the power of structured intermediate representations and hard constraints in complex planning scenarios.
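The interplay of learned move predicates with a hard symbolic constraint can be sketched as a filter over candidate moves. In the toy Python below — our own illustration; the block names, the `glass` set, and the state encoding are invented for the example — a neural component would score the candidate moves, while the symbolic layer vetoes any move that places a block onto a glass block.

```python
def valid_moves(on, glass):
    """Enumerate legal Blocks World moves.

    `on` maps each block to what it rests on ("table" or another block);
    a block is clear when nothing sits on top of it.  Hard symbolic
    constraint: a block may never be moved onto a glass block.
    """
    clear = set(on) - {support for support in on.values() if support != "table"}
    moves = []
    for block in sorted(clear):
        for dest in sorted(clear | {"table"}):
            if dest == block or on[block] == dest:
                continue          # skip no-op moves
            if dest in glass:
                continue          # hard constraint: glass holds nothing
            moves.append((block, dest))
    return moves
```

For a state where `a` sits on `b` and a glass block `c` stands alone on the table, only the moves `a → table` and `c → a` survive the constraint.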
A More Flexible Future for AI
DEEPGRAPHLOG represents a significant step forward in neurosymbolic AI. By allowing neural and symbolic reasoning to interact in a multi-layered, bidirectional fashion, it offers a more expressive and flexible framework for integrating these powerful AI paradigms. While it introduces new computational challenges due to recursive dependencies, the framework’s ability to handle complex relational dependencies and overcome limitations of existing systems opens up new possibilities for AI applications in graph-structured domains.


