TLDR: This research introduces a new family of parameterized grounding methods (BCw,d) for Neural-Symbolic AI, which generalize Backward Chaining. These methods allow controlling the trade-off between logical expressiveness and computational scalability by adjusting ‘width’ (unknown atoms) and ‘depth’ (reasoning steps). Experiments show that while deeper grounding improves performance for complex tasks, efficient shallow grounders can still yield significant gains, highlighting the critical role of grounding in NeSy model effectiveness and scalability, especially for large knowledge graphs.
Neural-Symbolic (NeSy) AI is an exciting field that aims to combine the strengths of neural networks, which are excellent at processing complex data, with symbolic reasoning, which provides clear, interpretable logic. This integration promises AI systems that are both powerful and understandable.
A crucial challenge in building effective NeSy systems is a process called “logic grounding.” Imagine you have a set of logical rules, like “If X is located in Y and Y is a neighbor of Z, then X is located in Z.” Grounding is the process of taking these general rules and applying them to specific entities, such as “If Paris is located in France and France is a neighbor of Germany, then Paris is located in Germany.”
Traditionally, some NeSy methods try to generate every single possible specific instance of these rules. While this ensures all logical information is captured, it quickly becomes unmanageably large, especially with many entities and rules. This is like trying to list every single possible sentence that could be formed from a grammar – it’s a combinatorial explosion. Other methods use shortcuts or “heuristics” to pick only a few relevant instances, which is faster but might miss important information or lack a clear reason for why certain instances were chosen.
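To make the combinatorial explosion concrete, here is a minimal sketch of what "full" grounding looks like: every variable in a rule is replaced by every entity, so the number of instances grows exponentially with the number of variables. The knowledge base and names below are illustrative, not taken from the paper.

```python
from itertools import product

# Toy entity set and the variables of a single rule (illustrative):
# located_in(X, Y) AND neighbor_of(Y, Z) -> located_in(X, Z)
entities = ["paris", "france", "germany"]
rule_vars = ["X", "Y", "Z"]

def full_grounding(entities, rule_vars):
    """Enumerate every substitution of entities for the rule's variables."""
    groundings = []
    for values in product(entities, repeat=len(rule_vars)):
        groundings.append(dict(zip(rule_vars, values)))
    return groundings

groundings = full_grounding(entities, rule_vars)
print(len(groundings))  # 3 entities ** 3 variables = 27 instances
```

With only 3 entities and 3 variables there are already 27 instances of this one rule; with thousands of entities and many rules, the count becomes unmanageable, which is exactly why selective grounding matters.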
This new research paper, titled “Grounding Methods for Neural-Symbolic AI,” by Rodrigo Castellano Ontiveros, Francesco Giannini, Marco Gori, Giuseppe Marra, and Michelangelo Diligenti, tackles this fundamental problem. The authors propose a new, flexible family of grounding methods that generalize a classic logic technique called Backward Chaining. Their approach allows for a controlled balance between how much logical detail is considered (expressiveness) and how efficiently the system can operate (scalability).
Understanding the New Grounding Approach
The core of their proposal is a parameterized grounding method called BCw,d. Here, ‘w’ stands for “width” and ‘d’ stands for “depth.”
- Width (w): This parameter caps the number of unknown facts allowed in any rule instance. If proving a rule instance would require more unknown pieces of information than the width allows, that instance is discarded.
- Depth (d): This parameter limits how many “steps” of reasoning the system will follow. Think of it as how many layers deep the logical proof can go.
By adjusting ‘w’ and ‘d’, the researchers can fine-tune the grounding process. For instance, a very small depth (d=1) means the system only looks at direct connections, while a larger depth allows for multi-step reasoning. Interestingly, many existing grounding techniques used in other NeSy models can be seen as specific settings of this BCw,d framework.
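The width and depth controls can be sketched as a depth-limited backward chainer that keeps only rule instances whose bodies contain at most w unknown atoms. This is a simplified illustration under assumed names and a single hard-coded rule, not the paper's implementation; variables are distinguished from constants by uppercase names.

```python
from itertools import product

# Toy knowledge base (illustrative facts and entity names).
facts = {
    ("located_in", "paris", "france"),
    ("neighbor_of", "france", "germany"),
}
entities = ["paris", "france", "germany"]

# One rule: located_in(X, Z) <- located_in(X, Y), neighbor_of(Y, Z)
RULE_HEAD = ("located_in", "X", "Z")
RULE_BODY = [("located_in", "X", "Y"), ("neighbor_of", "Y", "Z")]

def substitute(atom, subst):
    """Apply a variable substitution to an atom."""
    return (atom[0],) + tuple(subst.get(a, a) for a in atom[1:])

def bc_ground(goal, width, depth):
    """Collect rule instances that could prove `goal`, keeping only those
    whose body has at most `width` unknown atoms, recursing `depth` steps."""
    groundings = set()
    if depth == 0 or goal[0] != RULE_HEAD[0]:
        return groundings
    subst = dict(zip(RULE_HEAD[1:], goal[1:]))
    # Collect the body variables not yet bound by the goal.
    free = []
    for atom in RULE_BODY:
        for v in atom[1:]:
            if v.isupper() and v not in subst and v not in free:
                free.append(v)
    for values in product(entities, repeat=len(free)):
        full = {**subst, **dict(zip(free, values))}
        body = [substitute(a, full) for a in RULE_BODY]
        unknown = [a for a in body if a not in facts]
        if len(unknown) <= width:              # width filter
            groundings.add((substitute(RULE_HEAD, full), tuple(body)))
            for atom in unknown:               # depth-limited recursion
                groundings |= bc_ground(atom, width, depth - 1)
    return groundings

# BC with w=0, d=1: keep only instances whose body atoms are all known.
g = bc_ground(("located_in", "paris", "germany"), width=0, depth=1)
print(len(g))  # 1: the single instance supported entirely by known facts
```

With width 0 the chainer keeps only the one instance whose body is fully known; raising the width or depth would admit instances resting on unproven atoms and recurse on them, enlarging the reasoning graph exactly as the trade-off described next.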
The paper also highlights a crucial trade-off: while increasing width and depth allows the system to find more complex logical proofs and potentially improve accuracy, it also significantly increases the size of the underlying “reasoning graph.” A larger graph can make the system slower and, perhaps counter-intuitively, might even hurt its ability to generalize to new, unseen data. This is because the neural network part of the NeSy system might overfit to the specific, large graph it was trained on.
Experimental Insights
The researchers tested their new grounding methods on various knowledge graph datasets, including Countries, Kinship, WN18RR, and FB15k-237. These datasets represent different sizes and complexities, and the task was “link prediction” – essentially, predicting missing connections or facts within the knowledge graph.
Key findings from their experiments include:
- For tasks requiring more complex, multi-step reasoning, increasing the grounding depth (d) generally led to better performance. For example, in the Countries dataset, a depth of 2 or 3 was optimal for the most complex task, aligning with the number of reasoning steps needed.
- However, using a “Full Grounder” (which tries to ground everything) often performed worse than the more focused BCw,d methods. This supports the idea that grounding too many unnecessary facts can negatively impact the model’s ability to generalize.
- Scalability is a significant concern for larger datasets. Increasing the depth from 1 to 2, while sometimes improving accuracy, could increase training and inference times by a factor of three or more. This means that for very large knowledge graphs, even shallow grounding methods (like BC0,1, which only considers direct, known facts) can provide substantial improvements over baselines while remaining computationally feasible.
This research underscores that the choice of grounding method is as important as the design of the NeSy model itself. It offers a principled way to control the balance between logical expressiveness and computational efficiency, paving the way for more scalable and effective Neural-Symbolic AI systems. You can read the full research paper for more technical details and results here: Grounding Methods for Neural-Symbolic AI.