TL;DR: A new research paper introduces the Neural Rewriting System (NRS) and Fast Neural Rewriting System (FastNRS), neuro-symbolic architectures that learn convergent term rewriting systems. These models excel at simplifying complex symbolic formulas across four domains (Logic, ListOps, Arithmetic, Algebra) and demonstrate strong out-of-distribution generalization. FastNRS offers significant improvements in memory efficiency and speed by processing multiple simplifications in parallel, outperforming neural baselines and often matching or exceeding advanced LLMs like OpenAI’s o1-preview on challenging reasoning tasks.
Artificial intelligence has made incredible strides in areas like image recognition and natural language processing. However, a persistent challenge remains: enabling neural networks to perform systematic, iterative reasoning, especially when faced with problems outside their training data. This is where classical algorithms, with their predictable outcomes and strong generalization, traditionally excel. A new research paper introduces a novel approach to bridge this gap, focusing on teaching neural systems to learn and execute symbolic algorithms, specifically convergent term rewriting systems.
Convergent term rewriting systems are essentially sets of rules that describe how to transform terms, such as mathematical expressions, into simpler, equivalent forms. Imagine simplifying a complex algebraic expression step by step until you reach a final, irreducible value. These systems are ‘convergent’ because rewriting always terminates and always produces the same final result, regardless of the order in which the rules are applied.
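As a concrete (non-neural) illustration, here is a minimal convergent string-rewriting system in Python. It is a textbook-style sketch of the general idea, not code from the paper: the single rule "ba" -> "ab" always terminates (each step removes one inversion) and always reaches the same normal form, the sorted string.

```python
# A toy convergent rewriting system with one rule: "ba" -> "ab".
# Each application removes one inversion, so rewriting terminates,
# and the unique normal form (the sorted string) is reached no
# matter which occurrence we choose to rewrite first.

def rewrite_once(s: str) -> str | None:
    """Apply the rule to the first occurrence of 'ba'; None if irreducible."""
    i = s.find("ba")
    if i == -1:
        return None
    return s[:i] + "ab" + s[i + 2:]

def normal_form(s: str) -> str:
    """Rewrite until no rule applies; guaranteed to terminate."""
    while (t := rewrite_once(s)) is not None:
        s = t
    return s

print(normal_form("babab"))  # -> "aabbb", for any rewrite order
```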
The researchers, Flavio Petruzzellis, Alberto Testolin, and Alessandro Sperduti, propose a general framework for learning these systems using a neuro-symbolic architecture. This means their system combines the pattern recognition power of neural networks with the structured, rule-based logic of symbolic AI. They developed two main implementations: the Neural Rewriting System (NRS) and the Fast Neural Rewriting System (FastNRS).
Both NRS and FastNRS are designed with a modular structure that mirrors the steps of a rewriting algorithm. They consist of three key components, illustrated with a code sketch after the descriptions below:
The Selector Module
This module acts like a smart scanner, identifying specific parts of a complex formula that can be simplified. For example, in an expression like (12+(3-(4+5))), the Selector would pinpoint the innermost solvable part, (4+5).
The Solver Module
Once a part is identified, the Solver takes over. It’s trained to apply the correct ‘rewriting rule’ to simplify that specific part into its equivalent, simpler form. So, (4+5) would become 9. It also knows when the entire computation is complete.
The Combiner Module
After the Solver does its job, the Combiner integrates the simplified part back into the original formula, creating a new, simpler version of the expression. This process repeats until the entire formula is reduced to its final value.
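To make the select-solve-combine loop concrete, here is a minimal sketch of how the three modules could compose. In the actual architecture the Selector and Solver are learned neural modules; the regex-based Selector and eval-based Solver below are deterministic stand-ins, and everything apart from the module names is illustrative.

```python
import re

# Deterministic stand-ins for the learned modules: in NRS each of
# these is a neural network; here a regex and eval() make the
# control flow explicit. Purely illustrative, not the paper's code.

def selector(formula: str):
    """Return the (start, end) span of one innermost reducible
    sub-expression, or None if the formula is already final."""
    m = re.search(r"\((-?\d+)[+\-*](-?\d+)\)", formula)
    return m.span() if m else None

def solver(leaf: str) -> str:
    """Rewrite one leaf like '(4+5)' into its simpler form '9'."""
    return str(eval(leaf))  # stand-in for the learned rewriting rule

def combiner(formula: str, span: tuple, replacement: str) -> str:
    """Splice the solved leaf back into the surrounding formula."""
    start, end = span
    return formula[:start] + replacement + formula[end:]

def simplify(formula: str) -> str:
    """Iterate select -> solve -> combine until no redex remains."""
    while (span := selector(formula)) is not None:
        leaf = formula[span[0]:span[1]]
        formula = combiner(formula, span, solver(leaf))
    return formula

print(simplify("(12+(3-(4+5)))"))  # (4+5)->9, (3-9)->-6, (12+-6)->6
```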
The key difference between NRS and FastNRS lies in how the Selector operates. While NRS selects and processes one part at a time, FastNRS is designed for efficiency: it uses a ‘text segmentation’ approach to identify and process multiple simplifiable parts of a formula in parallel. This parallel processing significantly improves memory efficiency and reduces both training and inference time.
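Sticking with the same illustrative stand-ins as above, the control-flow difference looks roughly like this: instead of one selection per iteration, a FastNRS-style pass segments the formula into every currently solvable span and rewrites them all before re-scanning. In the real system the batched solver calls are neural; this sketch is ours, not the paper's.

```python
import re

REDEX = re.compile(r"\((-?\d+)[+\-*](-?\d+)\)")

def simplify_parallel(formula: str) -> str:
    """FastNRS-style loop: rewrite *all* current redexes per pass."""
    while True:
        spans = [m.span() for m in REDEX.finditer(formula)]
        if not spans:  # no redex left: the formula is in normal form
            return formula
        # One batched "solver call" over every selected leaf; batching
        # is what saves memory and time when the solver is neural.
        solved = [str(eval(formula[s:e])) for s, e in spans]
        # Combine right to left so earlier offsets stay valid.
        for (s, e), value in reversed(list(zip(spans, solved))):
            formula = formula[:s] + value + formula[e:]

print(simplify_parallel("((1+2)*(3+4))"))  # both leaves solved in one pass -> 21
```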
The researchers put their systems to the test on four types of formula simplification problems: Logic (nested logical operations), ListOps (operations on lists of integers), Arithmetic (nested sums, subtractions, and multiplications), and Algebra (simplifying monomials). A crucial aspect of their evaluation was ‘out-of-distribution generalization’: the ability of the models to solve problems much more complex or longer than those encountered during training.
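For intuition, here are illustrative inputs in the spirit of each domain; the exact syntax of the paper's datasets may differ:

```python
examples = {
    "Logic":      "((T and F) or (not F))",   # reduces to a truth value
    "ListOps":    "[MAX 2 [MIN 3 4] 0]",      # reduces to a single integer
    "Arithmetic": "(12+(3-(4+5)))",           # reduces to 6
    "Algebra":    "(2*a*b + 3*a*b)",          # reduces to the monomial 5*a*b
}
```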
The results were compelling. Both NRS and FastNRS demonstrated strong generalization capabilities, often outperforming other neural baselines such as the Neural Data Router. They were also benchmarked against powerful large language models (LLMs) such as OpenAI’s GPT-4o and the reasoning-optimized o1-preview. While o1-preview showed impressive capabilities, especially with its step-by-step reasoning, NRS and FastNRS consistently matched or surpassed its accuracy on the most challenging, deeply nested formulas. This suggests that their algorithm-inspired design provides a more robust foundation for systematic reasoning on these tasks.
A notable achievement was FastNRS’s efficiency: it achieved significant speed-ups over NRS and often used fewer parameters, all while maintaining strong performance. This highlights the potential for building highly efficient AI systems capable of complex symbolic reasoning.
Despite these advancements, the framework has limitations. It’s currently restricted to sequence-based rewriting problems where rules operate on local substrings. The underlying algorithmic structure is also predefined by human designers, meaning the system doesn’t learn the algorithm itself, only how to apply its rules efficiently. Future work aims to expand the system to handle non-local patterns and potentially infer algorithmic blueprints directly from data.
This research represents a significant step towards building AI systems that can reason systematically and generalize robustly, combining the best of neural and symbolic approaches. You can read the full research paper here: Learning Neuro-Symbolic Convergent Term Rewriting Systems.


