TLDR: This research introduces ABAGCN and ABAGAT, the first Graph Neural Network (GNN) models designed to approximate credulous acceptance in Assumption-Based Argumentation (ABA). By representing ABA frameworks as heterogeneous dependency graphs, these models achieve high accuracy (F1 score up to 0.71) in classifying acceptable assumptions. Furthermore, a novel extension-reconstruction algorithm, driven by these GNNs, can reconstruct stable extensions with F1 scores over 0.85 on small frameworks and around 0.58 on large ones, significantly reducing computation time for complex argumentation problems.
Computational argumentation is a field that develops formal tools for reasoning with conflicting information. One powerful approach within this field is Assumption-Based Argumentation (ABA), which is used in applications such as decision support and planning. However, a major challenge with ABA is that exactly computing ‘stable extensions’ – sets of assumptions that are jointly acceptable – becomes prohibitively expensive for large frameworks.
To address this computational hurdle, a new research paper introduces the first approach to use Graph Neural Networks (GNNs) for approximating ‘credulous acceptance’ in ABA. Credulous acceptance means checking if a particular assumption is part of any stable extension, a task that is known to be computationally intensive.
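To make these notions concrete, here is a minimal brute-force check on a tiny, hypothetical flat ABA framework (the framework, names, and rules are illustrative, not from the paper). A set of assumptions attacks an assumption if it derives that assumption’s contrary; a stable extension is conflict-free and attacks every assumption outside it; an assumption is credulously accepted if some stable extension contains it:

```python
from itertools import combinations

# Toy flat ABA framework (hypothetical example):
assumptions = {"a", "b"}
contrary = {"a": "p", "b": "q"}
rules = [("p", {"b"}),   # p <- b  (so {b} attacks a)
         ("q", {"a"})]   # q <- a  (so {a} attacks b)

def derivable(S):
    """Forward-chain: all atoms derivable from assumption set S."""
    derived = set(S)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def attacks(S, asm):
    return contrary[asm] in derivable(S)

def is_stable(S):
    conflict_free = not any(attacks(S, x) for x in S)
    covers_rest = all(attacks(S, x) for x in assumptions - S)
    return conflict_free and covers_rest

stable = [set(c) for r in range(len(assumptions) + 1)
          for c in combinations(sorted(assumptions), r)
          if is_stable(set(c))]
credulous = {x for x in assumptions if any(x in S for S in stable)}
print(stable)            # [{'a'}, {'b'}]
print(sorted(credulous)) # ['a', 'b']
```

This exhaustive enumeration is exponential in the number of assumptions, which is exactly why approximation becomes attractive at scale.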
Modeling Argumentation as Graphs
The core innovation lies in how ABA frameworks are represented. The researchers model these frameworks as ‘dependency graphs.’ In these graphs, assumptions, claims (conclusions), and rules are represented as different types of nodes. The relationships between these elements are captured by distinct edge labels: ‘support’ edges link body atoms to rule nodes, ‘derive’ edges connect rule nodes to their conclusions, and ‘attack’ edges show conflicts between contraries and assumptions.
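The encoding above can be sketched as follows; the concrete framework, node names, and data layout are illustrative assumptions, not the paper’s implementation:

```python
# Hypothetical ABA framework: rules r1: p <- b and r2: q <- a,
# with contraries contrary(a) = p, contrary(b) = q.
assumptions = {"a", "b"}
contrary = {"a": "p", "b": "q"}
rules = [("r1", "p", ["b"]),
         ("r2", "q", ["a"])]

nodes = {}  # name -> node type
for asm in assumptions:
    nodes[asm] = "assumption"
for rid, head, body in rules:
    nodes[rid] = "rule"
    for atom in [head] + body:
        nodes.setdefault(atom, "claim")  # non-assumption atoms are claims

edges = []  # (source, label, target)
for rid, head, body in rules:
    for atom in body:
        edges.append((atom, "support", rid))  # body atom -> rule node
    edges.append((rid, "derive", head))       # rule node -> conclusion
for asm, c in contrary.items():
    if c in nodes:
        edges.append((c, "attack", asm))      # contrary -> assumption

print(nodes)
print(edges)
```

Because every rule, atom, and contrary relation appears explicitly as a typed node or labeled edge, the original framework can be read back off the graph, which is what makes the encoding faithful.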
This dependency graph is crucial because it faithfully encodes all the necessary information about an ABA framework, allowing the GNNs to learn and make predictions while preserving the underlying semantics. Unlike previous methods that might simplify the structure, this approach maintains the full complexity needed for accurate reasoning.
Introducing ABAGCN and ABAGAT
The paper proposes two specific GNN architectures: ABAGCN and ABAGAT. Both models are designed to learn ‘node embeddings’ – numerical representations of each node that capture its context and relationships within the graph. ABAGCN uses residual heterogeneous convolutional layers, while ABAGAT employs residual heterogeneous attention layers. These architectures are enriched with learnable embeddings and features based on node degrees (how many connections a node has), helping them to understand the graph’s structure and node attributes effectively.
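A minimal sketch of one residual heterogeneous message-passing step may help. For readability it uses scalar node features and one scalar weight per edge label; the actual models learn vector embeddings with weight matrices (and, in ABAGAT, attention coefficients), so everything here is a simplified assumption:

```python
# Dependency graph from the toy framework above.
edges = [("b", "support", "r1"), ("r1", "derive", "p"),
         ("a", "support", "r2"), ("r2", "derive", "q"),
         ("p", "attack", "a"), ("q", "attack", "b")]
nodes = {"a", "b", "p", "q", "r1", "r2"}

# Degree-based input features, echoing the paper's feature design.
in_deg = {n: 0 for n in nodes}
for _, _, tgt in edges:
    in_deg[tgt] += 1
h = {n: float(in_deg[n]) for n in nodes}

# One relation-specific weight per edge label (matrices in a real layer).
w = {"support": 0.5, "derive": 0.3, "attack": -0.7}

def hetero_layer(h):
    msg = {n: 0.0 for n in h}
    for src, label, tgt in edges:
        msg[tgt] += w[label] * h[src]   # relation-typed message
    # Residual connection: add each node's incoming representation back.
    return {n: h[n] + msg[n] for n in h}

h1 = hetero_layer(h)
print(h1)
```

Stacking such layers lets information about rules and attacks propagate several hops, while the residual connections keep deeper stacks trainable.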
The models were trained on a substantial dataset, combining the ICCMA 2023 benchmark with a large number of synthetically generated ABA frameworks. Hyperparameters for the GNNs were carefully optimized using Bayesian search to achieve the best performance.
Empirical Performance and Speed
The empirical results are promising. Both ABAGCN and ABAGAT significantly outperformed a state-of-the-art GNN baseline adapted from abstract argumentation, especially on smaller ABA frameworks. For ‘credulous acceptance classification,’ the models achieved a node-level F1 score of up to 0.71 on the ICCMA instances, demonstrating their accuracy in predicting which assumptions are acceptable.
Beyond just classifying individual assumptions, the researchers also developed a ‘sound polynomial-time extension-reconstruction algorithm.’ This algorithm uses the GNN’s predictions to reconstruct entire stable extensions. It achieved an F1 score above 0.85 on small ABA frameworks and maintained an F1 score of approximately 0.58 on much larger frameworks (up to 1,000 atoms).
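One plausible greedy instantiation of such a GNN-guided reconstruction is sketched below (the paper’s exact algorithm may differ; the scores and threshold are hypothetical). Ranking assumptions by predicted score, adding them while the set stays conflict-free, and finishing with an exact polynomial-time stability check keeps the procedure sound, since a non-stable candidate is never returned as an extension:

```python
# Toy flat ABA framework (same hypothetical example as before).
assumptions = {"a", "b"}
contrary = {"a": "p", "b": "q"}
rules = [("p", {"b"}), ("q", {"a"})]

def derivable(S):
    derived = set(S)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def conflict_free(S):
    d = derivable(S)
    return not any(contrary[x] in d for x in S)

def is_stable(S):
    d = derivable(S)
    return conflict_free(S) and all(contrary[x] in d for x in assumptions - S)

def reconstruct(scores, threshold=0.5):
    """Greedily build an extension from GNN scores; verify exactly."""
    ext = set()
    for asm in sorted(assumptions, key=scores.get, reverse=True):
        if scores[asm] >= threshold and conflict_free(ext | {asm}):
            ext.add(asm)
    return ext if is_stable(ext) else None  # sound: never emits a non-stable set

scores = {"a": 0.9, "b": 0.2}  # hypothetical GNN outputs
print(reconstruct(scores))     # {'a'}
```

Every step (forward chaining, the conflict-free test, the final stability check) runs in polynomial time, which is where the speed advantage over exact enumeration comes from.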
A critical finding was the significant speed-up achieved. For very challenging ABA frameworks with 4,000–5,000 atoms, the exact ASPForABA solver averaged 435 seconds per instance. In contrast, the GNN-driven approximate extension reconstruction ran in just 192 seconds – 2.3 times faster – while still maintaining a respectable F1 score of 0.68. This highlights the potential of GNNs to make complex argumentation reasoning tractable for real-world applications.
Future Directions
This work opens new avenues for scalable approximate reasoning in structured argumentation. Future research aims to integrate lightweight symbolic checks to further boost precision, extend the approach to other argumentation semantics (like admissible or preferred extensions), and lift the current restriction to ‘flat’ ABA frameworks to handle more general and complex scenarios.
For more details, you can read the full research paper here.


