TL;DR: This research introduces a unified framework for understanding and evaluating different types of “forgetting” in AI’s epistemic states, which encompass an agent’s knowledge and beliefs. It defines five general types of forgetting (Contraction, Ignoration, Revocation, Marginalization, Conditionalization) and instantiates them with seven concrete operations based on Spohn’s ranking functions. By adapting existing postulates from logic programming and belief revision, and introducing novel ones, the paper systematically evaluates these operations, revealing that minimal c-contractions align with AGM principles, while marginalization best fits ASP-inspired forgetting. The work offers a comprehensive overview, bridging different perspectives on how AI agents can intentionally manage and adapt their knowledge.
In the realm of artificial intelligence, managing an agent’s knowledge and beliefs is paramount. Just as humans selectively forget information, AI systems can benefit from a similar capability, known as “forgetting.” This concept, however, is far more complex than simply deleting data. A recent research paper, “A General Framework of Epistemic Forgetting and its Instantiation by Ranking Functions,” delves into the multifaceted nature of this operation, proposing a unified framework to understand and evaluate different types of forgetting in AI systems.
Traditionally, forgetting in AI has been approached through two main lenses: variable elimination in logic programming and contraction in AGM belief revision theory. While both are effective, they operate on different principles and often rely on classical logic. The authors, Christoph Beierle, Alexander Hahn, Diana Howey, Gabriele Kern-Isberner, and Kai Sauerwald, recognized the need for a more comprehensive, epistemic perspective that considers the richer semantic structures of an agent’s entire belief state, not just propositional logic.
The paper introduces five general types of epistemic forgetting: Contraction, Ignoration, Revocation, Marginalization, and Conditionalization. Each type represents a distinct intention behind the act of forgetting. For instance, Contraction aims to directly give up a belief, while Ignoration seeks to make an agent undecided about a piece of information. Revocation goes a step further, not only forgetting a belief but also establishing belief in its negation. Marginalization and Conditionalization, on the other hand, are well-established operations that are reinterpreted here as forgetting mechanisms, focusing on restricting the scope of information or interpreting beliefs under specific assumptions.
To bring these abstract types to life, the researchers instantiated them using Spohn’s ranking functions. Ranking functions are a powerful tool for representing epistemic states, offering a way to quantify the plausibility of different possible worlds. Through this instantiation, seven concrete forgetting operations were developed: Marginalization, Lifted Marginalization, Conditionalization, c-Ignoration, c-Revocation, Minimal c-Contraction, and Non-Minimal c-Contractions. This detailed instantiation allows for a precise study of how different forgetting mechanisms affect an agent’s beliefs and conditional inferences.
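To get a feel for how ranking functions work, here is a minimal toy sketch (not the paper’s implementation; the names `kappa`, `beliefs`, `marginalize`, and `conditionalize` are illustrative). A ranking function assigns each possible world a non-negative integer rank, with at least one world at rank 0; the agent believes exactly what holds in all rank-0 worlds. Marginalization forgets an atom by projecting worlds onto the remaining atoms and keeping the minimal rank, while Spohn conditionalization shifts the ranks of the worlds satisfying an assumption so the most plausible of them lands at 0.

```python
# Toy ranking function (OCF) over two atoms a, b: a dict from
# (a_value, b_value) tuples to non-negative integer ranks.
# Normalization: at least one world has rank 0.
kappa = {
    (True, True): 0,   # most plausible world: a and b both hold
    (True, False): 1,
    (False, True): 2,
    (False, False): 3,
}

def beliefs(ocf):
    """The believed worlds are exactly those of minimal rank 0."""
    return {w for w, r in ocf.items() if r == 0}

def marginalize(ocf, drop_index):
    """Forget the atom at drop_index: project each world onto the
    remaining atoms, keeping the minimal rank over its extensions."""
    result = {}
    for world, rank in ocf.items():
        reduced = world[:drop_index] + world[drop_index + 1:]
        result[reduced] = min(rank, result.get(reduced, rank))
    return result

def conditionalize(ocf, worlds_of_A):
    """Spohn conditionalization on A: kappa(w | A) = kappa(w) - kappa(A)
    for A-worlds, where kappa(A) is the minimal rank of an A-world."""
    kappa_A = min(ocf[w] for w in worlds_of_A)
    return {w: ocf[w] - kappa_A for w in worlds_of_A}
```

For example, `marginalize(kappa, 1)` forgets the atom `b`, yielding ranks `{(True,): 0, (False,): 2}`: the agent still believes `a`, but `b` no longer appears in its epistemic state.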
A crucial aspect of this research is the rigorous evaluation of these forgetting operations. The authors adapted existing postulates from Answer Set Programming (ASP) and AGM belief revision theory, which describe rational properties of forgetting. They also developed novel postulates tailored to their unifying epistemic framework, such as Epistemic Persistence (EP), Belief Persistence (BP), Belief Equivalence (BE), Extensional Belief Equivalence (EBE), and Linear Equivalence (LEocf). This comprehensive set of axioms provides a robust benchmark for comparing the different forgetting operators.
The evaluation yielded significant insights. It confirmed that minimal c-contractions align well with the principles of AGM belief revision, effectively removing propositions from belief sets. Conversely, marginalization proved to be the best fit for ASP-inspired forgetting, which focuses on preserving beliefs when forgetting irrelevant atoms. Interestingly, some postulates, like Belief Equivalence (BE) and Extensional Belief Equivalence (EBE), were satisfied by almost all forgetting operations, suggesting fundamental commonalities across different forgetting paradigms. However, the study also highlighted that strong equivalence properties (like (wE) and (E)) are often not preserved, indicating that simple propositional equivalence is insufficient to maintain equivalence under epistemic change.
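The claim that marginalization preserves beliefs when forgetting an irrelevant atom can be checked on a small example: the rank-0 worlds of the marginal ranking function coincide with the projections of the original rank-0 worlds, so a belief in `a` survives forgetting `b`. A minimal sketch under assumed encodings (not the paper’s code):

```python
# Hypothetical mini-check: projecting the rank-0 worlds of a ranking
# function coincides with the rank-0 worlds of its marginalization.
kappa = {
    (True, True): 0, (True, False): 0,   # a is believed; b is left open
    (False, True): 1, (False, False): 2,
}

def marginalize(ocf, drop_index):
    out = {}
    for world, rank in ocf.items():
        reduced = world[:drop_index] + world[drop_index + 1:]
        out[reduced] = min(rank, out.get(reduced, rank))
    return out

rank0 = {w for w, r in kappa.items() if r == 0}
projected = {w[:1] for w in rank0}                       # drop atom b
marginal = marginalize(kappa, 1)
marginal0 = {w for w, r in marginal.items() if r == 0}
assert projected == marginal0 == {(True,)}               # a still believed
```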
This work not only provides a novel, comprehensive overview of epistemic forgetting but also builds bridges between previously disparate approaches. By offering a common semantic framework, it allows for a more coherent understanding of how AI agents can intentionally ignore information, restructure their knowledge, and adapt to new contexts. The findings pave the way for future research into more complex knowledge items, efficient algorithms, and applications in dynamic, multi-agent systems. For a deeper dive into the technical details, see the full paper.


