TLDR: This research paper examines the ontological (what exists) and epistemological (how we know) assumptions embedded within different Explainable AI (XAI) methods. It argues that XAI tools rest on diverse foundational beliefs about what an explanation is and how it can be known. The paper categorizes XAI methods into four paradigms (Logical Positivist, Contemporary Realist, Interpretive, and Postmodern), demonstrating how technical variations correspond to significant philosophical differences. It highlights the importance of aligning XAI method selection with the philosophical paradigm of the application domain to avoid misinterpretation and ensure valid, trustworthy AI explanations.
Artificial intelligence (AI) has become ubiquitous, but its most powerful methods, like deep learning, often operate as ‘black-box’ systems. This lack of transparency makes it difficult to understand how they arrive at their decisions, which in turn limits trust and adoption. Explainable AI (XAI) aims to address this by providing insights into these complex models.
However, the concept of an ‘explanation’ itself is far from straightforward; it has been a subject of deep philosophical debate for centuries. This research paper, titled Onto-Epistemological Analysis of AI Explanations, delves into the underlying philosophical assumptions embedded within different XAI methods. The authors, Martina Mattioli, Eike Petersen, Aasa Feragen, Marcello Pelillo, and Siavash A. Bigdeli, argue that the assumptions made by the engineers and scientists who develop XAI tools, often from technical backgrounds, are not neutral. Assumptions about what an explanation is, whether it exists independently, and how we can come to know it have significant consequences for the validity and interpretation of AI explanations across various fields.
The paper introduces an ‘onto-epistemological’ framework to analyze XAI methods. Ontology is the study of what exists – in this context, whether AI explanations have an independent reality. Epistemology is the study of knowledge – how we can gain knowledge about these explanations. The authors highlight that seemingly minor technical adjustments in an XAI method can correspond to profound differences in these underlying philosophical assumptions.
Understanding the Philosophical Foundations of XAI
The researchers categorize XAI methods into four main philosophical paradigms:
- Logical Positivist: This paradigm assumes that explanations exist and can be fully understood through direct observation and logical reasoning, often by analyzing the AI model’s internal workings and objectives. Examples include basic gradient-based methods, which show how changes in the input affect the output (see the gradient-saliency sketch after this list).
- Contemporary Realist: Like the positivists, this paradigm holds that explanations exist, but it acknowledges that not all of reality is attainable through simple observation; knowledge is influenced by subjective viewpoints. Methods in this category often adopt a specific task or standpoint, like Grad-CAM, which evaluates explanations by their ability to localize objects (see the Grad-CAM sketch below).
- Interpretive: In this paradigm, explanations exist only within the human mind; they are mental constructs, not external realities. XAI methods here are designed to align with human understanding and expectations, such as Layer-Wise Relevance Propagation (LRP), which defines an explanation as relevance scores over input dimensions whose sum approximates the model output, often justified by visual evidence that conforms to human perception (see the LRP sketch below).
- Postmodern: This approach rejects the absolute existence of AI explanations, focusing instead on relationships and comparisons; explanations are understood relatively, not absolutely. Integrated Gradients, for instance, explains a prediction relative to a baseline input, emphasizing axioms such as sensitivity and implementation invariance rather than any inherent explanation of a single input (see the Integrated Gradients sketch below).
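To make the gradient-based example concrete, here is a minimal sketch (not from the paper) of vanilla gradient saliency in PyTorch. The names `model` and `x` are hypothetical placeholders for any differentiable classifier and a compatible input tensor:

```python
import torch

def gradient_saliency(model, x):
    """Vanilla gradient attribution: how does the top class score
    change as each input dimension changes? (d score / d input)"""
    x = x.clone().detach().requires_grad_(True)  # track gradients w.r.t. the input
    logits = model(x)
    score = logits.max()       # score of the top predicted class
    score.backward()           # populate x.grad with d(score)/dx
    return x.grad.detach()     # saliency map, same shape as the input
```

Under the logical positivist reading, this map is taken at face value: the model’s own derivative, directly observed, is the explanation.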
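Grad-CAM’s standpoint-dependence is visible in its construction: the explanation is tied to a chosen layer and a chosen class. Below is a hedged PyTorch sketch, assuming a batched input `x` of shape `[1, C, H, W]`; `target_layer` and `class_idx` are assumed arguments naming a convolutional module inside `model` and the class to localize:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Grad-CAM sketch: weight a conv layer's activation maps by the
    spatial average of their gradients w.r.t. a chosen class score."""
    acts, grads = {}, {}
    fh = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out))
    bh = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(g=gout[0]))
    score = model(x)[0, class_idx]   # score of the class to be localized
    model.zero_grad()
    score.backward()
    fh.remove(); bh.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((w * acts["a"]).sum(dim=1))       # weighted sum over channels
    return cam / (cam.max() + 1e-8)                # coarse localization heatmap
```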
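The interpretive framing of LRP, where the explanation is a decomposition a human can read as “this much of the output came from this input”, can be illustrated with the epsilon rule. This is a simplified sketch under strong assumptions (a plain stack of `torch.nn.Linear` layers with ReLUs in between, and a single scalar output), not the paper’s formulation:

```python
import torch

def lrp_epsilon(layers, x, eps=1e-6):
    """LRP epsilon-rule sketch for a stack of Linear(+ReLU) layers.
    The returned relevances over input dimensions approximately
    sum to the model output (the 'conservation' property)."""
    # Forward pass, remembering the input to each linear layer.
    inputs, a = [], x
    for i, lin in enumerate(layers):
        inputs.append(a)
        a = lin(a)
        if i < len(layers) - 1:
            a = torch.relu(a)

    # Backward relevance pass, starting from the scalar output score.
    R = a
    for lin, a_in in zip(reversed(layers), reversed(inputs)):
        z = lin(a_in)
        z = z + eps * torch.where(z >= 0, torch.ones_like(z),
                                  -torch.ones_like(z))  # keep denominator nonzero
        s = R / z                     # relevance per unit of pre-activation
        c = s @ lin.weight            # redistribute onto the layer's inputs
        R = a_in * c                  # conservation: R.sum() stays ≈ output
    return R
```

For a multi-class model one would first mask the output to the class of interest; the sketch assumes a single output for brevity.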
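Finally, the relational character of Integrated Gradients shows up in its very signature: there is no attribution of `x` alone, only of `x` relative to a `baseline`. A minimal PyTorch sketch, approximating the path integral with a Riemann sum (argument names are illustrative; `x` is a single unbatched input):

```python
import torch

def integrated_gradients(model, x, baseline, class_idx, steps=50):
    """Integrated Gradients sketch: attribute f(x) *relative to* f(baseline)
    by averaging gradients along the straight path between the two inputs."""
    # Interpolation coefficients, shaped to broadcast against the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    scores = model(path)[:, class_idx]               # class score at each step
    grads, = torch.autograd.grad(scores.sum(), path)
    avg_grad = grads.mean(dim=0)                     # average gradient over the path
    # Completeness: the attributions sum to ≈ f(x) - f(baseline).
    return (x - baseline) * avg_grad
```

Note that the attribution is undefined without the baseline; a different reference input yields a different, equally valid explanation, which is precisely the relational stance described above.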
Implications for XAI Development and Application
The paper emphasizes the risks of ignoring these underlying onto-epistemological paradigms when selecting an XAI method for a particular application. For instance, a medical researcher operating within a scientific (contemporary realist) paradigm would find XAI techniques developed under an interpretive or postmodern paradigm incompatible with their foundational beliefs about reality and knowledge. Using such a method could lead to contradictions and misinterpretations.
The authors argue that common criticisms of XAI regarding reliability, trustworthiness, and utility often stem from the implicit assumption that all XAI methods adhere to the same philosophical paradigm. By recognizing that XAI methods are built within different onto-epistemological frameworks, researchers and developers can make more informed choices, ensuring that a chosen method conforms to the guidelines and assumptions of their specific domain.
This analysis provides a more nuanced way to understand and classify XAI methods, moving beyond simple technical classifications or trade-offs between completeness and understandability. It encourages a deeper engagement with the philosophical underpinnings of AI explanations, ultimately aiming for more appropriate and effective use of XAI in diverse applications.


