
Designing Understandable AI: A Human-Centered Approach to Explanations

TLDR: This research paper proposes a human-centered framework for designing Explainable AI (XAI) systems, moving beyond purely technical methods. It introduces a ‘Who, What, and How’ framework: identifying the diverse stakeholders (Developers, Operators, Validators, Subjects), determining what aspects of the AI model need explanation (scope, focus, model specificity, operational cost), and deciding how explanations should be delivered (numerical, visual, textual, interactive). The paper also critically examines the ethical implications of XAI design, including epistemic inequality, social inequality, and accountability, advocating for explanations that are not only useful but also just and empowering.

The field of Explainable AI (XAI) aims to make complex artificial intelligence models understandable. While many advancements have focused on developing new technical methods, a recent research paper titled “Beyond Technocratic XAI: The Who, What & How in Explanation Design” argues that creating truly meaningful explanations is a context-dependent task requiring intentional design choices. This paper, authored by Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg, and Anders Søgaard, reframes explanation as a situated design process, which is particularly relevant for those building and deploying explainable AI systems. The core of their work proposes a three-part framework for explanation design: asking Who needs the explanation, What they need explained, and How that explanation should be delivered. The authors also highlight the critical need for ethical considerations throughout this process, addressing risks such as epistemic inequality, reinforcing social inequities, and obscuring accountability.

Understanding the ‘Who’ in XAI Design

A key aspect of effective explanation design is identifying the audience. The paper emphasizes that unlike many technical XAI works that assume a generic user, human-centered design requires differentiating among various stakeholders affected by an AI system. The authors categorize these stakeholders into four main groups: Developers, Operators, Validators, and Subjects. Developers need explanations to debug and improve model internals, prioritizing technical accuracy. Operators, such as doctors or bankers, use AI outputs for real-time decision-making and require context-sensitive, actionable, and efficient explanations. Validators, like auditors or compliance officers, seek transparency to assess fairness and regulatory adherence, needing comprehensible and structured explanations. Finally, Subjects, who are individuals impacted by model decisions (e.g., patients or customers), need accessible explanations to understand and potentially challenge decisions that affect them. The paper notes that individuals might occupy multiple roles, but each role represents a distinct mindset that influences explanation needs.
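To make these role distinctions concrete, here is a minimal Python sketch of how a design team might encode the four stakeholder groups and their explanation needs. The class names, priorities, and format preferences are illustrative paraphrases of the paper's discussion, not an API or taxonomy it defines.

```python
# Hypothetical encoding of the paper's four stakeholder roles and their
# explanation needs. All names and values below are illustrative.
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    DEVELOPER = "developer"   # debugs and improves model internals
    OPERATOR = "operator"     # uses AI outputs for real-time decisions
    VALIDATOR = "validator"   # audits fairness and regulatory compliance
    SUBJECT = "subject"       # is affected by the model's decisions


@dataclass
class ExplanationNeeds:
    priority: str               # what the explanation must optimize for
    preferred_formats: tuple    # delivery formats likely to fit this role


# Assumed mapping from role to needs, paraphrasing the paper's discussion.
NEEDS_BY_ROLE = {
    Role.DEVELOPER: ExplanationNeeds("technical accuracy", ("numerical", "interactive")),
    Role.OPERATOR:  ExplanationNeeds("actionable and time-efficient", ("visual", "textual")),
    Role.VALIDATOR: ExplanationNeeds("structured and auditable", ("textual", "interactive")),
    Role.SUBJECT:   ExplanationNeeds("accessible and contestable", ("textual", "visual")),
}

if __name__ == "__main__":
    for role, needs in NEEDS_BY_ROLE.items():
        print(f"{role.value:>10}: {needs.priority} -> {needs.preferred_formats}")
```

A mapping like this is only a starting point; as the paper notes, one person can occupy several roles at once, so a real system would need to let users switch between these profiles rather than assign them permanently.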

Defining the ‘What’ to Explain

Once the audience is identified, the next crucial step is determining what aspects of the AI model’s behavior should be explained. The paper classifies existing XAI methods along four axes: explanation scope, explanation focus, model specificity, and operational cost. Explanation scope refers to the level of generality; local methods explain individual predictions, while global methods explain overall model behavior. Explanation focus differentiates between behavioral methods, which show how inputs influence outputs, and mechanistic methods, which uncover internal model structures. Model specificity considers whether a method requires internal access to the model (model-specific) or treats it as a black box (model-agnostic). Lastly, operational cost refers to the time and computational resources required to generate an explanation. For instance, a clinician (Operator) might need a local, behavioral explanation focusing on specific patient factors, delivered quickly due to time constraints. This framework helps match the right explanation method to the specific needs and constraints of different stakeholders.
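The scope and specificity axes lend themselves to a short illustration. The sketch below, which assumes a scikit-learn model and the built-in breast-cancer dataset purely for demonstration, contrasts a global, model-agnostic explanation (permutation importance) with a simple local, behavioral one (perturbing one feature of a single instance and measuring the change in predicted probability). It is a toy baseline, not a method proposed in the paper.

```python
# Global vs. local, model-agnostic explanations on a toy classifier.
# Dataset, model, and helper names are chosen for illustration only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global scope: which features matter for the model's behavior overall?
global_imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)

# Local scope: which features drive this single prediction? A simple
# behavioral baseline: replace one feature with its training mean and
# measure the change in the predicted probability.
def local_attribution(model, x, baseline):
    p_ref = model.predict_proba(x.reshape(1, -1))[0, 1]
    deltas = []
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        deltas.append(p_ref - p_pert)
    return np.array(deltas)

local_imp = local_attribution(model, X_test[0], X_train.mean(axis=0))
print("top global feature index:", int(np.argmax(global_imp.importances_mean)))
print("top local feature index :", int(np.argmax(np.abs(local_imp))))
```

Both explanations here are model-agnostic (they only query predictions), but the global one summarizes behavior across the test set while the local one answers the clinician-style question about a single case; the per-feature perturbation loop also hints at the operational-cost axis, since the local explanation requires one model call per feature.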

Choosing the ‘How’ of Explanation Delivery

The final design question revolves around how the explanation should be delivered. This is not merely a technical decision about format but an act of shaping and framing knowledge for specific purposes. The paper outlines four common explanation formats: Numerical, Visual, Textual, and Interactive. Numerical explanations use quantitative indicators like scores or weights, suitable for developers comfortable with abstract metrics. Visual explanations, such as heatmaps or plots, offer intuitive understanding, especially for operators and subjects. Textual explanations transform model reasoning into natural language, excelling in accessibility for non-technical stakeholders. Interactive explanations allow users to engage with the system, exploring ‘what-if’ scenarios, which is beneficial for developers, validators, and skilled operators. The authors emphasize that these formats can be combined, and the challenge lies in choosing the most appropriate format for the user’s context, skill level, and goals.
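As a rough illustration of how the same underlying attribution can be framed for different audiences, the following sketch renders one made-up set of feature attributions numerically, textually, and as a crude text-based stand-in for a visual format. The feature names and scores are invented for the example; a real interface would typically use a charting library for the visual variant and a dialogue or what-if tool for the interactive one.

```python
# One set of (invented) feature attributions, delivered in three formats.
attributions = {"age": 0.42, "blood_pressure": -0.17, "cholesterol": 0.08}

# Numerical: raw scores, suited to developers comfortable with metrics.
for name, score in attributions.items():
    print(f"{name:>15}: {score:+.2f}")

# Textual: a plain-language summary, suited to subjects and non-technical users.
top = max(attributions, key=lambda k: abs(attributions[k]))
direction = "increased" if attributions[top] > 0 else "decreased"
print(f"\nThe factor that most influenced this decision was '{top}', "
      f"which {direction} the predicted risk.")

# Visual (approximated here as a text bar chart; a heatmap or bar plot
# would serve operators and subjects better in a real interface).
print()
for name, score in attributions.items():
    bar = "#" * int(abs(score) * 20)
    print(f"{name:>15} | {bar}")
```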


Ethical Dimensions of Explanation Design

Beyond the technical aspects, the paper strongly argues that explanation design is inherently ethical. It highlights three critical dimensions: epistemic inequality, social inequality, and accountability. Epistemic inequality arises when different stakeholders receive varying levels of access to explanations, often disadvantaging marginalized groups. Social inequality can be reinforced if explanations are designed to legitimize institutional priorities rather than empower users to challenge decisions. Finally, accountability and governance are crucial; explanations should not just clarify model outputs but also reveal underlying values, assumptions, and trade-offs within the broader system. The paper stresses that ethical XAI design requires recognizing that explanations are situated, produced under constraints, shaped by norms, and open to strategic misuse. It calls for practitioners to consider not just accuracy, but also justice, understanding, and whose understanding is prioritized. For more in-depth insights, you can read the full paper available at arXiv.org.

In conclusion, the paper advocates for a shift from a method-centric to a process-oriented approach in XAI. By integrating the ‘Who, What, and How’ framework with a strong emphasis on ethical considerations, it aims to support practitioners in building explanation systems that are not only technically robust but also contextually appropriate and ethically sound, fostering greater transparency, trust, and accountability in AI.

Meera Iyer
https://blogs.edgentiq.com
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
