TLDR: A new research paper introduces ‘meaningful transparency’ for civic AI systems, arguing that current technical explanations are insufficient. This concept proposes that transparency must be technically producible, connect to the lived experiences of citizens, and enable them to take action or influence decisions. It advocates for participatory design, contestation mechanisms, and transdisciplinary approaches to ensure AI in public services is truly accountable and understandable to the people it affects.
Artificial intelligence is increasingly integrated into government services, from managing benefits to issuing parking fines. While AI promises efficiency and fairness, it often falls short, producing biased or incorrect outcomes and limiting the ability of citizens and civic workers to influence decisions. This is where transparency comes in, aiming to help people understand decisions made about them and shape the processes behind those decisions.
However, traditional approaches to AI transparency often focus on technical details that are difficult for the public to understand. These technical explanations frequently don’t connect to real-world actions or provide insight into the broader social context of decision-making. A new perspective is needed to make AI transparency truly effective.
Introducing Meaningful Transparency
A recent paper, “Towards Meaningful Transparency in Civic AI Systems”, by Dave Murray-Rust, Kars Alfrink, and Cristina Zaga, introduces the concept of ‘meaningful transparency’ for civic AI systems. This approach emphasizes transparencies that allow the public to engage with AI systems affecting their lives, linking understanding with the potential for action. It moves beyond just showing how an algorithm works to considering the human experience and the ability to make a difference.
The authors propose that meaningful transparency in civic AI systems has three key qualities:
1. Relates to Lived Experience: Transparency should be personal and relevant to the daily lives of those affected by AI decisions. It needs to be understandable to a wide range of people, regardless of their technical background. This means focusing on what truly matters to individuals and how they experience these systems.
2. Actionable: Transparency isn’t just about providing information; it must enable change. This could involve formal mechanisms for challenging decisions, ways of influencing organizational behavior, or options for people to opt out or correct their data. The goal is to empower individuals to act when they encounter issues with AI systems.
3. Technically Producible: While focusing on human needs, the proposed transparency mechanisms must also be technically feasible to create. This involves considering privacy safeguards, the possibility of real-time information, and the resources required to implement such systems.
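As a concrete (though purely illustrative) way to read these three qualities together: a team could treat them as a joint checklist when reviewing a proposed transparency mechanism, where failing any one quality means the mechanism is not yet "meaningful." The class and field names below are hypothetical, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class TransparencyMechanism:
    """Hypothetical checklist mirroring the three qualities above."""
    name: str
    relates_to_lived_experience: bool  # understandable and personally relevant
    actionable: bool                   # enables challenge, opt-out, or correction
    technically_producible: bool       # feasible to build, with privacy safeguards

    def is_meaningful(self) -> bool:
        # A mechanism counts as 'meaningful' only if all three qualities hold.
        return (self.relates_to_lived_experience
                and self.actionable
                and self.technically_producible)

# Example: a raw feature-importance export is producible, but neither
# relatable nor actionable for most decision-subjects.
feature_dump = TransparencyMechanism("feature-importance export",
                                     relates_to_lived_experience=False,
                                     actionable=False,
                                     technically_producible=True)
print(feature_dump.is_meaningful())  # False
```

The point of the sketch is the conjunction: under this framing, no single quality is sufficient on its own.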
The Power of Lived Transparency
The core of meaningful transparency lies in ‘lived transparency,’ which means designing and analyzing transparency systems based on how residents actually experience and interpret technical systems. This approach draws from the idea of ‘lifeworld’ or ‘lived experience,’ recognizing that human existence is fundamentally situational and contextual. For example, studies have shown how nurses’ lived experiences with AI in critical care influence their trust and acceptance of AI recommendations. Similarly, in education, a lack of transparency in AI-driven learning environments can reduce a learner’s autonomy, highlighting the need for transparency that enables informed decisions about learning pathways.
Lived transparency also acknowledges that different groups have unique experiences that should inform AI development. For instance, including people with lived experience of mental disorders in generative AI development can help reduce bias and improve accuracy in mental health applications. This perspective treats lived experience as a form of expertise that is crucial for creating fair and effective AI systems.
Practices for Enacting Meaningful Transparency
To achieve meaningful transparency, the paper highlights three emerging practices:
1. Participation, Co-design, and Co-sensemaking: Involving the public in the development of AI systems helps create a shared understanding and empowers decision-subjects. This can take many forms, such as workshops where citizens discuss what a “trustworthy public-sector AI service” should look like, or co-sensemaking efforts where affected communities identify what information they need from automated decision-making systems. Interactional participation allows users to make choices, like opting out of an algorithm’s assistance or correcting incorrect data, directly connecting technical transparency to user action.
2. Contestation and Agonism: This approach suggests that decisions can be challenged even without a complete understanding of how they were made. Contestable AI emphasizes human involvement and fosters debate between those affected by AI and those who design or deploy it. It’s about creating mechanisms for people to demand justifications and hold system operators accountable, viewing conflict as a productive force in shaping better systems.
3. Transdisciplinarity: Addressing the complex challenges of civic AI requires moving beyond single academic disciplines and integrating diverse forms of knowledge, including lived experience. Transdisciplinary research involves collaboration among scientists, designers, governments, companies, and the public to develop integrated knowledge and facilitate social transformations. This approach acknowledges power imbalances and aims to genuinely incorporate citizens’ input, ensuring AI systems serve the public interest.
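To make the first two practices concrete, here is a minimal sketch (my own illustration, not from the paper) of a decision record that supports opting out and logging contests, with a plain-language reason attached to each decision. All names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CivicDecision:
    """Hypothetical record for one automated decision about a resident."""
    subject_id: str
    outcome: str
    plain_language_reason: str            # ties the decision to lived experience
    contests: list = field(default_factory=list)
    opted_out: bool = False

    def contest(self, grievance: str) -> str:
        # Contestation: log the challenge so a human reviewer must respond.
        self.contests.append(grievance)
        return f"Contest #{len(self.contests)} logged for review."

    def opt_out(self) -> None:
        # Interactional participation: route the case to a human-handled process.
        self.opted_out = True

decision = CivicDecision("resident-42", "benefit reduced",
                         "Reported income exceeded the eligibility threshold.")
print(decision.contest("My income data is out of date."))
decision.opt_out()
```

Even this toy structure shows the paper's linkage of understanding and action: the reason is stored in plain language, and the affected person has two routes, challenge or exit, built into the same record.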
In conclusion, meaningful transparency in civic AI systems is about creating technically feasible, actionable, and deeply human-centered transparency that resonates with the lived experiences of citizens. By embracing participation, contestation, and transdisciplinary approaches, we can build AI systems that are not just transparent, but truly accountable and beneficial to society.