
Building AI That Understands Us: A Framework for Integrating Human Lived Experiences

TLDR: A new research paper introduces the Lived Experience-Centered AI Framework (LEAF), proposing a systematic approach to integrate human lived experiences into every stage of AI design and development. The framework argues that current AI often overlooks subjective, emotional, and contextual human realities, leading to systems that may not be trustworthy or aligned with user needs. LEAF identifies key dimensions of lived experience (self, health, social/cultural, learning) and demonstrates through case studies how incorporating these perspectives can lead to more empathetic, context-aware, and socially aligned AI in domains like education, healthcare, and cultural interaction.

Artificial intelligence systems are becoming increasingly integrated into our daily lives, influencing everything from how we learn to how we receive healthcare. However, a significant challenge in AI development has been the limited understanding and integration of human ‘lived experiences’ – the personal, subjective, emotional, and contextual realities that shape how individuals interact with technology. A new research paper introduces a groundbreaking framework called the Lived Experience-Centered AI Framework (LEAF), aiming to bridge this gap and foster the creation of AI systems that are not only technically robust but also empathetic, context-aware, and truly aligned with human realities.

Understanding Lived Experience in AI

The concept of ‘lived experience’ refers to personal knowledge gained through direct, first-hand involvement in everyday events, rather than through abstract representations. While scientific research has traditionally focused on objective data, there’s a growing recognition of the value of subjective human experiences. In the context of AI, overlooking these experiences can lead to systems that misinterpret user needs, perpetuate biases, or even cause harm. The LEAF framework emphasizes that lived experiences are multifaceted and dynamic, differing based on individual perceptions, cultural backgrounds, and social contexts.

The paper highlights several key dimensions of lived experience that are particularly relevant to AI systems:

  • Sense of Self: How personal reflections, past traumas, and metacognition (awareness of one’s own thought processes) influence interactions with AI.
  • Health: The crucial insights health practitioners and patients bring, especially in high-stakes scenarios where AI conversational agents are used for mental health support or medical consultations.
  • Social and Cultural: The intertwined nature of social and cultural contexts, including gender identities and cultural heritage, which shape how individuals engage with AI and perceive its trustworthiness.
  • Learning: How personal learning styles and the relational dynamics between educators and students are often overlooked when AI systems are designed for educational settings.

Integrating Lived Experience into the AI Development Pipeline

The LEAF framework proposes integrating lived experiences at every stage of the AI development lifecycle, ensuring that AI models are grounded in real-world concerns, especially those of communities often marginalized in technical design processes. This approach fosters co-creative design, where tools reflect both technical feasibility and cultural and social relevance.

  • Problem Definition: Instead of relying on abstract technical goals, lived experiences help define the problem space by focusing on how people are actually affected by an issue. This can uncover situational nuances and prevent systems from embedding developers’ assumptions rather than users’ diverse needs.
  • Data Curation and Annotation: Lived experience helps identify gaps in data representation and guides more inclusive labeling practices. For instance, involving native speakers in language data annotation can preserve linguistic and cultural accuracy.
  • Model Design: Incorporating user narratives, especially accounts of exclusion or discomfort, can inform concrete design decisions, refining system logic and feedback mechanisms to be culturally relevant and emotionally attuned.
  • Model Evaluation and Testing: Inviting users and stakeholders to test AI systems in real-world scenarios can surface harms and usability issues invisible to developers. Diverse user testing, for example, revealed racial disparities in speech recognition systems.
  • Post-deployment Monitoring: Establishing feedback loops through community reporting tools or ethnographic studies allows continuous insights into how AI impacts users in daily practice, moving beyond just technical performance metrics.
  • Role of Policies: Policies can mandate participatory design processes and require transparent technical documentation, such as ‘Model Cards’ or ‘Datasheets for Datasets,’ to ensure AI systems align with community values and experiences.
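To make the documentation idea above concrete, here is a minimal sketch of a Model Card represented as structured metadata. The field names follow the spirit of the Model Cards proposal, but the exact schema, model name, and numbers here are illustrative assumptions, not the paper's specification:

```python
# Illustrative sketch of a 'Model Card' as structured metadata.
# The schema, model name, and metric values are hypothetical.

model_card = {
    "model_details": {
        "name": "community-feedback-classifier",  # hypothetical model
        "version": "0.1",
        "intended_use": "Triage user-reported issues in a civic app",
    },
    "training_data": {
        "sources": ["community reports, 2023-2024"],
        "annotation": "Labels reviewed by native speakers of each language",
    },
    "evaluation": {
        # Disaggregated metrics surface group-level disparities that a
        # single aggregate score would hide.
        "accuracy_by_group": {"group_a": 0.91, "group_b": 0.84},
    },
    "limitations": [
        "Not validated for dialects outside the annotation pool",
    ],
}

def flag_disparities(card: dict, max_gap: float = 0.05) -> bool:
    """Return True if group-level accuracy differs by more than max_gap."""
    scores = card["evaluation"]["accuracy_by_group"].values()
    return max(scores) - min(scores) > max_gap

print(flag_disparities(model_card))  # the 0.07 gap exceeds 0.05, so True
```

The point of the sketch is that once lived-experience concerns (who annotated the data, how performance varies across groups) are recorded as structured fields rather than free text, policy checks like `flag_disparities` can be automated.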

Real-World Applications of LEAF

The paper illustrates the framework’s applicability through several case studies:

  • Students and the Autograder: In an online computer science course, an AI autograder’s mistakes led students to ignore detailed feedback, hindering learning. LEAF suggests that understanding student and instructor perspectives on feedback interpretation is crucial for designing effective educational AI.
  • Conversational Clinical Agents: While AI models show promise in healthcare, they often lack the nuanced understanding of real-world patient and clinician experiences. The AMIE (Articulate Medical Intelligence Explorer) model, which used a self-play simulation environment incorporating diverse demographic vignettes, demonstrated improved contextual sensitivity by centering both clinician and patient perspectives.
  • Gods and Machines (Cultural Alignment): AI systems processing sacred religious texts often treat them as neutral data, overlooking their profound cultural and spiritual significance. LEAF emphasizes that AI engaging with such data must be designed with awareness of embedded cultural, religious, and historical meanings to prevent cultural misalignment or erasure.
  • AI for Task Instruction: Multi-modal AI systems that provide instructions (e.g., for fixing a printer) can be hindered by noise in inputs and a lack of understanding of diverse user mental models. LEAF suggests context-aware modality switching and considering operational environments to enhance usability and trustworthiness.
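The context-aware modality switching mentioned in the last case study can be sketched as a simple decision rule. Everything here is an illustrative assumption (the thresholds, parameter names, and modality labels are not from the paper); the point is only that the system consults the user's operational environment before choosing how to deliver an instruction:

```python
# Illustrative sketch of context-aware modality switching for task
# instructions. Thresholds and parameter names are hypothetical.

def choose_modality(noise_db: float, hands_busy: bool, screen_visible: bool) -> str:
    """Pick an output modality for an instruction step given user context."""
    if noise_db > 70:  # too loud for speech output to be heard reliably
        return "visual" if screen_visible else "haptic"
    if hands_busy and not screen_visible:
        return "audio"  # user cannot look at or hold a screen
    return "audio+visual"  # default: redundant multi-modal delivery

# A user fixing a printer in a loud office, screen mounted nearby:
print(choose_modality(noise_db=80, hands_busy=True, screen_visible=True))   # visual
# Hands full, no screen in view, quiet room:
print(choose_modality(noise_db=40, hands_busy=True, screen_visible=False))  # audio
```

A real system would infer these context signals from sensors and user models rather than explicit arguments, but the structure is the same: lived operational context, not developer defaults, drives the interaction choice.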

The Path Forward for Experience-Centered AI

The research underscores that integrating human lived experiences is not just an ethical imperative but a design necessity for creating trustworthy, fair, and effective AI systems. By embracing methodological pluralism that includes first-person perspectives and contextual knowledge, the LEAF framework provides a flexible and dynamic approach to ensure AI systems truly align with human values and needs. This work lays a foundation for future research to explore the framework's relevance in other sectors and to develop new tools for capturing the multifaceted nature of lived experience, ultimately fostering a more responsible and socially grounded AI practice. For more details, see the full paper.

Karthik Mehta (https://blogs.edgentiq.com)
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
