TLDR: This paper proposes a workshop to address the challenges of conducting longitudinal studies of AI systems. It argues that traditional short-term evaluations fail to capture how user interactions with AI evolve over time through learning, adaptation, and repurposing. The workshop aims to identify these challenges, provide hands-on experience in designing long-term protocols and systems, and foster a community that promotes longitudinal research as a more widely adopted method for evaluating AI tools.
The paper “Facilitating Longitudinal Interaction Studies of AI Systems” highlights a crucial shift in how we evaluate AI tools. Traditionally, user interface software and technology (UIST) research has relied on short-term evaluations. However, the way people interact with AI systems changes significantly over time. Users learn, adapt, and even repurpose AI tools, making one-time studies insufficient to capture the full picture of their evolving engagement.
This research emphasizes the growing need for longitudinal studies, which involve observing user interactions over extended periods. Such studies offer deeper insights into how AI tools perform in real-world conditions, moving beyond initial proofs-of-concept. They are particularly vital for AI because of its dynamic nature and the ease with which these systems can be updated. By conducting longer-term research, researchers can also discover new interaction mechanisms through richer user data.
The authors point out several key aspects that longitudinal studies can illuminate:

- “Novelty and placebo effects,” where initial excitement might skew perceptions before they stabilize.
- “Learning phases,” as users build mental models of a tool.
- “Customization and personalization,” as users tailor tools to their needs.
- “Appropriation,” where tools are used for unintended tasks.
- “Shifts in usage patterns,” such as evolving behaviors around features.
- “Shifts in perception and capability,” including how users’ views of tasks and systems change.
- “(Un)sustained use and workflow integration,” revealing whether users adopt, partially adopt, or abandon a tool, and how it fits into their daily routines.
Despite these clear benefits, implementing longitudinal studies presents significant challenges. Researchers must maintain long-term tool deployments (including managing system bugs), recruit and retain participants over months, design effective evaluation protocols, and collect data without constant observation.
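One common way to collect data without constant observation is lightweight in-tool event logging. The sketch below is illustrative only and not from the paper: the class name, event names, and fields are hypothetical, and a real deployment would need consent handling, anonymization, and upload logic.

```python
import json
import time
from pathlib import Path


class InteractionLogger:
    """Minimal append-only event logger for a longitudinally deployed tool.

    Hypothetical sketch: records one JSON object per line (JSON Lines),
    so a crash or partial write loses at most one event.
    """

    def __init__(self, log_path):
        self.log_path = Path(log_path)

    def log(self, participant_id, event, **details):
        record = {
            "ts": time.time(),            # timestamp, for reconstructing usage over weeks
            "participant": participant_id,
            "event": event,               # e.g. "session_start", "feature_used"
            "details": details,           # free-form context for the event
        }
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def load(self):
        # Read all recorded events back for later analysis.
        with self.log_path.open(encoding="utf-8") as f:
            return [json.loads(line) for line in f]
```

For example, a deployed prototype might call `logger.log("p01", "feature_used", feature="autocomplete")` at each interaction, yielding a timestamped trace of usage patterns across the study period without any live observation.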
To address these hurdles, a workshop is proposed with three main goals. First, it aims to identify existing challenges in longitudinal UIST research and propose practical solutions. This includes designing effective protocols that balance data richness with participant burden, building robust and adaptable systems, clarifying the unique contributions of longitudinal studies, bridging academic and industry approaches, and exploring human-AI studies where both users and AI agents evolve.
Second, the workshop will offer participants hands-on experience in designing longitudinal protocols and prototyping systems. Experts with experience in such studies will guide participants through simplified design processes, preparing them for future research. Third, the initiative seeks to foster a community of researchers interested in longitudinal studies, promoting them as a more widely adopted method for designing, building, and evaluating future UIST tools.
The workshop is structured into four phases: an introduction to longitudinal practices, a session for identifying challenges, hands-on protocol design and system prototyping, and a final reflection and summary. Post-workshop plans include co-writing a blog post, creating living GitHub artifacts (like a collection of longitudinal papers and protocol templates), and maintaining engagement through online communities.
This work underscores that as AI tool development and user evolution accelerate, the UIST community must invest in longitudinal methods to bridge the gap between strong prototypes and sustained real-world deployment. For more details, you can refer to the full paper available at https://arxiv.org/pdf/2508.10252.


