TLDR: This research paper details the iterative design and evaluation of a gaze-based learning analytics dashboard for English Language Arts (ELA) instruction. It combines eye-tracking data with a conversational AI agent to help teachers and students interpret complex gaze patterns, offering insights into student cognition and engagement. The findings highlight the importance of user-centered design, data storytelling, and explainable AI to make such advanced analytics approachable and pedagogically valuable in real classroom settings.
Educational technology is constantly evolving, but many current tools still rely on basic data like clicks and time spent, which don’t fully capture how students think and engage with learning materials. This limitation is particularly noticeable in subjects like English Language Arts (ELA), where understanding a student’s silent reading process is crucial but often invisible to teachers.
Recent advancements in multimodal learning analytics (MMLA), especially eye-tracking technology, offer a promising solution. Eye gaze can reveal deep insights into cognitive processes such as information acquisition, reading strategies, and attention. However, this rich data often remains confined to academic research, rarely making its way into practical classroom tools because it’s complex and hard for teachers and students to interpret.
Bridging the Gap with User-Centered Design and AI
A new research paper, “Designing Gaze Analytics for ELA Instruction: A User-Centered Dashboard with Conversational AI Support,” explores how to make gaze data accessible and useful for ELA instruction. The paper details an iterative design process for a gaze-based learning analytics dashboard, developed through five studies involving both teachers and students. The core idea is to use user-centered design and data storytelling principles to transform complex gaze data into actionable insights for reflection, assessment, and instructional decisions.
The researchers, including Eduardo Davalos, Yike Zhang, Shruti Jain, Namrata Srivastava, Trieu Truong, Nafees-ul Haque, Tristan Van, Jorge A. Salas, Sara McFadden, Sun-Joo Cho, Gautam Biswas, and Amanda Goodwin, found that gaze analytics can be both approachable and valuable when supported by familiar visualizations, clear explanations, and narrative structures. A key innovation in their work is the integration of a conversational agent, powered by a large language model (LLM), which helps users interpret gaze data through natural language interactions.
The Journey of Design and Discovery
The project followed a design-based research methodology, continuously refining the dashboard based on feedback from real classroom settings. This involved several stages:
- Initial Classroom Study: A feasibility study with 5th-grade students and a teacher showed that gaze data could be collected and presented successfully. Students found the eye-tracking feature engaging, but teachers noted the dashboard’s complexity and the need for better support in interpreting the data.
- In-Depth Interviews: Separate interviews with teachers and students revealed a strong desire for an AI chatbot to help interpret data, along with visual aids, legends, and a feature to group students based on educational categories (e.g., English as a Second Language learners).
- Interactive Design Workshop: This workshop validated many design updates. Participants preferred heatmaps as the most intuitive gaze visualization, finding scanpaths too complex. The need for data storytelling principles became clear, with requests for simplified legends, info buttons, and personalized feedback that highlighted individual progress.
- LLM-Generated Reports and Conversational Agent: In response to requests for more actionable insights, an LLM-driven system was developed to generate personalized classroom summaries. Teachers appreciated the structure and pedagogical relevance of these reports. A conversational agent prototype was then introduced, allowing teachers to ask natural language questions about the data. While helpful, concerns about the factual reliability and transparency of AI-generated responses were raised, emphasizing the need for explainable AI.
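The paper reports that participants found heatmaps the most intuitive gaze visualization, but it does not publish an implementation. As a rough illustration of how such a view is typically produced, the sketch below bins (x, y, duration) fixations into a grid weighted by fixation time and applies a light Gaussian blur so clusters read as "hot spots." The grid size, blur width, and page dimensions are illustrative assumptions, not values from the study.

```python
import numpy as np

def gaze_heatmap(fixations, width, height, grid=(40, 30), sigma=1.5):
    """Bin (x, y, duration_ms) fixations into a grid and blur it.

    Returns a 2D array (rows=grid[1], cols=grid[0]) whose cell values
    sum to the total fixation duration, i.e. a time-density map.
    """
    cols, rows = grid
    heat = np.zeros((rows, cols))
    for x, y, dur in fixations:
        c = min(int(x / width * cols), cols - 1)
        r = min(int(y / height * rows), rows - 1)
        heat[r, c] += dur  # weight each cell by time spent fixating there

    # Cheap separable Gaussian blur so isolated fixations spread smoothly
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    for axis in (0, 1):
        heat = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, heat)
    return heat

# Example: three fixations on a hypothetical 1280x720 reading page.
# Two fixations cluster at the top left (a reread passage), one sits lower.
fix = [(200, 150, 320), (210, 160, 450), (900, 500, 180)]
hm = gaze_heatmap(fix, 1280, 720)
```

In a dashboard, `hm` would be rendered as a translucent color overlay on the reading passage, which is what makes the format approachable: the "where" and "how long" of attention are visible without interpreting individual saccades, unlike the scanpaths participants found too complex.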
Key Findings and Future Directions
The studies confirmed that both students and teachers are enthusiastic about gaze-based analytics, finding them valuable for understanding reading behaviors, provided there is adequate support for interpretation. Data storytelling, through clear narrative structures, contextual explanations, and personalization, significantly improved the dashboard’s usability and educational utility. The conversational agent proved effective in helping users navigate and interpret data, but trust and transparency remain critical design considerations.
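The paper does not disclose how the LLM-driven reports were prompted; purely as a hypothetical sketch, the helper below shows one way per-student gaze metrics could be assembled into a structured, grounded prompt. The function name, metric fields, and the instruction to flag ungrounded claims are assumptions of mine, included to reflect the transparency and explainability concerns the teachers raised.

```python
def build_report_prompt(class_metrics, grouping=None):
    """Assemble per-student gaze metrics into a structured LLM prompt.

    class_metrics: dict of student -> {"fixation_ms": ..., "regressions": ...}
    grouping: optional dict of student -> category (e.g. "ESL"), mirroring
    the teachers' request to group students by educational category.
    """
    lines = [
        "You are an assistant helping an ELA teacher interpret eye-tracking data.",
        "Summarize reading behavior per student, then give 2-3 instructional suggestions.",
        "",
        "Student metrics:",
    ]
    for student, m in sorted(class_metrics.items()):
        tag = f" [{grouping[student]}]" if grouping and student in grouping else ""
        lines.append(f"- {student}{tag}: mean fixation {m['fixation_ms']} ms, "
                     f"{m['regressions']} regressive saccades")
    lines.append("")
    # Grounding instruction: every claim must trace back to a listed metric,
    # so the teacher can verify the AI-generated summary against the data.
    lines.append("Only use the metrics above; flag any claim you cannot ground in them.")
    return "\n".join(lines)

prompt = build_report_prompt(
    {"Ava": {"fixation_ms": 240, "regressions": 12},
     "Ben": {"fixation_ms": 310, "regressions": 4}},
    grouping={"Ava": "ESL"},
)
```

Keeping the raw metrics inside the prompt, rather than letting the model fetch them opaquely, is one simple way to make AI-generated responses traceable back to the underlying data.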
The researchers propose several design principles for future educational dashboards:
- Start with familiar visuals like heatmaps before introducing more complex representations.
- Use progressive disclosure, offering high-level summaries that can be drilled down for more detail.
- Support self-narration by highlighting personal progress and tailored recommendations.
- Layer in storytelling aids such as annotated visualizations, simplified legends, and AI-generated summaries.
- Design for explainable AI, ensuring that AI-generated outputs are traceable and verifiable.
- Enable on-demand inquiry, allowing users to request custom analyses in natural language.
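The progressive-disclosure principle above can be sketched in a few lines: the dashboard first shows a one-number class summary, and per-student detail is computed only when a user drills down. The data shapes and function names here are illustrative, not taken from the paper's system.

```python
from statistics import mean

def class_summary(student_fixations):
    """High-level view shown first: class size and average fixation duration."""
    return {
        "students": len(student_fixations),
        "mean_fixation_ms": round(mean(
            mean(durations) for durations in student_fixations.values()), 1),
    }

def student_detail(student_fixations, name):
    """Drill-down revealed on demand for a single student."""
    durs = sorted(student_fixations[name])
    return {"fixations": len(durs), "min_ms": durs[0], "max_ms": durs[-1],
            "mean_ms": round(mean(durs), 1)}

# Hypothetical fixation durations (ms) for two students
data = {"Ava": [180, 240, 300], "Ben": [200, 400]}
summary = class_summary(data)          # shown by default
detail = student_detail(data, "Ben")   # fetched only on request
```

Separating the summary from the drill-down keeps the default view simple while still letting a curious teacher reach the underlying detail, which is the essence of the principle.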
This work represents a significant step towards integrating novel data modalities like eye-tracking into classroom contexts in a way that is transparent, usable, and pedagogically relevant. For more details, see the full research paper, “Designing Gaze Analytics for ELA Instruction: A User-Centered Dashboard with Conversational AI Support.”