
Conversational AI Improves University Feedback Systems

TLDR: A pilot study at UC Santa Cruz explored using LLM-powered chatbots to collect classroom feedback. The system, comprising PromptDesigner, FeedbackCollector, and FeedbackAnalyzer, engaged students in conversational dialogues, providing instructors with timely, detailed, and actionable insights. Both instructors and students found the LLM-based system more effective and engaging than traditional surveys, though student concerns about anonymity were noted.

Traditional end-of-quarter surveys often leave instructors guessing about what truly worked or didn’t in their courses. These surveys are frequently generic, filled with vague comments, and suffer from low student engagement, failing to provide timely or actionable insights for improving teaching methods.

A recent pilot study conducted at the University of California, Santa Cruz, explores a novel approach: leveraging Large Language Model (LLM)-powered chatbots to revolutionize the classroom feedback process. This innovative system aims to engage students in reflective, conversational dialogues, offering a richer, more nuanced understanding of their learning experiences.

The system comprises three key components: PromptDesigner, FeedbackCollector, and FeedbackAnalyzer. PromptDesigner is a web-based tool that allows instructors to craft, test, and refine conversational prompts tailored to their specific course needs, enabling them to upload syllabi and specify areas for feedback. FeedbackCollector is the student-facing chatbot, powered by GPT-4, which gathers feedback asynchronously through conversational sessions designed to encourage detailed reflections. Finally, FeedbackAnalyzer is a dashboard for instructors, visualizing anonymized student-chatbot conversations, key themes, and AI-generated summaries of responses.
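To make the FeedbackCollector's conversational flow concrete, here is a minimal sketch of what such a chatbot loop might look like. This is purely illustrative: the real system is built on GPT-4, but here the model call is replaced by a stub so the example runs standalone, and all function names (`stub_llm`, `collect_feedback`) are hypothetical, not taken from the paper.

```python
def stub_llm(history):
    """Stand-in for a GPT-4 call: returns a canned follow-up question,
    then a closing message once the conversation is long enough."""
    if len(history) < 3:
        return "Could you give a specific example from a recent lecture?"
    return "Thank you, your feedback has been recorded."

def collect_feedback(student_replies, llm=stub_llm):
    """Drive a short conversational feedback session.

    Opens with an instructor-designed prompt (the PromptDesigner's role),
    then appends an LLM-generated follow-up after each student reply,
    which is what encourages the elaboration described in the study.
    """
    history = ["How are the course readings supporting the lectures?"]
    for reply in student_replies:
        history.append(reply)          # student turn
        history.append(llm(history))   # chatbot follow-up or closing turn
    return history

# Example session: two student replies yield one follow-up and one closing turn.
session = collect_feedback([
    "The readings feel disconnected from the lectures.",
    "Week 3's paper was never mentioned in class.",
])
```

The resulting transcript (opening prompt, replies, follow-ups) is exactly the kind of anonymized conversation the FeedbackAnalyzer dashboard would then summarize for the instructor.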

The pilot study was deployed in two graduate courses at UC Santa Cruz—Game Data Science and Ethics in HCI—involving a total of 40 students. Feedback was collected at the 5-week and 10-week marks, allowing for mid-course adjustments. Semi-structured interviews with students and the instructor at the end of the term provided qualitative insights into their experiences.

Instructor Perspectives

The instructor involved in the study expressed a strong preference for the LLM-based system over traditional surveys. They highlighted the system’s ability to ask course-specific questions, providing feedback that was far more actionable and timely. The flexibility to tailor prompts to the curriculum was highly valued, creating a feedback process that felt relevant and grounded in the teaching context.

Efficiency was another significant benefit; the chatbot was quick to use, and its conversational style made it engaging for students, leading to higher participation rates compared to standard surveys. Collecting feedback mid-quarter allowed for real-time adjustments, improving teaching methods and giving students a sense that their input genuinely influenced the course. The FeedbackAnalyzer also helped uncover specific insights, such as a misalignment between readings and lectures, which prompted immediate changes in classroom discussions. The instructor also gained deeper insights into student habits and expectations, which traditional surveys could not provide.


Student Perspectives

Students generally welcomed the LLM-based feedback system, appreciating its conversational format. They found it more reflective, meaningful, and less like a chore compared to conventional multiple-choice surveys. The chatbot’s ability to ask follow-up questions was a notable strength, encouraging elaboration and sustaining meaningful dialogue, making the feedback process feel more genuine.

However, students also raised important points for improvement. Some suggested that a more informal and impartial chatbot persona could encourage greater candor. Concerns about anonymity were significant, with students expressing initial hesitation about providing honest criticism unless they were fully confident their responses could not be traced back to them. Clearer, visible reassurances about privacy on the interface were suggested to foster full openness from the beginning.

In conclusion, this pilot study suggests that LLM-based feedback systems offer richer insights, greater contextual relevance, and higher engagement than standard survey tools. While promising, future work will focus on scaling the system to a larger number of faculty across diverse disciplines, enhancing adaptability, and addressing design challenges such as personalization, anonymity, and interface clarity. For more details, you can read the full research paper here.

Rhea Bhattacharya
https://blogs.edgentiq.com
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
