TL;DR: This research paper introduces a system that uses skill-based explanations to improve serendipitous course recommendations for undergraduate students. It develops a deep learning model to extract skills from course descriptions and integrates these into a recommendation system. A user study found that while explanations didn’t change overall interest, they significantly increased interest in highly unexpected courses and boosted decision-making confidence, especially for undeclared majors, by reducing neutral responses. The study emphasizes the value of clear, skill-focused explanations in educational recommendation systems.
Navigating academic choices in U.S. undergraduate education can be daunting. Faced with a vast array of courses and limited guidance, students frequently struggle to make informed decisions about their academic paths. Personalized course recommendation systems exist, but they often fall short of explaining why a particular course is relevant or how it aligns with a student’s interests and goals.
A recent research paper, “Skill-based Explanations for Serendipitous Course Recommendation”, addresses this challenge by introducing a novel approach that integrates skill-based explanations into a course recommendation framework. The core idea is to make unexpected yet relevant course suggestions more approachable and actionable for students, particularly those who are still exploring their academic interests.
Extracting Skills from Course Descriptions
The foundation of this new system is a deep learning model that extracts relevant concepts, or “skills,” from course descriptions. Unlike simpler methods that rely on single words, the model identifies multi-word keyphrases that better capture the semantic meaning of a course. The researchers developed a concept extraction model that combines BERT (Bidirectional Encoder Representations from Transformers) with a BiLSTM-CRF (Bidirectional Long Short-Term Memory with Conditional Random Fields) architecture. The model was trained on datasets from academic and educational domains to accurately pinpoint skills within course content, and expert evaluations confirmed the quality and accuracy of the extracted concepts, laying a strong groundwork for the explanation system.
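The paper’s exact model isn’t reproduced here, but the final step of any BiLSTM-CRF keyphrase tagger, turning per-token BIO labels into multi-word skill phrases, can be sketched in plain Python (the tokens and tags below are illustrative, not taken from the paper):

```python
def decode_keyphrases(tokens, tags):
    """Group tokens labeled with a BIO scheme (B-SKILL / I-SKILL / O)
    into multi-word skill keyphrases."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B-SKILL":
            if current:                      # close the previous phrase
                phrases.append(" ".join(current))
            current = [token]                # start a new phrase
        elif tag == "I-SKILL" and current:
            current.append(token)            # continue the open phrase
        else:                                # "O" or a stray I- tag
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# Hypothetical tagger output on a course-description snippet
tokens = ["Introduces", "neural", "networks", "and", "gradient", "descent", "."]
tags   = ["O", "B-SKILL", "I-SKILL", "O", "B-SKILL", "I-SKILL", "O"]
print(decode_keyphrases(tokens, tags))  # ['neural networks', 'gradient descent']
```

In a full pipeline, the BERT and BiLSTM-CRF layers would produce the `tags` sequence; the span-decoding step shown here is what yields multi-word skills rather than single keywords.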
Personalized Skill-Based Explanations
The study highlights that explanations are especially vital for “serendipitous” recommendations – suggestions that are unexpected but still highly relevant. Without clear reasons, students might dismiss unfamiliar but valuable courses. Skills offer a natural and intuitive way to explain these recommendations. The system generates two types of skill lists for each recommended course:
- Learned Skills: These are skills that the student has likely already acquired through previously taken courses, showing how the new recommendation connects to their existing knowledge.
- New Skills: These are novel skills that the recommended course offers, highlighting what new knowledge or abilities the student could gain.
By presenting both familiar and new skills, the explanations aim to provide a comprehensive understanding of a course’s utility, helping students evaluate its relevance and build confidence in their decision-making.
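At its core, this split is a set comparison between the skills extracted from a student’s past courses and those extracted from the recommended course. A minimal sketch, with hypothetical function and data names:

```python
def explain_recommendation(past_course_skills, recommended_skills):
    """Partition a recommended course's skills into 'learned' skills
    (seen in the student's past courses) and 'new' skills.

    past_course_skills: list of skill sets, one per previously taken course
    recommended_skills: set of skills extracted from the recommended course
    """
    seen = set().union(*past_course_skills) if past_course_skills else set()
    learned = sorted(recommended_skills & seen)  # connects to prior knowledge
    new = sorted(recommended_skills - seen)      # what the course would add
    return learned, new

past = [{"linear algebra", "probability"}, {"python", "probability"}]
rec = {"probability", "neural networks", "python"}
learned, new = explain_recommendation(past, rec)
print(learned)  # ['probability', 'python']
print(new)      # ['neural networks']
```

The paper likely uses richer skill matching than exact string equality; this sketch only shows the learned/new partition that the explanations present to students.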
The Recommendation Engine and User Study
The skill-based explanations were integrated into the AskOski system at the University of California, Berkeley, which is powered by PLAN-BERT. PLAN-BERT is an advanced deep learning model that considers a student’s past enrollment history, major information, and course attributes to generate personalized course suggestions. To ensure a diverse range of options, the system was designed to recommend courses from different departments.
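The paper does not spell out its diversification algorithm, but one common way to ensure recommendations span departments is a greedy pass over a relevance-ranked candidate list that skips departments already represented. A sketch under that assumption (course names are illustrative):

```python
def diversify(ranked_courses, k=5):
    """Pick k courses from a relevance-ranked list while avoiding
    repeated departments where possible.

    ranked_courses: list of (course_id, department), best-first
    """
    picked, used_depts = [], set()
    # First pass: at most one course per department, in relevance order
    for course, dept in ranked_courses:
        if dept not in used_depts:
            picked.append(course)
            used_depts.add(dept)
        if len(picked) == k:
            return picked
    # Fallback: fill any remaining slots with the next-best courses
    for course, dept in ranked_courses:
        if course not in picked:
            picked.append(course)
        if len(picked) == k:
            break
    return picked

ranked = [("CS 189", "EECS"), ("CS 170", "EECS"), ("STAT 134", "Statistics"),
          ("DATA 100", "Data Science"), ("EE 120", "EECS")]
print(diversify(ranked, k=3))  # ['CS 189', 'STAT 134', 'DATA 100']
```

In the actual system, the ranked list would come from PLAN-BERT’s scores over the student’s history, major, and course attributes; only the department-spreading step is sketched here.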
An online user study involving 53 undergraduate students was conducted to assess the effectiveness of these skill-based explanations. Participants were randomly divided into two groups: one received recommendations with explanations, and the other received recommendations without them. They evaluated five recommended courses based on their interest, how unexpected the recommendation was, and its novelty.
Key Findings and Impact
While the explanations did not significantly alter overall interest or perceived unexpectedness, the study revealed crucial insights:
- Increased Interest in Unexpected Courses: Explanations significantly increased user interest in courses that were perceived as highly unexpected. This suggests that explanations help students see the value in courses they might otherwise overlook due to unfamiliarity.
- Boosted Decision-Making Confidence: The presence of explanations bolstered students’ confidence in their decisions and notably reduced the number of “neutral” responses. This effect was particularly pronounced among students who had not yet declared their majors, indicating that explanations are especially beneficial for those still exploring their academic paths. For undeclared students, the absence of explanations led to a significantly higher percentage of neutral opinions (36.7%) compared to when explanations were provided (16.3%).
- Perceived Usefulness: Participants generally found the explanations useful in determining their interest in a course.
These findings underscore the importance of integrating skill-related data and clear explanations into educational recommendation systems. Such systems can empower students to make more informed choices, broaden their academic exploration, and confidently engage with a wider range of course options.
Future Directions
The researchers acknowledge several areas for future improvement, including refining the course diversification strategy, incorporating course prerequisites and sequences, and conducting larger-scale studies. The potential of Large Language Models (LLMs) is also highlighted as a promising avenue for enhancing skill detection, relationship extraction, and overall system explainability in educational contexts.


