TLDR: This research explores how Augmented Intelligence (AuI) can improve Intelligent Tutoring Systems (ITS) by involving teachers in correcting AI errors, specifically in the MathAIde app for handwritten math. Through a user-centered design process, including brainstorming, prototyping, and real-world testing, the study found that providing teachers with pre-defined options to correct AI misidentifications significantly enhances the system’s accuracy, usefulness, and teacher trust, especially in resource-limited environments.
Artificial Intelligence in Education (AIED) holds immense promise for transforming learning experiences, offering benefits like personalized learning, increased student engagement, and improved retention rates. However, the integration of AI into education is not without its challenges. Key concerns include ensuring teachers play a critical role in the design process, addressing the limitations and reliability of AI tools (such as the tendency for Large Language Models to ‘hallucinate’ or generate incorrect information), and overcoming the accessibility gap for technological resources, especially in underserved regions.
A recent study introduces Augmented Intelligence (AuI) as a powerful solution to these challenges. AuI focuses on enhancing human capabilities rather than replacing them entirely. In this model, AI systems suggest solutions or provide analyses, while humans offer final assessments and corrections, thereby helping the AI to learn and improve over time. This collaborative approach builds trust and ensures the systems remain aligned with educational goals.
The research specifically delves into the design, development, and evaluation of MathAIde, an Intelligent Tutoring System (ITS) designed to correct mathematics exercises. What makes MathAIde unique is its ability to process handwritten student work using computer vision and AI, providing feedback based on photos taken by teachers. This ‘AIED unplugged’ approach is particularly beneficial for environments with limited technological resources, as students can continue to work on paper while teachers act as a bridge to the digital system.
Despite its innovative approach, MathAIde faces a common challenge: the accuracy of handwritten math recognition. Experimental results showed an overall accuracy of about 70% when every character in an expression had to be recognized correctly, rising to about 80% when a single misread character was tolerated. Because of this limitation, teachers often needed to intervene when the AI misread a student’s correct answer or misclassified an error.
To address this, the researchers employed a user-centered mixed-methods approach to integrate AuI into MathAIde. This comprehensive methodology involved four key stages: brainstorming sessions with 14 elementary school math teachers, high-fidelity prototyping, A/B testing with three teachers, and a real-world case study involving three teachers and 49 students.
During brainstorming, teachers proposed various ideas for correcting AI misidentifications, ranging from direct number editing to using colors to highlight different parts of an equation. These ideas emphasized the need for both high user control and computational automation, aligning with advanced human-centered AI frameworks.
Based on these ideas and considering technical feasibility, two main prototypes were developed. Prototype A allowed teachers to directly edit misidentified numbers on a digitized version of the student’s answer. Prototype B offered pre-defined options for teachers to report errors, such as indicating that the student got the answer right despite the AI’s assessment, or changing the type of error identified by the AI.
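To make the contrast concrete, here is a minimal sketch of how Prototype B’s pre-defined override options could be modeled. The paper does not publish MathAIde’s data structures, so all names here (`TeacherCorrection`, `Assessment`, `apply_correction`) are hypothetical and purely illustrative.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class TeacherCorrection(Enum):
    """Pre-defined override options, loosely modeled on Prototype B."""
    STUDENT_WAS_RIGHT = auto()   # AI marked the answer wrong, but it is correct
    CHANGE_ERROR_TYPE = auto()   # AI flagged an error, but of the wrong kind
    CONFIRM_AI = auto()          # teacher accepts the AI's assessment as-is


@dataclass
class Assessment:
    answer_id: int
    ai_verdict: str              # e.g. "wrong: sign error"
    correction: TeacherCorrection = TeacherCorrection.CONFIRM_AI
    new_error_type: Optional[str] = None


def apply_correction(a: Assessment) -> str:
    """Resolve the final verdict after the teacher's (optional) override."""
    if a.correction is TeacherCorrection.STUDENT_WAS_RIGHT:
        return "correct"
    if a.correction is TeacherCorrection.CHANGE_ERROR_TYPE:
        return f"wrong: {a.new_error_type}"
    return a.ai_verdict
```

The design choice the sketch captures is why Prototype B was faster in testing: a teacher picks one of a handful of options in a single tap, rather than re-typing the digitized expression character by character as Prototype A required.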
The A/B testing phase revealed that Prototype B, with its pre-defined correction options, was more efficient and preferred by teachers. While Prototype A offered more granular control, it required more interaction and time, which could be impractical in a busy classroom setting. Teachers appreciated the agility of Prototype B, even suggesting improvements to its terminology, like changing ‘Report Error’ to something that more clearly indicates a ‘review’ or ‘correction’ of the AI’s assessment.
The final case study deployed the chosen AuI functionality (Prototype B) in real classrooms. The results were compelling: teachers used the AuI feature on 139 of 784 student answers, about 17.7% of all assessments. In the vast majority of these interventions (130 of the 139), teachers indicated that a student’s answer was right even though MathAIde had initially marked it wrong. This highlights the crucial role of human oversight in ensuring fair and accurate assessment, especially when the AI is limited by factors such as varying handwriting styles or photo quality.
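The headline figures above follow directly from the reported counts; a quick recomputation makes the proportions explicit:

```python
# Case-study counts reported in the study
total_answers = 784   # student answers assessed in the case study
aui_uses = 139        # teacher overrides of the AI's assessment
marked_right = 130    # overrides saying the student was actually correct

override_rate = aui_uses / total_answers   # share of answers the teacher corrected
share_right = marked_right / aui_uses      # share of overrides restoring full credit

print(f"Teachers overrode the AI on {override_rate:.1%} of answers")
print(f"{share_right:.1%} of overrides marked the student's answer as right")
```

In other words, teachers intervened on roughly one answer in six, and when they did, it was almost always to restore credit the AI had wrongly withheld.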
The study concludes that a user-centered, mixed-methods approach is vital for designing effective AuI-based ITS. By involving teachers throughout the design process, the researchers were able to create a usable and validated solution that balances user needs with technological capabilities. This approach not only enhances the reliability and trustworthiness of AIED systems but also increases their potential for adoption, particularly in resource-limited settings where accessible educational technology is most needed.
For more in-depth information, you can read the full research paper: A Mixed User-Centered Approach to Enable Augmented Intelligence in Intelligent Tutoring Systems: The Case of MathAIde app.


