TLDR: A recent study from Monash University indicates that despite high self-reported proficiency, many students lack the practical and ethical AI literacy needed to effectively use and evaluate generative AI tools, raising concerns for education and future workforce readiness.
A groundbreaking study conducted by Monash University researchers, set to be published in the December 2025 issue of ‘Computers and Education: Artificial Intelligence,’ has unveiled a concerning disparity between students’ self-perceived AI proficiency and their actual competency. The study introduces the Generative AI Literacy Assessment Test (GLAT), the first of its kind designed to evaluate not only students’ ability to utilize generative AI tools but also their capacity to comprehend and ethically apply them.
The findings are stark: nearly all participants claimed some level of proficiency with AI chatbots, yet only those who scored highly on the GLAT could successfully complete complex AI-assisted tasks, such as analyzing intricate data. Students who rated themselves highly on self-report surveys but performed poorly on the GLAT struggled to extract useful information or to spot errors in chatbot responses, indicating that self-reported confidence is a poor predictor of actual AI literacy.
This critical gap between perceived and actual AI literacy carries significant implications across various sectors, including education, national security, workforce preparedness, and the evolving landscape of information warfare. The study highlights the urgent need for educational institutions to move beyond traditional digital literacy and embrace AI literacy as a core educational priority.
Broader surveys reinforce these concerns. A 2025 HEPI survey found that while student use of AI has surged, with 92% of students now using AI in some form (up from 66% in 2024), and 88% using GenAI for assessments (up from 53% in 2024), only 36% have received institutional support to develop AI skills. Students primarily use AI for explaining concepts, summarizing articles, and suggesting research ideas, but a notable 18% have directly included AI-generated text in their work.
Concerns among students include the fear of committing academic misconduct and the risk of receiving false or biased results from AI. Educators, too, express significant concerns about plagiarism, overreliance on AI, misinformation, and insufficient training. Some students and faculty also worry that AI could foster dependency and undermine critical thinking, citing instances of AI 'hallucinations' that produced citations to non-existent sources.
Despite these challenges, there’s a growing recognition of AI fluency as a highly in-demand skill in the job market. Educational leaders globally view AI literacy as an essential component of basic education for every student, with 76% of global leaders and 79% of US higher education educators agreeing. Initiatives like the AI Literacy Framework (AILit) by the European Commission and OECD aim to equip learners to engage with AI critically, creatively, and ethically, aligning with regulations such as Article 4 of the EU AI Act, which mandates sufficient AI literacy for users of AI systems.
As AI continues to advance rapidly, the findings underscore the imperative for comprehensive AI education and training programs that go beyond mere usage, focusing on foundational concepts, ethical considerations, and critical evaluation skills to prepare students for an AI-integrated future.