TLDR: Nearly three years after ChatGPT’s launch, Harvard faculty are fundamentally rethinking their teaching methods to adapt to the widespread use of artificial intelligence among students. While some professors are embracing AI as a learning tool, others are implementing ‘AI-proof’ measures like in-person exams. A spring 2025 survey revealed that nearly 80% of Faculty of Arts and Sciences respondents encountered AI-generated coursework, highlighting AI’s ubiquitous presence and the challenges in detecting its use.
Cambridge, MA – The academic landscape at Harvard University is undergoing a significant transformation as faculty members navigate the pervasive influence of artificial intelligence in education. Nearly three years since the public release of ChatGPT, instructors are adjusting to a ‘new normal’ where AI tools have reshaped classroom dynamics and challenged traditional notions of academic integrity. This fall, the changes are more visible than ever, with professors adopting diverse strategies ranging from full AI integration to a return to ‘analog’ teaching methods.
David J. Deming, the College’s new dean, underscored the inevitability of AI’s impact during Convocation, urging freshmen to prepare for a world revolutionized by generative AI. ‘Young, educated people like you are already the heaviest users of AI, and you are creative and open-minded enough to figure out the best ways to use it, too,’ Deming stated.
The ubiquity of AI in Harvard classrooms is undeniable. A survey conducted by The Harvard Crimson among the Faculty of Arts and Sciences (FAS) in spring 2025 revealed that nearly 80 percent of respondents had encountered coursework they knew or believed was produced with AI. This marks a significant increase from just two years prior, when over half of the respondents reported no such experiences.
Despite the widespread use, faculty confidence in detecting AI-generated work remains low: only 14 percent of FAS survey respondents felt ‘very confident’ in their ability to distinguish AI-assisted submissions from original work. The challenge is borne out by research, including a Pennsylvania State University study finding that humans correctly identify AI-generated text only about 53 percent of the time, barely better than random chance.
In response, Harvard professors are exploring varied pedagogical approaches. Some are actively encouraging students to leverage AI for tasks like data crunching, translating primary sources, and reviewing course material before exams. This reflects a shift from outright bans to more nuanced guidance, treating AI much like a calculator in a math class: a tool to be used judiciously, depending on the skills being developed.
Conversely, other instructors are implementing ‘AI-proof’ measures. This includes a return to seated, in-person exams, no-laptop policies during class, and assignments submitted on paper, creating what some students have described as a ‘newly analog campus.’
The institutional response is also evolving. An analysis of AI policies from 20 popular College courses in fall 2025 showed a complete reversal from fall 2022, when none mentioned AI. Now, all but two of these sampled courses have policies regulating AI use, with most allowing some level of AI integration.
However, the rise of AI also presents challenges beyond academic integrity. A 2024 study of Harvard undergraduates found that approximately one-quarter of generative AI users rely on it to replace crucial academic interactions, such as office hours with professors or engaging with essential readings. This raises concerns about stunted intellectual growth and student disengagement from complex materials.
Despite these complexities, the prevailing sentiment among Harvard faculty is that there is ‘no going back to how they used to teach.’ The ongoing adaptation signals a willingness to meet students halfway, while placing greater responsibility on students to use AI responsibly and honor the trust their educators place in them.