TL;DR: A recent South African court case highlighted the critical dangers of irresponsible AI use in legal proceedings, after a legal team submitted AI-fabricated case law. This incident, along with a similar one in 2023, underscores the urgent need for legal practitioners and institutions to adopt responsible AI practices and comprehensive AI education to prevent professional misconduct and uphold the integrity of the justice system.
The legal landscape is grappling with the profound implications of Artificial Intelligence, a reality starkly illuminated by a South African High Court case in January 2025. In Mavundla v MEC: Department of Co-operative Governance and Traditional Affairs KwaZulu-Natal and Others, a legal team presented case law, generated by OpenAI’s ChatGPT, that was largely non-existent. Of the nine case authorities submitted, only two were genuine; the rest were identified as AI-fabricated ‘hallucinations’. The court unequivocally condemned the conduct as ‘irresponsible and unprofessional’ and referred the matter to the Legal Practice Council for investigation.
This incident is not an isolated one. A similar case, Parker v Forsyth, arose in 2023, though the judge in that instance was more lenient, finding no intent to mislead. The Mavundla ruling, however, signals a significant shift: courts are losing patience with legal practitioners who misuse AI tools. Legal academics studying the growing integration of generative AI into legal research and education emphasize that while these technologies offer powerful efficiency gains, they pose serious risks when used without proper oversight.
Aspiring legal professionals who misuse AI without adequate guidance or ethical grounding face severe professional consequences, potentially jeopardizing their careers before they even begin. This highlights a critical gap in legal education, as most institutions remain unprepared for the rapid adoption of AI. Very few universities have formal policies or training programs on AI, leaving students without a clear roadmap for navigating this evolving technological terrain.
The court in Mavundla underscored that lawyers remain ultimately responsible for the accuracy of every source presented, regardless of technological advancements. Workload pressures or ignorance of AI’s risks are not considered valid defenses. The supervising attorney was also criticized for failing to review the documents before filing, reinforcing the ethical principle that senior lawyers must properly train and supervise junior colleagues. The core message is clear: integrity, accuracy, and critical thinking are non-negotiable pillars of the legal profession.
Generative AI tools like ChatGPT have immense potential to summarize cases, draft arguments, and analyze complex texts rapidly. However, their capacity to confidently fabricate information, producing authoritative-looking but entirely false text, presents a significant danger. For students, this poses a dual threat: over-reliance on AI can stunt the development of essential critical research skills, and it can lead to serious academic or professional misconduct. The call for a proactive, structured approach to AI education in law schools is therefore more urgent than ever, so that future legal practitioners are equipped with the judgment and skills necessary for responsible AI use.


