TLDR: Artificial intelligence tools have fabricated case law and legal citations in South African courtrooms, leading to adverse rulings and investigations into legal practitioners. In response, the Legal Practice Council is developing a policy framework for the ethical and responsible use of AI in legal research, emphasizing the critical need to verify every AI-generated output.
The integrity of South Africa’s legal system is under scrutiny after multiple incidents in which artificial intelligence tools, including ChatGPT and Legal Genius, generated fictitious case law and citations that were then submitted in court. The trend has sharpened concern among legal professionals about the responsible and ethical use of generative AI in legal research.
One prominent case involved a legal team in the KwaZulu-Natal High Court in Pietermaritzburg, where an AI tool fabricated supplementary case examples for their arguments. The legal representatives submitted a notice of appeal citing several authorities and case studies, but when the judge independently checked the citations using ChatGPT, many of the cited cases could not be found in recognized legal databases. The court consequently ruled against the plaintiff, with the written judgment sternly stating: ‘The court has gained the impression that the lawyers placed false trust in the veracity of AI-generated legal research and, out of laziness, failed to check this research.’
Similar incidents have emerged, including Mavundla v the KwaZulu-Natal MEC for Cooperative Governance and Traditional Affairs and Northbound Processing v the SA Diamond and Precious Metals Regulator. In both instances, AI tools ‘hallucinated’ non-existent case law, which was then cited as precedent in court. Acting Judge DJ Smit, in the Northbound Processing case, criticized the practice, underscoring the dangers of relying on generative AI without proper verification.
Tayla Pinto, a lawyer specializing in AI, data protection, and IT law, expressed grave concern, noting that when confronted, legal counsel admitted to using generative AI. ‘This shows that the problem of lawyers not knowing how to use generative AI responsibly and ethically is growing,’ Pinto said. She added that there have been at least three cases in South Africa where legal advisors used AI to draft court documents, including the Northbound Processing case in June.
In response to these misapplications of AI, South Africa’s Legal Practice Council (LPC) is developing a comprehensive AI policy framework to govern how legal professionals use artificial intelligence in their work. Llewellen Curlewis, Deputy Chair of the LPC, emphasized the severity of the situation, saying such misuse of AI amounts to serious misconduct that could lead to legal practitioners being disbarred. ‘Every AI-generated output must be verified before being included in legal submissions,’ Curlewis asserted, warning that the unverified use of AI ‘could undermine the integrity of the entire justice system.’ The LPC is collaborating with legal experts and IT specialists to draft enforceable guidelines, acknowledging the difficulty of regulating a nascent technology while ensuring it does not erode public trust in the rule of law.