TLDR: The Quebec Superior Court has sanctioned a 74-year-old self-represented litigant, Mr. Jean Laprade, with a $5,000 fine for submitting court filings that included fictitious case law generated by artificial intelligence. This landmark decision underscores the legal and ethical responsibilities of all litigants to verify the accuracy of materials presented in court, even when using AI tools.
In a significant ruling on October 17, 2025, the Quebec Superior Court addressed the burgeoning issue of generative artificial intelligence (AI) in legal proceedings by sanctioning a self-represented litigant. Mr. Jean Laprade, a 74-year-old defendant in a dispute involving Specter Aviation Limited and TPVX Aircraft Solutions Inc., was ordered to pay a $5,000 fine after presenting court documents containing fabricated legal citations, or ‘hallucinations,’ produced by an AI tool.
The Court’s decision, as reported by Mondaq, emphasized that while Quebec courts are not inherently opposed to the use of AI, litigants bear full responsibility for the veracity and accuracy of their submissions. The ruling highlighted that Mr. Laprade’s reliance on non-existent case law constituted a ‘serious procedural breach’ under Article 342 of the Code of Civil Procedure. This conduct was found to have caused ‘unnecessary work for opposing counsel and the Court’ and risked ‘eroding public confidence in the administration of justice.’ The $5,000 sanction serves both as a punitive measure for his actions and a deterrent against similar future behavior.
This case is part of a growing trend, often dubbed the ‘fake AI cases’ epidemic, in which self-represented litigants (SRLs) submit AI-generated fabrications without verifying them. Courts in other jurisdictions have also begun to grapple with this challenge. For instance, a Colorado appellate court, in a January 2025 decision, cautioned that future submissions containing fictitious citations could lead to severe consequences, including monetary penalties or dismissal of appeals, even for SRLs. Similarly, in an unpublished opinion from July 2024, the U.S. District Court for the Southern District of New York acknowledged the seriousness of such filings but declined to impose sanctions, citing the litigants’ self-represented status. The Quebec ruling, by contrast, marks a clear shift towards holding individuals accountable regardless of whether they have legal representation.
Legal experts note that the rapid rise in decisions reporting problematic AI-generated content filed by SRLs makes clear guidelines and education a necessity. Courts and tribunals are increasingly urged to post prominent warnings about the careful use of generative AI, particularly concerning legal authorities, and to clarify its permissible (and impermissible) uses for evidentiary purposes. Continuous, regularly updated adjudicator education is also deemed critical to ensure an appropriate understanding of the evolving technology within the judicial system. The Quebec Superior Court’s ruling sets a firm precedent, signaling that the legal system expects diligence and verification from all parties using AI in court.