TLDR: A growing number of self-represented individuals are using generative AI tools in Australian courts, exposing themselves to significant risks: damaged cases, rejected documents, and potential costs orders arising from inaccurate or irrelevant AI-generated material.
A growing number of self-represented litigants in Australian courts are turning to generative artificial intelligence (AI) tools for assistance, often with detrimental consequences for their legal cases and finances. Research has identified 84 reported cases of generative AI use in Australian courts since ChatGPT’s late-2022 launch, with over three-quarters (66 of 84) involving individuals representing themselves. These litigants, who may have valid legal claims, are employing AI across a range of disputes, including property, wills, employment, bankruptcy, defamation, and migration matters.
Judges acknowledge the allure of AI, especially for those overwhelmed by self-representation. However, as Judge My Anh Tran of the County Court of Victoria noted, using AI without understanding its output and verifying its legal and factual accuracy can severely damage a case. The risks are substantial: AI tools can produce “fake law,” leading to court documents being rejected and valid claims being lost. If evidence or arguments are not real, the court is obligated to reject them.
The consequences extend to financial penalties. Queensland’s courts recently updated their guidance for self-represented litigants, warning that using “inaccurate AI-generated information in court” could cause delays and potentially result in a “costs order” against them, meaning they might have to pay their opponent’s legal costs. New South Wales Chief Justice Andrew Bell described a case in which a self-represented respondent, despite being “admirably candid” about her AI use, presented AI-generated submissions that were “misconceived, unhelpful and irrelevant.”
While lawyers have also been caught relying on inaccurate AI-generated information, a crucial distinction exists: if a lawyer relies on fake cases, it likely constitutes negligence, potentially allowing the client to sue. When a self-represented individual makes the same error, they bear sole responsibility. This ongoing research, part of an upcoming report for the Australian Academy of Law, highlights a growing real-world problem and underscores the critical need for caution and verification when integrating AI into legal proceedings.


