TLDR: The Fair Work Commission (FWC) is set to issue guidance on the increasing use of generative artificial intelligence by employees and employers in drafting dismissal claims. This comes as FWC applications surged by 10% in 2024-25, with concerns mounting over AI ‘hallucinations’ and unreliable legal advice leading to rejected claims and fabricated precedents.
Sydney, Australia – November 10, 2025 – The Fair Work Commission (FWC) is poised to issue a significant statement this week regarding the burgeoning reliance on generative artificial intelligence (AI) by parties involved in dismissal claims. This move follows months of internal warnings and a noticeable uptick in cases where AI tools have been used to draft legal submissions and seek advice, often with problematic outcomes.
According to the FWC’s latest annual report for 2024-25, applications to the commission surged by 10% to a total of 44,075 lodgements. Unfair dismissal applications constituted the largest portion, accounting for 37% of the total with 16,500 filings, while general protections (dismissal) applications made up a further 14% of lodgements. This increase in caseload coincides with the growing integration of AI into the preparation of these claims.
The FWC has observed a trend where both applicants and respondents are utilizing AI tools to explore legal options, seek advice, and prepare their submissions. However, this reliance has not been without its pitfalls. In one case in August, the FWC rejected a general protections claim lodged almost two and a half years after the worker’s employment ended; the worker had used AI to prepare his application. The ruling highlighted “the obvious danger of relying on artificial intelligence for legal advice,” noting the deficiencies in the AI-generated application.
In another concerning instance, an employer was found to have referenced several non-existent case precedents in their submission, a phenomenon commonly referred to as “AI hallucination.” Google Cloud defines AI hallucinations as incorrect or misleading results generated by AI models due to factors such as insufficient training data, incorrect assumptions, or biases. Such occurrences underscore the risks associated with uncritical adoption of AI in legal contexts.
Deputy President Slevin of the Fair Work Commission has been particularly critical of AI use in proceedings. In Mr Branden Deysel v Electra Lift Co (FWC 2289), Slevin applied a critical lens to the applicant’s use of ChatGPT, noting that the AI-generated advice was baseless and led to a “hopeless” claim, unnecessarily wasting commission resources. While the AI use did not determine the outcome of Deysel’s extension of time application (which was already significantly out of time), it likely encouraged him to pursue the claim.
Experts suggest that the accessibility of AI tools like ChatGPT might encourage more aggrieved former employees to bring applications, potentially under the misconception that professional legal assistance is unnecessary. Michael Byrnes, an employment law expert, noted that “AI tools can mislead an employee as to the correct claim to bring, or the merits of such a claim, but it does facilitate the making of claims.”
The FWC’s upcoming statement is expected to provide guidance and reinforce warnings about the judicious use of AI in preparing and presenting claims, emphasizing that while AI may have a role, it must be used with caution to avoid unmeritorious claims and arguments. The commission’s vigilance reflects a broader concern within the legal community: the NSW Supreme Court has likewise commented on the need for judicial oversight of AI use, particularly by unrepresented litigants.


