TLDR: Texas courts are increasingly implementing stringent rules requiring attorneys to disclose the use of artificial intelligence in legal filings. This move comes amidst growing concerns over AI-generated ‘hallucinations’ and fabricated citations, leading to potential sanctions, fines, and reputational damage for legal professionals who fail to verify AI-produced content. The broader regulatory landscape in Texas also includes the Responsible Artificial Intelligence Governance Act, set to take effect in 2026, further emphasizing the state’s proactive stance on AI regulation.
Texas courts are at the forefront of a national trend to regulate the use of artificial intelligence in legal proceedings, with new disclosure requirements and the threat of sanctions for non-compliance. This proactive stance aims to combat issues such as AI ‘hallucinations’ and the generation of non-existent legal citations, which have already led to penalties for attorneys across the country.
One notable instance of this regulatory push is seen in the U.S. District Court for the Northern District of Texas. As early as June 2023, U.S. District Judge Brantley Starr in Dallas began requiring attorneys appearing before his court to file a certificate attesting whether any portion of their filing was drafted by generative AI. If AI was used, the certification must confirm that the language was thoroughly checked for accuracy by a human using traditional legal databases or print reporters. Judge Starr explicitly warned that AI programs are ‘prone to hallucinations and bias’ and can ‘make stuff up,’ including fabricated quotes and citations. Violators of this mandate face sanctions under Rule 11 of the Federal Rules of Civil Procedure. This measure underscores the judiciary’s concern that AI, unlike human attorneys, is ‘unbound by any sense of duty, honor, or justice’ and operates based on code rather than conviction.
The broader legal landscape in Texas also reflects a comprehensive approach to AI governance. The Texas Responsible Artificial Intelligence Governance Act (Texas AI Act), signed into law on June 22, 2025, and effective January 1, 2026, places restrictions on AI use by both government agencies and businesses operating within the state. While this act is broader than court filings, it mandates disclosures for government agencies interacting with AI systems and prohibits AI use for inciting criminal activity, self-harm, or discrimination. This legislative action complements the judiciary’s efforts to ensure responsible AI deployment.
The need for such rules is highlighted by recent incidents in which lawyers have faced severe consequences for improper AI use. For example, an Arizona federal court sanctioned attorney Maren Bam on August 14, 2025, for submitting a brief containing multiple AI-generated citations to non-existent cases, resulting in the revocation of her pro hac vice status and mandatory notification to state bar authorities. Similarly, Bloomberg Law News reported on February 25, 2025, that Morgan & Morgan lawyers were fined for ‘hallucinated’ AI citations, and on June 23, 2025, it published a report titled ‘AI Fake Citations Expose Lawyer Sloppiness and Training Gaps.’ These cases underscore the significant reputational harm, sanctions, and fines that can result when lawyers use AI improperly and fail to verify AI-generated content.
The legal profession is grappling with a ‘patchwork of obligations and expectations’ around AI use, stemming from executive orders, court rulings, and state-level rules. This evolving regulatory environment, coupled with the risks of ethical missteps and regulatory violations, is contributing to a cautious approach to AI adoption in law firms. As AI technologies mature, the legal community is working to establish robust frameworks for managing the ethical, security, and integration challenges posed by these powerful tools, ensuring that innovation does not compromise the integrity of legal practice.