Key Takeaways
- Judges are frequently discovering fake legal citations in court documents, often generated by AI.
- Lawyers are increasingly responsible for these errors, a shift from when self-representing individuals were the main source.
- The number of such incidents has been rising, with a significant uptick in cases where AI provided incorrect information.
- Courts are now imposing penalties, including substantial fines, for the misuse of AI in legal filings.
A concerning trend is emerging in courtrooms: judges are spotting bogus legal citations almost daily. This issue is largely due to lawyers relying too heavily on artificial intelligence, according to a report by Business Insider.
Since May 1st alone, judges have identified at least 23 instances where AI appears to have invented information for court records. This highlights a growing problem of what’s often called AI “hallucinations”—where the technology creates fake cases or cites non-existent legal authorities.
Legal data analyst Damien Charlotin has compiled a public database documenting 120 such cases. He notes that this number likely represents only a fraction of actual incidents, as some AI-generated errors might go unnoticed by judges.
Interestingly, the data shows a shift in who makes these mistakes. Initially, individuals representing themselves in court were more commonly at fault; now, lawyers and their support staff, such as paralegals, are increasingly the source of these AI-driven errors.
In 2023, for example, seven out of ten identified AI errors came from people representing themselves, with lawyers responsible for the other three. However, last month, legal professionals were found to be at fault in at least 13 of the 23 documented cases.
Charlotin remarks on his website that a lawyer or litigant mistakenly citing fabricated cases "has now become a rather common trope."
His database details a rising tide: 10 rulings involving AI errors in 2023, 37 in 2024, and a striking 73 in just the first five months of 2025. Most of these cases are from the United States.
However, this isn’t solely an American issue. Judges in the UK, South Africa, Israel, Australia, and Spain have also caught AI-generated mistakes in legal documents.
Courts worldwide are beginning to take a firm stance, penalizing the misuse of AI. Monetary fines have been imposed in several instances, with some sanctions reaching $10,000 or more. Four of these substantial fines were issued this year.
Many individuals involved in these cases lack the resources or expertise for thorough legal research, which often involves carefully analyzing past cases. In one South African case, an elderly lawyer who submitted fake AI citations was described by the court as "technologically challenged."
But it’s not just those with limited resources. Recently, attorneys at prominent U.S. law firms, including K&L Gates and Ellis George, admitted to citing fabricated, AI-generated cases. They attributed the errors to miscommunication and a failure to verify their work, and the court imposed sanctions of about $31,000.
In many of the cases Charlotin cataloged, the specific AI tool used wasn’t identified. Sometimes, judges concluded AI was used even when the parties denied it. However, when a tool was named, ChatGPT appeared more frequently than any other in Charlotin’s data.