Lawyers Keep Falling for AI-Generated Jurisprudence
In 2023, two lawyers in New York were sanctioned for citing six cases in a federal court filing. The problem? The cases didn’t exist. They had been generated by ChatGPT, and the lawyers never verified that they were real. It was a cautionary tale about relying too heavily on artificial intelligence.
But it keeps happening. Later that same year, a lawyer for Michael Cohen filed a motion citing three fake cases. In 2024, another lawyer submitted a brief with bogus legal precedent to the Second Circuit Court of Appeals. And in 2025, two law firms were sanctioned after AI-generated citations made their way into a filing.
Now it’s Brazil’s turn. The Superior Labor Court (TST) recently sanctioned lawyers for filing an appeal built on fabricated jurisprudence. The citations looked real: they were formatted correctly and even named actual judges. Yet they were entirely made up.
The pattern is familiar. Legal professionals are using tools like ChatGPT without verifying the output. That’s risky in any field, but especially in court filings, where submitting false information can lead to sanctions and disciplinary action.
The challenge is that AI models are trained to sound convincing, even when they’re wrong. Large language models predict plausible text rather than retrieve verified facts, so a fabricated citation comes out with the same polished formatting and confident tone as a real one, fooling even experienced users.
The lesson is clear: AI can be a powerful tool, but it’s no substitute for real lawyering. As legal tech evolves, so must the habits of those who use it. Verification is key. Otherwise, what starts as a shortcut can quickly become a career setback.