Opinion

ARTHUR GOLDSTUCK: Lawyers are falling for AI hallucinations

AI isn’t here to replace the artists. It’s here to give them their lives back, says the writer. Picture: 123RF

It was always going to happen. One day, a legal firm would swap out decades of precedent and procedure for something scraped up by a chatbot, and end up citing case law that didn’t exist. Because we all know large language models are designed to predict the next word, not the next fact. So the last thing we should trust them with is formal references and citations. Especially in law.

Oh wait. It turns out we don’t all know.

For instance, a law firm in Pietermaritzburg had not heard of AI hallucinations. It was hauled over the coals by the judge for submitting an appeal packed with legal references that had all the authority of a WhatsApp voice note.

The irony. Lawyers spend years qualifying to understand the law. In particular, they gain a deep insight into the role played by case law in legal arguments. And then some turn to chatbots to compile their arguments for them, don’t check what is delivered and land up on the wrong side of legal cases themselves.

South African courtrooms are only a subset of a global trend. In Alabama, two attorneys used ChatGPT to draft legal filings and the judge found five citations that were fabricated. In California, a judge fined a law firm $31,000 (R554,000) for filing briefs generated by an AI model with no verification.

These cases were lost twice over: the arguments themselves were undermined, and the lawyers were sanctioned.

Yet, it’s a case of damned if they do it wrong, and damned if they don’t do it at all.

Johannesburg-based technology lawyer Nerushka Bowan, who has established herself as a leading voice in the use of AI in law with the founding of the LITT (Law. Innovation. Technology. Tomorrow.) Institute, was sympathetic when I asked her if these lawyers were guilty of stupidity. She gave me a different perspective.

“It is a lack of understanding how the tools work, coupled with extreme stress and tight deadlines. Working in these high-pressured environments, burnt out and stressed, makes smart people do dumb things,” she said.

“They felt like they received the answer to all their problems when using ChatGPT. They were able to meet their deadlines. On the surface, everything looked and appeared high-quality and accurate. They didn’t even know hallucinations were possible. They are not technologists, they don’t know how large language models work.

If you are working like a robot, producing machine-like work, then you are ripe for replacement

—  Nerushka Bowan, founder of LITT

“Bear in mind that the headlines are just the lawyers that are getting caught out for citing hallucinated cases in court. The actual number of ignorant users of generative AI in the legal industry is exponentially higher.”

The LITT Institute is dedicated to bridging the knowledge gap to allow all lawyers to navigate new technology like generative AI, while doing so responsibly and ethically.

“For the first time, lawyers are adopting a new technology without being told to do so. These early adopters did not appreciate the risks involved in using these tools, such as hallucinations, which landed them in hot water when courts found that cases submitted did not actually exist! Lawyers need to balance innovation with existing legal responsibilities, including understanding the risks involved when using new technologies.”

That is not to say that AI must be avoided.

“Most law firms and legal departments have adopted Microsoft Copilot as a generative AI starting point, and a handful are leveraging tools such as ChatGPT Enterprise. We are already seeing a host of vertical AI created for the legal industry.”

It is not surprising, then, that some roles inside law firms are changing and some jobs will come under threat. But the context is much the same as for the creative and financial sectors.

“If you are working like a robot, producing machine-like work, then you are ripe for replacement. If you are a human who connects, engages and empathises with clients, you don’t have to be afraid.

“The reason we are afraid of agentic AI, AGI and smart robots is that we ourselves are behaving like machines. We work around the clock, we analyse and synthesise large volumes of data, and we logically and analytically produce outputs according to timelines. AI can do this better and faster. And so there may be reason to be afraid.”

Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of The Hitchhiker’s Guide to AI.
