EMILE MYBURGH: Lawyers need not fret about AI

But they do need to make sure someone trained is checking any work produced by artificial intelligence

Picture: 123RF

Let me be upfront and say, despite all the doomsayers to the contrary, we lawyers don’t have to lose any sleep over artificial intelligence (AI).

My authority to say so is not ChatGPT but the Johannesburg high court case of Northbound Processing v The SA Diamond & Precious Metals Regulator, a decision handed down on June 30 2025.

The facts leading up to the case aren’t that remarkable, dealing with an attempt to set aside the sale of a business. But what made it cause a stir in the legal fraternity was the reliance of three advocates (including a well-known senior counsel) on an AI programme when drafting their heads of argument. 

The AI programme, ironically called Legal Genius, presented imaginary case law as authority for the applicant’s case. It didn’t take long for their opponents and the court to pick up on these hallucinations, as the invention of nonexistent case law (or other falsehoods) by AI is called. 

Counsel apologised profusely but, in trying to justify a serious blunder, resorted to that most inane of arguments: that no harm was caused. They also claimed they were under immense pressure to get the work out and therefore didn’t check Legal Genius’ work.

The court was not impressed and referred the matter to the Legal Practice Council for investigation into unethical conduct by the advocates concerned. I won’t identify the advocates here, and, knowing how rampant AI hallucinations are, it’s unlikely that a casual AI search will accurately identify them either.

The real surprise here is that, despite hundreds of similar episodes here and abroad, these advocates still relied on AI without checking the results. Not even a half-baked lawyer should still rely on AI-generated case law, at least not without checking the published law reports to confirm that the cases actually exist.

As far as my research shows, this kind of AI hallucination already reared its head as far back as 2023 (ancient history in the dynamic world of IT) when someone sued the Colombian airline Avianca in a New York court following an injury on a flight in 2019 involving the drinks trolley.

Avianca tried to have the case thrown out on the basis of the American statute of limitations (known in SA as prescription). The plaintiff’s lawyer cited cases such as Martinez v Delta Air Lines, Zicherman v Korean Air Lines and Varghese v China Southern Airlines as authority for why his client’s case should not be tossed out.

It certainly sounded convincing, but sadly for the plaintiff these cases were inventions by ChatGPT. The plaintiff’s lawyer, Steven A Schwartz of the firm Levidow, Levidow & Oberman, apologised profusely, but to no avail. He also claimed he had no intent to deceive the court. The judge didn’t mince his words when he labelled Schwartz’s legal submission as being replete with “bogus judicial decisions, with bogus quotes and bogus internal citations”.

So why do lawyers still use AI? The court in the Northbound decision answered this question by quoting the following paragraph from the 2025 UK decision of Ayinde v The London Borough of Haringey; Al-Haroun v Qatar National Bank: “In the context of legal research, the risks of using AI are now well known. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect.

“The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.” In other words, AI can produce such compelling drivel that it may convince even the most seasoned lawyers that it is true.

Many law firms (especially the so-called magic circle ones) pride themselves on employing AI. A major SA firm recently even publicised on LinkedIn that its new AI tool has been appointed as partner — presumably with a fee target.

Much has been said about the use of AI in law firms to draft agreements, devise cross-examination questions and do other heavy lifting, and about how this functionality will come at the expense of thousands of entry-level law firm jobs. While there is no denying that AI has its uses, one has to question the wisdom of using it as a substitute for human legal work.

When one considers that these agreements and cross-examinations must be based on the law and the decisions of courts, matters AI routinely gets wrong, the question arises whether these documents can be trusted at all. Surely someone must check them, and only a trained and experienced lawyer can do that.

Many lawyers, me included, dream of tools that can make our lives easier, and I do find the summaries and spell checks AI can provide of complex documents to be very useful. Many of my colleagues swear by the results AI has brought them, but with the queues of disciplinary hearings against lawyers who used AI unchecked growing worldwide, I simply do not believe it is worth outsourcing our legal brains to AI for any creative work (such as drafting contracts or heads of argument).   

Of course, AI is an evolving tool, and nothing says it won’t get case law citations spot on in future. In other fields it is already proving useful, such as mental health care in countries where access to psychologists is expensive, limited or nonexistent (though not without criticism).

Considering how much time elapsed between the Avianca matter in New York and the Northbound decision in Johannesburg, and all the matters in between where AI led to the downfall of lawyers, it is lunacy, and possibly professional negligence and misconduct, for any lawyer to continue to rely on unchecked AI output. That it’s easy, compelling and looks good doesn’t justify the risk of AI’s hallucinations.

There is also another consideration, highlighted by the case The New York Times brought against OpenAI, the creator of ChatGPT. The New York Times alleges that OpenAI infringed its copyright by using the publication’s content to train ChatGPT’s large language model (LLM).

AI essentially works by guessing what text should follow the preceding words and phrases based on the LLM it uses. But if AI can only succeed by such devious means as copyright infringement to create its LLM, one has to seriously question the ethics of using AI at all. 
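The guessing described above can be illustrated with a deliberately tiny sketch. This is not how a real LLM is built (those use neural networks trained on vast corpora), but a toy bigram model in Python shows the same basic move: predict the next word from what usually follows the current one. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration) -- a real LLM trains on
# billions of words, which is where the copyright question arises.
corpus = ("the court held that the claim was barred "
          "the court held that the case was dismissed").split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Return the most frequent continuation seen in training.
    # An unseen word yields None here; a real LLM would still
    # produce something plausible-sounding -- a hallucination.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("court"))  # -> "held" in this tiny corpus
```

The sketch also hints at why hallucinations happen: the model has no notion of truth, only of what text tends to follow other text.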

• Myburgh is an attorney practising in SA and Brazil. 
