
KATE THOMPSON DAVY: OpenAI vs open letter: the fight over AI’s threats and opportunities

Most opponents of the incredible new tech are deeply hypocritical and cravenly begging for a chance to catch up

Picture: DENIS ISMAGILOV/ 123RF

Apparently, it’s not just the knowledge-worker plebs losing sleep over artificial intelligence (AI) anymore. Although, to be fair, AI anxiety hasn’t robbed me of any more sleep than my TikTok habits do.

When it first burst onto the scene — or onto Joe Public’s radar — in November 2022 there was a moment of fretful insomnia, a woeful wondering if it was too late for me to consider a career pivot. Just what other work I’d do when AI takes my job remains unclear to me, but the leading candidates are subsistence Karoo goat farmer and reluctant bank robber … Remember when the inherent promise of rapidly progressing tech was no-labour-all-leisure? Those were the days!

Good news, though: the billionaires and lords of industry have entered the chat — with an open letter calling for a six-month moratorium on AI tool development. Specifically, any tool more powerful than OpenAI’s GPT-4. This lumps them in alongside us mere academics, lawyers, accountants, management consultants and others who now see generative AI as a competitor, or a future competitor at least.

Last week this open letter was issued by a group called the Future of Life Institute (a nonprofit whose primary funder is the Musk Foundation). The letter requests that all AI labs “immediately pause for at least six months the training of AI systems more powerful than GPT-4”, citing the risks to society that these tools may potentially pose. Elon Musk is one of thousands of signatories, as well as Apple co-founder Steve Wozniak and a bunch of other technologists — including researchers and executives currently operating in the AI space.

The letter is just the latest move against OpenAI’s incredible new tech. At the weekend Italian data protection regulators announced an unprecedented ban on ChatGPT, based on privacy concerns they believe are posed by the model. The BBC is reporting that the regulator launched an investigation after the data breach of March 20, telling OpenAI it had 20 days to deal with the concerns or be fined — but OpenAI told the British news organisation it believes it complied with all applicable privacy laws, specifically the General Data Protection Regulation (GDPR).

In addition to the security wobble in March, the Italians singled out ChatGPT because — unlike Google’s AI tool Bard — it has no age limit in place to protect minors. Not to be outdone by the private sector or Italian regulators, Reuters reported on Tuesday that US President Joe Biden would be meeting his council of science and technology advisers to discuss what the White House called the “risks and opportunities” of AI’s blistering pace of development, and the threat it could pose to people (users of the technology, and the public more generally) and to US national security.

The need for “responsible innovation and appropriate safeguards” wasn’t the only item on the agenda for the AI meeting; other matters included reducing or curtailing personal data collection and protecting children. Great! We need leaders to be getting to grips with this stuff, but realistically these are perennial hits from the “big panic about big tech” setlist: jobs, privacy, and “what about the kids?”

It’s not that I completely disagree with these worries. Not at all. But the letter is vague and designed to tap into fear, which is always a red flag for me. A number of the academics cited in the letter have also gone public to say their research has been misused in its argument. Some signatories have even walked back their support. Others have criticised Musk’s decision to add his name to it. Cornell University digital and information law professor James Grimmelmann told Reuters it is “deeply hypocritical … given how hard Tesla has fought against accountability for the defective AI in its self-driving cars”.

In his first public remarks since the letter debate kicked off, Bill Gates told Reuters that calls to stop the development of AI — or at least take a solid time-out — will not address the concerns that lie ahead. It would be preferable, according to Gates, to concentrate on how best to use the advancements in AI because it is difficult to see how a worldwide halt might function.

Gates isn’t at the helm of Microsoft anymore, but even having stepped away from the board, he’s baked into the structure there. In March he told Forbes he spends about 10% of his time at Microsoft headquarters “meeting with product teams” and discussing “the ways AI can change how we work — and how we use Microsoft software products to do it”. The company has invested billions in OpenAI and is integrating its technology into Microsoft tools, such as Bing, with more integration and investment definitely on the cards.

So, to recap: both sides have vested interests. The “halt AI development” crowd’s interests make it hard to read their concerns as much more than covetousness, and the desire to be given a chance to catch up. This, and the misappropriated research used in the letter, are considerable cracks in its foundation. OpenAI and its investors, on the other hand, were never going to acquiesce to the pause without a fight.

Microsoft hasn’t exactly struggled in the tech world, but I’m not sure it has ever been this relevant or cutting edge, and it would be anathema to its business aims to squander that advantage. And, lastly, if we’re being totally honest, the whole thing is probably moot since that horse hasn’t so much bolted as it has launched into outer space. 

• Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.
