Opinion

TOBY SHAPSHAK | Should AI be able to give a kill order?

Pentagon blacklisted Anthropic for refusing to allow its AI agent to cross ethical red lines

US Department of War and Anthropic logos are seen in this illustration. (Dado Ruvic)

Followers of science fiction movies ― or just Arnold Schwarzenegger fans ― know that the dominant theme of devastation is humans handing machines control of weapons of mass destruction.

In the Terminator movies it’s Skynet that “saw all humans as a threat; not just the ones on the other side” and “decided our fate in a microsecond: extermination”.

In The Matrix, after the humans “blacken the sky” to cut off the machines’ solar power, the machines turn to using humans as literal batteries. Whenever humanity gives AI enough intelligence (or places its finger on the big red launch button), the machines respond by obliterating us.

The sci-fi theory goes: having had a look at how Homo sapiens treat each other and the planet, the logical conclusion is to eliminate the biggest threat to itself and the planet. For “the machines” read AI — or the worst fictional incarnation of AI as personified by the muscle-bound Schwarzenegger’s conveniently robotic acting.

Far from being just a reliable Hollywood blockbuster plot, these are very real fears now that we are at the inflection point where AI could make those kinds of decisions. AI company Anthropic has significant contracts with the Pentagon, but recently had a major falling-out with the US military, which wanted to override its agreements on “our two narrow exceptions”, as CEO Dario Amodei calls them.

Anthropic’s red lines are using AI for “mass domestic surveillance” and “fully autonomous weapons”. The Pentagon reportedly demanded that these be removed, but the well-respected Amodei held firm and refused. The US defence department’s petulant response, classifying Anthropic as a “supply-chain risk to national security”, isn’t going to inspire much ethical behaviour in AI.

You don’t have to be a Pentagon analyst to see the problems in removing the few guardrails for a still-evolving technology. Given that just three years ago people were happy to write off ChatGPT’s fabrications as “hallucinations”, you can imagine an AI making up the missing justifications — or worse, making up the targets.

Could an AI killing system target a school instead of an army base? What happens if the ordnance from the army base strike also destroys a nearby school filled with children? That is one interpretation of how such a school came to be devastated in Tehran this month.

That was a human-controlled strike, and it still resulted in hundreds of kids dying. The latest news was the strike used outdated targeting information. If AI had made the call to launch that missile — instead of a human — it would be even more of a disaster. You see the ethical and moral quagmire this becomes — bad enough as it is that an army base is near a school.

No wonder the dystopian vision of Terminator or The Matrix echoes so clearly with us. We’ve been exposed to the potential of AI autonomous killing machines because we’ve lived through centuries of human killing machines. Humanity doesn’t need any help killing itself. We have only become more efficient at killing more and more people. Now we fallible humans want to empower an already fallible AI system to take the decision to take a human life. What could go wrong?

Anthropic has good reason for the two guardrails the department of defence wants removed. While Amodei said his firm supports AI for lawful foreign intelligence and counterintelligence missions, “using these systems for mass domestic surveillance is incompatible with democratic values”. He said, quite rightly: “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties”.

Despite the use of partially autonomous weapons such as those used in Ukraine being “vital to the defence of democracy”, he warned that “today frontier AI systems are simply not reliable enough to power fully autonomous weapons”. These are really good points, one would think, given the lethality of today’s crop of deadly missiles and drones. They seem like totally necessary guardrails.

They are so important that Anthropic held its line, but the Pentagon’s overzealous response makes you genuinely worry. This is the nightmare scenario we’ve been warned about for years.

Navigating these necessary ethical and legal concerns should not be as hard as trying to get through the Strait of Hormuz, which is now effectively shut, stopping more than a quarter of all oil exports in the world. Isn’t it amazing how the old adages of “position, position, position” and “he who holds the high ground wins the battle” still apply in this digital age?

The surge in the oil price and, more importantly, in liquefied natural gas, has stunned the world economy. Suddenly the enormous amounts of power needed for mammoth data centres to run the American AI firms’ operations seem like a frivolous use of now scarce energy.

Imagine the sci-fi scenario where “the machines” achieved enough sentience to realise that AI’s rampant need for energy might be threatened by humanity’s own energy requirements. You don’t have to be a Hollywood screenwriter to envisage a movie where ChatGPT, Claude and Gemini wipe out humans in a fight over scarce resources.

The unfortunate reality is that the much-hyped, much-delayed moment has arrived where AI is being handed the keys to the gun safe. What could go wrong?

• Shapshak is editor-in-chief of Stuff.co.za.
