Andreas Kluth
There’s no question that artificial intelligence will transform warfare, along with pretty much everything else. But will the change be apocalyptic or evolutionary? For the sake of humanity, let’s hope it’s the latter.
Technological innovation has always changed warcraft. It’s been that way since the arrival of chariots, stirrups, gunpowder, nukes, and nowadays drones, as Ukrainians and Russians are demonstrating every day.
My favorite example (because it’s so simple) is the Battle of Koeniggraetz in the 19th century, in which the Prussians defeated the Austrians, thus ensuring that Germany would be united from Berlin rather than Vienna. The Prussians won largely because they had breech-loading rifles, which they could reload rapidly while lying on the ground, whereas the Austrians had muzzle-loading rifles, which they reloaded more slowly while standing up.
If AI were akin to that kind of technology, either the U.S. or China, vying for leadership in the field, might hope to gain military preeminence for a fleeting moment. As a military technology, though, AI looks less like breech-loading rifles and more like the telegraph, internet, or even electricity. That is, it’s less a weapon than an infrastructure that will gradually transform everything, including fighting.
It’s already doing that. America’s satellites and reconnaissance drones now capture so much information that no army of humans could analyze all of it fast enough to give the Ukrainians, say, actionable tips about Russian troop movements. So AI gets that job. In that way, soldiers are like doctors who use AI to guide them through reams of X-ray data.
The next step is to put AI into all sorts of bots that will function, for example, as automated wingmen for fighter pilots. A human will still fly a jet, but she’ll be surrounded by a swarm of drones using sensors and AI to spot and — with the pilot’s permission — annihilate enemy air defenses or ground troops. The bots won’t even mind if they expire in the process. In that way, AI could save lives as well as money, and free up humans to concentrate on the larger context of the mission.
The crucial detail is that these bots must still seek human authorization before killing. I don’t think we should ever trust an algorithm to have adequate contextual awareness to judge, say, whether people in plain clothes are likely to be civilians or combatants — even humans are notoriously bad at telling the difference. Nor should we let AI assess whether the human toll required for a mission’s tactical success is proportionate to the strategic objective.
The existential question is therefore not about AI as such. Paul Scharre at the Center for a New American Security, an author on the subject, argues that it’s instead mostly about the degree of autonomy we humans grant our machines. Will the algorithm assist soldiers, officers and commanders, or replace them?
This, too, isn’t a wholly new problem. Long before AI, during the Cold War, Moscow built “dead-hand” systems, including one called Perimeter. It’s a fully automated procedure to launch nuclear strikes after the Kremlin’s human leadership dies in an attack. The purpose is obviously to convince the enemy that even a successful first strike would lead to mutual assured destruction. But one wonders what would happen if Perimeter, which the Russians are upgrading, were to malfunction and launch in error.
So the problem is about how autonomously machines do the deciding. In the case of nuclear weapons, the stakes are self-evidently existential. But they’re still vertiginously high with all other “lethal autonomous weapons systems” (LAWS), as killer robots are officially called.
It may be that an algorithm makes good decisions and minimizes death; that’s why some air-defense systems already use AI — it’s faster and better than people are. But the code may also fail or, more diabolically, be programmed to maximize suffering. Would you ever want Russian President Vladimir Putin or Hamas to deploy killer robots?
The U.S., as the furthest along technologically, has in some ways led by good example, and in some ways not. In its Nuclear Posture Review in 2022, it said that it will always “maintain a human ‘in the loop’” when making launch decisions. Neither Russia nor China has made a similar declaration. Last year, the U.S. also issued a “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy.” Endorsed by 52 countries and counting, it calls for all sorts of “safeguards” on LAWS.
Not for their ban, however. And that’s where the U.S., as so often in international law, could play a more constructive role. The U.N. Convention on Certain Conventional Weapons, which seeks to restrict pernicious killing techniques such as landmines, has been trying to prohibit autonomous killer robots outright. But the U.S. is among those opposing a ban. It should instead support one and get China, and then others, to do the same.
Even if the world says no to LAWS, of course, AI will still create new risks. It will accelerate military decisions so much that humans may have no time to evaluate a situation and, under extreme stress, will either make fatal mistakes or surrender to the algorithm. This is called automation bias, the psychology at work when, for example, people let their car’s GPS guide them into a pond or off a cliff.
But risk has increased along with military innovation ever since Homo sapiens tied stone tips onto spears. And so far we’ve mostly learned to manage the new perils. Provided we humans, and not our bots, remain the ones to make the final and most existential calls, there’s still hope that we’ll evolve alongside AI, rather than perish with it.
Andreas Kluth is a Bloomberg Opinion columnist covering U.S. diplomacy, national security, and geopolitics. Previously, he was editor-in-chief of Handelsblatt Global and a writer for The Economist.