
The AI Future of War


Kai-Fu Lee believes that humankind has risen to a whole new level of war-waging.

In his new book AI 2041: Ten Visions of Our Future, Lee identifies three major revolutions in the technology of warfare that have been game-changers in the human story. He builds his case compellingly, offering not only technical detail but considerable social and political insight in the process.

The first revolution, he argues, was gunpowder. With handfuls of explosive dirt, it became possible to kill from the safety of a great distance – and, of course, to blow things up. All the swords and arrows Charlemagne could muster could not stand against Grant’s rifles and cannons.

And so it was for many centuries, until the last one, when the guns and bombs all got bigger but were still just guns and bombs. Then came the splitting of the atom, and it became possible to eliminate an entire city in less than a minute. And while this weapon has been used in war only twice, it nonetheless completely rewrote the geopolitical playbook for nearly a century, redirecting nations and affecting billions of lives.

And now, per Lee, we have our third revolution – artificially intelligent weapons.

There is, for instance, the Israeli Harpy drone, which can fly to a specific location, look for specific targets, and then “Fire and Forget”. This is the simplified version, Lee explains, of the more sophisticated “Slaughterbot” cloud, a batch of bird-sized drones that can track down a specific person and “shoot a small amount of dynamite point-blank through the person’s skull.”

Ouch.

Such drones, he continues, don’t need the human guidance that most drones do; they can fly themselves, “and are too small and nimble to be easily caught, stopped, or destroyed.”

Think about the implications, as Lee does: we are entering an entirely new era in human extermination, with true AI-enabled autonomy of weaponry – “the full engagement of killing: searching for, deciding to engage, and obliterating human life, completely without human involvement” (italics mine).

That’s a blood-curdling thought. And these weapons not only already exist, as Lee points out, but can be assembled in a garage with off-the-Internet parts for less than $1,000. “And this is not a far-fetched danger for the future,” Lee writes, “but a clear and present danger.”

These weapons will “become more intelligent, more precise, faster, and cheaper,” he continues; “They will also learn new capabilities, such as how to form swarms with teamwork and redundancy, making their missions virtually unstoppable. A swarm of 10,000 drones that could wipe out half a city could theoretically cost as little as $10 million.” The arithmetic tracks with the garage-built figure above: 10,000 drones at roughly $1,000 apiece comes to $10 million.

There is an upside, he maintains: far fewer soldiers will die on the battlefield going forward, and the precision of these weapons will greatly reduce unintentional deaths of civilians and friendly forces. They will also be more effective against terrorists and assassins than current measures.

But is it enough to justify their use? Lee asks. There’s a staggering moral shift at work: autonomous, intelligent weapons blur the line of accountability. Who is responsible when a drone kills a human being? It acted on its own plan. Do we blame the person who launched the mission? The programmer? The manufacturer? The drone itself made the decision to kill. You can see the problem. And all of this gets even stickier when a drone kills someone by mistake. Lee quotes UN Secretary-General António Guterres: “The prospect of machines with the discretion and power to take human life is morally repugnant.”

As for this autonomy and its attendant fuzzy accountability, “Such ambiguity may ultimately absolve aggressors for injustices or violations of international humanitarian law,” writes Lee, “and this lowers the threshold of war and makes it accessible to anyone.” To say nothing of the lower cost! Almost anyone can now wage war on an international level.

With this technology, anyone with the know-how and parts – all accessible online! – can create a drone that can track a cell phone signature and target the person carrying the phone, Lee points out. The Unabomber never had it so good.

And we should point out that there is, at present, no defense against such weapons; they are unstoppable. “The capabilities of autonomous weapons will be limited more by the laws of physics... than by any deficiencies in the AI systems that control them,” writes UC Berkeley’s Stuart Russell, whom Lee quotes. “One can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenseless.”

Well, it’s already here, and we can expect a rapid escalation – and more players, since the cost of such warfare will be so much lower than what we’re used to, lowering the barrier to entry, as Lee notes. Smaller countries will now be just as deadly as the big boys.

Will AI-based weapons become a deterrent, as nuclear weapons ultimately were? Nope, Lee reasons; AI weapon systems lack the mutually-assured-destruction factor. Not only can there be a clear victor, but the victory can be attained while leaving the decimated country’s infrastructure completely intact – which makes such weapons, for an aggressor, even more attractive than a nuclear attack. Moreover, an aggressor can remain anonymous; such an attack is theoretically untraceable. If Russia or China launches nukes against the US, we know where they came from, so we know where to send ours; but even if we out-drone all other nations 10-to-1 and are attacked, we might have no idea whom to retaliate against.

What can we do? We can put humans in the loop, of course, so that drones attack only on a clear human decision – restoring accountability, which itself deters their use. Not that any nation waging this kind of war will be eager for that kind of accountability: what aggressor doesn’t want to remain unaccountable?

And there’s always the option of agreeing to ban them. But even if the politics of such a ban could be negotiated, the development of these weapons would undoubtedly proceed in quiet.

In any case, AI-based warfare has arrived, and it’s now a scramble. An indescribably dangerous one.

“This multilateral arms race, if allowed to run its course,” Lee concludes, “will eventually become a race toward oblivion.”
