The Geneva Convention, artificial intelligence and the future of warfare
© Steven Boykey Sidley
(Image: ideogram.ai, prompted by author)
“Doom mongering makes people glassy eyed… (but) those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking about the proliferation of motorbikes and washing machines here” - Mustafa Suleyman, co-founder of DeepMind and author of The Coming Wave: Technology, Power and the Twenty-First Century’s Greatest Dilemma
AI has mutated at speed into use cases we thought impossible only a few years ago, and the need to constrain it has gained urgency while we gather our wits and decide on matters of ethics and safety. As this unfolds, a new and unsettling thought is taking root.
The thought concerns what the militaries of various countries are doing with AI, and the fact that they care about very different use cases from the rest of us.
Before probing this unsettling thought, we should probably wave a nostalgic goodbye to the hard-won, and then much-ignored, Geneva Conventions of the last 160 years. The first Convention was adopted in 1864, with the attending nations all signing up to the principle of humane treatment of sick and wounded soldiers on the battlefield. Three more treaties followed, in which, firstly, shipwrecked armed forces were added (1906), then prisoners of war (1929) and, finally, civilians in war zones (1949), this last in response to the atrocities of World War II. A few more protocols have since been added, but the 1949 documents remain the core.
Today’s geopolitical landscape is a sad testament to this entirely noble, now largely unsuccessful, initiative. Even the most superficial sweep of recent and current conflicts - in Africa, the Middle East, East Timor and Ukraine - not to mention the brutal crushing of internal dissent in North Korea and by China at home and abroad, makes for depressing reading. The Geneva Convention is disregarded for the most part (there are no consequences for breaching it, other than tut-tutting by the UN), leaving those countries that do try to follow the rules looking like outliers, even suckers.
We seem to regard this horror as, at most, regrettable. We either avert our eyes or gaze upon the carnage, despair a little and then go about our business. And in some cases cruelty even becomes acceptable, as in this study (quoted here), which concluded that one in four South Africans think that rape is sometimes justified as a weapon of war.
Which brings us to AI.
Much of the commentary around the daily innovations in the field of AI is fuelled by publicly available information. And there is much of that. It is a huge and chaotic market square ringing with loud-hailers, braggadocio and intellectual finery of all kinds on show: announcements about algorithms, research papers, company valuations, start-ups, newly minted billionaires, novel devices, clever applications and, of course, much soothsaying about things to come, both good and bad.
But in military and defence establishments it is a different story. Work is going on at a furious pace - quietly, in near-hermetic secrecy and with almost unlimited funding - in the US and China as much as in Russia, the Koreas, Israel, India, Iran, Saudi Arabia and across Europe. And I would submit that the Geneva Convention is not much discussed in the labs where new weapons of war are dreamt up. There can be only two questions at stake for defence establishments worldwide: how will our country survive if we lose this race, and how can an adversary be overpowered or eliminated - quickly, cheaply and completely - using this new tool?
There are whispers now emerging - rumours of great advances in low-cost autonomous offensive weaponry. Cheap AI ‘drone swarms’, for example: thousands of $1,000 drones acting in concert to evade countermeasures. Targeted AI-designed bio-weapons capable of disabling or killing millions within days, the antidote available only to their country of origin. Terminator-like robot soldiers trundling through city streets, demolishing buildings at a fraction of the cost, and with greater accuracy, than human soldiers. AI-created software bots designed to shut down water supplies, electricity, telecoms and supply chains, bringing a country to its knees within weeks.
If all this sounds familiar, it should. These sorts of weapons are the well-worn tropes of a thousand science fiction TV shows, books and movies. So here is the unsettling thought: whatever is being cooked up in the satanic mills of weapons design is likely beyond anything any of us can imagine. The capabilities of AI, focused by a government anxious about its own vulnerability, must surely produce dark experiments that would result in unprecedented misery if unleashed.
There is, to be fair, considerable pushback against this view. Analysts talk brightly about imbuing AI weapons with moral imperatives to minimise human suffering, about surgically targeting adversaries only, about deploying clever AI-controlled defensive barriers. But the entire momentum of autonomous weapons development necessarily involves a reduction of human judgement in their operation. And in the worst of scenarios there is a possibility (some will say a certainty) that an AI weapon will one day diverge from its original intent - triggered by a bug in the code, a hack or, worse still, by surreptitiously setting its own goals.
Consider this, from retired US Air Force Lt. Gen. David Deptula: “It’s important to remember that the enemy gets a vote. Even if we stopped autonomy research and other military AI development, the Chinese and to a lesser degree Russians will certainly continue their own AI research. Both countries have shown little interest in pursuing future arms control agreements.”
Or this, from retired US Air Force Gen. Charles Wald: “I don’t believe the U.S. is going to go down the path of allowing things … where you don’t have human control. But I’m not sure somebody else might not do that.”
In the zero-sum game of warfare, it is this fear that drives both strategy and tactics. And you can be sure that no dog-eared copies of the Geneva Convention will be trawled for moral guidance.
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book It’s Mine: How the Crypto Industry is Redefining Ownership is published by Maverick451 in SA and Legend Times Group in UK/EU, available now.