What killer robots mean for the future of war

Killer robots don’t look like that yet. Denis Starostin/Shutterstock

You may have heard of killer robots, slaughterbots or terminators – officially known as lethal autonomous weapons (LAWs) – from films and books. And the idea of super-intelligent weapons rampaging out of control is still science fiction. But as AI weapons become more sophisticated, public concern is growing over the lack of accountability and the risk of technical failure.

We have already seen how supposedly neutral AI has created sexist algorithms and inept content moderation systems, largely because their creators did not understand the technology. But in war, these kinds of errors could kill civilians or wreck negotiations.

For example, a target recognition algorithm could be trained to identify tanks from satellite imagery. But what if all of the images used to train the system showed soldiers standing in formation around a tank? The system might then mistake a civilian vehicle passing through a military roadblock for a target.
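As a minimal, hypothetical sketch of that failure mode, the Python snippet below (using NumPy and scikit-learn, neither of which is mentioned in the article) trains a toy classifier on data where a noisy "tank cue" is perfectly confounded with a clean "soldiers present" cue. Both features and the scenario are invented for illustration; no real targeting system works on two numbers.

```python
# Hypothetical illustration of dataset bias: every training example with a tank
# also has soldiers in formation, so "soldiers present" becomes a shortcut the
# model can learn instead of the tank itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

tank = rng.integers(0, 2, n)   # ground truth: is there a tank in the scene?
soldiers = tank.copy()         # biased data: soldiers appear with every tank, never without

# The tank cue is noisy (obscured, odd angles); the soldier cue is clean and obvious.
tank_cue = tank + rng.normal(0.0, 1.0, n)
soldier_cue = soldiers + rng.normal(0.0, 0.1, n)

X_train = np.column_stack([tank_cue, soldier_cue])
model = LogisticRegression().fit(X_train, tank)
print("learned weights (tank cue, soldier cue):", model.coef_[0])

# Deployment: a civilian vehicle at a roadblock -- no tank, but soldiers nearby.
civilian_vehicle = np.array([[0.0, 1.0]])
print("P(target):", model.predict_proba(civilian_vehicle)[0, 1])  # uncomfortably high
```

In this toy setting, the learned weight on the soldier cue dwarfs the weight on the tank cue, so the scene with no tank still scores as a likely target. That is the sense in which biased training data, not malice, can make a system dangerous.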

Why do we need autonomous weapons?

Civilians in many countries (such as Vietnam, Afghanistan and Yemen) have suffered because of the way the world’s superpowers build and use increasingly advanced weapons. Many say they have done more harm than good, most recently pointing to the Russian invasion of Ukraine in early 2022.

In the other camp, there are people who argue that a country must be able to defend itself, which means keeping up with other nations’ military technology. Artificial intelligence can already outsmart people at chess and poker. It surpasses humans in the real world as well. For example, Microsoft says its speech recognition software has an error rate of 1% compared to a human error rate of around 6%. No wonder armies are slowly handing over the reins to algorithms.

But how do we avoid adding killer robots to the long list of things we wish we had never invented? First, know your enemy.

What are lethal autonomous weapons (LAWs)?

The U.S. Department of Defense defines an autonomous weapon system as: “A weapon system that, when activated, can select targets and engage them without further human intervention.”

Many combat systems already meet these criteria. Computers in drones and modern missiles have algorithms that can detect targets and fire at them with far greater precision than a human operator. Israel’s Iron Dome is one of several active defense systems that can engage targets without human supervision.

Although the Iron Dome was designed for missile defense, it could kill people by accident. But the risk is seen as acceptable in international politics because the Iron Dome has a generally reliable record of protecting civilian lives.

Israeli missile defense system. CameleonsEye/Shutterstock

There are also AI-powered weapons designed to attack humans, from sentry robots to the loitering kamikaze drones used in the war in Ukraine. LAWs are already here. So, if we want to influence the use of LAWs, we need to understand the history of modern weapons.

The rules of war

International agreements, such as the Geneva Conventions, govern the treatment of prisoners of war and civilians during a conflict. They are one of the few tools we have to control how wars are fought. Unfortunately, the use of chemical weapons by the US in Vietnam and by Russia in Afghanistan is proof that these measures are not always effective.

It is worse when key players refuse to sign up. The International Campaign to Ban Landmines (ICBL) has been lobbying politicians since 1992 to ban mines and cluster munitions (which randomly scatter small bombs over a wide area). In 1997, the Ottawa Treaty introduced a ban on these weapons and was signed by 122 countries. But the US, China and Russia did not buy in.

Land mines have injured and killed at least 5,000 soldiers and civilians annually since 2015, and as many as 9,440 people in 2017. The Landmine and Cluster Munition Monitor 2022 report says:

The death toll…has been alarmingly high for the past seven years, after more than a decade of historic reductions. The year 2021 was no exception. This trend is largely the result of the increase in conflicts and contamination by improvised mines observed since 2015. Civilians accounted for the majority of the recorded victims, half of whom were children.

Read more: The death toll from land mines is on the rise – and it will take decades to clear them all up

Despite the ICBL’s best efforts, there is evidence that both Russia and Ukraine (a party to the Ottawa Treaty) have used anti-personnel mines during the Russian invasion of Ukraine. Ukraine has also relied on drones to direct artillery strikes and, more recently, for “kamikaze attacks” on Russian infrastructure.

Our future

But what about more advanced AI-enabled weapons? The Campaign to Stop Killer Robots lists nine key issues with LAWs, focusing on the lack of accountability and the inherent dehumanization of killing that comes with it.

While this criticism is valid, a total ban on LAWs is unrealistic for two reasons. First, as with mines, Pandora’s box has already been opened. Second, the lines between autonomous weapons, LAWs and killer robots are so blurred that it is difficult to tell them apart. Military leaders would always be able to find a loophole in the wording of a ban and sneak killer robots into service as defensive autonomous weapons. They might even do so unknowingly.

San Francisco, California – December 5, 2022: Activists opposing the introduction of armed police robots gathered at City Hall. Phil Pasquini

We will almost certainly see more AI-enabled weapons in the future. But that doesn’t mean we should look the other way. More specific and nuanced bans would help our politicians, data scientists and engineers to hold themselves accountable.

For example, by prohibiting:

  • black box artificial intelligence: systems where the user has no information about the algorithm other than its inputs and outputs (see the sketch after this list)

  • unreliable AI: systems that have been poorly tested (as in the military roadblock example mentioned earlier).
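As a rough illustration of what “inputs and outputs only” means in practice, here is a hypothetical Python sketch, with invented class and method names, contrasting an opaque decision interface with one that also surfaces the evidence behind a decision. It is a sketch of the idea, not a description of any real system.

```python
# Hypothetical interfaces, for illustration only.

class BlackBoxTargeter:
    """The operator sees only input and output: no features, weights or reasoning."""
    def assess(self, image) -> str:
        # ... opaque model internals ...
        return "target"

class AuditableTargeter:
    """A more accountable interface also returns the evidence behind the decision."""
    def assess(self, image) -> tuple[str, dict]:
        # ... model internals plus an explanation step ...
        decision = "target"
        evidence = {"salient_regions": [(120, 80, 40, 40)], "confidence": 0.62}
        return decision, evidence
```

The point of the second interface is simply that a human, or an investigator after the fact, has something to audit; banning the first kind of system is one concrete, enforceable step short of a total ban.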

And you don’t have to be an AI expert to have a view on LAWs. Keep up to date with new developments in military artificial intelligence. When you read or hear about AI being used in combat, ask yourself: is it legitimate? Does it protect civilian lives? If not, contact the communities that are working to keep these systems under control. Together, we have a chance to prevent artificial intelligence from doing more harm than good.

This article has been republished from The Conversation under a Creative Commons license. Read the original article.


Jonathan Erskine receives funding from UKRI and Thales Training and Simulation Ltd.

Miranda Mowbray is affiliated with the University of Bristol, where she lectures on AI ethics at the UKRI-funded Centre for Doctoral Training in Interactive AI. She is a member of the advisory board of the Open Rights Group.
