Robots and AI can lead to more devastating wars

Lethal autonomous weapons (LAWs), often called killer robots or slaughterbots, are probably familiar to you from films and books. For now, rogue super-intelligent weaponry remains the stuff of science fiction. But as AI weapons become more advanced, public concern is growing over the lack of accountability and the possibility of technical failure.

Harmful AI mistakes are nothing new. But in a war, these kinds of errors could kill civilians or derail negotiations.

Take an example from this article: a target recognition algorithm may be trained to recognize tanks in satellite images. But what if every image used to train the system showed soldiers standing in a circle around the tank? The system could learn the wrong cue and mistake a civilian car passing through a military checkpoint for a target.
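To make this failure mode concrete, here is a minimal sketch of how a classifier can latch onto a spurious cue. The features, numbers, and data below are invented purely for illustration and are not tied to any real weapon system; the point is only that a cue merely correlated with the target in training data (soldiers standing nearby) can come to dominate the decision.

```python
# Toy illustration of a spurious-correlation shortcut; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two crude features extracted from each (hypothetical) satellite image:
#   x0 = a tank silhouette is clearly visible (noisy: tanks are often obscured)
#   x1 = soldiers are standing in a circle around the object
tanks = np.column_stack([
    rng.random(n) < 0.7,           # silhouette visible in only 70% of tank images
    np.ones(n, dtype=bool),        # soldiers appear in *every* tank training image
]).astype(float)
background = np.zeros((n, 2))      # empty terrain: no tank, no soldiers

X = np.vstack([tanks, background])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = target, 0 = not a target

model = LogisticRegression().fit(X, y)
print("weights [silhouette, soldiers]:", model.coef_[0])

# A civilian car passing a checkpoint: no tank silhouette, but soldiers nearby.
civilian_car = np.array([[0.0, 1.0]])
print("P(target | civilian car):", model.predict_proba(civilian_car)[0, 1])
```

Because the "soldiers nearby" feature perfectly separates the two classes in training, the model leans on it heavily, and the civilian car is scored as a likely target even though no tank is visible.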

Civilians in many countries (including Vietnam, Afghanistan, and Yemen) have suffered as the world’s superpowers manufacture and use ever more advanced weapons. For one camp, that record is reason enough to oppose developing AI weapons any further.

In the other camp are those who believe that a nation must be able to defend itself, which means keeping up with other countries’ military technology. AI already outperforms humans at some tasks: Microsoft, for example, claims its speech recognition software has a 1% error rate, compared with 6% for humans. So it should come as no surprise that armies are gradually handing the reins over to algorithms.

But how do we keep killer robots from joining the lengthy list of inventions we regret?

The US Department of Defense defines an autonomous weapon system as: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator”.

Several combat systems already meet this standard. The computers in modern missiles and drones run algorithms that can identify targets and fire at them with far greater accuracy than a human operator. Israel’s Iron Dome is one active defense system that can engage targets without human supervision.

Although it is designed for missile defense, the Iron Dome could cause fatalities by accident. But because of its generally consistent track record of protecting civilian lives, the risk is accepted in international politics.

Robot sentries and the loitering “kamikaze” drones used in the war in Ukraine are just two examples of AI-enabled weapons designed to harm people. So if we hope to influence how LAWs are used, we need to understand the history of modern weapons.

International agreements, such as the Geneva Conventions, set standards for how civilians and prisoners of war should be treated during hostilities, and they are one of the few tools we have for controlling how conflicts are fought. Unfortunately, the US’s use of chemical weapons in Vietnam and Russia’s use of them in Afghanistan show that these measures aren’t always effective.

Worse still is when key players refuse to take part. Since 1992, the International Campaign to Ban Landmines (ICBL) has lobbied for a ban on mines and cluster munitions (which scatter small bombs indiscriminately over a wide area). The Ottawa Convention of 1997 included a ban on these weapons, which 122 countries signed. But the US, China and Russia did not agree to it.

But what about more sophisticated AI-powered weaponry? The Campaign to Stop Killer Robots lists nine key problems with LAWs, focusing on the lack of accountability and the inherent dehumanization of killing that follows from it.

Although this critique is legitimate, a complete ban on LAWs is implausible for two reasons. First, like mines, they have already been legitimized. Second, the lines between autonomous weapons, LAWs, and killer robots are blurred, which makes them hard to tell apart. Military leaders could always find loopholes in a ban’s wording and bring killer robots into service as defensive autonomous weapons. They might even do so inadvertently.

AI-enabled weapons are almost certain to become more common in the future. But this does not mean we have to turn a blind eye. More specific and detailed restrictions would make it easier to hold our politicians, data scientists, and engineers accountable.

For instance, by banning:

  • black box AI: systems in which the user knows only the algorithm’s inputs and outputs, with no insight into how it reaches its decisions.
  • unreliable AI: systems that have been inadequately tested.

Relying on robots and artificial intelligence to wage war pushes responsibility for criminal acts even further away. Delegating killing to machines in this way could lead to bloodier killing with less moral restraint, giving rise to an entirely new kind of warfare. And since machines never tire, what will bring a war to an end?