‘Part of the kill chain’: how can we control weaponised robots?

The security convoy turned on to Tehran’s Imam Khomeini Boulevard at around 3:30pm on November 27, 2020. The VIP was the Iranian scientist Mohsen Fakhrizadeh, widely regarded as the head of Iran’s secret nuclear weapons program. He was driving his wife to their country property, flanked by bodyguards in other vehicles. They were close to home when the assassin struck.

Several shots rang out, smashing into Fakhrizadeh’s black Nissan and bringing it to a halt. The gun fired again, hitting the scientist in the shoulder and forcing him out of the vehicle. With Fakhrizadeh in the open, the assassin delivered the fatal shots. His wife, sitting in the passenger seat, was uninjured.

Then something bizarre happened. A pickup truck parked on the side of the road exploded for no apparent reason. Sifting through the wreckage afterward, Iranian security forces found the remains of a robotic machine gun, with multiple cameras and a computer-controlled mechanism to pull the trigger. Had Fakhrizadeh been killed by a robot?

Subsequent reporting by the New York Times revealed that the robot machine gun was not fully autonomous. Instead, an assassin some 1,000km away was fed images from the truck and decided when to pull the trigger. But the AI software compensated for the target’s movements during the 1.6 seconds it took for the images to be relayed via satellite from the truck to the assassin and for the trigger signal to travel back.

It’s the stuff of nightmares, and footage from the war in Ukraine is doing nothing to allay fears. Drones are ubiquitous in the conflict, from the Turkish-made Bayraktar TB2 used to attack occupying Russian forces on Snake Island, to the seaborne drones that attacked Russian ships in Sevastopol harbor, and the modified quadcopters dropping grenades on unsuspecting infantry and other targets. And if footage on the internet is anything to go by, things could get worse.

A ceremony for Iranian nuclear scientist Mohsen Fakhrizadeh, who was killed by a robot machine gun operated by an assassin located 1,000km away. Photograph: Anadolu Agency/Getty Images

In one video posted on Weibo, a Chinese defense contractor appears to showcase a drone placing a robot dog on the ground. The robot springs to life. On its back is a machine gun. In another video, a commercially available robot dog appears to have been modified by a Russian individual to fire a gun, the recoil lifting the robot on to its hind legs.

In response to these alarming videos, in October Boston Dynamics and five other robotics companies issued an open letter stating: “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponised applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.”

In a statement to the Observer, the company further explained: “We’ve seen an increase in makeshift efforts by individuals attempting to weaponise commercially available robots, and this letter indicates that the broader advanced mobile robotics industry opposes weaponisation and is committed to avoiding it. We are hopeful the strength in our numbers will encourage policymakers to engage on this issue to help us promote the safe use of mobile robots and prohibit their misuse.”

However, Boston Dynamics is effectively owned by the Hyundai Motor Group, which in June 2021 bought a controlling interest in the company, and another part of that group, Hyundai Rotem, has no such qualms. In April this year, Hyundai Rotem announced a collaboration with another South Korean firm, Rainbow Robotics, to develop multi-legged defense robots. A promotional illustration shows a robot dog with a gun attached.

In addition, defense analyst and military historian Tim Ripley wonders what Boston Dynamics’ commitment means in practice. Even if you don’t strap weapons to these robots, he says, they can still be instruments of war.

“If the robot is a surveillance drone, and it finds a target, and you fire an artillery shell at it, and it kills people, then that drone is just as much a part of a weapons system as having a missile on the drone. It’s still a part of the kill chain,” he says.

He points out that drone surveillance has played a crucial role in the Ukraine war, used on both sides to track enemy movements and find targets for artillery bombardments.


When it comes to computerized military hardware, there are always two parts of the system: the hardware itself and the control software. While robots beyond drones are not yet a common feature on the battlefield, more and more intelligent software is being widely used.

“There’s a whole range of autonomy that’s already built into our systems. It’s been deemed necessary because it enables humans to make quick decisions,” says Mike Martin, senior war studies fellow at King’s College London.

Dogs of war: a young girl and her mother interact with a robot dog, made by Ghost Robotics, in Seoul, South Korea. Photograph: AFP/Getty Images

He cites the example of an Apache helicopter scanning the landscape for heat signatures. The onboard software will quickly identify those as potential targets. It may even make a recommendation of how to prioritize those targets, and then present that information to the pilot to decide what to do next.

If defense conventions are anything to go by, there is clearly an appetite in the military for more such systems, especially if they can be twinned with robots. US firm Ghost Robotics makes robot dogs, or quadrupedal robots as the industry calls them. As well as being touted as surveillance devices to help patrols reconnoitre potentially hostile areas, they are also being suggested as killing machines.

At the Association of the United States Army’s annual conference in October 2021, Ghost Robotics showed off a quadrupedal robot with a gun strapped to the top. The gun is manufactured by another US company, Sword Defense Systems, and is called a Special Purpose Unmanned Rifle (Spur). On the Sword Defense Systems website, Spur is described as “the future of unmanned weapon systems, and that future is now”.

In the UK, the Royal Navy is currently trialling an autonomous submarine called Manta. The nine-meter-long unmanned vehicle is expected to carry sonar, cameras, communications and jamming devices. UK troops, meanwhile, are currently in the Mojave desert taking part in war games with their American counterparts. Known as Project Convergence, a focus of the exercise is the use of drones, other robotic vehicles and artificial intelligence to “help make the British army more lethal on the battlefield”.

Yet even in the most sophisticated of current systems, humans are always involved in the decision-making. There are two levels of involvement: in an “in the loop” system, the computer selects possible targets and presents them to a human operator, who then decides what to do. In an “on the loop” system, the computer goes further, recommending which targets it thinks should be taken out first; the human can always override it, but the machine is much more active in making decisions. The Rubicon to be crossed is a fully automated system, one that chooses and prosecutes its own targets without human intervention.
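To make that distinction concrete, here is a minimal sketch in Python of how the three levels of involvement differ structurally. Everything in it is invented for illustration rather than drawn from any real system: the point is simply that an “in the loop” design defaults to doing nothing unless a person approves, an “on the loop” design defaults to proceeding unless a person vetoes, and a fully autonomous design removes the person altogether.

```python
# Illustration only: a toy model of the levels of human involvement
# described above. All names and structures here are invented.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List, Optional


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()   # machine proposes, a person must approve
    HUMAN_ON_THE_LOOP = auto()   # machine proceeds unless a person vetoes
    FULLY_AUTONOMOUS = auto()    # no person in the decision at all


@dataclass
class Candidate:
    label: str
    priority: float  # a score produced by the detection software


def decide(candidates: List[Candidate],
           level: AutonomyLevel,
           operator_approves: Callable[[Candidate], bool],
           operator_vetoes: Callable[[Candidate], bool]) -> Optional[Candidate]:
    """Return the candidate the system acts on, or None."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.priority)  # the machine always ranks

    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Default is "do nothing": action requires positive human approval.
        return best if operator_approves(best) else None
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # Default is "proceed": the human supervises and can only veto.
        return None if operator_vetoes(best) else best
    # Fully autonomous: the line the experts warn against crossing.
    return best
```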

“Hopefully we’ll never get to that stage,” says Martin. “If you hand decision-making to autonomous systems, you lose control, and who’s to say that the system won’t decide that the best thing for the prosecution of the war isn’t the removal of their own leadership?” It’s a nightmare scenario that conjures up images of the film The Terminator, in which artificially intelligent robots decide to wage war to eliminate mankind.

Feras Batarseh is an associate professor at Virginia Tech and co-author of AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI (Elsevier). While he believes that fully autonomous systems are a long way off, he cautions that artificial intelligence is reaching a dangerous stage in its development.

“The technology is at a place where it’s not intelligent enough to be completely trusted, yet it’s not so dumb that a human will automatically know that they should remain in control,” he says.

In other words, a soldier who places their trust in an AI system today may be putting themselves in more danger, because the current generation of AI fails when it encounters situations it has not been explicitly taught to interpret. Researchers refer to such unexpected situations or events as outliers, and war hugely increases their number.

“In war, unexpected things happen all the time. Outliers are the name of the game and we know that current AIs do not do a good job with outliers,” says Batarseh.
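One way engineers try to blunt this problem, at least in principle, is to have a system flag inputs that look nothing like its training data and hand those cases back to a person rather than act on them. The short Python sketch below shows the idea with a deliberately simple statistical test; the threshold and data are invented for illustration, and real-world outlier detection is far harder, which is exactly Batarseh’s point.

```python
# Illustration only: flag inputs that sit far outside the training data,
# so a human, not the model, handles the unexpected case.
import numpy as np


def fit_stats(train_features: np.ndarray):
    """Record the mean and spread of each feature seen in training."""
    return train_features.mean(axis=0), train_features.std(axis=0) + 1e-9


def is_outlier(x: np.ndarray, mean: np.ndarray, std: np.ndarray,
               z_max: float = 4.0) -> bool:
    """True if any feature lies implausibly far from the training data."""
    z = np.abs((x - mean) / std)
    return bool(z.max() > z_max)


# Toy usage: an input that resembles the training data passes; a strange
# one is deferred to a person instead of being trusted to the model.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 8))
mean, std = fit_stats(train)
print(is_outlier(np.zeros(8), mean, std))       # False: looks familiar
print(is_outlier(np.full(8, 10.0), mean, std))  # True: defer to a human
```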

Even if we solve this problem, there are still enormous ethical questions to grapple with. For example, how do you decide whether an AI made the right choice when it took the decision to kill? It is similar to the so-called trolley problem that is currently dogging the development of automated vehicles. It comes in many guises, but essentially it boils down to asking whether it is ethically right to let an impending accident play out, in which a number of people could be killed, or to take some action that saves those people but risks killing a smaller number of others. Such questions take on a whole new gravity when the system involved is actually programmed to kill.

Sorin Matei at Purdue University, Indiana, believes that a step towards a solution would be to program each AI warrior with a sense of its own vulnerability. The robot would then value its continued existence, and could extrapolate that to human beings. Matei even suggests that this could lead to the more humane prosecution of warfare.

A member of a Ukrainian volunteer battalion learns how to operate drones. Drone surveillance has played a crucial role in the war, used by both sides to identify targets. Photograph: Sergiy Kozlov/EPA

“We could program them to be as sensitive as the Geneva Convention would want human actors to be,” he says. “To trust AIs, we need to give them something that they will have at stake.”

But even the most ethically programmed killer robot – or civilian robot for that matter – is vulnerable to one thing: hacking. “The thing with weapons system development is that you will develop a weapon system, and someone at the same time will be trying to counteract it,” says Ripley.

With that in mind, a force of hackable robot warriors would be the most obvious of targets for cyber-attack by an enemy, which could turn them against their makers and scrub all ethics from their microchip memories. The consequences could be horrendous. Yet still it seems that manufacturers and defense contractors are pushing hard in this direction.

In order to achieve meaningful control of such terrible weapons, suggests Martin, we should keep one eye on military history.

“If you look at other weapons systems that humans are really scared of – say nuclear, chemical, biological – the reason we’ve ended up with arms control agreements on those is not because we stopped the development of them early on, but because the development of them got so scary during the arms race that everyone went, OK, right, let’s have a conversation about this,” says Martin.

Until that day comes, it looks certain there are some worrying times ahead, as drones and robots and other unmanned weapons increasingly find their way to the world’s battlefields.
