Ethical Standards and Killer Robots

Anyone familiar with robots knows that they serve many wonderful and helpful purposes. From the large industrial robots that have revolutionized manufacturing, to the robots that assist with surgery, to the robots being deployed for search and rescue and disaster relief, robots do a great deal of good. And as robotic technologies become more advanced, the applications for robots continue to grow.

The military’s use of robots has been a topic of debate for some time. There isn’t much controversy over a robot that can help carry equipment for soldiers on the battlefield, and recon robots seem harmless enough. It’s the robots that are equipped with firepower and capable of attack that raise concerns.

A recent NPR article tackled the issue of killer robots.

Lethal autonomous robots are considered by some to be the third revolution in warfare, after gunpowder and nuclear weapons. The idea is that killer robots could have an immense impact on war, in ways we may not yet be able to fathom. International agreements on warfare already set out the ethical rules for human beings engaged in combat.

No such standards have been established for killer robots, because robotics in general, and robots in war in particular, are so new.

Last month the United Nations determined that a protocol needs to be set for killer robots, just as other weapons of war are regulated. Robotic technologies are advancing at a staggering rate, which makes establishing norms all the more pressing. The U.N. meets again in December and will decide whether to pursue international law to regulate killer robots.

It’s unnerving to think of a machine built for destruction, devoid of human consciousness, roaming… well, anywhere really. What happens if such a robot exits the battlefield and stumbles upon a city full of innocent civilians, or loses communication with its human controller yet continues to seek out its target?

One of the big concerns regarding killer robots, apart from the fact that they are killer robots, is the potential arms race that could lead to dangerous technologies with unexpected and horrific outcomes. Fourteen countries, along with several organizations and individuals, have called for banning killer robots altogether.

It’s hard to imagine the benefits of autonomous machines designed for war, especially when we think of the killer robots of science fiction, but one viewpoint holds that lethal robots could actually do some good. Far too many people, both civilians and soldiers, are killed in wars; wars carried out by machines could reduce the number of human casualties. There’s also the idea that robots could serve as precision-guided weapons. Robots and precision go hand in hand, which means the potential for less collateral damage.

But it’s easy to imagine the negative potential of killer robots. Perhaps the biggest objection is simply that robots are not human. The NPR article quotes Harvard Law School professor Bonnie Docherty as saying,

“It would undermine human dignity to be killed by a machine that can’t understand the value of human life.”

Robots can make mistakes and kill without discretion. A robot programmed to kill unquestioningly could be a terrible thing.

Either way, killer robots are no longer just something for science fiction buffs to debate. They are now a real-world concern.