Can Robots Lie?

We know that people will lie to robots to avoid hurting their “feelings”, but what if robots start telling lies back to us? A lie is more than simply being incorrect. Machines err far less often than humans, but errors do happen. Even so, when you see an error code indicating that a parameter value is incorrect, you don’t interpret that as a lie. When your GPS leads you to a cornfield rather than the address you entered, you haven’t been had. Merriam-Webster defines a lie as making “an untrue statement with the intent to deceive.” That means a lie requires purpose: a deliberate choice to give false information. So can robots lie, or is this concept beyond machines?

There are a few examples of robots that can, in a sense, tell a lie.

Little white robot lies.

The famous and commonly referenced robot Pepper is a social robot designed to interact with people and engage in conversation. Social interactions come with rules, and sometimes white lies are a way to be polite or to avoid awkward situations. People who are always direct and forthcoming can come across as rude, abrasive, or strange. Robots designed to interact with humans are therefore programmed to understand social cues and etiquette, and there’s a case to be made that social robots and robot assistants need the ability to tell these small lies.
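
To make that concrete, here is a toy Python sketch of how a white-lie rule might be coded into a dialogue system. Everything in it, from the function name to the politeness rule and the stock phrase, is invented for illustration; real social robots like Pepper rely on far richer dialogue models.

```python
def polite_reply(honest_opinion: str, sentiment: float) -> str:
    """Pick a reply for a social robot, softening negative truths.

    A toy white-lie rule: if the honest answer would hurt (negative
    sentiment), substitute a polite stock phrase. The rule and the
    phrases here are hypothetical, for illustration only.
    """
    if sentiment >= 0:                 # positive truths can be said plainly
        return honest_opinion
    return "That looks great!"         # white lie instead of a blunt truth

# Example: the robot is asked about a haircut it "rates" poorly.
print(polite_reply("I don't like that haircut.", sentiment=-0.8))
```

The point is simply that the “lie” is an explicit design decision: somewhere in the code, a blunt truth is swapped for a kinder output.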

Nothing to see here.

Swiss researchers found that a group of robots would deceive each other over time when competing for resources. The robots earned points for being near “food” (a light-colored ring on the floor) and lost points for being near “poison” (a dark-colored ring on the floor). The robots could also flash a blue light that other robots could detect. Initially, the lights flashed at random, but over time, as robots congregated near the food, a flashing light came to mean food. By the 50th generation, the robots had evolved to stop flashing their lights when they were near food.
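
The dynamic is easy to reproduce in simulation. Below is a minimal Python sketch under heavily simplified assumptions: each robot’s only “gene” is its probability of flashing near food, a flash attracts competitors who split the payoff, and the fittest half of each generation is cloned with mutation. It is a toy model of the idea, not the researchers’ actual setup.

```python
import random

POP_SIZE = 50        # robots per generation
GENERATIONS = 50     # the study saw deception emerge by generation 50
FOOD_PAYOFF = 10.0   # points for reaching the food ring
MUTATION_STD = 0.05  # noise added to each offspring's flash probability

def run_generation(flash_probs):
    """Score one foraging round for every robot.

    A robot that finds food earns points, but if it flashes its light,
    other robots converge on the spot and the payoff gets split.
    """
    scores = [0.0] * len(flash_probs)
    for i, p in enumerate(flash_probs):
        if random.random() > 0.5:    # this robot never found the food
            continue
        if random.random() < p:      # it flashed: competitors pile in
            competitors = random.randint(4, 8)
            scores[i] = FOOD_PAYOFF / competitors
        else:                        # it stayed dark and ate alone
            scores[i] = FOOD_PAYOFF
    return scores

def evolve():
    # Start with robots that flash more or less at random.
    population = [random.random() for _ in range(POP_SIZE)]
    for gen in range(1, GENERATIONS + 1):
        scores = run_generation(population)
        # The fittest half become parents; offspring are mutated clones.
        ranked = [p for _, p in sorted(zip(scores, population), reverse=True)]
        parents = ranked[: POP_SIZE // 2]
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION_STD)))
            for _ in range(POP_SIZE)
        ]
        if gen % 10 == 0:
            print(f"generation {gen:2d}: mean flash probability = "
                  f"{sum(population) / POP_SIZE:.2f}")

if __name__ == "__main__":
    evolve()
```

Run it and the mean flash probability drifts toward zero: robots that keep quiet keep more food, so silence, a lie of omission, is what selection rewards.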

Going all-in.

It’s not easy to predict the outcome of games where incomplete information is a key component. Poker is viewed not only as a game of luck and chance but also as a game of skill. There are now poker-playing bots that can feign aggression, bluff, and manipulate opponents to improve their odds of winning a hand.
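
A bluff, in code, is just a mixed strategy: bet strong hands for value, and bet a fraction of weak hands so opponents can’t safely fold to every bet. The Python sketch below illustrates that idea only; the thresholds and the one-third bluff rate are arbitrary numbers, not frequencies taken from any real poker bot, which derive them through game-theoretic computation.

```python
import random

def choose_action(hand_strength: float, bluff_rate: float = 1 / 3) -> str:
    """Decide whether to bet or check, given hand strength in [0, 1].

    Strong hands always bet (a value bet). Weak hands bet some of the
    time (a bluff), which keeps opponents from folding every time the
    bot bets. The numbers here are illustrative, not optimal.
    """
    if hand_strength > 0.7:
        return "bet"     # value bet: probably the best hand
    if hand_strength < 0.3 and random.random() < bluff_rate:
        return "bet"     # bluff: representing strength the bot lacks
    return "check"

# A weak hand still comes out betting about a third of the time.
print(choose_action(hand_strength=0.1))
```

Because the weak-hand bet and the strong-hand bet look identical from across the table, the opponent can never be sure which one they’re facing.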

Is it dangerous to teach robots to lie?

Right now, it’s difficult to even classify what robots do as lying. Calling a plain man handsome, holding back a flash of light, or bluffing by design doesn’t really seem like deception. With machine learning, however, you have to wonder whether teaching robots to lie could eventually cause real harm. Should we program robots to lie and deceive, or should we make certain that robots tell the truth at all times? Do we want silver-tongued robots to master the art of social niceties and white lies, or would it be better to have straight-shooting machines bluntly telling it like it is?