Not So Different, You and AI

People and machines have different strengths and weaknesses. When it comes to lifting heavy objects, performing tasks with the highest level of accuracy and zero variation, or quickly processing large volumes of data, you go with a machine every time. If you’re looking for creativity, adaptability, problem-solving skills, or the ability to relate to the human experience, you have to pick a person.

Humans are influenced by thoughts, feelings, and emotions; this isn’t something that we associate with machines. Whether this is good or bad depends on the task at hand.

An inability to become distracted is an asset in industrial motion control, but perhaps killer robots shouldn’t be cold, unfeeling, and binary in their view of the world. People in caregiving roles need to empathize and connect with those in their care, yet people also make assumptions, form biases, and develop prejudices based on nothing more than preconceived notions.

It turns out that developing prejudices isn’t limited to humans. Machines may be prone to this behavior as well.

Can AI form prejudices?

Remember when Microsoft’s AI Twitter chatbot, Tay, started posting racist comments? Tay began making offensive tweets within a day of going live. The researchers hadn’t considered that people would troll the chatbot, intentionally teaching Tay inappropriate behavior.

That event didn’t prove that AI is inherently prejudiced. It just proved that the internet can be an unpleasant place.

Researchers in psychology and computer science at MIT and Cardiff University recently conducted a study to observe AI in a more controlled setting. They concluded from their research that artificially intelligent robots can develop biases.

The machines learned prejudice by copying behavior from one another, not because people intentionally taught it to them.

In the simulation, AI bots could choose to donate to other bots within their own group or in a different group. They based their decisions on the reputation of the other bots, among other factors. Over time, the robots became prejudiced against robots from other groups.
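To make the setup concrete, here is a minimal sketch of such a simulation in Python. It is not the study’s actual code; the donation game, the reputation updates, the thresholds, and every parameter value below are assumptions chosen for illustration. Each bot donates only to recipients whose reputation clears a threshold, kept separately for in-group and out-group recipients, and at the end of each generation bots copy the strategy of better-performing peers rather than being taught by people.

```python
import random

# Illustrative donation-game sketch in the spirit of the study described
# above. This is NOT the researchers' code; all names, rules, and parameter
# values here are assumptions made for readability.

COST, BENEFIT = 1.0, 3.0            # donating costs the donor, helps the recipient
N_AGENTS, N_GROUPS = 40, 2
ROUNDS_PER_GEN, GENERATIONS = 200, 50

class Agent:
    def __init__(self, group):
        self.group = group
        # Strategy: minimum reputation a recipient must have before this
        # agent donates, held separately for in-group and out-group bots.
        self.in_threshold = random.random()
        self.out_threshold = random.random()
        self.reputation = 0.5
        self.payoff = 0.0

    def donates_to(self, other):
        threshold = (self.in_threshold if other.group == self.group
                     else self.out_threshold)
        return other.reputation >= threshold

agents = [Agent(i % N_GROUPS) for i in range(N_AGENTS)]

for _ in range(GENERATIONS):
    for agent in agents:
        agent.payoff = 0.0
    # Play the donation game with random donor/recipient pairs.
    for _ in range(ROUNDS_PER_GEN):
        donor, recipient = random.sample(agents, 2)
        if donor.donates_to(recipient):
            donor.payoff -= COST
            recipient.payoff += BENEFIT
            donor.reputation = min(1.0, donor.reputation + 0.05)
        else:
            donor.reputation = max(0.0, donor.reputation - 0.05)
    # Social learning: each agent copies the strategy of a randomly chosen
    # higher-payoff agent -- the bots learn from one another, not from people.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.in_threshold = model.in_threshold
            agent.out_threshold = model.out_threshold

avg_in = sum(a.in_threshold for a in agents) / N_AGENTS
avg_out = sum(a.out_threshold for a in agents) / N_AGENTS
print(f"avg in-group donation threshold:  {avg_in:.2f}")
print(f"avg out-group donation threshold: {avg_out:.2f}")
```

A growing gap between the average in-group and out-group thresholds would indicate that in-group favoritism has taken hold in the population, purely through bots imitating successful bots.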

Cardiff University professor Roger Whitaker, one of the study’s co-authors, said the simulations show that “prejudice is a powerful force of nature; through evolution, it can become incentivized in virtual populations, to the detriment of wider connectivity with others.”

Is prejudice a “force of nature”?

People are susceptible to prejudice, and several studies show that machines can be susceptible to it, too. That doesn’t necessarily mean prejudice or bias is inherent to AI, though.

One of the challenges in creating unbiased AI is that humans, with our inclination toward bias and prejudice, are the ones programming it.

Is it possible for humans to program AI without prejudice? If so, we may learn something valuable about our own biases in the process.