Human beings have a sense of ethics. Some moral compasses are tuned better than others, but generally speaking, we can tell the difference between right and wrong. We know that stealing something is unethical, for example. You’re not supposed to take something that doesn’t belong to you. Of course, certain factors can complicate the issue. Is taking a loaf of bread still unethical if you’re doing so to keep your family from starving? While some would say yes, and others would say no, we as humans are able to consider the options and view actions within the scope of morality. But can robots have a sense of ethics?
Researchers are working on ethical standards for robots and artificial intelligence.
The British Standards Institution has suggested a set of ethical guidelines for robots and artificial intelligence. So what does this moral code entail?
- Robots should not be designed for the sole purpose of harming or killing a human.
- Robots and AI should not discriminate.
- Human developers are ultimately responsible for the actions of the robots they build.
The standards also bring to light the issue of emotional bonds between humans and robots.
Does this mean robots would have a moral code?
While some people believe that this is a big first step in embedding ethics into robotics and artificial intelligence, it does raise an important question. Does this mean robots have a sense of ethics, or is it just the illusion of a moral code?
If a robot simply carries out programmed functions, is there really any sense of ethics involved? Artificial intelligence can appear to make decisions based on morality, but that appearance doesn't necessarily reflect a genuine sense of ethics.
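To make that distinction concrete, here is a minimal, purely hypothetical sketch. The function name and its logic are invented for illustration; no real robot works this way. The point is that what looks like a moral choice can be nothing more than a condition check a programmer wrote in advance.

```python
# Hypothetical sketch: a "moral" decision that is really just a
# hard-coded rule. All names and values here are invented for
# illustration.

def should_take_item(owned_by_robot: bool) -> bool:
    """Refuse to take anything the robot does not own."""
    # The robot "decides" not to steal, but there is no reasoning
    # about right and wrong here -- only a condition check.
    return owned_by_robot

print(should_take_item(owned_by_robot=False))  # False -- looks ethical
print(should_take_item(owned_by_robot=True))   # True
```

The output looks like restraint, but the "ethics" was settled the moment the programmer wrote the rule.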
Perhaps standards for robot ethics apply more to the programmers than to the robots themselves.
Concerns about standards for robot ethics.
There's a big question mark over machine learning and how it influences a machine's understanding of morality. Deep learning can lead to a strange sense of ethics: an AI system could behave as though it were following an ethical process when in fact it is not.
There's also the possibility that machine learning could result in a lack of cultural diversity and in ethnocentric robots. Deep learning systems are often trained on data from the Internet, which is filled with less-than-savory views and opinions.
And then there are the gray areas. Those times when you know that stealing is wrong, but letting your family starve is also wrong. Would AI have an algorithm to quantify how good something is? Would there be a goodness threshold to dictate a robot’s decisions?
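For illustration only, here is what such a "goodness threshold" might look like in code. Every score, weight, and cutoff below is an arbitrary number invented for this sketch, and that is exactly the problem: someone has to pick them.

```python
# Purely hypothetical "goodness threshold" sketch. The scores and
# threshold are arbitrary values chosen for illustration.

GOODNESS_THRESHOLD = 0.0  # hypothetical cutoff for acceptable actions

def goodness_score(benefit: float, harm: float) -> float:
    """Quantify an action as benefit minus harm (a made-up metric)."""
    return benefit - harm

# The bread dilemma, reduced to numbers a programmer chose:
steal_bread = goodness_score(benefit=0.9, harm=0.4)  # feed family, but theft
do_nothing = goodness_score(benefit=0.0, harm=0.9)   # no theft, family starves

for name, score in [("steal bread", steal_bread), ("do nothing", do_nothing)]:
    verdict = "acceptable" if score >= GOODNESS_THRESHOLD else "unacceptable"
    print(f"{name}: score={score:+.1f} -> {verdict}")
```

Whatever numbers you plug in, the moral judgment lives in the humans who chose them, not in the machine that compares them to a threshold.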
Fortunately, we deal with robots in a different way. Rather than tackle challenging philosophical questions about whether or not robots have a sense of ethics, we just make sure that your Indramat motion control system is working like it should. Contact us today for any of your Indramat needs.