Artificial intelligence is like many other tools. If used correctly, responsibly, and with purpose, it can do a lot of good. It can also do a lot of harm if used without care or consideration. We like to think of machines as being free of bias and partiality, as these are human tendencies. However, we’ve seen that AI is just as prone to bias as humans are, if not more so.
So how do we ensure that artificially intelligent systems do more good than harm? Some experts believe that a code of ethics for AI is the key.
A code of ethics for AI
The Defense Innovation Board (DIB) — an independent federal advisory committee advising the Secretary of Defense — states that maintaining a competitive advantage in AI is “essential to our national security.” The DIB recommends a set of ethics principles that should apply to artificial intelligence “in both combat and non-combat environments.”
This set of ethics for AI is the product of 12 months of work by experts in their fields: business leaders, scientists, technologists, inventors, educators, ethicists, futurists, lawyers, human rights experts, philosophers, civil society leaders, and entrepreneurs. The DIB also considered two public hearings as well as digital and in-person input from the public. The board recommends five principles:
We must use good judgement and take responsibility for the development, deployment, and outcomes of artificially intelligent systems.
We must make a concerted effort to prevent unintended bias in artificially intelligent systems that could inadvertently harm people.
Those building AI systems should be experts with a thorough understanding of the technology. AI systems should be built with “transparent and auditable methodologies”.
AI systems must have clearly defined functions, and all features — including safety and security — must be thoroughly tested and proven.
We must be able to safely and effectively monitor and control AI systems.
Should these ethics apply to all AI?
The Defense Innovation Board developed this ethical code with DOD systems in mind. However, these principles aren’t specific to military AI systems. The DIB white paper on AI ethics states:
“AI is a powerful, emerging technology; there’s much we do not know about the consequences of its application in various contexts or about the interaction and interoperability of such systems (including legacy and new systems).”
This applies to artificial intelligence in all fields. As AI technologies improve, as we implement AI in more areas, and as we rely more heavily on these systems, we must make sure that we are using this tool responsibly. It makes sense to develop a set of rules that establishes right and wrong as we develop these systems, and to apply those rules anywhere we implement AI.
As the Defense Innovation Board puts it, “Ethics cannot be ‘bolted on’ after a widget is built or considered only once a deployed process unfolds, and policy cannot wait for scientists and engineers to figure out particular technology problems. Rather, there must be an integrated, iterative development of technology with ethics, law, and policy considerations happening alongside technological development.”
As a factory owner, you’re not developing AI systems, and your plant may be a long way from deploying AI on the floor. Leave AI to the experts and focus on what you can control: your industrial motion control system. Call 479-422-0390 for service, support, or repair on your Indramat motion control system.