AI: Risks and Regulations

Gartner Risk & Audit produced a report on emerging risks, and generative AI leads the pack. The Quarterly Emerging Risk Report follows up on six AI-related risks Gartner had previously identified:

  • Fabricated and inaccurate answers
  • Data privacy (or the lack thereof)
  • Bias (which we’ve already seen in hiring software)
  • Intellectual property and copyright risks
  • Cyber fraud risks
  • Consumer protection risks

While some of these are more obviously connected with enterprise-level business than others, manufacturers and printers also need to be aware of consumer protection risks, hiring and credit bias, and risks associated with fraud and privacy. Gartner recommends regulation to help cope with these risks.

The government is aware of these issues and has been working on a framework for AI regulation. Its five principles:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

These mesh with Gartner’s recommendations. The administration is calling its work-in-progress “An AI Bill of Rights.” The rights in question are human rights, not the rights of large language models. “This important progress must not come at the price of civil rights or democratic values, foundational American principles,” the Bill of Rights declares.

Safe and effective systems

This principle says that technology should be developed with safety in mind, including protections from foreseeable harm and danger. The “foreseeable” clause is certainly a problem. We’ve seen that warehouse injuries increase when decisions about the speed of machines and other process matters are made with AI tools. Is that foreseeable?

People didn’t foresee it, at least, and now that it has happened, it hasn’t always been remedied. AI in factories can certainly contribute to greater safety, as when machines alert humans that they need repair or maintenance. But how often can we really predict the possible harm?

Algorithmic discrimination protection

A recent book, Weapons of Math Destruction by Cathy O’Neil, goes into great detail showing how algorithms discriminate against human beings on the grounds of race, gender, age, income level, and so on. In many cases, readers will not have noticed the discrimination until it’s pointed out — but they can never unsee it again.

While some examples suggest intentional exploitation, often the damage is done with good intentions. The AI tools are meant to be fairer than human judgment, or to use criteria that don’t rely on demographics…but they fall short. “Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way,” the Bill of Rights says, but how can that be regulated in real life?

We already see bias in AI tools on a regular basis. How can that be avoided, predicted, or cleaned up?
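One modest starting point is a statistical audit of outcomes. The sketch below is not from Gartner or the Bill of Rights; it is a minimal, hypothetical illustration of one widely used check, the “four-fifths rule,” which compares selection rates between demographic groups. The group labels and numbers are invented.

```python
# A minimal, hypothetical sketch of a disparate-impact check.
# 1 = advanced (e.g., resume passed screening), 0 = rejected.

def selection_rate(decisions):
    """Fraction of positive outcomes in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected_group, reference_group):
    """Ratio of the protected group's selection rate to the reference group's.
    Values below 0.8 are commonly treated as a red flag worth investigating."""
    return selection_rate(protected_group) / selection_rate(reference_group)

# Hypothetical screening outcomes for two groups of applicants.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # reference group: 7 of 10 advanced
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # protected group: 4 of 10 advanced

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # ≈ 0.57 -> flag for human review
```

A check like this doesn’t prove or disprove discrimination, and it certainly doesn’t fix it, but it shows the kind of continuous, measurable monitoring that regulation could plausibly require.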

Data privacy

Speaking of shutting the barn door after the horse has already run away, this one is such a widespread problem that it is hard to imagine how it can be addressed. The Bill of Rights specifically speaks out against workplace surveillance. Factories that use AI to get warnings about sleepy workers or people getting too close to machinery shouldn’t have to give up those safety measures, and employers who use productivity measurement software won’t want to do so either.

Notice and explanation

“Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible,” says the Bill of Rights.

As O’Neil pointed out, many AI algorithms are black boxes for business reasons. They may also be very hard to explain in “generally accessible plain language.” This rule also asks that the automated systems themselves give clear explanations of what they are doing.
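For a concrete, if toy, picture of what such an explanation might look like, here is a minimal sketch. It is not drawn from the Bill of Rights or any real product: the feature names, weights, and threshold are all hypothetical, and real systems are far less transparent than a simple weighted score.

```python
# A hypothetical weighted-score decision that reports its top factors in plain language.

WEIGHTS = {
    "years_of_experience": 0.5,
    "missed_payments": -2.0,
    "income_thousands": 0.03,
}
THRESHOLD = 1.0

def decide_and_explain(applicant):
    # How much each factor pushed the score up or down.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Rank factors by the strength of their influence, in either direction.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision: {'approved' if approved else 'declined'} (score {score:.2f})."]
    for name, value in ranked:
        direction = "helped" if value > 0 else "hurt"
        lines.append(f"- {name.replace('_', ' ')} {direction} the application ({value:+.2f})")
    return "\n".join(lines)

print(decide_and_explain({"years_of_experience": 3, "missed_payments": 1, "income_thousands": 40}))
```

Even in this trivial case, turning weights into sentences a layperson can act on takes deliberate design work; for a genuinely opaque model, it is a research problem.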

Human alternatives, consideration, and fallback

Have you ever demanded a human being’s help when you’re talking to a chatbot? That’s what this principle is about. If you are dealing with AI, you should always be able to ask to switch to dealing with a human being.

Like several of the other principles, this one is bound to increase costs across the board. It’s in the nature of automation that it doesn’t have much human oversight. Those hotel robots that have to be watched and rescued by people generally turn out to be impractical.
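The mechanics of the fallback itself are simple; the cost lies in staffing the humans on the other end. Below is a minimal, hypothetical sketch (not any vendor’s actual API) of a chatbot loop that hands the conversation off as soon as the user asks for a person.

```python
# A hypothetical escalation check for a customer-service bot.

ESCALATION_PHRASES = ("human", "agent", "representative", "real person")

def wants_a_human(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in ESCALATION_PHRASES)

def automated_reply(message: str) -> str:
    # Stand-in for whatever the bot would normally answer.
    return "Bot: Sorry, I can only answer shipping questions."

def handle_message(message: str) -> str:
    if wants_a_human(message):
        # In a real system this would enqueue the conversation for a live agent.
        return "Connecting you to a human representative..."
    return automated_reply(message)

print(handle_message("Where is my order?"))
print(handle_message("I want to talk to a real person."))
```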


The document goes on to discuss specific applications of these principles, and it’s worth a read. It’s a great start. It will require a lot of cooperation among government, industry, and communities to make it happen.