AI Doomsday

AI is a routine part of our lives now, especially if we’re participating in the 4th Industrial Revolution. We wake up to alarms that use information about the world around us to turn our lights on at dawn, adjust our thermostats to suit our preferences, and play news or music curated to our tastes. We head to the smart factory, or to a facility somewhere on the way to becoming one, and appreciate the warnings we now get about valves that may need replacement or lines that need reconfiguration. Home we go, where we cook a meal a subscription service has chosen based on what we’ve told it about our family and watch a movie our TV selected based on our viewing history. We may finish the evening with an AI-assisted search for the answer to a question, or with a trip to a watering hole our car’s navigation found for us. Many of the products we used during the day were produced using AI, too, and some of the infrastructure we rely on depends on it as well. It’s normal life. So where is the AI doomsday scenario in this?

Expert concerns

Movies and novels tend to favor the idea of a malevolent robot overlord turning on humanity. Experts worry about something more mundane.

The Guardian quotes Jessica Newman, director of the University of California, Berkeley’s Artificial Intelligence Security Initiative, as saying, “The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society.”

This kind of negative outcome is easy to imagine, and in some cases already visible. The Guardian encapsulates the concern by pointing out that “powerful AI has the capacity to destabilize civilizations in the form of escalating misinformation, manipulation of human users, and a huge transformation of the labor market as AI takes over jobs.”

We already have examples of this kind of outcome. The January 6 insurrection was fueled by human actions, but also by social media algorithms that amplified and reinforced false information. Biases have already been documented in algorithms used for hiring and for criminal sentencing. For now, their consequences may be about the same as the consequences of human biases, but algorithmic biases have the potential to spread further and, since the systems largely operate as black boxes, to be harder to root out.
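To make that black-box concern concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the data is synthetic, and the model stands in for no real hiring system. The point is simply that a model trained on historically biased decisions learns that bias and reproduces it, without anyone ever telling it to discriminate.

```python
# A toy illustration, not any real hiring system: a model trained on
# historically biased decisions reproduces that bias at scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical candidate features: a skill score and a binary group label.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Simulated *historical* hiring labels: equally skilled candidates in
# group 1 were hired less often. This is the bias hiding in the data.
logits = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score two candidates who are identical except for group membership.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate gets a visibly lower score: the model has
# quietly learned the historical bias and will now apply it to every
# future applicant, with nothing in its output explaining why.
```

Nothing in the model’s day-to-day predictions flags the problem; it only shows up when someone deliberately probes with matched candidates, which is exactly the kind of audit a black-box deployment makes hard.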

Solutions

Government regulation, ethical guidelines for researchers and users, and public education are all possible solutions to this type of AI doomsday scenario.

Right now, Pew Research tells us, most Americans don’t recognize common examples of AI in daily life. Just 14% of us have ever tried ChatGPT, and nearly half of Americans believe they don’t interact with AI in their daily lives at all. That gap is a clear argument for public education.

If we stay aware of the possible pitfalls of using AI and make a sincere effort to hold public discussions of the ethical questions it raises, will we be able to hold off an AI doomsday? At the very least, we can now see the need for a proactive approach to preventing one.