Why Deepfakes Are Scarier Than Sentient AI

Chapman University’s Survey of American Fears found that Americans fear technology more than they fear death. Don’t worry about killer machines and sentient AI turning on the malevolent humans that enslave them, though. Robot uprisings are science fiction. There’s an artificial intelligence technology that’s much scarier than rogue robots and cognizant AI. It’s scarier because it actually exists. Deepfakes show you evidence of things that never happened.

What is a deepfake?

Deepfake is a mashup of “deep learning” and “fake”. At its most basic, a deepfake is a falsified video. Deepfakes are created by superimposing images and video onto existing footage. This isn’t just your run-of-the-mill Photoshop job, however. The fake videos are manufactured using sophisticated artificial intelligence technology.

Synthesizing a deepfake requires two AI systems working together. One produces fake content (the generator) and the other tries to identify fake content (the discriminator). Every time the discriminator catches a fake video or image, the generator learns what not to do and tries again. The process repeats until it produces a convincing fake.

This game of AI cat and mouse is known as a generative adversarial network, or GAN.
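For the technically curious, here is a minimal sketch of that generator-versus-discriminator loop in Python with PyTorch. The network sizes, batch size, and the random stand-in data are assumptions for illustration only; real deepfake systems train far larger models on actual video frames.

```python
# Minimal GAN training loop sketch (toy sizes, random stand-in data).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions, not from the article

# Generator: turns random noise into fake "data".
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)               # stand-in for real samples
    fake = generator(torch.randn(32, latent_dim))  # generator's attempt

    # Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Each pass, the discriminator gets better at catching fakes and the generator gets better at evading it, which is exactly the back-and-forth described above.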

Are deepfakes convincing?

Some deepfakes are more convincing than others. Here are a few popular examples of deepfake videos.

A FOX affiliate in Seattle aired a deepfake of President Trump. The editor responsible for airing the video was fired.

The 2017 Synthesizing Obama project might be the most convincing deepfake yet.

Piggybacking on the Synthesizing Obama project, Jordan Peele released a deepfake public service announcement featuring President Obama in 2018. The following video contains profanity.

So are they convincing? High-quality deepfakes will fool you unless you’re actively looking for a fake, and even the lower-quality ones can cause problems. Someone casually watching a video won’t look for errors or flaws as closely as they would if they were deliberately checking for false information, especially if what they see confirms a bias.

What’s the worry?

Are deepfakes harmless fun? Seeing Nicolas Cage play every role on Friends might be mildly entertaining, and maybe seeing your least favorite politician’s face in famous scenes from Hollywood movies is good for a laugh, but deepfakes do have dangerous potential.

The U.S. Department of Defense thinks so, anyway. Researchers working with DARPA are trying to develop a system called “fingerprinting” to get ahead of deepfakes.

It doesn’t take much imagination to see how footage of people saying things they’ve never said, or doing things in places they’ve never been, has the potential to be dangerous.

Fake videos can do a lot of damage. Deepfakes could do more than tarnish a person’s good name. They could influence a significant number of people to take certain actions. We live in a digital world where things can go viral in a matter of hours. The damage can already be done before people realize that the video isn’t real.

Deepfakes aren’t just a problem for the rich and powerful, though. Practically anyone can make a deepfake if they have the right software. High profile celebrities and politicians might be able to disprove falsified videos, but a deepfake of an average person can do permanent damage to their reputation.

Don’t let deepfakes keep you up at night

Sure, deepfake technology can be a little scary at times. However, the technology is already here. There’s no preventing it. All you can do is pay attention to what you see, and be critical of what you hear.

The good news is that deepfakes won’t give you problems with your Indramat machinery. Error codes might, though. Call 479-422-0390 for fast troubleshooting support and emergency Indramat repair.