The internet has always been filled with misinformation, but at least it wasn’t hard to separate facts from fiction with a little effort. The rise of sophisticated Artificial Intelligence tools has changed this forever, making skepticism more important than ever.
The word “deepfake” covers a whole family of technologies that share the use of deep learning neural networks to achieve their individual goals. Deepfakes came to public attention when replacing someone’s face in a video became easy. Suddenly, anyone could replace an actor’s face with that of the US president, or manipulate the president’s mouth alone and make them say things that never happened.
For a while, a human would have to impersonate the voice, but deepfake technology can also replicate voices. Deepfake technology can now be used in real-time, which opens up the possibility that a hacker or other bad-faith actor could impersonate someone in a real-time video call or broadcast. Any video “evidence” you see on the internet must be treated as a potential deepfake until it’s verified.
AI Image Generation
AI image generators have caused a stir in the artist community because of the implications for those making a living as artists, and the question of whether commercial artists are in danger of being replaced. What’s causing far less debate is this technology’s potential for misinformation.
AI image-generation systems can produce photorealistic images from whole cloth using text-based prompts, and they can also manipulate existing images. For example, you can erase part of an original image and then use a technique known as “inpainting” to have the AI replace the erased portion with anything you’d like. It’s easy to generate images on your own PC with software like Stable Diffusion.
If you wanted to make it look like someone was holding a real gun instead of a toy one, that’s trivial for AI. Want to create a scandalous photo of a celebrity? AI image generation (and deepfakes for that matter) can be abused in just this way. You can even generate photorealistic faces of people who don’t exist.
AI Video Generation
AI image generation and deepfakes are just the beginning. Meta (the parent company of Facebook) has already demonstrated AI video generation, and although it can only produce a few seconds of footage at a time, we expect both the length of the clips and the amount of control users have over their contents to grow rapidly.
For now, it’s entirely possible to have an AI generate a grainy clip of Bigfoot or Nessie without anyone getting into a costume or flying to Scotland with a camera and a small wooden model. Video could be manipulated long before AI generation came along, but now no video you see can be taken at face value.
AI Chat Bots
When you hop on a chat with customer support, there’s a good chance you’re talking to a machine rather than a human being. AI technology (and traditional programming methods) is good enough for machines to hold sophisticated conversations with us, especially in a narrow domain such as arranging a warranty replacement or answering a technical question.
Voice recognition and synthesis are also at an advanced state, and if you watch demos for systems such as Google Duplex, you get a real sense of where we’re going with this. Once you unleash AI-powered bots onto social media platforms the potential for concerted misinformation campaigns with real-world consequences becomes high.
To be fair, social media platforms such as Twitter have always had a bot problem, but in general these bots were unsophisticated. It’s now conceivable that you could create a made-up person on social media that will fool just about anyone. They could even use other technologies in this list to create images, audio, and video to “prove” that they are real.
AI Writers
We go to the internet to learn about the world and what’s going on around the globe. Human writers (that’s us!) are a key source of that information, but AI writers are becoming good enough to produce work of similar quality.
Just as with AI artists, there’s a debate about whether such software will replace people who write for a living, but again there’s a misinformation angle that’s largely ignored.
If you can generate an original face, build a social media persona bot around it, and produce video and voice featuring your made-up person, it becomes possible to conjure up an entire publication overnight. Dubious “news” websites are already a source of convincing misinformation for many internet users, and AI technology like this could supercharge the problem.
The Problem of Detection
These technologies are a problem not only because they open new avenues for abuse, but also because the fakery can be difficult to detect. Deepfakes are already reaching the point where even experts have a hard time telling what’s fake and what isn’t. That’s why researchers are fighting fire with fire, using AI to detect generated or manipulated images by looking for telltale signs invisible to the human eye.
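To give a concrete (if simplified) sense of what those invisible telltale signs can be: some image generators leave periodic upsampling artifacts that show up as excess high-frequency energy in an image’s spectrum. The toy sketch below, written in NumPy, compares a smooth synthetic image against one with an artificial checkerboard pattern standing in for such artifacts. The cutoff value and the artifact pattern here are illustrative assumptions, not a real detector, which would be a trained model.

```python
import numpy as np

def high_freq_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    Toy frequency-domain heuristic: periodic generator artifacts
    concentrate energy at high spatial frequencies. The cutoff is
    an arbitrary illustrative choice.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each frequency bin from the center (DC).
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A smooth gradient image vs. the same image with a faint
# checkerboard overlay mimicking periodic upsampling artifacts.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
yy, xx = np.mgrid[0:64, 0:64]
artifacted = smooth + 0.2 * ((xx + yy) % 2)

print(high_freq_ratio(smooth))      # low
print(high_freq_ratio(artifacted))  # noticeably higher
```

Real detectors learn far subtler statistical fingerprints than this, but the principle is the same: look where the human eye doesn’t.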
This will work for a while, but it may also create an unintended AI arms race that ironically pushes fake-content generators to ever-higher fidelity. The only sane strategy for us as human beings is to treat anything we see on the internet as fake until proven otherwise, unless it comes from a verified source with transparent processes and policies. (Though we doubt your conspiracy-theorist uncle will believe that the UFO videos he keeps sending you aren’t real.)