Your social media feed is being taken over by AI video slop. There's one giveaway that can help you spot the fakes – does it look like it was filmed on a potato?
It's over. You're going to fall for it. You probably have already. In the last six months, AI video generators have got so good that our relationship with cameras is about to melt. Here's the best-case scenario: you'll get fooled, over and over again, until you're so fed up that you question every single thing you see. Welcome to the future.
But for now, there are still a few red flags to look out for. One stands out. If you see a video with bad picture quality – think grainy, blurry footage – alarm bells should go off in your head that you might be watching AI.
"It's one of the first things we look at," says Hany Farid, a computer-science professor at the University of California, Berkeley, a pioneer in the field of digital forensics and the founder of the deepfake detection company GetReal Security.
The sad truth is AI video tools will eventually get even better, and this advice will soon be useless. That could happen in months, or it could take years. Hard to say! Sorry. But if you swim around in the nuance with me for a minute, this tip could save you from some AI junk until you learn to change how you think about the truth.
Let's be clear. This isn't evidence. AI videos are not more likely to look bad. The best AI tools can deliver beautiful, polished clips. And low-quality clips aren't necessarily made by AI, either. "If you see something that's really low quality, that doesn't mean it's fake. It doesn't mean anything nefarious," says Matthew Stamm, a professor and head of the Multimedia and Information Security Lab at Drexel University in the US.
Instead, the point is that blurry, pixelated AI videos are the ones most likely to trick you, at least for now. It's a sign you may want to take a closer look at what you're watching.

AI still adds distortions to videos, but they're getting harder to spot. When a clip is low quality, you're more likely to miss red flags (Credit: Serenity Strull/Getty Images)
"The leading text-to-video generators like [Google's] Veo and OpenAI's Sora still produce small inconsistencies," Farid says. "But it's not six fingers or garbled text. It's more subtle than that."
Even today's most advanced models often introduce problems such as uncannily smooth skin textures, weird or shifting patterns in hair and clothing, or small background objects that move in impossible or unrealistic ways. It's all easy to miss, but the clearer the picture is, the more likely you are to see those tell-tale AI errors.
That's what makes lower-quality videos so seductive. When you ask the AI for something that looks like it was shot on an old phone or security camera, for example, it can hide the artefacts that might otherwise tip people off.
Over the past few months, a few high-profile AI videos have fooled huge numbers of people, and they all had something in common. A fake but delightful video of wild bunnies jumping on a trampoline racked up over 240 million views on TikTok. Millions of online romantics hit the like button on a clip of two people falling in love on the New York subway, only to be let down when it turned out to be a fake. I personally fell for a viral video of an American priest at a conservative church giving a surprisingly leftist sermon. "Billionaires are the only minority we should be scared of," he bellows in a southern accent. "They have the power to destroy this country!" I was stunned. Have our political boundaries really grown that blurry? Nope. Just more AI.
Every single one of these videos looked like it was shot on a potato. The AI bunnies? Presented as cheap security camera footage filmed at night. The subway couple? Pixelated. That imaginary preacher? The video looked like it was zoomed in just a bit too far. And it turns out those videos had other giveaways, too.
"The three things to look for are resolution, quality and length," Farid says. Length is the easiest. "For the most part, AI videos are very short, even shorter than the typical videos we see on TikTok or Instagram which are about 30 to 60 seconds. The vast majority of videos I get asked to verify are six, eight or 10 seconds long." That's because generating AI videos is expensive, so most tools max out with short clips. Plus, the longer a video is, the more likely the AI is to mess up. "You can stitch multiple AI videos together, but you'll notice a cut every eight seconds or so."
The other two factors, resolution and quality, are related but different. Resolution is the number of pixels in an image. Quality is largely a matter of compression: a process that shrinks a video file by throwing away detail, often leaving behind blocky patterns and blurred edges.
In fact, Farid says low-quality fakes are so compelling that the bad guys downgrade their work on purpose. "If I'm trying to fool people, what do I do? I generate my fake video, then I reduce the resolution so you can still see it, but you can't make out all the little details. And then I add compression that further obfuscates any possible artefacts," Farid says. "It's a common technique."
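To make the trick concrete, here's a minimal sketch in Python using the Pillow imaging library, applied to a single frame. The file names are placeholders, and a real pipeline would run the same two steps across a whole clip with a video encoder.

```python
# A rough sketch of the deliberate downgrade described above, applied to
# one frame with the Pillow imaging library. File names are placeholders.
from PIL import Image

frame = Image.open("ai_frame.png").convert("RGB")  # a pristine AI-generated frame

# Step 1: cut the resolution. Fine details that might give the game away
# (skin texture, hair patterns, odd background movement) get averaged out.
w, h = frame.size
small = frame.resize((w // 4, h // 4), Image.LANCZOS)

# Step 2: re-encode with heavy lossy compression. The encoder discards what
# detail remains, leaving the blocky, smeared look of a cheap camera.
small.save("degraded_frame.jpg", format="JPEG", quality=15)
```

Run over every frame, those two steps can turn a telltale AI clip into plausible "security camera" footage.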

Low-resolution images have fewer pixels, while compression adds other errors. Both hide artefacts that can make AI's work more obvious (Credit: Serenity Strull/Getty Images)
The trouble is that, as you read this, the tech giants are spending billions of dollars to make AI even more realistic. "I have some bad news to deliver. If those visual tells are here now, they won't be very soon," Stamm says. "I would anticipate that these visual cues are going to be gone from video within two years, at least the obvious ones, because they've pretty much evaporated from AI-generated images already. You just can't trust your eyes."
That doesn't mean the truth is a lost cause. When researchers like Farid and Stamm verify a piece of content, they have more advanced techniques at their disposal. "When you generate or modify a video, it leaves behind little statistical traces that our eyes can't see, like fingerprints at a crime scene," Stamm says. "We're seeing the emergence of techniques that can help look for and expose these fingerprints." The distribution of pixel values in a fake video might differ subtly from that of a real one, for example, though signals like these aren't foolproof.
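To give a flavour of what such a fingerprint might look like, here's a toy sketch in Python: it strips away the coarse content of a frame with a simple high-pass filter and summarises the fine noise left behind. Real forensic tools are far more sophisticated; this is an illustration of the idea, not a working detector, and the two "frames" below are synthetic stand-ins.

```python
# Toy illustration of a statistical "fingerprint": camera sensors leave
# fine-grained noise in every frame, and over-smoothed synthetic imagery
# tends to leave much less. This is a simplified sketch, not a detector.
import numpy as np

def noise_residual(gray):
    """High-pass filter: subtract a 3x3 local average to isolate fine noise."""
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return gray - blur

def residual_stats(gray):
    """Spread of the residual; cameras and generators tend to differ here."""
    r = noise_residual(gray.astype(np.float64))
    return float(r.std()), float(np.abs(r).mean())

rng = np.random.default_rng(0)
real = rng.normal(128, 12, (256, 256))   # noisy, like a camera sensor
fake = real - noise_residual(real)       # over-smoothed stand-in for a fake

print("real:", residual_stats(real))     # larger spread of fine noise
print("fake:", residual_stats(fake))     # noticeably smaller spread
```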
Technology companies are also working on new standards to verify digital information. Essentially, cameras could embed information in a file the moment they create an image to help prove that it's real. By the same token, AI tools could automatically add similar details to their videos and images to flag them as AI-generated. Stamm and others say these efforts could help.
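As a sketch of the core idea, here's a toy example in Python: a "camera" signs a hash of the file at the moment of capture, and any later change to the bytes makes the check fail. Real provenance standards use public-key certificates and signed metadata rather than the shared secret shown here; this only shows the shape of the scheme.

```python
# Toy sketch of capture-time provenance: sign the file's hash when it is
# created, then verify it later. A real system would use per-device
# public-key certificates, not a shared secret baked into the code.
import hashlib
import hmac

DEVICE_KEY = b"secret-baked-into-the-camera"  # hypothetical stand-in

def sign_at_capture(video_bytes):
    """What a camera could embed alongside the file at capture time."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(video_bytes, signature):
    """Any edit to the bytes after capture makes this check fail."""
    return hmac.compare_digest(sign_at_capture(video_bytes), signature)

clip = b"...raw video bytes..."
tag = sign_at_capture(clip)
print(verify(clip, tag))              # True: untouched since capture
print(verify(clip + b"tamper", tag))  # False: modified after capture
```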
The real solution, according to digital literacy expert Mike Caulfield, is for us all to start thinking differently about what we see online. Looking for the clues AI leaves behind isn't "durable" advice because those clues keep changing, he says. Instead, Caulfield says we have to abandon the idea that videos or images mean anything whatsoever out of context.
"My perspective is that largely video is going to become somewhat like text, long term, where provenance [the origin of the video], not surface features, will be most key, and we might as well prepare for that," Caulfield says.