2025 has seen an explosion of AI-generated slop

SCOTT DETROW, HOST:

2025 has been a big year for artificial intelligence, especially for short AI-generated videos that people are posting online. The kids call it AI slop, and NPR's Geoff Brumfiel and Shannon Bond have spent much of the year rolling around in that slop. They are here with the highlights and what it means. Hey, everybody.

SHANNON BOND, BYLINE: Hey, Scott.

GEOFF BRUMFIEL, BYLINE: Oink, oink.

DETROW: (Laughter) So we're going to talk about three fake videos today, all of which have been widely shared online. Shannon, let's start with you. What's our first slop entry?

BOND: Well, back in October - this was a Saturday. It was one of the days of those big No Kings protests - right? - against the Trump administration. President Trump posted this AI-generated video of him flying a fighter jet. The jet says King Trump on the side. He's wearing a crown. He flies over a city full of protesters and dumps what looks like poop all over them. You've probably seen it.

DETROW: Sure have.

(SOUNDBITE OF SONG, "DANGER ZONE")

KENNY LOGGINS: (Singing) Highway to the danger zone.

BOND: And the video is set to the song "Danger Zone" by Kenny Loggins. I should say, as an aside here, Loggins is not happy about his music being used in this video. He asked for it to be taken down back in October. It has not since been taken down. Now, this video is, of course, obviously fake, right? Like, I don't think anyone's watching it thinking Trump is really flying a fighter jet. And we have seen the president share AI videos and imagery before. He did it actually a lot during the 2024 campaign. He and his supporters seem to really love these kind of memes.

But since he's taken office this year, this kind of strategy, it's really something we've seen not just the president, but his whole administration embrace. The White House and the Department of Homeland Security, their social media accounts post these sort of meme videos and images often made with AI. And I think, you know, what this tells us, given we're heading into midterm elections next year, we should expect to see even more AI-generated political content all over our feeds.

DETROW: I mean, Shannon, this is so prevalent. You're seeing Trump and his allies return to this more and more. What has the White House said about their use of - you know, I feel like it's fair to say - these almost, like, propaganda videos that they're creating with AI?

BOND: Yeah, and it should be clear - it's not always clear that the White House itself or the White House staff are making these. In some cases, like this fighter jet video, we're seeing, you know, the president or administration accounts resharing content that's been made by other people online. You know, the White House doesn't tend to comment on these specific videos, but what they have said in the past in general about this social media approach - you know, they've said sort of things like, you know, the memes will continue. It's clearly a form of messaging I think they think resonates with their audience. And, you know, look, this is a very online administration. This is a very online president. And this is - you know, they're very much engaging in the language of what is online at the moment, and it's increasingly becoming AI.

DETROW: So that's the first trend we're talking about. Let's move on to video No. 2. It came from a little company. Nobody's really heard about it. We haven't talked about it that much. It's called OpenAI. Google it. So it actually (laughter) - it showed the company's CEO committing crimes. Geoff, what's going on here?

BRUMFIEL: Yeah, this second video came from an app OpenAI rolled out earlier this fall called Sora, and that app has made AI slop super easy to generate. One of the features of Sora is you can put other people's faces and voices into your video with their permission. One of the first people to grant permission was Sam Altman, OpenAI's CEO. He let people make videos with his likeness, and an OpenAI employee created this video of Altman in what appears to be a Target. It's surveillance video, and Altman seems to be shoplifting computer chips for his AI company.

(SOUNDBITE OF ARCHIVED RECORDING)

AI-GENERATED VOICE: (As Sam Altman) Please, I really need this for Sora inference. This video is too good.

BRUMFIEL: Now, this is an inside joke about AI's endless need for computing, but it's really notable for a couple of reasons. First, it shows that AI videos can now put real people into completely fake situations. You can make the CEO of a company commit fake crimes and make it look pretty real. But that's not the only fake stuff that Sora is capable of producing. So, you know, we've also seen news stories about Sora producing fake videos of people stuffing ballot boxes, fake local news interviews. And this is creating a lot of concerns, especially, you know, as Shannon just said, we're going into an election year next year, and Sora has basically lowered the bar for slop to zero.

DETROW: So that's two different extremes here. Geoff, you're talking about ways that these fake videos can get into the real news cycle very quickly. Shannon, you're talking about just totally farcical propaganda, for lack of another word. Let's move on to our final video. This is one maybe our listeners have seen. It's racked up, like, 200 million views on TikTok. Tell us about the bunnies, Shannon.

BOND: Yeah. So this video, it looks also pretty realistic. It looks like Ring camera footage of some very cute bunnies bouncing around on a backyard trampoline at night. And you can imagine, right? People post these kinds of videos...

DETROW: Yeah.

BOND: ...Right? - from their actual Ring cameras. And so when this was posted on TikTok this summer, a lot of people were fooled into thinking it was real. There was no watermark on the video itself disclosing that it was made with AI. TikTok has since put its own AI label on the post, but, you know, judging from the comments at the time, you know, lots and lots of people just totally thought it was real. And I think what's interesting about this one, you know, it's quite different than the other examples we've talked about. But this is the kind of, you know, mindless, cute engagement bait. It's animals, right? That's so prevalent on the internet.

DETROW: Yeah.

BOND: It's always been prevalent on the internet, right? And so, in some ways, it's not surprising that now we're seeing AI versions of this. But what strikes me is this is the kind of stuff I am seeing all over my social media feeds at this point. And whether or not they are, like, clearly labeled as AI, it really does start to blur the boundaries, and it makes people feel, I think, like this AI slop is inescapable if you are going to be online.

DETROW: And if it is inescapable, I'm just wondering, Geoff, is there anything we could do about that?

BRUMFIEL: Well, I mean, the first thing is, you know, until there's some sort of regulation and labeling, you're probably just going to have to accept, Scott, that you're going to be duped sooner or later. I mean, I think all of us, at this point, have seen videos that are AI. But that being said, there are some things to watch out for. AI videos tend to be very short because it takes a lot of computing to make them, and they often contain scenarios that if you take a second, you'll realize are kind of unrealistic. Like, all those bunnies aren't going to bounce on a trampoline all at once. A reverse image search can help, too, or searching a news story on the event you're seeing.

But interestingly, Scott, you know, one of the things researchers I spoke to about this say is they actually don't want people to become cynical and just assume everything is fake because when that happens, it makes it really hard to hold bad actors to account. You know, people can say, oh, that's just fake. I didn't really do that thing. And so we've got to try to cling to reality, even for the cute animal videos. You know that raccoon that passed out next to a toilet? I thought that was AI. But I did my homework, and I'm relieved - and it brought a little joy into my life - to see that a raccoon really can still get drunk in a liquor store in 2025. And that's a real thing.

DETROW: We can always count on raccoons to give us realistic, entertaining internet content, I think.

BRUMFIEL: Let's hope. Let's hope the raccoons aren't put out of work by all this AI image generation.

DETROW: (Laughter) That was NPR's Shannon Bond and Geoff Brumfiel deep, deep, deep into the slop. Thank you and my condolences to both of you.

BOND: Thanks, Scott.

DETROW: Thank you.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.
Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.