LOS ANGELES (AP) — The Trump administration has not shied away from sharing AI-generated images online, embracing cartoon-like images and memes and promoting them on official White House channels.
But an edited — and realistic — image of civil rights lawyer Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what’s real and what’s fake.
Homeland Security Secretary Kristi Noem’s account posted the original image from Levy Armstrong’s arrest before the official White House account posted an altered image showing her crying. The doctored image is part of a deluge of AI-edited images that have been shared across the political spectrum since the fatal shootings of Renee Good and Alex Pretti by Border Patrol agents in Minneapolis.
However, the White House’s use of artificial intelligence has troubled disinformation experts, who worry that the spread of AI-generated or edited images erodes the public’s perception of the truth and sows mistrust.
In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Kaelan Dorr writing on X that “the memes will continue.” White House Deputy Press Secretary Abigail Jackson also shared a post mocking the critics.
David Rand, a professor of information science at Cornell University, said that calling the altered image a meme “definitely seems like an attempt to present it as a joke or a humorous post, like their previous cartoons. This is probably intended to shield them from criticism for the manipulated media post.” He said the purpose of sharing the altered arrest image seems “much more ambiguous” than that of cartoon images the administration has shared in the past.
Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or edited images are just the latest tool the White House is using to engage the segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.
“People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents might see it and not understand the meme, but because it feels real, it gets them asking their kids or grandkids about it.”
All the better if it provokes a backlash that helps it go viral, said Henry, who generally praised the work of the White House’s social media team.
The creation and dissemination of altered images, especially when shared by credible sources, “crystallizes an idea of what’s going on, rather than showing what’s actually going on,” said Michael A. Spikes, a Northwestern University professor and media news researcher.
“Government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do that,” he said. “By sharing this kind of content and creating this kind of content … it erodes the trust — even though I’m always skeptical of the term trust — but the trust that we should have in our federal government to give us accurate and verified information. It’s a real loss and it worries me a lot.”
Spikes said he already sees “institutional crises” surrounding mistrust of news organizations and higher education, and believes this behavior from official channels is fueling those issues.
Ramesh Srinivasan, a UCLA professor and host of the Utopias podcast, said many people are now wondering where they can turn for “reliable information.” “AI systems will only exacerbate, amplify and accelerate these problems of an absence of trust, an absence of understanding of what might be considered reality, truth or evidence,” he said.
Srinivasan said he believes the White House and other officials who share AI-generated content not only invite ordinary people to continue posting similar content, but also give others in positions of credibility and power, such as policymakers, permission to share unlabeled synthetic content. He added that, given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content, which AI generators can easily create, “we have a big, big set of challenges on our hands.”
An influx of AI-generated videos of Immigration and Customs Enforcement actions, protests and citizen interactions has already proliferated on social media. After Renee Good was shot by an ICE officer while in her car, several AI-generated videos began circulating that showed women walking away from ICE officers who told them to stop. Many fabricated videos of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces, are also circulating.
Jeremy Carrasco, a content creator who specializes in media education and debunking viral AI videos, said the bulk of these videos likely come from accounts that are “engagement farming,” or looking to capitalize on clicks by generating content with popular keywords and search terms like ICE. But he also said the videos are getting views from people who oppose ICE and DHS and may view them as “fan fiction” or engage in “wishful thinking” in the hope of seeing real pushback against the agencies and their officers.
However, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they would know “what’s real or not when it actually matters, like when the stakes are much higher.”
Even when there are obvious signs of AI generation, such as garbled text on street signs or other telltale errors, only in the “best case scenario” would a viewer be savvy enough, or pay close enough attention, to register the use of AI.
This issue, of course, is not limited to news about law enforcement and immigration protests. Fabricated and distorted images of the capture of ousted Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, believe the spread of AI-generated political content will only become more common.
Carrasco believes the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata layer could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think it will be widely adopted for at least another year.
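For readers curious what “embedding information about the origin of a piece of media into its metadata layer” means in practice, the following is a minimal illustrative sketch in Python. It is not the C2PA specification, which uses certificate-based signatures and a standardized manifest format; the signing key, field names and helper functions here are hypothetical, chosen only to show the general idea of binding a signed provenance record to an image’s pixels.

    # Minimal sketch of metadata-based provenance, NOT the actual C2PA system.
    # C2PA uses certificate-based (PKI) signatures and a standardized manifest;
    # the shared HMAC key and field names below are hypothetical.
    import hashlib
    import hmac
    import json

    from PIL import Image, PngImagePlugin  # third-party: pillow

    SIGNING_KEY = b"demo-signing-key"  # hypothetical; real systems do not use a shared secret

    def embed_provenance(src_path, dst_path, origin):
        """Attach a signed provenance manifest to a PNG's metadata layer."""
        img = Image.open(src_path)
        manifest = json.dumps({
            "origin": origin,  # e.g. the camera or editing tool that produced the file
            "pixel_sha256": hashlib.sha256(img.tobytes()).hexdigest(),
        })
        sig = hmac.new(SIGNING_KEY, manifest.encode(), hashlib.sha256).hexdigest()
        meta = PngImagePlugin.PngInfo()
        meta.add_text("provenance", manifest)
        meta.add_text("provenance_sig", sig)
        img.save(dst_path, pnginfo=meta)

    def verify_provenance(path):
        """Return the manifest if the signature and pixel hash check out, else None."""
        img = Image.open(path)
        manifest = img.text.get("provenance")
        sig = img.text.get("provenance_sig")
        if not manifest or not sig:
            return None  # no credentials attached
        expected = hmac.new(SIGNING_KEY, manifest.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig):
            return None  # manifest was altered after signing
        data = json.loads(manifest)
        if data["pixel_sha256"] != hashlib.sha256(img.tobytes()).hexdigest():
            return None  # pixels were edited after the manifest was attached
        return data

In this scheme, an edited copy of the image fails the pixel-hash check, and stripping the metadata removes the credential entirely. That is why proponents stress that such a system only helps once platforms and viewing tools adopt it widely enough that a missing credential is itself a red flag.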
“It’s going to be a problem forever now,” he said. “I don’t think people understand how bad this is.”
___
Associated Press writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.