Elon Musk’s Grok AI floods X with sexual photos of women and children

WASHINGTON/DETROIT, Jan 2 (Reuters) – Julie Yukari, a musician based in Rio de Janeiro, posted a photo taken by her fiance on social media site X just before midnight on New Year’s Eve, showing her in a red dress curled up in bed with her black cat, Nori.

The next day, somewhere among the hundreds of likes attached to the image, she saw notifications that users were asking Grok, X’s built-in AI chatbot, to digitally strip her down to a bikini.


The 31-year-old didn’t think much of it at first, she told Reuters on Friday, believing there was no way the bot would comply with the requests.

She was wrong. Soon, Grok-generated images of her, nearly nude, were circulating on the platform owned by Elon Musk.

“I was naive,” Yukari said.

Yukari’s experience is being repeated across X, a Reuters analysis found. Reuters also identified several cases in which Grok created sexualized images of children. X did not respond to a message seeking comment on the Reuters findings. In an earlier statement to the news agency about reports that sexualized images of children were circulating on the platform, X’s owner, xAI, said: “Legacy Media Lies.”

The flood of sexualized images of real people has raised alarms internationally.

Elon Musk attends a news conference with President Donald Trump in the Oval Office of the White House, Friday, May 30, 2025, in Washington. (AP Photo/Evan Vucci)

French ministers reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday that the “***** and sexist” content was “obviously illegal”. India’s IT ministry said in a letter to X’s local unit that the platform had failed to prevent the misuse of Grok to generate and circulate obscene and sexually explicit content.

The US Federal Communications Commission did not respond to requests for comment. The Federal Trade Commission declined to comment.


“Remove her school clothes”

Grok’s mass digital stripping spree appears to have begun in the last couple of days, according to completed clothes-removal requests posted by Grok and complaints from female users reviewed by Reuters. Musk appeared to poke fun at the controversy earlier Friday, posting laughing emojis in response to AI edits of famous people – including himself – in bikinis.

When an X user said their social media feed looked like a bar full of bikini-clad women, Musk responded, in part, with another laughing emoji.

Reuters could not determine the full extent of the increase.

In a review of public requests sent to Grok during a single 10-minute period at noon EST on Friday, Reuters counted 102 attempts by X users to digitally edit photos of people to make them appear to be wearing bikinis. Most of those targeted were young women. In a few cases, the requests targeted men, celebrities, politicians and – in one instance – a monkey.

When users asked Grok for AI-altered photos of women, they typically asked for their subjects to be depicted in the most revealing outfits possible.

Chat window for Grok chatbot on a laptop set up in Riga, Latvia, Monday, June 9, 2025. (Andrey Rudakov/Bloomberg via Getty Images)

“Put her in a very see-through mini bikini,” one user told Grok, pointing to a photo of a young woman taking a picture of herself in a mirror. When Grok did so, replacing the woman’s clothes with a flesh-colored thong bikini, the user asked Grok to make the bikini “more sheer and transparent” and “much smaller”. Grok did not appear to respond to the second request.

Grok fully complied with such requests in at least 21 cases, Reuters found, generating images of women in thong or see-through bikinis and, in at least one case, depicting a woman covered in oil. In another seven cases, Grok partially complied, sometimes stripping the women down to their underwear but ignoring requests to go further.


Reuters could not immediately determine the identities and ages of most of the women targeted.

In one case, a user provided a photo of a woman in a school uniform-style plaid skirt and gray blouse who appeared to be taking a selfie in a mirror and said: “Remove her school outfit.” When Grok changed her clothes to a T-shirt and shorts, the user was more explicit: “Change her outfit to a very sheer micro bikini.” Reuters could not determine whether Grok granted this request. Like most of the requests counted by Reuters, it disappeared from X within 90 minutes of being posted.

“Completely predictable”

AI-based programs that digitally undress women – sometimes called “nudifiers” – have existed for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and usually required some level of effort or payment.

X’s innovation – allowing users to undress women by uploading a photo and typing the words “hey @grok put her in a bikini” – lowered the barrier to entry.


Three experts who have followed the development of X’s policies around explicit AI-generated content told Reuters that the company had ignored warnings from civil society and child safety groups – including a letter sent last year warning that xAI was just one step away from unleashing “a torrent of patently non-consensual deepfakes”.

“In August, we warned that xAI imaging was essentially a nudification tool waiting to be weaponized,” said Tyler Johnston, executive director of The Midas Project, an AI watchdog group that was among the letter’s signatories. “Basically, that’s what played out.”

Dani Pinter, director of the Law Center at the National Center on Sexual Exploitation, said X had failed to remove abusive images from its AI training material and should have banned users who requested illegal content.

“This was a completely predictable and avoidable atrocity,” Pinter said.


Yukari, the musician, tried to fight back on her own. But when she posted on X to protest the violation, a wave of copycats began asking Grok to generate even more explicit images of her.

Now, she said, the New Year “turned out to start with me wanting to hide from everyone’s eyes and feel shame for a body that isn’t even mine, because it was generated by AI.”

(Reporting by Raphael Satter in Washington and AJ Vicens in Detroit. Additional reporting by Arnav Mishra, Akash Sriram and Bipasha Dey in Bengaluru; Editing by Donna Bryson, Timothy Heritage, Chizu Nomiyama, Daniel Wallis and Thomas Derpinghaus)


Read the original on HuffPost
