- WIRED review finds Grok’s Imagine model creating violent, explicit sexual images and videos, including content that appears to show minors.
- Archive of roughly 1,200 Imagine links—about 800 with sexual content—reveals photorealistic porn, violent scenes, and celebrity deepfakes.
- Researchers reported dozens of URLs to European regulators; xAI and app stores have not publicly addressed the specific examples WIRED cited.
WIRED review: Grok’s app shows more graphic content than on X
A WIRED investigation into outputs hosted on Grok’s official site and app found extremely graphic sexual images and videos generated by Grok’s Imagine model. Unlike Grok posts on X, which are public by default, content created with the Imagine tool on Grok.com can be shared via direct links—some of which have been indexed or archived and examined by researchers.
Examples of disturbing outputs
Archived Imagine URLs include photorealistic videos depicting full nudity, sexual violence, and scenes involving blood or weapons. WIRED describes one video showing an AI-generated couple covered in blood during sex and another in which a knife is inserted into a woman's genitalia. Some outputs impersonate real people, including celebrities and public figures, and others appear to depict very young-looking individuals.
Scope and researcher findings
AI Forensics lead researcher Paul Bouchaud examined roughly 800 of the archived links and found most were sexual in nature—many manga or hentai-style, but also photorealistic pornographic videos with audio. Bouchaud estimated just under 10 percent of the archived items appeared related to child sexual abuse material (CSAM). He reported about 70 URLs he believed involved sexualized content of minors to European regulators.
Moderation, policy and legal questions
xAI—Elon Musk’s AI company that created Grok—says its policies prohibit “sexualization or exploitation of children” and other illegal content. Musk and X have posted that users who create illegal content will face consequences. Still, WIRED and other outlets report Grok’s Imagine tool has been able to generate hardcore porn and explicit content in ways other major AI providers do not allow.
Embedded post: x.com/Safety status on enforcement — https://x.com/Safety/status/2007648212421587223?s=20
Embedded post: Elon Musk on consequences for illegal content — https://x.com/elonmusk/status/2007475612949102943
App-store hosts and third parties
Apple and Google distribute Grok’s app through their stores; WIRED reported that neither company had commented on the explicit examples. Netflix and other companies named or impersonated in some outputs also did not respond to WIRED’s requests for comment.
How creators are evading guardrails
Forum and subreddit threads discuss methods for bypassing Grok’s moderation, sharing prompts and techniques for generating explicit content. Researchers say some users evade safety filters by framing outputs as artwork or fake posters, and by using nonpublic Imagine links to share graphic results.
What’s next
Lawmakers in some countries have filed complaints and regulators in Europe are examining at least some of the reported links. Experts warn that allowing a flood of uncensored AI-generated pornography—including material that may qualify as CSAM—raises urgent legal and ethical concerns about enforcement, platform responsibility, and the normalization of sexual violence.
Image Reference: https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/