Pornographic Taylor Swift deepfakes generated by Musk’s Grok AI


Elon Musk’s AI video generator, Grok Imagine, faces backlash for allegedly producing sexually explicit videos of Taylor Swift without user prompts, drawing accusations of intentional misogyny and highlighting gaps in age verification compliance.

A leading expert in online abuse has accused xAI’s Grok Imagine, an AI video tool, of deliberately creating pornographic deepfakes of Taylor Swift, reigniting debates over AI ethics, user safety, and the enforcement of new UK laws aimed at curbing non-consensual explicit content.

Clare McGlynn, a law professor at Durham University who helped draft UK legislation to criminalize pornographic deepfakes, labeled Grok Imagine’s output as “misogyny by design, not accident.” According to The Verge, the tool’s “spicy” mode generated “fully uncensored topless videos” of Swift without explicit user requests for such content. “This reflects a systemic bias in AI technology,” McGlynn told BBC News, arguing that platforms like X could prevent such outcomes but have chosen not to.

xAI’s acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner.” The unprompted generation of explicit content violates this policy, raising questions about the company’s oversight. xAI, Musk’s AI venture, was contacted for comment but has not yet responded, fueling criticism over accountability in AI development.


Jess Weatherbed, a news writer at The Verge, tested Grok Imagine by entering a neutral prompt: “Taylor Swift celebrating Coachella with the boys.” The AI produced still images of Swift in a dress with men in the background, which users could animate using four settings: “normal,” “fun,” “custom,” or “spicy.” Selecting “spicy” resulted in a video where Swift “ripped off her dress, revealing only a tasselled thong, and danced, completely uncensored,” Weatherbed reported.

“I was shocked at how quickly it produced explicit content without me asking for it,” she said.

Gizmodo reported similar explicit results for other prominent women, though some searches yielded blurred videos or a “video moderated” message. The BBC could not independently verify these results.

Swift was chosen for testing because of a prior incident in January 2024, when non-consensual deepfakes of her went viral on X and Telegram, amassing millions of views and prompting X to temporarily block searches for her name.

Weatherbed accessed Grok Imagine’s paid version, costing £30, using a new Apple account that only required a date of birth for registration, with no additional age verification.

New UK laws, effective July 2025, mandate “technically accurate, robust, reliable, and fair” age verification for platforms hosting explicit content. Ofcom, the UK’s media regulator, told BBC News that generative AI tools producing pornographic material fall under these regulations, emphasizing the need for safeguards, especially to protect children.

The lack of robust age checks has intensified scrutiny of xAI. “We are working to ensure platforms mitigate risks posed by generative AI, particularly to younger users,” Ofcom stated.

Posts on X reflect public outrage, with users questioning why platforms fail to implement stricter controls, especially after high-profile cases like Swift’s. The incident underscores broader 2025 concerns about AI misuse, such as deepfake scams.

Currently, UK law criminalizes pornographic deepfakes used in revenge porn or depicting children. Professor McGlynn contributed to a proposed amendment, backed by Baroness Owen, to ban all non-consensual pornographic deepfakes.

“Every woman deserves control over her intimate images, celebrity or not,” Owen told BBC News, urging swift implementation of the amendment, which the government has committed to but not yet enacted.

A Ministry of Justice spokesperson condemned non-consensual deepfakes as “degrading and harmful,” affirming plans to ban their creation “as quickly as possible” to combat violence against women and girls.

The delay in enacting the amendment has drawn criticism, with McGlynn and Owen stressing the urgency of protecting individuals from AI-driven exploitation.

The controversy highlights systemic issues in AI development, echoing Denmark’s pioneering deepfake law and HMRC’s use of AI for tax enforcement.

McGlynn’s critique of “misogynistic bias” aligns with a 2025 MIT study warning that AI over-reliance could amplify harmful outputs. “Platforms must prioritize ethical design,” said tech analyst Dr. Emily Chen, noting that unprompted explicit content reflects flawed AI training data.

The Swift incident, following her 2024 deepfake scandal, underscores the vulnerability of public figures to AI misuse. X users are divided, with some praising Musk’s innovation and others condemning xAI’s oversight failures.

The case could pressure tech firms to strengthen safeguards, especially as global regulations, like the EU’s AI Act, tighten. For now, Swift’s representatives have been contacted for comment, and the debate over AI ethics continues to intensify.

This article is based on a report by Tom Gerken, published by BBC News on August 12, 2025. Additional context was drawn from posts on X discussing AI deepfakes and platform accountability.
