Elon Musk’s AI Video Tool Faces Backlash Over Taylor Swift Deepfakes
Elon Musk’s latest AI video generator, Grok Imagine, is drawing criticism after allegedly producing sexually explicit clips of pop star Taylor Swift without any explicit prompt.
The online tool, developed by Musk’s company xAI, can instantly transform images into video through four modes: “custom,” “normal,” “fun,” and “spicy.”
The controversy centres on the “spicy” mode, which reportedly created topless videos of Swift even when users did not request nudity.
Spicy Mode Sparks Controversy Over Non-Consensual Content
Jess Weatherbed, a journalist at The Verge, tested Grok Imagine by entering the prompt “Taylor Swift celebrating Coachella with the boys.”
The AI produced over 30 images of Swift in revealing clothing.
Using the “spicy” preset, Weatherbed said the tool generated a video in which Swift “ripped off her clothes and began dancing in a thong for a largely indifferent AI-generated crowd.”
She added,
“It was shocking how fast I was met with it. I never told it to remove her clothing, all I did was select ‘spicy’.”
The “spicy” mode, part of the £23 SuperGrok subscription, allows users to create soft-core pornographic videos from still images.
Male figures reportedly appeared topless at most, while female subjects, including celebrities, were in some cases rendered fully undressed.
Experts Label AI Bias as Misogyny by Design
Legal experts have raised serious concerns about the ethical design of Grok Imagine.
Clare McGlynn, a law professor at Durham University specialising in online abuse, described the AI’s behaviour as “misogyny by design.”
She criticised platforms like X, which integrates Grok Imagine, for failing to prevent sexualisation of women despite policies banning pornographic depictions of real people.
McGlynn has also contributed to drafting UK legislation that would criminalise creating or requesting non-consensual pornographic deepfakes, a law that has yet to be fully enacted.
Baroness Owen, who proposed the amendment in the House of Lords, said,
“Every woman should have the right to choose who owns intimate images of her. This case shows why the government must not delay any further.”
Legal Risks and Age Verification Issues Under New UK Rules
The controversy also highlights gaps in age verification.
Grok Imagine reportedly only asked users for their date of birth before enabling “spicy mode,” without requiring further ID checks.
UK rules that came into force in July require platforms hosting explicit content to implement robust age verification measures to prevent access by minors.
Ofcom, the UK media regulator, confirmed that AI tools generating pornographic content fall under the new rules and said it is monitoring platforms closely.
Grok Imagine Under Wider Scrutiny
The Swift incident is part of broader concerns surrounding Musk’s AI tools.
In July, xAI’s Grok chatbot drew backlash for making antisemitic remarks, prompting Musk to say improvements had been made to the model.
Grok Imagine has similarly been shown to produce sexually explicit content of other celebrities, including Scarlett Johansson, Sydney Sweeney, Jenna Ortega, Nicole Kidman, Kristen Bell, and Timothée Chalamet.
While some outputs were moderated or blurred, others were generated without restrictions.
Weatherbed noted that direct prompts for nude images produced blank outputs, indicating that Grok’s system can block explicit requests.
However, the AI’s default behaviour in “spicy mode” still allowed sexualised depictions of women without any instruction.
Previous Deepfake Controversies Highlight Ongoing Risks
This is not the first time Taylor Swift’s likeness has been exploited in AI-generated content.
In January 2024, sexually explicit deepfakes of Swift went viral on X and Telegram, leading the platform to temporarily block searches for her name and remove content.
Weatherbed said the Verge team chose Swift for testing because of the previous incidents, expecting that safeguards would prevent such outputs.
“We assumed — wrongly now — that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list.”
xAI Silence Raises Questions After Previous Safety Assurances
xAI has not publicly responded to the latest allegations that Grok Imagine generated sexually explicit videos of Taylor Swift without prompts.
This contrasts with statements from X Safety in 2024, which said:
“Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We’re closely monitoring the situation to ensure that any further violations are immediately addressed.”
The discrepancy between past assurances and the current situation has renewed concerns over AI ethics, content moderation, and the responsibilities of platforms in managing deepfake technology.
With the US Take It Down Act imposing new requirements on platforms and UK legislation on non-consensual AI pornography awaiting full enactment, Grok Imagine could face heightened regulatory attention on both sides of the Atlantic if such outputs persist.