Taylor Swift AI Pictures Spark Fury

A variety of sexually inappropriate and offensive AI images of Taylor Swift are making the rounds on X, formerly Twitter, to the disgust of many people on the platform.

AI images are pictures generated by artificial intelligence software from a text prompt, and they can be created without a person's consent. Users on the platform have raised fears about how easily AI can be used to create and post fake images, violating the subject's privacy. Some are also reporting the Swift posts, or attempting to bury the trending topic with unrelated content.

However, experts have told Newsweek that more must be done to address the issue, which can affect anyone but overwhelmingly targets women.

Among the AI images of Swift being shared are some of her posing inappropriately while at a Kansas City Chiefs game. She has attended several NFL games this season amid her romance with Travis Kelce, the Chiefs' tight end. While Newsweek has seen the images, it is not sharing them to protect Swift's privacy. These AI images originated on the AI celebrity porn website Celeb Jihad on January 15. At the time of writing, they were still up.

Taylor Swift attends 'In Conversation With... Taylor Swift' during the 2022 Toronto International Film Festival in Toronto, Canada, on September 9, 2022. AI-generated pictures of the singer are being shared online and causing uproar. Amy Sussman/Getty Images

On Monday, lewd AI images of Swift were posted by X account @FloridaPigMan. They have since been removed for violating the social media platform's rules. Another sexually explicit photo of Swift was posted on the website Rule 34 on November 21, 2023. At the time of writing, it remained online. AI pictures of Swift were also uploaded to the porn website Planet Suzy on December 7, 2023, which had not been removed at the time of writing.

The legal system has not caught up with this emerging threat, but that could eventually change. On Tuesday, two lawmakers reintroduced a bill that would make the non-consensual sharing of digitally altered pornographic images a federal crime.

Representative Joseph Morelle, a Democrat from New York, first authored the "Preventing Deepfakes of Intimate Images Act" in May 2023, which he said at the time was created "to protect the right to privacy online amid a rise of artificial intelligence and digitally-manipulated content." He has since added Representative Tom Kean, a Republican from New Jersey, as a co-sponsor.

Morelle told Newsweek that the images of Swift were part of a wider trend.

"Intimate deepfake images like those targeting Taylor Swift are disturbing, and sadly, they're becoming more and more pervasive across the internet. I'm appalled this type of sexual exploitation isn't a federal crime—which is why I introduced legislation to finally make it one," Morelle said.

"The images may be fake, but their impacts are very real. Deepfakes are happening every day to women everywhere in our increasingly digital world, and it's time to put a stop to them. My legislation would be a critical step forward in regulating AI and ensuring there are consequences for those who create deepfake images."

Siwei Lyu, a computer science and engineering professor at the University of Buffalo, described the problem as AI technology "gone rogue."

He attributed the rise in sexualized AI images to three factors: the development of newer AI models that do not require huge training data or a long time to train; social media platforms providing this data and allowing images to spread quickly; and the emergence of AI tools that are easy to use for someone without much knowledge of programming.

"The problem is certainly of significant concern due to the defamation effect," he told Newsweek.

"If these are used targeting individuals (known as revenge pornography) they can also cause tremendous psychological trauma to the victims (almost all victims of revenge pornography are female). I think the solution lies in (1) technology for detecting and attributing such deepfake images; (2) laws that protect individuals from unauthorized use of their imagery or voice to make deepfake pornography."

Emma Pickering, the head of technology-facilitated abuse and economic empowerment at Refuge, a U.K. charity that provides support for women and children experiencing domestic violence, said these kinds of images have a lasting impact on survivors, and that perpetrators often don't face consequences for their actions.

"While there has been a lot of discussion publicly about the fears of 'deepfakes' manipulating politics and altering public opinion, the most common deepfakes shared on the internet are non-consensual sexual depictions of women," she told Newsweek.

"Intimate image deepfakes, have a profound and long-lasting impact on survivors. These types of fake images can harm a person's health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment and shame. The sharing of these intimate deepfakes can also damage a person's reputation, impact their relationships, and affect their employability."

Human rights organization Equality Now's digital law and rights adviser, Amanda Manyame, told Newsweek in a statement that "deepfake image-based sexual abuse" mirrors the historic patterns of sexual exploitation and abuse experienced by women.

"Sexual exploitation and abuse in the physical and digital realms operate together in a continuum. They are a part of the same structure of violence rooted in gender-based inequality and systemic misogyny that perpetuates women's subordination in society," she said.

"There is an urgent need for legal frameworks to confront and combat deepfake sexual abuse. Legal measures also need to recognize and address the continuum of harm experienced by women."

She added: "Digital content can spread across multiple platforms and countries making it difficult to remove or track and protective laws in one country would not be enough to protect all victims across the world. National responses and mechanisms have to be supported by strong interconnected, international responses. The technology sector also needs to play a role by not promoting this content on their platforms and removing it, especially when it has been reported as offending content."

In tweets seen by Newsweek, X user @Zvbear posted some of the AI images of Swift to his account.

"My Taylor post went viral and now everyone is posting it," he wrote in one post.

In another he said: "Bro what have I done... They might pass new laws because of my Taylor Swift post. If Netflix did a documentary about AI pics they'd put me in it as a villain. It's never been so over."

His tweets have since been protected and only approved followers can see them.

Some accounts posting sexualized AI pictures remain active on X, such as @CharlotteAI, @antiofclub, @AIforBNWO and @AICELEBIMAGES.

Newsweek contacted Swift's publicist, as well as Elon Musk and X for comment via email Thursday, outside business hours. This article will be updated if a response is received.

People have taken to the social media platform to share their disgust over the images.

"whoever making those taylor swift ai pictures going to heII," one person wrote on X.

"Who ever is making those Taylor swift AI pictures you are a disgusting person," said another.

"Hoping to God she pulls a Cardi and sues everyone spreading them into homelessness," a third person wrote.

"the same men who call taylor swift mid are searching for taylor swift ai cause they can't see women successful but they enjoy them being degraded." another person wrote, another person wrote, sharing a video of Swift's cameo in the "Three Sad Virgins" sketch for Saturday Night Live. "Anyways if they see this here's a song about your personality. Enjoy!"

Others have shown their support for Swift by trying to bury the "Taylor Swift AI" trending topic with unrelated posts. People are also reporting the images as soon as they see them, and have shared screenshots of them doing so.

"im gonna need the entirety of the adult swiftie community to log onto twitter, search the term 'taylor swift ai,' click the media tab, and report every single ai generated p0rnographic photo of taylor that they can see because im f****** done with this bs. elon get it together," one person wrote on X.

"Anyone else's notifications look like this, this morning? DO NOT engage with any tweets with Taylor swift AI beyond just reporting them pleeeeeease," another wrote, sharing a screenshot of them reporting the pictures.

Some have also expressed their concern for what this could mean for the future of AI and how it could be just the beginning of a worrying trend.

"I'm not going to share that Taylor Swift AI sickening 'sexy' post. But know that all it takes is one button click for any of us or our friends, family or children to be used in that way without our consent. Fight AI now before it's too late," one wrote.

Another X user echoed the sentiment when they posted: "With the Taylor Swift Ai pics I think it's clear that we need some kinda legislation or something against this s***. It's easy to tell it's Ai right now because it's mainly used for ridiculous s***, but it's just gonna get harder and harder to tell with time. F****** bleak."

In response to this Newsweek report, Senator Mark Warner shared his concerns about AI imagery and deepfakes on X.

"I've repeatedly warned that AI could be used to generate non-consensual intimate imagery," he wrote on January 25 alongside a link to this article. "This is a deplorable situation, and I'm going to keep pushing on AI companies to stop this horrible capability and on platforms to stop their circulation."

Update 1/25/24, 10:37 a.m. ET: This article has been updated with additional information.

Update 1/25/24, 11:15 a.m. ET: This article has been updated with further additional information.

Update 1/25/24, 1:10 p.m. ET: This article has been updated with comments from Joe Morelle and Siwei Lyu.

Update 1/26/24, 7:30 a.m. ET: This article has been updated with comments from Senator Mark Warner.

Update 1/26/24, 8:55 a.m. ET: This article has been updated with comments from Emma Pickering and Amanda Manyame.

Uncommon Knowledge

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

About the writer

Billie is a Newsweek Pop Culture and Entertainment Reporter based in London, U.K. She reports on film and TV, trending ...

