This article was originally published on November 30, 2021. Updated on February 6, 2024.
Taylor Swift is the latest high-profile woman to be the target of AI-generated pornography.
Falsified sexually explicit images of the singer-songwriter spread on X on January 25, racking up 47 million views before being removed a day later. The images likely originated from a Telegram channel that produces similar content.
Swifties soon flooded social media platforms, mass-reporting the imagery and posting “Protect Taylor Swift” all over the internet to drown out the searches.
In response, US senators have introduced a bill, the Disrupt Explicit Forged Images and Nonconsensual Edits (DEFIANCE) Act, which would let victims sue over the distribution of digitally manipulated explicit images.
“[The DEFIANCE Act] would add a civil right of action for intimate ‘digital forgeries’ depicting an identifiable person without their consent, letting victims collect financial damages from anyone who ‘knowingly produced or possessed’ the image with the intent to spread it,” senior tech and policy editor Adi Robertson explains for The Verge.
The most viewed and shared of the explicit images showed her nude in a football stadium.
Many viewers interpreted this as a reference to her boyfriend, Travis Kelce, who plays for the Kansas City Chiefs. Jill Filipovic, writing for The Guardian, highlights the surge in hate directed at Taylor since she made the relationship public.
She theorises this hostility stems from Taylor breaking out of the box she seemed likely to fit into: she’s a white, blonde, blue-eyed country singer who spent much of her early years in Tennessee and is now dating an American football star.
But the singer-songwriter is also a wildly successful 34-year-old unmarried woman with no children who (occasionally) speaks out about feminism and has publicly backed Democrats. In Filipovic’s words, Taylor is simply someone that many aim to “humiliate, degrade and punish.”
As legislation lags behind, Taylor is just one of the many women, not just famous women, who have been threatened and victimised by AI, and specifically by deepfake pornography.
Deepfakes are a form of ‘synthetic media’
Deepfake technology allows internet users to replace the “likeness of one person with another in video and other digital media.”
This type of AI rose to viral fame on the r/deepfakes subreddit. In 2017, users began sharing edited pornographic videos featuring celebrities. Although Reddit has since banned r/deepfakes and other related subreddits, this has not stopped other online communities from sharing and creating “fake celebrity porn.”
According to a study by AI firm Deeptrace Technologies, approximately 15,000 deepfake videos had gone live by September 2019. Perhaps more concerning was the fact that “96% (of these videos) were pornographic and 99% of those mapped faces from female celebrities onto porn stars.”
Deeptrace also found that among the top four websites devoted to deepfake pornography, the first platform established had garnered almost 135 million video views between 2018 and 2019. According to independent researcher Genevieve Oh, more than 143,000 new deepfake videos were created in 2023, marking the highest year yet.
Given that the creation and viewership of such content continue to skyrocket, there is clearly an alarming demand for nonconsensual pornography.
Taylor is not the first, nor will she be the last, woman to suffer from this sort of AI-generated content.
Back in 2020, an X-rated video appearing to show TikTok star Addison Rae engaging in a sexual act went viral.
The video was posted by an anonymous X user, who later shared another two sexually explicit videos. While it remains unclear whether this user created the content, the account was suspended for violating the platform’s media policy.
At the time, Addison was only 19 years old.
Much like Taylor’s, Addison’s fans rallied around her, expressing disgust at the content and questioning why so many internet users even believed the video was real.
With the widespread use of AI-driven platforms like ChatGPT, Midjourney, and DALL·E, internet users now have a greater understanding of artificial intelligence than when the clips of Addison went viral.
However, this technology is still in its infancy. As AI becomes more advanced and internet users continue to fall for fabricated content, the importance of moderation and regulation becomes increasingly apparent.
The Dangers of Deepfakes
Creating a realistic deepfake requires detailed facial data. With this in mind, it is no surprise that most deepfakes use celebrity faces. However, just because celebrities and social media influencers are public figures doesn’t mean this technology isn’t dangerous for average social media users.
In the United States, dozens of teenage girls have reported being victimised by deepfakes, with many threatened with AI-generated nude photos of themselves.
This situation would be challenging for anyone, but for young women navigating one of the most confusing and embarrassing periods of their lives, being targeted with this type of content can be deeply traumatic.
Experts and journalists have even come to view this sort of nonconsensual sexually explicit media as a form of abuse.
“The fact that it is not ‘really’ her body does not diminish the seriousness. Research shows that the ‘pornification’ of non-risque photos using Photoshop or AI can be just as harmful to the person depicted,” sociologist Jade Gilbourne writes for The Conversation. “The intent behind these images is not simply sexual; the degradation and humiliation of the victim is an equally desired outcome.”
As deepfakes blur the line between truth and fantasy, the internet is becoming increasingly difficult to navigate. Because many users are unfamiliar with the intricacies of deepfake technology, these clips often deceive viewers into believing the videos are real.
Over the past few years, we have seen deepfake videos of US presidents and Mark Zuckerberg go viral. Because these AI-generated figures often voice controversial and offensive opinions, such videos destabilise an already volatile media and political landscape.
While deepfakes are dangerous for political stability, they pose a profound threat to women and their safety.
The viral deepfakes of Taylor, the internet’s latest victim, are a perfect example of how this format of pornography is almost always nonconsensual. As author and professor Danielle Citron told Deeptrace, by weaponising deepfakes to degrade and silence women, these AI-generated videos uphold patriarchal ideas of women as sexual objects.
“It has nothing to do with clothes or sex. It’s about taking free will away from women with technology,” internet culture and tech journalist Kat Tenbarge shares on X. “It’s not about women as individuals. It’s about the ability to control women, all women, at scale. Your personal decisions, whatever they may be, will be flipped.”
Women have endured the objectification and sexualisation of their identity and likeness online without their consent for years. Unfortunately, as AI advances, more unregulated avenues emerge for internet users to act out hypersexual (and often oppressive) fantasies. Take the rise of companion chatbots, like CarynAI, as an example.
Last year, Snapchat creator Caryn Marjorie, a.k.a. @cutiecaryn, launched an AI voice-based chatbot called CarynAI with tech startup Forever Voices.
As the first creator-turned-chatbot, Caryn hoped the bot would be “the first step in the right direction to cure loneliness.”
Internet users could “enjoy private, personalised conversations” with the AI version of Caryn, and the influencer and the tech team worked tirelessly to make the bot lifelike.
Before long, people began misusing the chatbot’s features, leading CarynAI to participate in explicit conversations, something that (IRL) Caryn did not intend the platform to be used for. The influencer soon issued a brief statement to Insider, noting that the chatbot had gone “rogue” and that the tech team would fix the bug.
With AI-generated content harming female celebrities and everyday women alike, it’s clear that the digital space is becoming increasingly unsafe. Without regulation and moderation, this type of content can ruin careers and reputations, and compromise mental health.
As deepfake technology and AI become more realistic, internet users will keep finding new ways to act out their sexual fantasies through technology. Not only does this erode our shared sense of reality, it also poses a direct threat to the well-being and safety of women.