Sexually graphic “deepfake” images of Taylor Swift went viral on social media last week, fueling widespread condemnation from Swifties, the general public and even the White House.
This problem isn’t new. Swift is one of many celebrities and public figures, mainly women, who have fallen victim to deepfake pornography in recent years. High-profile examples garner significant media attention, but the increasingly sophisticated nature of AI means anyone can now be targeted.
While there are grave concerns about the broader implications of deepfakes, it’s important to remember the technology isn’t the cause of abuse. It’s just another tool used to enact it.
Deepfakes and other digitally manipulated media
The sexually explicit deepfakes of Swift appeared on multiple social media platforms last week, including X (formerly Twitter), Instagram, Facebook and Reddit.
Most major platforms ban the sharing of synthetic and digitally manipulated media that causes harm, confusion or deception, including deepfake porn. This covers images created through simpler means such as photo-editing software. Nonetheless, one deepfake depicting Swift was viewed 47 million times over a 17-hour period before it was removed from X.
There’s a long history of digital technologies, apps and services being used to facilitate gender-based violence, including sexual harassment, sexual assault, domestic or family violence, dating abuse, stalking and monitoring, and hate speech.
As such, our focus should also be on addressing the problematic gender norms and beliefs that often underpin these types of abuse.
The emergence of deepfakes
The origins of deepfakes can be traced to November 2017 when a Reddit user called “deepfakes” created a forum and video-editing software that allowed users to train their computers to swap the faces of porn actors with the faces of celebrities.
Since then, there’s been a massive expansion of dedicated deepfake websites and threads, as well as apps that can create customized deepfakes for free or for a fee.
In the past, creating a convincing deepfake often required extensive time and expertise, a powerful computer and access to multiple images of the person being targeted. Today, almost anyone can make a deepfake—sometimes in a matter of seconds.
The harms of deepfake porn
Not all applications of AI-generated imagery are harmful. You might have seen funny viral deepfakes such as the images of Pope Francis in a puffer jacket. Or if you watch the latest Indiana Jones film, you’ll see Harrison Ford “de-aged” by 40 years thanks to AI.
That said, deepfakes are often created for malicious purposes, including disinformation, cyberbullying, child sexual abuse, sexual extortion and other forms of image-based sexual abuse.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Taylor Swift deepfakes: New technologies have long been weaponized against women. The solution involves everyone (2024, February 1), retrieved 1 February 2024