It’s 8 a.m. on a Monday, and you’re pedaling furiously to make it to class on time. The sky glistens in the morning light, and by all accounts it’s a normal day. But as you enter your classroom, other students start giving you sideways glances. Without your knowledge, they have created nude images of you using artificial intelligence.
For you it’s a nightmare; for others, it’s a reality. Recent high-profile cases in California and across the United States have prompted the first legal action against the companies that produce such content.
The leader of this charge? San Francisco City Attorney David Chiu, who, on Aug. 15, filed a lawsuit targeting 16 websites that use AI to create non-consensual deepfake nudes.
The lawsuit seeks to dismantle these sites, which allow users to upload photos of individuals and use AI to digitally remove their clothing.
“Our Chief Deputy City Attorney was reading the news and saw an article about a young woman who had a nonconsensual deepfake image created of her,” Alex Barrett-Shorter, deputy press secretary, told Anthro. “The article talked about how it was really difficult to find out who did it, and there wasn’t much recourse for her. So she [the Chief Deputy City Attorney] came back to her team and said, ‘Can we look into this and see if there’s anything our office can do?’”
That incident is what inspired the San Francisco deepfake lawsuit.
Collectively, the 16 websites the lawsuit targets amassed more than 200 million visits in the first six months of 2024 alone. The San Francisco City Attorney’s Office accuses the websites of violating multiple laws that prohibit the creation and distribution of non-consensual intimate images.
Notably, one company, Itai Tech, owns four of these websites. Its sites let users take any image of a victim, whether from social media or elsewhere online, and generate, with one click, a nude image of that person bearing their real face.
These images have often been used to harm. In one notable case at a middle school in Beverly Hills, 16 deepfake nude images of eighth-grade girls circulated among students, causing widespread concern among parents and educators. In another case, at Westfield High School in New Jersey, students created deepfake nudes of multiple 10th graders.
The Federal Bureau of Investigation has also warned of an increase in extortion schemes in which perpetrators create AI-generated nudes and threaten to release the images unless victims comply with their demands, which are usually monetary.
The Attorney’s Office’s primary objective is to shut down these websites completely, a departure from previous lawsuits, which only retroactively targeted the people who created the images.
The central challenge with deepfakes is that anyone can be a victim, no matter what they do, because the images can be created without any action on the victim’s part.
“I’m scared of the advancement of AI and how it has the potential to create any video, image, or voice recording, and make you do things you didn’t do,” Paly junior Estelle Dufour said. “You have no way of control.”
Another challenge is that deepfakes are extremely difficult to track down and get rid of due to how quickly they spread.
“Once those images are out there on the internet and circulating, you can’t really rein it back in, and there’s no way to completely get rid of those images once they’re on the internet,” Barrett-Shorter said.
Certainly, deepfake nudes are not an issue to be taken lightly.
Currently, RISE, a Paly club devoted to promoting sexual safety and consent culture, is working on a consent education program for Paly.
Some states have already taken action to regulate deepfakes. For example, Governor Gavin Newsom recently signed Assembly Bill 2655, which mandates that large online platforms remove or label deceptive AI-generated content related to elections.
However, the bill’s scope is limited to election-related content and excludes other cases, including the pending lawsuit, leaving prosecutors to rely on existing laws to go after these companies.
Some people, however, believe legislation alone won’t be enough to regulate AI. Instead, they argue that AI alignment, the practice of programming AI so that it reflects human values and ideals, can act as a filter that prevents most abuse. Jean-Marc Mommessin, a Stanford AI alignment board member, said he believes alignment is the best way to regulate AI deepfakes.
“Alignment gets you eighty percent of the way there, and the other twenty percent has got to come from the government,” Mommessin told Anthro.
In practice, how a model is aligned determines how it responds to harmful requests.
“Ask a chatbot how to build a bomb, and it can respond with a helpful list of instructions or a polite refusal to disclose dangerous information,” wrote Kim Martineau, a science writer for IBM Research. “Its response depends on how it was aligned by its creators.”
Using alignment, then, creators could make it impossible to generate pornographic deepfakes. For now, though, students and others remain unprotected, with retroactive action as their only recourse.
Do, of RISE, urges anyone who is a victim of deepfakes to speak out and seek help from people they trust.
“I would encourage students to report it to [the Palo Alto Unified School District’s] Title IX [office] (law that protects students from sex discrimination) or report it to our club, so we can help them combat this issue and spread awareness about how we can stop it,” Do said.