The legal battle against explicit AI deepfakes


Omny Miranda Martone was not surprised when computer-generated images and videos purporting to show their naked body started to appear on social media in January.

As the founder of the Washington-based Sexual Violence Prevention Association, Martone — who uses non-binary pronouns — had spent the previous year working with people who had suffered non-consensual intimate image abuse, widely known as deepfake pornography. So the real shock was not the synthetic images and videos themselves, although these were alarmingly realistic. It was the sense of powerlessness. 

“All of the knowledge that I have in this field, all my experience of laws and policy, and all of the resources and connections I have to people working in this field . . . All that still did not help me take action,” Martone says. 

There is no federal requirement in the US for websites or social media platforms to remove non-consensual explicit deepfakes flagged by victims. Creating and sharing these images is not a federal offence either, and there is no specific civil right of action for victims outside of costly and often inaccessible defamation claims. 

Around the globe, legislation on explicit deepfakes has been scattered, and remains largely untested. Some jurisdictions have focused on individuals, while others have focused on technology platforms. Some have pursued civil penalties, while elsewhere criminal laws have been drafted.

Governments have also diverged on what aspects of the deepfake process should be treated as an offence. The UK and Australia, for example, have so far criminalised the act of sharing non-consensual explicit deepfakes, but not creating them. By contrast, South Korea has made it a crime to make, share, or even watch deepfake pornography.

In Italy, after explicit synthetic videos of Giorgia Meloni appeared online, the prime minister herself has been seeking €100,000 in damages for defamation. “I reacted like this to protect women,” she told the court.

Martone’s non-profit, the SVPA, had been working on a bill that would enable people to bring a civil suit against anyone who knowingly shares non-consensual sexually explicit deepfakes of them. Called the Defiance Act, the legislation is bipartisan, with backing from the likes of leftwing Democrat Alexandria Ocasio-Cortez, who has herself repeatedly featured in deepfake pornography, and Trump ally Lindsey Graham.

It was when this bill entered the Senate that an anonymous account started to upload the faked explicit images of Martone to X. “Through advocating on this issue, I myself became a victim,” Martone says.


Lawmakers, researchers and campaigners warn that rapid advances in generative artificial intelligence have triggered a surge in deepfake image abuse. Until recently, deepfakes required a lot of time, computing power and technical expertise, and most depicted celebrities or public figures.

But advances in technology have made it increasingly easy and cheap to generate hyper-realistic synthetic images and videos. This has been accelerated by a host of AI-based apps and websites — which often need just a single photograph to create plausible videos.

In September, there were more than 32mn visits globally to “nudify” or “undressing” sites, according to figures from the social network analysis company Graphika shared with the Financial Times.

“The development of generative AI allows anybody to make abusive content of anyone,” says Sophie Compton, co-founder of the UK-based campaign group #MyImageMyChoice. 

The majority of these sites have no age-verification processes for either the creator or the subject of the image, meaning that they can be used to create synthetic sexual abuse images of children as well as non-consensual adult deepfakes. Some sites also have options to make people appear younger than they really are, according to Graphika.

Schools in many places, from the UK to South Korea, have reported that children are increasingly accessing these tools. In one recent case in the Australian state of Victoria, a teenage boy allegedly used AI to doctor images taken from social media, creating explicit pictures of about 50 female students.

“This is part of the broader issue with AI,” says Jack Stubbs, chief intelligence officer at Graphika. “This technology is all about amplifying existing processes.”

Explicit images have indeed long been at the forefront of digital innovation: pornography has been credited with everything from driving the adoption of VHS tapes in the 1980s to inspiring the creation of online payment systems in the late 1990s. And legislation has attempted to catch up. Early internet censorship focused on restricting access to adult content. As awareness of — and anxiety about — AI has increased, politicians have begun to heed campaigners’ calls to provide legal and regulatory frameworks. 

But so far there is little agreement on the best way forward. There are also growing questions over whether effective policing is even possible, given how rapidly AI is developing.

“I can’t think of any other technological issue where there is such bipartisan agreement, and yet we still don’t have any meaningful legislation,” says Max Tegmark, president of the Future of Life Institute, a non-profit that campaigns for AI regulation. “If we can’t even do that, then who . . . are we kidding when it comes to controlling AI?”

Yet analysts say that, if framed correctly, legislation could provide a template for how to police other abuses of advanced AI, including fraudulent advertising, voice-cloning scams and misinformation — and even AI more generally. It could also be an early test for how courts assess evidence in an era of sophisticated, easily accessible digital forgery.

Finding a way to limit deepfakes could prove instructive as well as challenging, says Henry Ajder, an AI researcher and adviser to technology companies including Meta. “How the hell do justice systems around the world treat digital evidence in an age of AI?” he asks.


The US-based live-streamer “Gibi”, who has more than 5mn YouTube subscribers, has been targeted by graphic deepfakes since 2018. But she says they have increased rapidly in both realism and volume during the past year. “In the beginning it was a bit laughable and you could kind of ignore it,” she says. “Now it’s so much worse.”

Although she says she knows the name and the location of one person who has created deepfakes of her, she has limited legal avenues for seeking damages, and says she is stuck in “a waiting game” until Congress passes the Defiance Act.

The bill, which unanimously passed the Senate in July, would grant identifiable victims of explicit deepfakes the right to sue individuals who “knowingly produce, distribute, or receive” them.

Studies have consistently shown that women are disproportionately the target of deepfakes. A report by tech start-up Security Hero last year said that more than 90 per cent of deepfake videos on the internet were pornographic, and more than 99 per cent of these featured women as primary subjects.

Norma Buster, chief of staff at the Brooklyn-based law firm CA Goldberg, says women are often targeted as a form of retribution for rejecting unwanted advances, or in retaliation for speaking out about their views. “These are crimes that are meant to shame the victim into silence,” she says. 

Another piece of legislation on the table in the US, the Shield Act, would introduce criminal penalties for individuals who share private, sexually explicit or nude images without a person’s consent, even if they’re artificial. The bill has passed the Senate but remains under consideration by the House judiciary committee.

Both bills focus on individual perpetrators rather than the AI tools used to create deepfakes or the platforms used to share them. This is largely because of Section 230 of the US Communications Decency Act (1996), an early attempt to regulate the internet, which states that platforms hosting third-party content are not liable for what users choose to post.

Proponents say Section 230 is essential for free expression and innovation, while critics argue that it fosters inadequate content moderation. “Section 230 is the real barrier for us to make change,” says Buster.

By contrast, the EU has introduced tough measures on online safety and AI that would put the onus on technology companies instead. The legislation, which has come under fierce criticism from those in the industry, will require platforms to ensure that their AI models have guardrails in place and cannot be tricked into generating non-consensual explicit content.

Separate EU legislation already requires social media companies to swiftly remove non-consensual intimate material posted by users. Other jurisdictions, including Australia and the Canadian province of British Columbia, have introduced similar civil frameworks, which penalise platforms that fail to comply.

When it comes to criminal law, countries have also taken patchwork approaches. The US has not passed any federal criminal legislation on explicit deepfakes, leaving it to individual states to come up with solutions. South Dakota has expanded its definition of child sexual abuse material to specifically include deepfakes; its neighbour, North Dakota, has not.


Prosecutors have also been struggling to assess how the newest deepfakes fit into existing laws, particularly around child sexual abuse. In one recent UK case, a man was sentenced to 18 years in prison after pleading guilty to 16 sexual offences, including using AI to create child sexual abuse imagery, which he had customised to buyers’ specifications.

According to specialist prosecutor Jeanette Smith, the trickiest part of the case was figuring out how to classify the images in question. Made using US-built software, they did not look like real photos. Nevertheless, Smith’s team successfully argued that they could be judged as “indecent photographs” — the most serious category — because they had been derived from images of real children.

The case was the first of its kind in the UK, but Smith says she expects to see more as AI evolves. She also fears these cases will become even more complicated to prosecute as “that line between whether it’s generated or whether it’s a photograph will blur”.

“It’s going to get super-hard to tell if something is real or not,” adds Hayley Brady, a partner at the UK law firm Herbert Smith Freehills.


Against this fragmented legal backdrop, Big Tech has largely continued to police itself. Campaigners have long been frustrated with what they perceive as platforms’ inaction when it comes to harmful content.

Some, however, are optimistic that recent high-profile scandals involving intimate deepfakes might bring change. When deepfakes of Taylor Swift circulated widely on X in January — one image was seen 47mn times before it was taken down — Microsoft chief executive Satya Nadella said the “terrible” incident proved that Big Tech had to act. “The conversation has definitely begun,” says Compton of #MyImageMyChoice.

This year, Microsoft’s coding website GitHub removed several repositories of open-source code that had been used to create explicit deepfakes, as well as links to sites hosting deepfake pornography. GitHub did not respond to a request for comment.

Google, meanwhile, said in July that it was making changes to its search engine to make it easier for victims of intimate image abuse to have videos and images of themselves removed from search results, and that it would downgrade the ranking of sites that have received removal notices.

It has not, however, gone so far as to remove deepfake sites from search results altogether — a move that advocacy groups such as #MyImageMyChoice have been pushing for. Google argues that delisting sites entirely could block victims from accessing important information, such as how to request that content be removed.

Some in tech have argued that they need clearer guidelines from governments on what is and is not acceptable, and on how they should police their platforms.

“Governments are really the only ones who can take the action [against] what is a fundamentally very harmful form of harassment,” says Risa Stein, vice-president responsible for customer experience at the dating app Bumble, a member of the Center for Democracy & Technology’s working group on non-consensual intimate image abuse.


Because generative AI enables people to produce videos, images and audio that are entirely synthetic but fully plausible, it poses huge new challenges, says researcher Nicola Henry, a member of the Australian eSafety Commissioner’s advisory group: “Who gets to decide how realistic it is? Who gets to decide how much it looks like you?”

Moreover, even if search engines, social media platforms and technology companies do crack down on explicit deepfakes, it might not be enough. Many images and videos are generated using customised open-source systems and disseminated on encrypted messaging platforms such as Telegram, where they are difficult to trace.

San Francisco’s city attorney, David Chiu, is aware of all of these challenges. Nevertheless, he is optimistic about his continuing case against 16 of the most widely used “undressing” sites.

The lawsuit, brought on behalf of the residents of California in August, alleges that these sites violated state laws against fraudulent business practices, non-consensual pornography and child sexual abuse.

When governments and technology platforms have previously attempted to block or delist sites, they’ve often popped up again days later under modified domain names. But Chiu says he wants to use his first-of-its-kind lawsuit to “set an example” — both to the sites themselves, and the payment platforms and search engines that enable them to make a profit.

“There are other technology providers that are facilitating this and profiting off this, whether or not they intend it,” says Chiu. “We want to send a very strong signal to everyone involved.”

Most activists accept that it will be impossible to stop people from generating non-consensual deepfakes entirely. Nevertheless, they hope that legislation could force Big Tech platforms to make it harder to access, advertise and monetise these products. “If we can just increase the barrier to entry again, we’ll make a real difference,” says Martone.

This could have broader implications beyond intimate image abuse. The same technologies used to create explicit deepfakes are also used for sophisticated fraud campaigns — including phishing scams that clone a person’s voice with tools that are already widely available — and for disinformation of the kind seen in recent elections around the world.

Campaigners hope that legislation, even if flawed or experimental, will at least make people take deepfakes more seriously. “You need to be able to reap some sort of compensation for what has been done to you,” says YouTuber Gibi.
