“Falsehood flies, and truth comes limping after it,” wrote the Irish satirist Jonathan Swift in 1710. The rise of AI-generated “deepfakes” — hyper-realistic video, audio, or image manipulations — means fiction is now flying faster than ever before. This week, X blocked searches for Taylor Swift after fake sexually explicit images of the pop star spread on the social media platform. Before reality caught up, one image of the megastar had already been seen 47mn times.
As deepfake software has become slicker, more lifelike and more widely accessible, its use for nefarious purposes has grown. Safeguards for individuals and society have not kept up. Legal protections are often insufficient to prevent harmful content spreading, and social media outlets have differing content rules and uneven capabilities for removing malign posts. Beyond the harm to individual victims, deepfakes blur the boundary between what is real and what is not, and risk sowing mistrust in everything on the internet. Legislators, social media platforms and the tech companies that create these tools must step up their efforts to prevent, detect and respond to harmful deepfakes.
Swift may be the highest-profile female victim, but she is far from the only one. Research suggests that the vast majority of deepfake images online are pornographic, and that they overwhelmingly depict women. The software has been used to scam, blackmail and destroy reputations. It also poses an enormous risk to democracy across the world. It has been used to spread false information, manipulate public opinion and impersonate politicians. Last month, a robocall using faked audio of US President Joe Biden urged Democrats to stay at home ahead of the New Hampshire primary.
The US Federal Communications Commission this week proposed that the use of voice-cloning technology in robocalls be made illegal. Yet regulating harmful AI-generated content is far from simple; determining malicious intent can be hard. Trying to prevent its creation runs into problems of defining what is permitted and what is not, and could stifle free speech and innovative uses of the software. For all its flaws, the technology has been used for good, for instance to support educators, film-makers and even patients who have lost their voice. The range of software and the borderless nature of social media also make policing often anonymous creators a logistical challenge. The EU's AI Act requires creators to disclose the use of deepfake technology, but that does not prevent malicious content from spreading.
The onus must be on targeting the distribution of malign deepfakes. The UK's Online Safety Act makes it illegal to disseminate non-consensual pornographic deepfakes; other countries should follow. The legal framework governing the spread of other injurious deepfakes without consent, including copyright, defamation and rules on election interference, also needs tightening. Boundaries need to be clear, so as not to stifle legitimate free expression. Public awareness campaigns and improved media literacy would help users catch and report malicious deepfakes too.
Deleterious content will still seep through. Social media companies should therefore be made more accountable, with penalties if they fail to take adequate action, such as detecting and removing malicious content faster and banning fake accounts. Investment in better AI-detection software would help, along with “watermarking” to distinguish machine-generated posts from real ones. Collaboration on safeguards across platforms and software makers is vital.
As protection efforts advance, so will the malign technologies designed to evade them. Policymakers and companies must act fast, while avoiding unintended consequences. But with the rights of individuals, especially women, and democratic discourse at stake, they cannot afford to fall further behind.