TikTok will become the first social media platform to automatically label some artificial intelligence-generated content, as rapid advances in generative AI deepen concerns about the spread of online disinformation and deepfakes.
Online groups, such as Facebook owner Meta and TikTok, already require users to disclose if realistic images, audio or videos are made through AI software.
The viral video app, owned by China’s ByteDance, went a step further on Thursday, announcing it is introducing its own features to ensure that videos it can identify as AI-generated will be labelled as such. This will include content made using Adobe’s Firefly tool, TikTok’s own AI image generators and OpenAI’s Dall-E.
“The challenge is, we know from many experts that we work with, that there is a rise in . . . harmful AI-generated content,” said Adam Presser, TikTok’s head of operations and trust and safety.
“This is really important for our community because authenticity is really one of the elements that has made TikTok such a vibrant and joyful community . . . they want to be able to understand what has been made by a human and what has been enhanced or generated with AI.”
Social media platforms including TikTok, Meta, X and YouTube have all been exploring ways to integrate generative AI into their platforms, through chatbots and new tools helping influencers and advertisers to create media. However, the platforms have come under fire for allowing users to be flooded with low-quality AI-generated spam content.
In a year of major elections around the world, these companies also face pressure to introduce guardrails around misleading deepfakes, curb covert influence operations and ensure they properly moderate content while remaining non-partisan.
Earlier this week, TikTok and its parent ByteDance filed a lawsuit against the US government, challenging a law designed to force a sale or ban of the app. Lawmakers had expressed concern the platform could push disinformation and propaganda.
On Thursday, TikTok said it would join a coalition of technology and media groups, led by Adobe, that incorporate so-called content credentials into AI-generated products.
This technology embeds a digital fingerprint into multimedia AI content, along with other identifying information such as when, where and by whom the material was generated. TikTok will use these indicators to automatically flag when content is made using AI products.
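The idea behind content credentials can be sketched roughly as follows. This is an illustrative simplification only, not the actual C2PA manifest format used by the Adobe-led coalition; the function names, fields, and values here are hypothetical.

```python
import hashlib

def make_credential(media_bytes: bytes, tool: str, author: str, created_at: str) -> dict:
    """Build a simplified content-credential record: provenance metadata
    plus a cryptographic fingerprint of the media itself.
    (Illustrative only; the real C2PA spec defines a signed binary
    manifest embedded in the file.)"""
    return {
        "tool": tool,              # which generator produced the media
        "author": author,          # who generated it
        "created_at": created_at,  # when it was generated
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    """Check that the media still matches the fingerprint in the credential."""
    return credential["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()

# Hypothetical usage: a platform checks incoming media against its credential.
image = b"...raw image bytes..."
cred = make_credential(image, tool="Dall-E 3", author="example-user",
                       created_at="2024-05-09T12:00:00Z")
print(verify_credential(image, cred))              # unmodified media matches
print(verify_credential(image + b"edit", cred))    # altered media does not
```

A platform receiving media with such a record could automatically apply an "AI-generated" label when the embedded tool field names a known generator, which is the behaviour TikTok describes.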
OpenAI on Tuesday announced that it would join the coalition, known as the Content Authenticity Initiative, and embed the fingerprinting technology into all images generated by its image model Dall-E 3. Eventually, the ChatGPT maker said, it would also embed the fingerprints into output from its video-generating model Sora, when it is widely released.
Large technology companies, including Google, Microsoft and Sony, are exploring embedding the technology into their AI tools.
Meta said earlier this month it would start stamping AI-generated content with a “Made by AI label” by detecting invisible markers inserted by groups such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock. Facebook’s owner also said it was developing deepfake detection classifiers.
Experts have argued that bad actors or sophisticated disinformation groups are likely to use open source AI generation tools to create deepfakes, making them harder to trace and allowing them to evade digital fingerprinting and watermarking.
Tech companies argue that their efforts represent a first step towards tackling the problem.
Dana Rao, general counsel and chief trust officer at Adobe, said: “The premise of this solution is if you want to be transparent and have authentic, transparent conversation with your public, this tool will allow you to do it in a world where everything digital can be manipulated.”