Policymakers around the world are worried that AI-generated disinformation will be harnessed to mislead voters and inflame divisions ahead of several big elections next year.
In one country it is already happening: Bangladesh.
The South Asian nation of 170mn people is heading to the polls in early January, a contest marked by a bitter and polarising power struggle between incumbent Prime Minister Sheikh Hasina and her rivals, the opposition Bangladesh Nationalist party.
Pro-government news outlets and influencers in Bangladesh have in recent months promoted AI-generated disinformation created with cheap tools offered by artificial intelligence start-ups.
In one purported news clip, an AI-generated anchor lambasts the US, a country that Sheikh Hasina’s government has criticised ahead of the polls. A separate deepfake video, which has since been removed, showed an opposition leader equivocating over support for Gazans, a potentially ruinous position in the Muslim-majority country with strong public sympathy for Palestinians.
Public pressure is rising on tech platforms to crack down on misleading AI content ahead of several big elections expected in 2024, including in the US, the UK, India and Indonesia. In response, Google and Meta have recently announced policies to start requiring campaigns to disclose if political adverts have been digitally altered.
But the examples from Bangladesh show not only how these AI tools can be exploited in elections but also the difficulty in controlling their use in smaller markets that risk being overlooked by American tech companies.
Miraj Ahmed Chowdhury, managing director of the Bangladesh-based media research firm Digitally Right, said that while AI-generated disinformation was “still at an experimentation level” — most is created with conventional photo or video editing tools — the examples show how quickly it could take off.
“When they have technologies and tools like AI, which allow them to produce misinformation and disinformation at a mass scale, then you can imagine how big that threat is,” he said, adding that “a platform’s attention to a certain jurisdiction depends on how important it is as a market”.
Global focus on the ability to use AI to create misleading or false political content has risen over the past year with the proliferation of powerful tools such as OpenAI’s ChatGPT and AI video generators.
Earlier this year, the US Republican National Committee released an attack ad using AI-generated images to depict a dystopian future under President Joe Biden. And YouTube suspended several accounts in Venezuela using AI-generated news anchors to promote disinformation favourable to President Nicolás Maduro’s regime.
In Bangladesh, the disinformation fuels a tense political climate ahead of polls in early January. Sheikh Hasina has cracked down on the opposition. Thousands of leaders and activists have been arrested in what critics warn amounts to an attempt to rig polls in her favour, prompting the US to publicly pressure her government to ensure free and fair elections.
In one video posted on X in September by BD Politico, an online news outlet, a news anchor for “World News” presented a studio segment — interspersed with images of rioting — in which he accused US diplomats of interfering in Bangladeshi elections and blamed them for political violence.
The video was made using HeyGen, a Los Angeles-based AI video generator that allows customers to create clips fronted by AI avatars for as little as $24 a month. The same anchor, named “Edward”, can be seen in HeyGen’s promotional content as one of several avatars — which are themselves generated from real actors — available to the platform’s users. X, BD Politico and HeyGen did not respond to requests for comment.
Other examples include anti-opposition deepfake videos posted on Meta’s Facebook, including one that falsely purports to be of exiled BNP leader Tarique Rahman suggesting the party “keep quiet” about Gaza to not displease the US. The Tech Global Institute, a think-tank, and media non-profit Witness both concluded the fake video was likely AI-generated.
AKM Wahiduzzaman, a BNP official, said that his party asked Meta to remove such content but “most of the time they don’t bother to reply”. Meta removed the video after being contacted by the Financial Times for comment.
In another deepfake video, created using the Tel Aviv-based AI video platform D-ID, the BNP’s youth wing leader Rashed Iqbal Khan is shown lying about his age in what the Tech Global Institute said was an effort to discredit him. D-ID did not respond to a request for comment.
A primary challenge in identifying such disinformation was the lack of reliable AI-detection tools, said Sabhanaz Rashid Diya, a Tech Global Institute founder and former Meta executive, with off-the-shelf products particularly ineffective at identifying non-English language content.
She added that the solutions proposed by large tech platforms, which have focused on regulating AI in political adverts, would have limited effect in countries such as Bangladesh, where adverts are a smaller part of political communication.
“The solutions that are coming out to address this onslaught of AI misinformation are very western-centric,” she said. Tech platforms “are not taking this as seriously in other parts of the world”.
The problem is exacerbated by the lack of regulation or its selective enforcement by authorities. Bangladesh’s Cyber Security Act, for example, has been criticised for giving the government draconian powers to crack down on dissent online.
Bangladesh’s internet regulator did not respond to a request for comment about what it is doing to control online misinformation.
A greater threat than the AI-generated content itself, Diya argued, was the prospect that politicians and others could use the mere possibility of deepfakes to discredit uncomfortable information.
In neighbouring India, for example, a politician responded to a leaked audio recording in which he allegedly discussed corruption in his party by claiming it was fake — a claim subsequently dismissed by fact-checkers.
“It’s easy for a politician to say, ‘This is deepfake’, or ‘This is AI-generated’, and sow a sense of confusion,” she said. “The challenge for the global south . . . is going to be how the idea of AI-generated content is being weaponised to erode what people believe to be true versus false.”
Additional reporting by Jyotsna Singh in New Delhi