2024 elections are threatened by AI abuse, experts say. Literacy is key.


It isn’t on any ballot, but generative AI may be the most pervasive issue of the 2024 elections.

GenAI could wreak havoc on next year’s elections globally, unleashing a flood of misleading AI-generated deep-fake images, video and audio that blur the line between fact and fiction. Presidential campaigns are already using the technology to deceive voters, and there is grave concern that operatives in Russia, China, North Korea and Iran will do the same.

“The general public has no idea what a deep fake is — or misinformation, for that matter,” Susan Gonzales, CEO of the nonprofit AIandYou, said in an interview. “The 2024 election is going to come down to AI literacy.”

Examples abound more than a year before the U.S. presidential election on Nov. 5, 2024. Florida Governor Ron DeSantis, a Republican seeking the party’s presidential nomination, released a video that used AI-generated imagery to depict former President Trump embracing Dr. Anthony Fauci. The Republican National Committee used AI to generate an attack ad against President Biden that depicts a dystopian vision of the U.S. if he is re-elected.

The prospect has tech leaders, lawmakers and federal regulators scrambling to get ahead of the problem and avoid a repeat of the social-media chaos that undermined the 2016 presidential election.

“Nobody knows what to expect,” said Bob Ackerman, founder of cyber venture firm AllegisCyber Capital, who recently met with FBI cybersecurity officials about election interference. “Machine learning on steroids is the boogeyman. It has created the fear of the unknown.”

Former Google CEO Eric Schmidt is among those worried. In June, he warned “the 2024 elections are going to be a mess, because social media is not protecting us from falsely generated AI.”

Some 500,000 video and voice deep fakes will be shared on social media this year, according to DeepMedia, an AI communication company. Political insiders are fretting over potential scenarios such as fake images of electoral ballot-stuffing paired with AI audio purporting to come from trusted sources or authorities.

Alex Stamos, the former chief security officer at Facebook (now Meta), is raising a red flag about the vast troves of fake content that generative AI makes possible.

Read more: Former Facebook security head warns 2024 election could be ‘overrun’ with AI-created fake content

“My fear is it will happen, and may already be happening,” Stamos, an adjunct professor at Stanford University’s Center for International Security and Cooperation, said in an interview. “What once took a team of 20 to 40 people working out of [Russia or Iran] to produce 100,000 pieces can now be done by one person using open-source gen AI.”

The abuse of AI for political gain isn’t surprising in a field notorious for misleading claims. Candidates and their campaigns have long relied on fliers and robocalls carrying false claims. It was the misuse of social media in the 2016 race, though, that demonstrated how false narratives could be deployed to raise contributions, organize supporters and influence votes.

Lawmakers are trying to get ahead of the AI problem to avoid a repeat of what happened seven years ago. Last month, Senate Majority Leader Chuck Schumer, D-N.Y., led a closed-door meeting of CEOs from OpenAI, Facebook parent Meta Platforms Inc., Google parent Alphabet Inc., Microsoft Corp., Palantir Technologies Inc. and others to address AI safety, security and trust.

Read more: Elon Musk and Mark Zuckerberg will talk AI rules in Senate forum today

Google and YouTube said last month they had created new rules for political advertisements that use AI to alter imagery or voices. The rules go into effect in November 2023.

Read more: Google to political advertisers using AI: Be ‘clear’ about any digitally created content

Nonprofit AIandYou on Tuesday launched an AI literacy campaign focused on the 2024 elections and misinformation.

In addition to industry efforts to rein in AI, federal legislation like the REAL Political Advertisements Act, which would require disclosure of AI-generated text, images, video and audio in political content, could provide a guardrail.

The Federal Election Commission briefly considered regulating deep fakes, but couldn’t agree on a framework.

Overseas, the European Union’s AI Act identifies AI tools that could sway voters or influence elections as high-risk systems requiring regulatory oversight. 

Of course, some lawyers caution, there needs to be a balance: rules that enforce responsible AI use without trampling free expression.

“Over-regulation could lead to infringement on free speech and stifle creativity,” Jessica Furst Johnson, who practices campaign-finance and election law, said in an interview. “That’s why disclaimers like those in Google ads are important.”
