Deepfakes Threaten US 2024 Election Campaigns
AI-generated videos and deepfakes could sow chaos in the upcoming 2024 US election if American states don’t enact effective regulation. The potential threats of artificial intelligence and deepfakes cannot be denied; they are emerging as a top security issue.
Daniel Weiner, director of the Elections and Government Program at the Brennan Center, believes states need to do more. He said potential regulations would need to be reconciled with First Amendment rights and survive legal challenges.
“Generative AI and deepfake technology are growing and changing quickly and exponentially. Many state lawmakers don’t yet know how to respond to these issues because they don’t sufficiently understand them. And, crucially, any enforcement mechanisms would depend on a broad raft of parties, including giant social media companies.”
Weiner believes states have to navigate these challenges now. “The really corrosive possibilities from deepfakes have fully burst into consciousness in the last year to two years. But there are effective policy solutions on the table.”
What are Deepfakes?
Experts describe deepfakes as videos that use artificial intelligence to create believable but false depictions of real people. In recent months, deepfake videos have become increasingly common and have flooded the internet. Concerns have been raised because, with the election around the corner, voters could see political disinformation videos online and not be able to tell what’s real and what’s not.
An example came during Slovakia’s September parliamentary elections. In the run-up to the vote, the far-right Republika party circulated deepfake videos with an altered voice of Progressive Slovakia leader Michal Simecka announcing plans to raise the price of beer and, most seriously, discussing how his party planned to rig the election.
Deepfakes have already appeared in the US – an altered TV interview with Democratic US Senator Elizabeth Warren circulated on social media earlier this year.
AI, Deepfakes – Scary Stuff
Tony Pietrocola, president of AgileBlue, said artificial intelligence is scary stuff. “When you think about what AI can do, you saw a lot more about not just misinformation, but also more fraud, deception, and deepfakes. It’s pretty scary stuff because it looks like the person, whether it’s a congressman, a senator, a presidential candidate, whoever it might be, and they’re saying something. Here’s the crazy part – somebody sees it, and it gets a bazillion hits. That’s what people see and remember; they don’t ever go back to see that, oh, this was a fake.”
He believes the combination of massive amounts of data stolen in hacks and breaches and improved AI technology can make deepfakes a ‘perfect storm’ of misinformation. “But it’s not just the AI that makes it sound and act real. It’s the social engineering data that threat actors have either stolen, or we’ve voluntarily given.”
And because of AI technology’s open and increasingly widespread availability, deepfakes might not be limited to traditional nation-state adversaries such as Russia, China, and Iran. People anywhere in the world can use these tools.