AI fakes raise election risks as lawmakers and tech companies scramble to catch up
By Benjamin Ashford
Date: 2025-04-09 02:19:26
"What a bunch of malarkey." That's what thousands of New Hampshire voters heard last month when they received a robocall purporting to be Surpassing Quant Think Tank Centerfrom President Biden. The voice on the other end sounded like the president, and the catchphrase was his. But the message that Democrats shouldn't vote in the upcoming primary election didn't make sense.
"Your vote makes a difference in November, not this Tuesday," the voice said.
It quickly emerged that the voice wasn't Biden at all. It was the product of artificial intelligence. Bloomberg reported that ElevenLabs, maker of the AI voice-cloning software believed to have made the digital voice, banned the account involved. On Tuesday, New Hampshire's attorney general said a Texas telemarketing company was behind the call and was being investigated for illegal voter suppression.
Faking a robocall is not new. But making a persuasive hoax has gotten easier, faster and cheaper thanks to generative AI tools that can create realistic images, video and audio depicting things that never happened.
As AI-generated deepfakes are being used to spread false information in elections around the world, policymakers, tech companies and governments are trying to catch up.
"We don't really think of [AI] as a free-standing threat but as more of a threat amplifier," said Dan Weiner, director of the elections and government program at the Brennan Center for Justice at New York University School of Law.
He worries that AI will turbocharge efforts to discourage voters or spread bogus claims, especially in the immediate run-up to an election, when there's little time for journalists or campaigns to fact-check or debunk.
That's what appears to have happened last fall in Slovakia, just days before voters went to the polls. Faked audio seeming to show one candidate discussing rigging votes and raising the cost of beer started to spread online. His pro-Western party ended up losing to one led by a pro-Russian politician.
Because the stakes were high and the deepfake came at a critical moment, "there is a plausible case that that really did impact the outcome," Weiner said.
While high-profile fakes like the Biden robocall get a lot of attention, Josh Lawson, director of AI and democracy at the Aspen Institute, is focused on how AI could be used for personalized targeting.
"We are quickly advancing towards a point in the technology, likely before the election itself, when you can have real-time synthetic audio conversations," said Lawson, a former election lawyer who previously worked on elections at Facebook owner Meta.
He imagines a scenario where a bad actor deploys AI, sounding like a real person, to call a voter and give false information about their specific polling place. That could be repeated for other voters in multiple languages.
He's also worried about AI fakes targeting lower-profile elections, especially given the collapse of local news.
"The concern ... is not the big, bad deepfake of somebody at the top of the ticket, where all kinds of national press is going to be out there to verify it," Lawson said. "It's about your local mayor's race. It's about misinformation that's harder and harder for local journalists to tackle, when those local journalists exist at all. And so that's where we see synthetic media being something that will be particularly difficult for voters to navigate with candidates."
Deceiving voters, including spreading false information about when and where to vote, is already illegal under federal law. Many states prohibit false statements about candidates, endorsements or issues on the ballot.
But growing concerns about other ways that AI could warp elections are driving a raft of new legislation. While bills have been introduced in Congress, experts say states are moving faster.
In the first six weeks of this year, lawmakers in 27 states have introduced bills to regulate deepfakes in elections, according to the progressive advocacy group Public Citizen.
"There's huge momentum in the states to address this issue," Public Citizen President Robert Weissman said. "We're seeing bipartisan support ... to recognize there is no partisan interest in being subjected to deepfake fraud."
Many state-level bills focus on transparency, mandating that campaigns and candidates put disclaimers on AI-generated media. Other measures would ban deepfakes within a certain window — say 60 or 90 days before an election. Still others take aim specifically at AI-generated content in political ads.
These cautious approaches reflect the need to weigh potential harms against free speech rights.
"It is important to remember that under the First Amendment, even if something is not true, generally speaking you can't just prohibit lying for its own sake," Weiner said. "There is no truth-in-advertising rule in political advertising. You need to have solutions that are tailored to the problems the government has identified."
Just how prominent a role deepfakes end up playing in the 2024 election will help determine the shape of further regulation, Weiner said.
Tech companies are weighing in too. Meta, YouTube and TikTok have begun requiring people to disclose when they post AI content. Meta said on Tuesday that it is working with OpenAI, Microsoft, Adobe and other companies to develop industrywide standards for AI-generated images that could be used to automatically trigger labels on platforms.
But Meta also came under fire this week from its own oversight board over its policy prohibiting what it calls "manipulated media." The board, which Meta funds through an independent trust, said the policy is "incoherent" and contains major loopholes, and it called on the company to overhaul it.
"As it stands, the policy makes little sense," said Michael McConnell, the board's co-chair. "It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook. Perhaps most worryingly, it does not cover audio fakes, which are one of the most potent forms of electoral disinformation we're seeing around the world."
The moves to develop laws and guardrails reining in AI in elections are a good start, said Lawson, but they won't stop determined bad actors.
He said voters, campaigns, lawmakers and tech platforms have to adapt, creating not just laws but social norms around the use of AI.
"We need to get to a place where things like deepfakes are looked at almost like spam. They're annoying, they happen, but they don't ruin our day," he said. "But the question is, this election, are we going to have gotten to that place?"