Few protections for elected officials targeted by deepfakes


2025-11-01

Copyright Arizona Capitol Times


Key Points:

- On Oct. 17, Sen. Wendy Rogers reposted a deepfake video of Gov. Katie Hobbs
- Deepfakes do not violate Arizona laws related to content
- Experts say elected officials cannot rely on regulation to combat deepfakes

“I promise if I’m reelected governor, you won’t recognize your state by the end of my term,” Gov. Katie Hobbs says in a six-second video posted to X on Oct. 17.

Except the governor never said that. The video is a deepfake, developed using generative artificial intelligence and an official portrait of Hobbs taken in 2023.

To Arizona politicos familiar with the governor, the video is obviously fake. The voice does not belong to Hobbs, and the smile the fake governor flashes at the end of the clip is slightly uncanny. But to average voters and out-of-state observers, the video is realistic enough to fool the viewer. And it has been viewed more than 10,000 times, thanks to a repost from Republican Sen. Wendy Rogers, who did not respond to a request for comment.

“This is kind of horrifying,” said Senate Minority Leader Priya Sundareshan. “It is exactly what we have all been worried about with the rise of AI.”

Deepfake content of political figures is nothing new, but it is becoming more common as artificial intelligence models like Grok by X and Sora by OpenAI make its creation more accessible and its output more realistic. Experts say attempts to regulate the content won’t keep pace with its evolution, meaning Arizona lawmakers, candidates and elections officials will have to develop their own strategies to combat AI-generated photos and videos.

Michael Moore, the chief information security officer at the Secretary of State’s Office, has been helping elected officials and elections administrators prepare for deepfake content aimed at disrupting the state’s elections. He says most leaders do not realize how easy it is to create realistic AI-generated content.

“It’s not like just the president of the United States is going to have deepfakes of him, it’s not like just the governor is going to have deepfakes of her,” Moore said. “You can make this of anyone. It could be an elected official, it could be an election official, it could be anyone in your community, like the principal of your kid’s school.”

Moore said the state did not see an influx of AI-generated threats in 2024, just a few relatively unsophisticated deepfakes. But he does foresee deepfakes becoming a bigger issue during the 2026 midterm elections, now that the technology allows anyone with a smartphone to create them.

“My job is to think like a bad guy and then try to defend against it,” Moore said. “If I wanted to erode confidence in elections, these are the tools that I would use. It is incredibly challenging to actually mechanically interfere with ballots; it’s way easier to negatively impact people’s perception and confidence in elections.”

Despite the threat deepfakes pose to Arizona’s elections, the laws regulating them are relatively lax. The deepfake of Hobbs, for example, does not violate either of the two Arizona laws passed in 2024 regarding political deepfakes. Neither Hobbs’ office nor her campaign responded to requests for comment.

Chris McIsaac, a researcher at the nonpartisan R Street Institute, has studied AI policies enacted across the country and said he doesn’t view legislation as the answer to combating deepfakes. “It’s exceedingly difficult to write a law that is going to capture the current state of technology and certainly where the technology is going,” McIsaac said.

One law signed by Hobbs in 2024 requires deepfake creators to include an AI disclosure on manipulated content posted within 90 days of an election. If a politician asks a court to intervene, the creator of the deepfake content will receive a $10 fine per day until the content is removed or reuploaded with a disclosure.

Another law, introduced by Republican Rep. Alex Kolodin of Scottsdale, allows candidates to ask a judge to declare a manipulated photo or video fake. However, the judge cannot require the removal of the content or impose a penalty for its creation.

Kolodin said he doesn’t see a need to rush to create more regulations just yet, as his law never needed to be enforced during the 2024 election cycle. “We’ve tried the light touch; now let’s see how things develop, let’s see where things go from here,” Kolodin said. “Not everything is a matter for legislation or a crisis that the Legislature needs to respond to. The important thing is simply that voters know the truth.”

Sundareshan, who introduced her own deepfake bill that never received a hearing, disagreed. She said the two laws passed in 2024 are just a starting point. “I’m not so sure that what we ended up passing is going to be very capable of deterring this kind of activity,” Sundareshan said.

Some states have attempted harsh crackdowns on AI political content, including outright bans or criminal penalties, according to McIsaac. But those laws are hard to enforce and can run afoul of the First Amendment. For example, a California law attempting to prohibit online platforms from hosting AI-generated political content ahead of an election was struck down by a federal judge in August. A similar law that required labels on digitally manipulated campaign ads was also overturned.

That limits regulation of deepfakes to disclosures, which McIsaac says is an imperfect solution. “The question is, are these types of disclosures really going to have an effect?” McIsaac said. “The jury is still out. I don’t think there’s great examples of those being implemented.”

Moore agreed that legislation won’t keep up with the rapid evolution of AI, but said he supports disclosures for AI-generated content. “I think that that would be a very reasonable piece of legislation,” Moore said. “People can still have their freedom of speech, but if they’re creating masterful impersonations of other people, it being labeled as a deepfake, I think that’s a reasonable request.”

McIsaac said elected officials and government bodies should instead focus on educating and communicating with voters about AI, as a response to a deepfake on social media can come faster than any piece of legislation or court action. “I think a better approach is to think of this as more of a communications exercise and having campaigns figure out ways to quickly counter this false claim that’s floating out there and (put) the counterargument out there for why this is false and then (let) the voters decide who they believe,” McIsaac said.

Kolodin encouraged anyone who sees deepfake content of him to reach out and ask if it’s real. Sundareshan said she and her Democratic colleagues have yet to discuss combating deepfakes, as she wasn’t aware of how far the technology had come. “I think it is just not even well understood how real a threat this might be posing to us and we do need to be having the conversations,” Sundareshan said.
