Joe Biden deepfake robocalls, Taylor Swift deepfake porn, the launch of Sora: the past few weeks have seen increasingly disruptive surges in the generative tools collectively known as AI. With rapid acceleration comes agitation, and anxious regulators are cobbling together policy as fast as they can. In the EU and the UK, some laws are already in place or close to it, and North American legislators are on a fast break toward regulation. Still, given the speed at which things are progressing, it may not be fast enough.

Discussions about the societal impact of AI deepfakes have always been laced with alarm, but recent weeks have seen the discourse enter firmly into “don’t say we didn’t warn you” territory. Biometrics and digital ID providers, lawmakers and hundreds of experts from diverse fields are waving a big flag that says deepfake tech is outpacing efforts to regulate it and control its use for malicious purposes, and the time to address the problem was yesterday.

Onfido sees 3,000% rise in deepfakes for fraudulent onboarding

In a blog for Onfido, Aled Owen, the digital identity firm’s director of global policy, says that while deepfake pornographic images of celebrities have raised alarm bells, existing and developing legislation needs to cast a wider net.

“While many politicians and regulators are considering their next steps to tackle deepfakes, legislation must go a step further than only addressing explicit images,” Owen writes. His list of threats that need legislative consideration includes the spread of misinformation, especially in the context of elections, and the larger erosion of trust in reality.

“Some experts predict that up to 90 percent of online content could be synthetically generated within a few years,” he says, warning against a tiered reality in which uncertified information becomes a fire that feeds itself. Even certification, he argues, won’t help to parse the difference between a real video of a politician making an offensive comment, which they want to keep under wraps, and a deepfake doing the same. Or if grandma calls asking for $500, how do you weigh the distrust sown by audio deepfakes against the chance that she really needs help?

Digital identity fraud and scams are the third major issue Owen encourages lawmakers to pay attention to. Biometric deepfakes are being used to open illegitimate bank accounts, scam employees and let fraudsters pose as family members, friends or colleagues in need. “At Onfido, we’ve seen a 3,000 percent increase in deepfakes as part of fraudulent account onboarding attempts,” Owen says.

This is a problem that is going to affect everyone.

States push laws to address porn

Governments know this, even if the mechanisms of policy are not fluid enough to adapt to such rapid evolution of the technology. Lawmakers in Minnesota, Kentucky and Georgia are pushing laws that target deepfakes and generative AI. There are the expected objections from privacy rights groups. But with the lag in action to combat a threat that grows by the day, legislators largely agree with Minnesota State Representative Zack Stephenson, who told the state legislature that “deepfakes are here, people are doing them, and we need to be very real about how we address them.”

Under Minnesota’s bill HF3625, candidates convicted of deepfake crimes would have to forfeit their nomination or office. Kentucky’s Senate Bill 131 is a scramble aimed at deepfakes, but it applies only to political parties, campaigns and candidates; in a report from NPR affiliate WUKY, dissenting Senator Gex Williams says “it will absolutely, positively, not keep any of us at any time from being subject to deepfakes in our campaigns.” Georgia’s Senate Bill 392 makes it a felony, classed as election interference, to create deepfake audiovisual content intended to influence the outcome of an election; the penalty includes steep fines and prison time. The bill’s sponsor, Senator John Albers, warns that “this is real-time. This is going to happen in this election cycle as we have never seen it before.”

Per an article in Atlanta Civic Circle, which cites Axios, “as of Feb. 7, there were 407 total AI-related bills before more than 40 state legislatures, up from 67 bills a year ago.”

Open letter calls deepfakes “huge threat to human society”

In each of the states listed above, the deepfake Biden robocall incident in New Hampshire was a prompt to consider exactly what is at stake in controlling this technology. But the aftershocks have resonated far beyond the halls of government. Verdict reports on an open letter signed by more than 300 experts from technology, artificial intelligence, digital ethics, child safety, entertainment, and academia, among other fields, which voices support for ongoing legislative efforts and “provides key recommendations to hold the entire deepfake supply chain accountable.”

“The need for biometric rights becomes ever more apparent as we see how easily your likeness can be taken and transformed for nefarious uses,” says Dr. Joy Buolamwini, founder of the Algorithmic Justice League, and a signatory to the letter. Andrew Critch, a UC Berkeley researcher and lead author of the letter, is even more urgent: “Deepfakes are a huge threat to human society and are already causing growing harm to individuals, communities, and the functioning of democracy. We need immediate action to combat the proliferation of deepfakes.”

“We’re in the midst of a technological arms race between deepfake creators and deepfake detectors,” says Onfido’s Aled Owen. “Robust deepfake detection technology will be crucial to implementing effective legislation. To this end, regulation not only needs to protect victims, but must allow for innovation and the right data flows, aligned with data protection law, to allow for the development of cutting-edge AI deepfake detection solutions.”
