California takes action against AI discrimination and sexually abusive deepfakes

AI (artificial intelligence) letters and a robot miniature are seen in this illustration taken June 23, 2023. Photo by Dado Ruvic/REUTERS

SACRAMENTO, Calif. (AP) — As companies increasingly integrate artificial intelligence into Americans’ daily lives, California lawmakers are looking to build public trust, combat algorithmic discrimination and ban deepfakes that involve elections or pornography.

The effort in California – home to many of the world’s largest AI companies – could pave the way for AI regulations across the country. The U.S. already lags behind Europe in regulating AI to limit risks, politicians and experts say, and the fast-growing technology raises concerns about job losses, misinformation, privacy invasions and automation bias.

READ MORE: Tech giants sign voluntary agreement to combat AI-based election deepfakes

A series of proposals addressing those concerns advanced last week but still must win approval from the other chamber before landing on Gov. Gavin Newsom’s desk. The Democratic governor has touted California as both an early adopter and a regulator, saying the state could soon use generative AI tools to ease highway congestion, make roads safer and provide tax advice, while his administration also considers new rules against AI discrimination in hiring practices.

Because California already has strong privacy laws in place, the state is in a better position to enact effective regulations than other states with major AI interests, such as New York, says Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“To pass an AI law, you need a privacy law,” Rice said. “We’re still watching what New York does, but I’d bet on California.”

California lawmakers said they cannot afford to wait to act, citing hard lessons learned from failing to rein in social media companies when they had the chance, even as they hope to keep luring AI companies to the state.

Here’s a closer look at California’s proposals:

Combating AI discrimination and strengthening public trust

Some companies, including hospitals, are already using AI models to make hiring, housing and healthcare decisions for millions of Americans without much oversight. According to the U.S. Equal Employment Opportunity Commission, up to 83% of employers use AI to help with hiring. How these algorithms work remains largely a mystery.

One of California’s most ambitious AI measures this year would demystify these models by creating an oversight framework designed to prevent bias and discrimination. Companies using AI tools would have to take part in decisions that determine outcomes and notify people affected when AI is used. AI developers would have to routinely audit their models internally for bias. And the state’s attorney general would gain the power to investigate reports of discriminatory models and impose fines of $10,000 per violation.

AI companies may also soon be required to disclose what data they use to train their models.

Protecting jobs and image

Inspired by the months-long strike by Hollywood actors last year, a California lawmaker wants to prevent workers from being replaced by their AI-generated clones – a major point of contention in collective bargaining.

The proposal, backed by the California Labor Federation, would let performers opt out of existing contracts if vaguely worded language allows studios to freely use AI to digitally clone their voices and images. It would also require that performers be represented by a lawyer or union representative when signing new “voice and image” contracts.

California could also impose penalties for digitally cloning dead people without the consent of their estates. Supporters cite the case of a media company that produced a fake, AI-generated hourlong comedy special recreating the style and material of the late comedian George Carlin without his estate’s permission.

Regulating powerful generative AI systems

Because generative AI creates new content such as text, audio and photos in response to prompts, real-world risks abound. Lawmakers are therefore considering guardrails for “extremely large” AI systems that could spit out instructions for creating disasters, such as building chemical weapons or aiding cyberattacks, capable of causing at least $500 million in damage. Such models would be required to have a built-in “kill switch,” among other safeguards.

The measure, which is backed by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for even more powerful models that don’t yet exist. The attorney general could also take legal action for violations.

Ban on deepfakes with political or pornographic content

A bipartisan coalition wants to make it easier to prosecute people who use AI tools to create child sexual abuse imagery. Under current law, prosecutors cannot go after people who possess or distribute AI-generated child sexual abuse images if the material does not depict a real person, law enforcement officials say.

A number of Democratic lawmakers are also backing a bill to combat election deepfakes, raising concerns after AI-generated robocalls mimicked President Joe Biden’s voice ahead of the recent New Hampshire primary. The proposal would ban “materially misleading” election-related deepfakes in political mailings, robocalls and television ads 120 days before Election Day and 60 days afterward. Another proposal would require social media platforms to label all election-related posts created by AI.