Silicon Valley is divided over a landmark bill in California that could fundamentally change the pace of AI development around the world.
California Senator Scott Wiener introduced SB 1047 in February and has since garnered support from lawmakers in both parties.
The bill passed the Assembly Privacy and Consumer Protection Committee in June and the Assembly Appropriations Committee last week. The full Assembly is expected to vote on the bill by the end of the month.
California has established itself as a leader in the global AI arms race, with 35 of the top 50 AI companies based in the state, according to a report by the Brookings Institution.
California Governor Gavin Newsom has been working to further his state’s position as a global AI pioneer. Earlier this year, California announced a training program for state employees, hosted a generative AI summit, and launched a pilot project to understand how the technology can address challenges like traffic congestion and language accessibility.
This month, the state announced a partnership with chipmaker Nvidia to train residents in AI skills, with the goal of creating jobs, spurring innovation, and teaching people to use AI to solve everyday problems.
But while excitement about AI's potential is growing, so is concern about the dangers it poses, meaning California must strike a delicate balance between regulating the industry and preserving its expected growth.
"If we over-regulate, if we over-indulge, if we chase the shiny object, we could put ourselves in a perilous position," Newsom said at an AI event in San Francisco in May.
Governor Newsom signed an executive order in September calling for several provisions to ensure the responsible deployment of the technology and directing state agencies to consider how it can best be used.
Newsom hasn’t publicly commented on SB 1047 — his office did not respond to Business Insider’s request for comment — but if passed, the bill would be the most sweeping attempt yet to regulate the AI industry.
What changes does SB 1047 make?
The bill’s authors say their goal is to “ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, and commonsense safety standards for developers of the largest and most powerful AI systems.”
The regulations would apply to companies developing models that cost more than $100 million to train or that require advanced computing power. These companies would be required to test the safety of new technology before releasing it to the public. They would also be required to build a "full shutdown" capability into their models and could be held accountable for downstream applications of their technology.
The bill would also establish legal standards for AI developers, define the misconduct for which the California attorney general can sue companies, set protections for whistleblowers, establish computational thresholds for covered AI models, and create a board to issue regulatory guidance.
What big tech companies think
The bill has not been well received by developers it would likely affect.
Companies including Meta, OpenAI, and Anthropic, which have poured millions of dollars into building and training large language models, have lobbied state lawmakers to change the bill. Meta said the original bill would stifle innovation and discourage companies from releasing their models as open source.
"The bill also actively discourages the release of open-source AI because providers would be subject to intolerable legal liability if they open-source their models," Meta wrote in a June letter to Wiener. The company argued this would hurt the small-business ecosystem by reducing startups' ability to "use freely available, fine-tuned models to create new jobs, businesses, tools, and services that are often used by other companies, governments, and civil society groups."
Anthropic, which positions itself as a safety-focused AI company, was also dissatisfied with the initial version of the bill and lobbied lawmakers for changes. The company urged lawmakers to focus on deterring companies from building unsafe models by holding them liable if a catastrophe occurs, rather than enforcing strict rules before one does. It also suggested that companies meeting the $100 million threshold be allowed to set their own standards for safety testing.
The bill has also drawn criticism from venture capitalists, company executives, and other tech industry figures. Andreessen Horowitz general partner Anjney Midha called the bill "one of the most anti-competitive proposals I've seen in a long time." He thinks lawmakers should instead focus on "regulating specific high-risk applications and bad end users."
California lawmakers have adopted some of the proposed changes in the latest version of the bill. The amended bill now bars the California attorney general from suing AI developers before a catastrophic event has occurred. It also scales back the new government agency originally proposed to oversee implementation of the law, replacing it with a board within the state's Government Operations Agency.
An Anthropic spokesperson told BI the company was "considering the new bill's text once it is released." Meta and OpenAI did not respond to requests for comment.
Smaller founders are anxious, but also a little more optimistic.
Arun Subramanyan, founder and CEO of enterprise generative AI company Articul8, told BI that no one yet understands the new board's "broad powers" or how its members will be appointed. He believes the bill will affect companies beyond Big Tech, since even well-funded startups could meet the $100 million threshold.
At the same time, he said he supports the bill's creation of CalCompute, a public cloud computing cluster for studying the safe deployment of large-scale AI models. The cluster could level the playing field for researchers and groups that lack the funds to independently evaluate AI models. He also believes the bill's reporting requirements for developers would increase transparency, which would benefit Articul8's work in the long run.
He said the future of the industry depends on how general-purpose models are regulated. "It's good that states are looking to regulate this early, but the terminology needs to be a little more balanced," he said.