In Support of SB 1047
I’ve finished my two-year leave at OpenAI, and returned to being just a normal (normal?) professor, quantum complexity theorist, and blogger. Despite the huge drama at OpenAI that coincided with my time there, including the departures of most of the people I worked with in the former Superalignment team, I’m incredibly grateful to OpenAI for giving me an opportunity to learn and witness history, and even to contribute here and there, though I wish I could’ve done more.
Over the next few months, I plan to blog my thoughts and reflections about the current moment in AI safety, inspired by my OpenAI experience. You can be certain that I’ll be doing this only as myself, not as a representative of any organization. Unlike some former OpenAI folks, I was never offered equity in the company or asked to sign any non-disparagement agreement. OpenAI retains no power over me, at least as long as I don’t share confidential information (which of course I won’t, not that I know much!).
I’m going to kick off this blog series, today, by defending a position that differs from the official position of my former employer. Namely, I’m offering my strong support for California’s SB 1047, a first-of-its-kind AI safety regulation written by California State Senator Scott Wiener, then extensively revised through consultations with pretty much every faction of the AI community. AI leaders like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell are for the bill, as is Elon Musk (for whatever that’s worth), and Anthropic now says that the bill’s “benefits likely outweigh its costs.” Meanwhile, Facebook, OpenAI, and basically the entire VC industry are against the bill, while California Democrats like Nancy Pelosi and Zoe Lofgren have also come out against it for whatever reasons.
The bill has passed the California State Assembly by a margin of 48-16, having previously passed the State Senate by 32-1. It’s now on Governor Gavin Newsom’s desk, and it’s basically up to him whether it becomes law or not. I understand that supporters and opponents are both lobbying him hard.
People much more engaged than me have already laid out, accessibly and in immense detail, exactly what the current bill does and the arguments for and against. Try for example:
- This very basic explainer in TechCrunch
- This by Kelsey Piper, and this by Kelsey Piper, Sigal Samuel, and Dylan Matthews in Vox
- This by Zvi Mowshowitz (Zvi has also written a great deal else about SB 1047, strongly in support)
Briefly: given the ferocity of the debate about it, SB 1047 does remarkably little. It says that if you spend more than $100 million to train a model, you need to notify the government and submit a safety plan. It establishes whistleblower protections for people at AI companies to raise safety concerns. And it says that, if a company fails to take reasonable precautions and its AI then causes catastrophic harm, the company can be sued (which was presumably already true, but the bill makes it extra clear). And … unless I’m badly mistaken, those are the main things in it!
While the bill is mild, opponents are waging a full-scale scare campaign, saying that it will strangle the AI revolution in its crib, put American AI development under the control of Luddite bureaucrats, and force companies out of California. They say that it will discourage startups, even though the whole point of the $100 million provision is to target only the big players (like Google, Meta, OpenAI, and Anthropic) while leaving small startups free to innovate.
The only steelman that makes sense to me, for why many tech leaders are against the bill, is the idea that it’s a stalking horse. On this view, the bill’s actual contents are irrelevant. What matters is simply that, once you’ve granted the principle that people worried about AI-caused catastrophes get a seat at the table, once there’s any legislative acknowledgment of the validity of their concerns, then they’re going to take a mile rather than an inch, and kill the whole AI industry.
Notice that the exact same slippery-slope argument could be deployed against any AI regulation whatsoever. In other words, if someone opposes SB 1047 on these grounds, then they’d presumably oppose any attempt to regulate AI: either because they reject the whole premise that creating entities with humanlike intelligence is a risky endeavor, or because they’re hardcore libertarians who never want government to intervene in the market for any reason, not even if the literal fate of the planet were at stake.
Having said that, there’s one specific objection that needs to be dealt with. OpenAI, and Sam Altman in particular, say that they oppose SB 1047 simply because AI regulation should be handled at the federal rather than the state level. The supporters’ response is simply: yeah, everyone agrees that’s what should happen, but given the dysfunction in Congress, there’s essentially no chance of it anytime soon. And California suffices, since Google, OpenAI, Anthropic, and virtually every other AI company is either based in California or does many things subject to California law. So, some California legislators decided to do something. On this issue as on others, it seems to me that anyone who’s serious about a problem doesn’t get to reject a positive step that’s on offer, in favor of a utopian solution that isn’t on offer.
I should also stress that, in order to support SB 1047, you don’t need to be a Yudkowskyan doomer, primarily worried about hard AGI takeoffs and recursive self-improvement and the like. For that matter, if you are such a doomer, SB 1047 might seem basically irrelevant to you (apart from its unknowable second- and third-order effects): a piece of tissue paper in the path of an approaching tank. The world where AI regulation like SB 1047 makes the most difference is the world where the dangers of AI creep up on humans gradually, so that there’s enough time for governments to respond incrementally, as they did with previous technologies.
If you agree with this, it wouldn’t hurt to contact Governor Newsom’s office. For all its nerdy and abstruse trappings, this is, in the end, a kind of battle that ought to be familiar and comfortable for any Democrat: the kind with, on one side, most of the public (according to polls) and also hundreds of the top scientific experts, and on the other side, individuals and companies who all coincidentally have strong financial stakes in being left unregulated. This seems to me like a hinge of history where small interventions could have outsized effects.