The Google Willow thing
Yesterday I arrived in Santa Clara for the Q2B (Quantum to Business) conference, which starts this morning, and where I’ll be speaking Thursday on “Quantum Algorithms in 2024: How Should We Feel?” and also closing the conference with an Ask-Us-Anything session alongside John Preskill. (If you’re at Q2B, reader, come and say hi!)
And to coincide with Q2B, yesterday Google’s Quantum group officially announced “Willow,” its new 105-qubit superconducting chip with which it’s demonstrated an error-corrected surface code qubit as well as a new, bigger quantum supremacy experiment based on Random Circuit Sampling. I was lucky to be able to attend Google’s announcement ceremony yesterday afternoon at the Computer History Museum in Mountain View, where friend-of-the-blog-for-decades Dave Bacon and other Google quantum people explained exactly what was done and took questions (the technical level was surprisingly high for this sort of event). I was also lucky to get a personal briefing last week from Google’s Sergio Boixo on what happened.
Meanwhile, yesterday Sundar Pichai tweeted about Willow, and Elon Musk replied “Wow.” It cannot be denied that those are both things that happened.
Anyway, all day yesterday I read comments on Twitter, Hacker News, etc. complaining that, since there wasn’t yet a post on Shtetl-Optimized, how could anyone possibly know what to think of this?? For 20 years I’ve been trying to teach the world how to fish in Hilbert space, but (sigh) I suppose I’ll just hand out some more fish. So, here are my comments:
- Yes, this is great. Yes, it’s a real milestone for the field. To be clear: for anyone who’s been following experimental quantum computing these past five years (say, since Google’s original quantum supremacy milestone in 2019), there’s no particular shock here. Since 2019, Google has roughly doubled the number of qubits on its chip and, more importantly, increased the qubits’ coherence time by a factor of 5. Meanwhile, its 2-qubit gate fidelity is now roughly 99.7% (for controlled-Z gates) or 99.85% (for “iswap” gates), compared to ~99.5% in 2019. Google then did the more impressive demonstrations that predictably became possible with more and better qubits. And yet, even if the progress is broadly in line with what most of us expected, it’s still of course immensely gratifying to see everything actually work! Huge congratulations to everyone on the Google team for a well-deserved success.
- I already blogged about this!!! Specifically, I blogged about Google’s fault-tolerance milestone when its preprint appeared on the arXiv back in August. To clarify, what we’re all talking about now is the same basic technical advance that Google already reported in August, except now with the PR blitz from Sundar Pichai on down, a Nature paper, an official name for the chip (“Willow”), and a bunch of additional details about it.
- Scientifically, the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter. So, this is a very important threshold that’s now been crossed. As Dave Bacon put it to me, “eddies are now forming”—or, to switch metaphors, after 30 years we’re now finally tickling the tail of the dragon of quantum fault-tolerance, the dragon that (once fully awoken) will let logical qubits be preserved and acted on for basically arbitrary amounts of time, allowing scalable quantum computation.
- Having said that, Sergio Boixo tells me that Google will only consider itself to have created a “true” fault-tolerant qubit once it can do fault-tolerant two-qubit gates with an error rate of ~10^-6 (and thus, on the order of a million fault-tolerant operations before suffering a single error; the first sketch after this list shows roughly what code distance such a target implies). We’re still some ways from that milestone: after all, in this experiment Google created only a single encoded qubit, and didn’t even try to do encoded operations on it, let alone on multiple encoded qubits. But all in good time. Please don’t ask me to predict how long, though empirically, the time from one major experimental QC milestone to the next now seems to be measured in years, which are longer than weeks but shorter than decades.
- Google has also announced a new quantum supremacy experiment on its 105-qubit chip, based on Random Circuit Sampling with 40 layers of gates. Notably, they say that, even using the best currently-known simulation algorithms (based on Johnnie Gray’s optimized tensor network contraction) on an exascale supercomputer, their new experiment would take ~300 million years to simulate classically if memory is not an issue, or ~10^25 years if memory is an issue (note that a mere ~10^10 years have elapsed since the Big Bang; see the second sketch after this list for a crude sense of why memory matters so much). Probably some people have come here expecting me to debunk those numbers, but as far as I know they’re entirely correct, with the caveats stated. Naturally it’s possible that better classical simulation methods will be discovered, but meanwhile the experiments themselves will also rapidly improve.
- Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!! (For example, by computing the “Linear Cross-Entropy” score of the outputs; the third sketch after this list shows that score at toy scale.) For this reason, all validation of Google’s new supremacy experiment is indirect, based on extrapolations from smaller circuits, ones for which a classical computer can feasibly check the results. To be clear, I personally see no reason to doubt those extrapolations. But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.
- In his remarks yesterday, Google Quantum AI leader Hartmut Neven talked about David Deutsch’s argument, way back in the 1990s, that quantum computers should force us to accept the reality of the Everettian multiverse, since “where else could the computation have happened, if it wasn’t being farmed out to parallel universes?” And naturally there was lots of debate about that on Hacker News and so forth. Let me confine myself here to saying that, in my view, the new experiment doesn’t add anything new to this old debate. It’s yet another confirmation of the predictions of quantum mechanics. What those predictions mean for our understanding of reality can continue to be argued as it’s been since the 1920s.
- Cade Metz did a piece about Google’s announcement for the New York Times. Alas, when Cade reached out to me for comment, I decided that it would be too awkward, after what Cade did to my friend Scott Alexander almost four years ago. I talked to several other journalists, such as Adrian Cho for Science.
- No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits, or for Google versus its many competitors in experimental QC. And, I mean, it’s not bad for Google or for superconducting QC! These past couple of years I’d sometimes commented that, since Google’s 2019 announcement of quantum supremacy via superconducting qubits, the trapped-ion and neutral-atom approaches had seemed to be pulling ahead, with spectacular results from Quantinuum (trapped ions) and QuEra (neutral atoms) among others. One could think of Willow as Google’s reply, putting the ball in its competitors’ courts to likewise demonstrate better logical qubit lifetime with increasing code size (or, better yet, full operations on logical qubits exceeding that threshold, without resorting to postselection). The great advantage of trapped-ion qubits continues to be that you can move the qubits around (and their two-qubit gate fidelities also seem somewhat ahead of superconducting qubits’). But to compensate, superconducting gates are a thousand times faster, which makes it feasible to run experiments that require collecting millions of samples.
- Of course the big question, the one on everyone’s lips, was always how quantum computing skeptic Gil Kalai was going to respond. But we need not wonder! On his blog, Gil writes: “We did not study yet these particular claims by Google Quantum AI but my general conclusion apply to them ‘Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.’ ” Most of Gil’s post is devoted to re-analyzing data from Google’s 2019 quantum supremacy experiment, which Gil continues to believe can’t possibly have done what was claimed. Gil’s problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results. I predict that Gil, and others who take it as axiomatic that scalable quantum computing is impossible, will continue to have their work cut out for them in this new world.
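Since several of the points above lean on arithmetic, let me append three little Python sketches. First, the fault-tolerance scaling: in the standard below-threshold picture, each increase of the code distance multiplies the logical error rate by roughly a fixed suppression factor. The numbers below (a suppression factor of 2 per distance step, and a distance-3 error rate of 3×10^-3) are round illustrative placeholders, not Google’s measured data.

```python
import math

# Toy below-threshold scaling: once physical errors are below threshold,
# the logical error rate per error-correction cycle falls off roughly as
#     eps(d) ~ eps(3) / LAMBDA ** ((d - 3) / 2)
# for odd code distance d. Both constants here are illustrative
# placeholders, not Google's measured values.
LAMBDA = 2.0    # assumed error suppression per code-distance step
EPS_D3 = 3e-3   # assumed logical error per cycle at distance 3

def logical_error_per_cycle(d: int) -> float:
    """Rough logical error rate per cycle at odd code distance d."""
    return EPS_D3 / LAMBDA ** ((d - 3) / 2)

for d in (3, 5, 7, 9, 11):
    print(f"d = {d:2d}: eps ~ {logical_error_per_cycle(d):.1e}")

# Code distance needed for the ~1e-6 target mentioned above (a million
# operations before a single error), at these assumed numbers:
d_target = 3 + 2 * math.ceil(math.log(EPS_D3 / 1e-6, LAMBDA))
print(f"distance for ~1e-6: d ~ {d_target}")   # d ~ 27 with these numbers
```

The moral of the toy numbers: at a suppression factor of 2, each step up in code distance buys only one more halving of the logical error rate, which is why reaching ~10^-6 will take both bigger codes and better physical gates (i.e., a bigger suppression factor).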
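Second, why “memory is an issue” when simulating 105 qubits: here’s a crude back-of-the-envelope for brute-force statevector simulation. This is my own rough arithmetic, not Google’s published cost model (which is based on tensor-network contraction, not statevectors), but it conveys the scale.

```python
# Crude back-of-the-envelope for brute-force statevector simulation
# (my own rough arithmetic, not Google's published cost model).
n_qubits = 105
n_layers = 40
amplitudes = 2 ** n_qubits          # ~4.1e31 complex amplitudes

memory_bytes = amplitudes * 16      # complex128 = 16 bytes per amplitude
print(f"statevector memory: ~{memory_bytes:.1e} bytes")   # ~6.5e32 bytes

# Even granting that impossible amount of memory, touching each
# amplitude just once per gate layer at an exaflop (1e18 ops/sec) takes:
years = amplitudes * n_layers / 1e18 / 3.15e7
print(f"naive runtime: ~{years:.1e} years")   # ~5e7 years
```

This charges only a single operation per amplitude per layer, so it’s a lower-bound-flavored count, but it makes the point: for brute force, memory is the showstopper long before time is. The quoted figures instead assume tensor-network contraction, which trades memory for time; squeezing the contraction into realistic memory is what inflates ~300 million years to ~10^25.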
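Third, verification: the “Linear Cross-Entropy” benchmark score is F = 2^n·E[p_ideal(x)] − 1, where the expectation is over the observed samples x and p_ideal is the ideal output distribution of the circuit. The catch is that computing p_ideal is exactly the classically-hard simulation task. Here’s a minimal self-contained sketch at toy scale (the helper names are mine):

```python
import numpy as np

def linear_xeb(ideal_probs: np.ndarray, samples: np.ndarray, n: int) -> float:
    """Linear cross-entropy fidelity F = 2^n * mean(p_ideal(x)) - 1.
    F ~ 1 for samples from the ideal circuit, ~ 0 for uniform noise.
    The catch: building ideal_probs means simulating the circuit."""
    return (2 ** n) * ideal_probs[samples].mean() - 1

rng = np.random.default_rng(0)
n = 10   # toy size; at n = 105 the ideal_probs table has 2^105 entries

# Stand-in for a random circuit's output distribution: squared moduli of
# complex Gaussian amplitudes, giving the Porter-Thomas-like statistics
# typical of random circuits.
amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
ideal_probs = np.abs(amps) ** 2
ideal_probs /= ideal_probs.sum()

faithful = rng.choice(2 ** n, size=100_000, p=ideal_probs)  # ideal sampler
noise = rng.integers(0, 2 ** n, size=100_000)               # uniform junk

print(f"XEB of faithful sampler: {linear_xeb(ideal_probs, faithful, n):.3f}")  # ~1
print(f"XEB of uniform noise:    {linear_xeb(ideal_probs, noise, n):.3f}")     # ~0
```

The scorer needs the full table of ideal probabilities, which at 105 qubits nobody can compute. That’s exactly why validation of the new experiment proceeds by extrapolation from smaller circuits rather than by direct checking.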
Update: Here’s Sabine Hossenfelder’s take. I don’t think she and I disagree about any of the actual facts; she just decided to frame things much more negatively. Ironically, I guess 20 years of covering hyped, dishonestly-presented non-milestones in quantum computing has inclined me to be pretty positive when a group puts in this much work, demonstrates a real milestone, and talks about it without obvious falsehoods.