And yet quantum computing continues to progress
Pissing away my life in a haze of doomscrolling, sporadic attempts to “parent” two rebellious kids, and now endless conversations about AI safety, I’m liable to forget for days that I’m still mostly known (such as I am) as a quantum computing theorist, and this blog is still mostly known as a quantum computing blog. Maybe it’s just that I spent a quarter-century on quantum computing theory. As an ADHD sufferer, anything could bore me after that much time, even one of the a-priori most exciting things in the world.
It’s like, some young whippersnappers proved another monster 80-page theorem that I’ll barely understand tying together the quantum PCP conjecture, area laws, and Gibbs states? Another company has a quantum software platform, or hardware platform, and they’ve issued a press release about it? Another hypester claimed that QC will revolutionize optimization and machine learning, based on the usual rogues’ gallery of quantum heuristic algorithms that don’t seem to outperform classical heuristics? Another skeptic claimed that scalable quantum computing is a pipe dream—mashing together the real reasons why it’s difficult with basic misunderstandings of the fault-tolerance theorem? In each case, I’ll agree with you that I probably should get up, sit at my laptop, and blog about it (it’s hard to blog with two thumbs), but as likely as not I won’t.
And yet quantum computing continues to progress. In December we saw QuEra announce a small net gain from error-detection in neutral atoms, and accuracy that increased with the use of larger error-correcting codes. Today, a collaboration between Microsoft and Quantinuum has announced what might be the first demonstration of error-corrected two-qubit entangling gates with substantially lower error than the same gates applied to the bare physical qubits. (This is still at the stage where you need to be super-careful in how you phrase every such sentence—experts should chime in if I’ve already fallen short; I take responsibility for any failures to error-correct this post.)
You can read the research paper here, or I’ll tell you the details to the best of my understanding (I’m grateful to Microsoft’s Krysta Svore and others from the collaboration for briefing me by Zoom). The collaboration used a trapped-ion system with 32 fully-connected physical qubits (meaning, the qubits can be shuttled around a track so that any qubit can directly interact with any other). One can apply an entangling gate to any pair of qubits with ~99.8% fidelity.
What did they do with this system? They created up to 4 logical encoded qubits, using the Steane code and other CSS codes. Using logical CNOT gates, they then created logical Bell pairs — i.e., (|00⟩+|11⟩)/√2 — and verified that they did this.
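To see what's being claimed at the circuit level, here's a minimal numpy sketch of the bare, unencoded Bell-pair circuit (Hadamard, then CNOT); this is my illustration, not a description of the collaboration's actual circuit. The encoded version does the analogous thing on 7-qubit blocks: for the Steane code, both H and CNOT are transversal, so a logical CNOT is just the same physical CNOT applied qubit-by-qubit across the two code blocks.

```python
import numpy as np

# Single-qubit Hadamard, identity, and the two-qubit CNOT (control = first qubit).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then the CNOT.
psi = np.zeros(4)
psi[0] = 1.0
psi = CNOT @ np.kron(H, I) @ psi

print(np.round(psi, 3))   # ~[0.707, 0, 0, 0.707], i.e. (|00> + |11>)/sqrt(2)
print(np.abs(psi) ** 2)   # measurement gives 00 or 11, each with probability 1/2
```

Verifying the logical Bell pair then roughly amounts to checking that the decoded measurement outcomes are perfectly correlated in the Z basis, and likewise in the X basis.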
That’s in the version of their experiment that uses “preselection but not postselection.” In other words, they have to try many times until they prepare the logical initial states correctly—as with magic state factories. But once they do successfully prepare the initial states, there’s no further cheating involving postselection (i.e., throwing away bad results): they just apply the logical CNOT gates, measure, and see what they got.
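To make the preselection/postselection distinction concrete, here's a toy Monte Carlo sketch, with made-up error probabilities that have nothing to do with the actual experiment's numbers, showing how the two kinds of selection change the reported error rate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Toy, made-up error model (NOT the experiment's numbers).
p_prep = 0.05   # chance the logical initial state is prepared wrong (heralded before the gates)
p_gate = 0.01   # chance the logical-CNOT-plus-measurement step gives the wrong answer
p_flag = 0.90   # chance a bad run gets flagged by the final syndrome measurement

prep_bad = rng.random(N) < p_prep
gate_bad = rng.random(N) < p_gate
flagged  = gate_bad & (rng.random(N) < p_flag)

# Preselection: discard runs whose state prep failed the herald check,
# but accept every measurement outcome that follows.
pre = ~prep_bad
print("preselected error rate: ", gate_bad[pre].mean())   # ~0.01

# Postselection: additionally discard runs whose *final* measurement looks bad.
post = pre & ~flagged
print("postselected error rate:", gate_bad[post].mean())  # ~0.001
```

The point of the toy model: preselection only throws away runs before the logical gates are applied, so the surviving error rate is an honest measure of the gates themselves, whereas postselection also throws runs away after the fact.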
For me personally, that’s the headline result. But then they did various further experiments to “spike the football.” For one thing, they show that when they do allow postselected measurement outcomes, the decrease in the effective error rate can be much, much larger, as large as 800x. That allows them (again, under postselection!) to demonstrate up to two rounds of error syndrome extraction and correction while still seeing a net gain, or three rounds albeit with unclear gain. The other thing they demonstrate is teleportation of fault-tolerant qubits—so, a little fancier than just preparing an encoded Bell pair and then measuring it.
They don’t try to do (e.g.) a quantum supremacy demonstration with their encoded qubits, like QuEra did—they don’t have nearly enough qubits for that. But this is already extremely cool, and it sets a new bar in quantum error-correction experiments for others to meet or exceed (superconducting, neutral atom, and photonics people, that means you!). And I wasn’t expecting it! Indeed, I’m so far behind the times that I still imagined Microsoft as committed to a strategy of “topological qubits or bust.” While Microsoft is still pursuing the topological approach, their strategy has clearly pivoted over the last few years towards “whatever works.”
Anyway, huge congratulations to the teams at Microsoft and Quantinuum for their accomplishment!
Stepping back, what is the state of experimental quantum computing, 42 years after Feynman’s lecture, 30 years after Shor’s algorithm, 25 years after I entered the field, 5 years after Google’s supremacy experiment? There’s one narrative that quantum computing is already being used to solve practical problems that couldn’t be solved otherwise (look at all the hundreds of startups! they couldn’t possibly exist without providing real value, could they?). Then there’s another narrative that quantum computing has been exposed as a fraud, an impossibility, a pipe dream. Both narratives seem utterly disconnected from the reality on the ground.
If you want to track the experimental reality, my one-sentence piece of advice would be to focus relentlessly on the fidelity with which experimenters can apply a single physical 2-qubit gate. When I entered the field in the late 1990s, ~50% would’ve been an impressive fidelity. At some point it became ~90%. With Google’s supremacy experiment in 2019, we saw 1000 gates applied to 53 qubits, each gate with ~99.5% fidelity. Now, in superconducting, trapped ions, and neutral atoms alike, we’re routinely seeing ~99.8% fidelities, which is what made possible (for example) the new Microsoft/Quantinuum result. The best fidelities I’ve heard reported this year are more like ~99.9%.
Meanwhile, on paper, it looks like known methods for quantum fault-tolerance, for example using the surface code, should start to become practical once you have 2-qubit fidelities around ~99.99%—i.e., one more “9” from where we are now. And then there should “merely” be the practical difficulty of maintaining that 99.99% fidelity while you scale up to millions or hundreds of millions of physical qubits!
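Where does that ~99.99% figure come from? Roughly, from back-of-the-envelope estimates like the one below: a commonly used approximation for the surface code's logical error rate, with assumed constants (threshold p_th ≈ 1%, prefactor ≈ 0.1) that shouldn't be taken too literally. The qualitative point is that the further your physical error rate p sits below threshold, the less code distance, and hence the fewer physical qubits, you need per logical qubit.

```python
def logical_error_per_round(p, d, p_th=1e-2, A=0.1):
    """Rough surface-code scaling: p_L ~ A * (p/p_th)**((d+1)/2).
    A and p_th are assumed, ballpark values."""
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (2e-3, 1e-3, 1e-4):       # ~99.8%, ~99.9%, ~99.99% two-qubit gate fidelity
    for d in (7, 15, 25):          # surface-code distances
        print(f"p = {p:.0e}   d = {d:2d}   p_L ~ {logical_error_per_round(p, d):.1e}")
```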
What I’m trying to say is: this looks like a pretty good trajectory! It looks like, if we plot the infidelity on a log scale, the experimentalists have already gone three-quarters of the distance. It now looks like it would be a surprise if we couldn’t have hundreds of fault-tolerant qubits and millions of gates on them within the next decade, if we really wanted that—like something unexpected would have to go wrong to prevent it.
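Here's the quick arithmetic behind that claim, using the rough milestone infidelities from the previous paragraphs (all of them approximate, obviously):

```python
import numpy as np

milestones = {
    "late 1990s (~50%)":                0.5,
    "at some point (~90%)":             0.1,
    "Google supremacy, 2019 (~99.5%)":  5e-3,
    "routine today (~99.8%)":           2e-3,
    "best reported this year (~99.9%)": 1e-3,
}
start, goal = 0.5, 1e-4    # goal: ~99.99%, where fault tolerance should get practical

total = np.log10(start) - np.log10(goal)   # total distance on a log scale
for label, infid in milestones.items():
    frac = (np.log10(start) - np.log10(infid)) / total
    print(f"{label:35s} infidelity {infid:.0e}   ~{100*frac:.0f}% of the way")
```

Depending on whether you count “where we are now” as ~99.8% or ~99.9%, that works out to roughly two-thirds to three-quarters of the log-scale distance.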
Wouldn’t it be ironic if all that were true, but it simply mattered much less than we hoped in the 1990s? Either just because the set of problems for which a quantum computer is useful has remained stubbornly more specialized than the world wants it to be (for more on that, see the entire past 20 years of this blog) … or because advances in classical AI render what was always quantum computing’s most important killer app, the simulation of quantum chemistry and materials, increasingly superfluous (as AlphaFold may have already done for protein folding) … or simply because civilization descends further into barbarism, or the unaligned AGIs start taking over, and we all have bigger things to worry about than fault-tolerant quantum computing.
But, you know, maybe fault-tolerant quantum computing will not only work, but matter—and its use to design better batteries and drugs and photovoltaic cells and so on will pass from science-fiction fantasy to quotidian reality so quickly that much of the world (weary from the hypesters crying wolf too many times?) will barely even notice it when it finally happens, just like what we saw with Large Language Models a few years ago. That would be worth getting out of bed for.