Quantum developments!
Perhaps like the poor current President of the United States, I can feel myself fading, my memory and verbal facility and attention to detail failing me, even while there’s so much left to do to battle the nonsense in the world. I started my career on an accelerated schedule—going to college at 15, finishing my PhD at 22, etc. etc.—and the decline is (alas) also hitting me early, at the ripe age of 43.
Nevertheless, I do seem to remember that this was once primarily a quantum computing blog, and that I was known to the world as a quantum computing theorist. And exciting things continue to happen in quantum computing. In fact, just on last night’s quant-ph arXiv mailing, there were two…
First, a company in the UK called Oxford Ionics has announced that it now has a system of trapped-ion qubits in which it has prepared two-qubit maximally entangled states with 99.97% fidelity. If true, this seems extremely good: indeed, better than the numbers from bigger trapped-ion efforts, and quite close to the ~99.99% that you’d want for quantum fault-tolerance. But maybe there’s a catch? Will they be unable to maintain this kind of fidelity when doing a long sequence of programmable two-qubit gates on dozens of qubits? Can the other trapped-ion efforts actually achieve similar fidelities in head-to-head comparisons? Anyway, I was surprised to see how little attention the paper got on SciRate. I look forward to hearing from experts in the comment section.
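To see why that last decimal place matters, here’s a crude back-of-the-envelope sketch in Python (my own illustration, not anything from the Oxford Ionics paper): if every two-qubit gate independently had fidelity f, then a sequence of d gates would retain fidelity roughly f^d, ignoring correlated errors and everything that fault-tolerance is designed to handle.

```python
# Crude independent-error model (an assumption for illustration only):
# each two-qubit gate succeeds with fidelity f, so a depth-d sequence
# retains fidelity roughly f**d. Real devices have correlated errors,
# so treat this as intuition, nothing more.

def sequence_fidelity(gate_fidelity: float, num_gates: int) -> float:
    """Fidelity after num_gates gates under the independent-error model."""
    return gate_fidelity ** num_gates

for f in (0.9997, 0.9999):  # the reported figure vs. the ~99.99% target
    for d in (100, 1_000, 10_000):
        print(f"per-gate fidelity {f}: after {d:>6} gates -> "
              f"{sequence_fidelity(f, d):.3f}")
```

At 99.97% per gate, a thousand gates already drops you to roughly 74%, versus roughly 90% at 99.99%; hence the obsession with that fourth nine.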
Second, a new paper by Schuster, Haferkamp, and Huang gives a major advance on k-designs and pseudorandom unitaries. Roughly speaking, the paper shows that even in one dimension, a random n-qubit quantum circuit, with alternating brickwork layers of 2-qubit gates, forms an approximate “k-design” after only O(k·polylog(k)·log n) layers of gates. Well, modulo one caveat: the “random circuit” isn’t drawn from the most natural ensemble, but has to have some of its 2-qubit gates set to the identity, namely those that straddle certain contiguous blocks of log n qubits. This seems like a purely technical issue (after all, how could randomizing those straddling gates make the mixing behavior worse?), but future work will be needed to address it. Notably, the new upper bound is off from the best-possible k layers by only logarithmic factors. (For those tuning in from home: a k-design informally means a collection of n-qubit unitaries such that, from the perspective of degree-k polynomials, choosing a unitary randomly from the collection looks the same as choosing randomly among all n-qubit unitary transformations, i.e., from the Haar measure.)
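For readers who want that parenthetical made precise, here’s the standard moment-operator form of the definition, in my own paraphrase (it’s the textbook notion, not anything specific to the new paper):

```latex
% An ensemble \mathcal{E} of n-qubit unitaries is an (exact) k-design if its
% k-th moment operator agrees with that of the Haar measure:
\[
  \mathop{\mathbb{E}}_{U \sim \mathcal{E}}
    \left[ U^{\otimes k} \, \rho \, (U^\dagger)^{\otimes k} \right]
  \;=\;
  \mathop{\mathbb{E}}_{U \sim \mathrm{Haar}}
    \left[ U^{\otimes k} \, \rho \, (U^\dagger)^{\otimes k} \right]
  \quad \text{for every state } \rho \text{ on } k \text{ copies of the } n\text{-qubit system.}
\]
% An approximate k-design (the notion relevant to random circuits) requires
% only that the two sides be close, say to within error \varepsilon in a
% suitable norm.
```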
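And for concreteness, here’s a toy Python sketch of the gate layout as I’m reading the caveat above: a 1D brickwork circuit in which any 2-qubit gate straddling two contiguous blocks of roughly log n qubits is pinned to the identity. The block size and the even/odd pairing convention are my own assumptions for illustration, not details lifted from the paper.

```python
import math

def brickwork_layer(n: int, layer: int) -> list[tuple[int, int, str]]:
    """One layer of a 1D brickwork circuit on n qubits. Gates that straddle
    two contiguous blocks of ~log2(n) qubits are pinned to the identity,
    per my reading of the ensemble described above (details assumed)."""
    block = max(1, math.ceil(math.log2(n)))  # block size ~ log n (assumption)
    offset = layer % 2                       # alternate even/odd pairings
    gates = []
    for i in range(offset, n - 1, 2):
        straddles = (i // block) != ((i + 1) // block)
        gates.append((i, i + 1, "identity" if straddles else "Haar-random"))
    return gates

for layer in (0, 1):
    print(f"layer {layer}:")
    for i, j, kind in brickwork_layer(n=16, layer=layer):
        print(f"  qubits ({i:2d},{j:2d}): {kind}")
```

Running this for n = 16 shows the identity gates appearing only at the block boundaries in alternate layers, which makes it visually clear how small a fraction of the circuit the caveat actually touches.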
Anyway, even in my current decrepit state, I can see that such a result would have implications for … well, all sorts of things that quantum computing and information theorists care about. Again I welcome any comments from experts!