Google’s Latest Quantum Computing Breakthrough Shows Practical Machines Are Within Reach
One of the biggest barriers to large-scale quantum computing is the error-prone nature of the technology. This week, Google announced a major breakthrough in quantum error correction, which could lead to quantum computers capable of tackling real-world problems.
Quantum computing promises to solve problems that are beyond classical computers by harnessing the strange effects of quantum mechanics. But to do so we’ll need processors made up of hundreds of thousands, if not millions, of qubits (the quantum equivalent of bits).
Having just crossed the 1,000-qubit mark, today’s devices are a long way off, but more importantly, their qubits are incredibly unreliable. The devices are highly susceptible to errors that can derail a calculation long before an algorithm has run its course.
That’s why error correction has been a major focus for quantum computing companies in recent years. Now, Google’s new Willow quantum processor, unveiled Monday, has crossed a critical threshold suggesting that as the company’s devices get larger, their ability to suppress errors will improve exponentially.
“This is the most convincing prototype for a scalable logical qubit built to date,” Hartmut Neven, founder and lead of Google Quantum AI, wrote in a blog post. “It’s a strong sign that useful, very large quantum computers can indeed be built.”
Quantum error-correction schemes typically work by spreading the information needed to carry out calculations across multiple qubits. This introduces redundancy into the system, so that even if one of the underlying qubits experiences an error, the information can be recovered. Using this approach, many “physical qubits” can be combined to create a single “logical qubit.”
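The intuition is easiest to see in a classical toy version. Here is a minimal sketch of a three-bit repetition code in Python; it illustrates the redundancy idea only, not the surface code Google actually uses, and the function names are ours:

```python
import random

def encode(bit, n=3):
    """Spread one logical bit across n physical bits."""
    return [bit] * n

def apply_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Recover the logical bit by majority vote."""
    return int(sum(bits) > len(bits) / 2)

# A single corrupted physical bit no longer destroys the logical bit.
noisy = apply_noise(encode(1), p=0.05)
print(decode(noisy))  # prints 1 unless two or more bits flipped
```

One flipped bit no longer corrupts the stored value, because the majority vote recovers it. Quantum codes achieve something analogous, though they must detect errors indirectly, without ever reading out the encoded state itself.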
In general, the more physical qubits you use to create each logical qubit, the more resistant it is to errors. But this is only true if the error rate of the individual qubits is below a certain threshold. Otherwise, the increased chance of an error from adding more faulty qubits outweighs the benefits of redundancy.
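The threshold effect can be made concrete with the same toy model. For the repetition code above, the logical bit is lost only when a majority of its physical bits flip, a probability we can compute exactly. Note the 50 percent threshold here is a property of this toy code; real surface-code thresholds sit closer to 1 percent:

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority of n physical bits flip,
    overwhelming the redundancy and corrupting the logical bit."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below this code's threshold, adding bits suppresses errors...
for n in (3, 5, 7):
    print(n, logical_error_rate(0.01, n))  # ~3.0e-04, ~9.9e-06, ~3.4e-07

# ...above the threshold, adding bits makes things worse.
for n in (3, 5, 7):
    print(n, logical_error_rate(0.6, n))   # ~0.65, ~0.68, ~0.71
```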
While other groups have demonstrated error correction that produces modest accuracy improvements, Google’s results are the clearest evidence yet that scaling up actually suppresses errors. In a series of experiments reported in Nature, the team encoded logical qubits into increasingly large arrays, starting with a three-by-three grid, and found that each time they increased the size, the error rate halved. Crucially, the logical qubits they created lasted more than twice as long as the physical qubits that make them up.
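In the notation common in the error-correction literature (which the article itself doesn’t introduce), this exponential suppression is usually written in terms of the code distance d and a suppression factor Λ:

```latex
\varepsilon_d \propto \Lambda^{-(d+1)/2}
```

Each step up in array size raises d by two, dividing the logical error rate by Λ, so the halving Google reports corresponds to Λ ≈ 2.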
“The more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes,” wrote Neven.
This was made possible by significant improvements in the underlying superconducting qubit technology Google uses to build its processors. In the company’s previous Sycamore processor, the average operating lifetime of each physical qubit was roughly 20 microseconds. But thanks to new fabrication techniques and circuit optimizations, Willow’s qubits have more than tripled this to 68 microseconds.
As well as showing off the chip’s error-correction prowess, the company’s researchers also demonstrated its speed. They carried out a computation in under five minutes that would take the world’s second-fastest supercomputer, Frontier, 10 septillion years to complete. However, the benchmark is a contrived one with little practical use: the quantum computer simply executes random circuits that serve no useful purpose, and the classical computer then has to try to emulate them.
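To see why that emulation is so costly, here is a toy statevector simulator in Python. Everything in it, from the gate choices to the circuit depth, is illustrative rather than Google’s actual benchmark; the point is that simulating n qubits classically means tracking 2^n complex amplitudes, so memory and time double with every added qubit:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    """Draw a Haar-random 2x2 unitary via QR decomposition."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_gate(state, gate, qubit, n):
    """Apply a single-qubit gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z entangling gate: negate amplitudes where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    index = [slice(None)] * n
    index[q1] = index[q2] = 1
    psi[tuple(index)] *= -1
    return psi.reshape(-1)

n = 10  # the statevector needs 2**n amplitudes: cost doubles per qubit
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0

for layer in range(20):  # alternating layers of random and entangling gates
    for q in range(n):
        state = apply_gate(state, random_single_qubit_gate(), q, n)
    for q in range(layer % 2, n - 1, 2):
        state = apply_cz(state, q, q + 1, n)

# Sample output bitstrings, as the benchmark does on real hardware.
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(2**n, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```

At 10 qubits this runs in milliseconds; at the scale of Willow’s roughly one hundred qubits, the same bookkeeping would require more amplitudes than any classical machine can store, which is what the benchmark exploits.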
The big test for companies like Google is to go from such proofs of concept to solving commercially relevant problems. The new error-correction result is a big step in the right direction, but there’s still a long way to go.
Julian Kelly, who leads the company’s quantum hardware division, told Nature that solving practical challenges will likely require error rates of around one per ten million steps. Achieving that will necessitate logical qubits made of roughly 1,000 physical qubits each, though breakthroughs in error-correction schemes could bring this down by several hundred qubits.
More importantly, Google’s demonstration simply involved storing information in its logical qubits rather than using them to carry out calculations. Speaking to MIT Technology Review in September, when a preprint of the research was posted to arXiv, Kenneth Brown from Duke University noted that carrying out practical calculations would likely require a quantum computer to perform roughly a billion logical operations.
So, despite the impressive results, there’s still a long road ahead to large-scale quantum computers that can do anything useful. However, Google appears to have reached an important inflection point that suggests this vision is now within reach.
Image Credit: Google