Quantum Advantage Has Likely Been Achieved — The Debate Is Over What Counts

Insider Brief
- Quantum advantage has likely been demonstrated through multiple large-scale random circuit sampling experiments that perform programmable computational tasks beyond feasible classical simulation, even if those tasks have no practical use.
- Scientific skepticism persists largely because the benchmark tasks are contrived, verification relies on indirect proxies and extrapolation, and early experiments were partially matched by later classical simulations.
- The debate now centers less on whether quantum devices exceeded classical capabilities and more on whether usefulness and intuitive computational value should be retroactively required for quantum advantage claims.
For more than a decade, quantum computing has promised a moment when controlled quantum machines would outperform classical computers at some well-defined task. That moment, widely known as “quantum supremacy” or “quantum advantage,” was framed as a narrow but symbolic milestone: proof that quantum computation is not merely theoretical, but physically real.
By that standard, quantum advantage has likely already been achieved. What remains unsettled is not the physics, but whether the scientific community agrees on what should count as success.
A recent blog post by physicist Dominik Hangleiter highlights just how wide that gap has become. Hangleiter, a quantum scientist at the Simons Institute for the Theory of Computing at UC Berkeley, polled audiences of experimentalists and theorists at recent research meetings and found that fewer than half believed quantum advantage had been demonstrated, despite more than five years of increasingly sophisticated experiments explicitly designed to do exactly that.
A Modest Milestone
When John Preskill introduced the concept of quantum supremacy in 2012, he defined it narrowly: the ability of a controllable quantum system to perform a computational task beyond the reach of classical computers. The definition made no reference to usefulness, economic value, or practical application. The goal was not to solve real-world problems, but to establish a clear computational separation between classical and quantum machines.
That framing mattered because early quantum hardware was known to be small, noisy and ill-suited to algorithms like factoring or chemistry simulation at scale. Demonstrating advantage would require a task engineered to be maximally forgiving of quantum noise while remaining classically hard.
That task became random circuit sampling. Random circuit sampling is deliberately unromantic, according to the post, which appeared as the first in a series on Quantum Frontiers, a blog hosted by Caltech’s Institute for Quantum Information and Matter.
A quantum processor is programmed with a randomly generated sequence of simple quantum gates, applied to many qubits, and measured. The output is a collection of bitstrings sampled from a probability distribution defined by quantum mechanics.
There is no hidden insight in the answer. The computation does not optimize anything, simulate a molecule, or break encryption. The task is simply to produce samples from the correct distribution, something quantum hardware does naturally, and classical computers struggle to replicate as system size and entanglement grow.
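To make the procedure concrete, here is a minimal sketch of random circuit sampling for a handful of qubits, using a plain NumPy statevector simulation. The qubit count, depth and gate set are illustrative choices rather than those of any published experiment; at experimental scales the state can no longer be held in classical memory, which is precisely the point.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 5      # toy size; the experiments discussed here use 50+ qubits
depth = 8         # number of alternating gate layers
n_samples = 1000

def apply_1q(state, gate, q, n):
    # Apply a 2x2 single-qubit gate to qubit q of an n-qubit statevector.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [q]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    # Apply a controlled-Z (a diagonal entangling gate) between q1 and q2.
    idx = np.arange(2 ** n)
    bit1 = (idx >> (n - 1 - q1)) & 1
    bit2 = (idx >> (n - 1 - q2)) & 1
    phase = np.where((bit1 & bit2) == 1, -1.0, 1.0)
    return state * phase

def random_1q_gate(rng):
    # Haar-random single-qubit unitary (an illustrative gate choice).
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Build and run one random circuit: layers of random single-qubit gates
# interleaved with CZ gates on neighboring qubits.
state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n_qubits):
        state = apply_1q(state, random_1q_gate(rng), q, n_qubits)
    for q in range(layer % 2, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1, n_qubits)

# The "task" is simply to produce bitstrings drawn from |psi|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** n_qubits, size=n_samples, p=probs)
print("example bitstrings:", [format(int(s), f"0{n_qubits}b") for s in samples[:5]])
```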
From a computational standpoint, the task is legitimate: it is programmable, well-defined, and scales in a way that sharply separates quantum and classical resources. From a cultural standpoint, it has always been uncomfortable.
Arguments Over Advantage
The first large-scale demonstration came in 2019, when Google reported random circuit sampling on a 53-qubit superconducting processor. The claim triggered intense scrutiny, and within months classical simulation techniques had narrowed — though not eliminated — the gap.
Since then, the story — like a proverbial goalpost always receding slowly from the kicker — has changed.
Google and the University of Science and Technology of China have repeated the experiment with larger systems, deeper circuits, and improved fidelities, pushing well beyond the regime of known classical simulations. Meanwhile, Quantinuum demonstrated random circuit sampling on a trapped-ion system with fewer qubits but higher connectivity and lower error rates, achieving comparable results via a very different architecture.
Across these platforms, experiments produced statistically significant signals — measured using benchmarks such as linear cross-entropy — that deviate strongly from what would be expected from classical or random noise processes. With the exception of the earliest 2019 experiment, no full classical reproduction of these results has been demonstrated.
In short, the machines appear to be doing something classically infeasible.
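The linear cross-entropy benchmark mentioned above can also be made concrete. The sketch below assumes the ideal output probabilities of the circuit are known, which is only true for small, classically simulable instances; the score sits near zero for uniformly random bitstrings and near one for samples drawn from the ideal distribution of a deep random circuit.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark fidelity.

    samples: observed bitstring indices from the device.
    ideal_probs: ideal output distribution p(x) for the same circuit,
        computable only at small, simulable sizes.
    Returns ~0 for uniform noise and ~1 for a perfect sampler in the
    Porter-Thomas regime typical of deep random circuits.
    """
    return 2 ** n_qubits * np.mean(ideal_probs[samples]) - 1

# Toy check with a Porter-Thomas-like distribution: sampling from the
# ideal distribution scores near 1, uniform random bitstrings near 0.
rng = np.random.default_rng(1)
n = 10
p = rng.exponential(size=2 ** n)
p /= p.sum()
good = rng.choice(2 ** n, size=50_000, p=p)
noise = rng.integers(0, 2 ** n, size=50_000)
print(linear_xeb(good, p, n), linear_xeb(noise, p, n))
```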
Why The Hesitation?
Part of the skepticism is technical, according to the post. Because the task is designed to be classically hard, verifying the result directly would defeat the purpose. Experiments rely on extrapolation from smaller, simulable circuits and on proxy benchmarks that correlate with quantum fidelity. Critics describe this as “a proxy of a proxy,” and in some sense, the criticism is on target.
But this is also how frontier science often works. Particle physics infers new particles from statistical excesses. Astrophysics infers black holes from gravitational effects. Quantum advantage, by definition, cannot be verified by brute-force classical computation.
The deeper objection might be philosophical, however.
Random circuit sampling — and earlier, boson sampling — does not look like “real computation” to many computer scientists. There is no meaningful input–output relationship, no problem being solved in the conventional sense. After the first supremacy claims, this discomfort hardened into a new, informal standard: quantum advantage would only count if the task were useful, even though that requirement was never part of the original deal.
Subtly Moving The Goal Posts
Boson sampling was dismissed as too specialized. Random circuits were dismissed as contrived. When classical simulations caught up to early experiments, those advances were treated not as moving targets, but as retroactive invalidation.
What emerged was a shifting bar: quantum advantage must now be programmable, scalable, verifiable, robust to classical attack and, ideally, economically relevant. These are reasonable criteria for the next phase of the field. Applied retroactively, they obscure what has already been achieved.
The argument in this piece seems to be not that quantum computing is ready for deployment, or that practical advantage is around the corner, but the narrower and more precise claim that existing quantum computers have already crossed the line they were built to cross.
The unresolved tension is not about whether quantum devices can outperform classical ones at some tasks. It is about whether the community agrees those tasks should count.
That debate matters, because it shapes funding priorities, public perception and the credibility of future claims, according to the post. If the field quietly rewrites its own milestones after they are reached, it risks undermining confidence not only in quantum computing, but in how scientific progress is communicated.
Quantum advantage was never meant to be useful. It was meant to be undeniable. The evidence suggests it has been achieved, even if consensus has not.
What comes next — demonstrating practical advantage — is a different challenge entirely, the post suggests. That is a higher bar, and a necessary one, but it should not erase the fact that the first bar has already been cleared.
