A Tangled Benchmark: Using the Jones Polynomial to Test Quantum Hardware at Scale

Insider Brief:
- Researchers at Quantinuum developed an end-to-end quantum algorithm to estimate the Jones polynomial, implementing it on the H2-2 quantum computer with hardware-aware optimizations and error mitigation strategies.
- The algorithm targets a DQC1-complete problem in knot theory and uses a Fibonacci braid representation, offering a potentially more efficient path to quantum advantage than broader BQP formulations.
- The team introduced an efficiently verifiable benchmark by generating topologically equivalent braids with known polynomial values, enabling precise error analysis across varying circuit sizes and noise models.
- Their findings suggest that with gate fidelities above 99.99%, quantum methods could outperform classical approaches on problems with more than 2,800 braid crossings—highlighting a concrete and measurable route toward quantum advantage.
Where to look for quantum advantage? While technology continues to progress, there are certain areas we return to over and over again. Instead of trying to force the solution to fit within a quantum machine, we can save ourselves some time and heartache and seek out the places where quantum is more likely to be found—problems that are inherently tied to quantum mechanics.
Topology is one such area.
That focus on grounding quantum research in verifiable, physics-native domains reflects a broader philosophy at Quantinuum. “We decided to show progress step by provable step and not wave our arms about some future that was not testable,” wrote Quantinuum’s Founder, Chairman of the Board, and Chief Product Officer Ilyas Khan in a recent LinkedIn post. “In the past year we have literally led the field in areas as diverse as gate fidelities (across all zones); logical qubits; the first topological qubit; simulating the Ising model; certified RCS; a quantum processor that is the first that cannot be simulated classically.” The new work on Jones polynomials continues this trend—placing theoretical complexity and hardware readiness side by side.
In a recent arXiv preprint, researchers at Quantinuum present a detailed, end-to-end quantum algorithm for estimating the Jones polynomial of knots—a problem rooted in knot theory and considered a potential home for quantum advantage. This practical implementation of a quantum-native problem on Quantinuum’s H2-2 quantum computer highlights both algorithmic advances and hardware-tailored optimizations. As the authors describe it, this is more than a benchmark; it provides a framework to systematically search for and quantify near-term quantum advantage.
From Knot Invariants to Quantum Circuits
The Jones polynomial is a topological invariant: a function that assigns a polynomial to a knot or link in a way that remains unchanged under continuous deformation. Computing it with classical methods is expensive, especially for larger knots with hundreds or even thousands of crossings.
On the quantum side, the problem has deep theoretical roots. It was shown nearly two decades ago that approximating the Jones polynomial at certain roots of unity is complete for complexity classes such as BQP (bounded-error quantum polynomial time) and DQC1 (deterministic quantum computation with one clean qubit). In other words, it is a problem naturally suited to quantum circuits. According to the research team, the DQC1 variant, based on Markov-closed braids, is “less quantum” in terms of the required resources but often harder for classical algorithms, making it a desirable candidate for advantage.
The algorithm developed by Quantinuum implements both DQC1- and BQP-complete versions, choosing the fifth root of unity as the evaluation point and employing the Fibonacci representation of braids—a model known to be approximately universal for quantum computing.
A Fully Compiled Pipeline, Optimized for Hardware
Rather than relying on generic circuit templates, the authors take a hardware-aware approach. Part of their implementation involves a control-free, echo-verified Hadamard test—an optimized variant designed to reduce shot noise and minimize the number of two-qubit gates, which are the dominant source of error on most platforms. At its core, the quantum circuit simulates a unitary representation of a braid constructed from three-qubit gates acting on specially chosen basis states known as Fibonacci strings.
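For readers unfamiliar with the primitive being optimized: the textbook (controlled) Hadamard test estimates the real part of a matrix element ⟨ψ|U|ψ⟩ from the bias of a single ancilla measurement. The sketch below simulates that standard version exactly with numpy; the paper's control-free, echo-verified variant differs in detail, and the function names here are illustrative only.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_test(U, psi):
    """Exactly simulate the standard Hadamard test circuit.
    Returns P(ancilla=0) - P(ancilla=1), which equals Re<psi|U|psi>."""
    d = U.shape[0]
    # Start in |0>_ancilla (x) |psi>_system
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)
    # Hadamard on the ancilla
    state = np.kron(H, np.eye(d)) @ state
    # Controlled-U: apply U to the system when the ancilla is |1>
    cU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])
    state = cU @ state
    # Second Hadamard on the ancilla, then measure it
    state = np.kron(H, np.eye(d)) @ state
    probs = np.abs(state) ** 2
    return probs[:d].sum() - probs[d:].sum()

# Example: a single-qubit phase gate evaluated on |+>
theta = 0.7
U = np.diag([1.0, np.exp(1j * theta)])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
est = hadamard_test(U, plus)
assert np.isclose(est, (1 + np.cos(theta)) / 2)  # Re<+|U|+>
```

On hardware the expectation is estimated from repeated shots, so the statistical error shrinks only as one over the square root of the shot count—which is why variants that reduce shot noise, like the echo-verified test the authors use, matter in practice.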
To address coherence and phase errors, the team introduces a method called the “conjugate trick,” which uses pairs of topologically related circuits to cancel out systematic phase shifts. They also implement a form of error detection based on the structure of the Fibonacci subspace, discarding samples that violate expected measurement symmetries.
As noted by the researchers, these combined optimizations allow them to scale up problem instances beyond what was previously thought possible on NISQ devices. In one demonstration, they successfully evaluated a 16-qubit, 340 two-qubit gate circuit corresponding to a knot with 104 crossings, using 4,000 shots per circuit and achieving measurable gains from their error mitigation techniques.
Benchmarking with Built-In Verification
A particularly notable feature of the work was the inclusion of an efficiently verifiable benchmark. Because the Jones polynomial is a link invariant, any two topologically equivalent braids must yield the same result. The team exploited this property by generating topologically equivalent braids of varying sizes and depths and comparing their outputs, both quantum and classical, against a known value. This allows for a fine-grained analysis of error scaling with respect to circuit size, gate depth, and noise model.
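The braid group relations are what make such a benchmark possible: applying a relation to a braid word changes the circuit but not the underlying link, so the Jones polynomial must stay fixed. A minimal sketch of generating equivalent braid words, with generators encoded as integers (helper names hypothetical; the paper's equivalence moves may differ in detail):

```python
def commute_move(word, k):
    """Far-commutation relation: s_i s_j = s_j s_i when |i - j| >= 2.
    Swaps the generators at positions k and k+1 if they commute;
    returns None if the move does not apply."""
    i, j = word[k], word[k + 1]
    if abs(abs(i) - abs(j)) >= 2:
        new = list(word)
        new[k], new[k + 1] = j, i
        return new
    return None

def braid_move(word, k):
    """Braid relation: s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}.
    Rewrites the pattern starting at position k if it matches;
    returns None otherwise."""
    if k + 2 < len(word) + 0 or True:  # bounds checked below
        pass
    if k + 3 <= len(word):
        a, b, c = word[k:k + 3]
        if a == c and a > 0 and b > 0 and abs(a - b) == 1:
            new = list(word)
            new[k:k + 3] = [b, a, b]
            return new
    return None

# s_1 and s_3 act on disjoint strands, so they commute:
assert commute_move([1, 3, 2], 0) == [3, 1, 2]
# s_1 s_2 s_1 rewrites to the equivalent word s_2 s_1 s_2:
assert braid_move([1, 2, 1], 0) == [2, 1, 2]
# Adjacent generators do not far-commute:
assert commute_move([1, 2], 0) is None
```

Chaining random sequences of such moves yields families of braid words of different lengths that all represent the same link, so a correct device must return the same Jones polynomial value for each, to within noise.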
According to the paper, the benchmark enables rigorous error characterization under realistic noise assumptions, with simulations modeling depolarizing errors for one- and two-qubit gates. The authors report that for gate fidelities better than 99.99%, their algorithm is likely to outperform classical baselines such as tensor network contraction and matrix product operator simulations once braid sizes exceed 2,800 crossings.
Less Quantum, More Advantage
The title of the paper—“Less Quantum, More Advantage”—cleverly speaks to a shift in strategy. Rather than chasing quantum advantage in the most general or powerful formulations, the team zeroes in on a problem that is both theoretically significant and classically difficult, yet solvable with modest quantum resources. As they argue, focusing on the DQC1 version of the Jones polynomial, even though DQC1 is a less expressive model than BQP, can offer more practical gains.
This perspective echoes a recent Nature write-up on the research, which emphasized the “mind-blowing” relationship between knot theory and quantum mechanics. There, Quantinuum’s Konstantinos Meichanetzidis, who also contributed to this new study, explained how knot invariants could serve not just as computational targets but also as built-in checks for correctness in quantum hardware. If two circuits yield the same result for different representations of the same knot, that’s evidence the computation is behaving as expected.
As quantum computing moves beyond toy problems and handpicked instances, finding real benchmarks that are both verifiable and classically challenging is of utmost importance. According to the research, the Jones polynomial provides a rare convergence of theoretical complexity, practical relevance, and compatibility with quantum architectures.
Rather than making grand claims about reaching advantage today, the authors offer a measured and transparent assessment of when, how, and under what conditions a quantum algorithm might outperform classical methods. That in itself is a valuable contribution—and one that brings us closer to understanding what real, useful quantum advantage could look like.
Contributing authors on the study include Tuomas Laakkonen, Enrico Rinaldi, Chris N. Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, Marcello Benedetti, and Konstantinos Meichanetzidis.