Quantum advantage schemes probe the boundary between classically simulatable quantum systems and those that computationally go beyond this realm. Here, we introduce a constant-depth measurement-driven approach for efficiently sampling from a broad class of dense instantaneous quantum polynomial-time circuits and associated Hamiltonian phase states, previously requiring polynomial-depth unitary circuits. Leveraging measurement-adaptive fan-out staircases, our "dynamical circuits" circumvent light-cone constraints, enabling global entanglement with flexible auxiliary qubit usage on bounded-degree lattices. Generated Hamiltonian phase states exhibit statistical metrics indistinguishable from those of fully random architectures. Additionally, we demonstrate measurement-driven globally entangled feature maps capable of distinguishing phases of an extended SSH model from random eigenstates using a quantum reservoir-computing benchmark. Technologically, our results harness the power of mid-circuit measurements for realizing quantum advantages on hardware with a favorable topology. Conceptually, we highlight their power in achieving rigorous computational speedups.
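The sampling task at the heart of this scheme can be illustrated by classical brute force at small sizes. The sketch below (not the constant-depth measurement-driven protocol itself, which is the paper's contribution) builds a random dense IQP instance: a degree-2 phase polynomial defines a Hamiltonian phase state, and measuring after a final Hadamard layer gives the IQP output distribution. The qubit count and phase granularity are illustrative choices.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 8                      # illustrative size; brute force is exponential in n
dim = 2 ** n

# Random dense IQP instance: phase polynomial with single- and two-qubit terms
lin = rng.integers(0, 8, size=n)                     # T-like phases, multiples of pi/4
quad = np.triu(rng.integers(0, 2, size=(n, n)), 1)   # CZ-like pairwise couplings

bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1  # all bitstrings, shape (dim, n)
theta = (np.pi / 4) * (bits @ lin) \
      + np.pi * np.einsum('xi,ij,xj->x', bits, quad, bits)

# Hamiltonian phase state |psi> = 2^{-n/2} sum_x e^{i theta(x)} |x>
psi = np.exp(1j * theta) / np.sqrt(dim)

# IQP output amplitudes: final layer of Hadamards = Walsh-Hadamard transform
out = hadamard(dim) @ psi / np.sqrt(dim)
probs = np.abs(out) ** 2

samples = rng.choice(dim, size=10, p=probs)           # draw bitstring samples
```

The exponential cost of this direct simulation is exactly what makes dense IQP circuits a candidate for quantum advantage.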
We introduce a family of scalable planar fault-tolerant circuits that implement logical non-Clifford operations on a 2D color code, such as a logical $T$ gate or a logical non-Pauli measurement that prepares a magic $|T\rangle$ state. The circuits are relatively simple, consisting only of physical $T$ gates, $CX$ gates, and few-qubit measurements. They can be implemented with an array of qubits on a 2D chip with nearest-neighbor couplings, and no wire crossings. The construction is based on a spacetime path integral representation of a non-Abelian 2+1D topological phase, which is related to the 3D color code. We turn the path integral into a circuit by expressing it as a spacetime $ZX$ tensor network, and then traversing it in some chosen time direction. We describe in detail how fault tolerance is achieved using a "just-in-time" decoding strategy, for which we repurpose and extend state-of-the-art color-code matching decoders.
Quantum signal processing (QSP) studies quantum circuits interleaving known unitaries (the phases) and unknown unitaries encoding a hidden scalar (the signal). For a wide class of functions one can quickly compute the phases that apply a desired function to the signal; surprisingly, this ability can be shown to unify many quantum algorithms. A separate, basic subfield in quantum computing is gate approximation: among its results, the Solovay-Kitaev theorem (SKT) establishes an equivalence between the universality of a gate set and its ability to efficiently approximate other gates. In this work we prove an 'SKT for QSP,' showing that the density of parameterized circuit ansätze in classes of functions implies the existence of short circuits approximating desired functions. This is quite distinct from a pointwise application of the usual SKT, and yields a suite of independently interesting 'lifted' variants of standard SKT proof techniques. Our method furnishes alternative, flexible proofs for results in QSP, extends simply to ansätze for which standard QSP proof methods fail, and establishes a formal intersection between QSP and gate approximation.
In quantum thermodynamics, a system is described by a Hamiltonian and a list of non-commuting charges representing conserved quantities like particle number or electric charge, and an important goal is to determine the system's minimum energy in the presence of these conserved charges. In optimization theory, a semi-definite program (SDP) involves a linear objective function optimized over the cone of positive semi-definite operators intersected with an affine space. These problems arise from differing motivations in the physics and optimization communities and are phrased using very different terminology, yet they are essentially identical mathematically. By adopting Jaynes' mindset motivated by quantum thermodynamics, we observe that minimizing free energy in the aforementioned thermodynamics problem, instead of energy, leads to an elegant solution in terms of a dual chemical potential maximization problem that is concave in the chemical potential parameters. As such, one can employ standard (stochastic) gradient ascent methods to find the optimal values of these parameters, and these methods are guaranteed to converge quickly. At low temperature, the minimum free energy provides an excellent approximation for the minimum energy. We then show how this Jaynes-inspired gradient-ascent approach can be used in both first- and second-order classical and hybrid quantum-classical algorithms for minimizing energy, and equivalently, how it can be used for solving SDPs, with guarantees on the runtimes of the algorithms. The approach discussed here is well grounded in quantum thermodynamics and, as such, provides physical motivation underpinning why algorithms published fifty years after Jaynes' seminal work, including the matrix multiplicative weights update method, the matrix exponentiated gradient update method, and their quantum algorithmic generalizations, perform well at solving SDPs.
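The dual chemical-potential ascent described above can be sketched for a small matrix instance. In this illustrative numpy example (random 4-dimensional Hamiltonian, a single charge, arbitrarily chosen β and step size), the dual function D(μ) = -(1/β) log Tr e^{-β(H - Σᵢ μᵢGᵢ)} + Σᵢ μᵢcᵢ is concave, its gradient is cᵢ - ⟨Gᵢ⟩_μ, and plain gradient ascent drives the Gibbs state's charge expectations to the target values:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, beta, eta = 4, 2.0, 0.05            # illustrative dimension, inverse temperature, step
H = rand_herm(d)
G = [rand_herm(d)]                      # one conserved charge for simplicity
c = [np.trace(g).real / d for g in G]   # feasible targets: values in the maximally mixed state

def gibbs(mu):
    # Gibbs state of the tilted Hamiltonian H - sum_i mu_i G_i
    K = H - sum(m * g for m, g in zip(mu, G))
    w, V = np.linalg.eigh(K)
    p = np.exp(-beta * (w - w.min()))   # shift spectrum for numerical stability
    p /= p.sum()
    return (V * p) @ V.conj().T

mu = np.zeros(len(G))
for _ in range(3000):
    rho = gibbs(mu)
    grad = np.array([ci - np.trace(g @ rho).real for ci, g in zip(c, G)])
    mu += eta * grad                    # ascent on the concave dual

rho = gibbs(mu)
residual = max(abs(np.trace(g @ rho).real - ci) for g, ci in zip(G, c))
```

At the optimum, the constraints Tr(Gᵢρ) = cᵢ are satisfied and -(1/β) log Z + Σᵢ μᵢcᵢ is the minimum free energy; lowering the temperature tightens its approximation of the minimum energy.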
We introduce a framework that allows one to systematically and arbitrarily scale the code distance of local fermion-to-qubit encodings in one and two dimensions without growing the weights of stabilizers. This is achieved by embedding low-distance encodings into the surface code in the form of topological defects. We introduce a family of Ladder Encodings (LE), which is optimal in the sense that the code distance is equal to the weights of density and nearest-neighbor hopping operators of a one-dimensional Fermi-Hubbard model. In two dimensions, we show how to scale the code distance of LE as well as other low-distance encodings such as Verstraete-Cirac and Derby-Klassen. We further introduce Perforated Encodings, which locally encode two fermionic spin modes within the same surface code structure. We show that our strategy is also extendable to other topological codes by explicitly embedding the LE into a 6.6.6 color code.
Sampling from probability distributions of the form $\sigma \propto e^{-\beta V}$, where $V$ is a continuous potential, is a fundamental task across physics, chemistry, biology, computer science, and statistics. However, when $V$ is non-convex, the resulting distribution becomes non-logconcave, and classical methods such as Langevin dynamics often exhibit poor performance. We introduce the first quantum algorithm that provably accelerates a broad class of continuous-time sampling dynamics. For Langevin dynamics, our method encodes the target Gibbs measure into the amplitudes of a quantum state, identified as the kernel of a block matrix derived from a factorization of the Witten Laplacian operator. This connection enables Gibbs sampling via singular value thresholding and yields the first provable quantum advantage with respect to the Poincaré constant in the non-logconcave setting. Building on this framework, we further develop the first quantum algorithm that accelerates replica exchange Langevin diffusion, a widely used method for sampling from complex, rugged energy landscapes.
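As a classical point of comparison for the dynamics being accelerated, here is a minimal overdamped Langevin sampler (Euler-Maruyama discretization) for a non-convex double-well potential V(x) = (x² - 1)², with illustrative step size and chain count. This is the baseline whose mixing is governed by the Poincaré constant:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dt, n_steps, n_chains = 1.0, 1e-3, 20000, 2000   # illustrative parameters

grad_V = lambda x: 4 * x * (x**2 - 1)   # gradient of the double well V(x) = (x^2 - 1)^2

# Euler-Maruyama discretization of dX = -grad V(X) dt + sqrt(2/beta) dW
x = rng.normal(size=n_chains)
for _ in range(n_steps):
    x += -grad_V(x) * dt + np.sqrt(2 * dt / beta) * rng.normal(size=n_chains)

# samples concentrate near the two wells at x = +1 and x = -1
mean_abs = np.abs(x).mean()
```

The barrier at x = 0 is what slows mixing between the two modes; deepening the wells (larger β) makes the Poincaré constant, and hence the classical mixing time, blow up, which is the regime targeted by the quantum algorithm.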
Quantum error correction requires accurate and efficient decoding to optimally suppress errors in the encoded information. For concatenated codes, where one code is embedded within another, optimal decoding can be achieved using a message-passing algorithm that sends conditional error probabilities from the lower-level code to a higher-level decoder. In this work, we study the XYZ$^2$ topological stabilizer code, defined on a honeycomb lattice, and use the fact that it can be viewed as a concatenation of a [[2, 1, 1]] phase-flip parity check code and the surface code with $YZZY$ stabilizers, to decode the syndrome information in two steps. We use this sequential decoding scheme to correct errors on data qubits, as well as measurement errors, under various biased error models using both a maximum-likelihood decoder (MLD) and more efficient matching-based decoders. For depolarizing noise we find that the sequential matching decoder gives a threshold of 18.3%, close to optimal, as a consequence of a favorable, effectively biased, error model on the upper-level YZZY code. For phase-biased noise on data qubits, at a bias $\eta = \frac{p_z}{p_x+p_y} = 10$, we find that a belief-matching-based decoder reaches thresholds of 24.1%, compared to 28.6% for the MLD. With measurement errors the thresholds are reduced to 3.4% and 4.3%, for depolarizing and biased noise respectively, using the belief-matching decoder. This demonstrates that the XYZ$^2$ code has thresholds that are competitive with other codes tailored to biased noise. The results also showcase two approaches to taking advantage of concatenated codes: 1) tailoring the upper-level code to the effective noise profile of the decoded lower-level code, and 2) making use of an upper-level decoder that can utilize the local information from the lower-level code.
We introduce the qudit Noisy Stabilizer Formalism, a framework for efficiently describing the evolution of stabilizer states in prime-power dimensions subject to generalized Pauli-diagonal noise under Clifford operations and generalized Pauli measurements. For arbitrary dimensions, the formalism remains applicable, though restricted to a subset of stabilizer states and operations. The computational complexity scales linearly with the number of qudits in the initial state and exponentially with the number of qudits in the final state. This ensures that when noisy qudit stabilizer states evolve via generalized Pauli measurements and Clifford operations to generate multipartite entangled states of a few qudits, their description remains efficient. We demonstrate this by analyzing the generation of a generalized Bell pair from a noisy linear cluster state subject to two distinct noise sources acting on each of the qudits.
High-energy particle collisions can convert energy into matter through the inelastic production of new particles. Quantum computers are an ideal platform for simulating the out-of-equilibrium dynamics of the collision and the formation of the subsequent many-particle state. In this work, evidence for inelastic particle production is observed in one-dimensional Ising field theory using IBM's quantum computers. The scattering experiment is performed on 100 qubits of ibm_marrakesh and uses up to 6,412 two-qubit gates to access the post-collision dynamics. Integral to these simulations is a new quantum algorithm for preparing the initial state (wavepackets) of a quantum field theory scattering simulation. This method efficiently prepares wavepackets by extending recent protocols for creating W states with mid-circuit measurement and feedforward. The required circuit depth is independent of wavepacket size and spatial dimension, representing a superexponential improvement over previous methods. Our wavepacket preparation algorithm can be applied to a wide range of lattice models and is demonstrated in one-dimensional Ising field theory, scalar field theory, the Schwinger model, and two-dimensional Ising field theory.
The computation of dynamical response functions is central to many problems in condensed matter physics. Owing to the rapid growth of quantum correlations following a quench, classical methods face significant challenges even if an efficient description of the equilibrium state is available. Quantum computing offers a promising alternative. However, existing approaches often assume access to the equilibrium state, which may be difficult to prepare in practice. In this work, we present a method that circumvents this by using energy filter techniques, enabling the computation of response functions and other dynamical properties in both microcanonical and canonical ensembles. Our approach only requires the preparation of states that have significant weight at the desired energy. The dynamical response functions are then reconstructed from measurements after quenches of varying duration by classical postprocessing. We illustrate the algorithm numerically by applying it to compute the dynamical conductivity of a free-fermion model, which unveils the energy-dependent localization properties of the model.
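The core postprocessing step, building an energy filter from time-evolution data, can be illustrated directly: integrating e^{i(E₀-H)t} against a Gaussian weight in t reproduces a Gaussian filter e^{-(H-E₀)²/(2δ²)} in energy. A small numpy check on a random Hamiltonian, with the filter center E₀ and width δ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16
A = rng.normal(size=(d, d))
H = (A + A.T) / 2                      # random real symmetric "Hamiltonian"
w, V = np.linalg.eigh(H)

E0, delta = 0.0, 0.5                   # illustrative filter center and width
ts = np.linspace(-20, 20, 4001)        # time grid; the Gaussian weight decays well inside it
dtau = ts[1] - ts[0]

# Filter from time evolution: (delta/sqrt(2 pi)) * int dt e^{-delta^2 t^2 / 2} e^{i(E0 - H)t}
F = sum(
    dtau * (delta / np.sqrt(2 * np.pi)) * np.exp(-0.5 * (delta * t) ** 2)
    * np.exp(1j * E0 * t) * ((V * np.exp(-1j * w * t)) @ V.T)
    for t in ts
)

# Exact Gaussian energy filter, built in the eigenbasis for comparison
exact = (V * np.exp(-0.5 * ((w - E0) / delta) ** 2)) @ V.T
err = np.abs(F - exact).max()
```

On a quantum device the matrix e^{-iHt} is never formed; only expectation values after quenches of varying duration t are measured, and the same Gaussian weights are applied in classical postprocessing.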
In complexity theory, gap-preserving reductions play a crucial role in studying hardness of approximation and in analyzing the relative complexity of multiprover interactive proof systems. In the quantum setting, multiprover interactive proof systems with entangled provers correspond to gapped promise problems for nonlocal games, and the recent result MIP$^*$=RE (Ji et al., arXiv:2001.04383) shows that these are in general undecidable. However, the relative complexity of problems within MIP$^*$ is still not well-understood, as establishing gap-preserving reductions in the quantum setting presents new challenges. In this paper, we introduce a framework to study such reductions and use it to establish MIP$^*$-completeness of the gapped promise problem for the natural class of independent set games. In such a game, the goal is to determine whether a given graph contains an independent set of a specified size. We construct families of independent set games with constant question size for which the gapped promise problem is undecidable. In contrast, the same problem is decidable in polynomial time in the classical setting. To carry out our reduction, we establish a new stability theorem, which could be of independent interest, allowing us to perturb families of almost PVMs to genuine PVMs.
Determining and verifying an object's position is a fundamental task with broad practical relevance. We propose a secure quantum ranging protocol that combines quantum ranging with quantum position verification (QPV). Our method achieves Heisenberg-limited precision in position estimation while simultaneously detecting potential cheaters. Two verifiers each send out a state that is entangled in frequency space within a single optical mode. An honest prover only needs to perform simple beam-splitter operations, whereas cheaters are allowed to use arbitrary linear optical operations, one ancillary mode, and perfect quantum memories, though without access to entanglement. Our approach considers a previously unstudied security aspect of quantum ranging. It also provides a framework to quantify the precision with which a prover's position can be verified in QPV, which previously has been assumed to be infinite.
Giovanni Rodari, Tommaso Francalanci, Eugenio Caruccio, Francesco Hoch, Gonzalo Carvacho, Taira Giordani, Nicolò Spagnolo, Riccardo Albiero, Niki Di Giano, Francesco Ceccarelli, Giacomo Corrielli, Andrea Crespi, Roberto Osellame, Ulysse Chabaud, Fabio Sciarrino
Over the past few years, various methods have been developed to engineer and exploit the dynamics of photonic quantum states as they evolve through linear optical networks. Recent theoretical works have shown that the underlying Lie algebraic structure plays a crucial role in the description of linear optical Hamiltonians, as such formalism identifies intrinsic symmetries within photonic systems subject to linear optical dynamics. Here, we experimentally investigate the role of Lie algebra in the context of Boson sampling, a pivotal model for the current understanding of computational complexity regimes in photonic quantum information. Performing experiments of increasing complexity, realized within a fully reconfigurable photonic circuit, we show that sampling experiments do indeed fulfill the constraints implied by a Lie algebraic structure. In addition, we provide a comprehensive picture of how the concept of a Lie algebraic invariant can be interpreted from the point of view of n-th order correlation functions in quantum optics. Our work shows how Lie algebraic invariants can be used as a benchmark tool for the correctness of an underlying linear optical dynamics and to verify the reliability of Boson Sampling experiments. This opens new avenues for the use of algebraic-inspired methods as verification tools for photon-based quantum computing protocols.
We establish the mathematical equivalence between the spectral form factor (SFF), a quantity used to identify the onset of quantum chaos and scrambling in quantum many-body systems, and the classical problem of statistical characterization of planar random walks. We thus associate to any quantum Hamiltonian a random process on the plane. We set down rigorously the conditions under which such a random process becomes a Wiener process in the thermodynamic limit and the associated distribution of the distance from the origin becomes Gaussian. This leads to the well-known Gaussian behavior of the spectral form factor for quantum chaotic (non-integrable) models, which we show to be violated at low temperature. For systems with quasi-free spectrum (integrable), instead, the distribution of the SFF is log-normal. We compute all the moments of the spectral form factor exactly without resorting to the Gaussian approximation. Assuming degeneracies in the quantum chaotic spectrum, we solve the classical problem of a random walker taking steps of unequal lengths. Furthermore, we demonstrate that the Hausdorff dimension of the frontier of the random walk, defined as the boundary of the unbounded component of the complement, approaches 1 for the integrable Brownian motion, while the non-integrable walk approaches that obtained by the Schramm-Loewner Evolution (SLE) with the fractal dimension $4/3$. Additionally, we numerically show that Bethe Ansatz walkers fall into a category similar to the non-integrable walkers.
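The correspondence itself is elementary: for a spectrum {E_n}, SFF(t) = |Σ_n e^{-iE_n t}|² is the squared distance from the origin of a planar walk whose n-th unit step has angle -E_n t. A short numpy illustration with a GOE spectrum (a stand-in for the chaotic case), checking that the time-averaged late-time SFF sits near the plateau value N, the number of levels, as expected in the absence of degeneracies:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Chaotic (non-integrable) stand-in: eigenvalues of a GOE random matrix
A = rng.normal(size=(N, N))
E = np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * N))   # semicircle support [-2, 2]

def sff(t):
    steps = np.exp(-1j * E * t)        # one unit step in the plane per energy level
    return abs(steps.sum()) ** 2       # squared distance of the walk from the origin

# average over a window beyond the Heisenberg time (~2N here),
# where the SFF fluctuates around its plateau ~ N
window = np.linspace(2000, 4000, 501)
late_avg = np.mean([sff(t) for t in window])
```

At early times the steps are strongly correlated and the walk is far from Wiener; the Gaussian (or log-normal) behavior discussed above concerns the distribution of this squared endpoint distance over ensembles or time windows.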
We define an 't Hooft anomaly index for a group acting on a 2d quantum lattice system by finite-depth circuits. It takes values in the degree-4 cohomology of the group and is an obstruction to on-siteability of the group action. We introduce a 3-group (modeled as a crossed square) describing higher symmetries of a 2d lattice system and show that the 2d anomaly index is an obstruction for promoting a symmetry action to a morphism of 3-groups. This demonstrates that 't Hooft anomalies are a consequence of a mixing between ordinary symmetries and higher symmetries. Similarly, to any 1d lattice system we attach a 2-group (modeled as a crossed module) and interpret the Nayak-Else anomaly index as an obstruction for promoting a group action to a morphism of 2-groups. The meaning of indices of Symmetry Protected Topological states is also illuminated by higher group symmetry.
The conditional disclosure of secrets (CDS) setting is among the most basic primitives studied in information-theoretic cryptography. Motivated by a connection to non-local quantum computation and position-based cryptography, CDS with quantum resources has recently been considered. Here, we study the differences between quantum and classical CDS, with the aim of clarifying the power of quantum resources in information-theoretic cryptography. We establish the following results: 1) For perfectly correct CDS, we give a separation for a promise version of the not-equals function, showing a quantum upper bound of $O(\log n)$ and classical lower bound of $\Omega(n)$. 2) We prove a $\Omega(\log \mathsf{R}_{0,A\rightarrow B}(f)+\log \mathsf{R}_{0,B\rightarrow A}(f))$ lower bound on quantum CDS where $\mathsf{R}_{0,A\rightarrow B}(f)$ is the classical one-way communication complexity with perfect correctness. 3) We prove a lower bound on quantum CDS in terms of two-round, public-coin, two-prover interactive proofs. 4) We give a logarithmic upper bound for quantum CDS on forrelation, while the best known classical algorithm is linear. We interpret this as preliminary evidence that classical and quantum CDS are separated even with correctness and security error allowed. We also give a separation for classical and quantum private simultaneous message passing for a partial function, improving on an earlier relational separation. Our results use novel combinations of techniques from non-local quantum computation and communication complexity.
Recent advances have defined nontrivial phases of matter in open quantum systems, such as many-body quantum states subject to environmental noise. In this work, we experimentally probe and characterize mixed-state phases on Quantinuum's H1 quantum computer using two measures: Renyi correlators and the coding performance of a quantum error-correcting code associated with the phase. As a concrete example, we probe the low-energy states of the critical transverse field Ising model under different dephasing noise channels. First, we employ shadow tomography to observe a newly proposed Renyi correlator in two distinct phases: one exhibiting power-law decay and the other long-ranged. Second, we investigate the decoding fidelity of the associated quantum error-correcting code using a variational quantum circuit, and we find that a shallow circuit is sufficient to distinguish the above-mentioned two mixed-state phases through the decoding performance quantified by entanglement fidelity. Our work is a proof of concept for the quantum simulation and characterization of mixed-state phases.
We present a two-step decoder for the parity code and evaluate its performance in code-capacity and faulty-measurement settings. For noiseless measurements, we find that the decoding problem can be reduced to a series of repetition codes while yielding near-optimal decoding for intermediate code sizes and achieving optimality in the limit of large codes. In the regime of unreliable measurements, the decoder demonstrates fault-tolerant thresholds above 5% at the cost of decoding a series of independent repetition codes in (1 + 1) dimensions. Such high thresholds, in conjunction with a practical decoder, efficient long-range logical gates, and suitability for planar implementation, position the parity architecture as a promising candidate for demonstrating quantum advantage on qubit platforms with strong noise bias.
In this paper, we construct quantum circuits for the Black-Scholes equations, a cornerstone of financial modeling, based on a quantum algorithm that overcomes the curse of dimensionality. Our approach leverages the Schrödingerisation technique, which converts linear partial and ordinary differential equations with non-unitary dynamics into a system evolved by unitary dynamics. This is achieved through a warped phase transformation that lifts the problem into a higher-dimensional space, enabling the simulation of the Black-Scholes equation on a quantum computer. We conduct a thorough complexity analysis to highlight the quantum advantages of our approach compared to existing algorithms. The effectiveness of our quantum circuit is substantiated through extensive numerical experiments.
Product formula methods, particularly the second-order Suzuki decomposition, are an important tool for simulating quantum dynamics on quantum computers due to their simplicity and unitarity preservation. While higher-order schemes have been extensively studied, the landscape of second-order decompositions remains poorly understood in practice. We explore how term ordering and recursive application of the Suzuki formula generate a broad family of approximants beyond standard Strang splitting, introducing a hybrid heuristic that minimizes local error bounds and a fractional approach with tunable sequence length. The hybrid method consistently selects the longest possible decomposition, achieving the lowest error but at the cost of exponential gate overhead, while fractional decompositions often match or exceed this performance with far fewer gates, enabling offline selection of near-optimal approximants for practical quantum simulation. This offers a simple, compiler-accessible heuristic for balancing accuracy and cost, and highlights an underexplored region of decomposition space where many low-cost approximants may achieve high accuracy without global optimization. Finally, we show that in the presence of depolarising noise, fractional decompositions become advantageous as systems approach fault-tolerant error rates, providing a practical path for balancing noise resistance and simulation accuracy.
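The gap between first-order Lie-Trotter and second-order Strang splitting is easy to exhibit numerically. A hedged scipy sketch on two random Hermitian generators (not the paper's specific Hamiltonians), confirming the O(dt) versus O(dt²) per-step error scaling:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d = 8

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

A, B = rand_herm(d), rand_herm(d)
t = 1.0
exact = expm(-1j * t * (A + B))

def errors(n):
    dt = t / n
    # first-order Lie-Trotter: e^{-i dt A} e^{-i dt B}, repeated n times
    lie = np.linalg.matrix_power(expm(-1j * dt * A) @ expm(-1j * dt * B), n)
    # second-order Strang splitting: symmetric e^{-i dt A/2} e^{-i dt B} e^{-i dt A/2}
    strang = np.linalg.matrix_power(
        expm(-1j * dt * A / 2) @ expm(-1j * dt * B) @ expm(-1j * dt * A / 2), n)
    return (np.linalg.norm(lie - exact, 2), np.linalg.norm(strang - exact, 2))

errs = {n: errors(n) for n in (4, 8, 16)}
# quadrupling n cuts the Lie error ~4x and the Strang error ~16x
```

The error constants, unlike the orders, depend on commutators of the terms; this dependence on term ordering is exactly the decomposition-space freedom the hybrid and fractional heuristics above exploit.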
Digital-analog quantum computing is a universal paradigm that employs the natural entangling Hamiltonian of the system and single-qubit gates as resources. Here, we study the stability of these protocols against Hamiltonian characterization errors. For this, we bound the maximum separation between the target and the implemented Hamiltonians. Additionally, we obtain an upper bound for the deviation in the expected value of an observable. We further propose a protocol for mitigating calibration errors that resembles dynamical-decoupling techniques. These results open the possibility of scaling digital-analog protocols to intermediate and large-scale systems while retaining an estimate of the errors incurred.
The continuous monitoring of driven-dissipative systems offers new avenues for quantum advantage in metrology. This approach mixes temporal and spatial correlations in a manner distinct from traditional metrology, leading to ambiguities in how one identifies Heisenberg scalings (e.g., standard asymptotic metrics like the sensitivity are not bounded by system size). Here, we propose a new metric for continuous sensing, the optimized finite-time environmental quantum Fisher information (QFI), that remedies the above issues by simultaneously treating time and system size as finite resources. In addition to having direct experimental relevance, this quantity is rigorously bounded by both system size and integration time, allowing for a precise formulation of Heisenberg scaling. We also introduce two many-body continuous sensors: the high-temperature superradiant sensor, and the dissipative spin squeezer. Both exhibit Heisenberg scaling of a collective magnetic field for multiple directions. The spin squeezed sensor has a striking advantage over previously studied many-body continuous sensors: the optimal measurement achieving the full QFI does not require the construction of a complex decoder system, but can be achieved using direct photodetection of the cavity output field.
Optical atomic clocks with unrivaled precision and accuracy have advanced the frontier of precision measurement science and opened new avenues for exploring fundamental physics. A fundamental limitation on clock precision is the Standard Quantum Limit (SQL), which stems from the uncorrelated projection noise of each atom. State-of-the-art optical lattice clocks interrogate large ensembles to minimize the SQL, but density-dependent frequency shifts pose challenges to scaling the atom number. The SQL can be surpassed, however, by leveraging entanglement, though it remains an open problem to achieve quantum advantage from spin squeezing at state-of-the-art stability levels. Here we demonstrate clock performance beyond the SQL, achieving a fractional frequency precision of 1.1 $\times 10^{-18}$ for a single spin-squeezed clock. With cavity-based quantum nondemolition (QND) measurements, we prepare two spin-squeezed ensembles of $\sim$30,000 strontium atoms confined in a two-dimensional optical lattice. A synchronous clock comparison with an interrogation time of 61 ms achieves a metrological improvement of 2.0(2) dB beyond the SQL, after correcting for state preparation and measurement errors. These results establish the most precise entanglement-enhanced clock to date and offer a powerful platform for exploring the interplay of gravity and quantum entanglement.
Understanding how noise degrades entanglement is crucial for the development of reliable quantum technologies. While the Markovian approximation simplifies the analysis of noise, it remains computationally demanding, particularly for high-dimensional systems like quantum memories. In this paper, we present a statistical approach to study the impact of different noise models on entanglement in composite quantum systems. By comparing global and local noise scenarios, we quantify entanglement degradation using the Positive Partial Transpose Time (PPTT) metric, which measures how long entanglement persists under noise. When the sampling of different noise scenarios is performed under controlled and homogeneous conditions, our analysis reveals that systems subjected to global noise tend to exhibit longer PPTTs, whereas those influenced by independent local noise models display the shortest entanglement persistence. To carry out this analysis, we employ a computational method proposed by Cao and Lu, which accelerates the simulation of PPTT distributions and enables efficient analysis of systems with dimensions up to $D=8$. Our results demonstrate the effectiveness of this approach for investigating the resilience of quantum systems under Markovian noise.
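For a single Bell pair under depolarizing noise, the PPT time has a closed form, which makes it a convenient sanity check for the metric: the state p(t)|Φ⁺⟩⟨Φ⁺| + (1-p(t))I/4 with p(t) = e^{-γt} becomes PPT exactly when p ≤ 1/3, i.e., at t* = ln(3)/γ. A small numpy verification, with γ = 1 chosen for illustration:

```python
import numpy as np

phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)           # |Phi+> = (|00> + |11>)/sqrt(2)
P = np.outer(phi, phi)

def rho(p):
    # Bell state mixed with white noise: p |Phi+><Phi+| + (1 - p) I/4
    return p * P + (1 - p) * np.eye(4) / 4

def partial_transpose(r):
    # transpose the second qubit: reshape to (2,2,2,2) and swap its ket/bra indices
    return r.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def is_ppt(p):
    return np.linalg.eigvalsh(partial_transpose(rho(p))).min() >= -1e-12

gamma = 1.0
ts = np.linspace(0.0, 3.0, 3001)
ppt_time = next(t for t in ts if is_ppt(np.exp(-gamma * t)))
# ppt_time matches ln(3)/gamma ~ 1.0986 up to the grid resolution
```

For higher-dimensional systems the minimum eigenvalue of the partial transpose has no such closed form, which is where sampling-based estimation of the PPTT distribution becomes necessary.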
Balancing trainability and expressibility is a central challenge in variational quantum computing, and quantum architecture search (QAS) plays a pivotal role by automatically designing problem-specific parameterized circuits that address this trade-off. In this work, we introduce a scalable, training-free QAS framework that efficiently explores and evaluates quantum circuits through landscape fluctuation analysis. This analysis captures key characteristics of the cost function landscape, enabling accurate prediction of circuit learnability without costly training. By combining this metric with a streamlined two-level search strategy, our approach identifies high-performance, large-scale circuits with higher accuracy and fewer gates. We further demonstrate the practicality and scalability of our method, achieving significantly lower classical resource consumption compared to prior work. Notably, our framework attains robust performance on a challenging 50-qubit quantum many-body simulation, highlighting its potential for addressing complex quantum problems.
Optical Very Long Baseline Interferometry (VLBI) offers the potential for unprecedented angular resolution in both astronomical imaging and precision measurements. Classical approaches, however, face significant limitations due to photon loss, background noise, and the requirements for dynamical delay lines over large distances. This document surveys recent developments in quantum-enabled VLBI, which aim to address these challenges using entanglement-assisted protocols, quantum memory storage, and nonlocal measurement techniques. While its application to astronomy is well known, we also examine how these techniques may be extended to geodesy -- specifically, the monitoring of Earth's rotation. Particular attention is given to quantum-enhanced telescope architectures, including repeater-based long baseline interferometry and quantum error-corrected encoding schemes, which offer a pathway toward high-fidelity optical VLBI. To aid the discussion, we also compare specifications for key enabling technologies to current state-of-the-art experimental components, including switching rates, gate times, entanglement distribution rates, and memory lifetimes. By integrating quantum technologies, future interferometric networks may achieve diffraction-limited imaging at optical and near-infrared wavelengths, surpassing the constraints of classical techniques and enabling new precision tests of astrophysical and fundamental physics phenomena.
Simulating the time-dependent Schrödinger equation requires finding the unitary operator that efficiently describes the time evolution. One of the fundamental tools to efficiently simulate quantum dynamics is the Trotter decomposition of the time-evolution operator, and various quantum-classical hybrid algorithms have been proposed to implement Trotterization. Given that some quantum hardware is publicly accessible, it is important to assess the practical performance of Trotterization on such devices. However, a straightforward Trotter decomposition of the Hamiltonian often leads to quantum circuits with large depth, which hinders accurate simulation on devices with limited coherence time. In this work, we propose a hardware-efficient Trotterization scheme for the time evolution of a 3-site, $J=1$ XXX Heisenberg model. By exploiting the symmetry of the Hamiltonian, we derive an effective Hamiltonian that acts on a reduced subspace, significantly lowering the circuit depth after optimization for specific evolution times. This approach can be interpreted as a change of basis in the standard Trotterization scheme. We also test our method on the IBM Quantum device ibmq_jakarta. Combining readout error mitigation and zero-noise extrapolation, we obtain a fidelity of $0.9928 \pm 0.0013$ for the simulation of the time evolution of the Heisenberg model from time $t=0$ to $t=\pi$.
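The convergence of the underlying splitting can be checked classically for this small system. A numpy/scipy sketch of the standard (un-reduced) second-order Trotterization of the 3-site XXX chain — not the paper's symmetry-reduced circuit — verifying the expected 1/n² error decay toward the exact propagator at t = π:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pair_term(i, j):
    # Heisenberg coupling X_i X_j + Y_i Y_j + Z_i Z_j on a 3-site chain
    out = np.zeros((8, 8), dtype=complex)
    for P in (X, Y, Z):
        ops = [I2, I2, I2]
        ops[i] = ops[j] = P
        out += reduce(np.kron, ops)
    return out

H12, H23 = pair_term(0, 1), pair_term(1, 2)
t = np.pi
exact = expm(-1j * t * (H12 + H23))

def strang_error(n):
    dt = t / n
    # symmetric Strang step: e^{-i dt H12/2} e^{-i dt H23} e^{-i dt H12/2}
    step = expm(-1j * dt * H12 / 2) @ expm(-1j * dt * H23) @ expm(-1j * dt * H12 / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)

e8, e32 = strang_error(8), strang_error(32)
# quadrupling the step count should cut the error by roughly 16x
```

The symmetry-based basis change described in the abstract reduces the cost of each such step on hardware; the splitting-error scaling itself is unchanged by a change of basis.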
We construct examples of highly entangled two-dimensional states by exploiting a correspondence between stochastic processes in $d$ dimensions and quantum states in $d+1$ dimensions. The entanglement structure of these states, which we explicitly calculate, can be tuned between area law, sub-volume law, and volume law. This correspondence also enables a sequential generation protocol: the states can be prepared through a series of unitary transformations acting on an auxiliary system. We also discuss the conditions under which these states have local, frustration-free parent Hamiltonians.
We present a quantum computational framework using Hamiltonian Truncation (HT) for simulating real-time scattering processes in $(1+1)$-dimensional scalar $\phi^4$ theory. Unlike traditional lattice discretisation methods, HT approximates the quantum field theory Hilbert space by truncating the energy eigenbasis of a solvable reference Hamiltonian, significantly reducing the number of required qubits. Our approach involves preparing initial states as wavepackets through adiabatic evolution from the free-field theory to the interacting regime. We experimentally demonstrate this state preparation procedure on an IonQ trapped-ion quantum device and validate it through quantum simulations, capturing key phenomena such as wavepacket dynamics, interference effects, and particle production post-collision. Detailed resource comparisons highlight the advantages of HT over lattice approaches in terms of qubit efficiency, although we observe challenges associated with circuit depth scaling. Our findings suggest that Hamiltonian Truncation offers a promising strategy for quantum simulations of quantum field theories, particularly as quantum hardware and algorithms continue to improve.
Chiral graviton modes are elusive excitations arising from the hidden quantum geometry of fractional quantum Hall states. It remains unclear, however, whether this picture extends to lattice models, where continuum translations are broken and additional quasiparticle decay channels arise. We present a framework in which we explicitly derive a field theory incorporating lattice chiral graviton operators within the paradigmatic bosonic Harper-Hofstadter model. Extensive numerical evidence suggests that chiral graviton modes persist away from the continuum, and are well captured by the proposed lattice operators. We identify geometric quenches as a viable experimental probe, paving the way for the exploration of chiral gravitons in near-term quantum simulation experiments.
Exact diagonalization (ED) is a cornerstone technique in quantum many-body physics, enabling precise solutions to the Schrödinger equation for interacting quantum systems. Despite its utility in studying ground states, excited states, and dynamical behaviors, the exponential growth of the Hilbert space with system size presents significant computational challenges. We introduce XDiag, an open-source software package that combines advanced, efficient ED algorithms, both with and without symmetry-adapted bases, with user-friendly interfaces. Implemented in C++ for computational efficiency and wrapped in Julia for ease of use, XDiag provides a comprehensive toolkit for ED calculations. Key features of XDiag include the first publicly accessible implementation of sublattice coding algorithms for large-scale spin system diagonalizations, efficient Lin table algorithms for symmetry lookups, and random-hashing techniques for distributed memory parallelization. The library supports various Hilbert space types (e.g., spin-1/2, electron, and t-J models), facilitates symmetry-adapted block calculations, and automates symmetry considerations. The package is complemented by extensive documentation, a user guide, reproducible benchmarks demonstrating near-linear scaling on thousands of CPU cores, and over 20 examples covering ground-state calculations, spectral functions, time evolution, and thermal states. By integrating high-performance computing with accessible scripting capabilities, XDiag allows researchers to perform state-of-the-art ED simulations and explore quantum many-body phenomena with unprecedented flexibility and efficiency.
Quantum Gaussian channels are fundamental models for communication and information processing in continuous-variable quantum systems. This work addresses both foundational aspects and physical implementation pathways for these channels. Firstly, we provide a rigorous, unified framework by formally proving the equivalence of three principal definitions of quantum Gaussian channels prevalent in the literature, consolidating theoretical understanding. Secondly, we investigate the physical realization of these channels using multiport interferometers, a key platform in quantum optics. The central research contribution is a precise characterization of the channel parameters that correspond to Gaussian channels physically implementable via linear optical multiport interferometers. This characterization bridges the abstract mathematical description with concrete physical architectures. Along the way, we also resolve some questions posed by Parthasarathy (Indian J. Pure Appl. Math. 46, (2015)).
We prove tight upper and lower bounds of $\Theta\left(\tfrac{1}{\epsilon}\left( \sqrt{2^k \log\binom{n}{k} } + \log\binom{n}{k} \right)\right)$ on the number of samples required for distribution-free $k$-junta testing. This is the first tight bound for testing a natural class of Boolean functions in the distribution-free sample-based model. Our bounds also hold for the feature selection problem, showing that a junta tester must learn the set of relevant variables. For tolerant junta testing, we prove a sample lower bound of $\Omega(2^{(1-o(1)) k} + \log\binom{n}{k})$ showing that, unlike standard testing, there is no large gap between tolerant testing and learning.
We study the response of a quantum system induced by a collision with a quantum particle, using the time-independent framework of scattering theory. After deriving the dynamical map for the quantum system, we show that the unitary contribution to the dynamics defines a non-perturbative response function obeying a general fluctuation-dissipation relation. We show that Kubo's formula emerges autonomously in the Born approximation, where the time-dependent perturbation is determined by the particle's evolution through the potential region.
The Bloch equation, which set the foundation for open quantum systems, was conceived by pure physical reasoning. Since then, the Lindblad (GKLS) form of a quantum master equation, its most general mathematical representation, has become an established staple in the open quantum systems toolbox. It allows one to describe a multitude of quantum phenomena; however, its universality comes at a cost -- without additional constraints, the resultant dynamics are not necessarily thermodynamically consistent, and the equation itself lacks an intuitive interpretation. We present a mathematically equivalent form of the Lindblad master equation under a single constraint of strict energy conservation. The ``elemental Bloch'' equation separates the system dynamics into its elemental parts, making an explicit distinction between thermal mixing, dephasing, and energy relaxation, and thus reinstating the physical intuition in the equation. We derive the equation for a many-level system by accounting for all relevant transitions between pairs of levels. Finally, the formalism is illustrated by calculating the fixed point of the dynamics and exploring the conditions for canonical invariance in quantum systems.
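For context, the thermodynamic consistency at stake can be seen in a minimal GKLS example: when the jump rates for energy relaxation and thermal excitation satisfy detailed balance, the fixed point of the dynamics is the Gibbs state. This is a generic single-qubit sketch with illustrative parameter values, not the paper's elemental Bloch form:

```python
import numpy as np

# Single qubit: H = omega |1><1|, bath at temperature T (units hbar = kB = 1)
omega, T, gamma, gamma_phi = 1.0, 0.5, 0.1, 0.05
nbar = 1.0 / (np.exp(omega / T) - 1.0)          # bath thermal occupation

I2 = np.eye(2, dtype=complex)
H = np.diag([0.0, omega]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1|, decay to ground
jumps = [np.sqrt(gamma * (nbar + 1)) * sm,       # energy relaxation (down)
         np.sqrt(gamma * nbar) * sm.conj().T,    # thermal excitation (up)
         np.sqrt(gamma_phi) * np.diag([1.0, -1.0]).astype(complex)]  # dephasing

# Liouvillian superoperator, column-stacking convention:
# vec(A rho B) = (B^T kron A) vec(rho)
L = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
for Lk in jumps:
    LdL = Lk.conj().T @ Lk
    L += np.kron(Lk.conj(), Lk) \
         - 0.5 * (np.kron(I2, LdL) + np.kron(LdL.T, I2))

# Fixed point: eigenvector of L with eigenvalue ~ 0
w, V = np.linalg.eig(L)
rho = V[:, np.argmin(np.abs(w))].reshape(2, 2, order="F")
rho /= np.trace(rho)
# Detailed balance forces Gibbs populations: rho[1,1]/rho[0,0] = exp(-omega/T)
```

Dropping the detailed-balance relation between the up and down rates (e.g., choosing them independently) still yields a perfectly valid Lindblad equation, but the fixed point is no longer thermal, which is precisely the kind of inconsistency the abstract refers to.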
Conformal interfaces play an important role in quantum critical systems. In closed systems, the transmission properties of conformal interfaces are typically characterized by two quantities: one is the effective central charge $c_{\text{eff}}$, which measures the amount of quantum entanglement through the interface, and the other is the transmission coefficient $c_{\text{LR}}$, which measures the energy transmission through the interface. In the present work, to characterize the transmission property of conformal interfaces in open quantum systems, we propose a third quantity $c_{\text{relax}}$, which is defined through the ratio of Liouvillian gaps with and without an interface. Physically, $c_{\text{relax}}$ measures the suppression of the relaxation rate towards a steady state when the system is subject to a local dissipation. We perform both analytical perturbation calculations and exact numerical calculations based on a free fermion chain at the critical point. It is found that $c_{\text{relax}}$ decreases monotonically with the strength of the interface. In particular, $0\le c_{\text{relax}}\le c_{\text{LR}}\le c_{\text{eff}}$, where the equalities hold if and only if the interface is totally reflective or totally transmissive. Our result for $c_{\text{relax}}$ is universal in the sense that $c_{\text{relax}}$ is independent of (i) the dissipation strength in the weak dissipation regime and (ii) the location where the local dissipation is introduced. Compared to the previously known $c_{\text{LR}}$ and $c_{\text{eff}}$ in a closed system, our $c_{\text{relax}}$ shows a distinct behavior as a function of the interface strength, establishing it as a novel characterization of conformal interfaces in open systems and offering insights into critical systems under dissipation.
Anastasiia S. Nikolaeva, Daria O. Konina, Anatolii V. Antipov, Maksim A. Gavreev, Konstantin M. Makushin, Boris I. Bantysh, Andrey Yu. Chernyavskiy, Grigory V. Astretsov, Evgeniy A. Polyakov, Aidar I. Saifoulline, Evgeniy O. Kiktenko, Alexey N. Rubtsov, Aleksey K. Fedorov

A quantum processor, like any computing device, requires the development of both hardware and the necessary set of software solutions, ranging from quantum algorithms to means of accessing quantum devices. As part of the roadmap for the development of the high-tech field of quantum computing in the period from 2020 to 2024, a set of software solutions for quantum computing devices was developed. This software package includes quantum algorithms for solving prototypes of applied tasks, monitoring and benchmarking tools for quantum processors, error suppression and correction methods, tools for compiling and optimizing quantum circuits, and interfaces for remote cloud access. This review presents the key results achieved, most notably the execution of quantum algorithms on a cloud-based quantum computing platform.
Practical implementations of Quantum Key Distribution (QKD) extending beyond urban areas commonly use satellite links. However, the transmission of quantum states through the Earth's atmosphere is highly susceptible to noise, restricting its application primarily to nighttime. High-dimensional (HD) QKD offers a promising solution to this limitation by employing high-dimensionally entangled quantum states. Although experimental platforms for HD QKD exist, previous security analyses were limited to the asymptotic regime and have either relied on impractical measurements or employed computationally demanding convex optimization tasks, restricting the security analysis to low dimensions. In this work, we bridge this gap by presenting a composable finite-size security proof against both collective and coherent attacks for a general HD QKD protocol that utilizes only experimentally accessible measurements. In addition to the conventional, yet impractical, `one-shot' key rates, we also provide a practical variable-length security argument that yields significantly higher expected key rates. This approach is particularly crucial for rapidly changing and turbulent atmospheric conditions, as encountered for free-space and satellite-based QKD platforms.
Quantum machine learning (QML) is an emerging field that investigates the capabilities of quantum computers for learning tasks. While QML models can theoretically offer advantages such as exponential speed-ups, challenges in data loading and the ability to scale to relevant problem sizes have prevented demonstrations of such advantages on practical problems. In particular, the encoding of arbitrary classical data into quantum states usually comes at a high computational cost, either in terms of qubits or gate count. However, real-world data typically exhibits some inherent structure (such as image data) which can be leveraged to load them with a much smaller cost on a quantum computer. This work further develops an efficient algorithm for finding low-depth quantum circuits to load classical image data as quantum states. To evaluate its effectiveness, we conduct systematic studies on the MNIST, Fashion-MNIST, CIFAR-10, and Imagenette datasets. The corresponding circuits for loading the full large-scale datasets are available publicly as PennyLane datasets and can be used by the community for their own benchmarks. We further analyze the performance of various quantum classifiers, such as quantum kernel methods, parameterized quantum circuits, and tensor-network classifiers, and we compare them to convolutional neural networks. In particular, we focus on the performance of the quantum classifiers as we introduce nonlinear functions of the input state, e.g., by letting the circuit parameters depend on the input state.
We study a generic cavity QED setup under conditions where the coupling between the two-level systems and a single bosonic mode is significantly degraded by low-frequency noise. To overcome this problem, we identify pulsed dynamical decoupling strategies that suppress the effects of noise while still allowing for a coherent exchange of excitations between the individual subsystems. The corresponding pulse sequences can be further designed to realize either Jaynes-Cummings, anti-Jaynes-Cummings, or Rabi couplings, as well as different types of cavity-mediated interactions between the two-level systems. A detailed analysis of the residual imperfections demonstrates that this decoupling strategy can boost the effective cooperativity of the cavity QED system by several orders of magnitude and improve the fidelity of quantum-technologically relevant operations accordingly.
We study quantum phases of a fluid of mobile charged non-abelian anyons, which arise upon doping the lattice Moore-Read quantum Hall state at lattice filling $\nu = 1/2$ and its generalizations to the Read-Rezayi ($\mathrm{RR}_k$) sequence at $\nu = k/(k+2)$. In contrast to their abelian counterparts, non-abelian anyons present unique challenges due to their non-invertible fusion rules and non-abelian braiding structures. We address these challenges using a Chern-Simons-Ginzburg-Landau (CSGL) framework that incorporates the crucial effect of energy splitting between different anyon fusion channels at nonzero dopant density. For the Moore-Read state, we show that doping the charge $e/4$ non-abelion naturally leads to a fully gapped charge-$2$ superconductor without any coexisting topological order. The chiral central charge of the superconductor depends on details of the interactions determining the splitting of anyon fusion channels. For general $\mathrm{RR}_k$ states, our analysis of states obtained by doping the basic non-abelion $a_0$ with charge $e/(k+2)$ reveals a striking even/odd pattern in the Read-Rezayi index $k$. We develop a general physical picture for anyon-driven superconductivity based on charge-flux unbinding, and show how it relates to the CSGL description of doped abelian quantum Hall states. Finally, as a bonus, we use the CSGL formalism to describe transitions between the $\mathrm{RR}_k$ state and a trivial period-$(k+2)$ CDW insulator at fixed filling, driven by the gap closure of the fundamental non-abelian anyon $a_0$. Notably, for $k=2$, this predicts a period-4 CDW neighboring the Moore-Read state at half-filling, offering a potential explanation of recent numerical observations in models of twisted MoTe$_2$.
Collision models (CMs) describe an open system interacting in sequence with elements of an environment, termed ancillas. They have been established as a useful tool for analyzing non-Markovian open quantum dynamics based on the ability to control the environmental memory through simple feedback mechanisms. In this work, we investigate how ancilla-ancilla entanglement can serve as a mechanism for controlling the non-Markovianity of an open system, focusing on an operational approach to generating correlations within the environment. To this end, we first demonstrate that the open dynamics of CMs with sequentially generated correlations between groups of ancillas can be mapped onto a composite CM, where the memory part of the environment is incorporated into an enlarged Markovian system. We then apply this framework to an all-qubit CM, and show that non-Markovian behavior emerges only when the next incoming pair of ancillas are entangled prior to colliding with the system. On the other hand, when system-ancilla collisions precede ancilla-ancilla entanglement, we find the open dynamics to always be Markovian. Our findings highlight how certain qualitative features of inter-ancilla correlations can strongly influence the onset of system non-Markovianity.
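The memoryless baseline against which such non-Markovian effects are contrasted is the standard collision model with fresh, uncorrelated ancillas, which can be sketched in a few lines. The partial-SWAP interaction and the collision angle below are illustrative assumptions, not the paper's specific setup:

```python
import numpy as np

# Partial-SWAP collision unitary on (system ⊗ ancilla).
# SWAP is Hermitian with SWAP^2 = I, so exp(-i theta SWAP) = cos(theta) I - i sin(theta) SWAP.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
theta = 0.3
U = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * SWAP

rho_s = np.array([[0, 0], [0, 1]], dtype=complex)   # system starts excited
anc = np.array([[1, 0], [0, 0]], dtype=complex)     # fresh ground-state ancillas

pops = [rho_s[1, 1].real]
for _ in range(100):
    joint = np.kron(rho_s, anc)                      # each ancilla is uncorrelated
    joint = U @ joint @ U.conj().T                   # one collision
    # Partial trace over the ancilla (second tensor factor)
    rho_s = np.einsum('iaja->ij', joint.reshape(2, 2, 2, 2))
    pops.append(rho_s[1, 1].real)
# Memoryless relaxation: the excited population decays geometrically,
# by a factor cos(theta)^2 per collision
```

Because every ancilla arrives in the same uncorrelated state and is discarded after one collision, no information ever flows back to the system, so the reduced dynamics is Markovian; introducing correlations (e.g., entanglement) between incoming ancillas, as studied in the abstract, is what opens the door to memory effects.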
Metastable qubits in atomic systems can enable large-scale quantum computing by simplifying hardware requirements and adding efficient erasure conversion to the pre-existing toolbox of high-fidelity laser-based control. For trapped atomic ions, the fundamental error floor of this control is given by spontaneous Raman and Rayleigh scattering from short-lived excited states. We measure spontaneous Raman scattering rates out of a metastable $D_{5/2}$ qubit manifold of a single trapped $^{40}$Ca$^+$ ion illuminated by 976 nm light that is -44 THz detuned from the dipole-allowed transition to the $P_{3/2}$ manifold. This supports the calculation of error rates from both types of scattering during one- and two-qubit gates on this platform, thus demonstrating that infidelities $<10^{-4}$ are possible.
Combinatorial optimization problems have wide-ranging applications in industry and academia. Quantum computers may help solve them by sampling from carefully prepared Ansatz quantum circuits. However, current quantum computers are limited by their qubit count, connectivity, and noise. This is particularly restrictive when considering optimization problems beyond the quadratic order. Here, we introduce Ansätze based on an approximate quadratization of high-order Hamiltonians that incurs no qubit overhead. The price paid is a loss in the quality of the noiseless solution. Crucially, this approximation yields shallower Ansätze which are more robust to noise than the standard QAOA Ansatz. We show this through simulations of systems of 8 to 16 qubits with variable noise strengths. Furthermore, we also propose a noise-aware Ansatz design method for quadratic optimization problems. This method implements only part of the corresponding Hamiltonian by limiting the number of layers of SWAP gates in the Ansatz. We find that for both problem types, under noise, our approximate implementation of the full problem structure can significantly enhance the solution quality. Our work opens a path to enhance the solution quality that approximate quantum optimization achieves on noisy hardware.
Spin models featuring infinite-range, homogeneous all-to-all interactions can be efficiently described due to the existence of a symmetry-restricted Hilbert subspace and an underlying classical phase space structure. However, when the permutation invariance of the system is weakly broken, such as by long- but finite-range interactions, these tools become mathematically invalid. Here we propose to approximately describe these scenarios by considering additional many-body subspaces according to the hierarchy of their coupling to the symmetric subspace, defined by leveraging the structure of irreducible representations (irreps) of the group $SU(2)$. We put forward a procedure, dubbed ``irrep distillation," which defines these additional subspaces to minimize their dimension at each order of approximation. We discuss the validity of our method in connection with the occurrence of quantum many-body scars, benchmark its utility by analyzing the dynamical and equilibrium phase transitions, outline its phenomenology, and compare its use-cases against other approximations of long-range many-body systems.
Reduced basis methods provide an efficient way of mapping out phase diagrams of strongly correlated many-body quantum systems. The method relies on using the exact solutions at select parameter values to construct a low-dimensional basis, from which observables can be efficiently and reliably computed throughout the parameter space. Here we show that this method can be generalized to driven-dissipative Markovian systems allowing efficient calculations of observables in the transient and steady states. A subsequent distillation of the reduced basis vectors according to their explained variances allows for an unbiased exploration of the most pronounced parameter dependencies indicative of phase boundaries in the thermodynamic limit.
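The offline/online structure of a reduced basis method can be sketched on a toy problem. The paper's contribution concerns Liouvillians of driven-dissipative systems; the illustrative example below instead uses ground states of a small closed-system Hamiltonian, which is the classic setting of the method, with all model choices being assumptions for the sketch:

```python
import numpy as np

# Parameterized 3-site transverse-field Ising Hamiltonian H(g) = H0 + g*H1
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

H0 = -(kron3(Z, Z, I2) + kron3(I2, Z, Z))                        # ZZ couplings
H1 = -(kron3(X, I2, I2) + kron3(I2, X, I2) + kron3(I2, I2, X))   # transverse field

def ground_state(g):
    w, V = np.linalg.eigh(H0 + g * H1)
    return w[0], V[:, 0]

# Offline stage: exact "snapshot" solutions at a few training parameters
snapshots = np.column_stack([ground_state(g)[1] for g in (0.2, 1.0, 2.0)])
B, _ = np.linalg.qr(snapshots)        # orthonormal reduced basis (8 x 3)

# Online stage: project H(g*) into the 3-dimensional basis and diagonalize
g_star = 0.6
H_rb = B.conj().T @ (H0 + g_star * H1) @ B
e_rb = np.linalg.eigh(H_rb)[0][0]
e_exact = ground_state(g_star)[0]
# Variational: e_rb >= e_exact, and close to it when the parameter
# dependence of the state is smooth
```

The online stage only ever diagonalizes a matrix of the size of the reduced basis, which is what makes sweeping a phase diagram cheap once the snapshots are in hand; the generalization in the abstract replaces ground states with transient and steady states of a Liouvillian.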
Quantum technologies promise advancements in information processing and communication, including random number generation (RNG). Using Bell inequalities, a user of quantum RNG hardware can certify that the values provided by an untrusted device are truly random. This problem has been extensively studied for von Neumann and min-entropy as measures of randomness. However, in this paper, we analyze the feasibility of such verification for Shannon entropy. We investigate how the usability of various Bell inequalities differs depending on the presence of noise. Moreover, we present the benefits of certifying Shannon entropy compared to min-entropy.
We derive a generalized quantum Langevin equation and its fluctuation-dissipation relation describing the quantum dynamics of a tagged particle interacting with a medium (environment), where both the particle and the environment are driven by an external time-dependent (e.g. oscillating) field. We specialize to the case of a charged tagged particle interacting with a bath of charged oscillators under an external AC electric field, although the results are much more general and can be applied to any type of external time-dependent field. We derive the corresponding quantum Langevin equation, which obeys a modified fluctuation-dissipation relation in which the external field plays an explicit role. Using these results, we illustrate their usefulness by deriving a new form of the quantum Nyquist noise for voltage fluctuations in electrical circuits under AC conditions (finite frequency). This form is the most general to date, since it also accounts for the response of the heat bath (e.g. lattice ions) to the applied AC electric field in the GHz-THz region, of relevance for 5G/6G wireless technologies. The generalized quantum fluctuation-dissipation relation for driven systems can also find other applications, ranging from quantum noise in quantum optics to quantum computing with trapped ions.
Classical metastability manifests as noise-driven switching between disjoint basins of attraction and a slowing down of relaxation; quantum systems such as qubits and Rydberg atoms exhibit analogous behavior through collective quantum jumps and long-lived Liouvillian modes with a small spectral gap. Though any metastable mode is expected to decay after a finite time, stochastic switching persists indefinitely. Here, we elaborate on the connection between switching dynamics and quantum metastability through the lens of large deviation principles, spectral decomposition, and quantum-jump simulations. Specifically, we distinguish the trajectory-level noise-induced metastability (stochastic switching) from the spectrum-level deterministic metastability (small Liouvillian gap) in a Markovian open quantum system with bistability. Without stochastic switching, whether a small spectral gap leads to slow relaxation depends on the initial state. In contrast, with switching, the memory of initial conditions is quickly lost, and the relaxation is limited by the rare switching between the metastable states. Consistent with the exponential scaling of the Liouvillian gap with system size, the switching rates conform to the Arrhenius law, with the inverse system size serving as the nonequilibrium analog of temperature. Using the dynamical path integral and the instanton approach, we further extend the connection between the quasipotential functional and the probabilities of rare fluctuations to the quantum realm. These results provide new insights into quantum bistability and the relaxation processes of strongly interacting, dissipative quantum systems far from the thermodynamic limit.
Quantum metrology is a promising application of quantum technologies, enabling the precise measurement of weak external fields at a local scale. In typical quantum sensing protocols, a qubit interacts with an external field, and the amplitude of the field is estimated by analyzing the expectation value of a measured observable. Sensitivity can, in principle, be enhanced by increasing the number of qubits within a fixed volume, thereby maintaining spatial resolution. However, at high qubit densities, inter-qubit interactions induce complex many-body dynamics, resulting in multiple oscillations in the expectation value of the observable even for small field amplitudes. This ambiguity reduces the dynamic range of the sensing protocol. We propose a method to overcome this limitation by adopting a quantum circuit learning framework, in which a parameterized quantum circuit is trained to approximate a target function by optimizing its parameters. In our method, after the qubits interact with the external field, we apply a sequence of parameterized quantum gates and measure a suitable observable. By optimizing the gate parameters, the expectation value is trained to exhibit a monotonic response within a target range of field amplitudes, thereby eliminating multiple oscillations and enhancing the dynamic range. This method offers a strategy for improving quantum sensing performance in dense qubit systems.