FR-PHENO-2025-02
A general approach to quantum integration
of cross sections in high-energy physics
Abstract
We present universal building blocks for the quantum integration of generic cross sections in high-energy physics. We make use of Fourier Quantum Monte Carlo Integration as implemented in Quantinuum’s Quantum Monte Carlo Integration engine to provide an extendable methodology for generating efficient circuits that can implement generic cross-section calculations, providing a quadratic speed-up in root mean-squared error convergence with respect to classical Monte Carlo integration. We focus on a concrete example of a decay process to illustrate our work.
1 Introduction
Research in high-energy physics (HEP) aims at exploring fundamental physics at tiny scales. To probe these small scales, elementary particles are collided with very high energies in order to produce other particles. The prime example of such an experiment is the Large Hadron Collider (LHC), where protons are currently collided at a centre-of-mass (COM) energy of $\sqrt{s} = 13.6$ TeV. In this way, properties of known particles can be studied and new particles, such as the Higgs boson [1, 2], can be discovered.
In order to extract useful information from experimental data, theoretical predictions are needed for comparison. At the heart of theoretical predictions is the concept of a cross section. Cross sections relate to the probability of a certain scattering process occurring in some collider experiment. This means that all theoretical predictions are obtained through the computation of cross sections, which consist of either analytical or numerical integration. As explained later, the method of choice for integrating a cross section in HEP is numerical Monte Carlo (MC) integration (MCI). The computational resources needed to provide theoretical predictions for collider experiments are enormous; for the LHC, they are of the order of billions of CPU hours per year [3, 4, 5].
It is therefore particularly interesting to look at whether quantum processing units can provide an alternative to CPU or GPU technology for computing theoretical predictions in HEP. In the last few years, several works have been devoted to exploring the potential of quantum computing for computing theoretical predictions in perturbative theory [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. The quantum analogue of classical MCI is known as Quantum Monte Carlo Integration (QMCI) [18, 19], and this quantum algorithm utilises the Quantum Amplitude Estimation [20] (QAE) algorithm as a key subroutine, providing a quadratic speed up in root mean-squared error (RMSE) convergence with respect to classical MCI.
In Ref. [21], MP investigated the use of QMCI to integrate elementary scattering processes. In this reference, a variant of the original QAE algorithm was used, namely Iterative Quantum Amplitude Estimation [22] (IQAE). This seminal work focused on simplified versions of scattering cross sections and explored both one- and two-dimensional integrals. Following this, Refs. [23, 24] explored the same idea but using Quantum Fourier Iterative Amplitude Estimation [25] for the quantum integration, which relies on the Fourier Quantum Monte Carlo Integration [19] (FQMCI) method (described later in Section 2.3). In Ref. [25], a small-scale (small number of qubits) parametrised quantum circuit (PQC) is trained to prepare a quantum state representing a specific cross-section integrand, and decompose this state into its Fourier series. Each term in the Fourier series is then integrated separately using QAE, in the same manner as in FQMCI. A potential drawback of this method is that it may not scale efficiently to systems with larger numbers of qubits (as would be required for multi-dimensional integrands to be calculated, for example), due to the trainability issues associated with variational methods in quantum machine learning [26, 27, 28, 29, 30, 31]. Indeed, preparing arbitrary probability distributions on a quantum computer is thought to be computationally hard in the general case, and the topic remains an active field of research.
In the present work, we go significantly beyond what has been done to date by presenting a methodology for the computation of generic cross sections in HEP, applicable to arbitrary dimensionality. In particular, the method of quantum integration makes use of the FQMCI method implemented in Quantinuum’s QMCI engine [32] (developed in part by IW). While research using the QMCI engine has so far been focused on financial applications [33], it is particularly interesting to consider other potential fields such as HEP, where the integrals studied possess different challenges to those in finance e.g., high dimensionality, more complex integrands, and non-trivial integral limits.
More concretely, in this article we investigate the computation of one- and two-dimensional cross-section integrations for scattering processes using the QMCI engine. We demonstrate how one can decompose the integrand for an arbitrary, multi-dimensional cross section into products of constituent building blocks, which are single-variable terms. We describe methods for implementing these individual building blocks on a quantum computer, using the in-built functionality of the QMCI engine. We also discuss how kinematical constraints can be applied to the integrals using this in-built functionality.
In Section 2, we begin by defining the problem and explain why numerical MCI is the preferred method for classical cross-section integration. After this, we introduce FQMCI, Quantinuum’s QMCI engine, and discuss how we can use the engine to perform QMCI for generic cross-section calculations. We then discuss in detail some potential state-preparation methods for generating relativistic Breit-Wigner distributions—one of the key building blocks discussed—in Section 3. Some example applications demonstrating the applicability of our methodology are then discussed in Section 4. Finally, Section 5 contains a discussion of the work, an outlook, and concluding remarks.
2 Definition of the problem and approaches
2.1 Cross sections in high-energy physics
The cross section, typically denoted by $\sigma$, is a central concept in HEP. It relates to the probability of a given scattering process, $a\,b \to 1\,2\cdots$, occurring, where the letters represent elementary particles. This value can be compared to experimental measurements. In a general, abstract way, a cross section can be expressed as
\[ \sigma = \frac{1}{F} \int \mathrm{d}\Phi \; |\mathcal{M}|^2 . \tag{1} \]
It is worth mentioning that for cross-section calculations, and thus for the calculations in this article, the integration is always performed over the real domain. The flux factor, $F$, characterises the rate at which the initial-state particles ($a$ and $b$) collide, and is a real number. It is defined as
\[ F = 4 E_a E_b \, v_{ab} , \tag{2} \]
where $E_a$ and $E_b$ denote the energies of the initial particles, respectively, while $v_{ab}$ is the relative velocity between them. For fixed energies in the initial state, as is the case in this article, $F$ is a real constant, and thus can simply be factored out. The phase-space term $\mathrm{d}\Phi$ encompasses four-momentum conservation ($\delta^{(4)}(p_a + p_b - \sum_f p_f)$) as well as the real integration over the kinematic variables of the final-state particles ($f$). Finally, the matrix element $\mathcal{M}$ encodes the transition probabilities between the initial and final states. The matrix element itself is constructed from complex numbers and spinors, but when it is complex-conjugate squared [as in Eq. 1] it is always a real number. The underlying theory of interactions between the particles is encoded in the matrix element. In particular, when the interaction between the initial and final state is mediated via virtual particles (not necessarily identical to the external ones), so-called propagators will be present in the matrix element. Their general form, arising in $|\mathcal{M}|^2$, reads
\[ P(s) = \frac{1}{(s - M^2)^2 + M^2 \Gamma^2} , \tag{3} \]
where $M$ and $\Gamma$ represent the mass and the decay width of the massive particle, respectively. It is worth mentioning that the decay width of an unstable particle is a measure of its decay probability per unit time. It is the inverse of the particle's lifetime, and in that respect, an important parameter for characterising a particle's dynamics. For massless particles, the propagator in Eq. 3 reduces to $1/s^2$, by setting $M = \Gamma = 0$. In this case, the variable $s$ is a kinematic invariant which has units of energy/mass squared. This means that the general form of the integral to be performed in a cross-section calculation [as in Eq. 1] reads
\[ \sigma = \frac{1}{F} \int \mathrm{d}s_1 \cdots \mathrm{d}s_n \, \frac{\sum_{S \in I} c_S \prod_{i \in S} s_i^{p_i}}{\prod_{j=1}^{m} \left[ (s_j - M_{k_j}^2)^2 + M_{k_j}^2 \Gamma_{k_j}^2 \right]} , \tag{4} \]
where $n$ and $m$ are the number of integration variables and propagators, respectively, so that $m \le n$. In the present notation, the variables of integration, $s_i$, can either be kinematic invariants or angles related to the final-state particles. The number of integration variables scales as $3 n_{\rm f} - 4$ for a scattering process with $n_{\rm f}$ final-state particles. The number of propagator terms depends on the process considered, which is defined by the initial- and final-state particles. The index set $I$ contains all combinations of all the indices, which are denoted by the subsets of indices $S$. In this way, all possible monomials can be represented. The index, $p_i$, represents the power of each term, and is a positive integer. The labels $k_j$ denote all possible internal massive particles, i.e., all particles that are neither in the initial nor final state. For example, for the process that will be studied later (see Fig. 8), the W boson is an internal particle. Finally, $c_S$ are real constants to ensure full generality. There are as many constants as there are monomials. The challenge is thus to integrate a multi-dimensional integral that has a highly peaked structure due to the propagators. (The propagators describe intermediate particles that can become resonant when the invariant mass of the four-momentum is close to the mass of the particle. They are therefore sometimes referred to as resonances.) Note that the form of Eq. 4 can be made even more general by allowing sums/differences of random variables in the denominator, instead of simply $s_j$. However, given that this case only arises at very high multiplicities, we defer the treatment of this even more general case to future work.
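To make the challenge concrete, the following Python sketch (purely classical; the values chosen for $M$ and $\Gamma$ are illustrative, roughly W-boson-like, and are our own choices for this example) evaluates the propagator of Eq. 3 on and off resonance, showing how sharply peaked the integrand is:

```python
import numpy as np

def bw_propagator(s, M, Gamma):
    """Squared propagator term of Eq. 3 (relativistic Breit-Wigner shape)."""
    return 1.0 / ((s - M**2)**2 + M**2 * Gamma**2)

# Illustrative values, roughly corresponding to the W boson (GeV units).
M, Gamma = 80.4, 2.1

on_peak  = bw_propagator(M**2, M, Gamma)        # s = M^2: the maximum
off_peak = bw_propagator((2 * M)**2, M, Gamma)  # far off resonance

# The integrand is sharply peaked: the ratio spans several orders of magnitude.
ratio = on_peak / off_peak
```

A sampler that draws points uniformly in $s$ would therefore place almost all of them in regions where the integrand is negligible, which is what motivates the variance-reduction techniques discussed in Section 2.2.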
To be more concrete, a process such as the production of four leptons in association with four bottom quarks (which is dominated by the production of two top quarks in association with two bottom quarks) [34] will have $3 \times 8 - 4 = 20$ integration variables, and many different propagators. Without going into the details of further complications related to considering higher orders in perturbation theory, this example shows the level of complexity of state-of-the-art theoretical computations in HEP. (The interested reader can look at Ref. [35] and references therein to get an overview of the current frontier in theoretical predictions for HEP.) It also explains why the computational resources needed for the LHC's physics programme are so large. (To be exact, the computational time quoted in the introduction is for the generation of so-called events, representing the underlying distribution to be integrated. These events are thus closely related to the integration of the cross section.)
A further complication when computing cross sections is the appearance of event selections in experimental analyses. Typically, the detectors of an experiment cannot physically cover the entire phase space. Therefore, the integration in Eq. 1 has to be restricted to a smaller domain of integration, corresponding to that physically accessible by the experiment. This is done by setting minimal and maximal values (setting the cut values) to observable quantities, such as the rapidity or the transverse momentum of the final particles. Events with particles that do not pass these requirements are not detected. From the theoretical side, these observable quantities depend on the final-state momenta, which themselves depend on the integration variables. The cross section therefore becomes
\[ \sigma = \frac{1}{F} \int \mathrm{d}\Phi \; |\mathcal{M}|^2 \, \Theta_{\rm cut} , \tag{5} \]
where the cut function, $\Theta_{\rm cut}$, represents these restricted observables. Formally, the function depends on the variables of integration, as it depends on the final-state momenta, which are themselves built from the variables of integration. However, an analytical form for $\Theta_{\rm cut}$ as a function of the variables of integration may not necessarily exist; we will not investigate this aspect in detail in the rest of the article, but we will briefly discuss it.
We now move on to discuss classical approaches for calculating such integrals.
2.2 Classical approach
The most common approach to compute integrals such as the ones presented in the previous section is by reverting to numerical MCI techniques. There are several reasons for this: first, as alluded to above, closed forms for the cut functions do not always exist. This means that numerical techniques must be used. Second, while analytical integration is possible for simple scattering processes, with increasing multiplicity the calculations rapidly become intractable. Third, reverting to analytical calculations makes things more difficult to automate, as there are typically many processes that are relevant for a given experiment such as the LHC. The LHC in particular also has the drawback that it is a proton-proton collider, meaning that calculations rely on knowledge of the parton distribution functions for the proton, functions that provide the probability for a parton (an elementary particle such as a quark or gluon) to be emitted. These functions are not known from first principles, and must be extracted from data. As they are typically defined in terms of numerical grids, this makes them more naturally embeddable within a numerical framework. Finally, while one is primarily interested in computing the cross section, this quantity is also measured as a function of other experimental observables, such as angles between particles or observables related to energies. When using MCI, computing the cross section as a function of other observables comes at no extra cost: at each evaluation, the weight used for the cross-section evaluation can also be binned in a histogram according to the value of the observable of interest. While this binning is a computationally cheap operation, for analytical calculations the same information would require additional, more complicated calculations.
The RMSE of classical MCI scales as $1/\sqrt{N}$, where $N$ is the number of samples. Improvements on basic MC techniques therefore aim to reduce the overall variance. This is done either by modifying the original integrand so that the new function has a smaller variance, or by sampling the integration such that the distribution of sampled points is no longer randomly uniform across the entire domain of integration [36].
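The $1/\sqrt{N}$ behaviour is easy to check empirically. The sketch below (using a toy integrand of our own choosing, not one from this article) estimates the RMSE of plain MC at two sample sizes; quadrupling $N$ should roughly halve the error:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n_samples):
    """Plain MC estimate of E[X^2] for X ~ Uniform(0, 1) (true value 1/3)."""
    x = rng.random(n_samples)
    return np.mean(x**2)

def rmse(n_samples, n_repeats=200):
    """Empirical RMSE of the MC estimator over repeated runs."""
    est = np.array([mc_estimate(n_samples) for _ in range(n_repeats)])
    return np.sqrt(np.mean((est - 1.0 / 3.0) ** 2))

# 1/sqrt(N) scaling: quadrupling N should roughly halve the RMSE.
r1, r2 = rmse(1_000), rmse(4_000)
```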
The first type of technique is known as importance sampling. For HEP, propagators are mapped to functions with lower variance. (See Ref. [36] for a pedagogical explanation in the case of a single propagator.) Given that the propagator structure is known a priori (thanks to knowledge of the Feynman diagrams that contribute to the process), it is possible to set up an integration routine such that each of the resonant structures is singled out. This leads to the so-called multi-channelling integration technique.
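As an illustration of importance sampling for a single propagator, the standard substitution $s = M^2 + M\Gamma\tan\rho$ maps the Breit-Wigner denominator to a constant (see Ref. [36]). The sketch below applies it; the numerator and all parameter values are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

M, Gamma = 80.4, 2.1             # illustrative resonance parameters (GeV)
s_min, s_max = 0.0, (3 * M)**2   # illustrative integration range in s

def integrand_numerator(s):
    """Hypothetical smooth numerator (a monomial, as in Eq. 4)."""
    return s

# Substituting s = M^2 + M*Gamma*tan(rho) gives the Jacobian
# ds = M*Gamma / cos^2(rho) drho, which cancels the Breit-Wigner denominator
# (s - M^2)^2 + M^2*Gamma^2 = M^2*Gamma^2 / cos^2(rho) exactly, leaving a
# flat weight 1/(M*Gamma) per unit rho.
rho_min = np.arctan((s_min - M**2) / (M * Gamma))
rho_max = np.arctan((s_max - M**2) / (M * Gamma))

def estimate(n):
    rho = rng.uniform(rho_min, rho_max, n)
    s = M**2 + M * Gamma * np.tan(rho)
    weights = integrand_numerator(s) / (M * Gamma)
    return (rho_max - rho_min) * np.mean(weights)

flat_estimate = estimate(50_000)
```

Sampling uniformly in $\rho$ concentrates the points exactly where the resonance is, so the variance of the weights is small even though the integrand in $s$ is sharply peaked.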
The second type of approach is typically referred to as the adaptive MC technique [37]. The idea is that, through an iterative procedure, points are sampled according to the distribution to be integrated. This means that the peak structure is probed more efficiently.
For state-of-the-art integration in HEP, combinations of both approaches are in reality utilised. Nonetheless, the scaling of the overall RMSE is still $1/\sqrt{N}$, motivating the exploration of quantum algorithms for sampling, which are known to have better scaling. (It is fair to say that Markov-chain MC (MCMC) and quasi-MC approaches have so far received little attention as methods for cross-section integration in HEP. While the former seems promising [38, 39, 40, 41], the latter is known to be more appropriate for calculations at higher orders of perturbation theory [42, 43, 44]. The reason is that quasi-MC methods are not appropriate for integrands with highly peaked structures and asymmetric restrictions on the phase space—as is the case for cross-section calculations.)
In order to fix notation for later use and clarity, we here define the problem of MCI in a more rigorous way. MCI calculates the expectation of a function, $f$, of a continuous random variable, $X$, with probability density function (PDF), $p$, defined as
\[ \mathbb{E}[f(X)] = \int f(x) \, p(x) \, \mathrm{d}x . \tag{6} \]
We will hereafter refer to $f$ as the function applied. We can straightforwardly extend this to functions, $f$, of two continuous random variables, $X$ and $Y$, with joint PDF, $p(x, y)$,
\[ \mathbb{E}[f(X, Y)] = \iint f(x, y) \, p(x, y) \, \mathrm{d}x \, \mathrm{d}y . \tag{7} \]
Indeed, this can be extended further to an arbitrary number of dimensions, with the integration performed over either all or subsets of the given variables. However, for the purposes of this article it will only be necessary to focus on expectations of separable functions of at most both $X$ and $Y$, i.e., products of functions, $f(x) g(y)$, with expectation
\[ \mathbb{E}[f(X) g(Y)] = \iint f(x) \, g(y) \, p(x, y) \, \mathrm{d}x \, \mathrm{d}y . \tag{8} \]
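A minimal classical sketch of Eq. 8, with a joint PDF and monomials chosen purely for illustration (independent uniforms, so the exact value is easy to check):

```python
import numpy as np

rng = np.random.default_rng(2)

# Joint PDF chosen as a product of independent uniforms on [0, 1] purely for
# illustration; Eq. 8 itself holds for any joint PDF p(x, y).
n = 100_000
x = rng.random(n)
y = rng.random(n)

# Separable function applied: f(x) = x^2, g(y) = y (hypothetical monomials).
estimate = np.mean(x**2 * y)

# For this joint PDF the exact value is E[X^2] * E[Y] = (1/3) * (1/2) = 1/6.
```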
We now move on to discuss the equivalent approach in quantum computing.
2.3 Quantum approach
The quantum analogue of classical MCI, QMCI, is a quantum algorithm that returns a numerical estimate for the value of some (possibly multidimensional) integral, in the same manner as for classical MCI.
QMCI provides a quadratic speed up in the convergence of the RMSE of the estimate as a function of the number of samples, as compared to classical MCI. Therefore, the RMSE scales as $1/N$, compared to $1/\sqrt{N}$ classically. The source of the quantum advantage in QMCI arises from the use of QAE [20]—a generalisation of Grover's Search algorithm [45]—which is the key subroutine of QMCI.
The QMCI algorithm comprises three main steps, as described in Ref. [32], where for demonstration purposes we only consider a single random variable, $X$ (however, this is straightforwardly extended to multiple variables).
1. A quantum state, $|p\rangle$, encoding a probability distribution, $p$, in the values of its complex amplitudes is prepared by applying a quantum circuit, $\mathcal{P}$, such that
\[ \mathcal{P} |0\rangle = \sum_x \sqrt{p(x)} \, |x\rangle \equiv |p\rangle . \tag{9} \]
2. An observable function, $f$, is applied coherently to the state based on a quantum arithmetic circuit, $\mathcal{R}$, with the expectation value of interest, $\mathbb{E}[f(X)]$, then encoded in the amplitude of an additional ancilla qubit as
\[ \mathcal{R} \, |p\rangle |0\rangle = \sum_x \sqrt{p(x)} \, |x\rangle \left( \sqrt{1 - f(x)} \, |0\rangle + \sqrt{f(x)} \, |1\rangle \right) . \tag{10} \]
3. The expectation value of interest—which is equal to the probability of measuring the ancilla in the 1 state—is estimated using QAE.
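The three steps above can be mimicked classically with statevector arithmetic, which is a useful sanity check of the encoding; the distribution and function below are illustrative choices of our own, not those used later in the article:

```python
import numpy as np

# Classical sketch of the amplitude encoding in Eqs. 9 and 10 (statevector
# arithmetic only; no quantum SDK is assumed).
n_qubits = 4
N = 2**n_qubits

# Step 1: a discretised distribution p(x), encoded as amplitudes sqrt(p(x)).
x = np.linspace(0.0, 1.0, N)
p = np.exp(-0.5 * ((x - 0.5) / 0.15) ** 2)
p /= p.sum()
amplitudes = np.sqrt(p)

# Step 2: the function applied must take values in [0, 1]; f(x) = x^2 on
# [0, 1] already does. The ancilla |1> amplitude at support point x is
# sqrt(p(x) * f(x)).
f = x**2
anc1_amplitudes = amplitudes * np.sqrt(f)

# Step 3: the probability of measuring the ancilla in |1> equals E[f(X)];
# QAE estimates exactly this probability, quadratically faster than sampling.
prob_one = np.sum(anc1_amplitudes**2)
expectation = np.sum(p * f)
```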
2.3.1 Fourier Quantum Monte Carlo Integration
In Ref. [19], Herbert proposed the FQMCI methodology as an efficient (low depth) means of performing QMCI that retains the full quadratic advantage without requiring costly quantum arithmetic. The idea stems from the fact that the expectation of a sine or cosine of a random variable, e.g., $\mathbb{E}[\sin(\omega X)]$, is a straightforward quantity to estimate on a quantum computer (merely requiring a simple circuit corresponding to a bank of rotation gates, rather than any complicated quantum arithmetic). In FQMCI the function applied—under the assumption that it obeys certain smoothness conditions (continuous in value and first derivative, and second and third derivatives piecewise continuous and bounded)—is extended as a periodic, piecewise function, and this is then decomposed as a Fourier series. Then, because each term in the series is a cosine or sine, their expectations can be easily estimated individually, and these then recombined.
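The bookkeeping can be illustrated classically for an example function applied of our own choosing, $f(x) = x^2$ on $[0, 1]$, whose even periodic extension has the cosine series $x^2 = 1/3 + \sum_k 4(-1)^k/(k\pi)^2 \cos(k\pi x)$: the expectation is recovered by summing the expectations of the individual cosine terms.

```python
import numpy as np

# FQMCI sketch (classical): decompose the function applied into a Fourier
# series and recombine the expectation from expectations of cosines.
N = 2**6
x = np.linspace(0.0, 1.0, N)
p = np.full(N, 1.0 / N)     # uniform discretised distribution, for simplicity

direct = np.sum(p * x**2)   # direct expectation E[f(X)]

# Each E[cos(k*pi*X)] below would be a separate, shallow QAE run in FQMCI.
n_terms = 20
fourier = 1.0 / 3.0         # constant (k = 0) term of the cosine series
for k in range(1, n_terms + 1):
    coeff = 4.0 * (-1.0)**k / (k * np.pi)**2
    fourier += coeff * np.sum(p * np.cos(k * np.pi * x))
```

In the quantum algorithm, each cosine expectation is estimated individually by QAE and the results are recombined exactly as above, with the truncation order controlling the residual bias.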
This methodology is also easily extended to the bivariate case, where there are expectations of products of functions applied of the form $f(X) g(Y)$. In this case the individual Fourier series for $f$ and $g$ are multiplied together. One then ends up with univariate sine and cosine terms, alongside bivariate products of sines and cosines, which in the latter case can be reduced to sums of single bivariate sines or cosines using trigonometric identities. Thus the expectation can again be estimated by calculating expectations of sines and cosines, and then recombining. However, we note that this differs from the univariate case as multiple expectations are required to be calculated, and thus the resources required to perform the calculation are greater. In addition, the circuits are larger, because the expectations are for bivariate trigonometric functions (e.g., $\sin(\omega_1 X) \cos(\omega_2 Y)$), requiring an additional bank of rotation gates to represent the second variable.
It is important to note for what follows that this technique can also be extended to an arbitrary multivariate case, i.e., to expectations of products of functions applied of the form $f(X) g(Y) h(Z) \cdots$. However, for products of more than two functions this technique is unlikely to be particularly efficient, due to the polynomial increase in the number of terms required to compute, and the additional increase in size of the circuits involved.
2.3.2 Quantinuum’s Quantum Monte Carlo Integration engine
Quantinuum’s QMCI engine [32] is the world’s first fully integrated platform for performing QMCI, with each of the steps discussed previously implemented within the engine based on state-of-the-art methods. There are a number of optimised, in-built features that will become important for the later discussion of how to implement generic cross-section calculations.
First, the engine can take as input any circuit, $\mathcal{P}$, provided by the user, and then this circuit is straightforwardly passed along the rest of the QMCI pipeline. It also contains an in-built state-preparation library containing circuits that represent some common distributions, such as the uniform and Gaussian distributions. Second, based on the FQMCI method, the engine provides protocols to implement efficient Fourier-decomposition circuits, $\mathcal{R}$, for a variety of functions applied, particularly corresponding to moments or products of moments of random variables, such as for $\mathbb{E}[X]$, $\mathbb{E}[X^2]$, $\mathbb{E}[XY]$, etc. Third, the engine contains a number of in-built algorithms for performing QAE, in general optimised for performance. Fourth, the engine is able to implement limits on integral calculations, based on the use of thresholding operations, implemented via the enhanced $\mathcal{P}$-builder functionality. Thresholding operations correspond to binary operations based on some threshold value, $x'$, that enable the estimation of quantities such as $\mathbb{E}[\mathbb{1}_{X \le x'} f(X)]$, where the indicator function, $\mathbb{1}_{A \le x'}$, with $A$ either a random variable or an expression, is defined as
\[ \mathbb{1}_{A \le x'} = \begin{cases} 1 & \text{if } A \le x' , \\ 0 & \text{otherwise} . \end{cases} \tag{11} \]
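Classically, the thresholding operation of Eq. 11 amounts to multiplying by an indicator before taking the expectation; a minimal sketch with an illustrative distribution, function and threshold of our own choosing:

```python
import numpy as np

# Classical sketch of the thresholding operation of Eq. 11: restrict an
# expectation to the region X <= x' using an indicator function.
N = 2**5
x = np.linspace(0.0, 2.0, N)
p = np.full(N, 1.0 / N)          # uniform discretised PDF, for simplicity

x_threshold = 1.0
indicator = (x <= x_threshold).astype(float)

# E[1_{X <= x'} * f(X)] with f(x) = x; in the QMCI engine the comparison is
# realised coherently as a circuit acting on the |x> register.
restricted = np.sum(p * indicator * x)
full = np.sum(p * x)
```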
Fifth, the user is able to perform a QMCI calculation such that the RMSE of the final estimate is upper bounded by a chosen value. Finally, the engine is equipped with a resource mode that can determine the quantum resources (in terms of qubit numbers and gate counts/depths) required to perform a given QMCI calculation to a desired RMSE. The resource calculator outputs either: with a view to running calculations in the NISQ era—where applying two-qubit gates is the main bottleneck for hardware—the total number and depth of two-qubit gates across all circuits, alongside the same information for the largest circuit that is run, and the same information for the total number and depth of all gates; or, with a view to running calculations in the fault-tolerant era with full quantum error correction—where applying $T$ gates will be the main bottleneck for hardware—the total number and depth of $T$ gates across all circuits, alongside the same information for the largest circuit that is run. In both cases, the total number of qubits required for the largest circuit is also output. Overall, these features will be key to later discussions in this article.
We note that how to split the integrand for the expectation value into the product of function applied, $f$, and probability distribution, $p$, is in general arbitrary. For QMCI applications in finance, which is so far the only application that has been previously studied using the QMCI engine [33], there is a clear notion of calculating some expectation of the random variable for use in a financial calculation, such as the mean, $\mathbb{E}[X]$, or the second moment, $\mathbb{E}[X^2]$. However, for general applications (i.e., integrands) there is freedom in how to perform this split, and we take advantage of this notion in the next section.
2.4 Implementing a generic cross-section calculation
In the introduction, we have mentioned so-called generic building blocks for the computation of cross sections. To illustrate this, we provide here two concrete examples. From Eq. 1 and Eq. 4, a cross section with only a single integration variable reduces to
\[ \sigma = \frac{1}{F} \int_0^{s_{\max}} \mathrm{d}s \, \frac{s^p}{(s - M^2)^2 + M^2 \Gamma^2} , \tag{12} \]
where $s_{\max}$ is the maximal value of $s$. In order to cast the above equation into the more rigorous form of the MCI problem as in Eq. 6, noting that we are free to choose how to split the integrand for the expectation value into the product of function applied, $f$, and probability distribution, $p$, it follows that we can choose the function applied to be
\[ f(s) = s^p \tag{13} \]
(i.e., the numerator). The probability distribution to sample from can then be associated to the propagator function (i.e., the denominator), such that
\[ p(s) = \frac{1}{(s - M^2)^2 + M^2 \Gamma^2} . \tag{14} \]
The reason we chose this particular factorisation is twofold. First, as mentioned previously, moments or products of moments of random variables are straightforward to implement as the function applied using the QMCI engine's existing functionality, and the RMSE scaling of the corresponding Fourier series is likely favourable compared to those of more complex functions. Second, because the probability distribution corresponds to a real distribution that has physical significance (being used to model resonance behaviour in general), we can envision that future development of, for example, specific state-preparation methods tailored to this distribution (note that in this article we only consider general state-preparation methods for preparing the BW distribution, and we defer research into more tailored methods to future work) would not only be useful for this particular application, but also for future research directions in HEP (e.g., for quantum-computing applications where resonance behaviours are required to be modelled).
We can identify the general form of Eq. 14 as an (unnormalised) relativistic Breit-Wigner (BW) distribution [46] (hereafter simply denoted by BW distribution). As mentioned previously, state preparation of arbitrary probability distributions is a difficult problem. However, in this case we need only prepare one specific distribution, and as will be discussed in the following section, it will suffice to prepare this distribution a single time (for a given COM energy) for each of the (small number of) relevant massive particles in the Standard Model, i.e., the W and Z bosons, the top (t) quark, and the Higgs boson. In the present example, the two building blocks are the function, $f(s) = s^p$, which is a monomial, and the BW distribution, $p(s)$.
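Whatever state-preparation method is eventually used, the target amplitudes can be precomputed classically. The sketch below builds the discretised, truncated BW amplitudes $\sqrt{p_i}$; the Z-boson-like parameters and grid choices are our own illustrative choices:

```python
import numpy as np

# Sketch: classically precompute the amplitudes sqrt(p_i) of a discretised,
# truncated (and then normalised) relativistic Breit-Wigner distribution, as
# would be loaded by a state-preparation circuit.
n_qubits = 6
N = 2**n_qubits

M, Gamma = 91.19, 2.49          # illustrative Z-boson-like parameters (GeV)
s = np.linspace(0.0, (2 * M)**2, N)

bw = 1.0 / ((s - M**2)**2 + M**2 * Gamma**2)
p = bw / bw.sum()               # normalise the truncated, discretised PDF
amplitudes = np.sqrt(p)         # target amplitudes of the state |p>

# Basic sanity check: the peak should sit at the grid point nearest s = M^2.
peak_index = int(np.argmax(p))
```

With only $2^6$ uniformly spaced support points the narrow peak is poorly resolved, which is precisely the discretisation issue discussed in Section 3.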
While not the focus of this article, it is also worth briefly discussing the case of massless particles where, as discussed previously, the propagator term then reduces to
\[ \frac{1}{s^2} . \tag{15} \]
Such terms would then be absorbed into the monomials in the numerator, possibly with negative exponents. In the case of negative exponents, because the functions applied for FQMCI must obey certain smoothness conditions, as detailed in Section 2.3.1, one would not be able to exploit FQMCI. Instead, an approach similar to those of prior works would be required, where the entire integrand is prepared as the probability distribution and QAE is then run directly on this state (equivalent in our language to the "function applied" being constant).
In the same way, a general expression for a two-dimensional integration reads
\[ \sigma = \frac{1}{F} \int_0^{s_{1,\max}} \mathrm{d}s_1 \int_0^{s_{2,\max}} \mathrm{d}s_2 \, \frac{s_1^{p} \, s_2^{q}}{\left[ (s_1 - M_1^2)^2 + M_1^2 \Gamma_1^2 \right] \left[ (s_2 - M_2^2)^2 + M_2^2 \Gamma_2^2 \right]} , \tag{16} \]
where the term $s_1^{p} s_2^{q}$ means that the numerator can be made of any monomials which are a product of $s_1$ and $s_2$ raised to various powers. It implies that, in order to match the form of Eq. 8, the separable functions applied should be
\[ f(s_1) = s_1^{p} , \qquad g(s_2) = s_2^{q} . \tag{17} \]
Then, the probability distribution becomes
\[ p(s_1, s_2) = \frac{1}{\left[ (s_1 - M_1^2)^2 + M_1^2 \Gamma_1^2 \right] \left[ (s_2 - M_2^2)^2 + M_2^2 \Gamma_2^2 \right]} . \tag{18} \]
In the case that we consider, it should be noted that the two BW distributions factorise in the sense that each propagator is a function of a single random variable. This property is particularly useful as it means that each propagator can be associated to one particular random variable, allowing the propagator product to be cast as a simple product of orthogonal propagator terms. This is key, as it means that our approach is easily extendable to multiple dimensions, as would be required in the general case.
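The factorisation property can be checked numerically: for a product-form joint PDF, the expectation of a separable function applied is the product of one-dimensional expectations. A sketch with illustrative parameters of our own choosing:

```python
import numpy as np

# Sketch of the factorised two-dimensional case: the joint PDF is a product
# of two independent Breit-Wigner factors, one per random variable, so each
# marginal can be prepared (and its expectation computed) independently.
N = 2**5

def discretised_bw(M, Gamma, s_max, n_points):
    s = np.linspace(0.0, s_max, n_points)
    p = 1.0 / ((s - M**2)**2 + M**2 * Gamma**2)
    return s, p / p.sum()

s1, p1 = discretised_bw(80.4, 2.1, 4 * 80.4**2, N)     # W-like factor
s2, p2 = discretised_bw(91.19, 2.49, 4 * 91.19**2, N)  # Z-like factor

# The joint expectation of a separable function f(s1)*g(s2) = s1 * s2
# factorises into a product of one-dimensional expectations.
joint = np.outer(p1, p2)
lhs = np.sum(joint * np.outer(s1, s2))
rhs = np.sum(p1 * s1) * np.sum(p2 * s2)
```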
The strategy is thus to consider as the function applied the monomials in the numerator, while the propagator terms are treated as the probability distribution to be sampled from. Analogously, the basic building blocks are simply monomials and BW distributions. As highlighted here, this approach can be extended to arbitrary dimensions.
The QMCI engine provides efficient Fourier decomposition circuits for moments or products of moment of random variables, as discussed previously, and allows the user to provide the probability distribution to be sampled from as an input. In addition, when constraints on the phase space are required to be imposed on the integration, as given by the cut function, , in Eq. 5, then for simple constraints such as sums, maximum/minimum etc., the engine’s built-in thresholding operations can be used to achieve this. Thus—providing there are efficient ways to prepare BW distributions as quantum states, something discussed in the next section—in principle the QMCI engine can be used to perform quantum integration of a generic cross section in HEP.
It is worth noting that the cut function, $\Theta_{\rm cut}$, is a function of the integration variables, and as discussed previously, possibly non-analytical. This means that in order to implement the restriction of phase space, its value should be numerically evaluated at each phase-space point. Such an approach has been followed in Section 4 to set the boundaries of integration of a given variable that depends on another variable of integration; in that case, however, the expression was analytical, and the quantum arithmetic was implemented via the QMCI engine's enhanced $\mathcal{P}$-builder. For the case of some non-analytical form, one possibility is to implement the required quantum arithmetic by constructing the equivalent classical circuit, and then making it reversible. However, we note that this approach is unlikely to be efficient, and indeed, in the future there may be better ways of constructing such circuits. We leave this aspect for future work.
3 State preparation of relativistic Breit-Wigner distributions
We see that to be able to perform generic cross-section calculations using the QMCI engine, we require a methodology for preparing BW distributions on a quantum register for the massive propagators W, Z and t, as these constitute one of the building blocks of the method. (In practice, completeness would also require a propagator for the Higgs boson; however, due to the narrow width of the resonance, generating this peak with sufficient resolution is highly challenging, given current qubit counts and methods. At high-energy colliders, all leptons are typically considered as massless, alongside all quarks apart from the top quark, as their masses are negligible at these energy scales. In certain cases, the mass of the bottom quark is also included in theoretical calculations, but with the width assumed to be zero.) As discussed briefly in the introduction, state preparation of probability distributions for QMCI is an active research topic, and several methods have been proposed in the literature (see, e.g., Refs. [47, 48, 49, 50, 51, 52, 53]); however, there is currently no standard methodology that is commonly used.
In this section, we propose two different methods for preparing BW distributions, and discuss the benefits and drawbacks of each. We note that this is by no means meant to be a definitive nor exhaustive list, as the purpose of this section is not to demonstrate the best possible solution for preparing BW distributions, but rather to give some examples to illustrate that this can be done in different ways. Indeed, in the future, as new state-preparation methods are developed, these can be utilised to generate larger-scale circuits, improve on the accuracy of the generated distributions, and reduce the quantum resources required to prepare them.
Before we continue the discussion, however, we must first address some of the challenges inherent to state preparation for QMCI. These challenges generally arise because only a limited number of qubits, , is available (due to the limitations of current hardware) to represent the true continuous distribution as a discretised, truncated distribution. In the QMCI engine, the amplitudes of the state, , are interpreted as the support points of the discretised distribution, with the number of support points, . These are uniformly spaced with spacing size, , spanning the range of the support of the distribution, . In practice, the limited number of points, alongside the uniform binning, leads to limited resolution on the distribution of interest, as well as the need to truncate what could be an infinite distribution. For a finite number of qubits, these limitations introduce a number of systematic errors into a QMCI calculation (footnote: it is important to note that many of these systematic errors are also applicable to classical MCI, as will be touched upon). Reference [33] discusses these systematic errors in detail, and so here we merely reproduce the definitions of the ones relevant for this discussion:
• Discretisation error: this arises from the finite resolution of sampling for the random variable, due to the limited number of discrete support points used to encode the continuous random variable. It is negligible for classical MCI with (effectively) unlimited numbers of bits, but significant for QMCI due to the limited number of qubits available, therefore
(19) where , , , and .
• Normalisation error: this arises because truncation and discretisation distort the probability distribution, meaning that the mass of the PDF loaded as a quantum state may not be equal to unity, therefore
(20) where are the true normalised probabilities.
• Thresholding error: this occurs specifically for the QMCI engine when considering the thresholding operations used to impose limits of integration. To estimate , the QMCI engine defines a threshold with an inclusive upper bound, but if , it instead approximates the threshold using , therefore
(21)
• State-preparation error: this error is distinct from the others discussed, as it does not arise from the limited number of qubits, but rather from the imperfect method of preparing the distribution as a quantum state, i.e., there is a difference between the ideal quantum state and the approximated state that is actually prepared. The state-preparation error can be defined in a variety of ways. In this article, we consider the cumulative-distribution-function (CDF) mean-squared error (CMSE) between the probabilities of the prepared state, p̃, and the true probabilities, p, as the metric representing the state-preparation error
(22) CMSE = (1/2ⁿ) Σᵢ (F̃ᵢ − Fᵢ)², where F̃ᵢ and Fᵢ are the empirical CDFs of p̃ and p, respectively. In addition, we also consider the Jensen-Shannon divergence (JSD) as a metric for determining how accurately the state is prepared
(23) JSD(p̃, p) = ½ D_KL(p̃ ‖ m) + ½ D_KL(p ‖ m), where m = ½(p̃ + p).
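To make the state-preparation metrics concrete, the CMSE and JSD of Eqs. 22 and 23 can be evaluated classically for a discretised Breit-Wigner distribution. The following is a minimal sketch: the grid range, the approximate W mass and width, and the small random perturbation standing in for an imperfectly prepared state are all illustrative assumptions, not values taken from this article.

```python
import numpy as np

def breit_wigner(s, M, Gamma):
    """Unnormalised relativistic Breit-Wigner density in s = E_COM^2."""
    return 1.0 / ((s - M**2) ** 2 + (M * Gamma) ** 2)

n = 6                                   # qubits -> 2**n support points
M_W, Gamma_W = 80.4, 2.1                # GeV, approximate values
s = np.linspace(55.0**2, 105.0**2, 2**n)

p_true = breit_wigner(s, M_W, Gamma_W)
p_true /= p_true.sum()                  # exactly discretised probabilities

# Stand-in "prepared" state: the true probabilities plus a small perturbation.
rng = np.random.default_rng(0)
p_prep = np.clip(p_true + 1e-3 * rng.standard_normal(p_true.size), 0, None)
p_prep /= p_prep.sum()

# CMSE: mean-squared error between the empirical CDFs (cf. Eq. 22).
F_true, F_prep = np.cumsum(p_true), np.cumsum(p_prep)
cmse = np.mean((F_true - F_prep) ** 2)

# JSD: symmetrised KL divergence against the mixture m (cf. Eq. 23).
def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

m = 0.5 * (p_true + p_prep)
jsd = 0.5 * kl(p_true, m) + 0.5 * kl(p_prep, m)
```

With base-2 logarithms the JSD is bounded between 0 and 1, which makes it a convenient dimensionless accuracy metric across different qubit counts.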
All of these systematic errors can be considered relevant for the cross-section calculations considered later in Section 4.
Considering the discretisation and normalisation errors in particular, for a given number of qubits, , there will be a sub-range of the full potential support of the distribution where these systematic errors are sufficiently small (below a chosen bound) to have a negligible impact on a QMCI calculation. When considering cross-section calculations for a given experiment, the integration is only ever performed from to some . Therefore, in practice, it will not be necessary to prepare the BW distribution to span the full potential support range [ to ], which would require an infinite number of qubits to sufficiently suppress these errors. Thus, whilst qubit numbers are still limited, for practical applications it will suffice to generate a range of circuits that prepare BW distributions with supports spanning a range of different COM energies squared, each prepared using enough qubits to suppress these systematic errors.
It is also worth mentioning that while hadron colliders collide particles at energies of up to tens of TeV ( for the LHC), due to the suppression of the parton distribution functions at high energies, the accessible energy range is effectively reduced to a few hundred GeV.
To this end, and for illustration purposes, in this article we consider preparing BW distributions for the W, Z, and t resonances, corresponding to a COM energy, , and for just the W to . The value, , was chosen such that the circuits for the cross-section calculations in Sections 4.2 and 4.3 (which only use the W-boson propagator) were sufficiently small to be classically simulable using state-vector methods, while was chosen in order to be able to generate distributions covering all of the resonances.
Figures 1 and 2 plot the discretisation and normalisation errors as a function of the number of qubits used to prepare the distributions for the various propagators. Considering the distribution first, with the aim of allowing for classical simulations later, while keeping systematic errors relatively suppressed (i.e., minimising the number of qubits in the state-preparation circuit), we selected , based on Fig. 1, which corresponds to errors in the range . For the distributions we chose , such that errors are in the range.
[Figures 1 and 2: discretisation and normalisation errors as a function of the number of qubits for (a) the W boson, (b) the Z boson, and (c) the t quark.]
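The kind of qubit scan shown in Figs. 1 and 2 can be reproduced classically. The sketch below discretises a Breit-Wigner density on a 2^n-point grid and compares it against the analytic (arctan) integral to estimate the discretisation error as a function of n, together with the truncation (normalisation) error; the mass, width, and support range are illustrative assumptions.

```python
import numpy as np

def bw(s, M, G):
    return 1.0 / ((s - M**2) ** 2 + (M * G) ** 2)

def bw_integral(s, M, G):
    """Analytic antiderivative of the unnormalised Breit-Wigner in s."""
    return np.arctan((s - M**2) / (M * G)) / (M * G)

M_W, G_W = 80.4, 2.1                      # GeV, approximate values
s_lo, s_hi = 55.0**2, 105.0**2            # illustrative truncation range

exact = bw_integral(s_hi, M_W, G_W) - bw_integral(s_lo, M_W, G_W)

# Discretisation error: relative mismatch between the 2**n-point Riemann sum
# and the analytic integral over the truncated range.
disc_err = {}
for n in range(3, 13):
    grid = np.linspace(s_lo, s_hi, 2**n)
    ds = grid[1] - grid[0]
    disc_err[n] = abs(bw(grid, M_W, G_W).sum() * ds - exact) / exact

# Normalisation (truncation) error: BW mass lost outside the range
# (the integral of the BW over all s is pi / (M * Gamma)).
norm_err = 1.0 - exact * (M_W * G_W) / np.pi

# Smallest qubit count with discretisation error below a chosen bound.
n_min = min(n for n, e in disc_err.items() if e < 1e-2)
```

The discretisation error drops rapidly once the grid spacing resolves the resonance width, which is exactly the behaviour used to select the qubit counts quoted in the text.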
We now move on to discuss the two different methodologies investigated for preparing BW distributions on quantum computers.
3.1 Variational method
The first methodology is based on a variational, quantum machine-learning approach, as motivated by Ref. [32]. Here a PQC [54] is trained to generate an approximation of the distribution of interest.
Briefly summarising the method, the loss function that is minimised in the training using a classical optimiser is the norm (for ) between the target state, , and the generated state, ,
(24) L = ‖ |ψ_target⟩ − |ψ(θ)⟩ ‖_p
The PQC ansatz chosen, , is a maximally expressive hardware-efficient ansatz [55] consisting of an -qubit circuit formed of layers, with all angles initialised to , giving variational parameters as
(25) |ψ(θ)⟩ = ∏_{l=1}^{L} [ W_l ( ⊗_{q=1}^{n} R_y(θ_{l,q}) ) ] |0⟩^⊗n
where are fixed blocks of CX gates. The training parameters to optimise are the angles of the rotations across layers
(26) θ = { θ_{l,q} : l = 1, …, L; q = 1, …, n }
Thus the aim is to learn a such that
(27) |ψ(θ*)⟩ ≈ |ψ_target⟩, with θ* = argmin_θ L
As is well understood from the literature, variational methods are plagued by issues of trainability; specifically, the gradients of the cost function vanish exponentially in the size of the system (the number of qubits), a phenomenon known as barren plateaus. Barren plateaus have been shown to be directly related to the expressivity of the circuit ansätze used in the training [56]. In general, there is therefore a balance between allowing the circuit to be expressive enough to contain the desired solution, whilst also limiting the effects of barren plateaus. Given this, it is thought that variational approaches to state preparation will likely not be scalable to large system sizes. However, for the BW distributions considered in this article—which are all small-scale circuits—variational methods should give good results in practice. Indeed, no particular effort is made here to limit the expressivity of the circuit ansatz (for example by using alternative ansätze such as those discussed in Refs. [57, 58]), as this was not considered necessary for these small systems.
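As a concrete illustration of the variational approach, the sketch below trains a small hardware-efficient ansatz (layers of Ry rotations followed by a fixed chain of CX gates, simulated directly with NumPy) to approximate the square-root amplitudes of a discretised Breit-Wigner target, minimising the L2 norm with SciPy. The qubit count, layer count, physical parameters, and the use of plain BFGS (rather than the basin-hopping optimiser used in the article) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bw_amplitudes(n, M, G, s_lo, s_hi):
    """Square-root amplitudes of a discretised Breit-Wigner target."""
    s = np.linspace(s_lo, s_hi, 2**n)
    p = 1.0 / ((s - M**2) ** 2 + (M * G) ** 2)
    return np.sqrt(p / p.sum())

def apply_ry(state, theta, q, n):
    """Ry(theta) on qubit q of an n-qubit real statevector."""
    st = state.reshape([2] * n)
    a, b = np.take(st, 0, axis=q), np.take(st, 1, axis=q)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.stack([c * a - s * b, s * a + c * b], axis=q).reshape(-1)

def apply_cx(state, ctrl, targ, n):
    """CX: flip the target axis within the control = 1 subspace."""
    st = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[ctrl] = 1
    t = targ - 1 if targ > ctrl else targ   # axis index inside the subspace
    st[tuple(idx)] = np.flip(st[tuple(idx)], axis=t).copy()
    return st.reshape(-1)

def ansatz_state(thetas, n, layers):
    """Hardware-efficient ansatz: Ry layers interleaved with CX chains."""
    state = np.zeros(2**n)
    state[0] = 1.0
    for layer in thetas.reshape(layers, n):
        for q in range(n):
            state = apply_ry(state, layer[q], q, n)
        for q in range(n - 1):
            state = apply_cx(state, q, q + 1, n)
    return state

n, layers = 4, 4
target = bw_amplitudes(n, 80.4, 2.1, 55.0**2, 105.0**2)
loss = lambda th: np.linalg.norm(ansatz_state(th, n, layers) - target)

rng = np.random.default_rng(1)
result = minimize(loss, 0.01 * rng.standard_normal(layers * n), method="BFGS")
```

For 4 qubits this toy optimisation converges quickly; the trainability problems discussed above only bite as the qubit and parameter counts grow.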
3.2 Fourier expansion method
The second methodology that we investigate for preparing BW distributions involves a promising recent technique by Rosenkranz et al. [59], where a multivariate probability distribution is decomposed into a Fourier series. In our case we need only consider univariate state preparation.
To again briefly summarise the method, the target function, , is approximated by mapping it to the interval (footnote: specifically, it is represented in the interval and then a periodic extension is applied to map it onto the interval ) and expanding it in a finite Fourier series of the form
(28) f̃(x) = Σ_{k=−d}^{d} c_k e^{2πikx}
where is the degree of the expansion and are the Fourier coefficients. An interpolation is used such that matches at a number of interpolation points, and the coefficients of the interpolant are calculated using the fast Fourier transform. The basis functions of the Fourier expansion up to a chosen are then prepared efficiently on a quantum register using a block encoding. The weighted sum of basis functions is prepared using a linear combination of unitaries (LCU) [60], leading to a block-encoding circuit for .
The potential benefit of this method is that, in contrast to the previous method, it should scale well to larger system sizes. However, one important consideration is that the method relies on a considerable number of ancilla qubits, and is a probabilistic state preparation; this is because the LCU method requires additional ancilla qubits, and to prepare the correct state, all of these ancilla qubits must be post-selected on the zero state, which has success probability . In practice, can be increased using amplitude amplification, although this of course requires additional quantum resources. It is also worth noting that, in practice, the chosen value of dictates the final accuracy of the prepared distribution (i.e., larger means a smaller state-preparation error and JSD).
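The classical front end of the Fourier expansion method can be sketched as follows: sample the target on a uniform grid, obtain Fourier coefficients with the FFT, and truncate to degree d. The quantum part (the block encoding of the basis functions and their LCU combination) is not shown; the target function, ranges, and degrees below are illustrative assumptions.

```python
import numpy as np

def bw(s, M=80.4, G=2.1):
    return 1.0 / ((s - M**2) ** 2 + (M * G) ** 2)

# Rescale the illustrative support [s_lo, s_hi] onto x in [0, 1).
s_lo, s_hi = 55.0**2, 105.0**2
f = lambda x: bw(s_lo + x * (s_hi - s_lo))

def fourier_interpolant(f, d, n_interp=256):
    """Degree-d truncated Fourier series with FFT-computed coefficients."""
    x = np.arange(n_interp) / n_interp
    c = np.fft.fft(f(x)) / n_interp               # c_k for exp(2*pi*i*k*x)
    k = np.fft.fftfreq(n_interp, 1.0 / n_interp)  # integer frequencies
    keep = np.abs(k) <= d                          # truncate to degree d
    def approx(xs):
        return np.real(np.exp(2j * np.pi * np.outer(xs, k[keep])) @ c[keep])
    return approx

xs = np.linspace(0.0, 1.0, 512, endpoint=False)
rel_err = {d: np.max(np.abs(fourier_interpolant(f, d)(xs) - f(xs))) / np.max(f(xs))
           for d in (4, 16, 64)}
```

The relative error collapses once d is large enough for the modes to resolve the resonance peak, which is the classical counterpart of the statement that larger d means a smaller state-preparation error.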
3.3 Results and comparison
Both methods discussed in the previous sections were used to produce circuits preparing BW distributions for the W, Z, and t resonances, based on the ranges and corresponding numbers of qubits in the circuit ansatz. To compare the different methods, the metrics considered are the resources required for the circuit, in terms of the total number of qubits, , alongside the number of 1- and 2-qubit gates, and , respectively, and the accuracy of the prepared distribution, in terms of the state-preparation error, , and the JSD. In this section we explicitly discuss the -qubit circuits corresponding to , in order to compare the two methods for all resonances. Results for the other COM energy range are also briefly discussed later.
PQCs were trained based on the variational method, as discussed previously. In all cases, the metric used in the cost function was the norm between the target and generated state. The basin-hopping algorithm [61] implementing the Broyden–Fletcher–Goldfarb–Shanno method [62] was used for the minimisation. A coarse hyperparameter search was performed during training in order to optimise the performance, based on varying the number of layers (parameters) in the ansätze alongside another hyperparameter related to the cost function (the overall scaling factor).
The Fourier expansion method was then analysed. For each of the resonances we produced a circuit that was roughly equivalent to the best one produced by the variational method in terms of the accuracy of the prepared distribution. In addition, we also produced another circuit where the accuracy of the prepared distribution was significantly greater than in the variational case (i.e., with an increased value of ). This was done to demonstrate that the method enables the preparation of larger-scale circuits while maintaining accuracy.
Figure 3 gives the generated distributions for each resonance using the variational method, while Figs. 4–6 give the generated distributions using the Fourier method for each of the resonances, respectively. Table 1 lists the metrics for the distributions produced for all COM ranges for both methods. Figure 12 in Section A.1 also plots the distributions generated by the variational and Fourier expansion methods for the , circuits.
[Figure 3: BW distributions generated using the variational method for (a) the W boson, (b) the Z boson, and (c) the t quark.]
[Figures 4–6: BW distributions generated using the Fourier expansion method for the W boson, Z boson, and t quark, respectively; each figure contains panels (a) and (b).]
Method | Res | Accuracy | JSD | ||||||
Variational | W | Optimised | N/A | ||||||
Fourier | W | Matched () | |||||||
Variational | W | Optimised | N/A | ||||||
Z | Optimised | N/A | |||||||
t | Optimised | N/A | |||||||
Fourier | W | Matched () | |||||||
More () | |||||||||
Z | Matched () | ||||||||
More () | |||||||||
t | Matched () | ||||||||
More () |
In order to compare the two approaches, we discuss only the W-boson results, as the same arguments apply to the others. We note that the Fourier expansion method allows much more accurate states to be prepared than the variational method, and, as mentioned previously, its real benefit is that it is scalable to larger numbers of qubits. However, we found that the resources required were significantly larger for circuits of accuracy equivalent to those produced using the variational method, due to the large number of additional ancilla qubits required. In addition, because the Fourier expansion method is a probabilistic state-preparation method, in practice, without using techniques such as amplitude amplification, such small success probabilities will require the circuit to be run several times before the desired state is actually prepared (successful post-selection).
It is also worth noting that, for the circuits (with distributions plotted in Fig. 12 in Section A.1), the variational method produces slightly more accurate results than the Fourier expansion method, and again requires significantly fewer resources. This makes sense, given that we expect variational methods to perform well for small-scale circuits; however, one would expect the performance to decrease as the system size increases, which is indeed what we see when considering the circuits discussed previously.
As an aside, consider an example of a potentially more realistic use case, corresponding to an energy scale comparable to a modern collider experiment, where it would be essential to strongly suppress systematic errors: for , for example, we can see from Fig. 7 that we would need around for systematic errors to be suppressed to around . For circuits of this size, variational methods are unlikely to perform well, due to the trainability issues previously discussed. In contrast, the Fourier expansion method should still be effective for generating accurate distributions at this circuit size, although the resources required would likely be considerable.
As discussed previously, state preparation of probability distributions is an active research topic, and future developments should allow for the preparation of accurate BW distributions with reduced circuit sizes, paving the way for less resource-intensive QMCI for HEP. We now move on to discuss an example application of the QMCI engine for calculating a cross section in HEP, making use of the -qubit, , variationally trained circuit for the W-boson BW distribution discussed in this section.
[Figure 7: qubit counts required to suppress systematic errors for (a) the W boson, (b) the Z boson, and (c) the t quark.]
4 Example applications
To illustrate the generality of the approach sketched above, we will focus on one particular example, the decay of the tau lepton into three fermions
(29) τ⁻(p₁) → ν_τ(p₂) + e⁻(p₃) + ν̄_e(p₄)
where and are the initial and final-state four-momenta, respectively. In this case, the matrix element squared can be written as (footnote: the expression has been obtained with the help of the FeynArts [63, 64] and FormCalc [65] packages, to lowest order in perturbation theory)
(30)
where the invariants, and , are equal to and , respectively. The electroweak coupling is denoted by , while the weak mixing angle is . The mass and width of the W boson are denoted by and , respectively, while is the mass of the tau lepton (footnote: it is worth noting that the expression for the matrix element in Eq. 30 assumes a massless electron, which is a common approximation in HEP; this is because the mass of the electron () is negligible with respect to a collision energy ( for the LHC, for example)). It is worth pointing out that the expression in Eq. 30 contains a W-boson propagator term [see Eq. 3]. This originates from the fact that the electron and anti-electron neutrino are produced through the decay of an intermediate W boson, as depicted in the Feynman diagram in Fig. 8.
[Figure 8: Feynman diagram of the tau decay proceeding through an intermediate W boson.]
For a three-particle phase space, the integration measure reads [36]
(31)
where is a solid angle, and a rotation angle. The integration boundaries are and , respectively, where is the available energy of the system, i.e. for a tau decay. For what follows, we will discuss this particular example and some modifications of it (either by simplifying it or rendering it more complex).
4.1 One-dimensional integration
To demonstrate the methodology, for simplicity, we start by considering only the numerator of Eq. 30. By neglecting the constant factors and integrating only over the two invariants and , one obtains the following integral
(32)
which has analytical value
In order to implement such a calculation using the QMCI engine, and noting that the integral over trivially gives , we encode the calculation as the following one-dimensional integration with a two-dimensional cut function, ,
(33)
where , if , and , otherwise. If we compare this to the general form of the expectation calculated using the QMCI engine in Eq. 7, and identify and (this identification will remain for all subsequent examples), we observe that there is freedom in the integrand separation, in terms of what to define as the probability distribution, , and the function applied, . In this case, for simplicity, we choose to set , where is the uniform distribution, and or (footnote: note that this choice does not leverage the capabilities of the QMCI engine in terms of decomposing the integrand into building blocks, as discussed previously; however, that is not the purpose of this initial example).
As discussed previously in Section 2.3.2, the QMCI engine contains powerful, in-built functionality for performing such a calculation; the cut function can be straightforwardly implemented using thresholding operations, whilst the engine contains efficient methods for calculating both and . The QMCI engine can thus be used to build efficient, low depth circuits that give estimates of both expectation values on the RHS of Eq. 33.
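As a purely classical cross-check of the decomposition in Eq. 33 (a uniform p, a product-of-moments f, and an indicator cut function c), one can estimate the same kind of expectation with ordinary MC sampling. The integrand, cut, and limits below are illustrative stand-ins, not the article's elided expressions.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 0.0, 1.0                      # assumed integration range per invariant
N = 200_000

x = rng.uniform(a, b, N)             # samples from the uniform p
y = rng.uniform(a, b, N)
cut = (x + y <= 1.0)                 # illustrative phase-space cut c(x, y)
f = x * y                            # illustrative product-of-moments f(x, y)

# E_p[f * c] times the phase-space volume estimates the restricted integral.
estimate = np.mean(f * cut) * (b - a) ** 2
exact = 1.0 / 24.0                   # closed form of this toy integral
```

The QMCI engine plays the role of this sampler on a quantum register, with the cut implemented via thresholding operations rather than a boolean mask.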
For this example, and the ones described later in Sections 4.2 and 4.3, we considered three different levels of expected precision, corresponding to upper bounds on the expected RMSE of the final estimator of the order , , and , respectively (footnote: the per-mille precision represents the typical accuracy of classical MCI calculations in HEP), not accounting for the systematic errors related to the state preparation discussed in Section 3. For illustration purposes, we run noiseless simulations of the circuits for the case of precision.
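These precision targets translate directly into sample/query budgets: classical MCI needs of order 1/ε² samples to reach an RMSE of ε, whereas QAE-based QMCI needs of order 1/ε oracle queries (constant factors omitted). A quick sketch:

```python
# RMSE targets and the corresponding order-of-magnitude budgets:
# classical MCI uses ~1/eps^2 samples; QMCI uses ~1/eps oracle queries.
epsilons = [1e-1, 1e-2, 1e-3]
classical_samples = {eps: round(1 / eps**2) for eps in epsilons}
quantum_queries = {eps: round(1 / eps) for eps in epsilons}
speedup = {eps: classical_samples[eps] / quantum_queries[eps] for eps in epsilons}
```

At the per-mille level this is a factor-of-a-thousand reduction in queries, which is the quadratic speed-up referred to throughout the article.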
We considered a decay-like process where , giving an analytical value (footnote: given that this quantity is a proxy for a cross section, it is not physical, and thus not necessarily positive). We set for each of the dimensions, and , respectively. The circuits to prepare the uniform distributions used to model the probability distributions for each dimension were loaded directly in the QMCI engine using the in-built state-preparation library, and these were then combined to construct the bivariate probability distribution, .
In order to demonstrate the validity of the approach, we ran noiseless simulations using the QMCI engine to numerically estimate the RHS of Eq. 33, which we denote as , different times, based on an expected precision of . The numerical experiments, and the ones described later in Sections 4.2 and 4.3, were carried out using Qulacs' noiseless state-vector simulator [66]. The QAE algorithm used in all cases was an in-built implementation of the maximum-likelihood QAE algorithm [67], which is optimised to make use of all available samples in order to maximise performance.
Figure 9 gives a histogram of the error, , where we can see that the results are as expected, with all values except for a single outlier falling within the expected precision of .
[Figure 9: histogram of the error for the one-dimensional integration example.]
Using the in-built resource mode of the QMCI engine (described in Section 2.3.2), we also calculated the resources required to perform this calculation to the three different levels of expected precision. Table 2 gives these resources for both NISQ and fault-tolerant compilation. We note that the number of qubits required is in general small [], and in particular is well within the reach of current quantum hardware (this assumes that the systematic errors arising from using -qubit distributions do not excessively affect calculations—something that is not observed in this case for precision). However, the number of operations required for NISQ compilation almost certainly means that logical qubits would be required; considering that the largest circuit for an expected precision of has CX gates, a simple rule-of-thumb calculation suggests one would need a machine with a two-qubit gate fidelity of approximately —well out of reach of current quantum hardware. We note that the number of T-gate operations for fault-tolerant execution is also high; for example, for a precision, as would be required to compete with current classical MCI methods, the largest circuit requires T gates. However, it is worth noting that these gate counts are likely to decrease significantly in the future as more research into optimised synthesis is carried out (see Ref. [32], Sec. 10.3 for a detailed discussion of this topic); these values should therefore be regarded as loose upper bounds, and treated with a degree of uncertainty.
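The rule-of-thumb fidelity estimate can be made explicit: if a circuit contains N CX gates and each fails independently with error rate ε₂, the whole circuit succeeds with probability roughly (1 − ε₂)^N, so demanding, say, a 50% success probability gives ε₂ ≈ ln(2)/N. The gate count below is a hypothetical placeholder, since the article's value is not reproduced here.

```python
import math

# If each CX fails independently with error rate eps2, the circuit succeeds
# with probability ~(1 - eps2)**n_cx; requiring ~50% success gives
# eps2 ~ ln(2)/n_cx. The gate count is a hypothetical placeholder.
n_cx = 10**9
required_fidelity = 0.5 ** (1.0 / n_cx)          # per-gate fidelity target
required_error = 1.0 - required_fidelity         # ~ ln(2)/n_cx
```

For a billion CX gates this already demands per-gate error rates below one part in a billion, which is why the NISQ compilation numbers point to fault tolerance.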
Compilation | Resource | Metric | Precision: 10% | Precision: 1% | Precision: 0.1%
NISQ | Number of qubits | Largest across circuits | |||
CX gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
All gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
Fault tolerant | Number of qubits | Largest across circuits | |||
T gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit |
4.2 Separable two-dimensional integration
We now move on to the expression for the actual integration of the tau decay given in Eq. 30, by also considering the denominator (i.e., the propagator). We again omit the integration over the angles. The integral then becomes
(34)
The analytical solution to this integral is given in Appendix A.
In order to implement such a calculation using the QMCI engine, we first note that we can express this integral in the following way
(35)
where the cut function, , is the same as previously. Then, recalling the general form of an expectation calculated using the QMCI engine in Eq. 7, and following the discussion in Section 2.4, we see that each individual integral has the canonical form, in terms of products of building blocks, that we require for performing a generic cross-section calculation using the QMCI engine. We can identify the probability distribution, , as a product of univariate BW distributions [or rather, in this case, the product of a uniform distribution for the dimension , and a univariate W-boson BW distribution for the dimension , i.e., ]. The functions applied, , are then just products of moments of the integration variables [or rather, in this case, simply the univariate products or ].
For this example, for demonstration purposes, it is important that we integrate across the range of the BW peak, and therefore we set (footnote: note that this calculation does not correspond to a physical tau decay, or indeed a physical process). The analytical value in this case is . As discussed in Section 3, we make use of a trained PQC with (specifically the optimal circuit given in Table 1) to prepare the BW distribution for the W-boson propagator used to model the probability distribution for dimension , and use the QMCI engine's state-preparation library to load a circuit preparing the uniform distribution used to model the probability distribution for dimension . We again construct by combining the circuits.
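The separable structure can be checked classically: sampling the propagator invariant from a truncated Breit-Wigner (via its analytic arctan inverse CDF) and the other invariant uniformly, the expectation of a product integrand factorises into univariate moments. The mass, width, and support below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, G = 80.4, 2.1                      # approximate W mass and width, GeV
s_lo, s_hi = 55.0**2, 105.0**2        # illustrative support range

def sample_bw(n):
    """Inverse-CDF sampling from the truncated Breit-Wigner in s."""
    lo = np.arctan((s_lo - M**2) / (M * G))
    hi = np.arctan((s_hi - M**2) / (M * G))
    return M**2 + M * G * np.tan(rng.uniform(lo, hi, n))

N = 100_000
s34 = sample_bw(N)                    # propagator invariant ~ p_BW
s13 = rng.uniform(s_lo, s_hi, N)      # other invariant ~ uniform

# A separable integrand factorises: for independent samples,
# E[s13 * s34] = E[s13] * E[s34], mirroring the product-of-building-blocks form.
factorised = np.mean(s13) * np.mean(s34)
joint = np.mean(s13 * s34)
```

This factorisation is exactly what lets the QMCI engine evaluate each univariate expectation with its own small circuit and multiply the results.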
In order to demonstrate the validity of our generic approach, we carried out numerical experiments by running noiseless simulations to numerically estimate the RHS of Eq. 35, different times, for an expected precision of .
Figure 10 gives a histogram of the error, , where we again see all values within the expected precision of . However, in this case there is a clear bias in the results (as they are not centred around ). A bias is to be expected given the discussion of systematic errors regarding the state preparation of BW distributions in Section 3—a source of error that was notably not applicable to the previous example. In this case, the observed systematic bias is approximately an order of magnitude smaller than the required precision, and therefore negligible. However, for higher levels of precision, such as and , one would require state-preparation circuits with correspondingly smaller systematic errors in order to still obtain accurate results.
[Figure 10: histogram of the error for the separable two-dimensional integration example.]
We used the resource mode to calculate the resources for the three expected levels of precision. Table 3 gives the resources for NISQ and fault-tolerant compilation. The results are similar to those found in Section 4.1, and therefore a similar analysis applies. However, in this case it is worth noting that, due to the more complicated circuit used to prepare the BW distribution for the dimension, the qubit counts are slightly larger, and the gate counts are approximately an order of magnitude larger.
Compilation | Resource | Metric | Precision: 10% | Precision: 1% | Precision: 0.1%
NISQ | Number of qubits | Largest across circuits | |||
CX gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
All gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
Fault tolerant | Number of qubits | Largest across circuits | |||
T gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit |
4.3 Non-separable two-dimensional integration
Finally, in order to increase the complexity of the problem and mimic the more general case of Eq. 4, we compute the following
(36)
which in practice amounts to calculating the same terms as in Section 4.2 (multiplied by the constant, ), plus the following non-separable, two-dimensional integral
(37)
The analytical solution to this additional integral is also provided in Appendix A. The analytical value of Eq. 36 is then .
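For the non-separable term, the expectation no longer factorises, so the estimator must be taken over the joint samples. A classical sketch with an illustrative (assumed) non-separable g(x, y), under the same product distribution as before:

```python
import numpy as np

rng = np.random.default_rng(4)
M, G = 80.4, 2.1                      # approximate W mass and width, GeV
s_lo, s_hi = 55.0**2, 105.0**2        # illustrative support range

lo = np.arctan((s_lo - M**2) / (M * G))
hi = np.arctan((s_hi - M**2) / (M * G))
N = 100_000
y = M**2 + M * G * np.tan(rng.uniform(lo, hi, N))   # BW-distributed invariant
x = rng.uniform(s_lo, s_hi, N)                      # uniform invariant

g = x * y / (x + y)                  # assumed non-separable toy integrand
joint_estimate = np.mean(g)          # must be evaluated on the joint samples
naive_factorised = np.mean(x) * np.mean(y) / (np.mean(x) + np.mean(y))
```

Because g cannot be written as a product of univariate factors, the joint estimate and the naively factorised value generally differ; on a quantum register this is what forces a genuinely two-dimensional expectation, with the larger resource counts reported in Table 4.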
For this example, the setup is the same as in Section 4.2. In order to demonstrate the validity of our approach, we carried out numerical experiments by running noiseless simulations to numerically estimate the RHS of Eq. 36, 4 times, for an expected precision of .
Figure 11 gives a histogram of the error, . The results are broadly similar to those in Section 4.2, with all values within the expected precision, and with a bias again observed (this is not surprising, as the calculations only differ by the additional term, which is subdominant).
[Figure 11: histogram of the error for the non-separable two-dimensional integration example.]
We analysed the resources required to run the circuits at the nominal precisions using the resource mode. Table 4 gives these for NISQ and fault-tolerant compilation. Once again the results are similar to the previous sections, and therefore a similar analysis again applies. In this case, as compared to the example in Section 4.2—which itself required greater resources than the example in Section 4.1—both the qubit counts and number of gates are larger, by approximately an additional order of magnitude in the latter case. This increase is due to the extra 2D integration that is performed (i.e., calculating ), which as discussed in Section 2.3.1 requires greater resources than when calculating univariate expectations. Thus, we see that as the dimensionality or complexity of the integrals increases, so too do the required resources.
Based upon the resource analysis here, and that in the previous sections, it is clear that to be able to perform state-of-the-art HEP calculations using QMCI in the future—whereby the dimensionality and complexity of the calculations will be much greater than for these examples—we will require significantly greater resources than are available with near-term hardware. Thus, this clearly remains an application for the future, fault-tolerant era of quantum computing.
Compilation | Resource | Metric | Precision: 10% | Precision: 1% | Precision: 0.1%
NISQ | Number of qubits | Largest across circuits | |||
CX gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
All gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit | |||||
Fault tolerant | Number of qubits | Largest across circuits | |||
T gates | Total number across circuits | ||||
Total depth across circuits | |||||
Number in largest circuit | |||||
Depth of largest circuit |
5 Conclusion
Quantum technology holds the promise of solving complex problems that are classically intractable, or providing more efficient methods for tackling existing challenges than classical approaches allow. Meanwhile, the field of high-energy physics (HEP)—which seeks to uncover the fundamental laws of nature at the smallest scales—demands immense computational resources.
In this work, we investigated whether quantum computing could, in the future, help address key computational bottlenecks in HEP. Building on, and going far beyond, the work presented in Ref. [21], where MP first introduced the idea of using Quantum Monte Carlo Integration (QMCI) for cross-section calculations in HEP, we have made significant advancements in this direction.
We have developed a general approach for integrating non-trivial cross sections in HEP, in terms of decomposing the general integrand into products of constituent building blocks. This work leverages the Fourier Quantum Monte Carlo Integration method implemented in Quantinuum’s QMCI engine [32], developed in part by IW, along with several other key proprietary features of the engine. Specifically, we introduced two distinct approaches for preparing relativistic Breit-Wigner distributions on quantum registers—functions that appear as one of the key building blocks of generic integrands in cross-section calculations.
To demonstrate the method’s capabilities, we performed two-dimensional integrations for several examples that arguably represent some of the most challenging integrands investigated to date for quantum integration. However, it is still worth remarking that these examples remain significantly less complex than the high-dimensional integrands encountered in state-of-the-art classical calculations.
Our findings suggest that such applications are unlikely to be practical during the noisy intermediate-scale quantum (NISQ) era. Instead, they are expected to become viable when fault-tolerant quantum devices are available. Notably, the resource estimates we provide for fault tolerance represent loose upper bounds. We anticipate that significant improvements will be achieved in the future—and thus significantly reduced resource requirements—as the field of fault-tolerant compilation matures beyond its infancy.
While this work represents a significant first step toward the quantum integration of cross sections in HEP, there are several areas for improvement. One limitation is the use of uniform spacing for representing underlying probability distributions with qubits, which is an inherent requirement of the QMCI engine. Extending the engine’s functionality to allow for non-uniform spacing could significantly reduce both the number of qubits and the quantum gates required for integration. Additionally, while the method is general and theoretically applicable to integrals of arbitrary dimensionality (beyond the two-dimensional examples studied here), its efficiency will decrease for high-dimensional problems due to the polynomial scaling in the number of terms to calculate. Adapting the method to achieve more favourable scaling would enhance its practicality, and studying how circuit size scales heuristically with integration dimensionality would be particularly valuable.
Another avenue for improvement concerns the handling of experimental constraints, as discussed in Section 2.1, which could be treated in greater generality. Restricted integration domains based on constraints such as sums or maxima/minima of variables can be implemented straightforwardly via the QMCI engine’s enhanced P-builder functionality. In more general cases, however, the cut functions may be far more complex, or even non-analytical, and further exploration is required to ensure efficient integration in these cases. Similarly, extending the method to accommodate more general forms of cross-section integrands would broaden its applicability.
Accurate preparation of relativistic Breit-Wigner distributions, which is crucial to this method, represents another area for further investigation. Alternative state-preparation methods tailored to the particular distribution could enhance both accuracy and resource efficiency, and could also be useful to the wider HEP community in future quantum computing applications where resonances must be modelled. In particular, a detailed understanding of the systematic errors associated with state preparation is critical for performing precise integrations, and this issue warrants closer study.
Finally, event sampling from underlying distributions is a ubiquitous task in HEP. Exploring whether quantum computing and quantum integration could enhance efficiency or accuracy in this area would be of particular interest, and could provide significant advances for the field. In summary, while this work lays a solid foundation for quantum integration in HEP, addressing these challenges will significantly expand its potential and applicability.
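The restricted integration domains mentioned above can be illustrated with a plain classical Monte Carlo estimate, where a cut enters as an indicator function multiplying the integrand. In the quantum setting the analogous restriction would instead be encoded into the circuit (e.g. via the P-builder functionality); the two-dimensional integrand and the cut on a sum of variables below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
x1 = rng.uniform(0.0, 1.0, N)
x2 = rng.uniform(0.0, 1.0, N)

integrand = x1 * x2            # toy two-dimensional integrand
cut = (x1 + x2) > 0.5          # constraint on a sum of variables

# MC estimate of the integral restricted to the cut region: the indicator
# simply zeroes out contributions from points failing the constraint.
estimate = np.mean(integrand * cut)
```

The exact value of this toy integral is 1/4 minus the contribution of the excluded triangle, i.e. 1/4 - (1/2)^4/24 ≈ 0.2474, which the estimate reproduces up to the usual 1/sqrt(N) Monte Carlo error.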
Acknowledgements
The authors would like to thank Matthew DeCross and Julien Sorci for their careful reviewing of the manuscript. The authors thank Steven Herbert and Alexandre Krajenbrink for their support in this work and interesting discussions (and the latter for also reviewing the manuscript). The authors also want to thank Matthias Rosenkranz for interesting discussions. MP acknowledges support by the German Research Foundation (DFG) through the Research Training Group RTG2044.
Appendix A
A.1 State preparation of relativistic Breit-Wigner distributions
In this section, we provide additional plots related to the state preparation of BW distributions.

[Figure: panels (a) and (b)]
A.2 Analytical expressions
In this section, we provide the analytical expressions for several integrals required in the main text.
First, the analytical expression for the integral in Eq. 34 in Section 4.2 reads
(38)
Then, the analytical result for the additional integral in Eq. 37 in Section 4.3 reads
(39)
References
- [1] ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 716 (2012) 1–29, arXiv:1207.7214 [hep-ex].
- [2] CMS Collaboration, S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC. Phys. Lett. B 716 (2012) 30–61, arXiv:1207.7235 [hep-ex].
- [3] A. Buckley, Computational challenges for MC event generation. J. Phys. Conf. Ser. 1525 (2020) no. 1, 012023, arXiv:1908.00167 [hep-ph].
- [4] HSF Physics Event Generator WG Collaboration, S. Amoroso et al., Challenges in Monte Carlo Event Generator Software for High-Luminosity LHC. Comput. Softw. Big Sci. 5 (2021) no. 1, 12, arXiv:2004.13687 [hep-ph].
- [5] ATLAS Collaboration, A. Collaboration, ATLAS Software and Computing HL-LHC Roadmap tech. rep., CERN, Geneva, 2022. https://cds.cern.ch/record/2802918.
- [6] C. W. Bauer, W. A. de Jong, B. Nachman, and D. Provasoli, Quantum Algorithm for High Energy Physics Simulations. Phys. Rev. Lett. 126 (2021) no. 6, 062001, arXiv:1904.03196 [hep-ph].
- [7] K. Bepari, S. Malik, M. Spannowsky, and S. Williams, Towards a quantum computing algorithm for helicity amplitudes and parton showers. Phys. Rev. D 103 (2021) no. 7, 076020, arXiv:2010.00046 [hep-ph].
- [8] A. Pérez-Salinas, J. Cruz-Martinez, A. A. Alhajri, and S. Carrazza, Determining the proton content with a quantum computer. Phys. Rev. D 103 (2021) no. 3, 034027, arXiv:2011.13934 [hep-ph].
- [9] K. Bepari, S. Malik, M. Spannowsky, and S. Williams, Quantum walk approach to simulating parton showers. Phys. Rev. D 106 (2022) no. 5, 056002, arXiv:2109.13975 [hep-ph].
- [10] C. Bravo-Prieto, J. Baglio, M. Cè, A. Francis, D. M. Grabowska, and S. Carrazza, Style-based quantum generative adversarial networks for Monte Carlo events. Quantum 6 (2022) 777, arXiv:2110.06933 [quant-ph].
- [11] S. Ramírez-Uribe, A. E. Rentería-Olivo, G. Rodrigo, G. F. R. Sborlini, and L. Vale Silva, Quantum algorithm for Feynman loop integrals. JHEP 05 (2022) 100, arXiv:2105.08703 [hep-ph].
- [12] P. Deliyannis, J. Sud, D. Chamaki, Z. Webb-Mack, C. W. Bauer, and B. Nachman, Improving quantum simulation efficiency of final state radiation with dynamic quantum circuits. Phys. Rev. D 106 (2022) no. 3, 036007, arXiv:2203.10018 [hep-ph].
- [13] G. Gustafson, S. Prestel, M. Spannowsky, and S. Williams, Collider events on a quantum computer. JHEP 11 (2022) 035, arXiv:2207.10694 [hep-ph].
- [14] G. Clemente, A. Crippa, K. Jansen, S. Ramírez-Uribe, A. E. Rentería-Olivo, G. Rodrigo, G. F. R. Sborlini, and L. Vale Silva, Variational quantum eigensolver for causal loop Feynman diagrams and directed acyclic graphs. Phys. Rev. D 108 (2023) no. 9, 096035, arXiv:2210.13240 [hep-ph].
- [15] H. A. Chawdhry and M. Pellen, Quantum simulation of colour in perturbative quantum chromodynamics. SciPost Phys. 15 (2023) 205, arXiv:2303.04818 [hep-ph].
- [16] C. W. Bauer, S. Chigusa, and M. Yamazaki, Quantum parton shower with kinematics. Phys. Rev. A 109 (2024) no. 3, 032432, arXiv:2310.19881 [hep-ph].
- [17] S. Ramírez-Uribe, A. E. Rentería-Olivo, and G. Rodrigo, Quantum querying based on multicontrolled Toffoli gates for causal Feynman loop configurations and directed acyclic graphs. arXiv:2404.03544 [quant-ph].
- [18] A. Montanaro, Quantum speedup of Monte Carlo methods. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471 (2015) no. 2181, 20150301.
- [19] S. Herbert, Quantum Monte Carlo Integration: The Full Advantage in Minimal Circuit Depth. Quantum 6 (2022) 823, arXiv:2105.09100 [quant-ph].
- [20] G. Brassard, M. Mosca, and A. Tapp, Quantum Amplitude Amplification and Estimation. Quantum Computation and Information 305 (2002), arXiv:quant-ph/0005055 [quant-ph].
- [21] G. Agliardi, M. Grossi, M. Pellen, and E. Prati, Quantum integration of elementary particle processes. Phys. Lett. B 832 (2022) 137228, arXiv:2201.01547 [hep-ph].
- [22] D. Grinko, J. Gacon, C. Zoufal, and S. Woerner, Iterative Quantum Amplitude Estimation. npj Quantum Inf 7 (2021) 52, arXiv:1912.05559 [quant-ph].
- [23] J. J. M. de Lejarza, L. Cieri, M. Grossi, S. Vallecorsa, and G. Rodrigo, Loop Feynman integration on a quantum computer. Phys. Rev. D 110 (2024) no. 7, 074031, arXiv:2401.03023 [hep-ph].
- [24] J. J. M. de Lejarza, D. F. Rentería-Estrada, M. Grossi, and G. Rodrigo, Quantum integration of decay rates at second order in perturbation theory. Quantum Sci. Technol. 10 (2025) no. 2, 025026, arXiv:2409.12236 [quant-ph].
- [25] J. J. M. de Lejarza, M. Grossi, L. Cieri, and G. Rodrigo, “Quantum Fourier Iterative Amplitude Estimation,” in 2023 International Conference on Quantum Computing and Engineering. IEEE, 5, 2023. arXiv:2305.01686 [quant-ph].
- [26] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes. Nature Commun. 9 (2018) 4812, arXiv:1803.11173 [quant-ph].
- [27] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nature Commun. 12 (2021) no. 1.
- [28] K. Sharma, M. Cerezo, L. Cincio, and P. J. Coles, Trainability of Dissipative Perceptron-Based Quantum Neural Networks. Physical Review Letters 128 (2022) no. 18.
- [29] S. Wang, E. Fontana, M. Cerezo, K. Sharma, A. Sone, L. Cincio, and P. J. Coles, Noise-induced barren plateaus in variational quantum algorithms. Nature Communications 12 (2021) no. 1, .
- [30] C. O. Marrero, M. Kieferová, and N. Wiebe, Entanglement Induced Barren Plateaus. arXiv:2010.15968 [quant-ph].
- [31] A. Arrasmith, M. Cerezo, P. Czarnik, L. Cincio, and P. J. Coles, Effect of barren plateaus on gradient-free optimization. Quantum 5 (2021) 558, arXiv:2011.12245 [quant-ph].
- [32] I. Y. Akhalwaya et al., A Modular Engine for Quantum Monte Carlo Integration. arXiv:2308.06081 [quant-ph].
- [33] J. Cui, P. J. S. de Brouwer, S. Herbert, P. Intallura, C. Kargi, G. Korpas, A. Krajenbrink, W. Shoosmith, I. Williams, and B. Zheng, Quantum Monte Carlo Integration for Simulation-Based Optimisation. arXiv:2410.03926 [quant-ph].
- [34] A. Denner, J.-N. Lang, and M. Pellen, Full NLO QCD corrections to off-shell production. Phys. Rev. D 104 (2021) no. 5, 056018, arXiv:2008.00918 [hep-ph].
- [35] A. Huss, J. Huston, S. Jones, and M. Pellen, Les Houches 2021—physics at TeV colliders: report on the standard model precision wishlist. J. Phys. G 50 (2023) no. 4, 043001, arXiv:2207.02122 [hep-ph].
- [36] E. Byckling and K. Kajantie, Particle Kinematics: (Chapters I-VI, X). University of Jyvaskyla, Jyvaskyla, Finland, 1971.
- [37] G. P. Lepage, A New Algorithm for Adaptive Multidimensional Integration. J. Comput. Phys. 27 (1978) 192.
- [38] H. Kharraziha and S. Moretti, The Metropolis algorithm for on-shell four momentum phase space. Comput. Phys. Commun. 127 (2000) 242–260, arXiv:hep-ph/9909313. [Erratum: Comput.Phys.Commun. 134, 136–138 (2001)].
- [39] K. Kroeninger, S. Schumann, and B. Willenberg, (MC)**3 – a Multi-Channel Markov Chain Monte Carlo algorithm for phase-space sampling. Comput. Phys. Commun. 186 (2015) 1–10, arXiv:1404.4328 [hep-ph].
- [40] D. Yallup, T. Janßen, S. Schumann, and W. Handley, Exploring phase space with Nested Sampling. Eur. Phys. J. C 82 (2022) 8, arXiv:2205.02030 [hep-ph].
- [41] S. La Cagnina, C. Grunwald, T. Janßen, K. Kröninger, and S. Schumann, “Phase space sampling with Markov Chain Monte Carlo methods,” in EPJ Web of Conferences. 12, 2024. arXiv:2412.12963 [hep-ph].
- [42] Z. Li, J. Wang, Q.-S. Yan, and X. Zhao, Efficient numerical evaluation of Feynman integrals. Chin. Phys. C 40 (2016) no. 3, 033103, arXiv:1508.02512 [hep-ph].
- [43] E. de Doncker, A. Almulihi, and F. Yuasa, High-speed evaluation of loop integrals using lattice rules. J. Phys. Conf. Ser. 1085 (2018) no. 5, 052005.
- [44] S. Borowka, G. Heinrich, S. Jahn, S. P. Jones, M. Kerner, and J. Schlenk, A GPU compatible quasi-Monte Carlo integrator interfaced to pySecDec. Comput. Phys. Commun. 240 (2019) 120–137, arXiv:1811.11720 [physics.comp-ph].
- [45] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC ’96, p. 212–219. Association for Computing Machinery, New York, NY, USA, 1996.
- [46] G. Breit and E. Wigner, Capture of Slow Neutrons. Phys. Rev. 49 (1936) 519–531.
- [47] M. Plesch and Č. Brukner, Quantum-state preparation with universal gate decompositions. Physical Review A 83 (2011) no. 3, 032302.
- [48] Y. R. Sanders, G. H. Low, A. Scherer, and D. W. Berry, Black-Box Quantum State Preparation without Arithmetic. Physical Review Letters 122 (2019) no. 2.
- [49] C. Zoufal, A. Lucchi, and S. Woerner, Quantum Generative Adversarial Networks for Learning and Loading Random Distributions. npj Quantum Inf 5 (2019) 103, arXiv:1904.00043 [quant-ph].
- [50] X.-M. Zhang, T. Li, and X. Yuan, Quantum State Preparation with Optimal Circuit Depth: Implementations and Applications. Phys. Rev. Lett. 129 (2022) no. 23, 230504, arXiv:2201.11495 [quant-ph].
- [51] J. Bausch, Fast Black-Box Quantum State Preparation. Quantum 6 (2022) 773, arXiv:2009.10709 [quant-ph].
- [52] A. G. Rattew and B. Koczor, Preparing Arbitrary Continuous Functions in Quantum Registers With Logarithmic Complexity, 2022. arXiv:2205.00519 [quant-ph].
- [53] A. Wodecki, J. Marecek, V. Kungurtsev, P. Eichler, G. Korpas, and P. Intallura, Spectral Methods for Quantum Optimal Control: Artificial Boundary Conditions. arXiv:2403.14436 [quant-ph].
- [54] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, Parameterized quantum circuits as machine learning models. Quantum Sci. Technol. 4 (2019) no. 4, 043001, arXiv:1906.07682 [quant-ph].
- [55] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature 549 (2017) no. 7671, 242–246, arXiv:1704.05018 [quant-ph].
- [56] Z. Holmes, K. Sharma, M. Cerezo, and P. J. Coles, Connecting Ansatz Expressibility to Gradient Magnitudes and Barren Plateaus. PRX Quantum 3 (2022) no. 1, 010313, arXiv:2101.02138 [quant-ph].
- [57] E. Grant, L. Wossnig, M. Ostaszewski, and M. Benedetti, An initialization strategy for addressing barren plateaus in parametrized quantum circuits. Quantum 3 (2019) 214, arXiv:1903.05076 [quant-ph].
- [58] T. Volkoff and P. J. Coles, Large gradients via correlation in random parameterized quantum circuits. Quantum Science and Technology 6 (2021) no. 2, 025008.
- [59] M. Rosenkranz, E. Brunner, G. Marin-Sanchez, N. Fitzpatrick, S. Dilkes, Y. Tang, Y. Kikuchi, and M. Benedetti, Quantum state preparation for multivariate functions. arXiv:2405.21058 [quant-ph].
- [60] A. M. Childs and N. Wiebe, Hamiltonian Simulation Using Linear Combinations of Unitary Operations. Quant. Inf. Comput. 12 (2012) no. 11-12, 0901–0924, arXiv:1202.5822 [quant-ph].
- [61] B. Olson, I. Hashmi, K. Molloy, and A. Shehu, Basin Hopping as a General and Versatile Optimization Framework for the Characterization of Biological Macromolecules. Advances in Artificial Intelligence 2012 (2012).
- [62] J. Nocedal and S. Wright, Numerical optimization, pp. 1–664. Springer Series in Operations Research and Financial Engineering. Springer Nature, 2006.
- [63] J. Kublbeck, M. Bohm, and A. Denner, Feyn Arts: Computer Algebraic Generation of Feynman Graphs and Amplitudes. Comput. Phys. Commun. 60 (1990) 165–180.
- [64] T. Hahn, Generating Feynman diagrams and amplitudes with FeynArts 3. Comput. Phys. Commun. 140 (2001) 418–431, arXiv:hep-ph/0012260.
- [65] T. Hahn and M. Perez-Victoria, Automatized one loop calculations in four-dimensions and D-dimensions. Comput. Phys. Commun. 118 (1999) 153–165, arXiv:hep-ph/9807565.
- [66] Y. Suzuki, Y. Kawase, Y. Masumura, Y. Hiraga, M. Nakadai, J. Chen, K. M. Nakanishi, K. Mitarai, R. Imai, S. Tamiya, T. Yamamoto, T. Yan, T. Kawakubo, Y. O. Nakagawa, Y. Ibe, Y. Zhang, H. Yamashita, H. Yoshimura, A. Hayashi, and K. Fujii, Qulacs: a fast and versatile quantum circuit simulator for research purpose.
- [67] Y. Suzuki, S. Uno, R. Raymond, T. Tanaka, T. Onodera, and N. Yamamoto, Amplitude estimation without phase estimation. Quantum Information Processing 19 (2020) 1–17.