
Blockchain with proof of quantum work

Mohammad H. Amin, Jack Raymond, Daniel Kinn, Firas Hamze, Kelsey Hamer, Joel Pasvolsky, William Bernoudy, Andrew D. King, and Samuel Kortas
D-Wave Quantum Inc., Burnaby, British Columbia, Canada
(March 18, 2025)
Abstract

We propose a blockchain architecture in which mining requires a quantum computer. The consensus mechanism is based on proof of quantum work, a quantum-enhanced alternative to traditional proof of work that leverages quantum supremacy to make mining intractable for classical computers. We have refined the blockchain framework to incorporate the probabilistic nature of quantum mechanics, ensuring stability against sampling errors and hardware inaccuracies. To validate our approach, we implemented a prototype blockchain on four D-Wave™ quantum annealing processors geographically distributed within North America, demonstrating stable operation across hundreds of thousands of quantum hashing operations. Our experimental protocol follows the same approach used in the recent demonstration of quantum supremacy [1], ensuring that classical computers cannot efficiently perform the same computation task. By replacing classical machines with quantum systems for mining, it is possible to significantly reduce the energy consumption and environmental impact traditionally associated with blockchain mining. Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology.

I Introduction

Blockchain technology is widely regarded as one of the most transformative innovations of the 21st century. Its primary purpose is to provide a decentralized and secure method for recording and verifying transactions without the need for a central authority. The concept of blockchain was introduced with the creation of Bitcoin in 2008, as described in the now-famous whitepaper published under the pseudonym Satoshi Nakamoto [2]. Initially developed to enable cryptocurrencies, blockchain technology has since expanded to a diverse range of applications, including decentralized finance (DeFi) [6], identity verification [5], supply chain management [3] and healthcare [4]. Key characteristics of blockchains include transparency (all transactions are visible to participants in the network) and resistance to tampering (it is extremely challenging for malicious actors to modify data on the blockchain, so transactions are practically immutable) [7, 8, 9]. These features have made blockchain technology a cornerstone of the shift toward Web 3.0, the next generation of the internet [10, 11].

At its core, a blockchain is a distributed ledger that operates across a network of computers, where data is stored in “blocks” that are cryptographically linked to form a sequential chain. This architecture eliminates the need for central intermediaries, enabling trustless interactions between parties. The integrity of the data is secured through consensus mechanisms, the process by which blockchain networks agree on the validity of transactions and ensure the synchronization of their distributed ledgers. In the absence of a central authority, consensus mechanisms enable trust and cooperation among participants in the network. The choice of consensus mechanism impacts a blockchain’s security, scalability, energy efficiency, and decentralization, making it a key component of the network’s design. These mechanisms ensure that no single entity can control the ledger, thus preserving its decentralized nature. However, they also present challenges.

Traditional blockchains use consensus algorithms like proof of work (PoW) and proof of stake (PoS) to validate transactions. PoW relies upon significant computational effort to generate valid blocks. Distributed computational power ensures that no single user can control the blockchain's evolution. Introducing invalid transactions, such as double-spending, requires on the order of 50% of the community's compute power, which is financially prohibitive due to the substantial equipment and electricity costs. While PoW effectively enhances network security, it also leads to significant energy consumption. Bitcoin mining alone consumed an estimated 175.87 terawatt-hours (TWh) in 2024, surpassing the yearly electricity usage of entire countries, including Sweden [12, 13]. This high energy consumption raises environmental concerns.

PoS addresses this issue by selecting validators based on the amount of cryptocurrency each holds and is willing to stake as collateral, incentivizing honest behavior by exposing their holdings to potential loss in case of misconduct. However, a major drawback of PoS is the potential for wealth concentration, as those with larger holdings have a greater chance of being selected as validators, further increasing their wealth over time. This can reduce the network's decentralization, a core principle of blockchain technology. Other mechanisms, such as delegated proof of stake (DPoS) [14] and practical Byzantine fault tolerance (PBFT) [15], offer alternative approaches tailored to specific use cases.

As blockchain technology advances, researchers and technologists are investigating innovative approaches to overcome its limitations and ensure its long-term viability. One exciting avenue is the integration of quantum computing with blockchain systems, a combination that could transform the field by offering unmatched security, scalability, and computational power. This article introduces a blockchain architecture that incorporates a proof of quantum work (PoQ) consensus mechanism, where quantum computers are required to validate transactions. The central concept is to harness quantum supremacy to mine problems intractable to classical computers. Quantum supremacy is a term introduced by John Preskill [16] that refers to a scenario in which a quantum computer performs a task that classical computers cannot accomplish with reasonable resources. While there could be various ways to leverage quantum supremacy within a blockchain, our proposal builds on Bitcoin’s architecture with minimal modifications, taking advantage of its proven reliability and robustness.

Unlike previous quantum blockchain proposals [17, 18, 19, 20, 21, 22, 23, 24, 25, 26], which often assume access to fault-tolerant quantum computing or quantum teleportation, our approach integrates both classical and quantum computation to leverage quantum supremacy with existing and near-term QPUs. Currently, Ref. [1] is the only demonstration of quantum supremacy not based on random-circuit sampling [27, 28], and thus the only quantum supremacy demonstration relevant to this work (see Appendix A). Using a small-scale prototype, we demonstrate this blockchain's functionality on four generally-accessible D-Wave Advantage™ and Advantage2™ quantum processing units (QPUs) at different geographic locations across North America. By carefully accounting for the probabilistic nature of quantum computation and potential errors within Bitcoin's architecture, we achieve stable operation across thousands of mining processes distributed among hundreds of stakeholders. This marks the first successful demonstration of an important real-world application that directly leverages quantum supremacy, operating seamlessly across an internationally distributed network of quantum computers.

II Classical Blockchain Architecture

Before introducing any quantum concepts, let us first describe how standard blockchain technology works, using Bitcoin's architecture as a prime example. At the heart of Bitcoin is its distributed ledger, a decentralized database that records all transactions. This ledger is maintained across a network of computers, ensuring both transparency and security. Each node on the network stores a complete copy of the blockchain, and updates are synchronized through consensus mechanisms. These mechanisms ensure that only valid transactions are added to the blockchain, making it resistant to tampering or fraud.

Figure 1: Illustration of Bitcoin blocks.

Transactions are recorded in blocks linked sequentially in a chain, forming a continuous and immutable ledger. A Bitcoin block consists of a block header and a list of transactions (see Figure 1 for illustration). The block header acts as a metadata section and contains essential elements such as a timestamp, a Merkle root (a hierarchical summary of all transactions in the block), the nonce, and the hash of the previous block. A hash is a unique, fixed-length binary string generated by processing data through a cryptographic algorithm. It acts as a digital fingerprint, ensuring data integrity by making even the slightest change to the input data result in a completely different hash. The nonce is an arbitrary 32-bit number adjusted by the miners to solve the cryptographic puzzle required for block validation. Miners achieve this by repeatedly combining the transaction data with the nonce and passing it through a classical hashing algorithm, which generates an $\mathcal{N}_{\mathcal{H}}$-bit hash $\mathcal{H}$ ($\mathcal{N}_{\mathcal{H}}=256$ for Bitcoin). Adding a block to the blockchain begins with miners competing to find a nonce that produces a hash value below a specific target, $\mathcal{H}<\mathcal{T}_{n}$, with $n$ indexing the block. The inequality requires that the first $\mathcal{N}_{\rm zeros}$ bits of the hash be zero, where

$$\mathcal{N}_{\rm zeros}=\log_{2}\!\left(\frac{2^{\mathcal{N}_{\mathcal{H}}}}{\mathcal{T}_{n}+1}\right). \tag{1}$$

Due to the irreversibility of the hash function, finding the correct hash can only be achieved through brute-force search, forcing miners to test countless nonce values. This process, known as "mining," relies on a trial-and-error approach, requiring intensive computational effort to solve the cryptographic puzzles that underpin PoW. In Bitcoin, the network dynamically adjusts the difficulty every 2016 blocks (approximately every two weeks) by redefining the target $\mathcal{T}_{n}$, to ensure that the average time to mine a block remains close to 10 minutes. Once a miner successfully solves the puzzle, they broadcast the solution to the network. Other nodes then validate the solution by verifying the hash and ensuring the block's transactions are legitimate. Upon successful validation, the block is added to the chain, and the miner is rewarded. A minimal sketch of this nonce search appears below.
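The following is a minimal Python sketch of the brute-force nonce search described above, using SHA-256 from the standard library. The header bytes, single (rather than double) SHA-256 application, and toy difficulty are illustrative simplifications of Bitcoin's actual protocol.

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 2**32):
    """Brute-force search for a nonce whose SHA-256 hash falls below the target."""
    for nonce in range(max_nonce):
        digest = hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest  # valid proof of work found
    return None  # exhausted the 32-bit nonce space

# Toy difficulty: require the first 16 bits of the 256-bit hash to be zero,
# i.e., N_zeros = 16 and the expected work is W = 2**16 hash evaluations.
target = 2**(256 - 16)
nonce, digest = mine(b"example block header", target)
print(f"nonce={nonce}, hash={digest.hex()}")
```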

The reward for mining a block consists of two components: the block reward and transaction fees. The block reward is a fixed amount of Bitcoin awarded to the miner. This reward halves approximately every four years in an event called the “halving.” In addition to the block reward, miners collect fees from transactions included in the block, incentivizing them to prioritize transactions with higher fees. This dual reward structure ensures the security and continuity of the Bitcoin network while gradually reducing the issuance of new Bitcoin, contributing to its scarcity.

A situation known as a “fork” occurs when two miners solve a block at nearly the same time, resulting in the blockchain temporarily splitting into two competing branches. Both branches propagate through the network, and nodes may accept one branch over the other depending on which block they receive first. The fork is resolved when miners adding new blocks to one of the branches make it longer than the other. Bitcoin follows the “strongest chain rule”, meaning the branch with the highest total computational work (often called Chainwork) is recognized as the valid chain, while the shorter branch is discarded. This process ensures network consensus but can result in transaction delays and unrewarded work. It is therefore essential to understand how the strongest chain is determined.

In Bitcoin, a set of valid proposed blocks forms a directed tree, rooted in a special genesis block, the first block introduced by Satoshi Nakamoto in 2009. Any path including an invalid block is to be ignored by stakeholders. Each node on the tree has an associated work, which is a sum of all work completed from the genesis block to the node (along a unique directed path). For the $n$-th block in the chain, the work is defined as

$$\mathcal{W}_{n}=2^{\mathcal{N}_{\rm zeros}}. \tag{2}$$

Note that the $n$-dependence of $\mathcal{N}_{\rm zeros}$ is omitted for brevity. $\mathcal{W}_{n}$ represents the number of trials required to find an acceptable hash, $\mathcal{H}<\mathcal{T}_{n}$, and therefore quantifies the mining work, which grows as the hash hardness grows. The Chainwork is for practical purposes monotonic in the length of the chain. The node with the most Chainwork defines a unique path from the genesis node, called the strongest chain. The total Chainwork is defined as

$$\mathrm{Chainwork}=\sum_{n\,\in\,{\rm chain}}\mathcal{W}_{n} \tag{3}$$

Chainwork is calculated by the stakeholders for all existing chains; the chain with the largest Chainwork is selected for adding the next block, and the rest are discarded. The miners responsible for the discarded blocks do not receive rewards, while the miner who successfully extends the longest chain is rewarded with both block rewards and transaction fees from the new block. Transactions from the discarded branch are returned to the memory pool (mempool), where they await inclusion in a future block. This mechanism ensures that consensus is maintained across the network, although it may introduce temporary delays in transaction confirmation. On a strongest chain, blocks closer to the genesis block are increasingly likely to remain unchanged as broadcasts continue. If we define a small cut-off in probability, we might say blocks become (for practical purposes) immutable, and define the transaction delay $\mathcal{D}$ accordingly. In the case of Bitcoin it is only a few blocks, assuming good network synchronization. Miners are incentivized to maintain synchronization, to minimize the risk of pursuing wasteful computations, so that forks are merely a rare inconvenience and a $\mathcal{D}$ of 2 or 3 is typical. Forks may also happen due to software updates and changes of rules, which may lead to temporary (soft-fork) or permanent (hard-fork) splitting of the blockchain.
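To make the strongest-chain rule concrete, here is a short sketch in which each block records its own $\mathcal{N}_{\rm zeros}$ and a link to its parent; the Block class is an illustrative stand-in, not Bitcoin's actual block format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    n_zeros: int                      # difficulty: required leading zero bits
    parent: Optional["Block"] = None  # link to the previous block

def chainwork(tip: Block) -> int:
    """Total work on the path from this tip back to the genesis block (Eq. 3)."""
    work, block = 0, tip
    while block is not None:
        work += 2 ** block.n_zeros    # per-block work W_n, Eq. (2)
        block = block.parent
    return work

def strongest_chain(tips: list) -> Block:
    """Select the branch tip with the largest total Chainwork."""
    return max(tips, key=chainwork)

genesis = Block(n_zeros=16)
fork_a = Block(n_zeros=16, parent=Block(n_zeros=16, parent=genesis))
fork_b = Block(n_zeros=20, parent=genesis)  # shorter but harder branch wins
assert strongest_chain([fork_a, fork_b]) is fork_b
```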

We highlight three features required for the viability of a blockchain:

1. The number of blocks in the strongest chain relative to the number of honestly-mined blocks broadcast should be a large finite fraction (the efficiency).

2. A stakeholder should have confidence that blocks buried to some small finite depth on the strongest chain are immutable with high probability. The ratio depth/efficiency dictates the expected number of honestly-mined blocks that must be broadcast before the most recently broadcast transactions can be considered reliably processed.

3. Consensus mechanisms should protect against disruptions, and prevent rewards from being obtained (on average) by incomplete or fraudulent work.

These are necessary requirements for guaranteeing confidence and maintaining an incentive to mine according to protocol rules.

III Classical Hashing

Hash algorithms are widely used for various purposes, including verifying data integrity, securely storing passwords (by hashing them instead of storing them in plain text), creating hash tables in data structures, and enabling cryptocurrencies. These algorithms are a fundamental tool in secure communications. It should be clear by now that cryptographic hashing plays a crucial role in blockchain technology. In essence, hashing is the deterministic process of transforming a message or data of any length into a unique fixed-size bit string, known as the hash. The primary distinction between cryptographic hashing and encryption is that hashing is a one-way transformation designed to generate a unique, irreversible output, whereas encryption is a two-way process intended to securely encode data while allowing the original message to be recovered with a decryption key.

Cryptographic hashing has several important properties that make it suitable for various applications, including the following:

1. Fixed output size: The hash function should generate a fixed-size output, regardless of the size of the input.

2. Collision resistance: It should be very unlikely that two different inputs produce the same hash value.

3. Avalanche effect: A small change in the input data should result in a significantly different hash value (similar inputs should not produce similar hash values).

4. Uniform distribution: The hash values should be uniformly distributed across the output space to avoid clustering, which can lead to collisions.

5. Pre-image resistance: It should be computationally infeasible to infer anything about the original input data from the hash. Given an input, it should be infeasible to find a second input yielding the same hash.

6. Deterministic: For the same input, a hash function must always produce the same output.

Commonly used hash algorithms include MD5, SHA-1, SHA-2, and the SHA-3 family; the SHA families are standardized by the National Institute of Standards and Technology (NIST) [41]. Among these, SHA-256 (Secure Hash Algorithm 256-bit) is the standard hash function used in Bitcoin and many other blockchain applications. It works by processing input data through a series of mathematical operations, including bitwise logical operations, modular arithmetic, and compression functions. The input message is first padded to a specific length and divided into 512-bit blocks. Each block is then processed through 64 rounds of mathematical transformations, which involve constants and logical functions, resulting in a fixed 256-bit hash output. This hash is unique to the input data and ensures that even a tiny change to the input produces a drastically different hash, making it highly secure and resistant to collision attacks. Beyond SHA-256, the SHA-2 family includes variants like SHA-384 and SHA-512, which offer larger output sizes for even greater security, and the newer SHA-3 family provides enhanced resistance to potential cryptographic attacks.
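As a quick illustration of the avalanche effect listed above, the following snippet hashes two messages that differ in a single character and counts how many of the 256 output bits differ; for SHA-256 roughly half of them flip.

```python
import hashlib

def sha256_int(message: bytes) -> int:
    """SHA-256 digest interpreted as a 256-bit integer."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

h1 = sha256_int(b"blockchain")
h2 = sha256_int(b"blockchaim")  # one character changed

# XOR the two hashes and count the differing bits.
flipped = bin(h1 ^ h2).count("1")
print(f"{flipped} of 256 bits differ")  # typically close to 128
```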

IV Quantum Hashing

Figure 2: Illustration of the quantum hash generation and its use as proof of work for block security.

Various quantum hashing algorithms have been proposed, which can generally be categorized into two types. The first type produces a quantum state as the output, requiring communication and authentication to be quantum mechanical [30, 31, 32, 33, 34, 35, 36, 37]. The second type leverages quantum mechanics to generate a classical hash value, which can then be communicated and processed using classical means [38, 39, 40]. In this paper, we focus on the latter.

The main challenge in quantum hashing is that quantum mechanics is inherently probabilistic, while standard hash functions are deterministic. However, although the outcome of any single quantum measurement is probabilistic, the probability distribution is a deterministic function of the system parameters. Similarly, all expectation values derived from the distribution are deterministic functions of these parameters. This allows a quantum hash function to be uniquely defined based on a quantum probability distribution and its corresponding expectation values. In practice, however, accessing this probability distribution is possible only through sampling, so that accuracy is limited. This process is inevitably subject to inaccuracies due to sampling and hardware-induced errors. These factors can reduce the precision and reliability of the quantum hash function.

A quantum hash function must satisfy the first five classical requirements of hash functions outlined in the previous section. The final requirement, determinism, cannot be satisfied and must be replaced with two new quantum-specific requirements:

6. Probabilistic: For the same input, a quantum hash function has a nonzero likelihood of producing different outputs.

7. Spoof resistance: The hash must be infeasible to replicate using classical computers with realistic resources.

The last requirement is essential when the hash is used in a blockchain with a PoQ-based consensus mechanism. It ensures that only quantum computers can perform mining, thereby preventing energy-intensive classical mining. The probabilistic nature of quantum hash values must be carefully considered in the design and application of hash functions, particularly in systems like blockchain, where reliable behavior is essential. This issue, along with potential solutions, will be discussed in greater detail in subsequent sections. Since existing standard classical hashes already fulfill the first five requirements, carefully combining a classical hash function with a quantum operation allows the design effort to focus on achieving the last requirement.

We start by introducing the general concept without delving into specific implementation details. Let $\mathcal{M}$ represent a message of arbitrary length. (For clarity and consistency, we use calligraphic symbols for blockchain-related quantities, while standard symbols are reserved for all other variables, including those characterizing the quantum evolution.) A hash function is defined as a unique transformation of $\mathcal{M}$ into a hash value $\mathcal{H}(\mathcal{M})$, with a fixed length of $\mathcal{N}_{\mathcal{H}}$ bits. This transformation is performed in four steps (see Fig. 2 for visualization):

i. Preprocessing: Transform message $\mathcal{M}$ into a set of random parameters $\Theta$:

$$\Theta={\rm Random}(\mathcal{M}). \tag{4}$$

ii. Unitary evolution: Use $\Theta$ to define a unitary evolution $U_{\Theta}$ for an $N_{Q}$-qubit quantum system with initial state $|\psi_{0}\rangle$ such that

$$|\psi_{\mathcal{M}}\rangle=U_{\Theta}|\psi_{0}\rangle, \tag{5}$$

where $|\psi_{\mathcal{M}}\rangle$ is the final state, with corresponding pure-state density matrix $\rho_{\mathcal{M}}=|\psi_{\mathcal{M}}\rangle\langle\psi_{\mathcal{M}}|$. The final state may be described in a compressed form by a set of expected (measured) observables.

iii. Postprocessing: Using samples obtained from multiple measurements, compute a vector $W(\rho_{\mathcal{M}})$ consisting of observables (witnesses) $W_{\alpha}$.

iv. Digitalization: Discretize witnesses into binary values, e.g.,

$$\mathcal{H}_{\alpha}=\theta(W_{\alpha}-W_{0\alpha})=\begin{cases}0,&W_{\alpha}<W_{0\alpha}\\ 1,&W_{\alpha}\geq W_{0\alpha}\end{cases}, \tag{6}$$

where $\theta(x)$ is the Heaviside step function and $W_{0\alpha}$ is a threshold selected depending on the definition of $W_{\alpha}$. The hash is the sequence of these binary values:

$$\mathcal{H}(\mathcal{M})=\mathcal{H}_{1}\mathcal{H}_{2}\dots\mathcal{H}_{\mathcal{N}_{\mathcal{H}}}. \tag{7}$$

In the preprocessing step, the message $\mathcal{M}$ is transformed into a set of parameters $\Theta$ that govern the quantum operation. These parameters may include Hamiltonian variables, evolution times, a circuit, or other relevant quantities. If the size of $\Theta$ remains fixed regardless of the length of $\mathcal{M}$, the preprocessing step effectively acts as a classical hash. Steps (iii)-(iv) can also be parameterized as a cryptographic function of $\Theta$ and the message. Standard cryptographic hashing algorithms can be employed at this stage to ensure, with care, that the first five criteria outlined in the previous section are met. This can be achieved by first hashing the message using a classical algorithm, such as SHA-256, and then using the resulting hash as a seed for a pseudo-random number generator to generate $\Theta$ and other hashing parameters, preventing reversibility. This preserves the essential properties of the classical hash in $\Theta$, which in turn defines the quantum evolution in the next step. A minimal sketch of this preprocessing step is given below.
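The sketch below assumes, for illustration only, that $\Theta$ consists of random Ising couplings on a fixed set of qubit pairs; the actual parameterization depends on the QPU.

```python
import hashlib
import numpy as np

def preprocess(message: bytes, pairs: list) -> dict:
    """Map an arbitrary-length message to quantum evolution parameters (Eq. 4).

    The classical hash makes the map one-way; a seeded PRNG expands it into Theta.
    """
    digest = hashlib.sha256(message).digest()            # classical one-way step
    rng = np.random.default_rng(int.from_bytes(digest, "big"))
    return {
        # Illustrative Theta: a random coupling in [-1, 1] for each qubit pair.
        "J": {pair: rng.uniform(-1.0, 1.0) for pair in pairs},
        # Random signs used later to orient the digitalization hyperplanes.
        "signs": rng.choice([-1, 1], size=len(pairs)),
    }

theta = preprocess(b"block header || nonce", pairs=[(0, 1), (1, 2), (2, 3)])
```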

The quantum operation must be sufficiently complex to satisfy the seventh requirement, ensuring that the hash cannot be spoofed by classical computers with feasible resources. Such complexity can be achieved through a continuous-time evolution, a sequence of quantum gate operations, or a combination of digital and analog operations. Information from the resulting density matrix is extracted through measurements and summarized in a vector of witnesses $W(\rho_{\mathcal{M}})$. It should be noted that some classical properties, such as collision resistance and the avalanche effect, do not necessarily carry over automatically. For example, if the quantum evolution mapped all $\Theta$ to the same hash value, these properties would be lost. One should therefore be careful in the design of the quantum evolution and the witness vector. The definition of witnesses and the quality of the resulting hash ultimately depend on the quantum hardware's capability to execute complex quantum evolutions and its measurement capabilities, as we discuss next.

The four-stage process outlined above includes both classical and quantum computation. Although the classical part likely includes non-trivial operations such as cryptographic hashing, we assume that the quantum computation is significantly more costly (and time-consuming) than all of the classical operations combined, as is the case in our experimental implementation.

IV.1 Single-Basis Measurement

If hardware limitations constrain qubit measurement to a single basis, which we herein refer to as the $z$-basis (associated with the $\sigma^{z}_{i}$ Pauli matrices), the only accessible information is the diagonal elements of the density matrix in this basis. This corresponds to the probability distribution:

$$P_{\mathcal{M}}={\rm diagonal}(\rho_{\mathcal{M}}). \tag{8}$$

The postprocessing step then aims to convert $P_{\mathcal{M}}$ into a unique bit string of predetermined length $\mathcal{N}_{\mathcal{H}}$, ensuring robustness to uncertainty in the experiment whilst capturing non-trivial statistical properties. In other words, the resulting quantum hash is a fixed-length bit string that uniquely identifies the probability distribution $P_{\mathcal{M}}$. A complete characterization of $P_{\mathcal{M}}$ for a system with $N_{Q}$ qubits requires $2^{N_{Q}}-1$ real parameters. This characterization can be achieved by determining the probability of each state in the computational basis or by calculating all expectation values of single- and multi-qubit operators up to order $N_{Q}$, i.e., $\langle\sigma^{z}_{i}\rangle$, $\langle\sigma^{z}_{i}\sigma^{z}_{j}\rangle$, $\langle\sigma^{z}_{i}\sigma^{z}_{j}\sigma^{z}_{k}\rangle$, etc. In practice, for quantum systems with local interactions, it is often sufficient to consider a limited number of dominant $k$-local expectations within the correlation length to characterize the distribution with acceptable accuracy. This simplification significantly reduces the complexity of the problem, particularly when the primary goal is to ensure collision resistance in the resulting quantum hash.

Let $V$ represent a vector of length $N_{V}$ of the dominant expectation values

$$V=[\dots,\langle\sigma^{z}_{i}\rangle,\dots,\langle\sigma^{z}_{i}\sigma^{z}_{j}\rangle,\dots,\langle\sigma^{z}_{i}\sigma^{z}_{j}\sigma^{z}_{k}\rangle,\dots]^{T} \tag{9}$$

Each element of the feature vector $V$ is a real number $-1\leq V_{\alpha}\leq 1$. To generate an $\mathcal{N}_{\mathcal{H}}$-bit hash we create $\mathcal{N}_{\mathcal{H}}$ witnesses as linear combinations of $V_{\beta}$:

$$W_{\alpha}={1\over N_{V}}\sum_{\beta=1}^{N_{V}}G_{\alpha\beta}\,V_{\beta} \tag{10}$$

where $G$ is an $\mathcal{N}_{\mathcal{H}}\times N_{V}$ matrix with $O(1)$ elements such that $-1\leq W_{\alpha}\leq 1$. We turn each element of the witness vector $W$ into a binary value using Eq. (6). Each equation $W_{\alpha}=W_{0\alpha}$ defines a hyperplane in the vector space of $V$'s, and each hash bit $\mathcal{H}_{\alpha}$ indicates on which side of the hyperplane the observed $V$ lies. The threshold value $W_{0\alpha}$ can be taken to be zero or some simple function of the experimental parameters. Robustness of the hash bits against errors and spoofing depends on the choice of the hyperplanes (see Appendix C.5). The signs of the hyperplane projections can be chosen uniformly at random as a function of the message (as in our proof of concept) to guarantee a uniform distribution of bit strings that is not a reversible function of other experimental parameters, providing pre-image protection.
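A sketch of this witness construction and digitalization, assuming the expectation values in $V$ are estimated from $\pm 1$ measurement samples and that $G$ is a message-seeded random $\pm 1$ matrix (both illustrative choices):

```python
import hashlib
import numpy as np

def digitalize(samples: np.ndarray, pairs: list, message: bytes,
               n_hash_bits: int) -> str:
    """Estimate V from samples, form witnesses W = G V / N_V (Eq. 10),
    and threshold at W_0 = 0 to obtain hash bits (Eq. 6)."""
    # V: single-qubit magnetizations followed by two-qubit correlations.
    singles = samples.mean(axis=0)
    corrs = np.array([(samples[:, i] * samples[:, j]).mean() for i, j in pairs])
    V = np.concatenate([singles, corrs])

    # G: message-seeded random +/-1 matrix, so hyperplanes depend on the message.
    seed = int.from_bytes(hashlib.sha256(message).digest(), "big")
    rng = np.random.default_rng(seed)
    G = rng.choice([-1.0, 1.0], size=(n_hash_bits, len(V)))

    W = G @ V / len(V)                                  # witnesses, Eq. (10)
    return "".join("1" if w >= 0 else "0" for w in W)   # threshold W_0 = 0

# Placeholder data: 1000 shots of 4 qubits, each reading +1 or -1.
shots = np.random.default_rng(0).choice([-1, 1], size=(1000, 4))
print(digitalize(shots, [(0, 1), (1, 2), (2, 3)], b"message", n_hash_bits=16))
```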

When qubit readout is performed in a single basis, it is, in principle, possible to identify a classical system that can replicate the same distribution without entanglement. While finding such a classical system may be practically infeasible due to quantum supremacy, its theoretical existence cannot be ruled out. Indeed, any proof of entanglement, whether through an entanglement measure or a witness, requires access to information beyond single-basis measurements of the quantum states. This additional information could be obtained by performing measurements in multiple bases or by examining the system’s response to perturbations (e.g., susceptibility measurements). In the absence of more complex measurement, one may resort to susceptibility measurements to protect further against classical spoofing.

IV.2 Multi-Basis Measurement

In some quantum systems, it is possible to perform measurements in multiple bases. This is feasible, for example, when the system allows single-qubit rotations (whether Clifford or non-Clifford gates) prior to measurement. Both digital and digital-analog QPUs can potentially enable such capabilities. Measuring in multiple bases allows access to off-diagonal elements of the density matrix $\rho_{\mathcal{M}}$. Given their exponential growth in number, accessing all off-diagonal elements is both impractical and unnecessary for most applications. Therefore, rather than attempting to recover all elements of the density matrix, one can measure quantities accessible through sampling by performing measurements in multiple bases.

The easiest way of using multi-basis measurement is to randomize the measurement basis. The basis in which to measure the qubits can be determined by the message through the QPU parameters $\Theta(\mathcal{M})$. In this way, random basis rotations are considered part of the unitary evolution $U_{\Theta}$. While this would make classical simulation very challenging, it is still based on a single-basis measurement of the density matrix $\rho_{\mathcal{M}}$, and the previous concern persists.

A better option is to measure the same $\rho_{\mathcal{M}}$ in different bases. This allows calculating entanglement witnesses as components of the witness vector. An entanglement witness is a measurable quantity $W$ such that exceeding a threshold value, $W>W_{0}$, necessarily implies the presence of entanglement, while $W<W_{0}$ does not guarantee the absence of entanglement. We can design a quantum hash such that each $W_{\alpha}$ in Eq. (6) represents an entanglement witness, and each corresponding $W_{0\alpha}$ acts as a boundary beyond which entanglement is assured.

As an illustrative example, we consider using Bell's inequality violations for pairs of qubits as entanglement witnesses. In an appropriate experimental setup, a loophole-free violation of Bell's inequality indicates that the experimental results cannot be explained by any local hidden-variable theory. However, this is not our objective. Instead, we aim to use the inequality as an entanglement witness. Specifically, we employ an experimentally feasible variation of Bell's inequality, known as the Clauser-Horne-Shimony-Holt (CHSH) inequality. Additional details are provided in Appendix B.

Let us consider a system in which measurements can be performed in two Pauli bases, $x$ and $z$. For any two qubits, $i$ and $j$, we define a modified Bell operator as (see also Appendix B):

$$B_{ij}=\sqrt{2}\,(\sigma_{i}^{x}\sigma_{j}^{x}+\sigma_{i}^{z}\sigma_{j}^{z}). \tag{11}$$

Based on a purely statistical argument, it can be shown that for any local hidden-variable theory, the expectation value of the Bell operator satisfies

$$|\langle B_{ij}\rangle|\leq 2, \tag{12}$$

which is known as the CHSH inequality. When the Pauli matrices are replaced by classical spin components, we get an even smaller bound, $|\langle B_{ij}\rangle|\leq\sqrt{2}$ (see Appendix B). In contrast, quantum mechanics allows $|\langle B_{ij}\rangle|$ to reach values as large as $2\sqrt{2}$. This is possible when $|\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle|=|\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle|=1$, which is classically counterintuitive as it means the spins are oriented in both the $x$ and $z$ directions at the same time. Quantum mechanically it is possible because of quantum contextuality, i.e., the dependence of the outcome on the measurement setup. Violation of the CHSH inequality is thus a sufficient but not necessary condition for quantum behavior, and more explicitly for entanglement between the qubit pair. Therefore, the CHSH expectation value serves as an entanglement witness.

The CHSH inequality can also be applied in a multi-qubit system and can therefore serve as a basis for hash generation. Let us define $W_{\alpha}=|\langle B_{ij}\rangle|$ in Eq. (6) with a threshold value $W_{0\alpha}=2$. Here, $\alpha$ represents a pair of qubits in the multi-qubit system. The hash bit $\mathcal{H}_{\alpha}$ is then equal to one if the CHSH inequality for the corresponding pair is violated, and zero otherwise. It has been shown that in a multi-qubit system the CHSH expectation values must satisfy a trade-off relation [45] (see Appendix B):

$$\sum_{\alpha\in\{{\rm all\ pairs}\}}W_{\alpha}^{2}\leq 2N_{Q}(N_{Q}-1) \tag{13}$$

Equality holds when $W_{\alpha}=2$ for all $N_{Q}(N_{Q}-1)/2$ pairs. Since a violation requires $W_{\alpha}>2$, it is impossible for all pairs to violate the CHSH inequality simultaneously; if some of the inequalities are violated, there must be others that are not.

To use the CHSH inequality for hash generation, we must first identify $\mathcal{N}_{\mathcal{H}}$ pairs of qubits for which the likelihood of violating the CHSH inequality is maximized. Hash generation then reduces to executing a quantum evolution parameterized by $\Theta$ and determining which pairs from the predetermined set of qubit pairs violate the CHSH inequality. Each hash bit is assigned a value of one or zero, depending on whether or not the CHSH inequality is violated for the corresponding qubit pair. No classical model can reliably predict these outcomes, making a quantum computer essential for calculating the hash value. This approach ensures compliance with the seventh requirement mentioned earlier: spoof resistance.
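A sketch of this CHSH-based digitalization, assuming the $x$- and $z$-basis correlators for each designated pair are estimated from two separate sets of $\pm 1$ measurement shots (names and data layout are illustrative):

```python
import numpy as np

def chsh_hash_bits(x_shots: np.ndarray, z_shots: np.ndarray,
                   pairs: list) -> str:
    """One hash bit per qubit pair: 1 if the CHSH witness |<B_ij>| exceeds
    the classical bound W_0 = 2 (Eqs. 11-12), else 0."""
    bits = []
    for i, j in pairs:
        xx = (x_shots[:, i] * x_shots[:, j]).mean()  # <sigma_i^x sigma_j^x>
        zz = (z_shots[:, i] * z_shots[:, j]).mean()  # <sigma_i^z sigma_j^z>
        witness = abs(np.sqrt(2) * (xx + zz))        # |<B_ij>|, Eq. (11)
        bits.append("1" if witness > 2.0 else "0")
    return "".join(bits)
```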

IV.3 Shadow Tomography

If the QPU can perform multi-qubit unitary operations before measurement, shadow tomography can be utilized for hash generation. First introduced by Aaronson [42], shadow tomography provides an efficient method for estimating quantum observables using a limited number of measurements. Unlike full quantum state tomography, which requires an exponentially large number of measurements to reconstruct the full density matrix, shadow tomography focuses on extracting specific properties of the state with significantly fewer samples. A more practical variant was later introduced by Huang, Kueng, and Preskill [43]. In their approach, randomized measurements are used to generate compact representations, known as classical shadows ($\hat{\rho}_{i}$), of the full density matrix ($\rho$) that can be generated and stored efficiently. The expectation value of any observable $O$ can then be estimated by averaging over all classical shadows:

$$\langle O\rangle\approx{1\over N}\sum_{i}{\rm Tr}(O\hat{\rho}_{i}) \tag{14}$$

where $N$ is the number of measurement rounds. The number of measurements required to estimate $m$ observables with error $\epsilon$ scales as $\log(m)/\epsilon^{2}$, making it exponentially more efficient than full state tomography.

To generate a classical shadow, we apply a randomly chosen unitary transformation $U$ from a known ensemble (e.g., the Clifford group): $\rho\to U\rho U^{\dagger}$. We then perform a measurement in the computational basis to obtain a bitstring outcome $b$. A classical shadow of $\rho$ corresponding to the measurement outcome is calculated as

$$\hat{\rho}=M^{-1}(U^{\dagger}|b\rangle\langle b|U) \tag{15}$$

where $M$ is the shadow tomography measurement channel, defined as:

$$M(\rho)=\mathbb{E}\big[U^{\dagger}|b\rangle\langle b|U\big]. \tag{16}$$

The expectation is over all unitaries $U$ and measured outcomes $b$. The inverse channel $M^{-1}$ depends on the chosen unitary ensemble. For example, for Pauli measurements, it has a simple analytical form (see Ref. [43] for details).
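A minimal simulated sketch of this procedure for random single-qubit Pauli measurements, using the analytical inverse channel $\hat{\rho}=\bigotimes_{k}(3\,U_{k}^{\dagger}|b_{k}\rangle\langle b_{k}|U_{k}-I)$ of Ref. [43]; the two-qubit Bell state and shot count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j]).astype(complex)
BASIS = {"X": H, "Y": H @ Sdg, "Z": I2}  # rotate so a Z readout measures X/Y/Z
Z = np.diag([1, -1]).astype(complex)

def shadow_estimate(psi: np.ndarray, obs: np.ndarray, shots: int) -> float:
    """Estimate Tr(obs rho) for a 2-qubit pure state psi via classical shadows."""
    total = 0.0
    for _ in range(shots):
        labels = rng.choice(list("XYZ"), size=2)   # random Pauli basis per qubit
        U0, U1 = BASIS[labels[0]], BASIS[labels[1]]
        amps = np.kron(U0, U1) @ psi               # rotate, then Z-basis readout
        b = rng.choice(4, p=np.abs(amps) ** 2)     # sample bitstring outcome b
        shadows = []
        for U, bit in ((U0, (b >> 1) & 1), (U1, b & 1)):
            ket = np.eye(2, dtype=complex)[:, [bit]]
            # Single-qubit inverse channel: 3 U^dag |b><b| U - I   (Eq. 15)
            shadows.append(3 * U.conj().T @ ket @ ket.conj().T @ U - I2)
        total += np.trace(obs @ np.kron(shadows[0], shadows[1])).real
    return total / shots

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(shadow_estimate(bell, np.kron(Z, Z), shots=20000))  # approaches +1
```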

To use shadow tomography for hashing, one can choose $\mathcal{N}_{\mathcal{H}}$ observables $O_{\alpha}$ from which witnesses are extracted via shadow tomography:

$$W_{\alpha}={\rm Tr}(O_{\alpha}\rho_{\mathcal{M}})\approx{1\over N}\sum_{i}{\rm Tr}(O_{\alpha}\hat{\rho}_{\mathcal{M}i}). \tag{17}$$

Shadow tomography enables the estimation of expectation values for complex observables $O_{\alpha}$, which cannot be directly measured or reliably extracted from simple measurement protocols within a feasible number of repetitions. One can, for example, choose $O_{\alpha}$ such that $W_{\alpha}$ becomes an entanglement witness. Let $O_{\alpha}=|\psi\rangle\langle\psi|$ represent a pure-state projector such that

$${\rm Tr}(O_{\alpha}\rho)>\kappa>{\rm Tr}(O_{\alpha}\tilde{\rho}) \tag{18}$$

for any separable density matrix $\tilde{\rho}$. By choosing $W_{0\alpha}=\kappa$, the extracted $W_{\alpha}$ defined in (17) serves as an entanglement witness in Eq. (6), facilitating hash-bit generation.

Hashing based on shadow tomography will likely require a fault-tolerant gate-based quantum computer. However, digital-analog quantum computers with sufficient flexibility may achieve the required accuracy without quantum error correction. The details of the unitary evolution leading to $\rho_{\mathcal{M}}$ and the random unitaries $U$ applied before measurement would depend on the specific QPU architecture and the available quantum operations.

V Quantum Blockchain with Probabilistic Hashing

As previously emphasized, an inherent feature of any quantum mechanical hash generation is its probabilistic nature. Therefore, it is crucial to ensure that the blockchain remains reliable despite this uncertainty. Two primary sources of error can affect the estimation of observables used for hash generation: sampling error and hardware inaccuracies. Sampling error can be reduced by increasing the number of samples, while systematic hardware inaccuracies pose less of a problem if they remain consistent across both the miner and validator quantum devices. However, random errors and variations between different QPUs can introduce discrepancies, potentially leading to disagreements between miners and validators. To address this issue, we introduce two key modifications to the standard blockchain algorithm: probabilistic validation and a revised definition of the strongest chain.

V.1 Probabilistic Validation

Let us assume that the experimentally extracted $W_{\alpha}$ in Eq. (6) follows a Gaussian distribution with mean $\bar{W}_{\alpha}$ and standard deviation $\delta W_{\alpha}$:

$$P(W_{\alpha})={1\over\sqrt{2\pi}\,\delta W_{\alpha}}\,e^{-(W_{\alpha}-\bar{W}_{\alpha})^{2}/2\delta W_{\alpha}^{2}}. \tag{19}$$

In the absence of any systematic error, $\bar{W}_{\alpha}$ should be the same as the theoretical value. While the exact theoretical values are inaccessible due to the computational infeasibility of solving the Schrödinger equation at large scales, $\bar{W}_{\alpha}$ and $\delta W_{\alpha}$ can be estimated through statistical averaging. For a single run of the experiment, however, one only has access to the extracted $W_{\alpha}$. Assuming that $\bar{W}_{\alpha}$ follows the same Gaussian distribution centered around $W_{\alpha}$, one can estimate the probability of error in the generated hash bit $\mathcal{H}_{\alpha}$:

$$\varepsilon_{\alpha}=P\big[(W_{\alpha}-W_{0\alpha})(\bar{W}_{\alpha}-W_{0\alpha})<0\big]={1\over\sqrt{\pi}}\int_{d_{\alpha}}^{\infty}e^{-x^{2}}dx={1\over 2}\big[1-{\rm erf}(d_{\alpha})\big] \tag{20}$$

where $\mathrm{erf}(x)$ is the error function and

$$d_\alpha = \frac{|W_\alpha - W_{0\alpha}|}{\sqrt{2}\,\delta W_\alpha}\qquad(21)$$

represents a dimensionless distance from the corresponding hyperplane. As expected, $\varepsilon_\alpha \to 0$ for $d_\alpha \gg 1$ and $\varepsilon_\alpha \to 0.5$ for $d_\alpha \to 0$. It is more convenient to work with confidence values, defined as $\mathcal{P}_\alpha \equiv 1-\varepsilon_\alpha$, instead of error rates. Assuming that errors in witnesses are uncorrelated (this assumption does not affect the applicability of this method as a heuristic), the miner's confidence in the generated $\mathcal{N}_{\mathcal{H}}$-bit hash value is given by

$$\mathcal{P}_{\rm miner} = \prod_{\alpha=1}^{N_H} \mathcal{P}_\alpha.\qquad(22)$$

If the distances from the hyperplanes are large for all hash bits, the confidence value will be close to one. In contrast, if any distance $d_\alpha$ is sufficiently small, the confidence value for the corresponding bit approaches 1/2, reducing the overall confidence value. In that case, the generated hash may be rejected by the validators. The number of such uncertain bits is given by

$$\mathcal{N}_{\rm miner} = -\log_2 \mathcal{P}_{\rm miner}.\qquad(23)$$
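To make the bookkeeping concrete, the following NumPy/SciPy sketch computes the per-bit confidences and the miner's uncertain-bit count from Eqs. (20)-(23). It is an illustration in our notation, not the implementation used in the experiments; the function name and array interface are ours, and the product of Eq. (22) is taken in log space to avoid numerical underflow.

```python
import numpy as np
from scipy.special import erf

def miner_confidence(W, W0, dW):
    """Per-bit confidences P_alpha and uncertain-bit count N_miner,
    following Eqs. (20)-(23)."""
    W, W0, dW = (np.asarray(x, dtype=float) for x in (W, W0, dW))
    d = np.abs(W - W0) / (np.sqrt(2.0) * dW)   # dimensionless distance, Eq. (21)
    eps = 0.5 * (1.0 - erf(d))                 # per-bit error probability, Eq. (20)
    P_alpha = 1.0 - eps                        # per-bit confidence
    N_miner = -np.sum(np.log2(P_alpha))        # Eqs. (22)-(23), in log space
    return P_alpha, N_miner
```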

The validator performs the same calculations to determine $W_\alpha$, $d_\alpha$, and the hash bit $\mathcal{H}_\alpha$, which may or may not match the proposed hash bit $\mathcal{H}'_\alpha$. The validator can calculate their confidence value in the received hash as

$$\mathcal{P}_{\rm validator} = \prod_{\alpha=1}^{N_H}\big[\mathcal{P}_\alpha\,\delta_{\mathcal{H}_\alpha,\mathcal{H}'_\alpha} + (1-\mathcal{P}_\alpha)(1-\delta_{\mathcal{H}_\alpha,\mathcal{H}'_\alpha})\big],\qquad(24)$$

where $\delta_{a,b}$ is the Kronecker delta. If all hash bits match, the validator's confidence in the received hash is equal to their confidence in their own hash. However, if a single bit is incorrect, the confidence value decreases by a factor of $(1-\mathcal{P}_\alpha)/\mathcal{P}_\alpha$. This significantly lowers $\mathcal{P}_{\rm validator}$ if the incorrect bit is one with a high confidence value, $\mathcal{P}_\alpha \approx 1$. Conversely, if the incorrect hash bit has $\mathcal{P}_\alpha \approx 1/2$, the reduction in the overall confidence value is negligible. We can turn $\mathcal{P}_{\rm validator}$ into a number of bits in the same way as for the miner:

$$\mathcal{N}_{\rm validator} = -\log_2 \mathcal{P}_{\rm validator}.\qquad(25)$$

The interpretation of $\mathcal{N}_{\rm validator}$ is slightly different. If discrepancies between the hashes occur only in bits with a small distance $d_\alpha$, then $\mathcal{N}_{\rm validator}$ represents the count of those uncertain bits. However, even a single bit discrepancy outside this set can cause $\mathcal{P}_{\rm validator}$ to vanish, leading $\mathcal{N}_{\rm validator}$ to diverge and effectively rendering all hash bits uncertain.
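The validator-side computation of Eqs. (24)-(25) admits an equally short sketch (same caveats and imports as above); note how a mismatch on a high-confidence bit drives $\mathcal{N}_{\rm validator}$ toward divergence, as just described.

```python
def validator_confidence(P_alpha, H_own, H_received):
    """Uncertain-bit count N_validator of Eqs. (24)-(25), given the
    validator's per-bit confidences and hash bits and the received bits."""
    P_alpha = np.asarray(P_alpha, dtype=float)
    match = np.asarray(H_own) == np.asarray(H_received)
    per_bit = np.where(match, P_alpha, 1.0 - P_alpha)  # Kronecker-delta weighting
    return -np.sum(np.log2(per_bit))  # diverges as any mismatched P_alpha -> 1
```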

The simplest way of implementing probabilistic validation is to set a threshold $\mathcal{N}_{\max}$ on the allowable discrepancy between the received hash and the one generated by the validator. The validator then accepts the block if $\mathcal{N}_{\rm validator} < \mathcal{N}_{\max}$ and rejects it otherwise. The stability of the chain can be further enhanced if miners and validators agree on the same $\mathcal{N}_{\max}$: provided broadcasting has reasonable cost, the miner will refrain from proposing a block with $\mathcal{N}_{\rm miner} > \mathcal{N}_{\max}$ because it is very likely to be rejected by the validators. The advantage of using $\mathcal{N}_{\rm miner}$ and $\mathcal{N}_{\rm validator}$ instead of the Hamming distance between the hashes is that it prevents malicious actors from exploiting the relaxed requirements to generate invalid hashes. Only hash bits with a small distance $d_\alpha$ are permitted to differ, and $d_\alpha$ can be calculated accurately only with a quantum computer, making it challenging for attackers to identify uncertain bits and manipulate specific hash bits with confidence. It also deters a miner from generating the hash with less computational effort, for example by producing a hash with fewer leading zeros and randomly flipping the remaining bits. In practice, $\mathcal{N}_{\max}$ must be fine-tuned to achieve maximum stability of the blockchain.

V.2 Redefining Chainwork

Before introducing any changes to the blockchain, let us first analyze how a standard blockchain algorithm like Bitcoin would be affected by probabilistic hash generation and validation. We assume that the miner proposes their hash without considering their confidence value $\mathcal{P}_{\rm miner}$, and the validator accepts the hash only if all bits match. Even in the limit of rare bit flips, occasionally at least one bit deviates from its theoretical (deterministic) value. This introduces an additional mechanism for chain splitting, which we call a "validation fork". With small probabilities, two key events can happen: (1) False positive by the miner: a miner believes they have found a valid nonce and broadcasts a block that is rejected by all stakeholders. This occurs with probability $1-\mathcal{P}_{\rm miner}$. (2) False negative by an isolated validator: an individual stakeholder fails to reproduce a valid block accepted by all other stakeholders, with probability $1-\mathcal{P}_{\rm validator}$. In Bitcoin, the strongest chain is defined as the sequence of blocks consistent with the most valid work. Any path containing a single invalid block cannot be part of the strongest chain from the stakeholder's perspective. Under this consideration, we can examine the consequences.

Scenario (1) would lead to a soft validation fork. The miner’s proposed branch becomes obsolete as a new branch mined by the dominant computing power quickly replaces it. The isolated miner rejoins the strongest chain with high probability after two block broadcasts. This soft fork is similar to those caused by network desynchronization in Bitcoin and is not problematic.

Scenario (2), on the other hand, can cause a hard validation fork. A stakeholder who fails to validate a block is left waiting for a new fork that may emerge slowly but is unlikely to be considered the strongest chain by others. Since the stakeholder cannot join a chain with invalid transactions, they become permanently disconnected from the main chain. If this issue persists, stakeholders will eventually split into hard forks, causing the blockchain to stop functioning. To prevent this, the definition of the strongest chain must allow for the inclusion of unverified blocks.

We can modify the definition of the strongest chain to resolve the hard-validation fork problem. The strongest chain remains defined as the path with the greatest cumulative work, but we allow the block tree to contain invalid blocks by attributing negative work to them. The simplest approach is to introduce equal but negative work for invalid blocks:

$$\mathcal{W}_n = \begin{cases} 2^{\mathcal{N}_{\rm zeros}}, & \text{valid block}\\ -2^{\mathcal{N}_{\rm zeros}}, & \text{invalid block}.\end{cases}\qquad(26)$$

If the required work remains constant across blocks, one can equivalently assign $\pm 1$ to valid/invalid blocks; we call this basic +/- Chainwork. In the case of a tie, the first path evaluated is preferred, following the same rule as in Bitcoin. This adjustment effectively resolves the issue by turning a hard validation fork into a soft validation fork with a lag time. If a user falls behind by two blocks, the majority compute power of miners will produce a third block, causing the strongest chain to advance by three blocks in agreement with the majority.

A more sophisticated way to modify Bitcoin's original Chainwork calculation is to incorporate the validator's confidence in the hashes. This can easily be done by reweighting the work (per block) in proportion to confidence:

$$\mathcal{W}_n = 2^{\mathcal{N}_{\rm zeros}}\,\mathcal{P}_{\rm validator} = 2^{\mathcal{N}_{\rm zeros}-\mathcal{N}_{\rm validator}}.\qquad(27)$$

This is equivalent to subtracting the number of uncertain bits from the required number of zeros when calculating the work. Note that in the limit $\mathcal{P}_{\rm validator}\to 1$, or $\mathcal{N}_{\rm validator}\to 0$, $\mathcal{W}_n$ becomes identical to that defined by the standard Bitcoin algorithm. Conversely, if the validator has no confidence in the hash, then $\mathcal{W}_n = 0$. Assigning zero work to invalid blocks can open the network to attacks such as spamming with incomplete work. To prevent this, we can again introduce a threshold $\mathcal{N}_{\max}$ for the allowed discrepancy. The validator may reject the block if $\mathcal{N}_{\rm validator} > \mathcal{N}_{\max}$, or assign negative work to invalid blocks similar to (26):

$$\mathcal{W}_n = \begin{cases} 2^{\mathcal{N}_{\rm zeros}-\mathcal{N}_{\rm validator}}, & \mathcal{N}_{\rm validator}\leq\mathcal{N}_{\max}\\ -2^{\mathcal{N}_{\rm zeros}}, & \mathcal{N}_{\rm validator}>\mathcal{N}_{\max}.\end{cases}\qquad(28)$$

We call this confidence-based Chainwork. In both cases, no block with zero work would be added to the blockchain. Modifications to the definition of work impact blockchain efficiency, transaction delays, and susceptibility to attacks, as discussed in Appendix C.
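As a sketch (our naming, not a normative implementation), the per-block work of Eqs. (26)-(28) can be written as:

```python
def block_work(N_zeros, N_validator, N_max):
    """Confidence-based Chainwork of Eq. (28); reduces to Eq. (27)
    within the threshold and to the negative branch of Eq. (26) beyond it."""
    if N_validator <= N_max:
        return 2.0 ** (N_zeros - N_validator)
    return -(2.0 ** N_zeros)
```

The strongest chain is then the root-to-leaf path of the block tree maximizing the cumulative sum of these per-block works.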

VI Experimental Implementation

Experimental realization of a quantum blockchain requires access to a quantum computer capable of solving complex problems where quantum supremacy has been demonstrated or is potentially achievable. The generated distribution must exhibit sufficient structure to allow observables with nontrivial expectation values that can be measured reproducibly across multiple QPUs. Therefore, techniques such as boson sampling or random circuit sampling, which produce nearly uniform random outcomes, are not suitable for this application (see Appendix A). Currently, Ref. [1] is the only experimental demonstration of quantum supremacy that meets these requirements. Therefore, we base our experimental demonstration on the same protocol. Since Ref. [1] has already established agreement with quantum mechanics and the infeasibility of classical simulation, our primary focus here is to demonstrate that multiple QPUs can generate and validate hashes, ensuring the blockchain operates reliably without persistent branching or network fragmentation.

Figure 3: Operation of an example blockchain with 100 miners using four publicly accessible Advantage and Advantage2 QPUs. The mining difficulty was set to $\mathcal{N}_{\rm zeros}=28$ with basic +/-1 Chainwork defined in (26). Orange blocks show soft (resolved) forks; these blocks are included in no strongest chains (they have been rejected by all miners). Blue blocks are immutable, included in all strongest chains (agreed upon by all miners). Gray and black blocks are in contention: common to some (but not all) strongest chains. Only black blocks are candidates for maximum Chainwork (are being mined) and have the potential for further branching. Blocks are labeled by the order of broadcast, with the genesis block on the left and the most recently mined block on the right. Either of the paths terminating in black nodes might be extended, but the orange path is no longer a candidate for the strongest chain. Blocks 10, 11, 12 and 13 contain delayed transactions that require further mining (fork resolution) in order to be confirmed. A majority of miners believe node 13 has maximum Chainwork, and in this experiment all miners consolidate on this branch as the chain develops (see Figure 4).
Figure 4: The same blockchain as described in Figure 3 after 219 block broadcasts with matching color scheme. The data was collected over a two-day period. The mining process is accelerated without impact on statistical outcomes (further details and results are contained in Appendix D). The thickness of lines is proportional to the number of miners transferring to the block at the time of its broadcast. The genesis block is placed centrally with blocks placed on a spiral in the order of broadcast. Approximately 70% of block broadcasts are immutable (blue).
Figure 5: Mining efficiency, i.e., the fraction of blocks joining the strongest chain, from 5-10 chains each of length 512-2048. The Chainwork uses either the simple +/-1 weighting or the confidence-based weighting with $\mathcal{N}_{\max}=1, 2$, or $3$, with a constant $\delta W_\alpha$ extracted from experimental data. The mining process is accelerated without impact on statistical outcomes, and witnesses are resampled to accelerate analysis. Further experimental details and results are contained in Appendix D. Confidence-weighted strongest chains are more efficient.

The experiments are conducted on four D-Wave Advantage and Advantage2 QPUs. Each QPU comprises a network of coupled qubits that undergo continuous evolution governed by a time-dependent Hamiltonian

$$H(s) = -\Gamma(s)\sum_i \sigma_i^x + \mathcal{J}(s)\,H_P,\qquad(29)$$

where the problem Hamiltonian is

$$H_P = \sum_i h_i\,\sigma_i^z + \sum_{\langle i,j\rangle} J_{ij}\,\sigma_i^z\sigma_j^z,\qquad(30)$$

and $s=t/t_a$, with $t$ being time and $t_a$ the annealing time. The energy scales $\Gamma(s)$ and $\mathcal{J}(s)$ evolve with time such that $\Gamma(0)\gg\mathcal{J}(0)$ and $\Gamma(1)\ll\mathcal{J}(1)$. These energy scales are predetermined during calibration and typically vary between different QPUs. The dimensionless parameters $h_i$ and $J_{ij}$ are programmable. The inter-qubit connectivities of Advantage and Advantage2 are different, but they are compatible over the subgraphs used in this study.

The quantum problem we solve for hash generation is the coherent quenching of spin glasses through quantum phase transitions. We embed cubic lattices of size $4{\times}4{\times}4$ dimers (128 qubits) on each of four different internationally distributed QPUs; supplementary results for $18{\times}18$ dimerized biclique models (72 qubits) are limited to Appendix D. The critical dynamics ensure that, by adjusting a single parameter (the annealing time), different QPUs can produce consistent results despite variations in their energy scales [1]. All qubit biases are set to zero, $h_i=0$, and the couplers are selected as a function of the input message, $J_{ij}=\Theta(\mathcal{M})$. This is done in two steps: first, the message goes through a classical hash, SHA-256 in our case. Then the classical hash is used as a seed in a pseudo-random number generator that generates the experimental parameters ($J$ only, in our proof of concept). The classical hash is also used as the cryptographic identifier of the block in the block header of the next block; the quantum hash is used only for PoQ. This way, we can choose $\mathcal{N}_{\mathcal{H}}=\mathcal{N}_{\rm zeros}$ because nonzero bits in the quantum hash have no significance. Using a spin glass Hamiltonian for quantum evolution ensures that classical collision resistance carries over to the quantum hash: it is extremely unlikely that two spin glass Hamiltonians with entirely different randomly chosen parameters collide in the resulting high-dimensional distribution. Furthermore, the projection matrix $G(\mathcal{M})$ is also randomly generated based on the message $\mathcal{M}$, making collisions even more improbable.
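A minimal sketch of this two-step construction is shown below. The SHA-256 seeding follows the text; the specific coupling ensemble (here $J_{ij}\in\{-1,+1\}$) and the PRNG choice are illustrative assumptions rather than the exact parameters programmed on the QPUs.

```python
import hashlib
import numpy as np

def message_to_couplers(message: bytes, edges):
    """Derive couplings J_ij pseudo-randomly from the message:
    classical hash -> PRNG seed -> experimental parameters."""
    classical_hash = hashlib.sha256(message).digest()
    rng = np.random.default_rng(int.from_bytes(classical_hash, "big"))
    return {edge: float(rng.choice((-1.0, 1.0))) for edge in edges}
```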

The feature vector is constructed to include only pairwise nearest-neighbor correlations: $V=[\ldots,\langle\sigma_i^z\sigma_j^z\rangle,\ldots]^T$. As demonstrated in [1], reproducing $V$ within the accuracy of the QPUs is computationally infeasible for classical computers. The witness vector $W$ is calculated by random projection with a matrix $G$ whose elements are drawn from a normal distribution $N(0,1)$; the elements are pseudo-random functions of $\mathcal{M}$. As a miner, we run the quantum algorithm multiple times, each time with a different nonce, until we find a hash with $\mathcal{N}_{\rm zeros}$ bits equal to zero (more specifically, for confidence-based Chainwork, a hash satisfying the confidence threshold). The miner then adds the block to the blockchain with the nonce that generated $\mathcal{H}=0$. The validator goes through the same calculations, i.e., generates the classical hash and thereby the QPU parameters and the quantum hash, and checks whether all bits are zero.
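The projection step can be sketched in the same style (same imports as the previous sketch). The zero thresholds $W_{0\alpha}=0$ and the sign convention mapping witnesses to bits are illustrative assumptions; mining then amounts to retrying nonces until all returned bits are zero.

```python
def quantum_hash(correlations, message: bytes, n_bits):
    """Project measured nearest-neighbor correlations V onto
    message-dependent random hyperplanes to obtain hash bits."""
    V = np.asarray(correlations, dtype=float)
    seed = hashlib.sha256(message + b"|projection").digest()
    rng = np.random.default_rng(int.from_bytes(seed, "big"))
    G = rng.normal(size=(n_bits, V.size))  # projection matrix G(M)
    W = G @ V                              # witness vector
    return (W < 0).astype(int)             # one bit per hyperplane side
```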

Fig. 3 presents the initial-stage operation of a blockchain using four publicly accessible D-Wave Advantage and Advantage2 QPUs, involving 100 miners with $\mathcal{N}_{\rm zeros}=28$. The visualization captures the dynamic evolution of the blockchain, highlighting block statuses and mining activity. Blocks on the (majority) strongest chain are arranged chronologically from left to right, with the genesis block on the far left and the most recently mined block on the far right. Different block colors indicate their status within the consensus process: blue blocks (immutable) are included in all strongest chains and are universally agreed upon; orange blocks (soft forks) have been rejected by all miners and are not included in any of the strongest chains; gray and black blocks appear in some but not all strongest chains, reflecting temporary disagreement among miners. Only black blocks are actively being mined, meaning they have the potential for further branching. This illustrates how consensus emerges dynamically as blocks are proposed and either accepted or rejected by the network.

In Fig. 4, the blockchain is shown after 219 block broadcasts, with the genesis block repositioned centrally for clarity. By this stage, approximately 70% of proposed blocks have become immutable (blue), aligning well with theoretical expectations based on the resampling methodology (see Appendix D.3). The remaining blocks continue to be evaluated, with only those on the strongest chains persisting in the blockchain ledger. This simulation demonstrates the robustness of the blockchain framework, showcasing its ability to maintain stability and achieve miner consensus despite the probabilistic nature of quantum computation. We are able to use the statistical properties of the Chainwork described in Section D.2 to eliminate failed mining experiments and create the blockchain with only $100\times 219$ total QPU programmings.

Fig. 5 illustrates the mining efficiency across multiple blockchain operations. Mining efficiency is defined as the fraction of block broadcasts that successfully enter the strongest chain in the limit of many miners. For each curve, the data is taken from 5 to 10 chains, each having between 512 and 2048 blocks. In Appendix D.1 we describe mining efficiency and transaction delay in greater detail and present additional data. The analysis compares different Chainwork definitions and post-processing methods. For the simple $\pm 1$ weighting method, which assigns equal weights to all block broadcasts, mining efficiency drops to close to 50% as $\mathcal{N}_{\rm zeros}\to 60$. The confidence-based weighting technique performs better, as expected, approaching 75% efficiency for $\mathcal{N}_{\rm zeros}=60$. Confidence-based Chainwork allows higher efficiency than basic +/-1 Chainwork, even accounting for small differences in the mining rate, as outlined in Appendix D.2. To calculate confidence values we used a constant $\delta W_\alpha$ extracted from experimental data, as detailed in Appendix D.

VII Conclusions

We have proposed and successfully tested a blockchain architecture in which proof of quantum work (PoQ) is performed on quantum computers. Our approach introduces multiple quantum-based hash generation methods, with varying levels of complexity depending on the capabilities of the quantum hardware. To address the inherent probabilistic nature of quantum hash generation and validation—an unavoidable property of any quantum hash algorithm—we developed techniques to ensure blockchain stability. We experimentally tested our proposed blockchain on four internationally distributed D-Wave annealing QPUs, demonstrating that despite architectural and parametric variations, these QPUs can be programmed to generate and validate each other’s hashes in a way that allowed stable blockchains to persist over thousands of broadcast blocks and hundreds of thousands of hash validations.

For this demonstration, we used the critical dynamics of 3D and biclique spin glasses at fixed size as the computational problem for PoQ. This was motivated by its role in the first experimental demonstration of quantum supremacy in a real-world problem [1] that is not based on boson or random-circuit sampling. However, this choice does not constrain the quantum problem that can be used for PoQ. Strategies such as randomizing problem topology, dimensionality, qubit count, and annealing time can prevent classical learning algorithms from predicting expectation values based on large datasets [46]. By choosing a sufficiently complex ensemble of experiments and sampling it cryptographically, experimental outcomes can be made weakly correlated. Additionally, hash algorithms leveraging entanglement witnesses and shadow tomography can further enhance the quantum nature of the hashing process. We note that while supremacy is desirable for PoQ, it may not be strictly necessary as long as classical competition remains prohibitively expensive, making quantum approaches the only practicable ones. In fact, one may deliberately use smaller problem instances where the ground truth can be obtained — albeit with significant computational effort — to benchmark and calibrate QPUs.

A quantum blockchain could transform existing blockchain technology by significantly mitigating the substantial electricity demands of classical proof of work (PoW) mining. Electricity costs are estimated to comprise 90% to 95% of Bitcoin's total mining expenses [47]. In our PoQ approach, quantum computation is the dominant cost, and energy consumption is only a small fraction of it: energy expenses account for just 0.1% of the total computation cost on D-Wave quantum computers [50]. Replacing classical PoW with quantum PoQ could therefore reduce energy costs by a factor of 1,000. This ratio may shift as quantum computation becomes more cost-efficient over time, but substantial energy savings are expected to persist. A transition to PoQ would also relocate mining from regions with low electricity costs to countries with advanced quantum computing infrastructure. Beyond demonstrating the first practical use of quantum supremacy in a critical domain, this work proves the feasibility of quantum computing in real-world tasks and suggests broader high-impact applications with today's technology.

Acknowledgment

We thank Brian Barch, Vladimir Vargas Calderon, Rahul Deshpande, Pau Farré, Fiona Hannington, Richard Harris, Emile Hoskinson, Trevor Lanting, Daniel Lidar, Peter Love, Chris Rich, and Murray Thom for fruitful discussions and comments on the manuscript.

References

  • [1] A. King et al., “Beyond classical computation in quantum simulation,” Science (2025).
  • [2] Satoshi Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” (2008). Available at: https://bitcoin.org/bitcoin.pdf.
  • [3] S. Saberi, M. Kouhizadeh, J. Sarkis, L. Shen, “Blockchain technology and its relationships to sustainable supply chain management,” Int. J. Prod. Res. 57, 2117 (2019), https://doi.org/10.1080/00207543.2018.1533261.
  • [4] T. T. Kuo, H. E. Kim, L. Ohno-Machado, “Blockchain distributed ledger technologies for biomedical and healthcare applications,” J. Am. Med. Inform. Assoc. 24, 1211 (2017), https://doi.org/10.1093/jamia/ocx068.
  • [5] Haber, M.J., Rolls, D. (2020). Blockchain and Identity Management. In: Identity Attack Vectors. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5165-2_20.
  • [6] F. Schär, “Decentralized finance: On blockchain- and smart contract-based financial markets,” Fed. Reserve Bank St. Louis Rev. 103, 153 (2021), https://doi.org/10.20955/r.103.153-74.
  • [7] Z. Zheng, S. Xie, H. N. Dai, X. Chen, H. Wang, “An overview of blockchain technology: Architecture, consensus, and future trends,” in Proc. IEEE Int. Congr. Big Data, 557 (2017), https://doi.org/10.1109/BigDataCongress.2017.85.
  • [8] F. Casino, T. K. Dasaklis, C. Patsakis, “A systematic literature review of blockchain-based applications: Current status, classification, and open issues,” Telemat. Inform. 36, 55 (2019), https://doi.org/10.1016/j.tele.2018.11.006.
  • [9] M. Conti, E. S. Kumar, C. Lal, S. Ruj, “A survey on security and privacy issues of blockchain technology,” IEEE Commun. Surv. Tutor. 20, 2786 (2018), https://doi.org/10.1109/COMST.2018.2842460.
  • [10] X. Xu, I. Weber, M. Staples, Architecture for blockchain applications (Springer, 2019), https://doi.org/10.1007/978-3-030-03035-3.
  • [11] TechCrunch, “Crypto’s networked collaboration will drive Web 3.0,” (2021). Available at: https://techcrunch.com/2021/09/16/cryptos-networked-collaboration-will-drive-web-3-0/.
  • [12] Digiconomist, “Bitcoin energy consumption,” (2025). Available at: https://digiconomist.net/bitcoin-energy-consumption.
  • [13] Financial Times, “Can Bitcoin miners find a green way to dig for digital gold?” (2025).
  • [14] D. Larimer, “Delegated proof of stake (DPoS),” BitShares Whitepaper (2014). Available at: https://bitshares.org/technology/delegated-proof-of-stake/.
  • [15] M. Castro, B. Liskov, “Practical Byzantine fault tolerance,” in Proc. 3rd Symp. Oper. Syst. Des. Implement. (OSDI), 173 (1999).
  • [16] John Preskill, “Quantum computing and the entanglement frontier,” arXiv:1203.5813.
  • [17] W. Wang, Y. Yu, L. Du, “Quantum blockchain based on asymmetric quantum encryption and a stake vote consensus algorithm,” Sci. Rep. 12, 8086 (2022).
  • [18] M. Gandhi et al., “Quantum blockchain: Trends, technologies, and future directions,” EPJ Quantum Technol. 5, 516 (2024), https://doi.org/10.1049/qtc2.12119.
  • [19] A. Nag, P. Pal, S. Bhattacharyya, L. Mrsic, J. Platos, “A brief review on quantum blockchain,” In: Bhattacharyya, S., Banerjee, J.S., Köppen, M. (eds) Human-Centric Smart Computing. ICHCSC 2023. Smart Innovation, Systems and Technologies, vol 376. Springer, Singapore. https://doi.org/10.1007/978-981-99-7711-6_53.
  • [20] C. Li, Y. Xu, J. Tang, W. Liu, “Quantum blockchain: A decentralized, encrypted, and distributed database,” J. Quantum Comput. 1, 49 (2019), https://doi.org/10.32604/jqc.2019.06715.
  • [21] N.K. Parida, C. Jatoth, V.D. Reddy, et al. “Post-quantum distributed ledger technology: a systematic survey,” Sci. Rep. 13, 20729 (2023), https://doi.org/10.1038/s41598-023-47331-1.
  • [22] Z. Yang, H. Alfauri, B. Farkiani, R. Jain, R. D. Pietro and A. Erbad, “A survey and comparison of post-quantum and quantum blockchains,” in IEEE Communications Surveys & Tutorials, 26, 967 (2024), https://doi.org/10.1109/COMST.2023.3325761.
  • [23] S. Banerjee, A. Mukherjee, P. K. Panigrahi, “Quantum blockchain using weighted hypergraph states,” Phys. Rev. Res. 2, 013322 (2020), https://doi.org/10.1103/PhysRevResearch.2.013322.
  • [24] K. Nilesh, P. K. Panigrahi, “Quantum blockchain based on dimensional lifting generalized Gram-Schmidt procedure,” arXiv:2110.02763 (2021).
  • [25] M. Xu, X. Ren, D. Niyato, J. Kang, C. Qiu, Z. Xiong, X. Wang, V. C. M. Leung, “When quantum information technologies meet blockchain in Web 3.0,” arXiv:2211.15941 (2022).
  • [26] A. Orun and F. Kurugollu, “The lower energy consumption in cryptocurrency mining processes by SHA-256 quantum circuit design used in hybrid computing domains,” arXiv:2401.10902
  • [27] Morvan et al., “Phase transitions in random circuit sampling,” Nature 634, 328 (2024)
  • [28] Dongxin Gao et al., “Establishing a new benchmark in quantum computational advantage with 105-qubit Zuchongzhi 3.0 processor,” Phys. Rev. Lett. 134, 090601 (2025).
  • [29] Boixo, S., Isakov, S.V., Smelyanskiy, V.N. et al. “Characterizing quantum supremacy in near-term devices.” Nature Phys 14, 595–600 (2018).
  • [30] F. Ablayev, M. Ablayev, “Quantum hashing via classical $\varepsilon$-universal hashing constructions,” arXiv:1404.1503 (2014).
  • [31] S. Banerjee, H. Meena, S. Tripathy, P. K. Panigrahi, “Fully quantum hash function,” arXiv:2408.03672 (2024).
  • [32] D. Li, J. Zhang, F.-Z. Guo, W. Huang, Q.-Y. Wen, H. Chen, “Discrete-time interacting quantum walks and quantum hash schemes,” Quantum Inf. Process. 12, 1501 (2013), https://doi.org/10.1007/s11128-012-0421-8.
  • [33] M. Ziatdinov, “From graphs to keyed quantum hash functions,” arXiv:1606.00256 (2016).
  • [34] F. Ablayev, A. Vasiliev, “Quantum hashing,” arXiv:1310.4922.
  • [35] S. Banerjee, H. Meena, S. Tripathy, P. K. Panigrahi, “Fully quantum hash function,” arXiv:2408.03672.
  • [36] A.V. Vasiliev, “Binary quantum hashing,” Russ. Math. 60, 61 (2016). https://doi.org/10.3103/S1066369X16090073
  • [37] Yang, YG., Bi, JL., Li, D. et al. “Hash function based on quantum walks,” Int. J. Theor. Phys. 58, 1861 (2019). https://doi.org/10.1007/s10773-019-04081-z
  • [38] L. Behera, G. Paul, “Quantum to classical one-way function and its applications in quantum money authentication,” arXiv:1801.01910.
  • [39] Yang, YG., Xu, P., Yang, R. et al. “Quantum hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption,” Sci. Rep. 6, 19788 (2016). https://doi.org/10.1038/srep19788.
  • [40] K. P. Kalinin, N. G. Berloff, “Blockchain platform with proof-of-work based on analog Hamiltonian optimisers,” arXiv:1802.10091.
  • [41] NIST, “Secure hash standard (SHS),” FIPS PUB 180-4 (2015). Available at: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf.
  • [42] S. Aaronson, “Shadow tomography of quantum states,” in Proc. 50th Annu. ACM SIGACT Symp. Theory Comput. (STOC), 325 (2018), https://doi.org/10.1145/3188745.3188802.
  • [43] H. Y. Huang, R. Kueng, J. Preskill, “Predicting many properties of a quantum system from very few measurements,” Nat. Phys. 16, 1050 (2020), https://doi.org/10.1038/s41567-020-0932-7.
  • [44] J. F. Clauser, M. A. Horne, A. Shimony, R. A. Holt, “Proposed experiment to test local hidden-variable theories,” Phys. Rev. Lett. 23, 880 (1969).
  • [45] Hui-Hui Qin, Shao-Ming Fei, and Xianqing Li-Jost, “Trade-off relations of Bell violations among pairwise qubit systems,” Phys. Rev. A 92, 062339 (2015).
  • [46] H.Y. Huang, M. Broughton, M. Mohseni et al., “Power of data in quantum machine learning,” Nat. Commun. 12, 2631 (2021). https://doi.org/10.1038/s41467-021-22539-9
  • [47] Rakesh Sharma, “Do Bitcoin mining energy costs influence its price?,” https://www.investopedia.com/news/do-bitcoin-mining-energy-costs-influence-its-price/
  • [48] https://en.bitcoin.it/wiki/Weaknesses
  • [49] Dootika Vats and Christina Knudson, “Revisiting the Gelman-Rubin diagnostic,” arXiv:1812.09384
  • [50] D-Wave white paper, “Computational Power Consumption and Speedup.”

Appendix A Cross-Validation Limitations in Boson and Random-Circuit Sampling

To operate within a quantum blockchain, QPUs must cross-validate each other's outcomes. Unstructured distributions yielding states with close to uniform random probability, such as those obtained from boson or random-circuit sampling, are unsuitable for this purpose: the expectation values of all foreseeable $k$-local operators remain close to zero, making them ineffective in the construction of witnesses. The only observable with a nontrivial expectation is $\langle\psi|\rho|\psi\rangle$, where $|\psi\rangle$ is a ground-truth wave function. For this reason, the estimator commonly used to establish supremacy is the cross entropy [29]. Given a distribution allowing fair and efficient sampling in the computational basis ($P_{\rm QPU}$, typically a QPU), and a well-resolved ground-truth distribution in the same basis ($P_{\rm GT}$, used for verification), the cross entropy is quantified as

$$\text{XEB} = 2^n \sum_x P_{\rm GT}(x)\,P_{\rm QPU}(x) - 1.\qquad(31)$$

$P_{\rm GT}$ is typically computed using tensor-network techniques at small scales. This allows the sum to be evaluated by Monte Carlo sampling (of $P_{\rm QPU}$, i.e., using a sample set). Calculating $P_{\rm GT}$ by classical methods becomes intractable at large system sizes (hence supremacy). The alternative is then to determine $P_{\rm GT}$ from a QPU. XEB is small in experimental demonstrations, even using an ideal (tensor network) distribution as $P_{\rm GT}$ [27, 28]. Nevertheless, we might hope that $P_{\rm GT}$ could be constructed for the purpose of validating quantum work.
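When a ground-truth distribution is available, the Monte Carlo evaluation takes a single line; the sketch below (our illustration) assumes $P_{\rm GT}$ is given as a mapping from sampled bitstrings to probabilities.

```python
import numpy as np

def xeb_estimate(qpu_samples, p_gt, n):
    """Monte Carlo estimate of Eq. (31): for x drawn from P_QPU,
    XEB = 2^n * E[P_GT(x)] - 1."""
    return 2.0 ** n * np.mean([p_gt[x] for x in qpu_samples]) - 1.0
```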

Unfortunately, the properties of boson-sampled and random-circuit-sampled distributions mean that the summation in (31) cannot be executed in a scalable way, even by Monte Carlo methods. Since the sampled state probabilities are all close to $1/2^N$ (for $N$ spins), establishing $P_{\rm GT}$ with sufficient resolution requires $O(2^N)$ samples. More generally, for any pair of high-entropy distributions amenable to fair sampling (but not exact integration), impractically large bias in the estimation of distributional distance can be an issue [29, 1]. Entropy is especially large in boson and random-circuit sampling.

Appendix B CHSH Inequality

In this appendix we introduce the CHSH inequality and formulate it in a way suitable for our application. Consider two observers, $a$ (Alice) and $b$ (Bob), each allowed to perform measurements in two orthogonal bases, “1” and “2”, via operators $a_1, a_2, b_1, b_2$. We define the Bell operator as

$$B = a_1 b_1 + a_1 b_2 + a_2 b_1 - a_2 b_2.\qquad(32)$$

Consider a quantum state represented by a density matrix $\rho$. Measuring the Bell operator results in an expectation value $\langle B\rangle_\rho = \mathrm{Tr}[\rho B]$. Bell's theorem states that for any classical local hidden variable theory we should have

$$|\langle B\rangle_\rho| \leq 2 \qquad \text{Classical},\qquad(33)$$

whereas quantum mechanics allows

$$|\langle B\rangle_\rho| \leq 2\sqrt{2} \qquad \text{Quantum}.\qquad(34)$$

This is commonly known as the Clauser-Horne-Shimony-Holt (CHSH) inequality, an experimentally practical variation of Bell's inequality [44]. Violation happens when $\langle B\rangle_\rho > 2$, which indicates that the experimental results cannot be explained by any local hidden variable theory. Violation of the CHSH inequality is a sufficient but not necessary condition for quantum mechanics. It is also evident that violation depends strongly on the choice of the $a$ and $b$ operators as well as the quantum state $\rho$. We therefore define an operator-independent quantity as a maximum over all possible operators:

$$\langle\text{CHSH}\rangle_\rho = \max_{a_\alpha,\,b_\beta} \mathrm{Tr}[\rho B].\qquad(35)$$

The CHSH inequality is therefore expressed as $\langle\text{CHSH}\rangle_\rho \leq 2$.

Consider the Bell state

$$|\psi\rangle = \frac{|\!\uparrow\uparrow\rangle + |\!\downarrow\downarrow\rangle}{\sqrt{2}},\qquad(36)$$

where $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$ are eigenstates of $\sigma^z$ with eigenvalues $\pm 1$, respectively. The pure-state density matrix is

$$\rho = \frac{1}{2}\big(|\!\uparrow\uparrow\rangle + |\!\downarrow\downarrow\rangle\big)\big(\langle\uparrow\uparrow\!| + \langle\downarrow\downarrow\!|\big).\qquad(37)$$

We define operators

$$a_1 = \sigma_a^x,\quad a_2 = \sigma_a^z,\quad b_1 = \frac{\sigma_b^x + \sigma_b^z}{\sqrt{2}},\quad b_2 = \frac{\sigma_b^x - \sigma_b^z}{\sqrt{2}},\qquad(38)$$

which means Bob's measurement bases are rotated by 45 degrees relative to Alice's. The Bell operator therefore becomes

$$B = \sqrt{2}\,\big(\sigma_a^x\sigma_b^x + \sigma_a^z\sigma_b^z\big).\qquad(39)$$

It is easy to check that $|\psi\rangle$ is an eigenfunction of $B$ with eigenvalue $2\sqrt{2}$. To violate Bell's inequality with $\langle B\rangle_\rho > 2$, we need strong correlations in both the $x$ and $z$ directions. Each correlator can at most equal one, so the optimal value is $2\sqrt{2}$, which occurs for the Bell state. But weaker correlations can also lead to violation, as long as the sum of the two correlators is greater than $\sqrt{2}$.
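This is also easy to verify numerically; the following NumPy check (ours, for illustration) confirms the eigenvalue in the computational ($\sigma^z$) basis.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])   # sigma^x
Z = np.array([[1., 0.], [0., -1.]])  # sigma^z

B = np.sqrt(2) * (np.kron(X, X) + np.kron(Z, Z))  # Bell operator, Eq. (39)
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)     # Bell state, Eq. (36)

assert np.allclose(B @ psi, 2 * np.sqrt(2) * psi)  # eigenvalue is 2*sqrt(2)
print(psi @ B @ psi)  # 2.828..., saturating the quantum bound of Eq. (34)
```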

Classically, if we represent each spin by a 2D vector $\vec{S} = [\cos\theta, \sin\theta]^T$, we obtain

$$B = \sqrt{2}\,(\cos\theta_a\cos\theta_b + \sin\theta_a\sin\theta_b) = \sqrt{2}\cos(\theta_a - \theta_b).\qquad(40)$$

The maximum happens when $\theta_a = \theta_b$, which gives $|\langle B\rangle|_{\max} = \sqrt{2}$. This is less than the maximum possible classically based on conditional-probability arguments, i.e., $|\langle B\rangle|_{\max} = 2$.

It is possible to define CHSH inequalities between pairs of qubits in a multi-qubit system. One can use (35) with $\rho_{ij}$ representing the reduced density matrix for qubits $i$ and $j$:

$$\langle\text{CHSH}\rangle_{\rho_{ij}} = \max \mathrm{Tr}[\rho_{ij} B],\qquad(41)$$

where the maximum is over all possible measurement bases. While each CHSH inequality can be violated, there is a trade-off relation that must be satisfied. For a system of $N$ qubits, we should have [45]

$$\sum_{i<j}^{N} \langle\text{CHSH}\rangle^2_{\rho_{ij}} \leq 2N(N-1).\qquad(42)$$

Since the number of pairs is $N(N-1)/2$, the bound is saturated if $\langle\text{CHSH}\rangle_{\rho_{ij}} = 2$ for all pairs. This means that it is not possible to violate all CHSH inequalities at the same time; if some of the inequalities are violated, there are always others that are not. This is closely related to the monogamy of entanglement, which states that if two subsystems are strongly entangled, they have less entanglement with other subsystems.

Appendix C Weaknesses and Mitigations

Many weaknesses for PoW blockchains such as Bitcoin are known, and many mitigations have been developed [48, 7, 8, 9, 10]. We focus on several attacks that might be attempted by an adversarial agent to gain reward with incomplete work. The reward might be gained deterministically, or more often probabilistically (in expectation). Neither the attacks presented, nor the mitigations, are an exhaustive list.

An example of an attack for PoW blockchains is the 51% attack. In this attack an attacker sends Bitcoin to a service provider. After a delay (of a few block generation events) the service provider considers the payment made to be immutable, and renders the service. The attacker then creates a fork in the blockchain by mining a block that precedes the payment block, and omits their payment to the service provider in this new branch. If they are able to make this new branch strong (long) enough all stakeholders will switch to it, at which point the transaction to the service provider is no longer part of the record. The attacker is then able to spend their money again (a double spend). If an attacker controls a significant percentage of compute power it can be worth their time to attempt such an attack to negate a large payment, and if they have control of 51% of compute power they will succeed with high probability.

The 51% attack and other blockchain weaknesses apply also to our probabilistic blockchain. To mitigate the 51% attack we require well-distributed quantum compute power; since mining power can be split by probabilistic verification, the threshold can be lower than 51% (but still substantial if mining efficiency is high enough). We focus in this appendix on attacks that are new or qualitatively changed relative to classical or non-probabilistic blockchains. Attacks are different and potentially more powerful owing to several factors, including a lack of determinism, bypassing of experimental work (inference given the experimental parameters, i.e. spoofing), and changes in the relative costs of broadcasting, mining, and validation.

As described in the main text, the particular proof-of-concept experiment is premised on the notion of honest network users and is susceptible to attacks. In addition to mitigations described in the main text we explore additional procedures in this appendix. We present a set of tools to protect against no-work and partial-work attacks, but optimization of the experimental ensemble and digitalization process, essential in practice, goes beyond the scope of this publication.

C.1 No Work Attacks

We consider first the possibility that a miner might present a fraudulent block, one in which the nonce is chosen uniformly at random without conducting any experiment. In Bitcoin a random nonce can be used in a broadcast block. This convinces either all users, with probability $2^{-\mathcal{N}_{\rm zeros}}$, or no users. The expected reward from such a strategy is $2^{-\mathcal{N}_{\rm zeros}}R-C_{B}-C_{P}$, where $R$ is the expected block reward, $C_{B}$ is the cost of broadcasting the block, and $C_{P}$ is the cost of block preparation. Pursuing this strategy repeatedly, the cost of block preparation is negligible (only a bit increment of the nonce is required between proposals). By contrast, the expected reward from honest mining is $(R-C_{B})2^{-\mathcal{N}_{\rm zeros}}-C_{P}-C_{H}$, where $C_{H}$ is the cost of hashing (validation). The reward in Bitcoin is large enough compared to the broadcast cost and $C_{H}$ that mining is incentivized. Simultaneously $C_{B}>2^{-\mathcal{N}_{\rm zeros}}R$, so random broadcasts are disincentivized.
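
To make the incentive comparison concrete, the following minimal sketch (with made-up illustrative cost values) evaluates both expected rewards using the formulas above:

\begin{verbatim}
def reward_no_work(R, C_B, C_P, n_zeros):
    # Random-nonce broadcast: convinces all users with probability 2**-n_zeros.
    return 2 ** -n_zeros * R - C_B - C_P

def reward_honest(R, C_B, C_P, C_H, n_zeros):
    # Honest mining, per hash attempt: broadcast cost paid only on success.
    return (R - C_B) * 2 ** -n_zeros - C_P - C_H

# Illustrative (hypothetical) values: random broadcasts are disincentivized
# whenever C_B > 2**-n_zeros * R, as stated in the text.
R, C_B, C_P, C_H, n_zeros = 100.0, 0.1, 1e-6, 1e-4, 20
print(reward_no_work(R, C_B, C_P, n_zeros) < 0)   # True: attack unprofitable
print(reward_honest(R, C_B, C_P, C_H, n_zeros))   # positive: mining pays
\end{verbatim}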

The random-nonce strategy is similar to a denial-of-service (DoS) attack from the perspective of validators. In DoS an attacker broadcasts in large volume to overwhelm the bandwidth or verification capabilities of the network (thereby disrupting or delaying transactions).

DoS and random-nonce mitigations effective in other mature blockchain technologies can be used (see e.g. [48]). The situation is qualitatively the same in our probabilistic blockchain, although the expected reward is smaller as a function of the blockchain efficiency (enhanced branching), and validation may be inhomogeneous with respect to users. Validation is also more expensive than in many other blockchains, so a smaller number of malicious packets can be more disruptive. The inhomogeneity of validation is a problem, since mining power may be diluted by such an attack even if the attacker is not ultimately rewarded.

In our application we should also consider important practical changes: the cost of hashing $C_{H}$ is raised, since quantum measurements are more expensive than single SHA-256 hashes. Similarly $\mathcal{N}_{\rm zeros}$ may be smaller, since the verification time scale is longer. This makes a no-work strategy more viable: relative to Bitcoin there is a smaller gap between the honest and dishonest expected rewards. To mitigate no-work attacks a modified block structure can be effective, as described in Section C.4.

C.2 Partial work attacks

Partial work does not offer attack opportunities in Bitcoin, owing to the cryptographic properties of SHA-256. A hypothetical probabilistic hash might fail in an independent and identically distributed manner, providing similar guarantees against partial work. However, our hash is inferred from a known experimental protocol ($\Theta$) and digitalization ($G, W_{0}$) procedure, which in practical implementations can leave room for inference to bypass at least some quantum work.

Consider the case of a blockchain built upon quantum supremacy in the estimation of quench statistics [1], used in our proof of concept. Since signs on hyperplanes are determined as a (classical) cryptographic function of the block header, preimage resistance is assured: it is not possible to determine a block header from a desired (all-zero) bit pattern. However, given a nonce (thence protocol parameters $\Theta, G, W_{0}$) one can estimate correlations with increasing accuracy by a hierarchy of classical computational methods of increasing cost. Certain experiments ($\Theta$) may allow better or worse estimators, combined with certain realizations of $G$ and $W_{0}$ that allow an accumulation of signal relative to the error. In this way the sign of $W_{\alpha}-W_{0}$ for some (perhaps many) bits might be inferred with non-negligible, or even high, confidence.

An attack might proceed as follows. For many nonces an attacker estimates $\{W_{\alpha}\}$ by classical methods (e.g. approximate simulation, or machine learning). The results are ranked by the number of (robustly) threshold-satisfying bits, and the best candidate might be broadcast directly. The reward from this strategy is described as in the no-work scenario, but with an enhanced (classical) work cost and an enhanced probability that the bits are verified. This attack is mitigated in part by the modified block structure of Appendix C.4. In particular, spoofing enhanced experimental data (beyond the $\mathcal{N}_{\rm zeros}$ proof-of-work bits) is impossible without quantum supremacy [1].

A second way to use classical estimators is as a filter. Rather than broadcast the most promising nonce, the nonces are ranked and subjected to the prescribed quantum experiment; only if the quantum work validates the nonce is it broadcast. The filter allows a user to bypass experiments that are less likely to meet the Chainwork requirement. In this case, the filter success rate must grow slowly enough with the classical work for quantum experiments to be favored over the filter.
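
Schematically, the filter attack can be sketched as follows (function names are hypothetical; quantum_hash stands for the prescribed experiment and digitalization):

\begin{verbatim}
def filtered_mining(nonces, classical_score, quantum_hash, n_zeros, budget):
    """Classical pre-filtering of nonces (attack sketch).

    classical_score: cheap estimator of how many bits will satisfy thresholds.
    quantum_hash:    the expensive, prescribed quantum experiment.
    Only the most promising candidates consume quantum work.
    """
    ranked = sorted(nonces, key=classical_score, reverse=True)
    for nonce in ranked[:budget]:          # spend quantum work selectively
        bits = quantum_hash(nonce)         # run the prescribed experiment
        if not any(bits[:n_zeros]):        # all proof-of-work bits are zero
            return nonce                   # broadcast a fully validated block
    return None                            # filter failed within budget
\end{verbatim}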

To mitigate this filter attack it is necessary that all practical estimators are weak. One simple mechanism by which to weaken classical estimators is described in Appendix C.5. The quality of estimators as a function of the classical work invested is above all a function of the experimental ensemble (the distribution of $\Theta$).

C.2.1 Partial Quantum Work Attack

The basic +/-1 and confidence-based Chainwork definitions allow for near misses with respect to complete or incomplete work. A participant might have some confidence that their result is close to the threshold, and reasonably expect that, although their own validation fails, their nonce might be validated by a significant fraction of stakeholders.

Since a finite fraction of stakeholders (thence mining power) can validate a near-threshold block, there is a finite probability of such a block being incorporated in the strongest chain subject to a delay, motivating a miner to broadcast such a block. A careful balancing of block penalties and rewards is necessary to ensure quantum-partial work is not a profitable strategy; this is achievable by control of transaction and block rewards and an enhanced block structure.

The quantum experimental set-up also allows room for efficiencies, which should be understood in a full application. To minimize QPU access time (costs) a miner should optimize parameters such as the number of samples and the programming time. They could then more efficiently search, or filter, nonces. A miner is incentivized to prefilter nonces with cheap experiments, and then seek higher confidence (via enhanced sampling) only on candidates for broadcast. In our demonstration, by contrast, we fix QPU access time to 1 second in both mining and validation.

C.3 Multi-Block Attack

The multi-block attack can be considered a generalization of the 51% attack. It involves an attacker proposing a no-work (or partial-work) block and seeking to bury it under full-work blocks. By burying no-work blocks with full-work blocks they may receive a reward enhanced in proportion to the block ratio.

Our definitions of Chainwork mean that for any path to be viable as a strongest chain there must be strictly more verified blocks than unverified blocks. Owing to this, the criteria necessary for this attack to succeed are similar to those of the standard 51% attack. In effect an attacker must be able to grow the chain more quickly than alternative non-fraudulent candidates, in order to undo the sum-work damage of their fraudulent block. This requires control of a significant fraction of community compute power, assuming practical chain efficiency (concentrated honest mining power, up to small fluctuations).

C.4 Enhanced Block Structure Mitigations

An enhanced block structure, shown in Fig. 6, can mitigate no-work and partial-work attacks, as now discussed.

Figure 6: The standard block (details in Figure 1) can be broadcast with additional information or costs, shown in red and blue, to mitigate for weaknesses. Some or none of these enhancements may be useful, depending on the strength of the quantum work. Some of this information, marked red, such as the enhanced experimental data, is not cryptographically protected by the quantum work, but this encoding becomes cryptographically (securely) linked when a subsequent block is created, via the SHA-256 hash of the enhanced block (entering the next standard header). Hybrid (classical) work (characterized by a secondary nonce requirement), or stakes, can be a means to raise the cost of tampering with this information, and/or the costs associated with fraudulent block broadcasts. A stake is a signed payment by the miner that is forfeited if the block fails to enter the strongest chain (not related to proof of stake). Stakes associated with failed branching can be collected alongside the standard transactions, disincentivizing branch proposal. The secondary block information may be marked with fields mirroring the standard block, such as timestamp, version and hardness (not shown).

Block broadcasts that are verified with low probability should pay a broadcasting cost. A sufficient mitigation strategy is to raise the cost of broadcasting blocks that have a low probability of validation, and to raise the bar for validation by requiring the publication of enhanced experimental information. A modified block structure can achieve this, as shown in Fig. 6. We describe some potential components:

C.4.1 Network and identity mitigations

If broadcasters are not anonymous, their behavior can be tracked. A rule can be introduced to penalize or temporarily block users who consistently propose unverifiable work, as is done for DoS. Note that a transmission path or node might be penalized even with limited information on the stakeholder.

C.4.2 Broadcasting with additional classical work

In order to broadcast, it can be required that a user complete a classical proof of work. The work should be enough to delay transmission and raise costs, yet negligible compared to the quantum work. If the classical work is sufficiently high, it ceases to be profitable to risk approximations in the quantum part. In the case where the classical work becomes comparable to the quantum work, we have a hybrid system. In an immature technology it may be valuable to pursue such a hybrid approach and gradually reduce the proportion of classical work as confidence is gained in the quantum operation. Quantum proof of work could of course also be hybridized with proof of stake as an alternative mitigation.
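
A minimal sketch of such a secondary broadcast requirement, assuming a Bitcoin-style SHA-256 leading-zeros target (parameter names are ours):

\begin{verbatim}
import hashlib

def broadcast_nonce(block_bytes: bytes, difficulty_bits: int) -> int:
    """Find a secondary nonce so that SHA-256(block || nonce) has
    difficulty_bits leading zero bits: a light classical proof of work
    that delays and prices block broadcasts."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

# Example: a 16-bit target costs ~2**16 hashes, negligible next to quantum work.
print(broadcast_nonce(b"example block header", 16))
\end{verbatim}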

C.4.3 Broadcasting with a stake

A user might be required to stake (bid on) their proposal. The stake can be a finite fraction of the block reward, signed from the miner's current wallet to a burn wallet; stakes for failed branches are paid to the burn wallet. In this way, any user is limited in the number of weak proposals they are able to make, and risks significant losses from making them. Payment to a burn wallet is a standard component in proof-of-stake blockchains such as Ethereum, but note that the stake here does not require any community involvement. This is not a proof-of-stake consensus mechanism, which requires a community-wide mechanism.

C.4.4 Enhanced experimental data

We can also raise the difficulty of passing validation by requiring the publication of enhanced experimental data (EED). Quantum work is based on an $\mathcal{N}_{\rm zeros}$-bit description of the experimental outcome; by contrast, the full experimental data may be subject to more robust statistical validation tests that better eliminate bad proposals [1]. EED can fall outside the header if the information content is large, and be linked in a similar manner to transaction data. The $\mathcal{N}_{\rm zeros}$ bits used in validation are cryptographically protected, but a user might be required to publish EED consisting of more extensive statistics (up to perhaps full sample sets) as part of a broadcast. Although EED might be manipulated, it might be taken as a rule to evaluate the strongest available EED. Weak classical work (Section C.4.2) or a stake (Section C.4.3) might be used to protect EED and disincentivize manipulation. The EED can be hashed on the next mining event, so that it is cryptographically protected subject to a delay of one block. For quantum-supremacy-satisfying experiments EED might not be spoofed by classical methods, but this does not prevent classical filtering of nonces as described above.

C.5 Hyperplane distribution mitigations

Figure 7: Operation of an example blockchain of 128 block broadcasts generated in the limit of many miners, using projection orthogonal to $J$ with $\mathcal{N}_{\rm zeros}=20$ and confidence-based Chainwork with $\mathcal{N}_{\rm max}=1$. The most recent block mined is marked in black and the strongest chain in blue.

Suppose a per-nonce classical work less than the quantum experimental cost ($C_{H}$) allows for a witness estimator $\hat{W}$. One mitigation is to modify the distribution of thresholds or hyperplanes (to choose $W_{0}\sim\hat{W}$), or to choose hyperplanes orthogonal to $\hat{W}$. In either case the classical approximation is rendered uninformative on the signs expected from the quantum experiment. If the uncertainty in the quantum experiment is relatively small, we might still have sufficient accuracy in quantum experiments for robust validation. A visualization of this process is shown in Fig. 11.

In the case of our quench experiments, we know that nearest-neighbor correlations are correlated with $-J$; the approximation $C_{ij}\propto -J_{ij}$ accurately describes the weak-coupling (fast-anneal) limit. There are significant deviations from this estimate, but any hyperplane that is well correlated with $J$ can accumulate a reliable signal from weak estimators incorporating this approximation. To mitigate this we might choose our hyperplanes to be orthogonal to $J$. Choosing hyperplanes (or thresholds $W_{0\alpha}$) to decrease the reliability of classical estimators will typically enhance the uncertainty in quantum validation, thus reducing the efficiency and enhancing delays in a blockchain. This effect is shown in Fig. 15 as a function of $\mathcal{N}_{\rm zeros}$. In Fig. 7 we show an example of a chain resistant to simple classical inference based on local coupling strength.
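
A minimal sketch of the orthogonal-hyperplane construction (ours, for illustration), treating $J$ as a flattened vector of coupler values:

\begin{verbatim}
import numpy as np

def hyperplanes_orthogonal_to_J(J_flat, num_planes, seed=0):
    """Gaussian hyperplanes projected into the space orthogonal to J,
    so an estimator of the form C ~ -J carries no signal about the bits."""
    rng = np.random.default_rng(seed)
    u = J_flat / np.linalg.norm(J_flat)           # unit vector along J
    G = rng.standard_normal((num_planes, J_flat.size))
    G -= np.outer(G @ u, u)                       # remove component along J
    return G

# Any classical estimate proportional to -J now projects to (numerically) zero:
J = np.random.default_rng(1).choice([-1.0, 1.0], size=384)
G = hyperplanes_orthogonal_to_J(J, num_planes=64)
print(np.abs(G @ (-J)).max())                     # ~1e-13
\end{verbatim}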

Appendix D Experimental Implementation Details

Figure 8: Energy scales (29) of the general-access QPUs used in experiments. $\Gamma(s)$ decreases with anneal progression $s$; $\mathcal{J}(s)$ increases. Transverse fields are comparable on all devices, but the problem energy scale is larger in Advantage2. Advantage QPUs can be programmed with (approximately) double the anneal duration to simulate an Advantage2 QPU full-energy-scale anneal [1].
Figure 9: Operation of an example blockchain using four generally-accessible Advantage and Advantage2 QPUs, with $\mathcal{N}_{\rm zeros}=40$, 100 miners, 780 total block broadcasts, and basic +/-1 Chainwork; data was collected across more than two weeks. The color scheme matches that of Figure 3; node placement and line-thickness conventions follow Figure 4. It is apparent in the larger number of black and gray (unconfirmed) transactions, and in the increased rate of edges spanning the spiral, that this blockchain is of lower efficiency and larger delay than that presented in Figure 4. This is consistent with larger $\mathcal{N}_{\rm zeros}$, as shown in Figures 5 and 15. A temporary decrease in efficiency was also observed mid-chain, owing to one of the four QPUs going offline for an hour.
Figure 10: Operation of an example blockchain using the Advantage2 QPU and the dimerized biclique spin-glass problems, with $\mathcal{N}_{\rm zeros}=20$, 100 miners, 592 block broadcasts, and basic +/-1 Chainwork; data was collected across 10 days. The color scheme, node placement and line thicknesses match the conventions of Figures 3 and 4.
Figure 11: A high-dimensional statistics vector $V$ is estimated; here only 2 dimensions are shown. Quantum (and classical) estimators are subject to some uncertainty, represented as a ball; in high dimensions and with reasonable variance in estimation, all estimators become well resolved in space. By standard locality-sensitive hashing we can digitalize the result: we record which side of each hyperplane the statistics fall on. With respect to hyperplanes $G_{2}$ and $G_{3}$ quantum statistics are encoded as +1 and +1, but the outcome with respect to $G_{3}$ is subject to experimental uncertainty. A weak classical estimator, such as any estimator without access to $J$, produces uniform random results. Stronger estimators might produce outcomes correlated in space with the quantum result. Hyperplane distributions can be adjusted by displacement of the origin, or by selecting hyperplanes orthogonal to strong classical estimators. $G_{2,\cdot}$ (a projection orthogonal to the strong classical estimator) allows high confidence in the quantum result, but both the strong and weak classical estimators shown produce outcomes uncorrelated with the quantum result.
Figure 12: Validation rates with respect to experiments on different QPUs. Validation rates are higher using a common QPU, or a common QPU architecture (Advantage (Adv) or Advantage2 prototype (Adv2)). They are slightly higher on Advantage QPUs, owing to the smaller size of the Advantage2 prototype QPU combined with the bound of 1 second of QPU access time. On each QPU the probability of an outcome 1 is measured from 20 independent programmings (with randomized embeddings), for 5 $J$ realizations and a total of 5120 projections. From these 20 experiments we infer bit error rates for different QPU combinations. 3 Advantage QPUs are used, with some small variability between QPUs (not shown). (top) Hyperplanes are chosen with independent and normally distributed components. (bottom) The same hyperplanes projected into the space orthogonal to $J$ (see Section C.5).
Figure 13: Witnesses $W_{\alpha}$ measured on Advantage (Adv) and Advantage2 (Adv2) QPUs are similar, and much larger than the root mean square (RMS) difference between $W_{\alpha}$ in different experiments (programmings with independent embeddings on a fixed system, or between systems, as labeled). $\delta W_{\alpha}$ is shown as an RMS value across all programmings (variation of embedding and QPUs). (top) Each hyperplane has independent and normally distributed components. (bottom) The same hyperplanes projected into the space orthogonal to $J$ (see Section C.5); this enhances spoof resistance, but reduces the signal-to-noise ratio ($W_{\alpha}$ are made smaller relative to $\delta W_{\alpha}$).
Figure 14: Expected hashes required per broadcast with confidence-based Chainwork, measured by Monte Carlo sampling. For basic +/-1 Chainwork the expected number of hashes required per broadcast is precisely $2^{\mathcal{N}_{\rm zeros}}$; confidence-based Chainwork is close to this, with the corrections shown. As $\mathcal{N}_{\rm max}$ increases it is easier to find work-satisfying hashes, so blocks are easier to mine. Using $J$-orthogonal hyperplanes more bits are uncertain and it becomes harder to find a confidence-satisfying bit sequence (the mining rate goes down).
Figure 15: Statistics from 5-10 chains, each of length 512-2048, using the QPU-witness resampling methodology (see Appendices D.1 and D.3). Hyperplanes are either i.i.d. normally distributed, or restricted to be orthogonal to $J$ (strengthening hashes against weak classical estimators). Resistance to spoofing increases uncertainty in validation, reducing efficiency. The Chainwork uses either the basic +/-1 definition, or the confidence-weighted definition with $\mathcal{N}_{\rm max}=1$ or $2$: confidence-weighted strongest chains are more efficient. (top) Mining efficiency measures the fraction of block broadcasts entering the strongest chain in the limit of many miners. (bottom) Transactions (blocks) are uncertain until agreed upon by the majority of miners. Transaction delay with 95% confidence measures the depth required for a consolidation of 95% of mining power in the limit of many miners. Stakeholders might consider blocks buried this deep within their strongest chain as immutable with 95% confidence; higher confidence implies higher delays.

The dimerized cubic lattice is equivalent to a cubic lattice minor-embedded in the QPU with chain strength 1, with a regular pattern of connectivity between chains. All couplers are $\pm 1$, uniformly and independently distributed. This matches the cubic-nodimer ensemble of [1]. A dimerized biclique is equivalent to an embedded biclique ($K_{n,n}$) model. Couplers are $\pm 1$ on chains, and $\pm\frac{1}{\sqrt{n}}$ between chains in a regular pattern. For the dimerized biclique, the regular pattern of inter-chain couplers is specific to the Advantage2 QPU, so we do not perform mining or cross-QPU validation on Advantage QPUs. The dimerized biclique ensemble matches that of [1].

For 3D spin glasses we run experiments (mining or validation) on each of the four generally-accessible QPUs, in each case with parallel embeddings (because we consider relatively small problems of 128 qubits, we can program many copies in parallel). We fix the annealing time on the Advantage2 device to be 5 ns, and fix annealing times for the other three Advantage devices at approximately 10 ns, so as to maximize the cross-platform validation rate (a median with respect to a test set of spin-glass disorder realizations and hyperplanes). The difference in programmed times reflects a difference in energy scales between devices; the unitless unitary evolution then becomes well matched, at least throughout the critical region [1]. Energy scales are shown with respect to the Hamiltonian (29) in Fig. 8. We then fix the number of reads to fully utilize the maximum QPU access time (1 second) per programming, and otherwise use default parameters. For the biclique problem we have 72 qubits and fix the Advantage2 annealing time to 15 ns. Experimental plots relate to the cubic lattice ensemble, unless stated otherwise.

Per experiment (mining or validation) we choose uniformly at random among the available QPUs and then embed the problem subject to an automorphism and spin-reversal transform on each available embedding. The effect of these transformations is to randomize the mapped position and signs on programmed fields and couplers, in effect emulating the enhanced noise that might be anticipated from (more) distributed computation.

Returned sample sets are processed to nearest-neighbor correlations $C$ (1 per programmed coupler), which are then projected with independent and identically distributed hyperplanes to determine witnesses. Hyperplanes have independent and normally distributed components. In the case of Fig. 15 we also consider a variation in which hyperplanes are projected into the space orthogonal to $J$, as described in Section C.5. $C$ is correlated with $-J$, and we remove the linear part of this correlation with such a projection, making the quantum statistics harder to spoof classically. A visualization of the projection process is shown in Fig. 11; experimental distributions for witnesses are shown in Fig. 13. Uncertainty in witness values is typically much smaller than the witness value. Witness values vary more between Advantage and Advantage2 QPUs than for same-generation QPU validations.

Witnesses are thresholded to determine hashes. This digitalization of the correlations is the standard locality-sensitive hashing method, which guarantees preservation of local distances. With enough projections, a small (supremacy-achieving) error in correlations translates to a small (supremacy-achieving) Hamming distance in the resulting bit sequence. The probability that bits are validated (reproduced) under a repeated experiment is shown in Fig. 12. More than 99% of bits are typically validated, but the remaining bits can have high uncertainty, particularly with respect to cross-processor values.
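
The pipeline just described (samples to correlations, correlations to witnesses, witnesses to bits) can be sketched compactly; the following is a minimal illustration under our naming assumptions (samples, couplers, G, W0 are hypothetical variable names):

\begin{verbatim}
import numpy as np

def quantum_hash_bits(samples, couplers, G, W0):
    """Locality-sensitive hashing of quench statistics (sketch).

    samples:  (num_reads, num_qubits) array of +/-1 spins
    couplers: list of (i, j) pairs, one per programmed coupler
    G:        (num_planes, num_couplers) hyperplane matrix
    W0:       (num_planes,) thresholds
    """
    # Nearest-neighbour correlations, one per coupler
    C = np.array([(samples[:, i] * samples[:, j]).mean() for i, j in couplers])
    W = G @ C                          # witnesses: hyperplane projections
    return (W > W0).astype(np.uint8)   # bit = side of each hyperplane
\end{verbatim}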

We performed simulations of many chains with 100 miners modeling a mining pool and transactions, or taking the limit of many miners (where no miner mines more than one block). Examples are shown in Figs. 4, 9, 10 and 7. However, assuming no simultaneous broadcasts, only the distribution of witness statistics impacts the structure and stability of the chain, via mining events and stakeholder validation. Understanding this, we were able to accelerate mining (Section D.2) and simulate experiments by resampling (Section D.3). We first describe our main statistical measures of blockchain stability in Section D.1.

D.1 Blockchain Efficiency and Transaction Delay

For a blockchain to be successful, (1) efficiency must be high so mining work is reliably rewarded and (2) delays in transaction confirmation must be small to enable use as a real-time ledger. These quantities are strongly correlated but subtly different.

Any stakeholder identifies a node with maximum Chainwork that defines the strongest chain; this can be considered a random variable and defines a unique strongest path from the genesis node.

We define blockchain efficiency as the expected number of blocks in the strongest path divided by the total number of blocks broadcast, in the limit of many blocks. For viable parameterizations this concentrates at a finite value as the number of block broadcasts becomes large. For purposes of Figs. 5 and 15 we estimate this value once after every block broadcast for blockchains of between 512 and 2048 broadcasts (mining events). We discard the values for the first 50% of the chain evolution and average the remaining result. The resulting value is shown as a mean and standard error with respect to either 5 or 10 independent chains.

We can establish two such independent strongest paths, using two stakeholders who have not mined any blocks, and define their pair delay, $D_{\rm pair}$, as the number of blocks in the first path absent from the second path (i.e. the depth at which there is disagreement in the strongest chain). If two users agree on the strongest chain this delay is 0, and it is otherwise positive. Stakeholders can be miners: the pair delay describes the distribution of mining power relative to a typical user. If pair delays are with high probability less than $\mathcal{D}$, then a stakeholder can have corresponding confidence that blocks of depth at least $\mathcal{D}$ on the strongest chain are immutable, since mining power is concentrated on branches compatible with the block's validity. We call $\mathcal{D}$ the transaction delay.
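
A minimal sketch of these two statistics, representing a strongest path as the sequence of block identifiers from the genesis node (helper names are ours, for illustration):

\begin{verbatim}
def efficiency(strongest_path, total_broadcasts):
    """Fraction of broadcast blocks that entered the strongest path."""
    return len(strongest_path) / total_broadcasts

def pair_delay(path_a, path_b):
    """Blocks in path_a absent from path_b: the depth of disagreement.
    Zero when the two stakeholders agree on the strongest chain."""
    return len(set(path_a) - set(path_b))

# A stakeholder treats blocks deeper than the 95% quantile of observed
# pair delays as immutable with 95% confidence.
\end{verbatim}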

For Figure 15 we measure one pair delay per chain following each mining event, on chains of length from 512 up to 2048. We determine the empirical distribution of $D_{\rm pair}$, discarding statistics from the first half of each chain. We threshold the distribution to determine a transaction delay with 95% confidence, presenting the mean and variance with respect to at least 5 (at most 10) blockchains. Although pair delays measured this way are correlated, we have at least thousands of effectively independent samples for each data point, so that a 95% quantile is established with reasonable confidence [49].

D.2 Mining Acceleration for Blockchain Analysis

Each mining event requires $O(2^{\mathcal{N}_{\rm zeros}})$ experiments. For purposes of blockchain structural analysis we show methods whereby only $O(1)$ experiments are required per mining event. This allows us to present, with practical compute resources, results on stability for large $\mathcal{N}_{\rm zeros}$, limited primarily by the number of validation experiments rather than mining.

A valid mining event occurs when the QPU produces witnesses that (a) after thresholding contain $\mathcal{N}_{\rm zeros}$ zeros (with the +/-1 Chainwork definition) or (b) satisfy the confidence-based Chainwork criteria.

We first make the approximation in all experiments of perfect network synchronization, discounting the rare event of two miners simultaneously completing a proof of work. This approximation is reasonable if the mining rate is small enough.

We employ a hashing method where the overall sign on any hyperplane is uniformly distributed in $\{-1,1\}$. For this reason the probability that a nonce describes a broadcastable (valid) block in case (a) is $2^{-\mathcal{N}_{\rm zeros}}$; mining is successful if and only if the hyperplane signs satisfy $H_{\alpha}=\mathrm{sign}(W_{\alpha})$. We can therefore emulate the properly distributed statistics of a fair mining event by first running an experiment to produce $W_{\alpha}$, and then resampling the signs on the hyperplanes to allow mining.
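
A sketch of this accelerated +/-1 mining step, under the stated property that each hyperplane's overall sign is uniform in $\{-1,1\}$ (ours, for illustration):

\begin{verbatim}
import numpy as np

def accelerated_mining_event(W):
    """Emulate a fair mining event from a single experiment (sketch).

    Mining succeeds iff every hyperplane sign equals sign(W_alpha); rather
    than sampling O(2**n_zeros) nonces, adopt the unique successful sign
    assignment and account for its probability separately."""
    signs = np.where(W >= 0, 1, -1)    # the sign choice that mines
    p_mine = 2.0 ** (-len(W))          # chance a random nonce would mine
    return signs, p_mine
\end{verbatim}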

For a confidence-based Chainwork, the probability of successful mining is a function of more than the signs on the hyperplanes. To accelerate mining events in this case, we perform a large number of experiments to determine many copies of $W_{\alpha}$ for $\alpha=1$ to $\mathcal{N}_{\rm zeros}$. In each case, we exhaustively enumerate all hyperplane signs consistent with a mining event, thereby enumerating a subset of all mining events. These mining events are correlated (in some cases differing only by the signs on hyperplanes), but this is judged to be a weak correlation. Most often only one value for the signs is supported, and events that allow many sign values are rare for the parameter ranges presented. The rate of mining can be inferred, and the mining subset can be used in a randomized order to model the mining.

The probability of confidence-threshold-satisfying events is a function of $\delta W$, $\mathcal{N}_{\rm max}$ and $\mathcal{N}_{\rm zeros}$, and remains approximately equal to $2^{-\mathcal{N}_{\rm zeros}}$, subject to a small correction, as shown in Fig. 14.

D.3 Resampling Witness Methodology

For the case that Chainwork is defined as +/-1 for matching (unmatching) sequences, knowing the distribution of the $\mathcal{N}_{\rm zeros}$ bits for each device used in mining and validation is sufficient to evaluate the distribution of Chainwork, and thereby the evolution of the blockchain. For the case of confidence-based Chainwork, it is sufficient to know the distribution of $\{W_{\alpha}\}$.

For a training set of 5 experiments and 1024 random projections, the distribution of bits (thence validation rates) and the distribution of $W_{\alpha}$ (thence confidence in validation) can be established, as shown in Figs. 12 and 13. We fit the witness distributions by Gaussians, and the thresholded bit outcomes by Bernoulli distributions. For purposes of accelerating analysis we can then resample these distributions to model a larger set of experiments without use of the QPU for each emulated experiment.
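
A minimal sketch of the resampling step under these Gaussian assumptions (array shapes are ours, for illustration):

\begin{verbatim}
import numpy as np

def fit_and_resample_witnesses(W_train, num_emulated, seed=0):
    """W_train: (num_programmings, num_planes) witnesses from training runs.
    Returns (num_emulated, num_planes) Gaussian-resampled witnesses,
    emulating further experiments without QPU access."""
    rng = np.random.default_rng(seed)
    mu = W_train.mean(axis=0)
    sigma = W_train.std(axis=0, ddof=1)  # per-plane spread across programmings
    return rng.normal(mu, sigma, size=(num_emulated, mu.size))
\end{verbatim}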

We employ this procedure in Figs. 5 and 15, with results consistent with full blockchain experiments involving a finite number of miners, such as those in Fig. 9. We consider 5120 combinations of model (spin-glass realization) and hyperplanes, for each of the 4 experimental QPUs. Using 20 programmings, we determine the distributions (Gaussian or Bernoulli for witnesses and bits, respectively). We can then use these parameterized distributions to generate realistic witness (and thresholded-witness) samples. For each mining attempt, we uniformly randomly sample a combination of model and hyperplanes; then for any mining or validation event we choose a QPU uniformly at random and sample from the corresponding distribution.

We can further save on computation costs by mining (validating) only as necessary for purposes of constructing the blockchain. We need only understand a miner's verification pattern if they are responsible for broadcasting the next block. We therefore construct a miner's Chainwork on the fly and establish their strongest chain at the time of mining. With this approximation we can work in the informative limit of well-distributed compute power (many miners), where each miner mines at most one block in any blockchain. The number of validation events required to realize a chain with $L$ blocks broadcast in total is $L(L-1)/2$.

Qualitative justification for this approach relies on the observation that a spin-glass ensemble operated in our experimental regime is self-averaging: the distributional properties of witnesses fluctuate little between typical spin-glass disorder realizations. Variation in witness values between QPUs (and embeddings) appears well described by Gaussian distributions, and since hyperplanes are nearly orthogonal the correlations in their fluctuations are weak. However, we expect this method to underestimate fluctuations, so that Figs. 5 and 15 may be biased toward higher efficiency (and reduced delay). Full simulations (in which every verification and mining event uses a QPU) are used for all other figures. The outcomes are in qualitative agreement with the resampling method, and in reasonable quantitative agreement with respect to blockchain efficiency and delay.