Blockchain with proof of quantum work
Abstract
We propose a blockchain architecture in which mining requires a quantum computer. The consensus mechanism is based on proof of quantum work, a quantum-enhanced alternative to traditional proof of work that leverages quantum supremacy to make mining intractable for classical computers. We have refined the blockchain framework to incorporate the probabilistic nature of quantum mechanics, ensuring stability against sampling errors and hardware inaccuracies. To validate our approach, we implemented a prototype blockchain on four D-Wave™ quantum annealing processors geographically distributed within North America, demonstrating stable operation across hundreds of thousands of quantum hashing operations. Our experimental protocol follows the same approach used in the recent demonstration of quantum supremacy [1], ensuring that classical computers cannot efficiently perform the same computation task. By replacing classical machines with quantum systems for mining, it is possible to significantly reduce the energy consumption and environmental impact traditionally associated with blockchain mining. Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology.
I Introduction
Blockchain technology is widely regarded as one of the most transformative innovations of the 21st century. Its primary purpose is to provide a decentralized and secure method for recording and verifying transactions without the need for a central authority. The concept of blockchain was introduced with the creation of Bitcoin in 2008, as described in the now-famous whitepaper published under the pseudonym Satoshi Nakamoto [2]. Initially developed to enable cryptocurrencies, blockchain technology has since expanded to a diverse range of applications, including decentralized finance (DeFi) [6], identity verification [5], supply chain management [3] and healthcare [4]. Key characteristics of blockchains include transparency (all transactions are visible to participants in the network) and resistance to tampering (it is extremely challenging for malicious actors to modify data on the blockchain, so transactions are practically immutable) [7, 8, 9]. These features have made blockchain technology a cornerstone of the shift toward Web 3.0, the next generation of the internet [10, 11].
At its core, a blockchain is a distributed ledger that operates across a network of computers, where data is stored in “blocks” that are cryptographically linked to form a sequential chain. This architecture eliminates the need for central intermediaries, enabling trustless interactions between parties. The integrity of the data is secured through consensus mechanisms, the process by which blockchain networks agree on the validity of transactions and ensure the synchronization of their distributed ledgers. In the absence of a central authority, consensus mechanisms enable trust and cooperation among participants in the network. The choice of consensus mechanism impacts a blockchain’s security, scalability, energy efficiency, and decentralization, making it a key component of the network’s design. These mechanisms ensure that no single entity can control the ledger, thus preserving its decentralized nature. However, they also present challenges.
Traditional blockchains use consensus algorithms like proof of work (PoW) and proof of stake (PoS) to validate transactions. PoW relies on the significant computational effort required to generate valid blocks. Distributed computational power ensures that no single user can control the blockchain's evolution. Introducing invalid transactions, such as double-spending, requires on the order of 50% of the community's compute power, which is financially prohibitive due to the substantial equipment and electricity costs. While PoW effectively enhances network security, it also leads to significant energy consumption. Bitcoin mining alone consumed an estimated 175.87 terawatt-hours (TWh) of electricity in 2024, surpassing the yearly electricity usage of entire countries, including Sweden [12, 13]. This high energy consumption raises environmental concerns.
PoS addresses this issue by selecting validators based on the amount of cryptocurrency each holds and is willing to stake as collateral, incentivizing honest behavior by exposing their holdings to potential loss in case of misconduct. However, a major drawback of PoS is the potential for wealth concentration, as those with larger holdings have a greater chance of being selected as validators, further increasing their wealth over time. This can reduce the network's decentralization, a core principle of blockchain technology. Other mechanisms, such as delegated proof of stake (DPoS) [14] and practical Byzantine fault tolerance (PBFT) [15], offer alternative approaches tailored to specific use cases.
As blockchain technology advances, researchers and technologists are investigating innovative approaches to overcome its limitations and ensure its long-term viability. One exciting avenue is the integration of quantum computing with blockchain systems, a combination that could transform the field by offering unmatched security, scalability, and computational power. This article introduces a blockchain architecture that incorporates a proof of quantum work (PoQ) consensus mechanism, where quantum computers are required to validate transactions. The central concept is to harness quantum supremacy to mine problems intractable to classical computers. Quantum supremacy is a term introduced by John Preskill [16] that refers to a scenario in which a quantum computer performs a task that classical computers cannot accomplish with reasonable resources. While there could be various ways to leverage quantum supremacy within a blockchain, our proposal builds on Bitcoin’s architecture with minimal modifications, taking advantage of its proven reliability and robustness.
Unlike previous quantum blockchain proposals [17, 18, 19, 20, 21, 22, 23, 24, 25, 26], which often assume access to fault-tolerant quantum computing or quantum teleportation, our approach integrates both classical and quantum computation to leverage quantum supremacy with existing and near-term QPUs. Currently, Ref. [1] is the only demonstration of quantum supremacy not based on random-circuit sampling [27, 28], and thus the only quantum supremacy demonstration relevant to this work (see Appendix A). Using a small-scale prototype, we demonstrate this blockchain's functionality on four generally-accessible D-Wave Advantage™ and Advantage2™ quantum processing units (QPUs) at different geographic locations across North America. By carefully accounting for the probabilistic nature of quantum computation and potential errors within Bitcoin's architecture, we achieve stable operation across thousands of mining processes distributed among hundreds of stakeholders. This marks the first successful demonstration of an important real-world application that directly leverages quantum supremacy, operating seamlessly across an internationally distributed network of quantum computers.
II Classical Blockchain Architecture
Before introducing any quantum concepts, we first describe how standard blockchain technology works, using Bitcoin's architecture as a prime example. At the heart of Bitcoin is its distributed ledger, a decentralized database that records all transactions. This ledger is maintained across a network of computers, ensuring both transparency and security. Each node on the network stores a complete copy of the blockchain, and updates are synchronized through consensus mechanisms. These mechanisms ensure that only valid transactions are added to the blockchain, making it resistant to tampering or fraud.

Transactions are recorded in blocks linked sequentially in a chain, forming a continuous and immutable ledger. A Bitcoin block consists of a block header and a list of transactions (see Figure 1 for illustration). The block header acts as a metadata section and contains essential elements such as a timestamp, a Merkle root (a hierarchical summary of all transactions in the block), the nonce, and the hash of the previous block. A hash is a unique, fixed-length binary string generated by processing data through a cryptographic algorithm. It acts as a digital fingerprint, ensuring data integrity by making even the slightest change to the input data result in a completely different hash. The nonce is an arbitrary 32-bit number adjusted by the miners to solve the cryptographic puzzle required for block validation. Miners achieve this by repeatedly combining the transaction data with the nonce and passing it through a classical hashing algorithm, which generates an $\mathcal{N}$-bit hash ($\mathcal{N}=256$ for Bitcoin). Adding a block to the blockchain begins with miners competing to find a nonce that produces a hash value $\mathcal{H}$ below a specific target $\mathcal{T}_i$, with $i$ indexing the block. The inequality $\mathcal{H} < \mathcal{T}_i$ requires that the first $\mathcal{Z}_i$ bits of the hash be zero, where
$\mathcal{T}_i = 2^{\mathcal{N} - \mathcal{Z}_i}$.    (1)
Due to the irreversibility of the hash function, finding the correct hash can only be achieved through brute-force search, forcing miners to test countless nonce values. This process, known as “mining,” relies on a trial-and-error approach, requiring intensive computational effort to solve the cryptographic puzzles that underpin PoW. In Bitcoin, the network dynamically adjusts the difficulty every 2016 blocks (approximately every two weeks) by redefining the target $\mathcal{T}_i$, to ensure that the average time to mine a block remains close to 10 minutes. Once a miner successfully solves the puzzle, they broadcast the solution to the network. Other nodes then validate the solution by verifying the hash and ensuring the block's transactions are legitimate. Upon successful validation, the block is added to the chain, and the miner is rewarded.
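To make the mining loop concrete, the following minimal sketch implements a toy version of this brute-force nonce search in Python (the difficulty of 16 leading zero bits and the function names are illustrative, not Bitcoin's actual parameters or code):

```python
import hashlib

def mine(block_data: bytes, n_zero_bits: int) -> tuple[int, str]:
    """Brute-force search for a nonce whose SHA-256 hash has
    n_zero_bits leading zeros (toy stand-in for Bitcoin mining)."""
    target = 1 << (256 - n_zero_bits)   # hash must be strictly below this target
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# A deliberately easy difficulty so the search finishes in a fraction of a second.
nonce, digest = mine(b"previous hash + Merkle root + timestamp", n_zero_bits=16)
print(nonce, digest)
```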
The reward for mining a block consists of two components: the block reward and transaction fees. The block reward is a fixed amount of Bitcoin awarded to the miner. This reward halves approximately every four years in an event called the “halving.” In addition to the block reward, miners collect fees from transactions included in the block, incentivizing them to prioritize transactions with higher fees. This dual reward structure ensures the security and continuity of the Bitcoin network while gradually reducing the issuance of new Bitcoin, contributing to its scarcity.
A situation known as a “fork” occurs when two miners solve a block at nearly the same time, resulting in the blockchain temporarily splitting into two competing branches. Both branches propagate through the network, and nodes may accept one branch over the other depending on which block they receive first. The fork is resolved when miners adding new blocks to one of the branches make it longer than the other. Bitcoin follows the “strongest chain rule”, meaning the branch with the highest total computational work (often called Chainwork) is recognized as the valid chain, while the shorter branch is discarded. This process ensures network consensus but can result in transaction delays and unrewarded work. It is therefore essential to understand how the strongest chain is determined.
In Bitcoin, a set of valid proposed blocks forms a directed tree, rooted in a special genesis block, the first block introduced by Satoshi Nakamoto in 2009. Any path including an invalid block is to be ignored by stakeholders. Each node on the tree has an associated work, which is a sum of all work completed from the genesis block to the node (along a unique directed path). For the $i$-th block in the chain, the work is defined as
$\mathcal{W}_i = 2^{\mathcal{Z}}$.    (2)
Note, the $i$-dependence of $\mathcal{Z}$ is omitted as an abbreviation. $\mathcal{W}_i$ represents the number of trials and errors required to find an acceptable hash, $\mathcal{H} < \mathcal{T}_i$. Therefore, $\mathcal{W}_i$ quantifies the mining work, which goes up as the hash-hardness grows. The Chainwork is for practical purposes monotonic in the length of the chain. The node with the most Chainwork defines a unique path from the genesis node called the strongest chain. The total Chainwork is therefore defined as
$\mathcal{C} = \sum_{i} \mathcal{W}_i$.    (3)
Chainwork is calculated by the stakeholders for all existing chains, and the chain with the largest Chainwork is selected for adding the next block; the rest are discarded. The miners responsible for the discarded blocks do not receive rewards, while the miner who successfully extends the longest chain is rewarded with both block rewards and transaction fees from the new block. Transactions from the discarded branch are returned to the memory pool (mempool), where they await inclusion in a future block. This mechanism ensures that consensus is maintained across the network, although it may introduce temporary delays in transaction confirmation. On a strongest chain, blocks closer to the genesis block are increasingly likely to remain unchanged as broadcasts continue. If we define a small cut-off in probability, we might say blocks become (for practical purposes) immutable, and define the transaction delay accordingly. In the case of Bitcoin it is only a few blocks, assuming good network synchronization. Miners are incentivized to maintain synchronization, to minimize the risk of pursuing wasteful computations, so that forks are merely a rare inconvenience and a transaction delay of 2 or 3 blocks is typical. Forks may also happen due to software updates and changes of rules, which may lead to temporary (soft-fork) or permanent (hard-fork) splitting of the blockchain.
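As an illustration of the strongest-chain rule, the following sketch accumulates work along each path of a toy block tree and picks the tip with the largest Chainwork (the data structures are hypothetical, not Bitcoin's):

```python
def path_work(tip, blocks):
    """Sum per-block work along the unique path from tip back to the genesis block.
    blocks maps block id -> (parent id or None, per-block work)."""
    total, node = 0, tip
    while node is not None:
        parent, work = blocks[node]
        total += work
        node = parent
    return total

def strongest_tip(blocks):
    """The tip whose path to genesis carries the most Chainwork defines the strongest chain."""
    return max(blocks, key=lambda b: path_work(b, blocks))

# Toy tree: genesis -> a -> b competes with the fork genesis -> c.
blocks = {"genesis": (None, 0), "a": ("genesis", 2**20),
          "b": ("a", 2**20), "c": ("genesis", 2**20)}
print(strongest_tip(blocks))   # "b": two blocks of work beat the single-block fork
```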
We highlight three features required for viability of a blockchain: 1. The number of blocks in the strongest chain relative to the number of honestly-mined blocks broadcast should be a large finite fraction (the efficiency). 2. A stakeholder should have confidence that blocks buried to some small finite depth on the strongest chain are immutable with high probability. The ratio depth/efficiency dictates the expected number of honestly-mined blocks that have to be broadcast before the most recently broadcast transactions can be considered reliably processed. 3. Consensus mechanisms should protect against disruptions, and prevent rewards from being obtained (on average) by incomplete or fraudulent work. These are necessary requirements for guaranteeing confidence, and maintaining an incentive to mine according to protocol rules.
III Classical Hashing
Hash algorithms are widely used for various purposes, including verifying data integrity, securely storing passwords (by hashing them instead of storing them in plain text), creating hash tables in data structures, and enabling cryptocurrencies. These algorithms are a fundamental tool in secure communications. It should be clear by now that cryptographic hashing plays a crucial role in blockchain technology. In essence, hashing is the deterministic process of transforming a message or data of any length into a unique fixed-size bit string, known as the hash. The primary distinction between cryptographic hashing and encryption is that hashing is a one-way transformation designed to generate a unique, irreversible output, whereas encryption is a two-way process intended to securely encode data while allowing the original message to be recovered with a decryption key.
Cryptographic hashing has several important properties that make it suitable for various applications, including the following:
1. Fixed output size: The hash function should generate a fixed-size output, regardless of the size of the input.
2. Collision resistance: It should be very unlikely that two different inputs produce the same hash value.
3. Avalanche effect: A small change in the input data should result in a significantly different hash value (similar inputs should not produce similar hash values).
4. Uniform distribution: The hash values should be uniformly distributed across the output space to avoid clustering, which can lead to collisions.
5. Pre-image resistance: It should be computationally infeasible to infer anything about the original input data from the hash. Given an input, it should be infeasible to find a second input yielding the same hash.
6. Deterministic: For the same input, a hash function must always produce the same output.
Commonly used hash algorithms include MD5 and the SHA-1, SHA-2, and SHA-3 families, the latter standardized by the National Institute of Standards and Technology (NIST) [41]. Among these, SHA-256 (Secure Hash Algorithm 256-bit) is the standard hash function used in Bitcoin and many other blockchain applications. It works by processing input data through a series of mathematical operations, including bitwise logical operations, modular arithmetic, and compression functions. The input message is first padded to a specific length and divided into 512-bit blocks. Each block is then processed through 64 rounds of mathematical transformations, which involve constants and logical functions, resulting in a fixed 256-bit hash output. This hash is unique to the input data and ensures that even a tiny change to the input produces a drastically different hash, making it highly secure and resistant to collision attacks. Beyond SHA-256, the SHA-2 family includes variants like SHA-384 and SHA-512, which offer larger output sizes for even greater security, and the newer SHA-3 family provides enhanced resistance to potential cryptographic attacks.
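The avalanche effect of SHA-256 is easy to observe directly; the snippet below (purely illustrative) hashes two messages differing by a single character and counts how many of the 256 output bits differ:

```python
import hashlib

h1 = hashlib.sha256(b"proof of quantum work").hexdigest()
h2 = hashlib.sha256(b"proof of quantum work!").hexdigest()  # one extra character
diff_bits = bin(int(h1, 16) ^ int(h2, 16)).count("1")
print(h1)
print(h2)
print(diff_bits)   # typically close to 128, i.e., about half of the 256 bits flip
```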
IV Quantum Hashing

Various quantum hashing algorithms have been proposed, which can generally be categorized into two types. The first type produces a quantum state as the output, requiring communication and authentication to be quantum mechanical [30, 31, 32, 33, 34, 35, 36, 37]. The second type leverages quantum mechanics to generate a classical hash value, which can then be communicated and processed using classical means [38, 39, 40]. In this paper, we focus on the latter.
The main challenge in quantum hashing is that quantum mechanics is inherently probabilistic, while standard hash functions are deterministic. However, although the outcome of any single quantum measurement is probabilistic, the probability distribution is a deterministic function of the system parameters. Similarly, all expectation values derived from the distribution are deterministic functions of these parameters. This allows a quantum hash function to be uniquely defined based on a quantum probability distribution and its corresponding expectation values. In practice, however, accessing this probability distribution is possible only through sampling, so that accuracy is limited. This process is inevitably subject to inaccuracies due to sampling and hardware-induced errors. These factors can reduce the precision and reliability of the quantum hash function.
A quantum hash function must satisfy the first five classical requirements of hash functions, outlined in the previous section. The final requirement, determinism, cannot be satisfied and must be replaced with two new quantum-specific requirements:
6. Probabilistic: For the same input, a quantum hash function has a nonzero likelihood of producing different outputs.
7. Spoof resistance: The hash must be infeasible to replicate using classical computers with realistic resources.
The last requirement is essential when the hash is used in a blockchain with a PoQ-based consensus mechanism. It ensures that only quantum computers can perform mining, thereby preventing energy-intensive classical mining. The probabilistic nature of quantum hash values must be carefully considered in the design and application of hash functions, particularly in systems like blockchain, where reliable behavior is essential. This issue, along with potential solutions, will be discussed in greater detail in subsequent sections. Since existing standard classical hashes already fulfill the first five classical requirements, by carefully combining a classical hash function with a quantum operation, the focus can shift to achieving the last requirement.
We start by introducing the general concept without delving into specific implementation details. Let $\mathcal{M}$ represent a message of arbitrary length. (For clarity and consistency, we use calligraphic symbols for blockchain-related quantities, while standard symbols are reserved for all other variables, including those characterizing the quantum evolution.) A hash function is defined as a unique transformation of $\mathcal{M}$ into a hash value $\mathcal{H}$, with a fixed length of $\mathcal{N}$ bits. This transformation is performed in four steps (see Fig. 2 for visualization):
i. Preprocessing: Transform message $\mathcal{M}$ into a set of random parameters $\theta$:
$\mathcal{M} \rightarrow \theta = \{\theta_1, \theta_2, \ldots\}$.    (4)
ii. Unitary evolution: Use $\theta$ to define a unitary evolution $U(\theta)$ for an $n$-qubit quantum system with initial state $|\psi_0\rangle$ such that:
$|\psi(\theta)\rangle = U(\theta)\,|\psi_0\rangle$,    (5)
where $|\psi(\theta)\rangle$ is the final state with its corresponding pure-state density matrix $\rho = |\psi(\theta)\rangle\langle\psi(\theta)|$. The final state may be described in a compressed form by a set of expected (measured) observables.
iii. Postprocessing: Using samples obtained from multiple measurements, compute a vector $\mathbf{w}$ consisting of $\mathcal{N}$ observables (witnesses) $w_a$.
iv. Digitalization: Discretize witnesses into binary numbers, e.g.,
$b_a = \Theta(w_a - w_a^{\mathrm{th}})$,    (6)
where $\Theta$ is the Heaviside step function and $w_a^{\mathrm{th}}$ is a threshold selected depending on the definition of $w_a$. Hash $\mathcal{H}$ is a sequence of the binaries:
$\mathcal{H} = b_1 b_2 \cdots b_{\mathcal{N}}$.    (7)
In the preprocessing step, the message is transformed into a set of parameters $\theta$ that govern the quantum operation. These parameters may include Hamiltonian variables, evolution times, a circuit, or other relevant quantities. If the size of $\theta$ remains fixed regardless of the length of $\mathcal{M}$, the preprocessing step effectively acts as a classical hash. Steps (iii)-(iv) can also be parameterized as a cryptographic function of $\theta$ and the message. Standard cryptographic hashing algorithms can be employed at this stage to ensure, with care, that the first five criteria outlined in the previous section are met. This can be achieved by first hashing the message using a classical algorithm, such as SHA-256, and then using the resulting hash as a seed for a pseudo-random number generator to generate $\theta$ and other hashing parameters, preventing reversibility. This can preserve the essential properties of the classical hash in $\theta$, which in turn defines the quantum evolution in the next step.
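A minimal sketch of this preprocessing chain is shown below, assuming for illustration that the parameters $\theta$ are random $\pm 1$ couplings; the function and variable names are ours, not the authors' implementation:

```python
import hashlib
import numpy as np

def preprocess(message: bytes, nonce: int, n_params: int) -> np.ndarray:
    """Classically hash (message, nonce), then use the digest to seed a PRNG
    that derives the parameters theta defining the quantum evolution."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    seed = int.from_bytes(digest[:8], "big")            # 64-bit seed from the classical hash
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=n_params)       # e.g., random +/-1 couplings

theta = preprocess(b"block transactions", nonce=42, n_params=300)
```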
The quantum operation must be sufficiently complex to satisfy the seventh requirement, ensuring that the hash cannot be spoofed by classical computers with feasible resources. Such complexity can be achieved through a continuous-time evolution, a sequence of quantum gate operations, or a combination of digital and analog operations. Information from the resulting density matrix is extracted through measurements and summarized in a vector of witnesses $\mathbf{w}$. It should be noted that some classical properties, such as collision resistance and the avalanche effect, do not necessarily carry over automatically. For example, if the quantum evolution maps all $\theta$ to the same hash value, these properties would be lost. One should therefore be careful in the design of the quantum evolution and the witness vector. The definition of witnesses and the quality of the resulting hash ultimately depend on the quantum hardware's capability to execute complex quantum evolutions and its measurement capabilities, as we discuss next.
The 4-stage process outlined includes both classical and quantum computation. Although the classical part likely includes non-trivial operations such as cryptographic hashing, we assume that the quantum computation is significantly more costly (and time-consuming) than the sum of classical operations, which is true in our experimental implementation.
IV.1 Single-Basis Measurement
If hardware limitations constrain qubit measurement to a single basis, which we herein refer to as the $z$-basis (associated with the Pauli matrices $\sigma^z_i$), the only accessible information is the diagonal elements of the density matrix in this basis. This corresponds to the probability distribution:
$P(\mathbf{s}) = \langle \mathbf{s} | \rho | \mathbf{s} \rangle$.    (8)
The postprocessing step then aims to convert $P(\mathbf{s})$ into a unique bit string of predetermined length $\mathcal{N}$, ensuring robustness to uncertainty in the experiment whilst capturing non-trivial statistical properties. In other words, the resulting quantum hash is a fixed-length bit string that uniquely identifies the probability distribution $P(\mathbf{s})$. A complete characterization of $P(\mathbf{s})$ for a system with $n$ qubits requires $2^n - 1$ real parameters. This characterization can be achieved by determining the probability of each state in the computational basis or by calculating all expectation values of single- and multi-qubit operators up to order $n$, i.e., $\langle \sigma^z_i \rangle$, $\langle \sigma^z_i \sigma^z_j \rangle$, $\langle \sigma^z_i \sigma^z_j \sigma^z_k \rangle$, etc. In practice, for quantum systems with local interactions, it is often sufficient to consider a limited number of dominant $k$-local expectations within the correlation length to characterize the distribution with acceptable accuracy. This simplification significantly reduces the complexity of the problem, particularly when the primary goal is to ensure collision resistance in the resulting quantum hash.
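For instance, the dominant two-point expectations can be estimated directly from single-basis samples, as in this short sketch (the sample data here are random placeholders):

```python
import numpy as np

def two_point_correlations(samples: np.ndarray, edges: list[tuple[int, int]]) -> np.ndarray:
    """Estimate <sigma^z_i sigma^z_j> for each pair in edges from +/-1 samples
    of shape (num_samples, num_qubits)."""
    return np.array([np.mean(samples[:, i] * samples[:, j]) for i, j in edges])

# Placeholder: 1000 random +/-1 samples over 4 qubits, correlators on a ring.
samples = np.random.choice([-1, 1], size=(1000, 4))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
f = two_point_correlations(samples, edges)   # feature vector of expectation values
```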
Let $\mathbf{f}$ represent a vector of length $L$ of the dominant expectation values
$\mathbf{f} = \big(\langle \sigma^z_i \rangle, \langle \sigma^z_i \sigma^z_j \rangle, \ldots \big)$.    (9)
Each element of the feature vector $\mathbf{f}$ is a real number in $[-1, 1]$. To generate an $\mathcal{N}$-bit hash we create witnesses as linear combinations of the features:
$\mathbf{w} = R\, \mathbf{f}$,    (10)
where $R$ is an $\mathcal{N} \times L$ matrix with elements $R_{a\ell}$ such that $w_a = \sum_\ell R_{a\ell} f_\ell$. We turn each element of the witness vector into a binary number using (6). Each equation $w_a = w_a^{\mathrm{th}}$ defines a hyperplane in the vector space made of $\mathbf{f}$'s, and each hash bit indicates on which side of the hyperplane the observed $\mathbf{f}$ lies. The threshold value $w_a^{\mathrm{th}}$ can be taken to be zero or as some simple function of the experimental parameters. Robustness of the hash bits towards errors and spoofing depends on the choice of the hyperplanes (see Appendix C.5). The signs on hyperplane projections can be chosen uniformly at random as a function of the message (as in our proof of concept) to guarantee a uniform distribution of bit strings that is not a reversible function of other experimental parameters, providing pre-image protection.
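A sketch of this random-projection digitization, assuming a zero threshold and a message-derived seed (names illustrative), is:

```python
import numpy as np

def digitize(f: np.ndarray, n_bits: int, seed: int) -> np.ndarray:
    """Project the feature vector f onto n_bits random hyperplanes and record
    on which side of each hyperplane the observed f lies."""
    rng = np.random.default_rng(seed)                   # seed derived from the message
    R = rng.normal(0.0, 1.0, size=(n_bits, f.size))     # random hyperplane normals
    w = R @ f                                           # witness vector
    return (w > 0.0).astype(np.uint8)                   # Heaviside step with threshold 0

f = np.array([0.31, -0.52, 0.87, -0.11])
hash_bits = digitize(f, n_bits=16, seed=12345)
```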
When qubit readout is performed in a single basis, it is, in principle, possible to identify a classical system that can replicate the same distribution without entanglement. While finding such a classical system may be practically infeasible due to quantum supremacy, its theoretical existence cannot be ruled out. Indeed, any proof of entanglement, whether through an entanglement measure or a witness, requires access to information beyond single-basis measurements of the quantum states. This additional information could be obtained by performing measurements in multiple bases or by examining the system’s response to perturbations (e.g., susceptibility measurements). In the absence of more complex measurement, one may resort to susceptibility measurements to protect further against classical spoofing.
IV.2 Multi-Basis Measurement
In some quantum systems, it is possible to perform measurements in multiple bases. This is feasible, for example, when the system allows single-qubit rotations (whether Clifford or non-Clifford gates) prior to measurement. Both digital and digital-analog QPUs can potentially enable such capabilities. Measuring in multiple bases allows access to off-diagonal elements of the density matrix, $\rho_{\mathbf{s}\mathbf{s}'}$. Given their exponential growth in number, accessing all off-diagonal elements is both impractical and unnecessary for most applications. Therefore, rather than attempting to recover all elements of the density matrix, one can measure quantities accessible through sampling by performing measurements in multiple bases.
The easiest way of using multi-basis measurement is by randomizing the measurement basis. The basis in which to measure the qubits can be determined by the message through the QPU parameters $\theta$. This way random basis rotations are considered part of the unitary evolution $U(\theta)$. While this would make classical simulation very challenging, it is still based on a single-basis measurement of the density matrix and the previous concern persists.
A better option is to measure the same state $\rho$ in different bases. This allows calculating entanglement witnesses as components of the witness vector. An entanglement witness is a measurable quantity, $w_a$, such that $w_a$ exceeding a threshold value, $w_a^{\mathrm{th}}$, necessarily implies the presence of entanglement, while $w_a \le w_a^{\mathrm{th}}$ does not guarantee the absence of entanglement. We can design a quantum hash such that each $w_a$ in Eq. (6) represents an entanglement witness, and each corresponding $w_a^{\mathrm{th}}$ acts as a boundary, beyond which entanglement is assured.
As an illustrative example, we consider using Bell's inequality violations for pairs of qubits as entanglement witnesses. In an appropriate experimental setup, a loophole-free violation of Bell's inequality indicates that the experimental results cannot be explained by any local hidden-variable theory. However, this is not our objective. Instead, we aim to use the inequality as an entanglement witness. Specifically, we employ an experimentally feasible variation of Bell's inequality, known as the Clauser-Horne-Shimony-Holt (CHSH) inequality. Additional details are provided in Appendix B.
Let us consider a system in which measurements can be performed in two Pauli bases, $x$ and $z$. For any two qubits, $i$ and $j$, we define a modified Bell operator as (see also Appendix B):
$\hat{B}_{ij} = \sigma^x_i \sigma^x_j + \sigma^x_i \sigma^z_j + \sigma^z_i \sigma^x_j - \sigma^z_i \sigma^z_j$.    (11)
Based on a purely statistical argument, it can be shown that for any local hidden-variable theory, the expectation value of the Bell operator satisfies
$|\langle \hat{B}_{ij} \rangle| \le 2$,    (12)
which is known as the CHSH inequality. When Pauli matrices are replaced by classical spin components, we get an even smaller bound (see Appendix B). In contrast, quantum mechanics allows $\langle \hat{B}_{ij} \rangle$ to reach values as large as $2\sqrt{2}$. This is possible when the pair is strongly correlated in both bases simultaneously, which is classically counterintuitive, as it means the spins are oriented in both the $x$ and $z$ directions at the same time. Quantum mechanically it is possible because of quantum contextuality, i.e., dependence of the outcome on the measurement setup. It is clear that violation of the CHSH inequality is a sufficient but not necessary condition for quantum mechanics, and more explicitly for entanglement between the qubit pair. Therefore, the CHSH inequality is an entanglement witness.
The CHSH inequality can also be tested in a multi-qubit system and therefore can be used as a basis for hash generation. Let us define $w_a = \langle \hat{B}_{ij} \rangle$ in Eq. (6) with a threshold value $w_a^{\mathrm{th}} = 2$. Here, $a$ represents a pair of qubits $(i,j)$ in the multi-qubit system. The hash bit $b_a$ is then equal to one if the CHSH inequality for the corresponding pair is violated, and zero otherwise. It has been shown that in a multi-qubit system the CHSH expectation values must satisfy a trade-off relation [45] (see Appendix B):
(13)
It is therefore impossible for all pairs to violate the CHSH inequality simultaneously. In other words, if some of the inequalities are violated, there must be others that are not.
To use the CHSH inequality for hash generation, we must first identify pairs of qubits for which the likelihood of violating the CHSH inequality is maximized. Hash generation then reduces to executing a quantum evolution parameterized by $\theta$ and determining which pairs from the predetermined set of qubit pairs violate the CHSH inequality. Each hash bit is assigned a value of one or zero, depending on whether or not the CHSH inequality is violated for the corresponding qubit pair. No classical model can reliably predict these outcomes, making a quantum computer essential for calculating the hash value. This approach ensures compliance with the seventh requirement mentioned earlier: spoof resistance.
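A single CHSH-based hash bit could then be computed from four measured two-qubit correlators, as in this sketch (the correlator values are placeholders near the maximal quantum violation; the sign pattern corresponds to measuring both qubits in the $x$ and $z$ bases):

```python
def chsh_bit(xx: float, xz: float, zx: float, zz: float) -> int:
    """Hash bit = 1 if the CHSH combination of measured two-qubit correlators
    exceeds the classical bound of 2, else 0."""
    bell = xx + xz + zx - zz            # <B_ij> assembled from x/z-basis correlators
    return int(abs(bell) > 2.0)         # violation => the entanglement witness fires

# Placeholder values close to the maximal quantum violation (2*sqrt(2) ~ 2.83).
print(chsh_bit(xx=0.7, xz=0.7, zx=0.7, zz=-0.7))   # |2.8| > 2 -> bit = 1
```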
IV.3 Shadow Tomography
If the QPU can perform multi-qubit unitary operations before measurement, shadow tomography can be utilized for hash generation. First introduced by Aaronson [42], shadow tomography provides an efficient method for estimating quantum observables using a limited number of measurements. Unlike full quantum state tomography, which requires an exponentially large number of measurements to reconstruct the full density matrix, shadow tomography focuses on extracting specific properties of the state with significantly fewer samples. A more practical variant was later introduced by Huang, Kueng, and Preskill [43]. In their approach randomized measurements are used to generate compact representations, known as classical shadows ($\hat{\rho}_m$), of the full density matrix ($\rho$) that can be generated and stored efficiently. The expectation value of any observable $O$ can then be estimated by averaging over all classical shadows:
$\langle O \rangle \approx \frac{1}{M} \sum_{m=1}^{M} \mathrm{Tr}\!\left( O\, \hat{\rho}_m \right)$,    (14)
where $M$ is the number of measurement rounds. The number of measurements required to estimate $K$ observables to fixed accuracy scales only logarithmically with $K$, making it exponentially more efficient than full state tomography.
To generate a classical shadow, we apply a randomly chosen unitary transformation $U_m$ from a known ensemble (e.g., the Clifford group): $\rho \rightarrow U_m \rho U_m^\dagger$. We then perform a measurement in the computational basis to obtain a bitstring outcome $|b_m\rangle$. A classical shadow of $\rho$ corresponding to the measurement outcome is calculated as
$\hat{\rho}_m = \mathcal{E}^{-1}\!\left( U_m^\dagger |b_m\rangle\langle b_m| U_m \right)$,    (15)
where $\mathcal{E}$ is the shadow tomography measurement channel, defined as:
$\mathcal{E}(\rho) = \mathbb{E}\!\left[\, U^\dagger |b\rangle\langle b| U \,\right]$.    (16)
The expectation is over all unitaries $U$ and measured outcomes $b$. The inverse channel $\mathcal{E}^{-1}$ depends on the chosen unitary ensemble. For example, for Pauli measurements, it has a simple analytical form (see Ref. [43] for details).
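For random single-qubit Pauli measurements, each snapshot takes the product form $\hat{\rho}_m = \bigotimes_j \big(3\, U_j^\dagger |b_j\rangle\langle b_j| U_j - \mathbb{1}\big)$ [43]; the sketch below builds one such snapshot (an illustration of the published formula, not the authors' code):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # rotates X eigenstates to the computational basis
S_DAG = np.array([[1, 0], [0, -1j]])
BASES = {"X": H, "Y": H @ S_DAG, "Z": I2}             # unitary applied before the z measurement

def pauli_shadow(bases: list[str], outcomes: list[int]) -> np.ndarray:
    """One classical-shadow snapshot from per-qubit measurement bases and 0/1
    outcomes, using the single-qubit inverse channel 3(.) - I."""
    rho = np.array([[1.0 + 0j]])
    for basis, b in zip(bases, outcomes):
        U = BASES[basis]
        ket = U.conj().T @ np.eye(2)[:, [b]]           # U^dagger |b>
        local = 3 * (ket @ ket.conj().T) - I2          # inverse of the Pauli measurement channel
        rho = np.kron(rho, local)
    return rho

snapshot = pauli_shadow(["X", "Z"], [0, 1])            # 4x4 snapshot for two qubits
```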
To use shadow tomography for hashing, one can choose observables $O_a$ from which witnesses $w_a$ are extracted via shadow tomography:
$w_a = \frac{1}{M} \sum_{m=1}^{M} \mathrm{Tr}\!\left( O_a\, \hat{\rho}_m \right)$.    (17)
Shadow tomography enables the estimation of expectation values for complex observables $O_a$, which cannot be directly measured or reliably extracted from simple measurement protocols within a feasible number of repetitions. One can, for example, choose $O_a$ such that $w_a$ becomes an entanglement witness. For example, let $P$ represent a pure-state projection such that
$\mathrm{Tr}\!\left( P \rho_s \right) \le w_a^{\mathrm{th}}$,    (18)
where $\rho_s$ is any separable density matrix. By choosing $O_a = P$, the extracted $w_a$ defined in (17) serves as an entanglement witness in Eq. (6), facilitating hash bit generation.
Hashing based on shadow tomography will likely require a fault-tolerant gate-based quantum computer. However, digital-analog quantum computers with sufficient flexibility may achieve adequate accuracy without quantum error correction. The details of the unitary evolution leading to $\rho$ and the random unitaries applied before measurement would depend on the specific QPU architecture and the available quantum operations.
V Quantum Blockchain with Probabilistic Hashing
As previously emphasized, an inherent feature of any quantum mechanical hash generation is its probabilistic nature. Therefore, it is crucial to ensure that the blockchain remains reliable despite this uncertainty. Two primary sources of error can affect the estimation of observables used for hash generation: sampling error and hardware inaccuracies. Sampling error can be reduced by increasing the number of samples, while systematic hardware inaccuracies pose less of a problem if they remain consistent across both the miner and validator quantum devices. However, random errors and variations between different QPUs can introduce discrepancies, potentially leading to disagreements between miners and validators. To address this issue, we introduce two key modifications to the standard blockchain algorithm: probabilistic validation and a revised definition of the strongest chain.
V.1 Probabilistic Validation
Let us assume that the experimentally extracted witness $w_a$ in Eq. (6) follows a Gaussian distribution with a mean of $\bar{w}_a$ and standard deviation $\sigma_a$:
$P(w_a) = \frac{1}{\sqrt{2\pi}\, \sigma_a} \exp\!\left[ -\frac{(w_a - \bar{w}_a)^2}{2\sigma_a^2} \right]$.    (19)
In the absence of any systematic error, $\bar{w}_a$ should be the same as the theoretical value. While the exact theoretical values are inaccessible due to the computational infeasibility of solving the Schrödinger equation at large scales, $\bar{w}_a$ and $\sigma_a$ can be estimated through statistical averaging. For a single run of the experiment, however, one only has access to the extracted $w_a$. Assuming that $\bar{w}_a$ follows the same Gaussian distribution centered around $w_a$, one can estimate the probability of error in the generated hash bit $b_a$:
$p_a = \frac{1}{2}\left[ 1 - \mathrm{erf}\!\left( \frac{d_a}{\sqrt{2}} \right) \right]$,    (20)
where $\mathrm{erf}(\cdot)$ is the error function and
$d_a = \frac{|w_a - w_a^{\mathrm{th}}|}{\sigma_a}$    (21)
represents a dimensionless distance from the corresponding hyperplane. As expected, $p_a \to 0$ for $d_a \gg 1$ and $p_a \to 1/2$ for $d_a \to 0$. It is more convenient to work with confidence values defined as $c_a = 1 - p_a$, instead of error rates. Assuming that errors in witnesses are uncorrelated (this assumption does not affect the applicability of this method as a heuristic), the miner's confidence in the generated $\mathcal{N}$-bit hash value is given by
$C_m = \prod_{a=1}^{\mathcal{N}} c_a$.    (22)
If for all hash bits the distances from the hyperplanes are large, the confidence value will be close to one. In contrast, if any distance is sufficiently small, the confidence value for the corresponding bit approaches 1/2, reducing the overall confidence value. In that case, the generated hash may be rejected by the validators. The number of such uncertain bits is given by
$\mathcal{U}_m = -\log_2 C_m$.    (23)
The validator performs the same calculations to determine $w_a$ and $d_a$ and the hash bit $b'_a$, which may or may not match the proposed hash bit $b_a$. The validator can calculate their confidence value on the received hash as:
$C_v = \prod_{a=1}^{\mathcal{N}} \left[ \delta_{b_a b'_a}\, c_a + \left( 1 - \delta_{b_a b'_a} \right)\left( 1 - c_a \right) \right]$,    (24)
where $\delta$ is the Kronecker delta. If all hash bits match, the validator's confidence in the received hash is equal to their confidence in their own hash. However, if a single bit is incorrect, the confidence value decreases by a factor of $(1 - c_a)/c_a$. This would significantly lower $C_v$ if the incorrect bit is one with a high confidence value, $c_a \approx 1$. Conversely, if the incorrect hash bit has $c_a \approx 1/2$, the reduction in the overall confidence value is negligible. We can turn $C_v$ into a number of bits in a similar way as for the miner:
$\mathcal{U}_v = -\log_2 C_v$.    (25)
The interpretation of $\mathcal{U}_v$ is slightly different. If discrepancies between the hashes occur only in bits with a small distance $d_a$, then $\mathcal{U}_v$ represents the count of those uncertain bits. However, even a single bit discrepancy outside this set can cause $C_v$ to vanish, leading $\mathcal{U}_v$ to diverge, effectively rendering all hash bits uncertain.
The simplest way of implementing probabilistic validation is to set a threshold $\mathcal{U}_{\max}$ for the allowable discrepancy between the received hash and the one generated by the validator. The validator would then accept the block if $\mathcal{U}_v \le \mathcal{U}_{\max}$ and reject it otherwise. The stability of the chain can be further enhanced if miners and validators agree on the same $\mathcal{U}_{\max}$. Provided broadcasting has reasonable cost, the miner will refrain from proposing a block if $\mathcal{U}_m > \mathcal{U}_{\max}$ because it is very likely to be rejected by the validators. The advantage of using $\mathcal{U}_m$ and $\mathcal{U}_v$ instead of the Hamming distance between the hashes is that it prevents malicious actors from exploiting the relaxed requirements to generate invalid hashes. This is because only hash bits with a small distance $d_a$ are permitted to differ, and calculating $d_a$ can only be achieved accurately using a quantum computer, making it challenging for attackers to identify uncertain bits and manipulate specific hash bits confidently. It also deters a miner from generating the hash with less computational effort, such as by producing a hash with fewer leading zeros and randomly flipping the remaining bits. In practice one needs to fine-tune $\mathcal{U}_{\max}$ to achieve maximum stability of the blockchain.
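A sketch of this acceptance test is given below, assuming Gaussian witness statistics with known per-witness standard deviations and an agreed threshold; all names and numbers are illustrative:

```python
import numpy as np
from math import erf, sqrt

def validate(w_validator, w_threshold, sigma, received_bits, u_max):
    """Accept a received hash if the validator's number of uncertain bits is small.

    w_validator:   witness values measured by the validator
    w_threshold:   per-bit hyperplane thresholds
    sigma:         estimated standard deviation of each witness
    received_bits: hash bits proposed by the miner
    u_max:         maximum allowed number of uncertain bits
    """
    w = np.asarray(w_validator, dtype=float)
    th = np.asarray(w_threshold, dtype=float)
    d = np.abs(w - th) / np.asarray(sigma, dtype=float)            # distances d_a
    c = np.array([0.5 * (1.0 + erf(di / sqrt(2.0))) for di in d])  # per-bit confidence c_a
    own_bits = (w > th).astype(int)
    match = own_bits == np.asarray(received_bits)
    conf = np.where(match, c, 1.0 - c)                  # mismatched bits contribute 1 - c_a
    u_v = -np.sum(np.log2(np.clip(conf, 1e-300, 1.0)))  # uncertain bits, U_v = -log2(C_v)
    return u_v <= u_max

ok = validate(w_validator=[0.8, -0.05, 1.2], w_threshold=[0, 0, 0],
              sigma=[0.3, 0.3, 0.3], received_bits=[1, 1, 1], u_max=2.0)
```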
V.2 Redefining Chainwork
Before introducing any changes to the blockchain, let us first analyze how a standard blockchain algorithm like Bitcoin would be affected by probabilistic hash generation and validation. We assume that the miner proposes their hash without considering their confidence value $C_m$, and the validator accepts the hash only if all bits match. Even in the limit of rare bit-flipping, at least one bit may deviate from its theoretical (deterministic) value. This introduces an additional mechanism for chain splitting, which we call a “validation fork”. With small probabilities, two key events can happen: (1) False positive by the miner: A miner believes they have found a valid nonce and broadcasts a block that is rejected by all stakeholders. (2) False negative by an isolated validator: An individual stakeholder fails to reproduce a valid block accepted by all other stakeholders. In Bitcoin, the strongest chain is defined as the sequence of blocks consistent with the most valid work. Any path containing a single invalid block cannot be part of the strongest chain from the stakeholder's perspective. Under this consideration, we can examine the consequences.
Scenario (1) would lead to a soft validation fork. The miner’s proposed branch becomes obsolete as a new branch mined by the dominant computing power quickly replaces it. The isolated miner rejoins the strongest chain with high probability after two block broadcasts. This soft fork is similar to those caused by network desynchronization in Bitcoin and is not problematic.
Scenario (2), on the other hand, can cause a hard validation fork. A stakeholder who fails to validate a block is left waiting for a new fork that may emerge slowly but is unlikely to be considered the strongest chain by others. Since the stakeholder cannot join a chain with invalid transactions, they become permanently disconnected from the main chain. If this issue persists, stakeholders will eventually split into hard forks, causing the blockchain to stop functioning. To prevent this, the definition of the strongest chain must allow for the inclusion of unverified blocks.
We can modify the definition of the strongest chain to resolve the hard-validation fork problem. The strongest chain remains defined as the path with the greatest cumulative work, but we allow the block tree to contain invalid blocks by attributing negative work to them. The simplest approach is to introduce equal but negative work for invalid blocks:
$\mathcal{W}_i = \pm 2^{\mathcal{Z}}$,    (26)
where the positive (negative) sign applies to valid (invalid) blocks. If the work remains constant, one can simply assign $\pm 1$ to valid/invalid blocks; we call this basic +/-1 Chainwork. In the case of a tie, the first path evaluated is preferred, following the same rule as in Bitcoin. This adjustment effectively resolves the issue by turning a hard validation fork into a soft validation fork with a lag time. If a user falls behind by two blocks, the majority compute power of miners will produce a third block, causing the strongest chain to advance by three blocks in agreement with the majority.
A more sophisticated way to modify Bitcoin's original Chainwork calculation is to incorporate the validator's confidence in the hashes. This can easily be done by reweighting the work (per block) in proportion to the confidence:
$\mathcal{W}_i = C_v\, 2^{\mathcal{Z}} = 2^{\mathcal{Z} - \mathcal{U}_v}$.    (27)
This is equivalent to subtracting the number of uncertain bits from the required number of zeros when calculating the work. Note that in the limit of $C_v \to 1$ (i.e., $\mathcal{U}_v \to 0$), $\mathcal{W}_i$ becomes identical to that defined by the standard Bitcoin algorithm. Conversely, if the validator has no confidence in the hash, then $\mathcal{W}_i \to 0$. Assigning zero work to invalid blocks can open the network to attacks such as spamming with incomplete work. To prevent this we can again introduce a threshold $\mathcal{U}_{\max}$ for the allowed discrepancy. The validator may reject the block if $\mathcal{U}_v > \mathcal{U}_{\max}$ or assign a negative work to invalid blocks similar to (26):
$\mathcal{W}_i = \begin{cases} 2^{\mathcal{Z} - \mathcal{U}_v}, & \mathcal{U}_v \le \mathcal{U}_{\max} \\ -2^{\mathcal{Z}}, & \mathcal{U}_v > \mathcal{U}_{\max} \end{cases}$    (28)
We call this confidence-based Chainwork. In both cases, no block with zero work would be added to the blockchain. Modifications to the definition of work impact blockchain efficiency, transaction delays, and susceptibility to attacks, as discussed in Appendix C.
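A sketch of confidence-based Chainwork accumulation under these rules (illustrative only; the negative-work branch mirrors Eq. (26)):

```python
def block_work(n_zeros: int, u_v: float, u_max: float) -> float:
    """Per-block work weighted by the validator's confidence; blocks with too
    many uncertain bits contribute negative work."""
    if u_v <= u_max:
        return 2.0 ** (n_zeros - u_v)        # C_v * 2**Z  ==  2**(Z - U_v)
    return -(2.0 ** n_zeros)                 # penalize blocks that fail validation

def confidence_chainwork(path_blocks, u_max: float) -> float:
    """Chainwork of a path given (n_zeros, u_v) for each block on it."""
    return sum(block_work(z, u, u_max) for z, u in path_blocks)

# Two competing paths; the one containing a block that fails validation loses.
honest = [(8, 0.3), (8, 1.1), (8, 0.0)]
forged = [(8, 0.2), (8, 6.5), (8, 0.1)]      # middle block exceeds u_max
print(confidence_chainwork(honest, u_max=2.0) > confidence_chainwork(forged, u_max=2.0))  # True
```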
VI Experimental Implementation
Experimental realization of a quantum blockchain requires access to a quantum computer capable of solving complex problems where quantum supremacy has been demonstrated or is potentially achievable. The generated distribution must exhibit sufficient structure to allow observables with nontrivial expectation values that can be measured reproducibly across multiple QPUs. Therefore, techniques such as boson sampling or random circuit sampling, which produce nearly uniform random outcomes, are not suitable for this application (see Appendix A). Currently, Ref. [1] is the only experimental demonstration of quantum supremacy that meets these requirements. Therefore, we base our experimental demonstration on the same protocol. Since Ref. [1] has already established agreement with quantum mechanics and the infeasibility of classical simulation, our primary focus here is to demonstrate that multiple QPUs can generate and validate hashes, ensuring the blockchain operates reliably without persistent branching or network fragmentation.



The experiments are conducted on four D-Wave Advantage and Advantage2 QPUs. Each QPU comprises a network of coupled qubits that undergo continuous evolution governed by a time-dependent Hamiltonian
$H(s) = -\frac{\Gamma(s)}{2} \sum_i \sigma^x_i + \frac{\mathcal{J}(s)}{2}\, H_P$,    (29)
where the problem Hamiltonian is
$H_P = \sum_i h_i \sigma^z_i + \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j$,    (30)
and $s = t/t_a$, with $t$ being time and $t_a$ being the annealing time. The energy scales $\Gamma(s)$ and $\mathcal{J}(s)$ evolve with time in such a way that $\Gamma(0) \gg \mathcal{J}(0)$ and $\Gamma(1) \ll \mathcal{J}(1)$. These energy scales are predetermined during calibration and typically vary between different QPUs. The dimensionless parameters $h_i$ and $J_{ij}$ are programmable. The inter-qubit connectivity provided by Advantage and Advantage2 is different, but the two generations are compatible over the subgraphs used for this study.
The quantum problem we solve for hash generation is the coherent quenching of spin glasses through quantum phase transitions. We embed cubic lattices of 64 dimers (128 qubits) on each of four different internationally distributed QPUs; supplementary results for dimerized biclique models (72 qubits) are limited to Appendix D. The critical dynamics ensure that by adjusting a single parameter (annealing time) different QPUs can produce consistent results despite variations in their energy scales [1]. All qubit biases are set to zero, $h_i = 0$, and couplers $J_{ij}$ are selected as a function of the input message $\mathcal{M}$. This is done in two steps: First the message goes through a classical hash, e.g., SHA-256 in our case. Then the classical hash is used as a seed in a pseudo-random number generator that generates the experimental parameters ($J_{ij}$ only, in our proof of concept). The classical hash is also used as the cryptographic identifier of the block in the block header of the next block. The quantum hash is only used for PoQ. This way, we can choose $\mathcal{Z} = \mathcal{N}$ because nonzero bits in the quantum hash have no significance. Using a spin glass Hamiltonian for quantum evolution ensures that classical collision resistance carries over to the quantum hash. It is extremely unlikely that two spin glass Hamiltonians with entirely different randomly chosen parameters collide in the resulting high-dimensional distribution. Furthermore, the projection matrix $R$ is also randomly generated based on the message $\mathcal{M}$, making collisions even more improbable.
The feature vector $\mathbf{f}$ is constructed to include only pairwise nearest-neighbor correlations: $f_\ell = \langle \sigma^z_i \sigma^z_j \rangle$ for coupled pairs $(i,j)$. As demonstrated in [1], reproducing these correlations within the accuracy of the QPUs is computationally infeasible for classical computers. The witness vector $\mathbf{w}$ is calculated using random projection by a matrix $R$ with matrix elements selected from a normal distribution; the elements are pseudo-random functions of $\mathcal{M}$. As a miner, we run the quantum algorithm multiple times, each time with a different nonce, until we find a hash with all $\mathcal{N}$ bits equal to zero (more specifically, for confidence-based Chainwork, a hash satisfying the confidence threshold). The miner then adds the block to the blockchain with the nonce that generated it. The validator then goes through the same calculations, i.e., generates the classical hash and thereby the QPU parameters and the quantum hash, and checks whether all bits are zeros.
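Putting the pieces together, a miner's loop might look like the following sketch. The function sample_qpu_correlations stands in for programming the annealer with the couplings and returning the nearest-neighbor correlations in edge order; it and the other helper names are hypothetical, and the details differ from the authors' production code.

```python
import hashlib
import numpy as np

def quantum_hash(message: bytes, nonce: int, edges, n_bits: int, sample_qpu_correlations):
    """Classical hash -> couplings J -> QPU correlations -> random projection -> hash bits."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    J = {e: float(rng.choice([-1.0, 1.0])) for e in edges}   # spin-glass couplings from the message
    f = np.asarray(sample_qpu_correlations(J))               # <s_i s_j> per edge (the QPU call)
    R = rng.normal(size=(n_bits, len(edges)))                # message-dependent hyperplanes
    return (R @ f > 0).astype(int)

def mine(message, edges, n_bits, sample_qpu_correlations, max_tries=10_000):
    """Search for a nonce whose quantum hash is all zeros (Z = N in this sketch)."""
    for nonce in range(max_tries):
        h = quantum_hash(message, nonce, edges, n_bits, sample_qpu_correlations)
        if not h.any():
            return nonce, h
    return None
```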
Fig. 3 presents the initial-stage operation of a blockchain using four publicly accessible D-Wave Advantage and Advantage2 QPUs, involving 100 miners. The visualization captures the dynamic evolution of the blockchain, highlighting block statuses and mining activity. Blocks on the (majority) strongest chain are arranged chronologically from left to right, with the genesis block on the far left and the most recently mined block on the far right. Different block colors indicate their status within the consensus process: blue blocks (immutable) are included in all strongest chains and are universally agreed upon; orange blocks (soft forks) have been rejected by all miners and are not included in any of the strongest chains; gray and black blocks appear in some but not all strongest chains, reflecting temporary disagreement among miners. Only black blocks are actively being mined, meaning they have the potential for further branching. This illustrates how consensus emerges dynamically as blocks are proposed and either accepted or rejected by the network.
In Fig. 4, the blockchain is shown after 219 block broadcasts, with the genesis block repositioned centrally for clarity. By this stage, approximately 70% of proposed blocks have become immutable (blue), aligning well with theoretical expectations based on the resampling methodology (see Appendix D.3). The remaining blocks continue to be evaluated, with only those on the strongest chains persisting in the blockchain ledger. This simulation demonstrates the robustness of the blockchain framework, showcasing its ability to maintain stability and achieve miner consensus despite the probabilistic nature of quantum computation. We are able to use statistical properties of the Chainwork described in Section D.2 to eliminate failed mining experiments, and create the blockchain with a limited total number of QPU programmings.
Fig. 5 illustrates the mining efficiency across multiple blockchain operations. Mining efficiency is defined as the fraction of block broadcasts that successfully enter the strongest chain in the limit of many miners. For each curve the data is taken from 5 to 10 chains, each having between 512 and 2048 blocks. In Appendix D.1 we describe mining efficiency and transaction delay in greater detail, presenting additional data. The analysis compares different Chainwork definitions and post-processing methods. For the simple weighting method, which assigns equal weights to all block broadcasts, mining efficiency drops to close to 50%. The confidence-based weighting technique performs better, as expected, approaching 75% efficiency. Confidence-based Chainwork allows higher efficiency than basic +/-1 Chainwork, even after accounting for small differences in the mining rate, as outlined in Appendix D.2. To calculate confidence values we used a constant $\sigma$ extracted from experimental data, as detailed in Appendix D.
VII Conclusions
We have proposed and successfully tested a blockchain architecture in which proof of quantum work (PoQ) is performed on quantum computers. Our approach introduces multiple quantum-based hash generation methods, with varying levels of complexity depending on the capabilities of the quantum hardware. To address the inherent probabilistic nature of quantum hash generation and validation—an unavoidable property of any quantum hash algorithm—we developed techniques to ensure blockchain stability. We experimentally tested our proposed blockchain on four internationally distributed D-Wave annealing QPUs, demonstrating that despite architectural and parametric variations, these QPUs can be programmed to generate and validate each other’s hashes in a way that allowed stable blockchains to persist over thousands of broadcast blocks and hundreds of thousands of hash validations.
For this demonstration, we used the critical dynamics of 3D and biclique spin glasses at a fixed size as the computational problem for PoQ. This was motivated by its role in the first experimental demonstration of quantum supremacy in a real-world problem [1] that is not based on boson or random-circuit sampling. However, this choice does not constrain the quantum problem that can be used for PoQ. Strategies such as randomizing problem topology, dimensionality, qubit count, and annealing time can prevent classical learning algorithms from predicting expectation values based on large datasets [46]. By choosing a sufficiently complex ensemble of experiments and sampling it cryptographically, experimental outcomes can be made weakly correlated. Additionally, hash algorithms leveraging entanglement witnesses and shadow tomography can further enhance the quantum nature of the hashing process. We note that while supremacy is desirable for PoQ, it may not be strictly necessary as long as classical competition remains prohibitively expensive, making quantum approaches the only practicable ones. In fact, one may deliberately use smaller problem instances where the ground truth can be obtained (albeit with significant computational effort) to benchmark and calibrate QPUs.
A quantum blockchain could transform existing blockchain technology by significantly mitigating the substantial electricity demands of classical proof of work (PoW) mining. Electricity costs are estimated to comprise 90% to 95% of Bitcoin's total mining expenses [47]. In our PoQ approach, the dominant cost is the quantum computation itself, of which energy consumption is only a small fraction. For example, energy expenses account for just 0.1% of the total computation cost on D-Wave quantum computers [50]. Replacing classical PoW with quantum PoQ could therefore reduce energy costs by a factor of 1,000. This ratio may shift as quantum computation becomes more cost-efficient over time, but substantial energy savings are expected to persist. The transition to PoQ would also relocate mining from regions with low electricity costs to countries with advanced quantum computing infrastructure. Beyond demonstrating the first practical use of quantum supremacy in a critical domain, this work proves the feasibility of quantum computing in real-world tasks and suggests broader high-impact applications with today's technology.
Acknowledgment
We thank Brian Barch, Vladimir Vargas Calderon, Rahul Deshpande, Pau Farré, Fiona Hannington, Richard Harris, Emile Hoskinson, Trevor Lanting, Daniel Lidar, Peter Love, Chris Rich, and Murray Thom for fruitful discussions and comments on the manuscript.
References
- [1] A. King et al., “Beyond classical computation in quantum simulation,” Science (2025).
- [2] Satoshi Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” (2008). Available at: https://bitcoin.org/bitcoin.pdf.
- [3] S. Saberi, M. Kouhizadeh, J. Sarkis, L. Shen, “Blockchain technology and its relationships to sustainable supply chain management,” Int. J. Prod. Res. 57, 2117 (2019), https://doi.org/10.1080/00207543.2018.1533261.
- [4] T. T. Kuo, H. E. Kim, L. Ohno-Machado, “Blockchain distributed ledger technologies for biomedical and healthcare applications,” J. Am. Med. Inform. Assoc. 24, 1211 (2017), https://doi.org/10.1093/jamia/ocx068.
- [5] Haber, M.J., Rolls, D. (2020). Blockchain and Identity Management. In: Identity Attack Vectors. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5165-2_20.
- [6] F. Schär, “Decentralized finance: On blockchain- and smart contract-based financial markets,” Fed. Reserve Bank St. Louis Rev. 103, 153 (2021), https://doi.org/10.20955/r.103.153-74.
- [7] Z. Zheng, S. Xie, H. N. Dai, X. Chen, H. Wang, “An overview of blockchain technology: Architecture, consensus, and future trends,” in Proc. IEEE Int. Congr. Big Data, 557 (2017), https://doi.org/10.1109/BigDataCongress.2017.85.
- [8] F. Casino, T. K. Dasaklis, C. Patsakis, “A systematic literature review of blockchain-based applications: Current status, classification, and open issues,” Telemat. Inform. 36, 55 (2019), https://doi.org/10.1016/j.tele.2018.11.006.
- [9] M. Conti, E. S. Kumar, C. Lal, S. Ruj, “A survey on security and privacy issues of blockchain technology,” IEEE Commun. Surv. Tutor. 20, 2786 (2018), https://doi.org/10.1109/COMST.2018.2842460.
- [10] X. Xu, I. Weber, M. Staples, Architecture for blockchain applications (Springer, 2019), https://doi.org/10.1007/978-3-030-03035-3.
- [11] TechCrunch, “Crypto’s networked collaboration will drive Web 3.0,” (2021). Available at: https://techcrunch.com/2021/09/16/cryptos-networked-collaboration-will-drive-web-3-0/.
- [12] Digiconomist, “Bitcoin energy consumption,” (2025). Available at: https://digiconomist.net/bitcoin-energy-consumption.
- [13] Financial Times, “Can Bitcoin miners find a green way to dig for digital gold?” (2025).
- [14] D. Larimer, “Delegated proof of stake (DPoS),” BitShares Whitepaper (2014). Available at: https://bitshares.org/technology/delegated-proof-of-stake/.
- [15] M. Castro, B. Liskov, “Practical Byzantine fault tolerance,” in Proc. 3rd Symp. Oper. Syst. Des. Implement. (OSDI), 173 (1999).
- [16] John Preskill, “Quantum computing and the entanglement frontier,” arXiv:1203.5813.
- [17] W. Wang, Y. Yu, L. Du, “Quantum blockchain based on asymmetric quantum encryption and a stake vote consensus algorithm,” Sci. Rep. 12, 8086 (2022).
- [18] M. Gandhi et al., “Quantum blockchain: Trends, technologies, and future directions,” EPJ Quantum Technol. 5, 516 (2024), https://doi.org/10.1049/qtc2.12119.
- [19] A. Nag, P. Pal, S. Bhattacharyya, L. Mrsic, J. Platos, “A brief review on quantum blockchain,” In: Bhattacharyya, S., Banerjee, J.S., Köppen, M. (eds) Human-Centric Smart Computing. ICHCSC 2023. Smart Innovation, Systems and Technologies, vol 376. Springer, Singapore. https://doi.org/10.1007/978-981-99-7711-6_53.
- [20] C. Li, Y. Xu, J. Tang, W. Liu, “Quantum blockchain: A decentralized, encrypted, and distributed database,” J. Quantum Comput. 1, 49 (2019), https://doi.org/10.32604/jqc.2019.06715.
- [21] N.K. Parida, C. Jatoth, V.D. Reddy, et al. “Post-quantum distributed ledger technology: a systematic survey,” Sci. Rep. 13, 20729 (2023), https://doi.org/10.1038/s41598-023-47331-1.
- [22] Z. Yang, H. Alfauri, B. Farkiani, R. Jain, R. D. Pietro and A. Erbad, “A survey and comparison of post-quantum and quantum blockchains,” in IEEE Communications Surveys & Tutorials, 26, 967 (2024), https://doi.org/10.1109/COMST.2023.3325761.
- [23] S. Banerjee, A. Mukherjee, P. K. Panigrahi, “Quantum blockchain using weighted hypergraph states,” Phys. Rev. Res. 2, 013322 (2020), https://doi.org/10.1103/PhysRevResearch.2.013322.
- [24] K. Nilesh, P. K. Panigrahi, “Quantum blockchain based on dimensional lifting generalized Gram-Schmidt procedure,” arXiv:2110.02763 (2021).
- [25] M. Xu, X. Ren, D. Niyato, J. Kang, C. Qiu, Z. Xiong, X. Wang, V. C. M. Leung, “When quantum information technologies meet blockchain in Web 3.0,” arXiv:2211.15941 (2022).
- [26] A. Orun and F. Kurugollu, “The lower energy consumption in cryptocurrency mining processes by SHA-256 quantum circuit design used in hybrid computing domains,” arXiv:2401.10902.
- [27] A. Morvan et al., “Phase transitions in random circuit sampling,” Nature 634, 328 (2024).
- [28] Dongxin Gao et al., “Establishing a new benchmark in quantum computational advantage with 105-qubit Zuchongzhi 3.0 processor,” Phys. Rev. Lett. 134, 090601 (2025).
- [29] Boixo, S., Isakov, S.V., Smelyanskiy, V.N. et al. “Characterizing quantum supremacy in near-term devices.” Nature Phys 14, 595–600 (2018).
- [30] F. Ablayev, M. Ablayev, “Quantum hashing via classical ε-universal hashing constructions,” arXiv:1404.1503 (2014).
- [31] S. Banerjee, H. Meena, S. Tripathy, P. K. Panigrahi, “Fully quantum hash function,” arXiv:2408.03672 (2024).
- [32] D. Li, J. Zhang, F.-Z. Guo, W. Huang, Q.-Y. Wen, H. Chen, “Discrete-time interacting quantum walks and quantum hash schemes,” Quantum Inf. Process. 12, 1501 (2013), https://doi.org/10.1007/s11128-012-0421-8.
- [33] M. Ziatdinov, “From graphs to keyed quantum hash functions,” arXiv:1606.00256 (2016).
- [34] F. Ablayev, A. Vasiliev, “Quantum hashing,” arXiv:1310.4922.
- [35] S. Banerjee, H. Meena, S. Tripathy, P. K. Panigrahi, “Fully quantum hash function,” arXiv:2408.03672.
- [36] A.V. Vasiliev, “Binary quantum hashing,” Russ. Math. 60, 61 (2016). https://doi.org/10.3103/S1066369X16090073
- [37] Yang, YG., Bi, JL., Li, D. et al. “Hash function based on quantum walks,” Int. J. Theor. Phys. 58, 1861 (2019). https://doi.org/10.1007/s10773-019-04081-z
- [38] L. Behera, G. Paul, “Quantum to classical one-way function and its applications in quantum money authentication,” arXiv:1801.01910.
- [39] Yang, YG., Xu, P., Yang, R. et al. “Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption,’, Sci. Rep. 6, 19788 (2016). https://doi.org/10.1038/srep19788.
- [40] K. P. Kalinin, N. G. Berloff, “Blockchain platform with proof-of-work based on analog Hamiltonian optimisers,” arXiv:1802.10091.
- [41] NIST, “Secure hash standard (SHS),” FIPS PUB 180-4 (2015). Available at: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf.
- [42] S. Aaronson, “Shadow tomography of quantum states,” in Proc. 50th Annu. ACM SIGACT Symp. Theory Comput. (STOC), 325 (2018), https://doi.org/10.1145/3188745.3188802.
- [43] H. Y. Huang, R. Kueng, J. Preskill, “Predicting many properties of a quantum system from very few measurements,” Nat. Phys. 16, 1050 (2020), https://doi.org/10.1038/s41567-020-0932-7.
- [44] J. F. Clauser, M. A. Horne, A. Shimony, R. A. Holt, “Proposed experiment to test local hidden-variable theories,” Phys. Rev. Lett. 23, 880 (1969).
- [45] Hui-Hui Qin, Shao-Ming Fei, and Xianqing Li-Jost, “Trade-off relations of Bell violations among pairwise qubit systems,” Phys. Rev. A 92, 062339 (2015).
- [46] H.Y. Huang, M. Broughton, M. Mohseni et al., “Power of data in quantum machine learning,” Nat. Commun. 12, 2631 (2021). https://doi.org/10.1038/s41467-021-22539-9
- [47] R. Sharma, “Do Bitcoin mining energy costs influence its price?,” Investopedia. Available at: https://www.investopedia.com/news/do-bitcoin-mining-energy-costs-influence-its-price/.
- [48] Bitcoin Wiki, “Weaknesses.” Available at: https://en.bitcoin.it/wiki/Weaknesses.
- [49] D. Vats, C. Knudson, “Revisiting the Gelman-Rubin diagnostic,” arXiv:1812.09384.
- [50] D-Wave white paper, “Computational Power Consumption and Speedup.”
Appendix A Cross-Validation Limitations in Boson and Random-Circuit Sampling
To operate within a quantum blockchain, QPUs must cross-validate each other’s outcomes. Unstructured distributions yielding states with close to uniform random probability, such as those obtained from boson or random-circuit sampling, are unsuitable for this purpose: the expectation values of all foreseeable k-local operators remain close to zero, making them ineffective in the construction of witnesses. The only observable with a nontrivial expectation is $|\psi\rangle\langle\psi|$, where $|\psi\rangle$ is a ground-truth wave function. For this reason the estimator commonly used to establish supremacy is the cross entropy [29]. Given a distribution allowing fair and efficient sampling in the computational basis ($P$, typically a QPU), and a well-resolved ground-truth distribution in the same basis ($Q$, used for verification), the cross entropy is quantified as

$S(P,Q) = -\sum_{x} P(x)\,\log Q(x)$   (31)
$Q$ is typically computed using tensor-network techniques at small scales. This allows the sum to be evaluated by Monte Carlo sampling (of $P$, i.e. use of a sample set). Calculating $Q$ by classical methods becomes intractable at large system sizes (thence supremacy). The alternative is then to determine $Q$ from a QPU. The XEB is small in experimental demonstrations, even using an ideal (tensor-network) distribution as $Q$ [27, 28]. Nevertheless, we might hope that $Q$ could be constructed for the purpose of validating quantum work.
Unfortunately, the properties of boson-sampling and random-circuit-sampled distributions mean that the summation of (31) cannot be executed in a scalable way, even by Monte Carlo methods. Since sampled state probabilities are all close to $2^{-N}$ (for $N$ spins), establishing $Q$ with sufficient resolution requires a number of samples exponential in $N$. More generally, for any pair of high-entropy distributions amenable to fair sampling (but not exact integration), impractically large bias in estimation of distributional distance can be an issue [29, 1]. Entropy is especially large in boson and random-circuit sampling.
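For concreteness, a minimal sketch of the Monte Carlo estimator of (31), with a toy stand-in distribution in place of a tensor-network ground truth; names and sizes are illustrative:

```python
import numpy as np

def cross_entropy_mc(samples, log_q):
    """Monte Carlo estimate of S(P, Q) = -sum_x P(x) log Q(x).

    samples: states x drawn fairly from P (e.g. a QPU sample set).
    log_q:   callable returning log Q(x) for a single state
             (e.g. a ground-truth evaluation at small scale).
    """
    logs = np.array([log_q(x) for x in samples])
    return -logs.mean(), logs.std(ddof=1) / np.sqrt(len(logs))

# Toy illustration with a known 3-bit distribution standing in for Q:
rng = np.random.default_rng(0)
n = 3
q = rng.random(2 ** n); q /= q.sum()               # stand-in ground truth Q
p_samples = rng.integers(0, 2 ** n, size=10_000)   # stand-in "QPU" samples from P
est, err = cross_entropy_mc(p_samples, lambda x: np.log(q[x]))
print(f"S(P,Q) ~ {est:.3f} +/- {err:.3f}")
```

At large system sizes the same estimator requires a classically intractable evaluation of $Q(x)$, which is the point made above.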
Appendix B CHSH Inequality
In this appendix we introduce the CHSH inequality and formulate it in a way suitable for our application. Consider two observers $A$ (Alice) and $B$ (Bob), both allowed to perform measurements in two orthogonal bases “1” and “2” via operators $A_1, A_2$ and $B_1, B_2$. We define the Bell operator as

$\mathcal{B} = A_1 B_1 + A_1 B_2 + A_2 B_1 - A_2 B_2$   (32)
Consider a quantum state represented by a density matrix $\rho$. Measuring the Bell operator would result in an expectation value $\langle\mathcal{B}\rangle = \mathrm{Tr}[\rho\,\mathcal{B}]$. Bell’s theorem states that for any classical local hidden variable theory, we should have

$|\langle\mathcal{B}\rangle| \le 2$   (33)

whereas quantum mechanics allows

$|\langle\mathcal{B}\rangle| \le 2\sqrt{2}$   (34)
This is commonly known as the Clauser-Horne-Shimony-Holt (CHSH) inequality, an experimentally practical variation of Bell’s inequality [44]. Violation happens when $|\langle\mathcal{B}\rangle| > 2$, which indicates that the experimental results cannot be explained by any local hidden variable theory. Violation of the CHSH inequality is thus a sufficient but not necessary condition for quantum behavior. It is also evident that violation strongly depends on the choice of the $A_i$ and $B_i$ operators as well as the quantum state $\rho$. We therefore define an operator-independent quantity $\mathcal{B}(\rho)$ as a maximum over all possible operators:

$\mathcal{B}(\rho) = \max_{A_1,A_2,B_1,B_2} \mathrm{Tr}[\rho\,\mathcal{B}]$   (35)

The CHSH inequality is therefore expressed as $\mathcal{B}(\rho) \le 2$.
Consider the Bell state

$|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\!\uparrow\uparrow\rangle + |\!\downarrow\downarrow\rangle\right)$   (36)

where $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$ are eigenstates of $\sigma_z$ with eigenvalues $+1$ and $-1$, respectively. The pure-state density matrix is

$\rho = |\psi\rangle\langle\psi|$   (37)
We define operators

$A_1 = \sigma_x, \quad A_2 = \sigma_z, \quad B_1 = \tfrac{1}{\sqrt{2}}(\sigma_x + \sigma_z), \quad B_2 = \tfrac{1}{\sqrt{2}}(\sigma_x - \sigma_z)$   (38)

which means Bob’s measurement bases are rotated by 45 degrees compared to Alice’s. The Bell operator therefore becomes

$\mathcal{B} = \sqrt{2}\,(\sigma_x \otimes \sigma_x + \sigma_z \otimes \sigma_z)$   (39)

It is easy to check that $|\psi\rangle$ is an eigenfunction of $\mathcal{B}$ with eigenvalue $2\sqrt{2}$. To violate Bell’s inequality with (39), we need strong correlations in both the $x$ and $z$ directions. Each correlator can at most equal one, so the optimal value is $2\sqrt{2}$, which occurs for the Bell state. But weaker correlations can also lead to violation as long as the sum of the two correlators is greater than $\sqrt{2}$.
Classically, if we represent each spin by a 2D unit vector $(\cos\theta, \sin\theta)$, we obtain

$\langle\mathcal{B}\rangle = \sqrt{2}\,(\cos\theta_A\cos\theta_B + \sin\theta_A\sin\theta_B) = \sqrt{2}\,\cos(\theta_A - \theta_B)$   (40)

The maximum happens when $\theta_A = \theta_B$, which gives $\langle\mathcal{B}\rangle = \sqrt{2}$. This is less than the maximum possible classically based on conditional probability arguments, i.e., $2$.
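A short numerical check of (36)-(40), written as a NumPy sketch; the angle scan for the classical model is illustrative:

```python
import numpy as np

# Pauli matrices and two-qubit helpers
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
kron = np.kron

# Measurement settings of Eq. (38): Bob rotated 45 degrees w.r.t. Alice
A1, A2 = sx, sz
B1, B2 = (sx + sz) / np.sqrt(2), (sx - sz) / np.sqrt(2)

# Bell operator of Eq. (32)
bell = kron(A1, B1) + kron(A1, B2) + kron(A2, B1) - kron(A2, B2)

# Bell state (|up,up> + |down,down>)/sqrt(2), Eq. (36)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
print("quantum expectation:", psi @ bell @ psi)   # 2*sqrt(2) ~ 2.828

# Classical 2D-vector model of Eq. (40): maximum over angles is sqrt(2)
theta = np.linspace(0, 2 * np.pi, 721)
tA, tB = np.meshgrid(theta, theta)
classical = np.sqrt(2) * np.cos(tA - tB)
print("classical maximum:", classical.max())      # sqrt(2) ~ 1.414
```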
It is possible to define CHSH inequalities between pairs of qubits in a multi-qubit system. One can use (35) with $\rho_{ij}$ representing the reduced density matrix for qubits $i$ and $j$:

$\mathcal{B}_{ij} = \max_{A_1,A_2,B_1,B_2} \mathrm{Tr}[\rho_{ij}\,\mathcal{B}]$   (41)
where the maximum is over all possible measurement bases. While each CHSH inequality can be violated, there is a trade-off relation that needs to be satisfied. For a system of $N$ qubits, we should have [45]

$\sum_{i<j} \mathcal{B}_{ij}^2 \le N(N-1)$   (42)

Since the number of pairs is $N(N-1)/2$, the equality is saturated when $\mathcal{B}_{ij} = \sqrt{2}$ for all pairs. This means that it is not possible to violate all CHSH inequalities at the same time. In other words, if some of the inequalities are violated, there are always others that are not violated. This is closely related to the monogamy of entanglement, which states that if two subsystems are strongly entangled, they should have less entanglement with other subsystems.
Appendix C Weaknesses and Mitigations
Many weaknesses for PoW blockchains such as Bitcoin are known, and many mitigations have been developed [48, 7, 8, 9, 10]. We focus on several attacks that might be attempted by an adversarial agent to gain reward with incomplete work. The reward might be gained deterministically, or more often probabilistically (in expectation). Neither the attacks presented, nor the mitigations, are an exhaustive list.
An example of an attack for PoW blockchains is the 51% attack. In this attack an attacker sends Bitcoin to a service provider. After a delay (of a few block generation events) the service provider considers the payment made to be immutable, and renders the service. The attacker then creates a fork in the blockchain by mining a block that precedes the payment block, and omits their payment to the service provider in this new branch. If they are able to make this new branch strong (long) enough all stakeholders will switch to it, at which point the transaction to the service provider is no longer part of the record. The attacker is then able to spend their money again (a double spend). If an attacker controls a significant percentage of compute power it can be worth their time to attempt such an attack to negate a large payment, and if they have control of 51% of compute power they will succeed with high probability.
The 51% attack and other blockchain weaknesses apply also to our probabilistic blockchain. To mitigate for the 51% attack we require well-distributed quantum compute power; since mining power can be split by probabilistic verification, the threshold can be lower than 51% (but still substantial if mining efficiency is high enough). We focus in this appendix on attacks that are new or qualitatively changed relative to classical or non-probabilistic blockchains. Attacks are different and potentially more powerful owing to several factors including a lack of determinism, bypassing of experimental work (inference given the experimental parameters, i.e. spoofing), and changes in the relative costs of broadcasting, mining, and validation.
As described in the main text, the particular proof-of-concept experiment is premised on the notion of honest network users and is susceptible to attacks. In addition to mitigations described in the main text we explore additional procedures in this appendix. We present a set of tools to protect against no-work and partial-work attacks, but optimization of the experimental ensemble and digitalization process, essential in practice, goes beyond the scope of this publication.
C.1 No Work Attacks
We consider first the possibility that a miner might present a fraudulent block, one in which the nonce is chosen uniformly at random without conducting any experiment. In Bitcoin a random nonce can be used in a broadcast block. This convinces either all users, with probability $p$, or convinces no users. The expected reward from such a strategy is $p R - c_B - c_P$, where $c_B$ is the cost of broadcasting the block, $R$ is the expected block reward and $c_P$ the cost of block preparation. Pursuing this strategy repeatedly, the cost of block preparation is negligible (only a bit increment of the nonce is required between proposals). By contrast, the expected reward from honest mining is $p(R - c_B) - c_H$, where $c_H$ is the cost of hashing (validation). The reward in Bitcoin is large enough compared to the broadcast cost $c_B$ and hashing cost $c_H$ that mining is incentivized. Simultaneously $p R < c_B$, so random broadcasts are disincentivized.
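As a worked illustration of this accounting, a minimal sketch using the per-nonce quantities above ($p$, $R$, $c_B$, $c_H$, $c_P$); the cost model is a simplification and the numbers are purely illustrative:

```python
def expected_reward_no_work(p, R, c_broadcast, c_prep=0.0):
    """Broadcast a random nonce every time: reward R with probability p,
    always pay the broadcast (and negligible preparation) cost."""
    return p * R - c_broadcast - c_prep

def expected_reward_honest(p, R, c_broadcast, c_hash):
    """Hash (validate) every nonce locally; broadcast only the fraction p
    that succeed. Costs and reward are per nonce attempted."""
    return p * (R - c_broadcast) - c_hash

# Illustrative (made-up) numbers: tiny success probability, large reward,
# broadcast cost far above p*R, hashing cost far below p*R.
p, R, c_b, c_h = 1e-9, 1e6, 1e-1, 1e-5
print("no-work:", expected_reward_no_work(p, R, c_b))     # negative -> disincentivized
print("honest :", expected_reward_honest(p, R, c_b, c_h)) # positive -> incentivized
```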
The random nonce strategy is similar to a denial of service attack (DoS) from the perspective of validators. In DoS an attacker broadcasts in large volume to overwhelm the bandwidth or verification capabilities of the network (thereby disrupting or delaying transactions).
DoS and random-nonce mitigations effective in other mature blockchain technologies can be used (see e.g. [48]). The situation is qualitatively the same in our probabilistic blockchain, although the expected reward is reduced according to the blockchain efficiency (enhanced branching), and validation may be inhomogeneous with respect to users. Validation is also more expensive than in many other blockchains, so fewer malicious packets can be more disruptive. The inhomogeneity of validation is a problem, since mining power may be diluted by such an attack even if the attacker is not ultimately rewarded.
In our application we should also consider important practical changes: the cost of hashing $c_H$ is raised, since quantum measurements will be more expensive than single SHA-256 hashes. Similarly the broadcast cost may be relatively smaller, since the verification time scale is longer. This makes a no-work strategy more viable. Relative to Bitcoin there is a smaller gap between the honest and dishonest expected rewards. To mitigate no-work attacks a modified block structure can be effective, as described in Section C.4.
C.2 Partial work attacks
Partial work does not offer attack opportunities in Bitcoin owing to the cryptographic properties of SHA-256. A hypothetical probabilistic hash might fail in an independent and identically distributed manner, providing similar guarantees against partial work. However, our hash is to be inferred from a known experimental protocol and digitalization procedure, which in practical implementations can leave room for inference to bypass at least some quantum work.
Consider the case of a blockchain built upon quantum supremacy in the estimation of quench statistics [1], used in our proof of concept. Since signs on hyperplanes are determined as a (classical) cryptographic function of the block header, preimage resistance is assured: it is not possible to determine a block header from a desired (all zero) bit pattern. However, given a nonce (thence the protocol parameters) one can estimate correlations with increasing accuracy by a hierarchy of classical computational methods (of increasing cost). Certain experiments may allow better or worse estimators, combined with certain realizations of the hyperplanes and thresholds that allow an accumulation of signal relative to the error. In this way the sign for some (many) bits might be inferred with non-negligible (or perhaps even high) confidence.
An attack might proceed as follows. For many nonces an attacker can estimate the witnesses by classical methods (e.g. by approximate simulation, or machine learning). The results are ranked by the number of (robust) threshold-satisfying bits. The best candidate might be broadcast directly. The reward from this strategy is described as in the no-work scenario, but with an enhanced (classical) work cost and enhanced probability that the bits are verified. This attack is mitigated in part by the modified block structure of Appendix C.4. In particular, spoofing enhanced experimental data (beyond the proof-of-work bits) is impossible without quantum supremacy [1].
A second way to use classical estimators is as a filter. Rather than broadcast the most promising nonce, the nonces are ranked and subjected to the prescribed quantum experiment. Only if the quantum work validates the nonce is it broadcast. The filter allows a user to bypass experiments that are less likely to meet the Chainwork requirement. In this case, the filter success rate must grow slowly enough with the classical work for quantum experiments to be favored over the filter.
To mitigate this filter attack it is necessary that all practical estimators are weak. One simple mechanism by which to weaken classical estimators is described in Appendix C.5. The quality of estimators as a function of the classical work invested is above all a function of the experimental ensemble.
C.2.1 Partial Quantum Work Attack
The basic +/-1 and confidence-based Chainwork definitions allow for near misses with respect to complete or incomplete work. A participant might have some confidence that their result is close to the threshold, and reasonably expect that, although their own validation fails, their nonce might be validated by some significant fraction of stakeholders.
Since a finite fraction of stakeholders (thence mining power) can validate a near-threshold block, there is finite probability of such a block being incorporated in the strongest chain subject to a delay, motivating a miner to broadcast such a block. A careful balancing of block penalties and rewards is necessary to ensure partial quantum work is not a profitable strategy; this is achievable by control of transaction and block rewards and an enhanced block structure.
The quantum experimental set-up also allows room for efficiencies, which should be understood in a full application. To minimize QPU access time (costs) a miner should optimize parameters such as the number of samples and programming time. They could then more efficiently search, or filter, nonces. A miner is incentivized to prefilter nonces with cheap experiments, and then seek higher confidence (via enhanced sampling) only on candidates for broadcast. In our demonstration, by contrast, we fix QPU access time to 1 second in mining and validation.
C.3 Multi-Block Attack
The multi-block attack can be considered a generalization of the 51% attack. This involves an attacker proposing a no-work (or partial work) block and seeking to bury it under full-work blocks. By burying no-work blocks with full work blocks they may receive a reward enhanced in proportion to the block ratio.
Our definitions of Chainwork mean that for any path to be viable as a strongest chain there must be strictly more verified blocks than unverified blocks. Owing to this, the criteria necessary for this attack to succeed are similar to the standard 51% attack. In effect an attacker must be able to grow the chain more quickly than alternative non-fraudulent candidates, in order to undo the sum-work damage of their fraudulent block. This requires control of a significant fraction of community compute power assuming practical chain efficiency (concentrated honest mining power, up to small fluctuations).
C.4 Enhanced Block Structure Mitigations
An enhanced block structure shown in Fig. 6 can mitigate against no-work and partial-work attacks, as now discussed.
Block broadcasts that are verified with low probability pay a broadcasting cost. A sufficient mitigation strategy is to raise the cost of broadcasting blocks that have a low probability of validation, and to raise the bar for validation by requiring the publication of enhanced experimental information. A modified block structure can achieve this, as shown in Fig. 6. We describe some potential components:
C.4.1 Network and identity mitigations
If broadcasters are not anonymous their behavior can be tracked. A rule can be introduced to penalize or temporarily block users who consistently propose unverifiable work, as is done for DoS. Note that a transmission path or node might be penalized with limited information on the stakeholder.
C.4.2 Broadcasting with additional classical work
In order to broadcast, it can be required that a user completes a classical proof of work. The work should be enough to delay transmission and raise costs, yet negligible compared to the quantum work. If the classical work is sufficiently high, it ceases to be profitable to risk approximations in the quantum part. In the case where the classical work becomes comparable to the quantum work, we have a hybrid system. In an immature technology it may be valuable to pursue such a hybrid approach and gradually reduce the proportion of classical work as confidence is gained in the quantum operation. Quantum proof of work could of course also be hybridized with proof of stake as an alternative mitigation.
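For illustration, a minimal sketch of gating broadcasts on a small classical proof of work (SHA-256 with a leading-zero target); the difficulty parameter and names are illustrative:

```python
import hashlib

def classical_pow(block_header: bytes, difficulty_bits: int = 18) -> int:
    """Find a salt such that SHA-256(header || salt) has `difficulty_bits`
    leading zero bits. Cheap relative to the quantum work, but enough to
    throttle mass broadcasting of unvalidated blocks."""
    target = 1 << (256 - difficulty_bits)
    salt = 0
    while True:
        digest = hashlib.sha256(block_header + salt.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return salt
        salt += 1

def check_classical_pow(block_header: bytes, salt: int, difficulty_bits: int = 18) -> bool:
    digest = hashlib.sha256(block_header + salt.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

header = b"example-header-with-nonce-and-quantum-witness-bits"
salt = classical_pow(header)
assert check_classical_pow(header, salt)
```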
C.4.3 Broadcasting with a stake
A user might be required to stake (bid on) their proposal. The stake can be a finite fraction of the block reward, signed from the miner’s current wallet to a burn wallet; stakes for failed branches are forfeited to the burn wallet. In this way, any user is limited in the number of weak proposals they are able to make, and risks significant losses from doing so. Payment to a burn wallet is a standard component in proof-of-stake blockchains such as Ethereum, but note that the stake here does not require any community involvement; this is not a proof-of-stake consensus mechanism, which would require community-wide coordination.
C.4.4 Enhanced experimental data
We can also raise the difficulty to pass validation by requiring the publication of enhanced experimental data (EED). Quantum work is based on a bit description of the experimental outcome; by contrast, the full experimental data may be subject to more robust statistical validation tests that eliminate bad proposals [1]. EED can fall outside the header if the information content is large, and be linked in a similar manner to transaction data. The bits used in validation are cryptographically protected, but a user might be required to publish EED consisting of more extensive statistics (up to perhaps full sample sets) as part of a broadcast. Although EED might be manipulated, it might be taken as a rule to evaluate the strongest available EED. Weak classical work (Section C.4.2) or a stake (Section C.4.3) might be used to protect EED and disincentivize manipulation. The EED can be hashed on the next mining event, so that it is cryptographically protected subject to a delay of one block. For quantum-supremacy-satisfying experiments EED might not be spoofed by classical methods, but this does not prevent classical filtering of nonces as described above.
C.5 Hyperplane distribution mitigations
Suppose a per-nonce classical work smaller than the quantum experimental cost allows for a witness estimator. One mitigation is to modify the distribution of thresholds or hyperplanes, or to choose hyperplanes orthogonal to the estimator. In either case the classical approximation is rendered uninformative on the signs expected from the quantum experiment. If the uncertainty in the quantum experiment is relatively small, we might still have sufficient accuracy in quantum experiments for robust validation. A visualization of this process is shown in Fig. 11.
In the case of our quench experiments, we know that nearest-neighbor correlations are strongly correlated with the corresponding couplings $J$: a linear estimator in $-J$ accurately describes the weak coupling (fast anneal) limit. There are significant deviations from this estimate, but any hyperplane that is well correlated with $J$ can accumulate a reliable signal from weak estimators incorporating this approximation. To mitigate this we might choose our hyperplanes to be orthogonal to $J$. Choosing hyperplanes (or thresholds) to decrease the reliability of classical estimators will typically enhance the uncertainty in quantum validation, thus reducing the efficiency and enhancing delays in a blockchain. This effect is quantified in Fig. 15. In Fig. 7 we show an example of a chain resistant to simple classical inference based on local coupling strength.
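A minimal sketch of this mitigation, assuming i.i.d. Gaussian hyperplanes and a linear-in-$J$ stand-in for the classical estimator; names and sizes are illustrative:

```python
import numpy as np

def hyperplanes_orthogonal_to(J, n_planes, rng):
    """Draw i.i.d. Gaussian hyperplanes and project out the component
    along the coupling vector J (one entry per programmed coupler)."""
    J = np.asarray(J, dtype=float)
    u = J / np.linalg.norm(J)
    H = rng.standard_normal((n_planes, J.size))
    H -= np.outer(H @ u, u)          # remove the component along J
    return H

rng = np.random.default_rng(1)
J = rng.choice([-1.0, 1.0], size=300)        # stand-in coupling vector
H = hyperplanes_orthogonal_to(J, n_planes=32, rng=rng)

c_classical = -J                              # linear-in-J classical estimate
print(np.abs(H @ c_classical).max())          # ~0: the estimator is uninformative

# Toy "quantum" correlations: linear-in-J part plus deviations from it.
c_quantum = -0.5 * J + 0.1 * rng.standard_normal(J.size)
print(np.abs(H @ c_quantum).max())            # nonzero: only the deviations contribute
```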
Appendix D Experimental Implementation Details
The dimerized cubic lattice is equivalent to a cubic lattice minor-embedded in the QPU with chain strength 1, with a regular pattern of connectivity between chains. All couplers are uniformly and independently distributed. This matches the cubic-nodimer ensemble of [1]. A dimerized biclique is equivalent to an embedded biclique model. Couplers are on chains, and between chains in a regular pattern. For the dimerized biclique, the regular pattern of inter-chain couplers is specific to the Advantage2 QPU, so we do not perform mining or cross-QPU validation on Advantage QPUs. The dimerized bicliques ensemble matches that of [1].
For 3D spin-glasses we run experiments (mining or validation) on each of the four generally-accessible QPUs, in each case with parallel embeddings (because we consider relatively small problems of 128 qubits, we can program many copies in parallel). We fix the annealing time on the Advantage2 device, and fix annealing times for the other three Advantage devices at approximately 10 ns, so as to maximize the cross-platform validation rate (a median with respect to a test set of spin-glass disorder realizations and hyperplanes). The difference in programmed times reflects a difference in energy scales between devices; the unitless unitary evolution then becomes well matched, at least throughout the critical region [1]. Energy scales are shown with respect to the Hamiltonian (29) in Fig. 8. We then fix the number of reads to fully utilize the maximum QPU access time (1 second) per programming, and otherwise use default parameters. For the biclique problem we have 72 qubits, with the Advantage2 annealing time fixed in the same manner. Experimental plots relate to the cubic lattice ensemble, unless stated otherwise.
Per experiment (mining or validation) we choose uniformly at random among the available QPUs and then embed the problem subject to an automorphism and spin-reversal transform on each available embedding. The effect of these transformations is to randomize the mapped position and signs on programmed fields and couplers, in effect emulating the enhanced noise that might be anticipated from (more) distributed computation.
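For illustration, a minimal NumPy sketch of a spin-reversal (gauge) transform on an Ising problem $(h, J)$; this stands in for the randomization step described here and is not the production embedding pipeline:

```python
import numpy as np

def spin_reversal_transform(h, J, rng):
    """Apply a random gauge g in {-1,+1}^N: h_i -> g_i h_i,
    J_ij -> g_i g_j J_ij. A sample s from the transformed problem maps
    back to the original problem via s_i -> g_i s_i."""
    n = len(h)
    g = rng.choice([-1, 1], size=n)
    h_t = g * np.asarray(h, dtype=float)
    J_t = {(i, j): g[i] * g[j] * val for (i, j), val in J.items()}
    return h_t, J_t, g

rng = np.random.default_rng(2)
h = np.zeros(4)
J = {(0, 1): -1.0, (1, 2): +1.0, (2, 3): -1.0}
h_t, J_t, g = spin_reversal_transform(h, J, rng)

s_t = rng.choice([-1, 1], size=4)   # a sample returned from the transformed problem
s = g * s_t                          # mapped back to the original gauge
```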
Returned sample sets are processed to nearest-neighbor correlations (one per programmed coupler), which are then projected with independent and identically distributed hyperplanes to determine witnesses. Hyperplanes have independent and normally distributed components. In the case of Fig. 15 we also consider a variation in which hyperplanes are projected into the space orthogonal to the coupling vector $J$, as described in Appendix C.5. The correlation vector is correlated with $J$, and we remove the linear part of this correlation with such a projection, making the quantum statistics harder to spoof classically. A visualization of the projection process is shown in Fig. 11; experimental distributions for witnesses are shown in Fig. 13. Uncertainty in witness values is typically much smaller than the witness value. Witness values vary more between Advantage and Advantage2 QPUs than for same-generation QPU validations.
Witnesses are thresholded to determine hashes. This digitalization of the correlations is the standard locality-sensitive hashing method, which guarantees preservation of local distances. With enough projections a small (supremacy-achieving) error in correlations translates to a small (supremacy-achieving) Hamming distance in the resulting bit sequence. The probability that bits are validated (reproduced) under a repeated experiment is shown in Fig. 12. More than 99% of bits are typically validated, but the remaining bits can have high uncertainty, particularly with respect to cross-processor values.
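A minimal sketch of this digitalization step, with zero thresholds and illustrative sizes:

```python
import numpy as np

def correlations_to_bits(c, hyperplanes, thresholds=None):
    """Project the correlation vector c (one entry per programmed coupler)
    onto random hyperplanes and threshold to obtain hash bits."""
    w = hyperplanes @ c                       # witnesses
    if thresholds is None:
        thresholds = np.zeros(len(w))
    return (w > thresholds).astype(int), w

rng = np.random.default_rng(3)
n_couplers, n_bits = 300, 64
H = rng.standard_normal((n_bits, n_couplers))

c = rng.uniform(-1, 1, size=n_couplers)                 # "mined" correlations
bits, _ = correlations_to_bits(c, H)
c_noisy = c + 0.01 * rng.standard_normal(n_couplers)    # a validator's estimate
bits_v, _ = correlations_to_bits(c_noisy, H)
print("Hamming distance:", int(np.sum(bits != bits_v))) # small for small errors
```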
We performed simulations of many chains with 100 miners modeling a mining pool and transactions, or taking the limit of many miners (where no miner mines more than one block). Examples are shown in Figs. 4, 9, 10 and 7. However, assuming no simultaneous broadcasts, only the distribution of witness statistics impacts the structure and stability of the chain, via mining events and stakeholder validation. Understanding this, we were able to accelerate mining (Section D.2) and simulate experiments by resampling (Section D.3). We first describe our main statistical measures of blockchain stability in Section D.1.
D.1 Blockchain Efficiency and Transaction Delay
For a blockchain to be successful, (1) efficiency must be high so mining work is reliably rewarded and (2) delays in transaction confirmation must be small to enable use as a real-time ledger. These quantities are strongly correlated but subtly different.
Any stakeholder identifies a node with maximum Chainwork that defines the strongest chain; this can be considered a random variable and defines a unique strongest path from the genesis node.
We define blockchain efficiency as the expected number of blocks in the strongest path divided by the total number of blocks broadcast, in the limit of many blocks. For viable parameterizations this concentrates at a finite value as the number of block broadcasts becomes large. For purposes of Figs. 5 and 15 we estimate this value once after every block broadcast (mining event) over the course of each chain’s evolution. We discard the values for the first 50% of the chain evolution and average the remaining result. The resulting value is shown as a mean and standard error with respect to either 5 or 10 independent chains.
We can establish two such independent strongest paths, using two stakeholders who have not mined any blocks, and define their pair delay as the number of blocks in the first path absent from the second path (i.e. the depth at which there is disagreement in the strongest chain). If two users agree on the strongest chain this delay is zero, and it is otherwise positive. Stakeholders can be miners: the pair delay describes the distribution of mining power relative to a typical user. If pair delays are with high probability less than some depth $d$, then a stakeholder can have corresponding confidence that blocks of depth at least $d$ on the strongest chain are immutable, since mining power is concentrated on branches compatible with the block validity. We call $d$ the transaction delay.
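A minimal sketch of these two statistics, with strongest paths represented as lists of block identifiers (purely illustrative):

```python
def efficiency(strongest_path, total_broadcast):
    """Blocks on the strongest path divided by all blocks broadcast."""
    return len(strongest_path) / total_broadcast

def pair_delay(path_a, path_b):
    """Number of blocks in path_a absent from path_b, i.e. the depth at
    which the two stakeholders disagree on the strongest chain."""
    return len(set(path_a) - set(path_b))

# Two stakeholders agreeing except for the most recent blocks:
path_a = ["genesis", "b1", "b2", "b3", "b4"]
path_b = ["genesis", "b1", "b2", "b5"]
print(efficiency(path_a, total_broadcast=8))   # 0.625
print(pair_delay(path_a, path_b))              # 2
```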
For Fig. 15 we measure one pair delay per chain following each mining event on chains of length from 512 up to 2048. We determine the empirical distribution of the pair delay, discarding statistics from the first half of each chain. We threshold the distribution to determine a transaction delay with 95% confidence, presenting the mean and variance with respect to at least 5 (at most 10) blockchains. Although pair delays measured this way are correlated, we have at least thousands of effectively independent samples for each data point, so that a 95% quantile is established with reasonable confidence [49].
D.2 Mining Acceleration for Blockchain Analysis
Each mining event nominally requires a number of experiments inversely proportional to the per-nonce success probability. For purposes of blockchain structural analysis we show methods whereby only $O(1)$ experiments are required per mining event. This allows us to present, with practical compute resources, results on stability for long chains, limited primarily by the number of validation experiments rather than mining.
A valid mining event occurs when the QPU produces witnesses that (a) after thresholding contain the required number of zeros (with the +/-1 Chainwork definition) or (b) satisfy the confidence-based Chainwork criteria.
We first make the approximation in all experiments of perfect network synchronization, discounting the rare event of two miners simultaneously completing a proof-of-work. This approximation is reasonable if the mining rate is small enough.
We employ a hashing method where the overall sign on any hyperplane is uniformly distributed in $\{-1,+1\}$. For this reason the probability that a nonce describes a broadcastable (valid) block in case (a) is $2^{-n_0}$, where $n_0$ is the number of required zero bits; mining is successful if and only if the hyperplane signs yield those zeros. We can therefore emulate properly distributed statistics of a fair mining event by first running an experiment to produce the witness values, and then resampling the signs on hyperplanes to allow mining.
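A minimal sketch of this acceleration for case (a), assuming zero thresholds and the convention that a proof-of-work bit is 0 when its signed witness is non-positive; names are illustrative:

```python
import numpy as np

def force_mining_event(witnesses, rng):
    """Given witnesses w_k from one experiment (zero thresholds assumed),
    choose hyperplane signs s_k in {-1,+1} so every thresholded bit is 0
    (s_k * w_k <= 0), emulating a valid +/-1-Chainwork mining event."""
    w = np.asarray(witnesses, dtype=float)
    signs = np.where(w > 0, -1.0, 1.0)
    ties = (w == 0)                      # ties may take either sign; randomize them
    signs[ties] = rng.choice([-1.0, 1.0], size=ties.sum())
    return signs, signs * w

rng = np.random.default_rng(4)
w = rng.standard_normal(16)              # witnesses from a single experiment
signs, w_signed = force_mining_event(w, rng)
assert np.all(w_signed <= 0)             # all proof-of-work bits are zero
```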
For a confidence-based Chainwork, the probability of successful mining is a function of more than the signs on the hyperplanes. To accelerate mining events in this case, we perform a large number of experiments to determine many copies of the witness values. In each case, we exhaustively enumerate all hyperplane signs consistent with a mining event. We thereby enumerate a subset of all mining events. These mining events are correlated (in some cases differing only by the signs on hyperplanes), but this is judged to be a weak correlation. Most often only one value for the signs is supported, and events that allow many sign values are rare for the parameter ranges presented. The rate of mining can be inferred, and the mining subset can be used in a randomized order to model the mining.
The probability of confidence-threshold-satisfying events depends on the witness values and the confidence threshold, but remains approximately equal to that of the +/-1 case, subject to a small correction, as shown in Fig. 14.
D.3 Resampling Witness Methodology
For the case that Chainwork is defined as +/-1 for matching (unmatching) sequences, knowing the distribution of the bits for each device used in mining and validation is sufficient to evaluate the distribution of Chainwork and thereby the evolution of the blockchain. For the case of confidence-based Chainwork, it is sufficient to know the distribution of witness values.
For a training set of 5 experiments and random projections, the distribution of bits (thence validation rates) and the distribution of witness values (thence confidence in validation) can be established as shown in Figs. 12 and 13. We can fit the witness distributions by Gaussians, and thresholded bit outcomes by Bernoulli distributions. For purposes of accelerating analysis we can then resample these distributions to model a larger set of experiments without running a QPU experiment each time.
We employ this procedure in Figs. 5 and 15, with results consistent with full blockchain experiments involving a finite number of miners, such as those in Fig. 9. We consider 5120 combinations of model (spin-glass realization) and hyperplanes, for each of the 4 experimental QPUs. Using 20 programmings, we determine the distributions (Gaussian or Bernoulli for witnesses and bits, respectively). We can then use these parameterized distributions to generate reasonable witness (and thresholded witness) distributions. For each mining attempt, we uniformly randomly sample a combination of model and hyperplanes, and then for any mining or validation event we choose uniformly at random a QPU and sample from the corresponding distribution.
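A minimal sketch of this resampling loop, with illustrative shapes and one scalar witness per combination for simplicity:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in training data: witnesses per (combination, programming), per QPU.
# Shapes are illustrative: 5120 combinations x 20 programmings, 4 QPUs.
train = {qpu: rng.standard_normal((5120, 20)) + 0.3 for qpu in range(4)}

# Fits per (QPU, combination): Gaussian for witnesses (confidence-based
# Chainwork), Bernoulli rate for thresholded bits (+/-1 Chainwork).
mu = {q: t.mean(axis=1) for q, t in train.items()}
sigma = {q: t.std(axis=1, ddof=1) for q, t in train.items()}
p_one = {q: (t > 0).mean(axis=1) for q, t in train.items()}

def new_block_combination(rng):
    """Per mining attempt: a uniformly random (model, hyperplanes) combination."""
    return int(rng.integers(5120))

def sample_event(combo, rng):
    """Per mining or validation event: choose a QPU uniformly at random and
    draw a witness (and its thresholded bit) from the fitted distributions."""
    qpu = int(rng.integers(4))
    w = rng.normal(mu[qpu][combo], sigma[qpu][combo])
    bit = int(rng.random() < p_one[qpu][combo])
    return w, bit

combo = new_block_combination(rng)
print(sample_event(combo, rng))   # the miner's event for this block
print(sample_event(combo, rng))   # a validator's event for the same block
```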
We can further save on computation costs by only mining (validating) as necessary for purposes of constructing the blockchain. We need only understand a miner’s verification pattern if they are responsible for broadcasting the next block. We therefore construct on-the-fly a miner’s Chainwork and establish their strongest chain at the time of mining. With this approximation we can work in the informative limit of well-distributed compute power (many miners), where each miner mines at most one block in any blockchain. The number of validation events required to realize a chain then grows at most quadratically with the total number of blocks broadcast.
Qualitative justification for this approach relies on the observation that a spin-glass ensemble operated in our experimental regime is self-averaging. The distributional properties of such witnesses fluctuate little between typical spin-glass disorder realizations. Variation in witness values between QPUs (and embeddings) appears well described by Gaussian distributions, and since hyperplanes are nearly orthogonal the correlations in their fluctuations are weak. However, we expect this method to underestimate fluctuations, so that Figs. 5 and 15 may be biased toward higher efficiency (and reduced delay). Full simulations (in which every verification and mining event uses a QPU) are used for all other figures. The outcomes are in qualitative agreement with the resampling method, and in reasonable quantitative agreement with respect to blockchain efficiency and delay.