We are very pleased to have published "Proof-of-Work Consensus by Quantum Sampling," introducing a novel approach with the potential to revolutionise blockchain consensus algorithms. The research applies coarse-grained boson-sampling (CGBS) as a quantum Proof-of-Work (PoW) scheme, in which users perform boson-sampling and commit their samples to the network based on current block information. The scheme incentivises honest behaviour through a combination of rewards for miners submitting honest samples and penalties for miners submitting dishonest ones, leading to a Nash equilibrium, and it offers dramatic speedups and energy savings compared with classical hardware computation.
Find the paper on arXiv here.
Proof-of-Work Consensus
Any system for securely recording transactions requires a mechanism for approval and authorisation. Conventionally, this relies on a trusted central authority: in a banking context, the bank itself plays this role, and when trading shares on the stock market, an exchange acts in the same capacity. The purpose of blockchains is to avoid reliance on centralised authorities, operating instead in a distributed and trustless environment.
Blockchains rely on consensus algorithms for this purpose, where the goal is to replace a trusted central authority with an untrusted, decentralised one for authorising transactions. For a consensus algorithm to obtain honest and reliable outcomes from untrusted parties, it must both incentivise participation and prevent collusion. Proof-of-work (PoW) is one such approach.
In a proof-of-work consensus protocol, nodes in the network compete for the right to participate in consensus and sign off on new transactions by solving a mathematical puzzle that requires expending computational resources. To prevent collusion, consensus participants should be chosen randomly, and to incentivise their participation there should be a reward. The randomisation is enforced by the way the mathematical puzzle is defined.
In the original Bitcoin protocol the mathematical puzzle is inverse hashing, built on a so-called one-way function. A hash function is a pseudo-random function that is easy to evaluate but hard to invert: given an input it is straightforward to compute the hash output, but given an output the only known approach to finding a satisfying input is brute-force trial and error. Bitcoin miners compete to solve an inverse-hashing problem associated with a set of transactions awaiting authorisation. Participants who solve it first win the right to participate in consensus and sign off on the transactions, and are rewarded with newly minted Bitcoin, hence the term mining. The pseudo-random structure of hash functions randomises who succeeds at mining, and hence who participates in consensus, mitigating the potential for collusion. And because hash functions are easy to evaluate in the forward direction, once a participant announces a solution it is easy for others to verify it.
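To make this concrete, here is a minimal sketch of hash-based mining in Python (a simplification of Bitcoin's actual double-SHA-256 target rule): the miner brute-forces a nonce, while anyone can verify the result with a single hash evaluation.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force search for a nonce such that SHA-256(block_data + nonce)
    begins with `difficulty` hex zeros (toy stand-in for Bitcoin's target)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # anyone can verify this by re-hashing once
        nonce += 1

# Example: low difficulty so it finishes quickly.
nonce = mine("pending transactions...", difficulty=4)
print("winning nonce:", nonce)
```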
There are, however, three issues that threaten the continued viability of PoW consensus.
- Problems such as inverse hashing admit solutions by special-purpose processors like application-specific integrated circuits (ASICs), fuelling an extremely energy-intensive hardware arms race. Asset-based blockchains with mining rewards, such as Bitcoin, must maintain a constant average block-mining time in order to avoid inflationary pressure and to maintain integrity of the protocol in the presence of network latency and growth. This means that as the computational speed of ASICs increases, the difficulty of the one-way function must be increased, and consequently the energy cost of mining increases.
- PoW as traditionally formulated assumes that only classical computers are available as mining resources. Via Grover's algorithm, quantum computers achieve a quadratic speedup on unstructured search problems like inverse hashing, which breaks the progress-free condition: the probability of solving the problem grows non-linearly with the computational time spent (see the sketch following this list). An adversarial network of future quantum computers performing traditional PoW consensus will exhibit radically different dynamics to its classical counterpart. A future-proofed consensus algorithm must take quantum processing into account.
- A large network of participants using brute force to solve random mathematical problems is inherently wasteful. Such a network collectively consumes enormous computational resources and vast amounts of energy, growing continually over time. The energy consumption associated with a single Bitcoin transaction is currently on the order of 700 kWh; as of May 2023, this was equivalent to the energy an average U.S. household consumes over 19.1 days. Such energy intensity makes PoW based on inverse hashing extremely unsustainable.
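The sketch below illustrates the quadratic-speedup point numerically. It uses the well-known Grover success probability sin²((2t+1)θ) after t iterations, with θ = arcsin(1/√N), and compares it against classical brute force, whose success probability grows only linearly in the number of queries.

```python
import math

N = 2**20          # size of the search space (toy example)
theta = math.asin(1 / math.sqrt(N))

for t in [100, 200, 400, 800]:
    p_classical = t / N                          # linear in queries
    p_grover = math.sin((2 * t + 1) * theta)**2  # ~quadratic in iterations
    print(f"t={t:4d}  classical={p_classical:.6f}  grover={p_grover:.6f}")
```

Doubling the quantum runtime roughly quadruples the success probability, which is exactly the violation of progress-freeness described above.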
Boson-Sampling
Boson-sampling is an example of a NISQ-era (Noisy Intermediate-Scale Quantum) algorithm, one that can be readily implemented using current quantum hardware. Unlike large-scale universal quantum computing, which requires active error correction and is likely at least a decade away, boson-sampling is already available on the market today.
Boson-sampling is the problem of evolving a number of single photons through a large, random optical interferometer (i.e. a network of beamsplitters) and measuring the photon statistics at the output. The output state prior to detection is a weighted superposition of all the ways the photons could exit the different outputs (the inputs and outputs are referred to as optical modes). The associated combinatorics implies a super-exponential number of possible configurations. Upon measurement, we collapse the state and observe exactly one, chosen at random with probability given by the squared magnitude of the respective quantum amplitude in the output superposition. This means that if we repeat the experiment many times, we are statistically likely to obtain a different measurement outcome each time. This type of problem is known as a sampling problem: we are sampling from a random distribution. Contrast this with the inverse-hashing problem, where a given problem instance has a well-defined solution that can be deterministically verified.
The boson-sampling problem of measuring samples from the output of a large optical interferometer is easy to implement using quantum hardware, but computationally unviable to simulate classically. An instance of a boson-sampling problem is defined by the choice of interferometer and the input configuration of photons.
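For small instances the output distribution can be computed classically from matrix permanents, which is precisely the computation that becomes intractable at scale. The following sketch (assuming NumPy, a naive factorial-time permanent, and a toy 3-photon, 6-mode instance) computes the probability of one collision-free outcome:

```python
import itertools
import numpy as np

def random_unitary(m: int, rng) -> np.ndarray:
    """Haar-random m×m unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def permanent(a: np.ndarray) -> complex:
    """Naive O(n!) permanent: fine for the few photons used here."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

rng = np.random.default_rng(7)
m, photons = 6, 3                  # 6 modes, 3 single photons
U = random_unitary(m, rng)
inputs = (0, 1, 2)                 # modes fed with single photons

# Probability of one photon exiting each of output modes (3, 4, 5):
outputs = (3, 4, 5)
sub = U[np.ix_(outputs, inputs)]   # rows = outputs, columns = inputs
prob = abs(permanent(sub))**2
print(f"P{outputs} = {prob:.6f}")
```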
Proof-of-Work Consensus by Quantum Sampling
To employ boson-sampling in a consensus algorithm, multiple independent parties solving the same problem instance must be able to collectively verify which participants have honestly solved the problem and thereby won the right to sign off on new transactions. But since boson-sampling is a sampling problem in which every sample is statistically likely to be unique, directly comparing measurement results is ruled out: results will necessarily differ even when everyone is acting honestly and solving the same problem. So how can parties implementing boson-sampling reach consensus on who is acting honestly and who is not?
The answer is that, rather than directly comparing measurement outcomes, we compare statistical properties of those outcomes that are expected to be consistent even when the individual samples are not. Here we employ the peak-bin probability of coarse-grained boson-sampling for this purpose. In coarse-grained boson-sampling we partition the output space into bins according to some binning strategy, which can be defined in one of two ways: in mode-binning, bins are sets of output modes from the interferometer; in state-binning, bins are sets of photon-number output configurations. A bin probability is the probability that an output sample resides within a given bin, and the peak-bin probability is the largest of these across all defined bins.
While the number of possible photon output configurations scales super-exponentially with photon number, binning is defined such that the number of bins scales only polynomially with photon number. Although individual samples from a given boson-sampling instance are random and statistically likely to be distinct, once binned, repetitions appear, and the bin probabilities converge to values that are expected to be consistent between different parties solving the same problem instance. This provides a mechanism for independent parties to form consensus on who has honestly solved the problem and is entitled to sign off on transactions for a reward: mining via quantum proof-of-work.
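Here is a toy sketch of mode-binning and the peak-bin statistic, with made-up samples standing in for real boson-sampler output (each sample lists the output mode of each photon):

```python
from collections import Counter

# Hypothetical samples: each records the output mode of each of 3 photons.
samples = [(0, 3, 5), (1, 3, 4), (0, 2, 5), (1, 4, 5), (0, 3, 5)]

# Mode-binning strategy: partition the 6 output modes into 2 bins.
mode_bins = [{0, 1, 2}, {3, 4, 5}]

def coarse_grain(sample):
    """Map a raw sample to the photon count in each bin."""
    return tuple(sum(1 for mode in sample if mode in bin_) for bin_ in mode_bins)

counts = Counter(coarse_grain(s) for s in samples)
bin_probs = {outcome: c / len(samples) for outcome, c in counts.items()}
peak = max(bin_probs.values())
print(bin_probs)                       # {(1, 2): 0.8, (2, 1): 0.2}
print("peak-bin probability:", peak)   # 0.8
```

Raw samples rarely repeat, but binned outcomes do, so independent parties can compare these converged statistics.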
While this mining process is computationally unviable for classical participants, it is extremely efficient for miners possessing the necessary quantum hardware, and it is far more energy efficient than current inverse-hashing proof-of-work. Since every new block should be associated with a fresh puzzle, we define the problem instance by a permutation specifying which inputs of a fixed interferometer are fed with single photons, where the permutation is dictated by the block header of the previous transaction block.
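As a sketch of this idea (not the paper's exact construction), the problem instance can be derived deterministically from the previous block header, so that every miner solves the same instance:

```python
import hashlib
import random

def instance_from_header(block_header: bytes, m: int, n: int) -> list[int]:
    """Deterministically pick which n of the m input modes receive a
    single photon, seeded by the previous block header."""
    seed = int.from_bytes(hashlib.sha256(block_header).digest(), "big")
    rng = random.Random(seed)
    return sorted(rng.sample(range(m), n))

# Every miner derives the same problem instance from the same header:
print(instance_from_header(b"previous block header bytes", m=16, n=4))
```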
Implications for the Field of Quantum Computing
Boson-sampling was first proposed as a means for current-generation optical quantum hardware to demonstrate quantum supremacy. However, because the problem was contrived for this purpose, it has until now found little practical use. The boson-sampling proof-of-work protocol demonstrates that this otherwise contrived problem has practical utility, perhaps the first genuinely useful commercial application for boson-sampling.
This result also demonstrates a broader conceptual point: proof-of-work needn't be defined in terms of one-way functions with deterministic, verifiable solutions. Since the purpose of a proof-of-work consensus algorithm is to force parties to solve a puzzle and prove that they have done so honestly, convergent statistical measures can equally act as proof-of-work.
Conclusion
This protocol acts as a proof-of-concept that current NISQ-era quantum hardware has utility in energy-efficient proof-of-work distributed consensus algorithms. While we have focussed here on boson-sampling, the outlook is that other NISQ architectures may find similar utility: quantum sampling problems are not limited to photonic implementations and have been proposed and demonstrated across various other physical architectures.
Looking beyond NISQ to the era of scalable, universal quantum computing, there are further identifiable candidates for quantum proof-of-work. The inverse-hashing problem employed in Bitcoin mining lies in NP: its solutions can be efficiently verified classically, even though finding them is classically hard. In principle, any function with this property may act as a direct substitute for inverse hashing. Integer factorisation and the discrete logarithm problem are two further examples: efficient quantum algorithms based on Shor's algorithm exist for solving both, and both admit efficient classical verification. In the quantum context these could be employed as puzzles where the size of the problem instance dictates the required quantum runtime.
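As a toy illustration of this verification asymmetry: finding the factors of a large integer is hard, but checking a claimed factorisation takes only a few multiplications.

```python
def verify_factorisation(n: int, factors: list[int]) -> bool:
    """Checking a claimed factorisation is trivial, however hard
    the factors were to find."""
    product = 1
    for f in factors:
        if f <= 1 or n % f != 0:
            return False
        product *= f
    return product == n

print(verify_factorisation(21, [3, 7]))   # True
print(verify_factorisation(21, [3, 8]))   # False
```

Read the full paper here.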