Why is quantum computing unsuitable for mining?
The idea that quantum computers could one day “revolutionize” Bitcoin mining is a recurring theme in the media. This anticipation is based on a confusion between two distinct fields: post-quantum cryptanalysis (concerning the security of digital signatures) and proof of work (concerning the search for valid SHA-256 hashes). However, recent scientific research shows that quantum computing offers no competitive advantage for mining, either in theory or in practice. The following analysis explains the specific reasons: algorithmic limitations, hardware constraints, energy costs, protocol neutralization, and lack of real economic impact.
Key figures to know beforehand:
- 256 bits: output size of the SHA-256 hash used for Bitcoin mining.
- 1 in 2²⁵⁶: the probability of any single 256-bit hash value; the chance of satisfying the network target is T / 2²⁵⁶.
- 10 minutes: the average block interval targeted by the Bitcoin protocol.
- 2016 blocks: the interval at which network difficulty is automatically recalculated.
- ≈ 1.45 × 10¹⁹: average number of theoretical Grover attempts for a difficulty equivalent to 128 bits, i.e., (π/4) × 2⁶⁴.
- 100 to 400 TH/s: computing power of modern ASICs (hundreds of trillions of hashes per second).
- 12 to 35 joules per terahash: average energy efficiency of a current ASIC miner.
- < 1 nanojoule per hash: energy cost of a single hash on an SHA-256 ASIC.
- 10⁻¹⁴ seconds: effective time per hash on an ASIC (amortized across its massively parallel cores).
- 10⁻³ to 1 second: estimated duration of one quantum SHA-256 oracle call per iteration, even in an optimistic scenario.
- 10¹¹ to 10¹⁵ times slower: performance gap between a quantum oracle and a conventional ASIC.
- 10³ to 10⁶ physical qubits: required to stabilize a single error-corrected logical qubit.
- > 10⁹ T gates: estimated size of a complete fault-tolerant quantum SHA-256 circuit.
- 10 to 15 millikelvin: typical operating temperature of superconducting quantum systems.
- Several kilowatts: power consumption of a single cryogenic dilution refrigerator.
- A few hundred to just over a thousand physical qubits: capacity of the best quantum processors (Google, IBM, 2025).
- Several million physical qubits: required, once error correction is included, to break a 256-bit ECDSA key with Shor's algorithm.
- 2²⁵⁶ ≈ 1.16 × 10⁷⁷: total search space of the SHA-256 hash, which Grover can reduce only quadratically.
- O(2ⁿ) → O(2ⁿ⁄²): Grover's maximum theoretical gain, i.e., only a quadratic acceleration.
- 10⁶ to 10⁸ times more expensive: estimated energy cost of evaluating one hash on a quantum computer compared with a classical ASIC.
Definition of a quantum SHA-256 oracle
A quantum SHA-256 oracle is the SHA-256 hash function used in Bitcoin mining, translated into the formalism of quantum computing. It is a central component of Grover's algorithm when that algorithm is applied to a hash function.
In a classical calculation, SHA-256 is a deterministic function: it takes an input (a block of data) and produces a 256-bit hash. In quantum computing, this function must be represented by a reversible unitary operation, i.e., a logic circuit that transforms an input quantum state |x⟩ and an output register |y⟩ according to the rule:
|x, y⟩ → |x, y ⊕ SHA-256(x)⟩
where ⊕ represents bitwise addition (XOR). This operator is called a quantum oracle because it “guides” Grover's search by marking inputs whose hash satisfies a given condition (for example, being less than the network target).
During each iteration of Grover's algorithm, the quantum SHA-256 oracle:
- Computes the SHA-256 hash of all candidate inputs held in superposition.
- Compares the result to a condition (e.g., “the first 20 bits are equal to zero”).
- Flips the phase of the states that satisfy this condition.
This operation then amplifies the probability of measuring a valid input at the end of the calculation through constructive interference.
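To make this marking-and-amplification loop concrete, here is a minimal sketch, in Python, that simulates Grover's search on a toy problem with a classical NumPy statevector rather than real quantum hardware. The 16-bit input space and the stand-in condition (“the first 4 bits of SHA-256(x) are zero”) are assumptions chosen so the simulation fits in memory; nothing here scales to the real 256-bit space, which is precisely the point.

```python
# Toy statevector simulation of Grover's search (NOT real quantum hardware).
# Assumptions: a 16-bit input space and a stand-in predicate ("first 4 bits of
# SHA-256(x) are zero") playing the role of the mining condition.
import hashlib
import math
import numpy as np

N_BITS = 16                       # toy search space: 2^16 inputs
N = 2 ** N_BITS

def predicate(x: int) -> bool:
    """Stand-in mining condition: the first 4 bits of SHA-256(x) are zero."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return digest[0] >> 4 == 0    # top 4 bits of the first byte

# Precompute which basis states the oracle should "mark" (phase-flip).
marked = np.array([predicate(x) for x in range(N)])
m = int(marked.sum())             # number of solutions M

# Uniform superposition |s> over all inputs.
amps = np.full(N, 1.0 / math.sqrt(N))

# Optimal number of Grover iterations ~ (pi/4) * sqrt(N/M).
iterations = int(round(math.pi / 4 * math.sqrt(N / m)))

for _ in range(iterations):
    amps[marked] *= -1.0                      # oracle: phase-flip the solutions
    amps = 2.0 * amps.mean() - amps           # diffusion: inversion about the mean

p_success = float((amps[marked] ** 2).sum())  # probability of measuring a solution
print(f"solutions M = {m} out of N = {N}")
print(f"Grover iterations = {iterations} vs ~{N // m} classical attempts on average")
print(f"probability of measuring a valid input: {p_success:.3f}")
```

After only about √(N/M) iterations, the probability of measuring a marked input is close to 1, which is exactly the quadratic gain discussed below.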
Building a realistic quantum SHA-256 oracle involves:
- Converting the irreversible operations of classical SHA-256 (modular addition, shifts, XOR, AND, OR) into reversible quantum gates.
- Ensuring quantum coherence over millions of successive gates.
- Maintaining fault tolerance (error correction) across thousands of logical qubits.
In practice, each quantum SHA-256 oracle would correspond to an extremely deep circuit, comprising billions of elementary operations and requiring millions of physical qubits.
In summary, a quantum SHA-256 oracle is the reversible and unitary version of the hash function used in Bitcoin, serving to mark valid solutions in a Grover algorithm. It is the theoretical element that links classical cryptography to quantum computing, but also the main practical barrier making quantum mining unfeasible.
Nature of the computational problem
Mining is based on the SHA-256 hash function, applied twice to the block header: the miner must find a nonce value such that the resulting hash is below a target set by the protocol. This process is an exhaustive search in which each attempt is statistically independent.
The probability of success for an attempt is:
p = T / 2^256 where T represents the network target.
The average number of attempts required to find a valid block is therefore:
N_classic = 1 / p
In this model, each attempt is a hash calculation, and current ASIC miners perform several hundred trillion hashes per second, thanks to a massively parallel architecture optimized for energy efficiency of a few dozen joules per terahash.
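To give these formulas a sense of scale, here is a small, hedged calculation in Python. The 78-leading-zero-bit target and the 400 TH/s machine are illustrative assumptions, not live network parameters; they simply show why a single ASIC cannot realistically solo-mine and why the total hash rate of the network matters.

```python
# Illustrative back-of-the-envelope: expected attempts and time for a single ASIC.
# The 78-leading-zero-bit target and the 400 TH/s hash rate are assumptions for
# illustration, not live network parameters.

LEADING_ZERO_BITS = 78            # hypothetical difficulty: hash < 2^(256 - 78)
HASH_RATE = 400e12                # hypothetical ASIC: 400 TH/s

p = 2.0 ** -LEADING_ZERO_BITS     # p = T / 2^256 with T = 2^(256 - 78)
n_classic = 1.0 / p               # average number of hash attempts
seconds = n_classic / HASH_RATE

print(f"success probability per hash : {p:.3e}")
print(f"average attempts (N_classic) : {n_classic:.3e}")
print(f"expected time for one ASIC   : {seconds:.3e} s (~{seconds / 3.15e7:.1f} years)")
```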
The illusion of quantum acceleration
Grover's algorithm (1996) accelerates the search for a particular element in an unstructured space. Its complexity goes from O(2^n) to O(2^(n/2)). Applied to mining, this would reduce the average number of attempts to:
N_Grover ≈ (π/4) × 1 / √p, which is a theoretical gain of a quadratic factor.
Take a simple example. If the probability of success is p = 2⁻¹²⁸, then N_classic = 2¹²⁸, while N_Grover ≈ (π/4) × 2⁶⁴ ≈ 1.45 × 10¹⁹.
Even in the best-case scenario, this gain remains marginal in view of the physical constraints of implementation. Quantum mining therefore does not multiply the speed by 10⁶ or 10⁹; it only reduces the exponential complexity by a quadratic factor. This improvement is arithmetically insufficient to compete with ASIC farms equipped with millions of parallel circuits.
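The following sketch turns this comparison into wall-clock time under stated assumptions: an illustrative network-wide hash rate of 5 × 10²⁰ H/s, a difficulty equivalent to 78 leading zero bits (chosen so the classical network finds a block roughly every 10 minutes), and the optimistic 10⁻³ s oracle latency quoted above. None of these are measured values; they only illustrate how a serial Grover machine compares with a massively parallel network.

```python
# Wall-clock comparison under stated assumptions: a network of parallel ASICs
# versus one serial Grover machine. The network hash rate (5e20 H/s), the
# 78-bit-equivalent difficulty, and the 1e-3 s oracle latency are illustrative
# assumptions, not measured values.
import math

p = 2.0 ** -78                 # assumed block-level success probability per hash
network_hash_rate = 5e20       # assumed total network rate, hashes per second
t_oracle = 1e-3                # assumed latency of one quantum SHA-256 oracle call

t_network = (1.0 / p) / network_hash_rate          # parallel classical search
grover_iters = (math.pi / 4) / math.sqrt(p)        # serial Grover iterations
t_grover = grover_iters * t_oracle                 # one quantum machine

print(f"classical network, per block  : {t_network:.0f} s (~10 minutes)")
print(f"single Grover miner, per block: {t_grover:.2e} s (~{t_grover / 3.15e7:.0f} years)")
# Note: running K Grover machines in parallel only buys a factor of sqrt(K)
# (Zalka, 1999), so closing a ~7e5x time gap would take on the order of 5e11 machines.
```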
Actual implementation of quantum SHA-256
The main obstacle lies in the depth and stability of the circuits needed to execute SHA-256 in quantum form. A reference study (Amy et al., SAC 2016) estimates that implementing SHA-256 with quantum error correction would require several billion T gates and millions of physical qubits. By comparison, the best experimental quantum processors (Google, IBM, Rigetti) currently handle from a few hundred to just over a thousand physical qubits, with gate error rates between 10⁻³ and 10⁻² and coherence times on the order of microseconds.
Even assuming the availability of a fault-tolerant quantum computer (FTQC), the circuit depth of Grover's algorithm on SHA-256 would far exceed the coherence window of current qubits. The cost of error correction, which requires 10³ to 10⁶ physical qubits per logical qubit, makes any industrial application impractical.
Energy and hardware limitations
Contrary to popular belief, a quantum computer does not consume “zero energy”. Superconducting devices must be cooled to temperatures close to absolute zero (10 to 15 mK) by expensive, energy-intensive dilution refrigerators, while trapped-ion devices require ultra-high vacuum, laser cooling, and precision control hardware. The consumption of a single cryogenic system already exceeds several kilowatts for a few hundred qubits, not counting the microwave control instruments and high-frequency power supplies.
However, mining is a massively parallel process: billions of independent calculations must be performed per second. Quantum computing, on the other hand, is sequential, with each Grover iteration depending on the previous one. Thus, even if a quantum computer could perform a “smarter” hash, its overall throughput would be orders of magnitude lower than that of specialized ASICs, whose energy efficiency per operation is less than 1 nanojoule.
The 2023 study (“Conditions for advantageous quantum Bitcoin mining,” Blockchain: Research and Applications) confirms that the energy cost and latency of quantum control negate any theoretical advantage. In other words, quantum computing is unsuited to the PoW structure, which is based on the ultra-fast repetition of a simple function, not on deep, coherent computation.
Difficulty adjustment: protocol neutralization
Even if an actor discovered a faster quantum method, the Bitcoin protocol's difficulty adjustment mechanism would make this advantage temporary. The difficulty is recalculated every 2016 blocks to maintain an average interval of 10 minutes. If a “quantum” miner doubled the network's overall hash rate, the difficulty would roughly double at the next adjustment, bringing block production back to one block every 10 minutes on average. Quantum computing could therefore never “break” mining: it would simply be absorbed into the economic equilibrium of the network and then neutralized.
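A minimal sketch of the retargeting rule, assuming the usual parameters (a 2016-block window, a 600-second target spacing, and the protocol's clamp on each adjustment to a factor of four), shows how a doubled hash rate is absorbed at the next adjustment:

```python
# Minimal sketch of Bitcoin's difficulty retargeting logic, assuming the usual
# parameters: 2016-block window, 600 s target spacing, adjustment clamped to 4x.
EXPECTED_TIMESPAN = 2016 * 600     # seconds the window "should" take

def retarget(difficulty: float, actual_timespan: float) -> float:
    """Return the next difficulty given how long the last 2016 blocks took."""
    ratio = EXPECTED_TIMESPAN / actual_timespan
    # The protocol clamps each adjustment to at most a factor of 4 either way.
    ratio = max(0.25, min(4.0, ratio))
    return difficulty * ratio

# A hypothetical "quantum" miner doubles the total hash rate: blocks arrive in
# ~5 minutes, the window closes in half the time, and difficulty doubles.
difficulty = 1.0
actual = EXPECTED_TIMESPAN / 2     # window completed twice as fast
print(retarget(difficulty, actual))   # -> 2.0: block interval returns to ~10 minutes
```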
The only residual risk would be centralization: the possession of exceptionally powerful quantum hardware by a single player could temporarily unbalance the hashpower market. But this risk is economic in nature, not cryptographic, and remains unlikely given the necessary investment costs (cryogenic infrastructure, maintenance, advanced engineering).
Differentiating risks: signatures vs. hashing
Two distinct threats must be distinguished:
- Hashing (SHA-256): used for mining; it is resistant to quantum attacks because Grover confers only a quadratic gain.
- Signatures (ECDSA): used to prove ownership of an address; they would be vulnerable to Shor's algorithm (1994), which can compute discrete logarithms.
It is therefore the signature layer, not the mining layer, that justifies post-quantum transition work. Recent estimates put the resources needed to break a 256-bit ECDSA key at several million physical qubits once error correction is included. As of 2025, no system comes close to this scale: error-corrected logical qubits are still counted in units, not thousands.
The real progress of 2024-2025: advances with no impact on mining
Recent announcements of progress, for example the stabilization of error-corrected logical qubits, are important steps, but they concern experimental reliability, not computing power. Quantum computing useful for mining would involve billions of coherent, repeated operations, which current qubits cannot sustain. Even a major breakthrough in error correction or modularity would not change the fact that quantum architecture remains incompatible with the massively parallel, shallow-depth, high-frequency nature of mining.
The following explanations are a little more complex, so here are some prerequisites
The concepts of bits, pool mining, and difficulty bounds may seem abstract. Here is a clear explanation of these three essential elements for understanding how mining actually works.
MSB and LSB
In a 256-bit binary number (such as an SHA-256 output), the MSBs (Most Significant Bits) are the bits on the left: they carry the greatest weight in the value of the number. The LSBs (Least Significant Bits) are those on the right, which change most often but have little influence on the overall value. When we talk about finding a hash “with leading zeros,” it means that the MSBs must be zero: the hash begins with a long run of zeros. Miners vary a small data field called the nonce until the final hash meets this constraint. The network difficulty determines, in effect, how many of these most significant bits must be zero.
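As a toy illustration, the following Python snippet varies a nonce until the double SHA-256 of a made-up header starts with 16 zero bits. Both the dummy header and the 16-bit requirement are assumptions for demonstration; the real network requires far more leading zeros and hashes a real 80-byte block header.

```python
# Toy illustration of "vary the nonce until the MSBs are zero", using double
# SHA-256 as Bitcoin does. The dummy header and the 16-leading-zero-bit
# requirement are illustrative; the real network demands far more zero bits.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def leading_zero_bits(digest: bytes) -> int:
    """Count zero bits starting from the most significant bit (the MSBs)."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

REQUIRED_BITS = 16                        # toy difficulty
header_prefix = b"toy block header"       # stand-in for the real 80-byte header

nonce = 0
while True:
    digest = double_sha256(header_prefix + nonce.to_bytes(4, "little"))
    if leading_zero_bits(digest) >= REQUIRED_BITS:
        break
    nonce += 1

print(f"nonce found: {nonce}, hash: {digest.hex()}")
```

On average this takes about 2¹⁶ ≈ 65,000 attempts; each additional required zero bit doubles the expected work.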
How pools work
Mining is now organized into pools, groups of miners who work together and share the reward. Each miner is given simplified tasks: rather than trying to validate a full block, they produce shares, i.e., hashes that meet a target much easier than the network's. These shares serve as proof of participation: the more a miner provides, the larger their portion of the eventual block reward. The pool server constantly adjusts the individual difficulty (vardiff) to balance submission rates: a miner who is too fast is given harder tasks, which keeps the accounting fair.
Lower and upper mining limits
The Bitcoin protocol sets two difficulty thresholds that govern the entire mining process. The upper limit corresponds to the network target: for a block to be validated, its header hash must be less than this value. The lower the target, the more zeros are required at the beginning of the hash, making the block more difficult to find. Conversely, the lower limit corresponds to the difficulty of work assigned by the pools to each miner, which is much easier to achieve. It is used solely to measure individual participation.
The pool server constantly adjusts these limits. If a miner finds shares too quickly, the pool raises the difficulty of their tasks; if they find them too slowly, it lowers it. This mechanism, called vardiff, keeps every miner's share rate within a manageable band: harder shares count for proportionally more, so fast miners are not over-rewarded per share, while miners whose shares become too rare cease to be profitable.
Thanks to this balancing system, each miner's reward remains proportional to the computing power they actually contribute, leaving no room for a lasting unfair advantage. The upper and lower limits thus ensure overall network stability and a fair local distribution of work.
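The sketch below shows the principle of such a vardiff controller. The 30-second share target and the factor-of-two adjustment step are invented for illustration; real pools use their own tuning, but the feedback loop is the same.

```python
# Minimal sketch of a vardiff-style controller. The 30 s share target and the
# 2x adjustment step are invented for illustration; real pools use their own
# tuning, but the principle is the same: keep every miner's share rate similar.

TARGET_SHARE_INTERVAL = 30.0      # desired average seconds between shares

def adjust_share_difficulty(current_difficulty: float,
                            observed_interval: float) -> float:
    """Raise difficulty for miners submitting too fast, lower it for slow ones."""
    if observed_interval < TARGET_SHARE_INTERVAL / 2:
        return current_difficulty * 2.0            # shares arriving too fast
    if observed_interval > TARGET_SHARE_INTERVAL * 2:
        return max(1.0, current_difficulty / 2.0)  # shares arriving too slowly
    return current_difficulty                      # within the comfort band

# A fast miner (a share every 5 s) gets harder work; a slow one (every 120 s) easier.
print(adjust_share_difficulty(256.0, 5.0))    # -> 512.0
print(adjust_share_difficulty(256.0, 120.0))  # -> 128.0
```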
Understanding the “partial Grover” illusion
One idea often comes up: applying Grover's algorithm not to the entire 256 bits of the SHA-256 hash, but only to a portion of the most significant bits (the “MSBs”), then completing the rest in the traditional way. This approach, known as “partial Grover,” seems logical: if the search covers a smaller space (for example, 40 bits instead of 256), the number of iterations required decreases accordingly, according to the rule √(2^r). In theory, this could make it possible to obtain low-difficulty shares more quickly in a mining pool.
In practice, this approach does not change the reality of the calculation. Each Grover iteration requires executing the entire SHA-256 to evaluate the condition on the most significant bits. It is impossible to “truncate” the hash or to test a cryptographic hash function partially without computing it in full. In other words, fewer iterations are needed, but each one costs just as much, and millions of times more than a conventional hash on an ASIC.
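A short demonstration of why the hash cannot be evaluated “partially”: flipping a single bit of the input changes roughly half of the 256 output bits (the avalanche effect), so even a condition on a handful of MSBs requires computing the full function. The input strings below are arbitrary examples.

```python
# Why the hash cannot be truncated or tested "partially": changing the input
# slightly changes about half of the 256 output bits (the avalanche effect),
# so even a condition on a few MSBs requires evaluating the full function.
import hashlib

def sha256_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

a = b"example block header, nonce=0"
b = b"example block header, nonce=1"   # one-character change in the input

diff_bits = bin(sha256_int(a) ^ sha256_int(b)).count("1")
print(f"output bits that changed: {diff_bits} / 256")   # typically around 128
```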
Furthermore, Grover does not allow multiple correlated solutions to be produced. The quantum state collapses after the first measurement: to find another solution, you have to start all over again. Unlike classical computation, you cannot reuse the result to generate nearby variants or multiple close shares.
Finally, even if a quantum miner achieved a slight local acceleration on the shares, this difference would be immediately neutralized by the pools' automatic regulation mechanisms, which dynamically adjust the difficulty for each miner. The protocol is designed to maintain a balance between all participants, regardless of their speed.
In summary, “partial Grover” offers no practical advantage: the quadratic gain remains purely theoretical, negated by the slowness, decoherence, and physical constraints of quantum computing. Even when applied to a small portion of the hash, the energy, time, and structural costs of such a process exceed those of conventional miners by several orders of magnitude.
Other possible objections
“Grover's algorithm can process multiple solutions (multiple-solutions search).” Source: the PennyLane Codebook (“Grover's Algorithm | Multiple Solutions”) explains how the algorithm generalizes to finding M solutions in a space of size N.
Response: In theory, finding M solutions reduces the complexity to O(√(N/M)). However:
- In the context of mining, the “solutions” would be hashes valid for the difficulty target. But the quantum oracle must still evaluate the entire hash function for each input, so the cost per iteration remains maximal.
- Having M solutions changes neither the latency nor the circuit depth: the limits imposed by error correction and coherence remain.
- For large N (≈ 2²⁵⁶) and small M (a very rare target), √(N/M) remains astronomical.
Therefore, even with Grover's “multiple-solutions” variant, hardware and time constraints still make its application to mining impractical.
“If a quantum miner appeared, it could cause more forks/reorganizations.” Source: the academic article “On the insecurity of quantum Bitcoin mining” (Sattath, 2018) suggests that the correlation of measurement times could increase the probability of forking.
Response: This argument is interesting but largely speculative, since it assumes that an ultra-fast quantum miner is already operational. However:
- The scenario requires a quantum miner capable of reaching a speed comparable to or greater than the best ASICs, which is not realistic today.
- Even if such a miner existed, the increase in forks would result not from a generalized mining advantage but from an opportunistic strategy. It would not call into question network adaptation, difficulty adjustment, or other security measures.
- The fact that forks can occur does not mean that quantum mining is viable or advantageous: the cost remains prohibitive.
In summary, this objection can be formalized, but it does not constitute proof of an effective quantum advantage in the real world.
Economic and energy consequences
Modern ASIC farms operate at energy efficiencies of roughly 12 to 35 J/TH. A cryogenic quantum computer, even perfectly optimized, would be several orders of magnitude less efficient because of the costs of cooling, control, and error correction.
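For scale, the following snippet converts the quoted efficiencies into energy per hash and applies the 10⁶ to 10⁸ quantum overhead factor cited earlier. All inputs are this article's own figures, not independent measurements.

```python
# Convert the quoted ASIC efficiency into energy per hash, and apply the
# article's 1e6-1e8 "quantum cost" factor. All inputs are the figures quoted
# in this article, not independent measurements.

for j_per_th in (12.0, 35.0):
    j_per_hash = j_per_th / 1e12                # joules per single hash
    print(f"ASIC at {j_per_th:g} J/TH -> {j_per_hash:.1e} J per hash")
    for factor in (1e6, 1e8):
        print(f"  x{factor:.0e} quantum overhead -> {j_per_hash * factor:.1e} J per evaluated hash")
```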
Quantum computing is therefore uneconomical for mining:
- it requires a centralized architecture;
- it does not allow for large-scale duplication;
- it does not reduce total energy consumption;
- it does not improve network security.
Conclusion
Quantum computing, in its current and foreseeable state, is fundamentally unsuitable for Bitcoin mining:
- Algorithmically, Grover's quadratic acceleration remains insufficient against the exponential complexity of hashing.
- In terms of hardware, error correction and decoherence limit any attempt at large-scale parallelization.
- In terms of energy, cryogenic cooling and the complexity of control make any industrial operation inefficient.
- In terms of protocol, the difficulty adjustment mechanism neutralizes any transient advantage.
- Economically, the centralization required to maintain a quantum infrastructure would undermine the network's resilience and runs counter to the decentralized incentive structure that the nodes enforce.
The quantum threat to Bitcoin concerns cryptographic signatures (ECDSA) exclusively, not proof of work (SHA-256). Based on current knowledge and technological projections, there is no credible prospect of quantum computing offering any advantage for mining, whether in speed or in energy efficiency.
The myth of the “quantum miner” is therefore more a matter of media speculation than applied science. Bitcoin, designed to adapt and adjust its difficulty, remains today and for the foreseeable future resilient in the face of the quantum revolution.
#Bitcoin #QuantumComputing #ProofOfWork #SHA256 #Grover #Mining #PostQuantum #Decentralization