At a high level, quantum sampling problems ask:
Can a quantum computer efficiently sample from a probability distribution that is hard (or impossible) for a classical computer to sample from?
These problems are not about computing an exact answer, but instead about producing outputs (samples) drawn from complex probability distributions that arise from quantum processes.
2. Difference Between Sampling and Decision Problems
To understand where quantum sampling fits in computational complexity, let’s distinguish two types of problems:
| Type | Description | Example |
|---|---|---|
| Decision problem | Output is “yes” or “no” | Is this number prime? |
| Sampling problem | Output is a random sample from a distribution | Generate a random bit string distributed like a quantum circuit’s output |
Quantum sampling problems belong to the second category. There is no single “correct” output; what matters is how closely the device’s output distribution matches the target quantum distribution.
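As a toy illustration (a minimal NumPy sketch, not a real device), here is what “solving” a sampling problem looks like for a two-qubit Bell-state circuit: the answer is not a single value but a stream of bitstrings drawn from the Born-rule distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit Bell circuit: H on qubit 0, then CNOT(0 -> 1).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                # start in |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state  # final state (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2                    # Born-rule output distribution
samples = rng.choice(4, size=1000, p=probs)   # the "answer": samples, not a value
print(np.bincount(samples, minlength=4))      # only outcomes 00 and 11 appear
```

For two qubits this is trivial to simulate; the interesting regime is when the number of qubits grows and the classical cost of reproducing `probs` explodes.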
3. Why Do Quantum Sampling Problems Matter?
Quantum sampling problems have emerged as a key battleground for demonstrating “quantum supremacy” — the idea that a quantum computer can do something fundamentally faster or better than any classical computer.
They are crucial in:
- Quantum supremacy demonstrations (e.g., Google’s Sycamore)
- Understanding quantum computational advantage
- Exploring quantum complexity classes like SampBQP
4. Examples of Quantum Sampling Problems
a) BosonSampling
Introduced by Aaronson and Arkhipov (2011):
- A special quantum device sends photons through a linear optical network.
- The output is a sample from the distribution over photon detection patterns, whose probabilities are given by permanents of submatrices of the network’s unitary.
- Conjectured to be hard to simulate classically.
- Not universal for quantum computing, but believed to be hard to emulate.
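The hardness conjecture rests on the fact that each output probability is proportional to |Perm(A)|² for a submatrix A of the interferometer unitary, and computing the permanent is #P-hard. A minimal sketch of Ryser’s formula (the best-known exact method, already exponential in n) shows why even one probability is expensive:

```python
import numpy as np
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2).

    The exponential cost of the best-known exact algorithms is the
    heart of why BosonSampling is conjectured hard to simulate."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):                  # column subsets, by size
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
print(permanent(np.array([[1.0, 2.0], [3.0, 4.0]])))  # 10.0
```

Unlike the determinant, the alternating signs that enable Gaussian elimination are absent here, which is why no polynomial-time algorithm is known.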
b) Random Quantum Circuit Sampling (RQCS)
- Implemented by Google in its 2019 quantum supremacy experiment.
- A quantum circuit is built from random gates, and the device samples output bitstrings.
- The distribution is complex, and believed to be hard for classical algorithms to approximate.
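A minimal dense-statevector sketch of the same pipeline on 4 qubits (illustrative only; real experiments use 50+ qubits, far beyond dense simulation): alternate layers of Haar-random single-qubit gates with entangling CZ gates, then sample bitstrings.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4  # qubits kept tiny so the 2^n statevector fits in memory

def haar_1q():
    """Haar-random single-qubit gate via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def apply_1q(state, U, q):
    """Apply a 2x2 gate U to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(U, psi, axes=1)
    return np.moveaxis(psi, 0, q).reshape(-1)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for _ in range(5):                                # 5 layers of random gates
    for q in range(n):
        state = apply_1q(state, haar_1q(), q)
    psi = state.reshape([2] * n)                  # CZ on neighbouring pairs
    for q in range(n - 1):
        idx = [slice(None)] * n
        idx[q], idx[q + 1] = 1, 1
        psi[tuple(idx)] *= -1
    state = psi.reshape(-1)

probs = np.abs(state) ** 2
probs /= probs.sum()                              # guard against float drift
samples = rng.choice(2 ** n, size=100, p=probs)   # the experiment's raw output
```

The memory and time of this simulation scale as 2^n, which is exactly the classical bottleneck the supremacy experiments exploit.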
c) IQP Sampling (Instantaneous Quantum Polynomial-time)
- A restricted class of quantum circuits (commuting gates, all applied at once).
- Believed to produce distributions that are hard to sample from classically.
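An IQP circuit is just Hadamards on every qubit, a diagonal layer of commuting Z/ZZ phase gates, and a final layer of Hadamards before measurement. A minimal NumPy sketch (dense simulation, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Random commuting diagonal layer: single-qubit Z rotations plus ZZ couplings.
z_angles = rng.uniform(0, 2 * np.pi, size=n)
zz_angles = rng.uniform(0, 2 * np.pi, size=(n, n))

phases = np.zeros(2 ** n)
for x in range(2 ** n):
    bits = [(x >> q) & 1 for q in range(n)]
    phases[x] = sum(z_angles[q] * bits[q] for q in range(n))
    phases[x] += sum(zz_angles[a, b] * bits[a] * bits[b]
                     for a in range(n) for b in range(a + 1, n))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = np.array([[1.0]])
for _ in range(n):                     # H on every qubit
    Hn = np.kron(Hn, H)

plus = Hn @ np.eye(2 ** n)[0]          # H^n |0...0>: uniform superposition
state = Hn @ (np.exp(1j * phases) * plus)
probs = np.abs(state) ** 2
samples = rng.choice(2 ** n, size=200, p=probs / probs.sum())
```

Because all the middle gates are diagonal, they commute and could in principle be applied simultaneously, which is where the name “instantaneous” comes from.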
5. The Complexity Theory Behind Sampling
Let’s explore some important complexity classes:
a) SampBQP
- Stands for Sampling Bounded-Error Quantum Polynomial time.
- The class of sampling problems solvable by quantum computers in polynomial time, with high probability of accuracy.
b) SampBPP
- Classical analogue: Sampling problems solvable by classical probabilistic polynomial-time algorithms.
The goal is to understand:
Is SampBQP ⊆ SampBPP? Or are they distinct?
If SampBQP ≠ SampBPP, this suggests quantum sampling problems are strictly more powerful, showing clear quantum advantage.
6. Hardness Results and Anti-Concentration
To claim that quantum sampling is hard, researchers rely on a mix of complexity-theoretic assumptions:
a) Average-case hardness
It’s not just worst-case complexity that matters. For supremacy claims, typical (average-case) instances of the sampling problem must also be hard to simulate classically.
b) Anti-concentration
The output distribution of the quantum device should not be too “flat” — it should spread its probability mass out, so some outcomes occur with noticeably higher probability.
This ensures:
- You can test whether the device is working correctly (e.g., by computing something like cross-entropy with the ideal distribution).
- You can’t easily simulate it with noise or uniformity assumptions.
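For Haar-random states, the output probabilities follow the Porter-Thomas (exponential) distribution, which anti-concentrates in exactly this sense. A quick NumPy check on a random 10-qubit state (a stand-in for a deep random circuit):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 2 ** 10  # dimension of a 10-qubit system

# Haar-random state = normalized complex Gaussian vector.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2

# Anti-concentration diagnostics:
collision = np.sum(p ** 2)       # ~2/N for Porter-Thomas vs exactly 1/N uniform
frac_heavy = np.mean(p > 1 / N)  # fraction of "heavy" outputs, ~1/e for Porter-Thomas
print(collision * N, frac_heavy)
```

The collision probability being only a constant factor above the uniform value 1/N is the formal statement that the mass is spread out, while the sizeable heavy-output fraction is what verification tests latch onto.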
c) Stockmeyer’s Theorem and Classical Simulation
Via Stockmeyer’s approximate counting theorem, an efficient classical sampler for these distributions would yield a BPP^NP algorithm for estimating individual output probabilities, quantities believed to be #P-hard to approximate. Under standard assumptions (such as the Polynomial Hierarchy not collapsing), even approximate classical sampling from quantum output distributions (e.g., BosonSampling) is therefore likely infeasible.
This forms the core hardness evidence:
If a classical algorithm could simulate these quantum samplers, it would imply unlikely complexity collapses (e.g., PH collapses to the third level).
7. Quantum Supremacy and Verification
Sampling problems have been used to demonstrate quantum supremacy in the lab.
Google’s 2019 Sycamore Claim:
- Random circuit with 53 qubits.
- Output distribution from 20 million samples.
- Claimed to take 200 seconds on the quantum device, versus an estimated 10,000 years for classical simulation on the best supercomputers of the time.
Though later challenged by improved classical methods (e.g., Zhu et al. 2021, Alibaba simulation), the principle remains:
Quantum devices can generate samples far faster than classical ones can simulate or verify.
Verification Challenge:
Because quantum outputs are probabilistic, how do we verify correctness?
Common methods:
- Cross-entropy benchmarking
- Heavy output generation tests
- Fidelity estimates using simulated small subsystems
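Linear cross-entropy benchmarking (linear XEB) scores the device’s samples against the ideal probabilities: F = N·⟨p_ideal(x)⟩ − 1, which is ≈1 for a faithful sampler and ≈0 for a uniform (fully depolarized) one. A toy sketch, using a Haar-random state as a stand-in for the ideal circuit distribution (an assumption for illustration, not Google’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2 ** 12

# Stand-in for the ideal output distribution of a deep random circuit:
# a Haar-random state has the same Porter-Thomas statistics.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
p_ideal = np.abs(psi) ** 2

def linear_xeb(samples):
    """Linear XEB fidelity: N * (mean ideal probability of the samples) - 1."""
    return N * p_ideal[samples].mean() - 1

good = rng.choice(N, size=20000, p=p_ideal)  # faithful sampler
noisy = rng.integers(0, N, size=20000)       # fully depolarized: uniform bits

print(linear_xeb(good), linear_xeb(noisy))
```

Note the asymmetry that makes this practical: scoring samples needs only the ideal probabilities of the observed bitstrings, which is feasible classically for moderate sizes even when sampling the full distribution is not.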
8. Open Problems and Current Research
Despite rapid advances, several open questions remain:
a) Robustness to Noise
Can noisy quantum devices still exhibit sampling hardness? This is essential for real-world supremacy claims.
b) Total Variation Distance
Can we rule out classical approximations even when only approximate sampling (within total variation distance) is allowed?
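Total variation distance is the standard closeness measure in these statements: a classical sampler counts as “approximate” if its output distribution is within ε of the quantum one in TV distance. For reference, a minimal implementation:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance: half the L1 distance between distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Uniform over 4 outcomes vs. all mass on one outcome:
print(tv_distance([0.25] * 4, [1.0, 0.0, 0.0, 0.0]))  # 0.75
```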
c) Classical Lower Bounds
Proving concrete classical lower bounds for specific circuits or configurations remains difficult.
d) Fine-Grained Complexity
There’s growing interest in connecting sampling hardness to fine-grained assumptions, like ETH or SETH.
9. Relation to Quantum Advantage and Cryptography
Quantum sampling problems also have implications in:
- Cryptographic primitives (e.g., post-quantum assumptions)
- Quantum-secure random number generation
- Quantum money schemes (using unclonable quantum states)
- Zero-knowledge proofs for quantum problems
They establish that quantum systems can generate structured randomness that is hard to reproduce or fake classically — a key idea for secure protocols.