Lecture Notes on Quantum Circuit Synthesis and Complexity
Introduction
Correction on Rotation Phase
In the previous lecture, it was incorrectly stated that rotations around the Y-axis do not change the phase of a quantum state. Although the entries of the Y-rotation matrix are purely real, so no phase appears explicitly in the matrix, applying a Y-axis rotation to a general quantum state does induce a phase change. This is in contrast to rotations around the Z-axis, for which the Bloch sphere representation is fully faithful and accurately reflects phase changes.
The misconception arose from observing that the Y-rotation matrix appears to preserve the phase. This observation is misleading because it does not account for the global phase factor that can be introduced during such transformations. Specifically, Bloch vectors lying in the XZ plane do remain in that plane under a Y-axis rotation, so their projection onto the XY plane (which defines the phase) might seem unchanged, but this is not true for general quantum states.
Limitations of Bloch Sphere Representation
The Bloch sphere provides a valuable visualization tool for single-qubit states and rotations. However, it is essential to recognize that the Bloch sphere representation is not entirely faithful for all types of rotations, particularly when considering phase changes. While it accurately depicts rotations around the Z-axis, it can be incomplete for rotations around other axes like the X or Y axis.
For rotations about axes other than the Z-axis, the Bloch sphere visualization may not capture a global phase factor that emerges during the transformation. The Bloch sphere therefore offers an intuitive geometric picture of state evolution, but it does not represent every aspect of a quantum rotation, in particular the global phase. For precise calculations and a complete understanding of quantum circuit behavior, it is necessary to work directly with unitary matrices and state vectors rather than relying solely on the Bloch sphere for phase analysis.
Two-Level Unitary Operators
Intuitive Meaning and Definition
A two-level unitary operator acts non-trivially on only two computational basis states within a Hilbert space, while behaving as the identity operator on the subspace spanned by the remaining basis states. In an \(N\)-dimensional Hilbert space, a two-level unitary operator \(B\) can be represented as an \(N \times N\) matrix. Most of its entries correspond to the identity matrix, except for a \(2 \times 2\) unitary submatrix that operates on a chosen pair of rows and columns.
Definition 1 (Two-Level Unitary Operator). A two-level unitary operator is a unitary operator that acts non-trivially only on a two-dimensional subspace of the Hilbert space.
For an \(n\)-qubit system, which operates in a \(2^n\)-dimensional Hilbert space, a two-level unitary operator \(B\) targeting specific basis states can be visualized in block matrix form. If the operator acts on two basis states with adjacent indices \(i\) and \(j = i+1\), the matrix \(B\) takes the block-diagonal form: \[B = \begin{pmatrix} I_{top} & 0 & 0 & 0 \\ 0 & U_{ij} & 0 & 0 \\ 0 & 0 & I_{middle} & 0 \\ 0 & 0 & 0 & I_{bottom}\end{pmatrix}\] where \(U_{ij}\) is a \(2 \times 2\) unitary matrix acting in the subspace spanned by the \(i\)-th and \(j\)-th basis states, and \(I_{top}\), \(I_{middle}\), and \(I_{bottom}\) are identity matrices of appropriate dimensions that ensure \(B\) acts as the identity on the rest of the Hilbert space. (For non-adjacent \(i\) and \(j\), the four entries of \(U_{ij}\) instead occupy positions \((i,i)\), \((i,j)\), \((j,i)\), and \((j,j)\), with ones on the remaining diagonal entries.) More simply, if we reorder the basis so that the two targeted states are the first two, we can represent \(B\) as: \[B = \begin{pmatrix} U & 0 \\ 0 & I\end{pmatrix}\] where \(U\) is a \(2 \times 2\) unitary matrix, and \(I\) is an identity matrix of size \((2^n-2) \times (2^n-2)\).
Intuitively, a two-level operator selectively modifies the amplitudes of only two basis states, effectively mixing or transforming them, while leaving all other basis states unaffected. For instance, in a three-qubit system, a two-level operator could be designed to act on \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\). It would transform superpositions of these two states but leave states such as \(\left| 001 \right\rangle\), \(\left| 010 \right\rangle\), and others unchanged. This concept extends the idea of single-qubit gates to act on any chosen pair of basis states within a multi-qubit system.
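To make the matrix picture concrete, the following minimal sketch (in Python with NumPy; the helper name `two_level_unitary` and the choice of indices are ours, not part of the lecture) embeds a \(2 \times 2\) unitary at a chosen pair of basis-state indices and checks that the result is unitary and leaves every other basis state untouched.

```python
import numpy as np

def two_level_unitary(U, i, j, dim):
    """Embed the 2x2 unitary U so that it acts only on basis states i and j
    of a dim-dimensional space, and as the identity everywhere else."""
    B = np.eye(dim, dtype=complex)
    B[i, i], B[i, j] = U[0, 0], U[0, 1]
    B[j, i], B[j, j] = U[1, 0], U[1, 1]
    return B

# Example: a Hadamard-like mixing of |000> (index 0) and |111> (index 7)
# in a three-qubit (8-dimensional) space.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
B = two_level_unitary(H, 0, 7, 8)

assert np.allclose(B.conj().T @ B, np.eye(8))   # B is unitary
e1 = np.zeros(8)
e1[1] = 1                                       # |001> is left unchanged
assert np.allclose(B @ e1, e1)
```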
Decomposition of Unitary Operators and Controlled Gates
Any arbitrary unitary operator can be decomposed into a sequence of two-level unitary operators. This decomposition is a fundamental result in quantum computation, demonstrating that complex quantum operations can be built from simpler, two-level operations. Furthermore, these two-level operators can be physically implemented using controlled quantum gates, particularly controlled-unitary gates. These gates apply a unitary transformation on a target qubit conditioned on the state of one or more control qubits.
Theorem 2 (Decomposition of Unitary Operators). Any unitary operator can be decomposed into a sequence of two-level unitary operators.
To realize a two-level operator that acts on basis states not directly accessible by standard controlled gates (e.g., basis states that are not computationally adjacent), a basis transformation is necessary. The general strategy involves:
Basis Transformation: Apply a sequence of operations to transform the target basis states, say \(\left| v_1 \right\rangle\) and \(\left| v_2 \right\rangle\), into basis states that are easily manipulated by controlled gates. Ideally, these transformed states become computationally adjacent, such as \(\left| 00\cdots 0 \right\rangle\) and \(\left| 00\cdots 1 \right\rangle\).
Apply Controlled Unitary: Implement the desired \(2 \times 2\) unitary operation \(U\) on these transformed basis states using a sequence of controlled gates. This step leverages the simplified basis to apply the core transformation.
Inverse Basis Transformation: Reverse the initial basis transformation by applying the inverse sequence of operations. This step returns the operator to act on the original basis states \(\left| v_1 \right\rangle\) and \(\left| v_2 \right\rangle\) in the initial basis.
Gray codes offer an efficient method for performing the basis transformations required in step 1 and reversing them in step 3.
Gray Codes for Efficient Basis Transformation
Gray codes are binary number sequences where consecutive numbers differ by only a single bit change. This property is invaluable for transitioning between computational basis states by flipping qubits one at a time. In the context of implementing two-level operators, especially those acting on non-adjacent basis states, Gray codes provide a systematic approach to achieve the necessary basis transformations using controlled gates.
Definition 3 (Gray Code). A Gray code is a binary number sequence where consecutive numbers differ by only a single bit change.
To implement a two-level operator targeting basis states \(\left| v_1 \right\rangle\) and \(\left| v_2 \right\rangle\), a Gray code sequence can be constructed to transform \(\left| v_1 \right\rangle\) into \(\left| v_2 \right\rangle\) through a series of single-bit flips. Each bit flip in the Gray code sequence corresponds to a quantum gate operation, typically a controlled-NOT (CNOT) gate or a similar controlled operation. This sequence of gates effectively maps the action of the desired two-level operator from the original basis states \(\left| v_1 \right\rangle, \left| v_2 \right\rangle\) to a more convenient pair, such as \(\left| 00\cdots 0 \right\rangle\) and \(\left| 00\cdots 1 \right\rangle\). On these transformed states, applying a controlled unitary gate becomes straightforward.
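As a small illustration (our own sketch; the helper `gray_path` is hypothetical and not part of the lecture), such a single-bit-flip path between two basis-state labels can be generated as follows:

```python
def gray_path(start: str, end: str):
    """Bit strings from `start` to `end`, with consecutive entries differing
    in exactly one position (flipping the rightmost differing bit first)."""
    assert len(start) == len(end)
    path, current = [start], list(start)
    for pos in range(len(start) - 1, -1, -1):
        if current[pos] != end[pos]:
            current[pos] = end[pos]
            path.append("".join(current))
    return path

print(gray_path("000", "111"))   # ['000', '001', '011', '111']
```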
Illustrative Example: Three-Qubit Operator on \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\)
Consider a three-qubit system where we aim to implement a two-level operator \(B\) that acts on the basis states \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\). We can use a Gray code path to transform \(\left| 000 \right\rangle\) to \(\left| 111 \right\rangle\) by flipping one bit at each step. A possible Gray code path is: \[\left| 000 \right\rangle \xrightarrow{\text{Flip bit 3}} \left| 001 \right\rangle \xrightarrow{\text{Flip bit 2}} \left| 011 \right\rangle \xrightarrow{\text{Flip bit 1}} \left| 111 \right\rangle\] Each step in this sequence, flipping a single bit, can be implemented using multi-controlled NOT gates. For instance, the transition \(\left| 000 \right\rangle \rightarrow \left| 001 \right\rangle\) can be achieved by a NOT gate on the third qubit, conditioned on the first two qubits being in the \(\left| 0 \right\rangle\) state.
Example 4 (Three-Qubit Operator using Gray Code). Implementing a two-level operator on \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\) using a Gray code path for basis transformation.
To implement the two-level operator \(B\) with a \(2 \times 2\) unitary matrix \(U\) acting on \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\), we can follow these steps based on the Gray code path:
Forward Transformation (Gray Code Path): Apply a sequence of multi-controlled NOT gates implementing the first steps of the Gray code path, \(000 \rightarrow 001 \rightarrow 011\). This carries \(\left| 000 \right\rangle\) to \(\left| 011 \right\rangle\), the state that differs from \(\left| 111 \right\rangle\) in a single bit, while leaving \(\left| 111 \right\rangle\) untouched; the intermediate swaps also permute a few other basis states, but these are restored in step 3.
Apply Controlled Unitary Gate: Apply a controlled-\(U\) gate acting on the qubit in which \(\left| 011 \right\rangle\) and \(\left| 111 \right\rangle\) differ (here the first qubit), controlled on the remaining qubits being in the values these two states share (here \(\left| 1 \right\rangle\) on qubits 2 and 3). Within the two-dimensional subspace spanned by \(\left| 011 \right\rangle\) and \(\left| 111 \right\rangle\), this implements exactly the desired \(2 \times 2\) unitary \(U\).
Reverse Transformation (Inverse Gray Code Path): Apply the inverse of the forward sequence, i.e., the same multi-controlled NOT gates in reverse order, to undo the initial basis transformation. This step ensures that the overall operation acts on the original basis states \(\left| 000 \right\rangle\) and \(\left| 111 \right\rangle\) as intended.
For the Gray code path \(000 \rightarrow 001 \rightarrow 011 \rightarrow 111\), a symbolic representation of the circuit structure is:
Qubit 1: --- $C_1$ --- $C_2$ --- $C_3$ --- ... --- $C_3^{-1}$ --- $C_2^{-1}$ --- $C_1^{-1}$ ---
Qubit 2: --- $C_1$ --- $C_2$ --- $C_3$ --- ... --- $C_3^{-1}$ --- $C_2^{-1}$ --- $C_1^{-1}$ ---
Qubit 3: --- $T_1$ --- $T_2$ --- $T_3$ --- $U$ Gate --- $T_3^{-1}$ --- $T_2^{-1}$ --- $T_1^{-1}$ ---
Here, \(C_i\) and \(T_i\) represent control and target operations for the controlled-NOT gates implementing the \(i\)-th step of the Gray code path, and \(C_i^{-1}\), \(T_i^{-1}\) are their inverses. The \(U\) gate is applied conditionally at the end of the forward Gray code transformation. The precise control conditions and target qubits for each \(C_i\), \(T_i\), and the \(U\) gate depend on the specific Gray code path chosen and require detailed circuit design, as elaborated in specialized texts on quantum circuit synthesis.
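As a numerical cross-check of this construction (a sketch under our own conventions; the helpers `basis_swap` and `embed` are not from the lecture), one can realize the two forward swaps \(000 \leftrightarrow 001\) and \(001 \leftrightarrow 011\) as permutation matrices, apply the controlled-\(U\) on the pair \(\{\left| 011 \right\rangle, \left| 111 \right\rangle\}\), undo the swaps, and verify that the result is exactly the two-level operator with \(U\) acting on \(\{\left| 000 \right\rangle, \left| 111 \right\rangle\}\):

```python
import numpy as np

def basis_swap(a, b, dim=8):
    """Permutation matrix exchanging basis states |a> and |b>."""
    P = np.eye(dim)
    P[[a, b]] = P[[b, a]]
    return P

def embed(U, i, j, dim=8):
    """Two-level operator: U on basis states i and j, identity elsewhere."""
    B = np.eye(dim, dtype=complex)
    B[np.ix_([i, j], [i, j])] = U
    return B

U = np.array([[0, 1j], [1j, 0]])      # any 2x2 unitary works here

G1 = basis_swap(0, 1)                  # Gray-code step 000 <-> 001
G2 = basis_swap(1, 3)                  # Gray-code step 001 <-> 011
CU = embed(U, 3, 7)                    # controlled-U on {|011>, |111>}

# forward transform, controlled-U, inverse transform
B = G1 @ G2 @ CU @ G2 @ G1

assert np.allclose(B, embed(U, 0, 7))  # equals U acting on {|000>, |111>}
```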
Relevance of Classical Gray Codes
Classical Gray codes were developed in classical computation to represent numerical sequences where transitions between consecutive numbers involve only a single bit flip in their binary representation. This is particularly advantageous in systems where simultaneous multi-bit flips are prone to errors or require complex synchronization mechanisms.
In standard binary encoding, incrementing a number can require flipping multiple bits. For instance, transitioning from 3 (binary 011) to 4 (binary 100) necessitates flipping all three bits. In contrast, Gray codes are designed to avoid such multi-bit transitions between successive values. A 3-bit Gray code example is: \[\begin{array}{ccc}\text{Decimal} & \text{Binary} & \text{Gray Code} \\ 0 & 000 & 000 \\ 1 & 001 & 001 \\ 2 & 010 & 011 \\ 3 & 011 & 010 \\ 4 & 100 & 110 \\ 5 & 101 & 111 \\ 6 & 110 & 101 \\ 7 & 111 & 100 \end{array}\] In this Gray code sequence, only one bit changes between any two consecutive code words. This property is beneficial in applications like rotary encoders, position sensors, and minimizing glitches in digital systems during transitions.
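The Gray code column of this table is the standard binary-reflected Gray code, which can be generated directly with the bit trick \(g = i \oplus (i \gg 1)\); a short sketch:

```python
def gray(i: int) -> int:
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

for i in range(8):
    print(f"{i}  {i:03b}  {gray(i):03b}")

# Consecutive Gray code words differ in exactly one bit:
assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1 for i in range(7))
```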
Remark 5 (Classical Gray Code Applications). Classical Gray codes are useful in classical computation for minimizing bit flips between consecutive numbers, reducing errors in digital systems.
In quantum computing, the principle of Gray codes is adapted to transform systematically between computational basis states through sequences of single-bit flips, implemented with controlled gates. This is essential for decomposing complex unitary operations into sequences of elementary gates that natively act on one or two qubits. By employing Gray code transformations, operations on arbitrary pairs of basis states can be realized through sequences of simpler controlled gates, facilitating the synthesis of complex quantum algorithms from a limited set of available quantum gates.
Commutation of Measurement and Control
Equivalence of Quantum Circuits with Measurement and Control
In quantum circuits, the sequence of operations is generally critical. However, a notable property arises when considering the interplay between measurement and controlled operations. Specifically, under certain conditions, the order of measurement and control can be interchanged without affecting the overall outcome of the circuit.
Consider two quantum circuits involving a controlled unitary gate (CU), where qubit 1 acts as the control and qubit 2 as the target for the unitary operation \(U\). In the first circuit, depicted in Figure 1, the controlled-U gate is applied first, followed by a measurement of the control qubit (qubit 1). In the second circuit, shown in Figure 2, the control qubit (qubit 1) is measured before the unitary operation is applied to qubit 2. In this second configuration, the outcome of the measurement on qubit 1 is used as a classical control: if the measurement outcome is a specific value (e.g., 1), the unitary \(U\) is applied to qubit 2; otherwise, \(U\) is not applied.
The question is whether these two circuit configurations are equivalent, meaning they produce the same overall probabilistic outcomes and output states for any given input state. The lecture posits that these circuits are indeed equivalent, illustrating a form of commutation between measurement and control in quantum circuits.
Exercise: Proving Circuit Equivalence
To rigorously demonstrate the equivalence of Circuit 1 (Figure 1) and Circuit 2 (Figure 2), a detailed analysis of the quantum operations performed by each circuit is required. This involves a step-by-step comparison of their actions on an arbitrary input state. The recommended approach is to:
Analyze Circuit 1 (Measure After Control):
Controlled-U Gate Operation: Express the controlled-U gate as a \(4 \times 4\) unitary matrix acting on the two-qubit state space. If \(U\) is a \(2 \times 2\) unitary matrix, the controlled-U gate matrix in the computational basis \(\{\left| 00 \right\rangle, \left| 01 \right\rangle, \left| 10 \right\rangle, \left| 11 \right\rangle\}\) is given by: \[CU = \begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & U_{11} & U_{12} \\ 0 & 0 & U_{21} & U_{22} \end{pmatrix}\] where \(I\) is the \(2 \times 2\) identity matrix.
Measurement Operation: Describe the measurement on qubit 1 in the computational basis \(\{\left| 0 \right\rangle, \left| 1 \right\rangle\}\). Measurement is represented by the projection operators \(P_0 = \left| 0 \right\rangle\left\langle 0 \right| \otimes I\) and \(P_1 = \left| 1 \right\rangle\left\langle 1 \right| \otimes I\) for outcomes 0 and 1 respectively, where \(I\) is the identity on qubit 2.
Probability and Post-Measurement State Calculation: For an arbitrary input state \(\left| \psi_{in} \right\rangle\), calculate the probability of obtaining measurement outcome 0 and 1, and the corresponding post-measurement states using the projection operators and the controlled-U gate operation.
Analyze Circuit 2 (Measure Before Control):
Initial Measurement: Describe the measurement on qubit 1 first. The measurement outcomes are probabilistic. Calculate the probabilities of measuring 0 and 1 on qubit 1 for an arbitrary input state \(\left| \psi_{in} \right\rangle\).
Classical Control: Define the classically controlled operation on qubit 2. If the measurement outcome on qubit 1 is 1, apply the unitary \(U\) to qubit 2. If the outcome is 0, do not apply \(U\) (or equivalently, apply the identity operation \(I\)).
Probability and Output State Calculation: For each measurement outcome (0 and 1) on qubit 1, determine the state of qubit 2 after the controlled operation (or identity). Calculate the overall probabilities of the final states and measurement outcomes, considering the probabilities of the initial measurements on qubit 1.
Comparison and Equivalence: Compare the probabilities of measurement outcomes and the resulting quantum states for both circuits for any arbitrary input state \(\left| \psi_{in} \right\rangle\). If these are identical for all possible input states, then the two circuits are equivalent.
This exercise is fundamental for understanding the principles of quantum control and measurement, and it highlights how classical control based on quantum measurement outcomes can be integrated into quantum circuits. Successfully completing this exercise will reinforce the understanding of quantum gate operations, measurement postulates, and the interplay between quantum and classical information processing in quantum circuits. The lecture notes indicate that all necessary concepts and tools required to solve this exercise have been previously covered.
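Before attempting the formal proof, the claimed equivalence can be checked numerically. The following sketch (our own code; qubit 1 is taken as the left tensor factor, consistent with the basis ordering used above) compares outcome probabilities and post-measurement states of the two circuits on a random input state:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random single-qubit unitary U (via QR of a random complex matrix).
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)

I2 = np.eye(2)
P0 = np.kron(np.diag([1, 0]), I2)      # project qubit 1 onto |0>
P1 = np.kron(np.diag([0, 1]), I2)      # project qubit 1 onto |1>
CU = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])

# Random normalized two-qubit input state.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Circuit 1: controlled-U, then measure qubit 1.
phi = CU @ psi
p_after = {k: np.linalg.norm(P @ phi) ** 2 for k, P in ((0, P0), (1, P1))}
states_after = {k: P @ phi / np.sqrt(p_after[k]) for k, P in ((0, P0), (1, P1))}

# Circuit 2: measure qubit 1, then apply U to qubit 2 only if the outcome is 1.
p_before = {k: np.linalg.norm(P @ psi) ** 2 for k, P in ((0, P0), (1, P1))}
states_before = {
    0: P0 @ psi / np.sqrt(p_before[0]),
    1: np.kron(I2, U) @ (P1 @ psi / np.sqrt(p_before[1])),
}

for k in (0, 1):
    assert np.isclose(p_after[k], p_before[k])
    assert np.allclose(states_after[k], states_before[k])
```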
Complexity of Quantum Circuit Simulation
Complexity Classes in Quantum Computation: BPP, BQP, and PSPACE
To evaluate the computational capabilities and limitations of quantum computers, it is crucial to understand the relevant complexity classes. This section introduces three fundamental classes: BPP, BQP, and PSPACE, which are essential for characterizing the power of both classical and quantum computation.
BPP (Bounded-Error Probabilistic Polynomial Time): This class encompasses decision problems solvable by classical probabilistic Turing machines in polynomial time, with a bounded probability of error. BPP represents problems that are efficiently solvable using randomized classical algorithms.
BQP (Bounded-Error Quantum Polynomial Time): BQP is the quantum counterpart to BPP. It includes decision problems solvable by quantum computers (specifically, quantum Turing machines or uniform families of quantum circuits) in polynomial time, again with a bounded probability of error. BQP characterizes problems efficiently solvable by quantum algorithms.
PSPACE (Polynomial Space): PSPACE is a classical complexity class containing decision problems solvable by deterministic Turing machines using a polynomial amount of memory space. PSPACE contains the polynomial-time classes P, BPP, and BQP (whether these containments are strict remains open), and may therefore include problems that demand far more time but remain manageable in terms of space.
Understanding the relationships between these complexity classes is vital for assessing the potential advantages and limitations of quantum computation compared to classical approaches.
Bounded-Error Probabilistic Polynomial Time (BPP) in Detail
BPP is formally defined as the class of decision problems for which there exists a probabilistic Turing machine that operates in polynomial time and provides a correct answer with a probability of at least \(\frac{2}{3}\) for all instances. The choice of \(\frac{2}{3}\) as the probability bound is arbitrary; any constant probability strictly greater than \(\frac{1}{2}\) would define the same complexity class. This bounded error probability signifies that while BPP algorithms are not guaranteed to be correct in every run, the probability of error is bounded and can be made arbitrarily small by repeating the algorithm and taking a majority vote of the outcomes.
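A quick numerical illustration of this majority-vote amplification (our own sketch, not part of the original notes): the probability that the majority of \(k\) independent runs, each correct with probability \(\frac{2}{3}\), fails to be correct decreases rapidly with \(k\).

```python
from math import comb

def majority_error(k: int, p: float = 2 / 3) -> float:
    """Probability that no strict majority of k independent runs
    (each correct with probability p) is correct; ties count as failure."""
    return sum(comb(k, s) * p**s * (1 - p) ** (k - s) for s in range(k // 2 + 1))

for k in (1, 11, 51, 101):
    print(k, majority_error(k))   # the error probability shrinks rapidly with k
```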
Definition 6 (BPP). BPP (Bounded-Error Probabilistic Polynomial Time) is the class of decision problems solvable by classical probabilistic Turing machines in polynomial time with bounded error.
Probabilistic Turing Machines and the Acceptance Condition for BPP
For defining BPP, we consider a specific model of a probabilistic Turing machine: a precise non-deterministic Turing machine with a bounded degree of non-determinism, specifically degree 2. "Precise" in this context means that for any given input, all computational paths of the machine have the same length, ensuring a well-defined computation time.
The computation of such a machine on a given input \(x\) can be visualized as a computation tree. Each node in the tree represents a state of the computation, and branches represent the non-deterministic choices (at most two at each step). The leaves of this tree are labeled with outcomes, typically "yes" (accept) or "no" (reject).
The acceptance condition for BPP differs from that of standard non-deterministic Turing machines (like those defining NP), where acceptance occurs if at least one computational path leads to acceptance. In BPP, acceptance is determined by a majority rule based on the outcomes of all computational paths:
Acceptance: An input \(x\) is accepted if at least \(\frac{2}{3}\) of all computational paths in the computation tree result in a "yes" outcome.
Rejection: An input \(x\) is rejected if at least \(\frac{2}{3}\) of all computational paths result in a "no" outcome.
This condition ensures that for every input, the machine is biased towards either acceptance or rejection with a significant margin, thus bounding the probability of error. Machines that produce roughly equal numbers of "yes" and "no" outcomes for some inputs do not conform to the requirements for defining BPP languages under this definition.
Key Properties of BPP
The complexity class BPP exhibits several important properties that help in understanding its place within the landscape of computational complexity:
Closure under Complementation: BPP is closed under complementation. If a language \(L\) is in BPP, then its complement \(L^c\) is also in BPP. This symmetry arises because simply reversing the acceptance and rejection outcomes of a BPP machine for \(L\) yields a BPP machine for \(L^c\).
Inclusion of P: The class P (Polynomial Time) is a subset of BPP (\(P \subseteq BPP\)). Any problem solvable in deterministic polynomial time can also be solved by a probabilistic Turing machine in polynomial time with zero error probability, thus trivially satisfying the bounded error condition of BPP.
Relationship to Polynomial Hierarchy: BPP is known to be contained within the second level of the polynomial hierarchy, specifically \(BPP \subseteq \Sigma_2 \cap \Pi_2\). This inclusion suggests that BPP problems, while probabilistic, are not arbitrarily complex in terms of quantifier alternations, placing them within a relatively low level of the polynomial hierarchy.
Open Question: BPP vs. NP: It remains an open question whether BPP is contained within NP (Non-deterministic Polynomial Time). While it is widely conjectured that \(BPP = P\), implying that randomization does not fundamentally enhance computational power beyond deterministic polynomial time, this conjecture is unproven.
Inclusion in PSPACE: BPP is included in PSPACE (\(BPP \subseteq PSPACE\)). Probabilistic polynomial-time computations can be simulated in polynomial space by systematically exploring all possible random choices and computing the probabilities, although this simulation may not be time-efficient.
Bounded-Error Quantum Polynomial Time (BQP)
BQP, or Bounded-Error Quantum Polynomial Time, is the quantum analogue of the classical complexity class BPP. It is defined as the class of decision problems solvable by a quantum computer in polynomial time with a bounded probability of error. A language \(L\) is in BQP if there exists a uniform family of polynomial-size quantum circuits \(\{Q_n\}\) such that for any input \(x\) of size \(n\):
If \(x \in L\), then measuring the first qubit of the output state \(Q_n\left| x \right\rangle\) (with any ancilla qubits initialized to \(\left| 0 \right\rangle\)) in the computational basis yields the outcome \(\left| 1 \right\rangle\) (interpreted as "yes") with probability \(P(\text{yes}) \geq \frac{2}{3}\).
If \(x \notin L\), then the same measurement yields the outcome \(\left| 1 \right\rangle\) (interpreted as "yes") with probability \(P(\text{yes}) \leq \frac{1}{3}\).
Similar to BPP, the probability threshold of \(\frac{2}{3}\) is arbitrary and can be replaced by any constant strictly greater than \(\frac{1}{2}\) without changing the class BQP.
Definition 7 (BQP). BQP (Bounded-Error Quantum Polynomial Time) is the class of decision problems solvable by quantum computers in polynomial time with bounded error.
Quantum Circuits and Probabilistic Nature of Output
Quantum circuits are constructed as sequences of quantum gates applied to qubits. For a given input \(x\), the qubits are initialized to the computational basis state \(\left| x \right\rangle\) (together with any ancilla qubits in \(\left| 0 \right\rangle\)), and the circuit transforms this initial state into a potentially complex superposition of states. Measurement, a fundamental aspect of quantum mechanics, is then performed, typically in the computational basis, to extract an outcome. For decision problems in BQP, the measurement often involves observing the state of the first qubit, with outcome \(\left| 1 \right\rangle\) representing "yes" and \(\left| 0 \right\rangle\) representing "no."
The inherent probabilistic nature of quantum measurement means that the output of a quantum computation is not deterministic. Instead, outcomes are probabilistic, governed by the quantum amplitudes of the final state after circuit execution. The definition of BQP ensures that for problems within this class, quantum algorithms can provide the correct answer ("yes" or "no") with a high probability in polynomial time, despite this inherent probabilistic nature.
Relationship Between BPP and BQP
The relationship between BPP and BQP is a central question in quantum complexity theory, exploring the potential quantum advantage:
BPP is a subset of BQP (\(BPP \subseteq BQP\)): Quantum computers are capable of simulating classical probabilistic computations. Therefore, any problem solvable in BPP is also solvable in BQP. Classical probabilistic circuits can be simulated by quantum circuits, for instance by implementing reversible classical logic with Toffoli and CNOT gates and generating random bits with Hadamard gates followed by measurement.
Open Question: Is BQP strictly larger than BPP (\(BQP \stackrel{?}{\supset} BPP\))? A major open question is whether quantum computers offer a computational advantage over classical probabilistic computers, i.e., whether BQP is strictly larger than BPP. It is widely believed that BQP is indeed more powerful, suggesting the existence of problems efficiently solvable by quantum computers that are intractable for classical probabilistic computers.
Integer Factorization as a Candidate for Separation: Integer factorization is a prominent example suggesting a potential separation between BQP and BPP. Shor’s algorithm provides a polynomial-time quantum algorithm for factoring large integers, a problem for which the best-known classical algorithms are super-polynomial in time complexity. This suggests that factorization is in BQP but is unlikely to be in BPP (or at least not in P), although this remains unproven for BPP.
Proof: BQP is Contained in PSPACE (\(BQP \subseteq PSPACE\))
A significant result in quantum complexity theory is that BQP is contained within PSPACE. This implies that any problem that can be solved by a quantum computer in polynomial time with bounded error can also be solved by a classical computer using polynomial space, albeit potentially with exponential time complexity. This result underscores that while quantum computers may offer exponential speedups for certain problems, they do not expand the set of problems solvable in terms of space complexity beyond what classical computers can achieve.
Theorem 8 (BQP \(\subseteq\) PSPACE). BQP is contained in PSPACE. Any problem solvable by a quantum computer in polynomial time with bounded error can also be solved by a classical computer using polynomial space.
To demonstrate that \(BQP \subseteq PSPACE\), we consider two approaches for simulating a quantum circuit classically: a matrix multiplication method (which is space-inefficient) and a Feynman path integral-inspired method (which is space-efficient).
Space-Inefficient Simulation via Matrix Multiplication
A straightforward, though computationally expensive in terms of space, method to simulate a quantum circuit is to explicitly compute the unitary matrix representing the entire circuit. For a quantum circuit composed of \(m\) gates \(U_1, U_2, \ldots, U_m\) acting on \(n\) qubits, the total unitary transformation \(U\) is given by the product of these gate matrices: \(U = U_m U_{m-1} \cdots U_1\).
The simulation process involves the following steps:
Construct the Circuit Matrix: Calculate the overall unitary matrix \(U = U_m U_{m-1} \cdots U_1\) by multiplying the matrices of the individual gates, with the first gate applied appearing rightmost in the product. Each gate, tensored with identities on the qubits it does not act on, is a \(2^n \times 2^n\) matrix for a circuit operating on \(n\) qubits.
Evolve the Initial State: Apply the computed unitary matrix \(U\) to the initial state, typically \(\left| 0^n \right\rangle\), to obtain the final quantum state \(\left| \psi \right\rangle = U\left| 0^n \right\rangle\). This step involves matrix-vector multiplication.
Calculate Measurement Probabilities: To determine the probability of a specific measurement outcome, such as measuring "yes" (e.g., the first qubit being in state \(\left| 1 \right\rangle\)), project the final state \(\left| \psi \right\rangle\) onto the subspace corresponding to the "yes" outcome. The probability is then the squared norm of this projected state.
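A compact sketch of this brute-force simulation (our own toy circuit and helper `lift`; any small circuit would serve) builds the full \(2^n \times 2^n\) matrix and reads off the probability of measuring \(\left| 1 \right\rangle\) on the first qubit:

```python
import numpy as np

def lift(gate, qubit, n):
    """Extend a single-qubit gate to the full n-qubit space
    (qubit 0 is the leftmost tensor factor)."""
    ops = [np.eye(2)] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

# Example circuit: U = U_3 U_2 U_1 with U_1 = H on qubit 0, U_2 = X on qubit 2,
# U_3 = H on qubit 1 (an arbitrary small circuit, purely for illustration).
gates = [lift(H, 0, n), lift(X, 2, n), lift(H, 1, n)]
U = np.eye(2**n)
for g in gates:                     # overall matrix U = U_m ... U_1
    U = g @ U

psi = U[:, 0]                       # final state U|0...0> is the first column
P_yes = sum(abs(a) ** 2 for i, a in enumerate(psi) if i >= 2 ** (n - 1))
print("P(first qubit = 1) =", P_yes)
```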
Complexity Analysis of Matrix Multiplication Simulation:
Space Complexity: Storing a \(2^n \times 2^n\) matrix \(U\) requires \(O(2^{2n})\) space, which is exponential in the number of qubits \(n\). This exponential space requirement is the primary limitation of this direct matrix multiplication approach.
Time Complexity: Matrix multiplication of \(2^n \times 2^n\) matrices and matrix-vector multiplication also incur exponential time complexity, approximately \(O(2^{3n})\) for matrix multiplication and \(O(2^{2n})\) for matrix-vector multiplication.
Due to the exponential space complexity, this direct matrix multiplication method does not demonstrate that BQP \(\subseteq\) PSPACE. It requires exponential space, exceeding the polynomial space bound of PSPACE.
Space-Efficient Simulation via Feynman Path Integral and Depth-First Search
A more space-efficient approach to simulate quantum circuits is inspired by the Feynman path integral formulation of quantum mechanics. This method conceptualizes quantum computation as a sum over all possible computational paths. When a sequence of quantum gates \(U_1, U_2, \ldots, U_m\) is applied to an initial state \(\left| x \right\rangle\), the evolution can be seen as exploring a tree of computational paths.
Algorithm 9 (Space-Efficient Quantum Circuit Simulation via DFS). Space-efficient simulation of quantum circuits using Depth-First Search (DFS) inspired by Feynman path integrals.
To calculate the amplitude of transitioning from an initial state \(\left| x \right\rangle\) to a final state \(\left| y \right\rangle\) after applying the circuit, we sum the quantum amplitudes of all possible paths from \(\left| x \right\rangle\) to \(\left| y \right\rangle\). This summation can be efficiently computed using a depth-first search (DFS) strategy.
Depth-First Search (DFS) Analogy for Quantum Circuit Simulation:
Initiate DFS: Start the search at the initial state \(\left| x \right\rangle\).
Apply Gates Sequentially: For each gate \(U_i\) in the circuit sequence, consider its action on the current state. Applying a gate may lead to a superposition of states. Explore each resulting state as a branch in the computation path.
Recursive Path Exploration: Recursively apply subsequent gates (\(U_{i+1}, U_{i+2}, \ldots, U_m\)) to each state reached in the previous step, effectively traversing down a path in the computation tree.
Amplitude Accumulation at Final States: When a complete path (corresponding to applying all gates \(U_1, \ldots, U_m\)) is traversed, and a final basis state \(\left| y \right\rangle\) is reached, compute the product of the matrix elements (amplitudes) of all gates along this specific path. This product represents the amplitude of this particular computational path.
Summation of Path Amplitudes: Sum up the amplitudes of all paths that terminate at the desired final basis state \(\left| y \right\rangle\). This sum gives the total amplitude for transitioning from \(\left| x \right\rangle\) to \(\left| y \right\rangle\). The probability of this transition is the squared magnitude of this total amplitude.
Space Efficiency via DFS: Using DFS, the space required for simulation is significantly reduced because we only need to maintain the current path being explored in the computation tree. The depth of this tree is proportional to the number of gates in the circuit, which is polynomial in BQP. When backtracking in DFS, the memory used for a path is freed, and exploration proceeds along a new path. This avoids the need to store the entire computation tree or large matrices in memory simultaneously.
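A minimal recursive sketch of this path-sum evaluation (our own code; the gates are passed as dense matrices only for brevity, whereas in the actual space-bound argument each matrix element \(\langle b | U_i | a \rangle\) would be computed on demand from the gate's local action):

```python
import numpy as np

def path_amplitude(gates, x, y):
    """Compute <y| U_m ... U_1 |x> as a sum over computational-basis paths.
    `gates` = [U_1, ..., U_m].  Only single matrix elements are ever read.
    Space: O(m) recursion depth.  Time: exponential in the worst case."""
    dim = gates[0].shape[0]

    def explore(level, state):
        if level == len(gates):                 # full path traversed
            return 1.0 + 0j if state == y else 0j
        total = 0j
        for nxt in range(dim):                  # branch over the next basis state
            amp = gates[level][nxt, state]      # <nxt| U_{level+1} |state>
            if amp != 0:
                total += amp * explore(level + 1, nxt)
        return total

    return explore(0, x)

# Sanity check on a small random 2-qubit, 3-gate circuit.
rng = np.random.default_rng(0)
def random_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q
gates = [random_unitary(4) for _ in range(3)]
U = gates[2] @ gates[1] @ gates[0]
assert np.isclose(path_amplitude(gates, 0, 3), U[3, 0])
```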
Complexity Analysis of DFS Simulation:
Space Complexity: The space required is dominated by the depth of the recursion stack in DFS, which is proportional to the circuit depth (number of gates). For a polynomial-size circuit, the depth is polynomial in \(n\). Thus, the space complexity is polynomial in \(n\), placing the simulation within PSPACE.
Time Complexity: While space-efficient, the DFS approach typically incurs a higher time complexity. In the worst case, it may explore a number of paths exponential in the circuit size, leading to an exponential time complexity. However, the focus here is on space complexity, and DFS achieves polynomial space.
Connection to Savitch’s Theorem and Transitive Closure
The space-efficient simulation of quantum circuits using DFS shares conceptual similarities with Savitch's theorem and algorithms for computing transitive closure in classical complexity theory. Savitch's theorem demonstrates that the non-deterministic space class NSPACE\((f(n))\) is contained in the deterministic space class DSPACE\((f(n)^2)\). The DFS approach for quantum circuit simulation mirrors techniques used in proving Savitch's theorem, particularly in exploring reachability in graphs within limited space.
Remark 10 (Connection to Savitch's Theorem). The space-efficient simulation of quantum circuits using DFS is conceptually related to Savitch's Theorem, highlighting space-efficient computation principles.
In essence, simulating a quantum circuit can be viewed as a reachability problem in a state space graph, where we aim to compute transition probabilities between quantum states. The DFS-based approach provides a space-efficient method to explore this state space and compute the required probabilities, thereby demonstrating that BQP \(\subseteq\) PSPACE. This result highlights that while quantum computers may offer potential speed advantages, their space requirements for simulation do not exceed the polynomial space capabilities of classical computers.
Open Problem: CNOT Circuit Minimization
Problem Statement: Minimizing CNOT Gates in Quantum Circuits
The CNOT circuit minimization problem is a fundamental challenge in quantum circuit optimization. It asks: given a quantum circuit composed exclusively of CNOT gates, how can we find an equivalent circuit that performs the same unitary transformation but uses the minimum possible number of CNOT gates? This problem is critical for reducing the resource overhead in quantum computations, as CNOT gates are often costly to implement in physical quantum systems.
More formally, the problem can be stated as follows:
Given a CNOT circuit \(C\), find another CNOT circuit \(C'\) such that:
\(C'\) implements the same unitary transformation as \(C\).
The number of CNOT gates in \(C'\), denoted as \(|C'|\), is minimized among all CNOT circuits equivalent to \(C\).
Solving this minimization problem is essential for efficient quantum circuit synthesis and for understanding the fundamental limits of quantum computation with CNOT gates. This is a significant open problem in quantum circuit optimization.
Boolean Matrix Representation of CNOT Circuits
CNOT circuits can be effectively represented using Boolean matrices, which capture their action on the computational basis states. Since CNOT gates perform bit flips conditioned on the state of control qubits, their overall effect on basis states can be described within the framework of Boolean algebra. For a system of \(n\) qubits, a CNOT circuit can be represented by an invertible \(n \times n\) Boolean matrix over the field \(\mathbb{F}_2 = \{0, 1\}\).
In this representation, the composition of CNOT circuits corresponds to the multiplication of their respective Boolean matrices. Specifically, if circuit \(C_1\) is represented by matrix \(M_1\) and circuit \(C_2\) by \(M_2\), the circuit formed by applying \(C_2\) after \(C_1\) is represented by the Boolean matrix product \(M_2 \cdot M_1\). This algebraic structure allows us to analyze and manipulate CNOT circuits using linear algebra over \(\mathbb{F}_2\).
Reformulation via Row Operations on Boolean Matrices
The CNOT circuit minimization problem can be elegantly reformulated in terms of Boolean matrix transformations. The key insight is that applying a CNOT gate between two qubits corresponds to a specific elementary row operation on the Boolean matrix representing the circuit.
The allowed row operation is defined as follows: given a Boolean matrix, we can choose two distinct rows, say row \(i\) and row \(j\), and replace row \(i\) with the bitwise XOR sum of row \(i\) and row \(j\). This operation is performed element-wise modulo 2.
For a Boolean matrix \(M\) and rows \(i, j\) (\(i \neq j\)), the allowed row operation is: \[R_i \leftarrow R_i \oplus R_j\] where \(R_k\) denotes the \(k\)-th row of \(M\), and \(\oplus\) represents bitwise XOR (addition modulo 2). This operation corresponds to applying a CNOT gate in the Boolean matrix representation.
This row operation is reversible, as applying the same operation again (\(R_i \leftarrow R_i \oplus R_j\)) restores the original row \(R_i\). This reversibility corresponds to the invertibility of quantum gates and circuits.
With this row operation, the CNOT circuit minimization problem can be restated as a problem of transforming a given Boolean matrix into the identity matrix using the minimum number of allowed row operations.
Given a Boolean matrix \(M\) representing a CNOT circuit, find the shortest sequence of allowed row operations that transforms \(M\) into the identity matrix \(I\). The length of this shortest sequence is equal to the minimum number of CNOT gates required for an equivalent circuit. This reformulation provides an algebraic approach to CNOT circuit minimization.
The sequence of row operations, when read in reverse order and interpreted as CNOT gates, provides the minimized CNOT circuit \(C'\).
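The following sketch (our own code; Gaussian elimination over \(\mathbb{F}_2\) produces some equivalent CNOT sequence, not necessarily one of minimum length) illustrates the Boolean-matrix reformulation and the reverse-order reading of the row operations:

```python
import numpy as np

def cnot_matrix(control, target, n):
    """F_2 matrix of a single CNOT: x_target <- x_target XOR x_control."""
    M = np.eye(n, dtype=np.uint8)
    M[target, control] = 1
    return M

def circuit_matrix(cnots, n):
    """Boolean matrix of a CNOT circuit, gates listed in order of application."""
    M = np.eye(n, dtype=np.uint8)
    for c, t in cnots:
        M = (cnot_matrix(c, t, n) @ M) % 2
    return M

def synthesize(M):
    """Reduce M to the identity with row operations R_i <- R_i XOR R_j
    (Gaussian elimination over F_2) and return the recorded operations."""
    M = M.copy() % 2
    n = M.shape[0]
    ops = []
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])  # M is invertible
        if pivot != col:                    # bring a 1 onto the diagonal
            M[col] ^= M[pivot]
            ops.append((col, pivot))
        for r in range(n):                  # clear the rest of the column
            if r != col and M[r, col]:
                M[r] ^= M[col]
                ops.append((r, col))
    return ops                               # (i, j) means R_i <- R_i XOR R_j

# Example: a 3-qubit CNOT circuit and a re-synthesized equivalent circuit.
original = [(0, 1), (1, 2), (0, 2), (1, 0)]   # (control, target) pairs
M = circuit_matrix(original, 3)
ops = synthesize(M)
# A row op (i, j), read in reverse order, corresponds to CNOT(control=j, target=i).
resynth = [(j, i) for i, j in reversed(ops)]
assert np.array_equal(circuit_matrix(resynth, 3), M)
print(len(original), "gates originally,", len(resynth), "after re-synthesis")
```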
Complexity and Open Questions
Despite its seemingly simple formulation in terms of Boolean matrices and elementary row operations, the computational complexity of the CNOT circuit minimization problem remains an open question. Determining the minimum number of CNOT gates required to implement a given Boolean transformation is a challenging task.
Remark 11 (Complexity of CNOT Minimization). The computational complexity of CNOT circuit minimization is an open problem, with its precise complexity class yet to be determined.
The precise complexity class of this problem is not yet established. Possible complexity classifications include:
P (Polynomial Time): Is there a polynomial-time algorithm to find the minimum CNOT count?
NP-complete: Is the problem computationally hard, similar to other NP-complete problems?
PSPACE-complete: Is the problem among the hardest problems solvable in polynomial space, and thus potentially harder than NP-complete problems?
Other Complexity Classes: Could the problem belong to a different complexity class altogether?
The lecture notes suggest that classifying the complexity of CNOT circuit minimization is considered an "easy open problem" within the field. This implies that while unsolved, the problem is potentially tractable using existing tools and techniques from computational complexity theory. Resolving this open problem is important for advancing quantum circuit optimization techniques and for gaining a deeper theoretical understanding of the power and limitations of CNOT-based quantum computation. Further research into efficient algorithms and complexity analysis for CNOT circuit minimization is an active area in quantum information theory.
Conclusion
In this lecture, we rectified a prior inaccuracy regarding phase changes during rotations, emphasizing the Bloch sphere’s limitations in fully representing quantum state transformations, particularly concerning global phase factors. We then thoroughly examined two-level unitary operators, detailing their decomposition via controlled gates and the strategic use of Gray codes for efficient basis transformations to facilitate their implementation.
We explored the significant concept of commutation between measurement and control in quantum circuits, proposing the formal proof of equivalence between circuits with measurement before and after a controlled operation as an exercise.
A substantial portion of the lecture was dedicated to quantum circuit simulation complexity. We introduced and rigorously defined the complexity classes BPP, BQP, and PSPACE, elucidating their interrelations and implications within quantum computation. We demonstrated that while quantum computers offer potential computational speedups, the inclusion of BQP within PSPACE implies that quantum computation does not exceed classical computation in terms of space complexity. This was shown through both a space-inefficient matrix multiplication approach and a space-efficient Feynman path integral-inspired simulation using depth-first search.
Finally, we introduced the open problem of CNOT circuit minimization, framing it as a challenge in Boolean matrix manipulation and underscoring its importance for quantum circuit optimization and resource reduction. The complexity classification of this problem remains an active area of research.
Key Takeaways:
The Bloch sphere representation, while intuitive, has limitations, particularly in representing global phase factors for rotations beyond the Z-axis.
Two-level unitary operators are fundamental components for constructing general unitary operations and can be effectively implemented using controlled gates and Gray code-based basis transformations.
Measurement and control operations can commute in specific quantum circuit configurations, offering opportunities for circuit simplification and optimization.
Quantum circuits of polynomial size can be simulated in polynomial space on a classical computer, formally establishing that BQP is contained within PSPACE.
Minimizing CNOT gates in quantum circuits is a significant open problem with implications for quantum hardware efficiency and complexity theory.
Follow-up Questions and Next Lecture Topics:
Exercise: Provide a formal proof for the equivalence of the quantum circuits depicted in Figures 1 and 2, demonstrating the commutation of measurement and control.
Analyze and discuss the operational effect of the controlled-Z gate on a two-qubit system.
The subsequent lecture will delve deeper into quantum teleportation, exploring its protocol and implications in detail.
Future lectures will continue to address open problems in quantum circuit optimization, synthesis, and complexity, building upon the concepts introduced in this chapter.
This lecture concludes our exploration of Chapter 4, providing a foundation for more advanced topics in quantum information processing and algorithms to be discussed in the lectures to come.