Lecture Notes on Tensor Products, Spectral Representation, and Matrix Decompositions
Introduction
This lecture introduces fundamental concepts in linear algebra relevant to quantum mechanics and signal processing. We will first review tensor products for vectors and operators, emphasizing their non-commutative property and their role in describing entanglement. Next, we will discuss the spectral representation of normal operators and the process of matrix diagonalization to find eigenbases. We will then explore matrix decompositions, specifically polar decomposition and singular value decomposition (SVD). For SVD, we will highlight its key properties and applications in dimensionality reduction and signal processing. Finally, we will demonstrate the calculation of eigenvalues and eigenvectors using the Hadamard matrix as a concrete example.
Tensor Products and Entanglement
Review of Tensor Products
The tensor product is a fundamental operation for combining vector spaces and operators. For vectors \(U = \begin{pmatrix} U_1 \\ U_2 \\ \vdots \\ U_n \end{pmatrix}\) and \(V = \begin{pmatrix} V_1 \\ V_2 \\ \vdots \\ V_m \end{pmatrix}\), their tensor product \(U \otimes V\) is a vector formed by taking all possible products of components \(U_i V_j\). Using index notation, with the convention that the left tensor factor corresponds to slow-changing indices and the right factor to fast-changing indices, the tensor product vector is: \[U \otimes V = \begin{pmatrix}U_1 V_1 \\ U_1 V_2 \\ \vdots \\ U_1 V_m \\U_2 V_1 \\ U_2 V_2 \\ \vdots \\ U_2 V_m \\\vdots \\U_n V_1 \\ U_n V_2 \\ \vdots \\ U_n V_m\end{pmatrix}\] For instance, if \(U = \begin{pmatrix} U_1 \\ U_2 \end{pmatrix}\) and \(V = \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}\), the tensor product is: \[U \otimes V = \begin{pmatrix} U_1 V_1 \\ U_1 V_2 \\ U_2 V_1 \\ U_2 V_2 \end{pmatrix}\]
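As a quick numerical check, NumPy's `kron` function implements exactly this construction with the same slow/fast index convention; the example vectors below are arbitrary and purely illustrative.

```python
import numpy as np

# A minimal sketch: np.kron follows the same "left factor slow,
# right factor fast" index convention described above.
U = np.array([1, 2])        # (U_1, U_2)
V = np.array([3, 4, 5])     # (V_1, V_2, V_3)

print(np.kron(U, V))        # [ 3  4  5  6  8 10] = (U_1 V_1, U_1 V_2, U_1 V_3, U_2 V_1, ...)
```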
Tensor Product of Operators
For operators represented as matrices, the tensor product \(A \otimes B\) of matrices \(A\) and \(B\) is constructed by replacing each element \(A_{ij}\) of matrix \(A\) with the block \(A_{ij} B\). If \(A\) is an \(n \times n\) matrix and \(B\) is an \(m \times m\) matrix, then \(A \otimes B\) is an \(nm \times nm\) matrix: \[A \otimes B = \begin{pmatrix}A_{11} B & A_{12} B & \cdots & A_{1n} B \\A_{21} B & A_{22} B & \cdots & A_{2n} B \\\vdots & \vdots & \ddots & \vdots \\A_{n1} B & A_{n2} B & \cdots & A_{nn} B\end{pmatrix}\] A crucial property of the tensor product is its non-commutativity, meaning that in general \(A \otimes B \neq B \otimes A\).
Example 1 (Tensor Product of Pauli Matrices). Consider the Pauli matrices \(\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\) and \(\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\). Their tensor product \(\sigma_x \otimes\sigma_z\) is computed as: \[\sigma_x \otimes\sigma_z = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix}0 \times \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} & 1 \times \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\1 \times \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} & 0 \times \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\end{pmatrix} = \begin{pmatrix}0 & 0 & 1 & 0 \\0 & 0 & 0 & -1 \\1 & 0 & 0 & 0 \\0 & -1 & 0 & 0\end{pmatrix}\]
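The same function applies to matrices, which makes it easy to verify Example 1 and the non-commutativity of the tensor product; the following is a minimal sketch.

```python
import numpy as np

# Sketch: reproduce Example 1 with np.kron and check non-commutativity.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

xz = np.kron(sigma_x, sigma_z)   # sigma_x tensor sigma_z, as computed above
zx = np.kron(sigma_z, sigma_x)   # sigma_z tensor sigma_x

print(xz)
print(np.array_equal(xz, zx))    # False: A (x) B != B (x) A in general
```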
Entanglement
Superposition in Composite Systems
When considering composite systems described by the tensor product of vector spaces, such as \(U \otimes V\), we can construct states that are more complex than simple tensor products of individual states from \(U\) and \(V\). Specifically, linear combinations (superpositions) of tensor product states can lead to phenomena beyond the description of individual spaces in isolation.
Definition of Entanglement
A quantum state \(\left|{\psi}\right\rangle\) in a composite Hilbert space \(U \otimes V\) is said to be entangled if it cannot be expressed as a simple tensor product of states from \(U\) and \(V\), i.e., it cannot be written in the form \(\left|{\psi}\right\rangle = \left|{\phi}\right\rangle_U \otimes\left|{\chi}\right\rangle_V\) for any states \(\left|{\phi}\right\rangle_U \in U\) and \(\left|{\chi}\right\rangle_V \in V\).
Entanglement represents a form of correlation between subsystems that is fundamentally quantum mechanical and has no classical analogue. In entangled states, the properties of the individual subsystems are not independently defined, and measurement outcomes on one subsystem are correlated with those on the other in ways that no local classical model can reproduce, regardless of spatial separation. Entanglement is a crucial resource in quantum information processing and will be explored in greater detail in subsequent lectures.
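As a concrete illustration of the definition, a two-qubit state \(\left|{\psi}\right\rangle = \sum_{ij} c_{ij} \left|{i}\right\rangle \otimes \left|{j}\right\rangle\) is a product state exactly when its coefficient matrix \((c_{ij})\) has rank 1. The NumPy sketch below applies this test to a product state and to the Bell state \((\left|{00}\right\rangle + \left|{11}\right\rangle)/\sqrt{2}\); the example states and reshaping convention are illustrative, and the rank criterion is closely related to the SVD discussed later in these notes.

```python
import numpy as np

# Illustrative sketch: a two-qubit state with coefficient matrix C = (c_ij)
# is a product state exactly when C has rank 1.
product = np.kron([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)     # |0> (x) (|0> + |1>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)         # (|00> + |11>)/sqrt(2)

for name, psi in [("product state", product), ("Bell state", bell)]:
    C = psi.reshape(2, 2)                  # slow index -> row, fast index -> column
    print(name, np.linalg.matrix_rank(C))  # 1 for the product state, 2 for the Bell state
```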
Spectral Representation and Matrix Diagonalization
Spectral Representation of Normal Operators
A linear operator \(A\) on a finite-dimensional vector space is termed normal if it commutes with its adjoint, i.e., \(AA^\dagger = A^\dagger A\). Normal operators possess a crucial property: they admit a spectral representation. This representation expresses the operator as a weighted sum of projectors onto its eigenspaces.
Theorem 2 (Spectral Representation of Normal Operators). For a normal operator \(A\), the spectral representation is given by: \[A = \sum_{k} \lambda_k P_{V_k}\] where:
\(\lambda_k\) are the distinct eigenvalues of \(A\).
\(V_k\) is the eigenspace corresponding to the eigenvalue \(\lambda_k\).
\(P_{V_k}\) is the orthogonal projector onto the eigenspace \(V_k\).
The eigenspaces \(\{V_k\}\) are mutually orthogonal, and their direct sum spans the entire vector space \(V\), i.e., \(V = \bigoplus_k V_k\).
For specific types of normal operators, the eigenvalues have additional properties:
If \(A\) is self-adjoint (Hermitian), \(A = A^\dagger\), then all eigenvalues \(\lambda_k\) are real numbers.
If \(A\) is unitary, \(A A^\dagger = A^\dagger A = I\), then all eigenvalues \(\lambda_k\) have an absolute value of 1, i.e., \(|\lambda_k| = 1\).
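The spectral representation above can be verified numerically. The following is a minimal NumPy sketch for a small Hermitian (hence normal) matrix; the example matrix is arbitrary, and since its eigenvalues are non-degenerate each projector \(P_{V_k}\) is the rank-one outer product \(\left|{V_k}\middle\rangle\middle\langle{V_k}\right|\).

```python
import numpy as np

# Minimal sketch: spectral representation of a Hermitian (hence normal) matrix.
A = np.array([[2.0, 1.0], [1.0, 2.0]])          # eigenvalues 1 and 3

eigvals, eigvecs = np.linalg.eigh(A)            # columns of eigvecs: orthonormal eigenvectors

# Rebuild A as a sum of eigenvalues times rank-one projectors |V_k><V_k|.
A_rebuilt = sum(lam * np.outer(v, v.conj()) for lam, v in zip(eigvals, eigvecs.T))
print(np.allclose(A, A_rebuilt))                # True: A = sum_k lambda_k P_k
```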
Diagonalization of a Matrix
Diagonalization is the process of finding a basis in which the matrix representation of a linear operator becomes diagonal. For a normal operator, such a basis always exists and is composed of its eigenvectors.
Change of Basis to Diagonal Form
Consider an operator \(A\) represented by a matrix in an initial orthonormal basis \(\{U_i\}\). The matrix elements in this basis are \(A_{ij} = \left\langle{U_i}\middle|{A U_j}\right\rangle\), and the operator can be expressed as: \[A = \sum_{i,j} A_{ij} \left|{U_i}\middle\rangle\middle\langle{U_j}\right|\] Our goal is to find a new orthonormal basis \(\{V_k\}\), consisting of eigenvectors of \(A\), such that in this basis, the matrix representation of \(A\) is diagonal. In the eigenbasis \(\{V_k\}\), the spectral representation of \(A\) is: \[A = \sum_{k} \lambda_k \left|{V_k}\middle\rangle\middle\langle{V_k}\right|\] where \(\lambda_k\) are the eigenvalues corresponding to the eigenvectors \(\{V_k\}\). In matrix form, this diagonal representation \(\Lambda\) has elements \(\Lambda_{kl} = \left\langle{V_k}\right| A \left|{V_l}\right\rangle = \lambda_k \delta_{kl}\).
Finding the Change of Basis Matrix
To transition from the matrix representation in the basis \(\{U_i\}\) to the diagonal representation in the basis \(\{V_k\}\), we use a change of basis matrix \(R\). Let \(R_{jk} = \left\langle{U_j}\middle|{V_k}\right\rangle\) be the elements of the matrix \(R\) that transforms coordinates from the \(\{V_k\}\) basis to the \(\{U_j\}\) basis. Since both bases are orthonormal, \(R\) is a unitary matrix, and its inverse is \(R^\dagger\).
Theorem 3 (Diagonalization Theorem). The matrix representation of \(A\) in the basis \(\{V_k\}\), denoted by \(\Lambda\), is related to its representation in the basis \(\{U_i\}\), denoted by \(A\), through the transformation: \[\Lambda = R^\dagger A R\] or equivalently, \[A = R \Lambda R^\dagger\] where \(\Lambda\) is a diagonal matrix with the eigenvalues of \(A\) on its diagonal. The columns of the unitary matrix \(R\) are the coordinates of the eigenvectors \(\{V_k\}\) in the original basis \(\{U_i\}\).
The diagonalization process thus involves finding the unitary matrix \(R\) and the diagonal matrix \(\Lambda\) that satisfy \(A = R \Lambda R^\dagger\). This decomposition allows us to express the operator \(A\) in its eigenbasis, revealing its fundamental spectral properties.
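The following NumPy sketch carries out this diagonalization numerically, using \(\sigma_x\) as an example (any Hermitian matrix would do): `np.linalg.eigh` returns the eigenvalues together with a unitary matrix whose columns are the eigenvectors expressed in the original basis, playing the role of \(R\).

```python
import numpy as np

# Sketch: numerical diagonalization A = R Lambda R^dagger, using sigma_x as the example.
A = np.array([[0.0, 1.0], [1.0, 0.0]])

eigvals, R = np.linalg.eigh(A)                   # columns of R: eigenvectors in the original basis
Lam = np.diag(eigvals)

print(np.allclose(R.conj().T @ A @ R, Lam))      # Lambda = R^dagger A R
print(np.allclose(R @ Lam @ R.conj().T, A))      # A = R Lambda R^dagger
print(np.allclose(R.conj().T @ R, np.eye(2)))    # R is unitary
```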
Matrix Decompositions
Matrix decompositions are essential tools in linear algebra, allowing us to express matrices in terms of products of matrices with specific properties. Diagonalization, discussed in the previous section, is one such decomposition. Here, we introduce polar decomposition and singular value decomposition (SVD), which are particularly useful in various applications.
Polar Decomposition
Positive Operators
An operator \(J\) on a vector space \(V\) is called positive (or positive semi-definite) if for all vectors \(\left|{\psi}\right\rangle \in V\), \[\left\langle{\psi}\middle|{J\psi}\right\rangle \geq 0\] If \(\left\langle{\psi}\middle|{J\psi}\right\rangle > 0\) for all non-zero vectors \(\left|{\psi}\right\rangle \in V\), then \(J\) is positive definite. Positive operators have non-negative real eigenvalues.
Polar Decomposition Theorem
Theorem 4 (Polar Decomposition Theorem). Any linear operator \(A\) on a finite-dimensional complex Hilbert space \(V\) can be decomposed as a product \[A = UJ\] where \(U\) is a unitary operator and \(J = \sqrt{A^\dagger A}\) is a positive operator. The positive factor \(J\) is always unique; the unitary factor \(U\) is unique precisely when \(A\) is invertible. This is known as the left polar decomposition. There also exists a right polar decomposition \[A = K U'\] where \(U'\) is a unitary operator and \(K = \sqrt{AA^\dagger}\) is a positive operator.
Proof. Sketch of Derivation. We focus on the left polar decomposition \(A = UJ\) with \(J = \sqrt{A^\dagger A}\). First, note that \(A^\dagger A\) is always a positive operator because for any \(\left|{\psi}\right\rangle\), \(\left\langle{\psi}\middle|{A^\dagger A \psi}\right\rangle = \left\langle{A\psi}\middle|{A\psi}\right\rangle = \|A\psi\|^2 \geq 0\). Therefore, the square root \(J = \sqrt{A^\dagger A}\) is well-defined and is also a positive operator.
To find the unitary operator \(U\), consider the action of \(J\) and \(A\) on a vector \(\left|{\psi}\right\rangle\). We want to define \(U\) such that \(A\left|{\psi}\right\rangle = UJ\left|{\psi}\right\rangle\). If \(J\) is invertible, we can formally write \(U = A J^{-1}\). To show that \(U\) is unitary, we consider \(U^\dagger U = (AJ^{-1})^\dagger (AJ^{-1}) = (J^{-1})^\dagger A^\dagger A J^{-1} = J^{-1} A^\dagger A J^{-1} = J^{-1} J^2 J^{-1} = I\). A more rigorous approach, valid even when \(J\) is not invertible, involves defining \(U\) on the range of \(J\). For \(\left|{\phi}\right\rangle = J\left|{\psi}\right\rangle\) in the range of \(J\), define \(U\left|{\phi}\right\rangle = A\left|{\psi}\right\rangle\). One can show that this definition of \(U\) is consistent and can be extended to a unitary operator on the entire space. The uniqueness of \(U\) when \(A\) is invertible follows from \(U = A J^{-1}\), as \(J = \sqrt{A^\dagger A}\) is uniquely determined.
The right polar decomposition \(A = KU'\) can be derived similarly by setting \(K = \sqrt{AA^\dagger}\) and \(U' = K^{-1} A\) (or extending this definition to the non-invertible case). ◻
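In practice, the polar decomposition is usually computed from the SVD introduced in the next subsection: if \(A = W S V^\dagger\), then \(U = W V^\dagger\) and \(J = V S V^\dagger = \sqrt{A^\dagger A}\). The sketch below illustrates this with NumPy on an arbitrary example matrix; SciPy's `scipy.linalg.polar` offers the same computation as a library routine.

```python
import numpy as np

# Sketch: left polar decomposition A = U J computed from the SVD
# (if A = W S V^dagger, then U = W V^dagger and J = V S V^dagger = sqrt(A^dagger A)).
A = np.array([[1.0, 2.0], [0.0, 1.0]])           # arbitrary example matrix

W, s, Vh = np.linalg.svd(A)
U = W @ Vh
J = Vh.conj().T @ np.diag(s) @ Vh

print(np.allclose(U @ J, A))                     # A = U J
print(np.allclose(U.conj().T @ U, np.eye(2)))    # U is unitary
print(np.all(np.linalg.eigvalsh(J) >= -1e-12))   # J is positive (non-negative eigenvalues)
```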
Singular Value Decomposition (SVD)
Singular Value Decomposition Theorem
Theorem 5 (Singular Value Decomposition (SVD) Theorem). For any \(M \times N\) matrix \(A\), there exists a decomposition of the form \[A = U S V^\dagger\] where:
\(U\) is an \(M \times M\) unitary matrix.
\(V\) is an \(N \times N\) unitary matrix.
\(S\) is an \(M \times N\) rectangular diagonal matrix with non-negative real entries on the diagonal, called singular values.
For a square \(N \times N\) matrix, \(U, S, V\) are all \(N \times N\) matrices, and \(S\) is a diagonal matrix. The diagonal entries of \(S\), denoted \(\sigma_1, \sigma_2, \ldots, \sigma_p\) where \(p = \min(M, N)\), are the singular values of \(A\). Conventionally, they are ordered in descending order: \(\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0\), where \(r\) is the rank of \(A\).
The non-zero singular values \(\{\sigma_i\}\) are the square roots of the non-zero eigenvalues of both \(A^\dagger A\) (an \(N \times N\) matrix) and \(AA^\dagger\) (an \(M \times M\) matrix); the remaining singular values are zero. The columns of \(V\) are orthonormal eigenvectors of \(A^\dagger A\), known as right singular vectors, and the columns of \(U\) are orthonormal eigenvectors of \(AA^\dagger\), known as left singular vectors.
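As a quick numerical illustration (the \(2 \times 3\) example matrix is arbitrary), the following NumPy sketch confirms that the squared singular values coincide with the eigenvalues of \(AA^\dagger\).

```python
import numpy as np

# Sketch: SVD of a 2 x 3 example matrix and its link to the eigenvalues of A A^dagger.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

U, s, Vh = np.linalg.svd(A)
print(s**2)                                                # squared singular values ...
print(np.sort(np.linalg.eigvalsh(A @ A.conj().T))[::-1])   # ... equal the eigenvalues of A A^dagger
```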
Applications of Singular Value Decomposition
SVD is a versatile matrix decomposition with wide-ranging applications across various fields.
Dimensionality Reduction and Low-Rank Approximation
SVD provides the optimal low-rank approximation of a matrix in the Frobenius norm. Given the SVD of \(A = USV^\dagger\), the best rank-\(k\) approximation \(A_k\) of \(A\) (for \(k \leq r = \text{rank}(A)\)) is obtained by setting all but the top \(k\) singular values to zero in \(S\), resulting in \(S_k\). Then, \(A_k = U S_k V^\dagger\). This approximation minimizes the Frobenius norm of the difference \(\|A - A_k\|_F = \sqrt{\sum_{i,j} |A_{ij} - (A_k)_{ij}|^2}\). This property is extensively used for dimensionality reduction in data analysis, feature extraction, and machine learning.
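The following NumPy sketch illustrates the truncation procedure on a synthetic random matrix; the Frobenius error of the rank-\(k\) approximation equals \(\sqrt{\sum_{i>k} \sigma_i^2}\), which the printout confirms.

```python
import numpy as np

# Sketch: best rank-k approximation by truncating the SVD of a synthetic matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

U, s, Vh = np.linalg.svd(A, full_matrices=False)

def rank_k_approx(k):
    """Keep only the k largest singular values and their singular vectors."""
    return U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

for k in (1, 3, 6):
    err = np.linalg.norm(A - rank_k_approx(k), "fro")
    print(k, err, np.sqrt(np.sum(s[k:]**2)))   # the two numbers agree: ||A - A_k||_F = sqrt(sum_{i>k} s_i^2)
```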
Signal Processing and Noise Reduction
In signal processing, SVD can be employed to separate signal from noise. If a data matrix represents a noisy signal, SVD can decompose it into components associated with different singular values. Typically, signal components are captured by the larger singular values and their corresponding singular vectors, while noise is distributed across smaller singular values. By truncating the SVD and reconstructing the matrix using only the dominant singular values, we can effectively reduce noise and enhance the underlying signal. This technique is valuable in applications such as image denoising, audio processing, and time series analysis.
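A minimal synthetic illustration of this idea, assuming the "signal" is exactly low rank and the noise is small and unstructured, is sketched below; real data rarely satisfy these assumptions so cleanly, but the mechanism is the same.

```python
import numpy as np

# Sketch of SVD denoising on synthetic data: an exactly rank-2 "signal" plus small noise.
rng = np.random.default_rng(1)
signal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))   # rank-2 signal
noisy = signal + 0.05 * rng.standard_normal(signal.shape)

U, s, Vh = np.linalg.svd(noisy, full_matrices=False)
denoised = U[:, :2] @ np.diag(s[:2]) @ Vh[:2, :]     # keep the two dominant components

print(np.linalg.norm(noisy - signal, "fro"))         # error before truncation
print(np.linalg.norm(denoised - signal, "fro"))      # typically much smaller after truncation
```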
Complexity Analysis of SVD
The computational complexity of computing the SVD of an \(M \times N\) matrix is typically \(O(M N^2)\) when \(M \geq N\) and \(O(M^2 N)\) when \(N > M\) using standard algorithms such as the Golub–Reinsch algorithm. For square \(N \times N\) matrices, the complexity is therefore approximately \(O(N^3)\). More efficient randomized algorithms can achieve near-linear time complexity in certain scenarios, especially when only a low-rank approximation is required.
To summarize, the key properties that make SVD so widely used are the following.
Dimensionality Reduction: SVD allows for reducing the dimensionality of data while preserving most of the variance by keeping only the top singular values and vectors.
Noise Reduction: By filtering out components associated with small singular values, SVD can effectively reduce noise in signals and data.
Optimal Low-Rank Approximation: SVD provides the best approximation of a matrix by a lower-rank matrix in the Frobenius norm.
Exercise: Finding Eigenvalues and Eigenvectors - Hadamard Matrix Example
Operators as Outer Products: \(\sigma_x\) and \(\sigma_z\)
Operators can be represented using the outer product notation in a chosen basis. In the computational basis \(\{\left|{0}\right\rangle, \left|{1}\right\rangle\}\), the Pauli operators \(\sigma_x\) and \(\sigma_z\), and the identity operator \(I\) can be expressed as follows: \[\begin{aligned}\sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \left|{0}\middle\rangle\middle\langle{1}\right| + \left|{1}\middle\rangle\middle\langle{0}\right| \\\sigma_z &= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \left|{0}\middle\rangle\middle\langle{0}\right| - \left|{1}\middle\rangle\middle\langle{1}\right| \\I &= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \left|{0}\middle\rangle\middle\langle{0}\right| + \left|{1}\middle\rangle\middle\langle{1}\right|\end{aligned}\]
Hadamard Operator and Matrix
The Hadamard operator \(H\) is defined by its action on the computational basis states: \[\begin{aligned}H\left|{0}\right\rangle &= \frac{1}{\sqrt{2}} (\left|{0}\right\rangle + \left|{1}\right\rangle) \\H\left|{1}\right\rangle &= \frac{1}{\sqrt{2}} (\left|{0}\right\rangle - \left|{1}\right\rangle)\end{aligned}\] The matrix representation of the Hadamard operator in the computational basis \(\{\left|{0}\right\rangle, \left|{1}\right\rangle\}\) is given by: \[H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\] In outer product notation, the Hadamard operator is: \[H = \frac{1}{\sqrt{2}} (\left|{0}\middle\rangle\middle\langle{0}\right| + \left|{0}\middle\rangle\middle\langle{1}\right| + \left|{1}\middle\rangle\middle\langle{0}\right| - \left|{1}\middle\rangle\middle\langle{1}\right|)\]
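Before solving the characteristic equation, it is worth noting that \(H\) is both Hermitian and unitary, so its eigenvalues must be real and of unit modulus, i.e., \(\pm 1\). The short NumPy sketch below verifies the defining action of \(H\) on the computational basis and these two properties.

```python
import numpy as np

# Sketch: the Hadamard matrix, its action on the computational basis,
# and the fact that it is both Hermitian and unitary.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

print(H @ ket0)                                  # (|0> + |1>)/sqrt(2)
print(H @ ket1)                                  # (|0> - |1>)/sqrt(2)
print(np.allclose(H, H.conj().T))                # Hermitian
print(np.allclose(H @ H.conj().T, np.eye(2)))    # unitary (equivalently H^2 = I)
```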
Eigenvalues of the Hadamard Matrix
To find the eigenvalues of the Hadamard matrix, we solve the characteristic equation \(\det(H - \lambda I) = 0\): \[\det(H - \lambda I) = \det \left( \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) = \det \begin{pmatrix} \frac{1}{\sqrt{2}} - \lambda & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} - \lambda \end{pmatrix}\] Calculating the determinant: \[\begin{aligned}\det(H - \lambda I) &= \left(\frac{1}{\sqrt{2}} - \lambda\right) \left(-\frac{1}{\sqrt{2}} - \lambda\right) - \left(\frac{1}{\sqrt{2}}\right) \left(\frac{1}{\sqrt{2}}\right) \\&= -\frac{1}{2} - \frac{\lambda}{\sqrt{2}} + \frac{\lambda}{\sqrt{2}} + \lambda^2 - \frac{1}{2} \\&= \lambda^2 - 1\end{aligned}\] Setting the determinant to zero, \(\lambda^2 - 1 = 0\), yields the eigenvalues \(\lambda = \pm 1\). Thus, the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = -1\).
Eigenvectors of the Hadamard Matrix
For each eigenvalue, we solve the eigenvector equation \((H - \lambda I) \left|{v}\right\rangle = 0\).
For \(\lambda_1 = 1\):
We solve \((H - I) \left|{v_1}\right\rangle = 0\): \[\left( \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{pmatrix} \frac{1}{\sqrt{2}} - 1 & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} - 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\] From the first row equation: \((\frac{1}{\sqrt{2}} - 1)x + \frac{1}{\sqrt{2}}y = 0 \implies y = (\sqrt{2} - 1)x\). Choosing \(x = 1\), we get \(y = \sqrt{2} - 1\). The eigenvector is \(\begin{pmatrix} 1 \\ \sqrt{2} - 1 \end{pmatrix}\). Normalizing this vector: \[\sqrt{1^2 + (\sqrt{2} - 1)^2} = \sqrt{1 + 2 - 2\sqrt{2} + 1} = \sqrt{4 - 2\sqrt{2}}\] The normalized eigenvector is \(\left|{v_1}\right\rangle = \frac{1}{\sqrt{4 - 2\sqrt{2}}} \begin{pmatrix} 1 \\ \sqrt{2} - 1 \end{pmatrix}\).
For \(\lambda_2 = -1\):
We solve \((H + I) \left|{v_2}\right\rangle = 0\): \[\left( \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{pmatrix} \frac{1}{\sqrt{2}} + 1 & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} + 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\] From the first row equation: \((\frac{1}{\sqrt{2}} + 1)x + \frac{1}{\sqrt{2}}y = 0 \implies y = -(\sqrt{2} + 1)x\). Choosing \(x = 1\), we get \(y = -(\sqrt{2} + 1)\). The eigenvector is \(\begin{pmatrix} 1 \\ -(\sqrt{2} + 1) \end{pmatrix}\). Normalizing this vector: \[\sqrt{1^2 + (-(\sqrt{2} + 1))^2} = \sqrt{1 + 2 + 2\sqrt{2} + 1} = \sqrt{4 + 2\sqrt{2}}\] The normalized eigenvector is \(\left|{v_2}\right\rangle = \frac{1}{\sqrt{4 + 2\sqrt{2}}} \begin{pmatrix} 1 \\ -(\sqrt{2} + 1) \end{pmatrix}\).
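The following NumPy sketch checks these results: both eigenvector equations hold, and the two eigenvectors are orthogonal, as expected for distinct eigenvalues of a Hermitian operator.

```python
import numpy as np

# Sketch: numerical check of the eigenvalues and normalized eigenvectors found above.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

v1 = np.array([1.0, np.sqrt(2) - 1.0]) / np.sqrt(4.0 - 2.0 * np.sqrt(2))
v2 = np.array([1.0, -(np.sqrt(2) + 1.0)]) / np.sqrt(4.0 + 2.0 * np.sqrt(2))

print(np.allclose(H @ v1, v1))       # H v1 = +v1
print(np.allclose(H @ v2, -v2))      # H v2 = -v2
print(np.isclose(v1 @ v2, 0.0))      # eigenvectors of distinct eigenvalues are orthogonal
```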
Conclusion
This lecture provided an overview of key linear algebra concepts essential for quantum mechanics and signal processing. We began by revisiting tensor products, emphasizing their non-commutative nature and their role in defining entanglement in composite systems. We then discussed the spectral representation of normal operators and the matrix diagonalization process for finding eigenbases. Furthermore, we explored polar decomposition and singular value decomposition (SVD) as powerful matrix decomposition techniques, highlighting SVD’s utility in dimensionality reduction and signal processing. Finally, we illustrated the eigenvalue and eigenvector calculation using the Hadamard matrix.
Key Takeaways:
Tensor products are non-commutative operations crucial for describing composite quantum systems and entanglement.
Entanglement emerges from complex superpositions in tensor product spaces, beyond simple product states.
Normal operators admit spectral representations and can be diagonalized into their eigenbases.
Polar decomposition and Singular Value Decomposition (SVD) are important matrix decomposition methods with wide-ranging applications.
Singular Value Decomposition (SVD) is particularly effective for dimensionality reduction, noise reduction, and optimal low-rank matrix approximation.
Further Exploration:
How can entanglement be quantified, and what are the different classifications of entangled states?
What are the algorithmic complexities of matrix diagonalization and SVD, especially for large-scale matrices, and what are the implications for computational efficiency?
How are SVD and polar decomposition practically applied in areas such as quantum state tomography, image compression, and recommendation algorithms?
Can the concepts of spectral representation and matrix decompositions be extended to operators in infinite-dimensional Hilbert spaces, relevant to continuous-variable quantum systems and functional analysis?