Importance of decompositions
Very important in restating and understanding the behavior of a linear operator. Also important in solving problems: compute the decomposition once, then reuse it repeatedly. Also for algebraic manipulation. Common factorizations: QR, LU, eigenvalue decomposition, SVD.
EW revealing decompositions
See section on eigenvalues.
Eigenvalue decomposition
Aka Spectral Decomp. Exists only if A is non-defective.
Take X with eigenvectors as columns and \(\EW\) diagonal with the corresponding eigenvalues: A diagonalized into \(A = X\EW X^{-1}\).
Non-defectiveness connection
Left ev
Change of basis to ev
Unitary (Orth) diagonalizability
For \(A=A^{*}\), \(A=-A^{*}\), Q etc..
A unitarily diagonalizable iff it is normal: \(A^{*}A = AA^{*}\): From uniqueness of SVD, \(US^{2}U^{*} = VS^{2}V^{*}\); so, one can take U = V (up to phases), whence \(A = USV^{*}\) becomes a unitary diagonalization.
\htext{\(A = QUQ^{*}\) factorization}{Schur}
(Schur). Every square matrix has a Schur factorization \(A = QUQ^{*}\): Q unitary, U upper triangular.
Singular Value Decomposition (SVD)
Reduced (Thin) SVD
If \(A \in C^{m \times n}\) with \(m \geq n\): \(A = \hat{U}\hat{\SW}V^{*}\), with \(\hat{U}\) an \(m \times n\) matrix with orthonormal columns, \(\hat{\SW}\) diagonal with \(\sw_{1} \geq \dots \geq \sw_{n} \geq 0\), V unitary.
Full SVD
Pad \(\hat{U}\) with \(m-n\) further orthonormal columns to get unitary U; pad \(\hat{\SW}\) with \(m-n\) zero rows to get \(\SW\): \(A = U\SW V^{*}\).
Geometric view
Take \(Ax = U\SW V^{*}x\): \(V^{*}\) rotates the unit ball to the unit ball; \(\SW\) stretches it along the axes into a hyperellipse; U rotates/reflects the result.
From the geometric action of A, one can read the SVD off: the semiaxis lengths of the hyperellipse give the \(\sw_{i}\), their directions give the \(u_{i}\), and their preimages give the \(v_{i}\).
Existence
Every matrix has an SVD.
Conditional uniqueness up to a sign
SVD is unique, up to signs (phases) of corresponding singular vector pairs, if the singular values are distinct.
Singular value properties
See another section.
Finding SVD using EVD
Use eigenvalue decompositions: \(AA^{*} = U \SW^{2}U^{*}\), and \(A^{*}A = V \SW^{2}V^{*}\). Otherwise, find the eigenvalue decomposition of the hermitian dilation \(\begin{bmatrix} 0 & A^{*} \\ A & 0 \end{bmatrix}\), whose ew are \(\pm \sw_{i}\).
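A minimal numpy sketch of the first route (example matrix made up), assuming distinct, non-zero singular values so each \((u_{i}, v_{i})\) pair is determined up to sign:
\begin{verbatim}
import numpy as np

# Recover the SVD of A from the eigendecomposition of A*A.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])

s2, V = np.linalg.eigh(A.T @ A)       # A*A = V SW^2 V*
idx = np.argsort(s2)[::-1]            # singular values in decreasing order
s = np.sqrt(np.maximum(s2[idx], 0.0))
V = V[:, idx]
U = (A @ V) / s                       # A v_i = s_i u_i fixes each u_i

assert np.allclose(A, U @ np.diag(s) @ V.T)
\end{verbatim}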
Polar decomposition
So, if \(A = U\SW V^{*}\), then \(A = (UV^{*})(V\SW V^{*}) = QP\): unitary Q times hermitian +ve semidefinite P.
PA = LU
Here, L unit lower triangular, U upper triangular, P a permutation matrix.
Can also make PA = LDU: unit lower triangular L, unit upper triangular U, diagonal D.
Existence and uniqueness
Existence
See the triangularization by row elimination algorithm in the numerical analysis ref. That it runs to completion proves existence.
Uniqueness if P=I
A = LU unique: else, if \(L_{1}U_{1} = L_{2}U_{2}\), then \(L_{2}^{-1}L_{1} = U_{2}U_{1}^{-1}\) is both unit lower triangular and upper triangular, hence I.
For hermitian positive definite matrices
As A is hermitian +ve definite, no pivoting is needed: can take P = I, and all pivots are +ve.
As \(A = LDU = A^{*}\), \(U = L^{*}\); can take \(A = RR^{*}\), where \(R = LD^{1/2}\).
Importance
Very useful in solving linear equations involving the same matrix A: can store L, U for repeated reuse, as in the sketch below.
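A minimal scipy sketch of this reuse (example data made up): factor once, solve for many right hand sides.
\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)                  # PA = LU, computed once: O(n^3)
for b in (np.array([1.0, 0.0]), np.array([5.0, -2.0])):
    x = lu_solve((lu, piv), b)          # two triangular solves: O(n^2)
    assert np.allclose(A @ x, b)
\end{verbatim}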
A = QR = LQ’
Express columns of A in terms of the orthonormal columns of Q: \(a_{k} = \sum_{i \leq k} r_{ik}q_{i}\); so R is upper triangular.
Taking the QR factorization of \(A^{*}\) gives \(A = LQ'\): L lower triangular, Q' with orthonormal rows.
Importance
Often, we need to get an orthonormal basis for range(A); the columns of Q supply one, as in the sketch below.
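A minimal numpy sketch, assuming A has full column rank:
\begin{verbatim}
import numpy as np

A = np.random.randn(5, 3)               # full column rank w.p. 1
Q, R = np.linalg.qr(A, mode='reduced')  # Q: 5x3, R: 3x3 upper triangular
assert np.allclose(Q.T @ Q, np.eye(3))  # columns of Q orthonormal
assert np.allclose(Q @ R, A)            # range(Q) = range(A)
\end{verbatim}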
Column Rank deficient A
If rank(A) = r < n, then at n - r steps the new column already lies in the span of the earlier ones: the corresponding diagonal entries of R are 0.
Rank revealing QR
In such cases, using column pivoting (AP = QR), we can make the 0 rows of R appear at the bottom, revealing the rank.
Factorization of Hermitian matrices
Unitary diagonalizability, SVD
From Schur factorization (hermitian + triangular \(\implies\) diagonal): can write \(A=Q \EW Q^{*}\). So, has full set of orthonormal eigenvectors. So, can write: \(A = \sum_i \ew_i q_i q_i^{*}\).
Also, singular values: \(\sw_{i} = |\ew_{i}|\).
Symmetric LDU factorization
(Cholesky). \(A = R^{*}R\). As \(A = LDU = A^{*} = U^{*}DL^{*}\), by uniqueness \(L=U^{*}\). So, \(A = LDL^{*} = LD^{1/2}D^{1/2}L^{*} = R^{*}R\), with \(R = D^{1/2}L^{*}\); see the sketch below.
By SVD,
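A minimal numpy sketch (example matrix made up); numpy returns the lower factor L with \(A = LL^{*}\), i.e. \(R = L^{*}\) in the convention above.
\begin{verbatim}
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])  # hermitian +ve definite
L = np.linalg.cholesky(A)               # lower triangular factor
assert np.allclose(L @ L.T, A)
\end{verbatim}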
Square root of semidefinite A
\(A = (A^{1/2})^{*}A^{1/2}\). Diagonalize: get \(A = Q\EW Q^{*}\); take \(A^{1/2} = Q\EW^{1/2}Q^{*}\).
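A minimal numpy sketch of this construction (example matrix made up):
\begin{verbatim}
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # +ve semidefinite
ew, Q = np.linalg.eigh(A)               # A = Q EW Q*
Ahalf = Q @ np.diag(np.sqrt(np.maximum(ew, 0.0))) @ Q.T
assert np.allclose(Ahalf @ Ahalf, A)
\end{verbatim}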
Special linear operators
Orthogonal (Unitary) \(m \times n\) matrix
Columns orthonormal: \(Q^{*}Q = I\).
Change of basis operation
Qx=b: x gives the coordinates of b in the basis formed by the columns of Q.
Alternative view: \(x = Q^{*}b\), i.e. \(b = \sum_{i} (q_{i}^{*}b)q_{i}\).
Preservation of angles and 2 norms
So, \(\dprod{Qa, Qb} = b^{*}Q^{*}Qa = \dprod{a,b}\). Also, \(\norm{Qa}_{2} = \norm{a}_{2}\).
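A minimal numerical check of these invariances (random data):
\begin{verbatim}
import numpy as np

Q, _ = np.linalg.qr(np.random.randn(4, 4))  # random unitary Q
a, b = np.random.randn(4), np.random.randn(4)
assert np.isclose((Q @ a) @ (Q @ b), a @ b)                  # inner products
assert np.isclose(np.linalg.norm(Q @ a), np.linalg.norm(a))  # 2 norms
\end{verbatim}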
Square Unitary matrix
Orthogonality of rows
If Q square, even rows orthonormal: \(Q^{*}Q=I \implies QQ^{*}=I\), as a left inverse of a square matrix is also a right inverse.
Rotation + reflection
Because it is a change of basis to another orthonormal basis, by geometry, Q acts as a rotation, a reflection, or a composition of these.
The distinction between orthogonal matrices built purely from rotations (proper rotations) and those that also involve reflections (improper rotations) is important in geometric computations, in applications such as robotics and computational biology.
Determinant
\(\det(Q^{*}Q) = |\det Q|^{2} = 1\), so \(|\det Q| = 1\): +1 for proper rotations, -1 for improper ones.
Permutation matrix P
A permuted I. Permutes rows (PA) or columns (AP). Partial permutation matrix: every row and column has at most one 1, the rest 0.
Rotation matrix
To make a rotation matrix, take the new orthonormal basis (the rotated standard basis) as the columns, as in the sketch below.
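A minimal 2D sketch: the rotated standard basis as columns gives an orthogonal matrix with determinant +1.
\begin{verbatim}
import numpy as np

t = np.pi / 3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(R.T @ R, np.eye(2))    # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation
\end{verbatim}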
Reflection matrix
Take the reflection that negates some basis vector (not an arbitrary axis): this is just I with -1 in place of one of the 1's.
Linear Projector P to a subspace S
Using General projection definition
See the definition of the generalized projection operation in the vector spaces ref. Here, we consider the case where the projection happens to be a linear operator: that is, it corresponds to the minimization of a convex quadratic vector functional, where the feasible set is a vector space, the range space of the projector.
Definition for the linear case
P such that \(P^{2} = P\) (idempotent).
(I-P) projects to N(P): if Pv=0, (I-P)v=v; and vice versa. Rank-r projectors project to an r-dimensional subspace.
Oblique projectors project along directions not orthogonal to their range.
Orthogonal projector
Here, the projection is along directions orthogonal to S: range(P) \(\perp\) N(P).
If \(P=P^{*}\), P orth projector: \(\dprod{Px, (I-P)y} = y^{*}(I-P)Px = y^{*}(P-P^{2})x = 0\), so range(P) \(\perp\) N(P).
Ergo, (I-P) also an orth projector. Also, \(P = \hat{Q}\hat{Q}^{*}\) (as \(\hat{Q}\hat{Q}^{*}\) is hermitian, idempotent, with range \(= range(\hat{Q}) = S\)); see the sketch below.
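A minimal numpy sketch: build \(P = \hat{Q}\hat{Q}^{*}\) from an orthonormal basis for range(A) (random A):
\begin{verbatim}
import numpy as np

A = np.random.randn(5, 2)
Q, _ = np.linalg.qr(A, mode='reduced')  # orthonormal basis for range(A)
P = Q @ Q.T
assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(P, P.T)              # hermitian
assert np.allclose(P @ A, A)            # acts as identity on range(A)
\end{verbatim}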
Hermitian matrix
Aka Self Adjoint Operator. \(A = A^{*}\); in the real case, symmetric matrix: \(A = A^{T}\).
Notation: symmetric matrices in \(R^{n \times n}\) denoted \(S^{n}\).
Skew/ anti symmetric matrix: \(A = -A^{*}\).
\(\dprod{Ax, y} = y^{*}Ax = \dprod{x, A^{*}y}\).
Importance
Appears often in analysis: Any \(B = \frac{B+B^{*}}{2} + \frac{B-B^{*}}{2}\): Hermitian + Skew Hermitian. Also in projectors.
Many applications: e.g. covariance matrix, adjacency matrix, kernel matrix.
Self adjointness under M
For SPD M, A is self adjoint wrt \(\dprod{x,y}_{M} = y^{*}Mx\) iff \(MA = A^{*}M\): \(\dprod{Ax,y}_{M} = y^{*}MAx\), while \(\dprod{x,Ay}_{M} = y^{*}A^{*}Mx\).
Eigenvalues (ew)
All eigenvalues real: \(\conj{l}x^{*}x = (lx)^{*}x = (Ax)^{*}x = x^{*}(Ax) = lx^{*}x\), so \(\conj{l} = l\).
ev of Distinct ew are orthogonal: \(x_{2}^{*}Ax_{1} = l_{1}x_{2}^{*}x_{1} = l_{2}x_{2}^{*}x_{1}, \therefore x_{2}^{*}x_{1}(l_{1}-l_{2}) = 0\).
sw and norm
Unitary diagonalizability possible for A: see section on factorization of hermitian matrices. Thence, \(\sw_{i} = |\ew_{i}|\) and \(\norm{A}_{2} = \max_{i}|\ew_{i}|\).
Factorizations
For details, see section on factorization of hermitian matrices.
Skewed inner prod \htext{\(x^{*}Ax\)}{..}
\(x^{*}Ay = (y^{*}Ax)^{*}\). So, \(x^{*}Ax = \conj{x^{*}Ax}\); so \(x^{*}Ax\) real.
+ve definiteness
Definition
If \(\forall 0 \neq x \in C^{n}: x^{*}Ax \in R, x^{*}Ax \geq 0\), A is +ve semidefinite.
If, moreover, \(x^{*}Ax > 0\) for all \(x \neq 0\), A is +ve definite.
Similarly, -ve (semi-)definite defined.
Importance
Important because Hessians of convex quadratic functionals are +ve semidefinite. Also important because of its connections with ew(A).
+ve semidefinite cone
The set of +ve semidefinite matrices is a proper cone. If restricted to symmetric matrices, get the cone \(S^{n}_{+}\).
Matrix inequalities
Hence, inequalities wrt the cone defined. (Can write \(A \succeq B\) when \(A - B\) is +ve semidefinite.)
Linear matrix inequality (LMI)
\(A_{0} + \sum_{i} x_{i}A_{i} \succeq 0\): the solution set in x is convex.
Analogy with reals
Hermitians analogous to R; +ve semidef like \(R_{+}\), +ve def like \(R_{++}\).
Support number of a pencil
Non hermitian examples
Need not be hermitian always. For real B, as \(x^{T}Bx = x^{T}B^{T}x\), the anti-symmetric part in \(B = \frac{B+B^{T}}{2} + \frac{B-B^{T}}{2}\) has no effect on the quadratic form.
Diagonal elements, blocks
Taking \(x = e_{i}\): diagonal elements \(a_{ii} \geq 0\). Similarly, for principal blocks: restricting x to the block's coordinates shows every principal block is +ve semidefinite.
Off-diagonal block sign flips
Off-diagonal block signs can be flipped without losing +ve semidefiniteness. Pf: if \(\begin{bmatrix} A & B \\ B^{*} & C \end{bmatrix} \succeq 0\), replacing \((x, y)\) by \((x, -y)\) in the quadratic form shows \(\begin{bmatrix} A & -B \\ -B^{*} & C \end{bmatrix} \succeq 0\).
Eigenvalues: Real, +ve?
If \(A = A^{*}\) is +ve semidefinite, take x an eigenvector: \(0 \leq x^{*}Ax = \ew x^{*}x\), so \(\ew \geq 0\); for +ve definite A, \(\ew > 0\).
Determinant
\(\det A = \prod_{i} \ew_{i} \geq 0\) (\(> 0\) for +ve definite A).
+ve inner products
For +ve semidefinite matrices, get +ve inner products: \(\dprod{A,B} = tr(AB) \geq 0\). Take the eigenvalue decomposition \(A = \sum_{i}\ew_{i}q_{i}q_{i}^{*}\): \(tr(AB) = \sum_{i}\ew_{i}q_{i}^{*}Bq_{i} \geq 0\).
So, the +ve semidefinite cone is self dual.
Invertibility
If A +ve definite, 0 is not an ew, so A is invertible.
Hermitian +ve definiteness
Also, see properties of not-necessarily symmetric +ve semidefinite matrices.
From general matrices
Any \(B^{*}B\) or \(BB^{*}\) hermitian, +ve semidefinite: \(x^{*}B^{*}Bx = \norm{Bx}^{2}\). So, if B invertible, \(B^{*}B\) is +ve definite. So, if B is long and thin with full column rank, \(B^{*}B\) is +ve definite; but if B is short and fat, \(N(B) \neq \{0\}\), so \(B^{*}B\) is singular: only +ve semidefinite.
Connection to ew
If \(A = A^{*}\), all eigenvalues \(\geq 0\) iff A is +ve semidefinite, \(> 0\) iff +ve definite.
Magnitudes of ew same as those of sw: one can easily derive the SVD from the eigenvalue decomposition. So, singular value properties carry over.
Connection to the diagonal
Diagonal dominance
If A is hermitian and diagonally dominant with non-negative diagonal, A is +ve semidefinite (see below).
Diagonal heaviness
The biggest element (in magnitude) of a +ve semidefinite A lies on the diagonal: else, some 2x2 principal minor \(a_{ii}a_{jj} - |a_{ij}|^{2}\) would be negative.
Check for +ve (semi) definiteness
Do Gaussian elimination; see if the pivots are all +ve (non-negative, for semidefiniteness). Equivalently, attempt a Cholesky factorization, as in the sketch below.
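A minimal numpy sketch of this check via Cholesky (positive pivots exist iff the factorization succeeds); the helper name is made up.
\begin{verbatim}
import numpy as np

def is_positive_definite(A):
    # For hermitian A: Cholesky succeeds iff all pivots are +ve.
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))
\end{verbatim}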
Block matrices: Schur complement connection
Take \(X = \begin{bmatrix} A & B \\ B^{*} & C \end{bmatrix}\) with \(A \succ 0\); the Schur complement of A is \(S = C - B^{*}A^{-1}B\). Then \(X \succeq 0\) iff \(S \succeq 0\).
Proof: rewrite as optimization problem
Take \(f(u) = \begin{bmatrix} u \\ v \end{bmatrix}^{*} X \begin{bmatrix} u \\ v \end{bmatrix}\), minimize over u: the minimizer is \(u = -A^{-1}Bv\), with minimum value \(v^{*}Sv\). So \(X \succeq 0\) iff \(v^{*}Sv \geq 0\) for every v.
+ve semidefinite cone
Denoted by \(S^{n}_{+}\); the +ve definite matrices form its interior, \(S^{n}_{++}\).
Self duality
If \(tr(AB) \geq 0\) for every \(B \succeq 0\), then \(A \succeq 0\); the converse was shown above (+ve inner products): so the cone is self dual.
This is important when considering the dual of a semidefinite program.
Speciality of the diagonal
Diagonally dominant matrix
Hermitian-ness and +ve semidefiniteness
A hermitian diagonally dominant matrix with non-negative diagonal elements is +ve semi-definite. Pf: take any ew \(\ew\); by the Gerschgorin theorem, \(|\ew - a_{ii}| \leq \sum_{j \neq i}|a_{ij}| \leq a_{ii}\) for some i; as \(\ew\) is real, \(\ew \geq 0\).
If the symmetry condition is dropped, +ve semidefiniteness need not hold.
Other Matrices of note
Interesting matrix types
Block matrix; Block tridiagonal matrix.
Triangular matrix
Inverse of L (or \(L^{*}\)) is easy to find by forward (back) substitution, as in the sketch below; it is again triangular, with \((L^{-1})_{i,i} = L_{i,i}^{-1}\). (For an elementary unit lower triangular eliminator, inversion merely negates the off-diagonal entries.)
ew are on diagonal.
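A minimal scipy sketch: invert a lower triangular matrix by forward substitution, column by column (example matrix made up).
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[2.0, 0.0], [3.0, 4.0]])
Linv = solve_triangular(L, np.eye(2), lower=True)    # forward substitution
assert np.allclose(L @ Linv, np.eye(2))
assert np.allclose(np.diag(Linv), 1.0 / np.diag(L))  # reciprocal diagonal
\end{verbatim}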
Polynomial matrix
Normal matrix
\(A^{*}A= AA^{*}\). By spectral thm, normal matrices are exactly the unitarily diagonalizable ones.
Let
Rank 1 perturbation of I
\(A=I+uv^{*}\). Easily invertible: \(A^{-1} = I + auv^{*}\) with \(a = -1/(1+v^{*}u)\), provided \(1+v^{*}u \neq 0\) (Sherman-Morrison); see the check below.
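A minimal numerical check of this formula (vectors made up; requires \(1 + v^{*}u \neq 0\)):
\begin{verbatim}
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, -1.0, 0.25])
A = np.eye(3) + np.outer(u, v)                     # A = I + u v*
Ainv = np.eye(3) - np.outer(u, v) / (1.0 + v @ u)  # a = -1/(1 + v*u)
assert np.allclose(A @ Ainv, np.eye(3))
\end{verbatim}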
k partial isometry
Positive matrix A
Get Doubly stochastic matrix
This is important in some applications, like making a composite matrix from social and affiliation networks.
If A is strictly +ve, there exist +ve diagonal matrices \(D_{1}, D_{2}\) such that \(D_{1}AD_{2}\) is doubly stochastic (Sinkhorn).
Can do Sinkhorn balancing: iteratively a] do row normalization, b] column normalization; see the sketch below.
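A minimal numpy sketch of the iteration, assuming a strictly +ve matrix (for which Sinkhorn's theorem guarantees convergence):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1            # strictly positive
for _ in range(500):
    A /= A.sum(axis=1, keepdims=True)   # a] row normalization
    A /= A.sum(axis=0, keepdims=True)   # b] column normalization
assert np.allclose(A.sum(axis=1), 1.0)  # approximately doubly stochastic
assert np.allclose(A.sum(axis=0), 1.0)
\end{verbatim}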
Stochastic matrices
Definition
If \(A \geq 0\) and \(A1 = 1\) (rows sum to 1), A is (row) stochastic.
Eigenspectrum and norm: Square A
1 is an ev
If A stochastic, \(A1 = 1\): 1 is an ew, with ev \(1\).
By Gerschgorin thm, every ew satisfies \(|\ew| \leq 1\): each disc has center \(a_{ii}\) and radius \(\sum_{j \neq i} a_{ij} = 1 - a_{ii}\).
If
Product of stochastic matrices
So, if A, B stochastic, AB stochastic: \(AB \geq 0\) and \(AB1 = A1 = 1\).
Doubly Stochastic matrix S
S is bistochastic if \(S \geq 0, S1 = 1, 1^{*}S = 1^{*}\).
(Birkhoff): the doubly stochastic matrices form the convex hull of the permutation matrices.
Doubly Substochastic matrix Q
\(Q \geq 0\), with row and column sums \(\leq 1\).
For permutation matrix P, PQ or QP also substochastic.
Q is dbl substochastic iff Q has dbl stochastic dilation S: make deficiency vectors \(r = 1 - Q1\), \(c = 1 - Q^{*}1\); then \(S = \begin{bmatrix} Q & diag(r) \\ diag(c) & Q^{*} \end{bmatrix}\) is dbl stochastic.
Large matrices
Often an approximation to an infinite dimensional (continuous) operator.
Sparsity
Not density: very few non-zero entries per row, so storage and matrix-vector products cost only O(number of non-zeros).
Iterative algorithms
See numerical analysis ref.