Eigenvalue and co

Eigenvalue (ew)

ew problem

Aka: ew or eigenwert (Eigen = distinctive). For square A only. These are solutions to the eigenvalue problem \(Ax=\lambda x\), \(x \neq 0\). This defines the eigenpair \((\lambda, x)\), where \(\lambda\) is the ew and x is a (right) ev (eigenvector).
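A minimal numpy sketch of the eigenvalue problem (the matrix here is illustrative):

```python
import numpy as np

# Solve Ax = λx for a small symmetric example; numpy.linalg.eig
# returns the ew and the right ev (as columns).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, X = np.linalg.eig(A)   # columns of X are right eigenvectors

# Verify each eigenpair satisfies A x = λ x.
for i in range(len(lams)):
    x = X[:, i]
    assert np.allclose(A @ x, lams[i] * x)
```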

Left and right eigenpairs

Also, can define left ev and ew by the relation \(x^{*}A=\lambda x^{*}\). The left ew of A are the same as the ew of A; for real A, the left ev of A are the ev of \(A^{T}\).

ew of A and ew of \(A^{T}\) are the same: \(\det(A^{T}-\lambda I)=\det((A-\lambda I)^{T})=\det(A-\lambda I)\). Via Schur for real A: \(A=QUQ^{*}\) gives \(A^{T}=\bar{Q}U^{T}\bar{Q}^{*}\), a unitary similarity with the triangular \(U^{T}\); so the ew of \(A^{T}\) are still \(\operatorname{diag}(U)\).

In the case of diagonalizable A, the ew decomposition \(AS=S\Lambda\) reveals that the rows of \(S^{-1}\) are the left ev: for details see the ew decomposition section.
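A quick numerical check of the claim above (example matrix is an assumption):

```python
import numpy as np

# For diagonalizable A with A S = S Λ, the rows of S^{-1} are left ev:
# (row_i of S^{-1}) A = λ_i (row_i of S^{-1}).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

for i, lam in enumerate(lams):
    y = S_inv[i, :]          # i-th row of S^{-1}
    assert np.allclose(y @ A, lam * y)
```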

Characteristic polynomial

As \((A-\lambda I)x=0\) must have a nonzero solution x, \(\det(A-\lambda I)=0\): this is the characteristic polynomial \(p(\lambda)\). (x=0 is always a trivial solution and is excluded.)

Among other things, this implies that the ew of a triangular matrix are on its diagonal.

Mapping polynomials to matrices

Every monic polynomial \(p(\lambda)=\lambda^{m}+a_{m-1}\lambda^{m-1}+\dots+a_{0}\) is the determinant of some matrix \(\lambda I - A\); Eg: \(\det\begin{pmatrix}\lambda & a_{0}\\ -1 & \lambda+a_{1}\end{pmatrix}=\lambda^{2}+a_{1}\lambda+a_{0}\) for m=2; here A is called a companion matrix of p.
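A sketch of the m=2 case: the ew of the companion matrix are the roots of p (coefficients below are made up):

```python
import numpy as np

# Companion matrix of p(λ) = λ^2 + a1 λ + a0:
# det(λI - A) = λ(λ + a1) + a0 = λ^2 + a1 λ + a0.
a1, a0 = -5.0, 6.0          # p(λ) = λ^2 - 5λ + 6 = (λ-2)(λ-3)
A = np.array([[0.0, -a0],
              [1.0, -a1]])

ew = np.sort(np.linalg.eigvals(A))
roots = np.sort(np.roots([1.0, a1, a0]))
assert np.allclose(ew, roots)
```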

Applications

Domain and range of A are the same space. Useful where iterated calculations, \(A^{k}\) or \(e^{tA}\), occur.

Physics: evolving systems generated by linear equations; Eg: resonance, stability.

Simpler algorithms: reduces a coupled system to a collection of scalar systems: in the ev basis, each coordinate evolves independently under its own ew.

ew and ev properties

ew properties

For connections with matrices, distribution etc., see the later section.

Number of ew

There are n ew in \(\mathbb{C}\), counted with multiplicity. Algebraic multiplicity of an ew: the number of times it appears as a root of \(p(\lambda)\).

0 as an ew

If A is full rank, 0 is not an ew: \(Ax=0\) for \(x\neq 0\) would make A's columns dependent. If 0 is an ew, \(E_{0}=N(A)\).

Eigenspace of an ew

If x is an ev corresponding to \(\lambda\), so are \(-x\) and \(kx\) for any nonzero scalar k. For every \(\lambda\), the ev (together with 0) form the null space of \(A-\lambda I\).

An invariant subspace: \(AE_{\lambda}\subseteq E_{\lambda}\). \(E_{\lambda_{1}}=N(A-\lambda_{1}I)\).

Independence of ev

ev \(x_{1},x_{2}\) (nonzero) for distinct ew \(\lambda_{1},\lambda_{2}\) are linearly independent: if they were collinear, the action of A on them would involve the same scaling.

Defective matrices

Geometric multiplicity of \(\lambda\) \(\leq\) algebraic multiplicity: let n be the geometric multiplicity of \(\lambda\) in A; select orthonormal vectors in \(E_{\lambda}\) to get \(\hat{V}\); extend it to a square unitary V; take \(B=V^{*}AV=\begin{pmatrix}\lambda I & C\\ 0 & D\end{pmatrix}\); but \(\det(B-zI)=\det(\lambda I-zI)\det(D-zI)=(\lambda-z)^{n}\det(D-zI)\): so the algebraic multiplicity of \(\lambda\) in B is \(\geq n\), and the same in A (similarity).

If the algebraic multiplicity of an ew exceeds its geometric multiplicity, the ew is defective. If A has a defective ew, A is defective. Eg: a diagonal matrix is never defective.
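The standard example of a defective matrix is a Jordan block; a small check (matrix chosen for illustration):

```python
import numpy as np

# The Jordan block [[2,1],[0,2]] has ew 2 with algebraic multiplicity 2
# but only a one-dimensional eigenspace: it is defective.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

ew = np.linalg.eigvals(A)                    # ew 2 repeated twice
# geometric multiplicity = dim N(A - 2I) = 2 - rank(A - 2I)
geo_mult = 2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
assert np.allclose(ew, [2.0, 2.0])
assert geo_mult == 1
```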

Rayleigh quotient of x

\(r(x)=\frac{x^{T}Ax}{x^{T}x}\). Like solving the least squares problem \(x\alpha \approx Ax\) for the scalar \(\alpha\); thereby approximating an eigenvalue. Range of r(x): the field of values or numerical range of A, \(W(A)\). It includes the convex hull of \(\Lambda(A)\).
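A small sketch: at an ev, the Rayleigh quotient returns the corresponding ew (example matrix assumed):

```python
import numpy as np

# Rayleigh quotient r(x) = x^T A x / x^T x.
def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([1.0, 1.0])      # ev of A for ew 3
assert np.isclose(rayleigh(A, q), 3.0)
```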

EV as stationary points

\(\frac{\partial r(x)}{\partial x_{j}}=\frac{2(Ax-r(x)x)_{j}}{\|x\|^{2}}\); collecting these, the gradient \(\nabla r(x)=\frac{2(Ax-r(x)x)}{\|x\|^{2}}\). The ev are the stationary points: \(\nabla r(x)=0\) iff x is an ev and r(x) the corresponding ew.

Geometry

\(r(x)\colon S^{n-1}\to\mathbb{R}\). Normalized ev of A are stationary points of r(x) restricted to \(x\in S^{n-1}\).

maxmin thm for symmetric A

[Courant–Fischer]. Take \(\lambda_{i}\) in descending order. Then \(\lambda_{i}=\max_{S:\dim(S)=i}\ \min_{0\neq y\in S}R_{A}(y)\).

Quadratic Accuracy of ew wrt ev error

For \(A=A^{T}\), let \(q_{i}\) be orthonormal ev. Then for any \(x=\sum_{i}a_{i}q_{i}\), \(r(x)=\frac{\sum_{i}a_{i}^{2}\lambda_{i}}{\sum_{i}a_{i}^{2}}\). So \(W(A)\) = convex hull of \(\Lambda(A)\), and \(\max_{x}r(x)=\lambda_{\max}\). Also, \(r(x)-r(q_{i})=O(\|x-q_{i}\|^{2})\) as \(x\to q_{i}\), i.e. as \(|a_{j}/a_{i}|\leq\epsilon\) for \(j\neq i\).
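A numerical illustration of the quadratic accuracy claim (matrix and perturbation direction are assumptions):

```python
import numpy as np

# For symmetric A, the error r(x) - λ_i shrinks quadratically in the
# eigenvector error: shrinking the perturbation by 10x should shrink
# the Rayleigh-quotient error by ~100x.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit ev, ew 3
d = np.array([1.0, -1.0]) / np.sqrt(2.0)  # orthogonal perturbation

def r(x):
    return (x @ A @ x) / (x @ x)

errs = [abs(r(q + eps * d) - 3.0) for eps in (1e-2, 1e-3)]
ratio = errs[0] / errs[1]                 # expect roughly 100
assert 50 < ratio < 200
```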

Rayleigh quotient of M: Generalization

If A is \(m\times m\), M is \(m\times n\): thin, tall, required to be full rank.\ \(r(M)=\operatorname{tr}((M^{T}M)^{-1}(M^{T}AM))\).

Take the svd: \(M=U\Sigma V^{*}\). Then \ \(\max r(M)=\max_{U}\operatorname{tr}(U^{T}AU)=\max_{U}\sum_{i}r(u_{i})\). This happens when the columns of U are the top n ev of A.

ew and matrices: other connections

ew distribution

Set of ew: the spectrum of A, \(\Lambda(A)\). Spectral radius: \(\rho(A)=\max_{i}|\lambda_{i}|\). For large random symmetric matrices (iid entries, suitably scaled), the ew follow Wigner's semicircle distribution. \why

ew and the diagonal

Disks around the diagonal

(Gerschgorin) In the complex plane, take disks with centers \(A_{i,i}\) and radii \(\sum_{j\neq i}|A_{i,j}|\); each ew lies in at least one disk: take ev v and ew \(\lambda\); take \(i=\arg\max_{i}|v_{i}|\); then \(|\lambda-a_{i,i}|=\left|\sum_{j\neq i}a_{i,j}\frac{v_{j}}{v_{i}}\right|\leq\sum_{j\neq i}|a_{i,j}|\). Good estimate of the ew!
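A sketch of the Gerschgorin bound on a made-up matrix:

```python
import numpy as np

# Every ew lies in at least one Gerschgorin disk:
# center A[i,i], radius = sum of |off-diagonal| entries in row i.
A = np.array([[4.0, 1.0, 0.5],
              [0.2, 2.0, 0.3],
              [0.1, 0.4, 9.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r + 1e-12 for c, r in zip(centers, radii))
```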

Monotonic dependence on diagonal

Take \(A = QTQ^{*}\) (Schur); for a shift \(D=\delta I\), \(\delta>0\), get \(A+D = Q(T+D)Q^{*}\), so every ew increases by \(\delta\). (For a general positive diagonal D and symmetric A, Weyl's inequalities still give \(\lambda_{i}(A+D)\geq\lambda_{i}(A)\).) Used to take \(A\in S_{+}\) to \(S_{++}\).

Effect of transformations

Similarity transformation

X nonsingular, square; \(A\mapsto B=X^{-1}AX\). A, B are similar if \(\exists X: B=X^{-1}AX\).

Change of basis op: see \(Ax=b\) as \(AX\tilde{x}=X\tilde{b}\) where \(x=X\tilde{x}\), \(b=X\tilde{b}\); so \(B\tilde{x}=\tilde{b}\).

\(p_{X^{-1}AX}(\lambda)=\det(\lambda I-X^{-1}AX)=\det(X^{-1}(\lambda I-A)X)=\det(\lambda I-A)=p_{A}(\lambda)\). So the ew and their algebraic multiplicities are the same. Geometric multiplicities are the same too: \(X^{-1}E_{\lambda}\) is the eigenspace of \(X^{-1}AX\): if \(Ax=\lambda x\), then \(BX^{-1}x=\lambda X^{-1}x\).
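A quick numerical check that similar matrices share ew (random matrices, fixed seed, are assumptions):

```python
import numpy as np

# B = X^{-1} A X has the same characteristic polynomial as A,
# hence the same ew with the same multiplicities.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))          # generically nonsingular
B = np.linalg.inv(X) @ A @ X

ew_A = np.sort_complex(np.linalg.eigvals(A))
ew_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(ew_A, ew_B)
```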

Eigenpairs of \(A^{k}\)

\(A^{k} = (QUQ^{*})^{k} = QU^{k}Q^{*}\). So the ew are \(\lambda_{i}^{k}\). Similarly, the ew of \(A^{-1}\) are \(\frac{1}{\lambda_{i}}\): \(\Lambda^{-1}=(S^{-1}AS)^{-1}=S^{-1}A^{-1}S\).
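A small check of both claims (matrix assumed):

```python
import numpy as np

# If λ is an ew of A, then λ^k is an ew of A^k and 1/λ an ew of A^{-1}.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # ew 1 and 3

ew = np.sort(np.linalg.eigvals(A))
ew_sq = np.sort(np.linalg.eigvals(A @ A))
ew_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)))

assert np.allclose(ew_sq, ew ** 2)
assert np.allclose(np.sort(1.0 / ew), ew_inv)
```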

Matrix construction

Matrix with certain ew and diagonal entries

If real \(\lambda\) majorizes real a, there is a real symmetric A with \(a_{i,i}=a_{i}\) such that the \(\lambda_{i}\) are the ew of A. Pf: by induction; assume true for n−1. Take \(g\in\mathbb{R}^{n-1}\) interleaved amongst \(\lambda\) which majorizes the first n−1 entries of a; by the inductive hypothesis, take real symmetric \(B=QGQ^{T}\) with the required diagonal and ew g; can make \(A'=\begin{pmatrix}G & y\\ y^{T} & \alpha\end{pmatrix}\) with ew \(\lambda\); take \(A=\begin{pmatrix}Q & 0\\ 0 & 1\end{pmatrix}A'\begin{pmatrix}Q^{T} & 0\\ 0 & 1\end{pmatrix}=\begin{pmatrix}B & Qy\\ y^{T}Q^{T} & \alpha\end{pmatrix}\) with ew \(\lambda\) and diagonal a.

Extending \(\Lambda\) to A with prescribed interleaved ew \(\lambda(A)\)

Take \(\lambda\in\mathbb{R}^{n}\) interleaved between \(l\in\mathbb{R}^{n+1}\). Can make a real \(A=\begin{pmatrix}\Lambda & y\\ y^{T} & a\end{pmatrix}\) with \(\lambda(A)=l\).

Pf where the \(\lambda_{i}\) differ: \(a=\operatorname{tr}(A)-\operatorname{tr}(\Lambda)=\sum_{i}l_{i}-\sum_{i}\lambda_{i}\). Take \(\det(tI-A)=\det\left(\begin{pmatrix}I & 0\\ -y^{T}(tI-\Lambda)^{-1} & 1\end{pmatrix}\begin{pmatrix}tI-\Lambda & -y\\ 0^{T} & (t-a)-y^{T}(tI-\Lambda)^{-1}y\end{pmatrix}\right)\) (multiplying by a matrix with det = 1) \(=\left[(t-a)-y^{T}(tI-\Lambda)^{-1}y\right]\det(tI-\Lambda)=(t-a)\prod_{i}(t-\lambda_{i})-\sum_{i}y_{i}^{2}\prod_{j\neq i}(t-\lambda_{j})=p_{A}(t)\).

Find y to make \(p_{A}(l_{i})=0\) for all i. Characterize y, show it exists: take \(f(t)=\prod_{i}(t-l_{i})\), \(g(t)=\prod_{i}(t-\lambda_{i})\), so \(f(t)=g(t)(t-c)+r(t)\), where r(t) has degree \(\leq n-1\). \(c=\sum_{i}l_{i}-\sum_{i}\lambda_{i}=a\); \(f(\lambda_{i})=r(\lambda_{i})\), so get the Lagrange interpolation form \(r(t)=\sum_{i}\frac{f(\lambda_{i})}{g'(\lambda_{i})}\frac{g(t)}{t-\lambda_{i}}\). So \(\frac{f(t)}{g(t)}=(t-a)+\sum_{i}\frac{f(\lambda_{i})}{g'(\lambda_{i})(t-\lambda_{i})}\); this is \(0\iff t=l_{i}\), and it matches \(\frac{p_{A}(t)}{g(t)}\) when \(y_{i}^{2}=-\frac{f(\lambda_{i})}{g'(\lambda_{i})}\). But, by interlacing (ascending order), \(f(\lambda_{i})=(-1)^{n-i+1}\prod_{j}|\lambda_{i}-l_{j}|\) and \(g'(\lambda_{i})=(-1)^{n-i}\prod_{j\neq i}|\lambda_{i}-\lambda_{j}|\) oppose in sign, so \(y_{i}^{2}\geq 0\) and \(\exists y\).

Pf where an ew \(\lambda_{i}\) recurs with multiplicity k: divide \((t-\lambda_{i})^{k}\) out of everything, proceed as usual.

EW of special matrices

Real A: complex ew in pairs

If A is real, pA has real coefficients.

So, if \(\lambda\) is an ew of A, so is \(\bar{\lambda}\): complex roots of \(p_{A}\) appear in conjugate pairs. An ew is simple if its algebraic multiplicity is 1.

Triangular and diagonal A

For triangular A, from \(\det(A-\lambda I)=0\), the ew are on the diagonal. For any square A, \(\operatorname{tr}(A)=\sum_{i}A_{i,i}=\sum_{i}\lambda_{i}\) and \(\prod_{i}\lambda_{i}=\det(A)\) (from the coefficients of \(p_{A}\)). For diagonal A, the eigenpairs are \((A_{i,i},ke_{i})\).
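A quick check of these identities on a made-up triangular matrix:

```python
import numpy as np

# For triangular T, the ew are the diagonal entries;
# tr(T) = Σ λ_i and det(T) = Π λ_i.
T = np.array([[1.0, 5.0, 7.0],
              [0.0, 2.0, 9.0],
              [0.0, 0.0, 3.0]])

ew = np.sort(np.linalg.eigvals(T))
assert np.allclose(ew, [1.0, 2.0, 3.0])         # diagonal of T
assert np.isclose(np.trace(T), ew.sum())
assert np.isclose(np.linalg.det(T), ew.prod())
```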

Nilpotent matrix A

A is a nilpotent op (see algebra ref): \(A^{k}=0\) for some k. So all ew are 0; \(\det(A)\) and \(\operatorname{tr}(A)\) are 0. Any singular A is a product of nilpotent matrices.

Singular A

0 is an ew.

Semidefiniteness and hermitianness

See other sections.

Generalized eigenvalue problem

\(Az=\lambda Bz\); \(\det(A-\lambda B)=0\). \(\{A-\lambda B\}\) is called a pencil.

General Rayleigh quotient of x

\(R(x) = \frac{x^{*}Ax}{x^{*}Bx}\). \(\nabla R(x)=\frac{2(Ax-R(x)Bx)}{x^{*}Bx}\) is 0 exactly where \(Az=\lambda Bz\) with \(\lambda=R(x)\), x the ev.

Of M

M is tall, thin, with independent columns. \ \(R(M) = \operatorname{tr}((M^{*}BM)^{-1} (M^{*}AM))\). Using the svd \(M = U\Sigma V^{*}\):\ get \(R(M) = \operatorname{tr}(U^{*}B^{-1}AU)\): the same form as the ordinary Rayleigh quotient of \(B^{-1}A\): see the ev section. Maximized when the columns of U are the top ev of \(B^{-1}A\).

Reductions to common ew problem

If B is invertible: \(B^{-1}Az=\lambda z\): now an ordinary ew problem. We don't want to invert B explicitly (cost, conditioning): so solve some other way.

So, if B is invertible, symmetric, +ve definite: \(R(x) = \frac{x^{*}Ax}{x^{*}B^{1/2}B^{1/2}x}\); taking \(z=B^{1/2}x\), get \(R(z) = \frac{z^{*}B^{-1/2}AB^{-1/2}z}{z^{*}z}\). The max of R(x) is achieved somewhere in R(z) as \(z\mapsto x\) is a 1-to-1 map. The ew of \(B^{-1/2}AB^{-1/2}\) are easier to find as it is symmetric.
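A sketch of this reduction, computing \(B^{-1/2}\) through the eigendecomposition of B (the matrices are illustrative):

```python
import numpy as np

# Reduce A z = λ B z (B symmetric positive definite) to the ordinary
# symmetric problem on C = B^{-1/2} A B^{-1/2}.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 0.0],
              [0.0, 1.0]])

w, Q = np.linalg.eigh(B)                 # B = Q diag(w) Q^T, w > 0
B_inv_half = Q @ np.diag(w ** -0.5) @ Q.T

C = B_inv_half @ A @ B_inv_half          # symmetric
ew_C = np.sort(np.linalg.eigvalsh(C))

# C is similar to B^{-1} A, so the generalized ew agree.
ew_gen = np.sort(np.linalg.eigvals(np.linalg.inv(B) @ A))
assert np.allclose(ew_C, ew_gen)
```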

Matrix to matrix functions

\((I-A)^{-1}\): Neumann series for square A

Aka Neumann series. \((I-A)^{-1}=\sum_{k=0}^{\infty}A^{k}\), if it converges. It converges iff \(\rho(A)<1\); in particular, when \(\|A\|<1\) for some induced norm.
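A sketch of the partial sums converging to the inverse (matrix chosen with small spectral radius):

```python
import numpy as np

# Partial sums of Σ A^k converge to (I - A)^{-1} when ρ(A) < 1.
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])           # spectral radius well below 1

S = np.zeros_like(A)
term = np.eye(2)
for _ in range(100):                 # S accumulates I + A + A^2 + ...
    S += term
    term = term @ A

assert np.allclose(S, np.linalg.inv(np.eye(2) - A))
```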

Matrix exponentiation for square A

\(e^{A}=\sum_{k=0}^{\infty}\frac{A^{k}}{k!}\): always converges. Also used to define \(\log A\). If A is nilpotent, the series is finite.

Using the expansion and aggregating suitably: \(e^{aX}e^{bX}=e^{(a+b)X}\); if \(XY=YX\): \(e^{X}e^{Y}=e^{X+Y}\).

\(e^{X}e^{-X}=I\): so the exponential of X is always invertible.

\(e^{YXY^{-1}}=Ye^{X}Y^{-1}\).

\(e^{X^{*}} = (e^{X})^{*}\).

If D is diagonal, it is easy to get \(e^{D}\): exponentiate the diagonal entries. Thence easily get \(e^{A}=Xe^{D}X^{-1}\) if \(A=XDX^{-1}\) (A diagonalizable).
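A sketch comparing the diagonalization route against a truncated power series (matrix and truncation length are assumptions):

```python
import numpy as np

# e^A via A = X D X^{-1}: e^A = X e^D X^{-1}; cross-check with Σ A^k/k!.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

D, X = np.linalg.eig(A)
expA_diag = X @ np.diag(np.exp(D)) @ np.linalg.inv(X)

expA_series = np.zeros_like(A)
term = np.eye(2)
for k in range(30):                  # 30 terms are plenty for this A
    expA_series += term
    term = term @ A / (k + 1)

assert np.allclose(expA_diag, expA_series)
```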

Relationship among ew

As \(e^{\lambda(A)}=\lambda(e^{A})\), \(\det(e^{A})=\prod_{i}e^{\lambda_{i}(A)}=e^{\operatorname{tr}(A)}\).
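A numerical check of this identity (random symmetric matrix with a fixed seed is an assumption):

```python
import numpy as np

# det(e^A) = e^{tr(A)}, with e^A computed via the eigendecomposition
# of a symmetric (hence diagonalizable) A.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                       # symmetric

w, Q = np.linalg.eigh(A)                # A = Q diag(w) Q^T
expA = Q @ np.diag(np.exp(w)) @ Q.T

assert np.isclose(np.linalg.det(expA), np.exp(np.trace(A)))
```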