Matrix¶
Eigenvectors in ellipsoids¶
- Correspondence between the eigenvectors of a symmetric positive definite matrix A and the principal axes of the ellipsoid x^TAx=1
- Relationship between the ellipsoid's semi-axis lengths (radii) and the eigenvalues: the semi-axis along the i-th eigenvector has length 1/\sqrt{\lambda_i} (see the sketch below)
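A minimal sketch, assuming the ellipsoid is written as x^TAx=1 with A symmetric positive definite (notation mine): diagonalize A=Q\Lambda Q^T with orthonormal eigenvectors q_i and eigenvalues \lambda_i>0, and change coordinates to y=Q^Tx, so that

x^TAx = y^T\Lambda y = \sum_i \lambda_i y_i^2 = 1.

In the rotated coordinates this is an axis-aligned ellipsoid, so the semi-axis along the eigenvector q_i has length 1/\sqrt{\lambda_i}; larger eigenvalues correspond to shorter radii.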
symmetric positive semidefinite¶
Every symmetric positive semi-definite matrix A can be written as A = X^TX, but not every positive semi-definite matrix can, because X^TX is always symmetric. A counterexample takes \lambda > 0 and \vert \epsilon\vert < \lambda and builds a non-symmetric matrix whose quadratic form is still non-negative; such a matrix is positive semi-definite but cannot be written as A=X^TX.
refer to Proof of a matrix is positive semidefinite iff it can be written in the form X′X
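A concrete example of this kind (my own choice; not necessarily the matrix elided above): for \lambda>0 and any \epsilon\ne 0, take

A=\begin{pmatrix}\lambda & \epsilon\\ -\epsilon & \lambda\end{pmatrix},
\qquad
x^TAx=\lambda x_1^2+\epsilon x_1x_2-\epsilon x_2x_1+\lambda x_2^2=\lambda\Vert x\Vert^2\ge 0,

so the quadratic form is non-negative for every real x (in particular when \vert\epsilon\vert<\lambda), yet A is not symmetric; since X^TX is always symmetric, A cannot be written as X^TX.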
eigenvectors and eigenvalues from X^TX to XX^T¶
- If v is an eigenvector of XX^T with a nonzero eigenvalue, then X^Tv is an eigenvector of X^TX with the same eigenvalue (see the derivation below)
- The eigenvectors of X^TX with eigenvalue 0 are the nonzero vectors in the null space of X
refer to Eigenvectors and eigenvalues from X^TX to XX^T?
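A one-line derivation of the first bullet: if XX^Tv=\lambda v with \lambda\ne 0, then

X^TX(X^Tv)=X^T(XX^Tv)=\lambda\,(X^Tv),

and X^Tv\ne 0 because X(X^Tv)=\lambda v\ne 0, so X^Tv is an eigenvector of X^TX with the same nonzero eigenvalue. For the second bullet, X^TXv=0 implies \Vert Xv\Vert^2=v^TX^TXv=0, so the eigenvectors of X^TX with eigenvalue 0 are exactly the nonzero vectors in the null space of X.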
Can matrix exponentials be negative?¶
Refer to Can matrix exponentials ever be negative? If so, under what conditions?
One argument: if the matrix has non-negative off-diagonal entries (a Metzler matrix), then its matrix exponential is entrywise non-negative; matrices without this property can have exponentials with negative entries.
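A numeric illustration of both cases (a sketch; the matrices are my own choices, using scipy.linalg.expm):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# A rotation generator: an off-diagonal entry is negative, and
# exp(t*A) = [[cos t, -sin t], [sin t, cos t]] contains negative entries.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(expm(np.pi / 2 * A))          # ~ [[0, -1], [1, 0]]

# A Metzler matrix (non-negative off-diagonal entries): exp(B) is entrywise
# non-negative, since exp(B) = e^{-c} * exp(B + c*I) and B + c*I >= 0
# entrywise for c large enough.
B = np.array([[-2.0, 1.0],
              [ 3.0, -1.0]])
print(np.all(expm(B) >= 0))         # True
```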
Spectral Theorem¶
Refer to Brillinger, D. R. (1981). Time series: data analysis and theory (Vol. 36). SIAM.
Theorem 3.7.1¶
If \H is a J\times J Hermitian matrix, then

\H = \sum_{j=1}^{J} \mu_j \U_j \bar\U_j^T

where \mu_j is the j-th latent value of \H and \U_j is the corresponding latent vector.
Corollary 3.7.1¶
If \H is J\times J Hermitian, then it may be written \U\M\bar \U^T where \M = \diag\{\mu_j;j=1,\ldots,J\} and \U=[\U_1,\ldots,\U_J] is unitary. Also if \H is non-negative definite, then \mu_j\ge 0,j=1,\ldots,J.
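A quick numeric check of Corollary 3.7.1 (a sketch with a random Hermitian matrix; numpy.linalg.eigh returns the latent values and an orthonormal set of latent vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (X + X.conj().T) / 2                         # a Hermitian matrix

mu, U = np.linalg.eigh(H)                        # H = U diag(mu) U^*
M = np.diag(mu)
print(np.allclose(H, U @ M @ U.conj().T))        # True: H = U M (conj U)^T
print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: U is unitary
```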
Theorem 3.7.2¶
If \Z is J\times K, then

\Z = \sum_{j=1}^{\min(J,K)} \mu_j \U_j \bar\V_j^T

where \mu_j^2 is the j-th latent value of \Z\bar\Z^T (or \bar\Z^T\Z), \U_j is the j-th latent vector of \Z\bar\Z^T, \V_j is the j-th latent vector of \bar\Z^T\Z, and it is understood that \mu_j\ge 0.
Corollary 3.7.2¶
If \Z is J\times K, then it may be written \U\M\bar\V^T where the J\times K \M=\diag\{\mu_j:j=1,\ldots,J\}, the J\times J \U is unitary and K\times K \V is also unitary.
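Corollary 3.7.2 is the singular value decomposition; a quick check with numpy.linalg.svd (a sketch; svd returns Vh, which plays the role of \bar\V^T):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))

U, mu, Vh = np.linalg.svd(Z)             # Z = U M Vh, mu >= 0 in descending order
M = np.zeros((3, 5))
M[:3, :3] = np.diag(mu)                  # embed the singular values in the 3x5 M
print(np.allclose(Z, U @ M @ Vh))        # True
# mu_j^2 are the latent values of Z (conj Z)^T:
print(np.allclose(mu**2, np.linalg.eigvalsh(Z @ Z.conj().T)[::-1]))  # True
```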
Theorem 3.7.4¶
Let \Z be J\times K. Among J\times K matrices \A of rank L\le J,K
is minimized by
The minimum achieved is \mu_{j+L}^2.
Corollary 3.7.4¶
The above choice of \A also minimizes
for \A of rank L\le J,K. The minimum achieved is
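For comparison, the standard Eckart–Young form of this low-rank approximation result, written in the notation of Theorem 3.7.2 (this is the textbook statement, which may differ in detail from Brillinger's exact displays): with the truncated expansion

\A_L=\sum_{l=1}^{L}\mu_l\U_l\bar\V_l^T,

\A_L minimizes both \max_{\Vert x\Vert=1}\Vert(\Z-\A)x\Vert and \sum_{j,k}\vert Z_{jk}-A_{jk}\vert^2 over all \A of rank at most L, and the minima achieved are \mu_{L+1} and \sum_{l>L}\mu_l^2 respectively.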
Orthogonal matrix¶
For an orthogonal matrix, both the columns and the rows form orthonormal sets: Q^TQ=I implies QQ^T=I for square Q (see the argument below).
Refer to Column Vectors orthogonal implies Row Vectors also orthogonal?
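A short way to see this: if the columns of the square matrix Q are orthonormal, then Q^TQ=I, so Q is invertible with Q^{-1}=Q^T, and hence

QQ^T = QQ^{-1} = I,

which says exactly that the rows of Q are orthonormal. (Squareness matters: a rectangular matrix can have orthonormal columns without having orthonormal rows.)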
Decomposition¶
Cholesky¶
A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T,

A = LL^T
This decomposition can be used to convert the linear system Ax=b into a pair of triangular systems, Ly=b,L^Tx=y, which can be solved by forward and back-substitution.
If the matrix A is near singular, it is sometimes possible to reduce the condition number and recover a more accurate solution vector x by scaling as

(SAS)(S^{-1}x) = Sb
where S is a diagonal matrix whose elements are given by S_{ii}=1/\sqrt{A_{ii}}. This scaling is also known as Jacobi preconditioning.
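A sketch of the two-triangular-solve procedure in SciPy (the matrix and right-hand side are my own toy example):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                     # symmetric positive definite
b = np.array([1.0, 2.0])

L = cholesky(A, lower=True)                    # A = L L^T
y = solve_triangular(L, b, lower=True)         # forward substitution: L y = b
x = solve_triangular(L.T, y, lower=False)      # back substitution:    L^T x = y
print(np.allclose(A @ x, b))                   # True
```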
QR¶
A general rectangular M-by-N matrix A has a QR decomposition into the product of an orthogonal M-by-M square matrix Q, where Q^TQ=I, and an M-by-N right-triangular matrix R,

A = QR
This decomposition can be used to convert the linear system Ax=b into the triangular system Rx=Q^Tb, which can be solved by back-substitution.
Another use of the QR decomposition is to compute an orthonormal basis for a set of vectors. The first N columns of Q form an orthonormal basis for the range of A, ran(A), when A has full column rank.
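A sketch of solving a square system through the QR route with NumPy/SciPy (random data as a placeholder):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

Q, R = np.linalg.qr(A)                          # A = Q R, Q^T Q = I, R right-triangular
x = solve_triangular(R, Q.T @ b, lower=False)   # back-substitution on R x = Q^T b
print(np.allclose(A @ x, b))                    # True
```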
\rk(AB) \leq \rk(A)¶
The range of a matrix M is

\calR(M)=\{\y \mid \y=M\x \text{ for some vector } \x\}.
Recall that the rank of a matrix M is the dimension of the range \calR(M) of the matrix M. So we have

\rk(M)=\dim\calR(M).
In general, if a vector space V is a subset of a vector space W, then we have

\dim V \le \dim W.
Thus, it suffices to show that the vector space \calR(AB) is a subset of the vector space \calR(A).
Consider any vector \y\in\calR(AB). Then there exists a vector \x\in\R^l such that \y=(AB)\x by the definition of the range.
Let \z=B\x\in\R^n.
Then we have

\y=(AB)\x=A(B\x)=A\z,

and thus the vector \y is in \calR(A). Thus \calR(AB) is a subset of \calR(A) and we have

\rk(AB)=\dim\calR(AB)\le\dim\calR(A)=\rk(A).
refer to Rank of the Product of Matrices AB is Less than or Equal to the Rank of A
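A quick numeric sanity check of the inequality (a sketch; the shapes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 6))        # rank 4 generically
B = rng.standard_normal((6, 3))                                      # rank 3 generically
print(np.linalg.matrix_rank(A @ B), "<=", np.linalg.matrix_rank(A))  # 3 <= 4
```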
\rk(A)=\rk(AA^T)=\rk(A^TA)¶
Note that the left null space (the null space of A^T) is the orthogonal complement to the column space of A, that is,

\mathrm{Nul}(A^T)=\mathrm{Col}(A)^{\perp}.
Therefore, \mathrm{Nul}(A^T)\cap \mathrm{Col}(A)=\{0\}; the two spaces intersect only trivially.
Now consider the matrix A^TA. Then

\mathrm{Col}(A^TA)=\{A^TA\x\}=\{A^T\y : \y\in\mathrm{Col}(A)\}=A^T(\mathrm{Col}(A)).
But since the null space of A^T intersects \mathrm{Col}(A) only trivially, \mathrm{Col}(A^TA) must have the same dimension as \mathrm{Col}(A), which gives \rk(A^TA)=\rk(A); applying the same argument to A^T gives \rk(AA^T)=\rk(A^T)=\rk(A).
Refer to Christopher A. Wong (https://math.stackexchange.com/users/22059/christopher-a-wong), Rank of product of a matrix and its transpose, URL (version: 2012-10-16), maybe also Prove \rk(A^TA)=\rk(A) for any A\in M_{m\times n}
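A numeric check of the rank equalities (a sketch; the matrix is an arbitrary rank-2 example):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 7))   # a 5x7 matrix of rank 2
print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A @ A.T),
      np.linalg.matrix_rank(A.T @ A))                           # 2 2 2
```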
Only real eigenvalues in symmetric matrices¶
refer to The Case of Complex Eigenvalues
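A short argument: let A be real symmetric and Ax=\lambda x with x\ne 0 (allowing complex \lambda and x). Then

\lambda\,\bar x^Tx=\bar x^TAx=(A\bar x)^Tx=\overline{Ax}^{\,T}x=\bar\lambda\,\bar x^Tx,

and since \bar x^Tx=\Vert x\Vert^2>0, it follows that \lambda=\bar\lambda, i.e. \lambda is real.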
Calculate the Inverse of a 3x3 matrix¶
By Adjugate Matrix (see the code sketch after this list):
- Check that the determinant of the matrix is nonzero; otherwise no inverse exists.
- Transpose the original matrix.
- Find the determinant of each of the 2x2 minor matrices.
- Create the matrix of cofactors, applying the checkerboard pattern of signs (-1)^{i+j}.
- Divide each term of the adjugate matrix by the determinant.
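A sketch of the adjugate procedure in code (the function name and the test matrix are my own; np.linalg.det handles the 2x2 minors and the determinant check):

```python
import numpy as np

def inverse_3x3_adjugate(A):
    """Invert a 3x3 matrix as adj(A) / det(A), via 2x2 minors and cofactor signs."""
    A = np.asarray(A, dtype=float)
    det = np.linalg.det(A)
    if np.isclose(det, 0.0):
        raise ValueError("matrix is singular, no inverse exists")
    cof = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # drop row i, column j
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)     # checkerboard sign
    return cof.T / det   # adjugate = transpose of the cofactor matrix

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
print(np.allclose(inverse_3x3_adjugate(A) @ A, np.eye(3)))   # True
```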
By Linear Row Reduction