- Strict Triangular Form
A system is said to be in strict triangular form if, in the kth equation, the coefficients of the first k-1 variables are all zero and the coefficient of $x_{k}$ is nonzero.
- Elementary Row Operations
- I. Interchange two rows.
- II. Multiply a row by a nonzero real number.
- III. Replace a row by its sum with a multiple of another row.
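The three operations can be sketched in Python on a matrix stored as a list of lists of floats; the function names here are illustrative, not from the text:

```python
def interchange(A, i, j):
    """Type I: interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

def scale(A, i, alpha):
    """Type II: multiply row i by a nonzero scalar alpha."""
    A[i] = [alpha * x for x in A[i]]

def add_multiple(A, i, j, alpha):
    """Type III: replace row i by its sum with alpha times row j."""
    A[i] = [x + alpha * y for x, y in zip(A[i], A[j])]

A = [[1.0, 2.0], [3.0, 4.0]]
add_multiple(A, 1, 0, -3.0)   # eliminate the entry below the first pivot
print(A)                      # [[1.0, 2.0], [0.0, -2.0]]
```

These are exactly the moves used to bring a system to strict triangular (and later row echelon) form.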
-
Row Echelon Form
A matrix is said to be in row echelon form if:
- The first nonzero entry in each nonzero row is 1.
- If row k does not consist entirely of zeros, the number of leading zero entries in row k+1 is greater than the number of leading zero entries in row k.
- If there are rows whose entries are all zeros, they are below the rows having nonzero entries.
-
Reduced Row Echelon Form
- The matrix is in row echelon form.
- The first nonzero entry in each row is the only nonzero entry in its column.
-
Equivalent Conditions for Nonsingularity
- A is nonsingular
- Ax=0 has only the trivial solution x=0.
- A is row equivalent to I
-
Triangular Factorization (A=LU)
$$ AB= \begin{pmatrix} \vec{a}_{1}B \\ \vec{a}_{2}B \\ \vdots \\ \vec{a}_{m}B \end{pmatrix} $$
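The factorization itself is not spelled out above; a minimal sketch of the standard approach (Gaussian elimination using only type III operations, assuming every pivot encountered is nonzero, so no row interchanges are needed) in Python:

```python
def lu(A):
    """Factor A = LU without pivoting.

    Assumes each pivot U[k][k] is nonzero when it is reached.
    L is unit lower triangular and records the elimination
    multipliers; U is the resulting upper triangular matrix.
    """
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]     # multiplier that zeros U[i][k]
            L[i][k] = m               # store it in L
            U[i] = [u - m * v for u, v in zip(U[i], U[k])]
    return L, U

L, U = lu([[2.0, 1.0], [4.0, 5.0]])
print(L)  # [[1.0, 0.0], [2.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 3.0]]
```

Multiplying L and U back together recovers the original matrix, which is a quick way to check the factorization.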
-
Block Multiplication
- $$ A\begin{pmatrix}B_{1}&B_{2}\end{pmatrix}= \begin{pmatrix}AB_{1} & AB_{2}\end{pmatrix} $$
- $$ \begin{pmatrix} A_{1} \\ A_{2} \end{pmatrix}B= \begin{pmatrix} A_{1}B \\ A_{2}B \end{pmatrix} $$
- $$ \begin{pmatrix} A_{1} & A_{2} \end{pmatrix} \begin{pmatrix} B_{1} \\ B_{2} \end{pmatrix} =A_{1}B_{1}+A_{2}B_{2} $$
- $$ \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}= \begin{pmatrix} A_{11}B_{11}+A_{12}B_{21} & A_{11}B_{12}+A_{12}B_{22} \\ A_{21}B_{11}+A_{22}B_{21} & A_{21}B_{12}+A_{22}B_{22} \end{pmatrix} $$
- Let
$$
A=\begin{pmatrix}
A_{11} & \cdots & A_{1t} \\
\vdots & & \vdots \\
A_{s1} & \cdots & A_{st}
\end{pmatrix},\quad
B=\begin{pmatrix}
B_{11} & \cdots & B_{1r} \\
\vdots & & \vdots \\
B_{t1} & \cdots & B_{tr}
\end{pmatrix}
$$
Then we have
$$
AB=\begin{pmatrix}
C_{11} & \cdots & C_{1r} \\
\vdots & & \vdots \\
C_{s1} & \cdots & C_{sr}
\end{pmatrix}
$$
where
$$
C_{ij}=\sum_{k=1}^{t}A_{ik}B_{kj}
$$
The number of columns of $A_{ik}$ must equal the number of rows of $B_{kj}$.
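The block formula $C_{ij}=\sum_k A_{ik}B_{kj}$ can be verified numerically; a small sketch in Python partitioning two 4×4 matrices into 2×2 blocks (the helper names are illustrative):

```python
def matmul(A, B):
    """Ordinary matrix product of lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(X, Y):
    """Entrywise sum of two matrices."""
    return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]

def block(M, r0, r1, c0, c1):
    """Submatrix M[r0:r1, c0:c1]."""
    return [row[c0:c1] for row in M[r0:r1]]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[2, 0, 1, 0], [1, 3, 0, 1], [0, 1, 2, 0], [1, 0, 0, 2]]

A11, A12 = block(A, 0, 2, 0, 2), block(A, 0, 2, 2, 4)
A21, A22 = block(A, 2, 4, 0, 2), block(A, 2, 4, 2, 4)
B11, B12 = block(B, 0, 2, 0, 2), block(B, 0, 2, 2, 4)
B21, B22 = block(B, 2, 4, 0, 2), block(B, 2, 4, 2, 4)

# C_ij = A_i1 B_1j + A_i2 B_2j, then reassemble and compare
C11 = madd(matmul(A11, B11), matmul(A12, B21))
C12 = madd(matmul(A11, B12), matmul(A12, B22))
C21 = madd(matmul(A21, B11), matmul(A22, B21))
C22 = madd(matmul(A21, B12), matmul(A22, B22))
C = [r1 + r2 for r1, r2 in zip(C11, C12)] + \
    [r1 + r2 for r1, r2 in zip(C21, C22)]
print(C == matmul(A, B))  # True
```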
-
Outer Product Expansions
- inner product (scalar product): $x^{T}y$
- outer product: $xy^{T}$
- outer product expansion: if X is an m×n matrix and Y is a k×n matrix, then
$$ XY^{T}=\begin{pmatrix} x_{1} & x_{2} & \cdots & x_{n} \end{pmatrix} \begin{pmatrix} y_{1}^{T} \\ y_{2}^{T} \\ \vdots \\ y_{n}^{T} \end{pmatrix}= x_{1}y_{1}^{T}+x_{2}y_{2}^{T}+\cdots+x_{n}y_{n}^{T} $$
- $$ \begin{pmatrix} a_{1} & a_{2} & \cdots & a_{n} \end{pmatrix}^{T}= \begin{pmatrix} a_{1}^{T} \\ a_{2}^{T} \\ \vdots \\ a_{n}^{T} \end{pmatrix} $$
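The expansion can be checked on a small case; a Python sketch with X and Y both 2×2, so $XY^{T}=x_{1}y_{1}^{T}+x_{2}y_{2}^{T}$:

```python
def outer(x, y):
    """Outer product x y^T of two vectors given as flat lists."""
    return [[xi * yj for yj in y] for xi in x]

def matmul(X, Y):
    """Ordinary matrix product of lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

X = [[1, 2], [3, 4]]    # columns x1=(1,3), x2=(2,4)
YT = [[5, 7], [6, 8]]   # Y^T: its rows are y1^T=(5,7), y2^T=(6,8)

terms = [outer([1, 3], [5, 7]), outer([2, 4], [6, 8])]
expansion = [[sum(t[i][j] for t in terms) for j in range(2)]
             for i in range(2)]
print(expansion)                    # [[17, 23], [39, 53]]
print(expansion == matmul(X, YT))   # True
```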
Notation: the determinant of a matrix A is written det(A) or |A|.
-
Definition of cofactor
Let A=($a_{ij}$) be an n×n matrix, and let $M_{ij}$ denote the (n-1)×(n-1) matrix obtained from A by deleting the row and column containing $a_{ij}$. The determinant of $M_{ij}$ is called the minor of $a_{ij}$. We define the cofactor $A_{ij}$ of $a_{ij}$ by
$$A_{ij}=(-1)^{i+j}\det(M_{ij})$$
e.g., the cofactor expansion of det(A) along row i is
$$\det(A)=a_{i1}A_{i1}+a_{i2}A_{i2}+\cdots+a_{in}A_{in}$$
We can compute det(A) by a cofactor expansion using any row or column.
-
determinant
The determinant of an n×n matrix A, det(A), is a scalar associated with the matrix A that is defined inductively as
$$ \det(A)= \left\{ \begin{array}{ll} a_{11} & n=1 \\ a_{11}A_{11}+a_{12}A_{12}+\cdots+a_{1n}A_{1n} & n>1 \end{array} \right. $$
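The inductive definition translates directly into a recursive function (exponential time, so only practical for small n); a sketch in Python:

```python
def det(A):
    """Determinant by cofactor expansion along the first row,
    following the inductive definition above."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 0 and column j
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        # with 0-based j, (-1)**j is the sign (-1)^(1+(j+1))
        total += (-1) ** j * A[0][j] * det(M)
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 2]]))  # 13
```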
-
Theorem 2.1.1
: If A is an n×n matrix with n ≥ 2, then det(A) can be expressed as a cofactor expansion using any row or column of A.
-
Theorem 2.1.2
: If A is an n×n matrix, then $\det(A^{T})=\det(A)$.
-
Theorem 2.1.3
: If A is an n×n triangular matrix, then det(A) equals the product of the diagonal elements of A.
-
Theorem 2.1.4
: Let A be an n×n matrix.
- If A has a row or column consisting entirely of zeros, then det(A)=0.
- If A has two identical rows or two identical columns, then det(A)=0.
If E is an elementary matrix, then
$$\det(EA)=\det(E)\det(A)$$
where
$$ \det(E)= \left\{ \begin{array}{ll} -1 & E \text{ is of type I} \\ \alpha \neq 0 & E \text{ is of type II} \\ 1 & E \text{ is of type III} \end{array} \right. $$
-
Theorem 2.2.2
: An n×n matrix A is singular if and only if det(A)=0.
-
another way to calculate det(A)
- Reduce A to row echelon form: $U=E_{k}E_{k-1}\cdots E_{1}A$.
- If the last row of U consists entirely of zeros, A is singular and det(A)=0.
- Otherwise, we can reduce A to triangular form T using only operations I and III: $T=E_{m}E_{m-1}\cdots E_{1}A$, and $\det(A)=\pm\det(T)=\pm t_{11}t_{22}\cdots t_{nn}$; the sign is positive if row operation I has been used an even number of times and negative otherwise.
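The steps above can be sketched in Python: use only type I and type III operations, flip the sign on each interchange, and multiply the diagonal of the triangular result. (Choosing the first nonzero entry as the pivot is just one workable choice.)

```python
def det_by_elimination(A):
    """det(A) via reduction to triangular form with operations
    I (row interchange, sign flip) and III (add a multiple of a row)."""
    n = len(A)
    U = [row[:] for row in A]
    sign = 1
    for k in range(n):
        # find a nonzero pivot in column k at or below row k
        p = next((i for i in range(k, n) if U[i][k] != 0), None)
        if p is None:
            return 0          # no pivot: A is singular, det(A) = 0
        if p != k:
            U[k], U[p] = U[p], U[k]   # type I
            sign = -sign
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            U[i] = [a - m * b for a, b in zip(U[i], U[k])]   # type III
    d = sign
    for k in range(n):
        d *= U[k][k]          # product of the diagonal of T
    return d

print(det_by_elimination([[0.0, 2.0], [3.0, 4.0]]))  # -6.0
```

This runs in O(n^3), unlike the exponential-time cofactor expansion.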
-
Theorem 2.2.3
: If A and B are n×n matrices, then $$\det(AB)=\det(A)\det(B)$$
-
Adjoint
of an n×n matrix A:
$$ \operatorname{adj} A=\begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix} $$
To calculate adj A, we replace each element of A by its cofactor and then transpose the result.
$$ A^{-1}=\frac{1}{\det(A)}\operatorname{adj} A \quad \text{when } \det(A) \neq 0 $$
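A minimal sketch of the adjoint formula in Python, reusing cofactor-expansion determinants (fine for small matrices; elimination-based methods are preferred in practice):

```python
def det(A):
    """Cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def adj(A):
    """adj A: replace each entry by its cofactor, then transpose."""
    n = len(A)
    C = [[(-1) ** (i + j)
          * det([row[:j] + row[j + 1:]
                 for k, row in enumerate(A) if k != i])
          for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*C)]   # transpose

def inverse(A):
    """A^{-1} = adj(A) / det(A); assumes det(A) != 0."""
    d = det(A)
    return [[a / d for a in row] for row in adj(A)]

print(inverse([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```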
-
Theorem 2.3.1
: Cramer's Rule
Let A be an n×n nonsingular matrix, and let $b \in R^{n}$. Let $A_{i}$ be the matrix obtained by replacing the ith column of A by b. If x is the unique solution of Ax=b, then
$$ x_{i}=\frac{\det(A_{i})}{\det(A)} \quad \text{for } i=1,2,\ldots,n $$
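Cramer's rule translates directly into code; a self-contained Python sketch (cofactor determinants again, so small n only):

```python
def det(A):
    """Cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve Ax = b for nonsingular A: x_i = det(A_i)/det(A),
    where A_i is A with column i replaced by b."""
    d = det(A)
    x = []
    for i in range(len(A)):
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        x.append(det(Ai) / d)
    return x

# 2x + y = 3, x + 3y = 5
print(cramer([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```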
-
Vector Space Axioms
Let V be a set on which the operations of addition and scalar multiplication are defined. By this we mean that, with each pair of elements x and y in V, we can associate a unique element x+y that is also in V, and with each element x in V and each scalar $\alpha$, we can associate a unique element $\alpha x$ in V. The set V, together with the operations of addition and scalar multiplication, is said to form a vector space if the following axioms are satisfied:
- A1. x + y = y + x for any x and y in V.
- A2. (x+y)+z=x+(y+z) for any x, y, and z in V.
- A3. There exists an element 0 in V such that x + 0 = x for each x ∈ V.
- A4. For each x ∈ V, there exists an element −x in V such that x + (−x) = 0.
- A5. α(x + y) = αx + αy for each scalar α and any x and y in V.
- A6. (α + β)x = αx + βx for any scalars α and β and any x ∈ V.
- A7. (αβ)x = α(βx) for any scalars α and β and any x ∈ V.
- A8. 1 · x = x for all x ∈ V.
We will refer to the set V as the universal set for the vector space. Its elements are called vectors. We also have two closure properties:
- C1. If x ∈ V and α is a scalar, then αx ∈ V.
- C2. If x, y ∈ V, then x + y ∈ V.
-
definition of subspace
If S is a nonempty subset of a vector space V, and S satisfies the conditions
- (i) αx ∈ S whenever x ∈ S for any scalar α
- (ii) x + y ∈ S whenever x ∈ S and y ∈ S
then S is said to be a subspace of V.
note: Every subspace of a vector space is a vector space in its own right.
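As a numerical sanity check (not a proof), the two conditions can be tested on sample vectors; here for the hypothetical subset S = {(a, b) ∈ R² : a + b = 0}, which is not an example from the text:

```python
def in_S(v):
    """Membership test for S = {(a, b) in R^2 : a + b = 0}."""
    return v[0] + v[1] == 0

x, y = (1.0, -1.0), (2.5, -2.5)
alpha = 3.0
# (i) closed under scalar multiplication
print(in_S([alpha * c for c in x]))             # True
# (ii) closed under addition
print(in_S([c + d for c, d in zip(x, y)]))      # True
```

A proof would argue for arbitrary x, y in S and any scalar α; the code only spot-checks particular vectors.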