
화공수학 (Engineering Mathematics) Ch. 6: Linear Algebra

Part B. Linear Algebra, Vector Calculus
Chap. 6: Linear Algebra; Matrices, Vectors, Determinants, Linear Systems of Equations

- Theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

- Vectors, matrices, determinants, …

6.1. Basic Concepts
- Basic concepts and rules of matrix and vector algebra
- Matrix: rectangular array of numbers (or functions), called its elements or entries, arranged in rows and columns

Reference: "Matrix Computations" by G.H. Golub, C.F. Van Loan (1996)

Example of a 2 x 3 matrix:
$$\begin{bmatrix} 4 & 0 & 3 \\ 1 & 2 & 5 \end{bmatrix}$$

Ex. 1) 5x - 2y + z = 0
       3x + 4z = 0

Coefficient matrix:
$$A = \begin{bmatrix} 5 & -2 & 1 \\ 3 & 0 & 4 \end{bmatrix}$$

A system of simultaneous equations in n unknowns:
$$f_1(x_1, x_2, \ldots, x_n) = 0,\quad f_2(x_1, x_2, \ldots, x_n) = 0,\quad \ldots,\quad f_n(x_1, x_2, \ldots, x_n) = 0$$

A linear system of n equations in n unknowns:
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$

In matrix form, $A\,x = b$ with
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},\quad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},\quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$

General Notations and Concepts
$$A = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
Rows correspond to equations, columns to variables.
- m x n matrix; if m = n: square matrix
- rectangular matrix: a matrix that is not square
- a_ii: principal or main diagonal of the matrix (important for simultaneous linear equations)
- Notation: matrices A, B; vectors a, b

Vectors
Row vector: $b = [b_1 \;\; \cdots \;\; b_m]$
Column vector: $c = \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix}$

Transposition
Transpose of a vector: $b^T = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$ (a row vector becomes a column vector, and vice versa)
Transpose of a matrix: $A\,(m \times n) \;\rightarrow\; A^T\,(n \times m)$
$$A^T = [a_{kj}] = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}$$

Square matrices
Symmetric matrices: $A^T = A$ (i.e. $a_{kj} = a_{jk}$)
Skew-symmetric matrices: $A^T = -A$ (i.e. $a_{kj} = -a_{jk}$)

Properties of transpose:
$$(A^T)^T = A,\quad (A + B)^T = A^T + B^T,\quad (kA)^T = kA^T,\quad (AB)^T = B^T A^T,\quad (ABC)^T = C^T B^T A^T$$

Equality of Matrices: A = B requires the same size and $a_{ij} = b_{ij}$ for all entries.

Matrix Addition: same size, $C = A + B$ with $c_{ij} = a_{ij} + b_{ij}$
$$A + B = B + A,\quad (U + V) + W = U + (V + W),\quad A + 0 = A,\quad A + (-A) = 0$$

Scalar Multiplication:
$$c(A + B) = cA + cB,\quad (c + k)A = cA + kA,\quad c(kA) = (ck)A,\quad (-k)A = -(kA)$$

6.2. Matrix Multiplication
- Multiplication of matrices by matrices (number of columns of the 1st factor A = number of rows of the 2nd factor B)
$$A\,(m \times n)\;\, B\,(n \times l) = C\,(m \times l),\qquad c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
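As an illustration of the entry formula above, here is a minimal Python sketch; the function name `matmul` and the sample matrices are illustrative only, not taken from the notes.

```python
# Minimal sketch of the product formula c_ij = sum_k a_ik * b_kj.

def matmul(A, B):
    """Multiply an m x n matrix A by an n x l matrix B (given as lists of rows)."""
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("columns of A must equal rows of B")
    l = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(l)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
print(matmul(B, A))   # [[23, 34], [31, 46]]  -> AB != BA in general
```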

Difference from Multiplication of Numbers
- Matrix multiplication is not commutative in general: $BA \neq AB$
- $AB = 0$ does not necessarily imply $A = 0$ or $B = 0$ or $BA = 0$
- $AC = AD$ does not necessarily imply $C = D$ (even when $A \neq 0$)

$$k(AB) = (kA)B = A(kB),\quad A(BC) = (AB)C,\quad (A + B)C = AC + BC,\quad C(A + B) = CA + CB$$

Special Matrices

Upper triangular matrix:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$

Lower triangular matrix:
$$\begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

Banded matrix (nonzero entries only near the main diagonal), e.g. tridiagonal:
$$\begin{bmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{bmatrix}$$
Banded (especially tridiagonal) matrices result from finite difference solutions of PDEs or ODEs.

A matrix can be decomposed into lower and upper triangular factors: LU decomposition.

Special Matrices:
Symmetric matrix: $a_{ij} = a_{ji}$
Diagonal matrix: $a_{ii} \neq 0,\; a_{ij} = 0 \;(i \neq j)$
Identity (or unit) matrix $I$: $I_{ii} = 1,\; I_{ij} = 0 \;(i \neq j)$

Inner Product of Vectors
$$a \cdot b = a^T b = [a_1 \;\; a_2 \;\; \cdots \;\; a_n] \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = \sum_{k=1}^{n} a_k b_k$$

Product in Terms of Row and Column Vectors: $c_{jk} = a_j \cdot b_k$ (jth row vector of A times kth column vector of B)
$$A\,(m \times n)\;\, B\,(n \times p) = C\,(m \times p),\qquad AB = \begin{bmatrix} a_1 \cdot b_1 & a_1 \cdot b_2 & \cdots & a_1 \cdot b_p \\ a_2 \cdot b_1 & a_2 \cdot b_2 & \cdots & a_2 \cdot b_p \\ \vdots & & & \vdots \\ a_m \cdot b_1 & a_m \cdot b_2 & \cdots & a_m \cdot b_p \end{bmatrix}$$

Linear Transformations
$$y = A x,\quad x = B w \;\;\Rightarrow\;\; y = A B w = C w \qquad (C = AB)$$

6.3. Linear Systems of Equations: Gauss Elimination
- The most important practical use of matrices: the solution of linear systems of equations

Linear System, Coefficient Matrix, Augmented Matrix
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$

$$A\,x = b,\qquad A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix},\quad x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},\quad b = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$

b = 0: homogeneous system (this has at least one solution, namely x = 0)
b ≠ 0: nonhomogeneous system

Augmented matrix:
$$\tilde{A} = \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{bmatrix}$$

Ex. 1) Geometric interpretation (two equations = two lines in the x1-x2 plane): existence of solutions
- Lines intersect: precisely one solution
- Same slope, different intercept (singular): no solution
- Same slope, same intercept (singular): infinitely many solutions
- Very close to singular: ill-conditioned system; difficult to identify the solution, extremely sensitive to round-off error
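To make the "very close to singular" case concrete, here is a small illustrative check; the 2 x 2 numbers are invented for illustration and are not from the notes.

```python
import numpy as np

# Two lines that are almost parallel (nearly singular coefficient matrix).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
print(np.linalg.solve(A, b))     # [1. 1.]

# A tiny change in the right-hand side moves the solution a lot.
b2 = np.array([2.0, 2.0002])
print(np.linalg.solve(A, b2))    # roughly [0. 2.]
print(np.linalg.cond(A))         # large condition number, ~4e4
```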

Gauss Elimination
- Standard method for solving linear systems
- The elimination of unknowns can be extended to large sets of equations.
  (Forward elimination of unknowns and solution through back substitution)

For a 4 x 4 system, normalizing the first row and eliminating x1 from the rows below gives
$$\begin{bmatrix} 1 & a_{12}' & a_{13}' & a_{14}' \\ 0 & a_{22}' & a_{23}' & a_{24}' \\ 0 & a_{32}' & a_{33}' & a_{34}' \\ 0 & a_{42}' & a_{43}' & a_{44}' \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1' \\ b_2' \\ b_3' \\ b_4' \end{bmatrix}$$

During these operations the first row is the pivot row (a11: pivot element); then the second row becomes the pivot row (a'22: pivot element), and so on.

Repeating the elimination for the remaining rows produces the upper triangular form
$$\begin{bmatrix} 1 & a_{12}' & a_{13}' & a_{14}' \\ 0 & 1 & a_{23}'' & a_{24}'' \\ 0 & 0 & 1 & a_{34}''' \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1' \\ b_2'' \\ b_3''' \\ b_4'''' \end{bmatrix}$$

Back substitution: the last equation gives $x_4 = b_4''''$; then $x_3 = b_3''' - a_{34}''' x_4$, and in general
$$x_i = b_i - \sum_{j=i+1}^{N} a_{ij} x_j$$
Repeat the back substitution, moving upward.
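A minimal Python sketch of the procedure just described (forward elimination followed by back substitution). It divides by the pivot instead of first normalizing each pivot row to 1, assumes nonzero pivots (no row interchanges), and the function name and test system are illustrative only.

```python
import numpy as np

def gauss_solve(A, b):
    """Naive Gauss elimination: forward elimination, then back substitution.
    Assumes the pivots A[k, k] stay nonzero (no row interchanges)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]

    # Back substitution, moving upward from the last equation.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
b = [4, 4, 4]
print(gauss_solve(A, b))           # [1. 1. 1.]
print(np.linalg.solve(A, b))       # same answer from the library routine
```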

Elementary Row Operations: Row-Equivalent Systems
- Interchange of two rows
- Addition of a constant multiple of one row to another row
- Multiplication of a row by a nonzero constant c

Overdetermined: more equations than unknowns
Determined: equations = unknowns
Underdetermined: fewer equations than unknowns

Consistent: the system has at least one solution.
Inconsistent: the system has no solution at all.

Homogeneous system: A x = 0. Nonhomogeneous system: A x = b.
Homogeneous solution: x_h; nonhomogeneous (particular) solution: x_p
x_p + x_h is also a solution of the nonhomogeneous system:
$$A(x_p + x_h) = A x_p + A x_h = b + 0 = b$$

Homogeneous systems are always consistent (the trivial solution x = 0 always exists).
Theorem: A homogeneous system possesses nontrivial solutions if the number m of equations is less than the number n of unknowns (m < n).

Echelon Form: Information Resulting from It

Ex.) Gauss elimination may end, for example, with the echelon forms
$$\begin{bmatrix} 3 & 2 & 1 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} \\ 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} & -2 \\ 0 & 0 & 0 & 12 \end{bmatrix}$$

At the end of the elimination the system has the echelon form
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ c_{22}x_2 + \cdots + c_{2n}x_n &= \tilde{b}_2 \\ &\;\vdots \\ k_{rr}x_r + \cdots + k_{rn}x_n &= \tilde{b}_r \\ 0 &= \tilde{b}_{r+1} \\ &\;\vdots \\ 0 &= \tilde{b}_m \end{aligned}$$

(a) No solution: if r < m and one of the numbers $\tilde{b}_{r+1}, \ldots, \tilde{b}_m$ is not zero.
(b) Precisely one solution: if r = n and $\tilde{b}_{r+1}, \ldots, \tilde{b}_m$, if present, are zero.
(c) Infinitely many solutions: if r < n and $\tilde{b}_{r+1}, \ldots, \tilde{b}_m$, if present, are zero.

Existence and uniqueness of the solutions: next issue.

Gauss-Jordan elimination: particularly well suited to digital computation

Starting from the upper triangular form obtained by Gauss elimination,
$$\begin{bmatrix} 1 & a_{12}' & a_{13}' & a_{14}' \\ 0 & 1 & a_{23}'' & a_{24}'' \\ 0 & 0 & 1 & a_{34}''' \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1' \\ b_2'' \\ b_3''' \\ b_4'''' \end{bmatrix}$$

multiplying the second row by $a_{12}'$ and subtracting it from the first row eliminates the entry above the second pivot:
$$\begin{bmatrix} 1 & 0 & a_{13}'' & a_{14}'' \\ 0 & 1 & a_{23}'' & a_{24}'' \\ 0 & 0 & 1 & a_{34}''' \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1'' \\ b_2'' \\ b_3''' \\ b_4'''' \end{bmatrix}$$

Continuing until all off-diagonal entries are cleared gives
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1'''' \\ b_2'''' \\ b_3'''' \\ b_4'''' \end{bmatrix} \quad\Rightarrow\quad x_1 = b_1'''',\; x_2 = b_2'''',\; x_3 = b_3'''',\; x_4 = b_4''''$$

- The most efficient approach is to eliminate all elements both above and below the pivot element, clearing to zero the entire column containing the pivot element, except of course for the 1 in the pivot position on the main diagonal.
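A minimal sketch of the Gauss-Jordan idea just described (normalize each pivot row, then clear the whole pivot column above and below). No pivoting is done, and the function name and numbers are illustrative only.

```python
import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan: reduce [A | b] to [I | x] by clearing the whole pivot
    column above and below the pivot.  Assumes nonzero pivots."""
    Ab = np.hstack([np.array(A, dtype=float),
                    np.array(b, dtype=float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        Ab[k] /= Ab[k, k]                  # make the pivot equal to 1
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]  # zero the rest of column k
    return Ab[:, -1]                       # the last column is the solution

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
b = [4, 4, 4]
print(gauss_jordan(A, b))                  # [1. 1. 1.]
```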

Gauss elimination:
- Forward elimination + back substitution (most of the computational effort is in the elimination step)
- Inefficient when solving equations with the same coefficient matrix A but with different right-hand-side vectors B

LU Decomposition:
- Well suited to situations where many right-hand sides B must be evaluated for a single A: the elimination step can be formulated so that it involves only operations on the matrix of coefficients A.
- Provides an efficient means to compute the matrix inverse.

(1) LU Decomposition
- LU decomposition separates the time-consuming elimination of A from the manipulations of the right-hand side B. Once A has been "decomposed", multiple right-hand-side vectors can be evaluated efficiently.

• Overview of LU Decomposition
Write the system as $A x = B$, i.e. $A x - B = 0$.
- Upper triangular form $U x = D$, i.e. $U x - D = 0$:
$$\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}$$
- Assume a lower triangular matrix with 1's on the diagonal:
$$L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}$$

so that
$$L(U x - D) = A x - B \quad\Rightarrow\quad L\,U = A,\quad L\,D = B$$

- Two-step strategy for obtaining solutions:
  LU decomposition step: A → L and U
  Substitution step: D from L D = B, then x from U x = D

• LU Decomposition Version of Gauss Elimination
a. Gauss Elimination:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

Use $f_{21} = a_{21}/a_{11}$ to eliminate $a_{21}$, $f_{31} = a_{31}/a_{11}$ to eliminate $a_{31}$, and $f_{32} = a_{32}'/a_{22}'$ to eliminate $a_{32}$, giving the upper triangular form. Store the f's:
$$A \;\rightarrow\; \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ f_{21} & a_{22}' & a_{23}' \\ f_{31} & f_{32} & a_{33}'' \end{bmatrix} \;\rightarrow\; L\,U,\qquad U = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22}' & a_{23}' \\ 0 & 0 & a_{33}'' \end{bmatrix},\quad L = \begin{bmatrix} 1 & 0 & 0 \\ f_{21} & 1 & 0 \\ f_{31} & f_{32} & 1 \end{bmatrix}$$

b. Forward-substitution step (L D = B):
$$d_1 = b_1,\qquad d_i = b_i - \sum_{j=1}^{i-1} a_{ij} d_j \quad\text{for } i = 2, 3, \ldots, n$$
(here the $a_{ij}$ below the diagonal are the stored factors $f_{ij}$)

Back-substitution step (U x = D), identical to the back-substitution phase of conventional Gauss elimination:
$$x_n = \frac{d_n}{a_{nn}},\qquad x_i = \frac{d_i - \sum_{j=i+1}^{n} a_{ij} x_j}{a_{ii}} \quad\text{for } i = n-1, n-2, \ldots, 1$$
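The two-step strategy above can be sketched as follows: `lu_decompose` stores the factors f_ij in the strict lower triangle as described, and `lu_solve` then performs the forward and back substitutions. No pivoting is done; the function names and test data are illustrative only.

```python
import numpy as np

def lu_decompose(A):
    """Gauss elimination that stores the factors f_ij = a_ij / a_jj in place:
    the strict lower triangle holds L, the upper triangle (with diagonal) holds U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k] = f                        # store the multiplier (entry of L)
            A[i, k + 1:] -= f * A[k, k + 1:]   # update the rest of the row (entries of U)
    return A

def lu_solve(LU, b):
    """Solve A x = b from the packed factors: L d = b (forward), then U x = d (back)."""
    n = len(b)
    d = np.array(b, dtype=float)
    for i in range(1, n):                      # forward substitution
        d[i] -= LU[i, :i] @ d[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (d[i] - LU[i, i + 1:] @ x[i + 1:]) / LU[i, i]
    return x

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
LU = lu_decompose(A)
for b in ([4, 4, 4], [1, 0, 0]):               # one decomposition, many right-hand sides
    print(lu_solve(LU, b))                     # [1. 1. 1.] and [0.75 -0.25 -0.25]
```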

6.4. Rank of a Matrix: Linear Dependence. Vector Space
- Key concepts for existence and uniqueness of solutions

Linear Independence and Dependence of Vectors
Given a set of m vectors a_1, ..., a_m (of the same size), a linear combination is
$$c_1 a_1 + c_2 a_2 + \cdots + c_m a_m \qquad (c_j:\ \text{any scalars})$$
Consider the relation
$$c_1 a_1 + c_2 a_2 + \cdots + c_m a_m = 0$$
(1) If it holds only when all $c_j = 0$: $a_1, \ldots, a_m$ are linearly independent.
(2) If it holds with scalars not all zero: linearly dependent. Then one vector can be expressed in terms of the others, e.g.
$$a_1 = k_2 a_2 + \cdots + k_m a_m \qquad (k_j = -c_j/c_1)$$
- A set of vectors can thus be reduced to a linearly independent subset.

Ex. 1) Linear independence and dependence
$$a_1 = [3\;\; 0\;\; 2\;\; 2],\quad a_2 = [-6\;\; 42\;\; 24\;\; 54],\quad a_3 = [21\;\; -21\;\; 0\;\; -15]$$
$$6a_1 - 0.5a_2 - a_3 = 0$$
so the three vectors are linearly dependent; the first two vectors $a_1$ and $a_2$ are linearly independent.
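A quick numerical check of Ex. 1 using NumPy's `matrix_rank` (illustrative only):

```python
import numpy as np

a1 = np.array([3, 0, 2, 2])
a2 = np.array([-6, 42, 24, 54])
a3 = np.array([21, -21, 0, -15])

print(6 * a1 - 0.5 * a2 - a3)                          # [0. 0. 0. 0.] -> dependent
print(np.linalg.matrix_rank(np.vstack([a1, a2, a3])))  # 2
print(np.linalg.matrix_rank(np.vstack([a1, a2])))      # 2 -> a1, a2 independent
```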

Rank of a Matrix
- The maximum number of linearly independent row vectors of a matrix A = [a_jk] is called the rank of A (rank A).

Ex. 3) Rank
$$A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix},\qquad \text{rank } A = 2$$

- rank A = 0 iff A = 0

Theorem 1 (rank in terms of column vectors): The rank of a matrix A equals the maximum number of linearly independent column vectors of A. Hence A and A^T have the same rank.
- The maximum number of linearly independent row vectors of A (= r) cannot exceed the maximum number of linearly independent column vectors of A.

Ex. 4) The column vectors of A above can be written in terms of the first two columns:
$$\begin{bmatrix} 2 \\ 24 \\ 0 \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 3 \\ -6 \\ 21 \end{bmatrix} + \frac{2}{3}\begin{bmatrix} 0 \\ 42 \\ -21 \end{bmatrix},\qquad \begin{bmatrix} 2 \\ 54 \\ -15 \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 3 \\ -6 \\ 21 \end{bmatrix} + \frac{29}{21}\begin{bmatrix} 0 \\ 42 \\ -21 \end{bmatrix}$$

Vector Space, Dimension, Basis
- Vector space: a set V of vectors such that for any two vectors a and b in V, all their linear combinations αa + βb are also elements of V.

More precisely, let V be a set of elements on which two operations, vector addition and scalar multiplication, are defined. Then V is said to be a vector space if the following ten properties are satisfied.

Axioms for vector addition
(i) If x and y are in V, then x + y is in V.
(ii) For all x, y in V, x + y = y + x.
(iii) For all x, y, z in V, x + (y + z) = (x + y) + z.
(iv) There is a unique vector 0 in V such that 0 + x = x + 0 = x.
(v) For each x in V, there exists a vector -x such that x + (-x) = (-x) + x = 0.

Axioms for scalar multiplication
(vi) If k is any scalar and x is in V, then kx is in V.
(vii) k(x + y) = kx + ky
(viii) (k1 + k2)x = k1x + k2x
(ix) k1(k2x) = (k1k2)x
(x) 1x = x

Vector Space, Dimension, Basis
- Subspace: if a subset W of a vector space V is itself a vector space under the operations of vector addition and scalar multiplication defined on V, then W is called a subspace of V.
  (i) If x and y are in W, then x + y is in W.
  (ii) If x is in W and k is any scalar, then kx is in W.
- Dimension (dim V): the maximum number of linearly independent vectors in V.
- Basis for V: a linearly independent set in V consisting of a maximum possible number of vectors in V (number of vectors of a basis for V = dim V).
- Span: the set of all linear combinations of given vectors a_1, ..., a_p with the same number of components (a span is a vector space).

Ex. 5) The row vectors of the matrix A in Ex. 1 span a vector space of dimension 2, with basis a_1, a_2 or a_1, a_3.

- Row space of A: the span of the row vectors of A
  Column space of A: the span of the column vectors of A

Theorem 2: The row space and the column space of a matrix A have the same dimension, equal to rank A.

Invariance of Rank under Elementary Row Operations
Theorem 3 (row-equivalent matrices): Row-equivalent matrices have the same rank.
(Reducing A to echelon form does not change its rank.)

Ex. 6)
$$A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix} \;\rightarrow\; \begin{bmatrix} 3 & 0 & 2 & 2 \\ 0 & 42 & 28 & 58 \\ 0 & 0 & 0 & 0 \end{bmatrix},\qquad \text{rank } A = 2$$

Practical application of rank in connection with linear independence and dependence of vectors:

Theorem 4: p vectors x_1, ..., x_p (each with n components) are linearly independent if the matrix with row vectors x_1, ..., x_p has rank p; they are linearly dependent if that rank is less than p.

Theorem 5: p vectors with n < p components are always linearly dependent.

Theorem 6: The vector space R^n consisting of all vectors with n components has dimension n.

6.5. Solutions of Linear Systems: Existence, Uniqueness, General Form
Theorem 1 (fundamental theorem for linear systems):
(a) Existence: the system of m equations in n unknowns
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$
has solutions iff the coefficient matrix A and the augmented matrix $\tilde{A}$ have the same rank.
(b) Uniqueness: the system has precisely one solution iff this common rank r of A and $\tilde{A}$ equals n.
(c) Infinitely many solutions: if rank A = r < n, the system has infinitely many solutions.
(d) Gauss elimination: if solutions exist, they can all be obtained by Gauss elimination.

The Homogeneous Linear System
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= 0 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= 0 \\ &\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= 0 \end{aligned}$$

Theorem 2 (homogeneous system):
- A homogeneous linear system always has the trivial solution x_1 = 0, ..., x_n = 0.
- Nontrivial solutions exist iff rank A < n.
- If rank A = r < n, these solutions, together with x = 0, form a vector space of dimension n - r, called the solution space of the system.
- If x_1 and x_2 are solution vectors, then x = c_1 x_1 + c_2 x_2 is also a solution vector.

Theorem 3: A homogeneous linear system with fewer equations than unknowns always has nontrivial solutions.

The Nonhomogeneous Linear System
Theorem 4: If a nonhomogeneous linear system of equations A x = b has solutions, then all these solutions are of the form
$$x = x_0 + x_h$$
(x_0 is any fixed solution of A x = b; x_h is a solution of the homogeneous system).

6.6. Determinants. Cramer's Rule
- Impractical in computations, but important in engineering applications (eigenvalues, DEs, vector algebra, etc.)
- A determinant is associated with an n x n square matrix.

Second-Order Determinants
$$D = \det A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$$

Ex. 2) Cramer's rule for the system
$$a_{11}x_1 + a_{12}x_2 = b_1,\qquad a_{21}x_1 + a_{22}x_2 = b_2$$
$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{D} = \frac{b_1 a_{22} - a_{12} b_2}{D},\qquad x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{D} = \frac{a_{11} b_2 - b_1 a_{21}}{D}$$

Third-Order Determinants
$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}$$

Ex. 3) Cramer's rule for the system
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3 \end{aligned}$$
$$x_1 = \frac{D_1}{D},\quad x_2 = \frac{D_2}{D},\quad x_3 = \frac{D_3}{D}$$
where $D_k$ is $D$ with its kth column replaced by the column of the b's.
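A minimal sketch of Cramer's rule as stated above, computing x_k = D_k / D by replacing the k-th column of A with b (the function name and test system are illustrative only):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_k = D_k / D, where D_k is det A
    with its k-th column replaced by b.  Requires D = det A != 0."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    D = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                   # replace the k-th column by b
        x[k] = np.linalg.det(Ak) / D
    return x

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
b = [4, 4, 4]
print(cramer(A, b))                    # [1. 1. 1.]
```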

Determinant of Any Order n
$$D = \det A = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

Expansion along row j or column k:
$$D = a_{j1}C_{j1} + a_{j2}C_{j2} + \cdots + a_{jn}C_{jn} \qquad (j = 1, 2, \ldots, n)$$
$$D = a_{1k}C_{1k} + a_{2k}C_{2k} + \cdots + a_{nk}C_{nk} \qquad (k = 1, 2, \ldots, n)$$
$$C_{jk} = (-1)^{j+k} M_{jk}$$
M_jk: minor of a_jk in D; C_jk: cofactor of a_jk in D.
$$D = \sum_{k=1}^{n} (-1)^{j+k} a_{jk} M_{jk} \qquad (j = 1, 2, \ldots, n)$$
$$D = \sum_{j=1}^{n} (-1)^{j+k} a_{jk} M_{jk} \qquad (k = 1, 2, \ldots, n)$$

Ex. 4) Third-order determinant:
$$M_{21} = \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix},\qquad C_{21} = -M_{21}$$

General Properties of Determinants
Theorem 1:
(a) Interchange of two rows multiplies the value of the determinant by -1.
(b) Addition of a multiple of a row to another row does not alter the value of the determinant.
(c) Multiplication of a row by c multiplies the value of the determinant by c.

Ex. 7) Determinant by reduction to triangular form:
$$D = \begin{vmatrix} 2 & 0 & -4 & 6 \\ 4 & 5 & 1 & 0 \\ 0 & 2 & 6 & -1 \\ -3 & 8 & 9 & 1 \end{vmatrix} = \begin{vmatrix} 2 & 0 & -4 & 6 \\ 0 & 5 & 9 & -12 \\ 0 & 0 & 2.4 & 3.8 \\ 0 & 0 & 0 & 47.25 \end{vmatrix} = 2 \cdot 5 \cdot 2.4 \cdot 47.25 = 1134$$

Theorem 2:
(d) Transposition leaves the value of a determinant unaltered.
(e) A zero row or column renders the value of a determinant zero.
(f) Proportional rows or columns render the value of a determinant zero. In particular, a determinant with two identical rows or columns has the value zero.


$$\det(kA) = k^n \det A \qquad (A:\ n \times n)$$
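A small numerical check of Ex. 7 and of the properties above (values computed with NumPy; illustrative only):

```python
import numpy as np

A = np.array([[2, 0, -4, 6],
              [4, 5, 1, 0],
              [0, 2, 6, -1],
              [-3, 8, 9, 1]], dtype=float)

print(np.linalg.det(A))        # ~1134  (= 2 * 5 * 2.4 * 47.25)
print(np.linalg.det(A.T))      # transposition leaves the value unchanged
k, n = 3.0, 4
print(np.linalg.det(k * A), k**n * np.linalg.det(A))   # det(kA) = k^n det A
```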

Rank in Terms of Determinants
- Rank can be characterized in terms of determinants:

Theorem 3: An m x n matrix A = [a_jk] has rank r ≥ 1 iff A has an r x r submatrix with nonzero determinant, whereas the determinant of every square submatrix of A with r+1 or more rows is zero.
- In particular, if A is square, n x n, it has rank n iff det A ≠ 0.

Cramer's Rule
- Not practical in computations, but theoretically interesting in DEs and elsewhere.

Theorem 4: (a) For a linear system of n equations in the same number of unknowns,
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$
if the coefficient determinant D = det A is nonzero, the system has precisely one solution, given by
$$x_1 = \frac{D_1}{D},\quad x_2 = \frac{D_2}{D},\quad \ldots,\quad x_n = \frac{D_n}{D}$$

(b) If the system is homogeneous and D ≠ 0, it has only the trivial solution x = 0. If D = 0, the homogeneous system also has nontrivial solutions.

6.7. Inverse of a Matrix: Gauss-Jordan Elimination
$$A A^{-1} = A^{-1} A = I$$
- If A has an inverse, A is called a nonsingular matrix (its inverse is unique); if A has no inverse, it is called a singular matrix.

Theorem 1 (existence of the inverse): The inverse A^{-1} of an n x n matrix A exists iff rank A = n, hence iff det A ≠ 0. Hence A is nonsingular if rank A = n, and singular if rank A < n.

Determination of the Inverse
- Append the n x n identity matrix I to the n x n matrix A and apply Gauss-Jordan elimination; A is reduced to I, and I is transformed into A^{-1}:
$$(A\,|\,I) = \begin{bmatrix} a_{11} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 \\ a_{21} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 \\ \vdots & & \vdots & & & \ddots & \\ a_{n1} & \cdots & a_{nn} & 0 & 0 & \cdots & 1 \end{bmatrix} \;\rightarrow\; (I\,|\,A^{-1})$$

Matrix Inverse

a. Calculating the inverse
- The inverse can be computed column by column by generating solutions with unit vectors as the right-hand-side constants: with $b = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ the resulting solution of A x = b is the first column of the matrix inverse; with $b = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$ it is the second column, and so on.
- Best way: use the LU decomposition algorithm (it evaluates multiple right-hand-side vectors efficiently).

b. Matrix inversion by Gauss-Jordan elimination
The square matrix A^{-1} assumes the role of the column vector of unknowns x, while the square matrix I assumes the role of the right-hand-side column vector B. For example,
$$(A\,|\,I) = \begin{bmatrix} 2 & 1 & 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 1 & 1 & 2 & 0 & 0 & 1 \end{bmatrix} \;\rightarrow\; (I\,|\,A^{-1}) = \begin{bmatrix} 1 & 0 & 0 & \tfrac{3}{4} & -\tfrac{1}{4} & -\tfrac{1}{4} \\ 0 & 1 & 0 & -\tfrac{1}{4} & \tfrac{3}{4} & -\tfrac{1}{4} \\ 0 & 0 & 1 & -\tfrac{1}{4} & -\tfrac{1}{4} & \tfrac{3}{4} \end{bmatrix}$$
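A quick check of the Gauss-Jordan example above with NumPy (the matrix is the one reconstructed in the example; illustrative only):

```python
import numpy as np

A = np.array([[2, 1, 1],
              [1, 2, 1],
              [1, 1, 2]], dtype=float)

Ainv = np.linalg.inv(A)
print(Ainv)          # [[ 0.75 -0.25 -0.25] [-0.25  0.75 -0.25] [-0.25 -0.25  0.75]]
print(A @ Ainv)      # identity matrix (up to round-off)
```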

Some Useful Formulas for the Inverse
Theorem 2: The inverse of a nonsingular n x n matrix A = [a_jk] is given by
$$A^{-1} = \frac{1}{\det A}[A_{jk}]^T = \frac{1}{\det A}\begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}$$
where A_jk is the cofactor of a_jk in det A.

Ex. 3) In particular, for a 2 x 2 matrix:
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix},\qquad A^{-1} = \frac{1}{\det A}\begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$$

- Diagonal matrices A have an inverse iff all a_jj ≠ 0. Then A^{-1} is diagonal with entries 1/a_11, ..., 1/a_nn.

Ex. 4) Inverse of a diagonal matrix.

- $(AC)^{-1} = C^{-1} A^{-1}$

Vanishing of Matrix Products. Cancellation Law
- Matrix multiplication is not commutative in general: $BA \neq AB$
- $AB = 0$ does not necessarily imply $A = 0$ or $B = 0$ or $BA = 0$
- $AC = AD$ does not necessarily imply $C = D$ (even when $A \neq 0$)

Theorem 3 (cancellation law): Let A, B, C be n x n matrices. Then
(a) If rank A = n and AB = AC, then B = C.
(b) If rank A = n, then AB = 0 implies B = 0. Hence if AB = 0 but A ≠ 0 as well as B ≠ 0, then rank A < n and rank B < n.
(c) If A is singular, so are BA and AB.

Determinants of Matrix Products
Theorem 4: For any n x n matrices A and B,
$$\det(AB) = \det(BA) = \det A \,\det B$$