MIT | Matrix Methods in Data Analysis, Signal Processing, and Machine Learning — Notes Series, Lecture 4: Eigenvalues and Eigenvectors



This series contains study notes for Professor Gilbert Strang's MIT course "Matrix Methods in Data Analysis, Signal Processing, and Machine Learning".

  • Gilbert Strang & Sarah Hansen | Spring 2018
  • 18.065: Matrix Methods in Data Analysis, Signal Processing, and Machine Learning
  • Course videos: https://ocw.mit.edu/courses/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/

  • The Markdown source files are not yet open-sourced; contact the author by email if needed.
  • These notes may contain errors; corrections by email are welcome.


Lecture 0: Course Introduction
Lecture 1: The Column Space of $A$ Contains All Vectors $Ax$
Lecture 2: Multiplying and Factoring Matrices
Lecture 3: Orthonormal Columns in $Q$ Give $Q'Q = I$
Lecture 4: Eigenvalues and Eigenvectors
Lecture 5: Positive Definite and Semidefinite Matrices
Lecture 6: Singular Value Decomposition (SVD)
Lecture 7: Eckart-Young: The Closest Rank-$k$ Matrix to $A$
Lecture 8: Norms of Vectors and Matrices
Lecture 9: Four Ways to Solve Least Squares Problems
Lecture 10: Survey of Difficulties with $Ax = b$
Lecture 11: Minimizing $\|x\|$ Subject to $Ax = b$
Lecture 12: Computing Eigenvalues and Singular Values
Lecture 13: Randomized Matrix Multiplication
Lecture 14: Low Rank Changes in $A$ and Its Inverse
Lecture 15: Matrices $A(t)$ Depending on $t$, Derivative $= dA/dt$
Lecture 16: Derivatives of Inverse and Singular Values
Lecture 17: Rapidly Decreasing Singular Values
Lecture 18: Counting Parameters in SVD, LU, QR, Saddle Points
Lecture 19: Saddle Points Continued, Maxmin Principle
Lecture 20: Definitions and Inequalities
Lecture 21: Minimizing a Function Step by Step
Lecture 22: Gradient Descent: Downhill to a Minimum
Lecture 23: Accelerating Gradient Descent (Use Momentum)
Lecture 24: Linear Programming and Two-Person Games
Lecture 25: Stochastic Gradient Descent
Lecture 26: Structure of Neural Nets for Deep Learning
Lecture 27: Backpropagation: Find Partial Derivatives
Lecture 28: Computing in Class [No video available]
Lecture 29: Computing in Class (cont.) [No video available]
Lecture 30: Completing a Rank-One Matrix, Circulants!
Lecture 31: Eigenvectors of Circulant Matrices: Fourier Matrix
Lecture 32: ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule
Lecture 33: Neural Nets and the Learning Function
Lecture 34: Distance Matrices, Procrustes Problem
Lecture 35: Finding Clusters in Graphs
Lecture 36: Alan Edelman and Julia Language


Lecture 4: Eigenvalues and Eigenvectors


Last time:

  • Orthogonal matrices $Q$

This time:

  • Eigenvectors and eigenvalues of a matrix $A$
  • Eigenvectors and eigenvalues of symmetric matrices $S$

4.1 Eigenvectors and Eigenvalues of a Matrix A

  • For $Ax$ (where $A$ is $n \times n$):
    • if $x$ is chosen especially well,
    • then $Ax$ can point in exactly the same direction as $x$:
    • that is, **$Ax = \lambda x$**
    • such an $x$ is an eigenvector, and $\lambda$ is an eigenvalue
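The definition is easy to check numerically. Below is a minimal NumPy sketch (the matrix `A` is a hypothetical example of my own, not from the lecture): `np.linalg.eig` returns eigenvalue/eigenvector pairs, and we verify $Ax = \lambda x$ for each.

```python
import numpy as np

# Hypothetical 2x2 matrix, chosen only to illustrate the definition
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, X = np.linalg.eig(A)   # eigenvalues lam[i], eigenvectors X[:, i]

for i in range(len(lam)):
    x = X[:, i]
    # A @ x should equal lam[i] * x, up to floating-point error
    assert np.allclose(A @ x, lam[i] * x)

print(lam)  # [3. 1.]
```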




Q1: What are eigenvectors good for?

By definition, an eigenvector $x$ is very special — but is it useful?

Special is good, but useful is even better.



  • Property 1: powers of $A$, i.e. $A^k$
    • Given $Ax = \lambda x$
    • $\Rightarrow$ $A^2 x = A(Ax) = A(\lambda x) = \lambda (Ax) = \lambda^2 x$
    • so $x$ is also an eigenvector of $A^2$!
      ▪ Eigenvalue: $\lambda^2$
    • Similarly, $A^n x = \lambda^n x$, and $A^{-1} x = \frac{1}{\lambda} x$ (provided $\lambda \neq 0$; a zero eigenvalue means $A$ is not invertible), and $e^{At} x = e^{\lambda t} x$
    • Take any vector $v \in R^n$:
      ▪ suppose matrix $A$ has $n$ different eigenvectors, $A x_i = \lambda_i x_i$ ($i = 1, 2, \dots, n$)
      ▪ if $v$ can be written as a combination of these eigenvectors $x_i$: $v = c_1 x_1 + c_2 x_2 + c_3 x_3 + \dots + c_n x_n$
      ▪ then: $A^k v = c_1 \lambda_1^k x_1 + c_2 \lambda_2^k x_2 + c_3 \lambda_3^k x_3 + \dots + c_n \lambda_n^k x_n$
    • Hence the first use of eigenvalues and eigenvectors: they help compute $A^k x$, matrix functions of $A$, matrix equations, and so on (see the sketch below)
      ▪ For example, given $V_k = A^k v$, eigenvectors and eigenvalues make it very convenient to compute $V_{k+1} = A V_k$ or to solve $dV/dt = Av$
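To illustrate the point, here is a minimal NumPy sketch (the matrix and vector are hypothetical examples, not from the lecture): it expands $v$ in the eigenvector basis, so $A^k v$ costs only scalar powers $\lambda_i^k$ instead of $k$ matrix multiplications.

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])   # hypothetical matrix
v = np.array([1.0, 0.0])
k = 50

lam, X = np.linalg.eig(A)    # A X = X diag(lam)
c = np.linalg.solve(X, v)    # coefficients: v = c_1 x_1 + ... + c_n x_n

# A^k v = sum_i c_i * lam_i^k * x_i  -- only scalar powers needed
v_k = X @ (c * lam**k)

assert np.allclose(v_k, np.linalg.matrix_power(A, k) @ v)
print(v_k)
```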



  • Property 2: similar matrices
    • Similar matrix: $B$ is similar to $A$ if $\exists$ an invertible matrix $M$: $B = M^{-1} A M$
    • Then, $A$ and $B$ have the same eigenvalues
    • i.e., similar matrices $\Rightarrow$ same eigenvalues
    • This is how MATLAB computes eigenvalues:
      ▪ apply a sequence of similarity transforms: change $A$ $\Rightarrow$ with $M_1$ $\Rightarrow$ with $M_2$, … ✅ $\Rightarrow$ finally to a triangular matrix
      ▪ and thereby acquire the eigenvalues (a minimal sketch of this idea follows below)
      ▪ (If $A$ is a symmetric matrix, it goes all the way to a diagonal matrix, not just a triangular one)
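In its simplest form, this "drive $A$ to triangular form by similarity transforms" idea is the unshifted QR iteration — a hedged sketch under that assumption; production solvers such as MATLAB's `eig` use a far more refined shifted, Hessenberg-based variant. Each step replaces $A_k$ by $R_k Q_k = Q_k^T A_k Q_k$, a similar matrix, so the eigenvalues never change:

```python
import numpy as np

def qr_iteration(A, iters=200):
    """Unshifted QR iteration: repeated similarity transforms.

    A_{k+1} = R_k Q_k = Q_k^T A_k Q_k is similar to A_k, and for
    many matrices converges to (upper) triangular form, with the
    eigenvalues appearing on the diagonal."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # symmetric example
Ak = qr_iteration(A)
print(np.round(Ak, 6))                # ~diagonal, since A is symmetric
print(np.diag(Ak), np.linalg.eigvals(A))  # eigenvalues ~3.618 and ~1.382 in both
```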

    • Proof (similar matrices $\Rightarrow$ same eigenvalues):
      ▪ what we know: $B = M^{-1} A M$
      ▪ if $M^{-1} A M y = \lambda y$ $\Rightarrow$ $A M y = \lambda M y$ $\Rightarrow$ $A (My) = \lambda (My)$ $\Rightarrow$ $\lambda$ is also an eigenvalue of $A$ (with eigenvector $My$)
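A quick NumPy check of this fact (the matrices are hypothetical random examples): conjugating $A$ by any invertible $M$ leaves the set of eigenvalues unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = rng.standard_normal((4, 4))     # almost surely invertible

B = np.linalg.inv(M) @ A @ M        # B = M^{-1} A M, similar to A

# Same eigenvalues (compare as sorted complex numbers)
eigA = np.sort_complex(np.linalg.eigvals(A))
eigB = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eigA, eigB)
print(eigA)
```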

    • Exercise 1: show that $AB$ has the same non-zero eigenvalues as $BA$, $\forall A, B \in R^{n \times n}$
      ▪ want: show that $AB$ and $BA$ are similar: $M(AB)M^{-1} = BA$
      ▪ ✅ just let $M = B$: then $B(AB)B^{-1} = (BA)(BB^{-1}) = BA$ #
      ▪ ✅ the non-zero premise matters: otherwise we cannot determine whether $M^{-1}$ exists
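Numerically (with hypothetical random square matrices), the eigenvalues of $AB$ and $BA$ agree:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

eigAB = np.sort_complex(np.linalg.eigvals(A @ B))
eigBA = np.sort_complex(np.linalg.eigvals(B @ A))
assert np.allclose(eigAB, eigBA)   # same eigenvalues for square A, B
print(eigAB)
```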

    • Exercise 2: are eigenvalues additive or multiplicative?
      ▪ If I know the eigenvalues $\lambda(A), \lambda(B)$ of $A$ and $B$, respectively, can I get the eigenvalues of $AB$ as $\lambda(A)\lambda(B)$? No!
      ▪ likewise $\lambda(A) + \lambda(B) \neq \lambda(A+B)$ in general
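A small counterexample (hypothetical matrices) showing that eigenvalues are neither additive nor multiplicative in general — both $A$ and $B$ below have only the eigenvalue $0$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # eigenvalues: 0, 0
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])   # eigenvalues: 0, 0

print(np.linalg.eigvals(A + B))   # [ 1. -1.]  -- not 0 + 0
print(np.linalg.eigvals(A @ B))   # [ 1.  0.]  -- not 0 * 0
```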







4.2 Eigenvectors and Eigenvalues of a Real Symmetric Matrix S

  • Are the eigenvalues of $S$ real numbers?
    • Other real matrices can have imaginary eigenvalues
    • Let $AS = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, a $90^\circ$ rotation
      ▪ an anti-symmetric matrix
      ▪ the eigenvalues of $AS$ are imaginary numbers
      ▪ Physical intuition: the $90^\circ$ rotation matrix $AS$ changes the direction of every real vector $x$ it multiplies, so no real $x$ can satisfy $Ax = \lambda x$
      ▪ hence the eigenvalues $\lambda$ and eigenvectors $x$ are not real but imaginary!
      ▪ Computation: $Ax = \lambda x$ $\Rightarrow$ there exists a non-zero $x$ with $(A - \lambda I)x = 0$ $\Rightarrow$ $(A - \lambda I)$ is not invertible $\Rightarrow$ $\det(A - \lambda I) = 0$
      ▪ $\det(AS - \lambda I) = \det \begin{bmatrix} -\lambda & 1 \\ -1 & -\lambda \end{bmatrix} = \lambda^2 + 1 = 0$ $\Rightarrow$ $\lambda = i, -i$
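Checking with NumPy (`eig` applied to the rotation matrix above):

```python
import numpy as np

AS = np.array([[0.0, 1.0],
               [-1.0, 0.0]])        # 90-degree rotation, anti-symmetric

lam, X = np.linalg.eig(AS)
print(lam)                          # [0.+1.j 0.-1.j]  -- purely imaginary
```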














Two ways to check the eigenvalues:

  • Sum of the eigenvalues = sum of the diagonal entries (the trace)
    • add all eigenvalues = add all diagonal elements
    • a way to check the calculated eigenvalues
  • Product of the eigenvalues = determinant ($\det A$)
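Both checks in NumPy (on a hypothetical matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2

lam = np.linalg.eigvals(A)
assert np.isclose(lam.sum(),  np.trace(A))        # sum  = trace
assert np.isclose(lam.prod(), np.linalg.det(A))   # prod = determinant
print(lam, np.trace(A), np.linalg.det(A))
```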


  • For a symmetric matrix:
    • eigenvalues are real;
    • eigenvectors are orthogonal
      ▪ which means a full set of eigenvectors! (even if some eigenvalues may be repeated)
    • check: using the trace and the det
    • Example: $S = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
      ▪ Eigenvalues $\lambda = 1, -1$
      ▪ Eigenvectors $x = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
      ▪ $\Lambda = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$
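The same example with NumPy's symmetric eigensolver `eigh`, which guarantees real eigenvalues and orthonormal eigenvectors:

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])

lam, Q = np.linalg.eigh(S)              # for symmetric/Hermitian matrices
print(lam)                              # [-1.  1.]  -- real
assert np.allclose(Q.T @ Q, np.eye(2))  # columns are orthonormal
```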








    • Do matrices with the same eigenvalues have to be similar?
      ▪ The $S$ and $\Lambda$ above have the same eigenvalues
      ▪ If they are similar: $\exists$ invertible $M$ with $SM = M\Lambda$
      👉 Let $M = [x_1, x_2]$; then $S \begin{bmatrix} x_1 & x_2 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$
      ▪ So they look similar. But the derivation above has one premise: the matrix $M$ formed from the eigenvectors must be invertible!
      ▪ If matrix $A$ has eigenvectors $x_1, x_2, \dots, x_n$ and eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$
      👉 $A[x_1, x_2, \dots, x_n] = [x_1, x_2, \dots, x_n] \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}$
      👉 $\Rightarrow$ $AX = X \Lambda$
      🚩 $\Rightarrow$ $A$ is similar to $\Lambda$: $A = X \Lambda X^{-1}$ (provided $X$ is invertible)
      ✅ From this formula we directly get the eigenvalues of $A^2$: $A^2 = X \Lambda^2 X^{-1}$
      ▪ For a symmetric matrix $S$:
      👉 its eigenvector matrix $X$ is orthogonal: denoted $Q$
      👉 $\Rightarrow$ $S = Q \Lambda Q^{-1} = Q \Lambda Q^T$
      👉 Spectral Theorem: every symmetric matrix can be factored in the form $Q \Lambda Q^T$
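A closing NumPy sketch of the spectral theorem on a hypothetical symmetric matrix: `eigh` produces $Q$ and $\Lambda$, and we rebuild $S = Q \Lambda Q^T$ and $S^2 = Q \Lambda^2 Q^T$:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
S = (G + G.T) / 2                    # make a symmetric matrix

lam, Q = np.linalg.eigh(S)
Lam = np.diag(lam)

assert np.allclose(S, Q @ Lam @ Q.T)           # S   = Q Λ Q^T
assert np.allclose(S @ S, Q @ Lam**2 @ Q.T)    # S^2 = Q Λ^2 Q^T
print(lam)                                     # real eigenvalues
```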



Copyright notice: this is an original article by qazwsxrx, released under the CC 4.0 BY-SA license; please include a link to the original source and this notice when reposting.