Stanford CS205a

Eigenvalues & SVD

Supplemental Julia Notebook

Prof. Doug James

Matrix inverse & eigs

In [1]:
A = rand(3,3)
Out[1]:
3×3 Array{Float64,2}:
 0.838168  0.894378  0.860048 
 0.560628  0.311312  0.287067 
 0.155911  0.821748  0.0381491
In [2]:
eigA = eigvals(A)
Out[2]:
3-element Array{Complex{Float64},1}:
   1.61149+0.0im     
 -0.211928+0.267408im
 -0.211928-0.267408im
In [3]:
# 1/eigenvalues match...
1./eigA
Out[3]:
3-element Array{Complex{Float64},1}:
 0.620545-0.0im    
 -1.82037-2.29692im
 -1.82037+2.29692im
In [4]:
# ... the eigenvalues of inv(A)
eigvals(inv(A))
Out[4]:
3-element Array{Complex{Float64},1}:
 0.620545+0.0im    
 -1.82037+2.29692im
 -1.82037-2.29692im

Matrix square root

In [5]:
## Symmetric positive definite matrix:
B = rand(3,3);
A = B'*B + 0.0001*eye(3)
Out[5]:
3×3 Array{Float64,2}:
 0.0968649  0.139353  0.331046
 0.139353   0.507817  0.728221
 0.331046   0.728221  1.53727 
In [6]:
# Has a matrix square root
sqrtA = sqrtm(A)
Out[6]:
3×3 Array{Float64,2}:
 0.204495   0.0575041  0.227464
 0.0575041  0.576255   0.415259
 0.227464   0.415259   1.1459  
In [7]:
# Notice that the eigenvalues of sqrt(A)...
eigvals(sqrtA)
Out[7]:
3-element Array{Float64,1}:
 0.147315
 0.371388
 1.40795 
In [8]:
# Are equal to the sqrt of the eigs of A:
sqrt.(eigvals(A))
Out[8]:
3-element Array{Float64,1}:
 0.147315
 0.371388
 1.40795 
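
The eigenvalue relationship just shown also suggests how the square root of an SPD matrix can be built: diagonalize, take square roots of the eigenvalues, and reassemble. A NumPy sketch of that construction (this is an illustrative route, not the notebook's `sqrtm` call; the seed is an assumption):

```python
# For SPD A = Q diag(lam) Q^T, one square root is Q diag(sqrt(lam)) Q^T,
# which is why eig(sqrt(A)) = sqrt(eig(A)) in the cells above.
import numpy as np

rng = np.random.default_rng(1)
B = rng.random((3, 3))
A = B.T @ B + 1e-4 * np.eye(3)           # symmetric positive definite

lam, Q = np.linalg.eigh(A)               # eigendecomposition (lam ascending)
sqrtA = Q @ np.diag(np.sqrt(lam)) @ Q.T  # matrix square root

print(np.allclose(sqrtA @ sqrtA, A))                         # sqrtA^2 == A
print(np.allclose(np.linalg.eigvalsh(sqrtA), np.sqrt(lam)))  # eig identity
```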

Polar Decomposition

In [9]:
# Given A, find R s.t. A = R U:
A = rand(3,3)
Out[9]:
3×3 Array{Float64,2}:
 0.184173  0.0912081  0.749521
 0.383199  0.643191   0.870778
 0.667197  0.195219   0.323629
In [10]:
# Find U first, then R:
U2 = A'*A;
U  = sqrtm(U2)  # square root of U^2 = A'A
Out[10]:
3×3 Array{Float64,2}:
 0.678215  0.218509  0.343789
 0.218509  0.51118   0.388681
 0.343789  0.388681  1.07495 
In [11]:
# Now A = R U  -->  R = A*inv(U)
R = A*inv(U)  # not an efficient algorithm (for now), but correct
Out[11]:
3×3 Array{Float64,2}:
 -0.0180764  -0.480526   0.876794  
  0.0390546   0.875929   0.480856  
  0.999074   -0.042935  -0.00293309
In [12]:
# Verify R is orthogonal:
R'*R
Out[12]:
3×3 Array{Float64,2}:
 1.0           1.63064e-15   3.33934e-17
 1.63064e-15   1.0          -1.00546e-15
 3.33934e-17  -1.00546e-15   1.0        
In [13]:
# Factorization A=RU is accurate:
norm(A-R*U)/norm(A)
Out[13]:
1.3207184582579895e-16
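
An inverse-free way to get the polar factors, hinted at by the "not efficient" comment above, is a single SVD: if A = W S Zᵀ, then R = W Zᵀ is orthogonal and U = Z S Zᵀ is SPD. A NumPy sketch of that route (an assumption for illustration, not the notebook's code):

```python
# Polar decomposition A = R U from one SVD:
#   A = W S Z^T  =>  R = W Z^T (orthogonal), U = Z S Z^T (SPD).
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((3, 3))

W, s, Zt = np.linalg.svd(A)
R = W @ Zt                        # orthogonal factor
U = Zt.T @ np.diag(s) @ Zt        # symmetric positive (semi)definite factor

print(np.allclose(R.T @ R, np.eye(3)))  # R is orthogonal
print(np.allclose(R @ U, A))            # A = R U holds
```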

Thin SVD using svd()

In [14]:
?(svd)
search: svd svds svdvals svdfact svdvals! svdfact! isvalid

In [15]:
A = rand(4,2)
Out[15]:
4×2 Array{Float64,2}:
 0.537532  0.575281
 0.792717  0.185602
 0.699253  0.996808
 0.791883  0.72888 
In [16]:
# Thin SVD:
(U,S,V) = svd(A)
Out[16]:
(
[-0.409043 0.0895816; -0.364472 -0.865563; -0.621606 0.490516; -0.559866 -0.0465768],

[1.92195,0.479357],
[-0.721563 -0.692349; -0.692349 0.721563])
In [17]:
U
Out[17]:
4×2 Array{Float64,2}:
 -0.409043   0.0895816
 -0.364472  -0.865563 
 -0.621606   0.490516 
 -0.559866  -0.0465768
In [18]:
S
Out[18]:
2-element Array{Float64,1}:
 1.92195 
 0.479357
In [19]:
diagm(S)
Out[19]:
2×2 Array{Float64,2}:
 1.92195  0.0     
 0.0      0.479357
In [20]:
V'
Out[20]:
2×2 Array{Float64,2}:
 -0.721563  -0.692349
 -0.692349   0.721563

Verifying A = U*diagm(S)*V'

In [21]:
A
Out[21]:
4×2 Array{Float64,2}:
 0.537532  0.575281
 0.792717  0.185602
 0.699253  0.996808
 0.791883  0.72888 
In [22]:
U*diagm(S)*V'
Out[22]:
4×2 Array{Float64,2}:
 0.537532  0.575281
 0.792717  0.185602
 0.699253  0.996808
 0.791883  0.72888 
In [23]:
## Relative Error:
norm(U*diagm(S)*V'-A)/norm(A)
Out[23]:
4.32189512827081e-16
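
The thin-SVD reconstruction above has a direct NumPy analogue, where `full_matrices=False` requests the thin factors. A small sketch (NumPy and the seed are assumptions):

```python
# Thin SVD of a tall 4x2 matrix: U is 4x2, S has 2 entries, Vt is 2x2,
# and U @ diag(S) @ Vt reconstructs A to machine precision.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((4, 2))

U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, S.shape, Vt.shape)     # thin shapes

rel_err = np.linalg.norm(U @ np.diag(S) @ Vt - A) / np.linalg.norm(A)
print(rel_err < 1e-12)                # relative error at roundoff level
```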

Vandermonde Matrix & SVD

In [24]:
function vander(N)
    # N×N Vandermonde matrix with nodes i/N: V[i,j] = (i/N)^(j-1)
    V = [(i/N)^(j-1) for i=1:N, j=1:N]
end
Out[24]:
vander (generic function with 1 method)
In [25]:
N = 10;
V = vander(N)
Out[25]:
10×10 Array{Float64,2}:
 1.0  0.1  0.01  0.001  0.0001  …  1.0e-7     1.0e-8      1.0e-9     
 1.0  0.2  0.04  0.008  0.0016     1.28e-5    2.56e-6     5.12e-7    
 1.0  0.3  0.09  0.027  0.0081     0.0002187  6.561e-5    1.9683e-5  
 1.0  0.4  0.16  0.064  0.0256     0.0016384  0.00065536  0.000262144
 1.0  0.5  0.25  0.125  0.0625     0.0078125  0.00390625  0.00195313 
 1.0  0.6  0.36  0.216  0.1296  …  0.0279936  0.0167962   0.0100777  
 1.0  0.7  0.49  0.343  0.2401     0.0823543  0.057648    0.0403536  
 1.0  0.8  0.64  0.512  0.4096     0.209715   0.167772    0.134218   
 1.0  0.9  0.81  0.729  0.6561     0.478297   0.430467    0.38742    
 1.0  1.0  1.0   1.0    1.0        1.0        1.0         1.0        
In [26]:
sv = svdvals(V)
Out[26]:
10-element Array{Float64,1}:
 4.65476    
 2.10904    
 0.650574   
 0.16112    
 0.0325356  
 0.00526855 
 0.000656848
 5.8534e-5  
 3.25623e-6 
 8.30914e-8 
In [27]:
## Matrix two-norm is largest singval:
norm(V)
Out[27]:
4.654763958770751
In [28]:
sv[1]
Out[28]:
4.654763958770751

Condition number from singular values

In [29]:
cond(V)
Out[29]:
5.6019784622350566e7
In [30]:
## Condition number as ratio of largest to smallest singular value:
sv[1]/sv[length(sv)]
Out[30]:
5.6019784622350566e7
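
Both identities above (matrix 2-norm = largest singular value, condition number = largest/smallest) can be checked directly; a NumPy sketch using `np.vander` with `increasing=True` to reproduce the `vander(N)` defined earlier (the NumPy calls are assumptions, not the notebook's code):

```python
# cond_2(V) = sigma_max / sigma_min, and ||V||_2 = sigma_max.
import numpy as np

x = np.arange(1, 11) / 10.0                 # nodes i/N for N = 10
V = np.vander(x, increasing=True)           # columns x^0, x^1, ..., x^9
sv = np.linalg.svd(V, compute_uv=False)     # descending singular values

print(np.isclose(np.linalg.norm(V, 2), sv[0]))        # 2-norm identity
print(np.isclose(np.linalg.cond(V), sv[0] / sv[-1]))  # condition number
```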

Plotting singular values of Vandermonde matrix

In [61]:
using Plots
N = 1000;
V = vander(N) / norm(vander(N));  ## normalize V for clarity
sv = svdvals(V);
plot(log10.(sv))
Out[61]:

Tikhonov Regularization & SVD

In [62]:
A0  = V'*V;  
alpha = 1.e-10; 
A   = A0 + alpha*eye(N);
sv0 = svdvals(A0);
sv  = svdvals(A);
plot(log10.([sv0, sv]))
Out[62]:

Augmented matrix approach to regularization

In [63]:
A = [V; sqrt(alpha)*eye(N)];
In [64]:
plot(log10.([svdvals(V), svdvals(A)]))
Out[64]:
In [65]:
plot(log10.([sv0, svdvals(A'*A)]))
Out[65]:
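
The reason the augmented matrix regularizes is visible in its singular values: for A = [V; sqrt(alpha)·I] we have AᵀA = VᵀV + alpha·I, so sigma_i(A) = sqrt(sigma_i(V)² + alpha), bounded below by sqrt(alpha). A NumPy check of that identity (the matrix size and seed are assumptions):

```python
# Singular values of the augmented matrix [V; sqrt(alpha) I] equal
# sqrt(sigma_i(V)^2 + alpha) -- Tikhonov regularization in the spectrum.
import numpy as np

rng = np.random.default_rng(4)
V = rng.random((20, 20))
alpha = 1e-10

A = np.vstack([V, np.sqrt(alpha) * np.eye(20)])   # 40x20 augmented system
sv_V = np.linalg.svd(V, compute_uv=False)
sv_A = np.linalg.svd(A, compute_uv=False)

print(np.allclose(sv_A, np.sqrt(sv_V**2 + alpha)))
```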