## Column Spaces and QR

Supplemental Julia Notebook

Prof. Doug James

## Condition number of $A^T A$

\begin{align} \textrm{cond} \; A^T A & = \|A^T A\| \|(A^T A)^{-1}\| \\ & \approx \|A^T\| \|A\| \|A^{-1}\| \|(A^T)^{-1}\| \qquad \mbox{(esp. in two-norm)}\\ & = \|A\|^2 \|A^{-1}\|^2 \\ & = (\textrm{cond} \; A)^2 \end{align}
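The "(esp. in two-norm)" hedge can be made precise: in the spectral norm the approximate step is actually an equality, since the singular values of $A^T A$ are the squares of those of $A$:

\begin{align} \|A^T A\|_2 &= \sigma_{\max}(A^T A) = \sigma_{\max}(A)^2 = \|A\|_2^2, \\ \|(A^T A)^{-1}\|_2 &= \sigma_{\min}(A^T A)^{-1} = \sigma_{\min}(A)^{-2} = \|A^{-1}\|_2^2, \end{align}

so $\textrm{cond}_2 \, A^T A = (\textrm{cond}_2 \, A)^2$ exactly, which the experiment below confirms to roundoff.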

## Experiment: $\textrm{cond} \; A^T A \approx (\textrm{cond} \; A)^2$

In :
N   = 10;
A   = rand(N,N);
AtA = A'*A;

In :
cond(A)

Out:
3222.353911452386
In :
(cond(A))^2

Out:
1.038356473065249e7
In :
cond(AtA)

Out:
1.0383564733178914e7

## Gram-Schmidt Algorithm (Example 5.1)

In :
## Input independent vectors
v1 = [1.; 0.; 0.];
v2 = [1.; 1.; 1.];
v3 = [1.; 1.; 0.];
A = [v1 v2 v3]

Out:
3×3 Array{Float64,2}:
1.0  1.0  1.0
0.0  1.0  1.0
0.0  1.0  0.0
In :
# VECTOR v1.
# Normalize v1:
a1 = v1/norm(v1)   ## book calls these a_i but q_i is better

Out:
3-element Array{Float64,1}:
1.0
0.0
0.0
In :
# VECTOR v2.
# Remove the span of a1 from v2:
a2 = v2 - a1*a1'*v2

Out:
3-element Array{Float64,1}:
0.0
1.0
1.0
In :
# normalize a2:
a2 /= norm(a2)

Out:
3-element Array{Float64,1}:
0.0
0.707107
0.707107
In :
# VECTOR v3.
# Remove the span of a1 & a2 from v3:
a3 = v3 - a1*a1'*v3 - a2*a2'*v3

Out:
3-element Array{Float64,1}:
0.0
0.5
-0.5
In :
# normalize a3:
a3 /= norm(a3)

Out:
3-element Array{Float64,1}:
0.0
0.707107
-0.707107
In :
## Orthogonal basis matrix formed by Gram-Schmidt:
Q = [a1 a2 a3]

Out:
3×3 Array{Float64,2}:
1.0  0.0        0.0
0.0  0.707107   0.707107
0.0  0.707107  -0.707107
In :
Q'*Q

Out:
3×3 Array{Float64,2}:
1.0  0.0          0.0
0.0  1.0          2.77556e-16
0.0  2.77556e-16  1.0        
In :
## Upper triangle matrix (encoded in our operations above)
R = Q'*A

Out:
3×3 Array{Float64,2}:
1.0  1.0          1.0
0.0  1.41421      0.707107
0.0  3.33067e-16  0.707107
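The per-vector cells above can be collected into one loop. A minimal classical Gram-Schmidt sketch (written for current Julia, where `norm` and `dot` live in `LinearAlgebra`; in the older Julia used in this notebook they are in Base):

```julia
using LinearAlgebra

# Classical Gram-Schmidt: orthonormalize the columns of A and
# record the projection coefficients in upper-triangular R, so A = Q*R.
function gram_schmidt(A)
    m, n = size(A)
    Q = zeros(m, n)
    R = zeros(n, n)
    for j in 1:n
        v = A[:, j]                          # indexing copies, so v is safe to modify
        for i in 1:j-1
            R[i, j] = dot(Q[:, i], A[:, j])  # component of column j along q_i
            v -= R[i, j] * Q[:, i]           # remove the span of q_i
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]                # normalize
    end
    return Q, R
end

Q, R = gram_schmidt([1. 1. 1.; 0. 1. 1.; 0. 1. 0.])
```

On the Example 5.1 matrix this reproduces the `Q` and `R` computed cell-by-cell above.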

## QR Implementations in Julia

In :
?(qr)

search: qr qrfact qrfact! sqrt sqrtm isqrt QuickSort PartialQuickSort


Out:
qr(A [,pivot=Val{false}][;thin=true]) -> Q, R, [p]

Compute the (pivoted) QR factorization of A such that either A = Q*R or A[:,p] = Q*R. Also see qrfact. The default is to compute a thin factorization. Note that R is not extended with zeros when the full Q is requested.

qr(v::AbstractVector)

Computes the polar decomposition of a vector.

Input:

• v::AbstractVector - vector to normalize

Outputs:

• w - A unit vector in the direction of v
• r - The norm of v

normalize, normalize!, LinAlg.qr!

In :
?(qrfact)

search: qrfact qrfact!


Out:
qrfact(A) -> SPQR.Factorization

Compute the QR factorization of a sparse matrix A. A fill-reducing permutation is used. The main application of this type is to solve least squares problems with \. The function calls the C library SPQR and a few additional functions from the library are wrapped but not exported.

qrfact(A [,pivot=Val{false}]) -> F

Computes the QR factorization of A. The return type of F depends on the element type of A and whether pivoting is specified (with pivot==Val{true}).

| Return type | `eltype(A)` | `pivot` | Relationship between `F` and `A` |
|---|---|---|---|
| `QR` | not `BlasFloat` | either | `A==F[:Q]*F[:R]` |
| `QRCompactWY` | `BlasFloat` | `Val{false}` | `A==F[:Q]*F[:R]` |
| `QRPivoted` | `BlasFloat` | `Val{true}` | `A[:,F[:p]]==F[:Q]*F[:R]` |

BlasFloat refers to any of: Float32, Float64, Complex64 or Complex128.

The individual components of the factorization F can be accessed by indexing:

| Component | Description | `QR` | `QRCompactWY` | `QRPivoted` |
|---|---|---|---|---|
| `F[:Q]` | `Q` (orthogonal/unitary) part of QR | ✓ (`QRPackedQ`) | ✓ (`QRCompactWYQ`) | ✓ (`QRPackedQ`) |
| `F[:R]` | `R` (upper right triangular) part of QR | ✓ | ✓ | ✓ |
| `F[:p]` | pivot `Vector` | | | ✓ |
| `F[:P]` | (pivot) permutation `Matrix` | | | ✓ |

The following functions are available for the QR objects: size, \. When A is rectangular, \ will return a least squares solution and if the solution is not unique, the one with smallest norm is returned.

Multiplication with respect to either thin or full Q is allowed, i.e. both F[:Q]*F[:R] and F[:Q]*A are supported. A Q matrix can be converted into a regular matrix with full which has a named argument thin.

!!! note qrfact returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the Q and R matrices can be stored compactly rather than as two separate dense matrices.

The data contained in QR or QRPivoted can be used to construct the QRPackedQ type, which is a compact representation of the rotation matrix:

$$Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T)$$

where $\tau_i$ is the scale factor and $v_i$ is the projection vector associated with the $i^{th}$ Householder elementary reflector.

The data contained in QRCompactWY can be used to construct the QRCompactWYQ type, which is a compact representation of the rotation matrix

$$Q = I + Y T Y^T$$

where Y is $m \times r$ lower trapezoidal and T is $r \times r$ upper triangular. The *compact WY* representation [^Schreiber1989] is not to be confused with the older, *WY* representation [^Bischof1987]. (The LAPACK documentation uses V in lieu of Y.)

[^Bischof1987]: C Bischof and C Van Loan, "The WY representation for products of Householder matrices", SIAM J Sci Stat Comput 8 (1987), s2-s13. [doi:10.1137/0908009](http://dx.doi.org/10.1137/0908009)

[^Schreiber1989]: R Schreiber and C Van Loan, "A storage-efficient WY representation for products of Householder transformations", SIAM J Sci Stat Comput 10 (1989), 53-57. [doi:10.1137/0910005](http://dx.doi.org/10.1137/0910005)
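To make the Householder formula above concrete, here is a small sketch (plain Julia with `LinearAlgebra`, not part of the original notebook) that builds one reflector $I - \tau v v^T$ and checks that it annihilates the entries of a vector below the first:

```julia
using LinearAlgebra

# One Householder reflector H = I - τ*v*v' chosen to map x onto a
# multiple of e1; this is the building block of the Q described above.
x = [3.0, 4.0]
v = copy(x)
v[1] += sign(x[1]) * norm(x)   # reflection direction (sign choice avoids cancellation)
τ = 2 / dot(v, v)              # the scale factor τ from the formula above
H = I - τ * (v * v')

H * x                          # ≈ [-5.0, 0.0]: lower entry annihilated
```

`H` is orthogonal and symmetric, so applying it twice recovers `x`; LAPACK never forms `H` explicitly but stores only `v` and `τ`.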

## QR Example

In :
M = 5;
N = 2;
A = rand(M,N)

Out:
5×2 Array{Float64,2}:
0.260917  0.555757
0.68724   0.477956
0.862184  0.323348
0.230656  0.841944
0.405023  0.239465
In :
QR1 = qrfact(A, Val{false})  ## QR with pivot=false (for demo purposes)

Out:
Base.LinAlg.QRCompactWY{Float64,Array{Float64,2}}([-1.22515 -0.851695; 0.462457 0.826251; … ; 0.155213 -0.624; 0.272548 0.144251],[1.21297 -1.01404; 6.94162e-310 1.20929])
In :
Q1 = QR1[:Q]   ## "Fat" (full) Q: extra columns span the orthogonal complement of col(A)

Out:
5×5 Base.LinAlg.QRCompactWYQ{Float64,Array{Float64,2}}:
-0.212968   0.453099    -0.203175  -0.821032  -0.184315
-0.560945   0.00024438  -0.690906   0.374908  -0.259679
-0.70374   -0.334067     0.58745   -0.103852  -0.193046
-0.188268   0.824927     0.340959   0.401694   0.0802437
-0.330592  -0.0509507   -0.141485  -0.114919   0.924602 
In :
R1 = QR1[:R]

Out:
2×2 Array{Float64,2}:
-1.22515  -0.851695
0.0       0.826251
In :
## Note: Julia permits dimension mismatch here with QRCompactWYQ
Q1 * R1

Out:
5×2 Array{Float64,2}:
0.260917  0.555757
0.68724   0.477956
0.862184  0.323348
0.230656  0.841944
0.405023  0.239465
In :
A

Out:
5×2 Array{Float64,2}:
0.260917  0.555757
0.68724   0.477956
0.862184  0.323348
0.230656  0.841944
0.405023  0.239465
In :
(Q2, R2) = qr(A)  ## Default: unpivoted "thin QR"

Out:
(
[-0.212968 0.453099; -0.560945 0.00024438; … ; -0.188268 0.824927; -0.330592 -0.0509507],

[-1.22515 -0.851695; 0.0 0.826251])
In :
Q2  ## thin basis

Out:
5×2 Array{Float64,2}:
-0.212968   0.453099
-0.560945   0.00024438
-0.70374   -0.334067
-0.188268   0.824927
-0.330592  -0.0509507 
In :
R2

Out:
2×2 Array{Float64,2}:
-1.22515  -0.851695
0.0       0.826251
In :
Q2*R2

Out:
5×2 Array{Float64,2}:
0.260917  0.555757
0.68724   0.477956
0.862184  0.323348
0.230656  0.841944
0.405023  0.239465
In :
A

Out:
5×2 Array{Float64,2}:
0.260917  0.555757
0.68724   0.477956
0.862184  0.323348
0.230656  0.841944
0.405023  0.239465

## Solve $Ax=b$ (least squares) with $\vec{b}=\hat{1}$

In :
b = ones(M)/sqrt(M); ## unit vector
x1 = QR1 \ b ## QR1 from qrfact (fat Q)

Out:
2-element Array{Float64,1}:
0.392681
0.483478
In :
x2 = R2 \ (Q2'*b) ## QR2 from qr (thin Q, no pivoting)

Out:
2-element Array{Float64,1}:
0.392681
0.483478
In :
## Residual:
norm(A*x1-b), norm(A*x2-b)

Out:
(0.20786446598748637,0.2078644659874864)
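Tying back to the conditioning discussion at the top: the same least-squares problem can also be solved through the normal equations $A^T A x = A^T b$, which square the condition number, whereas backslash on a rectangular matrix uses a QR factorization. A sketch (variable names here are mine, not the notebook's):

```julia
using LinearAlgebra

M, N = 5, 2
A = rand(M, N)
b = ones(M) / sqrt(M)          # unit vector, as above

x_qr = A \ b                   # least squares via QR under the hood
x_ne = (A' * A) \ (A' * b)     # normal equations: cond(A'A) = cond(A)^2

# Both give the least-squares solution; the residual A*x - b is
# orthogonal to the column space of A.
norm(x_qr - x_ne)
```

For a well-conditioned random `A` the two answers agree to roundoff; the QR route is preferred when `A` is ill-conditioned, precisely because of the squaring demonstrated in the first experiment.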