The proof of the Perron-Frobenius theorem can seem very abstract, but it becomes easier to understand if you play with some examples.
This notebook presents the proof with computational examples.
Step 4 below uses JuMP to turn Perron-Frobenius into a computational algorithm.
There are a few variations on the theorem, some with more and some with less information, but the basic version says that if A has all positive entries, then the eigenvalue of largest absolute value is real and positive, and there is a corresponding real eigenvector with all positive entries.
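Before diving into the proof, here is a quick numerical illustration of that statement (a minimal sketch; the 4×4 size is arbitrary, and eig and indmax are the Julia 0.6 functions used throughout this notebook):

# Check the statement on a random positive matrix
A = rand(4,4)                     # all entries positive
Λ, X = eig(A)
i = indmax(abs.(Λ))               # index of the eigenvalue largest in absolute value
λ, v = Λ[i], X[:,i]
isreal(λ), real(λ) > 0, all(real.(v/v[1]) .> 0)   # expect (true, true, true)

Dividing the eigenvector by its first entry rescales it so that, if the theorem holds, every entry comes out positive.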
Step #1. Assume all(x.>0) and all(y.>0), and define τ(y,x) as the minimum of y./x.
# Define τ(y,x) on vectors
τ(y::Vector, x::Vector) = minimum(y./x)
τ (generic function with 1 method)
Notice that for 0 ≤ t ≤ τ(y,x) we have all(y .≥ t*x)
# Example
y = [10,5,6,9]
x = [1,2,3,4]
τ(y,x)
2.0
all(y.≥2x), all(y.≥1.99x), all(y.≥2.01x) # check these by hand
(true, true, false)
Step #2. If all(A.>0) and all(y.≥0) and y is not the zero vector, then all(A*y.>0) (strictly positive).
# Example
A = [1 2 3; 4 5 6; 7 8 9]
y = [0, 0.1, 0]
A * y # a single positive entry of y picks out an entire positive column of A
3-element Array{Float64,1}:
 0.2
 0.5
 0.8
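The example generalizes; as a quick sanity check, we can run many random trials (a minimal sketch; the matrix size and trial count are arbitrary):

# 1000 random trials of Step #2: positive B times a nonnegative,
# nonzero z always gives a strictly positive vector
all(begin
        B = rand(4,4)                        # all positive entries
        z = zeros(4); z[rand(1:4)] = rand()  # nonnegative, one positive entry
        all(B*z .> 0)                        # strictly positive output
    end for trial in 1:1000)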
Step #3.
τ(Ax,x) = τ(A²x,Ax) if x is an eigenvector with all(x.≥0).
τ(Ax,x) < τ(A²x,Ax) if x is not an eigenvector.
Why: let t = τ(Ax,x), so that Ax .≥ t*x. If x is an eigenvector, then Ax = t*x and both ratios equal t. Otherwise Ax - t*x is nonnegative and nonzero, so by Step #2 the vector A*(Ax - t*x) is strictly positive; that is, A²x .> t*Ax entrywise, and therefore τ(A²x,Ax) > t.
# An example
n = 6
A = rand(n,n)
x = rand(n)
[τ(A^k*x, A^(k-1)*x) for k=1:7] # This sequence increases, converging to an eigenvalue of A
7-element Array{Float64,1}:
 1.34884
 2.40402
 2.68214
 2.75781
 2.77293
 2.77552
 2.77666
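The limit of this sequence is the Perron eigenvalue, and if we start from the Perron eigenvector itself the sequence is constant, which is the equality case of Step #3. A quick check (a sketch, reusing the A and τ above; indmax is Julia 0.6):

Λ, X = eig(A)
i = indmax(abs.(Λ))              # the Perron eigenvalue has the largest absolute value
v = real.(X[:,i]); v = v/sum(v)  # scale the Perron eigenvector to be positive
τ(A*v, v), real(Λ[i])            # equal up to roundoff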
Step #4. Let tmax be the maximum of τ(Ax,x) over all nonzero x with all(x.≥0). We will prove mathematically that the maximizing x is a positive eigenvector and that tmax = τ(Ax,x) is the corresponding eigenvalue. Before we do it mathematically, let's see it computationally:
One way to pose this maximization is as a constrained optimization problem:
$\max t$ subject to
$x_i \ge 0$
$y = Ax$
$y_i / x_i \ge t$
$\sum_i x_i = 1$
We will use the highly popular Julia JuMP package, created at MIT (though not in math!) and used widely for operations research and in business schools:
# Pkg.add("JuMP")
using JuMP
# Pkg.add("Ipopt") (On my mac, this worked with 0.6.2 but not 0.6.0)
using Ipopt
A = rand(7,7)
7×7 Array{Float64,2}:
 0.971603   0.325743  0.863038  0.0234046  0.962918   0.496618  0.799348
 0.62474    0.140906  0.448296  0.505187   0.0646877  0.149136  0.205624
 0.448767   0.58146   0.47302   0.443701   0.303789   0.114217  0.892493
 0.808785   0.588347  0.839119  0.883789   0.920193   0.515088  0.22442
 0.0089511  0.242133  0.783681  0.420531   0.965035   0.544011  0.334241
 0.300799   0.990369  0.401669  0.427284   0.207415   0.309122  0.329326
 0.0314547  0.723179  0.476076  0.445037   0.249261   0.404243  0.502455
n = size(A,1)
m = Model(solver=IpoptSolver(print_level=2))
@variable(m, t); @objective(m, Max, t)
@variable(m, x[1:n]>=0); @constraint(m, sum(x)==1)
@variable(m, y[1:n]); @constraint(m, y .== A*x)
@NLconstraint(m, [i=1:n], t <= y[i]/x[i]) # nonlinear constraint
status = solve(m)
x = getvalue.(x)
t = getobjectivevalue(m)
3.3686909584508244
norm(A*x-t*x) # demonstrate we have found an eigenpair through optimization
2.4817758919083086e-6
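As a cross-check, the optimal t should match the Perron eigenvalue computed by a standard eigensolver, up to Ipopt's tolerance (a one-line sketch):

maximum(real.(eigvals(A))) - t   # the Perron root minus the optimal t: near zero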
Step #5. Demonstrate that if the x above were not an eigenvector, then t could not have been the solution to the optimization problem.
As we saw in Step #3, if x had not been an eigenvector, then τ(Ax,x) < τ(A²x,Ax), so τ(Ax,x) was not the maximum: the vector Ax itself would have achieved a larger value.
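A spot check of this claim (a sketch, reusing A, n, and t from Step #4; the trial count is arbitrary): τ(Ax,x) for random positive x never exceeds the optimal t.

# Compare τ(Ax,x) for 1000 random x with the optimum t
ts = [begin x0 = rand(n); τ(A*x0, x0) end for trial in 1:1000]
maximum(ts) <= t + 1e-6   # true, allowing for the solver's tolerance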
Step #6. Any complex eigenvector-eigenvalue pair has absolute eigenvalue ≤ tmax:
If Ax = λx, then all(A*abs.(x) .≥ abs(λ)*abs.(x)) by the triangle inequality. Thus τ(A*abs.(x), abs.(x)) ≥ abs(λ), and since tmax is the maximum of τ, abs(λ) ≤ tmax.
For example:
A = rand(5,5)
5×5 Array{Float64,2}:
 0.996711  0.656579  0.453247   0.61344   0.50697
 0.591166  0.616613  0.987583   0.246784  0.442663
 0.949881  0.454748  0.831274   0.708647  0.458239
 0.069995  0.108182  0.0296905  0.434673  0.0322304
 0.105186  0.918176  0.831151   0.126704  0.0709903
eigvals(A)
5-element Array{Complex{Float64},1}:
   2.58617+0.0im
  0.125586+0.34277im
  0.125586-0.34277im
  0.351225+0.0im
 -0.238306+0.0im
Λ, X = eig(A); x = X[:,2]; λ = Λ[2]   # pick a complex eigenpair
0.1255862966495158 + 0.3427698069874712im
norm(A*x-λ*x)
8.355107029738416e-16
τ(A*abs.(x),abs.(x)) - abs(λ) # This is non-negative
0.6784932085048402
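We can run the same check over every eigenpair at once (a sketch; for a random matrix the eigenvector entries are nonzero, so τ is well defined):

# One non-negative gap per eigenpair; the gap is ~0 for the Perron pair (j = 1)
[τ(A*abs.(X[:,j]), abs.(X[:,j])) - abs(Λ[j]) for j = 1:size(A,1)]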