Developed by Randy LeVeque for a course on Approximation Theory and Spectral Methods at the University of Washington.
See http://faculty.washington.edu/rjl/classes/am590a2013/codes.html for more IPython Notebook examples.
The heat equation $u_t = u_{xx}$ is solved on the interval $[-1,1]$ with initial data $u(x,0) = \exp(-50 x^2)$ and homogeneous Dirichlet boundary conditions $u(-1,t) = u(1,t) = 0$.
Chebyshev differentiation in space is combined with the second-order accurate trapezoid method for the time discretization.
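The trapezoid method (Crank-Nicolson, when applied to a PDE) advances $u' = f(u)$ via $u^{n+1} = u^n + \frac{\Delta t}{2}\left(f(u^n) + f(u^{n+1})\right)$. As a quick illustration (not part of the original notebook), the following Python sketch confirms the expected $O(\Delta t^2)$ accuracy on the scalar test problem $u' = \lambda u$; the values of $\lambda$, $u_0$, and $T$ are arbitrary:

```python
import numpy as np

# Trapezoid method for u' = lam*u on [0, T]:
#   u_{n+1} = u_n + 0.5*dt*(lam*u_n + lam*u_{n+1}),
# which rearranges to u_{n+1} = u_n * (1 + 0.5*dt*lam) / (1 - 0.5*dt*lam).
def trapezoid(lam, u0, T, nsteps):
    dt = T / nsteps
    u = u0
    for _ in range(nsteps):
        u = u * (1 + 0.5*dt*lam) / (1 - 0.5*dt*lam)
    return u

lam, u0, T = -2.0, 1.0, 1.0
exact = u0 * np.exp(lam*T)
err1 = abs(trapezoid(lam, u0, T, 50) - exact)
err2 = abs(trapezoid(lam, u0, T, 100) - exact)
print(err1 / err2)   # ~4: halving dt cuts the error by ~2^2
```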
import pymatbridge
ip = get_ipython()
pymatbridge.load_ipython_extension(ip)
Chebfun is required -- to run this notebook you will need to download it and add the appropriate directory to the Matlab path here:
%%matlab
path(path,'/Users/rjl/chebfun/chebfun_v4.2.2889/chebfun')
First note that the solution to the heat equation on the entire real axis (no boundaries) with data $u(x,0) = \exp(-\beta x^2)$ is given by $$\hat u(x,t) = \sqrt{\frac{-t_0}{t-t_0}} \exp\left(-x^2 / \left(4(t-t_0)\right)\right)$$ where $t_0 = -1/(4\beta)$. Up to a constant factor, this is the solution to the heat equation with delta-function initial data $u(x,t_0) = \delta(x)$.
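As a sanity check (not in the original notebook), a centered finite-difference test in Python confirms that $\hat u$ satisfies $u_t = u_{xx}$; the evaluation point $(x,t) = (0.3, 0.1)$ and step $h$ are arbitrary choices:

```python
import numpy as np

beta = 50.0
t0 = -1.0 / (4*beta)
uhat = lambda x, t: np.sqrt(-t0/(t - t0)) * np.exp(-x**2 / (4*(t - t0)))

# Centered finite-difference approximations of u_t and u_xx
# at an arbitrary test point (x, t) = (0.3, 0.1):
x, t, h = 0.3, 0.1, 1e-4
ut  = (uhat(x, t+h) - uhat(x, t-h)) / (2*h)
uxx = (uhat(x+h, t) - 2*uhat(x, t) + uhat(x-h, t)) / h**2
print(abs(ut - uxx))   # small: uhat satisfies the heat equation
```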
This does not solve the heat equation with the boundary conditions $u(-1,t)=u(1,t)=0$, although it is a reasonable approximation for small $t$ if $\beta$ is large enough.
To check spectral convergence, however, we need a better approximation to the true solution. This can be obtained by considering the problem on the full real axis with data at time $t_0$ given by $u(x,t_0) = \delta(x) - \delta(x-2) - \delta(x+2)$. By linearity the solution is $u(x,t) = \hat u(x,t) - \hat u(x-2,t) - \hat u(x+2,t)$, and for small enough $t$ the contributions nearly exactly cancel at the points $x=\pm 1$. (For larger $t$ they won't, and one could get even better approximations by including additional images at $x = \pm 4, \pm 6, \ldots$ via the "method of images".)
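An illustrative Python check (using the same $\beta = 50$ as the notebook) shows just how well the images cancel at the boundary:

```python
import numpy as np

beta = 50.0
t0 = -1.0 / (4*beta)
uhat = lambda x, t: np.sqrt(-t0/(t - t0)) * np.exp(-x**2 / (4*(t - t0)))
ubar = lambda x, t: uhat(x, t) - uhat(x-2, t) - uhat(x+2, t)

t = 0.02
print(uhat(1.0, t))   # ~2e-5: uhat alone visibly violates the boundary condition
print(ubar(1.0, t))   # equals -uhat(3,t) since uhat is even: utterly negligible
```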
To illustrate this, here is the function $\hat u(x,t)$ at $t=0.1$, where it is easier to see that the boundary conditions $u(-1,t)=u(1,t)=0$ are violated:
%%matlab
beta = 50;
t0 = -1./(4*beta);
uhat = @(x,t) sqrt(-t0/(t-t0)) * exp(-x.^2 / (4*(t-t0)));
x = linspace(-3,3,1000);
t = 0.1;
plot(x,uhat(x,t),'b')
hold on
plot(x,0*x,'k')
plot([-1,-1],[-.25,.25],'k')
plot([1,1],[-.25,.25],'k')
Here's the function $\bar u(x,t) = \hat u(x,t) - \hat u(x-2,t) - \hat u(x+2,t)$ at the same time:
%%matlab
ubar = @(x,t) uhat(x,t)-uhat(x-2,t)-uhat(x+2,t);
plot(x,ubar(x,t),'b')
hold on
plot(x,0*x,'k')
plot([-1,-1],[-.25,.25],'k')
plot([1,1],[-.25,.25],'k')
At time $t=0.02$ the value $\hat u(\pm 1,t)$ is very small, but still large enough to mask the convergence we hope to see:
%%matlab
uhat(1,0.02)
ans = 2.030346582452640e-05
So in the tests below we use the improved ubar function defined above.
Set up the Chebyshev differentiation operator:
%%matlab
x = chebfun('x');
L = chebop(@(u) diff(u,2));
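In Chebfun, evaluating `L(n)` below returns the $n\times n$ spectral discretization of the operator on a Chebyshev grid. For reference, an equivalent Chebyshev differentiation matrix can be built directly; this is a Python sketch of the standard construction (Trefethen's `cheb.m`), not part of the original notebook:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x (Trefethen's cheb.m)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n+1) / n)        # Chebyshev points, x[0]=1, x[n]=-1
    c = np.hstack([2.0, np.ones(n-1), 2.0]) * (-1.0)**np.arange(n+1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0/c) / (dX + np.eye(n+1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                   # diagonal via negative row sums
    return D, x

# D differentiates polynomials of degree <= n exactly:
D, x = cheb(8)
print(np.max(np.abs(D @ x**3 - 3*x**2)))   # ~machine precision
```

The second-derivative matrix used below corresponds to `D @ D`.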
%%matlab
Nvals = 20:5:40;   % spatial grid resolutions
for n=Nvals
    Ln = L(n);                 % n x n Chebyshev discretization of d^2/dx^2
    Ln = Ln(2:n-1,2:n-1);      % restrict to interior points (Dirichlet BCs)
    x = chebpts(n);
    x = x(2:n-1);
    errors = [];
    dts = [];
    % numbers of time steps, logarithmically spaced:
    nmax_vals = floor(logspace(2,3.5,8));
    for nmax = nmax_vals
        tfinal = 0.02;
        dt = tfinal/nmax;
        dts = [dts, dt];
        A = eye(n-2) - 0.5*dt*Ln;   % trapezoid (Crank-Nicolson) system matrix
        % initial data:
        u = exp(-beta*x.^2);
        % time stepping loop:
        for nt=1:nmax
            u = A\(u + 0.5*dt*Ln*u);
        end
        utrue = ubar(x,tfinal);
        error = norm(u-utrue,inf);
        errors = [errors; error];
    end
    loglog(dts,errors,'o-')
    hold on
    text(3e-6,errors(end),sprintf('n = %g',n),'fontsize',15)
end
% plot triangle showing slope dt^2 for expected second order accuracy
xs1 = 2e-5;
xs2 = 1e-4;
ys1 = 1e-8;
ys2 = (xs2/xs1)^2 * ys1;
xs = [xs1,xs2,xs2,xs1];
ys = [ys1,ys1,ys2,ys1];
loglog(xs,ys,'r')
set(gca,'fontsize',15)
title('Log-log plot of error vs. \Delta t')
Note that for the finer grids ($n=35,\ 40$) we observe convergence at the expected rate $O(\Delta t^2)$ until the error reaches the level set by the spatial resolution. We also see spectral convergence of the spatial error: as the number of grid points increases linearly, the error floor decreases like $C^{-n}$ for some constant $C>1$.
Note that if we had used $\hat u$ instead of $\bar u$ as the approximation to the true solution, then all of these curves would bottom out at about $2\times 10^{-5}$ and the convergence with decreasing $\Delta t$ would not be visible.
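For readers without Matlab or Chebfun, the core of the experiment can be sketched in Python (an illustrative translation, not the original code; `cheb` is the standard Chebyshev differentiation-matrix construction, and the two step counts chosen here are arbitrary):

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points (Trefethen's cheb.m)."""
    x = np.cos(np.pi * np.arange(n+1) / n)
    c = np.hstack([2.0, np.ones(n-1), 2.0]) * (-1.0)**np.arange(n+1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0/c) / (dX + np.eye(n+1))
    D -= np.diag(D.sum(axis=1))
    return D, x

beta = 50.0
t0 = -1.0 / (4*beta)
uhat = lambda x, t: np.sqrt(-t0/(t - t0)) * np.exp(-x**2 / (4*(t - t0)))
ubar = lambda x, t: uhat(x, t) - uhat(x-2, t) - uhat(x+2, t)

n = 40
D, x = cheb(n)
L2 = (D @ D)[1:n, 1:n]         # second-derivative matrix on interior points
xi = x[1:n]                    # interior Chebyshev points
tfinal = 0.02

errors = []
for nmax in (200, 400):
    dt = tfinal / nmax
    A = np.eye(n-1) - 0.5*dt*L2          # trapezoid (Crank-Nicolson) matrix
    u = np.exp(-beta*xi**2)              # initial data
    for _ in range(nmax):
        u = np.linalg.solve(A, u + 0.5*dt*(L2 @ u))
    errors.append(np.max(np.abs(u - ubar(xi, tfinal))))

print(errors[0] / errors[1])   # ~4: second-order accuracy in time
```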