Lecturer: David Kaplan
Jupyter Notebook Authors: David Kaplan & Cameron Hummels
This is a Jupyter notebook lesson taken from the GROWTH Summer School 2019. For other lessons and their accompanying lectures, please see: http://growth.caltech.edu/growth-school-2019.html
Produce and analyze radio light curve data, fitting a broken power law to it.
See the GROWTH school webpage for detailed instructions on how to install these modules and packages. Nominally, you should be able to install the Python modules with pip install <module>. The external astromatic packages are most easily installed using package managers (e.g., rpm, apt-get).
import numpy as np
import matplotlib.pyplot as plt
import emcee
from astropy.io import ascii
import corner
import os
from timeit import default_timer as timer
import matplotlib.colors as colors
import matplotlib.cm as cmx
from scipy.optimize import least_squares, curve_fit
from scipy.stats import f
def load_data(filename='radio_lightcurve.dat'):
data = ascii.read(filename)
return data
os.chdir('data')
data = load_data()
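As a quick check (not part of the original exercise), print the first few rows to confirm the column names assumed throughout this notebook (delta_t, telescope, frequency, flux, rms):

print(data[:5])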
Make a log-log plot of the flux density as a function of time. Make sure to modularise your code so that we can re-use parts of it later on. For bonus points use different markers for each telescope, and use a colour scale to denote the observation frequency.
def plot_data(ax, sm, data, scaled=False, **kwargs):
telescope_marker_dict = {'VLA':'s', 'ATCA':'o', 'GMRT':'d'}
for row in data:
freq = row['frequency']
colorval = sm.to_rgba(freq)
telescope = row['telescope']
marker = telescope_marker_dict[telescope]
if scaled:
flux = row['scaled_flux']
rms = row['scaled_rms']
else:
flux = row['flux']
rms = row['rms']
ax.errorbar(row['delta_t'], flux, rms, linestyle='', marker=marker, c=colorval, **kwargs)
return
def cmap_setup(cmap='viridis', min_freq=0, max_freq=17):
    freq_cmap = plt.cm.get_cmap(cmap)
    cNorm = colors.Normalize(vmin=min_freq, vmax=max_freq)
    sm = cmx.ScalarMappable(norm=cNorm, cmap=freq_cmap)
    sm._A = []  # dummy array so the mappable can be passed straight to fig.colorbar()
    return sm
def make_plot(data, scaled=False, model=None, params=None, tvals=np.arange(10,400), plot_models=False):
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111)
sm = cmap_setup()
plot_data(ax, sm, data, scaled=scaled)
cbar = fig.colorbar(sm,fraction=0.046, pad=0.04)
cbar.set_label('Frequency (GHz)')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Time (days)')
if scaled:
ax.set_ylabel('Scaled Flux Density ($\mu$Jy)')
else:
ax.set_ylabel('Flux Density ($\mu$Jy)')
if model:
plot_model(model, params, tvals, ax)
if plot_models:
plot_physical_models(ax)
ax.set_xlim(10,300)
make_plot(data)
Write a function to take a subset of the data and calculate the spectral index and its uncertainty. Using the multi-band observations at 162 days post-merger, calculate the spectral index.
def calc_power_law(freq,S0,alpha):
S = S0 * (freq) ** alpha
return S
def alpha_calc(data):
freqs = data['frequency']
flux = data['flux']
flux_errs = data['rms']
    popt, pcov = curve_fit(calc_power_law, freqs, flux, sigma=flux_errs, p0=(50, -0.61), absolute_sigma=True)
alpha = popt[1]
alpha_err = np.sqrt(np.diag(pcov))[1]
return alpha, alpha_err
sel_data = data[data['delta_t'] == 162.89]
alpha, alpha_err = alpha_calc(sel_data)
print("alpha = %.1f+/-%.1f"%(alpha, alpha_err))
alpha = -0.6+/-0.1
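As an optional check (a sketch, not part of the original solution), you can overplot the best-fit power law on the 162-day spectrum; here we refit with curve_fit to recover the normalisation as well as the slope:

popt, pcov = curve_fit(calc_power_law, sel_data['frequency'], sel_data['flux'],
                       sigma=sel_data['rms'], p0=(50, -0.6), absolute_sigma=True)
freq_grid = np.logspace(np.log10(min(sel_data['frequency'])),
                        np.log10(max(sel_data['frequency'])), 100)
plt.errorbar(sel_data['frequency'], sel_data['flux'], sel_data['rms'],
             linestyle='', marker='o')
plt.plot(freq_grid, calc_power_law(freq_grid, *popt))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Flux Density ($\mu$Jy)')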
Write a function to take the observed data and scale it to a specific frequency based on an estimated spectral index. Don't forget to include uncertainties! You should add two columns to your data table, called "scaled_flux" and "scaled_rms".
def scale_data(data, alpha, alpha_err, ref_freq=3.0):
f_scale = (ref_freq/data['frequency'])**alpha
rms_scale = np.abs(f_scale*np.log(ref_freq/data['frequency'])*alpha_err)
scaled_flux = data['flux'] * f_scale
scaled_rms = np.abs(scaled_flux) * np.sqrt((data['rms']/data['flux'])**2 + (rms_scale/f_scale)**2)
data['scaled_flux'] = scaled_flux
data['scaled_rms'] = scaled_rms
return data
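As a sanity check on the error propagation (hypothetical numbers, not from the dataset): a 100 μJy point at 6 GHz with a 10 μJy rms, scaled to 3 GHz with α = -0.6 ± 0.1, should come out near 152 ± 18 μJy.

from astropy.table import Table
test = Table({'frequency': [6.0], 'flux': [100.0], 'rms': [10.0]})
test = scale_data(test, -0.6, 0.1)
print(test['scaled_flux'][0], test['scaled_rms'][0])  # expect ~151.6 and ~18.4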
Modify your plot_data() and make_plot() functions to add a keyword parameter "scaled" that is by default False. If scaled=True, plot_data() should plot the scaled data instead of the observed data.
Scale the data to 3 GHz based on your estimated spectral index and associated uncertainty, then plot the scaled lightcurve
data = scale_data(data, -0.6,0.1)
make_plot(data, scaled=True)
We now want to characterise the radio lightcurve. You should be able to see that it initially rises according to a power law, peaks somewhere between 100 and 200 days post-merger and then declines according to a different power law.
However, when we published the first paper demonstrating evidence of a turnover we only had observations up to 200 days post-merger. We will now determine evidence for a turnover using that subset of data.
Create a new table called tdata with the data up to 200 days post-merger and plot the data using your make_plot() function.
data = load_data()
data = scale_data(data, -0.6,0.05)
tdata = data[data['delta_t'] < 200]
make_plot(tdata, scaled=True)
We can fit this data with a "smoothed broken power law", which combines two power laws with a smoothing parameter around the break point. One functional form of this is given by
$S(t) = F_{\rm peak} \left[ \left(\dfrac{t}{t_{\rm peak}}\right)^{-s\delta_1} + \left(\dfrac{t}{t_{\rm peak}}\right)^{-s\delta_2}\right]^{-1/s}$
In the limits $t \ll t_{\rm peak}$ and $t \gg t_{\rm peak}$ this reduces to $S \propto t^{\delta_1}$ and $S \propto t^{\delta_2}$ respectively, with $s$ controlling the sharpness of the break. Write a function smooth_broken_power_law() that outputs a smoothed broken power law and also scales with frequency according to a spectral index.
def smooth_broken_power_law(t, nu, F_peak, t_peak, delta_1, delta_2, alpha, log_s, nu0=3.0):
s = 10**log_s
return (nu/nu0)**alpha * F_peak * ((t/t_peak)**(-s*delta_1) + (t/t_peak)**(-s*delta_2))**(-1.0/s)
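A quick behavioural check (the parameter values here are illustrative guesses, roughly what the lightcurve suggests): at the reference frequency the model should rise for t well below t_peak, peak near t_peak, and decline afterwards.

tvals_test = np.array([20.0, 75.0, 150.0, 300.0])
# expect the printed fluxes to rise towards t_peak=150, then fall
print(smooth_broken_power_law(tvals_test, 3.0, F_peak=110, t_peak=150,
                              delta_1=0.8, delta_2=-2.0, alpha=-0.6, log_s=1))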
We now want to fit a smoothed broken power law to our data and see if there is evidence for a turnover in the radio lightcurve. In our paper we do this via a parameter grid-search to minimise $\chi^2$.
Here we will perform an MCMC fit using the emcee package, to determine lightcurve parameters and the spectral index of the source. First you will need to write 3 functions that define your Probability, Prior and Likelihood.
We will use a uniform prior with $\delta_1>0$ (since we require the lightcurve to initially rise), $0<t_{\rm peak}<300$ (since our data only covers the period up to 200 days) and $\log_{10} s < 3$ (i.e. $s<1000$). The parameters will be passed to the function via a tuple.
def lnprior(theta):
F_peak, t_peak, delta_1, delta_2, alpha, log_s = theta
if 0.0 < t_peak < 300.0 and delta_1 > 0.0 and log_s < 3:
return 0.0
else:
return -np.inf
We will now write a likelihood function that takes the lightcurve parameters inside the tuple theta, along with the observed data.
def lnlike(theta, t, nu, S, S_err):
F_peak, t_peak, delta_1, delta_2, alpha, log_s = theta
model = smooth_broken_power_law(t, nu, F_peak, t_peak, delta_1, delta_2, alpha, log_s)
inv_sigma2 = 1.0/S_err**2
return -0.5*(np.sum((S-model)**2*inv_sigma2 - np.log(inv_sigma2)))
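For reference, the function above implements the standard Gaussian log-likelihood, up to an additive constant of $-\frac{N}{2}\ln(2\pi)$:

$\ln \mathcal{L} = -\dfrac{1}{2}\sum_i \left[\dfrac{\left(S_i - S_{{\rm model},i}\right)^2}{\sigma_i^2} + \ln \sigma_i^2\right]$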
Now write a function to calculate the log-posterior probability (up to a normalising constant) using the lnlike() and lnprior() functions you wrote above.
def lnprob(theta, t, nu, S, S_err):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, t, nu, S, S_err)
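Before launching a long run, a quick sanity check helps (the parameter values here are rough eyeball guesses from the lightcurve): lnprob() should return a finite value for parameters inside the prior and -inf outside it.

theta_test = (110, 150, 0.8, -2.0, -0.6, 1.0)
print(lnprob(theta_test, tdata['delta_t'], tdata['frequency'], tdata['flux'], tdata['rms']))
print(lnprior((110, 400, 0.8, -2.0, -0.6, 1.0)))  # t_peak > 300, outside the prior: expect -inf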
We will now fit the data using the emcee package. The function get_starting_pos() provided below will set up an array of walker starting positions for given lightcurve parameters. Examine the lightcurve and estimate some reasonable values for these parameters and add them to the function.
Now write a function called run_mcmc() that will load the observed data, take the starting position and then run the emcee Ensemble Sampler. Use a small number of iterations and walkers initially (e.g., 100 iterations with 20 walkers) to see how long the code takes to run on your laptop, then increase both until the algorithm takes ~90 seconds to run.
def get_starting_pos(nwalkers, ndim=6):
F_peak = 110
t_peak = 150
delta_1 = 0.8
delta_2 = -2
alpha = -0.6
log_s = 1
pos = [np.asarray([F_peak, t_peak, delta_1, delta_2, alpha, log_s]) + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
return pos
def run_mcmc(data, niters=1000, nthreads=1, nwalkers=200, ndim=6):
t = data['delta_t']
nu = data['frequency']
S = data['flux']
S_err = data['rms']
pos = get_starting_pos(nwalkers, ndim=ndim)
    # note: 'threads' is the emcee 2.x multiprocessing argument (emcee 3 uses 'pool' instead)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(t, nu, S, S_err), threads=nthreads)
start = timer()
sampler.run_mcmc(pos, niters)
end = timer()
print("Computation time: %f s"%(end-start))
return sampler
We now want to inspect our chain to see if our algorithm has converged to a reasonable solution. First, extract the chain from the sampler, and then write a function to make a figure showing how each walker moves around the parameter space. Your figure should have 6 subplots (1 for each dimension), iteration number on the x-axis and parameter value on the y-axis.
MCMC algorithms typically use a burn-in phase, where the sampler is moving towards the optimum solution and not yet accurately sampling the parameter space. Add a parameter chain_cut to your function that plots a vertical line at the end of the burn-in.
sampler = run_mcmc(tdata)
chain = sampler.chain
/home/growth/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in power
/home/growth/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in power
Computation time: 68.812542 s
def make_chain_plot(chain, chain_cut):
niters = chain.shape[1]
ndim = chain.shape[2]
fig, axes = plt.subplots(ndim,1,sharex=True)
fig.set_size_inches(7, 20)
    param_names = ['$F_{{\\rm peak}, 3\\,{\\rm GHz}}$', '$t_{{\\rm peak}}$', '$\\delta_1$', '$\\delta_2$', '$\\alpha$', '$\\log_{10}(s)$']
for i, (ax,param_name) in enumerate(zip(axes,param_names)):
ax.plot(chain[:,:,i].T,linestyle='-',color='k',alpha=0.3)
ax.set_ylabel(param_name)
ax.set_xlim(0,niters)
ax.axvline(chain_cut,c='r',linestyle='--')
chain_cut = 200
make_chain_plot(chain, chain_cut)
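Another cheap convergence diagnostic (optional): emcee exposes the per-walker acceptance fraction, and for a healthy run the mean typically sits somewhere around 0.2-0.5.

print("Mean acceptance fraction: %.3f" % np.mean(sampler.acceptance_fraction))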
Now that we know that our algorithm is converging, and we know how long the burn-in takes, we can begin to estimate parameters. The function below will make a corner plot from the good part of your chain.
good_chain = chain[:, chain_cut:, :]
def make_corner_plot(good_chain, savefile='corner.png'):
    param_names = ['$F_{{\\rm peak}, 3\\,{\\rm GHz}}$', '$t_{{\\rm peak}}$', '$\\delta_1$', '$\\delta_2$', '$\\alpha$', '$\\log_{10}(s)$']
ndim = good_chain.shape[2]
fig = corner.corner(good_chain.reshape((-1, ndim)), labels=param_names, quantiles=[0.16, 0.5, 0.84], show_titles=True)
plt.savefig(savefile)
make_corner_plot(good_chain)
WARNING:root:Too few points to create valid contours
WARNING:root:Too few points to create valid contours
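To turn the chain into numbers, here is a minimal sketch (not from the original notebook) that quotes each parameter as its median with 16th/84th-percentile uncertainties, matching the quantiles drawn on the corner plot:

flat_chain = good_chain.reshape((-1, good_chain.shape[2]))
names = ['F_peak', 't_peak', 'delta_1', 'delta_2', 'alpha', 'log_s']
for i, name in enumerate(names):
    lo, med, hi = np.percentile(flat_chain[:, i], [16, 50, 84])
    print("%s = %.2f (+%.2f / -%.2f)" % (name, med, hi - med, med - lo))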