**Lecturer:** David Kaplan

**Jupyter Notebook Authors:** Dougal Dobie, David Kaplan & Cameron Hummels

This is a Jupyter notebook lesson taken from the GROWTH Summer School 2020. For other lessons and their accompanying lectures, please see: http://growth.caltech.edu/growth-school-2020.html

Produce and analyze radio light curve data, fitting a broken power law to it.

- Plot the radio light curve
- Determine its spectral index
- Scale the data based on its spectral index
- Fit the data with a power law
- Fit the data with a broken power law with a smooth turnover
- Interpret these results with a physical model

See GROWTH school webpage for detailed instructions on how to install these modules and packages. Nominally, you should be able to install the python modules with `pip install <module>`. The external astromatic packages are easiest installed using package managers (e.g., `rpm`, `apt-get`).

- python 3
- astropy
- numpy
- scipy
- matplotlib
- emcee
- corner


In [1]:

```
import numpy as np
import matplotlib.pyplot as plt
import emcee
from astropy.io import ascii
import corner
import os
from timeit import default_timer as timer
import matplotlib.colors as colors
import matplotlib.cm as cmx
from scipy.optimize import least_squares, curve_fit
from scipy.stats import f
```

We write a basic function to load in the radio lightcurve. It's pretty easy using `astropy`, but you should look at the file `radio_lightcurve.dat` and make sure that the data returned look appropriate. It's also good to keep track of some basic info like:

- How many observations are there?
- What is the first observation date? What is the last?
- What is the lowest frequency? What is the highest frequency?
- Do all of the observations actually report *detections*, where the source is detected at $>3\sigma$ significance?

We will use the `astropy` `Table` module, so each data-set works as a table. This way all relevant columns get treated together. This isn't required, but it makes things easier.

The first few lines of `radio_lightcurve.dat`:
```
delta_t telescope frequency flux rms
10.37 VLA 6.2 7.8 2.5
16.42 VLA 3.0 18.7 6.3
17.39 VLA 3.0 15.1 3.9
...
```

The first line is the header (with `delta_t` = $\Delta t$, the time since the GW trigger in days).

In [2]:

```
def load_data(filename='data/radio_lightcurve.dat'):
    data = ascii.read(filename)
    return data

data = load_data()
```

Answers to the questions above:
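
As a sketch of how to pull those numbers out of the table (shown here on a stand-in array built from only the three sample rows above, so the actual answers will differ; with the real `data` table the same column indexing works):

```python
import numpy as np

# A stand-in for the real table, using only the three sample rows shown above
data = np.array([(10.37, 'VLA', 6.2, 7.8, 2.5),
                 (16.42, 'VLA', 3.0, 18.7, 6.3),
                 (17.39, 'VLA', 3.0, 15.1, 3.9)],
                dtype=[('delta_t', 'f8'), ('telescope', 'U8'),
                       ('frequency', 'f8'), ('flux', 'f8'), ('rms', 'f8')])

print("Number of observations:", len(data))
print("First/last epoch (days):", data['delta_t'].min(), data['delta_t'].max())
print("Frequency range (GHz):", data['frequency'].min(), data['frequency'].max())
# >3 sigma check: flux must exceed 3x the rms to count as a detection
print("All detections?", np.all(data['flux'] > 3*data['rms']))
```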

Plot the flux density as a function of time. This works best as a log-log plot. Making a basic plot is easy, but we want to be a little fancier. There are some variations we might make later on so it helps to modularise your code *now*.

For bonus points use different markers for each telescope, and use a colour scale to denote the observation frequency.

The basic elements are below. Fill in the `plot_data` and `make_plot` functions.

In [1]:

```
def plot_data(ax, sm, data, scaled=False, **kwargs):
    telescope_marker_dict = {'VLA':'s', 'ATCA':'o', 'GMRT':'d'}
    for row in data:
        #Loop over each row of the data, set the marker colour based on frequency and the marker style based on telescope
        pass
    return

def cmap_setup(cmap='viridis', min_freq=0, max_freq=17):
    '''
    This function will set up a scalar map for you to colour your markers by frequency
    '''
    freq_cmap = plt.cm.get_cmap(cmap)
    cNorm = colors.Normalize(vmin=min_freq, vmax=max_freq)
    scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=cmap)
    sm = scalarMap
    sm._A = []
    return sm

def make_plot(data, scaled=False, model=None, params=None, tvals=np.arange(10,400), plot_models=False):
    fig = plt.figure(figsize=(10,6))
    ax = fig.add_subplot(111)
    #Get the scalar map, plot the data using the plot_data function above
    sm = cmap_setup()
    plot_data(ax, sm, data, scaled=scaled)
    #Set up a colourbar
    #Set axis scales to log
    #Label axes, set axis limits etc.

make_plot(data)
```

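
One possible body for `plot_data()` is sketched below (an assumption, not the official solution; it uses the column names from the data file, and the `scaled_flux`/`scaled_rms` columns only exist once you add them in a later step):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx

def cmap_setup(cmap='viridis', min_freq=0, max_freq=17):
    cNorm = colors.Normalize(vmin=min_freq, vmax=max_freq)
    sm = cmx.ScalarMappable(norm=cNorm, cmap=cmap)
    sm._A = []
    return sm

def plot_data(ax, sm, data, scaled=False, **kwargs):
    telescope_marker_dict = {'VLA': 's', 'ATCA': 'o', 'GMRT': 'd'}
    flux_col = 'scaled_flux' if scaled else 'flux'
    rms_col = 'scaled_rms' if scaled else 'rms'
    for row in data:
        # marker style from the telescope, marker colour from the frequency
        ax.errorbar(row['delta_t'], row[flux_col], yerr=row[rms_col],
                    marker=telescope_marker_dict[row['telescope']],
                    color=sm.to_rgba(row['frequency']), linestyle='', **kwargs)
    return ax

# minimal usage on two hand-typed rows mimicking the data file
fig, ax = plt.subplots()
fake = [{'delta_t': 10.4, 'telescope': 'VLA', 'frequency': 6.2, 'flux': 7.8, 'rms': 2.5},
        {'delta_t': 16.4, 'telescope': 'VLA', 'frequency': 3.0, 'flux': 18.7, 'rms': 6.3}]
plot_data(ax, cmap_setup(), fake)
```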

As with many things in astrophysics, the emission in the radio regime is *non-thermal* in origin (unlike the early emission in the optical/UV/infrared, which is a thermal blackbody). What that means is that the emission generally has a power-law spectrum:
$$ S_\nu(\nu) \propto \nu^\alpha$$
Strictly this only works for data observed at *exactly* the same time, but real observations don't work that way. A single telescope can (usually) only observe at a single frequency, and different telescopes are separated in time by schedules, longitude, etc. So we need to be a bit more generous about how we define simultaneous. Luckily, the light-curve is mostly evolving pretty slowly, so this isn't necessarily a problem.

Write a function to take a subset of the data that was observed roughly simultaneously and calculate the spectral index $\alpha$ and its uncertainty. Using the multi-band observations at 162 days post-merger, calculate the spectral index. Questions:

- How many observations did you use, over what time window?
- What do you get for $\alpha$ and its uncertainty?
- Do you think using a finite time window introduced significant errors into this calculation?

In [ ]:

```
def calc_power_law(freq, S0, alpha):
    S = S0 * (freq) ** alpha
    return S

def alpha_calc(data):
    '''
    Write a function to take in some subset of data and fit a power law to the spectrum using the curve_fit function.
    Your function should return a tuple containing the spectral index and associated uncertainty.
    Hint: Don't forget to include uncertainties (and set absolute_sigma=True)
    '''
    return alpha, alpha_err
```
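
A possible implementation is sketched below (the `p0` starting guesses are illustrative assumptions; any reasonable values should work here):

```python
import numpy as np
from scipy.optimize import curve_fit

def calc_power_law(freq, S0, alpha):
    return S0 * freq**alpha

def alpha_calc(data):
    # fit S(nu) = S0 * nu**alpha to one quasi-simultaneous epoch,
    # weighting by the measured rms (absolute_sigma keeps the errors meaningful)
    popt, pcov = curve_fit(calc_power_law, data['frequency'], data['flux'],
                           sigma=data['rms'], absolute_sigma=True, p0=[20.0, -0.5])
    alpha = popt[1]
    alpha_err = np.sqrt(pcov[1, 1])
    return alpha, alpha_err
```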

In [ ]:

```
#Select the data at the ~162 day epoch and print the spectral index + uncertainty
sel_data = data[data['delta_t'] == 162.89]
alpha, alpha_err = alpha_calc(sel_data)
print("alpha = %.1f+/-%.1f"%(alpha, alpha_err))
```

Answers to the questions above:

Based on the $\alpha$ you calculated above, what happens if you assume that *all* of the observations can be fit using the same frequency power-law? i.e., if $\alpha$ were the same at all times? If this is the case then scaling all of the data to a single frequency makes it easier to understand the temporal properties of the lightcurve as a function of only 1 variable, not 2.

Write a function to take the observed data and scale it to a specific frequency based on an estimated spectral index.

Don't forget to include uncertainties! You should add two columns to your data table called "scaled_flux" and "scaled_rms".

Questions:

- Can you think of reasons why the spectral index should stay the same? Why it should change?

Answer to the question above:

In [ ]:

```
def scale_data(data, alpha, alpha_err, ref_freq=3.0):
    #calculate a scaling factor for the flux density and uncertainty
    f_scale =
    rms_scale =
    #scale the flux and uncertainty - don't forget to add errors in quadrature
    scaled_flux =
    scaled_rms =
    #Add two new columns to the data table
    data['scaled_flux'] = scaled_flux
    data['scaled_rms'] = scaled_rms
    return data
```
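
One way to do the scaling is sketched below as an array function (an assumption about the intended error propagation: the fractional flux error and the error from the uncertain spectral index are treated as independent and added in quadrature; with the astropy table you would assign the results to `data['scaled_flux']` and `data['scaled_rms']`):

```python
import numpy as np

def scale_data_arrays(flux, rms, freq, alpha, alpha_err, ref_freq=3.0):
    # scale each measurement to ref_freq assuming S_nu ~ nu**alpha
    f_scale = (ref_freq/freq)**alpha
    scaled_flux = flux * f_scale
    # fractional errors: measurement rms, plus the alpha uncertainty
    # propagated through the scaling factor, added in quadrature
    frac_err = np.sqrt((rms/flux)**2 + (alpha_err*np.log(ref_freq/freq))**2)
    scaled_rms = scaled_flux * frac_err
    return scaled_flux, scaled_rms
```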

Modify your `plot_data()` and `make_plot()` functions to add a keyword parameter `scaled` that is by default `False`. If `scaled=True`, `plot_data()` should plot the scaled data instead of the observed data.

Scale the data to 3 GHz based on your estimated spectral index and associated uncertainty, then plot the scaled lightcurve.

In [ ]:

```
data = scale_data(data, -0.6,0.1)
make_plot(data, scaled=True)
```

We now want to characterise the radio lightcurve. You should be able to see that it initially rises according to a power law, peaks somewhere between 100 and 200 days post-merger and then declines according to a different power law.

However, when we published the first paper demonstrating evidence of a turnover we only had observations up to 200 days post-merger. We will now determine evidence for a turnover using that subset of data.

Create a new table called `tdata` with the data up to 200 days post-merger and plot the data using your `make_plot()` function.

In [2]:

```
data = load_data()
data = scale_data(data, -0.6,0.05)
# use nice Table functionality to select only a subset of the data
# instead of needing to use explicit indices (like where()) we can just
# index with a boolean array
tdata = data[data['delta_t'] < 200]
make_plot(tdata, scaled=True)
```

We can fit this data with a "smoothed broken power law", which combines two power laws with a smoothing parameter around the break point. One functional form of this is given by

$S_\nu(t) = S_{\rm \nu,peak} \left[ \left(\dfrac{t}{t_{\rm peak}}\right)^{-s\delta_1} + \left(\dfrac{t}{t_{\rm peak}}\right)^{-s\delta_2}\right]^{-1/s}$

Here the spectral index is still $\alpha$, but we've also introduced *temporal* power-law indices $\delta_1$ (before the break) and $\delta_2$ (after the break). $S_{\rm \nu,peak}$ is the flux density at peak, and $s$ controls the smoothness of the break.

Write a function `smooth_broken_power_law()` that outputs a smoothed broken power law that also scales based on frequency and spectral index.

In [ ]:

```
def smooth_broken_power_law(t, nu, S_peak, t_peak, delta_1, delta_2, alpha, log_s, nu0=3.0):
    return
```
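
A direct transcription of the formula above, with the frequency scaling folded in (a sketch; note that the fit parameter is $\log_{10} s$, so we convert back to $s$ inside the function):

```python
import numpy as np

def smooth_broken_power_law(t, nu, S_peak, t_peak, delta_1, delta_2, alpha, log_s, nu0=3.0):
    s = 10.0**log_s
    # smoothed broken power law in time...
    S_t = S_peak * ((t/t_peak)**(-s*delta_1) + (t/t_peak)**(-s*delta_2))**(-1.0/s)
    # ...scaled in frequency by the spectral index
    return (nu/nu0)**alpha * S_t
```

At $t = t_{\rm peak}$ and $\nu = \nu_0$ this returns $S_{\rm \nu,peak}\,2^{-1/s}$, which tends to $S_{\rm \nu,peak}$ as the break becomes sharp (large $s$).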

We now want to fit a smoothed broken power law to our data and see if there is evidence for a turnover in the radio lightcurve. In our paper we do this via a parameter grid-search to minimise the goodness-of-fit parameter $\chi^2$, i.e., compute $\chi^2(S_{\nu,\rm peak},t_{\rm peak},\delta_1,\delta_2,\alpha,s)$ for a 6-dimensional parameter grid. However, grid searches are slow and inefficient. It's better to concentrate your effort in the part of parameter space where the data "prefer" to go. We can do this using a slightly more complicated statistical technique.

Here we will perform an Markov Chain Monte Carlo (MCMC) maximum likelihood fit using the `emcee`

package, to determine lightcurve parameters and the spectral index of the source. First you will need to write 3 functions that define your Probability, Prior and Likelihood.

If you want more details about MCMC fitting etc., there are many great resources out there (start with the `emcee`

page above). But basically we can go back to Bayes' theorem:

where $P({\rm model} | {\rm data})$ is the probability of the model given the data, which is what we want to find out. The term $P({\rm model})$ is the *prior*: any information we have about what the model parameters could be (or could not be). The term $P({\rm data} | {\rm model})$ is the *likelihood*: how probable it would be to generate the data given the set of model parameters we have. If we can integrate this product over all possible model parameters, we end up with the distribution of how likely the model parameters are given the data, which is what we want. (Some of you will note that we have left out the denominator, often called the *evidence* for the data, but we are just treating it as a normalizing factor to be ignored.)

We will use uniform *priors* with:

- $\delta_1>0$ (since we require the lightcurve to initially rise)
- $0<t_{\rm peak}<300$ (since our data only covers the period up to 200 days)
- $s<100$

The parameters will be passed to the function via a tuple. Note that `emcee` wants the log of the prior. We don't care about the absolute scale, so if the prior values are OK we pass 0 (i.e., $\log 1$) and if they are not we pass `-np.inf` (i.e., $\log 0 = -\infty$).

In [ ]:

```
def lnprior(theta):
    S_peak, t_peak, delta_1, delta_2, alpha, log_s = theta
    if 0.0 < t_peak < 300.0 and delta_1 > 0.0 and log_s < 2:
        return 0.0
    else:
        return -np.inf
```

We will now write a likelihood function that takes the lightcurve parameters inside the tuple theta, along with the observed data. As before, we pass the log of the likelihood. The simple assumption is that the likelihood ${\cal L}$:

$$ {\cal L} \propto \frac{1}{\sigma^2}e^{-\chi^2/2}$$

So $\log {\cal L} = -\chi^2 / 2 - \log \sigma^2$ plus a constant (which we will ignore). We will also be explicit about defining our $\chi^2$:

$$ \chi^2(S_{\nu,\rm peak},t_{\rm peak},\delta_1,\delta_2,\alpha,s) = \sum_i \left(\frac{{\rm data}_i - {\rm model}_i(S_{\nu,\rm peak},t_{\rm peak},\delta_1,\delta_2,\alpha,s)}{\sigma_i}\right)^2$$

with $\sigma_i$ the uncertainty on each point, data$_i$ the data, and model$_i$ the model (a function of the parameters). Minimizing $\chi^2$ will maximize the likelihood (or the log likelihood).

In [ ]:

```
def lnlike(theta, t, nu, S, S_err):
    S_peak, t_peak, delta_1, delta_2, alpha, log_s = theta
    model = smooth_broken_power_law(t, nu, S_peak, t_peak, delta_1, delta_2, alpha, log_s)
    inv_sigma2 = 1.0/S_err**2
    return -0.5*(np.sum((S-model)**2*inv_sigma2 - 2*np.log(inv_sigma2)))
```

Now write a `lnprob()` function that combines the `lnlike()` and `lnprior()` functions you calculated above. Since we are taking the log, the product above becomes a sum:

In [ ]:

```
def lnprob(theta, t, nu, S, S_err):
    lp = lnprior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + lnlike(theta, t, nu, S, S_err)
```

We will now fit the data using the `emcee` package. The function `get_starting_pos()` provided below will set up an array of "walker" starting positions for given lightcurve parameters. Examine the lightcurve and estimate some reasonable values for these parameters and add them to the function.

Now write a function called `run_mcmc()` that will load the observed data, take the starting position and then run the emcee `EnsembleSampler`. Use a small number of iterations and walkers initially (100/20) to see how long the code takes to run on your laptop. Then increase both parameters so that the algorithm takes ~90 seconds to run.

In [ ]:

```
def get_starting_pos(nwalkers, ndim=6):
    S_peak =
    t_peak =
    delta_1 =
    delta_2 =
    alpha =
    log_s =
    pos = [np.asarray([S_peak, t_peak, delta_1, delta_2, alpha, log_s]) + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
    return pos
```
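
For reference, one set of illustrative starting guesses, read roughly off the scaled lightcurve (these numbers are assumptions, not the "right" answer; anything in the right ballpark will do, since the walkers explore outward from there):

```python
import numpy as np

def get_starting_pos(nwalkers, ndim=6):
    # rough by-eye estimates from the scaled 3 GHz lightcurve (illustrative)
    S_peak = 90.0    # flux density near the peak, in the data's units
    t_peak = 150.0   # days post-merger
    delta_1 = 0.8    # rising slope
    delta_2 = -2.0   # declining slope
    alpha = -0.6     # spectral index from the earlier fit
    log_s = 0.5      # mild smoothing
    pos = [np.asarray([S_peak, t_peak, delta_1, delta_2, alpha, log_s])
           + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
    return pos
```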

In [ ]:

```
def run_mcmc(data, niters=500, nthreads=1, nwalkers=200, ndim=6):
    t = data['delta_t']
    nu = data['frequency']
    S = data['flux']
    S_err = data['rms']
    pos = get_starting_pos(nwalkers, ndim=ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(t, nu, S, S_err), threads=nthreads)
    start = timer()
    sampler.run_mcmc(pos, niters, progress=True)
    end = timer()
    print("Computation time: %f s"%(end-start))
    return sampler
```

We now want to inspect our chain to see if our algorithm has converged to a reasonable solution. First, extract the chain from the sampler, and then write a function to make a figure showing how each walker moves around the parameter space. Your figure should have 6 subplots (1 for each dimension), iteration number on the x-axis and parameter value on the y-axis.

MCMC algorithms typically use a burn-in phase, where the sampler is moving towards the optimum solution and not yet accurately sampling the parameter space. Add a parameter `chain_cut` to your function that plots a vertical line at the end of the burn-in. Remember: we want *fuzzy caterpillars*.

In [ ]:

```
sampler = run_mcmc(tdata)
chain = sampler.chain
```

In [ ]:

```
def make_chain_plot(chain, chain_cut):
    niters = chain.shape[1]
    ndim = chain.shape[2]
    fig, axes = plt.subplots(ndim, 1, sharex=True)
    fig.set_size_inches(7, 20)
    param_names = ['$S_{{\\nu,\\rm peak}, 3\\,{\\rm GHz}}$', '$t_{{\\rm peak}}$', '$\\delta_1$', '$\\delta_2$', '$\\alpha$', '$\\log_{10}(s)$']
    for i, (ax, param_name) in enumerate(zip(axes, param_names)):
        #plot the chain for the given parameter
        ax.set_ylabel(param_name)
        ax.set_xlim(0, niters)
        ax.axvline(chain_cut, c='r', linestyle='--')

chain_cut =
make_chain_plot(chain, chain_cut)
```
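
The per-parameter plotting loop can be filled in along these lines (a sketch; each faint line is one walker, and the red dashed line marks the end of the burn-in):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

def make_chain_plot(chain, chain_cut, param_names=None):
    # chain has shape (nwalkers, niters, ndim)
    nwalkers, niters, ndim = chain.shape
    if param_names is None:
        param_names = ['param %d' % i for i in range(ndim)]
    fig, axes = plt.subplots(ndim, 1, sharex=True, figsize=(7, 20))
    for i, (ax, param_name) in enumerate(zip(axes, param_names)):
        for j in range(nwalkers):
            ax.plot(chain[j, :, i], c='k', alpha=0.1)  # one line per walker
        ax.set_ylabel(param_name)
        ax.set_xlim(0, niters)
        ax.axvline(chain_cut, c='r', linestyle='--')
    return fig

# usage on a fake (nwalkers, niters, ndim) chain
fake_chain = np.random.randn(10, 50, 6)
fig = make_chain_plot(fake_chain, chain_cut=20)
```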

Now that we know that our algorithm is converging, and we know how long the burn-in takes, we can begin to estimate parameters. The function below will make a corner plot from the good part of your chain (we are ignoring "thinning", which is needed in general). You might want to print some of the parameter values to make sure they make sense and are consistent with your corner plots.

In [ ]:

```
flat_samples = sampler.get_chain(discard=chain_cut, flat=True)
```

In [3]:

```
def make_corner_plot(flat_samples, savefile='corner.png'):
    param_names = ['$S_{{\\rm peak}, 3\\,{\\rm GHz}}$', '$t_{{\\rm peak}}$', '$\\delta_1$', '$\\delta_2$', '$\\alpha$', '$\\log_{10}(s)$']
    fig = corner.corner(flat_samples, labels=param_names, quantiles=[0.16, 0.5, 0.84], show_titles=True)
    plt.savefig(savefile)

make_corner_plot(flat_samples)
```

In [ ]:

```
def get_best_params(flat_samples):
    vals = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(flat_samples, [16, 50, 84], axis=0)))
    param_names = ['S_peak', 't_peak', 'delta_1', 'delta_2', 'alpha', 'log_s']
    param_dict = dict(zip(param_names, vals))
    return param_dict

best_params = get_best_params(flat_samples)
```

Write a function, `calc_chi2()`, that will calculate the $\chi^2$ for the fit. We will use this later to compare different lightcurve models.

In [ ]:

```
def calc_chi2(best_params, param_names, model, data, nu0=3.0):
    args = []
    for param in param_names:
        val = best_params[param][0]
        args.append(val)
    best_fit = model(data['delta_t'], nu0, *args)
    chi2 = np.sum((best_fit-data['scaled_flux'])**2/data['scaled_rms']**2)
    return chi2

param_names = ['S_peak', 't_peak', 'delta_1', 'delta_2', 'alpha', 'log_s']
chi2_best = calc_chi2(best_params, param_names, smooth_broken_power_law, tdata)
print(chi2_best)
```

We will now plot our best fit on top of the observational data.

Write a function `plot_model()` that takes in a function that calculates the model fit (in this case, our `smooth_broken_power_law()` function), the best-fit parameters, an array of times to evaluate the model at, and a matplotlib axis to plot it on. Modify your `make_plot()` function to take an optional argument for the model to plot. If the model is supplied, your `make_plot()` function should call your `plot_model()` function.

In [ ]:

```
def plot_model(model, params, tvals, ax):
    return

#collect the best-fit parameter values (param_names was defined above)
args = [best_params[param][0] for param in param_names]
plotting_data = scale_data(tdata, best_params['alpha'][0], np.max(best_params['alpha'][1:]))
make_plot(plotting_data, scaled=True, model=smooth_broken_power_law, params=args)
```

But how do we know that the lightcurve has definitely turned over?

We can perform a similar process as above to fit a standard power law to our data and then use an **F-test** to determine which model (turnover or no turnover) provides the better fit. We have provided a `power_law()` function that calculates a power-law fit to the data. Now write a series of functions to perform an MCMC fit using a standard power law: `lnprior_noturnover()`, `lnlike_noturnover()`, `lnprob_noturnover()`, `get_starting_pos_noturnover()`, `run_mcmc_noturnover()`.

In [ ]:

```
def power_law(t, nu, F0, delta_1, alpha, nu0=3.0):
    return (nu/nu0)**alpha * F0 * t**delta_1

def lnprior_noturnover(theta):
    F0, delta_1, alpha = theta
    if delta_1 > 0.0:
        return 0.0
    else:
        return -np.inf

def lnlike_noturnover(theta, t, nu, S, S_err):
    return lnlike_val

def lnprob_noturnover(theta, t, nu, S, S_err):
    lp = lnprior_noturnover(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + lnlike_noturnover(theta, t, nu, S, S_err)

def get_starting_pos_noturnover(nwalkers, ndim=3):
    F0 =
    delta_1 =
    alpha =
    pos = [np.asarray([F0, delta_1, alpha]) + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
    return pos

def run_mcmc_noturnover(data, niters=1000, nthreads=1, nwalkers=200, ndim=3):
    return sampler
```
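
The no-turnover likelihood mirrors the earlier `lnlike()`, just with the simpler model (a sketch):

```python
import numpy as np

def power_law(t, nu, F0, delta_1, alpha, nu0=3.0):
    return (nu/nu0)**alpha * F0 * t**delta_1

def lnlike_noturnover(theta, t, nu, S, S_err):
    F0, delta_1, alpha = theta
    model = power_law(t, nu, F0, delta_1, alpha)
    inv_sigma2 = 1.0/S_err**2
    # same Gaussian log-likelihood as lnlike(), with the power-law model
    return -0.5*np.sum((S - model)**2*inv_sigma2 - 2*np.log(inv_sigma2))
```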

In [ ]:

```
sampler_noturnover = run_mcmc_noturnover(tdata)
```

In [ ]:

```
def make_chain_plot_noturnover(chain, chain_cut):
    #as make_chain_plot() above, but with 3 parameters
    return

chain_noturnover = sampler_noturnover.chain
chain_cut_nt =
make_chain_plot_noturnover(chain_noturnover, chain_cut_nt)
```

Make a corner plot for the standard power law fit

In [ ]:

```
def make_corner_plot_noturnover(good_chain, savefile='corner_noturnover.png'):
    #as make_corner_plot() above, but with 3 parameters
    plt.savefig(savefile)

good_chain_nt = chain_noturnover[:, chain_cut_nt:, :]
make_corner_plot_noturnover(good_chain_nt)
```

How do the values of $\delta_1$ and $\alpha$ compare to the previous fit? How does the calculated radio spectral index compare to the spectral index determined from fitting data all the way from the radio to X-rays ($\alpha=-0.585\pm 0.005$; Margutti et al. 2018, ApJ, 856, L18)?

Now write a function to get the best parameters for the standard power law fit.

In [ ]:

```
def get_best_params_noturnover(chain):
    return param_dict

best_params_nt = get_best_params_noturnover(good_chain_nt)
```

In [ ]:

```
param_names_nt = ['F0', 'delta_1', 'alpha']
args_nt = []
for param in param_names_nt:
    val = best_params_nt[param][0]
    args_nt.append(val)

plotting_data_nt = scale_data(tdata, best_params_nt['alpha'][0], np.max(best_params_nt['alpha'][1:]))
make_plot(plotting_data_nt, scaled=True, model=power_law, params=args_nt)
```

In [ ]:

```
chi2_nt = calc_chi2(best_params_nt, ['F0', 'delta_1', 'alpha'], power_law, tdata)
print(chi2_nt)
```

In [ ]:

```
def calculate_ftest(chi2_t, p_t, chi2_nt, p_nt, n):
    return 1-pval

n =
p_t =
p_nt =
calculate_ftest(chi2_best, p_t, chi2_nt, p_nt, n)
```

If `1-pval` is very small, that suggests that the probability of getting such an F-test value by chance is very low, so the turnover fit is preferred. In this case, which model is preferred? With what confidence can we say that we prefer one model over the other?
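
One way to fill in `calculate_ftest()` is sketched below, assuming the standard F-test for nested models: the reduction in $\chi^2$ per extra parameter, compared against the reduced $\chi^2$ of the more complex model. Here `p_t` and `p_nt` are the numbers of free parameters (6 and 3 in our case) and `n` is the number of data points:

```python
from scipy.stats import f

def calculate_ftest(chi2_t, p_t, chi2_nt, p_nt, n):
    # F statistic for nested models: the turnover model has p_t parameters,
    # the no-turnover model has p_nt < p_t parameters
    F = ((chi2_nt - chi2_t)/(p_t - p_nt)) / (chi2_t/(n - p_t))
    pval = f.cdf(F, p_t - p_nt, n - p_t)
    return 1 - pval
```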

We're now going to use the full radio lightcurve to determine the best fitting parameters for the smooth broken power law. Load the full dataset and pass it to your previous `run_mcmc()` function.

In [ ]:

```
data = load_data()
full_sampler = run_mcmc(data)
full_chain = full_sampler.chain
```

Plot the chain and estimate the burn-in length

In [ ]:

```
full_chain_cut = 200
make_chain_plot(full_chain, full_chain_cut)
```

Make a corner plot

In [ ]:

```
full_flat_samples = full_sampler.get_chain(discard=full_chain_cut, flat=True)
make_corner_plot(full_flat_samples, savefile='corner_fulldata.png')
```

Calculate the best parameters and the $\chi^2$ for the best fit.

In [ ]:

```
```

The contributions of the physical (jet and cocoon) models (given in `data/jet_cocoon_contribution.txt`) can also be overplotted by passing `plot_models=True` to the `make_plot()` function.

In [ ]:

```
```

In [1]:

```
def plot_physical_models(ax, fname='data/jet_cocoon_contribution.txt'):
    return ax
```

In [ ]:

```
make_plot(plotting_data, scaled=True, model=smooth_broken_power_law, params=full_args, plot_models=True)
```

So we have now determined the final temporal/spectral behavior of the radio data. What does this tell us?

The fact that the spectral index stayed basically constant over time, and that it was the same spectrum from radio to X-ray wavelengths (across ~9 orders of magnitude in frequency), tells us that the emission is *optically-thin synchrotron emission* (Sari et al. 1998, ApJ, 497, L17), which we can use to make other inferences. We also know that with synchrotron emission you can get a change in the spectrum at high frequencies if the highest-energy electrons have cooled: we see no evidence for a cooling break.

The temporal behavior can be used to constrain the geometry of the emission. This is what we (and others) did in a series of papers:

- Hallinan et al. (2017)
- Alexander et al. (2017)
- Mooley et al. (2018)
- Margutti et al. (2018)
- Dobie et al. (2018)
- Resmi et al. (2018)
- Alexander et al. (2018)
- Hajela et al. (2019)
- Makhathini et al. (2020)

In general we can say that $\alpha$ relates to the intrinsic distribution of relativistic electrons as: $$\alpha = -\frac{p-1}{2}$$ where the electrons have a power-law distribution of energies with index $p$. Moreover, we can relate the same $p$ to the temporal index in the decay phase in a way that depends on the geometry:

- If the emission is dominated by a jet (collimated, relativistic), we expect $\delta_2=-p$ in the decay phase
- If it is more spherical, it should be $\delta_2 = \frac{3(1-p)}{4}$

(Sari et al. 1998 again).

Questions:

- What do you get for $p$ based on your value of $\alpha$?
- Which scenario, jet or sphere, do you find best fits in the decay phase?
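
As a quick numerical check (a sketch; the `alpha` value here is an illustrative stand-in for your fitted number):

```python
alpha = -0.6   # your fitted spectral index goes here (illustrative)
p = 1 - 2*alpha              # from alpha = -(p-1)/2
delta2_jet = -p              # jet prediction for the decay slope
delta2_sphere = 3*(1 - p)/4  # spherical prediction (Sari et al. 1998)
print("p =", p)
print("jet predicts delta_2 =", delta2_jet)
print("sphere predicts delta_2 =", delta2_sphere)
```

Compare these predicted decay slopes against your fitted $\delta_2$ to decide between the two scenarios.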

In [ ]:

```
```