This notebook provides a quick overview of the Bayesian inference tools in PyCBC. It is a combination of inference notebooks 1-6. For more details, see the separate tutorial notebooks.
Note that it's best to go through this tutorial in order as some sections may rely on earlier ones.
import sys
!{sys.executable} -m pip install pycbc ligo-common emcee==2.2.1 'numpy<=1.23.0' --no-cache-dir
A model in PyCBC Inference represents the problem you are trying to solve: it defines the likelihood function you want to explore and the parameters you are using. In this section, we'll walk through using models with PyCBC Inference and see how to create your own.
Let's see what models are available. All models are accessible via a dictionary in the models module. Each model has a unique name:
from pycbc.inference import models
for model_name in models.models:
    print(model_name)
test_eggbox
test_normal
test_rosenbrock
test_volcano
test_prior
gaussian_noise
marginalized_phase
marginalized_polarization
brute_parallel_gaussian_marginalize
single_template
relative
The models starting with test_ are analytic models: they have predefined likelihood functions given by standard distributions used for testing samplers. The other models are for gravitational-wave astronomy; they take in data and calculate a likelihood using an inner product between the data and a signal model. Currently, all of the gravitational-wave models in PyCBC assume that the data is stationary Gaussian noise in the absence of a signal. The models differ in the simplifying assumptions they make in order to speed up likelihood evaluation.
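As an aside, the models dictionary maps each of these names to the corresponding model class, so you can look a class up by name. A quick sketch (assuming the name-to-class mapping shown above):

# The models dict maps each name to its class; for example,
# 'test_normal' should map to the TestNormal class used below.
print(models.models['test_normal'])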
Below, we'll start with an analytic model to illustrate some basics of how PyCBC Inference works. We'll then use one of the simplified models to quickly estimate some parameters of GW170817.
The simplest case is a problem with a single parameter. We'll go through how to estimate this parameter using an analytic model. In this case, we'll use the normal distribution.
Create an instance of a pre-made model. This is an analytic model (i.e., no data is used) that we employ largely for testing the capabilities of different samplers. It creates a likelihood surface in one dimension (x) with zero mean and unit variance.
from pycbc.inference import models
from pycbc.distributions import Uniform
my_model = models.TestNormal(('x',), mean=0)
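Before handing the model to a sampler, we can evaluate its log likelihood at a point by hand. Here's a minimal sketch, assuming the update method and loglikelihood property that models inheriting from BaseModel provide:

# Set the model's current parameters, then read off the log likelihood.
my_model.update(x=0.5)
print(my_model.loglikelihood)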
We need to choose a sampler; in this case we'll use emcee. We provide the model we are using along with the number of walkers. Emcee is an 'ensemble' sampler: it consists of many points ('walkers') that traverse the space and help each other explore the likelihood surface.
from pycbc.inference import sampler
engine = sampler.EmceeEnsembleSampler(my_model, nwalkers=1000, nprocesses=8)
Before we start, we need to decide the initial positions of the walkers. In this case we choose them to be distributed randomly between -1 and 1, using the Uniform distribution class. These distribution classes commonly take the parameter name along with arguments that define the distribution itself (such as bounds and other distribution-specific shape parameters).
_ = engine.set_p0(prior=Uniform(x=(-1, 1)))
Run the mcmc for 200 iterations
engine.run_mcmc(200)
We can get the entire history of where the walkers have been by looking at the samples attribute. For each variable parameter, we get an array with dimensions nwalkers x niterations. This is the format for the emcee sampler; other samplers may have other formats for their parameter chains. For example, parallel-tempered samplers have an additional dimension representing the temperature.
import pylab, numpy
xchain = engine.samples['x']
The chain has two dimensions: the first axis is the walker and the second is the iteration. We'll plot the final position of each walker.
values = xchain[:,-1]
pylab.hist(values, density=True)
pylab.xlabel('x')
pylab.show()
print(values.mean(), numpy.var(values))
0.05259656538866139 0.9782071183162703
Try changing the mean of the analytic distribution. We've provided a random number below (no peeking!). What do you estimate for the mean of the distribution?
from numpy.random import uniform, seed
seed(0)
a_number = uniform(-100, 100)
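If you want to check your estimate, one approach (a sketch reusing the steps above) is to rerun the sampler with the hidden mean and look at the sample mean of the final walker positions:

# A sketch: repeat the earlier steps with the hidden mean.
my_model = models.TestNormal(('x',), mean=a_number)
engine = sampler.EmceeEnsembleSampler(my_model, nwalkers=1000, nprocesses=8)
# Start the walkers in a wide window covering the possible range of a_number.
_ = engine.set_p0(prior=Uniform(x=(-100, 100)))
engine.run_mcmc(200)
print(engine.samples['x'][:, -1].mean())  # compare to a_number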
You may have noticed that in the previous example we did not specify any prior. If you do not specify one, a flat prior (with no boundaries) is used. Let's now specify a prior. To do so, we will use the distributions package, which contains many pre-made PDFs that can be used to build a prior. In fact, we've already been using it to set the initial walker positions.
from pycbc.distributions import Uniform
Let's create a prior on our variable that is uniform over $[0, 10)$. The resulting posterior should be a truncated normal distribution.
prior_x = Uniform(x=(0, 10))
In order to provide the prior to the model, we need to wrap it in a JointDistribution. The JointDistribution provides a common API for the models to evaluate the prior at a given point. If we had a problem with multiple parameters, each with its own prior distribution, we would provide the different distributions to the JointDistribution; it takes the product over all the distributions, giving the model a single number for the prior pdf. The JointDistribution also allows you to place arbitrary constraints on parameters, although we will not cover that here.
from pycbc.distributions import JointDistribution
prior = JointDistribution(['x'], prior_x)
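As an aside, you can use the JointDistribution directly. For example, here is a quick sketch drawing random samples from the prior with its rvs method (this is also what the sampler does by default to set initial positions):

# Draw a few random samples from the joint prior.
print(prior.rvs(size=4))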
# Prior is a standard keyword that models which inherit from BaseModel
# can take.
my_model = models.TestNormal('x', prior=prior)
engine = sampler.EmceeEnsembleSampler(my_model, nwalkers=1000, nprocesses=8)
# Note that we do not need to provide anything to `set_p0` to set the initial positions
# this time. By default, the sampler will draw from the prior in this case.
engine.set_p0()
engine.run_mcmc(400)
Since the TestNormal model's likelihood has zero mean and unit variance, forcing $x$ to only take positive values via the prior means that our posterior should be a $\chi$ distribution with 1 degree of freedom. Let's check:
import pylab, numpy
xchain = engine.samples['x']
values = xchain[:,-1]
pylab.figure()
pylab.hist(values, density=True, label='measured')
from scipy import stats
xpts = numpy.linspace(0, 3, num=100)
y = stats.chi.pdf(xpts, 1.)
pylab.plot(xpts, y, label=r'$\chi$ dist. with 1 d.o.f.')
pylab.xlabel('x')
pylab.legend()
Try changing the prior for this problem. How does it change the resulting distribution?
Now that we have some experience with models in PyCBC Inference, let's take a look at some of the existing models targeted at gravitational-wave data analysis. We'll start with the SingleTemplate model. This model is useful when we know the intrinsic parameters of a source (i.e., the component masses and spins), but not the extrinsic parameters (i.e., sky location, distance, and binary orientation). It will allow us to estimate the distance to GW170817 and the inclination of its orbital plane relative to our line of sight.
This model requires a specific set of data products. We will use PyCBC's gravitational-wave signal processing tools to prepare them.
from pycbc.catalog import Merger
from pycbc.psd import interpolate, inverse_spectrum_truncation
from pycbc.frame import read_frame
from pycbc.filter import highpass, resample_to_delta_t
from astropy.utils.data import download_file
m = Merger("GW170817")
# List of observatories we'll analyze
ifos = ['H1', 'V1', 'L1']
# we'll keep track of the filename locations as we'll need them later
data_filenames = {}
# The single template waveform model needs these data products
psds = {}
data = {}
for ifo in ifos:
    print("Processing {} data".format(ifo))

    # Download the gravitational wave data for GW170817
    url = "https://dcc.ligo.org/public/0146/P1700349/001/{}-{}1_LOSC_CLN_4_V1-1187007040-2048.gwf"
    fname = download_file(url.format(ifo[0], ifo[0]), cache=True)
    data_filenames[ifo] = fname

    # Read the gravitational wave data and do some minimal
    # conditioning of the data.
    ts = read_frame(fname, "{}:LOSC-STRAIN".format(ifo),
                    start_time=int(m.time - 260),
                    end_time=int(m.time + 40))
    ts = highpass(ts, 15.0)  # Remove low frequency content
    ts = resample_to_delta_t(ts, 1.0/2048)  # Resample data to 2048 Hz
    ts = ts.time_slice(m.time - 112, m.time + 16)  # Limit to times around the signal
    data[ifo] = ts.to_frequencyseries()  # Convert to a frequency series by taking the data's FFT

    # Estimate the power spectral density of the data
    psd = interpolate(ts.psd(4), ts.delta_f)
    psd = inverse_spectrum_truncation(psd, int(4 * psd.sample_rate),
                                      trunc_method='hann',
                                      low_frequency_cutoff=20.0)
    psds[ifo] = psd
Processing H1 data
Processing V1 data
Processing L1 data
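It's a useful sanity check to plot the estimated PSDs before running the model. A minimal sketch (the PSDs are FrequencySeries objects, so they carry their own sample_frequencies):

import pylab
# Plot each detector's amplitude spectral density on a log-log scale.
for ifo in ifos:
    pylab.loglog(psds[ifo].sample_frequencies, psds[ifo]**0.5, label=ifo)
pylab.xlim(20, 1024)
pylab.xlabel('Frequency (Hz)')
pylab.ylabel('Strain ASD (Hz$^{-1/2}$)')
pylab.legend()
pylab.show()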
A number of parameters must be provided as 'static' parameters. These include the component masses (mass1, mass2), the waveform approximant, and the lower frequency cutoff (f_lower). If the waveform model supports other intrinsic parameters (such as component spins), they may also optionally be provided.
There is also a fixed set of 'variable' parameters; these are the only ones we can obtain estimates of with this model: the sky location (ra, dec), polarization, coalescence time (tc), distance, and inclination.
It's important to note that anything which could be a variable parameter can be turned into a static parameter by supplying a specific value for it. We take advantage of this below to limit our analysis to sampling over only 'distance', 'inclination', and 'tc'. We set the sky location to that of NGC 4993, the galaxy where an electromagnetic counterpart to GW170817 was observed.
from pycbc.inference import models, sampler
from pycbc.distributions import Uniform, JointDistribution, SinAngle
import numpy
static = {'mass1':1.3757,
'mass2':1.3757,
'f_lower':25.0,
'approximant':"TaylorF2",
'polarization':0,
'ra': 3.44615914,
'dec': -0.40808407
}
variable = ('distance',
'inclination',
'tc')
inclination_prior = SinAngle(inclination=None)
distance_prior = Uniform(distance=(10, 100))
tc_prior = Uniform(tc=(m.time-0.1, m.time+0.1))
prior = JointDistribution(
variable, inclination_prior, distance_prior, tc_prior)
We are now ready to create our SingleTemplate model instance. Note how the variable and static parameters are passed to the model; this is a common pattern for the built-in PyCBC Inference models.
Notice that we are no longer using the Emcee sampler. While Emcee is sufficient for many problems, EmceePT, a parallel-tempered version of Emcee, is more effective for most gravitational-wave data analysis problems. There is one additional parameter we need to give to EmceePT: the number of temperatures. The output of this sampler will thus be 3-dimensional (temps x walkers x iterations). The 'coldest' temperature (0) contains our actual results.
import copy
model = models.SingleTemplate(variable, copy.deepcopy(data),
low_frequency_cutoff={'H1':25, 'L1':25, 'V1':25},
psds = psds,
static_params = static,
prior = prior,
sample_rate = 8192,
)
smpl = sampler.EmceePTSampler(model, 3, 200, nprocesses=8)
_ = smpl.set_p0()  # With no arguments, this draws the initial points from the model's prior!
# Note it may take ~1-3 minutes for this to run
smpl.run_mcmc(200)
In addition to the sampled parameters, we can also get the likelihood values our model produces. We don't go into it here, but it is also possible for models to make arbitrary auxiliary information about each sample available.
lik = smpl.model_stats['loglikelihood']
s = smpl.samples
# Note how we have to access the arrays differently than before, since there is an additional dimension.
# The zeroth element of that dimension represents the 'coldest' and is the one we want for our results.
# The other temperatures represent a modified form of the likelihood that allows walkers to traverse
# the space more freely.
pylab.scatter(s['distance'][0,:,-1],
s['inclination'][0,:,-1],
c=lik[0,:,-1])
pylab.xlabel('Distance (Mpc)')
pylab.ylabel('Inclination (Radians)')
c = pylab.colorbar()
c.set_label('Loglikelihood')
pylab.show()
_ = pylab.hist(s['distance'][0,:,-100::20].flatten(), bins=30)
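We can also quote a simple credible interval from these samples. For example, here's a sketch of a 90% interval on the distance using numpy percentiles, applied to the same thinned samples as the histogram above:

# 90% credible interval from the 5th/95th percentiles of the thinned
# distance samples from the coldest temperature chain.
dist_samples = s['distance'][0, :, -100::20].flatten()
low, med, high = numpy.percentile(dist_samples, [5, 50, 95])
print('distance = {:.1f} +{:.1f} / -{:.1f} Mpc'.format(med, high - med, med - low))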
We can see how our ensemble of walkers evolves with time using the animation utilities of matplotlib. We haven't covered the concept of "burn-in" in this tutorial; however, if you watch the animation, you can see the point at which the distribution is effectively burned in.
%matplotlib inline
from matplotlib import animation
import pylab
# We'll plot the initial position of the walkers
fig = pylab.figure(10)
a = pylab.scatter(s['distance'][0,:,0],
s['inclination'][0,:,0],
c=lik[0,:,0])
pylab.xlabel('Distance (Megaparsecs)')
pylab.ylabel('Inclination (radians)')
c = pylab.colorbar()
c.set_label('Loglikelihood')
# This function will update the plot with the ith iteration of our mcmc chain.
def animate(i):
    dat = numpy.array([s['distance'][0,:,i], s['inclination'][0,:,i]])
    a.set_offsets(dat.T)
    a.set_array(lik[0,:,i])
    return (a,)
nsamples = len(s['distance'][0,0,:])
ani = animation.FuncAnimation(fig, animate, frames=nsamples,
interval=200, blit=True)
from matplotlib.animation import PillowWriter
from IPython.display import Image
# Note to get this to play, you may need to right click on the image and
# download to your computer or open the image in a new tab of your browser
ani.save('move.gif', writer=PillowWriter(fps=5))
with open('move.gif', 'rb') as f:
    display(Image(data=f.read(), format='png'))
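Depending on your Jupyter setup, you may be able to display the animation inline without saving a GIF. One possible alternative (a sketch using matplotlib's JavaScript representation):

from IPython.display import HTML
# Render the animation as interactive JavaScript/HTML; this can be slow
# for long chains, since every frame is embedded in the page.
HTML(ani.to_jshtml())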
The pycbc_inference command-line script
The examples above illustrated how the models and samplers are used in the inference module. While this is useful for quick checks, it's not feasible to do large-scale parameter estimation problems in a Jupyter notebook. Instead, when actually doing parameter estimation, we use the command-line script pycbc_inference. This pulls together the sampler, models, and distributions modules so that we may perform Bayesian inference.
The key arguments that pycbc_inference takes are an output file and one or more config files (run pycbc_inference --help to see all of the arguments). The config files entirely specify the problem you will be analyzing: in them, you specify the model to use, the variable parameters, and the prior on each parameter. If the model involves gravitational-wave data, you also specify what data to load and how to estimate the PSD. To see how this works, let's repeat the above analysis, in which we estimated the distance and inclination of GW170817 using the SingleTemplate model.
Note: in order to do everything from within the notebook, we'll create the config files by echoing a Python string to a file. Normally, you would just use your favorite text editor to create the config files.
model_config = """
[model]
name = single_template
low-frequency-cutoff = 25.
"""
!echo '{model_config}' > model.ini
!cat model.ini
[model]
name = single_template
low-frequency-cutoff = 25.
Since this model uses data, we also need to create a data section that specifies what data to read for the analysis and the PSD estimation. See the PyCBC Inference docs for more information about what options are available and what they mean.
data_config = """
[data]
instruments = H1 L1 V1
trigger-time = {event_tc}
analysis-start-time = -260
analysis-end-time = 40
psd-estimation = median
psd-start-time = -260
psd-end-time = 40
psd-segment-length = 4
psd-segment-stride = 2
psd-inverse-length = 4
strain-high-pass = 15
pad-data = 8
sample-rate = 2048
frame-files = H1:{h1file} L1:{l1file} V1:{v1file}
channel-name = H1:LOSC-STRAIN L1:LOSC-STRAIN V1:LOSC-STRAIN
""".format(event_tc=Merger("GW170817").time,
h1file=data_filenames['H1'],
l1file=data_filenames['L1'],
v1file=data_filenames['V1'])
!echo '{data_config}' > data.ini
!cat data.ini
[data]
instruments = H1 L1 V1
trigger-time = 1187008882.4
analysis-start-time = -260
analysis-end-time = 40
psd-estimation = median
psd-start-time = -260
psd-end-time = 40
psd-segment-length = 4
psd-segment-stride = 2
psd-inverse-length = 4
strain-high-pass = 15
pad-data = 8
sample-rate = 2048
frame-files = H1:/Users/cdcapano/.astropy/cache/download/url/c484b43c2c76aaf6fca1a057f4d6ed52/contents L1:/Users/cdcapano/.astropy/cache/download/url/9bb63179cc5cca254380aeb2ee25fd0d/contents V1:/Users/cdcapano/.astropy/cache/download/url/51cfa540d575027e191b029fedbda5db/contents
channel-name = H1:LOSC-STRAIN L1:LOSC-STRAIN V1:LOSC-STRAIN
Now let's specify the prior. We need to provide a section that lists the variable parameters and another that specifies the static parameters. For every variable parameter, we have to provide one or more prior section(s) that specify the prior distribution.
prior_config = """
[variable_params]
distance =
inclination =
delta_tc =
[static_params]
mass1 = 1.3757
mass2 = 1.3757
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
[prior-distance]
name = uniform
min-distance = 10
max-distance = 100
[prior-inclination]
name = sin_angle
[prior-delta_tc]
name = uniform
min-delta_tc = -0.1
max-delta_tc = 0.1
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
"""
!echo '{prior_config}' > prior.ini
!cat prior.ini
[variable_params]
distance =
inclination =
delta_tc =
[static_params]
mass1 = 1.3757
mass2 = 1.3757
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
[prior-distance]
name = uniform
min-distance = 10
max-distance = 100
[prior-inclination]
name = sin_angle
[prior-delta_tc]
name = uniform
min-delta_tc = -0.1
max-delta_tc = 0.1
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
But wait! There's something odd here: the list of variable parameters contains a delta_tc, whereas we need to provide a tc for the coalescence time. This illustrates that the variable_params can be anything we like, with any name (we could have called delta_tc foobar if we liked); they do not need to be predefined anywhere in the code. However, the waveform models do need specific parameters in order to generate a template waveform. If we use a parameter that is not recognized by the waveform model (see here for a list of known parameters), then we must provide a waveform_transforms section that tells the code how to convert that parameter into something the waveform model recognizes. In this case, delta_tc is not recognized by the waveform model, but tc is. So, we've provided a section that converts delta_tc into tc by adding the trigger time. Also note that when providing the trigger time we used ${data|trigger-time}: we can use variable substitution in config files. Here, we've told the config file to get the trigger time stored in the [data] section. This illustrates why we've chosen to make delta_tc the variable parameter: we do not need to copy and paste hard-to-read GPS times all over the place. Instead, we provide the time in one location, then refer to it elsewhere. This makes it clear and easy to specify the tc prior, which is just a uniform window of +/- 0.1 s around the estimated time. Another advantage is that we could analyze other events using the same prior file, just swapping out the data files.
We will see some other, more advanced uses of waveform transforms below.
Finally, we need to specify the sampler settings. To do that, we provide a sampler section.
sampler_config = """
[sampler]
name = emcee_pt
ntemps = 3
nwalkers = 200
niterations = 400
max-samples-per-chain = 1000
[sampler-burn_in]
burn-in-test = halfchain
"""
!echo '{sampler_config}' > sampler.ini
!cat sampler.ini
[sampler]
name = emcee_pt
ntemps = 3
nwalkers = 200
niterations = 400
max-samples-per-chain = 1000
[sampler-burn_in]
burn-in-test = halfchain
Note that we've also provided a burn-in test. Here, we're telling it to simply use the second half of the chain: at the end of the run, the second half of the chains is considered post-burn-in. An autocorrelation time is computed over that segment and a posterior extracted. More sophisticated burn-in tests are available; see the pycbc.inference.burn_in module for more details.
Running pycbc_inference
Now that we've set up our config files, we can run pycbc_inference:
!pycbc_inference --verbose \
--config-files model.ini data.ini prior.ini sampler.ini \
--output-file inference.hdf \
--seed 28572013 \
--nprocesses 8 \
--force
2020-11-17 18:10:12,748 Using seed 28572013 2020-11-17 18:10:12,749 Running with CPU support: 1 threads 2020-11-17 18:10:12,749 Reading configuration file 2020-11-17 18:10:12,750 Setting up model 2020-11-17 18:10:12,752 Setting up priors for each parameter 2020-11-17 18:10:12,752 No sampling_params section read from config file 2020-11-17 18:10:12,752 Loading waveform transforms 2020-11-17 18:10:12,755 Determining analysis times to use 2020-11-17 18:10:12,755 Padding H1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:10:12,755 Padding L1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:10:12,755 Padding V1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:10:12,755 Reading Frames 2020-11-17 18:10:16,401 Highpass Filtering 2020-11-17 18:10:16,452 Converting to float64 2020-11-17 18:10:16,455 Resampling data 2020-11-17 18:10:16,601 Highpass Filtering 2020-11-17 18:10:16,630 Remove Padding 2020-11-17 18:10:16,631 Reading Frames 2020-11-17 18:10:20,465 Highpass Filtering 2020-11-17 18:10:20,514 Converting to float64 2020-11-17 18:10:20,516 Resampling data 2020-11-17 18:10:20,623 Highpass Filtering 2020-11-17 18:10:20,651 Remove Padding 2020-11-17 18:10:20,651 Reading Frames 2020-11-17 18:10:24,466 Highpass Filtering 2020-11-17 18:10:24,519 Converting to float64 2020-11-17 18:10:24,521 Resampling data 2020-11-17 18:10:24,636 Highpass Filtering 2020-11-17 18:10:24,660 Remove Padding 2020-11-17 18:10:24,662 Will generate a different time series for PSD estimation 2020-11-17 18:10:24,662 Reading Frames 2020-11-17 18:10:28,251 Highpass Filtering 2020-11-17 18:10:28,299 Converting to float64 2020-11-17 18:10:28,304 Resampling data 2020-11-17 18:10:28,471 Highpass Filtering 2020-11-17 18:10:28,499 Remove Padding 2020-11-17 18:10:28,499 Reading Frames 2020-11-17 18:10:32,286 Highpass Filtering 2020-11-17 18:10:32,332 Converting to float64 2020-11-17 18:10:32,335 Resampling data 2020-11-17 18:10:32,497 Highpass Filtering 2020-11-17 18:10:32,522 Remove Padding 2020-11-17 18:10:32,523 Reading Frames 2020-11-17 18:10:36,316 Highpass Filtering 2020-11-17 18:10:36,362 Converting to float64 2020-11-17 18:10:36,365 Resampling data 2020-11-17 18:10:36,536 Highpass Filtering 2020-11-17 18:10:36,561 Remove Padding 2020-11-17 18:10:36,561 Applying gates to PSD data 2020-11-17 18:10:39,779 Setting up sampler 2020-11-17 18:10:39,827 Setting max samples per chain to 1000 2020-11-17 18:10:39,828 Looking for checkpoint file 2020-11-17 18:10:39,828 Checkpoint not found or not valid 2020-11-17 18:10:39,828 Creating file inference.hdf.checkpoint 2020-11-17 18:10:40,370 Running sampler for 0 to 400 iterations 2020-11-17 18:11:18,541 Writing samples to inference.hdf.checkpoint with thin interval 1 2020-11-17 18:11:18,572 Writing samples to inference.hdf.bkup with thin interval 1 2020-11-17 18:11:18,608 Updating burn in 2020-11-17 18:11:18,608 Evaluating halfchain burn-in test 2020-11-17 18:11:18,609 Is burned in: True 2020-11-17 18:11:18,610 Burn-in iteration: 200 2020-11-17 18:11:18,615 Computing autocorrelation time 2020-11-17 18:11:18,632 ACT: 29.0 2020-11-17 18:11:18,645 Validating checkpoint and backup files 2020-11-17 18:11:18,661 Clearing samples from memory 2020-11-17 18:11:18,661 Calculating log evidence 2020-11-17 18:11:18,665 log Z, dlog Z: -969883.4441190477, 0.7795871599810198 2020-11-17 
18:11:18,667 Moving checkpoint to output 2020-11-17 18:11:18,667 Deleting backup file 2020-11-17 18:11:18,682 Done
Now that it's done, we can see that an inference.hdf file exists in our directory:
!ls -lh inference.hdf
-rw-r--r-- 1 cdcapano staff 246M Nov 17 18:11 inference.hdf
This file contains all of the samples, the information needed to extract the posterior from them, and other diagnostic information about the sampler. Below, we show how to plot a posterior from it.
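You can also load the samples directly in Python via the pycbc.inference.io module. A minimal sketch, assuming the loadfile function and the file handler's read_samples method:

from pycbc.inference import io
# Open the output file with the appropriate handler and read the
# post-burn-in, thinned posterior samples.
fp = io.loadfile('inference.hdf', 'r')
posterior = fp.read_samples(['distance', 'inclination'])
print(len(posterior), posterior['distance'].mean())
fp.close()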
Plotting the posterior with pycbc_inference_plot_posterior
The command-line tool pycbc_inference_plot_posterior is used to make corner and posterior plots of our results. We can give it either the direct output of pycbc_inference (here, inference.hdf) or a posterior file created using pycbc_inference_extract_samples. There are several plotting configuration options available (see pycbc_inference_plot_posterior --help for the list of options); here we'll illustrate a few.
Let's create a corner plot of our results. We'll show every sample in our posterior colored by the signal-to-noise ratio (here defined as $\sqrt{2\mathcal{L}}$ where $\mathcal{L}$ is the log likelihood ratio).
!pycbc_inference_plot_posterior --verbose \
--input-file inference.hdf \
--output-file posterior.png \
--plot-scatter --plot-marginal --z-arg snr
2020-11-17 18:11:21,739 Reading input file inference.hdf 2020-11-17 18:11:21,740 Loading samples 2020-11-17 18:11:21,746 Loaded 1400 samples 2020-11-17 18:11:21,746 Getting samples for colorbar 2020-11-17 18:11:21,750 Plotting 2020-11-17 18:11:23,140 Done
Image('posterior.png', height=480)
By default, we see that it has plotted all of the variable_params. Let's just plot distance vs inclination instead; we can do that via the --parameters argument. Let's also make a density plot with 2D marginal contours rather than a scatter plot, by dropping the --plot-scatter and --z-arg options and adding --plot-density and --plot-contours instead.
!pycbc_inference_plot_posterior --verbose \
--input-file inference.hdf \
--output-file posterior-dist_inc.png \
--plot-density --plot-contours --plot-marginal \
--parameters inclination distance
2020-11-17 18:11:25,059 Reading input file inference.hdf 2020-11-17 18:11:25,060 Loading samples 2020-11-17 18:11:25,065 Loaded 1400 samples 2020-11-17 18:11:25,065 Plotting 2020-11-17 18:11:26,333 Done
Image('posterior-dist_inc.png', height=480)
The --parameters argument can be used to do more than turn certain parameters on or off. We can also apply math functions to parameters using standard Python syntax; any numpy function may be used, and functions from the pycbc.conversions, pycbc.coordinates, and pycbc.cosmology modules are also available. To see all of the parameters in our results file, along with all of the functions that may be applied to them, we can use the --file-help argument. This prints a message to the screen and exits:
!pycbc_inference_plot_posterior --input-file inference.hdf --file-help
Parameters available with this (these) input file(s): delta_tc distance inclination logjacobian loglikelihood logprior Available pycbc functions (see http://pycbc.org/pycbc/latest/html for more details): cartesian_to_spherical, cartesian_to_spherical_azimuthal, cartesian_to_spherical_polar, cartesian_to_spherical_rho, chi_a, chi_eff, chi_eff_from_spherical, chi_p, chi_p_from_spherical, chi_p_from_xi1_xi2, chi_perp_from_mass1_mass2_xi2, chi_perp_from_spinx_spiny, chirp_distance, cosmological_quantity_from_redshift, det_tc, distance_from_comoving_volume, dquadmon_from_lambda, eta_from_mass1_mass2, eta_from_q, eta_from_tau0_tau3, final_mass_from_f0_tau, final_mass_from_initial, final_spin_from_f0_tau, final_spin_from_initial, freq_from_final_mass_spin, invq_from_mass1_mass2, lambda_from_mass_tov_file, lambda_tilde, mass1_from_mass2_eta, mass1_from_mchirp_eta, mass1_from_mchirp_q, mass1_from_mtotal_eta, mass1_from_mtotal_q, mass1_from_tau0_tau3, mass2_from_mass1_eta, mass2_from_mchirp_eta, mass2_from_mchirp_mass1, mass2_from_mchirp_q, mass2_from_mtotal_eta, mass2_from_mtotal_q, mass2_from_tau0_tau3, mass_from_knownmass_eta, mchirp_from_mass1_mass2, mtotal_from_mass1_mass2, mtotal_from_mchirp_eta, mtotal_from_tau0_tau3, nltides_gw_phase_diff_isco, optimal_dec_from_detector, optimal_ra_from_detector, phi1_from_phi_a_phi_s, phi2_from_phi_a_phi_s, phi_a, phi_from_spinx_spiny, phi_s, primary_mass, primary_spin, primary_xi, q_from_mass1_mass2, redshift, redshift_from_comoving_volume, secondary_mass, secondary_spin, secondary_xi, snr_from_loglr, spherical_to_cartesian, spin1x_from_xi1_phi_a_phi_s, spin1y_from_xi1_phi_a_phi_s, spin1z_from_mass1_mass2_chi_eff_chi_a, spin2x_from_mass1_mass2_xi2_phi_a_phi_s, spin2y_from_mass1_mass2_xi2_phi_a_phi_s, spin2z_from_mass1_mass2_chi_eff_chi_a, spin_from_pulsar_freq, tau0_from_mass1_mass2, tau0_from_mtotal_eta, tau3_from_mass1_mass2, tau3_from_mtotal_eta, tau_from_final_mass_spin, xi1_from_spin1x_spin1y, xi2_from_mass1_mass2_spin2x_spin2y Available numpy functions: abs, absolute, add, arccos, arccosh, arcsin, arcsinh, arctan, arctan2, arctanh, bitwise_and, bitwise_not, bitwise_or, bitwise_xor, cbrt, ceil, conj, conjugate, copysign, cos, cosh, deg2rad, degrees, divide, divmod, equal, exp, exp2, expm1, fabs, float_power, floor, floor_divide, fmax, fmin, fmod, frexp, gcd, greater, greater_equal, heaviside, hypot, invert, isfinite, isinf, isnan, isnat, lcm, ldexp, left_shift, less, less_equal, log, log10, log1p, log2, logaddexp, logaddexp2, logical_and, logical_not, logical_or, logical_xor, matmul, maximum, minimum, mod, modf, multiply, negative, nextafter, not_equal, positive, power, rad2deg, radians, reciprocal, remainder, right_shift, rint, sign, signbit, sin, sinh, spacing, sqrt, square, subtract, tan, tanh, true_divide, trunc Recognized constants: e euler_gamma inf nan pi Python arthimetic (+ - * / // ** %), binary (&, |, etc.), and comparison (>, <, >=, etc.) operators may also be used. The following are additional command-line options that may be provided, along with the input files that understand them: usage: inference.hdf optional arguments: --thin-start THIN_START Sample number to start collecting samples. If none provided, will use the input file's `thin_start` attribute. --thin-interval THIN_INTERVAL Interval to use for thinning samples. If none provided, will use the input file's `thin_interval` attribute. --thin-end THIN_END Sample number to stop collecting samples. If none provided, will use the input file's `thin_end` attribute. 
--iteration ITERATION Only retrieve the given iteration. To load the last n-th sampe use -n, e.g., -1 will load the last iteration. This overrides the thin-start/interval/end options. --walkers WALKERS [WALKERS ...], --chains WALKERS [WALKERS ...] Only retrieve samples from the listed walkers/chains. Default is to retrieve from all walkers/chains. --temps TEMPS [TEMPS ...] Get the given temperatures. May provide either a sequence of integers specifying the temperatures to plot, or 'all' for all temperatures. Default is to only plot the coldest (= 0) temperature chain.
One of the available functions, redshift, takes the luminosity distance and converts it to redshift assuming a standard cosmology. Let's use that, along with some of the numpy functions, to plot redshift versus inclination angle in degrees. Let's also add a line at the redshift of NGC 4993 to see how well our results agree; we'll do that using the --expected-parameters argument.
!pycbc_inference_plot_posterior --verbose \
--input-file inference.hdf \
--output-file posterior-redshift_inc.png \
--plot-density --plot-contours --plot-marginal \
--parameters 'inclination*180/pi:$\iota$ (deg)' 'redshift(distance):redshift' \
--expected-parameters 'redshift(distance):0.009727'
2020-11-17 18:11:29,925 Reading input file inference.hdf 2020-11-17 18:11:29,926 Loading samples 2020-11-17 18:11:29,930 Loaded 1400 samples 2020-11-17 18:11:30,610 Plotting 2020-11-17 18:11:31,901 Done
Image('posterior-redshift_inc.png', height=480)
The functions available to the --parameters argument of pycbc_inference_plot_posterior are actually used throughout the inference module (if you're curious, this is made possible via FieldArrays, which are a custom wrapping of numpy record arrays; see the inference_5_results_io notebook for more details). Of particular note, you can use the same set of functions in the waveform_transforms section(s) of the config file. This allows you to carry out more complicated inference without needing to modify the PyCBC source code.
To illustrate this, let's modify the prior in the above problem to use a prior uniform in comoving volume rather than a prior uniform in distance.
First, we'll need boundaries for our comoving volume prior. Let's use the cosmology module to get the comoving volume corresponding to a luminosity distance of 10 and 100 Mpc:
from pycbc import cosmology
vmin, vmax = cosmology.cosmological_quantity_from_redshift(cosmology.redshift([10., 100.]), 'comoving_volume')
print(vmin, vmax)
4160.561734509789 3921535.7960404437
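As a quick consistency check, we can convert these boundaries back to luminosity distances. This sketch uses the distance_from_comoving_volume function discussed below, which should recover approximately 10 and 100 Mpc:

# Converting the comoving-volume bounds back to luminosity distances.
print(cosmology.distance_from_comoving_volume(vmin),
      cosmology.distance_from_comoving_volume(vmax))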
Now let's modify our prior. We'll swap distance for comoving_volume. Since the waveforms require a luminosity distance, we'll provide a waveform transform that uses the distance_from_comoving_volume function in the cosmology module:
prior_config = """
[variable_params]
comoving_volume =
inclination =
delta_tc =
[static_params]
mass1 = 1.3757
mass2 = 1.3757
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
[prior-comoving_volume]
name = uniform
min-comoving_volume = 4160
max-comoving_volume = 3921536
[waveform_transforms-distance]
name = custom
inputs = comoving_volume
distance = distance_from_comoving_volume(comoving_volume)
[prior-inclination]
name = sin_angle
[prior-delta_tc]
name = uniform
min-delta_tc = -0.1
max-delta_tc = 0.1
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
"""
!echo '{prior_config}' > prior.ini
!cat prior.ini
[variable_params]
comoving_volume =
inclination =
delta_tc =
[static_params]
mass1 = 1.3757
mass2 = 1.3757
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
[prior-comoving_volume]
name = uniform
min-comoving_volume = 4160
max-comoving_volume = 3921536
[waveform_transforms-distance]
name = custom
inputs = comoving_volume
distance = distance_from_comoving_volume(comoving_volume)
[prior-inclination]
name = sin_angle
[prior-delta_tc]
name = uniform
min-delta_tc = -0.1
max-delta_tc = 0.1
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
Challenge: the posterior plot will show comoving_volume if you do not provide a --parameters argument. Add the parameters argument to plot distance instead. Do the same for redshift.
Challenge: add ra and dec to the [variable_params]. What prior should you use? Take a look at the list of available distributions.
In the previous example we quickly estimated the extrinsic parameters of GW170817 by fixing the intrinsic ones. This was fast, since it did not require regenerating a waveform for each likelihood evaluation. Now let's try estimating the masses of GW170817. This is generally slower, since it requires generating a waveform on each likelihood call. However, in the example below we'll use the relative binning model, which uses a technique presented in Zackay et al. In short, a reference waveform close to the peak likelihood is chosen, and a linear approximation is used to interpolate the likelihood around it. This reduces the number of frequency points that we need to evaluate, speeding up the analysis.
To do this, we'll use the same data.ini and sampler.ini files as above. We'll change the model to the relative one and provide the arguments needed to generate the fiducial waveform. We'll make the chirp mass (mchirp) and symmetric mass ratio (eta) the variable parameters; for speed, we'll fix the extrinsic parameters. We'll use our previous results from the SingleTemplate model to get the maximum likelihood values of the distance, inclination, and coalescence time. To do that, we'll use pycbc_inference_table_summary to print out a table of the values.
!pycbc_inference_table_summary \
--input-file inference.hdf \
--output-file posterior_summary.html \
--verbose
2020-11-17 18:11:34,725 Reading input file inference.hdf 2020-11-17 18:11:34,727 Loading samples 2020-11-17 18:11:34,734 Loaded 1400 samples
from IPython.display import HTML
HTML('posterior_summary.html')
Parameter | 90% Credible Interval | Maximum Posterior | Maximum Likelihood
--- | --- | --- | ---
$\iota$ | $2.53^{+0.41}_{-0.43}$ | $2.31$ | $2.80$
$d_L$ (Mpc) | $41^{+7.6}_{-10.8}$ | $35.2$ | $46.3$
$\Delta t_c~(\rm{s})$ | $0.028382^{+0.000098}_{-0.000092}$ | $0.028380$ | $0.028383$
model_config = """
[model]
name = relative
low-frequency-cutoff = 25.0
high-frequency-cutoff = 1024.0
epsilon = 0.03
mass1_ref = 1.3757
mass2_ref = 1.3757
tc_ref = ${data|trigger-time}
"""
!echo '{model_config}' > model.ini
!cat model.ini
[model]
name = relative
low-frequency-cutoff = 25.0
high-frequency-cutoff = 1024.0
epsilon = 0.03
mass1_ref = 1.3757
mass2_ref = 1.3757
tc_ref = ${data|trigger-time}
prior_config = """
[variable_params]
mchirp =
eta =
[static_params]
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
distance = 48
inclination = 3.0
delta_tc = 0.028365
[prior-mchirp]
name = uniform
min-mchirp = 1.1876
max-mchirp = 1.2076
[prior-eta]
name = uniform
min-eta = 0.23
max-eta = 0.25
[waveform_transforms-mass1+mass2]
; transform from mchirp, eta to mass1, mass2 for waveform generation
name = mchirp_eta_to_mass1_mass2
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
"""
!echo '{prior_config}' > prior.ini
!cat prior.ini
[variable_params]
mchirp =
eta =
[static_params]
f_lower = 25.0
approximant = TaylorF2
polarization = 0
ra = 3.44615914
dec = -0.40808407
distance = 48
inclination = 3.0
delta_tc = 0.028365
[prior-mchirp]
name = uniform
min-mchirp = 1.1876
max-mchirp = 1.2076
[prior-eta]
name = uniform
min-eta = 0.23
max-eta = 0.25
[waveform_transforms-mass1+mass2]
; transform from mchirp, eta to mass1, mass2 for waveform generation
name = mchirp_eta_to_mass1_mass2
[waveform_transforms-tc]
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
Note: this may take several minutes to run.
!pycbc_inference --verbose \
--config-files model.ini data.ini prior.ini sampler.ini \
--output-file inference-masses.hdf \
--seed 3214897 \
--nprocesses 8 \
--force
2020-11-17 18:11:36,855 Using seed 3214897 2020-11-17 18:11:36,856 Running with CPU support: 1 threads 2020-11-17 18:11:36,856 Reading configuration file 2020-11-17 18:11:36,858 Setting up model 2020-11-17 18:11:36,861 Setting up priors for each parameter 2020-11-17 18:11:36,861 No sampling_params section read from config file 2020-11-17 18:11:36,861 Loading waveform transforms 2020-11-17 18:11:36,865 Determining analysis times to use 2020-11-17 18:11:36,865 Padding H1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:11:36,865 Padding L1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:11:36,865 Padding V1 analysis start and end times by 2 (= psd-inverse-length/2) seconds to account for PSD wrap around effects. 2020-11-17 18:11:36,865 Reading Frames 2020-11-17 18:11:40,494 Highpass Filtering 2020-11-17 18:11:40,544 Converting to float64 2020-11-17 18:11:40,546 Resampling data 2020-11-17 18:11:40,671 Highpass Filtering 2020-11-17 18:11:40,699 Remove Padding 2020-11-17 18:11:40,700 Reading Frames 2020-11-17 18:11:44,462 Highpass Filtering 2020-11-17 18:11:44,509 Converting to float64 2020-11-17 18:11:44,511 Resampling data 2020-11-17 18:11:44,612 Highpass Filtering 2020-11-17 18:11:44,638 Remove Padding 2020-11-17 18:11:44,638 Reading Frames 2020-11-17 18:11:48,428 Highpass Filtering 2020-11-17 18:11:48,477 Converting to float64 2020-11-17 18:11:48,480 Resampling data 2020-11-17 18:11:48,589 Highpass Filtering 2020-11-17 18:11:48,615 Remove Padding 2020-11-17 18:11:48,616 Will generate a different time series for PSD estimation 2020-11-17 18:11:48,617 Reading Frames 2020-11-17 18:11:52,180 Highpass Filtering 2020-11-17 18:11:52,228 Converting to float64 2020-11-17 18:11:52,231 Resampling data 2020-11-17 18:11:52,383 Highpass Filtering 2020-11-17 18:11:52,410 Remove Padding 2020-11-17 18:11:52,410 Reading Frames 2020-11-17 18:11:56,085 Highpass Filtering 2020-11-17 18:11:56,133 Converting to float64 2020-11-17 18:11:56,135 Resampling data 2020-11-17 18:11:56,284 Highpass Filtering 2020-11-17 18:11:56,309 Remove Padding 2020-11-17 18:11:56,309 Reading Frames 2020-11-17 18:12:00,026 Highpass Filtering 2020-11-17 18:12:00,074 Converting to float64 2020-11-17 18:12:00,076 Resampling data 2020-11-17 18:12:00,223 Highpass Filtering 2020-11-17 18:12:00,247 Remove Padding 2020-11-17 18:12:00,248 Applying gates to PSD data 2020-11-17 18:12:01,537 Generating fiducial waveform from 25.0 to 1024.0 Hz 2020-11-17 18:12:02,317 Computing frequency bins 2020-11-17 18:12:02,318 Using powerlaw indices: [-1.66666667 -0.66666667 1. 
1.66666667 2.33333333] 2020-11-17 18:12:02,888 Using 1048 bins for this model 2020-11-17 18:12:02,917 Calculating summary data at frequency resolution 0.003289473684210526 Hz 2020-11-17 18:12:03,037 Setting up sampler 2020-11-17 18:12:03,069 Setting max samples per chain to 1000 2020-11-17 18:12:03,069 Looking for checkpoint file 2020-11-17 18:12:03,070 Checkpoint not found or not valid 2020-11-17 18:12:03,070 Creating file inference-masses.hdf.checkpoint 2020-11-17 18:12:03,131 Running sampler for 0 to 400 iterations 2020-11-17 18:13:26,653 Writing samples to inference-masses.hdf.checkpoint with thin interval 1 2020-11-17 18:13:26,683 Writing samples to inference-masses.hdf.bkup with thin interval 1 2020-11-17 18:13:26,710 Updating burn in 2020-11-17 18:13:26,710 Evaluating halfchain burn-in test 2020-11-17 18:13:26,711 Is burned in: True 2020-11-17 18:13:26,712 Burn-in iteration: 200 2020-11-17 18:13:26,719 Computing autocorrelation time 2020-11-17 18:13:26,732 ACT: 24.0 2020-11-17 18:13:26,749 Validating checkpoint and backup files 2020-11-17 18:13:26,764 Clearing samples from memory 2020-11-17 18:13:26,764 Calculating log evidence 2020-11-17 18:13:26,770 log Z, dlog Z: -969869.4353768589, 0.658135580830276 2020-11-17 18:13:26,773 Moving checkpoint to output 2020-11-17 18:13:26,773 Deleting backup file 2020-11-17 18:13:26,776 Done
!pycbc_inference_plot_posterior --verbose \
--input-file inference-masses.hdf \
--output-file posterior.png \
--plot-scatter --plot-marginal --z-arg snr \
--parameters mchirp eta
2020-11-17 18:13:28,828 Reading input file inference-masses.hdf 2020-11-17 18:13:28,829 Loading samples 2020-11-17 18:13:28,834 Loaded 1800 samples 2020-11-17 18:13:28,834 Getting samples for colorbar 2020-11-17 18:13:28,839 Plotting 2020-11-17 18:13:29,171 Substituting symbol M from STIXNonUnicode 2020-11-17 18:13:29,244 Substituting symbol M from STIXNonUnicode 2020-11-17 18:13:29,450 Substituting symbol M from STIXNonUnicode 2020-11-17 18:13:29,493 Substituting symbol M from STIXNonUnicode 2020-11-17 18:13:30,058 Done
Image('posterior.png', height=480)
Challenge: make a posterior plot of the component masses. Hint: you can convert the sampled mchirp and eta to component masses with the mass1_from_mchirp_eta and mass2_from_mchirp_eta functions, then order them with the primary_mass and secondary_mass functions.
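For example, one possible command (a sketch; it assumes nested function calls in --parameters parse the same way as the single calls shown above):

!pycbc_inference_plot_posterior --verbose \
    --input-file inference-masses.hdf \
    --output-file posterior-m1m2.png \
    --plot-scatter --plot-marginal --z-arg snr \
    --parameters 'primary_mass(mass1_from_mchirp_eta(mchirp,eta),mass2_from_mchirp_eta(mchirp,eta)):$m_1$' \
                 'secondary_mass(mass1_from_mchirp_eta(mchirp,eta),mass2_from_mchirp_eta(mchirp,eta)):$m_2$'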