In [1]:
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')

In [2]:
%matplotlib notebook
import paramonte as pm
import numpy as np
import scipy as sp
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
print('\n'.join(f'{m.__name__} {m.__version__}' for m in globals().values() if getattr(m, '__version__', None)))
sns.set()

paramonte 2.3.0
numpy 1.19.2
scipy 1.5.2
pandas 1.1.3
seaborn 0.11.0
matplotlib 3.3.2


## Running ParaDRAM simulations in parallel on multiple processors¶

The ParaDRAM sampler can parallelize a simulation in either of two modes:

1. The single-chain parallelism, in which only a single Markov chain is generated, but all processors contribute to the construction of this chain.
2. The multi-chain parallelism, in which each processor builds its own Markov chain independently of the rest. At the end of the simulation, however, all processors communicate with each other to assess whether convergence to the target density has occurred and whether all processors have sampled the same region of high probability in the domain of the objective function.

### Which parallelism paradigm should be used when?¶

• The single-chain parallelism is most useful for large-scale problems in which each evaluation of the objective function is computationally demanding.
• The multi-chain parallelism is useful when you suspect that the target objective function is multi-modal. In such cases, the independent chains can provide further evidence on whether convergence to a single-mode or multi-modal target density has occurred.

In either parallelism case, the ParaMonte library currently uses the MPI library for inter-process communications. As such, if you want to run a ParaDRAM simulation in parallel, you will have to first save your Python scripts in external Python files and then call them from the command line via the MPI launcher application.

To see how this is done, consider the simple toy problem of sampling a 4-dimensional Multivariate Normal (MVN) distribution, as described in this Jupyter notebook.
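The MVN log-density sampled throughout this notebook is defined in the scripts below. As a sanity check (our addition, not part of the original scripts), the hand-coded `getLogFunc` can be cross-verified against `scipy.stats.multivariate_normal.logpdf`:

```python
import numpy as np
from scipy.stats import multivariate_normal

NDIM = 4                                # number of dimensions of the domain of the MVN PDF
MEAN = np.double([-10, 15., 20., 0.0])  # mean of the MVN PDF
COVMAT = np.double([[1.0, .45, -.3, 0.0],   # covariance matrix of the MVN PDF
                    [.45, 1.0, 0.3, -.2],
                    [-.3, 0.3, 1.0, 0.6],
                    [0.0, -.2, 0.6, 1.0]])
INVCOV = np.linalg.inv(COVMAT)
MVN_COEF = NDIM * np.log(1. / np.sqrt(2. * np.pi)) + np.log(np.sqrt(np.linalg.det(INVCOV)))

def getLogFunc(point):
    """Return the logarithm of the MVN PDF."""
    normedPoint = MEAN - point
    return MVN_COEF - 0.5 * np.dot(normedPoint, np.matmul(INVCOV, normedPoint))

# cross-check against scipy at an arbitrary point
point = np.double([-9.5, 15.2, 19.8, 0.1])
assert np.isclose(getLogFunc(point), multivariate_normal(MEAN, COVMAT).logpdf(point))
```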

In [3]:
import paramonte as pm
pm.verify()

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::                                                                                       ::::

_/_/_/_/                                   _/_/    _/_/
_/    _/                                  _/_/_/_/_/                     _/
_/    _/ _/_/_/_/   _/ /_/_/ _/_/_/_/     _/  _/  _/   _/_/   _/_/_/   _/_/_/  _/_/_/
_/_/_/   _/    _/   _/_/     _/    _/     _/      _/  _/   _/ _/    _/   _/   _/_/_/_/
_/       _/    _/   _/       _/    _/     _/      _/  _/   _/ _/    _/   _/   _/
_/_/_/       _/_/_/_/ _/         _/_/_/_/ _/_/_/  _/_/_/  _/_/  _/    _/   _/_/   _/_/_/

ParaMonte
plain powerful parallel
Monte Carlo library
Version 2.3.0

::::                                                                                       ::::
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

ParaMonte - NOTE: The ParaMonte::Kernel samplers have no Python package dependencies
ParaMonte - NOTE: beyond numpy. However, the ParaMonte::Python post-processing and
ParaMonte - NOTE: visualization tools require the following Python packages,
ParaMonte - NOTE:
ParaMonte - NOTE:     numpy : 1.19.2
ParaMonte - NOTE:     scipy : 1.5.2
ParaMonte - NOTE:     pandas : 1.1.2
ParaMonte - NOTE:     seaborn : 0.11.0
ParaMonte - NOTE:     matplotlib : 3.3.2
ParaMonte - NOTE:
ParaMonte - NOTE: If you do not intend to use the postprocessing and visualization tools,
ParaMonte - NOTE: you can ignore this message. Otherwise, UPDATE THE ABOVE PACKAGES TO
ParaMonte - NOTE: THE REQUESTED VERSIONS OR NEWER, SO THAT THE VISUALIZATION TOOLS
ParaMonte - NOTE: OF THE ParaMonte::Python LIBRARY FUNCTION PROPERLY.

ParaMonte - NOTE: Intel MPI library for 64-bit architecture detected at:
ParaMonte - NOTE:
ParaMonte - NOTE:     C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2020.1.216\windows\mpi\intel64\bin
ParaMonte - NOTE:
ParaMonte - NOTE: To perform ParaMonte simulations in parallel on a single node,
ParaMonte - NOTE: run the following two commands, in the form and order specified,
ParaMonte - NOTE: on a Python-aware mpiexec-aware command-line interface such as
ParaMonte - NOTE: Anaconda3 Windows command prompt:
ParaMonte - NOTE:
ParaMonte - NOTE:     "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2020.1.216\windows\mpi\intel64\bin\mpivars.bat"
ParaMonte - NOTE:
ParaMonte - NOTE:     mpiexec -localonly -n NUM_PROCESSES python main.py
ParaMonte - NOTE:
ParaMonte - NOTE: where,
ParaMonte - NOTE:
ParaMonte - NOTE:     0.   the first command defines the essential environment variables,
ParaMonte - NOTE:          and the second command runs in the simulation in parallel, where,
ParaMonte - NOTE:     1.   you should replace NUM_PROCESSES with the number of processors
ParaMonte - NOTE:          you wish to assign to your simulation task and,
ParaMonte - NOTE:     2.   the flag '-localonly' indicates a parallel simulation on only
ParaMonte - NOTE:          a single node (this flag will obviate the need for the MPI
ParaMonte - NOTE:          https://software.intel.com/en-us/get-started-with-mpi-for-windows
ParaMonte - NOTE:     3.   main.py is the Python file which serves as the entry point to
ParaMonte - NOTE:          your simulation, where you call the ParaMonte sampler routines.
ParaMonte - NOTE:
ParaMonte - NOTE: Note that the above two commands must be executed on a command-line that
ParaMonte - NOTE: recognizes both Python and mpiexec applications, such as the Anaconda
ParaMonte - NOTE: command-line interface. For more information, in particular, on how
ParaMonte - NOTE: to register to run Hydra services for multi-node simulations
ParaMonte - NOTE: on Windows servers, visit:
ParaMonte - NOTE:
ParaMonte - NOTE:     https://www.cdslab.org/paramonte

ParaMonte - NOTE: To check for the MPI library installation status or display the above
ParaMonte - NOTE: messages in the future, type the following on the Python command-line:
ParaMonte - NOTE:
ParaMonte - NOTE:     import paramonte as pm
ParaMonte - NOTE:     pm.verify()
ParaMonte - NOTE:
ParaMonte - NOTE: To get started, type the following on the Python command-line,
ParaMonte - NOTE:
ParaMonte - NOTE:     import paramonte as pm
ParaMonte - NOTE:     pm.helpme()


In [4]:
pm.checkForUpdate() # check for new versions of the library

ParaMonte - NOTE: You have the latest version of the ParaMonte library.
ParaMonte - NOTE: To see the most recent changes to the library, visit,
ParaMonte - NOTE:
ParaMonte - NOTE:     https://www.cdslab.org/paramonte/notes/overview/paramonte-python-release-notes



## Running a single-chain ParaDRAM simulation in parallel on multiple processors¶

We will save our parallel script in an external file bearing the same name as this Jupyter notebook,

In [5]:
with open("./sampling_multivariate_normal_distribution_via_paradram_parallel_singleChain.py", "w") as file:
    contents = """
import numpy as np

NDIM = 4                               # number of dimensions of the domain of the MVN PDF
MEAN = np.double([-10, 15., 20., 0.0]) # the mean of the MVN PDF
COVMAT = np.double( [ [1.0, .45, -.3, 0.0] # the covariance matrix of the MVN PDF
                    , [.45, 1.0, 0.3, -.2]
                    , [-.3, 0.3, 1.0, 0.6]
                    , [0.0, -.2, 0.6, 1.0]
                    ] )

INVCOV = np.linalg.inv(COVMAT) # the inverse of the covariance matrix of the MVN distribution

# The following is the log of the coefficient used in the definition of the MVN.

MVN_COEF = NDIM * np.log( 1. / np.sqrt(2.*np.pi) ) + np.log( np.sqrt(np.linalg.det(INVCOV)) )

# the logarithm of the objective function: log(MVN)

def getLogFunc(point):
    '''
    Return the logarithm of the MVN PDF.
    '''
    normedPoint = MEAN - point
    return MVN_COEF - 0.5 * np.dot(normedPoint, np.matmul(INVCOV, normedPoint))

import paramonte as pm

pmpd = pm.ParaDRAM() # define a ParaMonte sampler instance

pmpd.mpiEnabled = True # This is essential as it enables the invocation of the MPI-parallelized ParaDRAM routines.

pmpd.spec.overwriteRequested = True # overwrite existing output files if needed
pmpd.spec.randomSeed = 3751 # initialize the random seed to generate reproducible results
pmpd.spec.outputFileName = "./out/mvn_parallel_singleChain"
pmpd.spec.progressReportPeriod = 20000
pmpd.spec.chainSize = 30000 # the default 100,000 unique points is too large for this simple example

pmpd.runSampler( ndim = 4
               , getLogFunc = getLogFunc
               )
"""
    file.write(contents)


Here is the saved MPI-parallelized Python script.

Note that the only difference between the above parallel script and its serial version is the extra Python statement pmpd.mpiEnabled = True. This flag tells the ParaDRAM sampler to initiate the simulation in parallel and to silence all output messages that would otherwise be printed redundantly by every process.

IMPORTANT: At this point, we have assumed that you already have an MPI runtime library installed on your system. We highly recommend the Intel MPI library on Windows and Linux, and Open MPI on macOS. You can run pm.verify() on your Python command line, just as described in the Jupyter notebook for the serial sampling of the MVN distribution, to verify the existence of the MPI library on your system.

We will now run this code in parallel on 3 processors via the mpiexec launcher. Depending on your system, your platform, or the supercomputer on which you are running this code, you may need a different MPI launcher (e.g., ibrun, mpirun, ...). In the following, we assume that you are using the Intel MPI library if your operating system is Windows (as implied by the flag -localonly).

Now, we run the MPI-enabled Python script in parallel on three cores, on the terminal (not in the Python session). On Linux or macOS, we can try the following command,

mpiexec -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_singleChain.py


On Windows, if you are using the Intel MPI library, we can try the following,

mpiexec -localonly -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_singleChain.py


Otherwise, the same syntax and flags as in the Linux and macOS case should work fine. To understand the meaning of the extra -localonly flag, see the ParaMonte library documentation page.

The following command combines the above two commands in a single line so that it works, whether you are using a Windows machine or Linux/macOS,

In [6]:
!ls && \
mpiexec -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_singleChain.py || \
mpiexec -localonly -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_singleChain.py


************************************************************************************************************************************
************************************************************************************************************************************
****                                                                                                                            ****
****                                                                                                                            ****
****                                                         ParaMonte                                                          ****
****                                                  Plain Powerful Parallel                                                   ****
****                                                    Monte Carlo Library                                                     ****
****                                                                                                                            ****
****                                                       Version 1.4.0                                                        ****
****                                                                                                                            ****
****                                              Build: Thu Oct 29 01:05:57 2020                                               ****
****                                                                                                                            ****
****                                                   Department of Physics                                                    ****
****                                              Computational & Data Science Lab                                              ****
****                                          Data Science Program, College of Science                                          ****
****                                            The University of Texas at Arlington                                            ****
****                                                                                                                            ****
****                                                  originally developed at                                                   ****
****                                                                                                                            ****
****                                                 Multiscale Modeling Group                                                  ****
****                                          Center for Computational Oncology (CCO)                                           ****
****                                 Oden Institute for Computational Engineering and Sciences                                  ****
****                               Department of Aerospace Engineering and Engineering Mechanics                                ****
****                                     Department of Neurology, Dell-Seton Medical School                                     ****
****                                            Department of Biomedical Engineering                                            ****
****                                             The University of Texas at Austin                                              ****
****                                                                                                                            ****
****                                                                                                                            ****
****                                                                                                                            ****
****                                                   [email protected]                                                    ****
****                                                  [email protected]                                                   ****
****                                                   [email protected]                                                    ****
****                                                                                                                            ****
****                                                       cdslab.org/pm                                                        ****
****                                                                                                                            ****
****                                             https://www.cdslab.org/paramonte/                                              ****
****                                                                                                                            ****
****                                                                                                                            ****
************************************************************************************************************************************
************************************************************************************************************************************

************************************************************************************************************************************
****                                                                                                                            ****
****                                       Setting up the ParaDRAM simulation environment                                       ****
****                                                                                                                            ****
************************************************************************************************************************************

ParaDRAM - NOTE: Variable outputFileName detected among the input variables to ParaDRAM:
ParaDRAM - NOTE: Absolute path to the current working directory:
ParaDRAM - NOTE: Generating the requested directory for the ParaDRAM output files:

ParaDRAM - NOTE: Generating the output report file:

ParaDRAM - NOTE: Running the simulation in parallel on 3 processes.
ParaDRAM - NOTE: Please see the output report and progress files for further realtime simulation details.

Accepted/Total Func. Call   Dynamic/Overall Acc. Rate   Elapsed/Remained Time [s]
=========================   =========================   =========================

   4490 / 20000               0.2223 / 0.2223             0.2480 / 1.4090
  10297 / 40000               0.2901 / 0.2562             0.5330 / 1.0199
  16265 / 60000               0.3000 / 0.2708             0.8100 / 0.6840
  22469 / 80000               0.3083 / 0.2801             1.0840 / 0.3633
  28630 / 100000              0.3101 / 0.2861             1.3600 / 0.0651
  30000 / 104525              0.3081 / 0.2871             1.4200 / 0.0000

ParaDRAM - NOTE: Computing the statistical properties of the Markov chain...

ParaDRAM - NOTE: Computing the final decorrelated sample size...

ParaDRAM - NOTE: Generating the output sample file:

ParaDRAM - NOTE: Computing the statistical properties of the final refined sample...


'ls' is not recognized as an internal or external command,
operable program or batch file.
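Before parsing the output files, the last row of the progress table above can be sanity-checked: the overall acceptance rate is simply the ratio of accepted points to total function calls (a back-of-the-envelope check of ours; the sampler's internal accounting may round slightly differently):

```python
accepted, totalCalls = 30000, 104525  # last row of the progress table above
overallAcceptanceRate = accepted / totalCalls
print(round(overallAcceptanceRate, 4))  # ≈ 0.287, matching the reported 0.2871 to ~3 digits
```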


The sampler has now generated 5 output files, all prefixed with mvn_parallel_singleChain_*. In particular, the simulation report file contains a wealth of interesting information about the performance of the parallel simulation. We can process these files in the same way we did for the serial sampling of the MVN PDF via the ParaDRAM sampler. For example, to parse the contents of the report file, we can try,

In [7]:
import paramonte as pm
print(pm.version.interface.get())
print(pm.version.kernel.get())

pmpd = pm.ParaDRAM() # create a sampler instance whose readReport method parses the output files
pmpd.readReport("./out/mvn_parallel_singleChain") # parse the report file matching this output prefix
report = pmpd.reportList[0]

ParaMonte Python Interface Version 2.3.0
ParaMonte Python Kernel Version 1.4.0

ParaDRAM - NOTE: 1 files detected matching the pattern: "./out/mvn_parallel_singleChain*_report.txt"

done in 0.008975 seconds.

ParaDRAM - NOTE: The processed report files are now stored in the newly-created
ParaDRAM - NOTE: component reportList of the ParaDRAM object as a Python list.
ParaDRAM - NOTE: For example, to access the entire contents of the first (or the only) report file, try:
ParaDRAM - NOTE: where you will have to replace pmpd with your ParaDRAM instance name.
ParaDRAM - NOTE: To access the simulation statistics and information, examine the contents of the
ParaDRAM - NOTE: components of the following structures:
ParaDRAM - NOTE:     pmpd.reportList[0].contents.print()  # to print the contents of the report file.
ParaDRAM - NOTE:     pmpd.reportList[0].setup             # to get information about the simulation setup.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.time        # to get the timing information of the simulation.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.chain       # to get the statistics of the simulation output sample.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.numFuncCall # to get information about the number of function calls.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.parallelism # to get information about the simulation parallelism.
ParaDRAM - NOTE:     pmpd.reportList[0].spec              # to get the simulation specification in the report file.



The report file contains a wealth of detailed information about different aspects of the parallel simulation. Here is a glance at some of the information extracted from the file,

In [8]:
print(report.stats.time.perFuncCall.value)
print(report.stats.time.perFuncCall.description)

1.313000702847873e-05
This is the average pure time cost of each function call, in seconds.

In [9]:
print(report.stats.time.perInterProcessCommunication.value)
print(report.stats.time.perInterProcessCommunication.description)

2.315229732036249e-06
This is the average time cost of inter-process communications per used (accepted or rejected or delayed-rejection) function call, in seconds.
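Together, the two timings above give a rough estimate of the fraction of each function call's wall time spent on inter-process communication (a back-of-the-envelope calculation of ours, not a ParaMonte API):

```python
timePerFuncCall = 1.313000702847873e-05  # seconds, report.stats.time.perFuncCall above
timePerComm = 2.315229732036249e-06      # seconds, report.stats.time.perInterProcessCommunication above
commFraction = timePerComm / (timePerFuncCall + timePerComm)
print(round(commFraction, 3))  # ≈ 0.15: communication costs roughly 15% of the per-call budget
```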

In [10]:
print(report.stats.parallelism.current.numProcess.value)
print(report.stats.parallelism.current.numProcess.description)

3
This is the number of processes (images) used in this simulation.

In [11]:
print(report.stats.parallelism.current.speedup.value)
print(report.stats.parallelism.current.speedup.description)

1.573338694004443
This is the estimated maximum speedup gained via singleChain parallelization model compared to serial mode.

In [12]:
print(report.stats.parallelism.optimal.current.speedup.value)
print(report.stats.parallelism.optimal.current.speedup.description)

1.573338694004443
This is the predicted optimal maximum speedup gained via singleChain parallelization model, given the current MCMC sampling efficiency.

In [13]:
print(report.stats.parallelism.optimal.absolute.speedup.value)
print(report.stats.parallelism.optimal.absolute.speedup.description)

1.962074970857193
This is the predicted absolute optimal maximum speedup gained via singleChain parallelization model, under any MCMC sampling efficiency. This simulation will likely NOT benefit from any additional computing processors beyond the predicted absolute optimal number, 3, in the above. This is true for any value of MCMC sampling efficiency. Keep in mind that the predicted absolute optimal number of processors is just an estimate whose accuracy depends on many runtime factors, including the topology of the communication network being used, the number of processors per node, and the number of tasks to each processor or node.

In [14]:
print(report.stats.parallelism.processContribution.value)
print(report.stats.parallelism.processContribution.description)

[13792, 9551, 6657]
These are contributions of individual processes to the construction of the MCMC chain. Essentially, they represent the total number of accepted states by the corresponding processor, starting from the first processor to the last. This information is mostly informative in parallel Fork-Join (singleChain) simulations.
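In the Fork-Join (singleChain) model, every accepted state originates from exactly one process, so the contributions above partition the chain and must sum to the requested chainSize:

```python
processContribution = [13792, 9551, 6657]  # from the report above
assert sum(processContribution) == 30000   # equals pmpd.spec.chainSize of this simulation
```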


The ParaDRAM sampler also automatically computes the strong-scaling behavior of the parallel simulation under both the current and the absolutely optimal simulation conditions. For example, we can plot these scaling results as follows,

In [15]:
print(report.stats.parallelism.optimal.current.scaling.strong.speedup.value)
print(report.stats.parallelism.optimal.current.scaling.strong.speedup.description)

[1.0, 1.4744375, 1.5733387, 1.5095088, 1.394962, 1.2742799]
This is the predicted strong-scaling speedup behavior of the singleChain parallelization model, given the current MCMC sampling efficiency, for increasing numbers of processes, starting from a single process.

In [16]:
print(report.stats.parallelism.optimal.absolute.scaling.strong.speedup.value)
print(report.stats.parallelism.optimal.absolute.scaling.strong.speedup.description)

[1.0, 1.7002015, 1.962075, 1.9436468, 1.809423, 1.6461051]
This is the predicted absolute strong-scaling speedup behavior of the singleChain parallelization model, under any MCMC sampling efficiency, for increasing numbers of processes, starting from a single process.

In [17]:
%matplotlib notebook
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
fig = plt.figure()
ax = fig.gca()
ax.plot(report.stats.parallelism.optimal.current.scaling.strong.speedup.value)
ax.plot(report.stats.parallelism.optimal.absolute.scaling.strong.speedup.value)
ax.set_ylabel("Speedup")
ax.set_xlabel("Number of Processors")
ax.legend(labels = ["current","absolutely-optimal"])

Out[17]:
<matplotlib.legend.Legend at 0x190cb224488>

### The efficiency of the parallel simulation¶

As we see in the above plot and in the information extracted from the report file, the estimated maximum speedup gained via the singleChain parallelization model relative to serial mode was only moderate (less than a factor of two). This is partly because the example objective function is too cheap to compute and partly because this simulation was performed on a reasonably fast quad-core processor.

More importantly, note the predicted absolute optimal maximum speedup gained via the singleChain parallelization model, under any MCMC sampling efficiency. It tells us that no matter how we configure this simulation, the parallel speedup can be at most a factor of ~2 over a serial run of the same problem with the same simulation specifications, regardless of how many CPU cores we use.

The sampler's ability to provide such detailed efficiency reports is remarkable, as it helps us set up parallel simulations more reasonably and optimally, without wasting computational resources on processors that yield no efficiency gain.

When working with expensive large-scale simulations, it is therefore a good idea to run a few small test simulations first, check the report file for the predicted optimal number of physical cores, and then request that many cores when invoking the MPI launcher.
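For instance, the predicted optimal core count can be read straight off the current-efficiency strong-scaling curve reported above (a small sketch; the speedup entries correspond to 1, 2, 3, ... processes):

```python
speedup = [1.0, 1.4744375, 1.5733387, 1.5095088, 1.394962, 1.2742799]  # from the report above
optimalNumProcess = speedup.index(max(speedup)) + 1  # process counts start at 1
print(optimalNumProcess)  # 3, consistent with the report's predicted optimal number of processors
```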

Similar to the report file, the rest of the simulation output files can also be parsed and analyzed. However, such tasks are identical to the serial case, so we refer the reader of this notebook to the Jupyter notebook for the serial version of this simulation problem.

## Running a multi-chain ParaDRAM simulation in parallel on multiple processors¶

There is another mode of parallelization, the multiChain (multi-chain) mode, by which the ParaDRAM sampler can sample points from the objective function. In this mode, the sampler generates multiple chains, each assigned to one physical core of the computer, and each chain explores the domain of the objective function independently.

To make the exploration even more interesting and robust (i.e., to help ensure convergence to the target objective function by all independent chains), we can also let the initial starting point of each MCMC chain be chosen at random. To do so, we must also specify a domain from which the initial random start points will be sampled; otherwise, the default domain extends from negative infinity to positive infinity, which is problematic for computer simulations,

In [18]:
with open("./sampling_multivariate_normal_distribution_via_paradram_parallel_multiChain.py", "w") as file:
    contents = """
import numpy as np

NDIM = 4                               # number of dimensions of the domain of the MVN PDF
MEAN = np.double([-10, 15., 20., 0.0]) # the mean of the MVN PDF
COVMAT = np.double( [ [1.0, .45, -.3, 0.0] # the covariance matrix of the MVN PDF
                    , [.45, 1.0, 0.3, -.2]
                    , [-.3, 0.3, 1.0, 0.6]
                    , [0.0, -.2, 0.6, 1.0]
                    ] )

INVCOV = np.linalg.inv(COVMAT) # the inverse of the covariance matrix of the MVN distribution

# The following is the log of the coefficient used in the definition of the MVN.

MVN_COEF = NDIM * np.log( 1. / np.sqrt(2.*np.pi) ) + np.log( np.sqrt(np.linalg.det(INVCOV)) )

# the logarithm of the objective function: log(MVN)

def getLogFunc(point):
    '''
    Return the logarithm of the MVN PDF.
    '''
    normedPoint = MEAN - point
    return MVN_COEF - 0.5 * np.dot(normedPoint, np.matmul(INVCOV, normedPoint))

import paramonte as pm

pmpd = pm.ParaDRAM() # define a ParaMonte sampler instance

pmpd.mpiEnabled = True # This is essential as it enables the invocation of the MPI-parallelized ParaDRAM routines.

pmpd.spec.overwriteRequested = True # overwrite existing output files if needed
pmpd.spec.randomSeed = 3751 # initialize the random seed to generate reproducible results
pmpd.spec.outputFileName = "./out/mvn_parallel_multiChain"
pmpd.spec.progressReportPeriod = 20000
pmpd.spec.chainSize = 30000 # the default 100,000 unique points is too large for this simple example, so set it to 30000

# Set up a random initial starting point for each of the independent MCMC chains
# by defining the domain of the random start points. The following defines the
# boundaries of the NDIM(=four)-dimensional hypercube from within which the
# random initial start points will be drawn by the sampler.

pmpd.spec.randomStartPointRequested = True # This is essential; otherwise, random initialization won't happen.
pmpd.spec.randomStartPointDomainLowerLimitVec = NDIM * [-25]
pmpd.spec.randomStartPointDomainUpperLimitVec = NDIM * [+25]

# set the parallelization model to multichain

pmpd.spec.parallelizationModel = "multi chain" # the value is case and white-space insensitive

pmpd.runSampler( ndim = 4
               , getLogFunc = getLogFunc
               )
"""
    file.write(contents)


Now, we run the MPI-enabled Python script in parallel on three cores, on the terminal (not in the Python session),

In [19]:
!ls && \
mpiexec -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_multiChain.py || \
mpiexec -localonly -n 3 python sampling_multivariate_normal_distribution_via_paradram_parallel_multiChain.py


************************************************************************************************************************************
************************************************************************************************************************************
****                                                                                                                            ****
****                                                                                                                            ****
****                                                         ParaMonte                                                          ****
****                                                  Plain Powerful Parallel                                                   ****
****                                                    Monte Carlo Library                                                     ****
****                                                                                                                            ****
****                                                       Version 1.4.0                                                        ****
****                                                                                                                            ****
****                                              Build: Thu Oct 29 01:05:57 2020                                               ****
****                                                                                                                            ****
****                                                   Department of Physics                                                    ****
****                                              Computational & Data Science Lab                                              ****
****                                          Data Science Program, College of Science                                          ****
****                                            The University of Texas at Arlington                                            ****
****                                                                                                                            ****
****                                                  originally developed at                                                   ****
****                                                                                                                            ****
****                                                 Multiscale Modeling Group                                                  ****
****                                          Center for Computational Oncology (CCO)                                           ****
****                                 Oden Institute for Computational Engineering and Sciences                                  ****
****                               Department of Aerospace Engineering and Engineering Mechanics                                ****
****                                     Department of Neurology, Dell-Seton Medical School                                     ****
****                                            Department of Biomedical Engineering                                            ****
****                                             The University of Texas at Austin                                              ****
****                                                                                                                            ****
****                                                                                                                            ****
****                                                                                                                            ****
****                                                   [email protected]                                                    ****
****                                                  [email protected]                                                   ****
****                                                   [email protected]                                                    ****
****                                                                                                                            ****
****                                                       cdslab.org/pm                                                        ****
****                                                                                                                            ****
****                                             https://www.cdslab.org/paramonte/                                              ****
****                                                                                                                            ****
****                                                                                                                            ****
************************************************************************************************************************************
************************************************************************************************************************************

************************************************************************************************************************************
****                                                                                                                            ****
****                                       Setting up the ParaDRAM simulation environment                                       ****
****                                                                                                                            ****
************************************************************************************************************************************

ParaDRAM - NOTE: Variable outputFileName detected among the input variables to ParaDRAM:
ParaDRAM - NOTE: Absolute path to the current working directory:
ParaDRAM - NOTE: Generating the requested directory for the ParaDRAM output files:

ParaDRAM - NOTE: Generating the output report file:

ParaDRAM - NOTE: Running the simulation in parallel on 3 processes.
ParaDRAM - NOTE: Please see the output report and progress files for further realtime simulation details.

Accepted/Total Func. Call   Dynamic/Overall Acc. Rate   Elapsed/Remained Time [s]
=========================   =========================   =========================

            2312 / 20000             0.1154 / 0.1154             0.2880 / 3.4490
            6582 / 40000             0.2128 / 0.1641             0.6120 / 2.1774
           11289 / 60000             0.2376 / 0.1886             0.9050 / 1.5000
           16478 / 80000             0.2613 / 0.2068             1.1970 / 0.9823
          21912 / 100000             0.2705 / 0.2195             1.4980 / 0.5529
          27517 / 120000             0.2803 / 0.2296             1.8000 / 0.1624
          30000 / 128508             0.2916 / 0.2337             1.9300 / 0.0000

ParaDRAM - NOTE: Computing the statistical properties of the Markov chain...

ParaDRAM - NOTE: Computing the final decorrelated sample size...

ParaDRAM - NOTE: Generating the output sample file:

ParaDRAM - NOTE: Computing the statistical properties of the final refined sample...

ParaDRAM - NOTE: Computing the inter-chain convergence probabilities...

ParaDRAM - NOTE: The smallest KS probabilities for the inter-chain sampling convergence:
ParaDRAM - NOTE: 1.756220230600487E-002 for SampleVariable3 on the chains generated by processes 1 and 2.
ParaDRAM - NOTE: 1.756220230600487E-002 for SampleVariable3 on the chains generated by processes 2 and 1.
ParaDRAM - NOTE: 3.510737888357553E-002 for SampleVariable1 on the chains generated by processes 3 and 2.




Unlike the other simulation modes, whether serial or singleChain-parallel, the multiChain-parallel ParaDRAM simulation generates 5 * number_of_cores output files (prefixed by mvn_parallel_multiChain*) on the system, distinguished from each other by their processor IDs (starting from 1).
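To get a sense of how the per-processor output files are organized on disk, here is a small helper (not part of this notebook) that groups them by their file-name suffix. The directory name `./out`, the prefix `mvn_parallel_multiChain`, and the helper name `groupOutputFiles` are only illustrative assumptions matching the simulation above.

```python
# Hypothetical helper (not a ParaMonte API): group the per-processor
# ParaDRAM output files by their suffix (report, chain, sample, ...).
import glob
import os

def groupOutputFiles(prefix = "./out/mvn_parallel_multiChain"):
    """Return a dict mapping each file-name suffix to its list of paths."""
    groups = {}
    for path in sorted(glob.glob(prefix + "*")):
        # file names end with, e.g., "_process_1_report.txt"
        suffix = os.path.basename(path).rsplit("_", 1)[-1]
        groups.setdefault(suffix, []).append(path)
    return groups
```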

#### The Kolmogorov-Smirnov test of similarity of the independent samples from the independent MCMC chains¶

By looking at the end of any of the output _report.txt files, we will notice that the Kolmogorov-Smirnov (KS) probabilities for pairs of these independent samples from independent MCMC chains are generally quite high, indicating a high level of similarity between the independent samples obtained from the independent MCMC chains. This means that there is no evidence of a lack of convergence of the MCMC samples to the target objective function. We know this for sure in this particular example, because the structure of the objective function is known to us. In other problems, however, this may never be known; in other words, we can only hope that convergence has occurred.
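The reported probabilities are p-values of two-sample KS tests between pairs of refined chains. The following standalone illustration of that statistical idea (not ParaDRAM's internal implementation) uses `scipy.stats.ks_2samp` on synthetic samples: two "chains" drawn from the same standard normal target, and a third stuck in a shifted mode.

```python
# Illustration only: the two-sample Kolmogorov-Smirnov test that underlies
# the reported inter-chain convergence probabilities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)

# two "chains" sampling the same standard normal target
chainA = rng.standard_normal(5000)
chainB = rng.standard_normal(5000)

# a third "chain" stuck in a different mode, shifted by 2 standard deviations
chainC = rng.standard_normal(5000) + 2.0

probSame = stats.ks_2samp(chainA, chainB).pvalue # large unless the samples differ
probDiff = stats.ks_2samp(chainA, chainC).pvalue # essentially zero
```

A low KS probability between two chains is thus evidence that they are not sampling the same distribution, exactly as in the inter-chain convergence table below.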

Here is a KS-probability-table excerpt from the first processor's output *_report.txt file,

In [20]:
import paramonte as pm
print(pm.version.interface.get())
print(pm.version.kernel.get())
pmpd = pm.ParaDRAM()
report = pmpd.readReport( "./out/mvn_parallel_multiChain"
                        , renabled = True
                        )[0] # keep only the first report file's contents

ParaMonte Python Interface Version 2.3.0
ParaMonte Python Kernel Version 1.4.0

ParaDRAM - NOTE: 3 files detected matching the pattern: "./out/mvn_parallel_multiChain*_report.txt"

done in 0.008976 seconds.

done in 0.007977 seconds.

done in 0.006981 seconds.

ParaDRAM - NOTE: The processed report files are now stored in the output variable as a
ParaDRAM - NOTE: Python list. For example, to access the contents of the first (or the only)
ParaDRAM - NOTE: report file stored in an output variable named reportList, try:
ParaDRAM - NOTE: where you will have to replace pmpd with your ParaDRAM instance name.
ParaDRAM - NOTE: To access the simulation statistics and information, examine the contents of the
ParaDRAM - NOTE: components of the following structures:
ParaDRAM - NOTE:     reportList[0].contents.print()  # to print the contents of the report file.
ParaDRAM - NOTE:     reportList[0].setup             # to get information about the simulation setup.
ParaDRAM - NOTE:     reportList[0].stats.time        # to get the timing information of the simulation.
ParaDRAM - NOTE:     reportList[0].stats.chain       # to get the statistics of the simulation output sample.
ParaDRAM - NOTE:     reportList[0].stats.numFuncCall # to get information about the number of function calls.
ParaDRAM - NOTE:     reportList[0].stats.parallelism # to get information about the simulation parallelism.
ParaDRAM - NOTE:     reportList[0].spec              # to get the simulation specification in the report file.


In [21]:
print(report.stats.chain.refined.kstest.prob.value)
print(report.stats.chain.refined.kstest.prob.description)

ProcessID   probKS(SampleLogFunc)   probKS(SampleVariable1)   probKS(SampleVariable2)   probKS(SampleVariable3)   probKS(SampleVariable4)
        2         0.13943076E+000           0.30169409E-001           0.27181411E+000           0.17562202E-001           0.69328775E+000
        3         0.58678698E+000           0.41178962E+000           0.73120508E+000           0.17759241E+000           0.16880407E+000
This is the table of pairwise inter-chain Kolmogorov-Smirnov (KS) convergence (similarity) probabilities. Higher KS probabilities are better, indicating less evidence for a lack of convergence.
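For further inspection, the printed table can be loaded into a pandas DataFrame; the sketch below simply copies the whitespace-separated values reported above into a literal string (Python parses the Fortran-style `E+000` exponents directly).

```python
# Load the KS-probability table reported above into pandas (illustration).
import io
import pandas as pd

ksTable = pd.read_csv(io.StringIO(
    "ProcessID probKS(SampleLogFunc) probKS(SampleVariable1) "
    "probKS(SampleVariable2) probKS(SampleVariable3) probKS(SampleVariable4)\n"
    "2 0.13943076E+000 0.30169409E-001 0.27181411E+000 0.17562202E-001 0.69328775E+000\n"
    "3 0.58678698E+000 0.41178962E+000 0.73120508E+000 0.17759241E+000 0.16880407E+000\n"
), sep=r"\s+")

# the smallest entry matches the smallest KS probability reported
# in the simulation log: ~1.756E-002 for SampleVariable3
smallest = ksTable.drop(columns="ProcessID").min().min()
```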


We can also read and visualize the results of these parallel runs just as before,

In [22]:
%matplotlib notebook
import paramonte as pm
pmpd = pm.ParaDRAM()
markovList = pmpd.readMarkovChain( "./out/mvn_parallel_multiChain"
                                 , renabled = True
                                 )

ParaDRAM - WARNING: The delimiter is neither given as input to readMarkovchain()
ParaDRAM - WARNING: nor set as a simulation specification of the ParaDRAM object.
ParaDRAM - WARNING: This information is essential, otherwise how could the output files be parsed?
ParaDRAM - WARNING: For now, the ParaDRAM sampler will assume a comma-separated
ParaDRAM - WARNING: file format for the contents of the chain file(s) to be parsed.

ParaDRAM - NOTE: 3 files detected matching the pattern: "./out/mvn_parallel_multiChain_*_chain.txt"

ParaDRAM - NOTE: ndim = 4, count = 128508
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.014997 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.013961 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.001003 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.001012 seconds.

ParaDRAM - NOTE: ndim = 4, count = 313770
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.015956 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.014958 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.000966 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.000997 seconds.

ParaDRAM - NOTE: ndim = 4, count = 101395
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.01594 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.014992 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.000997 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.000997 seconds.

ParaDRAM - NOTE: The processed markovChain files are now stored in the output variable as a
ParaDRAM - NOTE: Python list. For example, to access the contents of the first (or the only)
ParaDRAM - NOTE: markovChain file stored in an output variable named markovChainList, try:
ParaDRAM - NOTE: where you will have to replace pmpd with your ParaDRAM instance name.
ParaDRAM - NOTE: To access the plotting tools, try:
ParaDRAM - NOTE:     markovChainList[0].plot.<PRESS TAB TO SEE THE LIST OF PLOTS>
ParaDRAM - NOTE:     markovChainList[0].plot.line()         # to make 2D line plots.
ParaDRAM - NOTE:     markovChainList[0].plot.scatter()      # to make 2D scatter plots.
ParaDRAM - NOTE:     markovChainList[0].plot.lineScatter()  # to make 2D line-scatter plots.
ParaDRAM - NOTE:     markovChainList[0].plot.line3()        # to make 3D line plots.
ParaDRAM - NOTE:     markovChainList[0].plot.scatter3()     # to make 3D scatter plots.
ParaDRAM - NOTE:     markovChainList[0].plot.lineScatter3() # to make 3D line-scatter plots.
ParaDRAM - NOTE:     markovChainList[0].plot.contour()      # to make fast 2D kernel density plots.
ParaDRAM - NOTE:     markovChainList[0].plot.contourf()     # to make fast 2D kernel density filled contour plots.
ParaDRAM - NOTE:     markovChainList[0].plot.contour3()     # to make fast 3D kernel density contour plots.
ParaDRAM - NOTE:     markovChainList[0].plot.histplot()     # to make seaborn 1D distribution plots.
ParaDRAM - NOTE:     markovChainList[0].plot.kdeplot1()     # to make seaborn 1D kernel density plots.
ParaDRAM - NOTE:     markovChainList[0].plot.kdeplot2()     # to make seaborn 2D kernel density plots.
ParaDRAM - NOTE:     markovChainList[0].plot.grid()         # to make GridPlot
ParaDRAM - NOTE: To plot or inspect the variable autocorrelations or the correlation/covariance matrices, try:
ParaDRAM - NOTE:     markovChainList[0].stats.<PRESS TAB TO SEE THE LIST OF COMPONENTS>


In [24]:
markovList[0].plot.line()

ParaDRAM - NOTE: making the line plot...
done in 1.252649 seconds.


Let's take the log of the x-axis for better visualization,

In [25]:
markovList[0].plot.line()
markovList[0].plot.line.currentFig.axes.set_xscale("log")

ParaDRAM - NOTE: making the line plot...
done in 1.247663 seconds.


We can compare this plot with the resulting Markov chain from, say, processor #3,

In [26]:
markovList[2].plot.line()
markovList[2].plot.line.currentFig.axes.set_xscale("log")

ParaDRAM - NOTE: making the line plot...
done in 1.001355 seconds.


Or perhaps, compare all of the independent (compact) chains (of uniquely sampled points) on the same plot,

In [27]:
import matplotlib.pyplot as plt
import paramonte as pm

pmpd = pm.ParaDRAM()
pmpd.readReport("./out/mvn_parallel_multiChain") # creates the component pmpd.reportList
pmpd.readChain("./out/mvn_parallel_multiChain")  # creates the component pmpd.chainList

plt.figure() # one figure for all plots

for process in range(pmpd.reportList[0].stats.parallelism.current.numProcess.value):
    pmpd.chainList[process].plot.line.figure.enabled = False # all chains appear in the same figure
    pmpd.chainList[process].plot.line.ccolumns = None # turn off color-mapping
    pmpd.chainList[process].plot.line()

pmpd.chainList[process].plot.line.currentFig.axes.set_xscale("log")

ParaDRAM - NOTE: 3 files detected matching the pattern: "./out/mvn_parallel_multiChain*_report.txt"

done in 0.012984 seconds.

done in 0.007978 seconds.

done in 0.008975 seconds.

ParaDRAM - NOTE: The processed report files are now stored in the newly-created
ParaDRAM - NOTE: component reportList of the ParaDRAM object as a Python list.
ParaDRAM - NOTE: For example, to access the entire contents of the first (or the only) report file, try:
ParaDRAM - NOTE: where you will have to replace pmpd with your ParaDRAM instance name.
ParaDRAM - NOTE: To access the simulation statistics and information, examine the contents of the
ParaDRAM - NOTE: components of the following structures:
ParaDRAM - NOTE:     pmpd.reportList[0].contents.print()  # to print the contents of the report file.
ParaDRAM - NOTE:     pmpd.reportList[0].setup             # to get information about the simulation setup.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.time        # to get the timing information of the simulation.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.chain       # to get the statistics of the simulation output sample.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.numFuncCall # to get information about the number of function calls.
ParaDRAM - NOTE:     pmpd.reportList[0].stats.parallelism # to get information about the simulation parallelism.
ParaDRAM - NOTE:     pmpd.reportList[0].spec              # to get the simulation specification in the report file.

ParaDRAM - WARNING: The delimiter is neither given as input to readChain()
ParaDRAM - WARNING: nor set as a simulation specification of the ParaDRAM object.
ParaDRAM - WARNING: This information is essential, otherwise how could the output files be parsed?
ParaDRAM - WARNING: For now, the ParaDRAM sampler will assume a comma-separated
ParaDRAM - WARNING: file format for the contents of the chain file(s) to be parsed.

ParaDRAM - NOTE: 3 files detected matching the pattern: "./out/mvn_parallel_multiChain*_chain.txt"

ParaDRAM - NOTE: ndim = 4, count = 30000
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.014946 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.014961 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.001 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.000998 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.000998 seconds.

ParaDRAM - NOTE: ndim = 4, count = 30000
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.01496 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.015957 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.001034 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.001057 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.001995 seconds.

ParaDRAM - NOTE: ndim = 4, count = 30000
ParaDRAM - NOTE: parsing file contents...
ParaDRAM - NOTE: computing the sample correlation matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.013932 seconds.
ParaDRAM - NOTE: computing the sample covariance matrix...
ParaDRAM - NOTE: creating a heatmap plot object from scratch... done in 0.013963 seconds.
ParaDRAM - NOTE: computing the sample autocorrelations...
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line plot object from scratch... done in 0.001029 seconds.
ParaDRAM - NOTE: creating a scatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a line3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a scatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a lineScatter3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a jointplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a histplot plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot1 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a kdeplot2 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour3 plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contourf plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a contour plot object from scratch... done in 0.0 seconds.
ParaDRAM - NOTE: creating a grid plot object from scratch... done in 0.000997 seconds.

ParaDRAM - NOTE: The processed chain files are now stored in the newly-created
ParaDRAM - NOTE: component chainList of the ParaDRAM object as a Python list.
ParaDRAM - NOTE: For example, to access the contents of the first (or the only) chain file, try:
ParaDRAM - NOTE: where you will have to replace pmpd with your ParaDRAM instance name.
ParaDRAM - NOTE: To access the plotting tools, try:
ParaDRAM - NOTE:     pmpd.chainList[0].plot.<PRESS TAB TO SEE THE LIST OF PLOTS>
ParaDRAM - NOTE:     pmpd.chainList[0].plot.line()         # to make 2D line plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.scatter()      # to make 2D scatter plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.lineScatter()  # to make 2D line-scatter plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.line3()        # to make 3D line plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.scatter3()     # to make 3D scatter plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.lineScatter3() # to make 3D line-scatter plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.contour()      # to make fast 2D kernel density plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.contourf()     # to make fast 2D kernel density filled contour plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.contour3()     # to make fast 3D kernel density contour plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.histplot()     # to make seaborn 1D distribution plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.kdeplot1()     # to make seaborn 1D kernel density plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.kdeplot2()     # to make seaborn 2D kernel density plots.
ParaDRAM - NOTE:     pmpd.chainList[0].plot.grid()         # to make GridPlot
ParaDRAM - NOTE: To plot or inspect the variable autocorrelations or the correlation/covariance matrices, try:
ParaDRAM - NOTE:     pmpd.chainList[0].stats.<PRESS TAB TO SEE THE LIST OF COMPONENTS>


ParaDRAM - NOTE: making the line plot... done in 0.096741 seconds.
ParaDRAM - NOTE: making the line plot... done in 0.042886 seconds.
ParaDRAM - NOTE: making the line plot... done in 0.041889 seconds.


Impressively, all of the independent chains, even though they start at random locations in the domain of the objective function, end up at the same single peak of the objective function, as expected and as illustrated in the figure above.
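The overlay technique used above (several chains on one set of axes, with a logarithmic x-axis) can also be reproduced in plain matplotlib. The sketch below substitutes synthetic random-walk "chains" that start at random locations and decay toward a common mode, standing in for the actual ParaDRAM chain files.

```python
# Illustration only: overlay several synthetic "chains" on one set of axes,
# mimicking the multi-chain comparison plot above.
import matplotlib
matplotlib.use("Agg") # headless backend for scripted use; omit in a notebook
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
numChains, chainLength = 3, 1000

fig, ax = plt.subplots() # one figure shared by all chains

for chain in range(numChains):
    # each chain starts at a random location and decays toward the mode at zero
    start = rng.uniform(-10, 10)
    decay = start * np.exp(-0.01 * np.arange(chainLength))
    noise = np.cumsum(0.1 * rng.standard_normal(chainLength))
    ax.plot(np.arange(1, chainLength + 1), decay + noise, label=f"chain {chain + 1}")

ax.set_xscale("log") # log-scale the x-axis, as in the notebook
ax.set_xlabel("MCMC step")
ax.legend()
```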

There are many more functionalities and features of the ParaMonte library that were neither explored nor mentioned in this example Jupyter notebook. You can explore them by checking the existing components of each attribute of the ParaDRAM sampler class and by visiting the ParaMonte library's documentation website.
