Report Generation Tutorial

PyGSTi is able to construct polished report documents, which provide high-level summaries as well as detailed analyses of results (Gate Set Tomography (GST) and model-testing results in particular). Reports are intended to be a quick and easy way of analyzing Model-type estimates, and pyGSTi's report generation functions are specifically designed to interact with the Results object (produced by several high-level algorithm functions - see, for example, the GST overview tutorial and GST functions tutorial). The report generation functions in pyGSTi take one or more results (often Results-type) objects as input and produce an HTML file as output. The HTML format allows the reports to include interactive plots and switches (see the workspace switchboard tutorial), making it easy to compare different types of analysis or data sets.

PyGSTi's reports are stand-alone HTML documents which cannot run Python. Thus, all the results displayed in a report must be pre-computed (in Python). If you find yourself wanting to fiddle with things and feel that these reports are too static, please consider using a Workspace object (see the Workspace tutorial) within a Jupyter notebook, where you can intermix report tables/plots and Python. Internally, functions like create_standard_report (see below) are just canned routines which use a Workspace object to generate various tables and plots and then insert them into an HTML template.

Note to veteran users: PyGSTi has for some time now produced HTML (rather than LaTeX/PDF) reports. The way to generate such reports is largely unchanged, with one important exception. Previously, the Results object had various report-generation methods included within it. We found this too restrictive, as we'd sometimes like to generate a report which utilizes the results from multiple runs of GST (to compare them, for instance). Thus, the Results class is now just a container for a DataSet and its related Models, CircuitStructures, etc. All of the report-generation capability is now housed within separate report functions, which we now demonstrate.

Get some Results

We start by performing GST using do_long_sequence_gst, as usual, to create a Results object (we could also have just loaded one from file). See the GST functions tutorial for more details.

In [1]:
import pygsti
from pygsti.construction import std1Q_XYI

target_model = std1Q_XYI.target_model()
fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8,16]
ds = pygsti.io.load_dataset("../tutorial_files/Example_Dataset.txt", cache=True)

#Run GST
target_model.set_all_parameterizations("TP") #TP-constrained
results = pygsti.do_long_sequence_gst(ds, target_model, fiducials, fiducials, germs,
                                      maxLengths, verbosity=3)
Loading from cache file: ../tutorial_files/Example_Dataset.txt.cache
--- Circuit Creation ---
   1282 sequences created
   Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
--- LGST ---
  Singular values of I_tilde (truncating to first 4 of 6) = 
  4.243730350963286
  1.1796261581655645
  0.9627515645786063
  0.9424890722054706
  0.033826151547621315
  0.01692336936843073
  
  Singular values of target I_tilde (truncating to first 4 of 6) = 
  4.242640687119286
  1.414213562373096
  1.414213562373096
  1.4142135623730954
  2.484037189058858e-16
  1.506337939585075e-16
  
    Resulting model:
    
    rho0 = TPSPAMVec with dimension 4
     0.71-0.02 0.03 0.75
    
    
    Mdefault = TPPOVM with effect vectors:
    0: FullSPAMVec with dimension 4
     0.73   0   0 0.65
    
    1: ComplementSPAMVec with dimension 4
     0.69   0   0-0.65
    
    
    
    Gi = 
    TPDenseOp with shape (4, 4)
     1.00   0   0   0
     0.01 0.92-0.03 0.02
     0.01-0.01 0.90 0.02
    -0.01   0   0 0.91
    
    
    Gx = 
    TPDenseOp with shape (4, 4)
     1.00   0   0   0
       0 0.91-0.01   0
    -0.02-0.02-0.04-0.99
    -0.05 0.03 0.81   0
    
    
    Gy = 
    TPDenseOp with shape (4, 4)
     1.00   0   0   0
     0.05   0   0 0.98
     0.01   0 0.89-0.03
    -0.06-0.82   0   0
    
    
    
    
--- Iterative MLGST: Iter 1 of 5  92 operation sequences ---: 
  --- Minimum Chi^2 GST ---
    bulk_evaltree: created initial tree (92 strs) in 0s
    bulk_evaltree: split tree (1 subtrees) in 0s
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
    --- Outer Iter 0: norm_f = 87.7156, mu=0, |J|=1139.8
    --- Outer Iter 1: norm_f = 72.5388, mu=331.735, |J|=4004.6
    --- Outer Iter 2: norm_f = 49.7369, mu=110.578, |J|=4000.74
    --- Outer Iter 3: norm_f = 49.7313, mu=36.8595, |J|=4000.76
    --- Outer Iter 4: norm_f = 49.7312, mu=12.2865, |J|=4000.76
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Sum of Chi^2 = 49.7312 (92 data params - 31 model params = expected mean of 61; p-value = 0.848365)
  Completed in 0.2s
  2*Delta(log(L)) = 49.9289
  Iteration 1 took 0.2s
  
--- Iterative MLGST: Iter 2 of 5  168 operation sequences ---: 
  --- Minimum Chi^2 GST ---
    bulk_evaltree: created initial tree (168 strs) in 0s
    bulk_evaltree: split tree (1 subtrees) in 0s
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
    --- Outer Iter 0: norm_f = 151.528, mu=0, |J|=4116.86
    --- Outer Iter 1: norm_f = 116.544, mu=1805.29, |J|=4114.08
    --- Outer Iter 2: norm_f = 111.925, mu=601.763, |J|=4113.57
    --- Outer Iter 3: norm_f = 111.481, mu=200.588, |J|=4113.47
    --- Outer Iter 4: norm_f = 111.47, mu=66.8625, |J|=4113.46
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Sum of Chi^2 = 111.47 (168 data params - 31 model params = expected mean of 137; p-value = 0.94621)
  Completed in 0.2s
  2*Delta(log(L)) = 111.83
  Iteration 2 took 0.2s
  
--- Iterative MLGST: Iter 3 of 5  450 operation sequences ---: 
  --- Minimum Chi^2 GST ---
    bulk_evaltree: created initial tree (450 strs) in 0s
    bulk_evaltree: split tree (1 subtrees) in 0s
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
    --- Outer Iter 0: norm_f = 496.301, mu=0, |J|=4503.89
    --- Outer Iter 1: norm_f = 422.607, mu=2013.07, |J|=4503.59
    --- Outer Iter 2: norm_f = 421.667, mu=671.023, |J|=4503.46
    --- Outer Iter 3: norm_f = 421.662, mu=223.674, |J|=4503.46
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Sum of Chi^2 = 421.662 (450 data params - 31 model params = expected mean of 419; p-value = 0.454312)
  Completed in 0.4s
  2*Delta(log(L)) = 422.134
  Iteration 3 took 0.5s
  
--- Iterative MLGST: Iter 4 of 5  862 operation sequences ---: 
  --- Minimum Chi^2 GST ---
    bulk_evaltree: created initial tree (862 strs) in 0s
    bulk_evaltree: split tree (1 subtrees) in 0s
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
    --- Outer Iter 0: norm_f = 854.092, mu=0, |J|=5091.17
    --- Outer Iter 1: norm_f = 813.479, mu=2302.96, |J|=5076.94
    --- Outer Iter 2: norm_f = 813.094, mu=767.654, |J|=5076.79
    --- Outer Iter 3: norm_f = 813.093, mu=255.885, |J|=5076.81
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Sum of Chi^2 = 813.093 (862 data params - 31 model params = expected mean of 831; p-value = 0.664967)
  Completed in 0.7s
  2*Delta(log(L)) = 814.492
  Iteration 4 took 0.8s
  
--- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---: 
  --- Minimum Chi^2 GST ---
    bulk_evaltree: created initial tree (1282 strs) in 0s
    bulk_evaltree: split tree (1 subtrees) in 0s
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
    --- Outer Iter 0: norm_f = 1263.73, mu=0, |J|=5733.35
    --- Outer Iter 1: norm_f = 1250.68, mu=2582.97, |J|=5732.82
    --- Outer Iter 2: norm_f = 1250.62, mu=860.99, |J|=5733.21
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Sum of Chi^2 = 1250.62 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.497713)
  Completed in 1.0s
  2*Delta(log(L)) = 1252.41
  Iteration 5 took 1.1s
  
  Switching to ML objective (last iteration)
  --- MLGST ---
    --- Outer Iter 0: norm_f = 626.205, mu=0, |J|=3678.46
    --- Outer Iter 1: norm_f = 626.2, mu=3871.69, |J|=3391.16
    --- Outer Iter 2: norm_f = 626.195, mu=4.22891e+07, |J|=3314.59
    --- Outer Iter 3: norm_f = 626.186, mu=2.22294e+07, |J|=3102.02
    --- Outer Iter 4: norm_f = 626.178, mu=2.05642e+07, |J|=3014.46
    --- Outer Iter 5: norm_f = 626.177, mu=2.02336e+07, |J|=3121.67
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Maximum log(L) = 626.177 below upper bound of -2.13594e+06
      2*Delta(log(L)) = 1252.35 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.483889)
    Completed in 1.1s
  2*Delta(log(L)) = 1252.35
  Final MLGST took 1.1s
  
Iterative MLGST Total Time: 4.1s
  -- Adding Gauge Optimized (go0) --

Make a report

Now that we have results, we use the create_standard_report function within pygsti.report to generate a report.
pygsti.report.create_standard_report is the most commonly used report-generation function in pyGSTi, as it is appropriate for smaller models (1- and 2-qubit) whose operations are or can be represented as dense matrices and/or vectors.

If the given filename ends in ".pdf" then a PDF-format report is generated; otherwise the filename specifies a folder that will be filled with HTML pages. To open an HTML-format report, open the main.html file directly inside the report's folder. Setting auto_open=True makes the finished report open in your web browser automatically.
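To make this filename convention concrete, here is a small stdlib-only sketch of the suffix check. This is purely illustrative (it is not pyGSTi's internal code); it just encodes the behavior described above:

```python
from pathlib import Path

def report_output_kind(filename):
    """Illustrative only (not pyGSTi internals): a '.pdf' suffix yields a
    single PDF file; any other name is treated as a folder that will be
    filled with HTML pages, with main.html as the entry point."""
    return "pdf" if Path(filename).suffix == ".pdf" else "html-dir"
```

So `"../tutorial_files/exampleReport.pdf"` selects PDF output, while `"../tutorial_files/exampleReport"` produces a directory of HTML pages.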

In [2]:
#HTML
pygsti.report.create_standard_report(results, "../tutorial_files/exampleReport", 
                                     title="GST Example Report", verbosity=1, auto_open=True)

print("\n")

#PDF
pygsti.report.create_standard_report(results, "../tutorial_files/exampleReport.pdf", 
                                     title="GST Example Report", verbosity=1, auto_open=True)
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/exampleReport directory
Opening ../tutorial_files/exampleReport/main.html...
*** Report Generation Complete!  Total time 25.961s ***


*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
/usr/local/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning:


Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.

Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Opening ../tutorial_files/exampleReport.pdf...
*** Report Generation Complete!  Total time 77.0165s ***
ERROR: pdflatex returned code 1 Check exampleReport.log to see details.
Out[2]:
<pygsti.report.workspace.Workspace at 0x129921128>

There are several remarks about these reports worth noting:

  1. The HTML reports are the primary report type in pyGSTi, and are much more flexible. The PDF reports are more limited (they can only display a single estimate and gauge optimization), and essentially contain a subset of the information and descriptive text of an HTML report. So, if you can, use the HTML reports. The PDF report's strength is its portability: PDFs are easily displayed by many devices, and they embed all that they need neatly into a single file. If you need to generate a PDF report from Results objects that have multiple estimates and/or gauge optimizations, consider using the Results object's view method to single out the estimate and gauge optimization you're after.
  2. It's best to use Firefox when opening the HTML reports. (If there's a problem with your browser's capabilities it will be shown on the screen when you try to load the report.)
  3. You'll need pdflatex on your system to compile PDF reports.
  4. To familiarize yourself with the layout of an HTML report, click on the gray "Help" link on the black sidebar.

Multiple estimates in a single report

Next, let's analyze the same data two different ways: with and without the TP-constraint (i.e. whether the gates must be trace-preserving), and furthermore gauge-optimize each case using several different SPAM-weights. In each case we'll call do_long_sequence_gst with gaugeOptParams=False, so that no gauge optimization is done, and then perform several gauge optimizations separately and add these to the Results object via its add_gaugeoptimized function.

In [3]:
#Case1: TP-constrained GST
tpTarget = target_model.copy()
tpTarget.set_all_parameterizations("TP")
results_tp = pygsti.do_long_sequence_gst(ds, tpTarget, fiducials, fiducials, germs,
                                      maxLengths, gaugeOptParams=False, verbosity=1)

#Gauge optimize
est = results_tp.estimates['default']
mdlFinal = est.models['final iteration estimate']
mdlTarget = est.models['target']
for spamWt in [1e-4,1e-2,1.0]:
    mdl = pygsti.gaugeopt_to_target(mdlFinal,mdlTarget,{'gates':1, 'spam':spamWt})
    est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, "Spam %g" % spamWt)
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  1282 operation sequences ---
Iterative MLGST Total Time: 3.8s
In [4]:
#Case2: "Full" GST
fullTarget = target_model.copy()
fullTarget.set_all_parameterizations("full")
results_full = pygsti.do_long_sequence_gst(ds, fullTarget, fiducials, fiducials, germs,
                                           maxLengths, gaugeOptParams=False, verbosity=1)

#Gauge optimize
est = results_full.estimates['default']
mdlFinal = est.models['final iteration estimate']
mdlTarget = est.models['target']
for spamWt in [1e-4,1e-2,1.0]:
    mdl = pygsti.gaugeopt_to_target(mdlFinal,mdlTarget,{'gates':1, 'spam':spamWt})
    est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, "Spam %g" % spamWt)
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  1282 operation sequences ---
Iterative MLGST Total Time: 3.9s

We'll now call the same create_standard_report function, but this time instead of passing a single Results object as the first argument we'll pass a dictionary of them. This results in an HTML report that includes switches to select which case ("TP" or "Full"), as well as which gauge optimization, to display output quantities for. PDF reports cannot support this interactivity, and so if you try to generate a PDF report you'll get an error.

In [5]:
ws = pygsti.report.create_standard_report({'TP': results_tp, "Full": results_full},
                                         "../tutorial_files/exampleMultiEstimateReport",
                                         title="Example Multi-Estimate Report", 
                                         verbosity=2, auto_open=True)
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
  targetSpamBriefTable                          took 0.201891 seconds
  targetGatesBoxTable                           took 0.268738 seconds
  datasetOverviewTable                          took 0.599083 seconds
  bestGatesetSpamParametersTable                took 0.001285 seconds
  bestGatesetSpamBriefTable                     took 2.048558 seconds
  bestGatesetSpamVsTargetTable                  took 0.33649 seconds
  bestGatesetGaugeOptParamsTable                took 0.000436 seconds
  bestGatesetGatesBoxTable                      took 1.570876 seconds
  bestGatesetChoiEvalTable                      took 3.876884 seconds
  bestGatesetDecompTable                        took 1.6544 seconds
  bestGatesetEvalTable                          took 0.00531 seconds
  bestGermsEvalTable                            took 0.040365 seconds
  bestGatesetVsTargetTable                      took 0.187189 seconds
/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/theory.py:204: UserWarning:

Output may be unreliable because the model is not approximately trace-preserving.

  bestGatesVsTargetTable_gv                     took 0.925095 seconds
  bestGatesVsTargetTable_gvgerms                took 0.317397 seconds
  bestGatesVsTargetTable_gi                     took 0.014471 seconds
  bestGatesVsTargetTable_gigerms                took 0.054638 seconds
  bestGatesVsTargetTable_sum                    took 0.818716 seconds
  bestGatesetErrGenBoxTable                     took 7.480171 seconds
  metadataTable                                 took 0.001853 seconds
  stdoutBlock                                   took 0.000206 seconds
  profilerTable                                 took 0.000871 seconds
  softwareEnvTable                              took 0.000413 seconds
  exampleTable                                  took 0.086893 seconds
  singleMetricTable_gv                          took 0.838561 seconds
  singleMetricTable_gi                          took 0.04354 seconds
  fiducialListTable                             took 0.000611 seconds
  prepStrListTable                              took 0.000239 seconds
  effectStrListTable                            took 0.0003 seconds
  colorBoxPlotKeyPlot                           took 0.095837 seconds
  germList2ColTable                             took 0.000381 seconds
  progressTable                                 took 2.919536 seconds
*** Generating plots ***
  gramBarPlot                                   took 0.217321 seconds
  progressBarPlot                               took 1.468046 seconds
  progressBarPlot_sum                           took 0.000597 seconds
  finalFitComparePlot                           took 0.711186 seconds
  bestEstimateColorBoxPlot                      took 12.230943 seconds
  bestEstimateTVDColorBoxPlot                   took 4.842788 seconds
  bestEstimateColorScatterPlot                  took 4.917573 seconds
  bestEstimateColorHistogram                    took 1.924395 seconds
  progressTable_scl                             took 7.8e-05 seconds
  progressBarPlot_scl                           took 7e-05 seconds
  bestEstimateColorBoxPlot_scl                  took 0.000162 seconds
  bestEstimateColorScatterPlot_scl              took 0.000156 seconds
  bestEstimateColorHistogram_scl                took 0.000157 seconds
  progressTable_ume                             took 7.1e-05 seconds
  progressBarPlot_ume                           took 6.9e-05 seconds
  bestEstimateColorBoxPlot_ume                  took 0.000194 seconds
  bestEstimateColorScatterPlot_ume              took 0.000191 seconds
  bestEstimateColorHistogram_ume                took 0.000188 seconds
  dataScalingColorBoxPlot                       took 6.6e-05 seconds
  unmodeledErrorBudgetTable                     took 0.000104 seconds
Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
  dsComparisonSummary                           took 0.22732 seconds
  dsComparisonHistogram                         took 0.848321 seconds
  dsComparisonBoxPlot                           took 1.279172 seconds
*** Merging into template file ***
  Rendering topSwitchboard                      took 0.000104 seconds
  Rendering maxLSwitchboard1                    took 7.7e-05 seconds
  Rendering targetSpamBriefTable                took 0.344551 seconds
  Rendering targetGatesBoxTable                 took 0.325133 seconds
  Rendering datasetOverviewTable                took 0.001299 seconds
  Rendering bestGatesetSpamParametersTable      took 0.007272 seconds
  Rendering bestGatesetSpamBriefTable           took 2.667283 seconds
  Rendering bestGatesetSpamVsTargetTable        took 0.007938 seconds
  Rendering bestGatesetGaugeOptParamsTable      took 0.003673 seconds
  Rendering bestGatesetGatesBoxTable            took 1.972593 seconds
  Rendering bestGatesetChoiEvalTable            took 2.811332 seconds
  Rendering bestGatesetDecompTable              took 0.976653 seconds
  Rendering bestGatesetEvalTable                took 0.025112 seconds
  Rendering bestGermsEvalTable                  took 0.08723 seconds
  Rendering bestGatesetVsTargetTable            took 0.004299 seconds
  Rendering bestGatesVsTargetTable_gv           took 0.012343 seconds
  Rendering bestGatesVsTargetTable_gvgerms      took 0.018309 seconds
  Rendering bestGatesVsTargetTable_gi           took 0.004365 seconds
  Rendering bestGatesVsTargetTable_gigerms      took 0.005716 seconds
  Rendering bestGatesVsTargetTable_sum          took 0.012835 seconds
  Rendering bestGatesetErrGenBoxTable           took 3.893303 seconds
  Rendering metadataTable                       took 0.01158 seconds
  Rendering stdoutBlock                         took 0.001182 seconds
  Rendering profilerTable                       took 0.002821 seconds
  Rendering softwareEnvTable                    took 0.002449 seconds
  Rendering exampleTable                        took 0.943825 seconds
  Rendering metricSwitchboard_gv                took 5.4e-05 seconds
  Rendering metricSwitchboard_gi                took 4.1e-05 seconds
  Rendering singleMetricTable_gv                took 0.034174 seconds
  Rendering singleMetricTable_gi                took 0.027452 seconds
  Rendering fiducialListTable                   took 0.005128 seconds
  Rendering prepStrListTable                    took 0.004599 seconds
  Rendering effectStrListTable                  took 0.006621 seconds
  Rendering colorBoxPlotKeyPlot                 took 0.067882 seconds
  Rendering germList2ColTable                   took 0.007428 seconds
  Rendering progressTable                       took 0.007178 seconds
  Rendering gramBarPlot                         took 0.125275 seconds
  Rendering progressBarPlot                     took 0.105399 seconds
  Rendering progressBarPlot_sum                 took 0.10687 seconds
  Rendering finalFitComparePlot                 took 0.054535 seconds
  Rendering bestEstimateColorBoxPlot            took 1.040426 seconds
  Rendering bestEstimateTVDColorBoxPlot         took 1.042014 seconds
  Rendering bestEstimateColorScatterPlot        took 1.502723 seconds
  Rendering bestEstimateColorHistogram          took 0.783763 seconds
  Rendering progressTable_scl                   took 0.000847 seconds
  Rendering progressBarPlot_scl                 took 0.001318 seconds
  Rendering bestEstimateColorBoxPlot_scl        took 0.001162 seconds
  Rendering bestEstimateColorScatterPlot_scl    took 0.000931 seconds
  Rendering bestEstimateColorHistogram_scl      took 0.000939 seconds
  Rendering progressTable_ume                   took 0.000822 seconds
  Rendering progressBarPlot_ume                 took 0.000633 seconds
  Rendering bestEstimateColorBoxPlot_ume        took 0.000813 seconds
  Rendering bestEstimateColorScatterPlot_ume    took 0.000613 seconds
  Rendering bestEstimateColorHistogram_ume      took 0.000916 seconds
  Rendering dataScalingColorBoxPlot             took 0.000834 seconds
  Rendering unmodeledErrorBudgetTable           took 0.0008 seconds
  Rendering dscmpSwitchboard                    took 4.3e-05 seconds
  Rendering dsComparisonSummary                 took 0.068067 seconds
  Rendering dsComparisonHistogram               took 0.421229 seconds
  Rendering dsComparisonBoxPlot                 took 0.618521 seconds
Output written to ../tutorial_files/exampleMultiEstimateReport directory
Opening ../tutorial_files/exampleMultiEstimateReport/main.html...
*** Report Generation Complete!  Total time 82.3834s ***

In the above call we capture the return value in the variable ws - a Workspace object. PyGSTi's Workspace objects function both as a factory for figures and tables and as a smart cache for computed values. Within create_standard_report a Workspace object is created and used to create all the figures in the report. As an intended side effect, each of these figures is cached, along with some of the intermediate results used to create it. As we'll see below, a Workspace can also be specified as input to create_standard_report, allowing it to utilize previously cached quantities.
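To give a feel for the caching just described, here is a toy analogue built on functools.lru_cache. This is only a sketch of the idea (cache a computed figure/table, keyed on its inputs), not pyGSTi's actual caching mechanism:

```python
import functools

call_count = {"n": 0}  # track how many times the "expensive" work actually runs

@functools.lru_cache(maxsize=None)
def make_table(estimate_key):
    """Stand-in for an expensive figure/table computation."""
    call_count["n"] += 1
    return "table for " + estimate_key

make_table("TP")
make_table("TP")  # second request with the same key is served from the cache
```

This is why passing a pre-populated Workspace back into create_standard_report (as done below via the ws argument) can dramatically speed up report generation: matching requests reuse cached results instead of recomputing them.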

Another way: Because both results_tp and results_full above used the same dataset and operation sequences, we could have combined them as two estimates in a single Results object (see the previous tutorial on pyGSTi's Results object). This can be done by renaming at least one of the "default"-named estimates in results_tp or results_full (below we rename both) and then adding the estimate within results_full to the estimates already contained in results_tp:

In [6]:
results_tp.rename_estimate('default','TP')
results_full.rename_estimate('default','Full')
results_both = results_tp.copy() #copy just for neatness
results_both.add_estimates(results_full, estimatesToAdd=['Full'])

Creating a report using results_both will result in the same report we just generated. We'll demonstrate this anyway, but in addition we'll supply create_standard_report a ws argument, which tells it to use any cached values contained in a given input Workspace to expedite report generation. Since our workspace object has the exact quantities we need cached in it, you'll notice a significant speedup. Finally, note that even though there's just a single Results object, you still can't generate a PDF report from it because it contains multiple estimates.

In [7]:
pygsti.report.create_standard_report(results_both,
                                     "../tutorial_files/exampleMultiEstimateReport2",
                                     title="Example Multi-Estimate Report (v2)", 
                                     verbosity=2, auto_open=True, ws=ws)
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
  targetSpamBriefTable                          took 0.00044 seconds
  targetGatesBoxTable                           took 0.000298 seconds
  datasetOverviewTable                          took 0.00022 seconds
  bestGatesetSpamParametersTable                took 0.001013 seconds
  bestGatesetSpamBriefTable                     took 0.001373 seconds
  bestGatesetSpamVsTargetTable                  took 0.000931 seconds
  bestGatesetGaugeOptParamsTable                took 0.000709 seconds
  bestGatesetGatesBoxTable                      took 0.001309 seconds
  bestGatesetChoiEvalTable                      took 0.001177 seconds
  bestGatesetDecompTable                        took 0.00092 seconds
  bestGatesetEvalTable                          took 0.000392 seconds
  bestGermsEvalTable                            took 0.050954 seconds
  bestGatesetVsTargetTable                      took 0.006293 seconds
  bestGatesVsTargetTable_gv                     took 0.000973 seconds
  bestGatesVsTargetTable_gvgerms                took 0.002265 seconds
  bestGatesVsTargetTable_gi                     took 0.000398 seconds
  bestGatesVsTargetTable_gigerms                took 0.066392 seconds
  bestGatesVsTargetTable_sum                    took 0.000719 seconds
  bestGatesetErrGenBoxTable                     took 0.000676 seconds
  metadataTable                                 took 0.003255 seconds
  stdoutBlock                                   took 0.000167 seconds
  profilerTable                                 took 0.000897 seconds
  softwareEnvTable                              took 0.000129 seconds
  exampleTable                                  took 0.000102 seconds
  singleMetricTable_gv                          took 0.963833 seconds
  singleMetricTable_gi                          took 0.045002 seconds
  fiducialListTable                             took 0.000195 seconds
  prepStrListTable                              took 0.000207 seconds
  effectStrListTable                            took 0.000191 seconds
  colorBoxPlotKeyPlot                           took 0.000375 seconds
  germList2ColTable                             took 0.0015 seconds
  progressTable                                 took 0.707788 seconds
*** Generating plots ***
  gramBarPlot                                   took 0.000454 seconds
  progressBarPlot                               took 0.760048 seconds
  progressBarPlot_sum                           took 0.000572 seconds
  finalFitComparePlot                           took 0.088992 seconds
  bestEstimateColorBoxPlot                      took 5.80522 seconds
  bestEstimateTVDColorBoxPlot                   took 2.450461 seconds
  bestEstimateColorScatterPlot                  took 2.432255 seconds
  bestEstimateColorHistogram                    took 1.908432 seconds
  progressTable_scl                             took 0.000801 seconds
  progressBarPlot_scl                           took 6e-05 seconds
  bestEstimateColorBoxPlot_scl                  took 0.000156 seconds
  bestEstimateColorScatterPlot_scl              took 0.000152 seconds
  bestEstimateColorHistogram_scl                took 0.000142 seconds
  progressTable_ume                             took 7.9e-05 seconds
  progressBarPlot_ume                           took 6.5e-05 seconds
  bestEstimateColorBoxPlot_ume                  took 0.000201 seconds
  bestEstimateColorScatterPlot_ume              took 0.000191 seconds
  bestEstimateColorHistogram_ume                took 0.000183 seconds
  dataScalingColorBoxPlot                       took 0.000117 seconds
  unmodeledErrorBudgetTable                     took 0.000191 seconds
*** Merging into template file ***
  Rendering topSwitchboard                      took 9.9e-05 seconds
  Rendering maxLSwitchboard1                    took 8.2e-05 seconds
  Rendering targetSpamBriefTable                took 0.333739 seconds
  Rendering targetGatesBoxTable                 took 0.323861 seconds
  Rendering datasetOverviewTable                took 0.001096 seconds
  Rendering bestGatesetSpamParametersTable      took 0.006244 seconds
  Rendering bestGatesetSpamBriefTable           took 1.933245 seconds
  Rendering bestGatesetSpamVsTargetTable        took 0.007183 seconds
  Rendering bestGatesetGaugeOptParamsTable      took 0.003619 seconds
  Rendering bestGatesetGatesBoxTable            took 1.909 seconds
  Rendering bestGatesetChoiEvalTable            took 2.84683 seconds
  Rendering bestGatesetDecompTable              took 0.981645 seconds
  Rendering bestGatesetEvalTable                took 0.023699 seconds
  Rendering bestGermsEvalTable                  took 0.088832 seconds
  Rendering bestGatesetVsTargetTable            took 0.004196 seconds
  Rendering bestGatesVsTargetTable_gv           took 0.012065 seconds
  Rendering bestGatesVsTargetTable_gvgerms      took 0.019041 seconds
  Rendering bestGatesVsTargetTable_gi           took 0.004192 seconds
  Rendering bestGatesVsTargetTable_gigerms      took 0.005039 seconds
  Rendering bestGatesVsTargetTable_sum          took 0.010589 seconds
  Rendering bestGatesetErrGenBoxTable           took 4.822777 seconds
  Rendering metadataTable                       took 0.011822 seconds
  Rendering stdoutBlock                         took 0.001504 seconds
  Rendering profilerTable                       took 0.002887 seconds
  Rendering softwareEnvTable                    took 0.002497 seconds
  Rendering exampleTable                        took 0.056633 seconds
  Rendering metricSwitchboard_gv                took 3.9e-05 seconds
  Rendering metricSwitchboard_gi                took 2.9e-05 seconds
  Rendering singleMetricTable_gv                took 0.019223 seconds
  Rendering singleMetricTable_gi                took 0.015186 seconds
  Rendering fiducialListTable                   took 0.002964 seconds
  Rendering prepStrListTable                    took 0.002168 seconds
  Rendering effectStrListTable                  took 0.002232 seconds
  Rendering colorBoxPlotKeyPlot                 took 0.061218 seconds
  Rendering germList2ColTable                   took 0.004123 seconds
  Rendering progressTable                       took 0.007311 seconds
  Rendering gramBarPlot                         took 0.109358 seconds
  Rendering progressBarPlot                     took 0.106966 seconds
  Rendering progressBarPlot_sum                 took 0.112411 seconds
  Rendering finalFitComparePlot                 took 0.052846 seconds
  Rendering bestEstimateColorBoxPlot            took 1.025283 seconds
  Rendering bestEstimateTVDColorBoxPlot         took 1.063756 seconds
  Rendering bestEstimateColorScatterPlot        took 1.556072 seconds
  Rendering bestEstimateColorHistogram          took 0.787144 seconds
  Rendering progressTable_scl                   took 0.000644 seconds
  Rendering progressBarPlot_scl                 took 0.000733 seconds
  Rendering bestEstimateColorBoxPlot_scl        took 0.001121 seconds
  Rendering bestEstimateColorScatterPlot_scl    took 0.000887 seconds
  Rendering bestEstimateColorHistogram_scl      took 0.000945 seconds
  Rendering progressTable_ume                   took 0.001037 seconds
  Rendering progressBarPlot_ume                 took 0.000782 seconds
  Rendering bestEstimateColorBoxPlot_ume        took 0.000873 seconds
  Rendering bestEstimateColorScatterPlot_ume    took 0.000761 seconds
  Rendering bestEstimateColorHistogram_ume      took 0.000577 seconds
  Rendering dataScalingColorBoxPlot             took 0.000746 seconds
  Rendering unmodeledErrorBudgetTable           took 0.00082 seconds
Output written to ../tutorial_files/exampleMultiEstimateReport2 directory
Opening ../tutorial_files/exampleMultiEstimateReport2/main.html...
*** Report Generation Complete!  Total time 34.166s ***
Out[7]:
<pygsti.report.workspace.Workspace at 0x12c273b38>

Multiple estimates and do_stdpractice_gst

It's no coincidence that a Results object containing multiple estimates derived from the same data is precisely what's returned by do_stdpractice_gst (see its docstring for information on its arguments, and see the GST functions tutorial). This allows you to run GST multiple times, creating several different "standard" estimates and gauge optimizations, and plot them all in a single HTML report.

In [8]:
results_std = pygsti.do_stdpractice_gst(ds, target_model, fiducials, fiducials, germs,
                                        maxLengths, verbosity=4, modes="TP,CPTP,Target",
                                        gaugeOptSuite=('single','toggleValidSpam'))

# Generate a report with "TP", "CPTP", and "Target" estimates
pygsti.report.create_standard_report(results_std, "../tutorial_files/exampleStdReport", 
                                     title="Post StdPractice Report", auto_open=True,
                                     verbosity=1)
-- Std Practice:  Iter 1 of 3  (TP) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  --- LGST ---
    Singular values of I_tilde (truncating to first 4 of 6) = 
    4.243730350963286
    1.1796261581655645
    0.9627515645786063
    0.9424890722054706
    0.033826151547621315
    0.01692336936843073
    
    Singular values of target I_tilde (truncating to first 4 of 6) = 
    4.242640687119286
    1.414213562373096
    1.414213562373096
    1.4142135623730954
    2.484037189058858e-16
    1.506337939585075e-16
    
      Resulting model:
      
      rho0 = TPSPAMVec with dimension 4
       0.71-0.02 0.03 0.75
      
      
      Mdefault = TPPOVM with effect vectors:
      0: FullSPAMVec with dimension 4
       0.73   0   0 0.65
      
      1: ComplementSPAMVec with dimension 4
       0.69   0   0-0.65
      
      
      
      Gi = 
      TPDenseOp with shape (4, 4)
       1.00   0   0   0
       0.01 0.92-0.03 0.02
       0.01-0.01 0.90 0.02
      -0.01   0   0 0.91
      
      
      Gx = 
      TPDenseOp with shape (4, 4)
       1.00   0   0   0
         0 0.91-0.01   0
      -0.02-0.02-0.04-0.99
      -0.05 0.03 0.81   0
      
      
      Gy = 
      TPDenseOp with shape (4, 4)
       1.00   0   0   0
       0.05   0   0 0.98
       0.01   0 0.89-0.03
      -0.06-0.82   0   0
      
      
      
      
  --- Iterative MLGST: Iter 1 of 5  92 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (92 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
      --- Outer Iter 0: norm_f = 87.7156, mu=0, |J|=1139.8
      --- Outer Iter 1: norm_f = 72.5388, mu=331.735, |J|=4004.6
      --- Outer Iter 2: norm_f = 49.7369, mu=110.578, |J|=4000.74
      --- Outer Iter 3: norm_f = 49.7313, mu=36.8595, |J|=4000.76
      --- Outer Iter 4: norm_f = 49.7312, mu=12.2865, |J|=4000.76
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 49.7312 (92 data params - 31 model params = expected mean of 61; p-value = 0.848365)
    Completed in 0.2s
    2*Delta(log(L)) = 49.9289
    Iteration 1 took 0.2s
    
  --- Iterative MLGST: Iter 2 of 5  168 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (168 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
      --- Outer Iter 0: norm_f = 151.528, mu=0, |J|=4116.86
      --- Outer Iter 1: norm_f = 116.544, mu=1805.29, |J|=4114.08
      --- Outer Iter 2: norm_f = 111.925, mu=601.763, |J|=4113.57
      --- Outer Iter 3: norm_f = 111.481, mu=200.588, |J|=4113.47
      --- Outer Iter 4: norm_f = 111.47, mu=66.8625, |J|=4113.46
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 111.47 (168 data params - 31 model params = expected mean of 137; p-value = 0.94621)
    Completed in 0.2s
    2*Delta(log(L)) = 111.83
    Iteration 2 took 0.2s
    
  --- Iterative MLGST: Iter 3 of 5  450 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (450 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
      --- Outer Iter 0: norm_f = 496.301, mu=0, |J|=4503.89
      --- Outer Iter 1: norm_f = 422.607, mu=2013.07, |J|=4503.59
      --- Outer Iter 2: norm_f = 421.667, mu=671.023, |J|=4503.46
      --- Outer Iter 3: norm_f = 421.662, mu=223.674, |J|=4503.46
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 421.662 (450 data params - 31 model params = expected mean of 419; p-value = 0.454312)
    Completed in 0.4s
    2*Delta(log(L)) = 422.134
    Iteration 3 took 0.5s
    
  --- Iterative MLGST: Iter 4 of 5  862 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (862 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
      --- Outer Iter 0: norm_f = 854.092, mu=0, |J|=5091.17
      --- Outer Iter 1: norm_f = 813.479, mu=2302.96, |J|=5076.94
      --- Outer Iter 2: norm_f = 813.094, mu=767.654, |J|=5076.79
      --- Outer Iter 3: norm_f = 813.093, mu=255.885, |J|=5076.81
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 813.093 (862 data params - 31 model params = expected mean of 831; p-value = 0.664967)
    Completed in 0.7s
    2*Delta(log(L)) = 814.492
    Iteration 4 took 0.8s
    
  --- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (1282 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
      --- Outer Iter 0: norm_f = 1263.73, mu=0, |J|=5733.35
      --- Outer Iter 1: norm_f = 1250.68, mu=2582.97, |J|=5732.82
      --- Outer Iter 2: norm_f = 1250.62, mu=860.99, |J|=5733.21
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 1250.62 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.497713)
    Completed in 1.0s
    2*Delta(log(L)) = 1252.41
    Iteration 5 took 1.1s
    
    Switching to ML objective (last iteration)
    --- MLGST ---
      --- Outer Iter 0: norm_f = 626.205, mu=0, |J|=3678.46
      --- Outer Iter 1: norm_f = 626.2, mu=3871.69, |J|=3391.16
      --- Outer Iter 2: norm_f = 626.195, mu=4.22891e+07, |J|=3314.59
      --- Outer Iter 3: norm_f = 626.186, mu=2.22294e+07, |J|=3102.02
      --- Outer Iter 4: norm_f = 626.178, mu=2.05642e+07, |J|=3014.46
      --- Outer Iter 5: norm_f = 626.177, mu=2.02336e+07, |J|=3121.67
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
      Maximum log(L) = 626.177 below upper bound of -2.13594e+06
        2*Delta(log(L)) = 1252.35 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.483889)
      Completed in 1.1s
    2*Delta(log(L)) = 1252.35
    Final MLGST took 1.1s
    
  Iterative MLGST Total Time: 3.9s
  -- Performing 'single' gauge optimization on TP estimate --
      -- Adding Gauge Optimized (single) --
  -- Performing 'Spam 0.001' gauge optimization on TP estimate --
      -- Adding Gauge Optimized (Spam 0.001) --
  -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --
      -- Adding Gauge Optimized (Spam 0.001+v) --
-- Std Practice:  Iter 2 of 3  (CPTP) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  --- Iterative MLGST: Iter 1 of 5  92 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (92 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).
      --- Outer Iter 0: norm_f = 1.10824e+07, mu=0, |J|=776.628
      --- Outer Iter 1: norm_f = 13800.3, mu=76.022, |J|=1599.57
      --- Outer Iter 2: norm_f = 763.11, mu=25.3407, |J|=816.023
      --- Outer Iter 3: norm_f = 255.531, mu=23.2393, |J|=642.003
      --- Outer Iter 4: norm_f = 56.0615, mu=7.74643, |J|=683.416
      --- Outer Iter 5: norm_f = 50.4927, mu=4.60597, |J|=690.695
      --- Outer Iter 6: norm_f = 50.2238, mu=302.543, |J|=691.859
      --- Outer Iter 7: norm_f = 50.08, mu=625.953, |J|=692.087
      --- Outer Iter 8: norm_f = 50.0063, mu=1252.23, |J|=692.272
      --- Outer Iter 9: norm_f = 49.994, mu=1254.18, |J|=692.395
      --- Outer Iter 10: norm_f = 49.9873, mu=1254.18, |J|=692.445
      --- Outer Iter 11: norm_f = 49.9826, mu=1245.43, |J|=692.488
      --- Outer Iter 12: norm_f = 49.9788, mu=1164.85, |J|=692.528
      --- Outer Iter 13: norm_f = 49.9754, mu=960.324, |J|=692.568
      --- Outer Iter 14: norm_f = 49.9717, mu=788.918, |J|=692.612
      --- Outer Iter 15: norm_f = 49.968, mu=778.874, |J|=692.664
      --- Outer Iter 16: norm_f = 49.9657, mu=870.975, |J|=692.716
      --- Outer Iter 17: norm_f = 49.9647, mu=1483.87, |J|=692.763
      --- Outer Iter 18: norm_f = 49.9579, mu=1500.05, |J|=692.766
      --- Outer Iter 19: norm_f = 49.9549, mu=1482.36, |J|=692.793
      --- Outer Iter 20: norm_f = 49.9525, mu=889.486, |J|=692.816
      --- Outer Iter 21: norm_f = 49.9487, mu=296.495, |J|=692.85
      --- Outer Iter 22: norm_f = 49.9404, mu=279.821, |J|=692.941
      --- Outer Iter 23: norm_f = 49.936, mu=2197.07, |J|=692.96
      --- Outer Iter 24: norm_f = 49.9345, mu=732.357, |J|=692.977
      --- Outer Iter 25: norm_f = 49.9299, mu=244.119, |J|=693.015
      --- Outer Iter 26: norm_f = 49.9165, mu=81.373, |J|=693.097
      --- Outer Iter 27: norm_f = 49.8855, mu=48.3574, |J|=693.193
      --- Outer Iter 28: norm_f = 49.8755, mu=343.197, |J|=693.547
      --- Outer Iter 29: norm_f = 49.8723, mu=2564.54, |J|=693.554
      --- Outer Iter 30: norm_f = 49.8711, mu=875.858, |J|=693.568
      --- Outer Iter 31: norm_f = 49.8681, mu=291.953, |J|=693.602
      --- Outer Iter 32: norm_f = 49.8599, mu=97.3176, |J|=693.686
      --- Outer Iter 33: norm_f = 49.8568, mu=719.379, |J|=694.836
      --- Outer Iter 34: norm_f = 49.8484, mu=1438.71, |J|=694.773
      --- Outer Iter 35: norm_f = 49.846, mu=1953.37, |J|=694.828
      --- Outer Iter 36: norm_f = 49.8408, mu=2078.72, |J|=694.759
      --- Outer Iter 37: norm_f = 49.837, mu=2085.7, |J|=694.723
      --- Outer Iter 38: norm_f = 49.8343, mu=2069.11, |J|=694.717
      --- Outer Iter 39: norm_f = 49.832, mu=1700.46, |J|=694.721
      --- Outer Iter 40: norm_f = 49.8295, mu=942.418, |J|=694.734
      --- Outer Iter 41: norm_f = 49.8257, mu=695.968, |J|=694.767
      --- Outer Iter 42: norm_f = 49.8248, mu=1014.39, |J|=694.948
      --- Outer Iter 43: norm_f = 49.8195, mu=2058.22, |J|=694.832
      --- Outer Iter 44: norm_f = 49.8172, mu=2052.16, |J|=694.821
      --- Outer Iter 45: norm_f = 49.8155, mu=1503.29, |J|=694.826
      --- Outer Iter 46: norm_f = 49.8134, mu=530.976, |J|=694.84
      --- Outer Iter 47: norm_f = 49.8085, mu=393.107, |J|=694.898
      --- Outer Iter 48: norm_f = 49.8069, mu=2812.7, |J|=694.886
      --- Outer Iter 49: norm_f = 49.8059, mu=937.568, |J|=694.894
      --- Outer Iter 50: norm_f = 49.8031, mu=312.523, |J|=694.916
      --- Outer Iter 51: norm_f = 49.7955, mu=104.174, |J|=694.971
      --- Outer Iter 52: norm_f = 49.7797, mu=44.277, |J|=695.102
      --- Outer Iter 53: norm_f = 49.7774, mu=367.359, |J|=695.394
      --- Outer Iter 54: norm_f = 49.774, mu=2818.86, |J|=695.263
      --- Outer Iter 55: norm_f = 49.7733, mu=1867.59, |J|=695.261
      --- Outer Iter 56: norm_f = 49.7726, mu=622.531, |J|=695.272
      --- Outer Iter 57: norm_f = 49.7704, mu=207.51, |J|=695.303
      --- Outer Iter 58: norm_f = 49.765, mu=69.1701, |J|=695.393
      --- Outer Iter 59: norm_f = 49.7637, mu=553.511, |J|=695.481
      --- Outer Iter 60: norm_f = 49.7625, mu=1122.46, |J|=695.477
      --- Outer Iter 61: norm_f = 49.7614, mu=1122.43, |J|=695.485
      --- Outer Iter 62: norm_f = 49.7605, mu=1076.81, |J|=695.496
      --- Outer Iter 63: norm_f = 49.7596, mu=840.997, |J|=695.513
      --- Outer Iter 64: norm_f = 49.7586, mu=555.013, |J|=695.536
      --- Outer Iter 65: norm_f = 49.7573, mu=516.685, |J|=695.584
      --- Outer Iter 66: norm_f = 49.7569, mu=718.533, |J|=695.684
      --- Outer Iter 67: norm_f = 49.7564, mu=1151.92, |J|=695.741
      --- Outer Iter 68: norm_f = 49.7543, mu=1159.8, |J|=695.66
      --- Outer Iter 69: norm_f = 49.7536, mu=1142.95, |J|=695.664
      --- Outer Iter 70: norm_f = 49.7529, mu=688.757, |J|=695.678
      --- Outer Iter 71: norm_f = 49.752, mu=229.586, |J|=695.705
      --- Outer Iter 72: norm_f = 49.7501, mu=206.313, |J|=695.813
      --- Outer Iter 73: norm_f = 49.7493, mu=1588.09, |J|=695.796
      --- Outer Iter 74: norm_f = 49.7489, mu=529.363, |J|=695.807
      --- Outer Iter 75: norm_f = 49.748, mu=176.454, |J|=695.84
      --- Outer Iter 76: norm_f = 49.7456, mu=58.8181, |J|=695.929
      --- Outer Iter 77: norm_f = 49.7415, mu=39.4064, |J|=696.175
      --- Outer Iter 78: norm_f = 49.7409, mu=1162.82, |J|=696.202
      --- Outer Iter 79: norm_f = 49.7406, mu=974.639, |J|=696.207
      --- Outer Iter 80: norm_f = 49.7404, mu=324.88, |J|=696.222
      --- Outer Iter 81: norm_f = 49.7398, mu=108.293, |J|=696.264
      --- Outer Iter 82: norm_f = 49.7383, mu=36.0977, |J|=696.377
      --- Outer Iter 83: norm_f = 49.7361, mu=29.9593, |J|=696.675
      --- Outer Iter 84: norm_f = 49.7356, mu=856.051, |J|=696.696
      --- Outer Iter 85: norm_f = 49.7354, mu=793.038, |J|=696.696
      --- Outer Iter 86: norm_f = 49.7354, mu=264.346, |J|=696.708
      --- Outer Iter 87: norm_f = 49.7351, mu=88.1154, |J|=696.744
      --- Outer Iter 88: norm_f = 49.7345, mu=29.3718, |J|=696.843
      --- Outer Iter 89: norm_f = 49.7335, mu=18.2704, |J|=697.084
      --- Outer Iter 90: norm_f = 49.7334, mu=467.768, |J|=697.106
      --- Outer Iter 91: norm_f = 49.7333, mu=369.005, |J|=697.118
      --- Outer Iter 92: norm_f = 49.7332, mu=123.002, |J|=697.136
      --- Outer Iter 93: norm_f = 49.7331, mu=41.0005, |J|=697.188
      --- Outer Iter 94: norm_f = 49.7328, mu=35.6311, |J|=697.337
      --- Outer Iter 95: norm_f = 49.7327, mu=288.264, |J|=697.348
      --- Outer Iter 96: norm_f = 49.7326, mu=287.805, |J|=697.363
      --- Outer Iter 97: norm_f = 49.7326, mu=240.026, |J|=697.38
      --- Outer Iter 98: norm_f = 49.7325, mu=119.89, |J|=697.4
      --- Outer Iter 99: norm_f = 49.7324, mu=76.7863, |J|=697.442
      --- Outer Iter 100: norm_f = 49.7324, mu=123.618, |J|=697.524
      --- Outer Iter 101: norm_f = 49.7323, mu=253.711, |J|=697.519
      --- Outer Iter 102: norm_f = 49.7323, mu=253.72, |J|=697.532
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 49.7323 (92 data params - 31 model params = expected mean of 61; p-value = 0.848339)
    Completed in 5.7s
    2*Delta(log(L)) = 49.9301
    Iteration 1 took 5.7s
    
  --- Iterative MLGST: Iter 2 of 5  168 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (168 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).
      --- Outer Iter 0: norm_f = 151.52, mu=0, |J|=967.039
      --- Outer Iter 1: norm_f = 130.967, mu=116.389, |J|=928.618
      --- Outer Iter 2: norm_f = 111.594, mu=38.7965, |J|=947.135
      --- Outer Iter 3: norm_f = 111.489, mu=25.4472, |J|=946.409
      --- Outer Iter 4: norm_f = 111.485, mu=56.0514, |J|=946.049
      --- Outer Iter 5: norm_f = 111.48, mu=59.7501, |J|=946.009
      --- Outer Iter 6: norm_f = 111.477, mu=478.572, |J|=945.79
      --- Outer Iter 7: norm_f = 111.476, mu=499.083, |J|=945.722
      --- Outer Iter 8: norm_f = 111.476, mu=509.545, |J|=945.664
      --- Outer Iter 9: norm_f = 111.476, mu=509.992, |J|=945.611
      --- Outer Iter 10: norm_f = 111.475, mu=508.735, |J|=945.566
      --- Outer Iter 11: norm_f = 111.475, mu=479.54, |J|=945.524
      --- Outer Iter 12: norm_f = 111.475, mu=380.228, |J|=945.484
      --- Outer Iter 13: norm_f = 111.475, mu=276.25, |J|=945.438
      --- Outer Iter 14: norm_f = 111.474, mu=261.037, |J|=945.382
      --- Outer Iter 15: norm_f = 111.474, mu=284.527, |J|=945.338
      --- Outer Iter 16: norm_f = 111.474, mu=510.574, |J|=945.312
      --- Outer Iter 17: norm_f = 111.474, mu=512.86, |J|=945.255
      --- Outer Iter 18: norm_f = 111.473, mu=500.515, |J|=945.225
      --- Outer Iter 19: norm_f = 111.473, mu=277.122, |J|=945.197
      --- Outer Iter 20: norm_f = 111.473, mu=92.3741, |J|=945.15
      --- Outer Iter 21: norm_f = 111.473, mu=89.0899, |J|=945.033
      --- Outer Iter 22: norm_f = 111.473, mu=696.009, |J|=945.01
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 111.473 (168 data params - 31 model params = expected mean of 137; p-value = 0.946189)
    Completed in 1.3s
    2*Delta(log(L)) = 111.833
    Iteration 2 took 1.4s
    
  --- Iterative MLGST: Iter 3 of 5  450 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (450 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).
      --- Outer Iter 0: norm_f = 496.32, mu=0, |J|=1561.66
      --- Outer Iter 1: norm_f = 426.477, mu=86.5337, |J|=1540.16
      --- Outer Iter 2: norm_f = 421.67, mu=28.8446, |J|=1549.6
      --- Outer Iter 3: norm_f = 421.67, mu=56.7229, |J|=1549.64
      --- Outer Iter 4: norm_f = 421.663, mu=41.9593, |J|=1549.24
      --- Outer Iter 5: norm_f = 421.663, mu=325.909, |J|=1549.16
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 421.663 (450 data params - 31 model params = expected mean of 419; p-value = 0.454296)
    Completed in 0.7s
    2*Delta(log(L)) = 422.138
    Iteration 3 took 0.8s
    
  --- Iterative MLGST: Iter 4 of 5  862 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (862 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).
      --- Outer Iter 0: norm_f = 854.094, mu=0, |J|=2193.25
      --- Outer Iter 1: norm_f = 823.991, mu=391.282, |J|=2165.95
      --- Outer Iter 2: norm_f = 813.8, mu=130.427, |J|=2181.12
      --- Outer Iter 3: norm_f = 813.166, mu=64.052, |J|=2186.38
      --- Outer Iter 4: norm_f = 813.111, mu=55.6099, |J|=2185.8
      --- Outer Iter 5: norm_f = 813.109, mu=110.423, |J|=2184.67
      --- Outer Iter 6: norm_f = 813.107, mu=853.18, |J|=2184.58
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 813.107 (862 data params - 31 model params = expected mean of 831; p-value = 0.664836)
    Completed in 1.3s
    2*Delta(log(L)) = 814.503
    Iteration 4 took 1.4s
    
  --- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---: 
    --- Minimum Chi^2 GST ---
      bulk_evaltree: created initial tree (1282 strs) in 0s
      bulk_evaltree: split tree (1 subtrees) in 0s
      Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
       groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).
      --- Outer Iter 0: norm_f = 1263.74, mu=0, |J|=2688.77
      --- Outer Iter 1: norm_f = 1250.78, mu=223.791, |J|=2680.2
      --- Outer Iter 2: norm_f = 1250.64, mu=74.5969, |J|=2682.22
      --- Outer Iter 3: norm_f = 1250.64, mu=1192.45, |J|=2682.15
      --- Outer Iter 4: norm_f = 1250.64, mu=1193, |J|=2682.06
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
    Sum of Chi^2 = 1250.64 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.497589)
    Completed in 1.5s
    2*Delta(log(L)) = 1252.42
    Iteration 5 took 1.7s
    
    Switching to ML objective (last iteration)
    --- MLGST ---
      --- Outer Iter 0: norm_f = 626.208, mu=0, |J|=1896.56
      --- Outer Iter 1: norm_f = 626.182, mu=111.219, |J|=1897.22
      Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
      Maximum log(L) = 626.182 below upper bound of -2.13594e+06
        2*Delta(log(L)) = 1252.36 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.483808)
      Completed in 0.7s
    2*Delta(log(L)) = 1252.36
    Final MLGST took 0.7s
    
  Iterative MLGST Total Time: 11.7s
  -- Performing 'single' gauge optimization on CPTP estimate --
      -- Adding Gauge Optimized (single) --
  -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --
      -- Adding Gauge Optimized (Spam 0.001) --
  -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --
      -- Adding Gauge Optimized (Spam 0.001+v) --
-- Std Practice:  Iter 3 of 3  (Target) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  -- Performing 'single' gauge optimization on Target estimate --
      -- Adding Gauge Optimized (single) --
  -- Performing 'Spam 0.001' gauge optimization on Target estimate --
      -- Adding Gauge Optimized (Spam 0.001) --
  -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --
      -- Adding Gauge Optimized (Spam 0.001+v) --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/exampleStdReport directory
Opening ../tutorial_files/exampleStdReport/main.html...
*** Report Generation Complete!  Total time 97.2188s ***
Out[8]:
<pygsti.report.workspace.Workspace at 0x145b27400>
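Each fit line above reports a Sum of Chi^2 against its expected mean (data params minus model params) and a p-value. As a sanity check you can reproduce those p-values yourself; the sketch below uses only the standard library and the Wilson-Hilferty normal approximation to the chi-squared tail (an approximation, so it matches the reported values only to a few decimal places):

```python
import math

def chi2_sf(x, k):
    """Approximate survival function P(X > x) for a chi-squared
    variable with k degrees of freedom, via the Wilson-Hilferty
    normal approximation (good to ~1e-3 for moderate k)."""
    z = ((x / k) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * k))) / math.sqrt(2.0 / (9.0 * k))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# First TP iteration above: Sum of Chi^2 = 49.7312 with
# 92 data params - 31 model params = 61 degrees of freedom
p = chi2_sf(49.7312, 61)
print(p)  # close to the reported p-value = 0.848365
```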

Reports with confidence regions

To display confidence intervals for reported quantities, you must do two things:

  1. Specify the confidenceLevel argument to create_standard_report.
  2. Ensure that the estimate(s) being reported have a valid confidence-region factory.

Constructing a factory often means computing a Hessian, which can be time-consuming, so this is not done automatically. Here we demonstrate how to construct a valid factory for the "Spam 0.001" gauge optimization of the "CPTP" estimate by computing and then projecting the Hessian of the likelihood function.
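To build intuition for why a Hessian yields confidence intervals: near the maximum of the likelihood, the Hessian acts as an inverse covariance for the parameter estimates, so the diagonal of its inverse gives per-parameter variances. A toy sketch with hypothetical numbers (plain Python, not the pyGSTi API):

```python
import math

# Hypothetical diagonal Hessian of -log(L) at the maximum for two
# parameters; for a diagonal matrix the inverse is elementwise reciprocal.
hessian_diag = [400.0, 100.0]              # curvature per parameter
variances = [1.0 / h for h in hessian_diag]
z95 = 1.96                                  # two-sided 95% normal quantile
half_widths = [z95 * math.sqrt(v) for v in variances]
print([round(w, 4) for w in half_widths])   # -> [0.098, 0.196]
```

Larger curvature (a sharper likelihood peak) means smaller variance and a tighter confidence interval, which is why the Hessian computation is the expensive prerequisite for error bars.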

In [9]:
#Construct and initialize a "confidence region factory" for the CPTP estimate
crfact = results_std.estimates["CPTP"].add_confidence_region_factory('Spam 0.001', 'final')
crfact.compute_hessian(comm=None) #we could use more processors
crfact.project_hessian('intrinsic error')

pygsti.report.create_standard_report(results_std, "../tutorial_files/exampleStdReport2", 
                                     title="Post StdPractice Report (w/CIs on CPTP)",
                                     confidenceLevel=95, auto_open=True, verbosity=1)
    
--- Hessian Projector Optimization from separate SPAM and Gate weighting ---
  Resulting intrinsic errors: 0.00716691 (gates), 0.00379068 (spam)
  Resulting sqrt(mean(operationCIs**2)): 0.00983088
  Resulting sqrt(mean(spamCIs**2)): 0.0077282
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/exampleStdReport2 directory
Opening ../tutorial_files/exampleStdReport2/main.html...
*** Report Generation Complete!  Total time 112.6s ***
Out[9]:
<pygsti.report.workspace.Workspace at 0x15e407b00>

Reports with multiple different data sets

We've already seen above that create_standard_report can be given a dictionary of Results objects instead of a single one. This allows the creation of reports containing estimates for different DataSets (each Results object only holds estimates for a single DataSet). Furthermore, when the data sets have the same operation sequences, they will be compared within a tab of the HTML report.

Below, we generate a new data set with the same sequences as the one loaded at the beginning of this tutorial, run standard-practice GST on it, and create a report containing both sets of results. Look at the "Data Comparison" tab within the gauge-invariant error metrics category.
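The data-set comparison shown in that tab is essentially a per-circuit likelihood-ratio test: if both datasets share the same underlying outcome probabilities, twice the log-likelihood difference between separate and pooled fits is approximately chi-squared distributed. A minimal sketch of that statistic for a single two-outcome circuit (plain Python with made-up counts, not the pyGSTi API):

```python
import math

def two_delta_logl(n0, N0, n1, N1):
    """2*(logL of separate fits - logL of pooled fit) for one
    two-outcome circuit, with n successes out of N shots per dataset."""
    def logl(n, N, p):
        # binomial log-likelihood, guarding p away from 0 and 1
        p = min(max(p, 1e-12), 1 - 1e-12)
        return n * math.log(p) + (N - n) * math.log(1 - p)
    p_pool = (n0 + n1) / (N0 + N1)
    separate = logl(n0, N0, n0 / N0) + logl(n1, N1, n1 / N1)
    pooled = logl(n0, N0, p_pool) + logl(n1, N1, p_pool)
    return 2.0 * (separate - pooled)

# Identical frequencies give a statistic of 0; differing ones, a positive value.
print(two_delta_logl(500, 1000, 500, 1000))   # -> 0.0
print(two_delta_logl(500, 1000, 600, 1000) > 0)  # -> True
```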

In [10]:
#Make another dataset & estimates
depol_gateset = target_model.depolarize(op_noise=0.1)
datagen_gateset = depol_gateset.rotate((0.05,0,0.03))

#Compute the sequences needed to perform Long Sequence GST on 
# this Model with sequences up to length 512
circuit_list = pygsti.construction.make_lsgst_experiment_list(
    std1Q_XYI.target_model(), std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
    std1Q_XYI.germs, [1,2,4,8,16,32,64,128,256,512])
ds2 = pygsti.construction.generate_fake_data(datagen_gateset, circuit_list, nSamples=1000,
                                             sampleError='binomial', seed=2018)
results_std2 = pygsti.do_stdpractice_gst(ds2, target_model, fiducials, fiducials, germs,
                                     maxLengths, verbosity=3, modes="TP,CPTP,Target",
                                     gaugeOptSuite=('single','toggleValidSpam'))

pygsti.report.create_standard_report({'DS1': results_std, 'DS2': results_std2},
                                    "../tutorial_files/exampleMultiDataSetReport", 
                                    title="Example Multi-Dataset Report", 
                                    auto_open=True, verbosity=1)
-- Std Practice:  Iter 1 of 3  (TP) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  --- LGST ---
    Singular values of I_tilde (truncating to first 4 of 6) = 
    4.244829997162508
    1.1936677889884049
    0.9868539533169907
    0.932197724091589
    0.04714742318656945
    0.012700520808584604
    
    Singular values of target I_tilde (truncating to first 4 of 6) = 
    4.242640687119286
    1.414213562373096
    1.414213562373096
    1.4142135623730954
    2.484037189058858e-16
    1.506337939585075e-16
    
  --- Iterative MLGST: Iter 1 of 5  92 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 50.2568 (92 data params - 31 model params = expected mean of 61; p-value = 0.835246)
    Completed in 0.2s
    2*Delta(log(L)) = 50.4026
    Iteration 1 took 0.2s
    
  --- Iterative MLGST: Iter 2 of 5  168 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 112.85 (168 data params - 31 model params = expected mean of 137; p-value = 0.934965)
    Completed in 0.2s
    2*Delta(log(L)) = 112.943
    Iteration 2 took 0.2s
    
  --- Iterative MLGST: Iter 3 of 5  450 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 409.836 (450 data params - 31 model params = expected mean of 419; p-value = 0.616314)
    Completed in 0.3s
    2*Delta(log(L)) = 410.099
    Iteration 3 took 0.4s
    
  --- Iterative MLGST: Iter 4 of 5  862 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 833.69 (862 data params - 31 model params = expected mean of 831; p-value = 0.467224)
    Completed in 0.5s
    2*Delta(log(L)) = 834.058
    Iteration 4 took 0.6s
    
  --- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 1262.38 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405135)
    Completed in 0.8s
    2*Delta(log(L)) = 1263.06
    Iteration 5 took 1.0s
    
    Switching to ML objective (last iteration)
    --- MLGST ---
      Maximum log(L) = 631.509 below upper bound of -2.13633e+06
        2*Delta(log(L)) = 1263.02 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.400201)
      Completed in 1.2s
    2*Delta(log(L)) = 1263.02
    Final MLGST took 1.2s
    
  Iterative MLGST Total Time: 3.5s
  -- Performing 'single' gauge optimization on TP estimate --
  -- Performing 'Spam 0.001' gauge optimization on TP estimate --
  -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --
-- Std Practice:  Iter 2 of 3  (CPTP) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  --- Iterative MLGST: Iter 1 of 5  92 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 50.2612 (92 data params - 31 model params = expected mean of 61; p-value = 0.835132)
    Completed in 3.8s
    2*Delta(log(L)) = 50.4048
    Iteration 1 took 3.9s
    
  --- Iterative MLGST: Iter 2 of 5  168 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 112.852 (168 data params - 31 model params = expected mean of 137; p-value = 0.934949)
    Completed in 1.2s
    2*Delta(log(L)) = 112.944
    Iteration 2 took 1.2s
    
  --- Iterative MLGST: Iter 3 of 5  450 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 409.843 (450 data params - 31 model params = expected mean of 419; p-value = 0.616221)
    Completed in 3.8s
    2*Delta(log(L)) = 410.108
    Iteration 3 took 3.9s
    
  --- Iterative MLGST: Iter 4 of 5  862 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 833.69 (862 data params - 31 model params = expected mean of 831; p-value = 0.467216)
    Completed in 1.0s
    2*Delta(log(L)) = 834.062
    Iteration 4 took 1.1s
    
  --- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---: 
    --- Minimum Chi^2 GST ---
    Sum of Chi^2 = 1262.38 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405131)
    Completed in 1.1s
    2*Delta(log(L)) = 1263.06
    Iteration 5 took 1.2s
    
    Switching to ML objective (last iteration)
    --- MLGST ---
      Maximum log(L) = 631.5 below upper bound of -2.13633e+06
        2*Delta(log(L)) = 1263 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.400337)
      Completed in 0.6s
    2*Delta(log(L)) = 1263
    Final MLGST took 0.6s
    
  Iterative MLGST Total Time: 11.9s
  -- Performing 'single' gauge optimization on CPTP estimate --
  -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --
  -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --
-- Std Practice:  Iter 3 of 3  (Target) --: 
  --- Circuit Creation ---
     1282 sequences created
     Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing
  -- Performing 'single' gauge optimization on Target estimate --
  -- Performing 'Spam 0.001' gauge optimization on Target estimate --
  -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
The datasets are INCONSISTENT at 5.00% significance.
  - Details:
    - The aggregate log-likelihood ratio test is significant at 20.33 standard deviations.
    - The aggregate log-likelihood ratio test standard deviations significance threshold is 1.98
    - The number of sequences with data that is inconsistent is 14
    - The maximum SSTVD over all sequences is 0.15
    - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|---

The datasets are INCONSISTENT at 5.00% significance.
  - Details:
    - The aggregate log-likelihood ratio test is significant at 20.33 standard deviations.
    - The aggregate log-likelihood ratio test standard deviations significance threshold is 1.98
    - The number of sequences with data that is inconsistent is 14
    - The maximum SSTVD over all sequences is 0.15
    - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|---

Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.
*** Merging into template file ***
Output written to ../tutorial_files/exampleMultiDataSetReport directory
Opening ../tutorial_files/exampleMultiDataSetReport/main.html...
*** Report Generation Complete!  Total time 202.698s ***
Out[10]:
<pygsti.report.workspace.Workspace at 0x16fce9ef0>

Other cool create_standard_report options

Finally, let us highlight a few of the additional arguments one can supply to create_standard_report that allow further control over what gets reported.

  • Setting the link_to argument to a tuple containing any of 'pkl', 'tex', and 'pdf' will create hyperlinks within the plots or below the tables of the HTML report linking to Python pickle, LaTeX source, and PDF versions of the content, respectively. The Python pickle files for tables contain pickled pandas DataFrame objects, whereas those for plots contain ordinary Python dictionaries of the data that is plotted. Applies to HTML reports only.

  • Setting the brevity argument to an integer higher than $0$ (the default) will reduce the amount of information included in the report (for details on what is included for each value, see the doc string). Using brevity > 0 will reduce the time required to create, and later load, the report, as well as the output file/folder size. This applies to both HTML and PDF reports.

Below, we demonstrate both of these options in a very brief (brevity=4) report with links to pickle and PDF files. Note that to generate the PDF files you must have pdflatex installed.

In [11]:
pygsti.report.create_standard_report(results_std,
                                    "../tutorial_files/exampleBriefReport", 
                                    title="Example Brief Report", 
                                    auto_open=True, verbosity=1,
                                    brevity=4, link_to=('pkl','pdf'))
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/exampleBriefReport directory
Opening ../tutorial_files/exampleBriefReport/main.html...
*** Report Generation Complete!  Total time 144.395s ***
Out[11]:
<pygsti.report.workspace.Workspace at 0x176bb3c18>
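Once the report above has been created with link_to=('pkl','pdf'), the linked pickle files can be loaded back into Python for further analysis. Here is a minimal sketch; the file path and helper function are hypothetical (the actual .pkl filenames depend on the tables and plots in your report), but since plot pickles hold ordinary dictionaries, only the standard-library pickle module is needed:

```python
import pickle

# Hypothetical helper: the actual .pkl filenames depend on which tables
# and plots appear in your report.  Table .pkl files contain pickled
# pandas DataFrames; plot .pkl files contain ordinary Python dictionaries
# of the plotted data, so the standard pickle module suffices to load them.
def load_report_pickle(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with a stand-in plot-data dictionary (no report needed):
demo = {"x": [1, 2, 4, 8], "y": [0.99, 0.97, 0.94, 0.88]}
with open("demo_plot.pkl", "wb") as f:
    pickle.dump(demo, f)
assert load_report_pickle("demo_plot.pkl") == demo
```

In practice you would pass the path of a .pkl file linked from the report (e.g. one written under ../tutorial_files/exampleBriefReport) instead of the stand-in file written above.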

Advanced Reports: create_report_notebook

In addition to the standard HTML-page reports demonstrated above, pyGSTi is able to generate a Jupyter notebook containing the Python commands to create the figures and tables within a general report. This is facilitated by Workspace objects, which are factories for figures and tables (see previous tutorials). By calling create_report_notebook, all of the relevant Workspace initialization and calls are dumped to a new notebook file, which can be run (either fully or partially) by the user at their convenience. Creating such "report notebooks" has the advantage that the user may insert Python code amidst the figure and table generation calls to inspect or modify what is displayed in a highly customizable fashion. The chief disadvantages of report notebooks are that they require the user to 1) have a Jupyter server up and running and 2) run the notebook before any figures are displayed.

The line below demonstrates how to create a report notebook using create_report_notebook. Note that the argument list is very similar to create_general_report.

In [12]:
pygsti.report.create_report_notebook(results, "../tutorial_files/exampleReport.ipynb", 
                                     title="GST Example Report Notebook", confidenceLevel=None,
                                     auto_open=True, connected=False, verbosity=3)
Report Notebook created as ../tutorial_files/exampleReport.ipynb

Multi-qubit reports

The dimension of the density matrix space grows quite large with more than 2 qubits, and Models for 3+ qubits rarely allow every element of the operation process matrices to vary independently. As such, many of the figures generated by create_standard_report are both too unwieldy (displaying a $64 \times 64$ grid of colored boxes for each operation) and not very helpful (you rarely care about the value of each individual element of an operation matrix). For this purpose, we are developing a report that doesn't just dump out and analyze operation matrices as a whole, but looks at a Model's structure to determine how best to report quantities. This "n-qubit report" is invoked using pygsti.report.create_nqnoise_report, and has arguments similar to those of create_standard_report. It is, however, still under development, and while you're welcome to try it out, it may crash or fail in other unexpected ways.

In [ ]: