PyGSTi is able to construct polished report documents, which provide high-level summaries as well as detailed analyses of results (Gate Set Tomography (GST) and model-testing results in particular). Reports are intended to be a quick and easy way of analyzing Model-type estimates, and pyGSTi's report generation functions are specifically designed to interact with the Results object (produced by several high-level algorithm functions; see, for example, the GST overview tutorial and GST functions tutorial). The report generation functions in pyGSTi take one or more results (often Results-type) objects as input and produce an HTML file as output. The HTML format allows the reports to include interactive plots and switches (see the workspace switchboard tutorial), making it easy to compare different types of analysis or data sets.
PyGSTi's reports are stand-alone HTML documents which cannot run Python. Thus, all the results displayed in a report must be pre-computed (in Python). If you find yourself wanting to fiddle with things and feel that these reports are too static, please consider using a Workspace object (see the Workspace tutorial) within a Jupyter notebook, where you can intermix report tables/plots and Python. Internally, functions like create_standard_report (see below) are just canned routines which use a Workspace object to generate various tables and plots and then insert them into an HTML template.
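As a rough illustration of that internal pattern, here is a minimal sketch of using a Workspace directly in a Jupyter notebook. It assumes a Results object like the one created below, and shows just one of the many tables a Workspace can produce; the 'go0' gauge-optimization label is the default one added by the GST driver.

import pygsti

# Minimal sketch: drive a Workspace by hand, as the report functions do
# internally. Assumes `results` is a Results object such as the one
# created in the next cell.
ws = pygsti.report.Workspace()
ws.init_notebook_mode(connected=False, autodisplay=True)  # render inline in Jupyter

est = results.estimates['default']
ws.GatesVsTargetTable(est.models['go0'], est.models['target'])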
Note to veteran users: PyGSTi has for some time now transitioned to producing HTML (rather than LaTeX/PDF) reports. The way to generate such reports is largely unchanged, with one important exception. Previously, the Results object had various report-generation methods included within it. We've found this to be too restrictive, as we'd sometimes like to generate a report which utilizes the results from multiple runs of GST (to compare them, for instance). Thus, the Results class is now just a container for a DataSet and its related Models, CircuitStructures, etc. All of the report-generation capability is now housed within separate report functions, which we now demonstrate.
Results¶

We start by performing GST using do_long_sequence_gst, as usual, to create a Results object (we could also have just loaded one from file). See the GST functions tutorial for more details.
import pygsti
from pygsti.construction import std1Q_XYI
target_model = std1Q_XYI.target_model()
fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8,16]
ds = pygsti.io.load_dataset("../tutorial_files/Example_Dataset.txt", cache=True)
#Run GST
target_model.set_all_parameterizations("TP") #TP-constrained
results = pygsti.do_long_sequence_gst(ds, target_model, fiducials, fiducials, germs,
                                      maxLengths, verbosity=3)
Loading from cache file: ../tutorial_files/Example_Dataset.txt.cache --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing --- LGST --- Singular values of I_tilde (truncating to first 4 of 6) = 4.243730350963286 1.1796261581655645 0.9627515645786063 0.9424890722054706 0.033826151547621315 0.01692336936843073 Singular values of target I_tilde (truncating to first 4 of 6) = 4.242640687119286 1.414213562373096 1.4142135623730956 1.4142135623730954 2.5038933168948026e-16 2.023452063009528e-16 Resulting model: rho0 = TPSPAMVec with dimension 4 0.71-0.02 0.03 0.75 Mdefault = TPPOVM with effect vectors: 0: FullSPAMVec with dimension 4 0.73 0 0 0.65 1: ComplementSPAMVec with dimension 4 0.69 0 0-0.65 Gi = TPDenseOp with shape (4, 4) 1.00 0 0 0 0.01 0.92-0.03 0.02 0.01-0.01 0.90 0.02 -0.01 0 0 0.91 Gx = TPDenseOp with shape (4, 4) 1.00 0 0 0 0 0.91-0.01 0 -0.02-0.02-0.04-0.99 -0.05 0.03 0.81 0 Gy = TPDenseOp with shape (4, 4) 1.00 0 0 0 0.05 0 0 0.98 0.01 0 0.89-0.03 -0.06-0.82 0 0 --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 86.3537, mu=0, |J|=1010.99 --- Outer Iter 1: norm_f = 49.6491, mu=79.0766, |J|=1009.86 --- Outer Iter 2: norm_f = 49.5669, mu=26.3589, |J|=1008.85 --- Outer Iter 3: norm_f = 49.5665, mu=8.78629, |J|=1008.87 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 49.5665 (92 data params - 31 model params = expected mean of 61; p-value = 0.85235) Completed in 0.2s 2*Delta(log(L)) = 49.6936 Iteration 1 took 0.2s --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 150.19, mu=0, |J|=1397.23 --- Outer Iter 1: norm_f = 111.389, mu=138.539, |J|=1388.05 --- Outer Iter 2: norm_f = 111.209, mu=46.1798, |J|=1387.46 --- Outer Iter 3: norm_f = 111.208, mu=15.3933, |J|=1387.45 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 111.208 (168 data params - 31 model params = expected mean of 137; p-value = 0.948166) Completed in 0.2s 2*Delta(log(L)) = 111.486 Iteration 2 took 0.2s --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 498.77, mu=0, |J|=2295.79 --- Outer Iter 1: norm_f = 421.84, mu=346.423, |J|=2300.79 --- Outer Iter 2: norm_f = 421.713, mu=115.474, |J|=2300.65 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.453619) Completed in 0.3s 2*Delta(log(L)) = 422.191 Iteration 3 took 0.3s --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 851.493, mu=0, |J|=3309.82 --- Outer Iter 1: norm_f = 806.348, mu=636.017, |J|=3286.21 --- Outer Iter 2: norm_f = 806.308, mu=212.006, |J|=3286.08 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 806.308 (862 data params - 31 model params = expected mean of 831; p-value = 0.724212) Completed in 0.5s 2*Delta(log(L)) = 807.505 Iteration 4 took 0.6s --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 1263, mu=0, |J|=4223.66 --- Outer Iter 1: norm_f = 1245.9, mu=917.211, |J|=4227.36 --- Outer Iter 2: norm_f = 1245.88, mu=305.737, |J|=4228.06 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 1245.88 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.53552) Completed in 0.9s 2*Delta(log(L)) = 1247.4 Iteration 5 took 1.0s Switching to ML objective (last iteration) --- MLGST --- --- Outer Iter 0: norm_f = 623.698, mu=0, |J|=2989.23 --- Outer Iter 1: norm_f = 623.667, mu=458.353, |J|=2990.87 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Maximum log(L) = 623.667 below upper bound of -2.13594e+06 2*Delta(log(L)) = 1247.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.523935) Completed in 0.4s 2*Delta(log(L)) = 1247.33 Final MLGST took 0.4s Iterative MLGST Total Time: 2.6s -- Adding Gauge Optimized (go0) --
Now that we have results, we use the create_standard_report function within pygsti.report to generate a report.

pygsti.report.create_standard_report is the most commonly used report generation function in pyGSTi, as it is appropriate for smaller models (1- and 2-qubit) whose operations are, or can be represented as, dense matrices and/or vectors.
If the given filename ends in ".pdf" then a PDF-format report is generated; otherwise the filename specifies a folder that will be filled with HTML pages. To open an HTML-format report, you open the main.html file directly inside the report's folder. Setting auto_open=True makes the finished report open in your web browser automatically.
#HTML
pygsti.report.create_standard_report(results, "../tutorial_files/exampleReport",
                                     title="GST Example Report", verbosity=1, auto_open=True)
print("\n")
#PDF
pygsti.report.create_standard_report(results, "../tutorial_files/exampleReport.pdf",
                                     title="GST Example Report", verbosity=1, auto_open=True)
*** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI *** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots *** *** Merging into template file *** Output written to ../tutorial_files/exampleReport directory Opening ../tutorial_files/exampleReport/main.html... *** Report Generation Complete! Total time 34.5031s *** *** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI *** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots *** *** Merging into template file ***
/usr/local/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.
Latex file(s) successfully generated. Attempting to compile with pdflatex... Opening ../tutorial_files/exampleReport.pdf... *** Report Generation Complete! Total time 72.7787s ***
ERROR: pdflatex returned code 1 Check exampleReport.log to see details.
<pygsti.report.workspace.Workspace at 0x12821ba90>
There are several remarks about these reports worth noting:

- PDF reports can only display a single estimate and gauge optimization. To generate a PDF report from Results objects that have multiple estimates and/or gauge optimizations, consider using the Results object's view method to single out the estimate and gauge optimization you're after (a brief sketch follows this list).
- You'll need pdflatex
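As a minimal sketch (the 'default' estimate and 'go0' gauge-optimization keys are the ones produced by the run above, and the output filename is hypothetical; substitute whatever keys your Results object actually contains):

# Single out one estimate and one gauge optimization so the result is
# suitable for a single-estimate PDF report.
results_view = results.view('default', 'go0')
pygsti.report.create_standard_report(results_view, "../tutorial_files/exampleViewReport.pdf",
                                     title="Single-Estimate PDF Report", auto_open=False)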
on your system to compile PDF reports.

Next, let's analyze the same data two different ways: with and without the TP constraint (i.e., whether the gates must be trace-preserving), and furthermore gauge-optimize each case using several different SPAM weights. In each case we'll call do_long_sequence_gst with gaugeOptParams=False, so that no gauge optimization is done, and then perform several gauge optimizations separately and add these to the Results object via its add_gaugeoptimized function.
#Case1: TP-constrained GST
tpTarget = target_model.copy()
tpTarget.set_all_parameterizations("TP")
results_tp = pygsti.do_long_sequence_gst(ds, tpTarget, fiducials, fiducials, germs,
                                         maxLengths, gaugeOptParams=False, verbosity=1)
#Gauge optimize
est = results_tp.estimates['default']
mdlFinal = est.models['final iteration estimate']
mdlTarget = est.models['target']
for spamWt in [1e-4,1e-2,1.0]:
    mdl = pygsti.gaugeopt_to_target(mdlFinal, mdlTarget, {'gates':1, 'spam':spamWt})
    est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, "Spam %g" % spamWt)
--- Circuit Creation --- --- LGST --- --- Iterative MLGST: [##################################################] 100.0% 1282 operation sequences --- Iterative MLGST Total Time: 2.3s
#Case2: "Full" GST
fullTarget = target_model.copy()
fullTarget.set_all_parameterizations("full")
results_full = pygsti.do_long_sequence_gst(ds, fullTarget, fiducials, fiducials, germs,
                                           maxLengths, gaugeOptParams=False, verbosity=1)
#Gauge optimize
est = results_full.estimates['default']
mdlFinal = est.models['final iteration estimate']
mdlTarget = est.models['target']
for spamWt in [1e-4,1e-2,1.0]:
    mdl = pygsti.gaugeopt_to_target(mdlFinal, mdlTarget, {'gates':1, 'spam':spamWt})
    est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, "Spam %g" % spamWt)
--- Circuit Creation --- --- LGST --- --- Iterative MLGST: [##################################################] 100.0% 1282 operation sequences --- Iterative MLGST Total Time: 2.7s
We'll now call the same create_standard_report function, but this time, instead of passing a single Results object as the first argument, we'll pass a dictionary of them. This will result in an HTML report that includes switches to select which case ("TP" or "Full"), as well as which gauge optimization, to display output quantities for. PDF reports cannot support this interactivity, and so if you try to generate a PDF report you'll get an error.
ws = pygsti.report.create_standard_report({'TP': results_tp, "Full": results_full},
                                          "../tutorial_files/exampleMultiEstimateReport",
                                          title="Example Multi-Estimate Report",
                                          verbosity=2, auto_open=True)
*** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables *** targetSpamBriefTable took 0.09227 seconds
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
targetGatesBoxTable took 0.115583 seconds datasetOverviewTable took 0.667548 seconds bestGatesetSpamParametersTable took 0.001178 seconds bestGatesetSpamBriefTable took 0.235467 seconds bestGatesetSpamVsTargetTable took 0.108223 seconds bestGatesetGaugeOptParamsTable took 0.000386 seconds bestGatesetGatesBoxTable took 0.530659 seconds bestGatesetChoiEvalTable took 0.429807 seconds bestGatesetDecompTable took 0.263574 seconds bestGatesetEvalTable took 0.005017 seconds bestGermsEvalTable took 0.034672 seconds bestGatesetVsTargetTable took 0.064467 seconds
/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/theory.py:200: UserWarning: Output may be unreliable because the model is not approximately trace-preserving.
bestGatesVsTargetTable_gv took 0.335761 seconds bestGatesVsTargetTable_gvgerms took 0.130275 seconds bestGatesVsTargetTable_gi took 0.0143 seconds bestGatesVsTargetTable_gigerms took 0.053035 seconds bestGatesVsTargetTable_sum took 0.283558 seconds bestGatesetErrGenBoxTable took 1.198687 seconds metadataTable took 0.001889 seconds stdoutBlock took 0.000193 seconds profilerTable took 0.000905 seconds softwareEnvTable took 0.001578 seconds exampleTable took 0.038661 seconds singleMetricTable_gv took 0.311023 seconds singleMetricTable_gi took 0.043193 seconds fiducialListTable took 0.000633 seconds prepStrListTable took 0.000252 seconds effectStrListTable took 0.000187 seconds colorBoxPlotKeyPlot took 0.046957 seconds germList2ColTable took 0.00034 seconds progressTable took 3.55919 seconds *** Generating plots *** gramBarPlot took 0.061222 seconds progressBarPlot took 0.081087 seconds progressBarPlot_sum took 0.000544 seconds finalFitComparePlot took 0.475071 seconds bestEstimateColorBoxPlot took 13.585453 seconds bestEstimateTVDColorBoxPlot took 13.131424 seconds bestEstimateColorScatterPlot took 15.735445 seconds bestEstimateColorHistogram took 13.254422 seconds progressTable_scl took 9.2e-05 seconds progressBarPlot_scl took 6.1e-05 seconds bestEstimateColorBoxPlot_scl took 0.000166 seconds bestEstimateColorScatterPlot_scl took 0.000157 seconds bestEstimateColorHistogram_scl took 0.000144 seconds dataScalingColorBoxPlot took 5.9e-05 seconds Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. 
dsComparisonSummary took 0.118111 seconds dsComparisonHistogram took 0.380056 seconds dsComparisonBoxPlot took 0.414922 seconds *** Merging into template file *** Rendering topSwitchboard took 0.000101 seconds Rendering maxLSwitchboard1 took 7.8e-05 seconds Rendering targetSpamBriefTable took 0.063934 seconds Rendering targetGatesBoxTable took 0.061765 seconds Rendering datasetOverviewTable took 0.000923 seconds Rendering bestGatesetSpamParametersTable took 0.002088 seconds Rendering bestGatesetSpamBriefTable took 0.247024 seconds Rendering bestGatesetSpamVsTargetTable took 0.002611 seconds Rendering bestGatesetGaugeOptParamsTable took 0.002972 seconds Rendering bestGatesetGatesBoxTable took 0.220525 seconds Rendering bestGatesetChoiEvalTable took 0.516746 seconds Rendering bestGatesetDecompTable took 0.129993 seconds Rendering bestGatesetEvalTable took 0.024324 seconds Rendering bestGermsEvalTable took 0.092785 seconds Rendering bestGatesetVsTargetTable took 0.001486 seconds Rendering bestGatesVsTargetTable_gv took 0.004191 seconds Rendering bestGatesVsTargetTable_gvgerms took 0.007059 seconds Rendering bestGatesVsTargetTable_gi took 0.004562 seconds Rendering bestGatesVsTargetTable_gigerms took 0.004776 seconds Rendering bestGatesVsTargetTable_sum took 0.003621 seconds Rendering bestGatesetErrGenBoxTable took 0.487722 seconds Rendering metadataTable took 0.012294 seconds Rendering stdoutBlock took 0.001058 seconds Rendering profilerTable took 0.002418 seconds Rendering softwareEnvTable took 0.002448 seconds Rendering exampleTable took 0.020387 seconds Rendering metricSwitchboard_gv took 5.4e-05 seconds Rendering metricSwitchboard_gi took 3.8e-05 seconds Rendering singleMetricTable_gv took 0.016138 seconds Rendering singleMetricTable_gi took 0.024912 seconds Rendering fiducialListTable took 0.00485 seconds Rendering prepStrListTable took 0.003289 seconds Rendering effectStrListTable took 0.003453 seconds Rendering colorBoxPlotKeyPlot took 0.023572 seconds Rendering germList2ColTable took 0.006918 seconds Rendering progressTable took 0.006477 seconds Rendering gramBarPlot took 0.021127 seconds Rendering progressBarPlot took 0.036267 seconds Rendering progressBarPlot_sum took 0.038944 seconds Rendering finalFitComparePlot took 0.019729 seconds Rendering bestEstimateColorBoxPlot took 0.274103 seconds Rendering bestEstimateTVDColorBoxPlot took 0.2693 seconds Rendering bestEstimateColorScatterPlot took 0.443059 seconds Rendering bestEstimateColorHistogram took 0.291644 seconds Rendering progressTable_scl took 0.000594 seconds Rendering progressBarPlot_scl took 0.000808 seconds Rendering bestEstimateColorBoxPlot_scl took 0.001029 seconds Rendering bestEstimateColorScatterPlot_scl took 0.00066 seconds Rendering bestEstimateColorHistogram_scl took 0.000538 seconds Rendering dataScalingColorBoxPlot took 0.00077 seconds Rendering dscmpSwitchboard took 4.1e-05 seconds Rendering dsComparisonSummary took 0.027765 seconds Rendering dsComparisonHistogram took 0.139076 seconds Rendering dsComparisonBoxPlot took 0.125621 seconds Output written to ../tutorial_files/exampleMultiEstimateReport directory Opening ../tutorial_files/exampleMultiEstimateReport/main.html... *** Report Generation Complete! Total time 77.945s ***
In the above call we capture the return value in the variable ws - a Workspace object. PyGSTi's Workspace objects function both as factories for figures and tables and as smart caches for computed values. Within create_standard_report a Workspace object is created and used to create all the figures in the report. As an intended side effect, each of these figures is cached, along with some of the intermediate results used to create it. As we'll see below, a Workspace can also be specified as input to create_standard_report, allowing it to utilize previously cached quantities.
Another way: Because both results_tp and results_full above used the same dataset and operation sequences, we could have combined them as two estimates in a single Results object (see the previous tutorial on pyGSTi's Results object). This can be done by renaming at least one of the "default"-named estimates in results_tp or results_full (below we rename both) and then adding the estimate within results_full to the estimates already contained in results_tp:
results_tp.rename_estimate('default','TP')
results_full.rename_estimate('default','Full')
results_both = results_tp.copy() #copy just for neatness
results_both.add_estimates(results_full, estimatesToAdd=['Full'])
Creating a report using results_both will result in the same report we just generated. We'll demonstrate this anyway, but in addition we'll supply create_standard_report a ws argument, which tells it to use any cached values contained in a given input Workspace to expedite report generation. Since our workspace object has the exact quantities we need cached in it, you'll notice a significant speedup. Finally, note that even though there's just a single Results object, you still can't generate a PDF report from it because it contains multiple estimates.
pygsti.report.create_standard_report(results_both,
                                     "../tutorial_files/exampleMultiEstimateReport2",
                                     title="Example Multi-Estimate Report (v2)",
                                     verbosity=2, auto_open=True, ws=ws)
*** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables *** targetSpamBriefTable took 0.000417 seconds targetGatesBoxTable took 0.000349 seconds datasetOverviewTable took 0.000245 seconds bestGatesetSpamParametersTable took 0.000965 seconds bestGatesetSpamBriefTable took 0.001204 seconds bestGatesetSpamVsTargetTable took 0.000777 seconds bestGatesetGaugeOptParamsTable took 0.000713 seconds bestGatesetGatesBoxTable took 0.001171 seconds bestGatesetChoiEvalTable took 0.000702 seconds bestGatesetDecompTable took 0.000817 seconds bestGatesetEvalTable took 0.000323 seconds bestGermsEvalTable took 0.000661 seconds bestGatesetVsTargetTable took 0.010797 seconds bestGatesVsTargetTable_gv took 0.001265 seconds bestGatesVsTargetTable_gvgerms took 0.002182 seconds bestGatesVsTargetTable_gi took 0.00037 seconds bestGatesVsTargetTable_gigerms took 0.000732 seconds bestGatesVsTargetTable_sum took 0.001124 seconds bestGatesetErrGenBoxTable took 0.001084 seconds metadataTable took 0.002658 seconds stdoutBlock took 0.000156 seconds profilerTable took 0.00112 seconds softwareEnvTable took 0.000156 seconds exampleTable took 0.000157 seconds
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
singleMetricTable_gv took 1.00074 seconds singleMetricTable_gi took 0.043727 seconds fiducialListTable took 0.000187 seconds prepStrListTable took 0.000124 seconds effectStrListTable took 0.000166 seconds colorBoxPlotKeyPlot took 0.000375 seconds germList2ColTable took 0.000265 seconds progressTable took 0.494041 seconds *** Generating plots *** gramBarPlot took 0.000445 seconds progressBarPlot took 0.047545 seconds progressBarPlot_sum took 0.00063 seconds finalFitComparePlot took 0.039485 seconds bestEstimateColorBoxPlot took 6.670777 seconds bestEstimateTVDColorBoxPlot took 6.586583 seconds bestEstimateColorScatterPlot took 7.28725 seconds bestEstimateColorHistogram took 6.426439 seconds progressTable_scl took 8.8e-05 seconds progressBarPlot_scl took 6e-05 seconds bestEstimateColorBoxPlot_scl took 0.00016 seconds bestEstimateColorScatterPlot_scl took 0.000153 seconds bestEstimateColorHistogram_scl took 0.00014 seconds dataScalingColorBoxPlot took 5.7e-05 seconds *** Merging into template file *** Rendering topSwitchboard took 0.000109 seconds Rendering maxLSwitchboard1 took 8e-05 seconds Rendering targetSpamBriefTable took 0.068026 seconds Rendering targetGatesBoxTable took 0.058047 seconds Rendering datasetOverviewTable took 0.001053 seconds Rendering bestGatesetSpamParametersTable took 0.002395 seconds Rendering bestGatesetSpamBriefTable took 0.253642 seconds Rendering bestGatesetSpamVsTargetTable took 0.003003 seconds Rendering bestGatesetGaugeOptParamsTable took 0.003293 seconds Rendering bestGatesetGatesBoxTable took 0.223867 seconds Rendering bestGatesetChoiEvalTable took 0.213146 seconds Rendering bestGatesetDecompTable took 0.131016 seconds Rendering bestGatesetEvalTable took 0.024017 seconds Rendering bestGermsEvalTable took 0.08801 seconds Rendering bestGatesetVsTargetTable took 0.001673 seconds Rendering bestGatesVsTargetTable_gv took 0.004549 seconds Rendering bestGatesVsTargetTable_gvgerms took 0.006734 seconds Rendering bestGatesVsTargetTable_gi took 0.004355 seconds Rendering bestGatesVsTargetTable_gigerms took 0.004779 seconds Rendering bestGatesVsTargetTable_sum took 0.003868 seconds Rendering bestGatesetErrGenBoxTable took 0.478271 seconds Rendering metadataTable took 0.011528 seconds Rendering stdoutBlock took 0.001166 seconds Rendering profilerTable took 0.002611 seconds Rendering softwareEnvTable took 0.002303 seconds Rendering exampleTable took 0.02164 seconds Rendering metricSwitchboard_gv took 3.8e-05 seconds Rendering metricSwitchboard_gi took 3.1e-05 seconds Rendering singleMetricTable_gv took 0.016514 seconds Rendering singleMetricTable_gi took 0.014514 seconds Rendering fiducialListTable took 0.002621 seconds Rendering prepStrListTable took 0.002107 seconds Rendering effectStrListTable took 0.002571 seconds Rendering colorBoxPlotKeyPlot took 0.022725 seconds Rendering germList2ColTable took 0.003585 seconds Rendering progressTable took 0.006995 seconds Rendering gramBarPlot took 0.022479 seconds Rendering progressBarPlot took 0.035332 seconds Rendering progressBarPlot_sum took 0.036974 seconds Rendering finalFitComparePlot took 0.019669 seconds Rendering bestEstimateColorBoxPlot took 0.27344 seconds Rendering bestEstimateTVDColorBoxPlot took 0.269551 seconds Rendering bestEstimateColorScatterPlot took 0.439941 seconds Rendering bestEstimateColorHistogram took 0.299781 seconds Rendering progressTable_scl took 0.00091 seconds Rendering progressBarPlot_scl took 0.000923 seconds Rendering bestEstimateColorBoxPlot_scl took 0.000987 seconds Rendering 
bestEstimateColorScatterPlot_scl took 0.000771 seconds Rendering bestEstimateColorHistogram_scl took 0.000743 seconds Rendering dataScalingColorBoxPlot took 0.000715 seconds Output written to ../tutorial_files/exampleMultiEstimateReport2 directory Opening ../tutorial_files/exampleMultiEstimateReport2/main.html... *** Report Generation Complete! Total time 31.9865s ***
<pygsti.report.workspace.Workspace at 0x10ff95748>
do_stdpractice_gst¶

It's no coincidence that a Results object containing multiple estimates using the same data is precisely what's returned from do_stdpractice_gst (see the docstring for information on its arguments, and see the GST functions tutorial). This allows one to run GST multiple times, creating several different "standard" estimates and gauge optimizations, and plot them all in a single (HTML) report.
results_std = pygsti.do_stdpractice_gst(ds, target_model, fiducials, fiducials, germs,
                                        maxLengths, verbosity=4, modes="TP,CPTP,Target",
                                        gaugeOptSuite=('single','toggleValidSpam'))
# Generate a report with "TP", "CPTP", and "Target" estimates
pygsti.report.create_standard_report(results_std, "../tutorial_files/exampleStdReport",
                                     title="Post StdPractice Report", auto_open=True,
                                     verbosity=1)
-- Std Practice: Iter 1 of 3 (TP) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing --- LGST --- Singular values of I_tilde (truncating to first 4 of 6) = 4.243730350963286 1.1796261581655645 0.9627515645786063 0.9424890722054706 0.033826151547621315 0.01692336936843073 Singular values of target I_tilde (truncating to first 4 of 6) = 4.242640687119286 1.414213562373096 1.4142135623730956 1.4142135623730954 2.5038933168948026e-16 2.023452063009528e-16 Resulting model: rho0 = TPSPAMVec with dimension 4 0.71-0.02 0.03 0.75 Mdefault = TPPOVM with effect vectors: 0: FullSPAMVec with dimension 4 0.73 0 0 0.65 1: ComplementSPAMVec with dimension 4 0.69 0 0-0.65 Gi = TPDenseOp with shape (4, 4) 1.00 0 0 0 0.01 0.92-0.03 0.02 0.01-0.01 0.90 0.02 -0.01 0 0 0.91 Gx = TPDenseOp with shape (4, 4) 1.00 0 0 0 0 0.91-0.01 0 -0.02-0.02-0.04-0.99 -0.05 0.03 0.81 0 Gy = TPDenseOp with shape (4, 4) 1.00 0 0 0 0.05 0 0 0.98 0.01 0 0.89-0.03 -0.06-0.82 0 0 --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 86.3537, mu=0, |J|=1010.99 --- Outer Iter 1: norm_f = 49.6491, mu=79.0766, |J|=1009.86 --- Outer Iter 2: norm_f = 49.5669, mu=26.3589, |J|=1008.85 --- Outer Iter 3: norm_f = 49.5665, mu=8.78629, |J|=1008.87 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 49.5665 (92 data params - 31 model params = expected mean of 61; p-value = 0.85235) Completed in 0.2s 2*Delta(log(L)) = 49.6936 Iteration 1 took 0.2s --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 150.19, mu=0, |J|=1397.23 --- Outer Iter 1: norm_f = 111.389, mu=138.539, |J|=1388.05 --- Outer Iter 2: norm_f = 111.209, mu=46.1798, |J|=1387.46 --- Outer Iter 3: norm_f = 111.208, mu=15.3933, |J|=1387.45 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 111.208 (168 data params - 31 model params = expected mean of 137; p-value = 0.948166) Completed in 0.2s 2*Delta(log(L)) = 111.486 Iteration 2 took 0.2s --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 498.77, mu=0, |J|=2295.79 --- Outer Iter 1: norm_f = 421.84, mu=346.423, |J|=2300.79 --- Outer Iter 2: norm_f = 421.713, mu=115.474, |J|=2300.65 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.453619) Completed in 0.3s 2*Delta(log(L)) = 422.191 Iteration 3 took 0.3s --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 851.493, mu=0, |J|=3309.82 --- Outer Iter 1: norm_f = 806.348, mu=636.017, |J|=3286.21 --- Outer Iter 2: norm_f = 806.308, mu=212.006, |J|=3286.08 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 806.308 (862 data params - 31 model params = expected mean of 831; p-value = 0.724212) Completed in 0.6s 2*Delta(log(L)) = 807.505 Iteration 4 took 0.6s --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params). --- Outer Iter 0: norm_f = 1263, mu=0, |J|=4223.66 --- Outer Iter 1: norm_f = 1245.9, mu=917.211, |J|=4227.36 --- Outer Iter 2: norm_f = 1245.88, mu=305.737, |J|=4228.06 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 1245.88 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.53552) Completed in 0.9s 2*Delta(log(L)) = 1247.4 Iteration 5 took 0.9s Switching to ML objective (last iteration) --- MLGST --- --- Outer Iter 0: norm_f = 623.698, mu=0, |J|=2989.23 --- Outer Iter 1: norm_f = 623.667, mu=458.353, |J|=2990.87 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Maximum log(L) = 623.667 below upper bound of -2.13594e+06 2*Delta(log(L)) = 1247.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.523935) Completed in 0.4s 2*Delta(log(L)) = 1247.33 Final MLGST took 0.4s Iterative MLGST Total Time: 2.6s -- Performing 'single' gauge optimization on TP estimate -- -- Adding Gauge Optimized (single) -- -- Performing 'Spam 0.001' gauge optimization on TP estimate -- -- Adding Gauge Optimized (Spam 0.001) -- -- Performing 'Spam 0.001+v' gauge optimization on TP estimate -- -- Adding Gauge Optimized (Spam 0.001+v) -- -- Std Practice: Iter 2 of 3 (CPTP) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params). 
--- Outer Iter 0: norm_f = 1.10824e+07, mu=0, |J|=1098.32 --- Outer Iter 1: norm_f = 525198, mu=152.044, |J|=23245 --- Outer Iter 2: norm_f = 105604, mu=119.002, |J|=4968.62 --- Outer Iter 3: norm_f = 17775.9, mu=81.851, |J|=1539.41 --- Outer Iter 4: norm_f = 2118.67, mu=39.4495, |J|=988.314 --- Outer Iter 5: norm_f = 91.4772, mu=13.1498, |J|=781.246 --- Outer Iter 6: norm_f = 66.6366, mu=10.0856, |J|=746.609 --- Outer Iter 7: norm_f = 59.8988, mu=16.5022, |J|=744.779 --- Outer Iter 8: norm_f = 55.2767, mu=32.1916, |J|=740.96 --- Outer Iter 9: norm_f = 50.7549, mu=24.8843, |J|=744.012 --- Outer Iter 10: norm_f = 49.7522, mu=8.29476, |J|=747.925 --- Outer Iter 11: norm_f = 49.7405, mu=48.5801, |J|=748.36 --- Outer Iter 12: norm_f = 49.7394, mu=44.6982, |J|=748.432 --- Outer Iter 13: norm_f = 49.739, mu=34.6233, |J|=748.465 --- Outer Iter 14: norm_f = 49.7386, mu=29.4361, |J|=748.5 --- Outer Iter 15: norm_f = 49.7383, mu=29.4353, |J|=748.54 --- Outer Iter 16: norm_f = 49.7383, mu=46.4292, |J|=748.574 --- Outer Iter 17: norm_f = 49.7379, mu=53.6274, |J|=748.601 --- Outer Iter 18: norm_f = 49.7376, mu=54.0897, |J|=748.637 --- Outer Iter 19: norm_f = 49.7374, mu=53.7448, |J|=748.669 --- Outer Iter 20: norm_f = 49.7372, mu=41.3964, |J|=748.695 --- Outer Iter 21: norm_f = 49.737, mu=19.6625, |J|=748.727 --- Outer Iter 22: norm_f = 49.7367, mu=15.9398, |J|=748.787 --- Outer Iter 23: norm_f = 49.7365, mu=33.1175, |J|=748.829 --- Outer Iter 24: norm_f = 49.7365, mu=52.8695, |J|=748.861 --- Outer Iter 25: norm_f = 49.7362, mu=54.9158, |J|=748.889 --- Outer Iter 26: norm_f = 49.736, mu=54.9155, |J|=748.92 --- Outer Iter 27: norm_f = 49.7359, mu=48.065, |J|=748.945 --- Outer Iter 28: norm_f = 49.7357, mu=23.5141, |J|=748.971 --- Outer Iter 29: norm_f = 49.7355, mu=13.3491, |J|=749.021 --- Outer Iter 30: norm_f = 49.7355, mu=22.8901, |J|=749.097 --- Outer Iter 31: norm_f = 49.7352, mu=58.1488, |J|=749.123 --- Outer Iter 32: norm_f = 49.735, mu=58.7563, |J|=749.156 --- Outer Iter 33: norm_f = 49.7349, mu=58.067, |J|=749.182 --- Outer Iter 34: norm_f = 49.7348, mu=32.8341, |J|=749.203 --- Outer Iter 35: norm_f = 49.7347, mu=10.9447, |J|=749.237 --- Outer Iter 36: norm_f = 49.7344, mu=10.5674, |J|=749.327 --- Outer Iter 37: norm_f = 49.7342, mu=82.2963, |J|=749.355 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 49.7342 (92 data params - 31 model params = expected mean of 61; p-value = 0.848291) Completed in 2.5s 2*Delta(log(L)) = 49.8652 Iteration 1 took 2.5s --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params). 
--- Outer Iter 0: norm_f = 151.528, mu=0, |J|=1014.34 --- Outer Iter 1: norm_f = 122.487, mu=173.839, |J|=987.258 --- Outer Iter 2: norm_f = 112.801, mu=57.9464, |J|=996.402 --- Outer Iter 3: norm_f = 111.668, mu=31.1358, |J|=999.747 --- Outer Iter 4: norm_f = 111.484, mu=10.3786, |J|=1002.08 --- Outer Iter 5: norm_f = 111.476, mu=4.36119, |J|=1002.23 --- Outer Iter 6: norm_f = 111.475, mu=93.0386, |J|=1002.33 --- Outer Iter 7: norm_f = 111.475, mu=88.9496, |J|=1002.32 --- Outer Iter 8: norm_f = 111.475, mu=58.1522, |J|=1002.3 --- Outer Iter 9: norm_f = 111.474, mu=28.2497, |J|=1002.26 --- Outer Iter 10: norm_f = 111.474, mu=27.9109, |J|=1002.16 --- Outer Iter 11: norm_f = 111.474, mu=91.7847, |J|=1002.11 --- Outer Iter 12: norm_f = 111.474, mu=94.8186, |J|=1002.09 --- Outer Iter 13: norm_f = 111.474, mu=94.8078, |J|=1002.07 --- Outer Iter 14: norm_f = 111.474, mu=78.1253, |J|=1002.05 --- Outer Iter 15: norm_f = 111.474, mu=32.956, |J|=1002.02 --- Outer Iter 16: norm_f = 111.473, mu=20.4206, |J|=1001.95 --- Outer Iter 17: norm_f = 111.473, mu=42.907, |J|=1001.9 --- Outer Iter 18: norm_f = 111.473, mu=88.1256, |J|=1001.88 --- Outer Iter 19: norm_f = 111.473, mu=88.1234, |J|=1001.86 --- Outer Iter 20: norm_f = 111.473, mu=80.7846, |J|=1001.84 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 111.473 (168 data params - 31 model params = expected mean of 137; p-value = 0.946188) Completed in 1.3s 2*Delta(log(L)) = 111.765 Iteration 2 took 1.3s --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params). --- Outer Iter 0: norm_f = 496.83, mu=0, |J|=1622.21 --- Outer Iter 1: norm_f = 425.12, mu=172.635, |J|=1614.05 --- Outer Iter 2: norm_f = 422.084, mu=57.545, |J|=1622.3 --- Outer Iter 3: norm_f = 422.023, mu=19.1817, |J|=1623.99 --- Outer Iter 4: norm_f = 422.007, mu=19.202, |J|=1622.74 --- Outer Iter 5: norm_f = 421.888, mu=19.3361, |J|=1622.9 --- Outer Iter 6: norm_f = 421.713, mu=6.44536, |J|=1625.17 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.45362) Completed in 0.8s 2*Delta(log(L)) = 422.195 Iteration 3 took 0.8s --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params). 
--- Outer Iter 0: norm_f = 851.552, mu=0, |J|=2237.29 --- Outer Iter 1: norm_f = 813.414, mu=295.355, |J|=2217.99 --- Outer Iter 2: norm_f = 811.822, mu=324.854, |J|=2226.8 --- Outer Iter 3: norm_f = 807.477, mu=108.285, |J|=2242.71 --- Outer Iter 4: norm_f = 807.406, mu=110.4, |J|=2245.32 --- Outer Iter 5: norm_f = 807.331, mu=685.514, |J|=2246.43 --- Outer Iter 6: norm_f = 807.319, mu=654.52, |J|=2246.59 --- Outer Iter 7: norm_f = 807.317, mu=520.906, |J|=2246.64 --- Outer Iter 8: norm_f = 807.316, mu=173.635, |J|=2246.59 --- Outer Iter 9: norm_f = 807.313, mu=57.8784, |J|=2246.3 --- Outer Iter 10: norm_f = 807.305, mu=33.5976, |J|=2245.2 --- Outer Iter 11: norm_f = 807.302, mu=42.516, |J|=2243.22 --- Outer Iter 12: norm_f = 807.289, mu=339.848, |J|=2243.46 --- Outer Iter 13: norm_f = 807.287, mu=339.599, |J|=2243.35 --- Outer Iter 14: norm_f = 807.286, mu=256.378, |J|=2243.19 --- Outer Iter 15: norm_f = 807.284, mu=85.4593, |J|=2242.94 --- Outer Iter 16: norm_f = 807.279, mu=59.0419, |J|=2242.14 --- Outer Iter 17: norm_f = 807.277, mu=417.602, |J|=2242.06 --- Outer Iter 18: norm_f = 807.276, mu=139.201, |J|=2241.9 --- Outer Iter 19: norm_f = 807.273, mu=46.4002, |J|=2241.4 --- Outer Iter 20: norm_f = 807.264, mu=28.3539, |J|=2239.71 --- Outer Iter 21: norm_f = 807.259, mu=31.9167, |J|=2236.66 --- Outer Iter 22: norm_f = 807.244, mu=200.188, |J|=2236.94 --- Outer Iter 23: norm_f = 807.242, mu=214.672, |J|=2236.5 --- Outer Iter 24: norm_f = 807.24, mu=266.844, |J|=2236.08 --- Outer Iter 25: norm_f = 807.237, mu=285.22, |J|=2235.76 --- Outer Iter 26: norm_f = 807.235, mu=286.131, |J|=2235.48 --- Outer Iter 27: norm_f = 807.232, mu=284.409, |J|=2235.19 --- Outer Iter 28: norm_f = 807.23, mu=246.399, |J|=2234.88 --- Outer Iter 29: norm_f = 807.228, mu=160.075, |J|=2234.51 --- Outer Iter 30: norm_f = 807.225, mu=125.143, |J|=2233.93 --- Outer Iter 31: norm_f = 807.223, mu=132.762, |J|=2233.16 --- Outer Iter 32: norm_f = 807.22, mu=285.635, |J|=2232.82 --- Outer Iter 33: norm_f = 807.217, mu=286.703, |J|=2232.53 --- Outer Iter 34: norm_f = 807.214, mu=284.888, |J|=2232.23 --- Outer Iter 35: norm_f = 807.212, mu=242.968, |J|=2231.91 --- Outer Iter 36: norm_f = 807.21, mu=151.601, |J|=2231.53 --- Outer Iter 37: norm_f = 807.207, mu=119.477, |J|=2230.89 --- Outer Iter 38: norm_f = 807.205, mu=137.491, |J|=2230.07 --- Outer Iter 39: norm_f = 807.201, mu=293.21, |J|=2229.75 --- Outer Iter 40: norm_f = 807.198, mu=293.562, |J|=2229.48 --- Outer Iter 41: norm_f = 807.196, mu=286.699, |J|=2229.19 --- Outer Iter 42: norm_f = 807.194, mu=210.881, |J|=2228.88 --- Outer Iter 43: norm_f = 807.192, mu=112.754, |J|=2228.45 --- Outer Iter 44: norm_f = 807.188, mu=106.226, |J|=2227.62 --- Outer Iter 45: norm_f = 807.186, mu=217.92, |J|=2227.23 --- Outer Iter 46: norm_f = 807.184, mu=236.72, |J|=2226.83 --- Outer Iter 47: norm_f = 807.181, mu=253.411, |J|=2226.47 --- Outer Iter 48: norm_f = 807.179, mu=258.714, |J|=2226.16 --- Outer Iter 49: norm_f = 807.177, mu=258.905, |J|=2225.85 --- Outer Iter 50: norm_f = 807.175, mu=258.416, |J|=2225.55 --- Outer Iter 51: norm_f = 807.173, mu=248.597, |J|=2225.24 --- Outer Iter 52: norm_f = 807.171, mu=216.922, |J|=2224.93 --- Outer Iter 53: norm_f = 807.17, mu=180.949, |J|=2224.57 --- Outer Iter 54: norm_f = 807.167, mu=172.91, |J|=2224.15 --- Outer Iter 55: norm_f = 807.166, mu=174, |J|=2223.71 --- Outer Iter 56: norm_f = 807.165, mu=270.951, |J|=2223.28 --- Outer Iter 57: norm_f = 807.163, mu=293.556, |J|=2223.03 --- Outer Iter 58: norm_f = 807.161, mu=294.205, 
|J|=2222.84 --- Outer Iter 59: norm_f = 807.159, mu=287.827, |J|=2222.63 --- Outer Iter 60: norm_f = 807.158, mu=206.067, |J|=2222.41 --- Outer Iter 61: norm_f = 807.156, mu=103.898, |J|=2222.09 --- Outer Iter 62: norm_f = 807.154, mu=97.6646, |J|=2221.47 --- Outer Iter 63: norm_f = 807.153, mu=213.583, |J|=2221.19 --- Outer Iter 64: norm_f = 807.152, mu=262.549, |J|=2220.92 --- Outer Iter 65: norm_f = 807.15, mu=277.118, |J|=2220.73 --- Outer Iter 66: norm_f = 807.149, mu=277.588, |J|=2220.56 --- Outer Iter 67: norm_f = 807.148, mu=274.669, |J|=2220.38 --- Outer Iter 68: norm_f = 807.147, mu=231.101, |J|=2220.2 --- Outer Iter 69: norm_f = 807.146, mu=148.993, |J|=2219.99 --- Outer Iter 70: norm_f = 807.145, mu=123.155, |J|=2219.67 --- Outer Iter 71: norm_f = 807.145, mu=140.655, |J|=2219.3 --- Outer Iter 72: norm_f = 807.143, mu=292.778, |J|=2219.16 --- Outer Iter 73: norm_f = 807.143, mu=292.782, |J|=2219.04 --- Outer Iter 74: norm_f = 807.142, mu=272.276, |J|=2218.91 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 807.142 (862 data params - 31 model params = expected mean of 831; p-value = 0.717193) Completed in 11.3s 2*Delta(log(L)) = 808.459 Iteration 4 took 11.3s --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: --- Minimum Chi^2 GST --- Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params). --- Outer Iter 0: norm_f = 1264.71, mu=0, |J|=2587.75 --- Outer Iter 1: norm_f = 1247.44, mu=339.21, |J|=2580.77 --- Outer Iter 2: norm_f = 1247.19, mu=113.07, |J|=2583.49 --- Outer Iter 3: norm_f = 1247.18, mu=51.8848, |J|=2582.38 --- Outer Iter 4: norm_f = 1247.17, mu=102.682, |J|=2581.25 --- Outer Iter 5: norm_f = 1247.17, mu=740.295, |J|=2581.29 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Sum of Chi^2 = 1247.17 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.525249) Completed in 1.8s 2*Delta(log(L)) = 1248.89 Iteration 5 took 1.8s Switching to ML objective (last iteration) --- MLGST --- --- Outer Iter 0: norm_f = 624.444, mu=0, |J|=1825.16 --- Outer Iter 1: norm_f = 624.418, mu=169.838, |J|=1826.36 --- Outer Iter 2: norm_f = 624.417, mu=56.6127, |J|=1826.14 --- Outer Iter 3: norm_f = 624.414, mu=36.7733, |J|=1825.47 --- Outer Iter 4: norm_f = 624.414, mu=115.665, |J|=1825.09 --- Outer Iter 5: norm_f = 624.412, mu=241.82, |J|=1825 --- Outer Iter 6: norm_f = 624.411, mu=241.831, |J|=1824.93 --- Outer Iter 7: norm_f = 624.41, mu=213.348, |J|=1824.84 Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06 Maximum log(L) = 624.41 below upper bound of -2.13594e+06 2*Delta(log(L)) = 1248.82 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.512074) Completed in 1.7s 2*Delta(log(L)) = 1248.82 Final MLGST took 1.7s Iterative MLGST Total Time: 19.6s -- Performing 'single' gauge optimization on CPTP estimate -- -- Adding Gauge Optimized (single) -- -- Performing 'Spam 0.001' gauge optimization on CPTP estimate -- -- Adding Gauge Optimized (Spam 0.001) -- -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate -- -- Adding Gauge Optimized (Spam 0.001+v) -- -- Std Practice: Iter 3 of 3 (Target) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were 
missing -- Performing 'single' gauge optimization on Target estimate -- -- Adding Gauge Optimized (single) -- -- Performing 'Spam 0.001' gauge optimization on Target estimate -- -- Adding Gauge Optimized (Spam 0.001) -- -- Performing 'Spam 0.001+v' gauge optimization on Target estimate -- -- Adding Gauge Optimized (Spam 0.001+v) -- *** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots *** *** Merging into template file *** Output written to ../tutorial_files/exampleStdReport directory Opening ../tutorial_files/exampleStdReport/main.html... *** Report Generation Complete! Total time 70.4261s ***
<pygsti.report.workspace.Workspace at 0x1359f55c0>
To display confidence intervals for reported quantities, you must do two things:

- specify the confidenceLevel argument to create_standard_report;
- ensure the estimate(s) being reported contain a valid confidence-region factory.

Constructing a factory often means computing a Hessian, which can be time-consuming, and so this is not done automatically. Here we demonstrate how to construct a valid factory for the "Spam 0.001" gauge optimization of the "CPTP" estimate by computing and then projecting the Hessian of the likelihood function.
#Construct and initialize a "confidence region factory" for the CPTP estimate
crfact = results_std.estimates["CPTP"].add_confidence_region_factory('Spam 0.001', 'final')
crfact.compute_hessian(comm=None) #we could use more processors
crfact.project_hessian('intrinsic error')
pygsti.report.create_standard_report(results_std, "../tutorial_files/exampleStdReport2",
                                     title="Post StdPractice Report (w/CIs on CPTP)",
                                     confidenceLevel=95, auto_open=True, verbosity=1)
--- Hessian Projector Optimization from separate SPAM and Gate weighting --- Resulting intrinsic errors: 0.0083633 (gates), 0.0048806 (spam) Resulting sqrt(mean(operationCIs**2)): 0.0164815 Resulting sqrt(mean(spamCIs**2)): 0.0132789 *** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables *** *** Generating plots *** *** Merging into template file *** Output written to ../tutorial_files/exampleStdReport2 directory Opening ../tutorial_files/exampleStdReport2/main.html... *** Report Generation Complete! Total time 89.6974s ***
<pygsti.report.workspace.Workspace at 0x135e28780>
We've already seen above that create_standard_report can be given a dictionary of Results objects instead of a single one. This allows the creation of reports containing estimates for different DataSets (each Results object only holds estimates for a single DataSet). Furthermore, when the data sets have the same operation sequences, they will be compared within a tab of the HTML report.

Below, we generate a new data set with the same sequences as the one loaded at the beginning of this tutorial, proceed to run standard-practice GST on that dataset, and create a report of the results along with those of the original dataset. Look at the "Data Comparison" tab within the gauge-invariant error metrics category.
#Make another dataset & estimates
depol_gateset = target_model.depolarize(op_noise=0.1)
datagen_gateset = depol_gateset.rotate((0.05,0,0.03))
#Compute the sequences needed to perform Long Sequence GST on
# this Model with sequences up to length 512
circuit_list = pygsti.construction.make_lsgst_experiment_list(
    std1Q_XYI.target_model(), std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
    std1Q_XYI.germs, [1,2,4,8,16,32,64,128,256,512])
ds2 = pygsti.construction.generate_fake_data(datagen_gateset, circuit_list, nSamples=1000,
                                             sampleError='binomial', seed=2018)
results_std2 = pygsti.do_stdpractice_gst(ds2, target_model, fiducials, fiducials, germs,
                                         maxLengths, verbosity=3, modes="TP,CPTP,Target",
                                         gaugeOptSuite=('single','toggleValidSpam'))
pygsti.report.create_standard_report({'DS1': results_std, 'DS2': results_std2},
                                     "../tutorial_files/exampleMultiDataSetReport",
                                     title="Example Multi-Dataset Report",
                                     auto_open=True, verbosity=1)
-- Std Practice: Iter 1 of 3 (TP) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing --- LGST --- Singular values of I_tilde (truncating to first 4 of 6) = 4.244829997162508 1.1936677889884049 0.9868539533169907 0.932197724091589 0.04714742318656945 0.012700520808584604 Singular values of target I_tilde (truncating to first 4 of 6) = 4.242640687119286 1.414213562373096 1.4142135623730956 1.4142135623730954 2.5038933168948026e-16 2.023452063009528e-16 --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 47.848 (92 data params - 31 model params = expected mean of 61; p-value = 0.89017) Completed in 0.2s 2*Delta(log(L)) = 47.897 Iteration 1 took 0.2s --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 112.296 (168 data params - 31 model params = expected mean of 137; p-value = 0.939668) Completed in 0.1s 2*Delta(log(L)) = 112.295 Iteration 2 took 0.2s --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 409.638 (450 data params - 31 model params = expected mean of 419; p-value = 0.618972) Completed in 0.4s 2*Delta(log(L)) = 409.806 Iteration 3 took 0.4s --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 833.614 (862 data params - 31 model params = expected mean of 831; p-value = 0.467957) Completed in 0.5s 2*Delta(log(L)) = 833.943 Iteration 4 took 0.6s --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 1262.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405531) Completed in 0.8s 2*Delta(log(L)) = 1262.98 Iteration 5 took 0.9s Switching to ML objective (last iteration) --- MLGST --- Maximum log(L) = 631.455 below upper bound of -2.13633e+06 2*Delta(log(L)) = 1262.91 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.401035) Completed in 0.3s 2*Delta(log(L)) = 1262.91 Final MLGST took 0.3s Iterative MLGST Total Time: 2.6s -- Performing 'single' gauge optimization on TP estimate -- -- Performing 'Spam 0.001' gauge optimization on TP estimate -- -- Performing 'Spam 0.001+v' gauge optimization on TP estimate -- -- Std Practice: Iter 2 of 3 (CPTP) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 50.2614 (92 data params - 31 model params = expected mean of 61; p-value = 0.835129) Completed in 2.3s 2*Delta(log(L)) = 50.3385 Iteration 1 took 2.3s --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 112.857 (168 data params - 31 model params = expected mean of 137; p-value = 0.934907) Completed in 1.3s 2*Delta(log(L)) = 112.882 Iteration 2 took 1.4s --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 409.841 (450 data params - 31 model params = expected mean of 419; p-value = 0.616256) Completed in 2.1s 2*Delta(log(L)) = 410.036 Iteration 3 took 2.1s --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 833.614 (862 data params - 31 model params = expected mean of 831; p-value = 0.467957) Completed in 1.5s 2*Delta(log(L)) = 833.943 Iteration 4 took 1.5s --- Iterative MLGST: Iter 5 of 5 1282 operation 
sequences ---: --- Minimum Chi^2 GST --- Sum of Chi^2 = 1262.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405528) Completed in 1.3s 2*Delta(log(L)) = 1262.98 Iteration 5 took 1.3s Switching to ML objective (last iteration) --- MLGST --- Maximum log(L) = 631.455 below upper bound of -2.13633e+06 2*Delta(log(L)) = 1262.91 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.40103) Completed in 0.9s 2*Delta(log(L)) = 1262.91 Final MLGST took 0.9s Iterative MLGST Total Time: 9.4s -- Performing 'single' gauge optimization on CPTP estimate -- -- Performing 'Spam 0.001' gauge optimization on CPTP estimate -- -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate -- -- Std Practice: Iter 3 of 3 (Target) --: --- Circuit Creation --- 1282 sequences created Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing -- Performing 'single' gauge optimization on Target estimate -- -- Performing 'Spam 0.001' gauge optimization on Target estimate -- -- Performing 'Spam 0.001+v' gauge optimization on Target estimate -- *** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots *** Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. The datasets are INCONSISTENT at 5.00% significance. - Details: - The aggregate log-likelihood ratio test is significant at 20.30 standard deviations. - The aggregate log-likelihood ratio test standard deviations signficance threshold is 1.98 - The number of sequences with data that is inconsistent is 14 - The maximum SSTVD over all sequences is 0.15 - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|--- The datasets are INCONSISTENT at 5.00% significance. - Details: - The aggregate log-likelihood ratio test is significant at 20.30 standard deviations. - The aggregate log-likelihood ratio test standard deviations signficance threshold is 1.98 - The number of sequences with data that is inconsistent is 14 - The maximum SSTVD over all sequences is 0.15 - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|--- Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance. *** Merging into template file *** Output written to ../tutorial_files/exampleMultiDataSetReport directory Opening ../tutorial_files/exampleMultiDataSetReport/main.html... *** Report Generation Complete! Total time 142.596s ***
<pygsti.report.workspace.Workspace at 0x1386404a8>
create_standard_report options¶

Finally, let us highlight a few of the additional arguments one can supply to create_standard_report that allow further control over what gets reported.
Setting the link_to argument to a tuple of 'pkl', 'tex', and/or 'pdf' will create hyperlinks within the plots or below the tables of the HTML report, linking to Python pickle, LaTeX source, and PDF versions of the content, respectively. The Python pickle files for tables contain pickled pandas DataFrame objects, whereas those of plots contain ordinary Python dictionaries of the data that is plotted. This applies to HTML reports only.
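As a minimal sketch of making use of these files, a table's pickle can be loaded back with pandas. The file path below is hypothetical; the actual .pkl files are written alongside the report's other output and named after the corresponding table.

import pandas as pd

# Hypothetical path -- substitute an actual .pkl file from your report
# folder (written when link_to includes 'pkl').
df = pd.read_pickle("../tutorial_files/exampleBriefReport/someTable.pkl")
print(df)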
Setting the brevity argument to an integer higher than $0$ (the default) will reduce the amount of information included in the report (for details on what is included for each value, see the doc string). Using brevity > 0 will reduce the time required to create, and later load, the report, as well as the output file/folder size. This applies to both HTML and PDF reports.
Below, we demonstrate both of these options in a very brief (brevity=4) report with links to pickle and PDF files. Note that to generate the PDF files you must have pdflatex installed.
pygsti.report.create_standard_report(results_std,
                                     "../tutorial_files/exampleBriefReport",
                                     title="Example Brief Report",
                                     auto_open=True, verbosity=1,
                                     brevity=4, link_to=('pkl','pdf'))
*** Creating workspace *** *** Generating switchboard *** Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI Found standard clifford compilation from std1Q_XYI *** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots *** *** Merging into template file *** Output written to ../tutorial_files/exampleBriefReport directory Opening ../tutorial_files/exampleBriefReport/main.html... *** Report Generation Complete! Total time 60.318s ***
<pygsti.report.workspace.Workspace at 0x122e3c208>
create_report_notebook¶

In addition to the standard HTML-page reports demonstrated above, pyGSTi is able to generate a Jupyter notebook containing the Python commands to create the figures and tables within a general report. This is facilitated by Workspace objects, which are factories for figures and tables (see previous tutorials). By calling create_report_notebook, all of the relevant Workspace initialization and calls are dumped to a new notebook file, which can be run (either fully or partially) by the user at their convenience. Creating such "report notebooks" has the advantage that the user may insert Python code amidst the figure and table generation calls to inspect or modify what is displayed in a highly customizable fashion. The chief disadvantages of report notebooks are that they require the user to 1) have a Jupyter server up and running, and 2) run the notebook before any figures are displayed.
The line below demonstrates how to create a report notebook using create_report_notebook. Note that the argument list is very similar to create_general_report.
pygsti.report.create_report_notebook(results, "../tutorial_files/exampleReport.ipynb",
                                     title="GST Example Report Notebook", confidenceLevel=None,
                                     auto_open=True, connected=False, verbosity=3)
Report Notebook created as ../tutorial_files/exampleReport.ipynb
The dimension of the density matrix space with more than 2 qubits starts to become quite large, and Models for 3+ qubits rarely allow every element of the operation process matrices to vary independently. As such, many of the figures generated by create_standard_report are both too unwieldy (displaying a $64 \times 64$ grid of colored boxes for each operation) and not very helpful (you don't often care about what each element of an operation matrix is). For this purpose, we are developing a report that doesn't just dump out and analyze operation matrices as a whole, but looks at a Model's structure to determine how best to report quantities. This "n-qubit report" is invoked using pygsti.report.create_nqnoise_report, and has similar arguments to create_standard_report. It is, however, still under development, and while you're welcome to try it out, it may crash or misbehave in other ways.
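A call might look like the following. This is only a sketch: results_nq stands in for a Results object from a multi-qubit GST run, which this tutorial does not construct, and the argument list simply mirrors create_standard_report as described above.

# Hypothetical multi-qubit Results object -- not constructed in this tutorial.
pygsti.report.create_nqnoise_report(results_nq, "../tutorial_files/exampleNQubitReport",
                                    title="Example n-Qubit Report", auto_open=True,
                                    verbosity=1)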