This notebook shows how to use the extract_runs_into_db function to extract runs from a database (DB) file (the source DB) into another DB file (the target DB). If the target DB does not exist, it will be created. The runs are NOT removed from the original DB file; they are copied over.
Let us set up a DB file with some runs in it.
import os
import numpy as np
from qcodes.dataset.database_extract_runs import extract_runs_into_db
from qcodes.dataset.experiment_container import Experiment, load_experiment_by_name
from qcodes.tests.instrument_mocks import DummyInstrument
from qcodes.dataset.measurements import Measurement
from qcodes import Station
# The following function is imported and used here only for the sake
# of explicitness. As a qcodes user, please, consider this function
# private to qcodes which means its name, behavior, and location may
# change without notice between qcodes versions.
from qcodes.dataset.sqlite.database import connect
source_path = os.path.join(os.getcwd(), "extract_runs_notebook_source.db")
target_path = os.path.join(os.getcwd(), "extract_runs_notebook_target.db")
source_conn = connect(source_path)
target_conn = connect(target_path)
Upgrading database; v0 -> v1 … v8 -> v9 (schema-upgrade progress bars for the two newly created DB files, abbreviated)
exp = Experiment(
    name="extract_runs_experiment", sample_name="no_sample", conn=source_conn
)
my_inst = DummyInstrument("my_inst", gates=["voltage", "current"])
station = Station(my_inst)
meas = Measurement(exp=exp)
meas.register_parameter(my_inst.voltage)
meas.register_parameter(my_inst.current, setpoints=(my_inst.voltage,))
# Add 10 runs with gradually more and more data
for run_id in range(1, 11):
    with meas.run() as datasaver:
        for step, noise in enumerate(np.random.randn(run_id)):
            datasaver.add_result((my_inst.voltage, step), (my_inst.current, noise))
Starting experimental run with id: 1.
Starting experimental run with id: 2.
Starting experimental run with id: 3.
Starting experimental run with id: 4.
Starting experimental run with id: 5.
Starting experimental run with id: 6.
Starting experimental run with id: 7.
Starting experimental run with id: 8.
Starting experimental run with id: 9.
Starting experimental run with id: 10.
Now let us extract runs 3 and 7 into our desired target DB file. All runs must come from the same experiment. To extract runs from different experiments, one may call the function several times.
The function will look in the target DB to see if an experiment with matching attributes already exists. If not, such an experiment is created.
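Conceptually, this find-or-create step can be sketched in plain Python. The following is a toy model, not the qcodes implementation; the function name and the attribute set used for matching are illustrative only:

```python
# Hypothetical sketch of "find or create a matching experiment" logic.
# An experiment is matched on its attributes (here: name and sample_name)
# in the target DB, and created only if no match exists.

def find_or_create_experiment(target_experiments, name, sample_name):
    """Return the matching experiment record, creating it if none matches."""
    for experiment in target_experiments:
        if experiment["name"] == name and experiment["sample_name"] == sample_name:
            return experiment
    new_exp = {"name": name, "sample_name": sample_name, "runs": []}
    target_experiments.append(new_exp)
    return new_exp

target = []
exp1 = find_or_create_experiment(target, "extract_runs_experiment", "no_sample")
exp2 = find_or_create_experiment(target, "extract_runs_experiment", "no_sample")
assert exp1 is exp2   # the second call reuses the existing experiment
assert len(target) == 1
```

This is why extracting several runs from the same experiment, even across separate calls, lands them all in a single experiment in the target DB.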
extract_runs_into_db(source_path, target_path, 3, 7)
target_exp = load_experiment_by_name(name="extract_runs_experiment", conn=target_conn)
target_exp
extract_runs_experiment#no_sample#1@C:\Users\jenielse\source\repos\Qcodes\docs\examples\DataSet\extract_runs_notebook_target.db
-------------------------------------------------------------------------------------------------------------------------------
1-results-1-my_inst_voltage,my_inst_current-3
2-results-2-my_inst_voltage,my_inst_current-7
The last number printed in each line is the number of data points. As expected, we get 3 and 7.
Note that the runs will have different run_ids in the new database. Their GUIDs are, however, the same (as they must be).
exp.data_set(3).guid
'aaaaaaaa-0000-0000-0000-017ea07e4a31'
target_exp.data_set(1).guid
'aaaaaaaa-0000-0000-0000-017ea07e4a31'
Furthermore, note that the original run_id is preserved as captured_run_id. We will demonstrate below how to look up data via the captured_run_id.
target_exp.data_set(1).captured_run_id
3
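What happens to the identifiers during extraction can be modeled with a small sketch. This is a toy model, not qcodes internals; the dict layout and helper name are made up for illustration:

```python
# Toy model of what extraction does to identifiers: the run keeps its GUID,
# its original run_id is remembered as captured_run_id, and it receives a
# fresh, sequential run_id in the target database.

def extract_run(source_db, target_db, run_id):
    run = dict(source_db[run_id])      # copy; the source DB is untouched
    run["captured_run_id"] = run_id    # remember the original run_id
    new_run_id = len(target_db) + 1    # target run_ids are assigned sequentially
    target_db[new_run_id] = run
    return new_run_id

source = {i: {"guid": f"guid-{i:04d}"} for i in range(1, 11)}
target = {}
extract_run(source, target, 3)
extract_run(source, target, 7)

assert target[1]["guid"] == source[3]["guid"]  # GUID preserved
assert target[1]["captured_run_id"] == 3       # original run_id kept
assert target[2]["captured_run_id"] == 7
assert 3 in source and 7 in source             # runs are copied, not moved
```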
There are occasions where it is convenient to combine data from several databases.
Let's first demonstrate this by creating some new experiments in another db file.
extra_source_path = os.path.join(os.getcwd(), "extract_runs_notebook_source_aux.db")
source_extra_conn = connect(extra_source_path)
Upgrading database; v0 -> v1 … v8 -> v9 (schema-upgrade progress bars for the newly created DB file, abbreviated)
exp = Experiment(
    name="extract_runs_experiment_aux", sample_name="no_sample", conn=source_extra_conn
)
meas = Measurement(exp=exp)
meas.register_parameter(my_inst.current)
meas.register_parameter(my_inst.voltage, setpoints=(my_inst.current,))
# Add 10 runs with gradually more and more data
for run_id in range(1, 11):
    with meas.run() as datasaver:
        for step, noise in enumerate(np.random.randn(run_id)):
            datasaver.add_result((my_inst.current, step), (my_inst.voltage, noise))
Starting experimental run with id: 1.
Starting experimental run with id: 2.
Starting experimental run with id: 3.
Starting experimental run with id: 4.
Starting experimental run with id: 5.
Starting experimental run with id: 6.
Starting experimental run with id: 7.
Starting experimental run with id: 8.
Starting experimental run with id: 9.
Starting experimental run with id: 10.
exp.data_set(3).guid
'aaaaaaaa-0000-0000-0000-017ea07e5986'
extract_runs_into_db(extra_source_path, target_path, 1, 3)
target_exp_aux = load_experiment_by_name(
    name="extract_runs_experiment_aux", conn=target_conn
)
The GUID should be preserved.
target_exp_aux.data_set(2).guid
'aaaaaaaa-0000-0000-0000-017ea07e5986'
And the original run_id is preserved as captured_run_id.
target_exp_aux.data_set(2).captured_run_id
3
As runs move from one database to another, uniquely identifying a run becomes non-trivial. Note how we now have two runs in the same DB sharing the same captured_run_id. This means that captured_run_id is not a unique key. We can demonstrate this by looking up the GUIDs that match this captured_run_id.
from qcodes.dataset.data_set import (
    load_by_guid,
    load_by_run_spec,
    get_guids_by_run_spec,
)
guids = get_guids_by_run_spec(conn=target_conn, captured_run_id=3)
guids
['aaaaaaaa-0000-0000-0000-017ea07e4a31', 'aaaaaaaa-0000-0000-0000-017ea07e5986']
load_by_guid(guids[0], conn=target_conn)
results #1@C:\Users\jenielse\source\repos\Qcodes\docs\examples\DataSet\extract_runs_notebook_target.db
------------------------------------------------------------------------------------------------------
my_inst_voltage - numeric
my_inst_current - numeric
load_by_guid(guids[1], conn=target_conn)
results #4@C:\Users\jenielse\source\repos\Qcodes\docs\examples\DataSet\extract_runs_notebook_target.db
------------------------------------------------------------------------------------------------------
my_inst_current - numeric
my_inst_voltage - numeric
To enable loading of runs that may share the same captured_run_id, the function load_by_run_spec is supplied. This function takes one or more optional pieces of metadata. If more than one run matches the given information, the metadata of the matching runs is printed and an error is raised. It is then possible to supply more information to the function to uniquely identify a specific run.
try:
    load_by_run_spec(captured_run_id=3, conn=target_conn)
except NameError:
    print("Caught a NameError")
captured_run_id  captured_counter  experiment_name              sample_name  sample_id   location  work_station
---------------  ----------------  ---------------------------  -----------  ----------  --------  ------------
3                3                 extract_runs_experiment      no_sample    2863311530  0         0
3                3                 extract_runs_experiment_aux  no_sample    2863311530  0         0
Caught a NameError
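The lookup behavior above can be sketched in plain Python. This is a hypothetical helper, not the qcodes implementation: any spec matching more than one run raises rather than silently returning one of them, and extra keys narrow the match.

```python
# Hypothetical sketch of spec-based lookup: a run is returned only if the
# given metadata identifies it uniquely; ambiguous specs raise an error.

def load_by_spec(runs, **spec):
    matches = [r for r in runs if all(r.get(k) == v for k, v in spec.items())]
    if len(matches) > 1:
        raise NameError(f"More than one run matches spec {spec}")
    if not matches:
        raise NameError(f"No run matches spec {spec}")
    return matches[0]

runs = [
    {"captured_run_id": 3, "experiment_name": "extract_runs_experiment"},
    {"captured_run_id": 3, "experiment_name": "extract_runs_experiment_aux"},
]

try:
    load_by_spec(runs, captured_run_id=3)   # ambiguous: two runs share id 3
except NameError:
    print("Caught a NameError")

# Adding experiment_name to the spec narrows the match to a single run.
run = load_by_spec(runs, captured_run_id=3,
                   experiment_name="extract_runs_experiment_aux")
assert run["experiment_name"] == "extract_runs_experiment_aux"
```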
To single out one of these two runs, we can thus specify the experiment_name:
load_by_run_spec(
    captured_run_id=3, experiment_name="extract_runs_experiment_aux", conn=target_conn
)
results #4@C:\Users\jenielse\source\repos\Qcodes\docs\examples\DataSet\extract_runs_notebook_target.db
------------------------------------------------------------------------------------------------------
my_inst_current - numeric
my_inst_voltage - numeric