This first tutorial explains a little bit about what the data are and where to get them.
For the Code Challenge, you will be using the "primary" data set, as we've called it. The primary data set is:
* a labeled data set of 350,000 simulated signals
* 7 different labels, or "signal classifications"
* a total of about 128 GB of data
This data set should be used to train your models, though you do not need to consume the entire set if you do not want to.
There are also small and medium sized subsets of these primary data files.
Each data file has a simple format:
* file name = <UUID>.dat
* a JSON header in the first line that contains:
* UUID
* signal_classification (label)
* followed by a stream of complex-valued time-series data.
The ibmseti Python package is available to assist in reading this data and performing some basic operations for you.
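For instance, here is a minimal sketch of reading a single downloaded file with ibmseti (the file name below is only a placeholder; substitute a real <UUID>.dat that you have pulled down):
import ibmseti
# Placeholder path -- substitute a real <UUID>.dat file you have downloaded.
with open('my_data_folder/some-uuid.dat', 'rb') as f:
    aca = ibmseti.compamp.SimCompamp(f.read())
print(aca.header())                      # JSON header with the UUID and signal_classification
spectrogram = aca.get_spectrogram()      # spectrogram of the time-series data (a 2D numpy array)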
There is also a second, simple and clean data set that you may use for warmup, which we call the "basic" data set. This basic set should be used as a sanity check and for very early-stage prototyping. We recommend that everybody starts with this.
* Only 4 different signal classifications
* 1000 simulation files for each class: 4000 files total
* Available as single zip file
* ~1 GB in total.
The difference between the basic and primary data sets is that the signals simulated in the basic set have, on average, a much higher signal-to-noise ratio (they are larger-amplitude signals). They also have other characteristics that make the different signal classes very distinguishable, so you should be able to get very high signal classification accuracy with the basic data set. The primary data set has smaller-amplitude signals, and the classes can look more similar to each other, making high classification accuracy more difficult. There are also only 4 classes in the basic data set, versus 7 classes in the primary set.
* The primary small set is a subset of the full primary data set. Use it for early-stage prototyping.
* The primary medium set is a subset of the full primary data set. Use it for early-stage prototyping & model building.
* The primary full set is the entire primary data set. Use it only if you want an enormous training data set; you will need a small data center to process these data in a reasonable amount of time.
For all data sets, there exists an index file, which is a CSV file. Each row holds the UUID and signal_classification (label) for one simulation file in the data set. You can use these index files in a few different ways, from keeping track of your downloads to facilitating parallelization of your analysis on Spark.
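As a minimal sketch (assuming you have already downloaded one of the index files with the commands shown later in this notebook, and that each row is the UUID followed by the label, with a header row as in the download code below):
import csv
from collections import Counter
index_file = 'my_data_folder/public_list_primary_v2_small_1june_2017.csv'
with open(index_file, 'r') as f:
    reader = csv.reader(f)
    next(reader)                       # skip the header row
    rows = [r for r in reader if r]    # each row is [uuid, signal_classification]
print(Counter(r[1] for r in rows))     # number of files per class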
Data:
There is one primary_test data set. Each data file is the same as the above training data, except the JSON header does NOT contain the 'signal_classification' key.
See the Judging Criteria notebook for information on submitting your test-set classifications.
The data are stored in containers on IBM Object Storage. You can access these data with HTTP calls. Here we use the system-level curl command, but you could easily use the Python requests package.
The URL for all data files is composed of base_url/container/objectname. The base_url is:
#If you are running this in IBM Apache Spark (via Data Science Experience)
base_url = 'https://dal05.objectstorage.service.networklayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b'
#ELSE, if you are outside of IBM:
#base_url = 'https://dal.objectstorage.open.softlayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b'
#NOTE: if you are outside of IBM, pulling down data will be slower. :/
#Defining a local data folder to dump data
import os
mydatafolder = os.path.join( os.environ['PWD'], 'my_data_folder' )
if not os.path.exists(mydatafolder):
    os.makedirs(mydatafolder)
We'll start with the basic data set. Because the basic data set is small, we've created a .zip file of the full basic set that you can download directly.
import os
basic_container = 'simsignals_basic_v2'
basic4_zip_file = 'basic4.zip'
os.system('curl {}/{}/{} > {}'.format(base_url, basic_container, basic4_zip_file, mydatafolder + '/' + basic4_zip_file))
!ls -al my_data_folder/basic4.zip
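Once the download finishes, you can extract the archive with the standard zipfile module (a minimal sketch; inspect the extracted folder afterwards, since the layout inside the zip is not described here):
import zipfile
with zipfile.ZipFile(mydatafolder + '/' + basic4_zip_file, 'r') as z:
    z.extractall(mydatafolder + '/basic4')   # the 4000 basic .dat files
The primary_small subset is also packaged as a single zip file and can be downloaded in the same way: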
filename = 'primary_small.zip'
primary_small_url = '{}/simsignals_v2_zipped/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(primary_small_url, mydatafolder +'/'+filename))
There is also a CSV file containing the UUID and signal classification for each file in the primary_small subset:
filename = 'public_list_primary_v2_small_1june_2017.csv'
primary_small_csv_url = '{}/simsignals_files/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(primary_small_csv_url, mydatafolder +'/'+filename))
Similarly, the primary_medium subset can be found in a handful of zip files:
med_N = '{}/simsignals_v2_zipped/primary_medium_{}.zip'
for i in range(1,7):
med_url = med_N.format(base_url, i)
output_file = mydatafolder + '/primary_medium_{}.zip'.format(i)
print 'GETing', output_file
os.system('curl {} > {}'.format(med_url, output_file ))
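If you want to extract those archives as well, the same zipfile approach works in a loop (a sketch, assuming all six downloads above completed successfully):
import zipfile
for i in range(1, 7):
    zip_path = mydatafolder + '/primary_medium_{}.zip'.format(i)
    with zipfile.ZipFile(zip_path, 'r') as z:
        z.extractall(mydatafolder + '/primary_medium')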
Here too, there is a CSV file containing the UUID and signal classification for each file in the primary_medium subset:
filename = 'public_list_primary_v2_medium_1june_2017.csv'
med_csv_url = '{}/simsignals_files/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(med_csv_url, mydatafolder +'/'+filename))
Because the full set is so incredibly large, these 350,000 files are only available individually on Object Storage.
The primary_full list can be found here:
filename = 'public_list_primary_v2_full_1june_2017.csv'
prim_full = '{}/simsignals_files/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(prim_full, mydatafolder +'/'+filename))
One can download this list and begin to pull down files individually if desired. Be warned, however, that this will take approximately a billion years if you are not running on IBM Apache Spark -- IBM Apache Spark and Object Storage exist in the same data center and share a fast network connection.
The data files are found at base_url/simsignals_v2/<uuid>.dat
For example:
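(The UUID below is just a placeholder; substitute one taken from the index file.)
# Placeholder UUID -- replace with a real value from the full index CSV.
example_uuid = 'replace-with-a-uuid-from-the-index-file'
example_url = '{}/simsignals_v2/{}.dat'.format(base_url, example_uuid)
os.system('curl {} > {}'.format(example_url, mydatafolder + '/' + example_uuid + '.dat'))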
We are working to make the primary full data set more easily available, as this current setup is less than ideal. You will be notified if that becomes available. The data will be directly available for participants of the hackathon, however.
If you wish to programmatically begin to download the full data set you may use the following code.
import requests
import copy
file_list_container = 'simsignals_files'
file_list = 'public_list_primary_v2_full_1june_2017.csv'
primary_data_container = 'simsignals_v2'
r = requests.get('{}/{}/{}'.format(base_url, file_list_container, file_list), timeout=(9.0, 21.0))
filecontents = copy.copy(r.content)
full_primary_files = [line.split(',') for line in filecontents.split('\n')]
full_primary_files = full_primary_files[1:-1] #strip the header and empty last element
full_primary_files = map(lambda x: x[0]+".dat", full_primary_files) #now list of file names (<uuid>.dat)
#save your data into a local subfolder
save_to_folder = mydatafolder + '/primary_data_set'
if not os.path.exists(save_to_folder):
    os.mkdir(save_to_folder)
count = 0
total = len(full_primary_files)
for row in full_primary_files:
r = requests.get('{}/{}/{}'.format(base_url, primary_data_container, row), timeout=(9.0, 21.0))
if count % 100 == 0:
print 'done ', count, ' out of ', total
count += 1
with open('{}/{}'.format(save_to_folder, row), 'w' ) as fout:
fout.write(r.content)
This will be a difficult data set to consume and process if you are using free-tier levels of software from any cloud provider. You will likely want a robust machine, or set of machines, with many threads and GPUs if you want to train models with such a large data set.
For example, if you have access to an IBM Spark Enterprise cluster, because the network connection between IBM Spark and IBM Object Storage is so fast, we recommend that you do NOT download each file. Instead you could parallelize the index file and then retrieve and process each file on a worker node.
## Using Spark -- can parallelize the job across your worker nodes
import ibmseti
def retrieve_and_process(row):
try:
r = requests.get('{}/{}/{}'.format(base_url, primary_data_container, row), timeout=(9.0, 21.0))
except Exception as e:
return (row, 'failed', [])
aca = ibmseti.compamp.SimCompamp(r.content)
spectrogram = aca.get_spectrogram() # or do something else
features = my_feature_extractor(spectrogram) #example external function for reducing the spectrogram into a handful of features, perhaps
    signal_class = aca.header()['signal_classification']
return (row, signal_class, features)
npartitions = 60
rdd = sc.parallelize(full_primary_files, npartitions)
#Now ask Spark to run the job
process_results = rdd.map(retrieve_and_process).collect()
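A possible follow-up, sketched here, is to persist the collected (file name, label, features) tuples locally so you can train a model without re-processing the raw files. Remember that my_feature_extractor above is only a placeholder for your own feature code.
import csv
with open(mydatafolder + '/primary_features.csv', 'w') as fout:
    writer = csv.writer(fout)
    for fname, signal_class, features in process_results:
        writer.writerow([fname, signal_class] + list(features))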
Once you've trained your model, done all of your testing and tweaking, and are ready to submit an entry to the contest, you'll need to download the test data set and apply your model to it.
The test data set is similar to the labeled data, except that the JSON header is missing the 'signal_classification' key, and just contains the 'uuid'.
Like the other sets, this set is found as a .zip file in the simsignals_v2_zipped container:
filename = 'primary_testset.zip'
test_set_url = '{}/simsignals_v2_zipped/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(test_set_url, mydatafolder +'/'+filename))
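After extracting the archive, you can verify that the test files really do omit the label (a sketch; the file name below is a placeholder, and the folder layout inside the zip may differ):
import zipfile
import ibmseti
with zipfile.ZipFile(mydatafolder + '/' + filename, 'r') as z:
    z.extractall(mydatafolder + '/primary_testset')
# Placeholder file name -- use any extracted <uuid>.dat file.
with open(mydatafolder + '/primary_testset/replace-with-a-test-uuid.dat', 'rb') as f:
    aca = ibmseti.compamp.SimCompamp(f.read())
print(aca.header())   # contains only the 'uuid' -- no 'signal_classification' key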
There are approximately 1000 simulations of each of the 7 signal classes -- but not exactly 1000 (plus or minus some largish number), so you can't cheat :).
See the Judging Criteria document for more details. An index file listing the files in the test set is also available:
filename = 'public_list_primary_testset_1k_1june_2017.csv'
test_set_csv_url = '{}/simsignals_files/{}'.format(base_url, filename)
os.system('curl {} > {}'.format(test_set_csv_url, mydatafolder + '/' + filename))