This notebook gives an overview of using the excellent HDF5 data format for high performance computing, and of Plotly to graph data stored in these files. Plotly is a web-based graphing platform that lets you make and share interactive graphs and dashboards. You can use it for free online--sign up for an account here--or on-premise with Chart Studio Enterprise.
For those unfamiliar with the HDF5 file format: HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high-volume and complex data. HDF5 is portable and extensible, allowing applications to evolve in their use of HDF5. The HDF5 technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format.
The HDF Group gives some great reasons to use their format - namely that it works well with all kinds of data. You can read more here.
import pandas as pd
from IPython.display import display
import plotly.plotly as py # interactive graphing
from plotly.graph_objs import Bar, Scatter, Marker, Layout, Data, Figure, Heatmap, XAxis, YAxis
import plotly.tools as tls
import numpy as np
The dataset we'll be using comes from NYC's open data portal: a 100MB dataset covering traffic accidents in NYC. While this data fits comfortably into memory, the HDF5 file format has some unique affordances that let us query and save data in convenient ways.
The first thing we'll want to do is open an access point to this HDF5 file, which is simple because pandas provides a ready interface to HDF5 through PyTables.
pd.set_option('io.hdf.default_format','table')
store = pd.HDFStore('nypd_motors.h5')
Now that we've opened up our store, let's start storing some data
# df = pd.read_csv('NYPD_motor_collisions.csv', parse_dates=['DATE'])
# df.columns = [col.lower().replace(" ", "_") for col in df.columns]
# store.append("nypd", df,format='table',data_columns=True)
store
<class 'pandas.io.pytables.HDFStore'> File path: nypd_motors.h5 /nypd frame_table (typ->appendable,nrows->596990,ncols->29,indexers->[index],dc->[date,time,borough,zip_code,latitude,longitude,location,on_street_name,cross_street_name,off_street_name,number_of_persons_injured,number_of_persons_killed,number_of_pedestrians_injured,number_of_pedestrians_killed,number_of_cyclist_injured,number_of_cyclist_killed,number_of_motorist_injured,number_of_motorist_killed,contributing_factor_vehicle_1,contributing_factor_vehicle_2,contributing_factor_vehicle_3,contributing_factor_vehicle_4,contributing_factor_vehicle_5,unique_key,vehicle_type_code_1,vehicle_type_code_2,vehicle_type_code_3,vehicle_type_code_4,vehicle_type_code_5])
One of the nicest things about an HDF5 file is that it behaves like a miniature file system crossed with a key-value store: keys are hierarchical paths, so you can organize data much as you would in a file system, and the store is accessed like a Python dictionary.
store.get_storer("nypd")
store.select("nypd").head()
date | time | borough | zip_code | latitude | longitude | location | on_street_name | cross_street_name | off_street_name | ... | contributing_factor_vehicle_2 | contributing_factor_vehicle_3 | contributing_factor_vehicle_4 | contributing_factor_vehicle_5 | unique_key | vehicle_type_code_1 | vehicle_type_code_2 | vehicle_type_code_3 | vehicle_type_code_4 | vehicle_type_code_5 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2015-06-02 | 13:48 | MANHATTAN | 10038 | 40.711780 | -73.999701 | (40.7117796, -73.9997006) | ST JAMES PLACE | MADISON STREET | NaN | ... | Unspecified | NaN | NaN | NaN | 3232026 | VAN | VAN | NaN | NaN | NaN |
1 | 2015-06-02 | 13:40 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | ... | Turning Improperly | NaN | NaN | NaN | 3232021 | PASSENGER VEHICLE | SPORT UTILITY / STATION WAGON | NaN | NaN | NaN |
2 | 2015-06-02 | 13:40 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1200 WATERS PLACE - PARKING LOT | ... | Unspecified | NaN | NaN | NaN | 3232261 | PASSENGER VEHICLE | PASSENGER VEHICLE | NaN | NaN | NaN |
3 | 2015-06-02 | 13:40 | MANHATTAN | 10004 | 40.706701 | -74.016047 | (40.7067007, -74.0160467) | WEST STREET | MORRIS STREET | NaN | ... | Unspecified | NaN | NaN | NaN | 3232015 | UNKNOWN | PASSENGER VEHICLE | NaN | NaN | NaN |
4 | 2015-06-02 | 13:38 | NaN | NaN | NaN | NaN | NaN | WOOLLEY AVENUE | GURDON STREET | NaN | ... | Other Vehicular | NaN | NaN | NaN | 3233372 | PASSENGER VEHICLE | PASSENGER VEHICLE | NaN | NaN | NaN |
5 rows × 29 columns
boroughs = store.select("nypd", "columns=['borough']")
boroughs['COUNT'] = 1
borough_groups = boroughs.groupby('borough')
borough_groups.sum().index
Index([u'BRONX', u'BROOKLYN', u'MANHATTAN', u'QUEENS', u'STATEN ISLAND'], dtype='object')
data = Data([Bar(y=borough_groups.sum()['COUNT'], x=borough_groups.sum().index)])
layout = Layout(xaxis=XAxis(title="Borough"), yaxis=YAxis(title='Accident Count'))
fig = Figure(data=data, layout=layout)
py.iplot(fig)
dates_borough = store.select("nypd", "columns=['date', 'borough']").sort_values('date')
dates_borough['COUNT'] = 1
date_borough_sum = dates_borough.groupby(['borough', "date"]).sum()
date_borough_sum.head()
borough | date | COUNT
---|---|---
BRONX | 2012-07-01 | 39
BRONX | 2012-07-02 | 71
BRONX | 2012-07-03 | 73
BRONX | 2012-07-04 | 51
BRONX | 2012-07-05 | 60
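The `COUNT = 1` trick above is worth a quick aside: adding a column of ones and summing it per group is equivalent to counting group sizes directly. A small sketch with made-up rows:

```python
import pandas as pd

# Toy data standing in for the collision records.
df = pd.DataFrame({
    "borough": ["BRONX", "BRONX", "QUEENS"],
    "date": ["2012-07-01", "2012-07-02", "2012-07-01"],
})

# Adding a COUNT column of ones and summing it per group...
df["COUNT"] = 1
summed = df.groupby(["borough", "date"]).sum()

# ...gives the same numbers as counting group sizes directly.
sized = df.groupby(["borough", "date"]).size()
print(summed["COUNT"].tolist())  # [1, 1, 1]
```

The `COUNT` column is convenient here because the summed result keeps a named column that plots directly.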
data = []
for g, df in date_borough_sum.reset_index().groupby('borough'):
data.append(Scatter(x= df.date, y=df.COUNT,name=g))
layout = Layout(xaxis=XAxis(title="Date"), yaxis=YAxis(title="Accident Count"))
py.iplot(Figure(data=Data(data), layout=layout), filename='nypd_crashes/over_time')
While this graph is a bit of a mess, we can still zoom in on specific times and ranges. This makes Plotly great for exploring datasets: you can create a high-level visual of the data, then zoom in to a more detailed view.
See below, where using the above graph I zoomed in on a particular point and annotated it for future investigation.
tls.embed("https://plotly.com/~bill_chambers/274")
car_types = store.select("nypd", "columns=['vehicle_type_code_1', 'vehicle_type_code_2']")
car_types['COUNT'] = 1
code_1 = car_types.groupby('vehicle_type_code_1').sum()
code_2 = car_types.groupby('vehicle_type_code_2').sum()
data = Data([
Bar(x=code_1.index, y=code_1.COUNT,name='First Vehicle Type'),
Bar(x=code_2.index, y=code_2.COUNT,name='Second Vehicle Type')
])
py.iplot(Figure(data=data, layout=Layout(barmode='group', yaxis=YAxis(title="Vehicle Incidents"))))
No big surprises here: passenger vehicles, likely being the most prevalent vehicles on the road, are involved in the most accidents as both the first and second vehicle. However, this raises a more interesting question: does this pattern hold within each vehicle class? That is, do all kinds of vehicles hit all other kinds of vehicles at roughly the same frequency?
Let's explore large commercial vehicles.
large_vehicles = car_types.groupby(
'vehicle_type_code_1'
).get_group(
'LARGE COM VEH(6 OR MORE TIRES)'
).groupby('vehicle_type_code_2').sum()
data = Data([Bar(x=large_vehicles.index,y=large_vehicles.COUNT)])
py.iplot(Figure(data=data, layout=Layout(yaxis=YAxis(title="Incident Per Vehicle Type"))))
At first glance the distribution looks similar, but it's worth more exploration - let's z-score the data and compare the scores.
large_vehicles.head()
vehicle_type_code_2 | COUNT
---|---
AMBULANCE | 9
BICYCLE | 98
BUS | 151
FIRE TRUCK | 6
LARGE COM VEH(6 OR MORE TIRES) | 878
code_2.head()
vehicle_type_code_2 | COUNT
---|---
AMBULANCE | 842
BICYCLE | 13891
BUS | 8935
FIRE TRUCK | 412
LARGE COM VEH(6 OR MORE TIRES) | 10299
def z_score(df):
df['zscore'] = ((df.COUNT - df.COUNT.mean())/df.COUNT.std())
return df
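To see what `z_score` does, here's the same formula applied to a toy `COUNT` column (made-up values, chosen so the result is easy to check by hand):

```python
import pandas as pd

def z_score(df):
    # Center each count by the mean, scale by the sample standard deviation.
    df['zscore'] = ((df.COUNT - df.COUNT.mean()) / df.COUNT.std())
    return df

# [1, 2, 3] has mean 2 and sample std 1, so the z-scores are -1, 0, 1.
toy = z_score(pd.DataFrame({"COUNT": [1, 2, 3]}))
print(toy.zscore.tolist())  # [-1.0, 0.0, 1.0]
```

Putting both series on a common scale like this is what makes the grouped bars below comparable despite their very different magnitudes.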
data = Data([
Bar(x=z_score(code_2).index,y=z_score(code_2).zscore, name='All Vehicles'),
Bar(x=z_score(large_vehicles).index,y=z_score(large_vehicles).zscore,name='Large Vehicles'),
])
py.iplot(Figure(data=data, layout=Layout(yaxis=YAxis(title="Incident Per Vehicle Type"))), filename='nypd_crashes/large vs all vehicles')
We can see that things are relatively similar, except that large vehicles seem to hit large vehicles much more than most others. This could warrant further investigation.
While grouped bar charts can be useful for these kinds of comparisons, it can be great to visualize this data with heatmaps as well. We can create one of these by creating a contingency table, or cross tabulation.
cont_table = pd.crosstab(car_types['vehicle_type_code_1'], car_types['vehicle_type_code_2']).apply(np.log)
Because the counts span several orders of magnitude, I log-scaled the table.
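One caveat worth knowing about plain `np.log` here: any vehicle pair that never co-occurs produces a zero cell, and `log(0)` is `-inf`. A small sketch with made-up pairs shows the issue and one way around it (`np.log1p`, which maps 0 to 0):

```python
import numpy as np
import pandas as pd

# Made-up vehicle pairs; VAN never appears as the first vehicle
# against TAXI, so that crosstab cell is 0.
df = pd.DataFrame({
    "v1": ["TAXI", "TAXI", "VAN"],
    "v2": ["TAXI", "VAN", "VAN"],
})
table = pd.crosstab(df["v1"], df["v2"])

with np.errstate(divide="ignore"):
    print(np.isinf(np.log(table.values)).any())  # True: the zero cell becomes -inf

# log1p keeps every cell finite while still compressing large counts.
scaled = table.apply(np.log1p)
print(np.isfinite(scaled.values).all())  # True
```

In a Plotly heatmap, `-inf` cells simply render as gaps, which may or may not be what you want; `log1p` is the safer default when zeros are present.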
py.iplot(Data([
Heatmap(z = cont_table.values, x=cont_table.columns, y=cont_table.index, colorscale='Jet')
]),filename='nypd_crashes/vehicle to vehicle heatmap')
With this we can see more interesting nuances in the data. For instance, taxis seem to have lots of accidents with other taxis, while vans and station wagons also appear in many accidents with each other.
There's clearly a lot to explore in this dataset.