'''
Pandas is one of the most powerful contributions to Python for quick and easy data analysis. Data science is dominated by
one common data structure - the table. Python never had a great native way to manipulate tables in the ways many analysts
are used to (anyone familiar with spreadsheets or relational databases will recognize them). The basic pandas data structure
is the DataFrame which, if you are an R user, should sound familiar.
This module is a very high-level treatment of the basic data operations one typically uses when manipulating tables in Python.
To really learn all of the details, refer to the book.
'''
#To import
import pandas as pd #It's common to use pd as the abbreviation
from pandas import Series, DataFrame #Wes McKinney recommends importing these separately - they are used so often that it's convenient to have them in the local namespace
'''
The Series - for this module we'll skip the Series (see the book for details), but we will define it. A Series is a
one-dimensional, array-like object that holds an array of values plus an index, which labels the entries. Once we present
the DataFrame, you can think of a Series as similar to a DataFrame with just one column.
'''
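'''
A quick Series sketch (the values and labels below are made up purely for illustration):
'''
s = Series([1.5, 1.7, 3.6], index=['a', 'b', 'c'])
s['b']     #Label-based lookup returns 1.7
s.values   #The underlying array of values
s.index    #The index object holding the labels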
'''
A simple example of the DataFrame - building one from a dictionary
(note that for this to work, each list has to be the same length)
'''
data = {'state':['OH', 'OH', 'OH', 'NV', 'NV'],
'year':[2000, 2001, 2002, 2001, 2002],
'pop':[1.5, 1.7, 3.6, 2.4, 2.9]}
frame = pd.DataFrame(data) #The constructor turns the dict into a DataFrame. Notice that the keys become columns and a default integer index is created
frame
| | pop | state | year |
|---|---|---|---|
| 0 | 1.5 | OH | 2000 |
| 1 | 1.7 | OH | 2001 |
| 2 | 3.6 | OH | 2002 |
| 3 | 2.4 | NV | 2001 |
| 4 | 2.9 | NV | 2002 |
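'''
You can also control the column order and supply your own row labels when constructing the frame
(a small sketch; the index labels below are arbitrary):
'''
pd.DataFrame(data, columns=['year', 'state', 'pop'], index=['one', 'two', 'three', 'four', 'five'])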
#To retrieve a column, use dict-like notation or the column name as an attribute
frame['state'], frame.state
(0    OH
 1    OH
 2    OH
 3    NV
 4    NV
 Name: state, dtype: object,
 0    OH
 1    OH
 2    OH
 3    NV
 4    NV
 Name: state, dtype: object)
#To retrieve a row, you can slice the frame like a list, or select by its index label with .loc (the older .ix accessor has been removed from pandas)
frame[1:2], frame.loc[1]
(   pop state  year
 1  1.7    OH  2001,
 pop      1.7
 state     OH
 year    2001
 Name: 1, dtype: object)
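'''
A short aside (not part of the original example): .loc selects rows by index label while .iloc selects by integer
position. With the default 0..4 index they happen to return the same row here.
'''
frame.loc[1]   #Row whose index label is 1
frame.iloc[1]  #Second row by position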
#Assigning a new column is easy too
frame['big_pop'] = (frame['pop']>3)
frame
| | pop | state | year | big_pop |
|---|---|---|---|---|
| 0 | 1.5 | OH | 2000 | False |
| 1 | 1.7 | OH | 2001 | False |
| 2 | 3.6 | OH | 2002 | True |
| 3 | 2.4 | NV | 2001 | False |
| 4 | 2.9 | NV | 2002 | False |
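'''
Removing a column is just as easy. A quick sketch - drop returns a new frame and leaves the original untouched;
del frame['big_pop'] would remove the column in place instead.
'''
frame.drop('big_pop', axis=1)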
'''
One operation on data that comes up often enough to highlight here is sorting
'''
import numpy as np
df = pd.DataFrame(np.random.randn(10,1), columns = ['Rand1'])
df['OrigOrd'] = df.index.values
df = df.sort_values(by='Rand1', ascending=False) #Sorting by a particular column (sort_index(by=...) is the older spelling and no longer works)
df
| | Rand1 | OrigOrd |
|---|---|---|
| 2 | 0.643863 | 2 |
| 4 | 0.298427 | 4 |
| 7 | 0.118110 | 7 |
| 5 | -0.220349 | 5 |
| 9 | -0.393215 | 9 |
| 0 | -0.452396 | 0 |
| 3 | -0.502908 | 3 |
| 6 | -0.585668 | 6 |
| 1 | -0.987663 | 1 |
| 8 | -1.774430 | 8 |
df = df.sort_index() #Now sorting back, using the index
df
| | Rand1 | OrigOrd |
|---|---|---|
| 0 | -0.452396 | 0 |
| 1 | -0.987663 | 1 |
| 2 | 0.643863 | 2 |
| 3 | -0.502908 | 3 |
| 4 | 0.298427 | 4 |
| 5 | -0.220349 | 5 |
| 6 | -0.585668 | 6 |
| 7 | 0.118110 | 7 |
| 8 | -1.774430 | 8 |
| 9 | -0.393215 | 9 |
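'''
sort_values also accepts a list of columns (and a matching list of ascending flags) for multi-key sorts.
A small sketch on the same df:
'''
df.sort_values(by=['Rand1', 'OrigOrd'], ascending=[False, True])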
'''
Some of the real power we are after is the ability to condense, merge and concatenate data sets. This is where we
want Python to have the same data munging functionality we usually get from executing SQL statements on relational
databases.
'''
alpha = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df1 = DataFrame({'rand_float':np.random.randn(10), 'key':alpha})
df2 = DataFrame({'rand_int':np.random.randint(0, 5, size = 10), 'key':alpha})
'''
So we have two DataFrames that share key values (in this case every key appears in both). We want to combine them. In SQL
we would execute a join, such as: SELECT * FROM table1 a JOIN table2 b ON a.key = b.key;
'''
df_merge = pd.merge(df1,df2,on='key')
df_merge
| | key | rand_float | rand_int |
|---|---|---|---|
| 0 | a | -1.253479 | 2 |
| 1 | b | 1.285238 | 1 |
| 2 | c | 0.312605 | 2 |
| 3 | d | -1.128266 | 2 |
| 4 | e | -0.096089 | 0 |
| 5 | f | -0.920994 | 3 |
| 6 | g | -1.790160 | 4 |
| 7 | h | -0.273019 | 4 |
| 8 | i | -0.922280 | 3 |
| 9 | j | -0.063663 | 0 |
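'''
pd.merge defaults to an inner join; the how argument gives the other SQL join types, and pd.concat handles the
concatenation mentioned above. A minimal sketch using the frames already defined (axis=0 appends rows, axis=1 lines
columns up side by side):
'''
pd.merge(df1, df2, on='key', how='outer')                       #Keep keys that appear in either frame
pd.concat([df1, df1], ignore_index=True)                        #Stack rows; ignore_index renumbers them 0..19
pd.concat([df1.set_index('key'), df2.set_index('key')], axis=1) #Column-wise, aligned on the shared key index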
'''
Now that we have this merged table, we might want to summarize it within a key grouping
'''
df_merge.groupby('rand_int').mean()
| rand_int | rand_float |
|---|---|
| 0 | -0.079876 |
| 1 | 1.285238 |
| 2 | -0.689713 |
| 3 | -0.921637 |
| 4 | -1.031590 |
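'''
Other reductions work the same way - a quick sketch using standard groupby methods on the same frame:
'''
df_merge.groupby('rand_int').size()               #Row counts per group
df_merge.groupby('rand_int')['rand_float'].max()  #Max of a single column per group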
'''
You can apply several aggregation functions at once by passing a list to .agg - the syntax isn't the same as calling a single method like .mean()
'''
df_merge.groupby('rand_int')['rand_float'].agg(['sum', 'mean', len, 'std']) #Select the numeric column, then pass a list of aggregations
| rand_int | sum | mean | len | std |
|---|---|---|---|---|
| 0 | -0.159752 | -0.079876 | 2 | 0.022929 |
| 1 | 1.285238 | 1.285238 | 1 | NaN |
| 2 | -2.069140 | -0.689713 | 3 | 0.870288 |
| 3 | -1.843274 | -0.921637 | 2 | 0.000909 |
| 4 | -2.063179 | -1.031590 | 2 | 1.072780 |
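'''
If different columns need different functions, .agg also accepts a dict mapping column names to a function or a
list of functions. A small sketch using the columns of df_merge:
'''
df_merge.groupby('rand_int').agg({'rand_float': ['mean', 'std'], 'key': 'count'})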