Dask DataFrames can read and store data in many of the same formats as pandas DataFrames. In this example we read and write data with the popular CSV and Parquet formats, and discuss best practices when using these formats.
from IPython.display import YouTubeVideo

YouTubeVideo("0eEsIA0O1iE")
Starting the Dask Client is optional. It will provide a dashboard, which is useful to gain insight into the computation.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
from dask.distributed import Client

client = Client(n_workers=1, threads_per_worker=4, processes=True, memory_limit='2GB')
client
import dask

df = dask.datasets.timeseries()
df
import os
import datetime

if not os.path.exists('data'):
    os.mkdir('data')

def name(i):
    """Provide date for filename given index

    Examples
    --------
    >>> name(0)
    '2000-01-01'
    >>> name(10)
    '2000-01-11'
    """
    return str(datetime.date(2000, 1, 1) + i * datetime.timedelta(days=1))

df.to_csv('data/*.csv', name_function=name);
We now have many CSV files in our data directory, one for each day in the month of January 2000. Each CSV file holds timeseries data for that day. We can read all of them as one logical dataframe using the dd.read_csv function with a glob string.
!ls data/*.csv | head
We can read one file with pandas.read_csv:

import pandas as pd

df = pd.read_csv('data/2000-01-01.csv')
df.head()

or many files at once with dd.read_csv:

import dask.dataframe as dd

df = dd.read_csv('data/2000-*-*.csv')
df
The read_csv function has many options to help you parse files. The Dask version uses the pandas function internally, and so supports many of the same options. You can use the ? operator to see the full documentation string.
In this case we use the parse_dates keyword to parse the timestamp column to be a datetime. This will make things more efficient in the future. Notice that the dtype of the timestamp column has changed from object to datetime64[ns].
df = dd.read_csv('data/2000-*-*.csv', parse_dates=['timestamp'])
df
Whenever we operate on our dataframe we read through all of our CSV data so that we don't fill up RAM. This is very efficient for memory use, but reading through all of the CSV files every time can be slow.
df = dd.read_parquet('data/2000-01.parquet', engine='pyarrow')
df
Parquet is a column-store, which means that it can efficiently pull out only a few columns from your dataset. This is good because it helps to avoid unnecessary data loading.
%%time
df = dd.read_parquet('data/2000-01.parquet', columns=['name', 'x'], engine='pyarrow')
df.groupby('name').x.mean().compute()
Here the difference is not that large, but with larger datasets this can save a great deal of time.