We've seen that Pandas and Geopandas are excellent libraries for analyzing tabular "labeled data". Xarray is designed to make it easier to work with labeled multidimensional data. By multidimensional data (also often called N-dimensional), we mean data with many independent dimensions or axes. For example, we might represent Earth's surface temperature $T$ as a three-dimensional variable
$$ T(x, y, t) $$
where $x$ and $y$ are spatial dimensions and $t$ is time. By labeled, we mean data that has metadata associated with it describing the names and relationships between the variables. The cartoon below shows a "data cube" schematic dataset with temperature and precipitation sharing the same three dimensions, plus longitude and latitude as auxiliary coordinates.
Like Pandas, xarray has two fundamental data structures:

- a DataArray, which holds a single multi-dimensional variable and its coordinates
- a Dataset, which holds multiple variables that potentially share the same coordinates

A DataArray has four essential attributes:

- values: a numpy.ndarray holding the array's values
- dims: dimension names for each axis (e.g., ('x', 'y', 'z'))
- coords: a dict-like container of arrays (coordinates) that label each point (e.g., 1-dimensional arrays of numbers, datetime objects or strings)
- attrs: an OrderedDict to hold arbitrary metadata (attributes)

A Dataset is simply an object containing multiple DataArrays indexed by variable name.
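To make the four attributes concrete, here is a minimal sketch; the array contents, dimension names, coordinate labels, and the units attribute below are all made up for illustration:

```python
import numpy as np
import xarray as xr

# A small made-up DataArray to illustrate the four attributes
arr = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=["x", "y"],
    coords={"x": [10, 20], "y": ["a", "b", "c"]},
    attrs={"units": "dimensionless"},  # arbitrary metadata
)

print(type(arr.values))        # the underlying numpy.ndarray
print(arr.dims)                # dimension names: ('x', 'y')
print(arr.coords["x"].values)  # coordinate labels along x
print(arr.attrs["units"])      # attribute metadata
```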
Let's start by constructing some DataArrays manually
import numpy as np
import xarray as xr
print('Numpy version: ', np.__version__)
print('Xarray version: ', xr.__version__)
Here we model the simple function
$$ f(x) = \sin(x) $$
on the interval $-\pi$ to $\pi$. We start by creating the data as numpy arrays.
x = np.linspace(-np.pi, np.pi, 19)
f = np.sin(x)
Now we are going to put this into an xarray DataArray.
A simple DataArray without dimensions or coordinates isn't much use.
da_f = xr.DataArray(f)
da_f
We can add a dimension name...
da_f = xr.DataArray(f, dims=['x'])
da_f
But things get most interesting when we add a coordinate:
da_f = xr.DataArray(f, dims=['x'], coords={'x': x})
da_f
Xarray has built-in plotting, like pandas.
da_f.plot(marker='o')
We can always use regular numpy indexing and slicing on DataArrays to get the data back out.
# get the item at integer index 10
da_f[10]
# get the first 10 items
da_f[:10]
However, it is often much more powerful to use xarray's .sel() method for label-based indexing. This allows us to fetch values based on the value of the coordinate, not the numerical index.
da_f.sel(x=0)
da_f.sel(x=slice(0, np.pi)).plot()
When we perform mathematical manipulations of xarray DataArrays, the coordinates come along for the ride. Imagine we want to calculate
$$ g = f^2 + 1 $$
We can apply familiar numpy operations to xarray objects.
da_g = da_f**2 + 1
da_g
da_g.plot()
We can multiply da_f and da_g together and plot the result over part of the domain:
(da_f * da_g).sel(x=slice(-1, 1)).plot(marker='o')
If we are just dealing with 1D data, Pandas and Xarray have very similar capabilities. Xarray's real potential comes with multidimensional data.
At this point we will load data from a netCDF file into an xarray dataset.
%%bash
git clone https://github.com/pangeo-data/tutorial-data.git
ds = xr.open_dataset('./tutorial-data/sst/NOAA_NCDC_ERSST_v3b_SST-1960.nc')
ds
## Xarray > v0.14.1 has a new HTML output type!
xr.set_options(display_style="html")
ds
# both do the exact same thing
# dictionary syntax
sst = ds['sst']
# attribute syntax
sst = ds.sst
sst
In this example, we take advantage of the fact that xarray understands time to select a particular date
sst.sel(time='1960-06-15').plot(vmin=-2, vmax=30)
But we can select along any axis
sst.sel(lon=180).transpose().plot()
sst.sel(lon=180, lat=40).plot()
Usually the process of data analysis involves going from a big, multidimensional dataset to a few concise figures. Inevitably, the data must be "reduced" somehow. Examples of simple reduction operations include mean, max, min, std, etc. Xarray supports all of these and more, via a familiar numpy-like syntax. But with xarray, you can specify the reductions by dimension.
First we start with the default, reduction over all dimensions:
sst.mean()
sst_time_mean = sst.mean(dim='time')
sst_time_mean.plot(vmin=-2, vmax=30)
sst_zonal_mean = sst.mean(dim='lon')
sst_zonal_mean.transpose().plot()
sst_time_and_zonal_mean = sst.mean(dim=('time', 'lon'))
sst_time_and_zonal_mean.plot()
# some might prefer to have lat on the y axis
sst_time_and_zonal_mean.plot(y='lat')
The means we calculated above were "naive"; they were straightforward numerical means over the different dimensions of the dataset. They did not account, for example, for the spherical geometry of the globe and the necessary weighting factors. Although xarray is very useful for geospatial analysis, it has no built-in understanding of geography.
Below we show how to create a proper weighted mean by using the formula for the area element in spherical coordinates. This is a good illustration of several xarray concepts.
The area element for lat-lon coordinates is
$$ \delta A = R^2 \delta \phi \delta \lambda \cos(\phi) $$where $\phi$ is latitude, $\delta \phi$ is the spacing of the points in latitude, $\delta \lambda$ is the spacing of the points in longitude, and $R$ is Earth's radius. (In this formula, $\phi$ and $\lambda$ are measured in radians.) Let's use xarray to create the weight factor.
R = 6.37e6
# we know already that the spacing of the points is one degree latitude
dϕ = np.deg2rad(1.)
dλ = np.deg2rad(1.)
dA = R**2 * dϕ * dλ * np.cos(np.deg2rad(ds.lat))
dA.plot()
dA.where(sst[0].notnull())
pixel_area = dA.where(sst[0].notnull())
pixel_area.plot()
total_ocean_area = pixel_area.sum(dim=('lon', 'lat'))
sst_weighted_mean = (sst * pixel_area).sum(dim=('lon', 'lat')) / total_ocean_area
sst_weighted_mean.plot()
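Note that xarray v0.15 and later also provide a built-in weighted() method that encapsulates this weight-and-sum pattern. A minimal sketch on synthetic data (the coarse grid and uniform field below are made up; for a field of ones, any weighted mean must equal exactly 1):

```python
import numpy as np
import xarray as xr

# Made-up coarse global grid with a uniform field of ones
lat = np.arange(-80.0, 81.0, 20.0)
lon = np.arange(0.0, 360.0, 60.0)
field = xr.DataArray(
    np.ones((lat.size, lon.size)),
    dims=["lat", "lon"],
    coords={"lat": lat, "lon": lon},
)

# cos(lat) is proportional to the spherical area element dA
weights = np.cos(np.deg2rad(field.lat))
w_mean = field.weighted(weights).mean(dim=("lat", "lon"))
# for a field of ones the weighted mean is exactly 1.0
```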
Xarray integrates with cartopy to enable you to plot your data on a map
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
plt.figure(figsize=(12, 8))
ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
ax.coastlines()
sst[0].plot(transform=ccrs.PlateCarree(), vmin=-2, vmax=30,
cbar_kwargs={'shrink': 0.4})
One of the killer features of xarray is its ability to open many files into a single dataset. We do this with the open_mfdataset function.
ds_all = xr.open_mfdataset('./tutorial-data/sst/*nc', combine='by_coords')
ds_all
Now we have 57 years of data instead of one!
Now that we have a bigger dataset, this is a good time to check out xarray's groupby capabilities.
sst_clim = ds_all.sst.groupby('time.month').mean(dim='time')
sst_clim
Now the data has dimension month instead of time! Each value represents the average among all of the Januaries, Februaries, etc. in the dataset.
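The groupby pattern is easy to verify on a small synthetic series where the monthly means can be computed by hand (the two-year ramp below is made up for illustration):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Made-up two years of monthly values: 0, 1, ..., 23
time = pd.date_range("2000-01-01", periods=24, freq="MS")
ts = xr.DataArray(np.arange(24.0), dims=["time"], coords={"time": time})

# Average all Januaries together, all Februaries together, etc.
clim = ts.groupby("time.month").mean(dim="time")
# January: (0 + 12) / 2 = 6.0; the time dim is replaced by month
```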
(sst_clim[6] - sst_clim[0]).plot()
plt.title('July minus January SST Climatology')
Resample is meant specifically to work with time data (data with a datetime64 variable as a dimension). It allows you to change the time-sampling frequency of your data.
Let's illustrate by selecting a single point.
sst_ts = ds_all.sst.sel(lon=300, lat=10)
sst_ts_annual = sst_ts.resample(time='A').mean(dim='time')
sst_ts_annual
sst_ts.plot()
sst_ts_annual.plot()
An alternative approach is a "running mean" over the time dimension. This can be accomplished with xarray's .rolling operation.
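The behavior of rolling is easy to see on a short made-up series (the window size and values below are illustrative, not from the SST data):

```python
import numpy as np
import xarray as xr

# Made-up linear ramp
da = xr.DataArray(np.arange(10.0), dims=["x"])

# Centered 3-point running mean; windows without enough
# neighbors (the endpoints) become NaN
smoothed = da.rolling(x=3, center=True).mean()
# e.g. smoothed[1] = (0 + 1 + 2) / 3 = 1.0
```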
sst_ts_rolling = sst_ts.rolling(time=24, center=True).mean()
sst_ts_annual.plot(marker='o')
sst_ts_rolling.plot()
This page from NOAA explains how the El Niño Southern Oscillation index is calculated.
(Note that "anomaly" means that the seasonal cycle is removed.)
Try working on this on your own for 5 minutes.
Once you're done, try comparing the ENSO Index you calculated with the NINO3.4 index published by NOAA. The pandas snippet below will load the official time series for comparison.
import pandas as pd
noaa_nino34 = pd.read_csv('https://www.cpc.ncep.noaa.gov/data/indices/sstoi.indices',
sep=r" ", skipinitialspace=True,
parse_dates={'time': ['YR','MON']},
index_col='time')['NINO3.4']
noaa_nino34.head()
Here are some important resources for learning more about xarray and getting help.