Explore the National Water Model Reanalysis v2.1

Explore the NWM Reanalysis (1979-2020) NetCDF files (all 367,439 of them!) on AWS as a single xarray dataset! The only new file we created is a single JSON file that points to the data chunks within the original NetCDF files; the fsspec and zarr packages then read that JSON index to expose the whole collection as one dataset.
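
Under the hood, that JSON index is a set of byte-range references that fsspec exposes as a zarr store. A minimal sketch of the mechanism, assuming a placeholder path for the JSON file (the Intake catalog below handles the real one):

import fsspec
import xarray as xr

# Sketch only: the "reference" filesystem reads the JSON index and presents
# byte ranges inside the original NetCDF files as zarr chunks.
fs = fsspec.filesystem(
    'reference',
    fo='s3://example-bucket/nwm_reanalysis.json',  # hypothetical path
    remote_protocol='s3',
    remote_options={'anon': True},
)
ds = xr.open_dataset(fs.get_mapper(''), engine='zarr',
                     backend_kwargs={'consolidated': False})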

See this blog post for how this works.

Important note on performance: the data in the original NetCDF files is chunked as the entire spatial domain at a single time step. Reading a long time series is therefore very slow: extracting the full 1979-2020 record at a single location requires reading and uncompressing all 8 TB of data! Extracting a few days or weeks of data, however, is relatively fast.
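
To make that concrete, here is an illustrative comparison (using the ds variable loaded below; the dates and feature_id are arbitrary):

# Fast: a short time window touches only a few of the 367,439 chunks.
subset = ds.streamflow.sel(time=slice('2017-08-25', '2017-08-31'))

# Slow -- avoid: a full-record series at one reach touches every chunk,
# forcing ~8 TB of reads and decompression.
# series = ds.streamflow.sel(feature_id=101).load()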

In [1]:
import intake

Use Intake to load the consolidated NWM dataset

The Intake catalog, the consolidated JSON file it accesses, and the NetCDF files the JSON file references are all on public S3 buckets that do not require an AWS account, so no credentials are required!
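
For example, the catalog bucket can be listed anonymously with fsspec (the listing is just to illustrate anonymous access; the exact contents may differ):

import fsspec

# Public bucket: anonymous access, no AWS credentials needed.
fs = fsspec.filesystem('s3', anon=True)
fs.ls('esip-qhub-public/noaa/nwm/')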

In [2]:
%%time
cat = intake.open_catalog('s3://esip-qhub-public/noaa/nwm/nwm_catalog.yml')
CPU times: user 1.26 s, sys: 690 ms, total: 1.95 s
Wall time: 2.05 s
In [3]:
list(cat)
Out[3]:
['nwm-reanalysis', 'nwm-forecast']
In [4]:
%%time
ds = cat['nwm-reanalysis'].to_dask()
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/intake_xarray/xzarr.py:31: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
1. Consolidating metadata in this existing store with zarr.consolidate_metadata().
2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
  self._ds = xr.open_zarr(self._mapper, **self.kwargs)
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'qBtmVertRunoff' has multiple fill values {-9999000, 0}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'qBucket' has multiple fill values {-999900000, 0}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'qSfcLatRunoff' has multiple fill values {-999900000, 0}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'q_lateral' has multiple fill values {0, -99990}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'streamflow' has multiple fill values {0, -999900}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
/home/conda/store/896e738a7fff13f931bce6a4a04b3575ecd1f4cbd0e7da9d83afcc7273e57b60-pangeo/lib/python3.8/site-packages/xarray/conventions.py:512: SerializationWarning: variable 'velocity' has multiple fill values {0, -999900}, decoding all values to NaN.
  new_vars[k] = decode_cf_variable(
CPU times: user 40.2 s, sys: 2.66 s, total: 42.9 s
Wall time: 45.7 s
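
As the first warning suggests, the fallback (and the warning) can be avoided by telling xarray not to probe for consolidated metadata. A sketch, assuming the store is opened directly rather than through Intake (fs as in the reference-filesystem sketch above):

ds = xr.open_zarr(fs.get_mapper(''), consolidated=False)
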
In [5]:
ds
Out[5]:
<xarray.Dataset>
Dimensions:         (time: 367439, feature_id: 2776738)
Coordinates:
  * feature_id      (feature_id) float64 101.0 179.0 181.0 ... 1.18e+09 1.18e+09
    latitude        (feature_id) float32 dask.array<chunksize=(2776738,), meta=np.ndarray>
    longitude       (feature_id) float32 dask.array<chunksize=(2776738,), meta=np.ndarray>
  * time            (time) datetime64[ns] 1979-02-01T01:00:00 ... 2020-12-31T...
Data variables:
    elevation       (time, feature_id) float32 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    order           (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    qBtmVertRunoff  (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    qBucket         (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    qSfcLatRunoff   (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    q_lateral       (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    streamflow      (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
    velocity        (time, feature_id) float64 dask.array<chunksize=(1, 2776738), meta=np.ndarray>
Attributes: (12/19)
    Conventions:                CF-1.6
    TITLE:                      OUTPUT FROM WRF-Hydro v5.2.0-beta2
    _NCProperties:              version=2,netcdf=4.7.4,hdf5=1.10.7,
    cdm_datatype:               Station
    code_version:               v5.2.0-beta2
    dev:                        dev_ prefix indicates development/internal me...
    ...                         ...
    model_output_type:          channel_rt
    model_output_valid_time:    1979-02-01_01:00:00
    model_total_valid_times:    1416
    proj4:                      +proj=lcc +units=m +a=6370000.0 +b=6370000.0 ...
    station_dimension:          feature_id
    stream_order_output:        1
In [6]:
ds.streamflow.nbytes / 1e12  # how many terabytes is a single variable?
Out[6]:
8.162254671856
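
That figure is easy to verify by hand: streamflow is float64 (8 bytes per value) over 367,439 time steps and 2,776,738 reaches:

367439 * 2776738 * 8 / 1e12  # = 8.162254671856, matching nbytes above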

Read and plot streamflow for a specific time

During Hurricane Harvey (August 2017), the local National Weather Service office in Houston observed all-time record daily rainfall accumulations on both August 26 and 27, measured at 14.4 in (370 mm) and 16.08 in (408 mm) respectively. Let's plot the simulated streamflow near the peak of this event.

In [7]:
import hvplot.pandas
import geoviews as gv
from holoviews.operation.datashader import rasterize
import cartopy.crs as ccrs
import numpy as np
import pandas as pd
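
These imports support an interactive, datashaded map of streamflow. A hedged sketch of the kind of cell that follows (the timestamp and styling are illustrative choices, not necessarily the notebook's exact code):

# Pick one time step near the peak of Harvey and build a point table.
var = 'streamflow'
sel = ds[var].sel(time='2017-08-27 18:00')
df = pd.DataFrame({'longitude': ds.longitude.values,
                   'latitude': ds.latitude.values,
                   var: sel.values})

# Rasterize ~2.8M points so the browser renders an aggregated image
# instead of millions of individual glyphs.
df.hvplot.points('longitude', 'latitude', c=var, rasterize=True,
                 geo=True, tiles='OSM', cmap='viridis')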