This notebook shows how to download and preprocess climate model data for bias correction and further use. To apply a bias adjustment method, three datasets are needed: 1) observation or reanalysis data; 2) historical climate model data over the same reference period for which observations are available; and 3) climate model data for a future (or, more generally, application) period that is to be bias corrected.
Here we will download and preprocess CMIP6 data from the Climate Data Store (CDS) as climate model input and two reanalysis datasets: 1) ERA5 from the CDS and 2) NCEP/DOE Reanalysis II from the PSL datastore (NOAA).
There are many ways to access climate data at different temporal or spatial resolutions. This notebook is meant to illustrate one possible way to download data at daily resolution, which is currently the primary temporal resolution supported in ibicus, although some methods can also be applied at monthly resolution.
import os
import urllib.request

# Scientific computing
import iris
import xarray
import numpy as np
Let's create a data-directory where our inputs will go, if it does not yet exist:
DATADIR = "data_download_and_preprocessing"
if not os.path.exists(DATADIR):
    os.mkdir(DATADIR)
To request climate data from the Climate Data Store (CDS) we will use the CDS API. Run the following cell if you have not yet installed it:
#!pip install cdsapi
import cdsapi
# We disable urllib3 (used by cdsapi) warning
import urllib3
urllib3.disable_warnings()
We make use of the option to manually set the CDS API credentials. First, you have to set two variables: URL and KEY, which together form your CDS API key. The KEY string includes your personal User ID and CDS API key. To obtain these, first register or log in to the CDS (http://cds.climate.copernicus.eu), then visit https://cds.climate.copernicus.eu/api-how-to and copy the string of characters listed after "key:". Replace the ######### below with your key.
URL = 'https://cds.climate.copernicus.eu/api/v2'
KEY = '########################################' # enter your key instead
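Alternatively, the CDS API can read these credentials from a configuration file at ~/.cdsapirc, in which case cdsapi.Client() can be called without arguments. The file contents look like this (a config sketch; replace the placeholder with your own UID and key):

```
url: https://cds.climate.copernicus.eu/api/v2
key: <UID>:<API-key>
```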
Let's choose a model and variable of interest, and fix some meta-parameters. If we are interested in multiple variables we can simply loop over the code below:
# choose model
model = 'mpi_esm1_2_lr'
# choose variables to extract (not all variables available at daily resolution for all cmip6 models at the moment)
variable = 'precipitation'
# choose area to extract, in the CDS format [North, West, South, East]
area = [80, 3, 20, 30]
# choose a historical period to extract
period_hist = '1979-01-01/2005-12-31'
# choose a future period to extract:
period_fut = '2050-01-01/2070-12-31'
# choose a filename for the historical cm data
fname_cm_hist = f"cmip6_daily_1979-2005_{model}_historical_{variable}.zip"
# choose a filename for the future cm data
fname_cm_future = f"cmip6_daily_2050-2070_{model}_ssp5_8_5_{variable}.zip"
Both datasets will be in DATADIR under fname_cm_hist and fname_cm_future.
Executing the following cell will retrieve historical climate model data:
# download historical climate model data
c = cdsapi.Client(url=URL, key=KEY)
c.retrieve(
    'projections-cmip6',
    {
        'temporal_resolution': 'daily',
        'experiment': 'historical',
        'level': 'single_levels',
        'variable': variable,
        'model': model,
        'date': period_hist,
        'area': area,
        'format': 'zip',
    },
    f'{DATADIR}/{fname_cm_hist}')
2022-08-22 20:15:46,493 INFO Welcome to the CDS 2022-08-22 20:15:46,494 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/projections-cmip6 2022-08-22 20:15:47,110 INFO Request is completed 2022-08-22 20:15:47,112 INFO Downloading https://download-0010-clone.copernicus-climate.eu/cache-compute-0010/cache/data4/adaptor.esgf_wps.retrieve-1661187996.0821903-27851-13-11a6e1e3-b6cb-4e5f-952a-b9c75475a54f.zip to data_download_and_preprocessing/cmip6_daily_1979-2015_ipsl_historical_precipitation.zip (11.6M) 2022-08-22 20:15:59,062 INFO Download rate 994.1K/s
Result(content_length=12162765,content_type=application/zip,location=https://download-0010-clone.copernicus-climate.eu/cache-compute-0010/cache/data4/adaptor.esgf_wps.retrieve-1661187996.0821903-27851-13-11a6e1e3-b6cb-4e5f-952a-b9c75475a54f.zip)
After unzipping the folder...
import zipfile
with zipfile.ZipFile(f'{DATADIR}/{fname_cm_hist}', 'r') as zip_ref:
    zip_ref.extractall(DATADIR)
...the file is now in DATADIR/pr_day_MPI-ESM1-2-LR_historical_r1i1p1f1_gn_19790101-20051231_*.nc
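Since the version suffix in the extracted filename (the * above) can differ between downloads, it may be safer to locate the file with a wildcard search rather than hardcoding the full path. A sketch, with find_extracted_file as a hypothetical helper:

```python
import glob
import os

def find_extracted_file(datadir, pattern="pr_day_MPI-ESM1-2-LR_historical_*.nc"):
    """Locate the extracted netCDF file via a wildcard search, since the
    exact name depends on the model version suffix (e.g. 'v20190710')."""
    matches = sorted(glob.glob(os.path.join(datadir, pattern)))
    if not matches:
        raise FileNotFoundError(f"No file matching {pattern} in {datadir}")
    return matches[0]
```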
Now we go through the same steps to download climate data in the future or application period:
# download future climate model data
c = cdsapi.Client(url=URL, key=KEY)
c.retrieve(
    'projections-cmip6',
    {
        'temporal_resolution': 'daily',
        'experiment': 'ssp5_8_5',
        'level': 'single_levels',
        'variable': variable,
        'model': model,
        'date': period_fut,
        'area': area,
        'format': 'zip',
    },
    f'{DATADIR}/{fname_cm_future}')
2022-08-22 20:16:00,107 INFO Welcome to the CDS 2022-08-22 20:16:00,110 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/projections-cmip6 2022-08-22 20:16:00,307 INFO Request is completed 2022-08-22 20:16:00,309 INFO Downloading https://download-0004-clone.copernicus-climate.eu/cache-compute-0004/cache/data6/adaptor.esgf_wps.retrieve-1661033873.2348728-23827-20-937b4c7b-1630-4e08-9600-6d3d1912cc8b.zip to data_download_and_preprocessing/cmip6_daily_2050-2070_ipsl_ssp5_8_5_precipitation.zip (8.2M) 2022-08-22 20:16:07,124 INFO Download rate 1.2M/s
Result(content_length=8643246,content_type=application/zip,location=https://download-0004-clone.copernicus-climate.eu/cache-compute-0004/cache/data6/adaptor.esgf_wps.retrieve-1661033873.2348728-23827-20-937b4c7b-1630-4e08-9600-6d3d1912cc8b.zip)
Again, we need to unzip the folder:
import zipfile
with zipfile.ZipFile(f'{DATADIR}/{fname_cm_future}', 'r') as zip_ref:
    zip_ref.extractall(DATADIR)
The file is now in DATADIR/pr_day_MPI-ESM1-2-LR_ssp585_r1i1p1f1_gn_20500101-20701231_*.nc
Now we will download observations. We will first download ERA5 data from the CDS and afterwards the NCEP/DOE II Reanalysis from the PSL.
We will download ERA5 at daily temporal resolution (if the climate model were at another temporal resolution, we would also need ERA5 at that resolution). The script is inspired by this discussion and uses the Daily statistics calculated from ERA5 data application. The output of this application is a separate netCDF file for the chosen daily statistic for each month and each year. We then concatenate those files manually. First we need to make some selections (make sure the data chosen here is consistent with the cm data pulled above):
# choose years to request (this should overlap with the `period_hist` chosen for the cm data)
# this is chosen shorter for demonstration purposes
years = list(map(str, range(1979, 1981)))
# choose months to request
months = list(map(str, range(10, 12)))
# choose a variable (must be a valid ERA5 CDS API name)
variable = "total_precipitation"
# choose a required statistic (valid names given in the application description above)
statistic = "daily_mean"
# choose an area (should match the region chosen above; note the format differs: a dict here vs. [N, W, S, E] above)
area = {"lat": [30, 80], "lon": [3, 20]}
# choose a filename
fname_era5 = f"era5_{variable}_{statistic}_1979_1981.nc"
And now we can request the data:
c = cdsapi.Client(url=URL, key=KEY, timeout=300)
# Loop over years and months
filenames_for_cleanup = []
for yr in years:
    print(f"----- Requesting year: {yr} -----")
    for mn in months:
        result = c.service(
            "tool.toolbox.orchestrator.workflow",
            params={
                "realm": "user-apps",
                "project": "app-c3s-daily-era5-statistics",
                "version": "master",
                "kwargs": {
                    "dataset": "reanalysis-era5-single-levels",
                    "product_type": "reanalysis",
                    "variable": variable,
                    "statistic": statistic,
                    "year": yr,
                    "month": mn,
                    "time_zone": "UTC+00:0",
                    "frequency": "1-hourly",
                    "grid": "1.0/1.0",
                    "area": area,
                },
                "workflow_name": "application"
            })
        filename = f"{DATADIR}/era5_{variable}_{statistic}_{yr}_{mn}.nc"
        url = result[0]['location']
        # Download nc file
        urllib.request.urlretrieve(url, filename)
        # Append filename to list of filenames to cleanup
        filenames_for_cleanup.append(filename)

# Combine monthly data
combined_data = xarray.open_mfdataset(f"{DATADIR}/era5_{variable}_{statistic}_*.nc", combine='nested', concat_dim="time")
combined_data.to_netcdf(f"{DATADIR}/{fname_era5}")

# Cleanup
for filename in filenames_for_cleanup:
    os.remove(filename)
----- Requesting year: 1979 -----
2022-08-22 20:16:07,851 INFO Welcome to the CDS 2022-08-22 20:16:07,853 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/tasks/services/tool/toolbox/orchestrator/workflow/clientid-18d6bb5a46ad46178f7d69d70b376b61 2022-08-22 20:16:08,008 INFO Request is completed 2022-08-22 20:16:08,676 INFO Welcome to the CDS 2022-08-22 20:16:08,677 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/tasks/services/tool/toolbox/orchestrator/workflow/clientid-75eb56b162124e4aa0e2c14e282263c2 2022-08-22 20:16:09,858 INFO Welcome to the CDS 2022-08-22 20:16:09,859 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/tasks/services/tool/toolbox/orchestrator/workflow/clientid-af6e4ff44c9842d4b24313e3c054e09f
----- Requesting year: 1980 -----
2022-08-22 20:16:10,877 INFO Welcome to the CDS 2022-08-22 20:16:10,878 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/tasks/services/tool/toolbox/orchestrator/workflow/clientid-7aeb9221b2e446e49aa968e789eecf83
We now download the NCEP/DOE II data. An overview of the possible variables is available from the PSL, and we take the data from the gaussian_grid datastore (URL in the code below).
# Variable name. Needs to be one of the NCEP-names in https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis2/Dailies/gaussian_grid/.
variable = "prate.sfc.gauss"
# choose years to request (this should overlap with the `period_hist` chosen for the cm data)
years = list(map(str, range(1979, 2005)))
# choose an area (should be the same as above)
area = {"lat": [30, 80], "lon": [3, 20]}
# choose a filename
fname_ncep_doe = f"ncep_doe_{variable}_1979_2005.nc"
Now we can download the data:
# Download data year by year
filenames_for_cleanup = []
for year in years:
    url = f"https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis2/Dailies/gaussian_grid/{variable}.{year}.nc"
    filename = f"{DATADIR}/{variable}_{year}.nc"
    # Download nc file
    urllib.request.urlretrieve(url, filename)
    # Append filename to list of filenames to cleanup
    filenames_for_cleanup.append(filename)

# Combine data for variable
combined_data = xarray.open_mfdataset(f"{DATADIR}/{variable}_*.nc", combine='nested', concat_dim="time")
# Select area
combined_data = combined_data.sel(lon=slice(min(area["lon"]), max(area["lon"])), lat=slice(max(area["lat"]), min(area["lat"])))
# Write to file
combined_data.to_netcdf(f"{DATADIR}/{fname_ncep_doe}")

# Cleanup
for filename in filenames_for_cleanup:
    os.remove(filename)
It is also possible (and probably easier) to download the data via FTP through the same links, or via the visual interface accessible through the graph icon next to a variable on the NCEP/DOE II overview page. The latter also provides an option to select a range of dates and access merged data for that range, which can be used directly for the further preprocessing steps.
Now that we have downloaded the data, we need to make sure that observations and climate model data are on the same temporal and spatial resolution. Furthermore, we might want to extract additional information, and we need to get the numpy arrays corresponding to the data, making sure that they have the form [t, x, y].
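A quick sanity check of this kind can catch mistakes early. Here is a minimal sketch (check_shapes is a hypothetical helper; the synthetic arrays stand in for the real data extracted later in this notebook):

```python
import numpy as np

def check_shapes(obs, cm_hist, cm_future):
    """Check that all arrays are 3-dimensional ([t, x, y]) and share the
    same spatial grid; the time dimension is allowed to differ."""
    for name, arr in [("obs", obs), ("cm_hist", cm_hist), ("cm_future", cm_future)]:
        if arr.ndim != 3:
            raise ValueError(f"{name} has shape {arr.shape}, expected [t, x, y]")
    if not (obs.shape[1:] == cm_hist.shape[1:] == cm_future.shape[1:]):
        raise ValueError("Spatial dimensions do not match across datasets")

# Example with synthetic arrays standing in for the real data:
check_shapes(np.zeros((100, 32, 15)), np.zeros((9862, 32, 15)), np.zeros((7670, 32, 15)))
```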
Now that we have data at the same temporal resolution for both the climate model and observations, we need to make sure they are also on the same spatial resolution, and regrid the datasets. The climate model data is on a coarser grid, so we will regrid the observational data onto this resolution. Other solutions exist, in which the climate model data is regridded onto the resolution of the observations.
We will use iris for the regridding, although xarray-based solutions also exist. Different variables might require different regridding schemes; a list of those available in iris is given in its documentation. For precipitation we choose a nearest-neighbour regridder (iris.analysis.Nearest), since other regridders, such as linear ones, might introduce negative values.
cm_hist = iris.load_cube(f"{DATADIR}/pr_day_MPI-ESM1-2-LR_historical_r1i1p1f1_gn_19790101-20051231_v20190710.nc", "precipitation_flux")
cm_future = iris.load_cube(f"{DATADIR}/pr_day_MPI-ESM1-2-LR_ssp585_r1i1p1f1_gn_20500101-20701231_v20190710.nc", "precipitation_flux")
/home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/pyproj/__init__.py:89: UserWarning: pyproj unable to set database path. _pyproj_global_context_initialize() /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:859: UserWarning: Missing CF-netCDF measure variable 'areacella', referenced by netCDF variable 'pr' warnings.warn( /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:1154: UserWarning: Ignoring variable 'lat_bnds' referenced by variable 'lat': Dimensions ('time', 'lat', 'bnds') do not span ('lat',) warnings.warn(msg) /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:1154: UserWarning: Ignoring variable 'lon_bnds' referenced by variable 'lon': Dimensions ('time', 'lon', 'bnds') do not span ('lon',) warnings.warn(msg) /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:859: UserWarning: Missing CF-netCDF measure variable 'areacella', referenced by netCDF variable 'pr' warnings.warn( /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:1154: UserWarning: Ignoring variable 'lat_bnds' referenced by variable 'lat': Dimensions ('time', 'lat', 'bnds') do not span ('lat',) warnings.warn(msg) /home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/cf.py:1154: UserWarning: Ignoring variable 'lon_bnds' referenced by variable 'lon': Dimensions ('time', 'lon', 'bnds') do not span ('lon',) warnings.warn(msg)
First let's take care of the ERA5 reanalysis:
obs_era5 = iris.load_cube(f"{DATADIR}/{fname_era5}")
obs_era5 = obs_era5.regrid(cm_hist, iris.analysis.Nearest())
And now of the NCEP/DOE II data:
obs_ncep_doe = iris.load_cube(f"{DATADIR}/{fname_ncep_doe}")
obs_ncep_doe = obs_ncep_doe.regrid(cm_hist, iris.analysis.Nearest())
/home/jakobwes/anaconda3/envs/ibias4/lib/python3.9/site-packages/iris/fileformats/_nc_load_rules/helpers.py:645: UserWarning: Ignoring netCDF variable 'prate' invalid units 'Kg/m^2/s' warnings.warn(msg)
The data objects are now all at the same temporal and spatial resolution. Because some debiasers need the dates as input, it is useful to extract them here:
def get_dates(x):
    # extract the dates of an iris cube from its time coordinate
    time_dimension = x.coord("time")
    dates = time_dimension.units.num2date(time_dimension.points)
    return dates
dates_cm_hist = get_dates(cm_hist)
dates_cm_future = get_dates(cm_future)
dates_obs_era5 = get_dates(obs_era5)
dates_obs_ncep_doe = get_dates(obs_ncep_doe)
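As a quick sanity check, we can verify that the extracted dates form contiguous daily steps. The helper below is a hypothetical sketch for standard-calendar dates; model data on e.g. 360-day or noleap calendars would need cftime-aware handling instead:

```python
import datetime
import numpy as np

def assert_daily(dates):
    """Check that a sequence of standard-calendar dates consists of
    contiguous daily steps, since we work with daily data here."""
    as_days = np.array([str(d) for d in dates], dtype="datetime64[D]")
    steps = np.diff(as_days)
    if not (steps == np.timedelta64(1, "D")).all():
        raise ValueError("Dates are not contiguous daily steps")

# Example with synthetic dates standing in for e.g. dates_obs_era5:
dates = [datetime.date(2000, 1, 1) + datetime.timedelta(days=i) for i in range(10)]
assert_daily(dates)
```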
In order to start working with ibicus, we need to get the numpy arrays associated with the data from the iris cubes:
cm_hist = cm_hist.data
cm_future = cm_future.data
obs_era5 = obs_era5.data
obs_ncep_doe = obs_ncep_doe.data
We look at the shapes to make sure they are all in the form [t, x, y]:
print(f"Shape cm_hist: {cm_hist.shape}")
print(f"Shape cm_future: {cm_future.shape}")
print(f"Shape obs_era5: {obs_era5.shape}")
print(f"Shape obs_ncep_doe: {obs_ncep_doe.shape}")
Shape cm_hist: (9862, 32, 15) Shape cm_future: (7670, 32, 15) Shape obs_era5: (122, 32, 15) Shape obs_ncep_doe: (9497, 32, 15)
From the ERA5 documentation we can see that the precipitation is measured in m, whilst in cm_hist and cm_future it is measured as a flux (m s^-1). To convert, we need to divide the ERA5 values by 86400 (the number of seconds per day):
obs_era5 = obs_era5/86400
The values in the NCEP/DOE II reanalysis are already in the same units as the climate model data, so no conversion is needed.
After these preparations we can finally apply a bias adjustment method. For a detailed introduction into the actual application of bias correction using ibicus, we refer you to the other notebooks.
For illustrative purposes we give one example here using a simple quantile mapping methodology that we apply to both ERA5 and NCEP/DOE II data.
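To build intuition for what quantile mapping does, here is a minimal empirical version in plain numpy, applied to synthetic data. This is an illustration of the core idea only, not ibicus's implementation:

```python
import numpy as np

def empirical_quantile_map(obs, cm_hist, cm_future):
    # Find where each future value sits in the historical model distribution
    # (its empirical CDF value), then read off the observed value at the
    # same quantile.
    quantiles = np.interp(cm_future, np.sort(cm_hist), np.linspace(0, 1, cm_hist.size))
    return np.quantile(obs, quantiles)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 1.0, size=1000)        # synthetic "observations"
cm_hist = rng.gamma(2.0, 1.5, size=1000)    # synthetic biased model output
cm_future = rng.gamma(2.0, 1.8, size=1000)  # synthetic future model output

corrected = empirical_quantile_map(obs, cm_hist, cm_future)
```

The corrected values are drawn from the observed distribution, at the quantiles the future values occupy in the historical model distribution; ibicus's QuantileMapping builds on the same idea with variable-specific refinements.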
from ibicus.debias import QuantileMapping
debiaser = QuantileMapping.from_variable("pr")
debiased_cm_future_era5 = debiaser.apply(obs_era5, cm_hist, cm_future)
debiased_cm_future_ncep_doe = debiaser.apply(obs_ncep_doe, cm_hist, cm_future)
INFO:root:----- Running debiasing for variable: Daily mean precipitation ----- 100%|████████████████████████████████████████████| 4/4 [00:00<00:00, 269.56it/s] INFO:root:----- Running debiasing for variable: Daily mean precipitation ----- 100%|████████████████████████████████████████████| 4/4 [00:00<00:00, 271.81it/s]