This notebook introduces you to the time series signatures associated with flooding. The data analysis is done in the framework of Jupyter Notebooks, an environment that is easy to launch in any web browser for interactive data exploration with provided or new training data. Notebooks combine executable Python code with Markdown-formatted text, including LaTeX-style mathematical equations. Another advantage of Jupyter Notebooks is that they can easily be expanded, changed, and shared with new data sets or newly available time series steps. Therefore, they provide an excellent basis for collaborative and repeatable data analysis.
This notebook covers the following data analysis concepts:
Important Notes about JupyterHub
Your JupyterHub server will automatically shut down when left idle for more than 1 hour. Your notebooks will not be lost, but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.
import url_widget as url_w
notebookUrl = url_w.URLWidget()
display(notebookUrl)
from IPython.display import Markdown
from IPython.display import display
notebookUrl = notebookUrl.value
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
    env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
    display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
    display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
    display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
    display(Markdown(f'<text style=color:red>Select "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
    display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
    display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
In this notebook we will use the following scientific libraries:
# Check Python version:
import sys
pn = sys.version_info[0]
from pathlib import Path
import re
from math import ceil
import pandas as pd # for DatetimeIndex
from osgeo import gdal # for GetRasterBand, Open, ReadAsArray
gdal.UseExceptions()
import numpy as np #for log10, mean, percentile, power
from pyproj import Transformer
%matplotlib widget
import matplotlib.pyplot as plt # for add_subplot, axis, figure, imshow, legend, plot, set_axis_off, set_data,
# set_title, set_xlabel, set_ylabel, set_ylim, subplots, title, twinx
import matplotlib.patches as patches # for Rectangle
import matplotlib.animation as an # for FuncAnimation
from matplotlib import rc
from ipyfilechooser import FileChooser
from IPython.display import HTML
plt.rcParams.update({'font.size': 12})
if pn == 2:
    import cStringIO  # needed for the image checkboxes
elif pn == 3:
    import io
    import base64
from opensarlab_lib import select_parameter
# For exporting:
from PIL import Image
def get_tiff_paths(paths):
    tiff_paths = !ls $paths | sort -t_ -k5,5
    return tiff_paths
Select the directory holding your tiffs. Click the `Select` button to confirm your choice, or the `Change` button to alter your selection.
fc = FileChooser('/home/jovyan/notebooks')
display(fc)
Determine the path to the analysis directory containing the tiff directory:
tiff_dir = Path(fc.selected_path)
analysis_dir = tiff_dir.parent
print(f"analysis_dir: {analysis_dir}")
paths = tiff_dir/"*.tif*"
tiff_paths = get_tiff_paths(paths)
Select a polarization:
pol = select_parameter(['vv', 'vh'])
display(pol)
class PolarizationNotFoundError(Exception):
    pass

polarization = pol.value
if polarization == 'vv':
    path_options = ['*VV*.tif*', '*vv*.tif*']
else:
    path_options = ['*VH*.tif*', '*vh*.tif*']

wildcard_path = None
for p in path_options:
    pths = list(tiff_dir.rglob(p))
    if pths:
        wildcard_path = tiff_dir/p
        break

if not wildcard_path:
    raise PolarizationNotFoundError(f"No files found in {tiff_dir} with {pol.value} polarization")

print(wildcard_path)
Write a function to extract the acquisition dates.
def get_date(pth):
    date_regex = r'(?<=_)\d{8}T\d{6}(?=_)'
    try:
        return re.search(date_regex, str(pth)).group(0)
    except AttributeError:
        raise Exception(f"Date string not found in {pth}")
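The lookbehind/lookahead pattern matches a `YYYYMMDDThhmmss` timestamp flanked by underscores. For example, applied to a made-up filename that follows this convention:

```python
import re

date_regex = r'(?<=_)\d{8}T\d{6}(?=_)'
# A hypothetical product name with an underscore-delimited acquisition date:
name = "S1A_IW_RTC_20200315T120000_VV.tif"
print(re.search(date_regex, name).group(0))  # 20200315T120000
```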
Call get_date() to collect the product acquisition dates:
dates = [get_date(d) for d in pths]
dates.sort()
print(dates)
Define a path to the VRT that will hold the raster stack for the selected polarization:
raster_path = f'{analysis_dir}/raster_stack_{polarization}.vrt'
print(raster_path)
Create the virtual raster table for the GeoTiffs:
!gdalbuildvrt -separate $raster_path $wildcard_path
Create Pandas time index and print the dates:
time_index = pd.DatetimeIndex(dates)
for jacqdate, acqdate in enumerate(time_index):
    print('{:4d} {}'.format(jacqdate, acqdate.date()), end=' ')
    if (jacqdate % 5 == 4):
        print()
img = gdal.Open(raster_path)
band = img.GetRasterBand(1)
raster0 = band.ReadAsArray()
band_number = 0 # Needed for updates
rasterstack = img.ReadAsArray()
Before analyzing the data, decide whether to use linear or logarithmic (dB) scaling:
use_dB = False
def convert(raster, use_dB=use_dB):
    # Note on Python default arguments:
    # if you call convert() later, you can set the keyword argument
    # use_dB to True or False; if you do not provide it, the value
    # of use_dB captured above (when the function was defined) is used
    if use_dB:
        return 10 * np.log10(raster)
    else:
        return raster
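As a quick sanity check of the dB conversion, halving linear power repeatedly maps to even steps on the decibel scale; a linear backscatter of 0.1 corresponds to -10 dB:

```python
import numpy as np

# Converting linear power to dB: each factor of 10 in power is 10 dB.
linear = np.array([1.0, 0.1, 0.01])
db = 10 * np.log10(linear)
print(db)  # [  0. -10. -20.]
```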
Create an animation to get an idea of where and when flooding might have occurred
%%capture
figani = plt.figure(figsize=(10, 5))
axani = figani.subplots()
axani.axis('off')
rasterstack_ = convert(rasterstack)
imani = axani.imshow(rasterstack_[0,...], cmap='gray', vmin=np.nanpercentile(rasterstack_, 1),
vmax=np.nanpercentile(rasterstack_, 99))
axani.set_title("{}".format(time_index[0].date()))
def animate(i):
    axani.set_title("{}".format(time_index[i].date()))
    imani.set_data(rasterstack_[i, ...])

# Interval is given in milliseconds
ani = an.FuncAnimation(figani, animate, frames=rasterstack_.shape[0], interval=300)
rc('animation', embed_limit=40971520.0)  # increase the embed limit so the entire animation can be displayed
Render the animation:
HTML(ani.to_jshtml())
As flooding is often associated with very low backscatter, we first compute the minimum backscatter for each pixel to get a first impression of the areas that could have been flooded at any point during the period.
The following line calculates the minimum backscatter per pixel across the time series:
temporal_min = np.nanmin(convert(rasterstack), axis=0)
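On a toy stack, `np.nanmin` with `axis=0` collapses the time dimension to the per-pixel minimum while ignoring NaN fill values:

```python
import numpy as np

# A toy (time, rows, cols) stack with 3 dates and a 1x2 image;
# nanmin ignores the NaN at date 0 when computing the per-pixel minimum.
stack = np.array([[[0.2, np.nan]],
                  [[0.05, 0.3]],
                  [[0.4, 0.1]]])
temporal_min = np.nanmin(stack, axis=0)
print(temporal_min)  # [[0.05 0.1 ]]
```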
We will now visualize the minimum image in a way that we can move our mouse over the image and visualize the line/sample image coordinates. This will help us create time-series information for the most interesting image locations.
To do so, we first create some helper functions:
class pixelPicker:
    def __init__(self, image, width, height):
        self.x = None
        self.y = None
        self.fig = plt.figure(figsize=(width, height))
        self.ax = self.fig.add_subplot(111, visible=False)
        self.rect = patches.Rectangle(
            (0.0, 0.0), width, height,
            fill=False, clip_on=False, visible=False
        )
        self.rect_patch = self.ax.add_patch(self.rect)
        self.cid = self.rect_patch.figure.canvas.mpl_connect('button_press_event',
                                                             self)
        self.image = image
        self.plot = self.gray_plot(self.image, fig=self.fig, return_ax=True)
        self.plot.set_title('Select a Point of Interest')

    def gray_plot(self, image, vmin=None, vmax=None, fig=None, return_ax=False):
        '''
        Plots an image in grayscale.
        Parameters:
        - image: 2D array of raster values
        - vmin: Minimum value for colormap
        - vmax: Maximum value for colormap
        - fig: Figure to plot into (a new one is created if None)
        - return_ax: Option to return plot axis
        '''
        if vmin is None:
            vmin = np.nanpercentile(self.image, 1)
        if vmax is None:
            vmax = np.nanpercentile(self.image, 99)
        if fig is None:
            fig = plt.figure()
        ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
        ax.imshow(image, cmap=plt.cm.gist_gray, vmin=vmin, vmax=vmax)
        if return_ax:
            return ax

    def __call__(self, event):
        print('click', event)
        self.x = event.xdata
        self.y = event.ydata
        for pnt in self.plot.get_lines():
            pnt.remove()
        plt.plot(self.x, self.y, 'ro')
Now we are ready to plot the minimum image. Click a point of interest for which you want to analyze radar brightness over time:
fig_xsize = 7.5
fig_ysize = 7.5
my_plot = pixelPicker(temporal_min, fig_xsize, fig_ysize)
Save the selected coordinates:
sarloc = (ceil(my_plot.x), ceil(my_plot.y))
print(sarloc)
We will pick a pixel location identified in the SAR image above and plot the time series for this identified point. By focusing on image locations that may have been inundated, we should see the changes in the radar cross section related to the flooding event.
First, for processing of the imagery in this notebook we generate a list of image handles and retrieve projection and georeferencing information. We also define a function for mapping image pixels to a geographic projection.
geotrans = img.GetGeoTransform()
proj = img.GetProjection().split('[')[-1][:-2].split(',')[-1][1:-1]
xsize = img.RasterXSize
ysize = img.RasterYSize
bands = img.RasterCount
transformer = Transformer.from_crs(f"epsg:{proj}", "epsg:4326")
class MissingTransformerError(Exception):
    pass

def geolocation(x, geotrans, y=None, latlon=False, transformer=None):
    if len(x) == 2:
        y = x[1]
        x = x[0]
    ref_x = geotrans[0] + x * geotrans[1]
    ref_y = geotrans[3] + y * geotrans[5]
    if latlon:
        if transformer:
            ref_y, ref_x = transformer.transform(ref_x, ref_y)
        else:
            raise MissingTransformerError(
                "You must pass a pyproj transformer to geolocation to convert UTM to EPSG 4326")
    return (ref_x, ref_y)
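The pixel-to-map step above is just the GDAL affine geotransform evaluated at a pixel coordinate. A minimal sketch with a made-up north-up geotransform (30 m pixels, arbitrary UTM-like origin; the helper name `pixel_to_map` is illustrative only):

```python
def pixel_to_map(col, row, geotrans):
    """Map pixel (col, row) to map coordinates using a GDAL geotransform
    (x_origin, x_pixel_size, x_rot, y_rot_origin..., y_pixel_size)."""
    x = geotrans[0] + col * geotrans[1] + row * geotrans[2]
    y = geotrans[3] + col * geotrans[4] + row * geotrans[5]
    return x, y

# Hypothetical north-up geotransform: origin (500000, 4600000), 30 m pixels,
# no rotation, negative y pixel size because rows increase downward.
gt = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)
print(pixel_to_map(10, 20, gt))  # (500300.0, 4599400.0)
```

For a north-up raster the rotation terms (`geotrans[2]` and `geotrans[4]`) are zero, which is why the function in the notebook can omit them.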
Now, let's pick a rectangle around the center pixel we selected and stored in the variable sarloc ...
extent = (5, 5) # choose a 5 by 5 rectangle
latlon = True
refsarloc = geolocation(sarloc, geotrans, latlon=True, transformer=transformer)
projsymbol = '°' if latlon else 'm'
... and extract the time series for this small area around the selected center pixel in a memory-efficient way (needed for larger stacks):
plt.rcParams.update({'font.size': 9})
bs_aggregated = []
for band in range(bands):
    rs = img.GetRasterBand(band+1).ReadAsArray(sarloc[0], sarloc[1],
                                               extent[0], extent[1])
    rs_mean = convert(np.nanmean(rs))
    bs_aggregated.append(rs_mean)
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
labeldB = 'dB' if use_dB else 'linear'
ax.plot(time_index, bs_aggregated, color='k', marker='o', markersize=3)
ax.set_xlabel('Date')
ax.set_ylabel(f'Sentinel-1 $\\gamma^0$ [{labeldB}]')
plt.xticks(rotation = 45)
plt.grid()
_ = fig.suptitle(f'Location: {refsarloc[0]:.3f}{projsymbol} '
f'{refsarloc[1]:.3f}{projsymbol}')
# fig.tight_layout()
figname = (f'RCSTimeSeries-{refsarloc[0]:.3f}{projsymbol} '
f'{refsarloc[1]:.3f}{projsymbol}.png')
plt.savefig(f'{analysis_dir}/{figname}', dpi=300, transparent=True)
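The windowed `ReadAsArray(xoff, yoff, xsize, ysize)` calls used above read only the small area of interest from disk per band. On an in-memory array, the equivalent operation is plain NumPy slicing followed by a windowed mean, which this toy sketch (with made-up data) mirrors:

```python
import numpy as np

# Toy (time, rows, cols) stack: 3 dates of a 6x6 image. Extract the mean of a
# 2x2 window at (col=1, row=2) per date, mirroring ReadAsArray(xoff, yoff, w, h).
stack = np.arange(3 * 6 * 6, dtype=float).reshape(3, 6, 6)
col, row, w, h = 1, 2, 2, 2
window_means = [np.nanmean(stack[t, row:row+h, col:col+w]) for t in range(3)]
print(window_means)  # [16.5, 52.5, 88.5]
```

Reading band-by-band through GDAL keeps memory usage flat regardless of stack depth, which is why the notebook prefers it for larger stacks.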
ExploreSARTimeSeriesFlood_From_Prepared_Data_Stack.ipynb - Version 1.3.2 - February 2024
Version Changes: