This notebook introduces you to a simple method to delineate areas of active agriculture from SAR time series data. The method is based on the Coefficient of Variation and results in the detection of actively managed crop areas. The coefficient of variation approach will be used as the main algorithm for agriculture mapping by the upcoming NASA/ISRO SAR mission NISAR. Hence, the approach demonstrated here can be applied to data from this future mission.
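As implemented later in this notebook, the coefficient of variation for a pixel's backscatter time series is computed as the ratio of its temporal variance to its temporal mean, $CoV_{x,y} = \frac{\sigma^2_{x,y}}{\overline{X}_{x,y}}$, so pixels whose backscatter varies strongly over the season receive high values.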
The exercises will use Jupyter Notebooks, an environment that is easy to launch in any web browser for interactive data exploration with provided or new training data. Notebooks combine text written in Markdown formatting (including LaTeX-style mathematical equations) with executable Python code. Another advantage of Jupyter Notebooks is that they can easily be expanded, changed, and shared with new data sets or newly available time series steps. Therefore, they provide an excellent basis for collaborative and repeatable data analysis.
This notebook covers the following data analysis concepts:
Important Note about JupyterHub
Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.
import url_widget as url_w
notebookUrl = url_w.URLWidget()
display(notebookUrl)
from IPython.display import Markdown
from IPython.display import display
notebookUrl = notebookUrl.value
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
    env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
    display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
    display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
    display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
    display(Markdown(f'<text style=color:red>Select the "rtc_analysis" environment from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
    display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
    display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
In this notebook we will use the following scientific libraries:
Our first step is to import them:
%%capture
from pathlib import Path
import pandas as pd # for DatetimeIndex
from osgeo import gdal # for Info
gdal.UseExceptions()
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import rc
from IPython.display import HTML
import opensarlab_lib as asfn
asfn.jupytertheme_matplotlib_format()
This notebook will work with a 46-image deep C-band SAR data stack south of Bogota, Colombia, covering the southern end of the city and adjacent agricultural areas. The area's proximity to Bogota should provide a means for visual validation of the SAR-derived agriculture extent information. The C-band data were acquired by ESA's Sentinel-1 SAR sensor constellation and are available to you through the services of the Alaska Satellite Facility.
The site in question is shown in the image to the right. Data were acquired between January 2017 and December 2018 in a descending orbit direction.
Before we get started, let's first create a working directory for this analysis and change into it:
path = Path("/home/jovyan/notebooks/SAR_Training/English/Ecosystems/AgricultureMapping")
if not path.exists():
    path.mkdir()
We will retrieve the relevant data from an Amazon Web Service (AWS) cloud storage bucket using the following command:
time_series_path = 's3://asf-jupyter-data-west/BogotaAg.zip'
time_series = Path(time_series_path).name
!aws --region=us-west-2 --no-sign-request s3 cp $time_series_path $time_series
Now, let's unzip the file (overwriting previous extractions) and clean up after ourselves:
if Path(time_series).exists():
    asfn.asf_unzip(str(path), time_series)
    Path(time_series).unlink()
We define two helper functions for this notebook:

- create_geotiff() to write out images
- timeseries_metrics() to compute various metrics from a time series data stack

def create_geotiff(name, array, data_type, ndv, bandnames=None, ref_image=None,
                   geo_t=None, projection=None):
    # If it's a 2D image we fake a third dimension:
    if len(array.shape) == 2:
        array = np.array([array])
    if ref_image is None and (geo_t is None or projection is None):
        raise RuntimeWarning('ref_image or settings required.')
    if bandnames is not None:
        if len(bandnames) != array.shape[0]:
            raise RuntimeError('Need {} bandnames. {} given'
                               .format(array.shape[0], len(bandnames)))
    else:
        bandnames = ['Band {}'.format(i+1) for i in range(array.shape[0])]
    # Take georeferencing from a reference image if one was provided:
    if ref_image is not None:
        refimg = gdal.Open(ref_image)
        geo_t = refimg.GetGeoTransform()
        projection = refimg.GetProjection()
    driver = gdal.GetDriverByName('GTIFF')
    # Replace NaNs with the no-data value (only meaningful for float arrays):
    if np.issubdtype(array.dtype, np.floating):
        array[np.isnan(array)] = ndv
    DataSet = driver.Create(name, array.shape[2],
                            array.shape[1], array.shape[0],
                            data_type)
    DataSet.SetGeoTransform(geo_t)
    DataSet.SetProjection(projection)
    for i, image in enumerate(array, 1):
        DataSet.GetRasterBand(i).WriteArray(image)
        DataSet.GetRasterBand(i).SetNoDataValue(ndv)
        # Label each band with the provided band name:
        DataSet.GetRasterBand(i).SetDescription(bandnames[i-1])
    DataSet.FlushCache()
    return name
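For illustration only, here is a minimal, hypothetical call of create_geotiff(); the filename demo.tif, the random array, and the dummy georeferencing are all made up for this sketch:
from osgeo import osr

# Build a WKT projection string for EPSG:4326 (dummy choice for the demo)
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)

demo = np.random.rand(2, 100, 100).astype(np.float32)  # 2-band random stack
create_geotiff('demo.tif', demo, gdal.GDT_Float32, ndv=0,
               bandnames=['first', 'second'],
               geo_t=(0.0, 0.001, 0.0, 0.0, 0.0, -0.001),  # dummy geotransform
               projection=srs.ExportToWkt())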
def timeseries_metrics(raster, ndv=0):
    # Make use of numpy's nan-aware functions
    # Ensure we are working with a float array:
    if 'float' not in raster.dtype.name:
        raster = raster.astype(np.float64)
    # Set no-data values to nan:
    if not np.isnan(ndv):
        raster[np.equal(raster, ndv)] = np.nan
    # Build a dictionary of the metrics:
    tsmetrics = {}
    rperc = np.nanpercentile(raster, [5, 50, 95], axis=0)
    tsmetrics['mean'] = np.nanmean(raster, axis=0)
    tsmetrics['max'] = np.nanmax(raster, axis=0)
    tsmetrics['min'] = np.nanmin(raster, axis=0)
    tsmetrics['range'] = tsmetrics['max'] - tsmetrics['min']
    tsmetrics['median'] = rperc[1]
    tsmetrics['p5'] = rperc[0]
    tsmetrics['p95'] = rperc[2]
    tsmetrics['prange'] = rperc[2] - rperc[0]
    tsmetrics['var'] = np.nanvar(raster, axis=0)
    # Coefficient of variation (computed here as variance / mean):
    tsmetrics['cov'] = tsmetrics['var'] / tsmetrics['mean']
    return tsmetrics
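As a quick sanity check (purely synthetic data, not part of the workflow), the helper can be exercised on a small random stack; all values below are made up:
test_stack = np.random.rand(5, 4, 4)   # 5 dates, 4x4 pixels
test_stack[:, 0, 0] = 0                # simulate a no-data pixel
m = timeseries_metrics(test_stack.copy(), ndv=0)
print(sorted(m.keys()))
print('CoV at a valid pixel:    ', m['cov'][1, 1])
# numpy may warn about the all-NaN slice; the result should be nan
print('CoV at the no-data pixel:', m['cov'][0, 0])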
Build a virtual raster table (VRT) from the VV-polarized GeoTIFFs and create a variable containing the VRT filename:
!gdalbuildvrt -separate {path}/raster_stack.vrt {path}/tiffs/*_VV.tiff
image_file = path/"raster_stack.vrt"
Create an index of datetime64 data with Pandas:
!ls {path}/tiffs/*_VV.tiff | sort | sed 's/[^0-9]*//g' | cut -c 1-8 > {path}/raster_stack.dates
datefile = path/'raster_stack.dates'
dates = open(str(datefile)).readlines()
tindex = pd.DatetimeIndex(dates)
Print the bands and dates for all images in the virtual raster table (VRT):
j = 1
print(f"Bands and dates for {image_file}")
for i in tindex:
    print("{:4d} {}".format(j, i.date()), end=' ')
    j += 1
    if j % 5 == 1:
        print()
img = gdal.Open(str(image_file))
band = img.GetRasterBand(1)
raster0 = band.ReadAsArray()
band_number = 0 # Needed for updates
rasterstack = img.ReadAsArray()
Print the bands, pixels, and lines:
print(f"Number of bands: {img.RasterCount}")
print(f"Number of pixels: {img.RasterXSize}")
print(f"Number of lines: {img.RasterYSize}")
Note that if your data were generated by HyP3, this step is not necessary! HyP3 performs the full data calibration and provides you with calibrated data in power scale.
If your data are from a different source, however, calibration may be necessary to ensure that image gray values correspond to proper radar cross section information.
Calibration coefficients for SAR data are often defined in the decibel (dB) scale due to the high dynamic range of the imaging system. For L-band ALOS PALSAR data, for example, the conversion from uncalibrated DN values to calibrated radar cross section values in dB scale is performed by applying a standard calibration factor of -83 dB:
$\gamma^0_{dB} = 20 \cdot \log_{10}(DN) - 83$
The data at hand are radiometrically terrain corrected images, which are often expressed as terrain-flattened $\gamma^0$ backscattering coefficients. For forest and land cover monitoring applications, $\gamma^0$ is the preferred metric.
To apply the calibration constant for your data and export in dB scale, uncomment the following code cell:
#caldB=20*np.log10(rasterstack)-83
While dB-scaled images are often "visually pleasing", they are not a good basis for mathematical operations on data. For instance, when we compute the mean of observations, it makes a difference whether we do that in power or dB scale. Since dB scale is a logarithmic scale, we cannot simply average data in that scale.
Please note that the correct scale in which these operations need to be performed is the power scale. This is critical, e.g., when speckle filters are applied, spatial operations like block averaging are performed, or time series are analyzed.
To convert from dB to power, apply: $\gamma^0_{pwr} = 10^{\frac{\gamma^0_{dB}}{10}}$
#calPwr=np.power(10.,caldB/10.)
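To see the difference in practice, here is a minimal sketch with two made-up backscatter values; averaging in power scale and averaging in dB scale do not agree:
# Two hypothetical backscatter observations in power scale: -10 dB and -30 dB
pwr = np.array([0.1, 0.001])

mean_pwr_db = 10 * np.log10(pwr.mean())  # average in power, then convert: ~ -13 dB
mean_db = (10 * np.log10(pwr)).mean()    # convert first, then average: -20 dB

print(f'Mean in power scale: {mean_pwr_db:.1f} dB')
print(f'Mean in dB scale:    {mean_db:.1f} dB')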
Create and move into a directory in which to store our plots and animations:
product_path = path/'plots_and_animations'
if not product_path.exists():
    product_path.mkdir()
%%capture
fig = plt.figure(figsize=(14, 8))
ax = fig.subplots()
ax.axis('off')
vmin = np.percentile(rasterstack.flatten(), 5)
vmax = np.percentile(rasterstack.flatten(), 95)
r0dB = 20 * np.log10(raster0) - 83  # dB-scaled first image (for reference; not used below)
im = ax.imshow(raster0, cmap='gray', vmin=vmin, vmax=vmax)
ax.set_title("{}".format(tindex[0].date()))

def animate(i):
    ax.set_title("{}".format(tindex[i].date()))
    im.set_data(rasterstack[i])

# Interval is given in milliseconds
ani = animation.FuncAnimation(fig, animate, frames=rasterstack.shape[0], interval=400)
Configure matplotlib's RC settings for the animation:
rc('animation', embed_limit=40971520.0)  # Increase the embed limit so the entire animation can be displayed
Create a javascript animation of the time-series running inline in the notebook:
HTML(ani.to_jshtml())
Delete the dummy png that may have been saved while generating the JavaScript animation in the last code cell:
try:
    tmp = product_path/'None0000000.png'
    tmp.unlink()
except FileNotFoundError:
    pass
Save the animation (animation-AgSite.gif):
ani.save(product_path/'animation-AgSite.gif', writer='pillow', fps=2)
Once a time series is constructed, we can compute a set of metrics that will aid us later in applications such as change detection and active agriculture detection. In the next code cells, we will compute these metrics for each pixel in the stack.
First, we mask out pixels without relevant information so that the statistics we calculate later are unbiased. Then we calculate the time series metrics:
mask = rasterstack == 0
rasterPwr = np.ma.array(rasterstack, mask=mask, dtype=np.float64)
%%capture
metrics = timeseries_metrics(rasterPwr.filled(np.nan), ndv=np.nan)
metrics.keys()
Let's look at the histograms for the time series variance and coefficient of variation to aid in displaying those images:
fig, ax = plt.subplots(1, 2, figsize=(16,4))
ax[0].hist(metrics['var'].flatten(), bins=100, range=(0, 0.005))
ax[1].hist(metrics['cov'].flatten(), bins=100, range=(0, 0.04))
_ = ax[0].set_title('Variance')
_ = ax[1].set_title('Coefficient of Variation')
We use thresholds determined from those histograms to set the scaling in the time series visualization. For the backscatter metrics we choose a typical range appropriate for this ecosystem, the C-band radar sensor, and the VV polarization. A typical range is -30 dB (0.001) to -10 dB (0.1).
# List the metrics keys you want to plot
metric_keys = ['mean', 'median', 'max',
'min', 'p95', 'p5', 'range',
'prange', 'var', 'cov']
fig = plt.figure(figsize=(16, 40))
idx = 1
for i in metric_keys:
    ax = fig.add_subplot(5, 2, idx)
    if i == 'var':
        vmin, vmax = (0.0, 0.003)
    elif i == 'cov':
        vmin, vmax = (0.0, 0.03)
    else:
        vmin, vmax = (0.0, 0.3)
    ax.imshow(metrics[i], vmin=vmin, vmax=vmax, cmap='gray')
    ax.set_title(i.upper())
    ax.axis('off')
    idx += 1
Areas of active agriculture show strong changes of vegetation throughout a growing season. From planting through maturation and harvest, the strong variation of vegetation structure causes the radar brightness to vary significantly throughout a season, resulting in large values for the Range, Variance, and Coefficient of Variation metrics. Steps for agriculture mapping using the Coefficient of Variation include:

- computing the CoV metric for the time series stack,
- selecting a threshold from the CoV histogram and CDF, and
- applying the threshold to classify actively managed agriculture.

With the CoV metric already calculated above, we can now move toward thresholding the metric:
We can set a threshold $t$ for the coefficient of variation image to classify change in the time series:
${cp}_{x,y} = \frac{\sigma_{x,y}^2}{\overline{X}_{x,y}} > t$
Let's look at the histogram and the Cumulative Distribution Function (CDF) of the coefficient of variation:
plt.rcParams.update({'font.size': 14})
fig = plt.figure(figsize=(14, 4)) # Initialize figure with a size
ax1 = fig.add_subplot(121)  # 121 determines: 1 row, 2 plots, first plot
ax2 = fig.add_subplot(122)
h = ax1.hist(metrics['cov'].flatten(), bins=200, range=(0, 0.03))
ax1.xaxis.set_label_text('Coefficient of Variation Metric')
ax1.set_title('Histogram')
n, bins, patches = ax2.hist(metrics['cov'].flatten(), bins=200,
                            range=(0, 0.03), cumulative=True,
                            density=True, histtype='step',
                            label='Empirical')
ax2.xaxis.set_label_text('Coefficient of Variation Metric')
ax2.set_title('CDF')
plt.savefig(product_path/'thresh_cov_histogram.png', dpi=200, transparent=True)
outind = np.where(n > 0.9)
threshind = np.min(outind)
threshcov = bins[threshind]
ax1.axvline(threshcov, color='red')
ax2.axvline(threshcov, color='red')
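As a cross-check (not part of the original workflow), essentially the same threshold can be obtained directly with numpy's percentile function, provided most CoV values fall within the plotted range:
# 90th percentile of the CoV distribution, ignoring nan pixels
t90 = np.nanpercentile(metrics['cov'], 90)
print(f'CDF-derived threshold: {threshcov:.4f}')
print(f'np.nanpercentile(90):  {t90:.4f}')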
With a threshold $t$ set at the 90th percentile of the CoV CDF (i.e., the 10% of pixels with the highest coefficient of variation), the change pixels would look like the following image:
plt.figure(figsize=(8, 8))
mask = metrics['cov'] < threshcov  # True where CoV is below the threshold (no change)
maskcov = ~mask  # The agriculture (change) mask; stored for later output
plt.imshow(mask, cmap='gray')
_ = plt.title('Threshold Classifier on Coefficient of Variation > %1.3f' % threshcov)
plt.savefig(product_path/'changes_cov_threshold.png', dpi=200, transparent=True)
proj = img.GetProjection()
geotrans = list(img.GetGeoTransform())
geotrans
We use the root of the time series data stack name, append _tsmetrics to form an output directory, and name each metric GeoTIFF <stem><metric>.tif:
dirname = path/(image_file.stem + '_tsmetrics')
if not dirname.exists():
    dirname.mkdir()
Now we can output the individual metrics as GeoTIFF images:
names = []  # List to keep track of all the names
for i in metrics:
    # Arguments: name, array, data type, NDV, band names, geotransform, projection
    name = dirname/(f'{image_file.stem}{i}.tif')
    create_geotiff(str(name), metrics[i], gdal.GDT_Float32, np.nan,
                   [i], geo_t=geotrans, projection=proj)
    names.append(name)
Now we can output the map of active agriculture that we have created using the Coefficient of Variation metric. We will output to a GeoTIFF image:
imagenamecov = path/(image_file.stem + 'Ag_from_cov_threshold.tif')
create_geotiff(str(imagenamecov), maskcov.astype(np.uint8), gdal.GDT_Byte,
               np.nan, ['CoV threshold Ag mask'], geo_t=geotrans, projection=proj)
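Optionally, we can sanity-check the georeferencing of the exported mask with gdal.Info():
# Optional check: report raster size and the start of the projection WKT
info = gdal.Info(str(imagenamecov), format='json')
print('Size:', info['size'])
print('Projection:', info['coordinateSystem']['wkt'][:80], '...')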
The Coefficient of Variation is a good indicator of agricultural activity; however, by itself it is not foolproof. Further analysis of radar brightness can be performed. Also, other remote sensing data can be added to remove false alarms.
For a bit more information on change detection and SAR in general, please look at the recently published SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation.
Explore agriculture mapping from the 46-image deep data stack a bit more by:
Exercise8-AgricultureMappingwithSARUsingCoV.ipynb - Version 1.4.1 - November 2021

Version Changes:

- Replaced os and obsolete asfn modules with pathlib counterparts
- asf_notebook.py html converted to Markdown
- Added url_widget
- Replaced asfn_notebook with opensarlab_lib