This notebook shows how you can overlay a video on a Leaflet map. The video can come off the shelf from a URL, but we will also see how to create your own video from NumPy arrays.
The following libraries are needed:
tqdm
rasterio
numpy
matplotlib
ipyleaflet
The recommended way is to try to conda install them first, and if they are not found, then pip install them.
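For instance (a sketch, assuming all the packages are available on the conda-forge channel):

conda install -c conda-forge tqdm rasterio numpy matplotlib ipyleaflet
# fall back to pip for anything conda cannot find:
pip install tqdm rasterio numpy matplotlib ipyleaflet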
import ftplib
import os
from tqdm import tqdm
import rasterio
from rasterio.warp import reproject, Resampling
from affine import Affine
import matplotlib.pyplot as plt
import numpy as np
import subprocess
from base64 import b64encode
from ipyleaflet import Map, VideoOverlay
try:
    from StringIO import StringIO
    py3 = False
except ImportError:
    from io import StringIO, BytesIO
    py3 = True
center = [25, -130]
zoom = 4
m = Map(center=center, zoom=zoom)
m
If you happen to find a pre-built video for which you know the geographic bounds, overlaying it is as simple as:
bounds = [(13, -130), (32, -100)]
io = VideoOverlay(url='https://www.mapbox.com/bites/00188/patricia_nasa.webm', bounds=bounds)
m.add_layer(io)
io.interact(opacity=(0.0,1.0,0.01))
However, if you zoom in and play with the opacity, you will see that the coastline in the video does not line up perfectly with the Leaflet map. This is because Leaflet uses a projection known as Web Mercator, whereas the video comes from satellite imagery. The view of the Earth from a geostationary satellite is called a disk: the part of the Earth that the satellite sees. Usually the disk is reprojected to e.g. WGS84, which is slightly different from Web Mercator. Over a relatively small area, like the one in this video, the difference is acceptable, but if you want to show some data over the whole map, it becomes a problem. In that case you will need to reproject, as we will see.
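To get a feel for the mismatch, here is a minimal sketch (not part of the processing pipeline) comparing where a given latitude lands on a WGS84 (plate carrée) image, where latitude is linear, versus under the Web Mercator formula y = ln(tan(π/4 + φ/2)). The discrepancy grows with latitude, which is why a global overlay drifts visibly toward the top and bottom of the map:

import numpy as np

def mercator_y(lat_deg):
    # Web Mercator ordinate for a latitude in degrees
    lat = np.radians(lat_deg)
    return np.log(np.tan(np.pi / 4 + lat / 2))

for lat in [15, 30, 45, 60]:
    linear = lat / 60                            # fraction of the way to 60N on a WGS84 image
    mercator = mercator_y(lat) / mercator_y(60)  # same fraction on a Web Mercator map
    print('lat %2d: linear %.3f vs mercator %.3f' % (lat, linear, mercator))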
Here we download some satellite rainfall estimates (NASA's Global Precipitation Measurement). They come in the WGS84 projection and cover the entire globe between latitudes 60N and 60S, with a spatial resolution of 0.1° and a temporal resolution of 30 minutes. We first download each individual file for the day 2017/01/01 (48 files).
os.makedirs('tif', exist_ok=True)
ftp = ftplib.FTP('arthurhou.pps.eosdis.nasa.gov')
passwd = 'ebwje/cspdibsuAhnbjm/dpn'
login = ''
for c in passwd:
    # the login is lightly obfuscated: shift every character code down by one
    login += chr(ord(c) - 1)
ftp.login(login, login)
ftp.cwd('gpmdata/2017/01/01/gis')
lst = [i for i in ftp.nlst() if i.startswith('3B-HHR-GIS.MS.MRG.3IMERG.') and i.endswith('.V05B.tif')]
for filename in tqdm(lst):
    if not os.path.exists('tif/' + filename):
        ftp.retrbinary("RETR " + filename, open('tif/' + filename, 'wb').write)
ftp.quit()
When we convert them into images, we will need to apply a color map to the data. In order to scale this color map, we need to extract the maximum value present in the whole data set. Since the maximum value will appear very rarely, the visual rendering will be better if we saturate the images.
vmax = 0
for f in os.listdir('tif'):
    dataset = rasterio.open('tif/' + f)
    data = dataset.read()[0][300:1500]
    data = np.where(data == 9999, np.nan, data) / 10  # in mm/h
    vmax = max(vmax, np.nanmax(data))
vmax *= 0.5 # saturate a little bit
We reproject our data (originally in WGS84, also known as EPSG:4326) to Web Mercator (also known as EPSG:3857), which is the projection used by Leaflet. After applying a color map (here plt.cm.jet), we save each image to a PNG file.
# At this point if GDAL complains about not being able to open EPSG support file gcs.csv, try in the terminal:
# export GDAL_DATA=`gdal-config --datadir`
os.makedirs('png', exist_ok=True)
for f in tqdm(os.listdir('tif')):
    png_name = f[:-3] + 'png'
    if not os.path.exists('png/' + png_name):
        dataset = rasterio.open('tif/' + f)
        with rasterio.Env():
            rows, cols = 1200, 3600
            src_transform = Affine(0.1, 0, -180, 0, -0.1, 60)
            src_crs = {'init': 'EPSG:4326'}
            data = dataset.read()[0][300:1500]
            source = np.where((data == 9999) | (~np.isfinite(data)), 0, data) / 10  # in mm/h
            dst_crs = {'init': 'EPSG:3857'}
            bounds = [-180, -60, 180, 60]
            dst_transform, width, height = rasterio.warp.calculate_default_transform(src_crs, dst_crs, cols, rows, *bounds)
            dst_shape = height, width
            destination = np.zeros(dst_shape)
            reproject(
                source,
                destination,
                src_transform=src_transform,
                src_crs=src_crs,
                dst_transform=dst_transform,
                dst_crs=dst_crs,
                resampling=Resampling.nearest)
            data_web = destination
        fig, ax = plt.subplots(1, figsize=(36, 12))
        fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
        ax.imshow(data_web, vmin=0, vmax=vmax, interpolation='nearest', cmap=plt.cm.jet)
        ax.axis('tight')
        ax.axis('off')
        plt.savefig('png/' + png_name)
        plt.close()
We use ffmpeg to create a video from our individual images (ffmpeg can be installed on many systems). This utility needs our files to be named with a sequential number, which is why we rename them. The ffmpeg commands are pretty obscure, but they do the job, and we finally get a rain.mp4 video!
png_files = os.listdir('png')
png_files.sort()
for i, f in enumerate(png_files):
    if f.startswith('3B-HHR-GIS.MS.MRG.3IMERG.'):
        os.rename('png/' + f, 'png/f' + str(i).zfill(2) + '.png')
if not os.path.exists('mp4/rain.mp4'):
    os.makedirs('mp4', exist_ok=True)
    bitrate = '4000k'
    framerate = '12'
    cmd = 'ffmpeg -r $framerate -y -f image2 -pattern_type glob -i "png/*.png" -c:v libx264 -preset slow -b:v $bitrate -pass 1 -c:a libfdk_aac -b:a 0k -f mp4 -r $framerate -profile:v high -level 4.2 -pix_fmt yuv420p -movflags +faststart -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" /dev/null && \
    ffmpeg -r $framerate -f image2 -pattern_type glob -i "png/*.png" -c:v libx264 -preset slow -b:v $bitrate -pass 2 -c:a libfdk_aac -b:a 0k -r $framerate -profile:v high -level 4.2 -pix_fmt yuv420p -movflags +faststart -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" mp4/rain.mp4'
    cmd = cmd.replace('$framerate', framerate).replace('$bitrate', bitrate)
    subprocess.check_output(cmd, shell=True)
The video can be sent to the browser by embedding the data into the URL, et voilà!
center = [0, -70]
zoom = 3
m = Map(center=center, zoom=zoom, interpolation='nearest')
m
with open('mp4/rain.mp4', 'rb') as f:
    data = b64encode(f.read())
if py3:
    # b64encode returns bytes in Python 3; decode to str for the data URL
    data = data.decode('ascii')
videourl = 'data:video/mp4;base64,' + data
bounds = [(-60, -180), (60, 180)]
io = VideoOverlay(url=videourl, bounds=bounds)
m.add_layer(io)
io.interact(opacity=(0.0,1.0,0.01))