HyP3's Python SDK, hyp3_sdk, provides a convenience wrapper around the HyP3 API and HyP3 jobs.
The HyP3 SDK can be installed using Anaconda/Miniconda (recommended) via conda:
conda install -c conda-forge hyp3_sdk
Or using pip:
python -m pip install hyp3_sdk
Full documentation of the SDK can be found in the HyP3 documentation.
We also recommend installing the asf_search Python package for performing searches of the ASF catalog. It can be installed using Anaconda/Miniconda (recommended) via conda:
conda install -c conda-forge asf_search
Or using pip:
python -m pip install asf_search
Full documentation of asf_search can be found in the ASF search documentation.
# initial setup
import asf_search as asf
import hyp3_sdk as sdk
By default, the SDK will attempt to pull your NASA Earthdata Login credentials out of ~/.netrc, or you can pass your credentials in directly:
# use your ~/.netrc credentials
hyp3 = sdk.HyP3()
# or enter your credentials
hyp3 = sdk.HyP3(prompt=True)
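If you go the .netrc route, the Earthdata Login entry looks like the following (myusername and mypassword are placeholders for your own credentials):

```
machine urs.earthdata.nasa.gov
    login myusername
    password mypassword
```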
The SDK provides a submit method for all supported job types.
Sentinel-1 Radiometric Terrain Correction (RTC) jobs are submitted using ESA granule IDs. The example granules below can be viewed in ASF Search.
granules = [
'S1A_IW_SLC__1SDV_20210214T154835_20210214T154901_036588_044C54_8494',
'S1B_IW_SLC__1SDV_20210210T153131_20210210T153159_025546_030B48_B568',
'S1A_IW_SLC__1SDV_20210210T025526_20210210T025553_036522_0449E2_7769',
'S1A_IW_SLC__1SDV_20210210T025501_20210210T025528_036522_0449E2_3917',
'S1B_IW_SLC__1SDV_20210209T030255_20210209T030323_025524_030A8D_7E88',
'S1B_IW_SLC__1SDV_20210209T030227_20210209T030257_025524_030A8D_5BAF',
'S1A_IW_SLC__1SDV_20210202T154835_20210202T154902_036413_044634_01A1',
]
rtc_jobs = sdk.Batch()
for g in granules:
    rtc_jobs += hyp3.submit_rtc_job(g, name='rtc-example')
print(rtc_jobs)
Here we've given each job the name rtc-example, which we can use later to search for these jobs. HyP3.submit_rtc_job also accepts keyword arguments to customize the RTC products to your application.
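For example, a sketch of a few common RTC customizations; the parameter names below (resolution, radiometry, speckle_filter, scale) mirror the HyP3 API's RTC options, but check the SDK documentation for the full list and current defaults before relying on them:

```python
# A few common RTC customization options, collected in a dict for clarity.
# These names mirror the HyP3 API's RTC options; verify them against your
# hyp3_sdk version before submitting.
rtc_options = {
    'resolution': 30,        # output pixel spacing in meters
    'radiometry': 'gamma0',  # radiometric normalization convention
    'speckle_filter': True,  # apply a speckle filter to the output
    'scale': 'power',        # pixel value scale of the product
}

# Submission then becomes (requires an authenticated HyP3 object):
# rtc_job = hyp3.submit_rtc_job(granule, name='rtc-example', **rtc_options)
```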
The SDK can also submit Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) jobs, which process reference and secondary granule pairs. For a particular reference granule, we may want to use the nearest and next-nearest temporal neighbor granules as secondary scenes. To programmatically find the secondary granules for a reference granule, we'll define a get_nearest_neighbors function that uses the baseline stack method from asf_search:
from typing import Optional

def get_nearest_neighbors(granule: str, max_neighbors: Optional[int] = None) -> asf.ASFSearchResults:
    # find the reference granule in the ASF catalog
    granule = asf.granule_search(granule)[-1]
    # keep only scenes acquired before the reference, ordered most recent first
    stack = reversed([item for item in granule.stack() if item.properties['temporalBaseline'] < 0])
    return asf.ASFSearchResults(stack)[:max_neighbors]
Now, using the example granule list for our RTC jobs as the reference scenes, we can find their nearest and next-nearest neighbor granules, and submit them as pairs for InSAR processing.
from tqdm.auto import tqdm # For a nice progress bar: https://github.com/tqdm/tqdm#ipython-jupyter-integration
insar_jobs = sdk.Batch()
for reference in tqdm(granules):
    neighbors = get_nearest_neighbors(reference, max_neighbors=2)
    for secondary in neighbors:
        insar_jobs += hyp3.submit_insar_job(reference, secondary.properties['sceneName'], name='insar-example')
print(insar_jobs)
Like RTC jobs, HyP3.submit_insar_job accepts keyword arguments to customize the InSAR products to your application.
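As a sketch, a few common InSAR customizations; the parameter names below (looks, include_look_vectors, include_dem) mirror the HyP3 API's InSAR options, but verify them against the SDK documentation for your version:

```python
# A few common InSAR customization options, collected in a dict for clarity.
# These names mirror the HyP3 API's InSAR options; verify them against your
# hyp3_sdk version before submitting.
insar_options = {
    'looks': '10x2',               # range x azimuth looks (finer than the 20x4 default)
    'include_look_vectors': True,  # include look vector maps in the product
    'include_dem': True,           # include the DEM used during processing
}

# Submission then becomes (requires an authenticated HyP3 object):
# insar_job = hyp3.submit_insar_job(reference, secondary, name='insar-example', **insar_options)
```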
AutoRIFT supports processing Sentinel-1, Sentinel-2, or Landsat-8 Collection 2 pairs.
autorift_pairs = [
# Sentinel-1 ESA granule IDs
('S1A_IW_SLC__1SSH_20170221T204710_20170221T204737_015387_0193F6_AB07',
'S1B_IW_SLC__1SSH_20170227T204628_20170227T204655_004491_007D11_6654'),
# Sentinel-2 ESA granule IDs
('S2B_MSIL1C_20200612T150759_N0209_R025_T22WEB_20200612T184700',
'S2A_MSIL1C_20200627T150921_N0209_R025_T22WEB_20200627T170912'),
# Landsat 8
('LC08_L1TP_009011_20200703_20200913_02_T1',
'LC08_L1TP_009011_20200820_20200905_02_T1'),
]
autorift_jobs = sdk.Batch()
for reference, secondary in autorift_pairs:
    autorift_jobs += hyp3.submit_autorift_job(reference, secondary, name='autorift-example')
print(autorift_jobs)
AutoRIFT does not currently accept any keyword arguments for product customization.
Once jobs are submitted, you can either watch the jobs until they finish:
rtc_jobs = hyp3.watch(rtc_jobs)
which requires you to keep the cell/terminal running, or you can come back later and search for the jobs by name:
rtc_jobs = hyp3.find_jobs(name='rtc-example')
rtc_jobs = hyp3.watch(rtc_jobs)
Batches are collections of jobs. They provide a snapshot of the job statuses at the time the batch was created or last refreshed. To get updated information on a batch:
print(insar_jobs)
insar_jobs = hyp3.refresh(insar_jobs)
print(insar_jobs)
hyp3.watch()
will return a refreshed batch once every job in the batch has completed.
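Under the hood, watching a batch amounts to a refresh-and-sleep loop. A minimal sketch of that pattern, assuming only that the batch object exposes a complete() method (as hyp3_sdk Batch objects do) and that refresh is a callable like hyp3.refresh (the wait_for name and its defaults are illustrative, not part of the SDK):

```python
import time

def wait_for(batch, refresh, interval=60, timeout=10800):
    """Poll until every job in the batch is complete, then return the refreshed batch."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        batch = refresh(batch)    # e.g. hyp3.refresh
        if batch.complete():
            return batch
        time.sleep(interval)      # avoid hammering the API
    raise TimeoutError('batch did not complete before the timeout')
```

In practice, prefer hyp3.watch(batch), which does this for you and shows progress.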
Batches can be added together:
print(f'Number of Jobs:\n RTC:{len(rtc_jobs)}\n InSAR:{len(insar_jobs)}\n autoRIFT:{len(autorift_jobs)}')
all_jobs = rtc_jobs + insar_jobs + autorift_jobs
print(f'Total number of Jobs: {len(all_jobs)}')
You can check the status of a batch (as of its last refresh) by printing the batch:
print(all_jobs)
and filter jobs by status:
succeeded_jobs = all_jobs.filter_jobs(succeeded=True, running=False, failed=False)
print(f'Number of succeeded jobs: {len(succeeded_jobs)}')
failed_jobs = all_jobs.filter_jobs(succeeded=False, running=False, failed=True)
print(f'Number of failed jobs: {len(failed_jobs)}')
You can download the files for all successful jobs:
file_list = succeeded_jobs.download_files()
Note: only succeeded jobs will have files to download.
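HyP3 products are delivered as zip archives. A minimal sketch for unpacking them with the standard library, where zip_paths would be the list returned by download_files (the extract_products helper is illustrative, not part of the SDK):

```python
from pathlib import Path
from zipfile import ZipFile

def extract_products(zip_paths, dest='products'):
    """Extract each downloaded product zip into dest/ and return the extracted paths."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for zip_path in zip_paths:
        with ZipFile(zip_path) as zf:
            zf.extractall(dest)
    return sorted(dest.iterdir())
```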