Using the HyP3 SDK for Python


HyP3's Python SDK hyp3_sdk provides a convenience wrapper around the HyP3 API and HyP3 jobs.

The HyP3 SDK can be installed using Anaconda/Miniconda (recommended) via conda:

conda install -c conda-forge hyp3_sdk

Or using pip:

python -m pip install hyp3_sdk

Full documentation of the SDK can be found in the HyP3 documentation.

In [ ]:
# initial setup
import hyp3_sdk as sdk
from hyp3_sdk import asf_search

Authenticating to the API

The SDK will attempt to pull your NASA Earthdata Login credentials out of ~/.netrc by default, or you can pass your credentials in directly

In [ ]:
# .netrc
hyp3 = sdk.HyP3()
In [ ]:
# or enter your credentials
hyp3 = sdk.HyP3(prompt=True)
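If you don't yet have a ~/.netrc file, a minimal sketch of creating the Earthdata Login entry the SDK looks for (using the standard urs.earthdata.nasa.gov machine name; keep the file private, e.g. chmod 600 ~/.netrc):

```python
def netrc_entry(username: str, password: str) -> str:
    # The machine name used for NASA Earthdata Login authentication
    return f'machine urs.earthdata.nasa.gov login {username} password {password}\n'

# Append the entry to ~/.netrc:
# from pathlib import Path
# with (Path.home() / '.netrc').open('a') as f:
#     f.write(netrc_entry('your_username', 'your_password'))
```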

Submitting jobs

The SDK provides a submit method for all supported job types.

Submitting Sentinel-1 RTC jobs

Sentinel-1 Radiometric Terrain Correction (RTC) jobs are submitted using ESA granule IDs. The example granules below can be viewed in ASF Search here.
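Sentinel-1 granule IDs encode the platform, beam mode, product type, and acquisition start/stop times in underscore-delimited fields. A small illustrative helper (not part of the SDK) for pulling the acquisition date out of an ID:

```python
def s1_acquisition_date(granule: str) -> str:
    # The start time is the first field shaped like YYYYMMDDTHHMMSS
    parts = granule.split('_')
    start = next(p for p in parts if len(p) == 15 and 'T' in p)
    return start.split('T')[0]  # YYYYMMDD
```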

In [ ]:
granules = [
    'S1A_IW_SLC__1SDV_20210214T154835_20210214T154901_036588_044C54_8494',
    'S1B_IW_SLC__1SDV_20210210T153131_20210210T153159_025546_030B48_B568',
    'S1A_IW_SLC__1SDV_20210210T025526_20210210T025553_036522_0449E2_7769',
    'S1A_IW_SLC__1SDV_20210210T025501_20210210T025528_036522_0449E2_3917',
    'S1B_IW_SLC__1SDV_20210209T030255_20210209T030323_025524_030A8D_7E88',
    'S1B_IW_SLC__1SDV_20210209T030227_20210209T030257_025524_030A8D_5BAF',
    'S1A_IW_SLC__1SDV_20210202T154835_20210202T154902_036413_044634_01A1',
]


rtc_jobs = sdk.Batch()
for g in granules:
    rtc_jobs += hyp3.submit_rtc_job(g, name='rtc-example')
print(rtc_jobs)

Here we've given each job the name rtc-example, which we can use later to search for these jobs.

HyP3.submit_rtc_job also accepts keyword arguments to customize the RTC products for your application.
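For example, a subset of the customization options (the parameter names and values below are a sketch; check the SDK documentation for the authoritative list and defaults):

```python
# Assumed submit_rtc_job keyword arguments -- verify against the SDK docs
rtc_options = {
    'radiometry': 'gamma0',   # gamma0 or sigma0 backscatter
    'resolution': 30,         # output pixel spacing in meters
    'speckle_filter': True,   # apply speckle filtering
}

# job = hyp3.submit_rtc_job(granule, name='rtc-custom-example', **rtc_options)
```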

Submitting Sentinel-1 InSAR jobs

The SDK can also submit Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) jobs. Using the example granule list for our RTC jobs as the reference scenes, we can find their nearest and next-nearest neighbor granules, and submit them as pairs for InSAR processing.

In [ ]:
from tqdm.auto import tqdm  # For a nice progress bar: https://github.com/tqdm/tqdm#ipython-jupyter-integration

insar_jobs = sdk.Batch()
for reference in tqdm(granules):
    neighbors_metadata = asf_search.get_nearest_neighbors(reference, max_neighbors=2)
    for secondary_metadata in neighbors_metadata:
        insar_jobs += hyp3.submit_insar_job(reference, secondary_metadata['granuleName'], name='insar-example')
print(insar_jobs)

As with RTC jobs, HyP3.submit_insar_job accepts keyword arguments to customize the InSAR products for your application.
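For example (again a sketch; the parameter names below are assumptions to verify against the SDK documentation):

```python
# Assumed submit_insar_job keyword arguments -- verify against the SDK docs
insar_options = {
    'looks': '10x2',                   # multilooking: number of range x azimuth looks
    'include_los_displacement': True,  # also produce a line-of-sight displacement product
}

# job = hyp3.submit_insar_job(reference, secondary, name='insar-custom', **insar_options)
```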

Submitting autoRIFT jobs

AutoRIFT supports processing Sentinel-1, Sentinel-2, or Landsat-8 Collection 2 pairs.

In [ ]:
autorift_pairs = [
    # Sentinel-1 ESA granule IDs
    ('S1A_IW_SLC__1SSH_20170221T204710_20170221T204737_015387_0193F6_AB07',
     'S1B_IW_SLC__1SSH_20170227T204628_20170227T204655_004491_007D11_6654'),
    # Sentinel-2 ESA granule IDs
    ('S2B_MSIL1C_20200612T150759_N0209_R025_T22WEB_20200612T184700',
     'S2A_MSIL1C_20200627T150921_N0209_R025_T22WEB_20200627T170912'),
    # Sentinel-2 Element 84 Earth Search ID
    ('S2B_22WEB_20200612_0_L1C', 'S2A_22WEB_20200627_0_L1C'),
    # Landsat 8
    ('LC08_L1TP_009011_20200703_20200913_02_T1',
     'LC08_L1TP_009011_20200820_20200905_02_T1'),
]

autorift_jobs = sdk.Batch()
for reference, secondary in autorift_pairs:
    autorift_jobs += hyp3.submit_autorift_job(reference, secondary, name='autorift-example')
print(autorift_jobs)

AutoRIFT does not currently accept any keyword arguments for product customization.

Monitoring jobs

Once jobs are submitted, you can either watch them until they finish

In [ ]:
rtc_jobs = hyp3.watch(rtc_jobs)

which will require you to keep the cell/terminal running, or you can come back later and search for jobs

In [ ]:
rtc_jobs = hyp3.find_jobs(name='rtc-example')
rtc_jobs = hyp3.watch(rtc_jobs)
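Conceptually, watching is just a polling loop. A simplified, SDK-independent sketch of what such a loop does (hypothetical helper, not SDK code):

```python
import time

def watch_sketch(refresh, is_complete, interval=60, timeout=10800):
    """Poll refresh() until is_complete(batch) is True or timeout seconds pass."""
    deadline = time.monotonic() + timeout
    batch = refresh()
    while not is_complete(batch):
        if time.monotonic() >= deadline:
            raise TimeoutError('jobs did not finish before the timeout')
        time.sleep(interval)  # avoid hammering the API
        batch = refresh()
    return batch
```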

Downloading files

Batches are collections of jobs. They provide a snapshot of each job's status at the time the batch was created or last refreshed. To get updated information on a batch

In [ ]:
print(insar_jobs)
insar_jobs = hyp3.refresh(insar_jobs)
print(insar_jobs)

hyp3.watch() will return a refreshed batch once every job in the batch has completed.

Batches can be added together

In [ ]:
print(f'Number of Jobs:\n  RTC:{len(rtc_jobs)}\n  InSAR:{len(insar_jobs)}\n  autoRIFT:{len(autorift_jobs)}')
all_jobs = rtc_jobs + insar_jobs + autorift_jobs
print(f'Total number of Jobs: {len(all_jobs)}')

You can check the status of a batch (at last refresh) by printing the batch

In [ ]:
print(all_jobs)

and filter jobs by status

In [ ]:
succeeded_jobs = all_jobs.filter_jobs(succeeded=True, running=False, failed=False)
print(f'Number of succeeded jobs: {len(succeeded_jobs)}')
failed_jobs = all_jobs.filter_jobs(succeeded=False, running=False, failed=True)
print(f'Number of failed jobs: {len(failed_jobs)}')
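Beyond filtering, you can tally statuses yourself, assuming each job exposes a status_code attribute (SUCCEEDED, FAILED, RUNNING, PENDING) as the HyP3 API reports:

```python
from collections import Counter

def status_counts(jobs):
    # Tally how many jobs are in each status
    return Counter(job.status_code for job in jobs)

# print(status_counts(all_jobs))
```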

You can download the files for all successful jobs

In [ ]:
file_list = succeeded_jobs.download_files()

Note: only succeeded jobs will have files to download.
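HyP3 products are delivered as zip archives, so a common follow-up step is extracting them with the standard library (a sketch, assuming the downloaded paths are in file_list):

```python
from zipfile import ZipFile

def extract_products(zip_paths, out_dir='.'):
    # Extract each downloaded product archive into out_dir
    extracted = []
    for zip_path in zip_paths:
        with ZipFile(zip_path) as zf:
            zf.extractall(out_dir)
            extracted.extend(zf.namelist())
    return extracted

# extracted = extract_products(file_list)
```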