Using the HyP3 SDK for Python


HyP3's Python SDK hyp3_sdk provides a convenience wrapper around the HyP3 API and HyP3 jobs.

The HyP3 SDK can be installed using Anaconda/Miniconda (recommended) via conda:

conda install -c conda-forge hyp3_sdk

Or using pip:

python -m pip install hyp3_sdk

Full documentation of the SDK can be found in the HyP3 documentation.

We also recommend installing the asf_search Python package for performing searches of the ASF catalog. It can be installed using Anaconda/Miniconda (recommended) via conda:

conda install -c conda-forge asf_search

Or using pip:

python -m pip install asf_search

Full documentation of asf_search can be found in the ASF search documentation.

In [ ]:
# initial setup
import asf_search as asf
import hyp3_sdk as sdk

Authenticating to the API

The SDK will attempt to pull your NASA Earthdata Login credentials out of ~/.netrc by default, or you can pass your credentials in directly

In [ ]:
# .netrc
hyp3 = sdk.HyP3()
In [ ]:
# or enter your credentials
hyp3 = sdk.HyP3(prompt=True)
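
For the ~/.netrc approach, the file needs an entry for NASA Earthdata Login. A minimal example (substitute your own Earthdata username and password):

```
machine urs.earthdata.nasa.gov
    login myusername
    password mypassword
```

On Linux/macOS, restrict the file's permissions with chmod 600 ~/.netrc so other users cannot read your credentials.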

Submitting jobs

The SDK provides a submit method for all supported job types.

Submitting Sentinel-1 RTC jobs

Sentinel-1 Radiometric Terrain Correction (RTC) jobs are submitted using ESA granule IDs. The example granules below can be viewed in ASF Search (Vertex).

In [ ]:
granules = [
    # Sentinel-1 ESA granule IDs go here
]

rtc_jobs = sdk.Batch()
for g in granules:
    rtc_jobs += hyp3.submit_rtc_job(g, name='rtc-example')

Here we've given each job the name rtc-example, which we can use later to search for these jobs.

HyP3.submit_rtc_job also accepts keyword arguments to customize the RTC products to your application.
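
For example, a hypothetical customized submission. The option names below are assumed from the SDK documentation, so confirm them with help(hyp3.submit_rtc_job) for your installed version:

```python
# Options assumed from the HyP3 SDK documentation (verify with
# help(hyp3.submit_rtc_job) for your installed SDK version).
rtc_options = {
    'radiometry': 'gamma0',   # backscatter radiometry: 'gamma0' or 'sigma0'
    'resolution': 30,         # output pixel spacing in meters
    'speckle_filter': True,   # apply a speckle filter to the output
    'include_dem': True,      # also deliver the DEM used for processing
}
# job = hyp3.submit_rtc_job(granule, name='rtc-example', **rtc_options)
```

Passing the options as a dict with ** keeps the submission loop readable when every job in a batch shares the same customization.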

Submitting Sentinel-1 InSAR jobs

The SDK can also submit Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) jobs, which process reference and secondary granule pairs.

For a particular reference granule, we may want to use the nearest and next-nearest temporal neighbor granules as secondary scenes. To programmatically find the secondary granules for a reference granule, we'll define a get_nearest_neighbors function that uses the baseline stack method from asf_search:

In [ ]:
from typing import Optional

def get_nearest_neighbors(granule: str, max_neighbors: Optional[int] = None) -> asf.ASFSearchResults:
    granule = asf.granule_search(granule)[-1]
    stack = reversed([item for item in granule.stack() if item.properties['temporalBaseline'] < 0])
    return asf.ASFSearchResults(stack)[:max_neighbors]
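
The selection logic above keeps only scenes acquired before the reference (negative temporal baseline), orders them nearest-first, and truncates to max_neighbors. A stdlib-only sketch of that logic on mock stack entries (the scene names and baseline values are hypothetical):

```python
# Mock stack entries, in the ascending temporal-baseline order asf_search returns;
# negative baselines are scenes acquired before the reference scene.
mock_stack = [
    {'sceneName': 'scene_minus_36', 'temporalBaseline': -36},
    {'sceneName': 'scene_minus_24', 'temporalBaseline': -24},
    {'sceneName': 'scene_minus_12', 'temporalBaseline': -12},
    {'sceneName': 'reference',      'temporalBaseline': 0},
    {'sceneName': 'scene_plus_12',  'temporalBaseline': 12},
]

max_neighbors = 2
# Keep pre-reference scenes, reverse so the nearest neighbor comes first,
# then truncate -- the same steps as get_nearest_neighbors above.
neighbors = list(reversed([item for item in mock_stack
                           if item['temporalBaseline'] < 0]))[:max_neighbors]
print([item['sceneName'] for item in neighbors])
# prints ['scene_minus_12', 'scene_minus_24']
```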

Now, using the example granule list for our RTC jobs as the reference scenes, we can find their nearest and next-nearest neighbor granules, and submit them as pairs for InSAR processing.

In [ ]:
from tqdm.auto import tqdm  # for a nice progress bar

insar_jobs = sdk.Batch()
for reference in tqdm(granules):
    neighbors = get_nearest_neighbors(reference, max_neighbors=2)
    for secondary in neighbors:
        insar_jobs += hyp3.submit_insar_job(reference, secondary.properties['sceneName'], name='insar-example')

Like RTC jobs, HyP3.submit_insar_job accepts keyword arguments to customize the InSAR products to your application.

Submitting autoRIFT jobs

AutoRIFT supports processing Sentinel-1, Sentinel-2, or Landsat-8 Collection 2 pairs.

In [ ]:
autorift_pairs = [
    # Sentinel-1 ESA granule ID pairs go here
    # Sentinel-2 ESA granule ID pairs go here
    # Sentinel-2 Element 84 Earth Search ID pair
    ('S2B_22WEB_20200612_0_L1C', 'S2A_22WEB_20200627_0_L1C'),
    # Landsat 8 Collection 2 scene ID pairs go here
]

autorift_jobs = sdk.Batch()
for reference, secondary in autorift_pairs:
    autorift_jobs += hyp3.submit_autorift_job(reference, secondary, name='autorift-example')

AutoRIFT does not currently accept any keyword arguments for product customization.

Monitoring jobs

Once jobs are submitted, you can either watch the jobs until they finish

In [ ]:
rtc_jobs = hyp3.watch(rtc_jobs)

which will require you to keep the cell/terminal running, or you can come back later and search for jobs

In [ ]:
rtc_jobs = hyp3.find_jobs(name='rtc-example')
rtc_jobs = hyp3.watch(rtc_jobs)

Batches are collections of jobs. They provide a snapshot of the job status when the batch was created or last refreshed. To get updated information on a batch

In [ ]:
insar_jobs = hyp3.refresh(insar_jobs)
hyp3.watch() will return a refreshed batch once every job in the batch has completed.

Batches can be added together

In [ ]:
print(f'Number of Jobs:\n  RTC:{len(rtc_jobs)}\n  InSAR:{len(insar_jobs)}\n  autoRIFT:{len(autorift_jobs)}')
all_jobs = rtc_jobs + insar_jobs + autorift_jobs
print(f'Total number of Jobs: {len(all_jobs)}')

You can check the status of a batch (at last refresh) by printing the batch

In [ ]:
print(all_jobs)
and filter jobs by status

In [ ]:
succeeded_jobs = all_jobs.filter_jobs(succeeded=True, running=False, failed=False)
print(f'Number of succeeded jobs: {len(succeeded_jobs)}')
failed_jobs = all_jobs.filter_jobs(succeeded=False, running=False, failed=True)
print(f'Number of failed jobs: {len(failed_jobs)}')

Downloading files

You can download the files for all successful jobs

In [ ]:
file_list = succeeded_jobs.download_files()

Note: only succeeded jobs will have files to download.