Before running this notebook for the first time, please read Using the HyP3 SDK to process new granules for given search parameters for a complete introduction to this tutorial.
You can run this notebook to submit On Demand InSAR jobs for all granules that match a particular set of search parameters (date range, area of interest, etc.). After you run the notebook, more granules may become available for your search parameters over the following days (because there is a delay between data being acquired and becoming available in the archive), or you may decide to modify your search parameters. In either case, you can simply run the notebook again to submit InSAR jobs for all granules that have not yet been processed.
This workflow is particularly useful for ongoing monitoring of a geographic area of interest, but it can be used whenever you want to augment your project with additional products without generating duplicates.
First, install dependencies:
!pip install 'asf-search>=6.6.2' hyp3-sdk
import asf_search
from hyp3_sdk import HyP3
Next, define your search parameters and job specification as shown below. The search parameters become keyword arguments to the asf_search.search function. See here for a full list of available keywords.
search_parameters = {
    "start": "2023-04-05T00:00:00Z",
    "end": "2023-04-10T00:00:00Z",
    "intersectsWith": "POLYGON((-110.7759 44.8543,-101.3998 44.8543,-101.3998 50.8183,-110.7759 50.8183,-110.7759 44.8543))",
    "platform": "S1",
    "processingLevel": "SLC",
}
job_specification = {
    "job_parameters": {},
    "job_type": "INSAR_GAMMA",
    "name": "Project Name",
}
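Each job submitted later in this notebook is a copy of this specification with a granule pair filled in. As a quick sketch of the final shape (the scene names here are placeholders; real Sentinel-1 SLC names come from the search below):

```python
from copy import deepcopy

job_specification = {
    "job_parameters": {},
    "job_type": "INSAR_GAMMA",
    "name": "Project Name",
}

# Hypothetical reference/neighbor scene pair for illustration only.
job = deepcopy(job_specification)
job["job_parameters"]["granules"] = ["REFERENCE_SCENE", "NEIGHBOR_SCENE"]
print(job)
```

Because the copy is deep, filling in one job's granules leaves the template's empty job_parameters dict untouched.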
Next, construct a list of unprocessed granules:
hyp3 = HyP3()
previous_jobs = hyp3.find_jobs(
    name=job_specification['name'],
    job_type=job_specification['job_type'],
)
processed_granules = [job.job_parameters['granules'][0] for job in previous_jobs]
print(f'Found {len(processed_granules)} previously processed granules')
search_results = asf_search.search(**search_parameters)
search_results.raise_if_incomplete()
unprocessed_granules = [
    result for result in search_results if result.properties['sceneName'] not in processed_granules
]
print(f'Found {len(unprocessed_granules)} unprocessed granules')
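The filter above is plain membership testing against the list of processed scene names; if you accumulate many previously processed granules, converting that list to a set makes each membership check constant-time. A minimal sketch with hypothetical scene names:

```python
# Hypothetical scene names standing in for real search results.
processed_granules = ["SCENE_A", "SCENE_B"]
all_scenes = ["SCENE_A", "SCENE_B", "SCENE_C", "SCENE_D"]

processed = set(processed_granules)  # O(1) membership tests
unprocessed = [scene for scene in all_scenes if scene not in processed]
print(unprocessed)  # ['SCENE_C', 'SCENE_D']
```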
Finally, build the temporal baseline stack for each unprocessed granule, select its nearest earlier neighbors, and submit a new InSAR job for each reference/neighbor pair. You can adjust the number of pairs generated per granule by changing the value of the depth parameter of the get_neighbors function.
Note that unprocessed granules are handled in batches. You can adjust the batch size by changing the value of the batch_size variable.
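The batching is ordinary list slicing; for example, 25 granules with a batch size of 10 yield batches of 10, 10, and 5:

```python
granules = list(range(25))  # stand-ins for unprocessed granules
batch_size = 10

# Same slicing pattern used in the submission loop below.
batches = [granules[i:i + batch_size] for i in range(0, len(granules), batch_size)]
print([len(batch) for batch in batches])  # [10, 10, 5]
```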
from copy import deepcopy
def get_neighbors(granule: asf_search.ASFProduct, platform: str, depth: int = 2) -> list[str]:
    stack = asf_search.baseline_search.stack_from_product(granule)
    stack.raise_if_incomplete()
    # Keep only scenes acquired before the reference scene, from the same platform
    stack = [
        item for item in stack
        if item.properties['temporalBaseline'] < 0 and item.properties['sceneName'].startswith(platform)
    ]
    # The stack is ordered by ascending temporal baseline, so the last
    # `depth` items are the nearest earlier acquisitions
    neighbors = [item.properties['sceneName'] for item in stack[-depth:]]
    return neighbors
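Since building a real baseline stack requires a network call, the selection logic can be illustrated offline with a mock stack. This sketch assumes, as above, that the stack is ordered by ascending temporal baseline; the scene names are hypothetical:

```python
# Mock entries mimicking item.properties in a baseline stack,
# ordered by ascending temporal baseline (days relative to the reference scene).
stack = [
    {'sceneName': 'S1A_OLD', 'temporalBaseline': -36},
    {'sceneName': 'S1B_NEAR', 'temporalBaseline': -24},
    {'sceneName': 'S1A_NEAREST', 'temporalBaseline': -12},
    {'sceneName': 'S1A_SELF', 'temporalBaseline': 0},
    {'sceneName': 'S1A_FUTURE', 'temporalBaseline': 12},
]

depth = 2
# Keep earlier scenes on the S1 platform, then take the `depth` nearest ones.
earlier = [item for item in stack
           if item['temporalBaseline'] < 0 and item['sceneName'].startswith('S1')]
neighbors = [item['sceneName'] for item in earlier[-depth:]]
print(neighbors)  # ['S1B_NEAR', 'S1A_NEAREST']
```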
def get_jobs_for_granule(granule: asf_search.ASFProduct) -> list[dict]:
    jobs = []
    neighbors = get_neighbors(granule, search_parameters['platform'])
    for neighbor in neighbors:
        # Deep-copy the specification so each job gets its own job_parameters dict
        job = deepcopy(job_specification)
        job['job_parameters']['granules'] = [granule.properties['sceneName'], neighbor]
        jobs.append(job)
    return jobs
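The deepcopy call matters here: a shallow copy would leave every job sharing the template's single job_parameters dict, so each assignment of 'granules' would overwrite the previous pair. A quick illustration:

```python
from copy import deepcopy

template = {"job_parameters": {}, "job_type": "INSAR_GAMMA", "name": "Project Name"}

deep = [deepcopy(template) for _ in range(2)]   # fully independent copies
shallow = [dict(template) for _ in range(2)]    # both share one job_parameters dict

shallow[0]["job_parameters"]["granules"] = ["A"]
deep[0]["job_parameters"]["granules"] = ["A"]

print(shallow[1]["job_parameters"])  # {'granules': ['A']} -- clobbered!
print(deep[1]["job_parameters"])     # {}
```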
batch_size = 10
for i in range(0, len(unprocessed_granules), batch_size):
    new_jobs = [
        job
        for granule in unprocessed_granules[i:i + batch_size]
        for job in get_jobs_for_granule(granule)
    ]
    print(f'Submitting {len(new_jobs)} jobs')
    hyp3.submit_prepared_jobs(new_jobs)
print('Done.')