Parsl: Advanced Features

In this tutorial we present advanced features of Parsl including its ability to support multiple sites, elastically scale across sites, and its support for fault tolerance.

In [ ]:
import parsl
from parsl import *
#parsl.set_stream_logger() # <-- log everything to stdout
print(parsl.__version__)

1) Multiple Sites

In the "parsl-introduction" notebook we showed how a configuration file controls the execution provider and model used to execute a Parsl script. While we showed only a single site, Parsl is capable of distributing workload over several sites simultaneously. Below we show an example configuration that combines local thread execution and local pilot job execution. By default, Apps will execute on any configured sites. However, you can also specify a specific site, or sites, on which an App can execute by adding a list of sites to the App decorator. In the following cells, we show a three-stage workflow in which the first app uses local threads, the second uses local pilot jobs, and the third (with no sites specified) will use either threads or pilot jobs.

First, we define two "sites", which in this example are both local. The first uses threads, and the second uses pilot job execution. We then instantiate a DataFlowKernel object with these two sites.

In [ ]:
# Define a configuration for using local threads and pilot jobs
multi_site_config = {
    "sites" : [
        { "site" : "Local_Threads",
          "auth" : { "channel" : None },
          "execution" : {
              "executor" : "threads",
              "provider" : None,
              "maxThreads" : 4
          }
        }, {
        "site" : "Local_IPP",
        "auth" : {
            "channel" : "local"
        },
        "execution" : {
            "executor" : "ipp",
            "provider" : "local",
            "script_dir" : ".scripts",
            "scriptDir" : ".scripts",
            "block" : {
                "nodes" : 1,
                "taskBlocks" : 1,
                "walltime" : "00:05:00",
                "initBlocks" : 1,
                "minBlocks" : 0,
                "maxBlocks" : 1,
                "scriptDir" : "."
            }
        }
    }],
    "globals" : {"lazyErrors" : True}
}

dfk = DataFlowKernel(config=multi_site_config)

Next, we define three Apps with the same functionality as in the previous tutorial. However, the first is specified to use the first site only, the second is specified to use the second site only, and the third has no site specification, so it can run on any available site.

In [ ]:
# Generate app runs on the "Local_Threads" site
@App('bash', dfk, sites=["Local_Threads"])
def generate(outputs=[]):
    return "echo $(( RANDOM )) &> {outputs[0]}"

# Concat app runs on the "Local_IPP" site
@App('bash', dfk, sites=["Local_IPP"])
def concat(inputs=[], outputs=[], stdout="stdout.txt", stderr='stderr.txt'):
    return "cat {0} > {1}".format(" ".join(inputs), outputs[0])

# Total app runs on either site
@App('python', dfk)
def total(inputs=[]):
    total = 0
    with open(inputs[0], 'r') as f:
        for l in f:
            total += int(l)
    return total

Finally, we run the apps, and cleanup.

In [ ]:
# Create 5 files with random numbers
output_files = []
for i in range(5):
    output_files.append(generate(outputs=['random-%s.txt' % i]))

# Concatenate the files into a single file
cc = concat(inputs=[i.outputs[0] for i in output_files], outputs=["all.txt"])

# Calculate the sum of the random numbers
# (use a distinct variable name so the total app is not shadowed)
total_sum = total(inputs=[cc.outputs[0]])

print(total_sum.result())

dfk.cleanup()

2) Elasticity

As a Parsl script is evaluated, it creates a collection of tasks for asynchronous execution. In most cases this stream of tasks is variable as different stages of the workflow are evaluated. To address this variability, Parsl is able to monitor the flow of tasks and elastically provision resources, within user specified bounds, in response.

In the following example, we allow the number of provisioned blocks to range from 0 to 10 (minBlocks and maxBlocks, respectively). We then set parallelism to 0.1, which means that Parsl will favor reusing existing resources over provisioning new ones. You should see that every app invocation runs in the same process, since the printed process ID is identical for each result.
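
As a rough sketch of the scaling decision (illustrative only: the function and parameter names below are not Parsl's, and the real strategy is internal to the DataFlowKernel), the parallelism factor scales the number of outstanding tasks into a block count, clamped to the configured bounds:

```python
import math

def blocks_needed(outstanding_tasks, parallelism, task_blocks, nodes,
                  min_blocks, max_blocks):
    # One block offers task_blocks * nodes task slots; parallelism
    # scales how many outstanding tasks it takes to justify a slot.
    slots_per_block = task_blocks * nodes
    wanted = math.ceil(outstanding_tasks * parallelism / slots_per_block)
    return max(min_blocks, min(max_blocks, wanted))

# parallelism=0.1: ten queued tasks justify only a single block,
# so all tasks share one engine and run in the same process.
print(blocks_needed(10, 0.1, task_blocks=1, nodes=1,
                    min_blocks=0, max_blocks=10))  # 1

# parallelism=1: the same ten tasks request ten blocks.
print(blocks_needed(10, 1.0, task_blocks=1, nodes=1,
                    min_blocks=0, max_blocks=10))  # 10
```

This is why, with parallelism at 0.1 below, all ten tasks print the same process ID.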

In [ ]:
local_ipp = {
    "sites": [
        {"site": "Local_IPP",
         "auth": {
             "channel": None,
         },
         "execution": {
             "executor": "ipp",
             "provider": "local",
             "block": {
                  "nodes": 1,
                   "taskBlocks": 1,
                   "minBlocks": 0,
                   "initBlocks": 0,
                   "maxBlocks": 10,
                   "parallelism": 0.1,
                   "walltime": "00:20:00"
             }
         }
         }]
}

dfk = DataFlowKernel(config=local_ipp.copy())

@App("python", dfk)
def python_app():
    import time     
    import os
    time.sleep(5)
    return "(%s) Hello World!" %  os.getpid()

results = {}
for i in range(0, 10):
    results[i] = python_app()

print("Waiting for results ....")
for i in range(0, 10):
    print(results[i].result())

dfk.cleanup()

We now set parallelism to 1. With this configuration, Parsl favors elastic growth, executing as many tasks simultaneously as possible up to the user-defined limit. You can vary parallelism between 0 and 1 to experiment with different scaling policies.

In [ ]:
local_ipp = {
    "sites": [
        {"site": "Local_IPP",
         "auth": {
             "channel": None,
         },
         "execution": {
             "executor": "ipp",
             "provider": "local",
             "block": {
                  "nodes": 1,
                   "taskBlocks": 1,
                   "minBlocks": 0,
                   "initBlocks": 0,
                   "maxBlocks": 10,
                   "parallelism": 1,
                   "walltime": "00:20:00"
             }
         }
         }]
}

dfk = DataFlowKernel(config=local_ipp.copy())

@App("python", dfk)
def python_app():
    import time     
    import os
    time.sleep(5)
    return "(%s) Hello World!" %  os.getpid()

results = {}
for i in range(0, 10):
    results[i] = python_app()

print("Waiting for results ....")
for i in range(0, 10):
    print(results[i].result())

dfk.cleanup()

3) Fault tolerance and caching

Workflows are often re-executed for various reasons, including workflow or node failure, code errors, or extension of the workflow. It is inefficient to re-execute apps that have successfully completed. Parsl provides two mechanisms to avoid this waste: app caching and workflow-level checkpointing.

App Caching

When developing a workflow, developers often re-execute the same workflow with incremental changes. Large fragments of the workflow are frequently re-executed even though they have not been modified, wasting both time and computational resources. App caching solves this problem by storing the results of completed apps so that they can be reused. Caching is enabled by passing cache=True to the App decorator. Note: a cached result is returned only when the same function, with the same name, input arguments, and function body, is called again. If any of these change, a new result is computed and returned.
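
A minimal sketch of how such a cache key behaves (this is not Parsl's implementation; the key construction here is illustrative, using the function's compiled bytecode as a stand-in for its body):

```python
import hashlib

def app_cache_key(func, args, kwargs):
    # The key covers the function name, body (via its bytecode), and
    # arguments; changing any of them yields a new key, i.e. a cache miss.
    payload = "|".join([
        func.__name__,
        func.__code__.co_code.hex(),
        repr(args),
        repr(sorted(kwargs.items())),
    ])
    return hashlib.md5(payload.encode()).hexdigest()

def double(x):
    return x * 2

same = app_cache_key(double, (2,), {}) == app_cache_key(double, (2,), {})
diff = app_cache_key(double, (2,), {}) == app_cache_key(double, (3,), {})
print(same, diff)  # True False
```

The same idea explains the third call in the example below: changing the argument from "Hello World" to "Hello World!" produces a new key, so the app runs again.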

The following example shows two calls to the slow_message app with the same message. You will see that the first call is slow (since the app sleeps for 5 seconds), but the second call returns immediately (the app is not actually executed this time, so there is no sleep delay).

Note: running this example in Jupyter notebooks will cache the results through subsequent executions of the cell.

In [ ]:
local_threads = {
    "sites" : [
        { "site" : "Local_Threads",
          "auth" : { "channel" : None },
          "execution" : {
              "executor" : "threads",
              "provider" : None,
              "maxThreads" : 4
          }
        }
    ]
}

dfk = DataFlowKernel(config=local_threads)

@App('python', dfk, cache=True)
def slow_message(message):
    import time     
    time.sleep(5)
    return message

# First call to slow_message will calculate the value
first = slow_message("Hello World")
print ("First: %s" % first.result())

# Second call to slow_message with the same args will
# return immediately
second = slow_message("Hello World")
print ("Second: %s" % second.result())

# Third call to slow_message with different arguments
# will again wait
third = slow_message("Hello World!")
print ("Third: %s" % third.result())

dfk.cleanup()

Checkpointing

Parsl's checkpointing model enables workflow state to be saved and then used at a later time to resume execution from that point. Checkpointing provides workflow-level fault tolerance, insuring against failure of the Parsl control process.

Parsl implements an incremental checkpointing model: each explicit checkpoint will save state changes from the previous checkpoint. Thus, the full history of a workflow may be distributed across multiple checkpoints.

Checkpointing uses App caching to store results. Thus, the same caveats apply to non-deterministic functions. That is, the checkpoint saves results for an instance of an App when it has the same name, arguments, and function body.
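
To make "incremental" concrete, here is an illustrative sketch (not Parsl's on-disk checkpoint format, which is internal to the library) of how a sequence of checkpoints composes into full workflow state:

```python
def merge_checkpoints(checkpoints):
    # Each checkpoint records only the results completed since the
    # previous one; replaying them oldest-first rebuilds full state.
    state = {}
    for ckpt in checkpoints:
        state.update(ckpt)
    return state

# Hypothetical keys: (app name, arguments) -> cached result.
run1 = {("slow_double", (0,)): 0, ("slow_double", (1,)): 2}
run2 = {("slow_double", (2,)): 4}
print(merge_checkpoints([run1, run2]))
```

Because later checkpoints overwrite earlier entries for the same key, restoring from the full sequence behaves as if a single complete checkpoint had been saved.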

In this example we demonstrate how to checkpoint workflows automatically as tasks successfully execute. This is enabled in the config by setting checkpointMode to task_exit. Other checkpointing models are described in the checkpointing documentation.

In [ ]:
local_threads = {
    "sites" : [
        { "site" : "Local_Threads",
          "auth" : { "channel" : None },
          "execution" : {
              "executor" : "threads",
              "provider" : None,
              "maxThreads" : 4
          }
        }
    ], 
    "globals": {"lazyErrors": True,
                "memoize": True,
                "checkpointMode": "task_exit",
    }
}

dfk = DataFlowKernel(config=local_threads)

@App('python', dfk, cache=True)
def slow_double(x):
    import time
    time.sleep(5)
    return x * 2

d = []
for i in range(5):
    d.append(slow_double(i))

# wait for results
print([d[i].result() for i in range(5)])

dfk.cleanup()

To restart from a previous checkpoint, the DFK must be configured with the appropriate checkpoint file. In most cases this is likely to be the most recent checkpoint file created. The following approach works with any checkpoint file, irrespective of which checkpointing method was used to create it.

In this example we reload the most recent checkpoint and run the same workflow again. The results return immediately, as there is no need to re-execute each app.

In [ ]:
import os
last_runid = sorted(os.listdir('runinfo/'))[-1]
last_checkpoint = os.path.abspath('runinfo/{0}/checkpoint'.format(last_runid))

print("Restarting from checkpoint: %s" % last_checkpoint) 
dfk = DataFlowKernel(config=local_threads,
                     checkpointFiles=[last_checkpoint])

# Rerun the same workflow
d = []
for i in range(5):
    d.append(slow_double(i))

# wait for results
print([d[i].result() for i in range(5)])

dfk.cleanup()