Resilience against hardware failures

Scenario: We have a cluster that partially consists of preemptible resources. That is, we'll have to deal with workers suddenly being shut down during computation. While demonstrated here with a LocalCluster, Dask's resilience against preempted resources is most useful with, e.g., Dask Kubernetes or Dask Jobqueue.

Relevant docs:

Increase resilience

Whenever a worker shuts down, the scheduler increments the suspiciousness counter of all tasks that were assigned to that worker (not only those actively being computed). Once the suspiciousness of a task exceeds a certain threshold (3 by default), the task is considered broken. Since we will compute many tasks on only a few workers, with workers shutting down randomly, we expect the suspiciousness of many tasks to grow rapidly. Let's increase the threshold:

In [ ]:
import dask

dask.config.set({'distributed.scheduler.allowed-failures': 100});
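To make the bookkeeping concrete, here is a small pure-Python sketch (not Dask code; all names are illustrative) of the suspicious-task counting described above: every worker death bumps the counter of each task that was assigned to it, and a task whose counter exceeds the threshold is marked broken.

```python
ALLOWED_FAILURES = 3  # Dask's default threshold

suspicious = {}  # task name -> number of times a hosting worker died

def record_worker_death(tasks_on_worker):
    """Bump the suspiciousness of every task assigned to the dead worker."""
    broken = []
    for task in tasks_on_worker:
        suspicious[task] = suspicious.get(task, 0) + 1
        if suspicious[task] > ALLOWED_FAILURES:
            broken.append(task)  # the scheduler would mark these as erred
    return broken

# Four worker deaths in a row push task 'a' past the threshold:
for _ in range(4):
    broken = record_worker_death(['a'])
print(broken)  # ['a']
```

With many tasks and few workers, each preemption touches a large fraction of all tasks, which is why we raised the threshold to 100 above.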

All other imports

In [ ]:
from dask.distributed import Client, LocalCluster
from dask import bag as db
import os
import random
from time import sleep

A cluster

In [ ]:
cluster = LocalCluster(threads_per_worker=1, n_workers=4, memory_limit=400e6)
client = Client(cluster)

A simple workload

We'll multiply a range of numbers by two, add some sleep to simulate some real work, and then reduce the whole sequence of doubled numbers by summing them.

In [ ]:
def multiply_by_two(x):
    sleep(0.02)  # simulate some real work
    return 2 * x
In [ ]:
N = 400

x = db.from_sequence(range(N), npartitions=N // 2)

mults = x.map(multiply_by_two)

summed = mults.sum()
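As a sanity check, we can compute the expected result locally: doubling each element of `range(N)` and summing gives 2 · N(N−1)/2 = N(N−1).

```python
N = 400
expected = sum(2 * x for x in range(N))
assert expected == N * (N - 1)  # 159600
```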

Suddenly shutting down workers

Let's mark two worker process IDs as non-preemptible.

In [ ]:
all_current_workers = [w.pid for w in cluster.scheduler.workers.values()]
non_preemptible_workers = all_current_workers[:2]
In [ ]:
def kill_a_worker():
    preemptible_workers = [w.pid for w in cluster.scheduler.workers.values()
                           if w.pid not in non_preemptible_workers]
    if preemptible_workers:
        os.kill(random.choice(preemptible_workers), 15)  # 15 = SIGTERM
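The filtering in `kill_a_worker` can be illustrated with plain integers standing in for real worker PIDs (the values below are hypothetical):

```python
# Hypothetical PIDs in place of real worker processes.
all_pids = [1001, 1002, 1003, 1004]
non_preemptible = all_pids[:2]  # keep the first two workers alive
preemptible = [pid for pid in all_pids if pid not in non_preemptible]
assert preemptible == [1003, 1004]
```

Only workers in the preemptible set are ever candidates for the simulated shutdown, so the cluster always retains at least two workers to make progress.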

Start the computation and keep shutting down workers while it's running

In [ ]:
summed = client.compute(summed)

while not summed.done():
    kill_a_worker()  # keep preempting workers until the computation finishes
    sleep(3.0)

Check if results match

In [ ]:
print(f"`sum(2 * x for x in range({N}))` on cluster: {summed.result()}\t(should be {N * (N - 1)})")