Chapter 7 of Real World Algorithms.

Panos Louridas

Athens University of Economics and Business

Dijkstra's Algorithm finds the shortest path from a given node to other nodes in the graph.

It works by using a priority queue where it stores path distances.

Initially all distances are set to $\infty$ apart from the distance to the start, which is 0.

Then, as long as there are elements in the queue, the algorithm removes the minimum item from the queue, i.e., the node with the minimum distance, and relaxes the distances to its neighbors.
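Before we build the full implementation, the relaxation step on its own can be sketched as follows; the distances and the edge below are hypothetical, chosen just for illustration:

```python
# Hypothetical distances for three nodes; node 2 has not been reached yet.
INF = float('inf')
dist = [0, 7, INF]

# Relax the edge (u, v) with weight w: if going through u
# gives a shorter path to v, update v's distance.
u, v, w = 0, 1, 4
if dist[u] + w < dist[v]:
    dist[v] = dist[u] + w

print(dist)  # the distance to node 1 drops from 7 to 4
```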

As an example, we will use a rectangular grid graph, which could represent a traffic grid.

Such a graph is represented by the `traffic_grid_graph.txt` file and looks as follows.

The numbers correspond to weights. The nodes are the black bullet points, and we assume they are numbered from 0, starting from the top left and proceeding left-to-right, top-to-bottom.

The graph is undirected, so we'll use the following function to read it.

The function reads each line, splits it into three parts, and interprets the first two as nodes and the third as the weight of the edge between them.

In [1]:

```
def read_graph(filename, directed=False):
    graph = {}
    with open(filename) as input_file:
        for line in input_file:
            parts = line.split()
            if len(parts) != 3:
                continue  # not a valid line, ignore
            [n1, n2, w] = [int(x) for x in parts]
            if n1 not in graph:
                graph[n1] = []
            if n2 not in graph:
                graph[n2] = []
            graph[n1].append((n2, w))
            if not directed:
                graph[n2].append((n1, w))
    return graph
```

Now we can read the grid graph.

In [2]:

```
import pprint
g = read_graph("traffic_grid_graph.txt")
pprint.pprint(g)
```

As we mentioned, Dijkstra's algorithm requires a priority queue.

We can use a very simple implementation: just a list.

In that case, we need a way to find the minimum element in the queue, that is, the list.

To find the position of the minimum in a list we could use the following function.

We use `sys.maxsize` as a substitute for $\infty$.

In Python there is no predefined maximum integer, as integers can be arbitrarily large; `sys.maxsize` gives the maximum 32-bit or 64-bit integer, depending on the machine.

We assume that the distances we are dealing with are less than that; if not, we can simply use a bigger value standing for $\infty$.
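Another option, which we won't use here, is the floating-point value `float('inf')`, which compares greater than any integer, however large; a quick sketch:

```python
import sys

INF = float('inf')
# float('inf') compares greater than any integer, however large.
print(INF > sys.maxsize)
# It behaves as expected with min().
print(min([3, INF, 7, INF]))
```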

In [3]:

```
import sys

MAX_INT = sys.maxsize

def find_min(a):
    min_index = -1
    min_val = MAX_INT
    for (i, v) in enumerate(a):
        if v < min_val:
            min_index = i
            min_val = v
    return min_index
```

How fast is `find_min(a)`?

In the worst case it will go down the whole list, so for a list of $n$ elements, it is $O(n)$.

Let's see how it performs in a random list of 10,000 elements.

We'll create 1000 random lists and find the minimum in each of them, averaging the time required.

In [4]:

```
import random
import timeit

total_elapsed = 0
for i in range(1000):
    a = list(range(10000))
    # Put the elements of the list in random order.
    random.shuffle(a)
    start = timeit.default_timer()
    min_index = find_min(a)
    end = timeit.default_timer()
    total_elapsed += end - start
print(total_elapsed / 1000, "seconds")
```

0.0004783692079999927 seconds

Alternatively, we could use the following one-liner to find the index of the minimum element of a list:

```
a.index(min(a))
```

This requires two passes over the list: one to find the minimum element and a second to find its index.

The total time is still $O(n + n) = O(2n) = O(n)$.

How does it compare to `find_min(a)` in practice?

In [5]:

```
total_elapsed = 0
for i in range(1000):
    a = list(range(10000))
    random.shuffle(a)
    start = timeit.default_timer()
    min_index = a.index(min(a))
    end = timeit.default_timer()
    total_elapsed += end - start
print(total_elapsed / 1000, "seconds")
```

0.00021569067499999763 seconds

We see that it takes less than half the time!

That is because the functions `min()` and `list.index()` are implemented much more efficiently than our handcrafted `find_min(a)`.

In fact, they are implemented in C and called from Python, which explains the big speed difference.

As a general lesson: be sure to check whether an optimized implementation for what you want to do already exists.
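As an illustration of this lesson, Python's standard library already provides an efficient priority queue in the `heapq` module. The following is only a sketch of how Dijkstra's algorithm could be written on top of it, run on a small made-up graph rather than our traffic grid:

```python
import heapq

def dijkstra_heap(g, s):
    # Dijkstra's algorithm with heapq and "lazy deletion":
    # on each improvement we push a new entry, and we skip
    # stale entries when they are popped.
    dist = {node: float('inf') for node in g}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry, already settled with a shorter path
        for (v, w) in g[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# A small illustrative graph in the same adjacency-list form.
g_small = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra_heap(g_small, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```

Each pop and push costs $O(\lg n)$ instead of the $O(n)$ scan of our list-based queue.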

We can now implement Dijkstra's algorithm using this priority queue implementation.

In [6]:

```
def dijkstra(g, s):
    nodes = g.keys()
    num_nodes = len(nodes)
    # Initialize array holding path lengths.
    dist = [MAX_INT] * num_nodes
    dist[s] = 0
    # Initialize array holding node predecessors.
    pred = [-1] * num_nodes
    # Initialize the priority queue; initially it
    # is just the same as the distance array.
    pq = dist[:]
    elements_in_queue = num_nodes
    while elements_in_queue != 0:
        u = pq.index(min(pq))
        pq[u] = MAX_INT
        elements_in_queue -= 1
        for (v, w) in g[u]:
            # If a better path is found,
            # relax the distance and update
            # the priority queue.
            if dist[u] != MAX_INT and dist[v] > dist[u] + w:
                dist[v] = dist[u] + w
                pred[v] = u
                pq[v] = dist[v]
    return (pred, dist)
```

Using it on the traffic grid graph, we get:

In [7]:

```
pred, dist = dijkstra(g, 0)
print('pred', pred)
print('dist', dist)
```

So we see that the path to node 15 (bottom right) has length 18.

The previous node is node 11, the node before 11 is 10, and so on.

We can roll our own function to show us the complete path and save us the trouble of tracing it manually.

In [8]:

```
def get_path(pred, t):
    path = []
    while t != -1:
        path.append(t)
        t = pred[t]
    # Path is constructed end to start,
    # so we need to reverse it.
    path.reverse()
    return path

print('path to 15 =', get_path(pred, 15), 'length =', dist[15])
print('path to 12 =', get_path(pred, 12), 'length =', dist[12])
```

Note that we are constructing the path by adding to the end of the list, so we need to reverse the list at the end to get it in the right order.

That is because inserting at the start of a list in Python with `insert(0, v)` incurs $O(n)$ memory movement costs.

We can see the relative costs for ourselves by creating a list with $10^5$ elements; note that creating a list of $10^6$ elements with `insert(0, v)` may take too long, as the situation deteriorates badly.

In [9]:

```
total_elapsed = 0
a = []
start = timeit.default_timer()
for i in range(int(1e5)):
    a.insert(0, i)
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed, "seconds")
```

1.9321988779999993 seconds

In [10]:

```
total_elapsed = 0
a = []
start = timeit.default_timer()
for i in range(int(1e5)):
    a.append(i)
a = a[::-1]
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed, "seconds")
```

0.015370600999998986 seconds
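If fast insertion at the front were really needed, the standard library also provides `collections.deque`, whose `appendleft()` runs in $O(1)$; this sketch is just for comparison, as our path construction does not need it:

```python
from collections import deque

d = deque()
for i in range(5):
    d.appendleft(i)  # O(1) insertion at the left end
print(list(d))  # [4, 3, 2, 1, 0]
```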

Dijkstra's algorithm is fast and easy to implement.

However, it does not produce the right results on graphs that contain negative weights.

Consider for example the following graph, stored in `negative_weights_graph.txt`:

If we read it and run Dijkstra's algorithm on it, we'll get:

In [11]:

```
g = read_graph('negative_weights_graph.txt', directed=True)
pred, dist = dijkstra(g, 0)
print('pred', pred)
print('dist', dist)
print(get_path(pred, 3))
```

pred [-1, 0, 1, 0]
dist [0, 5, 1, 3]
[0, 3]

We see that it fails to find the correct path from 0 to 3, which goes through nodes 1 and 2, and reports just the direct path from node 0 instead.

Reweighting the graph will not solve the problem.

For example, if we add 4 to all weights on the previous graph we get:

But we still get wrong results:

In [12]:

```
g = read_graph('re_weighted_graph.txt', directed=True)
pred, dist = dijkstra(g, 0)
print('pred', pred)
print('dist', dist)
print(get_path(pred, 3))
```

pred [-1, 0, 0, 0]
dist [0, 9, 8, 7]
[0, 3]
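To see why reweighting can change which path is shortest, here is a small numeric sketch with made-up weights: adding a constant penalizes a path in proportion to its number of edges.

```python
# Hypothetical weights of two alternative paths between the same nodes.
p1 = [1, 1, 1]   # three edges, total 3: the shorter path
p2 = [4]         # one edge, total 4

shift = 4  # add 4 to every weight, as above
shifted1 = sum(w + shift for w in p1)  # 3 + 3 * 4 = 15
shifted2 = sum(w + shift for w in p2)  # 4 + 1 * 4 = 8

# The order of the two paths has flipped.
print(sum(p1) < sum(p2), shifted1 < shifted2)  # True False
```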

Dijkstra's algorithm finds the shortest paths from one node to all others.

If we run it from every node of a graph, it will find the shortest path between all pairs of nodes.

This is the *all pairs shortest paths* problem.

To implement the all pairs shortest paths problem, we'll call Dijkstra's algorithm once for each node, storing the results of each call.

To hold all the results together, we'll use two lists, `preds` and `dists`, each of which will contain all the `pred` and `dist` lists from the calls.

The `preds` and `dists` lists will start out empty, and we will add to them each `pred` and `dist` list as it comes out of Dijkstra's algorithm.

In [13]:

```
def all_pairs_shortest_paths(g):
    preds = []
    dists = []
    # g.keys() does not guarantee a specific
    # order for the keys, so we call sorted().
    for s in sorted(g.keys()):
        pred, dist = dijkstra(g, s)
        preds.append(pred)
        dists.append(dist)
    return (preds, dists)
```

Let's apply it to the traffic grid graph.

In [14]:

```
g = read_graph('traffic_grid_graph.txt')
preds, dists = all_pairs_shortest_paths(g)
for s in sorted(g.keys()):
    print('starting node:', s)
    print(preds[s])
    print(dists[s])
```

With `all_pairs_shortest_paths(g)` we can find the *diameter* of the graph: the longest shortest path.

In other words, the diameter of the graph is the longest distance between two nodes, if we walk along shortest paths.

To find the diameter of a graph, we only need to search for the maximum distance in the `dists` lists.

The following function does that, while also keeping track of the start and end node of the diameter path.

In [15]:

```
def graph_diameter(g, dists):
    diameter = 0  # minimum diameter
    max_s = -1    # start node of diameter path
    max_t = -1    # end node of diameter path
    # Do for all nodes in the graph.
    for s in sorted(g.keys()):
        # Get the corresponding dist list.
        dist = dists[s]
        # Get the index of the maximum distance;
        # that is also the node with the maximum
        # distance, as nodes start from zero.
        max_index = dist.index(max(dist))
        # Get the maximum distance.
        max_distance = dist[max_index]
        # Update if necessary.
        if max_distance > diameter:
            diameter = max_distance
            max_s = s
            max_t = max_index
    return (max_s, max_t, diameter)

s, t, diameter = graph_diameter(g, dists)
print('start:', s, 'end:', t, 'diameter:', diameter)
```

start: 3 end: 8 diameter: 20

Note that in this graph there is more than one path with length 20; `graph_diameter(g, dists)` just returns one of them.

Also note that the graph is undirected, so the length of the path from 3 to 8 is the same as the length of the path from 8 to 3.