
# Algorithms in Cryptography

Building mathematical knowledge up to where RSA (named for inventors Ron Rivest, Adi Shamir and Leonard Adleman) makes sense gives you many useful concepts and insights along the way.

Not that RSA is by any means the only cryptographic algorithm we care about. On the contrary, RSA is relatively new, and we may build our concepts and insights by exploring the field of cryptography more generally.

We should start with one of the oldest algorithms in recorded history, Euclid's Algorithm (EA). Euclid himself probably got it from an earlier source. It has applications far beyond cryptography.

Python will help us by expressing this and other algorithms succinctly. We'll be able to test them interactively.

However, before we get to Euclid's Algorithm (EA), and then Euclid's Extended Algorithm (EEA) we should review the basic concepts of prime versus composite positive integers.

### Primes vs Composites

Primes have no divisors other than themselves and one: dividing a prime by any other integer greater than 1 always leaves some remainder.

Composites are products of prime factors and comprise the rest of the positive integers.

All positive integers except 1 are either composite or prime. Mathematician J. H. Conway suggested we make -1 a prime, as then negative integers could also be reduced to prime factors.

Discovering whether a giant integer is composite or prime is not always easy: a huge composite with only two large prime factors may behave much like a prime, and finding those factors can be extremely hard. Absent any factorization, the number might be prime after all, right?
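To make the contrast concrete, here is a minimal trial-division primality test (a sketch only; this brute-force approach is hopeless at cryptographic scales, which is exactly the point):

```python
from math import isqrt

def is_prime_trial(n: int) -> bool:
    """Trial division: check every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:       # found a factor, so n is composite
            return False
    return True

print(is_prime_trial(97))    # True: a small prime
print(is_prime_trial(91))    # False: 91 == 7 * 13
```

For small numbers this is instantaneous; for a composite of two large primes, the search space becomes astronomically large.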

In fact, RSA depends on factoring large composites being next to impossible, if they're large enough.

### Modulo Arithmetic in Python

When we talk about factoring numbers, that gets us thinking about remainders, and whether there is one. We say m divides n "evenly" not because m goes into n an even number of times (which may be true too) but because there's no remainder: m divides n with nothing left over. That makes m a factor of n.

In [1]:
12 % 3  # no remainder, 3 divides 12 evenly

Out[1]:
0
In [2]:
12 % 7  # yes remainder, 7 is not a factor of 12

Out[2]:
5

The divmod function (built in, no need to import it from anywhere) returns a tuple with two pieces of information: how many times the second argument goes into the first, and the remainder after so doing.

In [3]:
divmod(100, 12)

Out[3]:
(8, 4)

12 goes into 100 eight times, leaving a remainder of 4.

By definition, the 2nd argument $m$ times the first output $q$, plus the 2nd output $r$, should equal the 1st argument $n$ to divmod(n, m).

Think about it for a while. We're looking at lots of moving parts.

$q$ stands for quotient and $r$ for remainder. divmod(n, m) returns (n//m, n%m).

In [4]:
def _divmod(n, m):
    return (n//m, n%m)

_divmod(28398, 747)

Out[4]:
(38, 12)
In [5]:
def always_true(n, m):
    q, r = divmod(n, m)
    print((q, r))
    return n == q * m + r  # should always be True

In [6]:
always_true(28398, 747)  # try a bunch of examples

(38, 12)

Out[6]:
True

### Prime Numbers

In [7]:
import primes  # a package (has __init__.py)

In [8]:
dir(primes)    # not everything is exposed

Out[8]:
['PrimeNumbers',
'__builtins__',
'__cached__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__',
'__spec__',
'all_factors',
'eratosthenes',
'euler_test',
'factors',
'invmod',
'isprime',
'primes_gen',
'primesplay',
'xgcd']
In [9]:
primes.factors(100)

Out[9]:
(1, 2, 2, 5, 5)
In [10]:
primes.all_factors(100)

Out[10]:
[1, 2, 4, 5, 10, 20, 25, 50, 100]
In [11]:
p = primes.primes_gen.PrimeNumbers()
[next(p) for _ in range(20)]

Out[11]:
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

Source Code for the PrimeNumbers iterator.

### Coprimes (Strangers)

Coprimes are not necessarily prime themselves. Two numbers are coprime if they have no factors in common. We also call them "strangers" in that case.

For example 10 and 7 are coprime, but not 27 and 15. Those two (27 and 15) have the common factor 3.

So that means: if the greatest common divisor (GCD) of two positive integers is greater than 1, then these two integers are not coprime; they have some factor in common. If the greatest number that divides evenly into both of them is 1, then they're "strangers" to one another.

Clearly it would be useful to have a sure-fire way to obtain the GCD of any two positive integers.
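Peeking ahead, Python's standard library already ships such a function, math.gcd. A quick check of the examples above:

```python
from math import gcd

print(gcd(10, 7))    # 1, so 10 and 7 are coprime (strangers)
print(gcd(27, 15))   # 3, a shared factor, so not coprime
```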

### Greatest Common Divisor (GCD)

One way we're taught to find the GCD in school is to get the prime factors of m and n and see which factors they have in common.

The product of all in-common factors, counting a shared prime as many times as it appears in both, gives the GCD.

In [12]:
from functools import reduce
from operator import mul

factors = primes.factors

def gcd(a, b):
    p_b = list(factors(b))
    common_ab = []
    for p_a in factors(a):
        if p_a in p_b:
            common_ab.append(p_a)
            p_b.remove(p_a)
    return reduce(mul, common_ab)  # product of all primes in common


One issue with this method is that prime factorization becomes difficult for larger numbers.

Euclid's Algorithm, introduced below, does not suffer from this deficiency.

In [13]:
def euclid(a, b):
    while a % b:         # when remainder is 0, b is gcd
        b, a = a % b, b  # chopping down to the gcd
    return b

In [14]:
euclid(5, 12)

Out[14]:
1
In [15]:
euclid(27, 15)

Out[15]:
3
In [16]:
euclid(10, 7)

Out[16]:
1

So 10 and 7 are strangers.

In [17]:
def strangers(a: int, b: int) -> bool:
    return euclid(a, b) == 1

In [18]:
strangers(10, 7)

Out[18]:
True

### Totatives

Now that we have a working GCD function (or just import it from math), let's define the "totatives" of a number n to be all coprimes less than n, including 1 itself. Remember, coprimes to n are not necessarily prime themselves; they simply share no factors with n.

A quick way to compute totatives is with a list comprehension.

In [19]:
def totatives(n):
    return [m for m in range(1, n) if strangers(m, n)]

In [20]:
totatives(12)

Out[20]:
[1, 5, 7, 11]
In [21]:
print(totatives(29))

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]


If every positive integer less than n is coprime to n, then n itself is a prime number.
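That observation doubles as a (very slow) primality test. A self-contained sketch, using math.gcd in place of the euclid function developed above (the name is_prime_by_totatives is our own):

```python
from math import gcd

def totatives(n):
    # all positive integers below n that are coprime to n
    return [m for m in range(1, n) if gcd(m, n) == 1]

def is_prime_by_totatives(n):
    # n is prime exactly when every smaller positive integer is coprime to it
    return n > 1 and len(totatives(n)) == n - 1

print(is_prime_by_totatives(29))   # True
print(is_prime_by_totatives(12))   # False
```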

### Totient

The totient of n is the number of totatives of n. This concept will come in handy when we look at Euler's Theorem, a generalization of Fermat's Little Theorem.

Since we have a function for totatives already, counting them will give us the info.

In [22]:
def totient(n):
    return len(totatives(n))

In [23]:
totient(12)

Out[23]:
4
In [24]:
totient(100)

Out[24]:
40

Euler sometimes used the lowercase Greek letter $\phi$ for totient. We might provide that as a synonym:

In [25]:
𝜙 = totient
𝜙(12)

Out[25]:
4

The totient of a prime p is always p - 1, because every number less than p, down to 1 inclusive, is coprime to p.

In [26]:
𝜙(17)

Out[26]:
16
In [27]:
𝜙(593)

Out[27]:
592
In [28]:
N = 593 * 17
N

Out[28]:
10081
In [29]:
𝜙(N)

Out[29]:
9472
In [30]:
𝜙(593) * 𝜙(17)

Out[30]:
9472
In [31]:
def check(a, b):
    """
    𝜙 is multiplicative when a, b are coprime
    """
    if not strangers(a, b):
        print("Not coprime")
        return
    return 𝜙(a) * 𝜙(b) == 𝜙(a * b)

In [32]:
check(100, 12)

Not coprime

In [33]:
def check2(a, b):
    """
    for any a, b, a more general identity
    """
    return 𝜙(a) * 𝜙(b) * gcd(a, b) == 𝜙(a * b) * 𝜙(gcd(a, b))

In [34]:
check2(100, 12)

Out[34]:
True

Another way to compute the totient is Euler's product formula:

$$\phi(n) = n\ \prod\limits_{p | n} (1 - 1/p)$$

$p\ |\ n$ means "p is a prime factor of n" (p divides n evenly), and we only want the distinct primes. For example, in computing the totient of 100, we use only the prime factors 2 and 5, each once. The $\prod$ means to form the product, over all distinct prime factors, of the terms $(1 - 1/p)$.

We will translate this typography into a Python program below.

In [35]:
100 * (1 - 1/2) * (1 - 1/5)

Out[35]:
40.0

To keep the computation out of floating point, we may use Fraction objects and, realizing the answer is a whole number, grab just the numerator at the very end (the denominator will be 1).

In [36]:
from fractions import Fraction

def tot(n):
    product = Fraction(1, 1)
    for p in set(primes.factors(n)[1:]):  # throw away 1 as a factor (not prime)
        product *= (Fraction(1, 1) - Fraction(1, p))
    return (n * product).numerator

In [37]:
tot(100)

Out[37]:
40
In [38]:
tot(N)

Out[38]:
9472

### GCD (continued)

GCD and LCD are often used, especially when simplifying fractions. To get the fraction $(m/n)$ in lowest terms, one divides both $m$ and $n$ by gcd(m,n).

Clearly gcd is a workhorse at the core of our numeric computations.
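For example, reducing 84/126 to lowest terms (a sketch using math.gcd, equivalent to the euclid function defined above):

```python
from math import gcd

m, n = 84, 126
g = gcd(m, n)            # 42
print(m // g, n // g)    # 2 3, i.e. 84/126 reduces to 2/3
```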

How does Euclid's Method get the job done?

Let's break it down, step by step.

If I'm looking for the greatest common divisor of a and b, I should first see whether a divides b or b divides a with no remainder. If $b > a$, then divmod(a, b) is (0, a).

In [39]:
divmod(4, 12)

Out[39]:
(0, 4)

This means a and b will swap in the next line:

In [40]:
a=4
b=12

print(a, b)  # before

if a % b:
    b, a = a % b, b

print(a, b)  # after


4 12
12 4


Therefore, we don't really care whether a > b or a < b at the start.

Going forward, whenever there's a remainder, the question becomes "what divides both this new remainder, and the smaller of the two numbers just compared?"

In other words, the problem keeps transferring to finding a divisor that works for the smaller size, and the remainder upon dividing into the larger size. The quantities get smaller and smaller.

Once the smaller number b divides the larger exactly, we're done: b is the gcd. In particular, if b reaches 1, the loop must end, since 1 divides every integer with no remainder. Remember, if gcd(a, b) is 1, then a and b are coprime.
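A version of euclid instrumented with print statements makes the shrinking visible (a sketch; the name euclid_trace is our own):

```python
def euclid_trace(a, b):
    """Euclid's Algorithm, printing each reduction step."""
    while a % b:
        print(f"gcd({a}, {b}) -> gcd({b}, {a % b})")
        b, a = a % b, b
    return b

print(euclid_trace(100, 12))   # the arguments shrink until 4 divides evenly
```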

### LCD

We may define lcd(a, b), better known as the lowest common multiple (LCM), or lowest common denominator when working with fractions, as (a * b)/gcd(a, b), i.e. their product after canceling all factors in common.

The LCD is the smallest number both $a$ and $b$ will divide into, evenly (without remainder).

In [41]:
gcd = euclid  # make a synonym

def lcd(a, b):
    return (a * b)//gcd(a, b)

In [42]:
r = lcd(679, 301)
r

Out[42]:
29197
In [43]:
r//301, r//679

Out[43]:
(97, 43)

### Python's reduce function

Suppose we want to find the greatest common divisor of a whole long list of numbers? Ditto the LCD. The pairwise idea extends naturally: we can use the reduce function in functools.

reduce eats the first two items, gets a result, combines that result with the next item, and so on: a cumulative strategy. Think of adding or multiplying a whole list of numbers together.

In [44]:
? reduce

Docstring:
reduce(function, sequence[, initial]) -> value

Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5).  If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.
Type:      builtin_function_or_method

In [45]:
from operator import add, mul
reduce(add, [1, 2, 3, 4])  # ((1+2)+3)+4

Out[45]:
10
In [46]:
reduce(mul, range(1, 10)) # factorial 9!

Out[46]:
362880
In [47]:
import math
from math import factorial

factorial(9)

Out[47]:
362880
In [48]:
reduce(gcd, [25, 27, 32, 17])

Out[48]:
1
In [49]:
reduce(lcd, [25, 27, 32, 17])

Out[49]:
367200

### Another Primality Test

This topic is introduced in the Pascal's Triangle Notebook as well as in the Sympy Notebook.

We're revisiting it here as an application for reduce along with gcd.

Lets define a function to give the nth row of Pascal's Triangle.

The Binomial Theorem gives us an expression we can use. Our goal is to introduce another primality test associated with a relatively new algorithm called AKS, after its discoverers Agrawal, Kayal and Saxena (similar to how RSA is named for a 3-person team).

In [50]:
import sys
sys.version[:3]

Out[50]:
'3.7'
In [51]:
from math import gcd as euclid

try:
    from math import comb as binom  # math.comb, included in Python 3.8 up
except ImportError:
    def binom(n, k):
        return math.factorial(n) // math.factorial(k) // math.factorial(n - k)

In [52]:
p = 19
coeffs = [binom(p, k) for k in range(0,p+1)]
coeffs

Out[52]:
[1,
19,
171,
969,
3876,
11628,
27132,
50388,
75582,
92378,
92378,
75582,
50388,
27132,
11628,
3876,
969,
171,
19,
1]

Note the symmetry here. With an odd number as input, two terms always repeat at the center. For our primality test, we need to test each coefficient only once. The theorem behind AKS states that only a prime will be a divisor of every interior coefficient in the corresponding row of Pascal's Triangle.

In [53]:
coeffs[1:p//2 + 1]  # just left side, keeping coefficient p

Out[53]:
[19, 171, 969, 3876, 11628, 27132, 50388, 75582, 92378]
In [54]:
reduce(euclid, coeffs[1:p//2 + 1])

Out[54]:
19

If the gcd is p itself, then we know p is a divisor of all the coefficients in question, and therefore p is prime.

See the YouTube Gallery for a video on this recently discovered primality test. The AKS test itself is a different but related algorithm.

In [55]:
def primality_test(c):
    """
    p divides evenly into the coefficients of the pth
    row of Pascal's Triangle, if and only if p is prime
    """
    coeffs = [binom(c, k) for k in range(1, c//2 + 1)]
    return c == reduce(euclid, coeffs)  # gcd is the candidate prime?

primality_test(11)

Out[55]:
True
In [57]:
primality_test(17)

Out[57]:
True
In [58]:
print(list(filter(primality_test, [2, *range(3, 200, 2)])))

[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199]


To be continued...

Note to readers: in the early days, simple links such as the above returned you to this very repo, but GitHub will now redirect you somewhere else. To use this repo effectively, you're encouraged to clone it locally and browse it in JupyterLab.