import sys
sys.path.append('..')
import pypn as pn
from pypn.variables import *
PyPN (pronounced like the famous hobbit) is a Python library and linear logic theorem prover based on proof nets.
Currently, it's mainly intended to support reasoning in multiplicative linear logic (MLL) plus the n-ary MIX rule, which was the main thing I was interested in when I first wrote it. In "one-sided" form, MLL is very simple: $$ \frac{\vdash \Gamma, A \quad \vdash \Delta, B}{\vdash \Gamma,\Delta, A \otimes B} \qquad\qquad \frac{\vdash \Gamma, A, B}{\vdash \Gamma, A ⅋ B} $$
$$ \frac{}{\vdash 1} \qquad\qquad \frac{\vdash \Gamma}{\vdash \Gamma,\bot} \qquad\qquad \frac{}{\vdash A^{\perp}, A} $$
Adding the n-ary MIX rule is equivalent to requiring the two units to coincide, i.e. $I := 1 \equiv \bot$. In particular, it implies the binary MIX rule, which looks like "mixing" together the contents of two sequents, whence the name.
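To see the implication concretely, here is a sketch of how binary MIX falls out of $1 \equiv \bot$, using the standard (admissible) cut rule, which is not listed above but holds for MLL: since $\bot^{\perp} = 1 \equiv \bot$, one of the two $\bot$s introduced below can be read as the dual of the other, so they can be cut against each other.
$$ \frac{\dfrac{\vdash \Gamma}{\vdash \Gamma, \bot} \qquad \dfrac{\vdash \Delta}{\vdash \Delta, \bot}}{\vdash \Gamma, \Delta}\ \text{(cut, reading one } \bot \text{ as } 1 = \bot^{\perp}\text{)} $$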
Okay, down to business. The first thing to note is that we imported all of the symbols in pypn.variables. This dumps a lot of variables and atoms into our namespace to play with. Variables are named {A0,...,Z0,A1,...,Z1,A2,...,Z2}, i.e. every capital letter followed by a 0, 1, or 2. If you need more, use the Var constructor:
Sandwich = Var('Sandwich')
Variables can be combined to make expressions via MLL connectives, which are implemented as overloaded operators.
$I \leadsto$ pn.I
$X_0 \otimes Y_0 \leadsto$ X0 * Y0
$X_0 ⅋ Y_0 \leadsto$ X0 + Y0
$X_0^{\perp} \leadsto$ ~X0
$X_0 ⊸ Y_0 := X_0^{\perp} ⅋ Y_0 \leadsto$ X0 >> Y0
These will all return an object of type Expr, which also implements these operations. In fact, Var is a subclass of Expr.
Complex expressions are normalised by pushing negation inside:
~((X0 >> Y0) * (X1 >> Y1))
Expr((X0 * ~Y0) + (X1 * ~Y1))
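This overloading-plus-normalisation pattern can be sketched in a few lines. The following is a hypothetical toy, not PyPN's actual classes (PyPN's Expr and Var are richer), but it shows the idea: operators build an expression tree, and negation is normalised by pushing it inward via De Morgan duality.

```python
from dataclasses import dataclass

# Hypothetical toy version of operator-overloaded MLL expressions.
class E:
    def __mul__(self, other): return Tensor(self, other)       # tensor:   X * Y
    def __add__(self, other): return Par(self, other)          # par:      X + Y
    def __rshift__(self, other): return Par(neg(self), other)  # lollipop: X >> Y := ~X + Y
    def __invert__(self): return neg(self)                     # negation: ~X

@dataclass
class V(E):
    name: str
    def __repr__(self): return self.name

@dataclass
class Neg(E):
    arg: E
    def __repr__(self): return f"~{self.arg!r}"

@dataclass
class Tensor(E):
    l: E
    r: E
    def __repr__(self): return f"({self.l!r} * {self.r!r})"

@dataclass
class Par(E):
    l: E
    r: E
    def __repr__(self): return f"({self.l!r} + {self.r!r})"

def neg(e):
    """Normalise by pushing negation inward (De Morgan duality)."""
    if isinstance(e, Neg): return e.arg                      # ~~X = X
    if isinstance(e, Tensor): return Par(neg(e.l), neg(e.r))
    if isinstance(e, Par): return Tensor(neg(e.l), neg(e.r))
    return Neg(e)

X0, Y0, X1, Y1 = V('X0'), V('Y0'), V('X1'), V('Y1')
print(~((X0 >> Y0) * (X1 >> Y1)))  # ((X0 * ~Y0) + (X1 * ~Y1))
```

Up to the outermost parentheses, this reproduces the normal form PyPN printed above.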
Operator precedence comes from Python, which sometimes does the Right Thing:
X0 * Y0 >> X1 * Y1 == (X0 * Y0) >> (X1 * Y1)
True
...and other times not. Notably, >> associates to the left, rather than to the right as we would expect for implication:
X0 >> X1 >> X2 == (X0 >> X1) >> X2
True
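Since this grouping happens in Python's parser before PyPN ever sees the expression, you can inspect it with the standard-library ast module, independent of any library:

```python
import ast

# Python parses a >> b >> c as (a >> b) >> c: the outermost operator
# is the *second* >>, and its left operand is itself a BinOp.
tree = ast.parse("X0 >> X1 >> X2", mode="eval").body
assert isinstance(tree.op, ast.RShift)
assert isinstance(tree.left, ast.BinOp)                      # the subtree X0 >> X1
assert isinstance(tree.right, ast.Name) and tree.right.id == "X2"
```

So when chaining implications, parenthesize the right-nested reading explicitly: X0 >> (X1 >> X2).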
Atoms behave in almost exactly the same way as variables, except they are named by lower-case letters and they know they are "positive". Expressions can be positive, negative, or neither depending on where negations appear: tensors and pars of positive elements are positive, negations of positive elements are negative, and anything else is neither.
This won't play a role for MLL+MIX, but can be important for some extensions to the logic.
exprs = [a0, a0 * b0, a0 + b0, ~a0, ~a0 * ~b0, ~a0 + ~b0, ~a0 * b0, ~a0 + b0, A0, B0, C0]
for e in exprs:
    print(str(e).rjust(10), end=': ')
    if e.positive(): print("positive")
    elif e.negative(): print("negative")
    else: print("neither")
        a0: positive
   a0 * b0: positive
   a0 + b0: positive
       ~a0: negative
 ~a0 * ~b0: negative
 ~a0 + ~b0: negative
  ~a0 * b0: neither
  ~a0 + b0: neither
        A0: neither
        B0: neither
        C0: neither
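The polarity rules just described can be sketched as a recursive check over a tiny tuple-based AST. This is a hypothetical representation, not PyPN's; note that (matching the table above) a tensor or par of negatives also counts as negative, since after normalisation it is the negation of a positive formula.

```python
# Nodes: ('atom', name), ('var', name), ('neg', e), ('tensor', l, r), ('par', l, r).
# Assumes negation has already been pushed down onto atoms and variables.
def positive(e):
    if e[0] == 'atom': return True                     # atoms are positive
    if e[0] in ('tensor', 'par'):                      # connectives of positives
        return positive(e[1]) and positive(e[2])
    return False                                       # variables, negations: not positive

def negative(e):
    if e[0] == 'neg': return positive(e[1])            # negation of a positive
    if e[0] in ('tensor', 'par'):                      # connectives of negatives
        return negative(e[1]) and negative(e[2])
    return False

a0, b0 = ('atom', 'a0'), ('atom', 'b0')
print(positive(('tensor', a0, b0)))                    # True
print(negative(('tensor', ('neg', a0), ('neg', b0))))  # True
mixed = ('par', ('neg', a0), b0)
print(positive(mixed), negative(mixed))                # False False  -> "neither"
```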
Now, we are ready to prove some stuff. The simplest way to prove that one formula entails another is to pass the pair to pn.prove. If successful, it will return a proof net, which you can draw with pn.draw.
p = pn.prove((X0 >> Y0) * (X1 >> Y1), X0 >> ((Y0 >> X1) >> Y1))
pn.draw(p)
If pn.prove fails, it will return None. For convenience, pn.draw(None) will simply print a helpful message:
p = pn.prove(X0 + X1, X0 * X1)
pn.draw(p)
FAIL
But what does it do? It starts by calling pn.proofnet.decompose on the hypothesis and pn.proofnet.compose on the conclusion, which give a pair of trees:
g = pn.Graph()
nodes, row = pn.proofnet.decompose((X0 >> Y0) * (X1 >> Y1), g)
pn.proofnet.compose(X0 >> ((Y0 >> X1) >> Y1), g, row+1)
pn.draw(g)
It then plugs pairs of inputs and outputs and checks that the proof net is still valid via the function checker. Once all the nodes but the hypothesis and conclusion are plugged, a proof is accepted if it passes the checker. If it fails the checker, it keeps trying all type-respecting pluggings until a valid one is found or all possibilities are exhausted. This (super simple) strategy is exponential, even with a poly-time checker function, but this only really becomes a problem if you have lots of repeated variables in a formula.
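The shape of that search can be illustrated abstractly. The sketch below is hypothetical and works on plain (name, type) pairs rather than PyPN's graph nodes, but it shows why the strategy is exponential: it enumerates every type-respecting pairing of dangling outputs with dangling inputs and returns the first one the checker accepts.

```python
from itertools import permutations

def search(outputs, inputs, checker):
    """Try every type-respecting pairing of outputs with inputs;
    return the first pairing the checker accepts, else None."""
    for perm in permutations(inputs):
        pairing = list(zip(outputs, perm))
        # Type-respecting: only plug an output into an input of the same type.
        if all(o[1] == i[1] for o, i in pairing):
            if checker(pairing):
                return pairing
    return None

outs = [('o1', 'X'), ('o2', 'Y')]
ins  = [('i1', 'Y'), ('i2', 'X')]
print(search(outs, ins, checker=lambda p: True))
# [(('o1', 'X'), ('i2', 'X')), (('o2', 'Y'), ('i1', 'Y'))]
print(search(outs, ins, checker=lambda p: False))  # None
```

Repeated variables blow this up because many pairings become type-respecting, so the checker has to be consulted for each of them.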
checker is the most important part of this process, and it is passed in as an argument to pn.prove. This can be any function that takes a pn.Graph and returns a boolean, so it's possible to prove unsound stuff if you change the checker:
p = pn.prove(X0 + X1, X0 * X1, checker=lambda x: True)
pn.draw(p)
If no checker is provided, the default is pn.proofnet.cut_checker, which repeatedly removes root nodes, subject to the condition that "par-like" nodes disconnect the graph. Other possibilities are:
- pn.proofnet.switching_checker, which implements the Danos-Regnier "switching" conditions
- pn.proofnet.hocc_cut_checker, which implements a looser condition than cut_checker that is sound for higher-order causal categories (see https://arxiv.org/abs/1701.04732), but not for plain MLL+MIX
In particular, in HOCC-logic, tensor and par coincide for purely positive (and hence purely negative) formulae.
p = pn.prove(a0+b0, a0*b0, checker=pn.proofnet.hocc_cut_checker)
pn.draw(p)
Note this only holds for atoms {a0, b0}, which are assumed to be positive, as mentioned earlier. Variables {A0, B0} (which stand for arbitrary formulae) will still not satisfy this entailment:
p = pn.prove(A0+B0, A0*B0, checker=pn.proofnet.hocc_cut_checker)
pn.draw(p)
FAIL