#!/usr/bin/env python # coding: utf-8 # # Mining Function Specifications # # In the [chapter on assertions](Assertions.ipynb), we have seen how important it is to _check_ whether the result is as expected. In this chapter, we introduce a technique that allows us to _mine_ function specifications from a set of given executions, resulting in abstract and formal _descriptions_ of what the function expects and what it delivers. # # These so-called _dynamic invariants_ produce pre- and post-conditions over function arguments and variables from a set of executions. Within debugging, the resulting _assertions_ can immediately check whether function behavior has changed, but can also be useful to determine the characteristics of _failing_ runs (as opposed to _passing_ runs). Furthermore, the resulting specifications provide pre- and postconditions for formal program proofs, testing, and verification. # # This chapter is based on [a chapter with the same name in The Fuzzing Book](https://www.fuzzingbook.org/html/DynamicInvariants.html), which focuses on test generation. # In[1]: from bookutils import YouTubeVideo YouTubeVideo("HDu1olXFvv0") # **Prerequisites** # # * You should be familiar with tracing program executions, as in the [chapter on tracing](Tracer.ipynb). # * Later in this section, we access the internal _abstract syntax tree_ representations of Python programs and transform them, as in the [chapter on tracking failure origins](Slicer.ipynb). # In[2]: import bookutils.setup # In[3]: from Tracer import Tracer # In[4]: # ignore from typing import Sequence, Any, Callable, Tuple from typing import Dict, Union, Set, List, cast, Optional # ## Synopsis # # # To [use the code provided in this chapter](Importing.ipynb), write # # ```python # >>> from debuggingbook.DynamicInvariants import # ``` # # and then make use of the following features. # # # This chapter provides two classes that automatically extract specifications from a function and a set of inputs: # # * `TypeAnnotator` for _types_, and # * `InvariantAnnotator` for _pre-_ and _postconditions_. # # Both work by _observing_ a function and its invocations within a `with` clause. Here is an example for the type annotator: # # ```python # >>> def sum2(a, b): # type: ignore # >>> return a + b # >>> with TypeAnnotator() as type_annotator: # >>> sum2(1, 2) # >>> sum2(-4, -5) # >>> sum2(0, 0) # ``` # The `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution. # # ```python # >>> print(type_annotator.typed_functions()) # def sum2(a: int, b: int) -> int: # return a + b # # ``` # As a shortcut, one can also just evaluate the annotator: # # ```python # >>> type_annotator # def sum2(a: int, b: int) -> int: # return a + b # ``` # The invariant annotator works similarly: # # ```python # >>> with InvariantAnnotator() as inv_annotator: # >>> sum2(1, 2) # >>> sum2(-4, -5) # >>> sum2(0, 0) # ``` # The `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values. 
# # ```python # >>> print(inv_annotator.functions_with_invariants()) # @precondition(lambda a, b: isinstance(a, int)) # @precondition(lambda a, b: isinstance(b, int)) # @postcondition(lambda return_value, a, b: a == return_value - b) # @postcondition(lambda return_value, a, b: b == return_value - a) # @postcondition(lambda return_value, a, b: isinstance(return_value, int)) # @postcondition(lambda return_value, a, b: return_value == a + b) # @postcondition(lambda return_value, a, b: return_value == b + a) # def sum2(a, b): # type: ignore # return a + b # # # ``` # Again, a shortcut is available: # # ```python # >>> inv_annotator # @precondition(lambda a, b: isinstance(a, int)) # @precondition(lambda a, b: isinstance(b, int)) # @postcondition(lambda return_value, a, b: a == return_value - b) # @postcondition(lambda return_value, a, b: b == return_value - a) # @postcondition(lambda return_value, a, b: isinstance(return_value, int)) # @postcondition(lambda return_value, a, b: return_value == a + b) # @postcondition(lambda return_value, a, b: return_value == b + a) # def sum2(a, b): # type: ignore # return a + b # ``` # Such type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs). The chapter gives details on how to customize the properties checked for. # # ![](PICS/DynamicInvariants-synopsis-1.svg) # # # ## Specifications and Assertions # # When implementing a function or program, one usually works against a _specification_ – a set of documented requirements to be satisfied by the code. Such specifications can come in natural language. A formal specification, however, allows the computer to check whether the specification is satisfied. # # In the [chapter on assertions](Assertions.ipynb), we have seen how _preconditions_ and _postconditions_ can describe what a function does. Consider the following (simple) square root function: # In[5]: def square_root(x): # type: ignore assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result # The assertion `assert p` checks the condition `p`; if it does not hold, execution is aborted. Here, the actual body is not yet written; we use the assertions as a specification of what `square_root()` _expects_, and what it _delivers_. # # The topmost assertion is the _precondition_, stating the requirements on the function arguments. The assertion at the end is the _postcondition_, stating the properties of the function result (including its relationship with the original arguments). Using these pre- and postconditions as a specification, we can now go and implement a square root function that satisfies them. Once implemented, we can have the assertions check at runtime whether `square_root()` works as expected. # However, not every piece of code is developed with explicit specifications in the first place, let alone with formal pre- and post-conditions. (Just take a look at the chapters in this book.) This is a pity: As Ken Thompson famously said, "Without specifications, there are no bugs – only surprises". It is also a problem for debugging, since debugging needs some specification telling us what is wrong – and how to fix it. This raises the interesting question: Can we somehow _retrofit_ existing code with "specifications" that properly describe its behavior, allowing developers to simply _check_ them rather than having to write them from scratch? This is what we do in this chapter.
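# Here is a minimal sketch of what such a self-checking implementation could look like. (The name `square_root_checked` is ours, for illustration only; and since exact equality rarely holds for floating-point results, the sketch hedges the postcondition with `math.isclose()`.)
#
# ```python
# import math
#
# def square_root_checked(x: float) -> float:
#     assert x >= 0  # Precondition
#     result = x ** 0.5  # any correct implementation would do here
#     # Exact equality (result * result == x) is brittle for floats,
#     # so we check the postcondition with a tolerance instead
#     assert math.isclose(result * result, x)  # Postcondition
#     return result
#
# square_root_checked(4.0)  # passes; square_root_checked(-1.0) would abort
# ```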
# ## Beyond Generic Failures # # Before we go into _mining_ specifications, let us first discuss why it could be useful to _have_ them. As a motivating example, consider the full implementation of `square_root()` from the [chapter on assertions](Assertions.ipynb): # In[6]: import bookutils.setup # In[7]: def square_root(x): # type: ignore """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx # `square_root()` does not come with any functionality that would check types or values. Hence, it is easy for callers to make mistakes when calling `square_root()`: # In[8]: from ExpectError import ExpectError, ExpectTimeout # In[9]: with ExpectError(): square_root("foo") # In[10]: with ExpectError(): x = square_root(0.0) # At least, the Python system catches these errors at runtime. The following call, however, simply lets the function enter an infinite loop: # In[11]: with ExpectTimeout(1): x = square_root(-1.0) # Our goal is to avoid such errors by _annotating_ functions with information that prevents errors like the ones above. The idea is to provide a _specification_ of expected properties – a specification that can then be checked at runtime or statically. # ## Mining Data Types # # For our Python code, one of the most important "specifications" we need is *types*. Python being a "dynamically" typed language means that all data types are determined at run time; the code itself does not explicitly state whether a variable is an integer, a string, an array, a dictionary – or whatever. # As a _writer_ of Python code, omitting explicit type declarations may save time (and allows for some fun hacks). It is less clear whether a lack of types helps humans in _reading_ and _understanding_ code. For a _computer_ trying to analyze code, the lack of explicit types is detrimental. If, say, a constraint solver sees `if x:` and cannot know whether `x` is supposed to be a number or a string, this introduces an _ambiguity_. Such ambiguities may multiply over the entire analysis in a combinatorial explosion – or make the analysis yield an overly inaccurate result. # Python 3.6 and later allow data types as _annotations_ to function arguments (actually, to all variables) and return values. We can, for instance, state that `square_root()` is a function that accepts a floating-point value and returns one: # In[12]: def square_root_with_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return square_root(x) # By default, such annotations are ignored by the Python interpreter. Therefore, one can still call `square_root_with_type_annotations()` with a string as an argument and get the exact same result as above. However, one can make use of special _type checking_ modules that would check types – _dynamically_ at runtime or _statically_ by analyzing the code without having to execute it. # ### Excursion: Runtime Type Checking # _Please note: `enforce` no longer runs on Python 3.7 and later, so this section is commented out_ # # The Python `enforce` package provides a function decorator that automatically inserts type-checking code that is executed at runtime.
Here is how to use it: # In[13]: # import enforce # In[14]: # @enforce.runtime_validation # def square_root_with_checked_type_annotations(x: float) -> float: # """Computes the square root of x, using the Newton-Raphson method""" # return square_root(x) # Invoking `square_root_with_checked_type_annotations()` now raises an exception when the argument type differs from the declared one: # In[15]: # with ExpectError(): # square_root_with_checked_type_annotations(True) # Note that this error is not caught by the "untyped" variant, where passing a boolean value happily returns $\sqrt{1}$ as result. # In[16]: # square_root(True) # In Python (and other languages), the boolean values `True` and `False` can be implicitly converted to the integers 1 and 0; however, it is hard to think of a call to `sqrt()` where this would not be an error. # In[17]: from bookutils import quiz # In[18]: quiz("What happens if we call " "`square_root_with_checked_type_annotations(1)`?", [ "`1` is automatically converted to float. It will pass.", "`1` is a subtype of float. It will pass.", "`1` is an integer, and not a float. The type check will fail.", "The function will fail for some other reason." ], '37035 // 12345') # "Prediction is very difficult, especially when the future is concerned" (Niels Bohr). We can find out by a simple experiment that `float` actually means `float` – and not `int`: # In[19]: # with ExpectError(enforce.exceptions.RuntimeTypeError): # square_root_with_checked_type_annotations(1) # To allow `int` as a type, we need to specify a _union_ of types. # In[20]: # @enforce.runtime_validation # def square_root_with_union_type(x: Union[int, float]) -> float: # """Computes the square root of x, using the Newton-Raphson method""" # return square_root(x) # In[21]: # square_root_with_union_type(2) # In[22]: # square_root_with_union_type(2.0) # In[23]: # with ExpectError(enforce.exceptions.RuntimeTypeError): # square_root_with_union_type("Two dot zero") # ### End of Excursion # ### Static Type Checking # # Type annotations can also be checked _statically_ – that is, without even running the code. Let us create a simple Python file consisting of the above `square_root_with_type_annotations()` definition and a bad invocation. # In[24]: import inspect import tempfile # In[25]: f = tempfile.NamedTemporaryFile(mode='w', suffix='.py') f.name # In[26]: f.write(inspect.getsource(square_root)) f.write('\n') f.write(inspect.getsource(square_root_with_type_annotations)) f.write('\n') f.write("print(square_root_with_type_annotations('123'))\n") f.flush() # These are the contents of our newly created Python file: # In[27]: from bookutils import print_file # In[28]: print_file(f.name, start_line_number=1) # [Mypy](http://mypy-lang.org) is a type checker for Python programs. As it checks types statically, types induce no overhead at runtime; plus, a static check can be faster than a lengthy series of tests with runtime type checking enabled. Let us see what `mypy` produces on the above file: # In[29]: import subprocess # In[30]: result = subprocess.run(["mypy", "--strict", f.name], universal_newlines=True, stdout=subprocess.PIPE) # In[31]: print(result.stdout.replace(f.name + ':', '')) del f # Delete temporary file # We see that `mypy` complains about untyped function definitions such as `square_root()`; most importantly, however, it finds that the call to `square_root_with_type_annotations()` in the last line has the wrong type.
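# If one wants such checks enforced at runtime today, without `enforce`, one can hand-roll a small decorator. The following is a minimal sketch (the helper `check_types()` is our own, for illustration only); it validates arguments and the return value against the declared annotations via `isinstance()`, and handles only plain classes – no `Union` or generic types:
#
# ```python
# import functools
# from typing import get_type_hints
#
# def check_types(func: Callable) -> Callable:
#     """Minimal sketch of a runtime type checker (plain classes only)"""
#     hints = get_type_hints(func)
#
#     @functools.wraps(func)
#     def wrapper(*args: Any, **kwargs: Any) -> Any:
#         # Match positional arguments to parameter names
#         names = func.__code__.co_varnames[:func.__code__.co_argcount]
#         for name, value in {**dict(zip(names, args)), **kwargs}.items():
#             if name in hints:
#                 assert isinstance(value, hints[name]), \
#                     f"{name} should be {hints[name].__name__}"
#         retval = func(*args, **kwargs)
#         if 'return' in hints:
#             assert isinstance(retval, hints['return']), \
#                 f"return value should be {hints['return'].__name__}"
#         return retval
#
#     return wrapper
#
# @check_types
# def square_root_checked_types(x: float) -> float:
#     return square_root(x)
#
# # square_root_checked_types("foo") would now fail the argument check
# ```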
# With `mypy`, we can achieve the same type safety with Python as in statically typed languages – provided that we as programmers also produce the necessary type annotations. Is there a simple way to obtain these? # ## Mining Type Specifications # # Our first task will be to mine type annotations (as part of the code) from _values_ we observe at run time. These type annotations would be _mined_ from actual function executions, _learning_ from (normal) runs what the expected argument and return types should be. By observing a series of calls such as these, we could infer that both `x` and the return value are of type `float`: # In[32]: y = square_root(25.0) y # In[33]: y = square_root(2.0) y # How can we mine types from executions? The answer is simple: # # 1. We _observe_ a function during execution. # 2. We track the _types_ of its arguments and return values. # 3. We include these types as _annotations_ in the code. # # To do so, we can make use of Python's tracing facility, which we already used in the [chapter on tracing executions](Tracer.ipynb). With every call to a function, we retrieve the arguments, their values, and their types. # ### Tracing Calls # # To observe argument types at runtime, we define a _tracer function_ that tracks the execution of `square_root()`, checking its arguments and return values. The `CallTracer` class is set to trace functions in a `with` block as follows: # # ```python # with CallTracer() as tracer: # function_to_be_tracked(...) # info = tracer.collected_information() # ``` # # To create the tracer, we build on the `Tracer` superclass as in the [chapter on tracing executions](Tracer.ipynb). # In[34]: from types import FrameType # We start with two helper functions. `get_arguments()` returns a list of call arguments in the given call frame. # In[35]: Arguments = List[Tuple[str, Any]] # In[36]: def get_arguments(frame: FrameType) -> Arguments: """Return call arguments in the given frame""" # When called, all arguments are local variables local_variables = dict(frame.f_locals) # explicit copy arguments = [(var, frame.f_locals[var]) for var in local_variables] # FIXME: This may be needed for Python < 3.10 # arguments.reverse() # Want same order as call return arguments # `simple_call_string()` is a helper for logging that prints out calls in a user-friendly manner. # In[37]: def simple_call_string(function_name: str, argument_list: Arguments, return_value: Any = None) -> str: """Return function_name(arg[0], arg[1], ...) as a string""" call = function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" if return_value is not None: call += " = " + repr(return_value) return call # Now for `CallTracer`. The constructor simply invokes the `Tracer` constructor: # In[38]: class CallTracer(Tracer): def __init__(self, log: bool = False, **kwargs: Any) -> None: super().__init__(**kwargs) self._log = log self.reset() def reset(self) -> None: self._calls: Dict[str, List[Tuple[Arguments, Any]]] = {} self._stack: List[Tuple[str, Arguments]] = [] # The `traceit()` method does nothing yet; this is done in specialized subclasses.
The `CallTracer` class implements a `traceit()` function that checks for function calls and returns: # In[39]: class CallTracer(CallTracer): def traceit(self, frame: FrameType, event: str, arg: Any) -> None: """Tracking function: Record all calls and all args""" if event == "call": self.trace_call(frame, event, arg) elif event == "return": self.trace_return(frame, event, arg) # `trace_call()` is called when a function is called; it retrieves the function name and current arguments, and saves them on a stack. # In[40]: class CallTracer(CallTracer): def trace_call(self, frame: FrameType, event: str, arg: Any) -> None: """Save current function name and args on the stack""" code = frame.f_code function_name = code.co_name arguments = get_arguments(frame) self._stack.append((function_name, arguments)) if self._log: print(simple_call_string(function_name, arguments)) # When the function returns, `trace_return()` is called. We now also have the return value. We log the whole call with arguments and return value (if desired) and save it in our list of calls. # In[41]: class CallTracer(CallTracer): def trace_return(self, frame: FrameType, event: str, arg: Any) -> None: """Get return value and store complete call with arguments and return value""" code = frame.f_code function_name = code.co_name return_value = arg # TODO: Could call get_arguments() here # to also retrieve _final_ values of argument variables called_function_name, called_arguments = self._stack.pop() assert function_name == called_function_name if self._log: print(simple_call_string(function_name, called_arguments), "returns", return_value) self.add_call(function_name, called_arguments, return_value) # `add_call()` saves the calls in a list; each function name has its own list. # In[42]: class CallTracer(CallTracer): def add_call(self, function_name: str, arguments: Arguments, return_value: Any = None) -> None: """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append((arguments, return_value)) # We can retrieve the list of calls, either for a given function name (`calls()`), # or for all functions (`all_calls()`). # In[43]: class CallTracer(CallTracer): def calls(self, function_name: str) -> List[Tuple[Arguments, Any]]: """Return list of calls for `function_name`.""" return self._calls[function_name] # In[44]: class CallTracer(CallTracer): def all_calls(self) -> Dict[str, List[Tuple[Arguments, Any]]]: """Return a mapping function_name -> calls for all functions tracked.""" return self._calls # Let us now put this to use. We turn on logging to track the individual calls and their return values: # In[45]: with CallTracer(log=True) as tracer: y = square_root(25) y = square_root(2.0) # After execution, we can retrieve the individual calls: # In[46]: calls = tracer.calls('square_root') calls # Each call is a pair (`argument_list`, `return_value`), where `argument_list` is a list of pairs (`parameter_name`, `value`). # In[47]: square_root_argument_list, square_root_return_value = calls[0] simple_call_string('square_root', square_root_argument_list, square_root_return_value) # If the function does not return a value, `return_value` is `None`.
# In[48]: def hello(name: str) -> None: print("Hello,", name) # In[49]: with CallTracer() as tracer: hello("world") # In[50]: hello_calls = tracer.calls('hello') hello_calls # In[51]: hello_argument_list, hello_return_value = hello_calls[0] simple_call_string('hello', hello_argument_list, hello_return_value) # ### Getting Types # # Despite what you may have read or heard, Python actually _is_ a typed language. It is just that it is _dynamically typed_ – types are used and checked only at runtime (rather than declared in the code, where they can be _statically checked_ at compile time). We can thus retrieve types of all values within Python: # In[52]: type(4) # In[53]: type(2.0) # In[54]: type([4]) # We can retrieve the type of the first argument to `square_root()`: # In[55]: parameter, value = square_root_argument_list[0] parameter, type(value) # as well as the type of the return value: # In[56]: type(square_root_return_value) # Hence, we see that (so far) `square_root()` is a function taking (among others) integers and returning floats. We could declare `square_root()` as: # In[57]: def square_root_annotated(x: int) -> float: return square_root(x) # This is a representation we could place in a static type checker, allowing it to check whether calls to `square_root()` actually pass a number. A dynamic type checker could run such checks at runtime. # By default, Python does not do anything with such annotations. However, tools can access annotations from functions and other objects: # In[58]: square_root_annotated.__annotations__ # This is how runtime checkers access the annotations they check against. # ### Annotating Functions with Types # # Our plan is to annotate functions automatically, based on the types we have seen. Our aim is to build a class `TypeAnnotator` that can be used as follows. First, it would track some execution: # # ```python # with TypeAnnotator() as annotator: # some_function_call() # ``` # # After tracking, `TypeAnnotator` would provide appropriate methods to access (type-)annotated versions of the function seen: # # ```python # print(annotator.typed_functions()) # ``` # # Let us put the pieces together to build `TypeAnnotator`. # #### Excursion: Accessing Function Structure # To annotate functions, we need to convert a function into a tree representation (called an _abstract syntax tree_, or AST) and back; we already have seen these in the chapter on [tracking value origins](Slicer.ipynb). # In[59]: import ast import inspect # We can get the source of a Python function using `inspect.getsource()`. (Note that this does not work for functions defined in other notebooks.) # In[60]: square_root_source = inspect.getsource(square_root) square_root_source # To view these in a visually pleasing form, our function `print_content(s, suffix)` formats and highlights the string `s` as if it were a file with ending `suffix`. We can thus view (and highlight) the source as if it were a Python file: # In[61]: from bookutils import print_content # In[62]: print_content(square_root_source, '.py') # Parsing this gives us an abstract syntax tree (AST) – a representation of the program in tree form. # In[63]: square_root_ast = ast.parse(square_root_source) # What does this AST look like? The helper functions `ast.dump()` (textual output) and `show_ast()` (graphical output with [showast](https://github.com/hchasestevens/show_ast)) allow us to inspect the structure of the tree.
We see that the function starts as a `FunctionDef` with name and arguments, followed by a body, which is a list of statements of type `Expr` (the docstring), type `Assign` (assignments), `While` (while loop with its own body), and finally `Return`. # In[64]: print(ast.dump(square_root_ast, indent=4)) # Too much text for you? This graphical representation may make things simpler. # In[65]: from bookutils import show_ast # In[66]: show_ast(square_root_ast) # The function `ast.unparse()` converts such a tree back into the more familiar textual Python code representation. Comments are gone, and there may be more (or fewer) parentheses than before, but the result has the same semantics: # In[67]: print_content(ast.unparse(square_root_ast), '.py') # #### End of Excursion # #### Excursion: Annotating Functions with Given Types # Let us now go and transform ASTs to add type annotations. We start with a helper function `parse_type(name)` which parses a type name into an AST. # In[68]: def parse_type(name: str) -> ast.expr: class ValueVisitor(ast.NodeVisitor): def visit_Expr(self, node: ast.Expr) -> None: self.value_node = node.value tree = ast.parse(name) name_visitor = ValueVisitor() name_visitor.visit(tree) return name_visitor.value_node # In[69]: print(ast.dump(parse_type('int'))) # In[70]: print(ast.dump(parse_type('[object]'))) # We now define a helper function that actually adds type annotations to a function AST. The `TypeTransformer` class builds on the Python standard library `ast.NodeTransformer` infrastructure. It would be called as # # ```python # TypeTransformer({'x': 'int'}, 'float').visit(ast) # ``` # # to annotate the arguments of `square_root()`: `x` with `int`, and the return type with `float`. The returned AST can then be unparsed, compiled or analyzed. # In[71]: class TypeTransformer(ast.NodeTransformer): def __init__(self, argument_types: Dict[str, str], return_type: Optional[str] = None): self.argument_types = argument_types self.return_type = return_type super().__init__() # The core of `TypeTransformer` is the method `visit_FunctionDef()`, which is called for every function definition in the AST. Its argument `node` is the subtree of the function definition to be transformed. Our implementation accesses the individual arguments and invokes `annotate_args()` on them; it also sets the return type in the `returns` attribute of the node. # In[72]: class TypeTransformer(TypeTransformer): def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef: """Add annotation to function""" # Set argument types new_args = [] for arg in node.args.args: new_args.append(self.annotate_arg(arg)) new_arguments = ast.arguments( node.args.posonlyargs, new_args, node.args.vararg, node.args.kwonlyargs, node.args.kw_defaults, node.args.kwarg, node.args.defaults ) # Set return type if self.return_type is not None: node.returns = parse_type(self.return_type) return ast.copy_location( ast.FunctionDef(node.name, new_arguments, node.body, node.decorator_list, node.returns), node) # Each argument gets its own annotation, taken from the types originally passed to the class: # In[73]: class TypeTransformer(TypeTransformer): def annotate_arg(self, arg: ast.arg) -> ast.arg: """Add annotation to single function argument""" arg_name = arg.arg if arg_name in self.argument_types: arg.annotation = parse_type(self.argument_types[arg_name]) return arg # Does this work? 
Let us annotate the AST from `square_root()` with types for the arguments and return types: # In[74]: new_ast = TypeTransformer({'x': 'int'}, 'float').visit(square_root_ast) # When we unparse the new AST, we see that the annotations actually are present: # In[75]: print_content(ast.unparse(new_ast), '.py') # Similarly, we can annotate the `hello()` function from above: # In[76]: hello_source = inspect.getsource(hello) # In[77]: hello_ast = ast.parse(hello_source) # In[78]: new_ast = TypeTransformer({'name': 'str'}, 'None').visit(hello_ast) # In[79]: print_content(ast.unparse(new_ast), '.py') # #### End of Excursion # #### Excursion: Annotating Functions with Mined Types # Let us now annotate functions with types mined at runtime. We start with a simple function `type_string()` that determines # the appropriate type of a given value (as a string): # In[80]: def type_string(value: Any) -> str: return type(value).__name__ # In[81]: type_string(4) # In[82]: type_string([]) # For composite structures, `type_string()` does not examine element types; hence, the type of `[3]` is simply `list` instead of, say, `list[int]`. For now, `list` will do fine. # In[83]: type_string([3]) # `type_string()` will be used to infer the types of argument values found at runtime, as returned by `CallTracer.all_calls()`: # In[84]: with CallTracer() as tracer: y = square_root(25.0) y = square_root(2.0) # In[85]: tracer.all_calls() # The function `annotate_types()` takes such a list of calls and annotates each function listed: # In[86]: from StackInspector import StackInspector # In[87]: def annotate_types(calls: Dict[str, List[Tuple[Arguments, Any]]]) \ -> Dict[str, ast.AST]: annotated_functions = {} stack_inspector = StackInspector() for function_name in calls: function = stack_inspector.search_func(function_name) if function: annotated_functions[function_name] = \ annotate_function_with_types(function, calls[function_name]) return annotated_functions # For each function, we get the source and its AST and then get to the actual annotation in `annotate_function_ast_with_types()`: # In[88]: def annotate_function_with_types(function: Callable, function_calls: List[Tuple[Arguments, Any]]) -> ast.AST: function_code = inspect.getsource(function) function_ast = ast.parse(function_code) return annotate_function_ast_with_types(function_ast, function_calls) # The function `annotate_function_ast_with_types()` invokes the `TypeTransformer` with the calls seen; for each call, it iterates over the arguments, determines their types, and annotates the AST with these. The universal type `Any` is used when we encounter type conflicts, which we will discuss below. # In[89]: def annotate_function_ast_with_types(function_ast: ast.AST, function_calls: List[Tuple[Arguments, Any]]) -> ast.AST: parameter_types: Dict[str, str] = {} return_type = None for calls_seen in function_calls: args, return_value = calls_seen if return_value is not None: # only annotate if a return value was seen if return_type and return_type != type_string(return_value): return_type = 'Any' else: return_type = type_string(return_value) for parameter, value in args: try: different_type = (parameter_types[parameter] != type_string(value)) except KeyError: different_type = False if different_type: parameter_types[parameter] = 'Any' else: parameter_types[parameter] = type_string(value) annotated_function_ast = \ TypeTransformer(parameter_types, return_type).visit(function_ast) return annotated_function_ast # Here is `square_root()` annotated with the types recorded using the tracer above.
# In[90]: print_content(ast.unparse(annotate_types(tracer.all_calls())['square_root']), '.py') # #### End of Excursion # #### Excursion: A Type Annotator Class # Let us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows accessing the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST. # In[91]: class TypeTracer(CallTracer): pass # In[92]: class TypeAnnotator(TypeTracer): def typed_functions_ast(self) -> Dict[str, ast.AST]: """Return a dict name -> AST for all functions observed, annotated with types""" return annotate_types(self.all_calls()) def typed_function_ast(self, function_name: str) -> Optional[ast.AST]: """Return an AST for all calls of `function_name` observed, annotated with types""" function = self.search_func(function_name) if not function: return None return annotate_function_with_types(function, self.calls(function_name)) def typed_functions(self) -> str: """Return the code for all functions observed, annotated with types""" functions = '' for f_name in self.all_calls(): f_ast = self.typed_function_ast(f_name) if f_ast: functions += ast.unparse(f_ast) else: functions += '# Could not find function ' + repr(f_name) return functions def typed_function(self, function_name: str) -> str: """Return the code for all calls of `function_name` observed, annotated with types""" function_ast = self.typed_function_ast(function_name) if not function_ast: raise KeyError return ast.unparse(function_ast) def __repr__(self) -> str: """String representation, like `typed_functions()`""" return self.typed_functions() # #### End of Excursion # Here is how to use `TypeAnnotator`. We first track a series of calls: # In[93]: with TypeAnnotator() as annotator: y = square_root(25.0) y = square_root(2.0) # After tracking, we can immediately retrieve an annotated version of the functions tracked: # In[94]: print_content(annotator.typed_functions(), '.py') # This also works for multiple and diverse functions. One could go and implement an automatic type annotator for Python files based on the types seen during execution. # In[95]: with TypeAnnotator() as annotator: hello('type annotations') y = square_root(1.0) # In[96]: print_content(annotator.typed_functions(), '.py') # Content like the above could now be sent to a type checker, which would detect any type inconsistency between callers and callees. # #### Excursion: Handling Multiple Types # # Let us now resolve the role of the magic `Any` type in `annotate_function_ast_with_types()`. If we see multiple types for the same argument, we set its type to `Any`.
For `square_root()`, this makes sense, as its arguments can be integers as well as floats: # In[97]: with CallTracer() as tracer: y = square_root(25.0) y = square_root(4) # In[98]: annotated_square_root_ast = annotate_types(tracer.all_calls())['square_root'] print_content(ast.unparse(annotated_square_root_ast), '.py') # The following function `sum3()` can be called with floating-point numbers as arguments, resulting in the parameters getting a `float` type: # In[99]: def sum3(a, b, c): # type: ignore return a + b + c # In[100]: with TypeAnnotator() as annotator: y = sum3(1.0, 2.0, 3.0) y # In[101]: print_content(annotator.typed_functions(), '.py') # If we call `sum3()` with integers, though, the arguments get an `int` type: # In[102]: with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y # In[103]: print_content(annotator.typed_functions(), '.py') # And we can also call `sum3()` with strings, which assigns the arguments a `str` type: # In[104]: with TypeAnnotator() as annotator: y = sum3("one", "two", "three") y # In[105]: print_content(annotator.typed_functions(), '.py') # If we have multiple calls, but with different types, `TypeAnnotator()` will assign an `Any` type to both arguments and return values: # In[106]: with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y = sum3("one", "two", "three") # In[107]: typed_sum3_def = annotator.typed_function('sum3') # In[108]: print_content(typed_sum3_def, '.py') # A type `Any` makes it explicit that an object can, indeed, have any type; it will not be type-checked at runtime or statically. To some extent, this defeats the power of type checking; but it also preserves some of the type flexibility that many Python programmers enjoy. Besides `Any`, the `typing` module supports several additional ways to define ambiguous types; we will keep this in mind for a later exercise. # #### End of Excursion # ## Mining Invariants # # Besides basic data types, we can check several further properties of arguments. We can, for instance, check whether an argument can be negative, zero, or positive; whether one argument is always smaller than the second; or whether the result is the sum of two arguments – properties that cannot be expressed in a (Python) type. # # Such properties are called *invariants*, as they hold across all invocations of a function. Specifically, invariants come as _pre_- and _postconditions_ – conditions that always hold at the beginning and at the end of a function. (There are also _data_ and _object_ invariants that express always-holding properties over the state of data or objects, but we do not consider these in this book.) # ### Annotating Functions with Pre- and Postconditions # # The classical means to specify pre- and postconditions is via _assertions_, which we have introduced in the [chapter on assertions](Assertions.ipynb). A precondition checks whether the arguments to a function satisfy the expected properties; a postcondition does the same for the result. We can express and check both using assertions as follows: # In[109]: def square_root_with_invariants(x): # type: ignore assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result # A nicer way, however, is to syntactically separate invariants from the function at hand.
Using appropriate decorators, we could specify pre- and postconditions as follows: # # ```python # @precondition(lambda x: x >= 0) # @postcondition(lambda return_value, x: return_value * return_value == x) # def square_root_with_invariants(x): # # normal code without assertions # ... # ``` # # The decorators `@precondition` and `@postcondition` would run the given functions (specified as anonymous `lambda` functions) before and after the decorated function, respectively. If the functions return `False`, the condition is violated. `@precondition` gets the function arguments as arguments; `@postcondition` additionally gets the return value as its first argument. # It turns out that implementing such decorators is not hard at all. Our implementation builds on a [code snippet from StackOverflow](https://stackoverflow.com/questions/12151182/python-precondition-postcondition-for-member-function-how): # In[110]: import functools # In[111]: def condition(precondition: Optional[Callable] = None, postcondition: Optional[Callable] = None) -> Callable: def decorator(func: Callable) -> Callable: @functools.wraps(func) # preserves name, docstring, etc def wrapper(*args: Any, **kwargs: Any) -> Any: if precondition is not None: assert precondition(*args, **kwargs), \ "Precondition violated" # Call original function or method retval = func(*args, **kwargs) if postcondition is not None: assert postcondition(retval, *args, **kwargs), \ "Postcondition violated" return retval return wrapper return decorator # In[112]: def precondition(check: Callable) -> Callable: return condition(precondition=check) # In[113]: def postcondition(check: Callable) -> Callable: return condition(postcondition=check) # With these, we can now start decorating `square_root()`: # In[114]: @precondition(lambda x: x > 0) def square_root_with_precondition(x): # type: ignore return square_root(x) # This catches arguments violating the precondition: # In[115]: with ExpectError(): square_root_with_precondition(-1.0) # Likewise, we can provide a postcondition: # In[116]: import math # In[117]: @postcondition(lambda ret, x: math.isclose(ret * ret, x)) def square_root_with_postcondition(x): # type: ignore return square_root(x) # In[118]: y = square_root_with_postcondition(2.0) y # If we have a buggy implementation of $\sqrt{x}$, this gets caught quickly: # In[119]: @postcondition(lambda ret, x: math.isclose(ret * ret, x)) def buggy_square_root_with_postcondition(x): # type: ignore return square_root(x) + 0.1 # In[120]: with ExpectError(): y = buggy_square_root_with_postcondition(2.0) # While checking pre- and postconditions is a great way to catch errors, specifying them can be cumbersome. Let us try to see whether we can (again) _mine_ some of them. # ### Mining Invariants # # To _mine_ invariants, we can use the same tracking functionality as before; instead of saving values for individual variables, though, we now check whether the values satisfy specific _properties_ or not. For instance, if all values of `x` seen satisfy the condition `x > 0`, then we make `x > 0` an invariant of the function. If we see positive, zero, and negative values of `x`, though, then there is no property of `x` left to talk about. # # The general idea is thus: # # 1. Check all variable values observed against a set of predefined properties; and # 2. Keep only those properties that hold for all runs observed. # ### Defining Properties # # What precisely do we mean by properties? Here is a small collection of value properties that would frequently be used in invariants.
All these properties would be evaluated with the _metavariables_ `X`, `Y`, and `Z` (actually, any upper-case identifier) being replaced with the names of function parameters: # In[121]: INVARIANT_PROPERTIES = [ "X < 0", "X <= 0", "X > 0", "X >= 0", # "X == 0", # implied by "not X", below # "X != 0", # implied by "X", below ] # When `square_root(x)` is called as, say, `square_root(5.0)`, we see that `x = 5.0` holds. The above properties would then all be checked for `x`. Only the properties `X > 0`, `X >= 0`, and `X` hold for the call seen; and hence `x > 0`, `x >= 0`, and `x` (or better: `x != 0`) would make potential preconditions for `square_root(x)`. # We can check for many more properties such as relations between two arguments: # In[122]: INVARIANT_PROPERTIES += [ "X == Y", "X > Y", "X < Y", "X >= Y", "X <= Y", ] # Types also can be checked using properties. For any function parameter `X`, only one of these will hold: # In[123]: INVARIANT_PROPERTIES += [ "isinstance(X, bool)", "isinstance(X, int)", "isinstance(X, float)", "isinstance(X, list)", "isinstance(X, dict)", ] # We can check for arithmetic properties: # In[124]: INVARIANT_PROPERTIES += [ "X == Y + Z", "X == Y * Z", "X == Y - Z", "X == Y / Z", ] # Here are relations over three values – a Python special: # In[125]: INVARIANT_PROPERTIES += [ "X < Y < Z", "X <= Y <= Z", "X > Y > Z", "X >= Y >= Z", ] # These Boolean properties also check for other types, as in Python, `None`, an empty list, an empty set, an empty string, and the value zero all evaluate to `False`. # In[126]: INVARIANT_PROPERTIES += [ "X", "not X" ] # Finally, we can also check for list or string properties. Again, this is just a tiny selection. # In[127]: INVARIANT_PROPERTIES += [ "X == len(Y)", "X == sum(Y)", "X in Y", "X.startswith(Y)", "X.endswith(Y)", ] # ### Extracting Meta-Variables # # Let us first introduce a few _helper functions_ before we can get to the actual mining. `metavars()` extracts the set of meta-variables (`X`, `Y`, `Z`, etc.) from a property. To this end, we parse the property as a Python expression and then visit the identifiers. # In[128]: def metavars(prop: str) -> List[str]: metavar_list = [] class ArgVisitor(ast.NodeVisitor): def visit_Name(self, node: ast.Name) -> None: if node.id.isupper(): metavar_list.append(node.id) ArgVisitor().visit(ast.parse(prop)) return metavar_list # In[129]: assert metavars("X < 0") == ['X'] # In[130]: assert metavars("X.startswith(Y)") == ['X', 'Y'] # In[131]: assert metavars("isinstance(X, str)") == ['X'] # ### Instantiating Properties # # To produce a property as an invariant, we need to be able to _instantiate_ it with variable names. Instantiating `X > 0` with `X` mapped to `a`, for instance, gets us `a > 0`. To this end, the function `instantiate_prop()` takes a property and a collection of variable names and instantiates the meta-variables left-to-right with the corresponding variable names in the collection.
# In[132]: def instantiate_prop_ast(prop: str, var_names: Sequence[str]) -> ast.AST: class NameTransformer(ast.NodeTransformer): def visit_Name(self, node: ast.Name) -> ast.Name: if node.id not in mapping: return node return ast.Name(id=mapping[node.id], ctx=ast.Load()) meta_variables = metavars(prop) assert len(meta_variables) == len(var_names) mapping = {} for i in range(0, len(meta_variables)): mapping[meta_variables[i]] = var_names[i] prop_ast = ast.parse(prop, mode='eval') new_ast = NameTransformer().visit(prop_ast) return new_ast # In[133]: def instantiate_prop(prop: str, var_names: Sequence[str]) -> str: prop_ast = instantiate_prop_ast(prop, var_names) prop_text = ast.unparse(prop_ast).strip() while prop_text.startswith('(') and prop_text.endswith(')'): prop_text = prop_text[1:-1] return prop_text # In[134]: assert instantiate_prop("X > Y", ['a', 'b']) == 'a > b' # In[135]: assert instantiate_prop("X.startswith(Y)", ['x', 'y']) == 'x.startswith(y)' # ### Evaluating Properties # # To actually _evaluate_ properties, we do not need to instantiate them. Instead, we simply convert them into a boolean function, using `lambda`: # In[136]: def prop_function_text(prop: str) -> str: return "lambda " + ", ".join(metavars(prop)) + ": " + prop # Here is a simple example: # In[137]: prop_function_text("X > Y") # We can easily evaluate the function: # In[138]: def prop_function(prop: str) -> Callable: return eval(prop_function_text(prop)) # Here is an example: # In[139]: p = prop_function("X > Y") # In[140]: quiz("What is p(100, 1)?", [ "False", "True" ], 'p(100, 1) + 1', globals()) # In[141]: p(100, 1) # In[142]: p(1, 100) # ### Checking Invariants # # To extract invariants from an execution, we need to check them on all possible instantiations of arguments. If the function to be checked has two arguments `a` and `b`, we instantiate the property `X < Y` both as `a < b` and `b < a` and check each of them. # To get all combinations, we use the Python `permutations()` function: # In[143]: import itertools # In[144]: for combination in itertools.permutations([1.0, 2.0, 3.0], 2): print(combination) # The function `true_property_instantiations()` takes a property and a list of tuples (`var_name`, `value`). It then produces all instantiations of the property with the given values and returns those that evaluate to True. # In[145]: Invariants = Set[Tuple[str, Tuple[str, ...]]] # In[146]: def true_property_instantiations(prop: str, vars_and_values: Arguments, log: bool = False) -> Invariants: instantiations = set() p = prop_function(prop) len_metavars = len(metavars(prop)) for combination in itertools.permutations(vars_and_values, len_metavars): args = [value for var_name, value in combination] var_names = [var_name for var_name, value in combination] try: result = p(*args) except: result = None if log: print(prop, combination, result) if result: instantiations.add((prop, tuple(var_names))) return instantiations # Here is an example. 
If `x == -1` and `y == 1`, the property `X < Y` holds for `x < y`, but not for `y < x`: # In[147]: invs = true_property_instantiations("X < Y", [('x', -1), ('y', 1)], log=True) invs # Instantiation gives us the familiar short form: # In[148]: for prop, var_names in invs: print(instantiate_prop(prop, var_names)) # Likewise, with values for `x` and `y` as above, the property `X < 0` only holds for `x`, but not for `y`: # In[149]: invs = true_property_instantiations("X < 0", [('x', -1), ('y', 1)], log=True) # In[150]: for prop, var_names in invs: print(instantiate_prop(prop, var_names)) # ### Extracting Invariants # # Let us now run the above invariant extraction on function arguments and return values as observed during a function execution. To this end, we extend the `CallTracer` class into an `InvariantTracer` class, which automatically computes invariants for all functions and all calls observed during tracking. # By default, an `InvariantTracer` uses the `INVARIANT_PROPERTIES` properties as defined above; however, one can specify alternate sets of properties. # In[151]: class InvariantTracer(CallTracer): def __init__(self, props: Optional[List[str]] = None, **kwargs: Any) -> None: if props is None: props = INVARIANT_PROPERTIES self.props = props super().__init__(**kwargs) # The key method of the `InvariantTracer` is the `invariants()` method. This iterates over the calls observed and checks which properties hold. Only the intersection of properties – that is, the set of properties that hold for all calls – is preserved, and eventually returned. The special variable `return_value` is set to hold the return value. # In[152]: RETURN_VALUE = 'return_value' # In[153]: class InvariantTracer(InvariantTracer): def all_invariants(self) -> Dict[str, Invariants]: return {function_name: self.invariants(function_name) for function_name in self.all_calls()} def invariants(self, function_name: str) -> Invariants: invariants = None for variables, return_value in self.calls(function_name): vars_and_values = variables + [(RETURN_VALUE, return_value)] s = set() for prop in self.props: s |= true_property_instantiations(prop, vars_and_values, self._log) if invariants is None: invariants = s else: invariants &= s assert invariants is not None return invariants # Here's an example of how to use `invariants()`. We run the tracer on a small set of calls. # In[154]: with InvariantTracer() as tracer: y = square_root(25.0) y = square_root(10.0) tracer.all_calls() # The `invariants()` method produces a set of properties that hold for the observed runs, together with their instantiations over function arguments. # In[155]: invs = tracer.invariants('square_root') invs # As before, the actual instantiations are easier to read: # # In[156]: def pretty_invariants(invariants: Invariants) -> List[str]: props = [] for (prop, var_names) in invariants: props.append(instantiate_prop(prop, var_names)) return sorted(props) # In[157]: pretty_invariants(invs) # We see that both `x` and the return value have a `float` type. We also see that both are always greater than zero. These are properties that may make useful pre- and postconditions, notably for symbolic analysis. # However, there's also an invariant which does _not_ universally hold, namely `return_value <= x`, as the following example shows: # In[158]: square_root(0.01) # Clearly, 0.1 > 0.01 holds. This is a case of us not learning from sufficiently diverse inputs.
As soon as we have a call including `x = 0.01`, though, the invariant `return_value <= x` is eliminated: # In[159]: with InvariantTracer() as tracer: y = square_root(25.0) y = square_root(10.0) y = square_root(0.01) pretty_invariants(tracer.invariants('square_root')) # We will discuss later how to ensure sufficient diversity in inputs. (Hint: This involves test generation.) # Let us try out our invariant tracer on `sum3()`. We see that all types are well-defined; the property that all arguments are non-zero, however, is specific to the calls observed. # In[160]: with InvariantTracer() as tracer: y = sum3(1, 2, 3) y = sum3(-4, -5, -6) pretty_invariants(tracer.invariants('sum3')) # If we invoke `sum3()` with strings instead, we get different invariants. Notably, we obtain the postcondition that the returned string always starts with the string in the first argument `a` – a universal postcondition if strings are used. # In[161]: with InvariantTracer() as tracer: y = sum3('a', 'b', 'c') y = sum3('f', 'e', 'd') pretty_invariants(tracer.invariants('sum3')) # If we invoke `sum3()` with both strings and numbers (and zeros, too), there are no properties left that would hold across all calls. That's the price of flexibility. # In[162]: with InvariantTracer() as tracer: y = sum3('a', 'b', 'c') y = sum3('c', 'b', 'a') y = sum3(-4, -5, -6) y = sum3(0, 0, 0) pretty_invariants(tracer.invariants('sum3')) # ### Converting Mined Invariants to Annotations # # As with types above, we would like to have some functionality where we can add the mined invariants as annotations to existing functions. To this end, we introduce the `InvariantAnnotator` class, extending `InvariantTracer`. # We start with a helper method. `params()` returns a comma-separated list of parameter names as observed during calls. # In[163]: class InvariantAnnotator(InvariantTracer): def params(self, function_name: str) -> str: arguments, return_value = self.calls(function_name)[0] return ", ".join(arg_name for (arg_name, arg_value) in arguments) # In[164]: with InvariantAnnotator() as annotator: y = square_root(25.0) y = sum3(1, 2, 3) # In[165]: annotator.params('square_root') # In[166]: annotator.params('sum3') # Now for the actual annotation.
`preconditions()` returns the preconditions from the mined invariants (i.e., those properties that do not depend on the return value) as a list of `@precondition` annotations: # In[167]: class InvariantAnnotator(InvariantAnnotator): def preconditions(self, function_name: str) -> List[str]: """Return a list of mined preconditions for `function_name`""" conditions = [] for inv in pretty_invariants(self.invariants(function_name)): if inv.find(RETURN_VALUE) >= 0: continue # Postcondition cond = ("@precondition(lambda " + self.params(function_name) + ": " + inv + ")") conditions.append(cond) return conditions # In[168]: with InvariantAnnotator() as annotator: y = square_root(25.0) y = square_root(0.01) y = sum3(1, 2, 3) # In[169]: annotator.preconditions('square_root') # `postconditions()` does the same for postconditions: # In[170]: class InvariantAnnotator(InvariantAnnotator): def postconditions(self, function_name: str) -> List[str]: """Return a list of mined postconditions for `function_name`""" conditions = [] for inv in pretty_invariants(self.invariants(function_name)): if inv.find(RETURN_VALUE) < 0: continue # Precondition cond = (f"@postcondition(lambda {RETURN_VALUE}," f" {self.params(function_name)}: {inv})") conditions.append(cond) return conditions # In[171]: with InvariantAnnotator() as annotator: y = square_root(25.0) y = square_root(0.01) y = sum3(1, 2, 3) # In[172]: annotator.postconditions('square_root') # With these, we can take a function and add both pre- and postconditions as annotations: # In[173]: class InvariantAnnotator(InvariantAnnotator): def functions_with_invariants(self) -> str: """Return the code of all observed functions, annotated with invariants""" functions = "" for function_name in self.all_invariants(): try: function = self.function_with_invariants(function_name) except KeyError: function = '# Could not find function ' + repr(function_name) functions += function return functions def function_with_invariants(self, function_name: str) -> str: """Return the code of `function_name`, annotated with invariants""" function = self.search_func(function_name) if not function: raise KeyError source = inspect.getsource(function) return '\n'.join(self.preconditions(function_name) + self.postconditions(function_name)) + \ '\n' + source def __repr__(self) -> str: """String representation, like `functions_with_invariants()`""" return self.functions_with_invariants() # Here comes `function_with_invariants()` in all its glory: # In[174]: with InvariantAnnotator() as annotator: y = square_root(25.0) y = square_root(0.01) y = sum3(1, 2, 3) # In[175]: print_content(annotator.function_with_invariants('square_root'), '.py') # Quite a number of invariants, isn't it? Further below (and in the exercises), we will discuss how to focus on the most relevant properties. # ## Avoiding Overspecialization # # Mined specifications can only be as good as the executions they were mined from. If we only see a single call, for instance, we will be faced with several mined pre- and postconditions that _overspecialize_ towards the values seen. # Let us illustrate this effect on a simple `sum2()` function which adds two numbers. # In[176]: def sum2(a, b): # type: ignore return a + b # If we invoke `sum2()` with a variety of arguments, the invariants all capture the relationship between `a`, `b`, and the return value as `return_value == a + b` in all its variations.
# In[177]: with InvariantAnnotator() as annotator: sum2(31, 45) sum2(0, 0) sum2(-1, -5) # In[178]: print_content(annotator.functions_with_invariants(), '.py') # If, however, we see only a single call, the invariants will overspecialize to the single call seen: # In[179]: with InvariantAnnotator() as annotator: y = sum2(2, 2) print_content(annotator.functions_with_invariants(), '.py') # The mined precondition `a == b`, for instance, only holds for the single call observed; the same holds for the mined postcondition `return_value == a * b`. Yet, `sum2()` can obviously be successfully called with other values that do not satisfy these conditions. # To get out of this trap, we have to _learn from more and more diverse runs_. # One way to obtain such runs is by _generating_ inputs. Indeed, a simple test generator for calls of `sum2()` will easily resolve the problem. # In[180]: import random # In[181]: with InvariantAnnotator() as annotator: for i in range(100): a = random.randrange(-10, +10) b = random.randrange(-10, +10) y = sum2(a, b) # In[182]: print_content(annotator.function_with_invariants('sum2'), '.py') # Note, though, that an API test generator, such as above, will have to be set up such that it actually respects preconditions – in our case, we invoke `sum2()` with integers only, already assuming its precondition. In some way, one thus needs a specification (a model, a grammar) to mine another specification – a chicken-and-egg problem. # However, there is one way out of this problem: If one can automatically generate tests at the system level, then one has an _infinite source of executions_ to learn invariants from. In each of these executions, all functions would be called with values that satisfy the (implicit) precondition, allowing us to mine invariants for these functions. This holds because, at the system level, invalid inputs must be rejected by the system in the first place. The meaningful precondition at the system level, ensuring that only valid inputs get through, thus gets broken down into a multitude of meaningful preconditions (and subsequent postconditions) at the function level. # The big requirement for all this, though, is that one needs good test generators. This will be the subject of another book, namely [The Fuzzing Book](https://www.fuzzingbook.org/). # ## Partial Invariants # # For debugging, it can be helpful to focus on invariants produced only by _failing_ runs, thus characterizing the _circumstances under which a function fails_. Let us illustrate this on an example. # The `middle()` function from the [chapter on statistical debugging](StatisticalDebugger.ipynb) is supposed to return the middle of three integers `x`, `y`, and `z`. # In[183]: from StatisticalDebugger import middle # minor dependency # In[184]: with InvariantAnnotator() as annotator: for i in range(100): x = random.randrange(-10, +10) y = random.randrange(-10, +10) z = random.randrange(-10, +10) mid = middle(x, y, z) # By default, our `InvariantAnnotator()` does not return any particular pre- or postcondition (other than the types observed). That is just fine, as the function indeed imposes no particular precondition; and the postcondition from `middle()` is not covered by the `InvariantAnnotator` patterns. # In[185]: print_content(annotator.functions_with_invariants(), '.py') # Things get more interesting if we focus on a particular subset of runs only, though – say, a set of inputs where `middle()` fails.
# In[186]:

from StatisticalDebugger import MIDDLE_FAILING_TESTCASES  # minor dependency

# In[187]:

with InvariantAnnotator() as annotator:
    for x, y, z in MIDDLE_FAILING_TESTCASES:
        mid = middle(x, y, z)

# In[188]:

print_content(annotator.functions_with_invariants(), '.py')

# Now that's an intimidating set of pre- and postconditions. However, almost all of the preconditions are implied by the one precondition
#
# ```python
# @precondition(lambda x, y, z: y < x < z, doc='y < x < z')
# ```
#
# which characterizes the exact condition under which `middle()` fails (which also happens to be the condition under which the erroneous second `return y` is executed). By checking how _invariants for failing runs_ differ from _invariants for passing runs_, we can identify circumstances for function failures.

# In[189]:

quiz("Could `InvariantAnnotator` also determine a precondition "
     "that characterizes _passing_ runs?",
     [
         "Yes",
         "No"
     ], 'int(math.exp(1))', globals())

# Indeed, it cannot – the correct invariant for passing runs would be the _negation_ of the invariant for failing runs, and `not A < B < C` is not part of our invariant library. We can easily test this:

# In[190]:

from StatisticalDebugger import MIDDLE_PASSING_TESTCASES  # minor dependency

# In[191]:

with InvariantAnnotator() as annotator:
    for x, y, z in MIDDLE_PASSING_TESTCASES:
        mid = middle(x, y, z)

# In[192]:

print_content(annotator.functions_with_invariants(), '.py')

# ## Some Examples
#
# Let us try out the `InvariantAnnotator` on a number of examples.

# ### Removing HTML Markup
#
# Running `InvariantAnnotator` on our ongoing example `remove_html_markup()` does not provide much, as our invariant properties are tailored towards numerical functions.

# In[193]:

from Intro_Debugging import remove_html_markup

# In[194]:

with InvariantAnnotator() as annotator:
    remove_html_markup("<foo>bar</foo>")
    remove_html_markup("bar")
    remove_html_markup('"bar"')

# In[195]:

print_content(annotator.functions_with_invariants(), '.py')

# In the [chapter on DDSet](DDSetDebugger.ipynb), we will see how to express more complex properties for structured inputs.

# ### A Recursive Function
#
# Here's another example. `list_length()` recursively computes the length of a Python list. Let us see whether we can mine its invariants:

# In[196]:

def list_length(elems: List[Any]) -> int:
    if elems == []:
        length = 0
    else:
        length = 1 + list_length(elems[1:])
    return length

# In[197]:

with InvariantAnnotator() as annotator:
    length = list_length([1, 2, 3])

print_content(annotator.functions_with_invariants(), '.py')

# Almost all these properties are relevant. Of course, the reason the invariants so neatly state that the return value equals `len(elems)` is that `X == len(Y)` is part of the list of properties to be checked.

# ### Sum of two Numbers
#
# The next example is a very simple function that merely prints the sum of two numbers. Since it has no return value, the return value is `None`, and we can only mine preconditions. (Well, we do get a "postcondition" `not return_value`, stating that the return value evaluates to `False` – which holds for `None`.)

# In[198]:

def print_sum(a, b):  # type: ignore
    print(a + b)

# In[199]:

with InvariantAnnotator() as annotator:
    print_sum(31, 45)
    print_sum(0, 0)
    print_sum(-1, -5)

# In[200]:

print_content(annotator.functions_with_invariants(), '.py')

# ## Checking Specifications
#
# A function with invariants, as above, can be fed into the Python interpreter, such that all pre- and postconditions are checked.
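# (As a reminder of the mechanics: the `@precondition(...)` and `@postcondition(...)` decorators used in the generated code wrap the function such that the given lambdas are asserted before and after each call. In essence, leaving out the details of the chapter's actual definitions:)
#
# ```python
# # A compact sketch of the decorator idea; the chapter's actual
# # `precondition()` is defined earlier and also preserves metadata.
# def precondition_sketch(check):
#     def decorator(func):
#         def wrapper(*args, **kwargs):
#             assert check(*args, **kwargs), "Precondition violated"
#             return func(*args, **kwargs)
#         return wrapper
#     return decorator
# ```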
# We create a function `square_root_annotated()` which includes all the invariants mined above.

# In[201]:

with InvariantAnnotator() as annotator:
    y = square_root(25.0)
    y = square_root(0.01)

# In[202]:

square_root_def = annotator.functions_with_invariants()
square_root_def = square_root_def.replace('square_root',
                                          'square_root_annotated')

# In[203]:

print_content(square_root_def, '.py')

# In[204]:

exec(square_root_def)

# The "annotated" version checks against invalid arguments – or more precisely, against arguments with properties that have not been observed yet:

# In[205]:

with ExpectError():
    square_root_annotated(-1.0)  # type: ignore

# This is in contrast to the original version, which just hangs on negative values:

# In[206]:

with ExpectTimeout(1):
    square_root(-1.0)

# If we make changes to the function definition such that the properties of the return value change, such _regressions_ are caught as violations of the postconditions. Let us illustrate this by simply inverting the result, returning $-2$ as the square root of 4.

# In[207]:

square_root_def = square_root_def.replace('square_root_annotated',
                                          'square_root_negative')
square_root_def = square_root_def.replace('return approx',
                                          'return -approx')

# In[208]:

print_content(square_root_def, '.py')

# In[209]:

exec(square_root_def)

# Technically speaking, $-2$ _is_ a square root of 4, since $(-2)^2 = 4$ holds. Yet, such a change may be unexpected by callers of `square_root()`, and hence, this would be caught with the first call:

# In[210]:

with ExpectError():
    square_root_negative(2.0)  # type: ignore

# We see how pre- and postconditions, as well as types, can serve as *oracles* during testing. In particular, once we have mined them for a set of functions, we can check them again and again with test generators – especially after code changes. The more checks we have, and the more specific they are, the more likely it is that we can detect unwanted effects of changes.

# ## Synopsis
#
# This chapter provides two classes that automatically extract specifications from a function and a set of inputs:
#
# * `TypeAnnotator` for _types_, and
# * `InvariantAnnotator` for _pre-_ and _postconditions_.
#
# Both work by _observing_ a function and its invocations within a `with` clause. Here is an example for the type annotator:

# In[211]:

def sum2(a, b):  # type: ignore
    return a + b

# In[212]:

with TypeAnnotator() as type_annotator:
    sum2(1, 2)
    sum2(-4, -5)
    sum2(0, 0)

# The `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution.

# In[213]:

print(type_annotator.typed_functions())

# As a shortcut, one can also just evaluate the annotator:

# In[214]:

type_annotator

# The invariant annotator works similarly:

# In[215]:

with InvariantAnnotator() as inv_annotator:
    sum2(1, 2)
    sum2(-4, -5)
    sum2(0, 0)

# The `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values.

# In[216]:

print(inv_annotator.functions_with_invariants())

# Again, a shortcut is available:

# In[217]:

inv_annotator

# Such type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs). The chapter gives details on how to customize the properties checked for.
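# (A sketch of such a customization, under the assumption that `InvariantAnnotator`, like the underlying `InvariantTracer`, accepts a `props` list of property patterns to check:)
#
# ```python
# # Hypothetical usage: mine invariants against a reduced pattern set only.
# with InvariantAnnotator(props=['X < Y', 'X <= Y', 'X == Y']) as annotator:
#     sum2(1, 2)
# print(annotator.functions_with_invariants())
# ```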
# In[218]:

# ignore
from ClassDiagram import display_class_hierarchy

# In[219]:

# ignore
display_class_hierarchy([TypeAnnotator, InvariantAnnotator],
                        public_methods=[
                            TypeAnnotator.typed_function,
                            TypeAnnotator.typed_functions,
                            TypeAnnotator.typed_function_ast,
                            TypeAnnotator.typed_functions_ast,
                            TypeAnnotator.__repr__,
                            InvariantAnnotator.function_with_invariants,
                            InvariantAnnotator.functions_with_invariants,
                            InvariantAnnotator.preconditions,
                            InvariantAnnotator.postconditions,
                            InvariantAnnotator.__repr__,
                            InvariantTracer.__init__,
                            CallTracer.__init__
                        ],
                        project='debuggingbook')

# ## Lessons Learned
#
# * Type annotations and explicit invariants allow for _checking_ arguments and results for expected data types and other properties.
# * One can automatically _mine_ data types and invariants by observing arguments and results at runtime.
# * The quality of mined invariants depends on the diversity of values observed during executions; this variety can be increased by generating tests.

# ## Next Steps
#
# In the next chapter, we will explore [abstracting failure conditions](DDSetDebugger.ipynb).

# ## Background
#
# The [DAIKON dynamic invariant detector](https://plse.cs.washington.edu/daikon/) can be considered the mother of function specification miners. Continuously maintained and extended for more than 20 years, it mines likely invariants in the style of this chapter for a variety of languages, including C, C++, C#, Eiffel, F#, Java, Perl, and Visual Basic. On top of the functionality discussed above, it holds a rich catalog of patterns for likely invariants, supports data invariants, can eliminate invariants that are implied by others, and determines statistical confidence to disregard unlikely invariants. The corresponding paper \cite{Ernst2001} is one of the seminal and most-cited papers of Software Engineering. A multitude of works based on DAIKON and invariant detection have since been published; see this [curated list](http://plse.cs.washington.edu/daikon/pubs/) for details.

# The interaction between test generators and invariant detection is already discussed in \cite{Ernst2001} (incidentally also using grammars). The Eclat tool \cite{Pacheco2005} is a model example of tight interaction between a unit-level test generator and DAIKON-style invariant mining, where the mined invariants are used to produce oracles and to systematically guide the test generator towards fault-revealing inputs.

# Mining specifications is not restricted to pre- and postconditions. The paper "Mining Specifications" \cite{Ammons2002} is another classic in the field, learning state protocols from executions. Grammar mining \cite{Gopinath2020} can also be seen as a specification mining approach, this time learning specifications for input formats.

# When it comes to adding type annotations to existing code, the blog post ["The state of type hints in Python"](https://www.bernat.tech/the-state-of-type-hints-in-python/) gives a great overview of how Python type hints can be used and checked. To add type annotations, there are two important tools available that also implement our above approach:
#
# * [MonkeyType](https://instagram-engineering.com/let-your-code-type-hint-itself-introducing-open-source-monkeytype-a855c7284881) implements the above approach of tracing executions and annotating Python 3 arguments, returns, and variables with type hints.
# * [PyAnnotate](https://github.com/dropbox/pyannotate) does a similar job, focusing on code in Python 2.
#   It does not produce Python 3-style annotations, but instead produces annotations as comments that can be processed by static type checkers.
#
# These tools have been created by engineers at Facebook and Dropbox, respectively, assisting them in checking millions of lines of code for type issues.

# ## Exercises
#
# Our code for mining types and invariants is in no way complete. There are dozens of ways to extend our implementations, some of which we discuss in the exercises.

# ### Exercise 1: Union Types
#
# The Python `typing` module allows expressing that an argument can have multiple types. For `square_root(x)`, this allows expressing that `x` can be an `int` or a `float`:

# In[220]:

def square_root_with_union_type(x: Union[int, float]) -> float:  # type: ignore
    ...

# Extend the `TypeAnnotator` such that it supports union types for arguments and return values. Use `Optional[X]` as a shorthand for `Union[X, None]`.

# **Solution.** Left to the reader. Hint: extend `type_string()`.

# ### Exercise 2: Types for Local Variables
#
# In Python, one can annotate not only arguments with types, but also local and global variables – for instance, `approx` and `guess` in our `square_root()` implementation:

# In[221]:

def square_root_with_local_types(x: Union[int, float]) -> float:
    """Computes the square root of x, using the Newton-Raphson method"""
    approx: Optional[float] = None
    guess: float = x / 2
    while approx != guess:
        approx: float = guess  # type: ignore
        guess: float = (approx + x / approx) / 2  # type: ignore
    return approx

# Extend the `TypeAnnotator` such that it also annotates local variables with types. Search the function AST for assignments, determine the type of the assigned value, and make it an annotation on the left-hand side.

# **Solution.** Left to the reader.

# ### Exercise 3: Verbose Invariant Checkers
#
# Our implementation of invariant checkers does not make it clear to the user which pre-/postcondition failed.
# In[222]:

@precondition(lambda s: len(s) > 0)
def remove_first_char(s: str) -> str:
    return s[1:]

# In[223]:

with ExpectError():
    remove_first_char('')

# The following implementation adds an optional `doc` keyword argument which is included in the error message if the invariant is violated:

# In[224]:

def verbose_condition(precondition: Optional[Callable] = None,
                      postcondition: Optional[Callable] = None,
                      doc: str = 'Unknown') -> Callable:
    def decorator(func: Callable) -> Callable:
        # Use `functools` to preserve name, docstring, etc.
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            if precondition is not None:
                assert precondition(*args, **kwargs), \
                    "Precondition violated: " + doc

            # Call original function or method
            retval = func(*args, **kwargs)
            if postcondition is not None:
                assert postcondition(retval, *args, **kwargs), \
                    "Postcondition violated: " + doc

            return retval
        return wrapper
    return decorator

# In[225]:

def verbose_precondition(check: Callable, **kwargs: Any) -> Callable:
    return verbose_condition(precondition=check,
                             doc=kwargs.get('doc', 'Unknown'))

# In[226]:

def verbose_postcondition(check: Callable, **kwargs: Any) -> Callable:
    return verbose_condition(postcondition=check,
                             doc=kwargs.get('doc', 'Unknown'))

# In[227]:

@verbose_precondition(lambda s: len(s) > 0, doc="len(s) > 0")  # type: ignore
def remove_first_char(s: str) -> str:
    return s[1:]

# In[228]:

remove_first_char('abc')

# In[229]:

with ExpectError():
    remove_first_char('')

# Extend `InvariantAnnotator` into a `VerboseInvariantAnnotator` class that includes the conditions in the generated pre- and postconditions.

# **Solution.** Here's a simple solution:

# In[230]:

class VerboseInvariantAnnotator(InvariantAnnotator):
    def preconditions(self, function_name: str) -> List[str]:
        conditions = []

        for inv in pretty_invariants(self.invariants(function_name)):
            if inv.find(RETURN_VALUE) >= 0:
                continue  # Postcondition

            cond = ("@verbose_precondition(lambda " +
                    self.params(function_name) + ": " + inv +
                    ', doc=' + repr(inv) + ")")
            conditions.append(cond)

        return conditions

# In[231]:

class VerboseInvariantAnnotator(VerboseInvariantAnnotator):
    def postconditions(self, function_name: str) -> List[str]:
        conditions = []

        for inv in pretty_invariants(self.invariants(function_name)):
            if inv.find(RETURN_VALUE) < 0:
                continue  # Precondition

            cond = ("@verbose_postcondition(lambda " +
                    RETURN_VALUE + ", " + self.params(function_name) +
                    ": " + inv + ', doc=' + repr(inv) + ")")
            conditions.append(cond)

        return conditions

# The resulting annotations are harder to read, but easier to diagnose:

# In[232]:

with VerboseInvariantAnnotator() as annotator:
    y = sum2(2, 2)

print_content(annotator.functions_with_invariants(), '.py')

# As an alternative, one may be able to use `inspect.getsource()` on the lambda expression or unparse it. This is left to the reader.

# ### Exercise 4: Save Initial Values
#
# If the value of an argument changes during function execution, this can easily confuse our implementation: The values are tracked at the beginning of the function, but checked only when it returns. Extend the `InvariantAnnotator` and the infrastructure it uses such that
#
# * it saves argument values both at the beginning and at the end of a function invocation;
# * postconditions can be expressed over both _initial_ values of arguments and the _final_ values of arguments;
# * the mined postconditions refer to both these values as well.

# **Solution.** To be added.
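# (A possible direction, sketched here under the assumption that the tracing infrastructure can be extended with a hook at call time: save a deep copy of the arguments when the function is entered, and make it available under derived names such as `a_0` when postconditions are checked at return time. The helper below is hypothetical and not part of the chapter's implementation.)
#
# ```python
# import copy
#
# initial_values: Dict[str, List[Dict[str, Any]]] = {}
#
# def save_initial_values(function_name: str, arguments: Dict[str, Any]) -> None:
#     """Remember a deep copy of `arguments` as seen at call time."""
#     initial_values.setdefault(function_name, []).append(
#         copy.deepcopy(arguments))
# ```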
# ### Exercise 5: Implications
#
# Several mined invariants are actually _implied_ by others: If `x > 0` holds, then this implies `x >= 0` and `x != 0`. Extend the `InvariantAnnotator` such that implications between properties are explicitly encoded, and such that implied properties are no longer listed as invariants. See \cite{Ernst2001} for ideas.

# **Solution.** Left to the reader.

# ### Exercise 6: Local Variables
#
# Postconditions may also refer to the values of local variables. Consider extending `InvariantAnnotator` and its infrastructure such that the values of local variables at the end of the execution are also recorded and made part of the invariant inference mechanism.

# **Solution.** Left to the reader.

# ### Exercise 7: Embedding Invariants as Assertions
#
# Rather than producing invariants as annotations for pre- and postconditions, insert them as `assert` statements into the function code, as in:
#
# ```python
# def square_root(x):
#     'Computes the square root of x, using the Newton-Raphson method'
#     assert isinstance(x, int), 'violated precondition'
#     assert x > 0, 'violated precondition'
#     approx = None
#     guess = (x / 2)
#     while (approx != guess):
#         approx = guess
#         guess = ((approx + (x / approx)) / 2)
#     return_value = approx
#     assert return_value < x, 'violated postcondition'
#     assert isinstance(return_value, float), 'violated postcondition'
#     return approx
# ```
#
# Such a formulation may make it easier for test generators and symbolic analysis to access and interpret pre- and postconditions.

# **Solution.** Here is a tentative implementation that inserts invariants into function ASTs.

# Part 1: Embedding Invariants into Functions

# In[233]:

class EmbeddedInvariantAnnotator(InvariantTracer):
    def function_with_invariants_ast(self, function_name: str) -> ast.AST:
        return annotate_function_with_invariants(function_name,
                                                 self.invariants(function_name))

    def function_with_invariants(self, function_name: str) -> str:
        return ast.unparse(self.function_with_invariants_ast(function_name))

# In[234]:

def annotate_invariants(invariants: Dict[str, Invariants]) -> Dict[str, ast.AST]:
    annotated_functions = {}

    for function_name in invariants:
        try:
            annotated_functions[function_name] = \
                annotate_function_with_invariants(function_name,
                                                  invariants[function_name])
        except KeyError:
            continue

    return annotated_functions

# In[235]:

def annotate_function_with_invariants(function_name: str,
                                      function_invariants: Invariants) -> ast.AST:
    stack_inspector = StackInspector()
    function = stack_inspector.search_func(function_name)
    if function is None:
        raise KeyError

    function_code = inspect.getsource(function)
    function_ast = ast.parse(function_code)
    return annotate_function_ast_with_invariants(function_ast,
                                                 function_invariants)

# In[236]:

def annotate_function_ast_with_invariants(function_ast: ast.AST,
                                          function_invariants: Invariants) -> ast.AST:
    annotated_function_ast = \
        EmbeddedInvariantTransformer(function_invariants).visit(function_ast)
    return annotated_function_ast

# Part 2: Preconditions

# In[237]:

class PreconditionTransformer(ast.NodeTransformer):
    def __init__(self, invariants: Invariants) -> None:
        self.invariants = invariants
        super().__init__()

    def preconditions(self) -> List[ast.stmt]:
        preconditions = []
        for (prop, var_names) in self.invariants:
            assertion = ("assert " + instantiate_prop(prop, var_names) +
                         ', "violated precondition"')
            assertion_ast = ast.parse(assertion)

            if assertion.find(RETURN_VALUE) < 0:
                preconditions += assertion_ast.body

        return preconditions

    def insert_assertions(self, body: List[ast.stmt]) -> List[ast.stmt]:
        preconditions = self.preconditions()
        try:
            docstring = cast(ast.Constant, body[0]).value.s
        except:
            docstring = None

        if docstring:
            return [body[0]] + preconditions + body[1:]
        else:
            return preconditions + body

    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
        """Add invariants to function"""
        # print(ast.dump(node))
        node.body = self.insert_assertions(node.body)
        return node

# In[238]:

class EmbeddedInvariantTransformer(PreconditionTransformer):
    pass

# In[239]:

with EmbeddedInvariantAnnotator() as annotator:
    square_root(5)

# In[240]:

print_content(annotator.function_with_invariants('square_root'), '.py')

# In[241]:

with EmbeddedInvariantAnnotator() as annotator:
    y = sum3(3, 4, 5)
    y = sum3(-3, -4, -5)
    y = sum3(0, 0, 0)

# In[242]:

print_content(annotator.function_with_invariants('sum3'), '.py')

# Part 3: Postconditions
#
# We make a few simplifying assumptions:
#
# * Variables do not change during execution.
# * There is a single `return` statement at the end of the function.

# In[243]:

class EmbeddedInvariantTransformer(EmbeddedInvariantTransformer):
    def postconditions(self) -> List[ast.stmt]:
        postconditions = []

        for (prop, var_names) in self.invariants:
            assertion = ("assert " + instantiate_prop(prop, var_names) +
                         ', "violated postcondition"')
            assertion_ast = ast.parse(assertion)

            if assertion.find(RETURN_VALUE) >= 0:
                postconditions += assertion_ast.body

        return postconditions

    def insert_assertions(self, body: List[ast.stmt]) -> List[ast.stmt]:
        new_body = super().insert_assertions(body)
        postconditions = self.postconditions()

        body_ends_with_return = isinstance(new_body[-1], ast.Return)
        if body_ends_with_return:
            ret_val = cast(ast.Return, new_body[-1]).value
            saver = RETURN_VALUE + " = " + ast.unparse(cast(ast.AST, ret_val))
        else:
            saver = RETURN_VALUE + " = None"

        saver_ast = cast(ast.stmt, ast.parse(saver))
        postconditions = [saver_ast] + postconditions

        if body_ends_with_return:
            return new_body[:-1] + postconditions + [new_body[-1]]
        else:
            return new_body + postconditions

# In[244]:

with EmbeddedInvariantAnnotator() as annotator:
    square_root(5)

# In[245]:

square_root_def = annotator.function_with_invariants('square_root')

# Here's the full definition with included assertions:

# In[246]:

print_content(square_root_def, '.py')

# In[247]:

exec(square_root_def.replace('square_root', 'square_root_annotated'))

# In[248]:

with ExpectError():
    square_root_annotated(-1)

# Here are some more examples:

# In[249]:

with EmbeddedInvariantAnnotator() as annotator:
    y = sum3(3, 4, 5)
    y = sum3(-3, -4, -5)
    y = sum3(0, 0, 0)

# In[250]:

print_content(annotator.function_with_invariants('sum3'), '.py')

# In[251]:

with EmbeddedInvariantAnnotator() as annotator:
    length = list_length([1, 2, 3])

print_content(annotator.function_with_invariants('list_length'), '.py')

# In[252]:

with EmbeddedInvariantAnnotator() as annotator:
    print_sum(31, 45)

# In[253]:

print_content(annotator.function_with_invariants('print_sum'), '.py')

# And we're done!

# ### Exercise 8: Grammar-Generated Properties
#
# The larger the set of properties to be checked, the more potential invariants can be discovered. Create a _grammar_ that systematically produces a large set of properties. See \cite{Ernst2001} for possible patterns.

# ### Exercise 9: Loop Invariants
#
# This is not so much a problem in debugging, but rather for _symbolic verification_.
# A _loop invariant_ is a property that holds for every iteration of a loop, such as
#
# ```python
# assert tag or not quote
# ```
#
# for `remove_html_markup()`. Create an annotator that determines and adds loop invariants for `for` and `while` loops.

# ### Exercise 10: Path Invariants
#
# A _path invariant_ is a property that holds for taking a particular path, such as
#
# ```python
# assert y < x < z
# ```
#
# for the second (erroneous) `return y` statement of the `middle()` function. Create an annotator that adds path invariants (expressed over the preconditions that hold when taking this particular branch) for every `if` statement in the code.
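# **Solution.** Left to the reader. As a possible starting point, here is a sketch (not part of the chapter's implementation) that collects, for every statement, the branch conditions guarding it. It makes the simplifying assumption that a path invariant is just the conjunction of the `if` conditions encountered along the way; the class name and attributes are hypothetical.
#
# ```python
# class PathConditionCollector(ast.NodeVisitor):
#     """Sketch: map statement line numbers to guarding branch conditions."""
#
#     def __init__(self) -> None:
#         self.current: List[str] = []  # conditions on the current path
#         self.paths: List[Tuple[int, List[str]]] = []  # (lineno, conditions)
#
#     def visit_If(self, node: ast.If) -> None:
#         cond = ast.unparse(node.test)
#         self.current.append(cond)
#         for stmt in node.body:  # `then` branch: condition holds
#             self.paths.append((stmt.lineno, list(self.current)))
#             self.visit(stmt)
#         self.current[-1] = 'not (' + cond + ')'
#         for stmt in node.orelse:  # `else` branch: negated condition holds
#             self.paths.append((stmt.lineno, list(self.current)))
#             self.visit(stmt)
#         self.current.pop()
# ```
#
# For the erroneous second `return y` in `middle()`, this would collect `y < z`, `not (x < y)`, and `x < z` – essentially the `y < x < z` precondition mined from the failing runs above.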