#!/usr/bin/env python
# coding: utf-8

# # Where the Bugs Are
# 
# Every time a bug is fixed, developers leave a trace – in the _version database_ when they commit the fix, or in the _bug database_ when they close the bug. In this chapter, we learn how to _mine these repositories_ for past changes and bugs, and how to _map_ them to individual modules and functions, highlighting those project components that have seen the most changes and fixes over time.

# In[1]:

from bookutils import YouTubeVideo
# YouTubeVideo("w4u5gCgPlmg")

# **Prerequisites**
# 
# * You should have read [the chapter on tracking bugs](Tracking.ipynb).

# In[2]:

import bookutils

# In[3]:

import Tracking

# ## Synopsis
# 
# To [use the code provided in this chapter](Importing.ipynb), write
# 
# ```python
# >>> from debuggingbook.ChangeExplorer import
# ```
# 
# and then make use of the following features.
# 
# This chapter provides two classes `ChangeCounter` and `FineChangeCounter` that let you mine and visualize the distribution of changes in a given `git` repository.
# 
# `ChangeCounter` is initialized as
# 
# ```python
# change_counter = ChangeCounter(repository)
# ```
# 
# where `repository` is either
# 
# * a _directory_ containing a `git` clone (i.e., it contains a `.git` directory)
# * the URL of a `git` repository.
# 
# Additional arguments are passed to the underlying `RepositoryMining` class from the [PyDriller](https://pydriller.readthedocs.io/) Python package. A `filter` keyword argument, if given, is a predicate that takes a modification (from PyDriller) and returns True if it should be included.
# 
# In a change counter, all elements in the repository are represented as _nodes_ – tuples $(f_1, f_2, ..., f_n)$ that denote a _hierarchy_: Each $f_i$ is a directory holding $f_{i+1}$, with $f_n$ being the actual file.
# 
# A `change_counter` provides a number of attributes.
# `changes` is a mapping of nodes to the number of changes in that node:
# 
# ```python
# >>> change_counter.changes[('README.md',)]
# 7
# ```
# 
# The `messages` attribute holds all commit messages related to that node:
# 
# ```python
# >>> change_counter.messages[('README.md',)]
# ['first commit',
#  'Adjusted to debuggingbook',
#  'New Twitter handle: @Debugging_Book',
#  'Doc update',
#  'Doc update',
#  'Doc update',
#  'Doc update']
# ```
# 
# The `sizes` attribute holds the (last) size of the respective element:
# 
# ```python
# >>> change_counter.sizes[('README.md',)]
# 10701
# ```
# 
# `FineChangeCounter` acts like `ChangeCounter`, but also retrieves statistics for elements _within_ the respective files; it has been tested for C, Python, and Jupyter Notebooks and should provide sufficient results for programming languages with similar syntax.
# 
# The `map()` method of `ChangeCounter` and `FineChangeCounter` produces an interactive tree map that lets you explore the elements of a repository. The redder (darker) a rectangle, the more changes it has seen; the larger a rectangle, the larger its size in bytes.
# 
# ```python
# >>> fine_change_counter.map()
# ```
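# The node tuples above also make hierarchical aggregation straightforward. As a small self-contained sketch (the `changes` mapping below is hand-made, not mined from the real repository), per-file change counts can be rolled up to every enclosing directory:

```python
def rollup_changes(changes):
    """Sum per-file change counts over every directory prefix.
    `changes` maps node tuples (f_1, ..., f_n) to change counts."""
    totals = {}
    for node, count in changes.items():
        # Credit the file itself and each enclosing directory
        for i in range(1, len(node) + 1):
            prefix = node[:i]
            totals[prefix] = totals.get(prefix, 0) + count
    return totals

# Hypothetical counts, for illustration only
changes = {
    ('notebooks', 'Slicer.ipynb'): 5,
    ('notebooks', 'Assertions.ipynb'): 3,
    ('README.md',): 7,
}
totals = rollup_changes(changes)
```

# With these numbers, `totals[('notebooks',)]` is 8 – the sum over both notebooks – which is exactly the kind of aggregate the tree map visualization relies on.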
# 
# Subclassing offers several ways to customize what to mine and how to visualize it. See the chapter for details.
# 
# Here are all the classes defined in this chapter:
# 
# ![](PICS/ChangeExplorer-synopsis-1.svg)

# ## Mining Change Histories
# 
# The history of any software project is a history of change. Any nontrivial project thus comes with a _version database_ to organize and track changes; and possibly also with an [issue database](Tracking.ipynb) to organize and track issues.
# 
# Over time, these databases hold plenty of information about the project: _Who changed what, when, and why?_ This information can be _mined_ from existing databases and _analyzed_ to answer questions such as
# 
# * Which parts in my project were most frequently or recently changed?
# * How many files does the average change touch?
# * Where in my project were the most bugs fixed?

# To answer such questions, we can _mine_ change and bug histories for past changes and fixes. This involves digging through version databases such as `git` and [issue trackers such as RedMine or Bugzilla](Tracking.ipynb) and extracting all their information. Fortunately for us, there is ready-made infrastructure for some of this.

# ## Mining with PyDriller

# [PyDriller](https://pydriller.readthedocs.io/) is a Python package for mining change histories. Its `RepositoryMining` class takes a `git` version repository and provides access to all the individual changes ("modifications"), together with committers, affected files, commit messages, and more.

# In[4]:

from pydriller import RepositoryMining  # https://pydriller.readthedocs.io/

# To use `RepositoryMining`, we need to pass it
# 
# * the URL of a `git` repository; or
# * the directory name where a cloned `git` repository can be found.
# 
# In general, cloning a `git` repository locally (with `git clone URL`) and then analyzing it locally will be faster and require fewer network resources.

# Let us apply `RepositoryMining` on the repository of this book.
# The function `current_repo()` returns the directory in which a `.git` subdirectory is stored – that is, the root of a cloned `git` repository.

# In[5]:

import os

# In[6]:

def current_repo():
    path = os.getcwd()
    while True:
        if os.path.exists(os.path.join(path, '.git')):
            return os.path.normpath(path)

        # Go one level up
        new_path = os.path.normpath(os.path.join(path, '..'))
        if new_path != path:
            path = new_path
        else:
            return None

# In[7]:

current_repo()

# This gives us a repository miner for the book:

# In[8]:

book_miner = RepositoryMining(current_repo())

# `traverse_commits()` is a generator that returns one commit after another. Let us fetch the very first commit made to the book:

# In[9]:

book_commits = book_miner.traverse_commits()
book_first_commit = next(book_commits)

# Each commit has a number of attributes telling us more about the commit.

# In[10]:

[attr for attr in dir(book_first_commit) if not attr.startswith('_')]

# For instance, the `msg` attribute lets us know about the commit message:

# In[11]:

book_first_commit.msg

# whereas the `author` attribute gets us the name and email of the person who made the commit:

# In[12]:

[attr for attr in dir(book_first_commit.author) if not attr.startswith('_')]

# In[13]:

book_first_commit.author.name, book_first_commit.author.email

# A commit consists of multiple _modifications_ to possibly multiple files. The commit's `modifications` attribute returns a list of modifications.

# In[14]:

book_first_commit.modifications

# For each modification, we can retrieve the files involved as well as several statistics:

# In[15]:

[attr for attr in dir(book_first_commit.modifications[0]) if not attr.startswith('_')]

# Let us see which file was created with this modification:

# In[16]:

book_first_commit.modifications[0].new_path

# The `source_code` attribute holds the entire file contents after the modification.
# In[17]:

print(book_first_commit.modifications[0].source_code)

# We see that the `debuggingbook` project started with a very simple commit, namely the addition of an (almost empty) `README.md` file.

# The attribute `source_code_before` holds the previous source code. We see that it is `None` – the file was just created.

# In[18]:

print(book_first_commit.modifications[0].source_code_before)

# Let us have a look at the _second_ commit. We see that it is already much more substantial.

# In[19]:

book_second_commit = next(book_commits)

# In[20]:

[m.new_path for m in book_second_commit.modifications]

# We fetch the modification for the `README.md` file:

# In[21]:

readme_modification = [m for m in book_second_commit.modifications
                       if m.new_path == 'README.md'][0]

# The `source_code_before` attribute holds the previous version (which we have already seen):

# In[22]:

print(readme_modification.source_code_before)

# The `source_code` attribute holds the new version – now a complete "README" file. (Compare this first version to the [current README text](index.ipynb).)

# In[23]:

print(readme_modification.source_code[:400])

# The `diff` attribute holds the differences between the old and the new version.

# In[24]:

print(readme_modification.diff[:100])

# The `diff_parsed` attribute even lists added and deleted lines:

# In[25]:

readme_modification.diff_parsed['added'][:10]

# With all this information, we can track all commits and modifications and establish statistics over which files were changed (and possibly even fixed) most. This is what we will do in the next section.

# ## Counting Changes
# 
# We start with a simple `ChangeCounter` class that, given a repository, counts for each file how frequently it was changed.

# The constructor takes the repository to be analyzed and sets up the internal counters:

# In[26]:

class ChangeCounter:
    """Count the number of changes for a repository."""

    def __init__(self, repo, filter=None, log=False, **kwargs):
        """Constructor.
        `repo` is a git repository (as URL or directory).
        `filter` is a predicate that takes a modification and returns
        True if it should be considered (default: consider all).
        `log` turns on logging if set.
        `kwargs` are passed to the `RepositoryMining()` constructor."""
        self.repo = repo
        self.log = log

        if filter is None:
            filter = lambda m: True
        self.filter = filter

        # A node is a tuple (f_1, f_2, f_3, ..., f_n) denoting
        # a folder f_1 holding a folder f_2 ... holding a file f_n
        self.changes = {}    # Mapping node -> #of changes
        self.messages = {}   # Mapping node -> list of commit messages
        self.sizes = {}      # Mapping node -> last size seen
        self.hashes = set()  # All hashes already considered

        self.mine(**kwargs)

# The method `mine()` does all the heavy lifting of mining. It retrieves all commits and all modifications from the repository, passing the modifications through the `update_stats()` method.

# In[27]:

class ChangeCounter(ChangeCounter):
    def mine(self, **kwargs):
        """Gather data from repository. To be extended in subclasses."""
        miner = RepositoryMining(self.repo, **kwargs)

        for commit in miner.traverse_commits():
            for m in commit.modifications:
                m.hash = commit.hash
                m.committer = commit.committer
                m.committer_date = commit.committer_date
                m.msg = commit.msg

                if self.include(m):
                    self.update_stats(m)

# The `include()` method lets us filter modifications. For simplicity, we copy the most relevant attributes of the commit over to the modification, such that the filter can access them, too.

# In[28]:

class ChangeCounter(ChangeCounter):
    def include(self, m):
        """Return True if the modification `m` should be included
        (default: the `filter` predicate given to the constructor).
        To be overloaded in subclasses."""
        return self.filter(m)

# The `update_stats()` method does the actual counting.
# It takes a modification and converts the file name into a _node_ – a tuple $(f_1, f_2, ..., f_n)$ that denotes a _hierarchy_: Each $f_i$ is a directory holding $f_{i+1}$, with $f_n$ being the actual file. Here is what this notebook looks like as a node:

# In[29]:

tuple('debuggingbook/notebooks/ChangeExplorer.ipynb'.split('/'))

# For each such node, `update_stats()` then invokes `update_size()`, `update_changes()`, and `update_elems()`.

# In[30]:

class ChangeCounter(ChangeCounter):
    def update_stats(self, m):
        """Update counters with modification `m`. Can be extended in subclasses."""
        if not m.new_path:
            return

        node = tuple(m.new_path.split('/'))

        if m.hash not in self.hashes:
            self.hashes.add(m.hash)

        self.update_size(node, len(m.source_code) if m.source_code else 0)
        self.update_changes(node, m.msg)
        self.update_elems(node, m)

# `update_size()` simply saves the last size of the item being modified. Since we progress from first to last commit, this reflects the size of the newest version.

# In[31]:

class ChangeCounter(ChangeCounter):
    def update_size(self, node, size):
        """Update counters for `node` with `size`. Can be extended in subclasses."""
        self.sizes[node] = size

# `update_changes()` increases the counter `changes` for the given node `node`, and adds the current commit message `commit_msg` to its list. This makes
# 
# * `sizes` a mapping of nodes to their size
# * `changes` a mapping of nodes to the number of changes they have seen
# * `messages` a mapping of nodes to the list of commit messages that have affected them.

# In[32]:

class ChangeCounter(ChangeCounter):
    def update_changes(self, node, commit_msg):
        """Update stats for `node` changed with `commit_msg`. Can be extended in subclasses."""
        self.changes.setdefault(node, 0)
        self.changes[node] += 1

        self.messages.setdefault(node, [])
        self.messages[node].append(commit_msg)

# The `update_elems()` method is reserved for later use, when we go and count fine-grained changes.
# In[33]:

class ChangeCounter(ChangeCounter):
    def update_elems(self, node, m):
        """Update counters for subelements of `node` with modification `m`.
        To be defined in subclasses."""
        pass

# Let us put `ChangeCounter` into action – on the current (debuggingbook) repository.

# In[34]:

DEBUGGINGBOOK_REPO = current_repo()

# In[35]:

DEBUGGINGBOOK_REPO

# You can also specify a URL instead, but this will access the repository via the network and generally be much slower.

# In[36]:

# DEBUGGINGBOOK_REPO = 'https://github.com/uds-se/debuggingbook.git'

# The function `debuggingbook_change_counter()` instantiates a `ChangeCounter` class (or any subclass) with the debuggingbook repository, mining all the counters as listed above.

# In[37]:

def debuggingbook_change_counter(cls):
    """Instantiate a ChangeCounter (sub)class `cls` with the debuggingbook repo"""
    def filter(m):
        """Do not include the `docs/` directory; it only holds Web pages"""
        return m.new_path and not m.new_path.startswith('docs/')

    return cls(DEBUGGINGBOOK_REPO, filter=filter)

# Let us set `change_counter` to this `ChangeCounter` instance. This can take a few minutes.

# In[38]:

from Timer import Timer

# In[39]:

with Timer() as t:
    change_counter = debuggingbook_change_counter(ChangeCounter)

t.elapsed_time()

# The attribute `changes` of our `ChangeCounter` now is a mapping of nodes to the respective number of changes.
# Here are the first 10 entries:

# In[40]:

list(change_counter.changes.keys())[:10]

# This is the number of changes to the `Chapters.makefile` file which lists the book chapters:

# In[41]:

change_counter.changes[('Chapters.makefile',)]

# The `messages` attribute holds all the messages:

# In[42]:

change_counter.messages[('Chapters.makefile',)]

# In[43]:

for node in change_counter.changes:
    assert len(change_counter.messages[node]) == change_counter.changes[node]

# The `sizes` attribute holds the final size:

# In[44]:

change_counter.sizes[('Chapters.makefile',)]

# ## Visualizing Past Changes

# To explore the number of changes across all project files, we visualize them as a _tree map_. A tree map visualizes hierarchical data using nested rectangles. In our visualization, each directory is shown as a rectangle containing smaller rectangles. The _size_ of a rectangle reflects the size of the element it represents (in bytes); the _color_ of a rectangle reflects the number of changes it has seen.

# We use the [easyplotly](https://github.com/mwouts/easyplotly) package to easily create a tree map.

# In[45]:

import easyplotly as ep
import plotly.graph_objects as go

# In[46]:

import math

# The method `map_node_sizes()` returns a size for the node – any number will do. By default, we use a logarithmic scale, such that smaller files are not totally visually eclipsed by larger files.

# In[47]:

class ChangeCounter(ChangeCounter):
    def map_node_sizes(self):
        """Return a mapping of nodes to sizes. Can be overloaded in subclasses."""
        # Default: use log scale
        return {node: math.log(self.sizes[node]) if self.sizes[node] else 0
                for node in self.sizes}

        # Alternative: use sqrt size
        return {node: math.sqrt(self.sizes[node]) for node in self.sizes}

        # Alternative: use absolute size
        return self.sizes

# The method `map_node_color()` returns a color for the node – again, as a number. The smallest and largest numbers returned indicate beginning and end in the given color scale, respectively.
# In[48]:

class ChangeCounter(ChangeCounter):
    def map_node_color(self, node):
        """Return the color of the node, as a number. Can be overloaded in subclasses."""
        if node and node in self.changes:
            return self.changes[node]
        return None

# The method `map_node_text()` returns the text to be displayed in the rectangle; we set this to the number of changes.

# In[49]:

class ChangeCounter(ChangeCounter):
    def map_node_text(self, node):
        """Return the text to be shown for the node (default: #changes).
        Can be overloaded in subclasses."""
        if node and node in self.changes:
            return self.changes[node]
        return None

# The methods `map_hoverinfo()` and `map_colorscale()` set additional map parameters. For details, see the [easyplotly](https://github.com/mwouts/easyplotly) documentation.

# In[50]:

class ChangeCounter(ChangeCounter):
    def map_hoverinfo(self):
        """Return the text to be shown when hovering over a node.
        To be overloaded in subclasses."""
        return 'label+text'

    def map_colorscale(self):
        """Return the colorscale for the map. To be overloaded in subclasses."""
        return 'YlOrRd'

# With all this, the `map()` method creates a tree map of the repository, using the [easyplotly](https://github.com/mwouts/easyplotly) `Treemap` constructor.

# In[51]:

class ChangeCounter(ChangeCounter):
    def map(self):
        """Produce an interactive tree map of the repository."""
        treemap = ep.Treemap(
            self.map_node_sizes(),
            text=self.map_node_text,
            hoverinfo=self.map_hoverinfo(),
            marker_colors=self.map_node_color,
            marker_colorscale=self.map_colorscale(),
            root_label=self.repo,
            branchvalues='total'
        )

        fig = go.Figure(treemap)
        fig.update_layout(margin=dict(l=0, r=0, t=30, b=0))
        return fig

# This is what the tree map for `debuggingbook` looks like.
# 
# * Click on any rectangle to enlarge it.
# * Click outside of the rectangle to return to a wider view.
# * Hover over a rectangle to get further information.
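# To see why `map_node_sizes()` above defaults to a log scale, here is a small sketch on hypothetical file sizes (the byte counts are made up): the ratio between the largest and smallest scaled value – which governs how strongly small files are visually eclipsed – shrinks drastically under the log scale.

```python
import math

sizes = [100, 10_000, 1_000_000]  # hypothetical file sizes in bytes

log_scaled = [math.log(s) for s in sizes]
sqrt_scaled = [math.sqrt(s) for s in sizes]

abs_ratio = sizes[-1] / sizes[0]               # absolute scale: factor 10000
sqrt_ratio = sqrt_scaled[-1] / sqrt_scaled[0]  # sqrt scale: factor 100
log_ratio = log_scaled[-1] / log_scaled[0]     # log scale: factor 3
```

# Under the log scale, a file 10,000 times larger than another gets a rectangle only three times as big – small files stay visible.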
# In[52]:

change_counter = debuggingbook_change_counter(ChangeCounter)

# In[53]:

change_counter.map()

# We can easily identify the most frequently changed files:

# In[54]:

all_nodes = list(change_counter.changes.keys())
all_nodes.sort(key=lambda node: change_counter.changes[node], reverse=True)
[(node, change_counter.changes[node]) for node in all_nodes[:4]]

# In[55]:

# ignore
all_notebooks = [node for node in change_counter.changes.keys()
                 if len(node) == 2 and node[1].endswith('.ipynb')]
all_notebooks.sort(key=lambda node: change_counter.changes[node], reverse=True)

# In[56]:

from bookutils import quiz

# In[57]:

quiz("Which two notebooks have seen the most changes over time?",
     [
         f"`{all_notebooks[3][1].split('.')[0]}`",
         f"`{all_notebooks[1][1].split('.')[0]}`",
         f"`{all_notebooks[2][1].split('.')[0]}`",
         f"`{all_notebooks[0][1].split('.')[0]}`",
     ], [1234 % 3, 3702 / 1234])

# Indeed, these two are the two most frequently changed notebooks:

# In[58]:

all_notebooks[0][1].split('.')[0], all_notebooks[1][1].split('.')[0]

# ## Past Fixes
# 
# Knowing which files have been changed most is useful in debugging, because any change increases the chance of introducing a new bug. Even more important, however, is the question of how frequently a file was _fixed_ in the past, as this is an important indicator for its bug-proneness.

# (One may think that fixing several bugs _reduces_ the number of bugs, but unfortunately, a file that has seen several fixes in the past is likely to see fixes in the future, too. This is because the bug-proneness of a software component very much depends on the requirements it has to fulfill; if these requirements are unclear, complex, or frequently changing, this translates into many fixes.)

# How can we tell _changes_ from _fixes_?
# 
# * One indicator is _commit messages_:
#   If they refer to "bugs" or "fixes", then the change is a fix.
# * Another indicator is _bug numbers_:
#   If a commit message contains an issue number from an associated issue database, then we can make use of the issue referred to.
#     * The issue database may provide us with additional information about the bug, such as its severity, how many people it was assigned to, how long it took to fix it, and more.
# * A final indicator is _time_:
#   If a developer first committed a change and in the same time frame marked an issue as "resolved", then it is likely that the two refer to each other.
# 
# The way commits and issues are linked very much depends on the project – and on the discipline of developers when it comes to commit messages. _Branches_ and _merges_ bring additional challenges.

# For the `debuggingbook` project, identifying fixes is easy. The convention is that if a change fixes a bug, its commit message is prefixed with `Fix:`. We can use this to introduce a `FixCounter` class specific to our `debuggingbook` project.

# In[59]:

class FixCounter(ChangeCounter):
    def include(self, m):
        """Include all modifications whose commit messages start with 'Fix:'"""
        return super().include(m) and m and m.msg.startswith("Fix:")

# As a twist to our default `ChangeCounter` class, we include the "fix" messages in the tree map rectangles.

# In[60]:

class FixCounter(FixCounter):
    def map_node_text(self, node):
        if node and node in self.messages:
            return "<br>".join(self.messages[node])
        return ""

    def map_hoverinfo(self):
        return 'label'

# This is the tree map showing fixes. We see that
# 
# * only those components that actually have seen a fix are shown; and
# * the fix distribution differs from the change distribution.

# In[61]:

fix_counter = debuggingbook_change_counter(FixCounter)

# In[62]:

fix_counter.map()

# ## Fine-Grained Changes
# 
# In programming projects, individual files typically consist of _smaller units_ such as functions, classes, and methods. We want to determine which of these _units_ are frequently changed (and fixed). For this, we need to _break down_ individual files into smaller parts, and then determine which of these parts would be affected by a change.

# ### Mapping Elements to Locations
# 
# Our first task is a simple means to split a (programming) file into smaller parts, each with their own locations. First, we need to know what kind of content a file contains. To this end, we use the Python [magic](https://github.com/ahupp/python-magic) package. (The "magic" in the name does not refer to some "magic" functionality, but to the practice of having files start with "magic" bytes that indicate their type.)
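# (If `python-magic` should be unavailable, a rough stand-in can guess the content type from simple textual cues. This is an illustration of the idea only – an assumption of ours, not the detection this chapter relies on:)

```python
import re

def guess_content_type(content):
    """Very rough content-type guess; a hypothetical stand-in for `magic.from_buffer()`."""
    # A `#include` line strongly suggests C source code
    if re.search(r'^\s*#include\b', content, flags=re.MULTILINE):
        return 'c'
    # `def`/`class` definitions suggest Python source code
    if re.search(r'^\s*(def|class)\s+\w+', content, flags=re.MULTILINE):
        return 'python'
    # Jupyter notebooks are JSON objects with a "cells" key
    if content.lstrip().startswith('{') and '"cells"' in content:
        return 'jupyter'
    return 'unknown'
```

# Unlike `magic`, this sketch only inspects a handful of cues; the real package matches against a large database of file signatures.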
# In[63]:

import magic

# The `magic` package easily guesses that a file contains C code:

# In[64]:

magic.from_buffer('''
#include <stdio.h>

int main(int argc, char *argv[]) {
    printf("Hello, world!\n");
}
''')

# It also works well for Python code:

# In[65]:

magic.from_buffer('''
def foo():
    print("Hello, world!")
''')

# Jupyter Notebooks, however, are identified as `SGML` documents:

# In[66]:

magic.from_buffer(open(os.path.join(current_repo(),
                                    'notebooks', 'ChangeExplorer.ipynb')).read())

# We define a set of _delimiters_ for these languages which use _regular expressions_ to identify
# 
# * the _language_ (matching the `magic` output),
# * the _beginning of a unit_, and
# * the _end_ of a unit.
# 
# For Python, for instance, any line starting with `def` or `class` denotes the start of some unit; any line starting with something else denotes the end of a unit. For Jupyter, the delimiters do the same, yet encoded into JSON. The definitions for C are likely to work for a wide range of languages that all use `{` and `}` to delimit units.

# In[67]:

import re

# In[68]:

DELIMITERS = [
    (
        # Python
        re.compile(r'^python.*'),

        # Beginning of element
        re.compile(r'^(async\s+)?(def|class)\s+(?P<name>\w+)\W.*'),

        # End of element
        re.compile(r'^[^#\s]')
    ),
    (
        # Jupyter Notebooks
        re.compile(r'^(json|exported sgml|jupyter).*'),
        re.compile(r'^\s+"(async\s+)?(def|class)\s+(?P<name>\w+)\W'),
        re.compile(r'^(\s+"[^#\s\\]|\s+\])')
    ),
    (
        # C source code (actually, any { }-delimited language)
        re.compile(r'^(c |c\+\+|c#|java|perl|php).*'),
        re.compile(r'^[^\s].*\s+(?P<name>\w+)\s*[({].*'),
        re.compile(r'^[}]')
    )
]

# The function `rxdelim()` returns suitable delimiters for a given content, using `DELIMITERS`.

# In[69]:

def rxdelim(content):
    """Return suitable begin and end delimiters for the content `content`.
    If no matching delimiters are found, return `None, None`."""
    tp = magic.from_buffer(content).lower()

    for rxtp, rxbegin, rxend in DELIMITERS:
        if rxtp.match(tp):
            return rxbegin, rxend

    return None, None

# The function `elem_mapping()` returns a list of the individual elements as found in the file, indexed by line numbers (starting with 1).

# In[70]:

def elem_mapping(content, log=False):
    """Return a list of the elements in `content`, indexed by line number."""
    rxbegin, rxend = rxdelim(content)
    if rxbegin is None:
        return []

    mapping = [None]
    current_elem = None
    lineno = 0

    for line in content.split('\n'):
        lineno += 1

        match = rxbegin.match(line)
        if match:
            current_elem = match.group('name')
        elif rxend.match(line):
            current_elem = None

        mapping.append(current_elem)

        if log:
            print(f"{lineno:3} {str(current_elem):15} {line}")

    return mapping

# Here is an example of how `elem_mapping()` works. During execution (with `log` set to `True`), we already see the elements associated with individual line numbers.

# In[71]:

some_c_source = """
#include <stdio.h>

int foo(int x) {
    return x;
}

struct bar {
    int x, y;
}

int main(int argc, char *argv[]) {
    return foo(argc);
}
"""
some_c_mapping = elem_mapping(some_c_source, log=True)

# In the actual mapping, we can access the individual units for any line number:

# In[72]:

some_c_mapping[1], some_c_mapping[8]

# Here's how this works for Python:

# In[73]:

some_python_source = """
def foo(x):
    return x

class bar(blue):
    x = 25
    def f(x):
        return 26

def main(argc):
    return foo(argc)
"""
some_python_mapping = elem_mapping(some_python_source, log=True)

# In[74]:

# some_jupyter_source = open("Slicer.ipynb").read()
# some_jupyter_mapping = elem_mapping(some_jupyter_source, log=False)

# ### Determining Changed Elements
# 
# Using a mapping from `elem_mapping()`, we can determine which elements are affected by a change. The `changed_elems_by_mapping()` function returns the set of affected elements.
# In[75]:

def changed_elems_by_mapping(mapping, start, length=0):
    """Within `mapping`, return the set of elements affected by a change
    starting in line `start` and extending over `length` additional lines"""
    elems = set()
    for line in range(start, start + length + 1):
        if line < len(mapping) and mapping[line]:
            elems.add(mapping[line])

    return elems

# Here's an example of `changed_elems_by_mapping()`, applied to the Python content above:

# In[76]:

changed_elems_by_mapping(some_python_mapping, start=2, length=4)

# The function `elem_size()` returns the size of an element (say, a function).

# In[77]:

def elem_size(elem, source):
    """Within `source`, return the size of `elem`"""
    source_lines = [''] + source.split('\n')
    size = 0
    mapping = elem_mapping(source)

    for line_no in range(len(mapping)):
        if mapping[line_no] == elem or mapping[line_no] is elem:
            size += len(source_lines[line_no] + '\n')

    return size

# In[78]:

elem_size('foo', some_python_source)

# In[79]:

assert sum(elem_size(name, some_python_source)
           for name in ['foo', 'bar', 'main']) == len(some_python_source)

# Given an old version and a new version of a (text) file, we can use the `diff_match_patch` module to determine the differences, and from these the affected lines:

# In[80]:

from ChangeDebugger import diff  # minor dependency

# In[81]:

from diff_match_patch import diff_match_patch

# In[82]:

def changed_elems(old_source, new_source):
    """Determine the elements affected by the change from `old_source` to `new_source`"""
    patches = diff(old_source, new_source)

    old_mapping = elem_mapping(old_source)
    new_mapping = elem_mapping(new_source)

    elems = set()

    for patch in patches:
        old_start_line = patch.start1 + 1
        new_start_line = patch.start2 + 1

        for (op, data) in patch.diffs:
            length = data.count('\n')

            if op == diff_match_patch.DIFF_INSERT:
                elems |= changed_elems_by_mapping(old_mapping, old_start_line)
                elems |= changed_elems_by_mapping(new_mapping, new_start_line, length)
            elif op == diff_match_patch.DIFF_DELETE:
                elems |= 
changed_elems_by_mapping(old_mapping, old_start_line, length)
                elems |= changed_elems_by_mapping(new_mapping, new_start_line)

            old_start_line += length
            new_start_line += length

    return elems

# Here is how `changed_elems()` works. We define a "new" version of `some_python_source`:

# In[83]:

some_new_python_source = """
def foo(y):
    return y

class qux(blue):
    x = 25
    def f(x):
        return 26

def main(argc):
    return foo(argc)
"""

# In[84]:

changed_elems(some_python_source, some_new_python_source)

# Note that the list of changed elements includes added as well as deleted elements.

# ### Putting it all Together
# 
# We introduce a class `FineChangeCounter` that, like `ChangeCounter`, counts changes for individual files; however, `FineChangeCounter` adds additional nodes for all elements affected by a change. For a file consisting of multiple elements, this has the same effect as if the file were a directory, and the elements were all contained as individual files in this directory.

# In[85]:

class FineChangeCounter(ChangeCounter):
    def update_elems(self, node, m):
        old_source = m.source_code_before if m.source_code_before else ""
        new_source = m.source_code if m.source_code else ""

        for elem in changed_elems(old_source, new_source):
            elem_node = node + (elem,)

            self.update_size(elem_node, elem_size(elem, new_source))
            self.update_changes(elem_node, m.msg)

# Retrieving fine-grained changes takes a bit more time, since all files have to be parsed...

# In[86]:

with Timer() as t:
    fine_change_counter = debuggingbook_change_counter(FineChangeCounter)

t.elapsed_time()

# ... but the result is very much worth it. We can now zoom into individual files and compare the change counts for the individual functions.
# In[87]:

fine_change_counter.map()

# Like before, we can access the most frequently changed elements. This is the most frequently changed item in the book:

# In[88]:

elem_nodes = [node for node in fine_change_counter.changes.keys()
              if len(node) == 3 and node[1].endswith('.ipynb')]
elem_nodes.sort(key=lambda node: fine_change_counter.changes[node], reverse=True)
[(node, fine_change_counter.changes[node]) for node in elem_nodes[:1]]

# In[89]:

from bookutils import quiz

# In[90]:

quiz("Which is the _second_ most changed element?",
     [
         f"`{elem_nodes[3][2]}` in `{elem_nodes[3][1].split('.ipynb')[0]}`",
         f"`{elem_nodes[1][2]}` in `{elem_nodes[1][1].split('.ipynb')[0]}`",
         f"`{elem_nodes[2][2]}` in `{elem_nodes[2][1].split('.ipynb')[0]}`",
         f"`{elem_nodes[0][2]}` in `{elem_nodes[0][1].split('.ipynb')[0]}`",
     ], 1975308642 / 987654321)

# Indeed, here comes the list of the top five most frequently changed elements:

# In[91]:

[(node, fine_change_counter.changes[node]) for node in elem_nodes[:5]]

# Now it is time to apply these tools to your own projects. Which are the most frequently changed (and fixed) elements? Why is that so? What can you do to improve things? All these are consequences of debugging – helping you have fewer bugs in the future!

# ## Synopsis

# This chapter provides two classes `ChangeCounter` and `FineChangeCounter` that let you mine and visualize the distribution of changes in a given `git` repository.

# `ChangeCounter` is initialized as
# 
# ```python
# change_counter = ChangeCounter(repository)
# ```
# 
# where `repository` is either
# 
# * a _directory_ containing a `git` clone (i.e., it contains a `.git` directory)
# * the URL of a `git` repository.
# 
# Additional arguments are passed to the underlying `RepositoryMining` class from the [PyDriller](https://pydriller.readthedocs.io/) Python package. A `filter` keyword argument, if given, is a predicate that takes a modification (from PyDriller) and returns True if it should be included.
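# For instance, a filter that restricts mining to Python files could look as follows. (This is a sketch; `FakeModification` is a hypothetical stand-in for a PyDriller modification object, of which only `new_path` is modeled:)

```python
def python_only(m):
    """Filter predicate: include only modifications to .py files."""
    return m.new_path is not None and m.new_path.endswith('.py')

# Hypothetical stand-in for a PyDriller modification
class FakeModification:
    def __init__(self, new_path):
        self.new_path = new_path

included = python_only(FakeModification('debuggingbook/ChangeCounter.py'))
excluded = python_only(FakeModification('README.md'))
```

# Passing this predicate as `ChangeCounter(repository, filter=python_only)` would then skip all non-Python files during mining.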
# In a change counter, all elements in the repository are represented as _nodes_ – tuples $(f_1, f_2, ..., f_n)$ that denote a _hierarchy_: Each $f_i$ is a directory holding $f_{i+1}$, with $f_n$ being the actual file.
# 
# A `change_counter` provides a number of attributes. `changes` is a mapping of nodes to the number of changes in that node:

# In[92]:

change_counter.changes[('README.md',)]

# The `messages` attribute holds all commit messages related to that node:

# In[93]:

change_counter.messages[('README.md',)]

# The `sizes` attribute holds the (last) size of the respective element:

# In[94]:

change_counter.sizes[('README.md',)]

# `FineChangeCounter` acts like `ChangeCounter`, but also retrieves statistics for elements _within_ the respective files; it has been tested for C, Python, and Jupyter Notebooks and should provide sufficient results for programming languages with similar syntax.

# The `map()` method of `ChangeCounter` and `FineChangeCounter` produces an interactive tree map that lets you explore the elements of a repository. The redder (darker) a rectangle, the more changes it has seen; the larger a rectangle, the larger its size in bytes.

# In[95]:

fine_change_counter.map()

# Subclassing offers several ways to customize what to mine and how to visualize it. See the chapter for details.

# Here are all the classes defined in this chapter:

# In[96]:

# ignore
from ClassDiagram import display_class_hierarchy

# In[97]:

# ignore
display_class_hierarchy([FineChangeCounter, FixCounter],
                        public_methods=[
                            ChangeCounter.__init__,
                            ChangeCounter.map  # FIXME: Why is `map()` not highlighted?
                        ], project='debuggingbook')

# ## Lessons Learned
# 
# * We can easily _mine_ past changes and map these to individual files and elements
# * This information can be helpful in guiding the debugging and development process
# * Counting _fixes_ needs to be customized to the conventions used in the project at hand

# ## Background
# 
# To be added

# ## Exercises
# 
# To be added

# ### Exercise 1: _Title_
# 
# _Text of the exercise_

# **Solution.** _Solution for the exercise_