The Gothenburg model emerged from a meeting of the developers of CollateX and Juxta in Gothenburg, Sweden in 2009 at a joint workshop of the EU-funded research projects Open Scholarly Communities on the Web (COST Action 32) and Interedition (http://wiki.tei-c.org/index.php/Textual_Variance). The Gothenburg model conceptualizes collation as a pipeline of independent but coordinated modules, so that any one of them may be modified without requiring revision of the rest of the system. The stages are:

1. Tokenization
2. Normalization/regularization
3. Alignment
4. Analysis
5. Visualization/output
The Gothenburg model is mostly about separation of concerns: it divides the work involved in collation into steps that can reasonably be seen as self-contained units or tasks. The aim is to minimize the dependencies or couplings between these steps; ideally one step's output is the next step's input, and there are no other strong relations or dependencies between the steps. This separation of concerns ensures that each task can be treated as a self-contained challenge to be solved. How these tasks should be solved is not defined by the Gothenburg model; the implementation is up to anyone who wants (to try) to provide a solution. The developers of CollateX aimed for a maximally computational solution, that is, one that would, where possible, fully automate the task of collation.
The description of the five stages of collation below presupposes that the texts to be collated have already been transcribed and digitized, which is to say that transcription and digitization function as an implicit Stage 0 in the pipeline.
Tokenization is the division of a continuous text into units to be aligned (called tokens). Most commonly, tokens are whitespace-delimited words, but tokenization can be performed at any level of granularity, e.g., “syllables, words, lines, phrases, verses, paragraphs, or text nodes” (http://wiki.tei-c.org/index.php/Textual_Variance). Challenges that arise during tokenization include, for example, how to treat punctuation (is “cat.” one token or two?) and contractions (is “Peter’s” one token or two?).
The preceding issues affect tokenization on an intellectual level in that they involve decisions by the researcher that would arise with or without a computational environment. Machine-assisted algorithmic tokenization, though, raises additional challenges of its own, such as how the tokenizer should treat markup in the character stream (discussed under markup normalization below).
CollateX comes with a default tokenizer with the following properties: it splits the text into tokens on white space, it treats each punctuation character as a token of its own, and it keeps trailing white space with the preceding token.
This means that, for example, the default tokenizer would split the input “Peter’s cat.” into five tokens: “Peter”, “’”, “s ”, “cat”, and “.”. The tokenizer keeps trailing white space with the preceding token, so the third token contains two characters, the possessive “s” and the space that follows it. By default, CollateX ignores trailing white space when it compares tokens for alignment (Stage 3 of the Gothenburg model), so “cat” (three alphabetic characters) and “cat ” (three alphabetic characters followed by a space) would be treated as the same for collation purposes, but the trailing white space in the input in the second example would remain available should the user need to retrieve it later.
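The behavior of the default tokenizer can be approximated with a regular expression. The following sketch is not CollateX’s actual code, only an illustration of the splitting rules just described:

import re

def tokenize(text):
    # Runs of word characters, or single punctuation marks, each with any
    # trailing white space kept attached to the preceding token.
    return re.findall(r"\w+\s*|[^\w\s]\s*", text)

print(tokenize("Peter's cat."))  # ['Peter', "'", 's ', 'cat', '.']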
In conformity with the modular structure of the Gothenburg model, if the default tokenizer built into CollateX does not meet your research needs, you can replace it with your own tokenizer without having to modify the behavior of later parts of the Gothenburg processing pipeline. We discuss how to do this later in the workshop.
Some degree of normalization is inevitable during transcription because the editor has to transform handwriting, which is analog, into a digital representation. Every instance of a handwritten letter is likely to differ at least minutely from every other instance of the same letter, and editors will nonetheless transcribe them the same way if they do not consider the differences meaningful. In diplomatic editions editors aim for maximal fidelity to the original, but even there some degree of conflation of variants is inevitable. Normalization in the Gothenburg model, however, does not refer to normalization during transcription; it refers to situations where the editor has transcribed two different written forms differently because the difference is important in some way, but not for alignment. The normalization stage of the Gothenburg model is thus the process of telling the alignment stage, which follows, to treat some graphically different forms as if they were the same for the purpose of aligning them.
One common guideline in digital editing is to ignore non-substantive variation when performing comparisons for the purpose of alignment, although the understanding of non-substantive is both project-specific and subjective. Many (not all) editors may choose to normalize away differences in punctuation and in upper vs. lower case. Some editors may further elect to normalize orthographic variation (the use of what humans would regard as different spellings, with different letters), variant letterforms (what humans would regard as the same letters, but with different shapes), abbreviation, superscription, etc. If an editor considers these types of variation completely unimportant, they might be normalized during transcription, but if the variants need to be available for the output but should not be allowed to influence the alignment, then they can be normalized between the tokenization and the alignment stages of the Gothenburg model. This type of normalization creates a normalized shadow copy of the literal token, so that the normalized form can be used to determine what should be considered the same for alignment purposes, while the eventual output can nonetheless preserve and return the original spelling.
The types of textual variation that occur during the copying of texts may be categorized as the insertion of new textual material, the deletion of textual material present in the source, the substitution of new text in place of old, and the transposition of old material to a new location. Transposition can be encoded as deletion of material from one location and its insertion in another, but if the editor believes that the two acts are related, that is, that the material has been moved and the deletion and insertion are not independent of each other, the eventual edition should record that fact.
Editors understand variation as substantive or non-substantive, but that distinction depends on the writing system, the goals of the edition, and even the purpose of the textual comparison. For example, for the purpose of alignment we might prefer to ignore grammatical differences and align only on lemmata, while for philological analysis and interpretation we might wish to distinguish which witnesses use which inflected forms of the same lexeme. Most commonly, though, substantive variation includes equipollent readings (that is, different but equivalent or synonymous textual content), linguistic variation, and scribal error. Non-substantive variation commonly includes graphic variants (such as variant ways of writing the same letter), punctuation, upper vs. lower case, orthographic variation (different spellings of the same word form), variant letterforms (different shapes of what is meant to represent the same basic letter), and abbreviation. Which of these types of variation count as substantive or non-substantive depends on the nature of the writing system and the types of roles the edition is intended to fulfill. One project may reasonably regard (and process) as substantive a type of variation that another project would regularize as non-substantive.
Markup normalization requires attention at both the tokenization and the normalization stage. The markup is part of the stream of characters, so when the text is divided into words or other tokens, the tokenizer must know what to do with any markup it encounters. For example, the characters “text” in an XML tag <text> should not be treated the same way as the word “text” in the content of an element. This means that either the markup must be treated as something other than textual content during tokenization or it must be handled specially during normalization.
The output of tokenization (Stage 1) serves as the input to normalization (Stage 2), which provides, in turn, the input for alignment (Stage 3). CollateX implements normalization by encoding each token as a complex object that contains both a textual (t) property and a normalized (n) property. The t property remains available throughout all subsequent stages of the Gothenburg model, and is therefore available for rendering in the eventual output visualization. The n property is used in the alignment stage to tell the collation engine which different t values should be treated as the same for alignment purposes. We will discuss how CollateX represents and uses these properties later in the workshop.
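As a preview, the Python version of CollateX accepts pretokenized input in which each token carries its own t and n properties. The sigla and readings below are invented for illustration:

from collatex import *

json_input = {
    "witnesses": [
        {"id": "A", "tokens": [
            {"t": "The", "n": "the"},
            {"t": "Gray", "n": "gray"},
            {"t": "Koala", "n": "koala"}
        ]},
        {"id": "B", "tokens": [
            {"t": "the", "n": "the"},
            {"t": "grey", "n": "gray"},
            {"t": "koala", "n": "koala"}
        ]}
    ]
}
# The tokens align on their n values, but the table reports the t values.
print(collate(json_input, output="table", segmentation=False))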
By default, the only normalization that CollateX performs is that it ignores trailing white space at the ends of tokens. This means that, for example, “Peter’s cat” and “Peter’s  cat”, which differ only in that there are two spaces between the words in the second version (this may be difficult to see in the HTML rendering), would be aligned as if they contained exactly the same tokens. Some projects are likely to need to perform more aggressive normalization, which can range from the algorithmic, such as case folding (treating upper- and lowercase letters as if they were the same) or lemmatization (aligning according to the base lexeme, ignoring inflectional information), to the idiosyncratic, such as a synonym lookup dictionary that might recognize that “3” and “three” should be treated as equivalent for alignment purposes.
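This default behavior is easy to verify:

from collatex import *

collation = Collation()
collation.add_plain_witness("A", "Peter's cat")
collation.add_plain_witness("B", "Peter's  cat")  # two spaces between the words
# The witnesses align token for token, because trailing white space is
# ignored when tokens are compared.
print(collate(collation))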
What counts as substantive variation (not to be normalized) and non-substantive variation (to be normalized) may differ not only among collation projects, but also among steps within a project. For example, for alignment purposes we might want to treat all inflected forms of the same lexeme as equivalent, but at a later analytic stage we might want to differentiate among inflected forms of the same lexeme. The normalization process described here refers to normalization for alignment purposes, and because CollateX retains information about the original (non-normalized) token in the t property, it remains available should it be needed at a later analytic stage.
The alignment stage of the Gothenburg model compares the normalized forms of the tokens for the witnesses and tries to create the best alignment possible. There are alignment ambiguities that are difficult even for a human to resolve unambiguously, such as aligning “Peter’s very blue cat” with “Peter’s very very blue cat”, where it is not clear which instance of “very” in the second witness is the appropriate alignment point for the single instance of “very” in the first. Where CollateX cannot make a principled decision, it makes an arbitrary one; in this case it would arbitrarily align “very” in the first witness with the first instance of “very” in the second.
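A quick sketch of that example in the Python version of CollateX:

from collatex import *

collation = Collation()
collation.add_plain_witness("A", "Peter's very blue cat")
collation.add_plain_witness("B", "Peter's very very blue cat")
# CollateX must pair the single "very" in A with one of the two instances
# in B; lacking a principled reason to prefer either, it chooses the first.
print(collate(collation, segmentation=False))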
To instantiate the alignment CollateX builds a variant graph, a structure that can be represented through the following visualization:
The graph is directed from an empty start node (the square at the left numbered 5) to an empty end node (the square on the right numbered 3). The specific numbers in the start and end nodes are arbitrary, and are not important for understanding the graph. The graph also contains a lot of diagnostic information that is introduced by CollateX and that is useful for developers. The way to read the graph is as follows: each node between the start and the end holds a reading, and witnesses that agree on that reading pass through the same node; each edge is labeled with the sigla of the witnesses that take that route; and the complete text of any one witness can be recovered by following the edges bearing its siglum from the start node to the end node.
CollateX can output the variant graph, and we sometimes do that for development purposes, but it is not a traditional way of representing variation to philologists. The variant graph is nonetheless used as an output format, intended for human interaction, in some modern applications, such as the Stemmaweb collection of tools for analysis of collated texts. We discuss visualization below, under Stage 5 of the Gothenburg model, but we introduce the variant graph here because of the insights it provides into Stage 3, the alignment module.
So how does the alignment process build the variant graph? In the example to the right, imagine that each column represents a witness and each square with a letter in it (but not the ones with dashes) in the column represents a token. Witness 1 has four tokens (“a”, “b”, “c”, “d”), Witness 2 has the same four tokens, but in a different order (token “b” has been moved to the end), and Witness 3 has three tokens (“b”, “c”, “d”). At the alignment stage of the Gothenburg model, the alignment engine will recognize that all three witnesses have tokens “c” and “d” in order and connect them (represented by horizontal lines in the image here), which translates into grouping them into the same node in the variant graph above. It will also recognize that token “a” appears in only two witnesses, and it will align those and recognize that there is nothing in Witness 3 in the place where this token appears in the other two witnesses.

The situation with token “b”, though, is complicated because it occurs in all three witnesses, but in different positions. At this stage the alignment process is not capable of changing the order of the tokens in the witnesses, so instead of recognizing the instance of “b” in Witness 2 as a counterpart of token “b” in the other two witnesses, it regards Witness 2 as having nothing corresponding to token “b” where it appears early in Witnesses 1 and 3, and then as having its own token “b” at the end that has no counterpart in the other two witnesses. For this reason, at this stage the alignment has not analyzed all of the variation that a human philologist would consider relevant to understanding the transmission of the text. In particular, this alignment (whether represented as the variant graph above or as the table to the right) treats the transposition of token “b” in Witness 2 as two mutually independent events, a deletion or omission and then an addition or insertion, while a human would regard those as part of a single transposition event.
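The example can be reconstructed in the Python version of CollateX (the sigla and the single-letter tokens stand in for real witness text):

from collatex import *

collation = Collation()
collation.add_plain_witness("W1", "a b c d")
collation.add_plain_witness("W2", "a c d b")
collation.add_plain_witness("W3", "b c d")
# The resulting table shows "b" in W2 as an independent deletion plus
# addition, not as a single transposition.
print(collate(collation, segmentation=False))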
Several considerations arise during the alignment stage of the Gothenburg model.
Stage 3 of the Gothenburg model, alignment, operates by aligning tokens that have the same normalized shadow copies (the n properties discussed in Stage 2, normalization/regularization). The alignment process relies on exact matching because it is computationally inexpensive (that is, fast) to determine whether two tokens are or are not identical (string equal), but because alignment looks only at exact matches, situations with no exact match often cannot be resolved at that stage. For example, the strings “The gray koala” and “The big grey koala” (note the different spellings “gray” ~ “grey”) match in their first and last tokens, but the middle token “gray” in the first witness is not an exact match to either “big” or “grey” in the second. Because the alignment stage looks only at exact string matching, it cannot make this decision; what CollateX does in such situations (unless the user has turned on near matching) is arbitrarily look to the left, so it would align “gray” with “big”, rather than with “grey”, which a human would easily recognize as incorrect. If the user turns on near matching, though, CollateX will recognize that there is no exact match, and instead of defaulting automatically to the left candidate, it will look for the closest match and rearrange the alignment if necessary.
The analysis stage refers in this case to the computational analysis of the output of the alignment operation, with the goal of fine-tuning it to make decisions that cannot be made on the basis of exact string matching. That includes near matching, that is, finding the closest match in situations that lack an exact match, such as the one described above. It would be too computationally expensive to run a near-match calculation on every group of tokens, but reserving it for the post-alignment analysis means that it will be run only in situations where there is no exact alignment. The identification of transposition also happens in the analysis stage, because the alignment stage is not capable of rearranging the tokens in a witness, as in the example to the right.
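In the Python version of CollateX, near matching is switched on with the near_match parameter, which requires segmentation to be turned off:

from collatex import *

collation = Collation()
collation.add_plain_witness("A", "The gray koala")
collation.add_plain_witness("B", "The big grey koala")

# Exact matching only: "gray" is arbitrarily aligned to the left, with "big".
print(collate(collation, segmentation=False))

# Near matching: "gray" is aligned with its closest match, "grey".
print(collate(collation, near_match=True, segmentation=False))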
As noted above, the Gothenburg model is mostly geared towards separation of concerns. It defines collation as a set of steps that can reasonably be seen as isolated tasks, which makes solving and implementing them easier than if those steps were intricately intertwined and connected. The Gothenburg model does not prescribe how these individual tasks should be solved. This decoupling also ensures that if somebody implements a better or quicker solution for one of the steps, or a solution tailored to some particular project, that step can easily be replaced in existing workflows with the new implementation. A perceptive reader will note, however, that the separation of concerns, at least between the alignment and analysis steps, may not be perfect. In some cases collation workflows create a feedback loop between the stages of analysis and alignment. Some phenomena of textual genesis (transposition, for instance) are very hard for a computer to resolve, and may not be decisively computable at all; in such cases human intervention and judgement are needed to argue what happened during the genesis of the text and its witnesses. That additional analysis may yield valuable information on the basis of which the collating algorithm can refine its alignment: post-processing information from the analysis stage can be fed back to inform a rerun of the alignment stage.
Transposition is the phenomenon where an author or copyist has moved a piece of text from one place in a witness to a different place in another witness. From a theoretical perspective one can regard a transposition as a deletion of certain tokens in one witness combined with an addition of the exact same tokens in another. Detecting a transposition from a computational perspective is a very hard problem; a computer can never determine with certainty an author’s intention. So instead of a fixed rule that always holds true, we use a heuristic: a measurement that takes a number of variables into account, such as the number of tokens between the addition and the deletion, the length of the transposed piece of text, and the frequency of the tokens involved in the transposition. An example of a good heuristic is one that balances the length of the piece of text that is a candidate for transposition against the distance it has moved. This means that a longer piece of text is allowed to move over a greater distance and still count as a transposition, while a short piece of text is allowed to move only over a small distance. Another thing that complicates the detection of transposition is that tokens can be repeated in a text. If a piece of text is deleted twice in one witness and added once in another witness, transposition detection gets even harder, since the question then involves determining which of the deletions is part of the transposition. Usually the piece that is closest to its counterpart is the correct one.
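In its simplest form, such a balancing heuristic might look like the following sketch. The function name and the ratio are invented for illustration and are not CollateX internals:

def looks_like_transposition(block_length, distance_moved, max_ratio=3):
    # A candidate counts as a transposition only if the distance it has
    # moved does not exceed some multiple of its length: longer blocks
    # may travel farther and still qualify.
    return distance_moved <= max_ratio * block_length

print(looks_like_transposition(1, 10))  # False: a single token moved far
print(looks_like_transposition(8, 10))  # True: a long block moved the same distance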
The variant graph can be output as SVG (scalable vector graphics), which we’ve used for illustration above, but this should be understood as a visualization or serialization of the graph, and not as the graph itself. In CollateX the graph can be output directly as SVG or as GraphML, an XML vocabulary for representing graph information; the SVG is intended for visualization and the GraphML for subsequent processing. Neither of these is the sort of visualization that will be familiar to philologists, but because the graph is the most direct representation of the results of the alignment process, it may nonetheless provide insights into the structure of the collation that are more difficult to perceive with less immediate visualizations. The graph in principle is capable of encoding transpositions (and, for example, a visualization of the graph could draw a special kind of connecting line between the two nodes with the reading “b”), but at present CollateX does not record transposition information in the graph, which means that it cannot be represented in a visualization.
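In the Python version, the SVG rendering of the variant graph can be requested like any other output format. This sketch reuses the three-witness example from above; the SVG output relies on the Graphviz libraries being installed alongside the collatex package:

from collatex import *

collation = Collation()
collation.add_plain_witness("W1", "a b c d")
collation.add_plain_witness("W2", "a c d b")
collation.add_plain_witness("W3", "b c d")
# In a Jupyter notebook this displays the variant graph as SVG.
collate(collation, output="svg")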
CollateX also supports the output of an alignment table, similar to the images above, where empty cells (represented in those illustrations with hyphens) are inserted to align the tokens visually in either rows or columns. CollateX can generate visualizations of the alignment table as plain text or as HTML, and the HTML visualization can present the witnesses in rows and the tokens in columns or vice versa, as follows:
from collatex import *
collation = Collation()
collation.add_plain_witness("A", "The quick brown fox jumps over the dog.")
collation.add_plain_witness("B", "The brown fox jumps over the lazy dog.")
alignment_table = collate(collation, output="html")
| A | The | quick | brown fox jumps over the | -    | dog. |
| B | The | -     | brown fox jumps over the | lazy | dog. |
alignment_table = collate(collation, output="html", layout="vertical")
| A                        | B                        |
| The                      | The                      |
| quick                    | -                        |
| brown fox jumps over the | brown fox jumps over the |
| -                        | lazy                     |
| dog.                     | dog.                     |
CollateX also supports output as JSON (intended for subsequent post-processing), generic XML (intended for subsequent post-processing with XSLT), and TEI XML parallel segmentation notation.
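Continuing the fox/dog example above, these serializations are requested with the corresponding output keywords of the Python interface:

from collatex import *

collation = Collation()
collation.add_plain_witness("A", "The quick brown fox jumps over the dog.")
collation.add_plain_witness("B", "The brown fox jumps over the lazy dog.")
json_output = collate(collation, output="json")  # JSON, for post-processing
xml_output = collate(collation, output="xml")    # generic XML, for XSLT
tei_output = collate(collation, output="tei")    # TEI parallel segmentation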
Some images and information in this tutorial were derived from Haentjens Dekker, R. et al., 2015. Computer supported collation of modern manuscripts: CollateX and the Beckett Digital Manuscript Project. *Literary and Linguistic Computing*, 30(3), pp.452–470; and from http://wiki.tei-c.org/index.php/Textual_Variance. Some additional textual content on this page was generously contributed by Helena Bermúdez Sabel.