Welcome to this interactive book on Statistical Natural Language Processing (NLP). NLP is a field that lies at the intersection of Computer Science, Artificial Intelligence (AI) and Linguistics, with the goal of enabling computers to solve tasks that require natural language understanding and/or generation. Such tasks are omnipresent in our day-to-day life: think of Machine Translation, Automatic Question Answering or even basic Search. All these tasks require the computer to process language in one way or another. But even if you ignore these practical applications, many people consider language to be at the heart of human intelligence, and this makes NLP (and its more linguistically motivated cousin, Computational Linguistics) important for its role in AI alone.
NLP is a vast field with beginnings dating back to at least the 1960s, and it is difficult to give a full account of every aspect of NLP. Hence, this book focuses on a sub-field of NLP termed Statistical NLP (SNLP). In SNLP computers aren't directly programmed to process language; instead, they learn how language should be processed based on the statistics of a corpus of natural language. For example, a statistical machine translation system's behaviour is affected by the statistics of a parallel corpus where each document in one language is paired with its translation in another. This approach has dominated NLP research for almost two decades now, and has seen widespread adoption in industry too. Notice that while Statistics and Machine Learning are, in general, quite different fields, for the purposes of this book we will mostly identify Statistical NLP with Machine Learning-based NLP.
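To give a flavour of what "learning from the statistics of a corpus" means in its simplest form, the sketch below estimates word probabilities from counts in a tiny toy corpus. This is a hypothetical illustration, not a method from this book; real systems use corpora of millions of sentences and far richer models.

```python
from collections import Counter

# A tiny toy corpus; a real system would learn from millions of sentences.
corpus = "the cat sat on the mat . the dog sat on the log .".split()

counts = Counter(corpus)
total = sum(counts.values())

def unigram_prob(word):
    # Maximum likelihood estimate: relative frequency of the word.
    return counts[word] / total

print(unigram_prob("the"))  # 4/14 ≈ 0.286
```

Even this trivial model captures something about the language of the corpus: frequent words like "the" receive higher probability than rare ones, and no hand-written rules were involved.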
We think that to understand and apply SNLP in practice one needs knowledge of the following:
The book is somewhat structured around the task dimension. That is, we will explore different methods, frameworks and their implementations, usually in the context of specific NLP applications.
At a higher level the book is divided into themes that roughly correspond to learning paradigms within SNLP, and which follow a roughly chronological order: we will start with generative learning, then discuss discriminative learning, then cover forms of weaker supervision, and conclude with representation and deep learning. As an overarching theme we will use structured prediction, a formulation of machine learning that accounts for the fact that machine learning outputs are often not just classes, but structured objects such as sequences, trees or general graphs. This is a fitting approach, seeing as NLP tasks often require prediction of such structures.
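To make the idea of structured prediction concrete, the toy sketch below tags a sentence by scoring whole tag *sequences* rather than each word in isolation. All scores here are hypothetical numbers chosen for illustration; real systems learn them from data and use dynamic programming (e.g. the Viterbi algorithm) instead of exhaustive search.

```python
from itertools import product

# The output space: all sequences over this tag set, not single labels.
tags = ["DET", "NOUN", "VERB"]

# Hypothetical scores: word-tag compatibility plus tag-tag transitions.
emission = {("the", "DET"): 2.0, ("dog", "NOUN"): 2.0, ("barks", "VERB"): 2.0}
transition = {("DET", "NOUN"): 1.0, ("NOUN", "VERB"): 1.0}

def score(words, tag_seq):
    # Score a complete sequence: emissions for each word plus
    # transitions between adjacent tags.
    s = sum(emission.get((w, t), 0.0) for w, t in zip(words, tag_seq))
    s += sum(transition.get(pair, 0.0) for pair in zip(tag_seq, tag_seq[1:]))
    return s

def predict(words):
    # Exhaustive search over all tag sequences; fine for a toy example,
    # exponential in sentence length in general.
    return max(product(tags, repeat=len(words)), key=lambda ts: score(words, ts))

print(predict(["the", "dog", "barks"]))  # ('DET', 'NOUN', 'VERB')
```

The transition scores are what make this *structured*: the best tag for one word depends on the tags chosen for its neighbours, so the sequence must be predicted jointly.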
COMPGI19 Course Logistics: slides
Introduction: slides
Tokenisation and Sentence Splitting: notes, slides, exercises
Generative Learning:
Discriminative Learning:
Weak Supervision:
Representation and Deep Learning:
We have a few dedicated method chapters:
The best way to learn language processing with computers is to process language with computers. For this reason this book features interactive code blocks that we use to show NLP in practice, and that you can use to test and investigate methods and language. We use the Python language throughout this book because it offers a large number of relevant libraries and it is easy to learn.
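To illustrate the kind of snippet you will encounter in these code blocks, here is a simple regex-based tokeniser. It is a sketch for this introduction, not the book's actual tokenisation code (which is covered in its own chapter).

```python
import re

def tokenize(text):
    # Match runs of word characters, or single non-word non-space
    # characters (so punctuation becomes its own token).
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Mr. Bob Dobolina is thinkin' of a master plan."))
```

You can run and modify such blocks directly in the interactive version of the book, for example to see where a simple tokeniser like this one breaks down.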
To install the book locally and use it interactively, follow the installation instructions on GitHub.
Labs:
Setup tutorials: