We have only really scratched the surface. This course covered the most important and most thoroughly explored parts of ML: a frequentist approach to supervised learning and inferential statistics. But not only are there more paradigms (active learning, unsupervised learning, online learning, reinforcement learning); there is also a whole different body of theory: Bayesian.
Let me explain why we did not explore those other paradigms: they are still in development. I can be confident that what you learned here will last for the next 50 years (hopefully), but the tools and theory I would teach in RL or unsupervised learning could change dramatically in the next 5 years.
And then there is Bayesian...
There are a couple of reasons we left it out. First, it deserves to be its own class. But then why didn't we start with the Bayesian approach instead? Honestly, it would require more background knowledge: I think we would have to learn some actual probability theory before taking on that class.
So maybe in the future, after I make a quick probability course, I will go into Bayesian methods, but for now you will have to be sated with this.