Mathematics of Machine Learning 2025
This is the main website for the Mathematics of Machine Learning course in the spring of 2025, as part of the bachelor of mathematics at the University of Amsterdam. Visit this page regularly for changes and updates.
Instructor: Tim van Erven (tim@timvanerven.nl)
Teaching Assistants: Eva Frantzeskaki, Paolo Bagozzi
General Information
Machine learning is one of the fastest growing areas of science, with far-reaching applications. This course gives an overview of the main techniques and algorithms. The lectures introduce the definitions and main characteristics of machine learning algorithms from a coherent mathematical perspective. In the workgroups, students will both solve mathematical exercises to deepen their understanding, and apply algorithms from the course to a selection of data sets using Python Jupyter notebooks.
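To give a flavour of the Jupyter notebook exercises, here is a minimal sketch of the kind of workflow involved. The dataset, the model choice, and the use of scikit-learn are illustrative assumptions only, not part of the official course material.

```python
# Illustrative sketch only: a minimal Jupyter-style workflow, assuming
# scikit-learn is installed. The dataset and model are placeholders,
# not taken from the actual course notebooks.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)            # small toy dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)     # hold out 30% of the data for testing

clf = KNeighborsClassifier(n_neighbors=5)    # nearest neighbor classifier (covered in week 1)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```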
We will use Canvas for announcements, grades and submitting homework.
Required Prior Knowledge
- Linear algebra, gradients, convexity
- Ability to write mathematical proofs
- Programming in Python with Jupyter notebooks
- Writing in LaTeX
Although the course mainly targets mathematics students, it is also accessible to other science students (AI, CS, physics, …) with an interest in the mathematical foundations of machine learning.
Lectures and Exercise Sessions
- Weekly lectures from 11h00-13h00 in room SP F1.02:
  - Weeks 6-12 on Tuesdays
  - Weeks 14-16, 20 and 21 on Mondays
- Weekly exercise classes on Thursdays, starting in the second week of the course:
  - Weeks 7-11 from 15h00-17h00:
    - Group A: in room SP D1.112
    - Group B: in room SP G5.29
  - Weeks 14-16, 19 and 20 from 13h00-15h00:
    - Group A: in room SP D1.110
    - Group B: in room SP D1.162
Examination Form
The course grade consists of the following components:
- Homework assignments. H = average of the homework grades, excluding the lowest homework grade.
- Two exams: midterm (M) and final (F).
The final grade is computed as 0.3H + 0.3M + 0.4F. If this value lies between 5 and 6, it is rounded to a whole point (5 or 6); otherwise it is rounded to the nearest half point.
Exams (closed book):
- Midterm: March 25, 13h00-15h00 in room SP C0.05
- Final exam: May 22, 09h00-12h00 in room SP C0.05
- Resit exam: July 1, 09h00-12h00 in room SP A1.10
The midterm covers the first half of the course; the final exam covers only the second half. The resit exam (R) covers both halves and replaces both the midterm and the final exam, giving final grade 0.3H + 0.7R. All exams are closed book, meaning that no external resources may be used during the exam.
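As a worked illustration of these rules, the sketch below computes the final grade from example component grades. The exact rounding behaviour at boundary values is my own reading of the rules above, not official grading code.

```python
# Illustrative sketch of the grading rules above; the example grades are
# made up, and the rounding interpretation is an assumption.
def final_grade(H, M, F, R=None):
    """Combine homework (H), midterm (M) and final (F); a resit grade (R) replaces both exams."""
    g = 0.3 * H + 0.7 * R if R is not None else 0.3 * H + 0.3 * M + 0.4 * F
    if 5 < g < 6:
        return float(round(g))   # between 5 and 6: round to a whole point (5 or 6)
    return round(g * 2) / 2      # otherwise: round to the nearest half point

print(final_grade(H=8.0, M=7.0, F=7.5))  # 0.3*8.0 + 0.3*7.0 + 0.4*7.5 = 7.5
print(final_grade(H=6.0, M=5.0, F=5.0))  # 5.3 lies between 5 and 6, so rounded to 5.0
```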
Course Materials
The main book for the course is The Elements of Statistical Learning (ESL), 2nd edition, by Hastie, Tibshirani and Friedman, Springer-Verlag 2009. In addition, we will use selected parts from Ch. 18 of Computer Age Statistical Inference: Algorithms, Evidence and Data Science (CASI) by Efron and Hastie, Cambridge University Press, 2016. Some supplementary material will also be provided, as listed in the Course Schedule.
Both books are freely available online, but you may want to buy a paper copy of the ESL book, because you will need to study many of its chapters. The standard edition of ESL is hardcover, but a cheaper soft-cover edition is also available for €39.99. To get the cheaper offer, open this link from inside the university network.
Course Schedule
This schedule will be updated throughout the course. Literature marked ‘optional’ is recommended for background, but will not be tested on the exam. TBA=To Be Announced.
Date | Topics | Literature |
---|---|---|
Feb. 4 | Supervised learning intro: classification and regression (overfitting 1), linear regression for classification (overfitting 2), nearest neighbor classification (overfitting 3). | Slides 1. Ch. 1; Sect. 2.1, 2.2, 2.3. |
Feb. 11 | Curse of dimensionality. Statistical decision theory: expected prediction error (overfitting 4), Bayes-optimal prediction rule. Empirical Risk Minimization. Interpretation of least squares as ERM. Cross-validation. | Slides 2. Sect. 2.4, 2.5; Sect. 7.10.1, 7.10.2; optionally: 7.12. |
Feb. 18 | Model selection for regression: best-subset selection, ridge regression and lasso, comparison of best-subset/ridge/lasso, ridge and lasso as shrinkage methods. | Slides 3. Sect. 3.1, 3.2 up to 3.2.1, 3.3, 3.4 up to 3.4.2; Sect. 3.4.3. From lecture: derivation of formulas in Table 3.4. Optional: Sect. 1-3.4 about subgradients from the Boyd and Vandenberghe lecture notes. |
Feb. 25 | Recorded lecture via Zoom! Finish ridge and lasso as shrinkage methods. Plug-in estimators. Linear discriminant analysis (LDA). Naive Bayes classifier, with application to spam filtering. | Slides 4. Sect. 4.1, 4.2, 4.3 (except 4.3.1, 4.3.2, 4.3.3); Sect. 6.6.3. |
Mar. 4 | Surrogate losses. Logistic regression. | Slides 5. Sect. 4.4 (except 4.4.3). |
Mar. 11 | Decision trees for classification and regression. Bias-variance trade-off. Bagging and random forests. | Slides 6. Sect. 2.9, 9.2. |
Mar. 18 | Boosting (AdaBoost), boosting as forward stagewise additive modeling. Q&A session. | Slides 7. Sect. 8.7, 10.1, 10.2, 10.3, 10.4, 10.5, 10.6 (in 10.6 only the part about classification). |
Mar. 25 | Midterm Exam. | |
Mar. 31 | SVMs I: Optimal separating hyperplane, support vector machine (SVM), SVM learning as regularized hinge loss fitting, dual formulation, kernel trick. | Slides 8. Sect. 4.5.2, 12.2, 12.3.1, 12.3.2. |
Apr. 7 | SVMs II: Dual formulation continued. | Slides 9. Optionally: Ch. 5 from the Boyd and Vandenberghe book. |
Apr. 14 | Unsupervised learning: K-means clustering. Stochastic optimization. | Slides 10. Sect. 14.3 before 14.3.1; Sect. 14.3.6. NB: the book gives the wrong definition of K-means in Sect. 14.3.6, see erratum. Handout about stochastic optimization. |
Apr. 21 | Easter (no lecture) | |
Apr. 28 | Lecture-free week | |
May 5 | Liberation Day (no lecture) | |
May 12 | Neural networks/deep learning I: fully connected layers, stochastic gradient descent with backpropagation. | Slides 11. From Ch. 18 of the CASI book: chapter intro, Sect. 18.1, Sect. 18.2 (except accelerated gradient methods). |
May 19 | Neural networks/deep learning II: convolutional layers. Q&A session. | Slides 12. From Ch. 18 of the CASI book: Sect. 18.4. |
May 22 | Final Exam. | |
Homework Assignments
The homework assignments will be made available here. You may work together in pairs, and pairs may change per assignment; collaboration with anyone else is not allowed. If you miss a deadline because of illness or other special circumstances, contact Tim to discuss possible solutions.
Submit via Canvas. Write your answers in LaTeX.
Homework | Extra Files | Start | Deadline |
---|---|---|---|
1. Bayes Optimality | Homework1-start.ipynb | Feb. 13 | Feb. 19, 13h00 |
2. Cross-validation | Homework2-start.ipynb | Feb. 20 | Feb. 26, 13h00 |
3. Regression | | Feb. 27 | Mar. 5, 13h00 |
4. Successful Spamming | Homework4-start.ipynb | Mar. 6 | Mar. 12, 13h00 |
5. Benefits of Averaging | | Mar. 13 | Mar. 19, 13h00 |
6. Surrogate Losses | | Apr. 3 | Apr. 9, 13h00 |
7. Support Vector Machines | | Apr. 10 | Apr. 16, 13h00 |
8. Clustering | | Apr. 17 | May 7, 13h00 |
9. Deep Learning | | May 8 | May 16, 17h00! |
Further Reading
Here is a list of references for advanced further reading. These are all optional, and will not be tested on the exam.
- Machine Learning Theory: I recommend the free book by Shalev-Shwartz and Ben-David, which we also use in the MasterMath course Machine Learning Theory.
- Convex optimization: the free book by Boyd and Vandenberghe provides a very nice introduction. For a more extensive overview, see the free book by Bubeck.
- Deep learning: if you want to get up to date on the practice of deep learning, I recommend the Dive into Deep Learning interactive online book.