CS480/680: INTRODUCTION TO MACHINE LEARNING, Spring 2023, University of Waterloo


This course introduces the fundamentals of machine learning. We will also cover topics of particular current interest, e.g., the technical details of how to train your own ChatGPT.



Graded Student Work (CS480 and CS680):
  • Homeworks (we do not accept hand-written submissions; typesetting in LaTeX is recommended)
  • Course Project

Using AI to write homeworks or the project report is prohibited. We may use OpenAI's tools to detect such submissions.

Late Policy: We do NOT accept late submissions unless you have a legitimate reason with formal proof (e.g., hospitalization, family emergency). The proof must be dated within 7 days of the homework deadline. Traveling, being busy with other work, or simply forgetting to submit are not legitimate reasons. Without proof, your score is multiplied by 75% if you submit within 24 hours after the deadline, and you receive a score of 0 beyond 24 hours. With proof and my approval, you can get a 7-day homework extension.


There is no required textbook, but the following fine texts are recommended.

Schedule (tentative)

Each entry lists the date, category, topic, slides, suggested readings, and assignments.

Lecture 1 | May 8 | Introduction
  Slides: Link
  Readings: Deep Learning, Section 1

Lecture 2 | May 10 | Classic ML | Perceptron
  Slides: Link
  Readings: Patterns, Predictions, and Actions, Page 37

May 15 | Classic ML | Perceptron - Cont'
  Slides: Link
  Readings: Patterns, Predictions, and Actions, Page 37
  Assignment: pdf, tex

Lecture 3 | May 17 | Classic ML | Linear Regression
  Slides: Link
  Readings: Probabilistic Machine Learning: An Introduction, Page 363

Lecture 4 | May 23 | Classic ML | Linear Regression - Cont'; Logistic Regression
  Slides: Link, Link
  Readings: Probabilistic Machine Learning: An Introduction, Page 333

Lecture 5 | May 24 | Classic ML | Hard-Margin SVM
  Slides: Link
  Readings: The Elements of Statistical Learning, Section 12.3

Lecture 6 | May 29 | Classic ML | Soft-Margin SVM
  Slides: Link
  Readings: The Elements of Statistical Learning, Section 12.3

Lecture 7 | May 31 | Classic ML | Soft-Margin SVM - Cont'; Reproducing Kernels
  Slides: Link, Link
  Readings: The Elements of Statistical Learning, Section 12.3

Lecture 8 | June 5 | Classic ML | Gradient Descent
  Slides: Link
  Readings: Convex Optimization, Section 9.3
  Assignment: pdf, ipynb

Lecture 9 | June 7 | Neural Nets | Gradient Descent - Cont'; Fully Connected NNs
  Slides: Link, Link
  Readings: Deep Learning, Section 6

June 12 | Neural Nets | Fully Connected NNs - Cont'
  Slides: Link
  Readings: Deep Learning, Section 6

Lecture 10 | June 14 | Neural Nets | Convolutional NNs
  Slides: Link
  Readings: Deep Learning, Section 9

June 19 | Neural Nets | Convolutional NNs - Cont'
  Slides: Link
  Readings: Deep Learning, Section 9

No class June 21 | Mid-term Exam: STC 1012, 11:30am-1:00pm (90 min)

Lecture 11 | June 26 | Neural Nets | Transformer
  Readings:
  • “Attention Is All You Need”. Vaswani et al. 2017. link
  Assignment: pdf, ipynb

Lecture 12 | June 28 | Modern ML Paradigms | Large Language Models
  Readings:
  • “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. Devlin et al. 2018. link
  • (GPT-1) “Improving Language Understanding by Generative Pre-training”. Radford et al. 2018. link
  • (Talk by Andrej Karpathy) State of GPT. link

No class July 3 | Canada Day

July 5 | Modern ML Paradigms | Large Language Models - Cont'
  Readings:
  • (GPT-2) “Language Models are Unsupervised Multitask Learners”. Radford et al. 2019. link
  • (GPT-3) “Language Models are Few-Shot Learners”. Brown et al. 2020. link
  • (GPT-3.5) “Training Language Models to Follow Instructions with Human Feedback”. Ouyang et al. 2022. link
  • (GPT-4) “GPT-4 Technical Report”. OpenAI 2023. link

Lecture 13 | July 10 | Modern ML Paradigms | GANs
  Readings:
  • “Generative Adversarial Networks”. Goodfellow et al. 2014. link

Lecture 14 | July 12 | Modern ML Paradigms | Self-Supervised Learning
  Readings:
  • “A Simple Framework for Contrastive Learning of Visual Representations”. Chen et al. 2020. link
  • “Momentum Contrast for Unsupervised Visual Representation Learning”. He et al. 2020. link

Lecture 15 | July 17 | Trustworthy ML | Evasion Attacks
  Readings:
  • (White-box) “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks”. Croce et al. ICML 2020. link
  • (White-box) “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples”. Athalye et al. ICML 2018. link
  • (White-box) “DeepFool: a simple and accurate method to fool deep neural networks”. Moosavi-Dezfooli et al. CVPR 2016. link
  • (Black-box) “Square Attack: a query-efficient black-box adversarial attack via random search”. Andriushchenko et al. ECCV 2020. link
  • (Black-box) “Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models”. Brendel et al. ICLR 2018. link
  Assignment: pdf, tex

Lecture 16 | July 19 | Trustworthy ML | Robustness
  Readings:
  • “Towards Deep Learning Models Resistant to Adversarial Attacks”. Madry et al. ICLR 2018. link
  • “Theoretically Principled Trade-off between Robustness and Accuracy”. Zhang et al. ICML 2019. link

Lecture 17 | July 24 | Trustworthy ML | Privacy (guest lecture)
  Slides: Link
  Readings: DifferentialPrivacy.org

Lecture 18 | July 26 | Trustworthy ML | Ethics (guest lecture)
  Slides: Link
  Notes: Note
  Readings: DifferentialPrivacy.org

Lecture 19 | July 31 | Trustworthy ML | Other Threats; Course Review
  Slides: Link
  Readings:
  • (Physical) “Robust Physical-World Attacks on Deep Learning Models”. Eykholt et al. CVPR 2018. link
  • (Physical) “Adversarial examples in the physical world”. Kurakin et al. ICLR 2017. link
  • (Physical) “Synthesizing Robust Adversarial Examples”. Athalye et al. ICML 2018. link
  • (Physical) “Fooling automated surveillance cameras: adversarial patches to attack person detection”. Thys et al. CVPR 2019 Workshop. link
  • (Poisoning) “Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks”. Shafahi et al. NeurIPS 2018. link
  • (Poisoning) “Trojaning Attack on Neural Networks”. Liu et al. NDSS 2018. link
  • (Poisoning) “Hidden Trigger Backdoor Attacks”. Saha et al. AAAI 2020. link
  • (Poisoning) “Deep Partition Aggregation: Provable Defense against General Poisoning Attacks”. Levine et al. ICLR 2021. link
Mental Health: If you or anyone you know experiences academic stress, difficult life events, or feelings of anxiety or depression, we strongly encourage you to seek support.

On-campus Resources

Off-campus Resources

Diversity: It is our intent that students from all diverse backgrounds and perspectives be well served by this course, and that students' learning needs be addressed both in and out of class. We recognize the immense value of the diversity in identities, perspectives, and contributions that students bring, and the benefit it has on our educational environment. Your suggestions are encouraged and appreciated. Please let us know ways to improve the effectiveness of the course for you personally or for other students or student groups. In particular:

Other related courses: CS480/680 offered by Yaoliang Yu