Deep Learning Lecture Series

This is a series of lectures that together make up a mini-course on the subject. The idea is to cover the practical as well as the theoretical aspects. In particular, my hope with this series is to shed light on interesting pure-math problems arising in the context of deep learning, problems that have so far attracted only a minute amount of attention.

Table of contents

  1. History
    1. Brief history of the subject
    2. What is the perceptron algorithm?
    3. What is known about the perceptron algorithm?
    4. How does this relate to modern neural networks?
      1. What is a neural network?
  2. VC theory
    1. Quick recap
    2. Detour into VC-theory and generalization bounds
    3. Why Deep Neural Networks are difficult
    4. Open problems
  3. SGD and PDE
    1. Quick recap
    2. Gradient descent algorithm
    3. Stochastic gradient descent
    4. Can we model gradient descent?
    5. What is the connection to PDE?
  4. Survey
    1. CNNs
    2. ImageNet challenge and different winners
    3. Autoencoders
    4. Reinforcement learning
  5. The ups and downs of the loss landscape


For those interested, the notebook that generated these slides can be found on GitHub.