Short Description
An introduction to optimization for machine learning. Computation of derivatives. Deep learning.
Learning Outcomes
By the end of the course, students are expected to be able to:
- Formulate various machine learning problems as optimization problems.
- Implement gradient descent and compare/contrast with stochastic gradient descent.
- Avoid common numerical errors caused by floating-point rounding.
- Compare/contrast different ways of computing derivatives (symbolic/automatic/numerical differentiation).
- Train deep neural networks to perform regression and classification tasks.
- Deploy neural networks on a GPU.
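The first two outcomes above can be sketched in a few lines. The example below is a minimal illustration (not course-provided code) of full-batch gradient descent versus stochastic gradient descent on a toy one-dimensional least-squares problem; the dataset, step sizes, and iteration counts are arbitrary choices for the demo.

```python
import random

def gradient_descent(grad, w, lr, steps):
    """Full-batch gradient descent: each step uses the exact gradient."""
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def sgd(grad_i, n, w, lr, steps):
    """Stochastic gradient descent: each step uses one randomly chosen example."""
    for _ in range(steps):
        i = random.randrange(n)
        w = w - lr * grad_i(w, i)
    return w

# Toy least squares: f(w) = (1/n) * sum_i (w*x_i - y_i)^2, data generated by y = 2x,
# so the minimizer is w = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
n = len(xs)

full_grad = lambda w: (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
grad_i = lambda w, i: 2.0 * (w * xs[i] - ys[i]) * xs[i]   # gradient of one term

w_gd = gradient_descent(full_grad, w=0.0, lr=0.05, steps=200)
w_sgd = sgd(grad_i, n, w=0.0, lr=0.01, steps=2000)
print(w_gd, w_sgd)   # both approach 2.0
```

The contrast to notice: gradient descent pays for a full pass over the data at every step but follows the exact descent direction, while SGD takes cheap, noisy steps; on this noiseless toy problem both settle at the same minimizer.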
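The outcomes on rounding error and on symbolic/automatic/numerical differentiation can also be illustrated concretely. The sketch below (illustrative only, not course material) differentiates the same polynomial three ways: by hand (symbolic), by forward-mode automatic differentiation with a minimal dual-number class, and by a forward finite difference, whose accuracy is limited by the tug-of-war between truncation error (want small h) and floating-point rounding (want large h).

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; the dot field carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule happens automatically here.
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1      # f'(x) = 6x + 2

x0 = 1.5
symbolic = 6 * x0 + 2                          # derivative derived by hand
auto = f(Dual(x0, 1.0)).dot                    # exact up to machine precision
h = 1e-6
numeric = (f(x0 + h) - f(x0)) / h              # O(h) truncation + rounding error
print(symbolic, auto, numeric)
```

Automatic differentiation matches the symbolic answer to machine precision, while the finite-difference estimate carries a small error no matter how h is tuned; this is the tradeoff the course outcomes refer to.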
Reference Material
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. http://www.deeplearningbook.org/
- Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning: with Applications in R. 2014.
- Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. 1995.
- David Poole and Alan Mackworth. Artificial Intelligence: Foundations of Computational Agents. 2010.
- Kevin Murphy. Machine Learning: A Probabilistic Perspective. 2012.
- Christopher Bishop. Pattern Recognition and Machine Learning. 2007.
- Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. Introduction to Data Mining. 2005.
- Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. Mining of Massive Datasets. 2nd ed., 2014.
- A Course in Machine Learning
- Deep Learning with Python. Jason Brownlee.
- Stanford UFLDL tutorial
- Nando de Freitas lecture videos and online course
- Neural Networks and Deep Learning (free online book)
- Grokking Deep Learning
- Practical Deep Learning For Coders, Part 1, plus additional resources on their blog
- A Guide to Deep Learning
- A list of other resources: https://github.com/ChristosChristofidis/awesome-deep-learning
Instructor (2016-2017)
Note: information on this page is preliminary and subject to change.