Short course on scientific deep learning / by Prof. Tan Bui-Thanh (2023.07.31~2023.08.10)
Time: 10 AM to noon
Days: 7/31, 8/2, 8/4, 8/7, 8/9, 8/10
Place: ASTC 615, Yonsei University
This 12-hour short course on scientific deep learning provides an applied-mathematics perspective on deep learning. In particular, we shall employ elementary mathematics to understand, analyze, and provide insights into various aspects of deep learning, including the universal approximation capability, backpropagation, gradient vanishing, and the bias-variance trade-off. We also provide a short introduction to JAX, an efficient and lightweight deep learning platform.
The content of the short course:
Overview of machine learning
Deep neural networks
Proof of the universal approximation theorem for ReLU neural networks
What could be the best neural network for a given task? Review/introduction to optimization
Computing the optimal solution with gradient descent: derivation and convergence analysis
Gradient descent for training neural networks with backpropagation (see the JAX sketch after this list)
Gradient vanishing/exploding issues
Statistical learning for neural networks
Motivation/introduction to probability theory
Statistical learning and the bias-variance decomposition/trade-off (see the decomposition written out after this list)
Stochastic gradient descent and its convergence analysis
Why could deep be better than shallow? (if time permits)
Introduction to deep learning in JAX
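To preview the hands-on part, here is a minimal JAX sketch of the workflow the course builds up to: a tiny ReLU network for 1-D regression, trained by plain gradient descent, with jax.grad carrying out backpropagation. This is our own illustration, not course material; the architecture, learning rate, and toy target sin(3x) are assumptions.

    import jax
    import jax.numpy as jnp

    def init_params(key, in_dim=1, hidden=32, out_dim=1):
        # Small two-layer ReLU network; the scaling keeps initial activations O(1).
        k1, k2 = jax.random.split(key)
        return {
            "W1": jax.random.normal(k1, (in_dim, hidden)) / jnp.sqrt(in_dim),
            "b1": jnp.zeros(hidden),
            "W2": jax.random.normal(k2, (hidden, out_dim)) / jnp.sqrt(hidden),
            "b2": jnp.zeros(out_dim),
        }

    def forward(params, x):
        h = jax.nn.relu(x @ params["W1"] + params["b1"])
        return h @ params["W2"] + params["b2"]

    def loss(params, x, y):
        # Mean squared error over the batch.
        return jnp.mean((forward(params, x) - y) ** 2)

    @jax.jit
    def gd_step(params, x, y, lr=1e-2):
        # jax.grad differentiates the loss w.r.t. params: this is backpropagation.
        grads = jax.grad(loss)(params, x, y)
        return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

    key = jax.random.PRNGKey(0)
    x = jnp.linspace(-1.0, 1.0, 128).reshape(-1, 1)
    y = jnp.sin(3.0 * x)  # toy regression target (an assumption for illustration)
    params = init_params(key)
    for _ in range(1000):
        params = gd_step(params, x, y)
    print("final loss:", loss(params, x, y))

Replacing the full-batch gradient here with a mini-batch estimate turns this loop into the stochastic gradient method whose convergence is analyzed in the course.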
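For reference, the bias-variance decomposition mentioned above can be stated for squared-error loss with data y = f(x) + eps, E[eps] = 0, Var(eps) = sigma^2 (this is the standard form; the notation is ours, not necessarily the course's). In LaTeX:

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
      + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
      + \underbrace{\sigma^2}_{\text{noise}}

where the expectation is over the random training set and the noise. The trade-off: richer models reduce the bias term but typically increase the variance term.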
- Organizer: Eun-Jae Park (ejpark@yonsei.ac.kr)