Course Description

Learning outcomes
Students understand the basic theory of neural networks and up-to-date deep learning algorithms. Students become familiar with the most popular neural network models and their gradient-based learning algorithms, and learn how gradients are computed with back-propagation and how training is accelerated with different optimization strategies, from plain Stochastic Gradient Descent (SGD) to momentum-based algorithms. Students master deep learning techniques such as regularization, momentum, parameter and structure optimization, batch normalization, and dropout. Students understand the working principles of typical as well as advanced deep learning algorithms and architectures, including deep neural networks, convolutional neural networks (CNN), and their variants. Modern deep generative models and technologies are also studied, including but not limited to Generative Adversarial Networks (GAN), deterministic autoencoders, Variational AutoEncoders (VAE), flow-based and autoregressive models (such as PixelCNN and WaveNet), and self-supervised learning. Students know how to build deep learning systems from scratch, how to choose evaluation metrics for deep learning, how to preprocess data sets, and how to handle variance and bias in deep learning. Students are able to program deep learning using, for example, the TensorFlow platform (other possible deep learning frameworks are PyTorch and JAX). Students gain hands-on experience in using state-of-the-art deep learning techniques to solve practical problems.

Contents
Machine learning methods essential for deep learning. Concepts and challenges of deep learning. Deep learning models and techniques (deep neural networks, Convolutional Neural Networks (CNN), Generative Adversarial Networks (GAN), autoencoders, Long Short-Term Memory (LSTM), modified versions of deep neural network models, etc.). Recent advances in deep learning research. Cutting-edge applications of deep learning in feature extraction, image processing, pattern classification, speech recognition, time series prediction, etc.

Modes of study
Lectures, teaching materials, exercises, a longer project work, and an examination.

Teaching methods
Lectures 32 hours, exercises 16 hours, plus 2 hours of tutoring for hands-on exercises.

Study materials
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016; lecture slides and notes; selected papers from journals and conference proceedings.

Evaluation criteria
Grading: 0-5.

Prerequisites
Prerequisite courses: 'Artificial Intelligence', 'Probability Inference for Data Science 1', and 'Python Language'; basic knowledge of linear algebra, probability theory or statistical inference, and neural networks; general Python programming skills; and, preferably, some knowledge of deep learning frameworks such as TensorFlow and PyTorch.
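To give a flavor of the hands-on exercises, the following minimal sketch trains a small convolutional network with batch normalization, dropout, and momentum-based SGD using the tf.keras API. It is an illustration only; the dataset (MNIST) and all hyperparameters are assumptions, not part of the official course materials.

# Minimal illustrative sketch: a small CNN with batch normalization and
# dropout, trained with momentum-based SGD (assumed setup, not the
# official course code).
import tensorflow as tf

# Load a standard image classification dataset and scale pixels to [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.BatchNormalization(),   # batch normalization
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),           # dropout regularization
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Plain SGD becomes a momentum-based optimizer when momentum > 0.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Gradients are computed by back-propagation inside fit().
model.fit(x_train, y_train, batch_size=64, epochs=1)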