Each module is worth 3 ECTS credits. You choose a total of 10 modules (30 ECTS) from the following module categories:
- 12-15 ECTS in technical scientific modules (TSM)
TSM modules teach profile-specific specialist skills and supplement the decentralised specialisation modules.
- 9-12 ECTS in fundamental theoretical principles modules (FTP)
FTP modules deal with theoretical fundamentals such as higher mathematics, physics, information theory, chemistry, etc. They impart deeper, more abstract scientific knowledge and help you to bridge the gap between abstraction and application that is so important for innovation.
- 6-9 ECTS in context modules (CM)
CM modules will impart additional skills in areas such as technology management, business administration, communication, project management, patent law, contract law, etc.
Deep Learning is currently one of the most active subfields of Machine Learning and Artificial Intelligence. Gartner placed it at the peak of its 2017 Hype Cycle, and the trend continues. Deep Learning techniques are based on neural networks and are at the core of a vast range of impressive applications, from image classification, automated image captioning, and language translation (e.g. Google Translate) to playing Go and arcade games.
This course focuses on the mathematical aspects of neural networks, their implementation in Python, and their training and usage. Students will learn the fundamental concepts of Deep Learning and develop a good understanding of the applicability of Deep Learning to Machine Learning tasks. After completing the course, students will have the skills to apply Deep Learning in practical application settings.
Prerequisites
Linear algebra: vector and matrix operations, eigenvectors and eigenvalues
Multivariate calculus: partial differentiation, chain rule, gradient, Jacobian and Hessian
Statistics and probability theory: discrete and continuous distributions, multi-variate distributions, probability mass and density functions, Bayes’ Rule, maximum likelihood principle
Programming: experience in a programming language, with a good understanding of loops and of data structures such as arrays/lists and maps/dictionaries; understanding of object-oriented programming concepts. The course is taught in Python.
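As a quick self-check, the prerequisite concepts above can be exercised in a few lines of NumPy. This is an illustrative sketch; the matrices and probabilities are made-up example values, not course material:

```python
import numpy as np

# Linear algebra: eigenvectors and eigenvalues of a symmetric matrix.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)
v = eigenvectors[:, 0]
assert np.allclose(A @ v, eigenvalues[0] * v)  # A v = lambda v

# Multivariate calculus: gradient of f(x, y) = x^2 + 3y via central differences.
def f(p):
    return p[0] ** 2 + 3 * p[1]

def numerical_gradient(f, p, h=1e-6):
    grad = np.zeros_like(p)
    for i in range(len(p)):
        step = np.zeros_like(p)
        step[i] = h
        grad[i] = (f(p + step) - f(p - step)) / (2 * h)
    return grad

g = numerical_gradient(f, np.array([1.0, 2.0]))  # analytic gradient: (2x, 3) = (2, 3)

# Probability: Bayes' rule P(A|B) = P(B|A) P(A) / P(B), with example numbers.
p_a, p_b_given_a, p_b = 0.01, 0.9, 0.05
p_a_given_b = p_b_given_a * p_a / p_b
```

If these three snippets feel routine, the mathematical prerequisites are likely in place.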
Learning Objectives
After completing the module, students will:
- have a thorough understanding of neural network architectures, including convolutional and recurrent networks.
- know loss functions (e.g. categorical cross entropy) that provide the optimization objective during training.
- understand the principles of backpropagation.
- know the benefits of depth and representation learning.
- know some of the recent advances in the field and some of the open research questions.
- develop the ability to decide whether Deep Learning is suitable for a given task.
- gain the ability to build and train neural network models in a Deep Learning Framework such as TensorFlow.
Contents of Module
- Introduction: Logistic Neuron, training and cost functions.
- Architectures: feed-forward and recurrent networks; applications of neural networks.
- Optimization strategies: minimization of loss functions; gradient descent, stochastic gradient descent, and mini-batch gradient descent; implementation of gradient descent optimizers in Python.
- Training of Deep Neural Networks: backpropagation, computational graphs, automatic differentiation; special optimizers such as Nesterov accelerated gradient, AdaGrad, and RMSProp; tricks for faster training: batch normalization, gradient clipping, and special activation functions such as non-saturating activations; regularization using dropout.
- Multilayer Perceptron (MLP): implementation of an MLP including backpropagation in Python.
- Convolutional Neural Networks (CNNs): Convolutional and pooling layers, data augmentation, popular CNN architectures, transfer learning, applications.
- Practical Considerations and Methodology: Deep Learning frameworks such as TensorFlow; GPU vs. CPU; visualizations such as activation maximization, class activation maps, and saliency maps; performance metrics, hyper-parameter selection, debugging strategies.
- Recurrent Neural Networks: vanishing and exploding gradients; special memory cells such as Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM); static and dynamic unrolling; sequence classifiers; sequence-to-sequence models; encoder-decoder models for language translation.
- Special and Current Research Topics, such as:
- Autoencoders: principal component analysis using autoencoders; special applications such as denoising auto-encoders.
- Generative Adversarial Networks (GANs).
- Learning embeddings for word representations, attention mechanism, transformers.
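To give a flavour of the implementation work in the gradient descent, backpropagation, and MLP topics above, here is a minimal sketch of a one-hidden-layer perceptron trained with backpropagation and batch gradient descent in plain NumPy. The architecture, learning rate, and XOR toy task are illustrative assumptions, not the course's reference code:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR toy data: 4 examples, 2 features; target: [0, 1, 1, 0].
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer with 8 tanh units, sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: for sigmoid + binary cross-entropy, the gradient
    # w.r.t. the output pre-activation is simply (p - y) / N.
    d_out = (p - y) / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    # Batch gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

predictions = (p > 0.5).astype(int).ravel()
```

The course implements this kind of network, and the surrounding training machinery, in more generality.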
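The convolutional layers listed under CNNs reduce to a sliding-window weighted sum. A naive NumPy version (a sketch for intuition, not an efficient implementation; the example image and filter are made up) could look like:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation, as used in convolutional layers."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the window under the kernel.
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])  # horizontal difference filter
result = conv2d(image, edge_kernel)    # img[i,j] - img[i,j+1] at each position
```

Real frameworks vectorize this loop and add channels, strides, and padding, but the arithmetic is the same.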
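The attention mechanism mentioned in the research topics can likewise be sketched in a few lines of NumPy as scaled dot-product attention, softmax(QKᵀ/√d_k)V. The shapes and random test data below are illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights         # weighted average of the values

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 4))  # 3 queries of dimension 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out, weights = attention(Q, K, V)
```

Transformers stack many such attention layers, with learned projections producing Q, K, and V.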
Teaching and Learning Methods
Classroom teaching; programming exercises
Literature
I. Goodfellow, Y. Bengio, A. Courville: “Deep Learning”, MIT Press, 2016. ISBN: 978-0262035613.
N. Buduma: “Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms”, O’Reilly, 2017. ISBN: 978-1491925614.
A. Géron: “Hands-On Machine Learning with Scikit-Learn and TensorFlow”, O'Reilly, 2017. ISBN: 978-1491962299.
C. M. Bishop: “Neural Networks for Pattern Recognition”, Clarendon Press, 1996. ISBN: 978-0198538646.
K. P. Murphy: “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012. ISBN: 978-0262018029.
T. M. Mitchell: “Machine Learning”, McGraw-Hill, 1997. ISBN: 0070428077.