MSE Master of Science in Engineering

The Swiss engineering master's degree


Each module is worth 3 ECTS credits. You select 10 modules (30 ECTS) from the following categories:

  • 12-15 ECTS credits in Technical-scientific modules (TSM)
    TSM modules teach you technical skills specific to your orientation and complement the decentralised specialisation modules.
  • 9-12 ECTS credits in Extended theoretical foundations (FTP)
    FTP modules cover theoretical foundations such as higher mathematics, physics, information theory, chemistry, etc. They deepen your abstract scientific knowledge and help you build the important bridge between abstraction and application that innovation requires.
  • 6-9 ECTS credits in Context modules (CM)
    CM modules teach you additional skills in areas such as technology management, business administration, communication, project management, patent and contract law, etc.

The module description (download pdf) details the languages used for each module in the following categories:

  • lessons
  • documentation
  • examination

Deep Learning (TSM_DeLearn)

Deep Learning is currently one of the most active subareas of Machine Learning and Artificial Intelligence. Gartner placed it at the peak of its 2017 Hype Cycle, and the trend continues. Deep Learning techniques are based on neural networks and are at the core of a vast range of impressive applications, ranging from image classification, automated image captioning, and language translation (e.g. Google Translate) to playing Go and arcade games.

This course focuses on the mathematical aspects of neural networks, their implementation in Python, and their training and usage. Students will learn the fundamental concepts of Deep Learning and develop a good understanding of its applicability to Machine Learning tasks. After completing the course, students will have the skills to apply Deep Learning in practical application settings.

Prerequisites

Linear algebra: vector and matrix operations, eigenvectors and eigenvalues
Multivariate calculus: partial differentiation, chain rule, gradient, Jacobian and Hessian
Statistics and probability theory: discrete and continuous distributions, multivariate distributions, probability mass and density functions, Bayes' rule, maximum likelihood principle
Programming: experience in a programming language, with a good understanding of loops and of data structures such as arrays/lists and maps/dictionaries; understanding of object-oriented programming concepts. The course is taught using Python.
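As an illustration of the multivariate-calculus prerequisite, the short sketch below compares an analytic gradient (obtained by partial differentiation and the chain rule) with a central-difference approximation; the function f(x, y) = x²·y is an arbitrary choice made for this example:

```python
# Gradient of f(x, y) = x^2 * y:  df/dx = 2*x*y,  df/dy = x^2
def f(x, y):
    return x * x * y

def grad_f(x, y):
    # analytic partial derivatives
    return (2 * x * y, x * x)

x, y = 1.5, -0.5
eps = 1e-6

# central-difference approximations of the partial derivatives
num_dx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
num_dy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)

print(abs(num_dx - grad_f(x, y)[0]) < 1e-6)  # True
print(abs(num_dy - grad_f(x, y)[1]) < 1e-6)  # True
```

The same gradient-checking idea is routinely used to validate hand-written backpropagation code.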

Learning objectives

Students will

  • have a thorough understanding of neural network architectures including convolutional and recurrent networks.
  • know loss functions (e.g. categorical cross entropy) that provide the optimization objective during training.
  • understand the principles of back propagation.
  • know the benefits of depth and of representation learning.
  • know some of the recent advances in the field and some of the open research questions.
  • develop the ability to decide whether Deep Learning is suitable for a given task.
  • gain the ability to build and train neural network models in a Deep Learning Framework such as TensorFlow.
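To make the loss-function objective mentioned above concrete, here is a minimal, framework-free sketch of categorical cross entropy computed over softmax probabilities (the function names are our own, not taken from any particular framework):

```python
import math

def softmax(logits):
    # subtract the max logit for numerical stability
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def categorical_cross_entropy(logits, target_index):
    # negative log of the probability assigned to the true class
    probs = softmax(logits)
    return -math.log(probs[target_index])

# the loss shrinks as the model grows more confident in the true class
loss_unsure = categorical_cross_entropy([2.0, 1.0, 0.1], 0)
loss_sure = categorical_cross_entropy([5.0, 1.0, 0.1], 0)
print(loss_sure < loss_unsure)  # True
```

Minimizing this quantity over the training set is exactly the optimization objective that gradient-based training pursues.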

Module content

  • Introduction: Logistic Neuron, training and cost functions.
    Architectures: Feed-forward and recurrent networks. Applications of neural networks.
  • Optimization strategies: Minimization of loss functions, gradient descent, stochastic gradient descent, mini-batch gradient descent, implementation of gradient descent optimizers in Python.
  • Training of Deep Neural Networks: Backpropagation, computational graphs, automatic differentiation, special optimizers such as Nesterov accelerated gradient, AdaGrad, or RMSProp; tricks for faster training, batch normalization, gradient clipping, special activation functions such as non-saturating activation functions, regularization using dropout.
  • Multilayer Perceptron (MLP): implementation of an MLP including backpropagation in Python.
  • Convolutional Neural Networks (CNNs): Convolutional and pooling layers, data augmentation, popular CNN architectures, transfer learning, applications.
  • Practical Considerations and Methodology: Deep Learning frameworks such as TensorFlow; GPU vs. CPU; visualizations such as activation maximization, class activation maps, saliency maps; performance metrics, selecting hyper-parameters, debugging strategies.
  • Recurrent Neural Networks: Vanishing and exploding gradients, special memory cells such as Gated Recurrent Units (GRU) or Long Short-Term Memory (LSTM), static and dynamic unrolling, sequence classifiers, sequence-to-sequence models, encoder-decoder models for language translation.
  • Special and Current Research Topics such as
    • Autoencoders: principal component analysis using autoencoders; special applications such as denoising auto-encoders.
    • Generative Adversarial Models.
    • Learning embeddings for word representations, attention mechanism, transformers.
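As a taste of the first two content blocks (the logistic neuron and loss minimization by stochastic gradient descent), the following self-contained sketch trains a single logistic neuron on a toy, linearly separable problem; the data and hyper-parameters are invented purely for illustration:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data: label is 1 iff x1 + x2 > 1 (linearly separable)
random.seed(0)
data = [((x1, x2), 1.0 if x1 + x2 > 1 else 0.0)
        for x1, x2 in ((random.random(), random.random())
                       for _ in range(200))]

w = [0.0, 0.0]
b = 0.0
lr = 0.5  # learning rate

for epoch in range(100):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # for binary cross entropy, the gradient wrt the logit is (p - y)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

accuracy = sum(
    (sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (y == 1.0)
    for (x1, x2), y in data
) / len(data)
```

Stacking many such neurons into layers, and propagating the same error signal backwards through them, is precisely what the backpropagation and MLP blocks of the course develop.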

Teaching and learning methods

Classroom teaching; programming exercises

Bibliography

I. Goodfellow, Y. Bengio, A. Courville: "Deep Learning", MIT Press, 2016. ISBN: 978-0262035613.

N. Buduma: "Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms", O'Reilly, 2017. ISBN: 978-1491925614.

A. Géron: "Hands-On Machine Learning with Scikit-Learn and TensorFlow", O'Reilly, 2017. ISBN: 978-1491962299.

C. M. Bishop: "Neural Networks for Pattern Recognition", Clarendon Press, 1996. ISBN: 978-0198538646.

K. P. Murphy: "Machine Learning: A Probabilistic Perspective", MIT Press, 2012. ISBN: 978-0262018029.

T. M. Mitchell: "Machine Learning", McGraw-Hill, 1997. ISBN: 0070428077.

Download the full module description
