MSE Master of Science in Engineering

The Swiss engineering master's degree


Each module is worth 3 ECTS. You choose a total of 10 modules (30 ECTS) from the following module categories:

  • 12-15 ECTS in technical scientific modules (TSM)
    TSM modules teach profile-specific specialist skills and supplement the decentralised specialisation modules.
  • 9-12 ECTS in fundamental theoretical principles modules (FTP)
    FTP modules deal with theoretical fundamentals such as higher mathematics, physics, information theory, chemistry, etc. They impart deeper, more abstract scientific knowledge and help you to bridge the gap between abstraction and application that is so important for innovation.
  • 6-9 ECTS in context modules (CM)
    CM modules will impart additional skills in areas such as technology management, business administration, communication, project management, patent law, contract law, etc.

In the module description (PDF download) you will find the complete language information for each module, divided into the following categories:

  • instruction
  • documentation
  • examination

Advanced Natural Language Processing (TSM_AdvNLP)

This module enables students to understand the main theoretical concepts relevant to text and speech processing, and to design applications which, on the one hand, find, classify, or extract information from text or speech and, on the other hand, generate text or speech to summarize or translate language data or to respond to user instructions. The module briefly reviews the fundamentals of natural language processing from a data science perspective, with emphasis on methods that support recent approaches based on deep learning models. It highlights the origins and rationale of foundation models, which can be fine-tuned, instructed, or given suitable prompts to perform a wide range of tasks, thus paving the way towards generative artificial intelligence. The module also provides practical knowledge of multi-task models for spoken or written input, multilingual models, and interactive systems, as well as practical skills through hands-on exercises with open-source libraries and models, focusing on the rapid prototyping of solutions to a range of typical problems.

The module is divided into four parts. The first part reviews the main concepts of language analysis and then focuses on the representation of words and the uses of bags-of-words, from the vector space model to non-contextual word embeddings learned with neural networks; applications include document retrieval and text similarity. In the second part, deep learning models for sequences of words are discussed in depth, preceded by a review of statistical sequence models, with applications such as part-of-speech tagging and named entity recognition. The module presents a paradigm based on foundation models with Transformers (encoders, decoders, or both) which can be fine-tuned to various tasks or used for zero-shot learning. The third part surveys neural models for speech processing and synthesis, along with typical tasks, data, and evaluation methods. Finally, the module presents methods that enable natural interaction with generative AI systems, including instruction tuning and reinforcement learning from human feedback, along with spoken and written chatbots, concluding with a discussion of the limitations and risks of such systems.

Prerequisites

  • Mathematics: basic linear algebra, probability theory (e.g., Bayes' theorem), descriptive statistics, and hypothesis testing.
  • Machine learning and deep learning (e.g., classifiers, neural networks), basic notions of natural language processing and information retrieval (e.g., preprocessing and manipulating text data, tokenization, tagging, TF-IDF, query-based text retrieval).
  • Programming for data science: good command of Python, ability to handle the entire data science pipeline (data acquisition and analysis, design and training of ML models, evaluation and interpretation of results).


Learning Objectives

  • The students are able to frame a problem in the domain of text and speech processing and generation.  They can relate a new problem to a range of known problems and adapt solutions to their needs.
  • The students are able to specify the characteristics of the data and features needed to train and test models, along with the suitable evaluation metrics.  Given a language processing problem, they are able to design comparative experiments to identify the most promising solution.
  • The students are able to select, among statistical and neural models, the most effective ones for a given task in language or speech processing and generation.  Moreover, they know how to select, among existing libraries and pretrained models, the most suitable ones for a given task.  The students are aware of the capabilities of foundation models and know how to adapt them to a specific task through additional layers, fine-tuning, or prompt engineering.


Contents of Module

Part I: Words [ca. 20%]

1. Brief review of basic notions of natural language processing: properties of language, speech, and text; subword tokenization, including BPE and SentencePiece; main processing stages, tasks, evaluation metrics, and applications.

2. Text classification and sentiment analysis based on statistical learning with a bag-of-words representation; evaluation metrics for these tasks.

3. Word vectors and their uses: (a) high-dimensional vectors, the vector space model (VSM), and application to document retrieval; (b) low-dimensional vectors, non-contextual word embeddings, LSA, word2vec, FastText, and applications to text similarity.
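
As a minimal illustration of the vector space model from point 3, the sketch below ranks a toy document collection against a query using TF-IDF vectors and cosine similarity. It assumes scikit-learn is installed; the documents and the query are invented purely for demonstration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy document collection and query (illustrative only)
    docs = ["neural networks for speech recognition",
            "statistical machine translation of text",
            "word embeddings capture lexical similarity"]
    query = ["similarity between word vectors"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)   # one sparse TF-IDF vector per document
    query_vector = vectorizer.transform(query)

    # Rank documents by cosine similarity to the query (higher = more relevant)
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    for i in scores.argsort()[::-1]:
        print(f"{scores[i]:.3f}  {docs[i]}")

Replacing the sparse TF-IDF vectors with dense, non-contextual embeddings such as word2vec or FastText keeps the same similarity-based ranking scheme while capturing lexical relatedness beyond exact word overlap.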

Part II: Word Sequences [ca. 35%]

4. Statistical modeling of word sequences for word-level, span-level and sentence-level tasks; application to part-of-speech (POS) tagging, named entity recognition (NER), and parsing; evaluation methods for these tasks.

5. Language modeling, from n-grams to neural networks; sequence-to-sequence models using deep neural networks, RNNs, Transformers; application to machine translation and text summarization; evaluation methods for these tasks.

6. Foundation models: encoders, decoders, and encoder-decoder models; pre-training tasks; adaptation of models to other tasks using additional layers; fine-tuning pre-trained models; few-shot learning in large language models.
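
As a concrete illustration of point 6, the following sketch loads a pretrained Transformer encoder and attaches a randomly initialised classification head, ready to be fine-tuned on task-specific data. It assumes the Hugging Face transformers library and PyTorch; the distilbert-base-uncased checkpoint and the two-label setup are illustrative assumptions.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Illustrative checkpoint; any pretrained encoder could be substituted
    model_name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Loads the pretrained encoder and adds a new, randomly initialised 2-class head
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    batch = tokenizer(["an example sentence to classify"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits    # shape (1, 2): one raw score per class
    print(logits)

Fine-tuning would then update the head (and usually the encoder as well) on labelled examples, for instance with a standard PyTorch training loop or the transformers Trainer API.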

Part III: Speech [ca. 20%]

7. Representation and processing of speech with neural networks; statistical models vs. neural architectures based on RNNs and Transformers; CTC architecture; survey of existing frameworks and pretrained models; notions of speech synthesis.

8. Speech processing tasks, benchmark data and evaluation methods; topic detection, information extraction, and speech translation; multilingual systems.
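
As a concrete entry point to the pretrained speech models and tasks of points 7 and 8, the sketch below transcribes an audio file with a CTC-based wav2vec 2.0 checkpoint through the Hugging Face pipeline. The checkpoint name and the path "speech_sample.wav" are assumptions chosen for illustration.

    from transformers import pipeline

    # Pretrained CTC acoustic model; this checkpoint expects 16 kHz mono audio
    asr = pipeline("automatic-speech-recognition",
                   model="facebook/wav2vec2-base-960h")

    result = asr("speech_sample.wav")     # placeholder path to a local audio file
    print(result["text"])

The transcript produced this way is the usual starting point for downstream tasks such as topic detection, information extraction, or speech translation.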

Part IV: Interaction [ca. 25%]

9. Large language models: survey and emerging capabilities; instruction tuning and reinforcement learning from human feedback (RLHF); prompt engineering.

10. Applications of generative AI; benchmarks with multiple tasks for evaluating foundation models and LLMs; limitations and risks; alignment with human preferences.

11. Spoken and written human-computer interaction: chatbots and dialogue systems.
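
To make the prompting theme of this part concrete, the toy sketch below completes a few-shot prompt with a small generative language model. The gpt2 checkpoint is used only because it is small and freely available; it is not instruction-tuned, so an instruction-following or RLHF-trained model would behave quite differently.

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # A few-shot prompt: two worked examples followed by the case to complete
    prompt = ("Translate English to French.\n"
              "English: cheese\nFrench: fromage\n"
              "English: bread\nFrench:")

    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(out[0]["generated_text"])

Instruction tuning and reinforcement learning from human feedback, discussed in point 9, are what turn such raw next-token prediction into the conversational behaviour of modern chatbots.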

Teaching and Learning Methods

Classroom teaching; programming exercises.

Literature

Speech and Language Processing, Daniel Jurafsky and James H. Martin, 2nd edition, Prentice-Hall, 2008 / 3rd edition draft, online, 2023.

Introduction to Information Retrieval, Christopher Manning, Prabhakar Raghavan and Hinrich Schütze, Cambridge University Press, 2008.

Neural Network Methods for Natural Language Processing, Yoav Goldberg, Morgan & Claypool, 2017.

Supplemental material (articles) will be indicated for each lesson.
