Data Compression With Deep Probabilistic Models

Course by Prof. Robert Bamler at the University of Tuebingen.


Tentative Schedule & Course Materials

Details of the following schedule are still subject to change.

Session 0 (Lecture: 19 April 2021)
Preview

Find out what you will learn in this course: from the information-theoretic foundations of communication to concrete practical implementations of deep-learning-based compressors.

Session 1 (Lecture: 20 April, Tutorial: 26 April)
Lossless Compression I: Symbol Codes

How can we compress a sequence of symbols, and what do we have to know about the data source?

Session 2 (Lecture: 27 April, Tutorial: 3 May)
Theoretical Bounds for Lossless Compression

What is the theoretical bound for lossless compression? We'll encounter a term called “entropy”.
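
A first taste of the bound in question: for a source that emits symbol x with probability p(x), no lossless code can spend fewer bits per symbol on average than the entropy,

```latex
H(p) \;=\; -\sum_{x} p(x)\,\log_2 p(x),
\qquad
\mathbb{E}[\text{code length}] \;\ge\; H(p)
```

where the inequality holds for every uniquely decodable code. How to prove this, and how closely we can approach it, is what this session is about.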

Session 3 (Lecture: 4 May, Tutorial: 10 May)
Optimality of Huffman Coding

We prove that the Huffman Coding algorithm produces an optimal symbol code.
In the tutorial: form a team for your group project & discuss suggested topics.
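
If you want to experiment before the lecture, here is a minimal sketch of the Huffman construction in Python; the function name and the toy probabilities are made up for this sketch, and the lecture is about the algorithm and its optimality proof, not about this particular code.

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a binary Huffman code for {symbol: probability}; returns {symbol: codeword}."""
    tie = count()  # tie-breaker so heapq never has to compare the code dicts
    heap = [(p, next(tie), {s: ''}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)   # the two least probable subtrees ...
        p1, _, code1 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in code0.items()}
        merged.update({s: '1' + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))  # ... are merged
    return heap[0][2]

print(huffman_code({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}; the codeword lengths are what matters
```

Optimality here means: among all prefix-free symbol codes for the given probabilities, none has a smaller expected codeword length.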

Session 4 (Lecture: 11 May, Tutorial: 17 May)
Random Variables and Autoregressive Models

Our first example of a deep-learning-based data compression method, after a crash course on important concepts from probability theory.
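
The one identity to keep in mind for this session is the chain rule of probability: it lets a model predict one symbol at a time, while an entropy coder spends roughly the negative log-probability of each prediction, in bits:

```latex
p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p(x_i \mid x_1, \dots, x_{i-1})
\qquad\Longrightarrow\qquad
-\log_2 p(x_1, \dots, x_n) \;=\; \sum_{i=1}^{n} \bigl(-\log_2 p(x_i \mid x_{<i})\bigr).
```

An autoregressive neural network provides each conditional distribution; a stream code (sessions 6 and 7) then turns the right-hand side into an actual bit string of roughly that length.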

Session 5 (Lecture: 18 May, Tutorial: 31 May)
Bits-Back Coding With Latent Variable Models

Learn how to “short sell” bits (no, not on the stock exchange).
Deadline for finalizing the team members and topic of your group project.
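
One common way to summarize the accounting behind the “short selling”, assuming a latent variable model p(x, z) and an approximate posterior q(z | x):

```latex
R_{\text{bits-back}}(x)
\;=\;
\underbrace{\mathbb{E}_{q(z \mid x)}\bigl[-\log_2 p(x, z)\bigr]}_{\text{bits spent}}
\;-\;
\underbrace{\mathbb{E}_{q(z \mid x)}\bigl[-\log_2 q(z \mid x)\bigr]}_{\text{bits recovered}}
\;=\;
-\mathrm{ELBO}(x).
```

The recovered bits are the ones that were used to pick z in the first place; the net cost is the negative evidence lower bound, which reappears in session 8.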

Session 6 (Lecture: 1 June, Tutorial: 7 June)
Stream Codes I: Asymmetric Numeral Systems (ANS)

We've already learned how to create an optimal symbol code, but can we do better than symbol codes? Yes, if we're willing to think in fractional bits.
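
If you can't wait for the problem set: below is a tiny, hypothetical sketch of the core ANS update in Python. It keeps the entire coder state in one arbitrary-precision integer and skips the renormalization that a practical streaming implementation needs; all names and the toy frequency table are invented for this sketch.

```python
# Minimal ANS-style coder: freqs maps each symbol to an integer frequency,
# M is the sum of all frequencies, and cum[s] is the cumulative frequency of s.

def build_tables(freqs):
    M = sum(freqs.values())
    cum, c = {}, 0
    for s, f in freqs.items():
        cum[s] = c
        c += f
    return M, cum

def encode(symbols, freqs):
    M, cum = build_tables(freqs)
    x = 1                                   # initial coder state
    for s in symbols:
        f, c = freqs[s], cum[s]
        x = (x // f) * M + c + (x % f)      # push the symbol onto the state
    return x

def decode(x, n, freqs):
    M, cum = build_tables(freqs)
    out = []
    for _ in range(n):
        slot = x % M
        s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
        out.append(s)
        x = freqs[s] * (x // M) + slot - cum[s]   # pop the symbol from the state
    return out[::-1]                              # symbols come out in reverse order

freqs = {'a': 3, 'b': 1}
msg = list('aababaa')
assert decode(encode(msg, freqs), len(msg), freqs) == msg
```

Note that the decoder recovers symbols in reverse order: ANS behaves like a stack, which is precisely the property that bits-back coding (session 5) exploits.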

Session 7 (Lecture: 8 June, Tutorial: 14 June)
Stream Codes II: Arithmetic Coding and Range Coding

A “bonus” video on a famous stream code with queue semantics, while the problem set discusses the last missing piece of the ANS algorithm.

Session 8 (Lecture: 15 June, Tutorial: 21 June)
Deep Latent Variable Models and Variational Autoencoders

Minimizing the bitrate directly leads us to amortized variational expectation maximization. Let's just call it “variational autoencoders”, though.
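
For reference, the objective that comes out of this derivation, with logarithms in base 2 so that both terms read as bitrates (q_phi denotes the encoder network, p_theta the decoder):

```latex
-\mathrm{ELBO}(x)
\;=\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\bigl[-\log_2 p_\theta(x \mid z)\bigr]}_{\text{expected reconstruction bits}}
\;+\;
\underbrace{D_{\mathrm{KL}}\bigl(q_\phi(z \mid x)\,\big\|\,p(z)\bigr)}_{\text{net bits for the latents}}
```

Averaging this quantity over the data and minimizing it over both networks is the amortized variational expectation maximization mentioned above.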

Session 9 (Lecture: 22 June, Tutorial: 28 June)
Channel Coding Theorem and Theory of Lossy Compression

Two final gems of information theory before we venture into more applied issues starting with the next session.
Deadline for the status report on your group project.
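
The two central quantities of this session, stated without proof: the capacity of a noisy channel, and the rate-distortion function that bounds lossy compression at a given allowed distortion D:

```latex
C \;=\; \max_{p(x)} I(X; Y),
\qquad
R(D) \;=\; \min_{p(\hat{x} \mid x)\,:\;\mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X}).
```

The channel coding theorem says that reliable communication is possible at any rate below C; the rate-distortion theorem says that reconstructions within distortion D are achievable at any rate above R(D), which is the theoretical target for the lossy compressors of the remaining sessions.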

Session 10 (Lecture: 29 June)
Practical Machine Learning Based Lossy Compression

What are some proven model architectures for ML-based lossy compression?
Instead of a tutorial: individual appointments to discuss your group projects.

Session 11 (Lecture: 6 July)
Recent Advances in Neural Compression

What are the latest trends and open research questions?
Instead of a tutorial: continuing with the group appointments.

Session 12 (Guest Lectures: 13 & 19 July)
Guest Stars

Two pioneers of neural compression:
Tue, 13 July: Dr. Christopher Schroers from DisneyResearch|Studios in Zurich
Mon, 19 July: Prof. Dr. Stephan Mandt from University of California at Irvine

Session 13 (Lecture: 20 July)
Tips for Giving Presentations and for Scientific Writing

Presenting your group project next week and finalizing the written report should be a joy, not a pain! These tips might help you.

Session 14 (Workshop: 26 & 27 July)
Group Project Presentations

The stage is yours! Let's celebrate your achievements in the group projects with a round of presentations and demos, and with plenty of time for your questions and your feedback.