Data Compression With Deep Probabilistic Models
Course by Prof. Robert Bamler at the University of Tübingen
At a Glance
- Mondays 16:15-17:45 and Tuesdays 12:15-13:45 on Zoom.
- First lecture: Monday, 19 April; after that, lectures will be on Tuesdays, see
detailed tentative schedule below.
- 6 ECTS with grade based on a group project (you may skip the group project if you don't need the grade).
- To encourage interactivity, neither the lectures nor the tutorials will be recorded. However, I'm providing
supplementary video material that repeats important concepts discussed in the lectures on a dedicated YouTube channel.
Tentative Schedule & Course Materials
Details of the following schedule are still subject to change. Course materials (videos, lecture notes, problem
sets, code) will be made available here once the course starts.
Find out what you will learn in this course: from the information-theoretical foundations of data compression to
concrete practical implementations of deep-learning based compressors.
Lossless Compression I: Symbol Codes
How can we compress a sequence of symbols and what do we have to know about the data source?
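As a first taste of what a symbol code looks like, here is a minimal sketch of Huffman coding in Python (an illustrative implementation with made-up names, not the course's reference code):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a prefix-free symbol code from empirical symbol frequencies."""
    freq = Counter(message)
    # Heap entries: (total frequency, tie breaker, {symbol: codeword so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded = "".join(code[s] for s in "abracadabra")
```

Frequent symbols get short codewords (here, "a" gets a single bit), and because no codeword is a prefix of another, the bitstream can be decoded unambiguously.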
Theoretical Bounds for Lossless Compression
What is the theoretical bound for lossless compression? We'll encounter a quantity called entropy.
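For a concrete feel for this bound, one can compute the entropy of a toy source distribution (a sketch with illustrative names, not part of the official course materials):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the optimal expected code length per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A biased coin is less uncertain than a fair coin, so it compresses better.
fair = entropy([0.5, 0.5])    # 1 bit per symbol
biased = entropy([0.9, 0.1])  # well below 1 bit per symbol
```

No lossless code can beat the entropy of the data source on average, which is why modeling the source well matters so much for compression.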
A Primer on Deep Generative Models
How can we model more complex data sources with deep generative models?
In tutorial: form a team for your group project & discuss suggested topics.
Probability Theory II: Continuous Random Variables
How can we model the arrival time of a late bus? And why can we define only a “differential” entropy of this
random variable?
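One hint at why differential entropy behaves differently from its discrete counterpart: it can be negative. The sketch below (illustrative names, not from the course materials) evaluates the closed-form differential entropy of an exponential distribution, a natural model for waiting times:

```python
import math

def exp_differential_entropy(rate):
    """Differential entropy (in bits) of an Exponential(rate) distribution:
    h = log2(e / rate). Unlike discrete entropy, it has no lower bound of 0."""
    return math.log2(math.e / rate)

h_slow = exp_differential_entropy(0.5)   # broad distribution: positive
h_fast = exp_differential_entropy(10.0)  # concentrated distribution: negative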
A Primer on Scalable Bayesian Inference
How can we quantify parameter uncertainty in machine learning models?
Deadline for finalizing team members and topic of your group project.
Lossless Compression II: Stream Codes
We've already learned how to create an optimal symbol code, but can we do better than symbol codes? Yes, if
we're willing to think in fractional bits.
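The "fractional bits" idea can be previewed with the interval-narrowing picture behind arithmetic coding (an illustrative sketch with made-up names, not the course's reference implementation):

```python
import math

def interval_for(message, probs):
    """Narrow [low, high) once per symbol. The final width is the product of
    the symbol probabilities, so ~ -log2(width) bits identify the interval."""
    low, high = 0.0, 1.0
    for sym in message:
        width = high - low
        c = 0.0  # cumulative probability up to (but excluding) `sym`
        for s, p in probs.items():
            if s == sym:
                low, high = low + c * width, low + (c + p) * width
                break
            c += p
        else:
            raise KeyError(sym)
    return low, high

probs = {"a": 0.8, "b": 0.2}
low, high = interval_for("aab", probs)
bits_needed = -math.log2(high - low)  # just under 3 bits for 3 symbols
```

Each symbol contributes -log2(p) bits to the total, even when that is not a whole number; a symbol code would have to round every symbol up to at least one full bit.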
Lossless Compression III: Modern Stream Codes
Learn how to “short sell” bits. No, not on the stock exchange. This simple trick recently led to a fast new
stream code for complex models.
Channel Coding
How fast can we reliably transmit data over an unreliable channel? This gem of information theory will also
prepare us for lossy compression.
Lossy Compression I: Rate-Distortion Theory
How do we quantify lossy compression performance, what's its theoretical bound, and does the channel matter?
Deadline for status report on project.
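To illustrate what such a theoretical bound can look like, the rate-distortion function of a Gaussian source under squared-error distortion has a simple closed form (a sketch with illustrative names, not from the course materials):

```python
import math

def gaussian_rate(variance, distortion):
    """Rate-distortion function of a Gaussian source under squared error:
    R(D) = max(0, 0.5 * log2(variance / D)) bits per sample."""
    return max(0.0, 0.5 * math.log2(variance / distortion))

r_half = gaussian_rate(1.0, 0.25)  # 1 bit per sample for a quarter of the variance
r_free = gaussian_rate(1.0, 1.0)   # 0 bits: just output the mean
```

Every extra bit per sample cuts the achievable distortion by a factor of four, the "6 dB per bit" rule of thumb from quantization.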
Lossy Compression II: Practical Lossy Compression
How do existing classical and machine learning based lossy codecs work?
Instead of tutorial: individual appointments to discuss your group projects.
Recent Advances in Neural Compression
What are the latest trends and open research questions?
Instead of tutorial: continuing with the group appointments.
Guest Lectures: Two Pioneers of Neural Compression
- Tue, 13 July: Dr. Christopher Schroers from DisneyResearch|Studios in Zurich
- Mon, 19 July: Prof. Dr. Stephan Mandt from the University of California, Irvine
Tips for Giving Presentations and for Scientific Writing
Presenting your group project next week and finalizing the written report should be a joy, not a pain! These
tips might help you.
Group Project Presentations
The stage is yours! Let's celebrate your achievements in the group projects with a round of presentations and
demos, and with plenty of time for your questions and your feedback.