Selected Slides and Talks

Slides: Towards Flow-based MCMC for Lattice Gauge Theory with Fermions, Physics ∩ ML, 2021

Keywords: flows, fermions, lattice QCD, manifolds, equivariance


Slides: Normalizing Flows on Tori and Spheres, ICML 2020

Keywords: flows, torus, sphere, manifold, exponential map, circular splines, Möbius, non-compact transform

Normalizing-Flows-on-Tori-and-Spheres-ICML-presentation


ICML 2020 Tutorial: Representation learning without labels

Abstract:

The field of representation learning without labels, also known as unsupervised or self-supervised learning, is seeing significant progress. New techniques have been put forward that approach or even exceed the performance of fully supervised techniques in large-scale and competitive benchmarks such as image classification, while also showing improvements in label-efficiency by multiple orders of magnitude. Representation learning without labels is therefore finally starting to address some of the major challenges in modern deep learning. To continue making progress, however, it is important to systematically understand the nature of the learnt representations and the learning objectives that give rise to them.

In this tutorial we will:
– Provide a unifying overview of the state of the art in representation learning without labels,
– Contextualise these methods through a number of theoretical lenses, including generative modelling, manifold learning and causality,
– Argue for the importance of careful and systematic evaluation of representations and provide an overview of the pros and cons of current evaluation methods.

ICML-2020-Tutorial-Slides


Talk: Generative Models and Symmetries

Abstract:

In this talk I will discuss how ideas from physics, such as phase transitions and gauge symmetries, provide powerful tools for analysing and building generative models. In particular, the study of symmetries in physics has revolutionised our understanding of the world and permeates every fundamental physical model. Inspired by this, I will focus on our recent work on incorporating gauge symmetries into normalizing-flow generative models and on its potential applications in the sciences and ML.
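
A one-line summary of the mechanism, stated in generic notation rather than the talk's own: if the base density is invariant under a symmetry group G that acts by volume-preserving transformations, and every flow layer is G-equivariant, then the flow's density inherits the symmetry:

\[
q_0(g \cdot z) = q_0(z) \quad \text{and} \quad f(g \cdot z) = g \cdot f(z) \;\; \forall g \in G
\quad \Longrightarrow \quad
q_K(g \cdot x) = q_K(x).
\]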

Generative-Models-and-Symmetries


Talk at Simons Center for Geometry and Physics: Generative Models and Symmetries

Keywords: generative models, probability divergence, thermodynamics, phase transitions, VAEs, GANs, normalizing flows, equivariance, symmetries, equivariant flows.


Hammers & Nails, High-Energy Physics and Machine Learning 2019, Weizmann Institute of Science

Slides: Unsupervised Learning and Decision Making

Keywords: reinforcement learning, generative models, model-based reinforcement learning, generative world models, group theory, equivariant representations, predictive information gain, GQN, SimCore, 3D scenes.

unsupervised_learning_decision_making


Tutorial Slides: Deep Generative Models: Foundations, applications and open problems, CCN 2018

Keywords: variational inference, generative models, VAE, approximate inference, normalizing-flows, R-NVP, IAF, density estimation, marginalization, GANs, moment-matching, uncertainty estimation, stochastic optimisation, probability divergences, adversarial, causality, autoregressive.

Abstract:

This tutorial will be a review of recent advances in deep generative models. Generative models have a long history and recent methods have combined the generality of probabilistic reasoning with the scalability of deep learning to develop learning algorithms that have been applied to a wide variety of problems, giving state-of-the-art results in image generation, text-to-speech synthesis, and image captioning, amongst many others. Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning and for model-based reinforcement learning. At the end of this tutorial, audience members will have a full understanding of the latest advances in generative modelling, covering three of the most active model families: Markov models, latent variable models and implicit models, and how these models can be scaled to high-dimensional data. The tutorial will expose many questions that remain open in this area, and for which there remains a great deal of opportunity for researchers.
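
For orientation, the three model families can be summarised by how they define a distribution over data (schematic forms, not tied to the tutorial's notation):

\[
\text{autoregressive / Markov:} \;\; p(x) = \prod_i p(x_i \mid x_{<i}), \qquad
\text{latent variable:} \;\; p(x) = \int p(x \mid z)\, p(z)\, dz,
\]
\[
\text{implicit:} \;\; x = g_\theta(z), \; z \sim p(z), \;\; \text{with no tractable density, only a sampler.}
\]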


Video & Slides: Approximate Inference and Deep Generative Models, CERN 2018

Keywords: variational inference, generative models, VAE, approximate inference, normalizing-flows, R-NVP, IAF, density estimation, marginalization.

Abstract:

Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning and for model-based reinforcement learning. In this talk I’ll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I’ll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
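
The workhorse quantity behind most of these methods is the variational lower bound (ELBO), stated here in standard form for reference:

\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\Vert\, p(z)\right),
\]

where amortized inference trains the parameters \(\phi\) of the approximate posterior jointly with the model parameters \(\theta\).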


Slides: Tutorial on Deep Generative Models, UAI 2017

Video: Tutorial on Deep Generative Models, UAI 2017

Keywords: variational inference, generative models, VAEs, GANs, approximate inference, normalizing-flows, R-NVP, IAF, density estimation.

Abstract:

This tutorial will be a review of recent advances in deep generative models. Generative models have a long history at UAI and recent methods have combined the generality of probabilistic reasoning with the scalability of deep learning to develop learning algorithms that have been applied to a wide variety of problems, giving state-of-the-art results in image generation, text-to-speech synthesis, and image captioning, amongst many others. Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning and for model-based reinforcement learning. At the end of this tutorial, audience members will have a full understanding of the latest advances in generative modelling, covering three of the most active model families: Markov models, latent variable models and implicit models, and how these models can be scaled to high-dimensional data. The tutorial will expose many questions that remain open in this area, and for which there remains a great deal of opportunity for members of the UAI community.


Video: One-Shot Generalization in Deep Generative Models, ICML 2016

Keywords: variational inference, generative models, one-shot learning, one-shot density estimation.

Abstract:

Humans have an impressive ability to reason about new concepts and experiences from just a single example. In particular, humans have a capacity for one-shot generalization: an ability to encounter a new concept, understand its structure, and then generate compelling alternative variations of the concept. We build machine learning systems with this important capacity by developing new deep generative models: models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state of the art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models on three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts. In all cases our models are able to generate compelling and diverse samples, having seen new examples just once, providing an important class of general-purpose models for one-shot machine learning.
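
Schematically, the sequential generative models described here factor generation over T steps of latent computation (illustrative notation, not the paper's):

\[
p(x) = \int p\!\left(x \mid z_{1:T}\right) \prod_{t=1}^{T} p\!\left(z_t \mid z_{<t}\right) dz_{1:T},
\]

with attention determining what each step reads from context and writes to the output.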


Video: Variational Inference with Normalizing Flows, ICML 2015

Keywords: variational inference, generative models, normalizing flows, log-det-Jacobian.

Abstract:

The choice of the approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in performance and applicability of variational inference.
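
The central identity of the paper: composing K invertible maps f_1, …, f_K transforms a simple initial density q_0 into

\[
z_K = f_K \circ \cdots \circ f_1(z_0), \qquad
\log q_K(z_K) = \log q_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|.
\]

For example, the planar flow of the paper,

\[
f(z) = z + u\, h(w^\top z + b), \qquad
\log\left|\det \frac{\partial f}{\partial z}\right| = \log\left|1 + u^\top \psi(z)\right|, \quad \psi(z) = h'(w^\top z + b)\, w,
\]

has a log-det-Jacobian that costs O(D) rather than O(D^3) to evaluate.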


Slides: The Helmholtz Machine Revisited, EPFL 2012

Keywords: variational inference, generative models, temporal models, Helmholtz machine, Boltzmann machine, wake-sleep, REINFORCE, variance-reduction.

Abstract: In this talk, given at EPFL in 2012, I introduced deep latent Gaussian models (DLGMs), recurrent DLGMs for temporal modelling (later named VRNNs), and the application of the REINFORCE algorithm to variational inference.
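
The REINFORCE identity referred to here is the score-function gradient estimator, which makes the variational objective differentiable with respect to the parameters of the approximate posterior even when sampling is not reparameterisable:

\[
\nabla_\phi\, \mathbb{E}_{q_\phi(z)}\!\left[f(z)\right] = \mathbb{E}_{q_\phi(z)}\!\left[f(z)\, \nabla_\phi \log q_\phi(z)\right],
\]

and is typically paired with control variates (baselines), hence the variance-reduction keyword above.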
