AIAI Seminar - Monday 14th July 2025 - Magdalena Proszewska

Title: "On Designing Diffusion Autoencoders for Efficient Generation and Representation Learning"

Abstract: Diffusion autoencoders (DAs) are variants of diffusion generative models that use an input-dependent latent variable to capture representations alongside the diffusion process. These representations, to varying extents, can be used for tasks such as downstream classification, controllable generation, and interpolation. However, the generative performance of DAs relies heavily on how well the latent variables can be modelled and subsequently sampled from. Better generative modelling is also the primary goal of another class of diffusion models: those that learn their forward (noising) process. While effective at adjusting the noise process in an input-dependent manner, these models must satisfy additional constraints derived from the terminal conditions of the diffusion process. Here, we draw a connection between these two classes of models and show that certain design decisions in the DA framework (latent variable choice, conditioning method, etc.), leading to a model we term DMZ, allow us to obtain the best of both worlds: effective representations as evaluated on downstream tasks, including domain transfer, as well as more efficient modelling and generation with fewer denoising steps compared to standard diffusion models.
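To make the setup concrete, here is a minimal sketch of a diffusion autoencoder training step in PyTorch: an encoder maps the clean input to a latent z, and the denoiser is conditioned on z at every diffusion step. All names, network sizes, and the noise schedule below are illustrative assumptions, not the construction used in the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a clean input x0 to an input-dependent latent z (hypothetical sizes)."""
    def __init__(self, x_dim=784, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.SiLU(), nn.Linear(256, z_dim))

    def forward(self, x0):
        return self.net(x0)

class Denoiser(nn.Module):
    """Predicts the noise in x_t, conditioned on the timestep and the latent z."""
    def __init__(self, x_dim=784, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim + 1, 256), nn.SiLU(), nn.Linear(256, x_dim))

    def forward(self, xt, t, z):
        # Simple concatenation-based conditioning; the paper's conditioning
        # method is one of the design choices under study.
        return self.net(torch.cat([xt, z, t[:, None]], dim=-1))

def training_step(encoder, denoiser, x0, T=1000):
    z = encoder(x0)                            # input-dependent latent
    t = torch.randint(0, T, (x0.shape[0],))    # random diffusion step per sample
    alpha_bar = torch.cos(0.5 * torch.pi * t / T) ** 2  # toy cosine noise schedule
    eps = torch.randn_like(x0)
    # Forward (noising) process: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps
    xt = alpha_bar.sqrt()[:, None] * x0 + (1 - alpha_bar).sqrt()[:, None] * eps
    eps_hat = denoiser(xt, t.float() / T, z)
    return ((eps_hat - eps) ** 2).mean()       # standard epsilon-prediction loss

encoder, denoiser = Encoder(), Denoiser()
x0 = torch.randn(8, 784)                       # stand-in batch of flattened inputs
loss = training_step(encoder, denoiser, x0)
loss.backward()
```

At generation time z must itself be sampled, for example from a prior fitted over encoder outputs, which is precisely the latent-modelling bottleneck the abstract describes.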

This talk presents our recent CVPR workshop paper and NeurIPS submission exploring the intersection of generative modelling and representation learning; the aim is to exchange ideas and connect with others in the field.

Link to related paper: https://arxiv.org/abs/2506.00136