
Improved sampling via learned diffusions

von Lorenz Richter, Julius Berner, Guan-Horng Liu

Year:

2024

Publication:

International Conference on Learning Representations (ICLR)

Abstract:

Recently, a series of papers proposed deep learning-based approaches to sample from unnormalized target densities using controlled diffusion processes. In this work, we identify these approaches as special cases of the Schrödinger bridge problem, seeking the most likely stochastic evolution between a given prior distribution and the specified target. We further generalize this framework by introducing a variational formulation based on divergences between path space measures of time-reversed diffusion processes. This abstract perspective leads to practical losses that can be optimized by gradient-based algorithms and includes previous objectives as special cases. At the same time, it allows us to consider divergences other than the reverse Kullback-Leibler divergence that is known to suffer from mode collapse. In particular, we propose the so-called log-variance loss, which exhibits favorable numerical properties and leads to significantly improved performance across all considered approaches.
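To make the contrast between the reverse Kullback-Leibler objective and the proposed log-variance loss concrete, here is a minimal numerical sketch. It assumes only that both losses are computed from log importance weights `log_w = log(dP/dQ)` evaluated at samples (in the paper these would be log Radon-Nikodym derivatives between path space measures); the function names and the plain-NumPy setting are illustrative, not the authors' implementation.

```python
import numpy as np

def reverse_kl_loss(log_w):
    # Reverse KL up to a constant: E_Q[log(dQ/dP)] = -E_Q[log_w].
    # Depends on the expectation under the sampling measure Q.
    return -np.mean(log_w)

def log_variance_loss(log_w):
    # Log-variance divergence: variance of the log weights under an
    # (in principle arbitrary) reference measure. It vanishes if and
    # only if the log weights are constant, i.e. P = Q.
    return np.var(log_w)

# Toy illustration: identical measures give constant log weights,
# so the log-variance loss is exactly zero.
log_w_matched = np.zeros(1000)
print(log_variance_loss(log_w_matched))   # 0.0

# Mismatched measures give fluctuating log weights and a positive loss.
rng = np.random.default_rng(0)
log_w_mismatched = rng.normal(loc=-0.5, scale=1.0, size=1000)
print(log_variance_loss(log_w_mismatched) > 0.0)   # True
```

A practical point reflected in the sketch: the log-variance loss only requires the variance of the log weights to shrink, which does not need samples from the target and avoids the gradient pathologies associated with mode collapse under the reverse KL divergence.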

Link:

Read the paper

Additional Information


Brief introduction of the dida co-author(s) and relevance for dida's ML developments.

Dr. Lorenz Richter

With a background in stochastics and numerical analysis (FU Berlin), the mathematician has been working on deep learning algorithms for several years. Alongside his passion for theory, he has solved a wide range of practical data science problems over the past 10 years. Lorenz heads the machine learning team.