## On Independent Samples along the Langevin Dynamics and Algorithm

### With Andre Wibisono, Yale University


Sampling from a probability distribution is a fundamental algorithmic task, and one way to do so is by running a Markov chain. The mixing time of a Markov chain characterizes how long we should run the chain until the random variable converges to the stationary distribution. In this talk, we discuss the “independence time”: how long we should run a Markov chain until the initial and final random variables are approximately independent, in the sense that they have small mutual information. We study this question for two natural Markov chains: the Langevin dynamics in continuous time, and the Unadjusted Langevin Algorithm in discrete time. When the target distribution is strongly log-concave, we prove that the mutual information between the initial and final random variables decreases exponentially fast along both Markov chains. These convergence rates are tight, and they lead to an estimate of the independence time that is similar to the mixing time guarantees of these Markov chains. Our proofs use the strong data processing inequality and the regularity properties of the Langevin dynamics. Based on joint work with Jiaming Liang and Siddharth Mitra, https://arxiv.org/abs/2402.17067.
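As background for the discrete-time chain mentioned above, the Unadjusted Langevin Algorithm (ULA) iterates a gradient step on the log-density plus Gaussian noise. Below is a minimal sketch for a one-dimensional standard Gaussian target (an illustrative choice of target, step size, and chain length, not taken from the talk or the paper):

```python
import numpy as np

def ula_chain(x0, grad_f, step=0.1, n_iters=2000, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - h * grad_f(x_k) + sqrt(2h) * xi_k, xi_k ~ N(0, 1).

    grad_f is the gradient of the potential f, where the target is
    pi(x) proportional to exp(-f(x)).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    xs = [x]
    for _ in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal()
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
# Illustrative target: standard Gaussian, f(x) = x^2 / 2, so grad_f(x) = x.
# Run many independent chains from a fixed far-away start x0 = 5.0; after
# enough iterations the final samples forget the start and look Gaussian.
finals = np.array([ula_chain(5.0, lambda x: x, rng=rng)[-1] for _ in range(500)])
print(finals.mean(), finals.var())
```

For a strongly log-concave target such as this one, the final samples' empirical mean and variance approach those of the target (up to a small step-size bias), consistent with the forgetting of the initial condition that the talk quantifies via mutual information.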

- Speaker: Andre Wibisono, Yale University
- Friday 24 May 2024, 14:00–15:00
- Venue: MR12, Centre for Mathematical Sciences.
- Series: Statistics; organiser: Dr Sergio Bacallado.