Robust inference for intractable likelihood models using kernel divergences

With Francois-Xavier Briol (UCL)


Modern statistics and machine learning tools are being applied to increasingly complex phenomena, and as a result make use of increasingly complex models. A large class of such models are the so-called intractable likelihood models, where the likelihood is either too computationally expensive to evaluate or impossible to write down in closed form. This creates significant issues for classical approaches such as maximum likelihood estimation or Bayesian inference, which rely entirely on evaluations of a likelihood. In this talk, we will cover several novel inference schemes that bypass this issue. These are constructed from kernel-based discrepancies, such as maximum mean discrepancies and kernel Stein discrepancies, and can be used in either a frequentist or Bayesian framework. An important feature of our approach is that it is provably robust, in the sense that a small number of outliers or mild model misspecification will not have a significant impact on parameter estimation. In particular, we will show how the choice of kernel allows us to trade statistical efficiency for robustness. The methodology will then be illustrated on a range of intractable likelihood models in signal processing and biochemistry.
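To give a flavour of the kind of estimator the abstract refers to, the following is a minimal sketch of minimum maximum mean discrepancy (MMD) point estimation: with an intractable likelihood, one can still simulate from the model, and the parameter is chosen so that simulated samples are as close as possible to the observed data under the MMD. This is a toy illustration, not the speaker's implementation; the function names (gaussian_kernel, mmd_squared, simulator), the Gaussian kernel and bandwidth, and the stand-in Gaussian simulator are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two batches of samples."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD between
    samples x ~ P and y ~ Q."""
    n, m = len(x), len(y)
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    # Drop the diagonal terms so the within-sample averages are unbiased.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    term_xy = k_xy.mean()
    return term_xx + term_yy - 2 * term_xy

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=(200, 1))  # "observed" data

def simulator(theta, n=200):
    # Stand-in for an intractable-likelihood model: here we can only
    # draw samples given theta, not evaluate a density.
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

# Minimum-MMD estimation over a coarse grid of candidate parameters.
thetas = np.linspace(0.0, 4.0, 41)
losses = [mmd_squared(data, simulator(t)) for t in thetas]
theta_hat = thetas[int(np.argmin(losses))]
print(f"minimum-MMD estimate: {theta_hat:.2f}")
```

In this toy setup the estimate lands near the true location 2.0. Because each kernel evaluation is bounded, a few outliers in the data move the MMD only slightly, which is the intuition behind the robustness property discussed in the talk; the kernel bandwidth is one of the choices that mediates the efficiency-robustness trade-off.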
