Statistical guarantees for neural operator surrogates

With Sven Wang (EPFL)

In recent years, “operator learning” methodologies for constructing data-driven surrogates of non-linear operators have gained widespread attention. We present statistical convergence results for the learning of such non-linear mappings between infinite-dimensional spaces, e.g. arising from PDEs, from noisy input-output pairs. Specifically, we derive convergence rates for least-squares-type empirical risk minimizers over general hypothesis classes, in terms of their approximation properties and metric entropy bounds. This generalizes classical results from finite-dimensional nonparametric regression to an infinite-dimensional setting.
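A schematic of the estimator described above (the notation $(X_i, Y_i)$, the noise $\varepsilon_i$ and the hypothesis class $\mathcal{G}$ are introduced here for illustration and are not taken from the abstract): given $n$ noisy input-output pairs from the unknown operator $G_0$,

$$ Y_i = G_0(X_i) + \varepsilon_i, \quad i = 1,\dots,n, \qquad \hat G_n \in \operatorname*{arg\,min}_{G \in \mathcal{G}} \; \frac{1}{n}\sum_{i=1}^n \big\| Y_i - G(X_i) \big\|^2, $$

where the norm is that of the (possibly infinite-dimensional) output space. The resulting rates then depend on how well $\mathcal{G}$ approximates $G_0$ and on the metric entropy of $\mathcal{G}$.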

Assuming the ground-truth operator $G_0$ to be holomorphic, we prove algebraic (in the sample size $n$) convergence rates in this setting, thereby overcoming the curse of dimensionality. To illustrate the wide applicability of these results, we discuss, as a prototypical example, the learning of the non-linear solution operator of a parametric elliptic partial differential equation with an encoder-decoder based neural operator architecture.
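A possible instance of the prototypical example mentioned above (the specific PDE and the factorization below are illustrative assumptions, not details from the abstract): for the divergence-form elliptic problem

$$ -\nabla \cdot (a \nabla u) = f \ \text{ in } D, \qquad u = 0 \ \text{ on } \partial D, $$

one learns the coefficient-to-solution map $G_0 : a \mapsto u$, and an encoder-decoder neural operator takes the form $G_\theta = \mathcal{D} \circ g_\theta \circ \mathcal{E}$, where the encoder $\mathcal{E}$ maps the input function to finitely many coefficients (e.g. on a basis or a grid), $g_\theta$ is a finite-dimensional neural network, and the decoder $\mathcal{D}$ maps its output back into a function.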
