Causal effect estimation is important for many tasks in the natural and social sciences. We design algorithms for the continuous partial identification problem: bounding the effects of multivariate, continuous treatments when unmeasured confounding makes point identification impossible. Specifically, we cast causal effects as objective functions within a constrained optimization problem, and minimize or maximize these functions to obtain bounds. We combine flexible learning algorithms with Monte Carlo methods to implement a family of solutions under the name of stochastic causal programming. In particular, we show how the generic framework can be formulated efficiently in settings where auxiliary variables are clustered into pre-treatment and post-treatment sets and no fine-grained causal graph can be articulated. Compared to other generic approaches, this greatly simplifies the problem and offers advantages in encoding structural knowledge without explicitly constructing latent common causes.
Joint work with Kirtan Padh, Jakob Zeitler, David Watson, Matt Kusner and Niki Kilbertus.
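To make the "causal effects as objective functions within a constrained optimization problem" idea concrete, here is a minimal sketch in the simplest possible setting: a binary treatment and binary outcome under unmeasured confounding, where the bounds reduce to a linear program over unobserved response types. This is a toy illustration of the optimization view only, not the talk's continuous, multivariate method; the observed joint distribution below is made up for illustration.

```python
# Toy partial identification: bound the ATE of a binary treatment T on a
# binary outcome Y under unmeasured confounding, by casting the effect as
# the objective of a linear program over unobserved "response types".
import numpy as np
from scipy.optimize import linprog

# Observed joint P(Y=y, T=t) (hypothetical numbers, summing to 1).
p_obs = {(1, 1): 0.30, (0, 1): 0.20, (1, 0): 0.25, (0, 0): 0.25}

# Decision variables: q[r, t] = P(R=r, T=t), where a response type
# r = (Y(0), Y(1)) records both potential outcomes; 4 types x 2 arms.
types = [(0, 0), (0, 1), (1, 0), (1, 1)]
idx = {(r, t): i for i, (r, t) in
       enumerate((r, t) for r in types for t in (0, 1))}

# Objective: ATE = E[Y(1)] - E[Y(0)] = sum_r (y1 - y0) * (q[r,0] + q[r,1]).
c = np.zeros(8)
for (y0, y1) in types:
    for t in (0, 1):
        c[idx[((y0, y1), t)]] = y1 - y0

# Equality constraints (consistency with the observed data):
# P(Y=y, T=t) = sum over response types r with r[t] == y of q[r, t].
A_eq, b_eq = [], []
for (y, t), prob in p_obs.items():
    row = np.zeros(8)
    for r in types:
        if r[t] == y:
            row[idx[(r, t)]] = 1.0
    A_eq.append(row)
    b_eq.append(prob)

# Minimize and maximize the causal effect subject to the constraints.
lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(lower, upper)
```

For these observed probabilities the program recovers the classical Manski no-assumption bounds, an interval of width one. The continuous setting of the talk replaces this finite linear program with flexible function classes and Monte Carlo approximations, but the min/max structure is the same.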