By a basic result in linear algebra, a family of two or more commuting symmetric matrices has a common basis of eigenvectors and can thus be jointly diagonalized. Such joint eigenvalue problems come in several flavors and play an important role in a variety of applications, including independent component analysis in signal processing, multivariate polynomial systems, tensor decompositions, and computational quantum chemistry. Perhaps surprisingly, the development
of robust numerical algorithms for solving such problems is by no means trivial. To start with, roundoff or other errors inevitably destroy the commutativity assumption. Consequently, one can at best hope to find approximate solutions to joint eigenvalue problems, and most existing approaches are therefore based on optimization techniques, which offer no guarantee of recovering a good approximate solution. In this talk, we propose randomized methods that address joint eigenvalue problems via the solution of one or a few standard eigenvalue problems. The methods are simple but surprisingly effective. We provide a theoretical explanation for their success by establishing probabilistic guarantees for robust recovery. Through numerical experiments on synthetic and real-world data, we show that our algorithms match or outperform state-of-the-art optimization-based methods. This talk is based on joint work with Haoze He.
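The randomized idea behind such methods can be sketched in a few lines: for an exactly commuting family, the eigenvectors of a single random linear combination of the matrices jointly diagonalize the whole family with probability one. The following is a minimal numpy illustration on synthetic data (the matrix sizes, random seed, and construction are assumptions for the sketch, not the implementation discussed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic commuting family: A_k = Q diag(d_k) Q^T for a shared
# orthogonal Q, so all matrices have the same eigenvectors.
n, K = 5, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
mats = [Q @ np.diag(rng.standard_normal(n)) @ Q.T for _ in range(K)]

# Randomized reduction to ONE standard eigenvalue problem:
# draw random coefficients mu and eigendecompose sum_k mu_k A_k.
mu = rng.standard_normal(K)
M = sum(m * A for m, A in zip(mu, mats))
_, V = np.linalg.eigh(M)

# With probability one, M has distinct eigenvalues, so its
# eigenvector basis V diagonalizes every matrix in the family.
for A in mats:
    D = V.T @ A @ V
    off = np.linalg.norm(D - np.diag(np.diag(D)))
    assert off < 1e-8  # off-diagonal mass is at roundoff level
```

In the noisy (only approximately commuting) case the off-diagonal residual no longer vanishes, which is where the robust-recovery guarantees mentioned above come in.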