Regularized linear autoencoders, the Morse theory of loss, and backprop in the brain

With Jon Bloom (Broad Institute of MIT and Harvard)


When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. We prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. Finally, we consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of deep learning.
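The abstract's central claim can be illustrated numerically. The sketch below (an assumption-laden toy, not the authors' code) trains an L2-regularized LAE by gradient descent on synthetic data with distinct coordinate variances, then checks the two advertised properties: the learned encoder and decoder become transposes of each other, and the left singular vectors of the decoder align with the top principal directions of the data. The data shape, regularization strength `lam`, learning rate, and step count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose coordinates have distinct variances,
# so the principal directions are well separated (toy setup).
n, d, k = 2000, 5, 2
X = rng.normal(size=(n, d)) * np.array([3.0, 2.0, 0.5, 0.3, 0.1])
X -= X.mean(axis=0)
C = X.T @ X / n  # sample covariance (d x d)

# L2-regularized LAE, per-sample loss:
#   L(E, D) = E_x ||x - D E x||^2 + lam * (||E||_F^2 + ||D||_F^2)
lam = 0.1
E = rng.normal(scale=0.1, size=(k, d))  # encoder
D = rng.normal(scale=0.1, size=(d, k))  # decoder
lr = 0.005
for _ in range(20000):
    M = D @ E - np.eye(d)               # reconstruction error operator
    gD = 2 * M @ C @ E.T + 2 * lam * D  # gradient of L w.r.t. D
    gE = 2 * D.T @ M @ C + 2 * lam * E  # gradient of L w.r.t. E
    D -= lr * gD
    E -= lr * gE

# At a minimum the LAE should be symmetric (E close to D^T), and the
# left singular vectors of D should match the top-k principal directions.
U, s, _ = np.linalg.svd(D, full_matrices=False)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
for i in range(k):
    print(abs(U[:, i] @ Vt[i]))         # alignment, approaching 1.0
print(np.linalg.norm(E - D.T))          # symmetry gap, approaching 0.0
```

Without the regularization term (`lam = 0`), the loss is invariant to invertible reparameterizations of the latent space, so only the top-k subspace is identified; the L2 penalty is what breaks this symmetry and pins the individual directions.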

ICML 2019.
