Greedy-LASSO, Greedy-Net

With Sina Mohammadtaheri

Greedy-LASSO, Greedy-Net: Generalization and unrolling of greedy sparse recovery algorithms

Sparse recovery aims to reconstruct a sparse vector from linear measurements taken via a measurement (or sensing) matrix, which is typically underdetermined. Greedy (and thresholding) sparse recovery algorithms are known to serve as a suitable alternative to convex optimization techniques, particularly in low-sparsity regimes. In this talk, I take orthogonal matching pursuit (OMP) as an example and establish a connection between OMP and convex optimization decoders on one side, and neural networks on the other.

To achieve the former, we adopt a loss function-based perspective and propose an OMP-based framework that yields greedy algorithms for a large class of loss functions, including the well-known (weighted) LASSO family, with explicit formulas for the choice of the "greedy selection criterion". We show numerically that these greedy algorithms inherit properties of their ancestor convex decoders.

In the second part of the talk, we leverage "soft-sorting" to resolve the non-differentiability of OMP caused by (arg)sorting, and derive a differentiable version of OMP that we call "Soft-OMP", which we demonstrate, numerically and theoretically, to be a good approximation of OMP. We then unroll the iterations of OMP onto the layers of a neural network, with weights serving as semantically meaningful trainable parameters that capture structure within the data. In doing so, we also connect our approach to learning weights in weighted sparse recovery.

I will conclude the talk by presenting implications of our framework for other greedy algorithms, such as CoSaMP and IHT, and by highlighting some open problems. This is joint work with Simone Brugiapaglia (Concordia University) and Matthew Colbrook (University of Cambridge).
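For readers unfamiliar with OMP, the following is a minimal numpy sketch of the classical algorithm referenced above (not the speakers' implementation; the function name and problem dimensions are illustrative). At each iteration it greedily selects the column of the sensing matrix most correlated with the current residual, then solves a least-squares problem on the selected support:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: approximate an s-sparse x with y ≈ A x.

    A : (m, n) sensing matrix, typically underdetermined (m < n)
    y : (m,) measurement vector
    s : target sparsity (number of iterations)
    """
    m, n = A.shape
    support = []
    r = y.copy()  # residual
    x_s = np.zeros(0)
    for _ in range(s):
        # Greedy selection criterion: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least-squares fit on the current support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x
```

The `argmax` selection step is where the non-differentiability discussed in the talk arises; replacing such hard (arg)sorting operations with a soft, differentiable surrogate is the idea behind the "Soft-OMP" construction.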
