Sublinear Algorithms for Compressed Sensing
Compressible signals are pervasive in signal processing applications. The essential feature of a compressible signal is that it can be approximated well by a sparse signal. It has recently been observed that it is possible to collect the essential information from a compressible signal by projecting it onto a low-dimensional random subspace. This idea is called sketching in computer science and compressed sensing in signal processing; it can be traced to developments in geometric functional analysis from the early 1980s.
More precisely, suppose that a d-dimensional signal can be approximated by an m-sparse signal, i.e., one with at most m nonzero entries. Candès-Tao and Donoho show that O(m log d) measurements suffice to capture this signal. They propose ℓ1 minimization as a method for reconstruction. From an algorithmic point of view, this approach is extremely inefficient because it uses time proportional to d to identify m significant components. This presentation will describe an approach that is exponentially more efficient.
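The measurement-and-recovery pipeline described above can be illustrated with a toy sketch. The code below is not the talk's algorithm; it uses a random Gaussian measurement matrix and orthogonal matching pursuit (a standard greedy method) as a stand-in for recovery, with all sizes and constants chosen for illustration only.

```python
# Toy compressed-sensing sketch: random Gaussian measurements of an
# m-sparse signal, recovered by orthogonal matching pursuit (OMP).
# This is a generic illustration, not the algorithm from the talk.
import numpy as np

rng = np.random.default_rng(0)

d, m = 256, 4                       # signal length, sparsity
n = 4 * m * int(np.log2(d))         # O(m log d) measurements (constant is ad hoc)

# Build an m-sparse signal.
x = np.zeros(d)
support = rng.choice(d, size=m, replace=False)
x[support] = rng.standard_normal(m)

# Random measurement matrix and sketch y = Phi @ x.
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
y = Phi @ x

# OMP: greedily select the column most correlated with the residual,
# then re-fit the coefficients on the chosen support by least squares.
S, r = [], y.copy()
for _ in range(m):
    S.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef

x_hat = np.zeros(d)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x))    # small with high probability
```

Note that OMP's selection step scans all d columns each iteration, so it runs in time polynomial in d; the point of the talk is precisely to avoid this dependence.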
The talk constructs a uniform system of O(m polylog(d)) measurements that can capture all m-sparse signals. Associated with this measurement system is a reconstruction algorithm that produces a sparse approximation in time O(m log^2 d). The ℓ1 error of the output is no worse than O(log m) times the optimal ℓ1 error. Several extensions of this algorithm are discussed. These variations can reduce the amount of storage required, improve the reconstruction error, and increase the size of the signal class that is captured.
This work is joint with Anna C. Gilbert, Martin J. Strauss, and Roman Vershynin.
Joel Tropp graduated from The University of Texas in 2004 and, since then, has been a postdoc in the Mathematics Department of The University of Michigan. His interests include computational harmonic analysis, discrete and convex geometry, and the analysis of