
Sparse Recovery via Convex Optimization


Randall, Paige Alicia (2009) Sparse Recovery via Convex Optimization. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/3Z65-A925.


This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z, where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises in many problems, ranging from magnetic resonance imaging (MRI) to genomics to compressed sensing.

We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message.
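As a hedged illustration of this decoding idea (not taken from the thesis — the Gaussian encoding matrix, dimensions, and use of SciPy's linear-programming solver are illustrative choices, and the small dense error component is omitted for simplicity): the decoder min_f ||y - Bf||_1 can be posed as a linear program by splitting the residual into nonnegative parts.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
k, m, e = 20, 100, 10                  # message length, codeword length, gross errors
B = rng.standard_normal((m, k))        # illustrative Gaussian encoding matrix
f_true = rng.standard_normal(k)
y = B @ f_true
bad = rng.choice(m, size=e, replace=False)
y[bad] += 10 * rng.standard_normal(e)  # arbitrary corruptions on a few entries

# Decode:  min_f ||y - Bf||_1,  as an LP with residual split into u, v >= 0:
#   min 1'(u + v)   s.t.   Bf + u - v = y.
c = np.concatenate([np.zeros(k), np.ones(2 * m)])
A_eq = np.hstack([B, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * k + [(0, None)] * (2 * m))
f_hat = res.x[:k]
print(np.max(np.abs(f_hat - f_true)))  # tiny: the message is recovered
```

In this regime (a small fraction of gross errors against a random Gaussian code), the l_1 decoder recovers the message exactly despite the corrupted entries being arbitrarily large.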

We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of √s, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence, where the coherence can only be as large as O(1/√m) to allow sparsities as large as O(√m). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition.

We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property.

Finally, we introduce the method of l_1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle, and we compare our results with the more standard l_1 synthesis approach.
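A minimal sketch of the noiseless l_1 analysis program min ||D^T x||_1 subject to Ax = y, posed as a linear program (this is an illustration, not the thesis's code: the DCT dictionary, dimensions, and SciPy solver are our assumptions, and we take D orthonormal for simplicity — in that special case analysis and synthesis coincide, whereas the thesis treats general overcomplete dictionaries):

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, s = 40, 100, 5
D = dct(np.eye(n), norm='ortho', axis=0)    # orthonormal DCT dictionary (illustrative)
alpha = np.zeros(n)
alpha[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x_true = D @ alpha                          # sparse in the dictionary, dense itself
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                              # noiseless measurements

# l_1 analysis:  min ||D^T x||_1  s.t.  Ax = y.
# Introduce t >= |D^T x| elementwise:  min 1't  s.t.  +-(D^T x) <= t,  Ax = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[D.T, -np.eye(n)], [-D.T, -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x_true)))       # tiny: the signal is recovered
```

Note that the program recovers the signal x directly, without ever forming its dictionary coefficients — the distinguishing feature of analysis over synthesis.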

All our methods involve solving an l_1 minimization program, which can be written as either a linear program or a second-order cone program and solved rapidly with the well-established machinery of convex optimization.
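As a hedged sketch of the linear-programming reformulation (again with randomly generated data and SciPy's solver as illustrative choices, not the thesis's own experiments): noiseless basis pursuit, min ||x||_1 subject to Ax = y, becomes an LP by splitting x into its positive and negative parts.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5                       # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                             # noiseless measurements

# Basis pursuit:  min ||x||_1  s.t.  Ax = y.
# Split x = u - v with u, v >= 0; then ||x||_1 = 1'u + 1'v at the optimum.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x_true)))      # tiny: the sparse signal is recovered
```

In the noisy case, replacing the equality constraint with ||Ax - y||_2 <= eps turns the program into a second-order cone program, which a general-purpose convex solver handles equally routinely.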

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Dantzig selector; lasso; oracle inequality; restricted isometry property; RIP; statistical estimation
Degree Grantor: California Institute of Technology
Division: Physics, Mathematics and Astronomy
Major Option: Physics
Minor Option: Applied and Computational Mathematics
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Candes, Emmanuel J.
Thesis Committee:
  • Candes, Emmanuel J. (chair)
  • Kitaev, Alexei
  • Owhadi, Houman
  • McEliece, Robert J.
Defense Date: 22 May 2009
Record Number: CaltechETD:etd-05292009-152544
Persistent URL:
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 2279
Deposited By: Imported from ETD-db
Deposited On: 02 Jun 2009
Last Modified: 26 Nov 2019 20:24

Thesis Files

PDF (RandallThesis.pdf) - Final Version
See Usage Policy.

