Citation
Xu, Weiyu (2010) Compressive sensing for sparse approximations: constructions, algorithms, and analysis. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:10262009081233260
Abstract
Compressive sensing is an emerging research field with applications in signal processing, error correction, medical imaging, seismology, and many other areas. It promises to efficiently recover a sparse signal vector from a much smaller number of linear measurements than its dimension. Naturally, how to design these linear measurements, how to reconstruct the original high-dimensional signals efficiently and accurately, and how to analyze the sparse signal recovery algorithms are important issues in the development of compressive sensing. This thesis is devoted to addressing these fundamental issues.
In compressive sensing, random measurement matrices are generally used, and $\ell_1$ minimization algorithms often rely on linear programming or other optimization methods to recover the sparse signal vectors. But explicitly constructible measurement matrices providing performance guarantees have been elusive, and $\ell_1$ minimization algorithms are often very demanding in computational complexity for applications involving very large problem dimensions. In chapter \ref{chapter: Expander}, we propose and discuss a compressive sensing scheme with deterministic performance guarantees using deterministic, explicitly constructible expander graph-based measurement matrices, and show that sparse signal recovery can be achieved with linear complexity. This is the first compressive sensing scheme to combine linear decoding complexity, deterministic performance guarantees for recovering signals of linear sparsity, and deterministic, explicitly constructible measurement matrices.
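As a concrete illustration of the expansion property that such graph-based measurement matrices rely on, the sketch below builds a small random left-regular bipartite graph (a stand-in for the thesis's explicit construction; the dimensions, degree, and parameters here are toy values chosen for illustration) and verifies $(k, \epsilon)$-vertex expansion by brute force:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

# Toy bipartite graph: n left (signal) nodes, m right (measurement) nodes,
# each left node connected to d distinct right nodes.  A genuine expander-based
# scheme uses an explicit construction; a random graph merely expands with
# high probability, so we verify the property exhaustively below.
n, m, d = 12, 8, 3
A = np.zeros((m, n), dtype=int)
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1

def is_expander(A, k, eps):
    """Check (k, eps)-expansion by exhaustion: every set S of at most k left
    nodes must reach at least (1 - eps) * d * |S| distinct right nodes."""
    m, n = A.shape
    d = int(A[:, 0].sum())
    for s in range(1, k + 1):
        for S in itertools.combinations(range(n), s):
            if np.count_nonzero(A[:, list(S)].any(axis=1)) < (1 - eps) * d * s:
                return False
    return True

print(is_expander(A, k=2, eps=0.5))  # → True: each column has d distinct ones
```

Explicit constructions certify expansion for all set sizes without search; the brute-force check above is feasible only at toy scale.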
The popular and powerful $\ell_1$ minimization algorithms generally achieve better sparsity recovery performance than known greedy decoding algorithms. In chapter \ref{chapter: Grassmann}, starting from a necessary and sufficient nullspace condition for achieving a certain signal recovery accuracy, and using high-dimensional geometry, we give a unified \emph{nullspace Grassmann angle}-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity and the recovery accuracy of $\ell_1$ optimization for approximately sparse signals. Our results concern fundamental "balancedness" properties of linear subspaces and so may be of independent mathematical interest.
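For reference, the $\ell_1$ minimization (basis pursuit) that this framework analyzes can be posed as a linear program; the sketch below does so with `scipy.optimize.linprog` on a toy Gaussian instance (the dimensions and random seed are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 20, 12, 2                      # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))          # random Gaussian measurement matrix
y = A @ x_true

# Basis pursuit (min ||x||_1  s.t.  A x = y) as an LP over [x; t]:
#   minimize sum(t)  subject to  x - t <= 0,  -x - t <= 0,  A x = y.
I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true))    # near zero when l1 recovery succeeds
```

At these proportions (sparsity 2 out of 20, with 12 measurements) the instance sits well inside the $\ell_1$ success region, so exact recovery is expected.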
The conventional approach to compressed sensing assumes no prior information about the unknown signal other than that it is sufficiently sparse over a particular basis. In many applications, however, additional prior information is available. In chapter 4, we consider a model for the sparse signal that assigns to each entry of the unknown vector a probability of being zero or nonzero. The standard compressed sensing model is then the special case where these probabilities are all equal. Using the \emph{nullspace Grassmann angle}-based analytical framework introduced in this thesis, we characterize the optimal recoverable sparsity thresholds for weighted $\ell_1$ minimization algorithms that exploit this prior information.
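A minimal sketch of weighted $\ell_1$ minimization under such a prior, again via linear programming (the prior pattern, weight values, and dimensions are hypothetical illustration, not the thesis's model parameters):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, y, w):
    """Weighted basis pursuit (min sum_i w_i |x_i|  s.t.  A x = y),
    posed as an LP over [x; t] with constraints |x_i| <= t_i."""
    m, n = A.shape
    I = np.eye(n)
    res = linprog(np.concatenate([np.zeros(n), w]),
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(2)
n, m = 20, 10
x_true = np.zeros(n)
x_true[:4] = rng.standard_normal(4)      # nonzeros confined to the first entries
A = rng.standard_normal((m, n))
y = A @ x_true

# Prior: the first 5 entries are believed likely nonzero, so they receive
# smaller weights, making them cheaper to use in the solution.
w = np.ones(n)
w[:5] = 0.1
x_hat = weighted_bp(A, y, w)
```

Down-weighting the entries believed nonzero biases the optimizer toward supports consistent with the prior, which is what raises the recoverable sparsity threshold relative to uniform weights.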
The role of $\ell_1$ minimization algorithms in recovering sparse signals from incomplete measurements is now well understood, and sharp recoverable sparsity thresholds for $\ell_1$ minimization have been obtained. Iterative reweighted $\ell_1$ minimization and related algorithms have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, but no rigorous theoretical results had been established to prove this fact. In chapter \ref{chapter: IterRe}, we provide a theoretical foundation for analyzing iterative reweighted $\ell_1$ algorithms. In particular, we show that for a nontrivial class of signals, iterative reweighted $\ell_1$ minimization can indeed deliver recoverable sparsity thresholds larger than those of standard $\ell_1$ minimization. Again, our results are based on the nullspace Grassmann angle-based analytical framework.
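One common form of the iterative reweighted $\ell_1$ algorithm re-solves a weighted basis pursuit with weights inversely proportional to the magnitudes of the previous solution; a toy sketch, assuming this standard update rule (dimensions, seed, and the damping constant are illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, y, w):
    """Weighted basis pursuit (min sum_i w_i |x_i|  s.t.  A x = y) as an LP."""
    m, n = A.shape
    I = np.eye(n)
    res = linprog(np.concatenate([np.zeros(n), w]),
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(3)
n, m, k = 20, 10, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x_true

x = weighted_bp(A, y, np.ones(n))        # iteration 0 is plain l1 minimization
for _ in range(4):
    w = 1.0 / (np.abs(x) + 1e-3)         # small epsilon keeps the weights finite
    x = weighted_bp(A, y, w)
```

Each pass makes entries that were small in the previous solution more expensive, pushing them toward zero and concentrating the solution on the estimated support.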
Moving beyond compressive sensing problems, in which we recover sparse vector signals from compressed linear measurements, in chapter \ref{chapter: rank} we turn our attention to recovering low-rank matrices from compressed linear measurements, a challenging problem that arises in many applications in machine learning, control theory, and discrete geometry. This class of optimization problems is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm of the decision variable and has been shown to provide the optimal low-rank solution in a variety of scenarios. We analytically assess the practical performance of this heuristic for finding the minimum-rank matrix subject to linear constraints. We start from the characterization of a necessary and sufficient condition that determines when this heuristic finds the minimum-rank solution. We then obtain probabilistic bounds on the matrix dimensions, the rank, and the number of constraints such that our conditions for success are satisfied for almost all linear constraint sets as the matrix dimensions tend to infinity. Empirical evidence shows that these probabilistic bounds accurately predict the heuristic's performance in non-asymptotic scenarios.
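A standard way to run the nuclear norm heuristic numerically is proximal gradient descent with singular value soft-thresholding; the sketch below applies it to a toy rank-1 instance (the solver, regularization weight, and dimensions are illustrative choices, not the analysis of this chapter):

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, m = 8, 1, 50                       # matrix size, true rank, measurements

X_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
A = rng.standard_normal((m, d * d))      # each row defines one linear measurement
b = A @ X_true.ravel()

def svt(Z, tau):
    """Singular value soft-thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient on  0.5 * ||A vec(X) - b||^2 + lam * ||X||_* ,
# a convex surrogate for minimizing rank subject to the linear constraints.
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2
X = np.zeros((d, d))
for _ in range(2000):
    grad = (A.T @ (A @ X.ravel() - b)).reshape(d, d)
    X = svt(X - step * grad, step * lam)

print(np.linalg.norm(A @ X.ravel() - b) / np.linalg.norm(b))  # small residual
```

Soft-thresholding the singular values is to matrices what soft-thresholding the entries is to vectors, which is why the nuclear norm plays the role of the $\ell_1$ norm in this setting.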
Item Type:  Thesis (Dissertation (Ph.D.)) 

Subject Keywords:  Information Theory; Signal Processing; Sampling Theory; Compressive Sensing; Matrix Rank Minimization; Sparse Approximation; Basis Pursuit; Nuclear Norm Minimization; Sparse Signal Recovery; Linear Subspace Balancedness; Expander Graph; Grassmann Angle; Weighted Basis Pursuit 
Degree Grantor:  California Institute of Technology 
Division:  Engineering and Applied Science 
Major Option:  Electrical Engineering 
Minor Option:  Applied and Computational Mathematics 
Awards:  Charles and Ellen Wilts Prize, 2010 
Thesis Availability:  Public (worldwide access) 
Research Advisor(s): 

Thesis Committee: 

Defense Date:  13 August 2009 
Non-Caltech Author Email:  weiyu (AT) systems.caltech.edu 
Record Number:  CaltechTHESIS:10262009081233260 
Persistent URL:  http://resolver.caltech.edu/CaltechTHESIS:10262009081233260 
Default Usage Policy:  No commercial reproduction, distribution, display or performance rights in this work are provided. 
ID Code:  5329 
Collection:  CaltechTHESIS 
Deposited By:  Weiyu Xu 
Deposited On:  21 Dec 2009 18:43 
Last Modified:  01 Aug 2014 17:22 
Thesis Files

PDF (Final Version): See Usage Policy. 1623Kb 