CaltechTHESIS
  A Caltech Library Service

Compressive Sensing for Sparse Approximations: Constructions, Algorithms, and Analysis

Citation

Xu, Weiyu (2010) Compressive Sensing for Sparse Approximations: Constructions, Algorithms, and Analysis. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/F63K-GT12. https://resolver.caltech.edu/CaltechTHESIS:10262009-081233260

Abstract

Compressive sensing is an emerging research field with applications in signal processing, error correction, medical imaging, seismology, and many other areas. It promises to recover a sparse signal vector efficiently from a much smaller number of linear measurements than its dimension. Naturally, how to design these linear measurements, how to reconstruct the original high-dimensional signals efficiently and accurately, and how to analyze sparse signal recovery algorithms are important issues in the development of compressive sensing. This thesis is devoted to addressing these fundamental issues.

In compressive sensing, random measurement matrices are generally used, and ℓ₁ minimization algorithms often rely on linear programming or other optimization methods to recover the sparse signal vectors. However, explicitly constructible measurement matrices providing performance guarantees have been elusive, and ℓ₁ minimization algorithms are often computationally demanding for applications with very large problem dimensions. In chapter 2, we propose and discuss a compressive sensing scheme with deterministic performance guarantees that uses deterministic, explicitly constructible expander graph-based measurement matrices, and show that sparse signal recovery can be achieved with linear complexity. This is the first compressive sensing scheme to combine linear decoding complexity, deterministic guarantees of recovering signals with linear sparsity, and deterministic, explicitly constructible measurement matrices.
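The measurement side of such a scheme can be pictured with a small sketch. The snippet below builds a random left-regular bipartite 0/1 matrix as a stand-in for the thesis's explicit expander construction (the dimensions m, n, the left degree d, and the random seed are illustrative choices, not values from the thesis), and takes linear measurements of a sparse signal:

```python
import numpy as np

def left_regular_matrix(m, n, d, rng):
    """Random 0/1 measurement matrix in which every column (signal
    coordinate) connects to exactly d of the m measurement rows:
    a random left-regular bipartite graph standing in for an
    explicitly constructed expander."""
    A = np.zeros((m, n), dtype=int)
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, j] = 1
    return A

rng = np.random.default_rng(0)
A = left_regular_matrix(m=40, n=200, d=5, rng=rng)

# a 4-sparse signal and its m = 40 linear measurements
x = np.zeros(200)
x[[3, 17, 88, 150]] = [2.0, -1.5, 0.7, 3.1]
y = A @ x
```

Because each column touches only d rows, computing the measurements (and, in the scheme of chapter 2, decoding) costs time linear in the problem size rather than the cost of a dense matrix-vector product.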

The popular and powerful ℓ₁ minimization algorithms generally give better sparse recovery performance than known greedy decoding algorithms. In chapter 3, starting from a necessary and sufficient null-space condition for achieving a certain signal recovery accuracy, and using high-dimensional geometry, we give a unified null-space Grassmann angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative trade-offs between the signal sparsity and the recovery accuracy of ℓ₁ optimization for approximately sparse signals. Our results concern the fundamental "balancedness" properties of linear subspaces and so may be of independent mathematical interest.
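The ℓ₁ minimization (basis pursuit) referred to here can be cast as a linear program via the standard split x = u − v with u, v ≥ 0. A minimal sketch using scipy's LP solver (the Gaussian measurement matrix, the problem sizes, and the seed are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1  subject to  A x = y, solved as a linear program
    with the split x = u - v, u >= 0, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)            # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])     # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:n] - z[n:]

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))        # 20 random Gaussian measurements
x_true = np.zeros(40)
x_true[[2, 11, 30]] = [1.0, -2.0, 0.5]   # a 3-sparse signal
x_hat = basis_pursuit(A, A @ x_true)
```

For this regime (3 nonzeros, 20 measurements, dimension 40) ℓ₁ minimization typically recovers the signal exactly; the sharp sparsity/accuracy trade-offs of chapter 3 quantify exactly when such recovery succeeds.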

The conventional approach to compressed sensing assumes no prior information on the unknown signal other than that it is sufficiently sparse over a particular basis. In many applications, however, additional prior information is available. In chapter 4, we consider a model for the sparse signal that assigns each entry of the unknown vector a probability of being zero or nonzero. The standard compressed sensing model is the special case where these probabilities are all equal. Building on the null-space Grassmann angle-based analytical framework introduced in this thesis, we characterize the optimal recoverable sparsity thresholds of weighted ℓ₁ minimization algorithms that exploit this prior information.
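Weighted ℓ₁ minimization is the same LP with per-entry weights in the objective. The sketch below is an illustrative instance, not the thesis's optimal weighting: the prior (support confined to the first 10 entries), the weight value 10.0, and the problem sizes are hypothetical choices to show the mechanism.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_basis_pursuit(A, y, w):
    """min sum_i w_i |x_i|  subject to  A x = y.
    Entries believed likely to be zero get large weights;
    entries believed likely to be nonzero get small ones."""
    m, n = A.shape
    c = np.concatenate([w, w])     # the weight applies to both u_i and v_i
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:n] - z[n:]

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[1, 4, 7]] = [1.0, -1.0, 2.0]   # support lies in the first 10 entries
w = np.ones(40)
w[10:] = 10.0     # hypothetical prior: heavily penalize the unlikely block
x_hat = weighted_basis_pursuit(A, A @ x_true, w)
```

With a good prior, the weights shift the recoverable sparsity threshold upward relative to unweighted ℓ₁; chapter 4 characterizes the optimal thresholds.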

The role of ℓ₁ minimization algorithms in recovering sparse signals from incomplete measurements is now well understood, and sharp recoverable sparsity thresholds for ℓ₁ minimization have been obtained. Iterative reweighted ℓ₁ minimization algorithms and related methods have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, but no rigorous theoretical results had been established to prove this fact. In chapter 5, we provide a theoretical foundation for analyzing iterative reweighted ℓ₁ algorithms. In particular, we show that for a nontrivial class of signals, iterative reweighted ℓ₁ minimization can indeed deliver recoverable sparsity thresholds larger than those of plain ℓ₁ minimization. Again, our results are based on the null-space Grassmann angle-based analytical framework.
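The iterative loop itself is short: solve a weighted ℓ₁ problem, then derive the next weights from the current estimate so that large entries are penalized less. The sketch below uses the widely known reweighting rule w_i = 1/(|x_i| + ε); the iteration count, ε, and the problem sizes are illustrative assumptions rather than the settings analyzed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, y, w):
    # min sum_i w_i |x_i| s.t. A x = y, via the LP split x = u - v
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]),
                  b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, y, iters=4, eps=0.1):
    """Iterative reweighted l1: each pass sets w_i = 1 / (|x_i| + eps),
    so coordinates that look active are penalized less next time."""
    w = np.ones(A.shape[1])        # first pass is plain l1 minimization
    for _ in range(iters):
        x = weighted_bp(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[0, 9, 18, 27]] = [3.0, -0.4, 1.2, -2.5]
x_hat = reweighted_l1(A, A @ x_true)
```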

Evolving from compressive sensing problems, where we are interested in recovering sparse vector signals from compressed linear measurements, we turn our attention in chapter 6 to recovering low-rank matrices from compressed linear measurements, a challenging problem that arises in many applications in machine learning, control theory, and discrete geometry. This class of optimization problems is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm of the decision variable and has been shown to provide the optimal low-rank solution in a variety of scenarios. We analytically assess the practical performance of this heuristic for finding the minimum-rank matrix subject to linear constraints. We start from the characterization of a necessary and sufficient condition that determines when this heuristic finds the minimum-rank solution. We then obtain probabilistic bounds on the matrix dimensions, the rank, and the number of constraints such that our conditions for success are satisfied for almost all linear constraint sets as the matrix dimensions tend to infinity. Empirical evidence shows that these probabilistic bounds accurately predict the heuristic's performance in non-asymptotic scenarios.
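The nuclear norm plays the same role for rank that the ℓ₁ norm plays for sparsity: rank counts nonzero singular values, while the nuclear norm sums their magnitudes. A minimal numpy sketch of the surrogate itself (the full heuristic would minimize it subject to the linear constraints, typically with a semidefinite programming solver; the rank-1 example and seed here are illustrative):

```python
import numpy as np

def nuclear_norm(X):
    # sum of singular values: the convex surrogate that replaces rank(X),
    # analogous to ||x||_1 replacing the count ||x||_0 for vectors
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(3)
u, v = rng.standard_normal(6), rng.standard_normal(8)
M = np.outer(u, v)                   # a rank-1 matrix u v^T

r = np.linalg.matrix_rank(M)         # counts nonzero singular values
nn = nuclear_norm(M)                 # sums them; here equals ||u|| * ||v||
```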

Item Type:Thesis (Dissertation (Ph.D.))
Subject Keywords:Information Theory; Signal Processing; Sampling Theory; Compressive Sensing; Matrix Rank Minimization; Sparse Approximation; Basis Pursuit; Nuclear Norm Minimization; Sparse Signal Recovery; Linear Subspace Balancedness; Expander Graph; Grassmann Angle; Weighted Basis Pursuit
Degree Grantor:California Institute of Technology
Division:Engineering and Applied Science
Major Option:Electrical Engineering
Minor Option:Applied and Computational Mathematics
Awards:Charles and Ellen Wilts Prize, 2010
Thesis Availability:Public (worldwide access)
Research Advisor(s):
  • Hassibi, Babak
Thesis Committee:
  • Hassibi, Babak (chair)
  • Candes, Emmanuel J.
  • Vaidyanathan, P. P.
  • McEliece, Robert J.
  • Tropp, Joel A.
Defense Date:13 August 2009
Non-Caltech Author Email:weiyu (AT) systems.caltech.edu
Record Number:CaltechTHESIS:10262009-081233260
Persistent URL:https://resolver.caltech.edu/CaltechTHESIS:10262009-081233260
DOI:10.7907/F63K-GT12
Default Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:5329
Collection:CaltechTHESIS
Deposited By: Weiyu Xu
Deposited On:21 Dec 2009 18:43
Last Modified:08 Nov 2019 18:07

Thesis Files

PDF - Final Version (1MB). See Usage Policy.
