CaltechTHESIS
  A Caltech Library Service

Recovering Structured Signals in High Dimensions via Non-Smooth Convex Optimization: Precise Performance Analysis

Citation

Thrampoulidis, Christos (2016) Recovering Structured Signals in High Dimensions via Non-Smooth Convex Optimization: Precise Performance Analysis. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/Z998850V. https://resolver.caltech.edu/CaltechTHESIS:06032016-144604076

Abstract

The typical scenario that arises in modern large-scale inference problems is one where the ambient dimension of the unknown signal is very large (e.g., high-resolution images, recommendation systems), yet its desired properties lie in some low-dimensional structure, such as sparsity or low-rankness. In the past couple of decades, non-smooth convex optimization methods have emerged as a powerful tool for extracting these structures: they are often computationally efficient, and they offer enough modeling flexibility while remaining amenable to performance analysis. In particular, significant progress has been made in this direction since the advent of Compressed Sensing (CS). One of the key ideas is that random linear measurements offer an efficient way to acquire structured signals. When the measurement matrix has entries drawn i.i.d. from a wide class of distributions (including Gaussians), a series of recent works has established a complete and transparent theory that precisely captures the performance in the noiseless setting. In the more practical scenario of noisy measurements, the performance-analysis task becomes significantly more challenging, and correspondingly precise and unifying results have hitherto remained scarce. The relevant class of optimization methods, often referred to as regularized M-estimators, is now richer; additional factors (e.g., the noise distribution, the loss function, and the regularization parameter) and several different measures of performance (e.g., squared error, probability of support recovery) need to be taken into account.

This thesis develops a novel analytical framework that overcomes these challenges and establishes precise asymptotic performance guarantees for regularized M-estimators under Gaussian measurement matrices. In particular, the framework allows a unified analysis across different instances (such as the Generalized LASSO and the LAD, to name a few) and accounts for a wide class of performance measures. Among other results, we characterize the mean-squared error of the Generalized LASSO and make insightful connections to the classical theory of ordinary least squares and to noiseless CS. Empirical evidence is presented that suggests the Gaussian assumption is not necessary. Beyond i.i.d. measurement matrices, and motivated by practical considerations, we study certain classes of random matrices with orthogonal rows and establish their superior performance compared to Gaussians.
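For orientation, here is a minimal sketch of the estimator at the heart of this analysis, in one common formulation (the exact variants and notation studied in the thesis may differ). The Generalized LASSO solves

    \hat{x} = \arg\min_{x} \; \| y - A x \|_2 + \lambda\, f(x),

where y = A x_0 + z are the noisy linear measurements, A is the (Gaussian) measurement matrix, f is a convex regularizer encoding the assumed structure (e.g., the \ell_1-norm for sparsity, the nuclear norm for low rank), and \lambda > 0 is the regularization parameter whose role the precise analysis makes explicit.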

A prominent application of this general theory is the analysis of the bit-error rate (BER) of the popular convex relaxation of the Maximum-Likelihood decoder for recovering BPSK signals in a massive Multiple-Input Multiple-Output (MIMO) setting. Our precise BER analysis allows comparison of such schemes to the unattainable Matched-Filter Bound, and further suggests means to provably boost their performance.

The last challenge is to evaluate the performance under non-linear measurements. For the Generalized LASSO, it is shown that this is (asymptotically) equivalent to the performance under noisy linear measurements with appropriately scaled variance. This encompasses state-of-the-art theoretical results on one-bit CS, and is also used to prove that the optimal quantizer of the measurements, in the sense of minimizing the estimation error of the Generalized LASSO, is the celebrated Lloyd-Max quantizer.
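A hedged sketch of the equivalence stated above, in notation assumed here rather than taken from the thesis: if the measurements are y_i = g(a_i^T x_0) for a (possibly unknown) non-linear link g, then, asymptotically, the Generalized LASSO behaves as if it were run on linear measurements

    y = \mu\, A x_0 + \sigma z, \qquad \mu = \mathbb{E}[\gamma\, g(\gamma)], \quad \sigma^2 = \mathbb{E}\big[(g(\gamma) - \mu\gamma)^2\big], \quad \gamma \sim \mathcal{N}(0,1),

so the non-linearity enters only through an effective signal scaling \mu and an effective noise variance \sigma^2. One-bit CS corresponds to taking g to be the sign function.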

The framework is based on Gaussian process methods; in particular, on a new, tight version of a classical Gaussian comparison inequality (due to Gordon, 1988) in the presence of additional convexity assumptions. We call this the Convex Gaussian Min-max Theorem (CGMT).
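A hedged sketch of the flavor of this comparison, with notation assumed here: the CGMT relates a primary min-max problem involving a Gaussian matrix G,

    \Phi(G) = \min_{w \in S_w} \max_{u \in S_u} \; u^\top G w + \psi(w, u),

to a far simpler auxiliary problem involving only two Gaussian vectors g and h,

    \phi(g, h) = \min_{w \in S_w} \max_{u \in S_u} \; \|w\|_2\, g^\top u + \|u\|_2\, h^\top w + \psi(w, u).

Roughly, when the constraint sets are compact and convex and \psi is convex-concave, the two optimal costs concentrate around the same value, so the performance of a regularized M-estimator can be read off from the auxiliary problem.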

Item Type:Thesis (Dissertation (Ph.D.))
Subject Keywords:convex relaxations; structured signals; high-dimensional estimation; linear inverse problems; compressed sensing; Gaussian comparisons.
Degree Grantor:California Institute of Technology
Division:Engineering and Applied Science
Major Option:Electrical Engineering
Thesis Availability:Public (worldwide access)
Research Advisor(s):
  • Hassibi, Babak
Thesis Committee:
  • Hassibi, Babak (chair)
  • Vaidyanathan, P. P.
  • Tropp, Joel A.
  • Wierman, Adam C.
  • Chandrasekaran, Venkat
Defense Date:17 May 2016
Funders:
  • NSF: CCF-1018927
  • NSF: CCF-1423663
  • NSF: CNS-0932428
  • NSF: CCF-1409204
  • Andreas Mentzelopoulos Scholarships for the University of Patras: UNSPECIFIED
  • Qualcomm Inc.: UNSPECIFIED
  • NASA’s Jet Propulsion Laboratory through the President and Directors Fund: UNSPECIFIED
Record Number:CaltechTHESIS:06032016-144604076
Persistent URL:https://resolver.caltech.edu/CaltechTHESIS:06032016-144604076
DOI:10.7907/Z998850V
ORCID:
  • Thrampoulidis, Christos: 0000-0001-9053-9365
Default Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:9836
Collection:CaltechTHESIS
Deposited By: Christos Thrampoulidis
Deposited On:06 Jun 2016 20:20
Last Modified:04 Oct 2019 00:13

Thesis Files

PDF - Final Version (2MB). See Usage Policy.
