oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems  [PDF]
Yunbin Zhao
Computer Science , 2013, DOI: 10.1109/TSP.2013.2281030
Abstract: Recently, worst-case analysis, probabilistic analysis, and empirical justification have been employed to address a fundamental question: when does $\ell_1$-minimization find the sparsest solution to an underdetermined linear system? In this paper, a deterministic analysis, rooted in classic linear programming theory, is carried out to further address this question. We first identify a necessary and sufficient condition for the uniqueness of least $\ell_1$-norm solutions to linear systems. From this condition, we deduce that a sparsest solution coincides with the unique least $\ell_1$-norm solution to a linear system if and only if the so-called \emph{range space property} (RSP) holds at this solution. This yields a broad understanding of the relationship between the $\ell_0$- and $\ell_1$-minimization problems, and indicates that the RSP lies at the heart of that relationship. RSP-based analysis largely resolves several important questions in this field: for instance, how to interpret, through a deterministic analysis, the gap between current theory and the actual numerical performance of $\ell_1$-minimization; and, when a linear system has multiple sparsest solutions, when $\ell_1$-minimization is guaranteed to find one of them. Moreover, new matrix properties (such as the \emph{RSP of order $K$} and the \emph{Weak-RSP of order $K$}) are introduced, and a new theory for sparse signal recovery based on the RSP of order $K$ is established.
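For concreteness, the $\ell_1$-minimization (basis pursuit) problem the abstract refers to, $\min \|x\|_1$ s.t. $Ax = b$, can be posed as a linear program. A minimal sketch using SciPy's `linprog` (a generic LP lift, not the paper's RSP analysis):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Least l1-norm solution of A x = b via linear programming.

    Uses the standard lift: min sum(t) s.t. -t <= x <= t, A x = b,
    over the stacked variable z = [x; t].
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = b
    # x - t <= 0 and -x - t <= 0 together encode |x_i| <= t_i
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:n]

# Example: a 3x6 underdetermined system with a 1-sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6); x_true[2] = 1.5
print(np.round(basis_pursuit(A, A @ x_true), 4))
```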
Best $\ell_1$-approximation of nonnegative polynomials by sums of squares  [PDF]
Jean Lasserre
Mathematics , 2010,
Abstract: Given a nonnegative polynomial f, we provide an explicit expression for its best $\ell_1$-norm approximation by a sum of squares of given degree.
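As a rough numerical illustration of the problem being solved: for a univariate polynomial, a sum of squares of degree $2d$ can be written as $m(x)^\top Q\, m(x)$ with $Q \succeq 0$ and $m(x) = (1, x, \dots, x^d)$, so the best approximation becomes a small semidefinite program. A hypothetical sketch with cvxpy, assuming the $\ell_1$ norm is taken on the coefficient vector (the paper gives an explicit expression; this is only a brute-force formulation for comparison):

```python
import numpy as np
import cvxpy as cp

d = 2                                        # half-degree of the SOS approximant
f = np.array([1.0, 0.0, -2.0, 0.0, 2.0])     # f(x) = 1 - 2x^2 + 2x^4, coeffs low-to-high

Q = cp.Variable((d + 1, d + 1), PSD=True)    # Gram matrix of sigma(x) = m(x)^T Q m(x)
# Coefficient of x^k in sigma is the k-th antidiagonal sum of Q.
sigma_coeffs = [sum(Q[i, k - i] for i in range(max(0, k - d), min(k, d) + 1))
                for k in range(2 * d + 1)]
residual = cp.hstack([f[k] - sigma_coeffs[k] for k in range(2 * d + 1)])
prob = cp.Problem(cp.Minimize(cp.norm1(residual)))
prob.solve()
# A nonnegative univariate polynomial is itself SOS, so the optimal
# l1 error here should be ~0; this is only a sanity check of the setup.
print("l1 error:", prob.value)
```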
Finding the largest low-rank clusters with Ky Fan $2$-$k$-norm and $\ell_1$-norm  [PDF]
Xuan Vinh Doan,Stephen Vavasis
Mathematics , 2014,
Abstract: We propose a convex optimization formulation with the Ky Fan $2$-$k$-norm and $\ell_1$-norm to find $k$ largest approximately rank-one submatrix blocks of a given nonnegative matrix that has low-rank block diagonal structure with noise. We analyze low-rank and sparsity structures of the optimal solutions using properties of these two matrix norms. We show that, under certain hypotheses, with high probability, the approach can recover rank-one submatrix blocks even when they are corrupted with random noise and inserted into a much larger matrix with other random noise blocks.
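For reference, the Ky Fan $2$-$k$-norm of a matrix is the $\ell_2$ norm of its $k$ largest singular values. A minimal NumPy sketch of evaluating the norm itself (not the paper's convex recovery formulation):

```python
import numpy as np

def ky_fan_2k_norm(A, k):
    """Ky Fan 2-k-norm: l2 norm of the k largest singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    return float(np.sqrt(np.sum(s[:k] ** 2)))

A = np.diag([3.0, 2.0, 1.0])
print(ky_fan_2k_norm(A, 2))   # sqrt(3^2 + 2^2) ~= 3.6056
```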
Sparsest Error Detection via Sparsity Invariant Transformation based $\ell_1$ Minimization  [PDF]
Suzhen Wang,Sheng Han,Zhiguo Zhang,Wing Shing Wong
Statistics , 2015,
Abstract: This paper presents a new method, referred to here as sparsity invariant transformation based $\ell_1$ minimization, to solve the $\ell_0$ minimization problem for an over-determined linear system corrupted by additive sparse errors of arbitrary intensity. Many previous works have shown that $\ell_1$ minimization can be applied to realize sparse error detection in many over-determined linear systems. However, the performance of this approach depends strongly on the structure of the measurement matrix, which limits its applicability in practical problems. Here, we present a new approach based on transforming the $\ell_0$ minimization problem by a linear transformation that keeps sparsest solutions invariant. We call such a property the sparsity invariant property (SIP), and a linear transformation with SIP is referred to as a sparsity invariant transformation (SIT). We propose the SIT-based $\ell_1$ minimization method, which applies an SIT in conjunction with $\ell_1$ relaxation to the $\ell_0$ minimization problem. We prove that for any over-determined linear system there always exists a specific class of SITs guaranteeing that a solution of the SIT-based $\ell_1$ minimization is a sparsest-errors solution. In addition, a randomized algorithm based on Monte Carlo simulation is proposed to search for a feasible SIT.
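As background, the plain $\ell_1$ decoder that SIT-based minimization builds on solves $\min_x \|y - Ax\|_1$ for an over-determined system $y = Ax + e$ with sparse $e$ of arbitrary magnitude. A minimal sketch with cvxpy (hypothetical dimensions; the SIT construction itself is not reproduced here):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n = 60, 10                                    # over-determined: m > n
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
e = np.zeros(m)                                  # 5 gross errors, large magnitude
e[rng.choice(m, 5, replace=False)] = 10 * rng.standard_normal(5)
y = A @ x_true + e

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(y - A @ x))).solve()   # l1 regression decoder
print("recovery error:", np.linalg.norm(x.value - x_true))
```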
Borel equivalence relations between \ell_1 and \ell_p  [PDF]
Longyun Ding,Zhi Yin
Mathematics , 2011,
Abstract: In this paper, we show that, for each $p>1$, there are continuum many Borel equivalence relations between $\Bbb R^\omega/\ell_1$ and $\Bbb R^\omega/\ell_p$ ordered by $\le_B$ which are pairwise Borel incomparable.
The Ordered Weighted $\ell_1$ Norm: Atomic Formulation, Projections, and Algorithms  [PDF]
Xiangrong Zeng,Mário A. T. Figueiredo
Computer Science , 2014,
Abstract: The ordered weighted $\ell_1$ norm (OWL) was recently proposed, with two different motivations: its good statistical properties as a sparsity-promoting regularizer, and the fact that it generalizes the so-called {\it octagonal shrinkage and clustering algorithm for regression} (OSCAR), which has the ability to cluster/group regression variables that are highly correlated. This paper contains several contributions to the study and application of OWL regularization: the derivation of the atomic formulation of the OWL norm; the derivation of the dual of the OWL norm, based on its atomic formulation; a new and simpler derivation of the proximity operator of the OWL norm; an efficient scheme to compute the Euclidean projection onto an OWL ball; the instantiation of the conditional gradient (CG, also known as Frank-Wolfe) algorithm for linear regression problems under OWL regularization; and the instantiation of accelerated projected gradient algorithms for the same class of problems. Finally, a set of experiments gives evidence that accelerated projected gradient algorithms are considerably faster than CG for the class of problems considered.
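For reference, $\|x\|_{\mathrm{OWL}} = \sum_i w_i |x|_{[i]}$, where $|x|_{[1]} \ge |x|_{[2]} \ge \dots$ are the sorted absolute values and $w$ is nonincreasing and nonnegative. Below is a sketch of evaluating the norm and of the known sort-plus-isotonic-regression (PAVA) construction of its proximity operator, using scikit-learn's isotonic regression (a plausible implementation of the standard construction, not code from the paper):

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def owl_norm(x, w):
    """OWL norm: sum_i w_i * |x|_[i], with w nonincreasing and nonnegative."""
    return float(np.sort(np.abs(x))[::-1] @ w)

def owl_prox(v, w):
    """Proximity operator of the OWL norm via sort + isotonic regression."""
    order = np.argsort(np.abs(v))[::-1]                # sort |v| descending
    u = np.abs(v)[order]
    z = isotonic_regression(u - w, increasing=False)   # project onto nonincreasing cone
    z = np.maximum(z, 0.0)                             # threshold at zero
    out = np.zeros_like(v)
    out[order] = z                                     # undo the sort
    return np.sign(v) * out

w = np.array([3.0, 2.0, 1.0])        # nonincreasing, nonnegative weights
v = np.array([-0.5, 4.0, -2.0])
print(owl_norm(v, w))                 # 3*4 + 2*2 + 1*0.5 = 16.5
print(owl_prox(v, w))
```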
An $O(n\log(n))$ Algorithm for Projecting Onto the Ordered Weighted $\ell_1$ Norm Ball  [PDF]
Damek Davis
Mathematics , 2015,
Abstract: The ordered weighted $\ell_1$ (OWL) norm is a newly developed generalization of the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) norm. This norm has desirable statistical properties and can be used to perform simultaneous clustering and regression. In this paper, we show how to compute the projection of an $n$-dimensional vector onto the OWL norm ball in $O(n\log(n))$ operations. In addition, we illustrate the performance of our algorithm on a synthetic regression test.
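The paper's direct $O(n\log(n))$ projection is not reproduced here; as a generic fallback, the projection onto $\{x : \|x\|_{\mathrm{OWL}} \le r\}$ can be obtained by bisecting the Lagrange multiplier of the OWL proximity operator. A hedged sketch (helpers repeated from the previous sketch; bisection adds an extra logarithmic factor over the paper's method):

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def owl_norm(x, w):
    return float(np.sort(np.abs(x))[::-1] @ w)

def owl_prox(v, w):
    order = np.argsort(np.abs(v))[::-1]
    z = np.maximum(isotonic_regression(np.abs(v)[order] - w,
                                       increasing=False), 0.0)
    out = np.zeros_like(v); out[order] = z
    return np.sign(v) * out

def project_owl_ball(v, w, radius, tol=1e-8):
    """Project v onto {x : owl_norm(x, w) <= radius} by bisecting the
    multiplier lam in prox_{lam * OWL}; NOT the paper's direct algorithm."""
    if owl_norm(v, w) <= radius:
        return v.copy()
    lo, hi = 0.0, 1.0
    while owl_norm(owl_prox(v, hi * w), w) > radius:   # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:                                # bisect to the ball surface
        mid = 0.5 * (lo + hi)
        if owl_norm(owl_prox(v, mid * w), w) > radius:
            lo = mid
        else:
            hi = mid
    return owl_prox(v, hi * w)

v = np.array([3.0, -1.0, 0.5]); w = np.array([2.0, 1.0, 0.5])
x = project_owl_ball(v, w, radius=2.0)
print(x, owl_norm(x, w))   # OWL norm of the projection ~= 2.0
```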
An $\mathcal{O}(n\log n)$ projection operator for weighted $\ell_1$-norm regularization with sum constraint  [PDF]
Weiran Wang
Computer Science , 2015,
Abstract: We provide a simple and efficient algorithm for the projection operator for weighted $\ell_1$-norm regularization subject to a sum constraint, together with an elementary proof. The implementation of the proposed algorithm can be downloaded from the author's homepage.
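The abstract does not spell out the exact problem form; assuming it is the proximity problem $\min_x \tfrac12\|x - v\|^2 + \sum_i w_i |x_i|$ subject to $\sum_i x_i = s$, a reference formulation with cvxpy looks as follows (a generic solver call for checking answers, not the author's $O(n\log n)$ algorithm):

```python
import numpy as np
import cvxpy as cp

# Hypothetical instance of the assumed problem form.
v = np.array([1.2, -0.3, 0.8, 2.0])
w = np.array([0.5, 0.5, 1.0, 0.2])     # nonnegative weights
s = 1.5                                 # required sum of the solution

x = cp.Variable(v.size)
prob = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(x - v) + w @ cp.abs(x)),
    [cp.sum(x) == s])                   # the sum constraint
prob.solve()
print(np.round(x.value, 4), float(np.sum(x.value)))
```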
On the Complexity of Robust PCA and $\ell_1$-norm Low-Rank Matrix Approximation  [PDF]
Nicolas Gillis,Stephen A. Vavasis
Mathematics , 2015,
Abstract: The low-rank matrix approximation problem with respect to the component-wise $\ell_1$-norm ($\ell_1$-LRA), which is closely related to robust principal component analysis (PCA), has become a very popular tool in data mining and machine learning. Robust PCA aims at recovering a low-rank matrix that was perturbed with sparse noise, with applications for example in foreground-background video separation. Although $\ell_1$-LRA is strongly believed to be NP-hard, there is, to the best of our knowledge, no formal proof of this fact. In this paper, we prove that $\ell_1$-LRA is NP-hard, already in the rank-one case, using a reduction from MAX CUT. Our derivations draw interesting connections between $\ell_1$-LRA and several other well-known problems, namely, robust PCA, $\ell_0$-LRA, binary matrix factorization, a particular densest bipartite subgraph problem, the computation of the cut norm of $\{-1,+1\}$ matrices, and the discrete basis problem, all of which we prove to be NP-hard.
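Although the paper shows rank-one $\ell_1$-LRA, $\min_{u,v} \sum_{ij} |M_{ij} - u_i v_j|$, is NP-hard, a common local-search heuristic alternates between the two factors: with $v$ fixed, each $u_i$ is a weighted median of $M_{ij}/v_j$ with weights $|v_j|$, and symmetrically for $v$. A sketch of that heuristic (illustration only, with no optimality claim):

```python
import numpy as np

def weighted_median(values, weights):
    """Minimizer of sum_i weights_i * |values_i - m|."""
    idx = np.argsort(values)
    v, w = values[idx], weights[idx]
    cum = np.cumsum(w)
    return float(v[np.searchsorted(cum, 0.5 * cum[-1])])

def rank_one_l1(M, iters=50):
    """Alternating weighted-median heuristic for rank-one l1-LRA."""
    m, n = M.shape
    v = M[np.argmax(np.abs(M).sum(axis=1))].copy()   # init: dominant row
    u = np.zeros(m)
    for _ in range(iters):
        nz = v != 0
        if not nz.any():
            break
        for i in range(m):
            u[i] = weighted_median(M[i, nz] / v[nz], np.abs(v[nz]))
        nz = u != 0
        if not nz.any():
            break
        for j in range(n):
            v[j] = weighted_median(M[nz, j] / u[nz], np.abs(u[nz]))
    return u, v

M = np.outer([1.0, 2.0, -1.0], [3.0, 0.5, 1.0])
M[0, 0] += 5.0                                       # one gross outlier
u, v = rank_one_l1(M)
print("l1 error:", np.abs(M - np.outer(u, v)).sum())  # l1 fit largely rejects the outlier
```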
Beyond $\ell_1$-norm minimization for sparse signal recovery  [PDF]
Hassan Mansour
Mathematics , 2012,
Abstract: Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN) problem formulation for over a decade. In this paper, we propose an algorithm that outperforms BPDN in finding sparse solutions to underdetermined linear systems of equations at no additional computational cost. Our algorithm, called WSPGL1, is a modification of the spectral projected gradient for $\ell_1$ minimization (SPGL1) algorithm in which the sequence of LASSO subproblems is replaced by a sequence of weighted LASSO subproblems with constant weights applied to a support estimate. The support estimate is derived from the data and is updated at every iteration. The algorithm also modifies the Pareto curve at every iteration to reflect the new weighted $\ell_1$ minimization problem that is being solved. We demonstrate through extensive simulations that the sparse recovery performance of our algorithm is superior to that of $\ell_1$ minimization and approaches the recovery performance of the iterative re-weighted $\ell_1$ (IRWL1) minimization of Candès, Wakin, and Boyd, although it does not match it in general. Moreover, our algorithm has the computational cost of a single BPDN problem.
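A schematic of the weighted-$\ell_1$-with-support-estimate idea, using a generic convex solver rather than the SPGL1/Pareto-curve machinery the paper actually modifies (the weight value and support size are illustrative assumptions):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

def weighted_l1(weights):
    """Noiseless weighted basis pursuit: min w^T |x| s.t. A x = b."""
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(weights @ cp.abs(x)), [A @ x == b]).solve()
    return x.value

x1 = weighted_l1(np.ones(n))                 # plain l1 pass
support = np.argsort(np.abs(x1))[-k:]        # support estimate from largest entries
wts = np.ones(n); wts[support] = 0.3         # constant weight on the estimated support
x2 = weighted_l1(wts)                        # weighted re-solve
print("l1-only error:", np.linalg.norm(x1 - x_true))
print("weighted error:", np.linalg.norm(x2 - x_true))
```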