oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
An iterative approach to a constrained least squares problem  [PDF]
Simeon Reich, Hong-Kun Xu
Abstract and Applied Analysis , 2003, DOI: 10.1155/s1085337503212082
Abstract: A constrained least squares problem in a Hilbert space H is considered. The standard Tikhonov regularization method is used. In the case where the set of constraints is the nonempty intersection of a finite collection of closed convex subsets of H, an iterative algorithm is designed. The resulting sequence is shown to converge strongly to the unique solution of the regularized problem. The net of solutions to the regularized problems converges strongly to the minimum-norm solution of the least squares problem if its solution set is nonempty.
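As an illustration of the two ingredients the abstract combines, Tikhonov regularization and constraint sets handled through their projections, the following NumPy sketch runs a generic projected-gradient iteration on the regularized objective in a finite-dimensional space; the constraint sets, step size, and iteration count are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def projected_tikhonov(A, b, alpha, projections, steps=500):
    """Minimize 0.5*||Ax - b||^2 + 0.5*alpha*||x||^2 over the intersection
    of convex sets, each supplied as its metric projection operator.
    Generic projected-gradient sketch, not the paper's exact iteration."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 + alpha            # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + alpha * x         # gradient of the regularized objective
        x = x - grad / L
        for P in projections:                        # cyclic projections onto the constraint sets
            x = P(x)
    return x

# Example constraints: x >= 0 and ||x||_inf <= 1 (two closed convex sets).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
proj_nonneg = lambda x: np.maximum(x, 0.0)
proj_box = lambda x: np.clip(x, -1.0, 1.0)
print(projected_tikhonov(A, b, alpha=0.1, projections=[proj_nonneg, proj_box]))
```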
On the filtering effect of iterative regularization algorithms for linear least-squares problems  [PDF]
Anastasia Cornelio, Federica Porta, Marco Prato, Luca Zanni
Mathematics , 2013, DOI: 10.1088/0266-5611/29/12/125013
Abstract: Many real-world applications are addressed through a linear least-squares formulation whose solution is computed by an iterative approach. A large body of work in the optimization field provides fast methods for reconstructing the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the plain least-squares one shifts the focus toward iterative algorithms that combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at the way they filter the singular values, showing in particular the effects of scaling matrices and non-negativity constraints in recovering the correct filters of the solution.
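The "filtering" viewpoint can be made concrete with the simplest gradient method, the Landweber iteration, whose k-th iterate damps each SVD component of the naive solution by the filter factor $f_i = 1-(1-\omega\sigma_i^2)^k$. The NumPy sketch below (an assumed illustration, not the scaled or constrained methods analyzed in the paper) verifies this identity numerically.

```python
import numpy as np

def landweber_filter_factors(A, k, omega):
    """Filter factors f_i = 1 - (1 - omega*s_i^2)^k associated with k
    Landweber (gradient-descent) iterations for min ||Ax - b||^2."""
    s = np.linalg.svd(A, compute_uv=False)
    return 1.0 - (1.0 - omega * s ** 2) ** k

def landweber(A, b, k, omega):
    """Run k Landweber iterations x <- x + omega * A^T (b - A x) from x = 0."""
    x = np.zeros(A.shape[1])
    for _ in range(k):
        x = x + omega * A.T @ (b - A @ x)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
omega = 1.0 / np.linalg.norm(A, 2) ** 2              # step size ensuring convergence

# The k-th iterate equals the SVD expansion of the solution damped by the filters.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
f = landweber_filter_factors(A, k=25, omega=omega)
x_svd = Vt.T @ (f * (U.T @ b) / s)
print(np.allclose(landweber(A, b, k=25, omega=omega), x_svd))   # True up to round-off
```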
LSMR: An iterative algorithm for sparse least-squares problems  [PDF]
David Fong, Michael Saunders
Mathematics , 2010, DOI: 10.1137/10079687X
Abstract: An iterative method LSMR is presented for solving linear systems $Ax=b$ and least-squares problems $\min \|Ax-b\|_2$, with $A$ being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation $A^T A x = A^T b$, so that the quantities $\|A^T r_k\|$ are monotonically decreasing (where $r_k = b - A x_k$ is the residual for the current iterate $x_k$). In practice we observe that $\|r_k\|$ also decreases monotonically. Compared to LSQR, for which only $\|r_k\|$ is monotonic, it is safer to terminate LSMR early. Improvements for the new iterative method in the presence of extra available memory are also explored.
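SciPy ships an implementation of LSMR as scipy.sparse.linalg.lsmr; a minimal usage sketch on a random sparse problem (all sizes and tolerances below are illustrative) looks like this.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsmr

# Sparse least-squares problem min ||Ax - b||_2 solved with SciPy's LSMR.
rng = np.random.default_rng(2)
A = sparse_random(1000, 200, density=0.01, format="csr", random_state=42)
b = rng.standard_normal(1000)

x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=1e-10, btol=1e-10)
print("iterations:", itn, " ||r||:", normr, " ||A^T r||:", normar)
```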
An Iterative Algorithm for the Least Squares Generalized Reflexive Solutions of the Matrix Equations
Feng Yin, Guang-Xin Huang
Abstract and Applied Analysis , 2012, DOI: 10.1155/2012/857284
Abstract: The generalized coupled Sylvester systems play a fundamental role in wide applications in several areas, such as stability theory, control theory, perturbation analysis, and other fields of pure and applied mathematics. Iterative methods are an important way to solve such systems. In this paper, an iterative algorithm is constructed to solve the associated minimum Frobenius norm residual problem over a generalized reflexive matrix X. For any initial generalized reflexive matrix X1, the iterative algorithm yields the generalized reflexive solution X* within finitely many iterative steps in the absence of round-off errors, and the unique least-norm generalized reflexive solution can also be obtained when an appropriate initial iterative matrix is chosen. Furthermore, the unique optimal approximate solution to a given matrix X0 in the Frobenius norm can be derived by finding the least-norm generalized reflexive solution of a new corresponding minimum Frobenius norm residual problem with suitably shifted right-hand sides. Finally, several numerical examples are given to illustrate that the iterative algorithm is effective.
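For a concrete feel of such matrix-equation least-squares iterations, the sketch below applies a conjugate-gradient iteration, written directly in matrix form, to the unconstrained coupled problem min_X ||AXB - E||_F^2 + ||CXD - F||_F^2, taken here as a representative instance (an assumption for illustration); the generalized reflexive constraint and the paper's specific update rules are not reproduced.

```python
import numpy as np

def coupled_ls_cg(A, B, C, D, E, F, tol=1e-12, max_iter=200):
    """Conjugate-gradient iteration in matrix form for the unconstrained problem
    min_X ||AXB - E||_F^2 + ||CXD - F||_F^2.  It solves the normal equations
    A^T(AXB)B^T + C^T(CXD)D^T = A^T E B^T + C^T F D^T and, in exact arithmetic,
    terminates in finitely many steps."""
    normal_op = lambda X: A.T @ (A @ X @ B) @ B.T + C.T @ (C @ X @ D) @ D.T
    T = A.T @ E @ B.T + C.T @ F @ D.T
    X = np.zeros((A.shape[1], B.shape[0]))
    R = T - normal_op(X)
    P = R.copy()
    rs = np.sum(R * R)
    for _ in range(max_iter):
        Q = normal_op(P)
        alpha = rs / np.sum(P * Q)
        X += alpha * P
        R -= alpha * Q
        rs_new = np.sum(R * R)
        if rs_new < tol:
            break
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X

rng = np.random.default_rng(3)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((5, 7))
C, D = rng.standard_normal((6, 4)), rng.standard_normal((5, 7))
X_true = rng.standard_normal((4, 5))
E, F = A @ X_true @ B, C @ X_true @ D          # consistent right-hand sides
X = coupled_ls_cg(A, B, C, D, E, F)
print(np.linalg.norm(A @ X @ B - E) + np.linalg.norm(C @ X @ D - F))
```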
Large scale canonical correlation analysis with iterative least squares  [PDF]
Yichao Lu, Dean P. Foster
Statistics , 2014,
Abstract: Canonical Correlation Analysis (CCA) is a widely used statistical tool with both well-established theory and favorable performance for a wide range of machine learning problems. However, computing CCA for huge datasets can be very slow since it involves a QR decomposition or singular value decomposition of huge matrices. In this paper we introduce L-CCA, an iterative algorithm which can compute CCA quickly on huge sparse datasets. Theory on both the asymptotic convergence and the finite-time accuracy of L-CCA is established. Experiments also show that L-CCA outperforms other fast CCA approximation schemes on two real datasets.
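The idea of computing CCA through iterative least squares can be sketched as alternating regressions between the two views, which amounts to power iteration between their column spaces; the dense small-scale code below is only an assumed illustration of that principle, while L-CCA replaces the exact solves with fast approximate ones.

```python
import numpy as np

def cca_top_pair_als(X, Y, iters=200, seed=0):
    """Top canonical-correlation pair via alternating least-squares regressions
    (power iteration between the column spaces of X and Y).  Dense small-scale
    sketch; L-CCA substitutes fast approximate least-squares solvers."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(X.shape[0])
    for _ in range(iters):
        b, *_ = np.linalg.lstsq(Y, u, rcond=None)   # regress the current direction on Y
        v = Y @ b
        v /= np.linalg.norm(v)
        a, *_ = np.linalg.lstsq(X, v, rcond=None)   # regress back on X
        u = X @ a
        u /= np.linalg.norm(u)
    corr = float(u @ v)                             # estimated top canonical correlation
    return a, b, corr

rng = np.random.default_rng(4)
Z = rng.standard_normal((500, 2))                   # shared latent signal
X = np.hstack([Z, rng.standard_normal((500, 3))])
Y = np.hstack([Z @ rng.standard_normal((2, 2)), rng.standard_normal((500, 4))])
a, b, corr = cca_top_pair_als(X, Y)
print("estimated top canonical correlation:", corr)
```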
NLOS Identification and Weighted Least-Squares Localization for UWB Systems Using Multipath Channel Statistics  [cached]
İsmail Güvenç, Chia-Chin Chong, Fujio Watanabe, Hiroshi Inamura
EURASIP Journal on Advances in Signal Processing , 2007, DOI: 10.1155/2008/271984
Abstract: Non-line-of-sight (NLOS) identification and mitigation carry significant importance in wireless localization systems. In this paper, we propose a novel NLOS identification technique based on multipath channel statistics such as the kurtosis, the mean excess delay spread, and the root-mean-square delay spread. In particular, the IEEE 802.15.4a ultrawideband channel models are used as examples, and the above statistics are found to be well modeled by log-normal random variables. Subsequently, a joint likelihood ratio test is developed for line-of-sight (LOS) or NLOS identification. Three different weighted least-squares (WLS) localization techniques that exploit the statistics of multipath components (MPCs) are analyzed. The basic idea behind the proposed WLS approaches is that smaller weights are given to the measurements which are likely to be biased (based on the MPC information), as opposed to variance-based WLS techniques in the literature. Accuracy gains with respect to the conventional least-squares algorithm are demonstrated via Monte Carlo simulations and verified by theoretical derivations.
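A weighted least-squares position fix of the kind discussed here can be sketched as Gauss-Newton on weighted range residuals, where links suspected of NLOS bias receive smaller weights; the anchors, weights, and bias below are made-up illustrative values, not the paper's likelihood-based weighting.

```python
import numpy as np

def wls_localization(anchors, ranges, weights, x0=None, iters=50):
    """Weighted least-squares position fix from range measurements:
    minimize sum_i w_i * (r_i - ||x - a_i||)^2 via Gauss-Newton."""
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    W = np.diag(weights)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]              # Jacobian of the predicted ranges
        r = ranges - d                              # range residuals
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges[1] += 2.0                                    # positive NLOS bias on one link
weights = np.array([1.0, 0.1, 1.0, 1.0])            # down-weight the suspected NLOS link
print(wls_localization(anchors, ranges, weights))
```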
Linearized iterative least-squares (LIL): A parameter fitting algorithm for component separation in multifrequency CMB experiments such as Planck  [PDF]
Rishi Khatri
Physics , 2014, DOI: 10.1093/mnras/stv1167
Abstract: We present an efficient algorithm for least squares parameter fitting, optimized for component separation in multi-frequency CMB experiments. We sidestep some of the problems associated with non-linear optimization by taking advantage of the quasi-linear nature of the foreground model. We demonstrate our algorithm, linearized iterative least-squares (LIL), on the publicly available Planck sky model FFP6 simulations and compare our results with those of other algorithms. We work at full Planck resolution and show that degrading the resolution of all channels to that of the lowest-frequency channel is not necessary. Finally, we present results for the publicly available Planck data. Our algorithm is extremely fast, fitting 6 parameters to the 7 lowest Planck channels at full resolution (50 million pixels) in less than 160 CPU-minutes (or a few minutes running in parallel on a few tens of cores). LIL is therefore easily scalable to future experiments, which may have even higher resolution and more frequency channels. We also naturally propagate the uncertainties in the different parameters, due to noise in the maps as well as degeneracies between the parameters, to the final errors on the parameters using the Fisher matrix. One indirect application of LIL could be as a front end for Bayesian parameter fitting, finding the maximum of the likelihood to be used as the starting point for Gibbs sampling. We show that for rare components, such as carbon monoxide emission, which are present in a small fraction of the sky, the optimal approach should combine parameter fitting with model selection. LIL may also be useful in other astrophysical applications which satisfy the quasi-linearity criteria.
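The core of a quasi-linear fit is to linearize the model in its non-linear parameters and iterate small weighted linear least-squares solves, with the Fisher matrix providing the noise-driven parameter errors. The sketch below uses a toy two-component model (a constant plus a power law in frequency), which is an assumption for illustration and not the Planck foreground model used by LIL.

```python
import numpy as np

# Toy per-pixel model: s(nu) = a_cmb + a_dust * (nu/nu0)**beta.  It is linear in the
# two amplitudes and mildly non-linear in beta, so each iteration linearizes in beta
# and solves a small weighted linear least-squares problem; parameter errors follow
# from the Fisher matrix J^T N^-1 J.  Channels, noise, and model are illustrative.
nu = np.array([30., 44., 70., 100., 143., 217., 353.])       # GHz
nu0 = 353.0
sigma = 0.05 * np.ones_like(nu)                               # assumed per-channel noise

def model_and_jacobian(p):
    a_cmb, a_dust, beta = p
    scale = (nu / nu0) ** beta
    s = a_cmb + a_dust * scale
    J = np.column_stack([np.ones_like(nu),                    # d s / d a_cmb
                         scale,                               # d s / d a_dust
                         a_dust * scale * np.log(nu / nu0)])  # d s / d beta
    return s, J

rng = np.random.default_rng(5)
p_true = np.array([1.0, 3.0, 1.6])
data = model_and_jacobian(p_true)[0] + sigma * rng.standard_normal(nu.size)

p = np.array([0.5, 2.0, 1.2])                                 # starting guess near the solution
Ninv = np.diag(1.0 / sigma ** 2)
for _ in range(20):
    s, J = model_and_jacobian(p)
    F = J.T @ Ninv @ J                                        # Fisher matrix
    p = p + np.linalg.solve(F, J.T @ Ninv @ (data - s))       # linearized LS update
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("fit:", p, "  1-sigma errors:", errors)
```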
Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares  [PDF]
Mert Pilanci, Martin J. Wainwright
Mathematics , 2014,
Abstract: We study randomized sketching methods for approximately solving a least-squares problem with a general convex constraint. The quality of a least-squares approximation can be assessed in different ways: either in terms of the value of the quadratic objective function (cost approximation), or in terms of some distance measure between the approximate minimizer and the true minimizer (solution approximation). Focusing on the latter criterion, our first main result provides a general lower bound on any randomized method that sketches both the data matrix and vector in a least-squares problem; as a surprising consequence, the most widely used least-squares sketch is sub-optimal for solution approximation. We then present a new method known as the iterative Hessian sketch, and show that it can be used to obtain approximations to the original least-squares problem using a projection dimension proportional to the statistical complexity of the least-squares minimizer and a logarithmic number of iterations. We illustrate our general theory with simulations for both unconstrained and constrained versions of least-squares, including $\ell_1$-regularization and nuclear norm constraints. We also numerically demonstrate the practicality of our approach in a real facial-expression classification experiment.
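For the unconstrained case, the iterative Hessian sketch replaces the exact Hessian A^T A in each Newton-type step with its sketched version A^T S^T S A while keeping the exact gradient; the NumPy sketch below follows that recipe with a fresh Gaussian sketch per iteration (sketch size and iteration count are illustrative choices).

```python
import numpy as np

def iterative_hessian_sketch(A, b, sketch_size, iters=10, seed=0):
    """Unconstrained iterative Hessian sketch illustration: each step solves a
    least-squares subproblem whose Hessian A^T A is replaced by the sketched
    A^T S^T S A, using a fresh Gaussian sketch S and the exact gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
        SA = S @ A
        g = A.T @ (b - A @ x)                  # exact gradient term
        x = x + np.linalg.solve(SA.T @ SA, g)  # sketched-Hessian Newton step
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((5000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.1 * rng.standard_normal(5000)
x_ihs = iterative_hessian_sketch(A, b, sketch_size=500, iters=8)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to exact LS solution:", np.linalg.norm(x_ihs - x_ls))
```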
Reduced-Rank Space-Time Interference Suppression with Joint Iterative Least Squares Algorithms for Spread Spectrum Systems  [PDF]
Rodrigo C. de Lamare, Raimundo Sampaio-Neto
Mathematics , 2013,
Abstract: This paper presents novel adaptive space-time reduced-rank interference suppression least squares algorithms based on the joint iterative optimization of parameter vectors. The proposed space-time reduced-rank scheme consists of a joint iterative optimization of a projection matrix that performs dimensionality reduction and an adaptive reduced-rank parameter vector that yields the symbol estimates. The proposed techniques do not require singular value decomposition (SVD) and automatically find the best set of basis vectors for reduced-rank processing. We present least squares (LS) expressions for the design of the projection matrix and the reduced-rank parameter vector, and we conduct an analysis of the convergence properties of the LS algorithms. We then develop recursive least squares (RLS) adaptive algorithms for their computationally efficient estimation, together with an algorithm for automatically adjusting the rank of the proposed scheme. A convexity analysis of the LS algorithms is carried out along with a proof of convergence for the proposed algorithms. Simulations for a space-time interference suppression application with a DS-CDMA system show that the proposed scheme outperforms the state-of-the-art reduced-rank schemes in convergence and tracking performance at comparable complexity.
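The joint-iteration idea can be sketched with plain alternating least squares: hold the projection matrix fixed and solve LS for the reduced-rank filter, then hold the filter fixed and solve LS for the projection. The real-valued toy example below is an assumed illustration of that alternation, not the paper's RLS algorithms or rank-adaptation rule.

```python
import numpy as np

def joint_iterative_ls(R, d, rank, iters=20, seed=0):
    """Alternating least-squares sketch of reduced-rank filtering: fit a
    projection matrix S (M x D) and a short reduced-rank filter w (length D)
    so that (R @ S) @ w tracks the desired sequence d."""
    rng = np.random.default_rng(seed)
    N, M = R.shape
    S = rng.standard_normal((M, rank))
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(R @ S, d, rcond=None)   # LS update of the filter
        K = np.stack([np.kron(r, w) for r in R])        # design matrix for vec(S)
        s_vec, *_ = np.linalg.lstsq(K, d, rcond=None)   # LS update of the projection
        S = s_vec.reshape(M, rank)
    w, *_ = np.linalg.lstsq(R @ S, d, rcond=None)       # final filter for the last S
    return S, w

rng = np.random.default_rng(7)
R = rng.standard_normal((400, 16))                      # received-signal snapshots
w_full = rng.standard_normal(16)                        # full-rank reference filter
d = R @ w_full + 0.05 * rng.standard_normal(400)        # desired symbol sequence
S, w = joint_iterative_ls(R, d, rank=4)
print("reduced-rank fit MSE:", np.mean((R @ S @ w - d) ** 2))
```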
A Gauss-Seidel Iterative Thresholding Algorithm for lq Regularized Least Squares Regression  [PDF]
Jinshan Zeng, Zhimin Peng, Shaobo Lin
Computer Science , 2015,
Abstract: In recent studies on sparse modeling, $l_q$ ($0 < q < 1$) regularization ...
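A Gauss-Seidel (cyclic coordinate-wise) iterative thresholding scheme sweeps through the coordinates, immediately reusing the freshest values of the other coordinates and applying a scalar thresholding rule to each. The sketch below shows that structure for the $q = 1$ special case, where the thresholding operator is the familiar soft threshold; for $0 < q < 1$ the same sweep applies with a different, non-convex thresholding rule.

```python
import numpy as np

def gauss_seidel_soft_thresholding(A, b, lam, iters=100):
    """Cyclic (Gauss-Seidel style) coordinate descent for the q = 1 special case
    of l_q-regularized least squares, min_x 0.5*||Ax - b||_2^2 + lam*||x||_1:
    each scalar update is a soft-thresholding step that uses the most recent
    values of all other coordinates."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A * A, axis=0)                   # ||a_j||^2 for each column
    r = b - A @ x                                    # running residual
    for _ in range(iters):
        for j in range(n):
            rho = A[:, j] @ r + col_sq[j] * x[j]     # partial correlation for coordinate j
            x_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)            # update the residual in place
            x[j] = x_new
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((100, 40))
x_true = np.zeros(40)
x_true[[3, 11, 27]] = [2.0, -1.5, 3.0]               # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(100)
print(np.round(gauss_seidel_soft_thresholding(A, b, lam=5.0), 2))
```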