Search Results: 1 - 10 of 100 matches
 Statistics , 2012, Abstract: Principal component analysis is a useful dimension reduction and data visualization method. However, in high dimension, low sample size asymptotic contexts, where the sample size is fixed and the dimension goes to infinity, a paradox has arisen. In particular, despite the useful real data insights commonly obtained from principal component score visualization, these scores are not consistent even when the sample eigenvectors are consistent. This paradox is resolved by asymptotic study of the ratio between the sample and population principal component scores. In particular, it is seen that this proportion converges to a non-degenerate random variable. The realization is the same for each data point, i.e. there is a common random rescaling, which appears for each eigendirection. This then gives inconsistent axis labels for the standard scores plot, yet the relative positions of the points (typically the main visual content) are consistent. This paradox disappears when the sample size goes to infinity.
 Statistics , 2014, Abstract: Plots of scores from principal component analysis are a popular approach to visualize and explore high-dimensional genetic data. However, the inconsistency of the high-dimensional eigenvectors has discredited classical principal component analysis and helped motivate sparse principal component analysis, where the eigenvectors are regularized. Still, classical principal component analysis is extensively and successfully used for data visualization, and our aim is to give an explanation of this paradoxical situation. We show that the visual information given by the relative positions of the scores will be consistent if the related signal can be considered to be pervasive. Firstly, we argue that pervasive signals lead to eigenvalues scaling linearly with the dimension, and we discuss genetic applications where such pervasive signals are reasonable. Secondly, we prove, within the high-dimension low sample size regime, that when eigenvalues scale linearly with the dimension, the sample component scores will appear as scaled and rotated versions of the population scores. In consequence, the relative positions and visual information conveyed by the score plots will be consistent.
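The phenomenon described in the two abstracts above can be illustrated numerically. The following is a minimal sketch, not taken from either paper: it assumes a single pervasive spike whose eigenvalue scales linearly with the dimension, and checks that the sample scores line up with the population scores up to sign and a common rescaling, so the score plot's relative positions survive.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 2000                        # HDLSS: few samples, many dimensions

# Pervasive one-spike model: the leading eigenvalue scales linearly with d.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)                 # population eigendirection
z = rng.standard_normal(n)             # population component scores
X = np.sqrt(d) * np.outer(z, u) + rng.standard_normal((n, d))

# Sample scores from classical PCA (SVD of the centered data matrix).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
sample_scores = U[:, 0] * s[0]

# Up to sign and a common random rescaling, sample and population scores
# agree, so relative positions in the score plot are preserved.
corr = np.corrcoef(sample_scores, z)[0, 1]
print(abs(corr))
```

The correlation printed at the end is close to 1 even though n is fixed at 20, which is exactly the consistency of relative positions the papers discuss.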
 Statistics , 2012, DOI: 10.1214/10-AOS813 Abstract: We consider nonparametric estimation of the mean and covariance functions for functional/longitudinal data. Strong uniform convergence rates are developed for estimators that are local-linear smoothers. Our results are obtained in a unified framework in which the number of observations within each curve/cluster can be of any rate relative to the sample size. We show that the convergence rates for the procedures depend on both the number of sample curves and the number of observations on each curve. For sparse functional data, these rates are equivalent to the optimal rates in nonparametric regression. For dense functional data, root-n rates of convergence can be achieved with proper choices of bandwidths. We further derive almost sure rates of convergence for principal component analysis using the estimated covariance function. The results are illustrated with simulation studies.
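The local-linear smoother referred to in the abstract above can be sketched as follows. This is an illustrative implementation, not the authors' estimator: the Gaussian kernel, bandwidth, and toy data are all assumptions. Observations from many sparse curves are pooled and the mean function is estimated by a kernel-weighted linear fit at each grid point.

```python
import numpy as np

def local_linear(t, y, t0, h):
    """Local-linear estimate of the mean function at t0 with bandwidth h."""
    d = t - t0
    w = np.exp(-0.5 * (d / h) ** 2)            # Gaussian kernel weights
    X = np.column_stack([np.ones_like(d), d])  # intercept + slope
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                             # intercept = fitted value at t0

rng = np.random.default_rng(1)
# Pooled observations from sparse curves: time points scattered over [0, 1].
t = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(400)

grid = np.linspace(0.1, 0.9, 9)
mu_hat = np.array([local_linear(t, y, t0, h=0.06) for t0 in grid])
print(np.max(np.abs(mu_hat - np.sin(2 * np.pi * grid))))
```

The weighted least-squares step is solved by rescaling both design matrix and response by the square roots of the kernel weights, which is the standard reduction of weighted to ordinary least squares.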
 Statistics , 2013, DOI: 10.1088/1742-6596/490/1/012081 Abstract: Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) in the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of its cluster. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificially distributed data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
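The equivalent definition of PCA invoked in the abstract above is easy to verify numerically. The sketch below checks, for plain PCA (not the multiscale variant), that the first principal direction maximizes the sum of squared pairwise distances between one-dimensional projections, using the identity that this sum equals 2n times the projected sum of squares about the mean.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])

def sum_sq_pairwise(p):
    # sum_{i,j} (p_i - p_j)^2 = 2n * sum_i (p_i - mean)^2
    return 2 * len(p) * np.sum((p - p.mean()) ** 2)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                               # first principal direction

best = sum_sq_pairwise(Xc @ pc1)
for _ in range(200):                      # no random direction does better
    v = rng.standard_normal(5)
    v /= np.linalg.norm(v)
    assert sum_sq_pairwise(Xc @ v) <= best + 1e-9
print(best)
```

Multiscale PCA then restricts the sum to pairs whose distance falls in [l,u]; the check above only demonstrates the unrestricted case that the paper starts from.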
 Mathematics , 2009, Abstract: This paper is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
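The convex program described above can be sketched with a simple augmented-Lagrangian scheme built from two proximal operators: soft-thresholding for the L1 term and singular value thresholding for the nuclear norm. This is a minimal illustration under assumed parameter choices (the usual lambda = 1/sqrt(max(m, n)) and a default mu), not the paper's reference algorithm.

```python
import numpy as np

def shrink(M, tau):
    """Entrywise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, iters=500):
    """Principal Component Pursuit via alternating proximal updates:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)       # nuclear-norm step
        S = shrink(M - L + Y / mu, lam / mu)    # L1 step
        Y = Y + mu * (M - L - S)                # dual update
    return L, S

rng = np.random.default_rng(3)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank 2
S0 = np.zeros((40, 40))
idx = rng.random((40, 40)) < 0.05                 # 5% of entries corrupted
S0[idx] = 10 * rng.standard_normal(idx.sum())     # gross sparse errors
L_hat, S_hat = pcp(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```

On this toy instance the low-rank component is recovered to small relative error despite the arbitrarily large corrupted entries, which is the behavior the theorem guarantees under its assumptions.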
 Tomokazu Konishi Statistics , 2012, Abstract: Motivation: Although principal component analysis is frequently applied to reduce the dimensionality of matrix data, the method is sensitive to noise and bias and has difficulty with comparability and interpretation. These issues are addressed by improving the fidelity to the study design. Principal axes and the components for variables are found through the arrangement of the training data set, and the centers of data are found according to the design. By using both the axes and the center, components for an observation that belongs to various studies can be separately estimated. The components for both variables and observations are scaled to unit length, which enables relationships between them to be seen. Results: Analyses in transcriptome studies showed an improvement in the separation of experimental groups and in robustness to bias and noise. Unknown samples were appropriately classified on predetermined axes. These axes reflected the study design well, which facilitated the interpretation. Together, the introduced concepts resulted in improved generality and objectivity in the analytical results, with the ability to locate hidden structures in the data.
 Journal of Signal and Information Processing (JSIP) , 2013, DOI: 10.4236/jsip.2013.43B031 Abstract: Principal component analysis (PCA) is one of the algorithms used in biometrics. It is a statistical technique that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. PCA is also a tool to reduce multidimensional data to lower dimensions while retaining most of the information. It covers standard deviation, covariance, and eigenvectors. This background knowledge is meant to make the PCA section very straightforward, but can be skipped if the concepts are already familiar.
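The textbook pipeline summarized in the abstract above — standard deviation, covariance, eigenvectors — can be written out in a few lines. The mixing matrix and dimensions below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [0.5, 1.0, 0.0],
                                              [0.0, 0.3, 0.4]])

# 1. Center the data, 2. form the covariance matrix,
# 3. eigendecompose it, 4. project onto the leading eigenvectors.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)           # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]             # reorder to descending
vals, vecs = vals[order], vecs[:, order]

scores = Xc @ vecs[:, :2]                  # reduce 3-D data to 2-D
explained = vals[:2].sum() / vals.sum()    # fraction of variance retained
print(round(explained, 3))
```

The orthogonal transformation is the eigenvector matrix; keeping only the leading columns is the dimension reduction step, and the retained variance fraction quantifies "most of the information".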
 Mathematics , 2010, Abstract: In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.
 Computer Science , 2014, Abstract: Principal Component Analysis (PCA) has wide applications in machine learning, text mining and computer vision. Classical PCA based on a Gaussian noise model is fragile to noise of large magnitude, and PCA methods based on a Laplace noise assumption cannot deal with dense noise effectively. In this paper, we propose Cauchy Principal Component Analysis (Cauchy PCA), a very simple yet effective PCA method which is robust to various types of noise. We utilize the Cauchy distribution to model noise and derive Cauchy PCA under the maximum likelihood estimation (MLE) framework with a low rank constraint. Our method can robustly estimate the low rank matrix regardless of whether noise is large or small, dense or sparse. We analyze the robustness of Cauchy PCA from a robust statistics view and present an efficient singular value projection optimization method. Experimental results on both simulated data and real applications demonstrate the robustness of Cauchy PCA to various noise patterns.
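The singular value projection idea mentioned above can be sketched as projected gradient descent: take a gradient step on the Cauchy negative log-likelihood, then project back onto the rank-k set via a truncated SVD. The scale parameter, step size, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cauchy_pca(M, rank, gamma=1.0, step=0.5, iters=200):
    """Sketch of Cauchy PCA: minimize sum(log(gamma^2 + (M - L)^2))
    over rank-constrained L by gradient steps followed by singular
    value projection onto the rank-k set."""
    L = np.zeros_like(M)
    for _ in range(iters):
        R = M - L
        grad = -2.0 * R / (gamma ** 2 + R ** 2)   # d/dL of the Cauchy NLL
        U, s, Vt = np.linalg.svd(L - step * grad, full_matrices=False)
        L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]  # rank-k projection
    return L

rng = np.random.default_rng(5)
L0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
S0 = np.zeros((30, 30))
idx = rng.random((30, 30)) < 0.1
S0[idx] = 20 * rng.standard_normal(idx.sum())      # gross sparse errors
L_hat = cauchy_pca(L0 + S0, rank=2)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```

The Cauchy gradient is bounded, so huge residuals pull on the estimate only weakly; this redescending influence is what makes the fit robust to both large sparse errors and dense noise.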
 Statistics , 2012, Abstract: In this paper, we address the problem of dimension reduction for time series of functional data $(X_t\colon t\in\mathbb{Z})$. Such functional time series frequently arise, e.g., when a continuous-time process is segmented into some smaller natural units, such as days. Then each $X_t$ represents one intraday curve. We argue that functional principal component analysis (FPCA), though a key technique in the field and a benchmark for any competitor, does not provide an adequate dimension reduction in a time-series setting. FPCA indeed is a static procedure which ignores the essential information provided by the serial dependence structure of the functional data under study. Therefore, inspired by Brillinger's theory of dynamic principal components, we propose a dynamic version of FPCA, which is based on a frequency-domain approach. By means of a simulation study and an empirical illustration, we show the considerable improvement the dynamic approach entails when compared to the usual static procedure.
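The frequency-domain idea behind dynamic PCA can be illustrated on a toy finite-dimensional stand-in for discretized functional data: estimate the spectral density matrix from lag-windowed autocovariances, then eigendecompose it at each frequency. The eigenvectors across frequencies define the filters of the dynamic principal components. Everything below (the VAR(1) model, Bartlett weights, truncation lag) is an illustrative assumption, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy stationary series with serial cross-dependence: a VAR(1) process.
A = np.array([[0.6, 0.2, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.1, 0.4]])
T, p = 2000, 3
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.standard_normal(p)
X -= X.mean(axis=0)

def autocov(X, h):
    """Sample autocovariance Gamma_h = E[X_{t+h} X_t^T] for h >= 0."""
    T = len(X)
    return X[h:].T @ X[:T - h] / T

def spectral_density(X, w, q=20):
    """Bartlett-weighted spectral density estimate at frequency w:
    F(w) = (1/2pi) * sum_{|h|<=q} (1 - |h|/(q+1)) Gamma_h exp(-i w h)."""
    p = X.shape[1]
    F = np.zeros((p, p), dtype=complex)
    for h in range(-q, q + 1):
        G = autocov(X, h) if h >= 0 else autocov(X, -h).T
        F += (1 - abs(h) / (q + 1)) * G * np.exp(-1j * w * h)
    return F / (2 * np.pi)

# Static PCA eigendecomposes one covariance matrix; dynamic PCA instead
# eigendecomposes F(w) at each frequency, capturing serial dependence.
F = spectral_density(X, w=0.5)
vals, vecs = np.linalg.eigh(F)
print(vals[::-1])
```

The Bartlett weights keep the estimate Hermitian and positive semi-definite, so its eigenvalues are real and nonnegative, as a spectral density's must be.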