Fast Landmark Subspace Clustering  [PDF]
Xu Wang, Gilad Lerman
Statistics , 2015,
Abstract: Kernel methods achieve superb accuracy on various machine learning tasks because they can effectively capture nonlinear relations. However, their time complexity can be rather large, especially for clustering tasks. In this paper we define a general class of kernels that can be easily approximated by randomization. These kernels appear in various applications, in particular traditional spectral clustering, landmark-based spectral clustering and landmark-based subspace clustering. We show that for $n$ data points from $K$ clusters with $D$ landmarks, the randomization procedure results in an algorithm of complexity $O(KnD)$. Furthermore, we bound the error between the original clustering scheme and its randomization. To illustrate the power of this framework, we propose a new fast landmark subspace (FLS) clustering algorithm. Experiments on synthetic and real datasets demonstrate the superior performance of FLS in accelerating subspace clustering with only a marginal sacrifice in accuracy.
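As a concrete illustration of the landmark idea, here is a minimal sketch of generic landmark-based spectral clustering in Python. The Gaussian kernel, uniform landmark sampling, and parameter names are assumptions for exposition; this is the pipeline such methods accelerate, not the authors' FLS algorithm.

```python
# Minimal landmark-based spectral clustering sketch (assumed Gaussian kernel,
# assumed uniform landmark sampling); not the FLS algorithm from the paper.
import numpy as np
from sklearn.cluster import KMeans

def landmark_spectral_clustering(X, n_clusters, n_landmarks=50, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Sample D landmarks uniformly at random from the n points.
    landmarks = X[rng.choice(n, size=n_landmarks, replace=False)]
    # n x D Gaussian affinities between points and landmarks.
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-gamma * d2)
    # Row-normalize so each point's landmark affinities sum to one.
    Z /= Z.sum(axis=1, keepdims=True) + 1e-12
    # Spectral embedding from the thin SVD of the n x D matrix, which is far
    # cheaper than eigendecomposing an n x n graph Laplacian.
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    emb = U[:, :n_clusters]
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(emb)
```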
Learning Robust Subspace Clustering  [PDF]
Qiang Qiu, Guillermo Sapiro
Computer Science , 2013,
Abstract: We propose a low-rank transformation-learning framework to robustify subspace clustering. Many high-dimensional data, such as face images and motion sequences, lie in a union of low-dimensional subspaces. The subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated in real-world observations, which can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate the nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace and, at the same time, forces a high-rank structure for data from different subspaces. In this way, we reduce variations within the subspaces and increase separations between the subspaces for more accurate subspace clustering. The proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. To exploit the low-rank structures of the transformed subspaces, we further introduce a subspace clustering technique, called Robust Sparse Subspace Clustering, which efficiently combines robust PCA with sparse modeling. We also discuss online learning of the transformation, and learning of the transformation while simultaneously reducing the data dimensionality. Extensive experiments on public datasets show that the proposed approach significantly outperforms state-of-the-art subspace clustering methods.
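The nuclear norm used above as a convex surrogate of rank has a standard computational primitive: its proximal operator, computed by singular value thresholding. A minimal sketch of that generic primitive (not the paper's full transformation-learning loop):

```python
# Singular value thresholding: the proximal operator of the nuclear norm.
# Generic building block, not the paper's learning algorithm.
import numpy as np

def svt(M, tau):
    """prox_{tau * ||.||_*}(M): soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink each singular value by tau
    return U @ np.diag(s_shrunk) @ Vt
```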
Noisy Sparse Subspace Clustering  [PDF]
Yu-Xiang Wang, Huan Xu
Statistics , 2013,
Abstract: This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabelled input data points, which are assumed to lie in a union of low-dimensional subspaces. We show that a modified version of SSC is \emph{provably effective} in correctly identifying the underlying subspaces, even with noisy data. This extends the theoretical guarantees of the algorithm to more practical settings and provides justification for the success of SSC in a class of real applications.
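For reference, the noisy variant of SSC is usually written as a LASSO: each point is regressed on all the others with an $\ell^1$ penalty, forcing it to express itself sparsely in terms of points from its own subspace. A minimal sketch, with `lam` an illustrative regularization weight rather than a value from the paper:

```python
# LASSO form of SSC for noisy data: sparse self-expression of each column of
# X over the remaining columns. `lam` is an illustrative parameter.
import numpy as np
from sklearn.linear_model import Lasso

def ssc_coefficients(X, lam=0.05):
    """X: d x n data matrix; returns the n x n sparse self-expression matrix C."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # enforce c_i = 0 (no self-loop)
        # min_c (1/2d) ||x_i - X_{-i} c||_2^2 + lam ||c||_1  (sklearn's scaling)
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    return C
```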
Robust subspace clustering  [PDF]
Mahdi Soltanolkotabi, Ehsan Elhamifar, Emmanuel J. Candès
Mathematics , 2013, DOI: 10.1214/13-AOS1199
Abstract: Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
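The second stage shared by these SSC-style methods turns the self-expression matrix into an affinity and applies spectral clustering. A short sketch, reusing the `ssc_coefficients` helper from the sketch above:

```python
# Standard SSC second stage: symmetrize |C| into an affinity and run spectral
# clustering on it. Generic pipeline, not this paper's specific variant.
import numpy as np
from sklearn.cluster import SpectralClustering

def ssc_cluster(C, n_clusters, seed=0):
    W = np.abs(C) + np.abs(C).T          # symmetric nonnegative affinity
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=seed)
    return sc.fit_predict(W)
```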
Inductive Sparse Subspace Clustering  [PDF]
Xi Peng, Lei Zhang, Zhang Yi
Computer Science , 2013, DOI: 10.1049/el.2013.1789
Abstract: Sparse Subspace Clustering (SSC) has achieved state-of-the-art clustering quality by performing spectral clustering over an $\ell^{1}$-norm based similarity graph. However, SSC is a transductive method that cannot handle data not used to construct the graph (out-of-sample data). For each new datum, SSC requires solving $n$ optimization problems in $O(n)$ variables to run the algorithm over the whole data set, where $n$ is the number of data points. It is therefore inefficient to apply SSC to fast online clustering and scalable grouping. In this letter, we propose an inductive spectral clustering algorithm, called inductive Sparse Subspace Clustering (iSSC), which makes SSC feasible for clustering out-of-sample data. iSSC adopts the assumption that high-dimensional data actually lie on a low-dimensional manifold, so that out-of-sample data can be grouped in the embedding space learned from in-sample data. Experimental results show that iSSC is promising in clustering out-of-sample data.
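The inductive step can be pictured as follows: sparse-code the new point over the in-sample data and extend the spectral embedding linearly through that code. The nearest-centroid assignment below is an illustrative choice, not necessarily the exact rule in iSSC:

```python
# Hedged sketch of an out-of-sample extension: sparse-code x_new over the
# in-sample data, extend the embedding linearly, assign by nearest centroid.
import numpy as np
from sklearn.linear_model import Lasso

def assign_out_of_sample(x_new, X_in, emb_in, labels_in, lam=0.05):
    """X_in: d x n in-sample data; emb_in: n x k spectral embedding; labels_in: n labels."""
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(X_in, x_new)
    c = lasso.coef_                       # sparse code over in-sample points
    e_new = c @ emb_in                    # linear extension into the embedding
    centroids = np.vstack([emb_in[labels_in == k].mean(0)
                           for k in np.unique(labels_in)])
    return int(np.argmin(((centroids - e_new) ** 2).sum(1)))
```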
Multilinear Subspace Clustering  [PDF]
Eric Kernfeld, Nathan Majumder, Shuchin Aeron, Misha Kilmer
Computer Science , 2015,
Abstract: In this paper we present a new model and an algorithm for unsupervised clustering of 2-D data such as images. We assume that the data comes from a union of multilinear subspaces (UOMS) model, which is a specific structured case of the much studied union of subspaces (UOS) model. For segmentation under this model, we develop the Multilinear Subspace Clustering (MSC) algorithm and evaluate its performance on the YaleB and Olivetti image data sets. We show that MSC is highly competitive with existing algorithms employing the UOS model in terms of clustering performance while enjoying improved computational complexity.
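The UOMS model can be illustrated with a two-sided projection test: a 2-D sample fits a multilinear subspace spanned by a column basis U and a row basis V when projecting onto both sides leaves a small residual. The bases and the assignment rule below are assumptions for exposition, not the MSC algorithm itself:

```python
# Illustrative residual test for a union-of-multilinear-subspaces model.
# U, V are assumed known orthonormal bases; not the MSC algorithm.
import numpy as np

def multilinear_residual(X, U, V):
    """X: m x n image; U: m x r1 orthonormal; V: n x r2 orthonormal."""
    P = U @ (U.T @ X @ V) @ V.T          # two-sided projection of X
    return np.linalg.norm(X - P)

def assign(X, bases):
    """Assign X to the multilinear subspace (U_k, V_k) with smallest residual."""
    return int(np.argmin([multilinear_residual(X, U, V) for U, V in bases]))
```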
An Initialization Method for Subspace Clustering Algorithm  [cached]
Qingshan Jiang, Yanping Zhang, Lifei Chen
International Journal of Intelligent Systems and Applications , 2011,
Abstract: Soft subspace clustering is an important branch of clustering research and a current research hotspot. Clustering in high-dimensional space is especially difficult due to the sparse distribution of the data and the curse of dimensionality. By analyzing the limitations of existing algorithms, the concept of subspace difference and an improved initialization method are proposed. Based on these, a new objective function is given that takes into account the compactness of the subspace clusters and the subspace difference of the clusters, and a subspace clustering algorithm based on k-means is presented. Theoretical analysis and experimental results demonstrate that the proposed algorithm significantly improves clustering accuracy.
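To make the soft subspace setting concrete, here is a generic sketch of k-means with per-cluster feature weights, using the entropy-weighted update of Jing et al. (2007). It illustrates the family of algorithms the paper improves on; the paper's own objective and initialization differ:

```python
# Generic soft subspace k-means sketch (entropy-weighted feature weights);
# illustrates the algorithm family only, not this paper's method.
import numpy as np

def soft_subspace_kmeans(X, k, gamma=1.0, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    weights = np.full((k, d), 1.0 / d)   # per-cluster feature weights
    for _ in range(n_iter):
        # Assignment step: weighted squared distance to each center.
        dist = (((X[:, None, :] - centers[None, :, :]) ** 2)
                * weights[None, :, :]).sum(-1)
        labels = dist.argmin(1)
        for c in range(k):
            pts = X[labels == c]
            if len(pts) == 0:
                continue                 # keep an empty cluster's old center
            centers[c] = pts.mean(0)
            disp = ((pts - centers[c]) ** 2).sum(0)   # per-feature dispersion
            w = np.exp(-disp / gamma)    # small dispersion -> large weight
            weights[c] = w / w.sum()
    return labels, centers, weights
```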
Clustering Consistent Sparse Subspace Clustering  [PDF]
Yining Wang, Yu-Xiang Wang, Aarti Singh
Computer Science , 2015,
Abstract: Subspace clustering is the problem of clustering data points into a union of low-dimensional linear or affine subspaces. It is the mathematical abstraction of many important problems in computer vision and image processing, and has recently been drawing avid attention in machine learning and statistics. In particular, a line of recent work (Elhamifar and Vidal, 2013; Soltanolkotabi et al., 2012; Wang and Xu, 2013; Soltanolkotabi et al., 2014) provided strong theoretical guarantees for the seminal algorithm Sparse Subspace Clustering (SSC) (Elhamifar and Vidal, 2013) under various settings, and to some extent justified its state-of-the-art performance in applications such as motion segmentation and face clustering. The focus of these works has been obtaining milder conditions under which SSC obeys the "self-expressiveness property", which ensures that no two points from different subspaces are clustered together. Such a guarantee, however, is not sufficient for the clustering to be correct, owing to the notorious "graph connectivity problem" (Nasihatkon and Hartley, 2011). In this paper, we show that this issue can be resolved by a very simple post-processing procedure under only a mild "general position" assumption. In addition, we show that the approach is robust to arbitrary bounded perturbation of the data whenever the "general position" assumption holds with a margin. These results provide the first exact clustering guarantee of SSC for subspaces of dimension greater than 3.
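The "graph connectivity problem" is easy to diagnose in code: even when the SSC affinity has no inter-subspace edges, the points of a single subspace may split into several connected components. A small check with SciPy (the paper's post-processing procedure itself is not reproduced here):

```python
# Diagnose the graph connectivity problem: count connected components of the
# symmetrized SSC affinity graph. The threshold is an illustrative choice.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def count_components(W, threshold=1e-8):
    """Number of connected components of the (symmetrized) affinity graph W."""
    A = csr_matrix((np.abs(W) + np.abs(W).T) > threshold)
    n_comp, _ = connected_components(A, directed=False)
    return n_comp
```

If `count_components` exceeds the number of subspaces, spectral clustering will over-segment even though no inter-subspace edge exists.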
Greedy Subspace Clustering  [PDF]
Dohyung Park, Constantine Caramanis, Sujay Sanghavi
Computer Science , 2014,
Abstract: We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses these sets to estimate the subspaces. As the geometric structure of the clusters (linear subspaces) prevents general distance-based approaches such as K-means from performing properly, many model-specific methods have been proposed. In this paper, we provide new simple and efficient algorithms for this problem. Our statistical analysis shows that the algorithms are guaranteed exact (perfect) clustering performance under certain conditions on the number of points and the affinity between subspaces. These conditions are weaker than those considered in the standard statistical literature. Experimental results on synthetic data generated from the standard union-of-subspaces model confirm our theory. We also show that our algorithm performs competitively against state-of-the-art algorithms on real-world applications such as motion segmentation and face clustering, with a much simpler implementation and lower computational cost.
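The greedy flavor can be sketched as nearest-subspace-neighbor selection: grow a set from a seed point by repeatedly adding the point best explained by the span of the current set. Stopping rules and the subspace-estimation stage are elided; this is a sketch of the idea, not the paper's exact algorithm:

```python
# Hedged sketch of greedy neighbor selection for subspace clustering: add the
# point with the largest projection energy onto the span of the chosen set.
import numpy as np

def greedy_neighbors(X, seed_idx, k_max):
    """X: d x n with unit-norm columns; returns indices of the grown set."""
    chosen = [seed_idx]
    for _ in range(k_max - 1):
        # Orthonormal basis of the span of the chosen columns.
        Q, _ = np.linalg.qr(X[:, chosen])
        # Projection energy of every column onto that span.
        energy = np.linalg.norm(Q.T @ X, axis=0)
        energy[chosen] = -np.inf          # exclude already-chosen points
        chosen.append(int(np.argmax(energy)))
    return chosen
```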
Filtrated Spectral Algebraic Subspace Clustering  [PDF]
Manolis C. Tsakiris, Rene Vidal
Computer Science , 2015,
Abstract: Algebraic Subspace Clustering (ASC) is a simple and elegant method based on polynomial fitting and differentiation for clustering noiseless data drawn from an arbitrary union of subspaces. In practice, however, ASC is limited to equi-dimensional subspaces because the estimation of the subspace dimension via algebraic methods is sensitive to noise. This paper proposes a new ASC algorithm that can handle noisy data drawn from subspaces of arbitrary dimensions. The key ideas are (1) to construct, at each point, a decreasing sequence of subspaces containing the subspace passing through that point, and (2) to use the distances from any other point to each subspace in the sequence to construct a subspace clustering affinity, which is superior to alternative affinities both in theory and in practice. Experiments on the Hopkins 155 dataset demonstrate the superiority of the proposed method with respect to sparse and low-rank subspace clustering methods.
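The algebraic core of ASC can be sketched in a noiseless toy setting with two subspaces: embed the data with the degree-2 Veronese map, extract a vanishing polynomial from the (near-)null space of the embedded data, and read a normal vector to a point's subspace off the polynomial's gradient at that point. The filtration proposed above generalizes this; the sketch below is only the basic primitive:

```python
# Toy sketch of the algebraic core of ASC: degree-2 Veronese embedding, a
# vanishing polynomial from the smallest singular direction, and its gradient
# as a normal vector. Noiseless two-subspace setting only.
import numpy as np
from itertools import combinations_with_replacement

def veronese2(X):
    """Degree-2 Veronese embedding of the columns of the d x n matrix X."""
    d, n = X.shape
    monos = list(combinations_with_replacement(range(d), 2))
    return np.vstack([X[i] * X[j] for i, j in monos]), monos

def vanishing_gradient(X, x):
    """Gradient at x of a quadratic polynomial that (nearly) vanishes on X."""
    V, monos = veronese2(X)
    # Coefficients of the vanishing polynomial: the left singular vector of
    # the embedded data with the smallest singular value.
    U, _, _ = np.linalg.svd(V)
    c = U[:, -1]
    # Gradient of sum_{i<=j} c_ij x_i x_j at x: a normal to x's subspace.
    g = np.zeros(len(x))
    for coef, (i, j) in zip(c, monos):
        g[i] += coef * x[j]
        g[j] += coef * x[i]
    return g
```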