Bayesian and L1 Approaches to Sparse Unsupervised Learning  [PDF]
Shakir Mohamed, Katherine Heller, Zoubin Ghahramani
Computer Science, 2011
Abstract: The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1 minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even on a computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and provide an alternative that, over many varying conditions, provides improved generalisation performance.
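A minimal single-coefficient sketch (not the authors' factor models) of the contrast the abstract draws: the lasso soft-thresholds every coefficient by the same amount, whereas a spike-and-slab posterior mean barely shrinks a clearly non-zero observation while still zeroing out weak ones. The priors, variances, and mixing weight below are illustrative assumptions.
```python
import numpy as np

def gauss_pdf(x, var):
    return np.exp(-x * x / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def soft_threshold(z, lam):
    """L1 (lasso) estimate of w for y = w + noise: every surviving value is shrunk by lam."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def spike_slab_posterior_mean(z, noise_var=1.0, slab_var=10.0, pi=0.5):
    """Posterior mean of w for y = w + noise, w ~ pi*N(0, slab_var) + (1-pi)*delta_0."""
    like_slab = gauss_pdf(z, noise_var + slab_var)   # evidence if w comes from the slab
    like_spike = gauss_pdf(z, noise_var)             # evidence if w is exactly zero
    p_include = pi * like_slab / (pi * like_slab + (1.0 - pi) * like_spike)
    return p_include * (slab_var / (slab_var + noise_var)) * z

for z in [0.2, 1.0, 4.0]:
    print(z, soft_threshold(z, lam=1.0), round(spike_slab_posterior_mean(z), 3))
```
For a large observation (z = 4.0) the spike-and-slab mean stays close to z, while the lasso always subtracts the full regularisation amount; this is the "unnecessary shrinkage of non-zero values" the abstract refers to.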
Unsupervised Feature Learning by Deep Sparse Coding  [PDF]
Yunlong He, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, Yanjun Qi
Computer Science, 2013
Abstract: In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
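A hedged sketch of one layer of the pipeline described above: sparse coding of patch features, local spatial pooling, then a low-dimensional embedding that turns the sparse pooled codes back into dense inputs for the next layer (the "sparse-to-dense" module). The grid size, dictionary, pooling scheme, and PCA embedding are illustrative assumptions, not the authors' implementation.
```python
import numpy as np
from sklearn.decomposition import sparse_encode, PCA

rng = np.random.default_rng(0)
patches = rng.standard_normal((16 * 16, 64))     # 16x16 grid of 64-dim patch features
dictionary = rng.standard_normal((256, 64))      # 256 atoms, assumed learned elsewhere

# 1) Sparse encoding of every patch against the layer's dictionary.
codes = sparse_encode(patches, dictionary, algorithm="omp",
                      n_nonzero_coefs=5)          # shape (256 patches, 256 atoms)

# 2) Local spatial pooling: max-pool codes over non-overlapping 2x2 patch blocks,
#    exploiting spatial smoothness between neighbouring patches.
codes = codes.reshape(16, 16, -1)
pooled = codes.reshape(8, 2, 8, 2, -1).max(axis=(1, 3))   # (8, 8, 256)

# 3) Low-dimensional embedding produces dense features for the next sparse encoder.
dense_next = PCA(n_components=32).fit_transform(pooled.reshape(64, -1))
print(dense_next.shape)                           # (64, 32) inputs to the next layer
```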
The Annealing Sparse Bayesian Learning Algorithm  [PDF]
Benyuan Liu, Hongqi Fan, Zaiqi Lu, Qiang Fu
Computer Science, 2012
Abstract: In this paper we propose a two-level hierarchical Bayesian model and an annealing schedule that re-enable the noise-variance learning capability of the fast marginalized sparse Bayesian learning (SBL) algorithms. Performance measures such as NMSE and F-measure are greatly improved by the annealing technique. The algorithm tends to produce the sparsest solution under moderate-SNR scenarios and can outperform most concurrent SBL algorithms while retaining a small computational load.
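A hedged illustration of the annealing idea only: start the noise variance from a deliberately large value and geometrically cool it towards the current learned estimate, so early iterations remain exploratory. The schedule, cooling rate, and starting value are assumptions for illustration, not the authors' exact rule.
```python
def annealed_noise_variance(learned_var, iteration, start_var=1.0, cooling=0.9):
    """Blend a large initial noise variance with the current learned estimate."""
    temperature = cooling ** iteration          # decays from 1 towards 0
    return temperature * start_var + (1.0 - temperature) * learned_var

for t in range(0, 50, 10):
    print(t, round(annealed_noise_variance(learned_var=0.01, iteration=t), 4))
```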
Multiple Kernel Sparse Representations for Supervised and Unsupervised Learning  [PDF]
Jayaraman J. Thiagarajan, Karthikeyan Natesan Ramamurthy, Andreas Spanias
Computer Science, 2013, DOI: 10.1109/TIP.2014.2322938
Abstract: In complex visual recognition tasks it is typical to adopt multiple descriptors that capture different aspects of the images in order to improve recognition performance. Descriptors of diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple-kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles so that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1-D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse-coding-based approaches and compares favorably with other state-of-the-art methods.
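A minimal sketch of the ensemble-kernel construction the abstract relies on: the unified kernel is a weighted combination of base kernels computed from different descriptors, and sparse coding then operates in that combined kernel space. The descriptors, kernel choices, and fixed weights below are illustrative assumptions; in the paper the weights are tuned via graph-embedding rather than set by hand.
```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
color_feat = rng.standard_normal((100, 32))    # descriptor 1 (e.g. colour histogram)
shape_feat = rng.standard_normal((100, 64))    # descriptor 2 (e.g. shape features)

base_kernels = [rbf_kernel(color_feat), linear_kernel(shape_feat)]
weights = np.array([0.7, 0.3])                 # placeholder weights; normally learned
weights = weights / weights.sum()

ensemble_K = sum(w * K for w, K in zip(weights, base_kernels))
print(ensemble_K.shape)                        # (100, 100) kernel used for sparse coding
```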
On The Sparse Bayesian Learning Of Linear Models  [PDF]
Yves Atchade, Chia Chye Yee
Statistics, 2015
Abstract: This work is a re-examination of the sparse Bayesian learning (SBL) of linear regression models of Tipping (2001) in a high-dimensional setting. We propose a hard-thresholded version of the SBL estimator that achieves, for orthogonal design matrices, the non-asymptotic estimation error rate of $\sigma\sqrt{s\log p}/\sqrt{n}$, where $n$ is the sample size, $p$ the number of regressors, $\sigma$ the regression model standard deviation, and $s$ the number of non-zero regression coefficients. We also establish that, with high probability, the estimator identifies the non-zero regression coefficients. In our simulations we find that sparse Bayesian learning regression performs better than the lasso (Tibshirani (1996)) when the signal to be recovered is strong.
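A hedged sketch of the hard-thresholding step: fit an SBL/ARD-type linear regression, then zero out coefficients whose magnitude falls below a threshold on the $\sigma\sqrt{\log p}/\sqrt{n}$ scale mentioned above. Using sklearn's ARDRegression as the SBL fit, the threshold constant, and the non-orthogonal random design are all assumptions for illustration.
```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 50, 5, 0.5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 3.0                    # strong signal on s coefficients
y = X @ beta + sigma * rng.standard_normal(n)

sbl = ARDRegression().fit(X, y)                       # SBL-style fit of the linear model
threshold = sigma * np.sqrt(2.0 * np.log(p) / n)      # ~ sigma*sqrt(log p)/sqrt(n) scale
beta_hat = np.where(np.abs(sbl.coef_) > threshold, sbl.coef_, 0.0)
print("recovered support:", np.flatnonzero(beta_hat))
```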
Bayesian Unsupervised Learning of DNA Regulatory Binding Regions  [PDF]
Jukka Corander, Magnus Ekdahl, Timo Koski
Advances in Artificial Intelligence, 2009, DOI: 10.1155/2009/219743
Abstract: Identification of regulatory binding motifs, that is, short specific words within DNA sequences, is a commonly occurring problem in computational bioinformatics. A wide variety of probabilistic approaches have been proposed in the literature, either to scan for previously known motif types or to attempt de novo identification of a fixed number (typically one) of putative motifs. Most approaches assume the existence of reliable biodatabase information from which to build a probabilistic a priori description of the motif classes. Attempts at probabilistic unsupervised learning of the number of putative de novo motif types and their positions within a set of DNA sequences are very rare in the literature. Here we show how such a learning problem can be formulated using a Bayesian model that aims to simultaneously maximize the marginal likelihood of the sequence data arising under multiple motif types as well as under the background DNA model, which is a variable-length Markov chain. We demonstrate how the adopted Bayesian modelling strategy, combined with recently introduced nonstandard stochastic computation tools, yields a more tractable learning procedure than is possible with standard Monte Carlo approaches. Improvements and extensions of the proposed approach are also discussed.
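A much-simplified, hedged illustration of the core likelihood contrast behind motif discovery: score a candidate word under a motif model (a position weight matrix) versus a Markov-chain background. The paper's model is Bayesian, handles multiple motif types, and uses a variable-length Markov background; here the background is a fixed first-order chain and all probabilities are made-up values.
```python
import numpy as np

BASES = "ACGT"
idx = {b: i for i, b in enumerate(BASES)}

pwm = np.array([[0.80, 0.10, 0.05, 0.05],   # position-specific base probabilities
                [0.10, 0.70, 0.10, 0.10],   # for a length-3 motif
                [0.05, 0.05, 0.85, 0.05]])

bg_init = np.full(4, 0.25)                  # background: uniform initial distribution
bg_trans = np.full((4, 4), 0.25)            # and uniform first-order transitions

def log_lik_motif(word):
    return sum(np.log(pwm[i, idx[b]]) for i, b in enumerate(word))

def log_lik_background(word):
    ll = np.log(bg_init[idx[word[0]]])
    ll += sum(np.log(bg_trans[idx[a], idx[b]]) for a, b in zip(word, word[1:]))
    return ll

word = "ACG"
print(log_lik_motif(word) - log_lik_background(word))   # positive => motif-like word
```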
Supervised Dictionary Learning by a Variational Bayesian Group Sparse Nonnegative Matrix Factorization  [PDF]
Ivan Ivek
Computer Science, 2014
Abstract: Nonnegative matrix factorization (NMF) with group sparsity constraints is formulated as a probabilistic graphical model and, assuming some observed data have been generated by the model, a feasible variational Bayesian algorithm is derived for learning the model parameters. In a supervised learning scenario, NMF is most often used as an unsupervised feature extractor followed by classification in the obtained feature subspace. By mapping class labels to the more general concept of groups that underlie the sparsity of the coefficients, the proposed group-sparse NMF model can incorporate class-label information to find low-dimensional, label-driven dictionaries that not only aim to represent the data faithfully but are also suitable for class discrimination. Experiments in face recognition and facial-expression recognition domains point to advantages of classification in such label-driven feature subspaces over classification in feature subspaces obtained in an unsupervised manner.
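A hedged sketch of the label-to-group idea only: each class owns a block of dictionary atoms, and a training sample may activate atoms only in its class's block. Zeros planted in the initial coefficient matrix are preserved by multiplicative NMF updates, which enforces the group structure. This is a plain multiplicative-update NMF, not the paper's variational Bayesian algorithm, and all sizes are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 60))                  # 100-dim nonnegative data, 60 samples
labels = np.repeat([0, 1, 2], 20)          # 3 classes, 20 samples each
atoms_per_class, n_classes = 8, 3
k = atoms_per_class * n_classes

W = rng.random((100, k))                   # dictionary (basis) matrix
H = rng.random((k, 60))                    # coefficient matrix, one column per sample
for j, c in enumerate(labels):             # zero out atoms outside the sample's group
    mask = np.zeros(k)
    mask[c * atoms_per_class:(c + 1) * atoms_per_class] = 1.0
    H[:, j] *= mask

eps = 1e-9
for _ in range(200):                       # standard multiplicative updates (Frobenius loss)
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # zeros in H stay zero under these updates
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```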
Unsupervised Learning of Noisy-Or Bayesian Networks  [PDF]
Yonatan Halpern, David Sontag
Computer Science, 2013
Abstract: This paper considers the problem of learning the parameters in Bayesian networks of discrete variables with known structure and hidden variables. Previous approaches in these settings typically use expectation maximization; when the network has high treewidth, the required expectations might be approximated using Monte Carlo or variational methods. We show how to avoid inference altogether during learning by giving a polynomial-time algorithm based on the method-of-moments, building upon recent work on learning discrete-valued mixture models. In particular, we show how to learn the parameters for a family of bipartite noisy-or Bayesian networks. In our experimental results, we demonstrate an application of our algorithm to learning QMR-DT, a large Bayesian network used for medical diagnosis. We show that it is possible to fully learn the parameters of QMR-DT even when only the findings are observed in the training data (ground truth diseases unknown).
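A hedged sketch of the bipartite noisy-or likelihood such networks use: a finding is "on" unless every active parent disease, and a leak term, fails to activate it. The failure probabilities and leak values below are illustrative placeholders, not parameters learned by the paper's method.
```python
import numpy as np

failure = np.array([[0.9, 0.3, 0.8],        # failure[i, j] = P(disease j fails to cause
                    [0.5, 0.9, 0.2]])       #   finding i | disease j is present)
leak = np.array([0.99, 0.95])               # P(finding i is not spontaneously activated)

def p_finding_on(diseases):
    """P(finding_i = 1 | binary disease vector) under the noisy-or model."""
    p_off = leak * np.prod(np.where(diseases, failure, 1.0), axis=1)
    return 1.0 - p_off

print(p_finding_on(np.array([1, 0, 1])))    # probabilities of each finding given diseases 1 and 3
```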
Fast Marginalized Block Sparse Bayesian Learning Algorithm  [PDF]
Benyuan Liu, Zhilin Zhang, Hongqi Fan, Qiang Fu
Computer Science, 2012
Abstract: The performance of sparse signal recovery from noise-corrupted, underdetermined measurements can be improved if both the sparsity and the correlation structure of the signals are exploited. One typical correlation structure is intra-block correlation in block-sparse signals. To exploit this structure, a framework called block sparse Bayesian learning (BSBL) has recently been proposed. Algorithms derived from this framework show superior performance but are not very fast, which limits their applications. This work derives an efficient algorithm from the framework using a marginalized likelihood maximization method. Compared to existing BSBL algorithms, it achieves comparable recovery performance but is much faster, making it more suitable for large-scale datasets and for applications requiring real-time implementation.
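A hedged sketch of the signal structure BSBL exploits: a signal that is sparse at the block level, with correlated entries inside each active block, observed through underdetermined noisy measurements. The block size, number of active blocks, and the AR(1)-style intra-block correlation are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, block_size, active = 25, 4, 3
r = 0.9                                               # intra-block correlation coefficient
lags = np.abs(np.subtract.outer(np.arange(block_size), np.arange(block_size)))
cov = r ** lags                                       # AR(1)-style covariance within a block

x = np.zeros(n_blocks * block_size)
for b in rng.choice(n_blocks, size=active, replace=False):
    x[b * block_size:(b + 1) * block_size] = rng.multivariate_normal(
        np.zeros(block_size), cov)                    # correlated entries in an active block

A = rng.standard_normal((40, x.size)) / np.sqrt(40)   # underdetermined measurement matrix
y = A @ x + 0.01 * rng.standard_normal(40)            # noisy measurements to recover x from
print(np.count_nonzero(x), "nonzeros out of", x.size)
```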
Computationally Efficient Sparse Bayesian Learning via Generalized Approximate Message Passing  [PDF]
Fuwei Li, Jun Fang, Huiping Duan, Zhi Chen, Hongbin Li
Mathematics, 2015
Abstract: The sparse Bayesian learning algorithm (also referred to as Bayesian compressed sensing) is one of the most popular approaches for sparse signal recovery and has demonstrated superior performance in a series of experiments. Nevertheless, its computational complexity grows cubically with the dimension of the signal, owing to the matrix inversion required at each iteration, which hinders its application to many practical problems even with moderately large data sets. To address this issue, in this paper we propose a computationally efficient sparse Bayesian learning method based on the generalized approximate message passing (GAMP) technique. Specifically, the algorithm is developed within an expectation-maximization (EM) framework, using GAMP to efficiently compute an approximation of the posterior distribution of the hidden variables. The hyperparameters associated with the hierarchical Gaussian prior are learned by iteratively maximizing the Q-function, which is calculated from the posterior approximation obtained from GAMP. Numerical results illustrate the computational efficiency and the effectiveness of the proposed algorithm.
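A hedged sketch of the EM skeleton the method builds on: the E-step computes the posterior over the hidden signal under the current Gaussian prior precisions, and the M-step re-estimates those precisions by maximizing the Q-function. Here the E-step is the exact (cubic-cost) Gaussian posterior; the paper's contribution is to replace exactly this step with a cheap GAMP approximation. The problem sizes and fixed noise variance are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(m)

alpha = np.ones(n)                   # per-coefficient prior precisions (hyperparameters)
noise_var = 0.01 ** 2                # kept fixed here for simplicity
for _ in range(50):
    # E-step: exact Gaussian posterior of x given y and the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + A.T @ A / noise_var)
    mu = Sigma @ A.T @ y / noise_var
    # M-step: maximize the Q-function over each precision alpha_i.
    alpha = 1.0 / (mu ** 2 + np.diag(Sigma))

x_hat = mu
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative recovery error
```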