oalib
Efficient Similarity Join Method Using Unsupervised Learning  [PDF]
Bilal Hawashin,Farshad Fotouhi,William Grosky
International Journal of Computer Science & Information Technology , 2012,
Abstract: This paper proposes an efficient similarity join method using unsupervised learning, for the case when no labeled data is available. In our previous work, we showed that the performance of similarity join could improve when long string attributes, such as paper abstracts, movie summaries, product descriptions, and user feedback, are used under supervised learning, where a training set exists. In this work, we adopt long string attributes for the similarity join under unsupervised learning. Besides its importance when no labeled data is available, unsupervised learning also acts as a quick preprocessing method for huge datasets. Here, we show that using long attributes during unsupervised learning can further enhance the performance. Moreover, we provide an efficient dynamically expandable algorithm for databases with frequent transactions.
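To make the idea concrete, here is a minimal sketch of an unsupervised similarity join over a long string attribute, assuming TF-IDF vectors and cosine similarity as the similarity measure; the sample abstracts and the threshold are illustrative assumptions, not the paper's actual algorithm or data.

```python
# A hedged sketch: join two tables on a long string attribute (e.g.,
# abstracts) by TF-IDF + cosine similarity, with no labeled data needed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_join(left, right, threshold=0.2):
    """Return index pairs (i, j) whose long-string values are similar."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(left + right)
    sims = cosine_similarity(vectors[:len(left)], vectors[len(left):])
    return [(i, j) for i in range(len(left))
            for j in range(len(right)) if sims[i, j] >= threshold]

abstracts_a = ["Unsupervised learning for record matching over text."]
abstracts_b = ["Matching records without labels using text similarity."]
print(similarity_join(abstracts_a, abstracts_b))  # pairs of similar indices
```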
Learning Probabilistic Models of Word Sense Disambiguation  [PDF]
Ted Pedersen
Computer Science , 2007,
Abstract: This dissertation presents several new methods of supervised and unsupervised learning of word sense disambiguation models. The supervised methods focus on performing model searches through a space of probabilistic models, and the unsupervised methods rely on the use of Gibbs Sampling and the Expectation Maximization (EM) algorithm. In both the supervised and unsupervised case, the Naive Bayesian model is found to perform well. An explanation for this success is presented in terms of learning rates and bias-variance decompositions.
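As a hedged illustration of the supervised side, the sketch below trains a Naive Bayesian word sense disambiguator on surrounding context words; the sentences and sense labels are invented examples, not data from the dissertation.

```python
# Naive Bayes word sense disambiguation: context words as features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = [
    "deposit money at the bank branch",
    "the bank approved the loan",
    "fishing on the river bank",
    "grass grew along the bank of the stream",
]
senses = ["finance", "finance", "river", "river"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, senses)
print(model.predict(["she opened an account at the bank"]))  # -> ['finance']
```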
Multi-Object Classification and Unsupervised Scene Understanding Using Deep Learning Features and Latent Tree Probabilistic Models  [PDF]
Tejaswi Nimmagadda,Anima Anandkumar
Computer Science , 2015,
Abstract: Deep learning has shown state-of-the-art classification performance on datasets such as ImageNet, which contain a single object in each image. However, multi-object classification is far more challenging. We present a unified framework which leverages the strengths of multiple machine learning methods, namely deep learning, probabilistic models, and kernel methods, to obtain state-of-the-art performance on Microsoft COCO, consisting of non-iconic images. We incorporate contextual information in natural images through a conditional latent tree probabilistic model (CLTM), where the object co-occurrences are conditioned on the extracted fc7 features from a pre-trained ImageNet CNN as input. We learn the CLTM tree structure using conditional pairwise probabilities for object co-occurrences, estimated through kernel methods, and we learn its node and edge potentials by training a new 3-layer neural network, which takes fc7 features as input. Object classification is carried out via inference on the learnt conditional tree model, and we obtain significant gains in precision-recall and F-measures on MS-COCO, especially for difficult object categories. Moreover, the latent variables in the CLTM capture scene information: the images with top activations for a latent node have common themes, such as being a grassland or a food scene, and so on. In addition, we show that a simple k-means clustering of the inferred latent nodes alone significantly improves scene classification performance on the MIT-Indoor dataset, without the need for any retraining, and without using scene labels during training. Thus, we present a unified framework for multi-object classification and unsupervised scene understanding.
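The final step the abstract describes, clustering inferred latent activations to discover scenes, is easy to sketch. Below, random activations stand in for the CLTM's inferred latent nodes; the array sizes and cluster count are assumptions.

```python
# k-means over per-image latent activations to discover scene groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
latent_activations = rng.random((200, 16))   # 200 images x 16 latent nodes

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
scene_clusters = kmeans.fit_predict(latent_activations)
print(np.bincount(scene_clusters))  # images per discovered scene cluster
```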
Adaptive Scene Category Discovery with Generative Learning and Compositional Sampling  [PDF]
Liang Lin,Ruimao Zhang,Xiaohua Duan
Computer Science , 2015, DOI: 10.1109/TCSVT.2014.2313897
Abstract: This paper investigates a general framework to discover categories of unlabeled scene images according to their appearances (i.e., textures and structures). We jointly solve two coupled tasks in an unsupervised manner: (i) classifying images without pre-determining the number of categories, and (ii) pursuing a generative model for each category. In our method, each image is represented by two types of image descriptors that are effective at capturing image appearances from different aspects. By treating each image as a graph vertex, we build a graph and pose the image categorization as a graph partition process. Specifically, a partitioned sub-graph can be regarded as a category of scenes, and we define the probabilistic model of graph partition by accumulating the generative models of all separated categories. For efficient inference on the graph, we employ a stochastic cluster sampling algorithm, which is designed based on the Metropolis-Hastings mechanism. During the iterations of inference, the model of each category is analytically updated by a generative learning algorithm. In the experiments, our approach is validated on several challenging databases, and it outperforms other popular state-of-the-art methods. The implementation details and empirical analysis are presented as well.
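The Metropolis-Hastings flavour of such cluster sampling can be sketched as follows: propose moving one image (vertex) to another category and accept the move with the usual MH ratio. The toy affinity matrix and score below stand in for the paper's generative category models.

```python
# Stochastic partition sampling on an image-affinity graph (toy version).
import numpy as np

rng = np.random.default_rng(1)
n = 12
affinity = rng.random((n, n))
affinity = (affinity + affinity.T) / 2           # symmetric toy affinities
labels = rng.integers(0, 3, size=n)              # initial 3-way partition

def score(lbl):
    """Total within-category affinity (toy log partition probability)."""
    return sum(affinity[i, j] for i in range(n) for j in range(n)
               if lbl[i] == lbl[j] and i < j)

for _ in range(500):
    proposal = labels.copy()
    v = rng.integers(n)
    proposal[v] = rng.integers(3)                # move vertex v
    delta = score(proposal) - score(labels)
    if np.log(rng.random()) < delta:             # MH acceptance, log scale
        labels = proposal
print(labels)
```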
Unsupervised Learning in Synaptic Sampling Machines  [PDF]
Emre O. Neftci,Bruno U. Pedroni,Siddharth Joshi,Maruan Al-Shedivat,Gert Cauwenberghs
Computer Science , 2015,
Abstract: Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce the Synaptic Sampling Machine (SSM), a stochastic neural network model that uses synaptic unreliability as a means to stochasticity for sampling. Synaptic unreliability plays the dual role of an efficient mechanism for sampling in neuromorphic hardware, and a regularizer during learning akin to DropConnect. Similar to the original formulation of Boltzmann machines, the SSM can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. The SSM is trained to learn generative models with a synaptic plasticity rule implementing an event-driven form of contrastive divergence. We demonstrate this by learning a model of the MNIST hand-written digit dataset, and by testing it in recognition and inference tasks. We find that SSMs outperform restricted Boltzmann machines (4.4% error rate vs. 5%), they are more robust to overfitting, and tend to learn sparser representations. SSMs are remarkably robust to weight pruning: removal of more than 80% of the weakest connections followed by cursory re-learning causes only a negligible performance loss on the MNIST task (4.8% error rate). These results show that SSMs offer substantial improvements in terms of performance, power and complexity over existing methods for unsupervised learning in spiking neural networks, and are thus promising models for machine learning in neuromorphic execution platforms.
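A minimal numpy sketch of the core idea, synaptic unreliability as a random binary mask over the weights (DropConnect-like), is shown below, applied to a single contrastive-divergence step of a small RBM. Layer sizes, the mask probability, and the learning rate are illustrative assumptions, not the paper's event-driven rule.

```python
# One masked CD-1 update: unreliable synapses as a random weight mask.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, p_keep, lr = 6, 4, 0.8, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
v0 = rng.integers(0, 2, size=(1, n_vis)).astype(float)  # data vector

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
mask = (rng.random(W.shape) < p_keep).astype(float)     # unreliable synapses
Wm = W * mask

h0 = (rng.random((1, n_hid)) < sigmoid(v0 @ Wm)).astype(float)
v1 = sigmoid(h0 @ Wm.T)                                 # reconstruction
h1 = sigmoid(v1 @ Wm)
W += lr * mask * (v0.T @ h0 - v1.T @ h1)                # CD-1 update
print(W.round(3))
```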
Accuracy of Latent-Variable Estimation in Bayesian Semi-Supervised Learning  [PDF]
Keisuke Yamazaki
Statistics , 2013,
Abstract: Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than that in unsupervised learning, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.
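The setting is easy to illustrate: a hedged 1-D sketch of latent-variable estimation in a two-component Gaussian mixture by EM, where responsibilities are clamped for the few labeled points. The data, the handful of labels, and the equal-variance assumption are all illustrative.

```python
# Semi-supervised EM for a 1-D two-component Gaussian mixture
# (equal priors, unit variances; only the means are estimated).
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
labels = np.full(100, -1)
labels[:5] = 0          # a few observed latent variables (component 0)
labels[50:55] = 1       # a few observed latent variables (component 1)

mu = np.array([-1.0, 1.0])
for _ in range(20):
    # E-step: posterior responsibility of component 1
    r1 = 1 / (1 + np.exp(-(x * (mu[1] - mu[0]) + (mu[0]**2 - mu[1]**2) / 2)))
    r1[labels == 0] = 0.0   # clamp where the latent variable is observed
    r1[labels == 1] = 1.0
    # M-step: update the means
    mu = np.array([np.sum((1 - r1) * x) / np.sum(1 - r1),
                   np.sum(r1 * x) / np.sum(r1)])
print(mu.round(2))  # approximately [-2, 2]
```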
Machine learning for metagenomics: methods and tools  [PDF]
Hayssam Soueidan,Macha Nikolski
Quantitative Biology , 2015,
Abstract: While genomics is the research field concerned with the study of the genome of any organism, metagenomics is the term for research that focuses on many genomes at the same time, as is typical in some areas of environmental study. Metagenomics recognizes the need to develop computational methods that enable understanding the genetic composition and activities of communities of species so complex that they can only be sampled, never completely characterized. Machine learning currently offers some of the most computationally efficient tools for building predictive models for the classification of biological data. Various biological applications cover the entire spectrum of machine learning problems, including supervised learning, unsupervised learning (or clustering), and model construction. Moreover, most biological data -- and this is the case for metagenomics -- are both unbalanced and heterogeneous, thus meeting the current challenges of machine learning in the era of Big Data. The goal of this review is to examine the contribution of machine learning techniques to metagenomics, that is, to answer the question "to what extent does machine learning contribute to the study of microbial communities and environmental samples?" We will first briefly introduce the scientific fundamentals of machine learning. In the following sections we will illustrate how these techniques are helpful in answering questions of metagenomic data analysis. We will describe a certain number of methods and tools to this end, though we will not cover them exhaustively. Finally, we will speculate on the possible future directions of this research.
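One recurring metagenomics task of the kind such a review covers is unsupervised binning of reads by k-mer composition; the toy sketch below illustrates it. The sequences and the choice of k are assumptions, not examples from the paper.

```python
# Bin DNA reads by 2-mer composition with k-means (toy illustration).
from itertools import product
import numpy as np
from sklearn.cluster import KMeans

def kmer_profile(seq, k=2):
    """Normalized overlapping k-mer frequency vector of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = np.array([sum(seq[i:i + k] == km
                           for i in range(len(seq) - k + 1))
                       for km in kmers], dtype=float)
    return counts / counts.sum()

reads = ["ATATATATAT", "ATATATTATA", "GCGCGCGCGC", "GGCCGCGCGG"]
profiles = np.array([kmer_profile(r) for r in reads])
bins = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(bins)  # AT-rich reads land in one bin, GC-rich reads in the other
```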
Unsupervised Learning in a Framework of Information Compression by Multiple Alignment, Unification and Search  [PDF]
J. G. Wolff
Computer Science , 2003,
Abstract: This paper describes a novel approach to unsupervised learning that has been developed within a framework of "information compression by multiple alignment, unification and search" (ICMAUS), designed to integrate learning with other AI functions such as parsing and production of language, fuzzy pattern recognition, probabilistic and exact forms of reasoning, and others.
Path Finding under Uncertainty through Probabilistic Inference  [PDF]
David Tolpin,Brooks Paige,Jan Willem van de Meent,Frank Wood
Computer Science , 2015,
Abstract: We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models. This approach separates problem representation from the inference algorithm and provides a framework for efficient learning of path-finding policies. We evaluate the new approach on the Canadian Traveler Problem, which we formulate as a probabilistic model, and show how probabilistic inference allows high performance stochastic policies to be obtained for this problem.
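A toy version of this setup: a tiny Canadian Traveler instance where edge blockage is random, and a fixed policy is evaluated by sampling world states (probabilistic inference by plain Monte Carlo). The costs, blockage probability, and policy are assumptions, not the paper's model.

```python
# Monte Carlo evaluation of a path-finding policy under uncertainty.
import random

random.seed(0)
P_BLOCKED, SHORT, DETOUR, LONG = 0.3, 4.0, 1.0, 10.0

def travel_cost(short_blocked):
    """Policy: attempt the short road; if blocked at the junction, detour."""
    if short_blocked:
        return DETOUR + LONG   # reach junction, find blockage, go around
    return SHORT

samples = [travel_cost(random.random() < P_BLOCKED) for _ in range(100000)]
print(sum(samples) / len(samples))  # about 0.7*4 + 0.3*11 = 6.1
```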
Representation Learning: A Review and New Perspectives  [PDF]
Yoshua Bengio,Aaron Courville,Pascal Vincent
Computer Science , 2012,
Abstract: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
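As a hedged illustration of one family the review covers, the sketch below trains a tiny auto-encoder to learn a compressed representation without labels. The data, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal auto-encoder (8 -> 3 -> 8) trained by gradient descent in numpy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 8))                  # 100 unlabeled samples
W1 = 0.1 * rng.standard_normal((8, 3))    # encoder weights
W2 = 0.1 * rng.standard_normal((3, 8))    # decoder weights

for _ in range(2000):
    H = np.tanh(X @ W1)                   # learned representation
    X_hat = H @ W2                        # reconstruction
    err = X_hat - X
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H**2)          # backprop through tanh
    dW1 = X.T @ dH / len(X)
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2
print(float(np.mean(err**2)))             # final reconstruction MSE
```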