Search Results: 1 - 10 of 100 matches for "Barnabas Poczos"
All listed articles are free for downloading (OA Articles)
D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks
Barnabas Poczos,Andras Lorincz
Mathematics , 2008,
Abstract: We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop a Bayesian active learning technique to optimize the interrogating stimuli given past experiences. In particular, we treat the unknown parameters as stochastic variables and use the D-optimality principle, also known as the "infomax method", to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning the network parameters at each time step. We also derive the D-optimal estimate of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free, and the derivation gives rise to attractive quadratic update rules.
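As a rough illustration of the D-optimality (infomax) principle, the following Python sketch performs greedy stimulus selection for a plain linear-Gaussian model rather than the paper's RNN family; the candidate pool, noise level, and variable names are illustrative assumptions. The rank-1 precision update is an instance of the quadratic update rules the abstract refers to.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    sigma2 = 0.1                        # known observation-noise variance (assumed)
    true_w = rng.normal(size=d)         # hidden parameters to identify

    P = np.eye(d)                       # prior precision of the parameters
    b = np.zeros(d)                     # natural parameter: b = P @ mean

    def info_gain(x, P, sigma2):
        # D-optimality: expected log-det increase of the posterior precision
        Sigma = np.linalg.inv(P)
        return 0.5 * np.log1p(x @ Sigma @ x / sigma2)

    for t in range(20):
        candidates = rng.normal(size=(100, d))       # admissible stimuli (toy pool)
        x = max(candidates, key=lambda c: info_gain(c, P, sigma2))
        y = x @ true_w + rng.normal(scale=np.sqrt(sigma2))   # interrogate the system
        P += np.outer(x, x) / sigma2                 # quadratic (rank-1) update
        b += y * x / sigma2
    print("estimation error:", np.linalg.norm(np.linalg.solve(P, b) - true_w))
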
Kalman-filtering using local interactions
Barnabas Poczos,Andras Lorincz
Computer Science , 2003,
Abstract: There is growing interest in using Kalman-filter models for brain modelling. It is therefore of considerable importance to represent the Kalman filter in connectionist form with local Hebbian learning rules. To the best of our knowledge, the Kalman filter has not been given such a local representation; the main obstacle appears to be the dynamic adaptation of the Kalman gain. Here, a connectionist representation is presented, derived by means of the recursive prediction error method. We show that this method gives rise to attractive local learning rules and can adapt the Kalman gain.
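For reference, here is a minimal numpy sketch of the textbook Kalman filter recursion (not the paper's connectionist derivation); the model matrices are toy assumptions. The per-step recomputation of the gain K is the dynamic adaptation that the abstract identifies as the main obstacle to a local representation.

    import numpy as np

    rng = np.random.default_rng(1)
    # linear-Gaussian state-space model: x' = A x + w,  y = H x + v
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 0.01 * np.eye(2)                # process-noise covariance
    R = np.array([[0.25]])              # observation-noise covariance

    x_true, x_hat, P = np.array([0.0, 0.1]), np.zeros(2), np.eye(2)
    for t in range(50):
        x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
        x_hat, P = A @ x_hat, A @ P @ A.T + Q        # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # gain, re-adapted every step
        x_hat = x_hat + K @ (y - H @ x_hat)          # update
        P = (np.eye(2) - K @ H) @ P
    print("final state error:", np.linalg.norm(x_hat - x_true))
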
Separation Theorem for K-Independent Subspace Analysis with Sufficient Conditions
Zoltan Szabo,Barnabas Poczos,Andras Lorincz
Mathematics , 2006,
Abstract: Here, a separation theorem for K-Independent Subspace Analysis (KISA, with K real or complex), a generalization of K-Independent Component Analysis (KICA), is proven. According to the theorem, KISA estimation can be executed in two steps under certain conditions. In the first step, one-dimensional KICA estimation is performed; in the second step, the optimal permutation of the KICA elements is sought. We present sufficient conditions for the KISA separation theorem: we show that (i) spherically symmetric sources (in both the real and complex cases), as well as (ii) real two-dimensional sources invariant to 90-degree rotation, among others, satisfy the conditions of the theorem.
Separation Theorem for Independent Subspace Analysis with Sufficient Conditions
Zoltan Szabo,Barnabas Poczos,Andras Lorincz
Mathematics , 2006,
Abstract: Here, a separation theorem for Independent Subspace Analysis (ISA), a generalization of Independent Component Analysis (ICA), is proven. According to the theorem, ISA estimation can be executed in two steps under certain conditions. In the first step, one-dimensional ICA estimation is performed; in the second step, the optimal permutation of the ICA elements is sought. We present sufficient conditions for the ISA separation theorem: we show that (i) elliptically symmetric sources and (ii) two-dimensional sources invariant to 90-degree rotation, among others, satisfy the conditions of the theorem.
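A minimal Python sketch of the two-step procedure the theorem licenses, on assumed toy data: run one-dimensional ICA, then recover the permutation by grouping components with a simple dependence proxy. The ring-shaped (spherically symmetric) sources and the squared-signal correlation proxy are illustrative choices, not the paper's method.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    def ring(n):                # 2-d source: dependent within, uniform on a ring
        phi, r = rng.uniform(0, 2 * np.pi, n), rng.uniform(0.9, 1.1, n)
        return np.c_[r * np.cos(phi), r * np.sin(phi)]

    n = 5000
    S = np.hstack([ring(n), ring(n)])              # two independent 2-d subspaces
    X = S @ rng.normal(size=(4, 4)).T              # instantaneous mixture

    Y = FastICA(n_components=4, random_state=0).fit_transform(X)   # step 1: 1-d ICA

    # step 2: permutation search -- cluster components whose energies co-vary
    dist = 1.0 - np.abs(np.corrcoef((Y ** 2).T))
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(dist[np.triu_indices(4, 1)], method="average"),
                      t=2, criterion="maxclust")
    print("recovered grouping:", labels)           # same label = same subspace
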
Undercomplete Blind Subspace Deconvolution
Zoltan Szabo,Barnabas Poczos,Andras Lorincz
Mathematics , 2007,
Abstract: We introduce the blind subspace deconvolution (BSSD) problem, which extends both the blind source deconvolution (BSD) and independent subspace analysis (ISA) tasks. We examine the undercomplete case (uBSSD). Applying temporal concatenation, we reduce this problem to ISA. The associated "high-dimensional" ISA problem can be handled by a recent technique called joint f-decorrelation (JFD). Similar decorrelation methods have been used previously for kernel independent component analysis (kernel-ICA); more precisely, the kernel canonical correlation analysis (KCCA) technique is a member of this family, and, as shown in this paper, the kernel generalized variance (KGV) method can also be seen as a decorrelation method in the feature space. These kernel-based algorithms are adapted to the ISA task. In the numerical examples, we (i) examine how efficiently the emerging higher-dimensional ISA tasks can be tackled, and (ii) explore the workings and advantages of the derived kernel-ISA methods.
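A minimal Python sketch of the temporal-concatenation reduction mentioned above (the kernel-based ISA solvers themselves are beyond a short example); the toy convolutive mixture and the concatenation depth are assumptions.

    import numpy as np

    def temporal_concatenation(X, depth):
        # stack `depth` time-shifted copies of the observation X (n_samples, d);
        # the convolutive (BSSD) mixture becomes an instantaneous mixture of a
        # higher-dimensional source, so an ISA solver can be applied to the result
        T = X.shape[0] - depth + 1
        return np.hstack([X[i:i + T] for i in range(depth)])

    rng = np.random.default_rng(3)
    S = rng.laplace(size=(1000, 2))                 # hidden components
    H0, H1 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
    X = S[1:] @ H0.T + S[:-1] @ H1.T                # x_t = H0 s_t + H1 s_{t-1}
    Z = temporal_concatenation(X, depth=3)          # feed Z to an ISA routine
    print(Z.shape)                                  # (997, 12): the ISA task grew
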
Collaborative Filtering via Group-Structured Dictionary Learning
Zoltan Szabo,Barnabas Poczos,Andras Lorincz
Mathematics , 2012, DOI: 10.1007/978-3-642-28551-6_31
Abstract: Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning to collaborative-filtering-based recommender systems. Our extensive numerical experiments demonstrate that the presented technique outperforms its state-of-the-art competitors and has several advantages over approaches that do not place structured constraints on the dictionary elements.
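The paper's method is tailored to recommender data; the batch Python sketch below only illustrates the underlying idea of group-structured dictionary learning: a proximal (block soft-thresholding) step that shrinks whole groups of coefficients together, alternated with a least-squares dictionary update. The toy matrix, group layout, and penalty weight are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(50, 200))                 # toy data matrix (fully observed)
    k = 12
    groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
    D, A, lam = rng.normal(size=(50, k)), np.zeros((k, 200)), 0.1

    for it in range(30):
        L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
        for _ in range(10):                        # group-lasso proximal gradient on A
            A = A - D.T @ (D @ A - X) / L
            for g in groups:
                norms = np.maximum(np.linalg.norm(A[g], axis=0, keepdims=True), 1e-12)
                A[g] *= np.maximum(1 - (lam / L) / norms, 0)   # whole group shrinks
        # dictionary step: regularized least squares, then unit-norm atoms
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-8 * np.eye(k))
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

    print("relative error:", np.linalg.norm(X - D @ A) / np.linalg.norm(X))
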
Nonparametric Divergence Estimation with Applications to Machine Learning on Distributions
Barnabas Poczos,Liang Xiong,Jeff Schneider
Computer Science , 2012,
Abstract: Low-dimensional embedding, manifold learning, clustering, classification, and anomaly detection are among the most important problems in machine learning. Existing methods usually consider the case in which each instance has a fixed, finite-dimensional feature representation. Here we consider a different setting: we assume that each instance corresponds to a continuous probability distribution. These distributions are unknown, but we are given some i.i.d. samples from each of them. Our goal is to estimate the distances between these distributions and to use these distances to perform low-dimensional embedding, clustering/classification, or anomaly detection on the distributions. We present estimation algorithms, describe how to apply them to machine learning tasks on distributions, and show empirical results on synthetic data, real-world images, and astronomical data sets.
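A minimal Python sketch of one estimator in this family: a k-nearest-neighbor estimate of the KL divergence between two sample sets (in the style of the Wang-Kulkarni-Verdú estimator); the pairwise divergences it produces could then feed the embedding, clustering, or anomaly detection steps. Sample sizes and k are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def knn_kl_divergence(X, Y, k=5):
        # k-NN estimate of KL(p || q) from samples X ~ p and Y ~ q
        n, d = X.shape
        m = Y.shape[0]
        rho = cKDTree(X).query(X, k + 1)[0][:, -1]  # k-NN distance within X (skip self)
        nu = cKDTree(Y).query(X, k)[0][:, -1]       # k-NN distance from X into Y
        return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

    rng = np.random.default_rng(5)
    X = rng.normal(0.0, 1.0, size=(2000, 2))
    Y = rng.normal(0.5, 1.0, size=(2000, 2))
    print(knn_kl_divergence(X, Y))    # grows with the mean shift; near 0 when p = q
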
Copula-based Kernel Dependency Measures
Barnabas Poczos,Zoubin Ghahramani,Jeff Schneider
Computer Science , 2012,
Abstract: The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy to the copula of the joint distribution. We prove that this approach has several advantageous properties. Similarly to Shannon mutual information, the proposed dependence measure is invariant to any strictly increasing transformation of the marginal variables, which is important in many applications, for example in feature selection. The estimator is consistent, robust to outliers, and uses rank statistics only. We derive upper bounds on the convergence rate and also propose independence tests. We illustrate the theoretical contributions through a series of experiments in feature selection and low-dimensional embedding of distributions.
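A minimal Python sketch of the construction on assumed toy data: rank-transform each marginal to obtain the empirical copula, then compute a (biased) squared MMD with a Gaussian kernel between the copula sample and an independent uniform sample. Since ranks are unchanged by strictly increasing marginal transformations, so is the resulting measure.

    import numpy as np
    from scipy.stats import rankdata

    def empirical_copula(X):
        # map each marginal to normalized ranks in (0, 1)
        return np.column_stack([rankdata(c) / (len(c) + 1) for c in X.T])

    def mmd2(X, Y, gamma=1.0):
        # biased squared Maximum Mean Discrepancy with a Gaussian kernel
        gram = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
        return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

    rng = np.random.default_rng(6)
    x = rng.normal(size=1000)
    X = np.c_[x, np.exp(x) + 0.1 * rng.normal(size=1000)]   # dependent pair
    U = rng.uniform(size=(1000, 2))              # the independence copula
    print(mmd2(empirical_copula(X), U))          # large; near 0 for independent pairs
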
High Dimensional Bayesian Optimisation and Bandits via Additive Models
Kirthevasan Kandasamy,Jeff Schneider,Barnabas Poczos
Computer Science , 2015,
Abstract: Bayesian Optimisation (BO) is a technique for optimising a $D$-dimensional function that is typically expensive to evaluate. While BO has seen many successes in low dimensions, scaling it to high dimensions has proven notoriously difficult, and the existing literature on the topic operates under very restrictive settings. In this paper, we identify two key challenges in this endeavour and tackle them by assuming an additive structure for the function. This setting is substantially more expressive and contains a richer class of functions than previous work. We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions. We also demonstrate several other statistical and computational benefits of our framework. Via synthetic examples, a scientific simulation, and a face detection problem, we demonstrate that our method outperforms naive BO on additive functions and on several examples where the function is not additive.
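A deliberately simplified Python sketch of the additive idea, assuming the decomposition into disjoint groups is known: one GP (scikit-learn) is fit per group, treating the remaining components' contribution as noise, and a UCB acquisition is maximized group by group, so the search never has to cover all $D$ dimensions jointly. The test function, kernel, and hyperparameters are toy assumptions, and this is a caricature of the paper's inference rather than its algorithm.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(7)
    groups = [np.arange(0, 3), np.arange(3, 6)]     # f = f_1(x_0..2) + f_2(x_3..5)
    f = lambda x: -np.sum((x[:3] - 0.3) ** 2) - np.sum((x[3:] - 0.7) ** 2)

    X = rng.uniform(size=(5, 6))                    # initial design in [0, 1]^6
    y = np.array([f(x) for x in X])

    for t in range(20):
        x_next = np.empty(6)
        for g in groups:
            # per-group GP; alpha absorbs the other group's contribution as noise
            gp = GaussianProcessRegressor(RBF(0.3), alpha=1e-2,
                                          normalize_y=True).fit(X[:, g], y)
            cand = rng.uniform(size=(500, len(g)))
            mu, sd = gp.predict(cand, return_std=True)
            x_next[g] = cand[np.argmax(mu + 2.0 * sd)]  # group-wise UCB maximizer
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))

    print("best value found:", y.max())             # global optimum is 0
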
Undercomplete Blind Subspace Deconvolution via Linear Prediction
Zoltan Szabo,Barnabas Poczos,Andras Lorincz
Statistics , 2007,
Abstract: We present a novel solution technique for the blind subspace deconvolution (BSSD) problem, in which a temporal convolution of multidimensional hidden independent components is observed and the task is to uncover the hidden components from the observation alone. We carry out this task for the undercomplete case (uBSSD): we reduce the original uBSSD task via linear prediction to independent subspace analysis (ISA), which we can solve. As has been shown recently, applying temporal concatenation can also reduce uBSSD to ISA, but the associated ISA problem can easily become "high-dimensional" [1]. The new reduction method circumvents this dimensionality problem. We perform detailed studies on the efficiency of the proposed technique by means of numerical simulations and find several advantages: our method achieves high-quality estimates from a smaller number of samples and can cope with deeper temporal convolutions.
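A minimal Python sketch of the linear-prediction reduction: fit a vector autoregressive model to the observation by least squares and pass the prediction errors (innovations) to an ISA routine. Unlike temporal concatenation, the dimension of the resulting ISA task does not grow. The AR order and the toy mixture are assumptions.

    import numpy as np

    def ar_innovations(X, order):
        # least-squares vector AR fit; the innovation process of a uBSSD
        # observation is an instantaneous mixture, so ISA applies to it
        n = X.shape[0]
        Z = np.hstack([X[order - i - 1:n - i - 1] for i in range(order)])
        target = X[order:]
        W, *_ = np.linalg.lstsq(Z, target, rcond=None)
        return target - Z @ W                      # feed these to an ISA routine

    rng = np.random.default_rng(8)
    S = rng.laplace(size=(2000, 2))
    H0, H1 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
    X = S[1:] @ H0.T + S[:-1] @ H1.T               # same toy uBSSD model as above
    E = ar_innovations(X, order=2)
    print(E.shape)                                 # (1997, 4): dimension unchanged
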