oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Outlier robust system identification: a Bayesian kernel-based approach  [PDF]
Giulio Bottegal, Aleksandr Y. Aravkin, Håkan Hjalmarsson, Gianluigi Pillonetto
Statistics, 2013
Abstract: In this paper, we propose an outlier-robust regularized kernel-based method for linear system identification. The unknown impulse response is modeled as a zero-mean Gaussian process whose covariance (kernel) is given by the recently proposed stable spline kernel, which encodes information on regularity and exponential stability. To build robustness to outliers, we model the measurement noise as realizations of independent Laplacian random variables. The identification problem is cast in a Bayesian framework, and solved by a new Markov Chain Monte Carlo (MCMC) scheme. In particular, exploiting the representation of the Laplacian random variables as scale mixtures of Gaussians, we design a Gibbs sampler which quickly converges to the target distribution. Numerical simulations show a substantial improvement in the accuracy of the estimates over state-of-the-art kernel-based methods.
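A minimal NumPy sketch of two ingredients named in the abstract: the first-order stable spline kernel, and the representation of Laplacian noise as a scale mixture of Gaussians that the Gibbs sampler exploits. The parameter names (alpha, lam) and values are illustrative, not the authors' notation.

    import numpy as np

    rng = np.random.default_rng(0)

    # First-order stable spline kernel: k(s, t) = alpha ** max(s, t), 0 < alpha < 1.
    # It encodes smoothness and exponential decay (stability) of the impulse response.
    def stable_spline_kernel(n, alpha=0.9):
        idx = np.arange(1, n + 1)
        return alpha ** np.maximum.outer(idx, idx)

    n = 50
    K = stable_spline_kernel(n)
    g = rng.multivariate_normal(np.zeros(n), K)   # one draw from the GP prior on g

    # Laplacian noise as a scale mixture of Gaussians: if v ~ Exp(rate=lam) and
    # e | v ~ N(0, v), then marginally e is Laplacian with scale 1/sqrt(2*lam).
    # Sampling (e, v) jointly is what makes the Gibbs sampler tractable.
    lam = 2.0
    v = rng.exponential(1.0 / lam, size=100_000)  # NumPy takes scale = 1/rate
    e = rng.normal(0.0, np.sqrt(v))               # empirically Laplacian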
A kernel-based approach to Hammerstein system identification  [PDF]
Riccardo Sven Risuleo, Giulio Bottegal, Håkan Hjalmarsson
Computer Science, 2014
Abstract: In this paper, we propose a novel algorithm for the identification of Hammerstein systems. Adopting a Bayesian approach, we model the impulse response of the unknown linear dynamic system as a realization of a zero-mean Gaussian process. The covariance matrix (or kernel) of this process is given by the recently introduced stable-spline kernel, which encodes information on the stability and regularity of the impulse response. The static non-linearity of the model is identified using an Empirical Bayes approach, i.e., by maximizing the output marginal likelihood, which is obtained by integrating out the unknown impulse response. The related optimization problem is solved with a novel iterative scheme based on the Expectation-Maximization (EM) method, where each iteration consists of a simple sequence of update rules. Numerical experiments show that the proposed method compares favorably with a standard algorithm for Hammerstein system identification.
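As a hedged sketch of the Empirical Bayes step described above: with the impulse response integrated out, the outputs are jointly Gaussian, so the marginal likelihood of a candidate nonlinearity is available in closed form. The polynomial nonlinearity, the kernel hyperparameters, and all helper names below are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(1)

    def stable_spline_kernel(n, alpha=0.9):
        idx = np.arange(1, n + 1)
        return alpha ** np.maximum.outer(idx, idx)

    def marginal_loglik(c, u, y, n, lam=1.0, sig2=0.01):
        """log p(y | c) with g integrated out: y ~ N(0, lam * Phi K Phi' + sig2 I)."""
        v = np.polyval(c, u)              # static nonlinearity f(u), here a cubic
        Phi = toeplitz(v, np.zeros(n))    # N x n convolution (regression) matrix
        S = lam * Phi @ stable_spline_kernel(n) @ Phi.T + sig2 * np.eye(len(y))
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (logdet + y @ np.linalg.solve(S, y) + len(y) * np.log(2 * np.pi))

    # Simulated Hammerstein data: y = g * f(u) + e.
    N, n = 200, 30
    u = rng.standard_normal(N)
    g_true = 0.5 * 0.8 ** np.arange(1, n + 1)
    y = toeplitz(np.polyval([0.1, 0.0, 1.0, 0.0], u), np.zeros(n)) @ g_true \
        + 0.1 * rng.standard_normal(N)
    print(marginal_loglik([0.1, 0.0, 1.0, 0.0], u, y, n))

An EM scheme as in the paper would alternate updates of the nonlinearity coefficients and the hyperparameters, each increasing this marginal likelihood.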
Quantized Output Feedback Stabilization of Switched Linear Systems  [PDF]
Masashi Wakaiki, Yutaka Yamamoto
Mathematics, 2014
Abstract: This paper studies the problem of stabilizing a continuous-time switched linear system by quantized output feedback. We assume that the quantized outputs and the switching signal are available to the controller at all times. We develop an encoding strategy by using multiple Lyapunov functions and an average dwell time property. The encoding strategy is based on the results for the case of a single mode, and it requires an additional adjustment of the "zoom" parameter at every switching time.
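The "zoom" mechanism is not spelled out in the abstract; the following is a hedged sketch of the standard construction it refers to: a finite uniform quantizer whose range is rescaled online, with an extra rescaling at switching times. The factor 1.5 and all names are illustrative.

    import numpy as np

    def quantize(y, zoom, levels=16):
        """Uniform quantizer covering [-zoom, zoom], saturating at the edges."""
        step = 2.0 * zoom / levels
        q = step * (np.floor(y / step) + 0.5)
        return np.clip(q, -zoom + step / 2, zoom - step / 2)

    # The controller adjusts the zoom parameter so the output stays in the
    # unsaturated range; on top of that, it re-adjusts at every switching time.
    zoom = 1.0
    for t, switched in enumerate([False, False, True, False]):
        if switched:
            zoom *= 1.5  # illustrative "zoom out" at a switching instant
        print(t, quantize(0.3 * np.sin(t), zoom))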
Bayesian Nonparametric Kernel-Learning  [PDF]
Junier Oliva, Avinava Dubey, Barnabas Poczos, Jeff Schneider, Eric P. Xing
Statistics, 2015
Abstract: Kernel methods are ubiquitous tools in machine learning. They have proven to be effective in many domains and tasks. Yet, kernel methods often require the user to select a predefined kernel to build an estimator with. However, there is often little reason for the a priori selection of a kernel. Even if a universal approximating kernel is selected, the quality of the finite sample estimator may be greatly affected by the choice of kernel. Furthermore, when directly applying kernel methods, one typically needs to compute an $N \times N$ Gram matrix of pairwise kernel evaluations to work with a dataset of $N$ instances. The computation of this Gram matrix precludes the direct application of kernel methods on large datasets. In this paper we introduce Bayesian nonparametric kernel (BaNK) learning, a generic, data-driven framework for scalable learning of kernels. We show that this framework can be used for performing both regression and classification tasks and scales to large datasets. Furthermore, we show that BaNK outperforms several other scalable approaches for kernel learning on a variety of real world datasets.
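BaNK is built on random features: by Bochner's theorem, a shift-invariant kernel is an expectation of cosines at random frequencies, and BaNK learns the frequency distribution. A minimal sketch of the underlying random Fourier feature approximation, with the frequencies fixed to Gaussian draws (which recovers the RBF kernel) rather than learned as in the paper:

    import numpy as np

    rng = np.random.default_rng(2)

    def random_fourier_features(X, W, b):
        """z(x) = sqrt(2/D) cos(W x + b), so that z(x) . z(x') ~= k(x, x')."""
        return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

    N, d, D, sigma = 100, 5, 2000, 1.0
    X = rng.standard_normal((N, d))
    W = rng.standard_normal((D, d)) / sigma   # Gaussian frequencies -> RBF kernel
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    Z = random_fourier_features(X, W, b)      # N x D; no N x N Gram matrix needed

    K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma**2))
    print(np.abs(Z @ Z.T - K_exact).max())    # small approximation error

Working with Z instead of the Gram matrix is what removes the $N \times N$ bottleneck mentioned in the abstract.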
A Kernel Approach to Tractable Bayesian Nonparametrics  [PDF]
Ferenc Huszár, Simon Lacoste-Julien
Statistics, 2011
Abstract: Inference in popular nonparametric Bayesian models typically relies on sampling or other approximations. This paper presents a general methodology for constructing novel tractable nonparametric Bayesian methods by applying the kernel trick to inference in a parametric Bayesian model. For example, Gaussian process regression can be derived this way from Bayesian linear regression. Despite the success of the Gaussian process framework, the kernel trick is rarely explicitly considered in the Bayesian literature. In this paper, we aim to fill this gap and demonstrate the potential of applying the kernel trick to tractable Bayesian parametric models in a wider context than just regression. As an example, we present an intuitive Bayesian kernel machine for density estimation that is obtained by applying the kernel trick to a Gaussian generative model in feature space.
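The abstract's running example is concrete enough to sketch: applying the kernel trick to Bayesian linear regression yields Gaussian process regression, whose posterior is available in closed form. A minimal sketch (the kernel choice and noise level are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def rbf(A, B, sigma=1.0):
        return np.exp(-((A[:, None] - B[None, :]) ** 2).sum(-1) / (2 * sigma**2))

    # Bayesian linear regression in feature space, expressed purely through
    # inner products (the kernel trick), is exactly GP regression.
    X = rng.uniform(-3, 3, (30, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
    Xs = np.linspace(-3, 3, 100)[:, None]        # test inputs

    Kxx = rbf(X, X) + 0.1**2 * np.eye(len(X))    # training covariance plus noise
    Ksx = rbf(Xs, X)
    mean = Ksx @ np.linalg.solve(Kxx, y)                   # posterior mean
    cov = rbf(Xs, Xs) - Ksx @ np.linalg.solve(Kxx, Ksx.T)  # posterior covariance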
Blind Identification of SIMO Wiener Systems based on Kernel Canonical Correlation Analysis  [PDF]
Steven Van Vaerenbergh, Javier Via, Ignacio Santamaria
Mathematics, 2013, DOI: 10.1109/TSP.2013.2248004
Abstract: We consider the problem of blind identification and equalization of single-input multiple-output (SIMO) nonlinear channels. Specifically, the nonlinear model consists of multiple single-channel Wiener systems that are excited by a common input signal. The proposed approach is based on a well-known blind identification technique for linear SIMO systems. By transforming the output signals into a reproducing kernel Hilbert space (RKHS), a linear identification problem is obtained, which we propose to solve through an iterative procedure that alternates between canonical correlation analysis (CCA) to estimate the linear parts, and kernel canonical correlation analysis (KCCA) to estimate the memoryless nonlinearities. The proposed algorithm is able to operate on systems with as few as two output channels, on relatively small data sets, and on colored signals. Simulations are included to demonstrate the effectiveness of the proposed technique.
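One building block of the alternating scheme is classical CCA; the KCCA step replaces the raw channel data with kernel feature maps. A hedged sketch of CCA via whitening and an SVD (in the paper it would be applied to windowed channel outputs; the regularization and names here are illustrative):

    import numpy as np

    def cca(X, Y, reg=1e-6):
        """Leading canonical pair between data matrices X (N x p) and Y (N x q)."""
        X = X - X.mean(0)
        Y = Y - Y.mean(0)
        Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
        Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
        Cxy = X.T @ Y / len(X)
        Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
        M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T  # whitened cross-covariance
        U, s, Vt = np.linalg.svd(M)
        a = np.linalg.solve(Lx.T, U[:, 0])   # canonical direction for X
        b = np.linalg.solve(Ly.T, Vt[0])     # canonical direction for Y
        return a, b, s[0]                    # s[0] is the canonical correlation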
Efficient Output Kernel Learning for Multiple Tasks  [PDF]
Pratik Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele
Computer Science, 2015
Abstract: The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly and thus exploiting the similarity between the tasks rather than learning them independently of each other. While previously the relationship between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the output kernel is a positive semidefinite matrix, the resulting optimization problems are not scalable in the number of tasks as an eigendecomposition is required in each step. Using the theory of positive semidefinite kernels we show in this paper that for a certain class of regularizers on the output kernel, the constraint of being positive semidefinite can be dropped as it is automatically satisfied for the relaxed problem. This leads to an unconstrained dual problem which can be solved efficiently. Experiments on several multi-task and multi-class data sets illustrate the efficacy of our approach in terms of computational efficiency as well as generalization performance.
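For context, a hedged sketch of how an output kernel enters multi-task learning, using the common separable (Kronecker) formulation of multi-task kernel ridge regression. Here the output kernel Theta is fixed by hand; the paper's point is to learn it jointly with the tasks, efficiently and without an explicit positive semidefinite constraint.

    import numpy as np

    rng = np.random.default_rng(4)

    # Separable multi-task kernel: K_multi((x, s), (x', t)) = k(x, x') * Theta[s, t].
    N, T = 40, 3
    X = rng.standard_normal((N, 2))
    Y = rng.standard_normal((N, T))              # one column of targets per task

    K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
    Theta = np.array([[1.0, 0.8, 0.1],           # assumed task-similarity matrix
                      [0.8, 1.0, 0.1],
                      [0.1, 0.1, 1.0]])

    lam = 0.1
    A = np.kron(Theta, K) + lam * np.eye(N * T)
    alpha = np.linalg.solve(A, Y.T.ravel())      # targets stacked task by task
    Y_fit = (np.kron(Theta, K) @ alpha).reshape(T, N).T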
Shared kernel Bayesian screening  [PDF]
Eric F. Lock, David B. Dunson
Statistics, 2013
Abstract: This article concerns testing for differences between groups in many related variables. For example, the focus may be on identifying genomic sites with differential methylation between tumor subtypes. Standard practice in such applications is independent screening using adjustments for multiple testing to maintain false discovery rates. We propose a Bayesian nonparametric testing methodology, which improves performance by borrowing information adaptively across the different variables through the incorporation of shared kernels and a common probability of group differences. The inclusion of shared kernels in a finite mixture, with Dirichlet priors on the different weight vectors, leads to a simple and scalable methodology that can be routinely implemented in high dimensions. We provide some theoretical results, including closed asymptotic forms for the posterior probability of equivalence in two groups and consistency even under model misspecification. The method is shown to compare favorably to frequentist and Bayesian competitors, and is applied to methylation array data from a breast cancer study.
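A hedged illustration of the "common probability of group differences" idea: conditionally on the component assignments, shared kernels reduce the two-group comparison to Dirichlet-multinomial marginal likelihoods over component counts. This is a simplification of the paper's methodology; the counts, prior, and names are invented for illustration.

    import numpy as np
    from scipy.special import gammaln

    def dirmult_logml(counts, alpha=1.0):
        """Log marginal likelihood of component counts under a Dirichlet prior."""
        counts = np.asarray(counts, float)
        a = np.full_like(counts, alpha)
        return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
                + (gammaln(a + counts) - gammaln(a)).sum())

    # Observations per shared kernel (mixture component) in each group:
    n1 = np.array([20, 5, 2])
    n2 = np.array([4, 18, 3])

    log_h0 = dirmult_logml(n1 + n2)                 # groups share one weight vector
    log_h1 = dirmult_logml(n1) + dirmult_logml(n2)  # separate weight vectors
    post_h0 = 1.0 / (1.0 + np.exp(log_h1 - log_h0)) # with prior P(H0) = 1/2
    print(post_h0)  # posterior probability that the two groups are equivalent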
Bayesian Efficient Multiple Kernel Learning  [PDF]
Mehmet Gonen
Computer Science, 2012
Abstract: Multiple kernel learning algorithms are proposed to combine kernels in order to obtain a better similarity measure or to integrate feature representations coming from different data sources. Most of the previous research on such methods is focused on the computational efficiency issue. However, it is still not feasible to combine many kernels using existing Bayesian approaches due to their high time complexity. We propose a fully conjugate Bayesian formulation and derive a deterministic variational approximation, which allows us to combine hundreds or thousands of kernels very efficiently. We briefly explain how the proposed method can be extended for multiclass learning and semi-supervised learning. Experiments with large numbers of kernels on benchmark data sets show that our inference method is quite fast, requiring less than a minute. On one bioinformatics and three image recognition data sets, our method outperforms previously reported results with better generalization performance.
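The object at the center of any MKL method is a weighted combination of Gram matrices. A minimal sketch of that combination plugged into kernel ridge regression, with the weights simply fixed; in the paper they are inferred, together with the predictor, by a deterministic variational approximation.

    import numpy as np

    rng = np.random.default_rng(5)

    N = 50
    X = rng.standard_normal((N, 3))
    y = rng.standard_normal(N)

    # Precomputed Gram matrices, e.g. RBF kernels at several widths.
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    kernels = [np.exp(-d2 / (2 * s**2)) for s in (0.5, 1.0, 2.0)]
    eta = np.array([0.2, 0.5, 0.3])      # assumed (not learned) kernel weights

    K_eta = sum(e * K for e, K in zip(eta, kernels))
    alpha = np.linalg.solve(K_eta + 0.1 * np.eye(N), y)   # kernel ridge on K_eta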
A Generalized Kernel Approach to Structured Output Learning  [PDF]
Hachem Kadri, Mohammad Ghavamzadeh, Philippe Preux
Computer Science, 2012
Abstract: We study the problem of structured output learning from a regression perspective. We first provide a general formulation of the kernel dependency estimation (KDE) problem using operator-valued kernels. We show that some of the existing formulations of this problem are special cases of our framework. We then propose a covariance-based operator-valued kernel that allows us to take into account the structure of the kernel feature space. This kernel operates on the output space and encodes the interactions between the outputs without any reference to the input space. To also capture the effects of the inputs, we introduce a variant of our KDE method based on the conditional covariance operator, which takes into account the effects of the input variables in addition to the correlation between the outputs. Finally, we evaluate the performance of our KDE approach using both covariance and conditional covariance kernels on two structured output problems, and compare it to the state-of-the-art kernel-based structured output regression methods.
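For reference, a sketch of the basic (scalar-kernel) KDE pipeline that the paper generalizes with operator-valued kernels: kernel ridge regression from inputs into the output feature space, followed by a pre-image step that picks the closest candidate output in the output RKHS. The data, kernels, and the use of training outputs as the candidate set are all illustrative.

    import numpy as np

    rng = np.random.default_rng(6)

    def rbf(A, B, sigma=1.0):
        return np.exp(-((A[:, None] - B[None, :]) ** 2).sum(-1) / (2 * sigma**2))

    X = rng.standard_normal((60, 4))      # inputs (plain vectors here)
    Y = rng.standard_normal((60, 2))      # outputs (plain vectors here)
    Kx, Ky = rbf(X, X), rbf(Y, Y)

    lam = 0.1
    W = np.linalg.solve(Kx + lam * np.eye(len(X)), np.eye(len(X)))

    x_new = rng.standard_normal((1, 4))
    c = W @ rbf(X, x_new)[:, 0]           # expansion coefficients for x_new
    # Squared RKHS distance from each candidate y_i to the prediction:
    # ||phi(y_i) - sum_j c_j phi(y_j)||^2 = Ky[i, i] - 2 (Ky c)[i] + c' Ky c.
    dist2 = np.diag(Ky) - 2 * Ky @ c + c @ Ky @ c
    y_pred = Y[np.argmin(dist2)]          # pre-image: closest training output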