oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Variational Inference in Nonconjugate Models  [PDF]
Chong Wang,David M. Blei
Statistics , 2012,
Abstract: Mean-field variational methods are widely used for approximate posterior inference in many probabilistic models. In a typical application, mean-field methods approximately compute the posterior with a coordinate-ascent optimization algorithm. When the model is conditionally conjugate, the coordinate updates are easily derived and in closed form. However, many models of interest---like the correlated topic model and Bayesian logistic regression---are nonconjugate. In these models, mean-field methods cannot be directly applied and practitioners have had to develop variational algorithms on a case-by-case basis. In this paper, we develop two generic methods for nonconjugate models, Laplace variational inference and delta method variational inference. Our methods have several advantages: they allow for easily derived variational algorithms with a wide class of nonconjugate models; they extend and unify some of the existing algorithms that have been derived for specific models; and they work well on real-world datasets. We studied our methods on the correlated topic model, Bayesian logistic regression, and hierarchical Bayesian logistic regression.
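As a rough illustration of the Laplace variational idea (a sketch, not the authors' code), the Python snippet below applies it to the simplest nonconjugate case mentioned above, Bayesian logistic regression: the variational Gaussian is centred at the mode of the log joint, with covariance equal to the inverse of the negative Hessian at that mode. In the paper's full setting this update is applied per nonconjugate node inside the mean-field coordinate-ascent loop.

import numpy as np
from scipy.optimize import minimize

def laplace_vi_logistic(X, y, prior_var=10.0):
    # Gaussian approximation q(w) = N(mu, Sigma) for Bayesian logistic
    # regression with a N(0, prior_var * I) prior and labels y in {0, 1}:
    # mu is the mode of the log joint, Sigma the inverse negative Hessian.
    d = X.shape[1]

    def neg_log_joint(w):
        z = X @ w
        log_lik = np.sum(y * z - np.logaddexp(0.0, z))
        log_prior = -0.5 * np.dot(w, w) / prior_var
        return -(log_lik + log_prior)

    mu = minimize(neg_log_joint, np.zeros(d), method="BFGS").x
    p = 1.0 / (1.0 + np.exp(-(X @ mu)))
    H = X.T @ (X * (p * (1.0 - p))[:, None]) + np.eye(d) / prior_var
    return mu, np.linalg.inv(H)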
Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference  [PDF]
Edward Meeds,Max Welling
Computer Science , 2015,
Abstract: We describe an embarrassingly parallel, anytime Monte Carlo method for likelihood-free models. The algorithm starts with the view that the stochasticity of the pseudo-samples generated by the simulator can be controlled externally by a vector of random numbers u, in such a way that the outcome, knowing u, is deterministic. For each instantiation of u we run an optimization procedure to minimize the distance between summary statistics of the simulator and the data. After reweighting these samples using the prior and the Jacobian (accounting for the change of volume in transforming from the space of summary statistics to the space of parameters) we show that this weighted ensemble represents a Monte Carlo estimate of the posterior distribution. The procedure can be run embarrassingly parallel (each node handling one sample) and anytime (by allocating resources to the worst performing sample). The procedure is validated on six experiments.
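A minimal sketch of the procedure, assuming user-supplied simulator, summary and prior_logpdf functions (all names hypothetical) and a simulator that consumes a standard-normal random vector u: each draw of u fixes the randomness, an optimizer matches the summary statistics, and the optima are reweighted by the prior and a Jacobian-based volume factor.

import numpy as np
from scipy.optimize import minimize

def omc_sample(simulator, summary, y_obs, prior_logpdf, theta0, n_samples,
               dim_u, eps=1e-5):
    # Optimization Monte Carlo: one optimization problem per draw of u,
    # weights proportional to prior / sqrt(det(J'J)), with J the Jacobian
    # of the summary-statistic map at the optimum (change of volume).
    s_obs = summary(y_obs)
    thetas, weights = [], []
    for _ in range(n_samples):
        u = np.random.randn(dim_u)                  # externally controlled randomness
        dist = lambda th: np.sum((summary(simulator(th, u)) - s_obs) ** 2)
        th = minimize(dist, theta0, method="Nelder-Mead").x
        J = np.array([(summary(simulator(th + eps * e, u)) -
                       summary(simulator(th - eps * e, u))) / (2.0 * eps)
                      for e in np.eye(len(th))]).T  # finite-difference Jacobian
        vol = np.sqrt(np.linalg.det(J.T @ J))
        thetas.append(th)
        weights.append(np.exp(prior_logpdf(th)) / max(vol, 1e-12))
    weights = np.array(weights)
    return np.array(thetas), weights / weights.sum()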
Automatic Variational Inference in Stan  [PDF]
Alp Kucukelbir,Rajesh Ranganath,Andrew Gelman,David M. Blei
Statistics , 2015,
Abstract: Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult to automate. We propose an automatic variational inference algorithm, automatic differentiation variational inference (ADVI). The user only provides a Bayesian model and a dataset; nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We implement ADVI in Stan (code available now), a probabilistic programming framework. We compare ADVI to MCMC sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With ADVI we can use variational inference on any model we write in Stan.
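ADVI is exposed through Stan's interfaces; below is a minimal usage sketch via cmdstanpy (the file names are placeholders, and the paper's own experiments used the Stan interfaces available at the time).

from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="model.stan")   # any Stan program (placeholder path)
fit = model.variational(data="data.json")      # run mean-field ADVI on the model
print(fit.variational_params_dict)             # fitted variational parameters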
Asymptotically Exact, Embarrassingly Parallel MCMC  [PDF]
Willie Neiswanger,Chong Wang,Eric Xing
Computer Science , 2013,
Abstract: Communication costs, resulting from synchronization requirements during learning, can greatly slow down many parallel machine learning algorithms. In this paper, we present a parallel Markov chain Monte Carlo (MCMC) algorithm in which subsets of data are processed independently, with very little communication. First, we arbitrarily partition data onto multiple machines. Then, on each machine, any classical MCMC method (e.g., Gibbs sampling) may be used to draw samples from a posterior distribution given the data subset. Finally, the samples from each machine are combined to form samples from the full posterior. This embarrassingly parallel algorithm allows each machine to act independently on a subset of the data (without communication) until the final combination stage. We prove that our algorithm generates asymptotically exact samples and empirically demonstrate its ability to parallelize burn-in and sampling in several models.
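A sketch of the final combination stage under the simplest (parametric, Gaussian) rule analysed in the paper: fit a Gaussian to each machine's subposterior draws and multiply the densities, which amounts to summing precisions and precision-weighted means.

import numpy as np

def combine_subposteriors_gaussian(sub_samples):
    # sub_samples: list of (n_draws, dim) arrays, one per machine, drawn
    # from subposteriors (likelihood of the data subset times prior^(1/M)).
    precisions, weighted_means = [], []
    for s in sub_samples:
        mu = s.mean(axis=0)
        prec = np.linalg.inv(np.cov(s, rowvar=False))
        precisions.append(prec)
        weighted_means.append(prec @ mu)
    cov = np.linalg.inv(sum(precisions))        # product of Gaussians: add precisions
    mean = cov @ sum(weighted_means)
    return mean, cov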
Variational inference for sparse spectrum Gaussian process regression  [PDF]
Linda S. L. Tan,Victor M. H. Ong,David J. Nott,Ajay Jasra
Statistics , 2013, DOI: 10.1007/s11222-015-9600-7
Abstract: We develop a fast variational approximation scheme for Gaussian process (GP) regression, where the spectrum of the covariance function is subjected to a sparse approximation. Our approach enables uncertainty in covariance function hyperparameters to be treated without using Monte Carlo methods and is robust to overfitting. Our article makes three contributions. First, we present a variational Bayes algorithm for fitting sparse spectrum GP regression models that uses nonconjugate variational message passing to derive fast and efficient updates. Second, we propose a novel adaptive neighbourhood technique for obtaining predictive inference that is effective in dealing with nonstationarity. Regression is performed locally at each point to be predicted, and the neighbourhood is determined using a measure based on the lengthscales estimated from an initial fit. By weighting dimensions according to their lengthscales, this measure downweights variables of little relevance, leading to automatic variable selection and improved prediction. Third, we introduce a technique for accelerating convergence in nonconjugate variational message passing by adapting step sizes in the direction of the natural gradient of the lower bound. Our adaptive strategy can be easily implemented and empirical results indicate significant speedups.
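For context, the sparse spectrum GP regression model being fitted reduces to Bayesian linear regression on trigonometric features defined by a set of spectral frequencies; the sketch below shows that reduction for fixed hyperparameters and frequencies (the paper's variational treatment of the hyperparameters, the adaptive neighbourhood scheme, and the natural-gradient acceleration are not reproduced).

import numpy as np

def ss_gp_posterior(X, y, W, sigma_f=1.0, sigma_n=0.1):
    # Sparse spectrum GP regression as Bayesian linear regression on the
    # features [cos(X W'), sin(X W')] * sigma_f / sqrt(m), with W the (m, d)
    # matrix of spectral frequencies and a N(0, I) prior on the 2m weights.
    m = W.shape[0]
    Z = X @ W.T
    Phi = np.hstack([np.cos(Z), np.sin(Z)]) * sigma_f / np.sqrt(m)
    A = Phi.T @ Phi / sigma_n**2 + np.eye(2 * m)   # posterior precision of the weights
    Sigma = np.linalg.inv(A)
    mu = Sigma @ (Phi.T @ y) / sigma_n**2
    return mu, Sigma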
A Tool for Programming Embarrassingly Task Parallel Applications on CoW and NoW  [PDF]
Patrizio Dazzi
Computer Science , 2013,
Abstract: Embarrassingly parallel problems can be split into parts that exchange very little (or sometimes no) information while being computed in parallel. As a consequence they can be computed effectively on commodity hardware, without particularly sophisticated interconnection networks: clusters, networks of workstations and desktops, as well as computational clouds. Despite the simplicity of this computational model, it can be exploited to compute a quite large range of problems. This paper describes JJPF, a tool for developing task parallel applications based on Java and Jini that has proved to be an effective and efficient solution in environments such as clusters and networks of workstations and desktops.
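JJPF itself is built on Java and Jini; the snippet below is only an illustration, in Python, of the task-farm pattern such tools implement: independent tasks handed to a pool of workers with no inter-task communication.

from concurrent.futures import ProcessPoolExecutor

def task_farm(work_items, worker_fn, max_workers=None):
    # Embarrassingly parallel task farm: each item is processed
    # independently; worker_fn must be a picklable, module-level function.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker_fn, work_items))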
Software Alchemy: Turning Complex Statistical Computations into Embarrassingly-Parallel Ones  [PDF]
Norman Matloff
Statistics , 2014,
Abstract: The growth in the use of computationally intensive statistical procedures, especially with Big Data, has necessitated the usage of parallel computation on diverse platforms such as multicore, GPU, clusters and clouds. However, slowdown due to interprocess communication costs typically limits such methods to "embarrassingly parallel" (EP) algorithms, especially on non-shared memory platforms. This paper develops a broadly-applicable method for converting many non-EP algorithms into statistically equivalent EP ones. The method is shown to yield excellent levels of speedup for a variety of statistical computations. It also overcomes certain problems of memory limitations.
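The core of the method (chunk averaging) can be sketched in a few lines, assuming a picklable estimator function that maps a data chunk to a vector of estimates; the statistical-equivalence argument in the paper justifies averaging the per-chunk results.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_average(estimator, data, n_chunks):
    # Split the data into disjoint chunks, run the estimator on each chunk
    # in parallel (embarrassingly parallel), and average the estimates.
    chunks = np.array_split(data, n_chunks)
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(estimator, chunks))
    return np.mean(estimates, axis=0)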
Stochastic variational inference for large-scale discrete choice models using adaptive batch sizes  [PDF]
Linda S. L. Tan
Statistics , 2014,
Abstract: Discrete choice models describe the choices made by decision makers among alternatives and play an important role in transportation planning, marketing research and other applications. The mixed multinomial logit (MMNL) model is a popular discrete choice model that captures heterogeneity in the preferences of decision makers through random coefficients. While Markov chain Monte Carlo methods provide the Bayesian analogue to classical procedures for estimating MMNL models, computations can be prohibitively expensive for large datasets. Approximate inference can be obtained using variational methods at a lower computational cost with competitive accuracy. In this paper, we develop variational methods for estimating MMNL models that allow random coefficients to be correlated in the posterior and can be extended easily to large-scale datasets. We explore three alternatives: (1) Laplace variational inference, (2) nonconjugate variational message passing and (3) stochastic linear regression. Their performances are compared using real and simulated data. To accelerate convergence for large datasets, we develop stochastic variational inference for MMNL models using each of the above alternatives. Stochastic variational inference allows data to be processed in minibatches by optimizing global variational parameters using stochastic gradient approximation. A novel strategy for increasing minibatch sizes adaptively within stochastic variational inference is proposed.
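A generic stochastic variational inference loop of the kind the abstract builds on is sketched below; grad_minibatch is a hypothetical function returning the gradient of the variational objective on a minibatch, and the paper's adaptive rule for growing the minibatch size is not reproduced.

import numpy as np

def svi(grad_minibatch, init_lambda, data, n_iters=1000, batch_size=32,
        kappa=0.7, tau=1.0):
    # Global variational parameters are updated with a noisy gradient
    # computed on a random minibatch, rescaled to the full data set,
    # using Robbins-Monro step sizes rho_t = (t + tau)^(-kappa).
    lam = np.asarray(init_lambda, dtype=float)
    n = len(data)
    for t in range(1, n_iters + 1):
        idx = np.random.choice(n, batch_size, replace=False)
        noisy_grad = (n / batch_size) * grad_minibatch(lam, data[idx])
        rho = (t + tau) ** (-kappa)
        lam = lam + rho * noisy_grad
    return lam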
Pretty Fast Analysis: An embarrassingly parallel algorithm for biological simulation analysis  [PDF]
David N. Lebard
Physics , 2008,
Abstract: A parallel code has been written in FORTRAN90, C, and MPI for the analysis of biological simulation data. Using a master/slave algorithm, the software operates on AMBER-generated trajectory data using either UNIX or MPI file IO, and it supports up to 15 simultaneous function calls. This software has been performance tested on the Ranger supercomputer on trajectory data of an aqueous bacterial reaction center micelle. Although the parallel file reading scales poorly, the analysis algorithm itself shows embarrassingly parallel speedup up to 1024 compute nodes. At this CPU count the overall scaling of the software compares well with NAMD's best reported speedup, and outperforms AMBER's best known scaling by a factor of 3, while using only a small number of function calls and a short trajectory length.
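The original code is FORTRAN90/C with MPI; the mpi4py sketch below only illustrates the underlying pattern of distributing trajectory frames across ranks and reducing the per-frame results (the frame count and the per-frame analysis are placeholders).

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_frames = 10000                                   # placeholder trajectory length
my_frames = range(rank, n_frames, size)            # round-robin frame assignment

def analyse(frame_index):
    return 0.0                                     # placeholder per-frame analysis

local = np.array([analyse(f) for f in my_frames])
local_stats = np.array([local.sum(), float(len(local))])
totals = np.zeros(2)
comm.Reduce(local_stats, totals, op=MPI.SUM, root=0)   # combine per-rank results
if rank == 0:
    print("mean over all frames:", totals[0] / totals[1])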
A stochastic variational framework for fitting and diagnosing generalized linear mixed models  [PDF]
Linda S. L. Tan,David J. Nott
Statistics , 2012, DOI: 10.1214/14-BA885
Abstract: In stochastic variational inference, the variational Bayes objective function is optimized using stochastic gradient approximation, where gradients computed on small random subsets of data are used to approximate the true gradient over the whole data set. This enables complex models to be fit to large data sets as data can be processed in mini-batches. In this article, we extend stochastic variational inference for conjugate-exponential models to nonconjugate models and present a stochastic nonconjugate variational message passing algorithm for fitting generalized linear mixed models that is scalable to large data sets. In addition, we show that diagnostics for prior-likelihood conflict, which are useful for Bayesian model criticism, can be obtained from nonconjugate variational message passing automatically, as an alternative to simulation-based Markov chain Monte Carlo methods. Finally, we demonstrate that for moderate-sized data sets, convergence can be accelerated by using the stochastic version of nonconjugate variational message passing in the initial stage of optimization before switching to the standard version.
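A schematic of the two-stage strategy mentioned at the end of the abstract, with grad_fn standing in as a hypothetical placeholder for the (nonconjugate variational message passing) update direction: stochastic minibatch updates first, then standard full-data updates once near the optimum.

import numpy as np

def two_stage_vi(grad_fn, init_lambda, data, warmup_iters=200, batch_size=64,
                 full_iters=100, kappa=0.7, tau=1.0, lr_full=1e-2):
    # Stage 1: stochastic updates on minibatches (cheap, fast early progress).
    # Stage 2: standard updates on the full data set (stable convergence).
    lam = np.asarray(init_lambda, dtype=float)
    n = len(data)
    for t in range(1, warmup_iters + 1):
        idx = np.random.choice(n, batch_size, replace=False)
        rho = (t + tau) ** (-kappa)
        lam = lam + rho * (n / batch_size) * grad_fn(lam, data[idx])
    for _ in range(full_iters):
        lam = lam + lr_full * grad_fn(lam, data)
    return lam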