Abstract:
We build on the work of Fackler and King (1990) and propose a more general calibration model for implied risk-neutral densities. Our model allows for the joint calibration of a set of densities at different maturities and dates through a Bayesian dynamic Beta Markov Random Field. Our approach allows for possible time dependence between densities with the same maturity, and for dependence across maturities at the same point in time. This approach to the problem encompasses model flexibility, parameter parsimony and, more importantly, information pooling across densities.

Abstract:
In this paper we study the asymptotic behavior of the Random-Walk Metropolis algorithm on probability densities with two different `scales', where most of the probability mass is distributed along certain key directions while the `orthogonal' directions contain relatively less mass. Such classes of probability measures arise in various applied contexts, including Bayesian inverse problems where the posterior measure concentrates on a sub-manifold as the noise variance goes to zero. When the target measure concentrates on a linear sub-manifold, we derive analytically a diffusion limit for the Random-Walk Metropolis Markov chain as the scale parameter goes to zero. In contrast to the existing works on scaling limits, our limiting Stochastic Differential Equation does not in general have a constant diffusion coefficient. Our results show that in some cases, the usual practice of adapting the step-size to control the acceptance probability might be sub-optimal as the optimal acceptance probability is zero (in the limit).
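The two-scale setting described in this abstract can be illustrated with a minimal Random-Walk Metropolis sketch (not the authors' code). The target below is an illustrative anisotropic Gaussian whose mass concentrates on the line x2 = 0 as the assumed scale parameter `eps` goes to zero; note how an isotropic step size tuned to the `key' direction yields a low acceptance rate:

```python
import numpy as np

def rwm(log_target, x0, step, n_steps, rng):
    """Random-Walk Metropolis with isotropic Gaussian proposals."""
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    accepted = 0
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_target(prop)
        # Metropolis accept/reject step
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
    return x, accepted / n_steps

# Two-scale target: most mass lies near the sub-manifold x2 = 0
eps = 0.05  # illustrative scale parameter; mass collapses onto the line as eps -> 0
log_target = lambda x: -0.5 * (x[0] ** 2 + (x[1] / eps) ** 2)

rng = np.random.default_rng(0)
_, acc = rwm(log_target, np.zeros(2), step=0.5, n_steps=5000, rng=rng)
```

With the step size matched to the wide direction, most proposals overshoot the narrow direction and are rejected, which is the tension the scaling analysis above makes precise.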

Abstract:
Bayesian networks provide a method of representing conditional independence between random variables and computing the probability distributions associated with these random variables. In this paper, we extend Bayesian network structures to compute probability density functions for continuous random variables. We make this extension by approximating prior and conditional densities using sums of weighted Gaussian distributions and then finding the propagation rules for updating the densities in terms of these weights. We present a simple example that illustrates the Bayesian network for continuous variables; this example shows the effect of the network structure and approximation errors on the computation of densities for variables in the network.
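The weighted-Gaussian-sum idea in this abstract can be sketched for a single continuous node with a Gaussian observation (a simplified stand-in for the paper's propagation rules; the prior components, observation, and noise variance below are illustrative assumptions). Each component is updated in closed form and its weight is rescaled by the marginal likelihood of the observation under that component:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gaussian_sum_update(weights, means, variances, y, obs_var):
    """Update a Gaussian-sum prior with a Gaussian observation y = x + noise."""
    new_w, new_m, new_v = [], [], []
    for w0, m0, v0 in zip(weights, means, variances):
        ml = gaussian_pdf(y, m0, v0 + obs_var)  # marginal likelihood of y
        k = v0 / (v0 + obs_var)                 # scalar Kalman gain
        new_m.append(m0 + k * (y - m0))         # posterior component mean
        new_v.append((1 - k) * v0)              # posterior component variance
        new_w.append(w0 * ml)                   # reweight by evidence
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_m), np.array(new_v)

# Illustrative two-component prior updated with observation y = 0.8
w, m, v = gaussian_sum_update(
    weights=[0.5, 0.5], means=[-1.0, 1.0], variances=[1.0, 1.0],
    y=0.8, obs_var=0.5)
```

The component nearer the observation gains weight, so the mixture adapts its shape as evidence propagates; the full network version applies such updates along the graph structure.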

Abstract:
We focus on the biological problem of tracking organelles as they move through cells. In the past, most intracellular movements were recorded manually; however, the results are too incomplete to capture the full complexity of organelle motions. An automated tracking algorithm promises to provide a complete analysis of noisy microscopy data. In this paper, we adopt statistical techniques from a Bayesian random set point of view. Instead of considering each individual organelle, we examine a random set whose members are the organelle states, and we establish a Bayesian filtering algorithm involving such set states. The propagated multi-object densities are approximated using a Gaussian mixture scheme. Our algorithm is applied to synthetic and experimental data.

Abstract:
A new method is presented for the analysis of small angle neutron scattering data from quasi-2D systems such as flux lattices, Skyrmion lattices, and aligned liquid crystals. A significant increase in signal-to-noise ratio, and a natural application of the Lorentz factor, can be achieved by taking advantage of the knowledge that all relevant scattering is centered on a plane in reciprocal space. The Bayesian form ensures that missing information is treated in a controlled way and can be subsequently included in the analysis. A simple algorithm based on Gaussian probability assumptions is presented which yields very satisfactory results. Finally, it is argued that a generalised model-independent Bayesian data analysis method would be highly advantageous for the processing of neutron and X-ray scattering data.

Abstract:
We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by the recent advances in sampling from von Mises-Fisher distributions.

Abstract:
In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We will show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

Abstract:
Frequentist-style large-sample properties of Bayesian posterior distributions, such as consistency and convergence rates, are important considerations in nonparametric problems. In this paper we give an analysis of Bayesian asymptotics based primarily on predictive densities. Our analysis is unified in the sense that essentially the same approach can be taken to develop convergence rate results in iid, mis-specified iid, independent non-iid, and dependent data cases.

Abstract:
After making some general remarks, I consider two examples that illustrate the use of Bayesian Probability Theory. The first is a simple one, the physicist's favorite "toy," that provides a forum for a discussion of the key conceptual issue of Bayesian analysis: the assignment of prior probabilities. The other example illustrates the use of Bayesian ideas in the real world of experimental physics.

Abstract:
Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified scaling it to high dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this problem. The resulting Random EMbedding Bayesian Optimization (REMBO) algorithm is very simple, has important invariance properties, and applies to domains with both categorical and continuous variables. We present a thorough theoretical analysis of REMBO, including regret bounds that only depend on the problem's intrinsic dimensionality. Empirical results confirm that REMBO can effectively solve problems with billions of dimensions, provided the intrinsic dimensionality is low. They also show that REMBO achieves state-of-the-art performance in optimizing the 47 discrete parameters of a popular mixed integer linear programming solver.
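The random embedding idea described in this abstract can be sketched as follows. This is not the REMBO implementation: the actual algorithm runs Gaussian-process Bayesian optimization in the low-dimensional space, for which plain random search stands in here, and the toy objective, box bounds, and dimensions are illustrative assumptions. The key mechanism shown is searching a random d-dimensional subspace x = A y of a D-dimensional box:

```python
import numpy as np

def random_embedding_search(f, D, d, n_iters, rng, y_box=2.0):
    """Minimize f on [-1, 1]^D by searching a random d-dim embedding x = A @ y.
    Random search over y stands in for REMBO's Gaussian-process optimizer."""
    A = rng.standard_normal((D, d))     # random embedding matrix
    best_y, best_val = None, np.inf
    for _ in range(n_iters):
        y = rng.uniform(-y_box, y_box, size=d)
        x = np.clip(A @ y, -1.0, 1.0)   # project back into the feasible box
        val = f(x)
        if val < best_val:
            best_val, best_y = val, y
    return np.clip(A @ best_y, -1.0, 1.0), best_val

# Toy objective with low intrinsic dimensionality: depends only on x[0], x[1]
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
rng = np.random.default_rng(1)
x_best, v = random_embedding_search(f, D=1000, d=2, n_iters=500, rng=rng)
```

Because the objective has low intrinsic dimensionality, searching only the 2-dimensional embedded subspace suffices even though the ambient space has 1000 dimensions, which is the intuition behind the regret bounds mentioned above.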