Abstract:
Let $\mathbf{X}=\{X_t, t=1,2,\dots\}$ be a stationary Gaussian random process with mean $EX_t=\mu$ and covariance function $\gamma(\tau)=E(X_t-\mu)(X_{t+\tau}-\mu)$. Let $f(\lambda)$ be the corresponding spectral density; a stationary Gaussian process is said to be long-range dependent if the spectral density $f(\lambda)$ can be written as the product of a slowly varying function $\tilde{f}(\lambda)$ and the quantity $\lambda^{-2d}$. In this paper we propose a novel Bayesian nonparametric approach to the estimation of the spectral density of $\mathbf{X}$. We prove that, under some specific assumptions on the prior distribution, our approach assures posterior consistency both when $f(\cdot)$ and when $d$ is the object of interest. The rate of convergence of the posterior sequence depends in a significant way on the structure of the prior; we provide some general results and also consider the fractionally exponential (FEXP) family of priors. Since the Whittle approximation to the likelihood function lacks a well-founded justification in the long-memory setup, we avoid it and work instead with the true Gaussian likelihood.
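As a point of reference for the spectral form described above, the following minimal sketch evaluates a FEXP-type spectral density: a short-memory log-cosine factor multiplied by the long-memory factor $|2\sin(\lambda/2)|^{-2d}$ (the standard FEXP parametrization, which behaves like $\lambda^{-2d}$ near the origin). The function name and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def fexp_spectral_density(lam, d, theta):
    """FEXP-type spectral density: long-memory factor |2 sin(lam/2)|^(-2d)
    times a short-memory factor exp(sum_j theta_j cos(j*lam)).
    `d` and `theta` here are illustrative values, not estimates."""
    k = np.arange(1, len(theta) + 1)
    short_memory = np.exp(np.cos(np.outer(lam, k)) @ theta)
    long_memory = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)
    return long_memory * short_memory

# Evaluate on a grid away from the pole at lambda = 0
lam = np.linspace(0.01, np.pi, 500)
f = fexp_spectral_density(lam, d=0.3, theta=np.array([0.5, -0.2]))
```

With $0 < d < 1/2$ the density diverges as $\lambda \to 0$, which is the long-memory signature the abstract refers to.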

Abstract:
We propose and illustrate a hierarchical Bayesian approach for matching statistical records observed on different occasions. We show how this model can be profitably adopted both in record linkage problems and in capture--recapture setups, where the size of a finite population is the real object of interest. There are at least two important differences between the proposed model-based approach and the current practice in record linkage. First, the statistical model is built up on the actually observed categorical variables and no reduction (to 0--1 comparisons) of the available information takes place. Second, the hierarchical structure of the model allows a two-way propagation of the uncertainty between the parameter estimation step and the matching procedure so that no plug-in estimates are used and the correct uncertainty is accounted for both in estimating the population size and in performing the record linkage. We illustrate and motivate our proposal through a real data example and simulations.

Abstract:
Frequentist and likelihood methods of inference based on the multivariate skew-normal model encounter several technical difficulties. In spite of the popularity of this class of densities, there are no broadly satisfactory solutions for estimation and testing problems. A general population Monte Carlo algorithm is proposed which: 1) exploits the latent-structure stochastic representation of skew-normal random variables to provide a full Bayesian analysis of the model and 2) accounts for the presence of constraints in the parameter space. The proposed approach can be described as weakly informative, since the prior distribution approximates the actual reference prior for the shape parameter vector. Results are compared with the existing classical solutions, and the practical implementation of the algorithm is illustrated via a simulation study and a real-data example. A generalization to the matrix-variate regression model with skew-normal errors is also presented.
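The latent-structure stochastic representation mentioned in 1) can be illustrated, in the scalar case, by the textbook construction $X=\delta|Z_0|+\sqrt{1-\delta^2}\,Z_1$ with $\delta=\lambda/\sqrt{1+\lambda^2}$ and independent standard normals $Z_0, Z_1$. The sketch below (function name and parameter values are ours, not the paper's) samples from a skew-normal distribution this way:

```python
import numpy as np

def rskewnorm(n, shape, rng=None):
    """Draw n samples from the scalar skew-normal SN(0, 1, shape) via
    the latent-variable representation X = delta*|Z0| + sqrt(1-delta^2)*Z1,
    where Z0 is the latent (half-normal) component."""
    rng = np.random.default_rng(rng)
    delta = shape / np.sqrt(1.0 + shape**2)
    z0 = np.abs(rng.standard_normal(n))   # latent half-normal component
    z1 = rng.standard_normal(n)
    return delta * z0 + np.sqrt(1.0 - delta**2) * z1

x = rskewnorm(100_000, shape=5.0, rng=0)
```

Conditioning on the latent $|Z_0|$ restores a Gaussian structure, which is what makes a full Bayesian (data-augmentation style) analysis tractable.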

Abstract:
We describe a simple method for making inference on a functional of a multivariate distribution. The method relies on a copula representation of the multivariate distribution and on the properties of an approximate Bayesian Monte Carlo algorithm, in which the proposed values of the functional of interest are weighted in terms of their empirical likelihood. This method is particularly useful when the "true" likelihood function associated with the working model is too costly to evaluate or when the working model is only partially specified.

Abstract:
We propose a novel use of a recent computational tool for Bayesian inference, namely the Approximate Bayesian Computation (ABC) methodology. ABC is a way to handle models for which the likelihood function may be intractable, unavailable, or too costly to evaluate; in particular, we consider the problem of eliminating the nuisance parameters from a complex statistical model in order to produce a likelihood function depending on the quantity of interest only. Given a proper prior for the entire parameter vector, we propose to approximate the integrated likelihood by the ratio of kernel estimators of the marginal posterior and the marginal prior for the quantity of interest. We present several examples.
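The ratio-of-kernel-estimators idea can be sketched on a toy model (all modeling choices below are ours, for illustration only): data $y_i \sim N(\psi, \sigma^2)$ with $\psi$ the quantity of interest and $\sigma$ a nuisance parameter. Given a joint posterior sample for $(\psi,\sigma)$, the integrated likelihood for $\psi$ is approximated, up to a constant, by the marginal posterior KDE divided by the marginal prior KDE:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, size=30)               # toy data

# Proper joint prior (illustrative): psi ~ N(0, 5^2), sigma ~ 0.1 + Exp(1)
n_draws = 20_000
psi0 = rng.normal(0.0, 5.0, size=n_draws)
sig0 = rng.exponential(1.0, size=n_draws) + 0.1

# A crude importance resampling step yields an approximate posterior sample
loglik = np.sum(-0.5 * ((y[:, None] - psi0) / sig0) ** 2 - np.log(sig0), axis=0)
w = np.exp(loglik - loglik.max())
idx = rng.choice(n_draws, size=5_000, p=w / w.sum())
psi_post = psi0[idx]

def kde(sample, grid, bw):
    """Simple Gaussian kernel density estimator."""
    z = (grid[:, None] - sample[None, :]) / bw
    return np.exp(-0.5 * z**2).mean(axis=1) / (bw * np.sqrt(2.0 * np.pi))

# Integrated likelihood for psi, up to a constant: posterior KDE / prior KDE
grid = np.linspace(0.0, 4.0, 200)
L_hat = kde(psi_post, grid, bw=0.2) / kde(psi0, grid, bw=0.5)
```

In this conjugate-free sketch the estimated integrated likelihood peaks near the sample mean, as one would expect; any posterior sampler (MCMC, ABC) could replace the resampling step.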

Abstract:
Gaussian time-series models are often specified through their spectral density. Such models present several computational challenges, in particular because of the non-sparse nature of the covariance matrix. We derive a fast approximation of the likelihood for such models. We propose to sample from the approximate posterior (that is, the prior times the approximate likelihood), and then to recover the exact posterior through importance sampling. We show that the variance of the importance sampling weights vanishes as the sample size goes to infinity. We explain why the approximate posterior may be multimodal, and we derive a Sequential Monte Carlo sampler based on an annealing sequence in order to sample from that target distribution. The performance of the overall approach is evaluated on simulated and real datasets. In addition, for one real-world dataset, we provide numerical evidence that a Bayesian approach to semi-parametric estimation of the spectral density may provide more reasonable results than its frequentist counterparts.
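The approximate-then-correct step can be sketched generically: draw from the approximate posterior, reweight each draw by the ratio of exact to approximate (unnormalized) log densities, and resample. The interface below is ours; the toy densities stand in for the exact and approximate posteriors of the paper:

```python
import numpy as np

def importance_correct(theta, log_post_exact, log_post_approx, rng=None):
    """Reweight draws `theta` from the approximate posterior by the
    exact/approximate density ratio, then resample with replacement so
    the output targets the exact posterior. Both log-density callables
    may be known only up to additive constants."""
    rng = np.random.default_rng(rng)
    logw = log_post_exact(theta) - log_post_approx(theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(theta.shape[0], size=theta.shape[0], p=w)
    return theta[idx], w

# Toy check: approximate posterior N(0, 1.2^2), exact posterior N(0.3, 1)
rng = np.random.default_rng(0)
draws = rng.normal(0.0, 1.2, size=100_000)
log_exact = lambda t: -0.5 * (t - 0.3) ** 2
log_approx = lambda t: -0.5 * (t / 1.2) ** 2
resampled, w = importance_correct(draws, log_exact, log_approx, rng=1)
```

The abstract's asymptotic result says that, as the sample size grows, the weights `w` become nearly uniform, so the correction step costs almost nothing in effective sample size.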

Abstract:
A stationary Gaussian process is said to be long-range dependent (resp., anti-persistent) if its spectral density $f(\lambda)$ can be written as $f(\lambda)=|\lambda|^{-2d}g(|\lambda|)$, where $0<d<1/2$ (resp., $-1/2<d<0$) and $g$ is slowly varying at the origin.

Abstract:
A 50:50 blend of polystyrene (PS) and poly(n-butyl methacrylate) (PnBMA) has been characterized with an Atomic Force Microscope (AFM) in Tapping Mode and with force-distance curves. The polymer solution was spin-coated onto a glass slide. PnBMA forms a uniform film on the glass substrate with a thickness of about 200 nm. On top of it, PS forms a film approximately 100 nm thick. The PS film undergoes dewetting, leading to the formation of holes surrounded by rims about 2 μm wide. In those regions of the sample where the distance between the holes is larger than about 4 μm, shallow depressions in the PS film can be observed. Topography, dissipated energy, adhesion, stiffness and elastic modulus have been measured on these three regions (PnBMA, PS in the rims and PS in the depressions). The two polymers can be distinguished in all images, since PnBMA has a higher adhesion and a smaller stiffness than PS, and hence a higher dissipated energy. Moreover, the polystyrene in the depressions shows a very high adhesion (approximately as high as that of PnBMA), and its stiffness is intermediate between that of PnBMA and that of PS in the rims. This is attributed to the higher mobility of the PS chains in the depressions, which are precursors of new holes.

Abstract:
We study the Jeffreys prior of the skewness parameter of a general class of scalar skew--symmetric models. It is shown that this prior is symmetric about 0, proper, and with tails $O(\lambda^{-3/2})$ under mild regularity conditions. We also calculate the independence Jeffreys prior for the case with unknown location and scale parameters. Sufficient conditions for the existence of the corresponding posterior distribution are investigated for the case when the sampling model belongs to the family of skew--symmetric scale mixtures of normal distributions. The usefulness of these results is illustrated using the skew--logistic model and two applications with real data.