Abstract:
Implicit particle filtering is a sequential Monte Carlo method for data assimilation, designed to keep the number of particles manageable by focusing attention on regions of large probability. These regions are found by minimizing, for each particle, a scalar function F of the state variables. Some previous implementations of the implicit filter rely on finding the Hessians of these functions. The calculation of the Hessians can be cumbersome if the state dimension is large or if the underlying physics are such that derivatives of F are difficult to calculate, as happens in many geophysical applications, in particular in models with partial noise, i.e. with a singular state covariance matrix. Examples of models with partial noise include models in which uncertain dynamic equations are supplemented by conservation laws with zero uncertainty, higher-order (in time) stochastic partial differential equations (PDEs), and PDEs driven by spatially smooth noise processes. We make the implicit particle filter applicable to such situations by combining gradient descent minimization with random maps and show that the filter is efficient, accurate and reliable because it operates in a subspace of the state space. As an example, we consider a system of nonlinear stochastic PDEs that is of importance in geomagnetic data assimilation.

Abstract:
The applicability and usefulness of implicit sampling, a recently developed variationally enhanced sampling method, in stochastic optimal control, stochastic localization, and simultaneous localization and mapping (SLAM) are explored. The theory is illustrated with examples, and implicit sampling is found to be significantly more efficient than current Monte Carlo methods in test problems for all three applications.

Abstract:
The ensemble Kalman filter (EnKF) is widely used to sample a probability density function (pdf) generated by a stochastic model conditioned by noisy data. This pdf can be either a joint posterior that describes the evolution of the state of the system in time, conditioned on all the data up to the present, or a particular marginal of this posterior. We show that the EnKF collapses in the same way as, and under even broader conditions than, a particle filter when it samples the joint posterior. However, this does not imply that the EnKF collapses when it samples the marginal posterior. We show that a localized and inflated EnKF can efficiently sample this marginal, and argue that the marginal posterior is often the more useful pdf in geophysics. This explains the wide applicability of the EnKF in this field. We further investigate the typical tuning of the EnKF, in which one attempts to match the mean square error (MSE) to the marginal posterior variance, and show that the sampling error may be huge even if the MSE is moderate.

Abstract:
Implicit particle filtering is a sequential Monte Carlo method for data assimilation, designed to keep the number of particles manageable by focusing attention on regions of large probability. These regions are found by minimizing, for each particle, a scalar function F of the state variables. Some previous implementations of the implicit filter rely on finding the Hessians of these functions. The calculation of the Hessians can be cumbersome if the state dimension is large or if the underlying physics are such that derivatives of F are difficult to calculate. This is the case in many geophysical applications, in particular for models with partial noise, i.e. with a singular state covariance matrix. Examples of models with partial noise include stochastic partial differential equations driven by spatially smooth noise processes and models for which uncertain dynamic equations are supplemented by conservation laws with zero uncertainty. We make the implicit particle filter applicable to such situations by combining gradient descent minimization with random maps and show that the filter is efficient, accurate and reliable because it operates in a subspace whose dimension is smaller than the state dimension. As an example, we assimilate data for a system of nonlinear partial differential equations that appears in models of geomagnetism.

Abstract:
We show, using idealized models, that numerical data assimilation can be successful only if an effective dimension of the problem is not excessive. This effective dimension depends on the noise in the model and the data, and in physically reasonable problems it can be moderate even when the number of variables is huge. We then analyze several data assimilation algorithms, including particle filters and variational methods. We show that well-designed particle filters can solve most of those data assimilation problems that can be solved in principle, and compare the conditions under which variational methods can succeed to the conditions required of particle filters. We also discuss the limitations of our analysis.
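The role of the effective dimension can be illustrated with a minimal numerical sketch (an assumption-laden toy, not the paper's setup): importance weights for a Gaussian forecast ensemble collapse when every state variable is equally uncertain, but stay balanced when, as in physically reasonable problems, the uncertainty is concentrated in a few modes. The observation y = 0 with unit noise and the variance decay j^{-2} are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_ess(variances, n_particles=100, trials=50):
    """Mean effective sample size 1/sum(w_i^2) of normalized likelihood weights.

    A value near n_particles means no collapse; a value near 1 means collapse.
    """
    d = len(variances)
    out = []
    for _ in range(trials):
        # forecast ensemble: independent Gaussian components with given variances
        x = rng.standard_normal((n_particles, d)) * np.sqrt(variances)
        logw = -0.5 * np.sum(x**2, axis=1)   # log-likelihood of observing y = 0, unit obs noise
        logw -= logw.max()                   # stabilize before exponentiating
        w = np.exp(logw)
        w /= w.sum()
        out.append(1.0 / np.sum(w**2))
    return float(np.mean(out))

d = 1000
flat = np.ones(d)                      # every variable equally uncertain: huge effective dimension
decay = 1.0 / np.arange(1, d + 1)**2   # uncertainty concentrated in a few modes: moderate effective dimension

ess_flat = mean_ess(flat)    # collapses toward 1
ess_decay = mean_ess(decay)  # remains a large fraction of n_particles
```

The raw dimension is 1000 in both cases; only the decaying-variance ensemble, whose effective dimension is moderate, avoids weight collapse.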

Abstract:
The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to the high-probability regions via a sequence of steps that includes minimizations. We present a new and more general derivation of this approach and extend the method to particle smoothing as well as to data assimilation for perfect models. We show that the minimizations required by implicit particle methods are similar to the ones one encounters in variational data assimilation and explore the connection of implicit particle methods with variational data assimilation. In particular, we argue that existing variational codes can be converted into implicit particle methods at a low cost, often yielding better estimates that are also equipped with quantitative measures of the uncertainty. A detailed example is presented.

Abstract:
Implicit particle filters for data assimilation update the particles by first choosing probabilities and then looking for particle locations that assume them, guiding the particles one by one to the high probability domain. We provide a detailed description of these filters, with illustrative examples, together with new, more general, methods for solving the algebraic equations and with a new algorithm for parameter identification.
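The update "choose a probability, then find a particle location that assumes it" can be sketched as follows. This is a hedged illustration, not the paper's implementation: F is a hypothetical non-Gaussian function whose minimizer is known analytically, a simple bisection stands in for the more general equation solvers, and the Jacobian factor that the full method attaches to the particle weights is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    # hypothetical negative log-posterior of one particle (non-Gaussian example);
    # its minimizer is x = 0 with minimum value 0
    return 0.5 * np.sum(x**2) + 0.1 * np.sum(x**4)

d = 2
mu = np.zeros(d)   # argmin of F (known analytically for this toy F)
phi = 0.0          # min of F

def implicit_sample():
    """Map a reference Gaussian sample to a location with the chosen probability."""
    xi = rng.standard_normal(d)
    rho = 0.5 * xi @ xi                  # chosen value: we require F(x) - phi = rho

    def g(lam):                          # algebraic equation along the random ray
        return F(mu + lam * xi) - phi - rho

    lo, hi = 0.0, 10.0                   # g(lo) <= 0 < g(hi) for this F
    for _ in range(60):                  # bisection for the scalar lambda
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    # note: a full filter would also attach a weight involving the Jacobian
    # of the map xi -> x, omitted in this sketch
    return mu + 0.5 * (lo + hi) * xi

samples = np.array([implicit_sample() for _ in range(500)])
```

Each sample lands where F exceeds its minimum by exactly the value drawn from the reference Gaussian, which is how the particles are steered one by one into the high-probability domain.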

Abstract:
A particle filter "collapses", i.e.~only one of its particles is significant, if the variance of its weights is large. The ensemble Kalman filter (EnKF) can be interpreted as a particle filter and, therefore, can also collapse. We point out that EnKF collapses in the same way as any other particle filter, i.e., using EnKF as a proposal density does not solve the degeneracy problem in particle filtering. We investigate the practical implications of the collapse of EnKF and show that the mean square error (MSE) of a localized EnKF can be small even when it collapses. We explain these seemingly contradictory results. The collapse of EnKF results from a global assessment of the EnKF approximation of the posterior distribution, which may not be relevant in problems where error and forecast scores are assessed locally rather than globally. Thus, the collapse of EnKF may have no significant practical consequences and may even be happening on a regular basis in geophysical applications. However, the collapse is avoided if the weight calculation is local, which suggests that a major obstacle to the practical application of particle filters may be the determination of localized weights, akin to the particle filter interpretation of the localized EnKF.
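The collapse referred to above can be demonstrated with a minimal sketch (an illustrative toy, not the paper's setup: a standard Gaussian forecast ensemble and an observation y = 0 with unit noise). As the state dimension grows, the largest normalized weight approaches one, i.e. a single particle dominates.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_max_weight(d, n_particles=100, trials=50):
    """Average largest normalized weight; a value near 1 signals collapse."""
    out = []
    for _ in range(trials):
        x = rng.standard_normal((n_particles, d))  # forecast ensemble, N(0, I_d)
        logw = -0.5 * np.sum(x**2, axis=1)         # log-likelihood of y = 0, unit obs noise
        logw -= logw.max()                         # stabilize before exponentiating
        w = np.exp(logw)
        w /= w.sum()
        out.append(w.max())
    return float(np.mean(out))

w_low = mean_max_weight(1)      # low dimension: weights spread over many particles
w_high = mean_max_weight(200)   # high dimension: one particle carries almost all weight
```

The variance of the log-weights grows linearly with the dimension here, so the gap between the best particle and the rest widens until only one particle is significant.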

Abstract:
Implicit samplers are algorithms for producing independent, weighted samples from multi-variate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved (implicit) sampling schemes at a relatively small additional cost. Computational experiments confirm the theory and show that symmetrization is effective for small noise sampling problems.

Abstract:
Implicit sampling is a weighted sampling method that is used in data assimilation, where one sequentially updates estimates of the state of a stochastic model based on a stream of noisy or incomplete data. Here we describe how to use implicit sampling in parameter estimation problems, where the goal is to find parameters of a numerical model, e.g.~a partial differential equation (PDE), such that the output of the numerical model is compatible with (noisy) data. We use the Bayesian approach to parameter estimation, in which a posterior probability density describes the probability of the parameter conditioned on the data, and compute an empirical estimate of this posterior with implicit sampling. Our approach generates independent samples, so that some of the practical difficulties one encounters with Markov Chain Monte Carlo methods, e.g.~burn-in time or correlations among dependent samples, are avoided. We describe a new implementation of implicit sampling for parameter estimation problems that makes use of multiple grids (coarse to fine) and BFGS optimization coupled to adjoint equations for the required gradient calculations. The implementation is "dimension independent", in the sense that a well-defined finite dimensional subspace is sampled as the mesh used for discretization of the PDE is refined. We illustrate the algorithm with an example where we estimate a diffusion coefficient in an elliptic equation from sparse and noisy pressure measurements. In the example, dimension\slash mesh-independence is achieved via Karhunen-Lo\`{e}ve expansions.
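For a sense of the mechanics, here is a hedged sketch of a linear-map variant of implicit sampling for a hypothetical scalar parameter estimation problem. The model y = exp(theta) + noise, the grid search for the MAP point, and the finite-difference curvature are all illustrative stand-ins for the multiple-grid BFGS/adjoint machinery described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy inverse problem (illustrative assumption): data y_i = exp(theta) + noise,
# Gaussian observation noise with std 0.1, standard Gaussian prior on theta
theta_true = 0.5
y = np.exp(theta_true) + 0.1 * rng.standard_normal(20)

def F(theta):
    # negative log-posterior (up to an additive constant): misfit + prior
    return 0.5 * np.sum((y - np.exp(theta))**2) / 0.1**2 + 0.5 * theta**2

# MAP point by a crude grid search (a stand-in for BFGS with adjoint gradients)
grid = np.linspace(-2.0, 2.0, 20001)
Fg = np.array([F(t) for t in grid])
i = Fg.argmin()
mu, phi = grid[i], Fg[i]

# curvature at the minimum by central finite differences -> Gaussian proposal width
h = 1e-3
H = (F(mu + h) - 2.0 * phi + F(mu - h)) / h**2
sigma = 1.0 / np.sqrt(H)

# linear map: theta = mu + sigma * xi with xi ~ N(0, 1); importance weights
# correct for the mismatch between the Gaussian proposal and the true posterior
xi = rng.standard_normal(2000)
theta = mu + sigma * xi
logw = -(np.array([F(t) for t in theta]) - phi) + 0.5 * xi**2
w = np.exp(logw - logw.max())
w /= w.sum()

posterior_mean = float(np.sum(w * theta))  # weighted empirical posterior estimate
```

Because the samples are drawn independently from the Gaussian proposal centered at the MAP point, there is no burn-in and no correlation between samples; the weights account for the non-Gaussian part of the posterior.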