Abstract:
In optimal prediction methods one estimates the future behavior of underresolved systems by solving reduced systems of equations for expectations conditioned by partial data; renormalization group methods reduce the number of variables in complex systems through integration of unwanted scales. We establish the relation between these methods for systems in thermal equilibrium, and use this relation to find renormalization parameter flows and the coefficients in reduced systems by expanding conditional expectations in series and evaluating the coefficients by Monte Carlo sampling. We illustrate the construction by finding parameter flows for simple spin systems and then using the renormalized (=reduced) systems to calculate the critical temperature and the magnetization.

Abstract:
Implicit particle filtering is a sequential Monte Carlo method for data assimilation, designed to keep the number of particles manageable by focusing attention on regions of large probability. These regions are found by minimizing, for each particle, a scalar function F of the state variables. Some previous implementations of the implicit filter rely on finding the Hessians of these functions. The calculation of the Hessians can be cumbersome if the state dimension is large or if the underlying physics are such that derivatives of F are difficult to calculate. This is the case in many geophysical applications, in particular for models with partial noise, i.e. with a singular state covariance matrix. Examples of models with partial noise include stochastic partial differential equations driven by spatially smooth noise processes and models for which uncertain dynamic equations are supplemented by conservation laws with zero uncertainty. We make the implicit particle filter applicable to such situations by combining gradient descent minimization with random maps and show that the filter is efficient, accurate and reliable because it operates in a subspace whose dimension is smaller than the state dimension. As an example, we assimilate data for a system of nonlinear partial differential equations that appears in models of geomagnetism.
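The Hessian-free step described above can be sketched as follows: minimize F by gradient descent to find its minimizer mu and minimum phi, then solve F(mu + lam*eta) = phi + 0.5*|xi|^2 along a random direction eta by bisection rather than with second derivatives. The toy F below, and all names and step sizes, are illustrative stand-ins, not the paper's geophysical model:

```python
import numpy as np

def implicit_sample(F, gradF, dim, rng, steps=500, lr=0.05):
    """One implicit-sampling draw without Hessians: gradient-descent
    minimization of F, then a random map solved by bisection."""
    x = np.zeros(dim)
    for _ in range(steps):            # gradient-descent minimization
        x = x - lr * gradF(x)
    mu, phi = x, F(x)
    xi = rng.standard_normal(dim)     # reference Gaussian sample
    rho = 0.5 * xi @ xi
    eta = xi / np.linalg.norm(xi)     # random search direction
    lo, hi = 0.0, 1.0                 # bracket the root, then bisect
    while F(mu + hi * eta) < phi + rho:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mu + mid * eta) < phi + rho else (lo, mid)
    return mu + 0.5 * (lo + hi) * eta, mu, phi

# Toy convex, non-Gaussian F standing in for the filter's function.
b = np.ones(3)
F = lambda x: 0.5 * (x - b) @ (x - b) + 0.1 * np.sum((x - b) ** 4)
gradF = lambda x: (x - b) + 0.4 * (x - b) ** 3
sample, mu, phi = implicit_sample(F, gradF, 3, np.random.default_rng(0))
```

Each draw lands on the level set F = phi + 0.5*|xi|^2, which is how the particles are steered to the high-probability region.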

Abstract:
Methods for the reduction of the complexity of computational problems are presented, as well as their connections to renormalization, scaling, and irreversible statistical mechanics. Several statistically stationary cases are analyzed; for time dependent problems averaging usually fails, and averaged equations must be augmented by appropriate memory and random forcing terms. Approximations are described and examples are given.

Abstract:
We show how to use numerical methods within the framework of successive scaling to analyse the microstructure of turbulence, in particular to find inertial range exponents and structure functions. The methods are first calibrated on the Burgers problem and are then applied to the 3D Euler equations. Known properties of low order structure functions appear with a relatively small computational outlay; however, more sensitive properties cannot yet be resolved with this approach well enough to settle ongoing controversies.
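The structure functions mentioned above are straightforward to estimate from a sampled one-dimensional field; the sketch below uses a Brownian-like synthetic signal (for which the second-order structure function grows linearly in the separation) purely as a stand-in for a velocity snapshot, with all names illustrative:

```python
import numpy as np

def structure_functions(u, orders, max_sep):
    """Estimate S_p(r) = <|u(x+r) - u(x)|^p> for a sampled 1D field,
    the basic diagnostic behind inertial-range exponents."""
    seps = np.arange(1, max_sep + 1)
    S = np.empty((len(orders), len(seps)))
    for j, r in enumerate(seps):
        du = np.abs(u[r:] - u[:-r])       # increments at separation r
        for i, p in enumerate(orders):
            S[i, j] = np.mean(du ** p)
    return seps, S

# Brownian-like path: independent increments, so S_2(r) ~ r.
rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(4096))
r, S = structure_functions(u, orders=[1, 2, 3], max_sep=64)
slope2 = np.polyfit(np.log(r), np.log(S[1]), 1)[0]  # log-log slope, ~1
```

The inertial-range exponents are exactly such log-log slopes; the difficulty noted in the abstract is that high-order slopes demand far more samples than low-order ones.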

Abstract:
Particle filters for data assimilation in nonlinear problems use "particles" (replicas of the underlying system) to generate a sequence of probability density functions (pdfs) through a Bayesian process. This can be expensive because a significant number of particles have to be used to maintain accuracy. We offer here an alternative, in which the relevant pdfs are sampled directly by an iteration. An example is discussed in detail.

Abstract:
We present a general form of the iteration and interpolation process used in implicit particle filters. Implicit filters are based on a pseudo-Gaussian representation of posterior densities, and are designed to focus the particle paths so as to reduce the number of particles needed in nonlinear data assimilation. Examples are given.

Abstract:
Many physical systems are described by nonlinear differential equations that are too complicated to solve in full. A natural way to proceed is to divide the variables into those that are of direct interest and those that are not, formulate solvable approximate equations for the variables of greater interest, and use data and statistical methods to account for the impact of the other variables. In the present paper the problem is considered in a fully discrete-time setting, which simplifies both the analysis of the data and the numerical algorithms. The resulting time series are identified by a NARMAX (nonlinear autoregressive moving average with exogenous input) representation familiar from engineering practice. The connections with the Mori-Zwanzig formalism of statistical physics are discussed, as well as an application to the Lorenz 96 system.
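A stripped-down instance of the identification step: fit a nonlinear autoregression z_{n+1} = a0 + a1*z_n + a2*z_n^2 to a noisy scalar time series by least squares. The generating system below (a noisy logistic map) is purely illustrative, and the paper's NARMAX models additionally carry moving-average noise terms, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(2)
z = np.empty(2000)
z[0] = 0.3
for n in range(1999):
    # Logistic map plus small dynamical noise (illustrative system).
    z[n + 1] = 3.9 * z[n] * (1.0 - z[n]) + 0.001 * rng.standard_normal()

# Design matrix of polynomial regressors in the lagged state.
X = np.column_stack([np.ones(1999), z[:-1], z[:-1] ** 2])
a, *_ = np.linalg.lstsq(X, z[1:], rcond=None)
# a should be close to the generating coefficients (0, 3.9, -3.9).
```

In the discrete-time setting of the paper, terms of this kind approximate the Markovian part of the Mori-Zwanzig decomposition, while the noise terms account for memory and fluctuation effects.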

Abstract:
We show, using idealized models, that numerical data assimilation can be successful only if an effective dimension of the problem is not excessive. This effective dimension depends on the noise in the model and the data, and in physically reasonable problems it can be moderate even when the number of variables is huge. We then analyze several data assimilation algorithms, including particle filters and variational methods. We show that well-designed particle filters can solve most of those data assimilation problems that can be solved in principle, and compare the conditions under which variational methods can succeed to the conditions required of particle filters. We also discuss the limitations of our analysis.

Abstract:
The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to the high-probability regions via a sequence of steps that includes minimizations. We present a new and more general derivation of this approach and extend the method to particle smoothing as well as to data assimilation for perfect models. We show that the minimizations required by implicit particle methods are similar to the ones one encounters in variational data assimilation and explore the connection of implicit particle methods with variational data assimilation. In particular, we argue that existing variational codes can be converted into implicit particle methods at a low cost, often yielding better estimates that are also equipped with quantitative measures of the uncertainty. A detailed example is presented.
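The connection can be made concrete: the function minimized in variational assimilation, a background term plus an observation term, is the same function the implicit particle method needs to minimize. A minimal sketch, with all matrices and dimensions as illustrative stand-ins; for a linear observation operator the minimizer coincides with the Kalman analysis, which gives a check:

```python
import numpy as np
from scipy.optimize import minimize

B = np.diag([1.0, 2.0])          # background covariance (illustrative)
R = np.diag([0.5])               # observation covariance
H = np.array([[1.0, 1.0]])       # linear observation operator
xb = np.array([0.0, 0.0])        # background state
y = np.array([1.0])              # observation

def J(x):
    """3D-Var-style cost: this is the F of the implicit filter."""
    dx, dy = x - xb, y - H @ x
    return 0.5 * dx @ np.linalg.solve(B, dx) + 0.5 * dy @ np.linalg.solve(R, dy)

res = minimize(J, xb, method="BFGS")
# For linear H the minimizer equals the Kalman analysis state:
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
```

A variational code already computes the minimizer of J; converting it into an implicit particle method amounts to adding sampling around that minimizer, which is what yields the uncertainty measures mentioned above.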

Abstract:
The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.