Abstract:
We present a probabilistic T-cell model that includes negative selection and takes contrasting models of tissue-restricted antigen (TRA) expression in the thymus into account. We start from the basic model of van den Berg, Rand, and Burroughs (2001) and include negative selection via individual-based T-cell modelling, in which each T-cell is characterized by its stimulation rates from all relevant self antigens. We present a simulation approach based on partial tilting of the stimulation rates recognized by a single T-cell. We investigate the effects of negative selection for divergent modes of thymic antigen presentation, namely arbitrary TRA presentation and more or less strict emulation of tissue-specific cell lines. We observe that negative selection truncates the tail of the distribution of the stimulation rates that mature T-cells receive from self antigens, i.e., the self background is reduced. This increases the activation probabilities of single T-cells in the presence of non-self antigens.

Abstract:
Rare events are events that occur infrequently or, more technically, that have low probabilities (say, of order $10^{-3}$ or less) of occurring according to a probability model. In the context of uncertainty quantification, rare events often correspond to failure of systems designed for high reliability, meaning that the system performance fails to meet some design or operation specification. As reviewed in this section, computing such rare-event probabilities is challenging: analytical solutions are usually unavailable for non-trivial problems, and standard Monte Carlo simulation is computationally inefficient. Therefore, much research effort has focused on developing advanced stochastic simulation methods that are more efficient. In this section, we address the problem of estimating rare-event probabilities for highly reliable dynamic systems by Monte Carlo simulation, Importance Sampling, and Subset Simulation.
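The inefficiency of standard Monte Carlo can be seen in a minimal sketch (a toy example of ours, not taken from the abstract): estimating a tail probability of a standard normal by raw sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rare event: p = P(Z > 4) for Z ~ N(0, 1), roughly 3.2e-5.
n = 100_000
z = rng.standard_normal(n)
p_hat = np.mean(z > 4.0)

# The relative error of the naive estimator is about 1/sqrt(n * p),
# so for p ~ 1e-5 one needs n far beyond 1e7 samples for a usable
# estimate; with n = 1e5 we expect only about 3 hits in total.
```

With so few hits, the estimate is dominated by sampling noise, which is precisely what the advanced methods reviewed here are designed to overcome.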

Abstract:
Rare event simulation and estimation for systems in equilibrium are among the most challenging topics in molecular dynamics. As was shown by Jarzynski and others, nonequilibrium forcing can theoretically be used to obtain equilibrium rare event statistics. The advantage seems to be that the external force can speed up the sampling of the rare events by biasing the equilibrium distribution towards a distribution under which the rare events are no longer rare. Yet algorithmic methods based on Jarzynski's and related results often fail to be efficient because they are based on sampling in path space. We present a new method that replaces the path sampling problem by the minimization of a cross-entropy-like functional, which boils down to finding the optimal nonequilibrium forcing. We show how to solve the related optimization problem efficiently by using an iterative strategy based on milestoning.

Abstract:
Importance sampling is known as a powerful tool for reducing the variance of the Monte Carlo estimator in rare event simulation. Based on the criterion of minimizing the variance of the Monte Carlo estimator within a parametric family, we propose a general approach for finding the optimal tilting measure. To this end, when the moment generating function of the underlying distribution exists, we obtain a simple and explicit expression for the optimal alternative distribution. The proposed algorithm is general enough to cover many interesting examples, such as the normal distribution, the noncentral $\chi^2$ distribution, and compound Poisson processes. To illustrate the broad applicability of our method, we study value-at-risk (VaR) computation in financial risk management and bootstrap confidence regions in statistical inference.
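As a concrete instance of exponential tilting in the normal case (a sketch of ours, not the paper's algorithm): tilting $N(0,1)$ by $\theta$ yields $N(\theta,1)$, and a common heuristic for estimating $P(Z > c)$ is to centre the sampler near $c$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate p = P(Z > c) for Z ~ N(0,1) by exponential tilting.
# Tilting N(0,1) by theta gives N(theta,1); the likelihood ratio
# (nominal over tilted density) is exp(-theta*z + theta**2/2).
c = 4.0
theta = c                                   # heuristic: centre sampler at c
n = 100_000
z = rng.standard_normal(n) + theta          # draws from the tilted law N(theta, 1)
weights = np.exp(-theta * z + 0.5 * theta**2)
p_hat = np.mean((z > c) * weights)          # unbiased estimate of P(Z > c)
```

Under the tilted law roughly half the samples land in the rare region, so the estimator has a small relative error with a modest sample size, whereas naive sampling would see almost no hits.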

Abstract:
This paper provides a detailed introductory description of Subset Simulation, an advanced stochastic simulation method for estimating small probabilities of rare failure events. A simple and intuitive derivation of the method is given, along with a discussion of its implementation. The method is illustrated with several easy-to-understand examples, and for demonstration purposes, the MATLAB code for the considered examples is provided. The reader is assumed to be familiar only with elementary probability theory and statistics.
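The paper's examples are in MATLAB; the following is a minimal Python sketch of the same idea, with a toy performance function of our own choosing: the rare-event probability $P[S(\mathbf{U}) > q]$ is written as a product of conditional probabilities over adaptively chosen intermediate levels, each estimated by Markov chain moves restricted to the current intermediate failure region.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (ours): S(u) = u1 + u2 with u ~ N(0, I_2), so
# p = P(S > 6) = P(N(0,1) > 6/sqrt(2)) is about 1.1e-5.
def S(u):
    return u.sum(axis=-1)

d, q = 2, 6.0

def subset_simulation(n=2000, p0=0.1, max_levels=12):
    u = rng.standard_normal((n, d))
    s = S(u)
    p_hat = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(s, 1 - p0)        # adaptive intermediate level
        if thresh >= q:
            break
        p_hat *= p0
        seeds = u[s > thresh]                  # samples already past the level
        u = seeds[rng.integers(0, len(seeds), size=n)].copy()
        # Random-walk Metropolis targeting N(0, I) restricted to {S > thresh}
        for _ in range(10):
            prop = u + 0.5 * rng.standard_normal((n, d))
            log_ratio = 0.5 * (np.sum(u**2, axis=1) - np.sum(prop**2, axis=1))
            ok = (np.log(rng.random(n)) < log_ratio) & (S(prop) > thresh)
            u[ok] = prop[ok]
        s = S(u)
    return p_hat * np.mean(s > q)

p_hat = subset_simulation()
```

Each level costs only ordinary samples, yet the product of conditional probabilities reaches magnitudes like $10^{-5}$ that direct Monte Carlo with the same budget could not resolve.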

Abstract:
We develop rare-event simulation methodology for the analysis of loss events in a many-server loss system under the quality-driven regime, focusing on the steady-state loss probability (i.e., the fraction of lost customers among arrivals) and the behavior of the whole system leading up to loss events. The analysis of these events requires working with the full measure-valued process describing the system. Ours is the first algorithm shown to be asymptotically optimal, in the rare-event simulation context, in the setting of many-server queues involving a full measure-valued descriptor.

Abstract:
We consider systems of stochastic differential equations with multiple scales and small noise and assume that the coefficients of the equations are ergodic and stationary random fields. Our goal is to construct provably efficient importance sampling Monte Carlo methods that allow efficient computation of rare event probabilities or of expectations of functionals that can be associated with rare events. Standard Monte Carlo algorithms perform poorly in the small noise limit, and hence fast simulation algorithms become relevant. The presence of multiple scales complicates the design and the analysis of efficient importance sampling schemes; an additional complication is the randomness of the environment. We construct explicit changes of measure that are proven to be logarithmically asymptotically efficient with probability one with respect to the random environment (i.e., in the quenched sense). Numerical simulations support the theoretical results.

Abstract:
The Cross Entropy method is a well-known adaptive importance sampling method for rare-event probability estimation, which requires estimating an optimal importance sampling density within a parametric class. In this article we estimate an optimal importance sampling density within a wider semiparametric class of distributions. We show that this semiparametric version of the Cross Entropy method frequently yields efficient estimators. We illustrate the excellent practical performance of the method with numerical experiments and show that for the problems we consider it typically outperforms alternative schemes by orders of magnitude.
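For contrast with the semiparametric variant described in this abstract, the classical parametric Cross Entropy iteration can be sketched in a toy setting of our own (a normal family tilted in the mean, used to estimate $P(Z > c)$): at each step, the sampler's parameter is refit to the elite samples under likelihood-ratio weighting.

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical CE sketch: estimate p = P(Z > c), Z ~ N(0,1), using the
# parametric family N(mu, 1) as importance sampling densities.
c, n, rho = 5.0, 10_000, 0.1
mu = 0.0
for _ in range(10):
    z = mu + rng.standard_normal(n)               # sample from N(mu, 1)
    gamma = min(c, np.quantile(z, 1 - rho))       # adaptive elite level
    elite = z >= gamma
    w = np.exp(-mu * z + 0.5 * mu**2)             # ratio N(0,1) / N(mu,1)
    # CE update: weighted mean of elite samples is the new tilting parameter
    mu = np.sum(w[elite] * z[elite]) / np.sum(w[elite])
    if gamma >= c:
        break
p_hat = np.mean((z > c) * w)                      # final IS estimate
```

Within a few iterations the mean parameter drifts to the neighborhood of $c$, after which the final importance sampling estimate is accurate; the semiparametric version of the abstract replaces this fixed parametric family with a richer class.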

Abstract:
In this paper we use a splitting technique to estimate the probability that the continuous component of a switching diffusion hits a rare but critical set. Instead of following the classical approach, we use the Wonham filter to achieve multiple goals, including reducing the asymptotic variance and avoiding the need to sample the discrete components.

Abstract:
Multilevel Splitting methods, also called Sequential Monte Carlo or \emph{Subset Simulation}, are widely used for estimating extreme probabilities of the form $P[S(\mathbf{U}) > q]$, where $S$ is a deterministic real-valued function and $\mathbf{U}$ can be a random finite- or infinite-dimensional vector. Very often, $X := S(\mathbf{U})$ is assumed to be a continuous random variable, and many theoretical results on the statistical behaviour of the estimator have been derived under this hypothesis. However, as soon as some threshold effect appears in $S$ and/or $\mathbf{U}$ is discrete or mixed discrete/continuous, this assumption no longer holds and the estimator is not consistent. In this paper, we study the impact of discontinuities in the \emph{cdf} of $X$ and present three unbiased \emph{corrected} estimators to handle them. These estimators do not require knowing in advance whether $X$ is actually discontinuous, and they all coincide when $X$ is continuous. In particular, one of them has the same statistical properties in either case. Efficiency is demonstrated on a 2-D diffusion process as well as on the \emph{Boolean SATisfiability problem} (SAT).
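The failure mode the abstract describes can be illustrated with a toy example of our own (not one of the paper's test cases): adaptive splitting assumes that the empirical $(1-p_0)$-quantile level keeps exactly a $p_0$ fraction of samples, which breaks down when $X$ has atoms.

```python
import numpy as np

rng = np.random.default_rng(4)
p0, n = 0.25, 10_000

x_cont = rng.random(n)                # continuous X: no ties
x_disc = rng.integers(0, 10, size=n)  # discrete X: heavy ties at each atom

# Fraction of samples strictly above the adaptive level in each case.
fracs = [np.mean(x > np.quantile(x, 1 - p0)) for x in (x_cont, x_disc)]
# Continuous case: fracs[0] is close to p0, as the splitting estimator
# assumes. Discrete case: ties at the level push fracs[1] away from p0,
# so the naive product estimator p0**k * (final fraction) loses consistency.
```

The corrected estimators of the paper are designed to remain unbiased in exactly this tied-threshold situation, without requiring the user to know beforehand whether the cdf of $X$ is continuous.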