Mathematics, 2007, DOI: 10.1214/105051606000000664 Abstract: Large deviation theory has provided important clues for the choice of importance sampling measures for Monte Carlo evaluation of exceedance probabilities. However, Glasserman and Wang [Ann. Appl. Probab. 7 (1997) 731-746] have given examples in which importance sampling measures that are consistent with large deviations can perform much worse than direct Monte Carlo. We address this problem by using certain mixtures of exponentially twisted measures for importance sampling. Their asymptotic optimality is established by using a new class of likelihood ratio martingales and renewal theory.
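The exponential-twisting idea behind this line of work can be illustrated with a minimal single-twist sketch (not the paper's mixture-of-twisted-measures construction): to estimate a Gaussian tail probability, sample from the mean-shifted law and reweight by the likelihood ratio. All function names here are ours.

```python
import math
import random

def twisted_tail_estimate(a, n=100_000, seed=0):
    """Estimate p = P(Z > a), Z ~ N(0,1), by importance sampling from the
    exponentially twisted measure, which for a Gaussian is the
    mean-shifted law N(a, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)                  # draw from the twisted law
        if x > a:
            # likelihood ratio dP/dQ(x) = exp(-a*x + a^2/2)
            total += math.exp(-a * x + 0.5 * a * a)
    return total / n

a = 4.0
est = twisted_tail_estimate(a)
exact = 0.5 * math.erfc(a / math.sqrt(2.0))    # P(Z > 4), about 3.17e-5
```

Direct Monte Carlo would need on the order of a billion samples to see this event; the twisted sampler concentrates the draws near the boundary and recovers the probability from a modest sample.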
Bernd A. Berg Physics, 2002, DOI: 10.1103/PhysRevLett.90.180601 Abstract: A funnel transformation is introduced, which acts recursively from higher towards lower temperatures. It biases the a priori probabilities of a canonical or generalized-ensemble Metropolis simulation so that they zoom in on the global energy minimum, if a funnel indeed exists. A first, crude approximation to the full transformation, called rugged Metropolis one (RM$_1$), is tested for Met-Enkephalin. At 300 K the computational gain is a factor of two and, due to its simplicity, RM$_1$ is well suited to replace conventional Metropolis updating for this kind of system.
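For context, the conventional Metropolis updating that RM$_1$ is meant to replace can be sketched as follows (a generic textbook sampler on a toy energy, not the paper's biased scheme; the energy function and parameters are our choices for illustration):

```python
import math
import random

def metropolis(energy, x0, beta, steps, step=1.0, seed=1):
    """Conventional Metropolis updating: propose a symmetric move and
    accept with probability min(1, exp(-beta * dE))."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    out = []
    for _ in range(steps):
        xp = x + rng.uniform(-step, step)
        ep = energy(xp)
        if ep <= e or rng.random() < math.exp(-beta * (ep - e)):
            x, e = xp, ep                 # accept the proposed move
        out.append(x)
    return out

# Harmonic energy E(x) = x^2 / 2 at beta = 1 targets a standard normal.
chain = metropolis(lambda x: 0.5 * x * x, 0.0, 1.0, 50_000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The funnel transformation modifies the a priori proposal probabilities in this loop, using information propagated from higher-temperature runs, while the accept/reject step keeps the target ensemble exact.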
Physics, 1998, DOI: 10.1016/S0375-9474(98)00478-3 Abstract: Langevin equations are used to model many processes of physical interest, including low-energy nuclear collisions. In this paper we develop a general method for computing probabilities of very rare events (e.g. small fusion cross-sections) for processes described by Langevin dynamics. As we demonstrate with numerical examples as well as an exactly solvable model, our method can converge to the desired answer at a rate which is orders of magnitude faster than that achieved with direct simulations of the process in question.
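The direct-simulation baseline that such methods accelerate looks like this: simulate Euler-Maruyama paths of the stochastic dynamics and count barrier crossings. We use force-free diffusion (the simplest Langevin case, with an exact answer from the reflection principle) and a non-rare event so the direct estimate is feasible; for genuinely rare events the hit count collapses and this approach fails, which is the paper's starting point. All names and parameters are ours.

```python
import math
import random

def crossing_prob_direct(b, dt=0.0025, t_max=1.0, n_paths=5_000, seed=2):
    """Direct Monte Carlo for P(max_{t<=1} W_t > b): Euler-Maruyama paths
    of a force-free overdamped Langevin particle (free diffusion)."""
    rng = random.Random(seed)
    steps = int(t_max / dt)
    sqdt = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(steps):
            w += sqdt * rng.gauss(0.0, 1.0)   # dW increment
            if w > b:                         # barrier crossed
                hits += 1
                break
    return hits / n_paths

est = crossing_prob_direct(1.5)
exact = math.erfc(1.5 / math.sqrt(2.0))  # reflection principle: 2 P(W_1 > 1.5)
```

The discrete-time maximum slightly undershoots the continuous one, so the estimate sits a little below the exact value; refining `dt` shrinks that bias at the usual O(sqrt(dt)) rate.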
Computer Science, 2012, Abstract: The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling; thereby providing a theoretical justification for preferring it over importance sampling. Our empirical evaluation demonstrates that AND/OR importance sampling is far more accurate than importance sampling in many cases.
Gérard Letac Statistics, 2008, DOI: 10.1214/08-STS252A Abstract: Comment on "Lancaster Probabilities and Gibbs Sampling" [arXiv:0808.3852]
Statistics, 2012, Abstract: An interesting statistical problem is to find regions where some studied process exceeds a certain level. Estimating such regions so that the probability of exceeding the level in the entire set equals some predefined value is a difficult problem that occurs in several areas of application, ranging from brain imaging to astrophysics. In this work, a method for solving this problem, as well as the related problem of finding uncertainty regions for contour curves, is proposed for latent Gaussian models. The method is based on using a parametric family for the excursion sets in combination with a sequential importance sampling method for estimating joint probabilities. The accuracy of the method is investigated using simulated data, and two environmental applications are presented. In the first application, areas where the air pollution in the Piemonte region of northern Italy exceeds the daily limit value set by the European Union for human health protection are estimated. In the second, regions in the African Sahel that experienced an increase in vegetation after the drought period of the early 1980s are estimated.
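The sequential-importance-sampling ingredient, estimating a joint exceedance probability one coordinate at a time, can be sketched on a bivariate Gaussian (a toy two-variable case, not the paper's latent-model machinery; all names are ours):

```python
import math
import random
from statistics import NormalDist

def seq_is_joint_tail(u, rho, n=20_000, seed=3):
    """Sequential importance sampling for p = P(X1 > u, X2 > u), with
    (X1, X2) bivariate standard normal with correlation rho.  X1 is drawn
    from its law truncated to (u, inf); the X2 factor is integrated out
    analytically, so each draw contributes
    w = P(X1 > u) * P(X2 > u | X1 = x1)."""
    nd = NormalDist()
    rng = random.Random(seed)
    tail1 = 1.0 - nd.cdf(u)
    s2 = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for _ in range(n):
        # inverse-CDF draw of X1 truncated above u
        x1 = nd.inv_cdf(nd.cdf(u) + rng.random() * tail1)
        w2 = 1.0 - nd.cdf((u - rho * x1) / s2)
        total += tail1 * w2
    return total / n

p_indep = seq_is_joint_tail(1.0, 0.0)   # should equal (1 - Phi(1))^2
p_corr = seq_is_joint_tail(1.0, 0.5)    # positive correlation raises the joint tail
```

Every sampled path contributes a positive weight, so the joint tail is estimated without waiting for both exceedances to occur by chance; the same idea extends coordinate by coordinate to higher dimensions.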
Mathematics, 2009, Abstract: This paper introduces a new importance sampling scheme, called Adaptive Twisted Importance Sampling, which is well suited to improved estimation of rare-event probabilities in the range of moderate deviations pertaining to the empirical mean of real i.i.d. summands. It is based on a sharp approximation of the density of long runs extracted from a random walk conditioned on its end value.
Statistics, 2014, Abstract: In multiple importance sampling we combine samples from a finite list of proposal distributions. When those proposal distributions are used to create control variates, it is possible (Owen and Zhou, 2000) to bound the ratio of the resulting variance to that of the unknown best proposal distribution in our list. The minimax-regret bound arises from taking a uniform mixture of proposals, but that is conservative when there are many components. In this paper we optimize the mixture component sampling rates to gain further efficiency. We show that the sampling variance of mixture importance sampling with control variates is jointly convex in the mixture probabilities and the control variate regression coefficients. We also give a sequential importance sampling algorithm to estimate the optimal mixture from the sample data.
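The core device, proposal densities doubling as control variates, can be sketched in a pared-down form with two proposals and a single control variate (a minimal illustration with a regression-estimated coefficient, not the paper's optimized sampling rates; densities and names are our choices):

```python
import math
import random

def npdf(x, m, s):
    """Normal density with mean m and standard deviation s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def mixture_is_cv(n=20_000, seed=4):
    """Estimate I = integral of f, for f the N(0,1) density (so I = 1),
    by sampling from an equal mixture of N(-1,1) and N(1,1).  The first
    proposal density, which integrates to 1, serves as a control variate."""
    rng = random.Random(seed)
    ys, zs = [], []
    for _ in range(n):
        m = -1.0 if rng.random() < 0.5 else 1.0   # pick a mixture component
        x = rng.gauss(m, 1.0)
        q = 0.5 * npdf(x, -1, 1) + 0.5 * npdf(x, 1, 1)
        ys.append(npdf(x, 0, 1) / q)              # importance-weighted f
        zs.append(npdf(x, -1, 1) / q - 1.0)       # control variate, mean 0
    my = sum(ys) / n
    mz = sum(zs) / n
    cov = sum((y - my) * (z - mz) for y, z in zip(ys, zs)) / n
    var = sum((z - mz) ** 2 for z in zs) / n
    beta = cov / var                              # regression coefficient
    return my - beta * mz

est = mixture_is_cv()
```

Because each q_j integrates to one, q_j/q_mix has known mean 1 under the mixture, which is exactly what makes the proposals usable as free control variates; the paper additionally tunes the mixture weights themselves.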
Mathematics, 2012, Abstract: Importance sampling is a popular method for efficient computation of various properties of a distribution, such as probabilities, expectations, and quantiles. The output of an importance sampling algorithm can be represented as a weighted empirical measure, where the weights are given by the likelihood ratio between the original distribution and the sampling distribution. In this paper the efficiency of an importance sampling algorithm is studied by means of large deviations for the weighted empirical measure. The main result, stated as a Laplace principle for the weighted empirical measure arising in importance sampling, can be viewed as a weighted version of Sanov's theorem. The main theorem is applied to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set, as well as for quantile estimates. The analysis yields an estimate of the sample size needed to reach a desired precision, as well as of the reduction in cost for importance sampling compared to standard Monte Carlo.
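The weighted-empirical-measure view is concrete: the sampler's output is the point masses x_i carrying weights p(x_i)/q(x_i), and quantities like quantiles are read off the weighted empirical CDF. A small sketch (toy target and proposal of our choosing, not the paper's large-deviation analysis):

```python
import math
import random

def weighted_empirical_quantile(level, n=40_000, seed=5):
    """Importance-sampling output as a weighted empirical measure:
    x_i ~ q = N(0, 2^2), weights w_i = p(x_i)/q(x_i) for target p = N(0,1);
    quantiles of p are read off the weighted empirical CDF."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    # likelihood ratio p(x)/q(x) = 2 * exp(-3 x^2 / 8) for these two normals
    ws = [2.0 * math.exp(-3.0 * x * x / 8.0) for x in xs]
    pairs = sorted(zip(xs, ws))
    threshold = level * sum(ws)
    acc = 0.0
    for x, w in pairs:
        acc += w
        if acc >= threshold:
            return x
    return pairs[-1][0]

median = weighted_empirical_quantile(0.5)   # true value: 0
q90 = weighted_empirical_quantile(0.9)      # true value: about 1.2816
```

How fast such weighted-CDF quantile estimates concentrate around the truth, as a function of the proposal, is exactly what the weighted Sanov-type result quantifies.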
Mathematics, 2014, Abstract: Let Y = {f(i), Af(i), ..., A^{l_i} f(i) : i in Omega}, where A is a bounded operator on l^2(I). The problem under consideration is to find necessary and sufficient conditions on A, Omega, and {l_i : i in Omega} in order to recover any f in l^2(I) from the measurements Y. This is the so-called dynamical sampling problem, in which we seek to recover a function f by combining coarse samples of f and its future states A^l f. We completely solve this problem in finite-dimensional spaces, and for a large class of self-adjoint operators in infinite-dimensional spaces. In the latter case, the Müntz-Szász Theorem combined with the Kadison-Singer/Feichtinger Theorem allows us to show that Y can never be a Riesz basis when Omega is finite. We can also show that, when Omega is finite, Y = {f(i), Af(i), ..., A^{l_i} f(i) : i in Omega} is not a frame except for some very special cases. The existence of these special cases is derived from Carleson's Theorem for interpolating sequences in the Hardy space H^2(D).
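A toy finite-dimensional instance makes the trade-off concrete: coarse spatial sampling (one node) is compensated by temporal samples under the dynamics. Here A is the cyclic shift on R^3 and Omega = {0} with l_0 = 2; this particular choice, ours for illustration, happens to make recovery immediate, whereas the paper characterizes exactly when recovery is possible in general.

```python
# A = cyclic shift on R^3: (A f)(i) = f(i + 1 mod 3)
def shift(v):
    return [v[1], v[2], v[0]]

f = [3.0, -1.0, 7.0]

measurements = []
v = list(f)
for _ in range(3):            # record f(0), (A f)(0), (A^2 f)(0)
    measurements.append(v[0])
    v = shift(v)

# For the shift, (A^l f)(0) = f(l), so the three space-time samples taken
# at the single node 0 recover the whole vector f exactly.
recovered = measurements
```

For a general A the measurements give a linear system in f, and the question of injectivity and stability of that system is precisely the dynamical sampling problem.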