oalib
Multiscale Kinetic Monte-Carlo for Simulating Epitaxial Growth  [PDF]
J. P. DeVita, L. M. Sander, P. Smereka
Physics, 2005, DOI: 10.1103/PhysRevB.72.205421
Abstract: We present a fast Monte-Carlo algorithm for simulating epitaxial surface growth, based on the continuous-time Monte-Carlo algorithm of Bortz, Kalos and Lebowitz. When simulating realistic growth regimes, much computational time is consumed by the relatively fast dynamics of the adatoms. Continuum and continuum-discrete hybrid methods have been developed to address this issue; however, in many situations the density of adatoms is too low to simulate efficiently and accurately as a continuum. To solve the problem of fast adatom dynamics, we allow adatoms to take larger steps, effectively reducing the number of transitions required. We achieve nearly a factor-of-ten speedup for growth at moderate temperatures and large D/F.
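As background for the entry above, the following is a minimal sketch of the rejection-free (BKL) event-selection step that the algorithm builds on; the rate list and event bookkeeping are illustrative placeholders, not the authors' multiscale implementation.

```python
import math
import random

def bkl_step(rates):
    """One rejection-free (BKL) kinetic Monte Carlo step.

    rates : list of per-event rates, e.g. hop rates of individual adatoms.
    Returns (index of the chosen event, time increment).
    """
    total = sum(rates)
    # Choose an event with probability proportional to its rate.
    r = random.uniform(0.0, total)
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            chosen = i
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - random.random()) / total
    return chosen, dt
```

Because fast adatom hops typically dominate the rate list, replacing many single-site hops by one larger step, as the authors do, directly reduces the number of events needed per unit of simulated time.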
Simulating `Complex' Problems with Quantum Monte Carlo  [PDF]
Lizeng Zhang, Geoff Canright, Ted Barnes
Physics, 1993, DOI: 10.1103/PhysRevB.49.12355
Abstract: We present a new quantum Monte Carlo algorithm suitable for generically complex problems, such as systems coupled to external magnetic fields or anyons in two spatial dimensions. We find that the choice of gauge plays a nontrivial role, and can be used to reduce statistical noise in the simulation. Furthermore, it is found that noise can be greatly reduced by approximate cancellations between the phases of the (gauge dependent) statistical flux and the external magnetic flux.
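The gauge-dependent phases mentioned above enter the estimator through the standard phase-reweighting construction for complex-weight Monte Carlo; a generic sketch is given below, assuming configurations are sampled from the phase-free weight. This is not the authors' specific algorithm, only the common estimator it refines.

```python
import numpy as np

def phase_reweighted_average(observables, phases):
    """Generic estimator <O> = <O e^{i phi}> / <e^{i phi}> for complex-weight
    Monte Carlo, assuming configurations were sampled from the modulus |w|.

    observables : array of O measured on each configuration
    phases      : array of accumulated phases phi (radians), one per configuration
    """
    w = np.exp(1j * np.asarray(phases))
    num = np.mean(np.asarray(observables) * w)
    den = np.mean(w)
    # |den| near zero signals a severe sign/phase problem: the statistical
    # noise of the ratio then overwhelms the estimate.
    return num / den, abs(den)
```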
Diffusion Monte Carlo: Exponential scaling of computational cost for large systems  [PDF]
Norbert Nemec
Physics, 2009, DOI: 10.1103/PhysRevB.81.035119
Abstract: The computational cost of a Monte Carlo algorithm can only be meaningfully discussed when taking into account the magnitude of the resulting statistical error. Aiming for a fixed error per particle, we study the scaling behavior of the diffusion Monte Carlo method for large quantum systems. We identify the correlation within the population of walkers as the dominant scaling factor for large systems. While this factor is negligible for the small and medium-sized systems that are typically studied, it ultimately shows exponential scaling. The scaling factor can be estimated straightforwardly for each specific system, and we find that it typically becomes relevant only for systems containing more than several hundred atoms.
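A common way to quantify the statistical error of correlated Monte Carlo data, such as a diffusion Monte Carlo energy series, is a reblocking analysis; the sketch below is a standard textbook version and does not reproduce the paper's system-size analysis.

```python
import numpy as np

def blocking_error(samples):
    """Standard-error estimate for a correlated series by reblocking.

    Neighbouring samples are repeatedly averaged in pairs; the naive
    standard error of the mean grows until the blocks are effectively
    uncorrelated.  Taking the maximum is a crude proxy for the plateau.
    """
    x = np.asarray(samples, dtype=float)
    errors = []
    while len(x) >= 2:
        errors.append(x.std(ddof=1) / np.sqrt(len(x)))
        if len(x) % 2:            # drop an odd trailing element
            x = x[:-1]
        x = 0.5 * (x[0::2] + x[1::2])
    return max(errors)
```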
Computational complexity analysis for Monte Carlo approximations of classically scaled population processes  [PDF]
David F. Anderson, Desmond J. Higham, Yu Sun
Mathematics, 2015
Abstract: We analyze and compare the computational complexity of different simulation strategies for Monte Carlo in the setting of classically scaled population processes. This setting includes stochastically modeled biochemical systems. We consider the task of approximating the expected value of some function of the state of the system at a fixed time point. We study the use of standard Monte Carlo when samples are produced by exact simulation and by approximation with tau-leaping or an Euler-Maruyama discretization of a diffusion approximation. Appropriate modifications of recently proposed multilevel Monte Carlo algorithms are also studied for the tau-leaping and Euler-Maruyama approaches. In order to quantify computational complexity in a tractable yet meaningful manner, we consider a parameterization that, in the mass action chemical kinetics setting, corresponds to the classical system size scaling. We then introduce a novel asymptotic regime where the required accuracy is a function of the model scaling parameter. Our new analysis shows that for this particular scaling a diffusion approximation offers little from a computational standpoint. Instead, we find that multilevel tau-leaping, which combines exact and tau-leaped samples, is the most promising method. In particular, multilevel tau-leaping provides an unbiased estimate and, up to a logarithmic factor, is as efficient as a diffusion approximation combined with multilevel Monte Carlo. Computational experiments confirm the effectiveness of the multilevel tau-leaping approach.
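For orientation, the sketch below shows a single explicit tau-leaping step for a mass-action reaction network; the state layout and the propensity interface are assumptions for illustration, and the multilevel coupling analyzed in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(x, tau, stoich, propensity):
    """One explicit tau-leaping step for a stochastically modeled network.

    x          : current copy numbers of each species (integer array)
    tau        : leap size
    stoich     : (n_reactions, n_species) stoichiometry matrix
    propensity : function x -> array of reaction propensities a_k(x)
    """
    a = propensity(x)
    # Fire channel k a Poisson(a_k * tau) number of times during the leap.
    counts = rng.poisson(a * tau)
    return np.maximum(x + counts @ stoich, 0)

# Example: birth-death process  0 -> S (rate 10),  S -> 0 (rate 0.1 * #S)
stoich = np.array([[1], [-1]])
props = lambda x: np.array([10.0, 0.1 * x[0]])
state = np.array([0])
for _ in range(1000):
    state = tau_leap_step(state, 0.01, stoich, props)
```

A multilevel estimator would pair such leaps at coarse and fine values of tau (and, at the finest level, with exact samples) and sum the level-wise differences.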
A wave-function Monte Carlo method for simulating conditional master equations  [PDF]
Kurt Jacobs
Physics, 2009, DOI: 10.1103/PhysRevA.81.042106
Abstract: Wave-function Monte Carlo methods are an important tool for simulating quantum systems, but the standard method cannot be used to simulate decoherence in continuously measured systems. Here we present a new Monte Carlo method for such systems. This method was used to perform the simulations of a continuously measured nano-resonator in [Phys. Rev. Lett. 102, 057208 (2009)].
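The standard wave-function (quantum jump) method that the abstract contrasts against can be sketched as a first-order trajectory step, shown below; the operator names are generic placeholders, and the new algorithm for conditional master equations described in the paper is not reproduced here.

```python
import numpy as np

def quantum_jump_step(psi, H, c_ops, dt, rng):
    """One first-order step of the standard wave-function (quantum jump)
    unraveling of a Markovian master equation.

    psi   : normalized state vector
    H     : Hamiltonian (Hermitian matrix)
    c_ops : list of collapse operators L_k
    dt    : small time step
    """
    # Jump probabilities dp_k = dt * <psi| L_k^dag L_k |psi>
    dp = np.array([dt * np.vdot(c @ psi, c @ psi).real for c in c_ops])
    if rng.random() < dp.sum():
        # A jump occurs: apply one collapse operator, chosen with prob. ~ dp_k.
        k = rng.choice(len(c_ops), p=dp / dp.sum())
        psi = c_ops[k] @ psi
    else:
        # No jump: evolve under the non-Hermitian effective Hamiltonian.
        h_eff = H - 0.5j * sum(c.conj().T @ c for c in c_ops)
        psi = psi - 1j * dt * (h_eff @ psi)
    return psi / np.linalg.norm(psi)
```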
On Stochastic Error and Computational Efficiency of the Markov Chain Monte Carlo Method  [PDF]
J. Li, P. Vignal, S. Sun, V. M. Calo
Physics, 2012
Abstract: In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by an ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimate by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance remains negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work is the theoretical proof of these numerical observations and the set of assumptions that lead to them.
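The variance rules discussed above concern averages over correlated samples; a minimal sketch of the usual estimate var(mean) ≈ (σ²/N)(1 + 2τ_int), with τ_int the integrated autocorrelation time, is given below. The truncation rule is a simple illustrative choice rather than the paper's derivation.

```python
import numpy as np

def variance_of_mean(samples, max_lag=None):
    """Variance of the ensemble average of correlated samples,
    var(mean) ~ (sigma^2 / N) * (1 + 2 * tau_int).

    tau_int is estimated from the sample autocorrelation; the sum is
    truncated at the first non-positive term (an illustrative rule).
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x.var()
    max_lag = max_lag or n // 4
    tau_int = 0.0
    for t in range(1, max_lag):
        rho = np.dot(x[:-t], x[t:]) / ((n - t) * var)
        if rho <= 0.0:
            break
        tau_int += rho
    return var / n * (1.0 + 2.0 * tau_int)
```

Increasing the sampling interval shortens τ_int roughly in proportion, which is why the sample size can be reduced at little cost in variance when the original sampling interval is very small.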
A Monte Carlo algorithm for simulating fermions on Lefschetz thimbles  [PDF]
Andrei Alexandru, Gokce Basar, Paulo Bedaque
Physics, 2015
Abstract: A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with non-zero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy-to-implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We find very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple thimbles is important, even in the continuum limit.
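The gradient flow referred to above is the standard holomorphic flow toward a thimble; a bare forward-Euler sketch is shown below, with the action derivative supplied by the caller. This illustrates only the flow itself, not the authors' sampling scheme.

```python
import numpy as np

def gradient_flow(z0, dS, flow_time, n_steps):
    """Integrate the holomorphic gradient flow dz/dt = conj(dS/dz).

    Along this flow Re(S) increases monotonically while Im(S) stays
    constant, which is what makes integrating it toward a thimble
    numerically stable.

    z0 : initial complexified field configuration (complex array)
    dS : function z -> dS/dz (holomorphic derivative of the action)
    """
    z = np.array(z0, dtype=complex)
    dt = flow_time / n_steps
    for _ in range(n_steps):
        z = z + dt * np.conj(dS(z))   # plain forward-Euler step
    return z
```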
Optimizing the accuracy of Lattice Monte Carlo algorithms for simulating diffusion  [PDF]
Mykyta V. Chubynsky, Gary W. Slater
Physics, 2010
Abstract: The behavior of a Lattice Monte Carlo algorithm (if it is designed correctly) must approach that of the continuum system that it is designed to simulate as the time step and the mesh step tend to zero. However, we show for an algorithm for unbiased particle diffusion that if one of these two parameters remains fixed, the accuracy of the algorithm is optimal for a certain finite value of the other parameter. In one dimension, the optimal algorithm with moves to the two nearest neighbor sites reproduces the correct second and fourth moments (and minimizes the error for the higher moments at large times) of the particle distribution and preserves the first two moments of the first-passage time distributions. In two and three dimensions, the same level of accuracy requires simultaneous moves along two axes ("diagonal" moves). Such moves attempting to cross an impenetrable boundary should be projected along the boundary, rather than simply rejected. We also treat the case of absorbing boundaries.
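A minimal two-dimensional illustration of the diagonal moves and the boundary projection described above is sketched below; the box geometry and step bookkeeping are illustrative, and the time-step/mesh-step optimization that is the paper's main result is not reproduced.

```python
import random

def diffuse_2d(x, y, n_steps, lx, ly):
    """Unbiased lattice walk with simultaneous moves along both axes
    ("diagonal" moves) inside an impenetrable box of size lx x ly.

    A move component that would cross the boundary is projected out, so
    the particle slides along the wall instead of the move being rejected.
    """
    for _ in range(n_steps):
        dx = random.choice((-1, 1))
        dy = random.choice((-1, 1))
        if not 0 <= x + dx < lx:
            dx = 0                 # project the move along the boundary
        if not 0 <= y + dy < ly:
            dy = 0
        x, y = x + dx, y + dy
    return x, y
```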
Monte Carlo Algorithm for Simulating Reversible Aggregation of Multisite Particles  [PDF]
Qiang Chang, Jin Yang
Physics, 2010, DOI: 10.1103/PhysRevE.83.056701
Abstract: We present an efficient and exact Monte Carlo algorithm to simulate reversible aggregation of particles with dedicated binding sites. This method introduces a novel data structure, the dynamic bond tree, to record clusters and the sequence of bond formations. The algorithm achieves a constant time cost for processing cluster association and a cost between $\mathcal{O}(\log M)$ and $\mathcal{O}(M)$ for processing bond dissociation in clusters with $M$ bonds. The algorithm is statistically exact and reproduces results obtained by the standard method. We applied the method to simulate a trivalent-ligand, bivalent-receptor clustering system and obtained an average scaling of $\mathcal{O}(M^{0.45})$ for processing bond dissociation in acyclic aggregation, compared to a linear scaling with cluster size in standard methods. The algorithm also demands substantially less memory than the conventional method.
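The abstract gives only a high-level description of the dynamic bond tree; the sketch below is one possible reading, in which leaves are particles, internal nodes record bond formations, and association adds a new root in constant time. It is labeled hypothetical and is not the authors' implementation.

```python
class BondNode:
    """Node of a hypothetical dynamic bond tree: leaves hold particles,
    internal nodes record bond-formation events, and a cluster is the
    tree rooted at its most recent bond."""

    def __init__(self, left=None, right=None, particle=None):
        self.left = left            # subtrees joined by this bond
        self.right = right
        self.particle = particle    # set only on leaf nodes
        self.parent = None

def associate(cluster_a, cluster_b):
    """Bond formation: join two clusters in O(1) by adding a new root."""
    root = BondNode(left=cluster_a, right=cluster_b)
    cluster_a.parent = root
    cluster_b.parent = root
    return root
```

Dissociation would remove an internal node and split the tree, with a cost that depends on how deep the broken bond sits, consistent with the sublinear scaling the abstract reports.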
Simulating Controlled Radical Polymerizations with mcPolymer—A Monte Carlo Approach  [PDF]
Marco Drache, Georg Drache
Polymers, 2012, DOI: 10.3390/polym4031416
Abstract: Utilizing model calculations may lead to a better understanding of the complex kinetics of controlled radical polymerization. We developed a universal simulation tool (mcPolymer), which is based on the widely used Monte Carlo simulation technique. This article focuses on the software architecture of the program, including its data management and optimization approaches. We were able to simulate polymer chains as individual objects, allowing us to gain more detailed microstructural information about the polymeric products. For all given examples of controlled radical polymerization (nitroxide-mediated radical polymerization (NMRP) homo- and copolymerization, atom transfer radical polymerization (ATRP), reversible addition-fragmentation chain transfer polymerization (RAFT)), we present detailed performance analyses demonstrating the influence of the system size, the concentrations of reactants, and the peculiarities of data management. We illustrate, by example, different ways of finding an adequate balance between precision, memory consumption, and computation time of the simulation. Due to its flexible software architecture, the application of mcPolymer is not limited to controlled radical polymerization but can be adjusted in a straightforward manner to other polymerization models.
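Treating each chain as an individual object is the feature the abstract emphasizes; the sketch below shows one hedged example of a kinetic Monte Carlo propagation event acting on such chain objects. The class and parameter names are illustrative and do not reflect mcPolymer's actual interface.

```python
import math
import random

class Chain:
    """A polymer chain tracked as an individual object, so chain-level
    microstructure (length, sequence of units) stays available."""
    def __init__(self):
        self.units = []

def propagation_event(radicals, n_monomer, k_p, volume_na):
    """Illustrative chain-growth (propagation) Monte Carlo event: one
    randomly chosen living chain adds one monomer unit.

    radicals  : list of living Chain objects
    n_monomer : number of free monomer molecules
    k_p       : propagation rate coefficient (illustrative units)
    volume_na : simulation volume times Avogadro's number
    """
    rate = k_p * len(radicals) * n_monomer / volume_na
    dt = -math.log(1.0 - random.random()) / rate   # exponential waiting time
    random.choice(radicals).units.append("M")      # record the added unit
    return dt, n_monomer - 1
```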