oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Cholla: A New Massively-Parallel Hydrodynamics Code For Astrophysical Simulation  [PDF]
Evan E. Schneider, Brant E. Robertson
Physics, 2014, DOI: 10.1088/0067-0049/217/2/24
Abstract: We present Cholla (Computational Hydrodynamics On ParaLLel Architectures), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind (CTU) algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (> 256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud depends strongly on its density structure. By describing the system in terms of its median gas density, we reconcile the computed mixing time of a turbulent cloud with a realistic density distribution, destroyed by a strong shock, with the existing analytic theory for spherical cloud destruction.
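To make the flavor of the methods named here concrete, below is a minimal sketch of a Godunov-type finite-volume update for the 1D Euler equations with an HLL approximate Riemann solver, run on the Sod shock tube. It illustrates the class of scheme the abstract describes, not Cholla's actual CTU/PPM/CUDA implementation; all names and parameters in the sketch are our own.

```python
# Minimal 1D Euler solver: first-order Godunov update with an HLL
# approximate Riemann solver. A sketch of the method family only.
import numpy as np

GAMMA = 1.4

def primitives(U):
    """Convert conserved (rho, rho*u, E) to primitive (rho, u, p)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    rho, u, p = primitives(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hll_flux(UL, UR):
    """HLL approximate Riemann solver at one cell interface."""
    rhoL, uL, pL = primitives(UL); rhoR, uR, pR = primitives(UR)
    cL = np.sqrt(GAMMA * pL / rhoL); cR = np.sqrt(GAMMA * pR / rhoR)
    sL = min(uL - cL, uR - cR)      # fastest left-going wave speed
    sR = max(uL + cL, uR + cR)      # fastest right-going wave speed
    FL, FR = flux(UL), flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

def step(U, dx, cfl=0.5):
    """One finite-volume update over the whole grid; returns dt taken."""
    rho, u, p = primitives(U)
    dt = cfl * dx / np.max(np.abs(u) + np.sqrt(GAMMA * p / rho))
    F = np.stack([hll_flux(U[:, i], U[:, i + 1])
                  for i in range(U.shape[1] - 1)], axis=1)
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    return dt

# Sod shock tube: two constant states separated at x = 0.5.
N = 400; dx = 1.0 / N
U = np.zeros((3, N))
x = (np.arange(N) + 0.5) * dx
U[0] = np.where(x < 0.5, 1.0, 0.125)                  # density
U[2] = np.where(x < 0.5, 1.0, 0.1) / (GAMMA - 1.0)    # total energy
t = 0.0
while t < 0.2:
    t += step(U, dx)
```

In a GPU code each interface flux is independent, which is what lets thousands of cells be updated simultaneously; the Python loop over interfaces stands in for that data-parallel kernel.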
Nyx: A Massively Parallel AMR Code for Computational Cosmology  [PDF]
Ann Almgren, John Bell, Mike Lijewski, Zarija Lukić, Ethan Van Andel
Physics, 2013, DOI: 10.1088/0004-637X/765/1/39
Abstract: We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement (AMR); a particle-mesh (PM) scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.
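The particle-mesh (PM) cycle the abstract mentions — deposit particle mass on a grid, solve Poisson's equation, interpolate accelerations back — can be sketched in one dimension as below. This is only an illustration of the PM idea under our own toy conventions; Nyx itself works in 3D on a block-structured AMR hierarchy.

```python
# Minimal 1D periodic particle-mesh gravity sketch (hypothetical names).
import numpy as np

N, L = 64, 1.0
dx = L / N

def cic_deposit(x, m):
    """Cloud-in-cell: spread each particle's mass over the two nearest cells."""
    rho = np.zeros(N)
    s = x / dx - 0.5
    i = np.floor(s).astype(int)
    f = s - i
    np.add.at(rho, i % N, m * (1.0 - f) / dx)
    np.add.at(rho, (i + 1) % N, m * f / dx)
    return rho

def poisson_accel(rho, G=1.0):
    """Solve d2(phi)/dx2 = 4*pi*G*(rho - mean) via FFT; return -d(phi)/dx."""
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    rho_k = np.fft.fft(rho - rho.mean())
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = -4.0 * np.pi * G * rho_k[1:] / k[1:]**2
    return np.real(np.fft.ifft(-1j * k * phi_k))   # acceleration = -grad phi

def accel_at(x, g):
    """CIC interpolation of the grid acceleration back to the particles."""
    s = x / dx - 0.5
    i = np.floor(s).astype(int)
    f = s - i
    return g[i % N] * (1.0 - f) + g[(i + 1) % N] * f

# One leapfrog kick-drift-kick step for a small particle set.
x = np.array([0.3, 0.5, 0.7]); v = np.zeros(3); m = np.ones(3) / 3
dt = 1e-3
v += 0.5 * dt * accel_at(x, poisson_accel(cic_deposit(x, m)))
x = (x + dt * v) % L
v += 0.5 * dt * accel_at(x, poisson_accel(cic_deposit(x, m)))
```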
GreeM: Massively Parallel TreePM Code for Large Cosmological N-body Simulations  [PDF]
Tomoaki Ishiyama, Toshiyuki Fukushige, Junichiro Makino
Physics, 2009, DOI: 10.1093/pasj/61.6.1319
Abstract: In this paper, we describe the implementation and performance of GreeM, a massively parallel TreePM code for large-scale cosmological N-body simulations. GreeM uses a recursive multi-section algorithm for domain decomposition. The sizes of the domains are adjusted so that the total force-calculation time becomes the same for all processes. The loss of performance due to non-optimal load balancing is around 4%, even for more than 10^3 CPU cores. GreeM runs efficiently on PC clusters and on massively parallel computers such as the Cray XT4. The measured calculation speed on the Cray XT4 is 5 × 10^4 particles per second per CPU core for an opening angle of θ = 0.5, provided the number of particles per CPU core is larger than 10^6.
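The core of cost-based domain decomposition — place boundaries so each process receives equal work, not equal particle counts — reduces in one dimension to splitting a cumulative cost array. The sketch below uses our own hypothetical names; GreeM's recursive multi-section splits in 3D using measured per-particle timings.

```python
# Equal-cost 1D domain splitting: a toy version of load balancing
# by adjusting boundary positions rather than particle counts.
import numpy as np

def split_by_cost(x, cost, nproc):
    """Return boundary coordinates giving each process ~equal summed cost."""
    order = np.argsort(x)
    csum = np.cumsum(cost[order])
    targets = csum[-1] * np.arange(1, nproc) / nproc   # per-rank cost quotas
    idx = np.searchsorted(csum, targets)
    return x[order][idx]

rng = np.random.default_rng(1)
x = rng.random(10_000)                       # particle positions
cost = rng.exponential(1.0, size=x.size)     # per-particle force cost
bounds = split_by_cost(x, cost, nproc=8)     # 7 interior domain boundaries
```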
ls1 mardyn: The massively parallel molecular dynamics code for large systems  [PDF]
Christoph Niethammer, Stefan Becker, Martin Bernreuther, Martin Buchholz, Wolfgang Eckhardt, Alexander Heinecke, Stephan Werth, Hans-Joachim Bungartz, Colin W. Glass, Hans Hasse, Jadran Vrabec, Martin Horsch
Physics, 2014
Abstract: The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures, and currently holds the world record for the largest molecular simulation, with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multi-center rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, e.g. fluids at interfaces, as well as non-equilibrium molecular dynamics simulations of heat and mass transfer.
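For reference, the Lennard-Jones pair interaction that underlies the site models mentioned above can be evaluated as follows. This is a direct O(N^2) sketch with open boundaries for clarity; production MD codes like ls1 mardyn use linked cells, cutoff neighbor lists, and domain decomposition instead.

```python
# Lennard-Jones forces and energy by direct pair summation (toy sketch).
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0, rcut=2.5):
    """Forces and potential energy for identical LJ particles."""
    n = pos.shape[0]
    F = np.zeros_like(pos)
    U = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]              # vectors to all later particles
        r2 = np.sum(d * d, axis=1)
        mask = r2 < rcut**2
        inv6 = (sigma**2 / r2[mask])**3       # (sigma/r)^6
        U += np.sum(4.0 * eps * (inv6**2 - inv6))
        # force on particle j along d: 24 eps (2 (s/r)^12 - (s/r)^6) / r^2 * d
        fij = (24.0 * eps * (2.0 * inv6**2 - inv6) / r2[mask])[:, None] * d[mask]
        F[i] -= np.sum(fij, axis=0)           # Newton's third law
        F[i + 1:][mask] += fij
    return F, U

pos = np.random.default_rng(0).random((50, 3)) * 5.0
F, U = lj_forces(pos)
```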
GRay: a Massively Parallel GPU-Based Code for Ray Tracing in Relativistic Spacetimes  [PDF]
Chi-kwan Chan, Dimitrios Psaltis, Feryal Ozel
Physics, 2013, DOI: 10.1088/0004-637X/777/1/13
Abstract: We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This GPU-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on NVIDIA graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 nanosecond per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing CPU-based ray tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae for their dependence on black hole spin and observer inclination, which can be used to interpret upcoming Event Horizon Telescope observations of the black holes at the center of the Milky Way and in M87.
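The per-photon structure of such an integrator can be shown with a much simpler case: photon orbits in a Schwarzschild (non-spinning) spacetime obey d²u/dφ² = −u + 3Mu² with u = 1/r (G = c = 1), which the sketch below integrates with RK4. GRay itself integrates full Kerr geodesics in CUDA; this Python toy, with our own names throughout, only conveys the shape of the per-photon loop that GPUs run in parallel.

```python
# Trace photons past a Schwarzschild black hole via the orbit equation
# d2u/dphi2 = -u + 3 M u^2, u = 1/r. RK4 integration, one photon at a time.
import numpy as np

M = 1.0

def rhs(y):
    u, du = y
    return np.array([du, -u + 3.0 * M * u * u])

def trace(b, dphi=1e-3, phi_max=2.0 * np.pi):
    """Photon from infinity with impact parameter b; None if captured."""
    y = np.array([1e-6, 1.0 / b])        # u ~ 0 far away, du/dphi = 1/b
    phi = 0.0
    while phi < phi_max:
        if y[0] > 1.0 / (2.0 * M):       # inside the horizon r < 2M
            return None
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dphi * k1)
        k3 = rhs(y + 0.5 * dphi * k2)
        k4 = rhs(y + dphi * k3)
        y = y + dphi / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        phi += dphi
        if y[0] < 0.0:                   # swung around and escaped
            return phi
    return phi

# Photons with b below the critical value 3*sqrt(3)*M ~ 5.196 M are
# captured -- that boundary traces the black hole "shadow".
for b in (4.0, 5.196, 6.0, 8.0):
    print(b, trace(b))
```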
Extensible Component Based Architecture for FLASH, A Massively Parallel, Multiphysics Simulation Code  [PDF]
A. Dubey, L. B. Reid, K. Weide, K. Antypas, M. K. Ganapathy, K. Riley, D. Sheeler, A. Siegel
Computer Science, 2009, DOI: 10.1016/j.parco.2009.08.001
Abstract: FLASH is a publicly available high-performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, or usability. In its newest incarnation, FLASH3 consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to coexist and be interchanged with one another, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customizing code functionality without modifying the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual modes of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code.
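The component-swapping pattern described above — one interface, many interchangeable implementations, selected at setup time — can be sketched as a small registry. Every identifier below is a hypothetical illustration of the pattern, not FLASH's actual API (FLASH is Fortran with a Python setup tool).

```python
# Toy registry of interchangeable solver implementations behind one interface.
from abc import ABC, abstractmethod

class HydroSolver(ABC):
    """Interface every hydro implementation must provide."""
    @abstractmethod
    def advance(self, state: dict, dt: float) -> None: ...

REGISTRY: dict[str, type] = {}

def implementation(name: str):
    """Class decorator: register an alternative implementation by name."""
    def register(cls):
        REGISTRY[name] = cls
        return cls
    return register

@implementation("split_ppm")
class SplitPPM(HydroSolver):
    def advance(self, state, dt):
        state["t"] += dt        # placeholder for a directionally split update

@implementation("unsplit")
class Unsplit(HydroSolver):
    def advance(self, state, dt):
        state["t"] += dt        # placeholder for an unsplit update

def setup(config: dict) -> HydroSolver:
    """Instantiate whichever implementation the configuration selects."""
    return REGISTRY[config["hydro"]]()

solver = setup({"hydro": "unsplit"})
state = {"t": 0.0}
solver.advance(state, 0.01)
```

The point of the pattern is that client code depends only on the interface, so implementations can coexist and be exchanged without touching the core.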
RH 1.5D: a massively parallel code for multi-level radiative transfer with partial frequency redistribution and Zeeman polarisation  [PDF]
Tiago M. D. Pereira, Han Uitenbroek
Physics, 2014, DOI: 10.1051/0004-6361/201424785
Abstract: The emergence of three-dimensional magneto-hydrodynamic (MHD) simulations of stellar atmospheres has sparked a need for efficient radiative transfer codes to calculate detailed synthetic spectra. We present RH 1.5D, a massively parallel code based on the RH code and capable of performing Zeeman-polarised multi-level non-local thermodynamical equilibrium (NLTE) calculations with partial frequency redistribution for an arbitrary number of chemical species. The code calculates spectra from 3D, 2D or 1D atmospheric models on a column-by-column basis (or 1.5D). While the 1.5D approximation breaks down in the cores of very strong lines in an inhomogeneous environment, it is nevertheless suitable for a large range of scenarios and allows for faster convergence with finer control over the iteration of each simulation column. The code scales well to at least tens of thousands of CPU cores, and is publicly available. In the present work we briefly describe its inner workings, strategies for convergence optimisation, its parallelism, and some possible applications.
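The 1.5D strategy is embarrassingly parallel: each column is an independent 1D problem, so columns can simply be farmed out over MPI ranks. Below is a minimal sketch of that distribution pattern using mpi4py; solve_column is a stand-in of our own invention, where RH 1.5D would run its per-column NLTE iteration.

```python
# Column-parallel skeleton: each MPI rank solves its share of columns.
# Requires mpi4py and an MPI installation; run with e.g. mpiexec -n 4.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny, nz = 32, 32, 100
columns = [(i, j) for i in range(nx) for j in range(ny)]

def solve_column(i, j):
    """Placeholder 1D solve; returns a fake emergent intensity."""
    source = np.linspace(1.0, 0.0, nz)     # toy depth-dependent source
    return source.mean()

# Static round-robin assignment of columns to ranks.
local = {(i, j): solve_column(i, j) for (i, j) in columns[rank::size]}

gathered = comm.gather(local, root=0)
if rank == 0:
    intensity = {}
    for part in gathered:
        intensity.update(part)             # full (i, j) -> intensity map
```

Because no column needs data from any other, scaling is limited mainly by load imbalance between slow- and fast-converging columns, which is why finer per-column iteration control matters.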
An Open Source, Massively Parallel Code for Non-LTE Synthesis and Inversion of Spectral Lines and Zeeman-induced Stokes Profiles  [PDF]
H. Socas-Navarro, J. de la Cruz Rodriguez, A. Asensio Ramos, J. Trujillo Bueno, B. Ruiz Cobo
Physics, 2014, DOI: 10.1051/0004-6361/201424860
Abstract: With the advent of a new generation of solar telescopes and instrumentation, the interpretation of chromospheric observations (in particular, spectro-polarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, both for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights, from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities.
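"Inversion" here means adjusting model parameters until synthesized profiles match the observed ones. The toy below shows that loop with a fake Gaussian absorption line standing in for the expensive NLTE synthesis; everything in it (names, the synth model, the noise level) is our own illustrative assumption, not NICOLE's method.

```python
# Toy spectral-line inversion: fit synthetic profiles to noisy observations.
import numpy as np
from scipy.optimize import least_squares

wav = np.linspace(-1.0, 1.0, 101)          # wavelength offsets

def synth(params):
    """Stand-in synthesis: continuum minus a Gaussian absorption line."""
    depth, center, width = params
    return 1.0 - depth * np.exp(-((wav - center) / width) ** 2)

rng = np.random.default_rng(42)
observed = synth([0.6, 0.05, 0.2]) + 0.01 * rng.normal(size=wav.size)

fit = least_squares(lambda p: synth(p) - observed,
                    x0=[0.5, 0.0, 0.3])    # initial parameter guess
print(fit.x)                               # recovered (depth, center, width)
```

Real inversions repeat this per pixel over millions of pixels, which is where the massive parallelism pays off.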
PORTA: A three-dimensional multilevel radiative transfer code for modeling the intensity and polarization of spectral lines with massively parallel computers  [PDF]
Jiri Stepan, Javier Trujillo Bueno
Physics, 2013, DOI: 10.1051/0004-6361/201321742
Abstract: The interpretation of the intensity and polarization of the spectral line radiation produced in the atmosphere of the Sun and of other stars requires solving a radiative transfer problem that can be very complex, especially when the main interest lies in modeling the spectral line polarization produced by scattering processes and the Hanle and Zeeman effects. One of the difficulties is that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of the generation and transfer of polarized radiation in realistic three-dimensional (3D) stellar atmospheric models. Here we present PORTA, an efficient multilevel radiative transfer code we have developed for the simulation of the spectral line polarization caused by scattering processes and the Hanle and Zeeman effects in 3D models of stellar atmospheres. The numerical method of solution is based on the non-linear multigrid iterative method and on a novel short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bézier interpolation. Therefore, with PORTA the computing time needed to obtain at each spatial grid point the self-consistent values of the atomic density matrix (which quantifies the excitation state of the atomic system) scales linearly with the total number of grid points. Another crucial feature of PORTA is its parallelization strategy, which allows us to speed up the numerical solution of complicated 3D problems by several orders of magnitude with respect to sequential radiative transfer approaches, given its excellent linear scaling with the number of available processors. The PORTA code can also be conveniently applied to solve the simpler 3D radiative transfer problem of unpolarized radiation in multilevel systems.
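A "formal solver" propagates intensity along a ray given the source function. Reduced to 1D scalar transfer with a source function interpolated linearly in optical depth (PORTA uses monotonic Bézier interpolants and full 3D Stokes-vector short characteristics), the exact segment-by-segment update looks like this sketch, with names of our own choosing:

```python
# 1D formal solution of dI/dtau = I - S, integrated segment by segment
# with exact weights for a source function linear in tau on each segment.
import numpy as np

def formal_solve(tau, S, I0=0.0):
    """Intensity at each depth point, given optical depths tau and source S."""
    I = np.empty_like(S)
    I[0] = I0
    for k in range(1, len(S)):
        dt = tau[k] - tau[k - 1]
        e = np.exp(-dt)
        w1 = (dt - 1.0 + e) / dt          # weight of the local point S[k]
        w0 = (1.0 - e) - w1               # weight of the upwind point S[k-1]
        I[k] = I[k - 1] * e + w0 * S[k - 1] + w1 * S[k]
    return I

tau = np.linspace(0.0, 5.0, 50)
S = np.ones_like(tau)             # constant source function
I = formal_solve(tau, S)          # I approaches S as tau grows, as it should
```

Higher-order monotonic interpolation of S is what keeps the 3D solver both accurate and free of spurious oscillations on coarse multigrid levels.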
A massively parallel code for polarization calculations  [cached]
Shizuka Akiyama, Peter Höflich
Revista Mexicana de Astronomía y Astrofísica, 2001
Abstract: We present an implementation of our Monte Carlo method for radiative transport in rapidly expanding atmospheres out of thermal equilibrium, for parallel computers using distributed and shared memory. This allows us to take advantage of fast communication among several processors and to push the limits of how the work scales with the number of nodes, compared with a version based on shared memory alone. Test calculations using a 20-node Beowulf cluster with dual processors show roughly 40% better scaling.
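Monte Carlo radiative transport parallelizes naturally because photon packets are independent. The sketch below pushes packets through a plane-parallel scattering slab and splits the packet loop across worker processes; it is a shared-memory stand-in, under our own toy assumptions, for the distributed-memory scheme the abstract describes, and it ignores polarization entirely.

```python
# Monte Carlo slab transfer: independent photon packets split over workers.
import numpy as np
from multiprocessing import Pool

TAU_MAX = 2.0          # total vertical optical depth of the slab

def escape_fraction(args):
    """Fraction of packets escaping through the top of the slab."""
    n_photons, seed = args
    rng = np.random.default_rng(seed)
    escaped = 0
    for _ in range(n_photons):
        tau = 0.0
        mu = rng.uniform(0.0, 1.0)                  # injected upward at bottom
        while True:
            # fly a path depth -log(xi); vertical change scales with mu
            tau += -np.log(1.0 - rng.random()) * mu
            if tau >= TAU_MAX:
                escaped += 1                        # out through the top
                break
            if tau <= 0.0:
                break                               # back out the bottom
            mu = rng.uniform(-1.0, 1.0)             # isotropic scattering
    return escaped / n_photons

if __name__ == "__main__":
    jobs = [(25_000, seed) for seed in range(4)]    # one job per worker
    with Pool(4) as pool:
        print(np.mean(pool.map(escape_fraction, jobs)))
```

Distinct seeds per worker keep the random streams independent, which is the same correctness concern a distributed-memory MPI version has to handle.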