Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Stream Processor Generator for HPC to Embedded Applications on FPGA-based System Platform  [PDF]
Kentaro Sano,Hayato Suzuki,Ryo Ito,Tomohiro Ueno,Satoru Yamamoto
Computer Science , 2014,
Abstract: This paper presents a stream processor generator, called SPGen, for FPGA-based system-on-chip platforms. In our research project, we use an FPGA as a common platform for applications ranging from HPC to embedded/robotics computing. Pipelining in application-specific stream processors brings power-efficient, high-performance computing to FPGAs. However, poor productivity in developing custom pipelines prevents the reconfigurable platform from being widely and easily used. SPGen aims to assist developers in designing and implementing high-throughput stream processors by generating their HDL code from our domain-specific high-level stream processing description, called SPD. With an example from fluid dynamics computation, we validate SPD for describing a real application and verify SPGen's synthesis of a pipelined data-flow graph. We also demonstrate that SPGen allows us to easily explore a design space to find better implementations than a hand-designed one.
Status of the APENet project  [PDF]
R. Ammendola,R. Petronzio,D. Rossetti,A. Salamon,N. Tantalo,P. Vicini
Physics , 2005,
Abstract: We present the current status of APENet, our custom 3-dimensional interconnect architecture for PC cluster environments. We report some micro-benchmarks from our recent large installation as well as new developments on the software and hardware sides. The low-level device driver has been reworked to follow a custom hardware RDMA architecture, and MPICH-VMI, an implementation of the MPI library, has been ported to APENet.
APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters  [PDF]
Roberto Ammendola,Andrea Biagioni,Ottorino Frezza,Francesca Lo Cicero,Alessandro Lonardo,Pier Stanislao Paolucci,Davide Rossetti,Andrea Salamon,Gaetano Salina,Francesco Simula,Laura Tosoratto,Piero Vicini
Physics , 2011, DOI: 10.1088/1742-6596/331/5/052029
Abstract: We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology, plus hardware support for an RDMA programming model and experimental acceleration of GPU networking. This design allows us to build a low-latency, high-bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective cluster network architecture, scalable to tens of thousands of nodes. Test results and a characterization of data transmission on a complete testbench, based on a commercial development card mounting an Altera FPGA, are provided.
APENet: LQCD clusters a la APE  [PDF]
R. Ammendola,M. Guagnelli,G. Mazza,F. Palombi,R. Petronzio,D. Rossetti,A. Salamon,P. Vicini
Physics , 2004, DOI: 10.1016/j.nuclphysbps.2004.11.373
Abstract: Developed by the APE group, APENet is a new high-speed, low-latency, 3-dimensional interconnect architecture optimized for PC clusters running LQCD-like numerical applications. The hardware implementation is based on a single PCI-X 133 MHz network interface card hosting six independent bidirectional channels, each with a peak bandwidth of 676 MB/s in each direction. We discuss preliminary benchmark results showing performance similar to or better than that of high-end commercial network systems.
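The per-channel figure quoted in the abstract implies a simple aggregate peak for the card. The following sketch is a sanity check of that arithmetic; the aggregate numbers are derived here and are not stated in the abstract itself.

```python
# Back-of-the-envelope aggregate peak bandwidth for the APENet NIC,
# from the abstract's figures: six bidirectional channels,
# 676 MB/s peak per direction on each channel.
CHANNELS = 6
PEAK_PER_DIRECTION_MB_S = 676

# All six channels sending in one direction simultaneously.
aggregate_one_direction = CHANNELS * PEAK_PER_DIRECTION_MB_S   # 4056 MB/s

# Both directions of every channel saturated at once.
aggregate_bidirectional = 2 * aggregate_one_direction          # 8112 MB/s

print(aggregate_one_direction, aggregate_bidirectional)
```

Whether the PCI-X 133 MHz host interface can actually sustain the full aggregate is a separate question the abstract does not address.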
Big Data at HPC Wales  [PDF]
Sidharth N. Kashyap,Ade J. Fewings,Jay Davies,Ian Morris,Andrew Thomas Green,Martyn F. Guest
Computer Science , 2015,
Abstract: This paper describes an automated approach to handling Big Data workloads on HPC systems. We describe a solution that dynamically creates a unified YARN-based cluster in an HPC environment, without the need to configure and allocate a dedicated Hadoop cluster. End users can write their solution in any combination of supported frameworks, and it scales seamlessly from a few cores to thousands of cores. This coupling of environments creates a platform for applications to use native HPC solutions alongside the Big Data frameworks. Users are provided with HPC Wales APIs in multiple languages that let them integrate this flow into their environment, ensuring that the traditional means of HPC access do not become a bottleneck. We describe the behavior of cluster creation and performance results on Terasort.
Failure Data Analysis of HPC Systems  [PDF]
Charng-Da Lu
Computer Science , 2013,
Abstract: Continuous availability of HPC systems built from commodity components has become a primary concern as system size grows to thousands of processors. In this paper, we present an analysis of 8-24 months of real failure data collected from three HPC systems at the National Center for Supercomputing Applications (NCSA) during 2001-2004. The results show that availability is 98.7-99.8% and most outages are due to software halts. On the other hand, downtime is mostly attributable to hardware halts or scheduled maintenance. We also used failure clustering analysis to identify several correlated failures.
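To put the quoted availability range in perspective, the sketch below converts it into expected downtime per year, assuming the standard availability = uptime / (uptime + downtime) definition; the hour figures are derived here, not reported in the paper.

```python
# Convert an availability fraction into expected hours of downtime
# per year, under the usual uptime/(uptime+downtime) definition.
def downtime_hours_per_year(availability: float) -> float:
    """Expected downtime per year for a given availability fraction."""
    HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years
    return (1.0 - availability) * HOURS_PER_YEAR

# The paper's reported range of 98.7% to 99.8% availability.
for a in (0.987, 0.998):
    print(f"{a:.1%} availability -> {downtime_hours_per_year(a):.0f} h/year down")
```

So the reported range corresponds to roughly 18 to 114 hours of downtime per system per year.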
Pilot-Abstraction: A Valid Abstraction for Data-Intensive Applications on HPC, Hadoop and Cloud Infrastructures?  [PDF]
Andre Luckow,Pradeep Mantha,Shantenu Jha
Computer Science , 2015,
Abstract: HPC environments have traditionally been designed to meet the compute demand of scientific applications, and data has only been a second-order concern. With science moving toward data-driven discoveries that rely more on correlations in data to form scientific hypotheses, the limitations of HPC approaches become apparent: architectural paradigms such as the separation of storage and compute are not optimal for I/O-intensive workloads (e.g. for data preparation, transformation and SQL). While there are many powerful computational and analytical libraries available on HPC (e.g. for scalable linear algebra), they generally lack the usability and variety of analytical libraries found in other environments (e.g. the Apache Hadoop ecosystem). Further, there is a lack of abstractions that unify access to increasingly heterogeneous infrastructure (HPC, Hadoop, clouds) and allow reasoning about performance trade-offs in this complex environment. At the same time, the Hadoop ecosystem is evolving rapidly, has established itself as the de facto standard for data-intensive workloads in industry, and is increasingly used to tackle scientific problems. In this paper, we explore paths to interoperability between Hadoop and HPC, examine the differences and challenges, such as the different architectural paradigms and abstractions, and investigate ways to address them. We propose the extension of the Pilot-Abstraction to Hadoop to serve as an interoperability layer for allocating and managing resources across different infrastructures. Further, in-memory capabilities have been deployed to enhance the performance of large-scale data analytics (e.g. iterative algorithms) for which the ability to reuse data across iterations is critical. As memory naturally fits with the Pilot concept of retaining resources for a set of tasks, we propose the extension of the Pilot-Abstraction to in-memory resources.
Challenges and Recommendations for Preparing HPC Applications for Exascale  [PDF]
Erika Abraham,Costas Bekas,Ivona Brandic,Samir Genaim,Einar Broch Johnsen,Ivan Kondov,Sabri Pllana,Achim Streit
Computer Science , 2015,
Abstract: While the HPC community is working towards the development of the first Exaflop computer (expected around 2020), still only a few HPC applications are able to fully exploit the capabilities of Petaflop systems, a milestone reached in 2008. In this paper we argue that efforts to prepare HPC applications for Exascale should start before such systems become available. We identify challenges that need to be addressed and recommend solutions in key areas of interest, including formal modeling, static analysis and optimization, runtime analysis and optimization, and autonomic computing. Furthermore, we outline a conceptual framework for porting HPC applications to future Exascale computing systems and propose steps for its implementation.
Evaluation of USR Technology on the Destruction of HPC Organisms
M.H. Dehghani,Gh. Jahed,F. Vaezi
Pakistan Journal of Biological Sciences , 2006,
Abstract: The primary aim of this study was to investigate the effect of an ultrasonic reactor (USR) on HPC at different sonication times. Heterotrophs are broadly defined as microorganisms that require organic carbon for growth. A variety of simple culture-based tests, which are intended to recover a wide range of microorganisms from water, are collectively referred to as the heterotrophic plate count, or HPC. A USR is able to inactivate bacteria through a number of physical, mechanical and chemical effects arising from acoustic cavitation. Results showed a significant increase in percent kill for HPC bacteria with increasing sonication duration at 42 kHz, after 90 min of sonication.
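The abstract reports results as "percent kill". A common definition, assumed here since the abstract does not spell it out, is the relative reduction in colony counts after treatment:

```python
# Percent kill as the relative reduction in plate counts (CFU/mL)
# between the untreated and treated samples. This definition is an
# assumption; the paper may normalize differently (e.g. log reduction).
def percent_kill(count_before: float, count_after: float) -> float:
    """Fractional reduction in counts, expressed as a percentage."""
    return 100.0 * (count_before - count_after) / count_before

# Hypothetical example: 1000 CFU/mL before sonication, 50 after.
print(percent_kill(1000, 50))  # 95.0
```

The input counts above are illustrative only, not data from the study.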
About Brill Initial Data Sets and HPC  [PDF]
Bogdan C. Serbanoiu
Physics , 2004,
Abstract: The goal of this paper is to present the physics behind Brill initial data sets as an excellent tool for numerical experiments on axisymmetric spacetimes; these data sets are practical applications of HPC to the numerical solution of Einstein's vacuum equations.
Copyright © 2008-2017 Open Access Library. All rights reserved.