oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Page 1 /100
Display every page Item
STAR-Scheduler: A Batch Job Scheduler for Distributed I/O Intensive Applications  [PDF]
V. Mandapaka,C. Pruneau,J. Lauret,S. Zeadally
Physics , 2004,
Abstract: We present the implementation of a batch job scheduler designed for single-point management of distributed tasks on a multi-node compute farm. The scheduler uses the notion of a meta-job to launch large computing tasks simultaneously on many nodes from a single user command. Job scheduling on specific computing nodes is predicated on the availability of user-specified data files co-located with the CPUs where the analysis is meant to take place. Large I/O-intensive data analyses may thus be efficiently conducted on multiple CPUs without the limitations implied by finite LAN or WAN bandwidths. Although this scheduler was developed specifically for the STAR Collaboration at Brookhaven National Laboratory, its design is sufficiently general that it can be adapted to virtually any other data analysis task carried out by large scientific collaborations.
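The data-locality idea in this abstract (dispatch each task to a node that already holds its input files, so no LAN/WAN transfer is needed) can be sketched as follows. This is a hypothetical illustration, not STAR-Scheduler's actual code; the function name, catalog structure, and load-based tie-breaking rule are all invented:

```python
def schedule_meta_job(files, node_catalog):
    """Map each input file of a meta-job to a node holding a local copy.

    files        -- list of input file names for the meta-job
    node_catalog -- dict: node name -> set of files stored locally
    """
    assignments = {}
    for f in files:
        # Prefer any node that already holds the file locally.
        hosts = [n for n, held in node_catalog.items() if f in held]
        if hosts:
            # Tie-break by current load (fewest tasks assigned so far).
            assignments[f] = min(
                hosts,
                key=lambda n: sum(1 for v in assignments.values() if v == n),
            )
        else:
            assignments[f] = None  # no local copy: would require a transfer
    return assignments

catalog = {"node1": {"a.root", "b.root"}, "node2": {"c.root"}}
# a.root and b.root land on node1, c.root on node2 -- no data movement.
print(schedule_meta_job(["a.root", "b.root", "c.root"], catalog))
```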
FLASH redshift survey - I. Observations and Catalogue  [PDF]
Raven Kaldare,Matthew Colless,Somak Raychaudhury,Bruce A. Peterson
Physics , 2001, DOI: 10.1046/j.1365-8711.2003.05695.x
Abstract: The FLAIR Shapley-Hydra (FLASH) redshift survey catalogue consists of 4613 galaxies brighter than $\bJ = 16.7$ (corrected for Galactic extinction) over a 605 sq. degree region of sky in the general direction of the Local Group motion. The survey region is an approximately $60\degr \times 10\degr$ strip spanning the sky from the Shapley Supercluster to the Hydra cluster, and contains 3141 galaxies with measured redshifts. Designed to explore the effect of the galaxy concentrations in this direction (in particular the Supergalactic plane and the Shapley Supercluster) upon the Local Group motion, the 68% completeness allows us to sample the large-scale structure better than similar sparsely-sampled surveys. The survey region does not overlap with the areas covered by ongoing wide-angle (Sloan or 2dF) complete redshift surveys. In this paper, the first in a series, we describe the observation and data reduction procedures, the analysis for the redshift errors and survey completeness, and present the survey data.
Flashmon V2: Monitoring Raw NAND Flash Memory I/O Requests on Embedded Linux  [PDF]
Pierre Olivier,Jalil Boukhobza,Eric Senn
Computer Science , 2013,
Abstract: This paper presents Flashmon version 2, a tool for monitoring embedded Linux NAND flash memory I/O requests. It is designed for devices based on embedded boards containing raw flash chips. Flashmon is a kernel module whose name stands for "flash monitor". It traces flash I/O by placing kernel probes at the NAND driver level, allowing runtime tracing of the three main flash operations: page reads, page writes and block erasures. Flashmon is (1) generic, as it was successfully tested on the three most widely used flash file systems, JFFS2, UBIFS and YAFFS, and on several NAND chip models. Moreover, it is (2) non-intrusive, (3) has a controllable memory footprint, and (4) exhibits a low overhead (<6%) on the traced system. Finally, it is (5) simple to integrate and use as a standalone module or as a built-in function/module in existing kernel sources. Monitoring flash memory operations allows a better understanding of existing flash management systems by studying and analyzing their behavior. Moreover, it is useful in the development phase for prototyping and validating new solutions.
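The essentials Flashmon records (per-operation event log of page reads, page writes and block erasures, with a bounded memory footprint) can be modeled in user space. The real tool is a C kernel module using kprobes; this sketch only mimics its event log, and the class and all names are assumptions for illustration:

```python
from collections import Counter, deque

class FlashTrace:
    """Bounded trace of raw NAND flash operations (user-space model)."""

    OPS = ("page_read", "page_write", "block_erase")

    def __init__(self, max_events=1024):
        # deque with maxlen gives the controllable memory footprint:
        # old events are evicted once the bound is reached.
        self.events = deque(maxlen=max_events)
        self.counts = Counter()

    def record(self, op, address, timestamp):
        if op not in self.OPS:
            raise ValueError(f"unknown flash operation: {op}")
        self.events.append((timestamp, op, address))
        self.counts[op] += 1

trace = FlashTrace(max_events=4)
ops = [("page_read", 0), ("page_write", 1), ("page_write", 2),
       ("block_erase", 0), ("page_read", 3)]
for t, (op, addr) in enumerate(ops):
    trace.record(op, addr, t)

# Five operations were counted, but only the last four events are kept.
print(trace.counts["page_write"], len(trace.events))  # → 2 4
```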
Bulk Scheduling with the DIANA Scheduler  [PDF]
Ashiq Anjum,Richard McClatchey,Arshad Ali,Ian Willers
Computer Science , 2006, DOI: 10.1109/TNS.2006.886047
Abstract: Results from the research and development of a Data Intensive and Network Aware (DIANA) scheduling engine, to be used primarily for data intensive sciences such as physics analysis, are described. In Grid analyses, tasks can involve thousands of computing, data handling, and network resources. The central problem in the scheduling of these resources is the coordinated management of computation and data at multiple locations and not just data replication or movement. However, this can prove to be a rather costly operation and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the so-called DIANA Scheduler which takes into account data location and size, network performance and computation capability in order to enable efficient global scheduling. DIANA is a performance-aware and economy-guided Meta Scheduler. It iteratively allocates each job to the site that is most likely to produce the best performance as well as optimizing the global queue for any remaining jobs. Therefore it is equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results indicate that considerable performance improvements can be gained by adopting the DIANA scheduling approach.
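The kind of trade-off the abstract describes (weighing computation capability against data location and network performance when choosing a site) can be sketched with a minimal cost model. The linear formula, site parameters and numbers below are illustrative assumptions, not DIANA's actual scoring function:

```python
def site_cost(site, job_size_gb, cpu_needed_hours):
    """Estimated job completion cost (hours) at a candidate site:
    compute time plus time to transfer data not already held locally."""
    compute = cpu_needed_hours / site["cpu_power"]
    transfer_gb = max(job_size_gb - site["local_data_gb"], 0)
    network = transfer_gb / site["bandwidth_gb_per_hour"]
    return compute + network

# Hypothetical sites: one slower but holding the data, one faster but remote.
sites = {
    "siteA": {"cpu_power": 2.0, "local_data_gb": 100, "bandwidth_gb_per_hour": 50},
    "siteB": {"cpu_power": 4.0, "local_data_gb": 0,   "bandwidth_gb_per_hour": 10},
}
best = min(sites, key=lambda s: site_cost(sites[s], job_size_gb=100,
                                          cpu_needed_hours=8))
# siteA wins (4 h) over siteB (2 h compute + 10 h transfer = 12 h):
# data locality outweighs raw CPU power once network cost is counted.
print(best)  # → siteA
```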
The impact of Basel I capital requirements on bank behavior and the efficacy of monetary policy  [PDF]
Juliusz Jablecki
International Journal of Economic Sciences and Applied Research , 2009,
Abstract: The paper attempts to investigate the influence of the 1988 Basel Accord on bank behavior and monetary policy. It is argued that the Accord was successful in that it forced commercial banks in all G-10 countries to maintain higher capital ratios. Tentative research suggests, however, that – at least among American banks – the Accord also encouraged widespread resort to regulatory capital arbitrage techniques, in particular securitization. The paper also reviews the literature on the transmission mechanism of monetary policy and shows that the Basel Accord has affected the bank lending channel.
Optimal Threshold Scheduler for Cellular Networks  [PDF]
Sanket Kamthe,Smriti Gopinath
Computer Science , 2013,
Abstract: The conventional wireless schedulers of Unicast and Multicast exploit either multiuser diversity or broadcast gain, but not both together. To achieve optimal system throughput we need a scheduler that exploits both the multiuser diversity gain and the multicasting gain simultaneously. We first propose a new median-threshold scheduler that selects for transmission all users whose instantaneous SNR lies above the median value. The system rate equation for the proposed scheduler is also derived. We then optimize the median threshold so that it performs well over the entire SNR range. With the help of simulation results we compare the performance of the proposed scheduler with Unicast, Multicast and other Opportunistic Multicast Schedulers (OMS), the best schedulers in terms of throughput, and show that the proposed optimized threshold scheme outperforms all of them.
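The median-threshold selection rule stated in the abstract (transmit to every user whose instantaneous SNR exceeds the median) is simple to write down. The SNR values below are made up for illustration:

```python
import statistics

def median_threshold_select(snrs):
    """Return indices of users whose instantaneous SNR exceeds the median."""
    threshold = statistics.median(snrs)
    return [i for i, s in enumerate(snrs) if s > threshold]

# Six users; median SNR is (5.0 + 6.6) / 2 = 5.8, so users 1, 3 and 5
# (SNRs 7.4, 9.2 and 6.6) are selected for this transmission slot.
snrs = [3.1, 7.4, 5.0, 9.2, 1.8, 6.6]
print(median_threshold_select(snrs))  # → [1, 3, 5]
```

Selecting roughly half the users per slot is what lets the scheme capture some multicast gain (many receivers per transmission) while still favoring good channels (multiuser diversity).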
Bulk Scheduling with DIANA Scheduler  [PDF]
Ashiq Anjum,Richard McClatchey,Arshad Ali,Ian Willers
Computer Science , 2006,
Abstract: Results from, and progress on, the development of a Data Intensive and Network Aware (DIANA) scheduling engine, primarily for data intensive sciences such as physics analysis, are described. Scientific analysis tasks can involve thousands of computing, data handling, and network resources, and the size of the input and output files and the amount of overall storage space allocated to a user can have a significant bearing on the scheduling of data intensive applications. If the input or output files must be retrieved from a remote location, then the time required to transfer the files must also be taken into consideration when scheduling compute resources for the given application. The central problem in this study is the coordinated management of computation and data at multiple locations and not simply data movement. However, this can be a very costly operation and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the DIANA Scheduler which takes into account data location and size, network performance and computation capability to make efficient global scheduling decisions. DIANA is a performance-aware as well as an economy-guided Meta Scheduler. It iteratively allocates each job to the site that is likely to produce the best performance as well as optimizing the global queue for any remaining pending jobs. Therefore it is equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results suggest that considerable performance improvements are to be gained by adopting the DIANA scheduling approach.
Making Random Choices Invisible to the Scheduler  [PDF]
Konstantinos Chatzikokolakis,Catuscia Palamidessi
Computer Science , 2007,
Abstract: When dealing with process calculi and automata which express both nondeterministic and probabilistic behavior, it is customary to introduce the notion of a scheduler to resolve the nondeterminism. It has been observed that for certain applications, notably those in security, the scheduler needs to be restricted so as not to reveal the outcome of the protocol's random choices, since otherwise the adversary model would be too strong even for "obviously correct" protocols. We propose a process-algebraic framework in which the control on the scheduler can be specified in syntactic terms, and we show how to apply it to solve the problem mentioned above. We also consider the definition of (probabilistic) may and must preorders, and we show that they are precongruences with respect to the restricted schedulers. Furthermore, we show that all the operators of the language, except replication, distribute over probabilistic summation, which is a useful property for verification.
The core helium flash revisited III. From Pop I to Pop III stars  [PDF]
Miroslav Mocak,Simon W. Campbell,Ewald Mueller,Konstantinos Kifonidis
Physics , 2010, DOI: 10.1051/0004-6361/201014461
Abstract: Degenerate ignition of helium in low-mass stars at the end of the red giant branch phase leads to dynamic convection in their helium cores. One-dimensional (1D) stellar modeling of this intrinsically multi-dimensional dynamic event is likely to be inadequate. Previous hydrodynamic simulations imply that the single convection zone in the helium core of metal-rich Pop I stars grows during the flash on a dynamic timescale. This may lead to hydrogen injection into the core, and a double convection zone structure as known from one-dimensional core helium flash simulations of low-mass Pop III stars. We perform hydrodynamic simulations of the core helium flash in two and three dimensions to better constrain the nature of these events. To this end we study the hydrodynamics of convection within the helium cores of a 1.25 \Msun metal-rich Pop I star (Z=0.02), and a 0.85 \Msun metal-free Pop III star (Z=0) near the peak of the flash. These models possess single and double convection zones, respectively. We use 1D stellar models of the core helium flash computed with state-of-the-art stellar evolution codes as initial models for our multidimensional hydrodynamic study, and simulate the evolution of these models with the Riemann solver based hydrodynamics code Herakles which integrates the Euler equations coupled with source terms corresponding to gravity and nuclear burning. The hydrodynamic simulation of the Pop I model involving a single convection zone covers 27 hours of stellar evolution, while the first hydrodynamic simulations of a double convection zone, in the Pop III model, span 1.8 hours of stellar life. We find differences between the predictions of mixing length theory and our hydrodynamic simulations. The simulation of the single convection zone in the Pop I model shows a strong growth of the size of the convection zone due to turbulent entrainment. Hence we predict that for the Pop I model a hydrogen injection phase (i.e. hydrogen injection into the helium core) will commence after about 23 days, which should eventually lead to a double convection zone structure known from 1D stellar modeling of low-mass Pop III stars. Our two and three-dimensional hydrodynamic simulations of the double (Pop III) convection zone model show that the velocity field in the convection zones is different from that predicted by stellar evolutionary calculations. The simulations suggest that the double convection zone decays quickly, the flow eventually being dominated by internal gravity waves.
Scheduler Vulnerabilities and Attacks in Cloud Computing  [PDF]
Fangfei Zhou,Manish Goel,Peter Desnoyers,Ravi Sundaram
Computer Science , 2011,
Abstract: In hardware virtualization a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service. In the case of EC2, following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified (see Appendix B). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead.
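The general class of attack described here (gaming a scheduler that samples CPU usage at periodic ticks, by yielding just before each tick) can be illustrated with a toy simulation. The 10 ms tick, the interval model and all timings below are assumptions for illustration, not the paper's actual measurements or Xen's real accounting code:

```python
TICK_MS = 10  # assumed accounting period of the sampled scheduler

def charged_time(run_intervals, total_ms):
    """Milliseconds billed to a VM by tick sampling: one tick's worth
    for every tick (t = TICK, 2*TICK, ...) at which the VM is running.
    run_intervals is a list of half-open (start_ms, end_ms) run spans."""
    charged = 0
    for t in range(TICK_MS, total_ms + 1, TICK_MS):
        if any(start <= t < end for start, end in run_intervals):
            charged += TICK_MS
    return charged

# Attacker runs 9 ms out of every 10 ms, but always sleeps across the tick.
attacker = [(i * 10 + 1, i * 10 + 10) for i in range(10)]
# A well-behaved VM running the full 100 ms is billed for every tick it spans.
print(charged_time(attacker, 100), charged_time([(0, 100)], 100))  # → 0 90
```

The attacker is billed for nothing despite ~90% real CPU usage, which is the sampling artifact the paper's defenses (and Amazon's fix) are described as eliminating.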
Copyright © 2008-2017 Open Access Library. All rights reserved.