OALib Journal
ISSN: 2333-9721
Fee: USD 99

2018

A fault-tolerant, high-performance reduction framework for complex environments

DOI: 10.13700/j.bh.1001-5965.2017.0786

Keywords: reduction, collective communication, complex environment, interference, fault tolerance, parallel computing


Abstract: Reduction is one of the most commonly used collective communication operations in parallel applications. Existing reduction algorithms suffer from two main problems. First, they do not adapt to complex environments: when interference appears in the computing environment, reduction efficiency degrades significantly. Second, they are not fault tolerant: when a node fails, the reduction operation is interrupted. To address these problems, this paper proposes a task-parallel, high-performance distributed reduction framework. First, the framework splits each reduction into a series of independent computing tasks and uses a task scheduler to ensure that ready tasks are scheduled preferentially onto nodes with higher performance, effectively avoiding the impact of slow nodes on overall performance. Second, based on reliable storage of the reduction data and a failure-detection mechanism, the framework performs fault recovery at task granularity without terminating the application. Experimental results in complex environments show that the distributed reduction framework provides high reliability; compared with existing reduction algorithms, it improves reduction performance by up to 2.2 times and concurrent reduction performance by up to 4 times.
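The two ideas in the abstract, decomposing a reduction into independent pairwise tasks and recovering from a failure by rerunning only the affected task, can be illustrated with a minimal sketch. This is not the paper's implementation: the framework targets MPI-style clusters, whereas here a thread pool stands in for compute nodes, and a hypothetical `make_flaky_op` helper simulates a node that fails on the first attempt of every task.

```python
# Illustrative sketch only: a thread pool plays the role of cluster nodes,
# and the flaky operator simulates transient node failures.
from concurrent.futures import ThreadPoolExecutor

def make_flaky_op():
    """Wrap addition so the first attempt of each distinct task fails once,
    mimicking a transient node failure the scheduler must recover from."""
    attempts = {}
    def op(a, b):
        key = (a, b)
        attempts[key] = attempts.get(key, 0) + 1
        if attempts[key] == 1:
            raise RuntimeError("simulated node failure")
        return a + b
    return op

def fault_tolerant_reduce(values, op, max_retries=3):
    """Tree reduction decomposed into independent pairwise tasks.
    A failed task is retried (conceptually: rescheduled on another node)
    instead of aborting the whole reduction."""
    def run_task(pair):
        for _ in range(max_retries):
            try:
                return op(*pair)
            except RuntimeError:
                continue  # task-granularity recovery: just rerun this task
        raise RuntimeError("task failed permanently")

    level = list(values)
    with ThreadPoolExecutor(max_workers=4) as pool:
        while len(level) > 1:
            # Tasks in one tree level are independent, so they run in parallel.
            pairs = [(level[i], level[i + 1])
                     for i in range(0, len(level) - 1, 2)]
            leftover = [level[-1]] if len(level) % 2 else []
            level = list(pool.map(run_task, pairs)) + leftover
    return level[0]

print(fault_tolerant_reduce(range(16), make_flaky_op()))  # prints 120
```

Because every task fails exactly once and is rerun, the reduction still completes with the correct result; in the paper's setting the rerun would be scheduled onto a healthy, faster node using the reliably stored input data rather than simply retried in place.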


