计算机科学 (Computer Science), 2013
High Performance Massive Data Computing Framework Based on Hadoop Cluster
Abstract:
High-performance computing (HPC) over massive data offers tremendous value, yet cloud systems still lack HPC-grade computing power. This study improves the HPC capability of cloud computing technology by adding GPUs to the cloud system. The proposed platform is based on the Hadoop MapReduce programming model, and it defines OpenMP-like directives for annotating MapReduce programs; annotated code regions are executed in parallel where possible. A GPUClassloader was designed to convert annotated Java code regions into CUDA code. Through JNI, the generated CUDA code is compiled and run on the GPUs, and the computing results are transferred back to the map function, which then completes the remaining computation. The platform allows users to conveniently program large-scale data-parallel processing with CPU-GPU collaboration.
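To make the programming model concrete, the sketch below shows what an annotated map-side computation might look like. It is a minimal, standalone illustration: the directive syntax, the class name, and the method names are assumptions for illustration, not the paper's actual API, and the sketch runs only the CPU fallback path so it stays self-contained (no Hadoop, CUDA, or JNI dependency).

```java
import java.util.Arrays;

// Standalone sketch of the annotation idea (names and directive form are assumed).
public class AnnotatedMapSketch {

    // A map-style function whose inner loop is marked with a hypothetical
    // OpenMP-like directive. In the described platform, a GPUClassloader
    // would translate such an annotated region to CUDA and launch it on the
    // GPU via JNI; here the region simply runs on the CPU.
    static double[] map(double[] values) {
        double[] out = new double[values.length];
        /* @GPU parallel-for  -- assumed annotation marking a GPU-offload region */
        for (int i = 0; i < values.length; i++) {
            out[i] = values[i] * values[i]; // element-wise work, one CUDA thread per element
        }
        /* end of annotated region: on the real platform the results would be
           copied back over JNI, and the map function would finish the rest
           of the computation (e.g., emitting key-value pairs) */
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(map(new double[]{1, 2, 3}))); // prints [1.0, 4.0, 9.0]
    }
}
```

The directive-plus-fallback structure mirrors OpenMP's design: code annotated for offload remains valid sequential Java, so the same source runs with or without a GPU present.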