%0 Journal Article
%T Architectural Considerations for Compiler-guided Unroll-and-Jam of CUDA Kernels
%J American Journal of Computer Architecture
%@ TBD
%D 2012
%I
%R 10.5923/j.ajca.20120102.01
%X Hundreds of cores per chip and support for fine-grain multithreading have made GPUs a central player in today's HPC world. Much of the responsibility for achieving high performance on these complex systems lies with software such as the compiler. This paper describes a compiler-based strategy for automatic and profitable application of the unroll-and-jam transformation to CUDA kernels. The framework supports specification of unroll factors through source-code annotation and also implements a heuristic, based on register pressure and occupancy, that recommends unroll factors for improved memory performance. We present experimental results on a GeForce 9800 GT for four CUDA kernels. The results show that the proposed strategy is generally able to select profitable unroll factors. The results also indicate that the selected unroll amounts strike the right balance between register pressure and occupancy.
%K GPU
%K Compiler Optimization
%K Memory Hierarchy
%U http://article.sapub.org/10.5923.j.ajca.20120102.01.html
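
Note: the abstract refers to the unroll-and-jam transformation applied to CUDA kernels. The sketch below is not taken from the paper; it is a minimal hand-written illustration of the general idea, assuming a hypothetical matrix-vector kernel (mv_orig, mv_uaj2) in which each thread's outer work mapping is unrolled by a factor of 2 and the copies of the inner loop body are jammed (fused), trading extra registers for reuse of the x[j] load.

// Original kernel: each thread computes one output row.
__global__ void mv_orig(const float *A, const float *x, float *y, int N, int M)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;
    float sum = 0.0f;
    for (int j = 0; j < M; ++j)          // inner loop whose bodies get jammed
        sum += A[i * M + j] * x[j];
    y[i] = sum;
}

// Unroll-and-jam by a factor of 2 (hypothetical manual transformation):
// each thread now handles two rows, the two inner-loop bodies are fused,
// and x[j] is loaded once per iteration and reused for both rows.
__global__ void mv_uaj2(const float *A, const float *x, float *y, int N, int M)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
    if (i + 1 >= N) return;              // remainder row would need separate handling
    float sum0 = 0.0f, sum1 = 0.0f;
    for (int j = 0; j < M; ++j) {
        float xj = x[j];                 // value reused across the unrolled rows
        sum0 += A[i * M + j]       * xj;
        sum1 += A[(i + 1) * M + j] * xj;
    }
    y[i]     = sum0;
    y[i + 1] = sum1;
}

Larger unroll factors increase this reuse but also raise per-thread register pressure, which can lower occupancy; the paper's heuristic is described as balancing exactly these two effects when recommending an unroll factor.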