%0 Journal Article
%T Interoperability strategies for GASPI and MPI in large-scale scientific applications
%A Christian Simmendinger
%A Dana Akhmetova
%A Erwin Laure
%A Luis Cebamanos
%A Mirko Rahn
%A Roman Iakymchuk
%A Stefano Markidis
%A Tiberiu Rotaru
%A Valeria Bartsch
%J The International Journal of High Performance Computing Applications
%@ 1741-2846
%D 2019
%R 10.1177/1094342018808359
%X One of the main hurdles for partitioned global address space (PGAS) approaches is the dominance of the message passing interface (MPI), which as a de facto standard appears in the code base of many applications. To take advantage of PGAS APIs such as the global address space programming interface (GASPI) without a major change to the code base, interoperability between MPI and PGAS approaches needs to be ensured. In this article, we consider an interoperable GASPI/MPI implementation for the communication- and performance-critical parts of the Ludwig and iPIC3D applications. To address the discovered performance limitations, we develop a novel strategy for significantly improved performance and interoperability between both APIs by leveraging GASPI shared windows and shared notifications. First results with a corresponding implementation in the MiniGhost proxy application and the Allreduce collective operation demonstrate the viability of this approach.
%K Interoperability
%K GASPI
%K MPI
%K iPIC3D
%K Ludwig
%K MiniGhost
%K halo exchange
%K Allreduce
%U https://journals.sagepub.com/doi/full/10.1177/1094342018808359