Data as of November 15, 2024




60.00 € incl. VAT, plus shipping


ISBN 978-3-8439-0231-1, Ingenieurwissenschaften series

Katharina Benkert
Adaptive Parallel Communications for Large-Scale Computational Fluid Dynamics

143 pages, doctoral dissertation, Universität Stuttgart (2011), softcover, A5

Abstract

Nowadays, simulation methods in engineering are recognized as equally important as the traditional fields of theory and experiment. Because of their complexity, large-scale parallel computations are necessary, commonly using the Message Passing Interface (MPI) in distributed-memory environments. One of the main obstacles end-users face when performing such simulations is the trade-off between performance and portability. A possible solution is empirical optimization libraries, which offer a rich set of codelets, i.e., implementations, for a particular problem, as well as methods to detect the best-performing one. This allows applications to be tuned at install time or runtime without special knowledge or intervention from the end-user.
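The core idea of empirical codelet selection can be sketched as follows. This is a minimal illustration in plain Python, not ADCL's actual API; the codelet names, the toy operation, and the timing loop are all hypothetical:

```python
import time

def codelet_loop(data):
    # one implementation of the operation: explicit list comprehension
    return [x * 2 for x in data]

def codelet_map(data):
    # an alternative implementation of the same operation
    return list(map(lambda x: x * 2, data))

def select_best_codelet(codelets, data, repetitions=5):
    """Empirically pick the fastest implementation for this input and machine
    by timing each candidate and keeping the one with the lowest average."""
    timings = {}
    for name, fn in codelets.items():
        start = time.perf_counter()
        for _ in range(repetitions):
            fn(data)
        timings[name] = (time.perf_counter() - start) / repetitions
    return min(timings, key=timings.get)

codelets = {"loop": codelet_loop, "map": codelet_map}
best = select_best_codelet(codelets, list(range(10000)))
```

In a real auto-tuning library the codelets would be different MPI communication patterns rather than list operations, and the winning codelet would be cached so the measurement cost is paid only once per machine.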

In this work, two large-scale Computational Fluid Dynamics (CFD) applications are optimized using the empirical auto-tuning library ADCL. To this end, ADCL is extended beyond neighborhood communication patterns to allow the tuning of collective communications. It is shown that ADCL can shorten runtimes by more than 30% for the chosen test cases. Since empirical optimization libraries base their choice of the best-performing codelet on empirical data, this work also investigates two fundamental problems associated with collecting and evaluating this data. First, the empirical data is obtained by measuring the execution times of various codelets, and the measurement method greatly influences its informational value. Second, the evaluation of the data is encumbered by unpredictable variations in execution time, which frequently occur in a parallel setting and produce data points that differ greatly from the observed average, so-called outliers. The thesis gives recommendations on how runtime tuning in a parallel environment should be carried out to obtain optimal results.

Based on the results presented in this work, the library ADCL now possesses the means to tune MPI communications in most application scenarios and uses a sound empirical framework to choose the best-performing codelet in MPI-parallel simulations. Optimization of collectives, as well as of code that overlaps communication and computation, is now possible. This makes the performance of MPI communications portable, i.e., the communications perform well on different machines without time being spent anew on optimization, and, together with the library's ease of use, reduces complexity for the user.