Programming models and scheduling techniques for heterogeneous architectures

Author:
  1. Planas Carbonell, Judit
Supervised by:
  1. Rosa M Badia (Supervisor)
  2. Eduard Ayguadé Parra (Supervisor)

Defense university: Universitat Politècnica de Catalunya (UPC)

Defense date: 3 November 2015

Committee:
  1. Manuel Prieto Matías (Chair)
  2. Marc Casas Guix (Secretary)
  3. Hatem Ltaief (Member)

Type: Thesis

Teseo: 412510 · DIALNET · TDX (open access)

Abstract

There is a clear trend nowadays towards heterogeneous high-performance computers, as they offer considerably greater computing power than homogeneous CPU systems. Extending traditional CPU systems with specialized units (accelerators such as GPGPUs) has become a revolution in the HPC world, and the use of such systems has improved both the traditional performance-per-Watt and performance-per-Euro ratios. Heterogeneous machines can also adapt better to different application requirements, as each architecture type offers different characteristics. Thus, in order to maximize application performance on these platforms, applications should be divided into several portions according to their execution requirements, and each portion should then be scheduled to the device that best fits its requirements. Hence, heterogeneity introduces complexity into application development, up to the point of reaching the programming wall: on the one hand, source codes must be adapted to fit new architectures and, on the other, resource management becomes more complicated. For example, multiple memory spaces require explicit data movements, and additional synchronizations are needed between code portions that run on different units. For all these reasons, efficient programming and code maintenance in heterogeneous systems is extremely complex and expensive. Although several approaches have been proposed for accelerator programming, such as CUDA or OpenCL, these models do not solve the aforementioned programming challenges, as they expose low-level hardware characteristics to the programmer. Therefore, programming models should hide this accelerator programming complexity by providing a homogeneous development environment.

In this context, this thesis contributes in two key aspects: first, it proposes a general design to efficiently manage the execution of heterogeneous applications; second, it presents several scheduling mechanisms to spread application execution among all the units of the system in order to maximize performance and resource utilization.

The first contribution proposes an asynchronous design to manage execution, data movements and synchronizations on accelerators. This approach was developed in two steps: first a semi-asynchronous proposal, and then a fully-asynchronous proposal that fits the restrictions of contemporary hardware. Experimental results on different multi-accelerator systems showed that these approaches can reach the maximum expected performance. Even when compared to native, hand-tuned codes, they obtained the same results and outperformed the native versions in selected cases.

The second contribution presents four different scheduling strategies. They focus on, and combine, different aspects of heterogeneous programming to minimize application execution time: for example, minimizing the amount of data shared between memory spaces, or maximizing resource utilization by scheduling each portion of code on the unit that fits it best. The experiments were performed on different heterogeneous platforms, including CPUs, GPGPUs and Intel Xeon Phi devices, and they show how strongly these scheduling strategies can impact application performance.

Three general conclusions can be drawn. First, application performance is not guaranteed across new hardware generations, so source codes must be periodically updated as hardware evolves. Second, the most efficient way to run an application on a heterogeneous platform is to divide it into smaller portions and to pick the unit best suited to run each portion, so that the system's resources cooperate to execute the application. Finally, and probably most important, the requirements derived from the first and second conclusions can be implemented inside runtime frameworks, so that the complexity of programming heterogeneous architectures is completely hidden from the programmer.
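
To make the programming-wall argument concrete, the sketch below (written for illustration; the abstract shows no code and does not prescribe this API usage) shows in CUDA C++ the kind of explicit device memory management, asynchronous transfers and synchronization that models such as CUDA expose to the programmer, and that the asynchronous runtime design of the first contribution is meant to handle on the application's behalf. The kernel, sizes and stream setup are hypothetical.

    // Illustrative only: explicit, asynchronous accelerator management
    // (separate memory space, async copies, stream synchronization).
    // Kernel and sizes are invented for this sketch, not taken from the thesis.
    #include <cuda_runtime.h>
    #include <vector>
    #include <cstdio>

    __global__ void scale(float *x, float alpha, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= alpha;   // trivial placeholder computation
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> host(n, 1.0f);
        float *dev = nullptr;
        cudaStream_t stream;

        cudaMalloc(&dev, n * sizeof(float));   // device memory space is separate
        cudaStreamCreate(&stream);             // asynchronous work queue

        // Explicit host-to-device copy, kernel launch and device-to-host copy,
        // all enqueued on the same stream (pinned host memory would be needed
        // for the copies to actually overlap with other work).
        cudaMemcpyAsync(dev, host.data(), n * sizeof(float),
                        cudaMemcpyHostToDevice, stream);
        scale<<<(n + 255) / 256, 256, 0, stream>>>(dev, 2.0f, n);
        cudaMemcpyAsync(host.data(), dev, n * sizeof(float),
                        cudaMemcpyDeviceToHost, stream);

        // Explicit synchronization: the host must wait before using the result.
        cudaStreamSynchronize(stream);
        std::printf("host[0] = %f\n", host[0]);   // expected: 2.0

        cudaStreamDestroy(stream);
        cudaFree(dev);
        return 0;
    }

A runtime following the thesis's asynchronous design would issue equivalent copies, launches and synchronizations itself, either semi- or fully asynchronously, so that the application code contains none of this boilerplate.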
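
As an illustration of the data-affinity idea behind one of the scheduling strategies (minimizing the amount of data moved between memory spaces), the following sketch implements a simple, hypothetical heuristic: run each code portion on the device that already holds most of its input data. The names, data structures and policy details are invented for this example and do not correspond to the thesis's actual schedulers.

    // Illustrative only: a hypothetical affinity-based scheduling heuristic
    // in the spirit of "minimize data shared between memory spaces".
    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <vector>
    #include <iostream>

    struct Task {
        std::string name;
        std::vector<int> inputs;   // identifiers of the data blocks the task reads
    };

    struct Device {
        std::string name;
        std::unordered_map<int, std::size_t> resident;  // block id -> bytes on device
    };

    // Pick the device that already holds the most bytes of the task's inputs,
    // so the runtime transfers as little data as possible before running it.
    std::size_t pick_device(const Task &t, const std::vector<Device> &devices) {
        std::size_t best = 0, best_bytes = 0;
        for (std::size_t d = 0; d < devices.size(); ++d) {
            std::size_t bytes = 0;
            for (int block : t.inputs) {
                auto it = devices[d].resident.find(block);
                if (it != devices[d].resident.end()) bytes += it->second;
            }
            if (bytes > best_bytes) { best_bytes = bytes; best = d; }
        }
        return best;   // falls back to device 0 when no input data is resident anywhere
    }

    int main() {
        std::vector<Device> devices = {
            {"cpu",  {{1, 4096}}},
            {"gpu0", {{2, 1 << 20}, {3, 1 << 20}}},
        };
        Task t{"dgemm_tile", {2, 3}};
        std::cout << "schedule " << t.name << " on "
                  << devices[pick_device(t, devices)].name << "\n";  // prints gpu0
        return 0;
    }

In the thesis, such locality-driven decisions are combined with other criteria, for instance the suitability of each unit for a given portion of code, to maximize overall resource utilization.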