PC
Parallel Computing
Objectives
This curricular unit aims to develop skills in parallel computing, namely the ability to:
- Characterise computer systems and their computing units.
- Evaluate computer systems and their computing units, both qualitatively and quantitatively.
- Understand the main parallel programming paradigms, namely shared-memory programming and distributed-memory programming with message passing.
- Develop parallel applications, with an emphasis on scientific computing.
- Improve the execution efficiency of parallel applications, with an emphasis on scientific computing.
- Identify the main limitations in the performance of HPC applications and ways of overcoming these limitations.
Program
- Analysis of the architecture of general-purpose processors, covering the various types and levels of parallelism (in shared- and distributed-memory environments) and the memory hierarchy.
- Analysis and evaluation of shared-memory and distributed-memory systems.
- Parallel computing methodology and models: development phases of parallel applications; programming in a multi-threaded environment, tools (including OpenMP) and languages; message passing in a distributed-memory environment (including MPI). Minimal sketches of both paradigms follow this list.
- Design of parallel applications with a focus on scientific computing.
- Measurement and optimization of the performance of parallel applications (the standard speedup and efficiency metrics are recalled after this list).
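As an illustration of the shared-memory, multi-threaded model referred to above, the following minimal C/OpenMP sketch parallelises a dot product with a work-sharing loop and a reduction. It is purely illustrative and not taken from the unit's materials; the vector size N and the input values are arbitrary.

/* Minimal OpenMP sketch: dot product with a work-sharing loop and a
 * reduction. Compile with, e.g., gcc -fopenmp dot.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];       /* static so the arrays live outside the stack */
    double sum = 0.0;

    for (int i = 0; i < N; i++) {   /* arbitrary input values */
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* Iterations are divided among the threads; each thread keeps a
     * private partial sum that the reduction clause combines at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}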
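A matching minimal sketch of the distributed-memory, message-passing model with MPI, in which every process contributes a local value that is combined on rank 0 by a collective operation (again illustrative only; the local value is simply the process rank):

/* Minimal MPI sketch: each process supplies one value and MPI_Reduce
 * combines them on rank 0. Compile with mpicc, run with e.g.
 * mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    double local = (double)rank;            /* arbitrary per-process contribution */
    double total = 0.0;

    /* Collective communication: sum the local values onto rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks over %d processes = %f\n", size, total);

    MPI_Finalize();
    return 0;
}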
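The quantitative evaluation and performance measurement listed above usually rely on the standard speedup and efficiency metrics, bounded by Amdahl's law when a fraction f of the work is inherently serial. This is a generic reminder of those definitions, not a statement of the unit's own assessment criteria:

S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}, \qquad S_{\mathrm{Amdahl}}(p) = \frac{1}{f + \frac{1-f}{p}}

where T_1 is the execution time of the best serial version and T_p the execution time on p processing units.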
Bibliography
- Parallel Computing Architectures and APIs, Vivek Kale, Chapman and Hall/CRC, 2019
- Structured Parallel Programming: Patterns for Efficient Computation, Michael McCool, Arch Robison & James Reinders, Morgan Kaufmann, 2012
- Programming Massively Parallel Processors: A Hands-on Approach, 3rd Ed., David Kirk & Wen-mei Hwu, Morgan Kaufmann, 2016
- Computer Architecture: A Quantitative Approach, 6th Ed., John Hennessy & David Patterson, Morgan Kaufmann, 2017
- Parallel Programming in C with MPI and OpenMP, Michael J. Quinn, McGraw-Hill Education, 2003