Target Audience:
This course is aimed at programmers seeking to deepen their understanding of MPI and to explore some of its more recent and advanced features. We cover topics including exploiting shared-memory access from MPI programs, communicator management, and neighbourhood collectives. We also look at performance aspects, such as which MPI routines to use for scalability, MPI internal implementation issues, and overlapping communication and calculation.
Intended Learning Outcomes:
- Understanding of how internal MPI implementation details affect performance
- Techniques for overlapping communication and calculation
- Knowledge of MPI memory models for RMA operations
- Understanding of best practice for MPI+OpenMP programming
- Familiarity with neighbourhood collective operations in MPI
Prerequisites:
Attendees should be familiar with MPI programming in C, C++ or Fortran, e.g. have attended the ARCHER2 MPI course.
Requirements:
Participants must bring a laptop running macOS, Linux, or Windows (not a tablet, Chromebook, etc.) on which they have administrative privileges.
Participants are also required to abide by the ARCHER2 Code of Conduct.
Timetable:
Day 1: Wednesday 29th June
- 09:30 - 09:45 ARCHER2 and PRACE training
- 09:45 - 10:15 MPI Quiz (room name: HPCQUIZ)
- 10:15 - 11:00 MPI History
- 11:00 - 11:30 Coffee
- 11:30 - 13:00 Point-to-point Performance
- 13:00 - 14:00 Lunch
- 14:00 - 15:30 MPI Optimisations
- 15:30 - 16:00 Coffee
- 16:00 - 17:00 Collectives
- 17:00 CLOSE
Day 2: Thursday 30th June
- 09:30 - 11:00 MPI + OpenMP (i)
- 11:00 - 11:30 Coffee
- 11:30 - 13:00 MPI + OpenMP (ii) - continues the same slide deck as session (i)
- 13:00 - 14:00 Lunch
- 14:00 - 14:30 RMA Access in MPI
- 14:30 - 15:30 New MPI shared-memory model
- 15:30 - 16:00 Coffee
- 16:00 - 17:00 Finish Exercises
- 17:00 CLOSE
This course is part-funded by the PRACE project and is free to all.