COMP40730 High Performance Computing

Academic Year 2023/2024

This module is suitable for students who are experienced programmers with knowledge of C/C++. The aim of this module is to introduce students to the design and development of parallel programs for different parallel architectures. There is a particular emphasis on the practical implementation of shared-memory parallel algorithms using Pthreads and OpenMP, and of message-passing parallel algorithms using the Message Passing Interface (MPI), in the C programming language.
Students are introduced to some simple but typical computationally intensive problems, together with strategies for solving these problems on multi-processor machines. Methods for analysing the performance of parallel algorithms on the executing parallel architecture are also introduced, and students are required to implement and experiment with a number of parallel applications. Topics covered in the module include: vector and superscalar processors: architecture and programming model, optimizing compilers (dependency analysis and code generation), array libraries (BLAS), parallel languages (Fortran 90); shared-memory multi-processors and multicores: architecture and programming models, optimizing compilers, thread libraries (Pthreads), parallel languages (OpenMP); distributed-memory multi-processors: architecture and programming model, performance models, message-passing libraries (MPI); hybrid parallel computing on clusters of multicore CPUs with MPI+OpenMP.

NB: This is a professional module which forms part of a professional MSc.


Curricular information is subject to change

Learning Outcomes:

By the end of this module, students should:
• Understand the main principles of parallel computing and be able to orient themselves in parallel computing technologies;
• Be able to apply simple performance models to the performance analysis of parallel algorithms;
• Be able to write parallel programs using MPI, OpenMP, Pthreads, and MPI+OpenMP;
• Be able to theoretically and experimentally analyse the performance of parallel applications.

Indicative Module Content:

Vector and superscalar processors: architecture and programming model, optimizing compilers (dependency analysis and code generation), array libraries (BLAS), parallel languages (Fortran 90).

Shared-memory multi-processors and multicore CPUs: architecture and programming models, optimizing compilers, thread libraries (Pthreads), parallel languages (OpenMP).

Distributed-memory multi-processors: architecture and programming model, performance models, message-passing libraries (MPI), parallel languages (HPF).

Hybrid parallel programming for clusters of multicore CPUs with MPI+OpenMP.

Student Effort Hours:
Student Effort Type          Hours
Lectures                     24
Practical                    36
Autonomous Student Learning  140
Total                        200

Approaches to Teaching and Learning:
Each topic will be covered in lectures. During lab sessions, students will work on individual practical assignments under the guidance of the TA and demonstrators. Practical assignments require the development and implementation of parallel scientific programs, conducting experiments, analysing the results, and writing reports.
Requirements, Exclusions and Recommendations

Not applicable to this module.


Module Requisites and Incompatibles
Not applicable to this module.
 
Assessment Strategy  
Description                              Timing                        Open Book Exam  Component Scale                      Must Pass Component  % of Final Grade
Practical Examination: Lab. assignments  Varies over the Trimester     n/a             Standard conversion grade scale 40%  No                   50
Examination: Final examination           2 hour End of Trimester Exam  No              Standard conversion grade scale 40%  No                   50


Carry forward of passed components
Yes
 
Remediation Type Remediation Timing
In-Module Resit Prior to relevant Programme Exam Board
Please see Student Jargon Buster for more information about remediation types and timing. 
Feedback Strategy/Strategies

• Feedback individually to students, post-assessment
• Online automated feedback

How will my Feedback be Delivered?

Assignment grades are released with comments from the TA. Students can ask the TA for further individual feedback on their assessments.

Timetabling information is displayed only for guidance purposes, relates to the current Academic Year only and is subject to change.
 
