COMP30250 Parallel Computing

Academic Year 2022/2023

Nowadays, parallel architectures are not only used for high-performance computing. The advent of multicore processors, now found in all modern desktops, laptops, mobile and embedded devices, has made parallel architectures the mainstream architecture for commodity computing. Correspondingly, the parallel programming paradigm is becoming the predominant one in mainstream programming practice.

This module introduces parallel programming. It covers the following main topics:

- Vector and superscalar processors: architecture and programming model, optimizing compilers (dependency analysis and code generation), array libraries (BLAS), parallel languages (Fortran 90).

- Shared-memory multi-processors and multicore CPUs: architecture and programming models, optimizing compilers, thread libraries (Pthreads), parallel languages (OpenMP).

- Distributed-memory multi-processors: architecture and programming model, performance models, message-passing libraries (MPI), parallel languages (HPF).

- Hybrid parallel programming for clusters of multicore CPUs with MPI+OpenMP.


Curricular information is subject to change

Learning Outcomes:

- Understand the parallel programming paradigm and orient yourself in parallel computing technologies

- Write and experiment with parallel programs using MPI, OpenMP, Pthreads, and hybrid MPI+OpenMP

- Use parallel libraries

Indicative Module Content:

- Vector and superscalar processors: architecture and programming model, optimizing compilers (dependency analysis and code generation), array libraries (BLAS), parallel languages (Fortran 90).

- Shared-memory multi-processors and multicore CPUs: architecture and programming models, optimizing compilers, thread libraries (Pthreads), parallel languages (OpenMP).

- Distributed-memory multi-processors: architecture and programming model, performance models, message-passing libraries (MPI), parallel languages (HPF).

- Hybrid parallel programming for clusters of multicore CPUs with MPI+OpenMP.

Student Effort Hours:

Student Effort Type           Hours
Lectures                         24
Tutorial                         12
Practical                        12
Autonomous Student Learning      72
Total                           120

Approaches to Teaching and Learning:
Each topic will be covered in lectures. During lab sessions, students will work on individual practical assignments under the guidance of the TA and demonstrators. Practical assignments will require the development and implementation of parallel programs, conducting experiments, and writing reports.
Requirements, Exclusions and Recommendations
Learning Recommendations:

Students taking this course should have already successfully completed an introductory C programming course. Familiarity with the material covered by COMP20200 Unix Programming would be beneficial.


Module Requisites and Incompatibles
Not applicable to this module.
 
Assessment Strategy

Description                              Timing                     Open Book Exam  Component Scale                      Must Pass  % of Final Grade
Examination: Final examination (2 hours) End of Trimester Exam      No              Standard conversion grade scale 40%  No         65
Practical Examination: Lab assignments   Varies over the Trimester  n/a             Standard conversion grade scale 40%  No         35


Carry forward of passed components
No
 
Resit in Terminal Exam
Spring, Yes - 2 Hour
Please see Student Jargon Buster for more information about remediation types and timing. 
Feedback Strategy/Strategies

• Feedback individually to students, post-assessment
• Online automated feedback

How will my Feedback be Delivered?

Marks for assignments are released to students with comments. Any questions regarding the marks can be directed to the TA individually.

Recommended Reading:
A. Lastovetsky, Parallel Computing on Heterogeneous Networks. Wiley, 2003.
Name Role
Atefeh Khazaei Ghoozhdi Tutor