BDIC3011J Parallel Computing (BDIC)

Academic Year 2021/2022

In a parallel computation, multiple processors work together to solve a given problem. These are exciting times in parallel computing. The largest parallel machine has over a hundred thousand processors, and it is believed that machines with over ten thousand processors will be commonly available by the end of the decade. Furthermore, with most chip manufacturers moving toward multicore processors, most machines will soon be parallel ones. It is, therefore, essential to learn to use parallel machines effectively.

This module provides broad coverage of the fundamental concepts of parallel computation rather than focusing primarily on the latest trends, which are often quickly outdated by the rapid pace of technological change in this area. The principal types of parallel computation are covered by investigating three key aspects of each: typical architectural features, typical programming languages, and algorithm design techniques. The popular MPI message-passing library, used with a wide range of parallel computers, will also be introduced.


Curricular information is subject to change

Learning Outcomes:

Objectives
• Describe different parallel architectures, interconnect networks, programming models, and algorithms for common operations such as matrix-vector multiplication.
• Given a problem, develop an efficient parallel algorithm to solve it.
• Given a parallel algorithm, analyze its time complexity as a function of the problem size and number of processors.
• Given a parallel algorithm, implement it using MPI, OpenMP, pthreads, or a combination of MPI and OpenMP.
• Given a parallel code, analyze its performance, determine computational bottlenecks, and optimize the performance of the code.
• Given a parallel code, debug it and fix the errors.
• Given a problem, implement an efficient and correct code to solve it, analyze its performance, and give convincing written and oral presentations explaining your achievements.

Indicative Module Content:

Week  Topic
1     Introduction -- Chapter 1
2     Programming Technology Based on Message Transfer and Cluster System -- Chapter 2
3     Parallel Architecture and Processing Technology -- Chapter 3
4     Parallel Algorithm Design -- Chapter 3 (continued)
5     Parallel Algorithm Design -- Chapter 3 (completed)
6     Performance Analysis -- Chapter 4 (continued)
7     Performance Analysis -- Chapter 4 (completed)
8     Combining MPI and OpenMP -- Chapter 5 (continued)
9     Combining MPI and OpenMP -- Chapter 5 (continued)
10    Combining MPI and OpenMP -- Chapter 5 (completed)
11    Revision
12    Examination

Student Effort Hours:
Student Effort Type          Hours
Autonomous Student Learning  72
Lectures                     32
Computer Aided Lab           16
Total                        120

Approaches to Teaching and Learning:
active/task-based learning; peer and group work; lectures; reflective learning; lab/studio work; enquiry & problem-based learning; student presentations, etc.
Requirements, Exclusions and Recommendations

Not applicable to this module.


Module Requisites and Incompatibles
Required:
BDIC1034J - College English 1, BDIC1035J - College English 2, BDIC1036J - College English 3, BDIC1037J - College English 4, BDIC1047J - English for Uni Studies BDIC, BDIC1048J - English Gen Acad Purposes BDIC, BDIC2007J - English for Spec Acad Purposes, BDIC2015J - Acad Wrt & Comm Skills


 
Assessment Strategy
• Class Test: Quiz -- Timing: Throughout the Trimester; Open Book Exam: n/a; Component Scale: Graded; Must Pass: No; 30% of final grade
• Continuous Assessment: Tutorial questions and attendance -- Timing: Throughout the Trimester; Open Book Exam: n/a; Component Scale: Alternative linear conversion grade scale 60% (Chinese modules); Must Pass: No; 10% of final grade
• Examination: Final exam (MCQ) -- Timing: End of trimester; Open Book Exam: No; Component Scale: Alternative linear conversion grade scale 60% (Chinese modules); Must Pass: No; 60% of final grade


Carry forward of passed components
No
 
Remediation Type: In-Module Resit; Remediation Timing: Prior to relevant Programme Exam Board
Please see Student Jargon Buster for more information about remediation types and timing.
Feedback Strategy/Strategies

• Feedback individually to students, on an activity or draft prior to summative assessment
• Feedback individually to students, post-assessment
• Group/class feedback, post-assessment
• Self-assessment activities

How will my Feedback be Delivered?

Not yet recorded.

Textbook
Chen Guoliang, Parallel Computing: Structure, Algorithms, Programming, Higher Education Press, 2011

References
• M.J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill Science/Engineering/Math, 1st edition, 2003, ISBN 0072822562
• Du Zhihui, Advanced Computing Parallel Programming Technology -- MPI Parallel Programming, Tsinghua University Press, 2001
Timetabling information is displayed only for guidance purposes, relates to the current Academic Year only and is subject to change.
 
