2CSE60E5: Parallel Programming

Learning Outcomes: 
After completing the course, students should be able to:
Describe different types of parallelism, their principles and structures
Comprehend the principles, techniques, and practices relevant to the design and implementation of parallel computing systems
Construct parallel algorithms for distributed-memory and shared-memory parallel systems
Syllabus: 
Unit 1: Introduction

Von Neumann architecture, Why do we need high-speed computing?, How do we increase the speed of computers?, Some interesting features of parallel computers

Unit 2: Solving Problems in Parallel

Temporal parallelism, Data parallelism, Combined temporal and data parallelism, Data parallelism with dynamic assignment, Data parallelism with quasi-dynamic assignment, Comparison of temporal and data parallel processing

Unit 3: Instruction-Level Parallel Processing

Pipelining of processing elements, Delays in pipeline execution, Delay due to resource constraints, Delay due to data dependency, Pipeline delay due to branch instructions, Hardware modification to reduce delay due to branches, Software modification to reduce delay due to branches, Difficulties in pipelining

Unit 4: Parallel Algorithms

Models of computation: Random access machine, Parallel random access machine, Interconnection networks, Combinational circuits; Analysis of parallel algorithms: Running time, Number of processors and cost

Unit 5: Introduction to Parallel Processing

Architectural classification schemes, Multiplicity of instruction–data streams, Serial versus parallel processing, Parallelism versus pipelining, Parallel processing applications

Unit 6: Principles of Pipelining and Vector Processing

Principles of designing Pipeline Processors, Instruction prefetch and branch handling, Data buffering and busing structures, Internal forwarding and register tagging, Hazard detection and resolution

Unit 7: Structures and Algorithms for Array Processors

SIMD array processors, SIMD computer organization, Masking and data routing mechanisms, Inter-PE communications

Unit 8: Processes, Shared Memory and Simple Parallel Programs

Introduction, Processes and processors, Shared memory – 1, Forking: creating processes, Shared memory – 2, Processes are randomly scheduled: contention

Unit 9: Basic Parallel Programming Techniques

Introduction, Loop splitting, Ideal speedup, Spin-locks, Contention and Self-scheduling, Histogram

Unit 10: Barriers and Race Conditions

Introduction, The Barrier Calls, Expression splitting

Unit 11: Introduction to Scheduling – Nested Loops

Introduction, Variations on loop splitting, Variations on self-scheduling, Indirect scheduling

Unit 12: Overcoming Data Dependencies

Introduction, Induction variable, Forward dependency, Block scheduling and forward dependency, Backward dependency, Splittable loops, Special scheduling: assign based on condition

Unit 13: Scheduling Summary

Introduction, Loop splitting, Expression splitting, Self-scheduling, Indirect scheduling, Block scheduling, Special scheduling

Reference Books: 
Computer Architecture and Parallel Processing by Kai Hwang and Fayé A. Briggs
Parallel Computers – Architecture and Programming by V. Rajaraman and C. Siva Ram Murthy
Introduction to Parallel Programming by Steven Brawer
Parallel Programming in C with MPI and OpenMP by Michael J. Quinn, Tata McGraw-Hill Publishing Company Ltd., 2003
Branch: CBA
Course: 2014, 2016
Stream: B.Tech