Introduction to Parallel Processing
Asim Munir
CH03, Lecture 1 (11-Sep-2008)
Program
The concept of a program:
an ordered set of instructions (the programmer's view)
an executable file (the operating system's view)
Process
From the operating system's view, a process relates to execution.
Process creation involves:
setting up the process description
allocating an address space
loading the program into the allocated address space
passing the process description to the scheduler
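As a concrete illustration (a minimal sketch, assuming a POSIX system; the program path /bin/ls and its arguments are placeholders), fork() creates the new process and execv() loads a program into its address space:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();               /* the OS sets up a new process description */
        if (pid == 0) {
            /* child: load a program into the newly allocated address space */
            char *argv[] = { "ls", "-l", NULL };
            execv("/bin/ls", argv);       /* returns only on failure */
            perror("execv");
            exit(1);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);        /* the child now runs under the scheduler */
        } else {
            perror("fork");
        }
        return 0;
    }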
Process spawning (independent processes)
[Figure: a process tree in which process A spawns children B and C, which in turn spawn D and E]
Spawning results in concurrency.
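A hedged sketch of spawning (the child count and the printed message are arbitrary choices for illustration): the parent forks several independent children, which then run concurrently.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        for (int i = 0; i < 3; i++) {     /* spawn three independent child processes */
            if (fork() == 0) {
                printf("child %d (pid %d) running concurrently\n", i, (int)getpid());
                _exit(0);
            }
        }
        while (wait(NULL) > 0)            /* wait for all children to finish */
            ;
        return 0;
    }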
Thread
smaller chunks of code (lightweight)
threads are created within, and belong to, a process
for parallel thread processing, scheduling is performed on a per-thread basis
finer grain: less overhead when switching from thread to thread
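A minimal sketch using POSIX threads (the thread count is arbitrary): each thread is a lightweight, individually schedulable unit inside one process, sharing that process's address space.

    #include <stdio.h>
    #include <pthread.h>

    void *worker(void *arg) {
        printf("thread %ld running inside the process\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);  /* per-thread scheduling */
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }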
Single-thread or multi-thread (dependent) processes
[Figure: a thread tree showing a process and the threads belonging to it]
Concurrent execution (N-client, 1-server model)
[Figure: N clients requesting service from one server]
The server may be non-preemptive or preemptive; a preemptive server may be time-shared or prioritized (priority-based).
Such models differ in their pre-emption rule and their selection rule.
Parallel execution (N-client, N-server model)
[Figure: N clients served by N servers]
Execution may be synchronous (SIMD) or asynchronous (MIMD).
Concurrent and parallel programming languages
Classification of programming languages:

    Language        1-client     N-client     1-client     N-client
                    1-server     1-server     N-server     N-server
    sequential          +            -            -            -
    concurrent          -            +            -            -
    data-parallel       -            -            +            -
    parallel            -            +            -            +
Types and levels of parallelism
Available and utilized parallelism:
available: present in the program or in the problem solution
utilized: actually exploited during execution
Types of available parallelism:
functional: arises from the logic of the problem solution
data: arises from data structures such as vectors or matrices
Available and utilized levels of functional parallelism

    Available levels           Utilized levels
    User (program) level       User level (2)
    Procedure level            Process level (2)
    Loop level                 Thread level (2)
    Instruction level          Instruction level (1)

1. Exploited by architectures
2. Exploited by means of operating systems
Utilization of functional parallelism
Available parallelism can be utilized by:
the architecture: instruction-level parallel architectures
the compiler: parallel optimizing compilers
the operating system: multitasking
Concurrent execution models
User level --- multiprogramming, time-sharing
Process level --- multitasking
Thread level --- multithreading
Utilization of data parallelism
achieved by using a data-parallel architecture
Classification of parallel architectures
Flynn's classification:
SISD (Single Instruction, Single Data)
SIMD (Single Instruction, Multiple Data)
MISD (Multiple Instruction, Single Data)
MIMD (Multiple Instruction, Multiple Data)
Parallel architectures (PAs)

Data-parallel architectures:
vector architectures
associative and neural architectures
SIMDs
systolic architectures

Function-parallel architectures:
instruction-level PAs (ILPs): pipelined processors, VLIWs, superscalar processors
thread-level PAs
process-level PAs (MIMDs): distributed-memory MIMD (multi-computers), shared-memory MIMD (multi-processors)
Basic parallel techniques
Pipelining (time):
a number of functional units are employed in sequence to perform a single computation
each computation is broken into a number of steps, one per unit
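As a worked example (assuming an idealized pipeline in which every stage takes one cycle): with k = 4 stages and N = 1000 computations, a pipelined unit finishes in about k + N - 1 = 1003 cycles, whereas performing the computations one after another takes k * N = 4000 cycles; the speedup approaches k as N grows.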
Replication (space):
a number of functional units perform multiple computations simultaneously
more processors
more memory
more I/O
more computers
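In software terms, a minimal sketch of replication, assuming a compiler with OpenMP support: the iterations of one computation are divided among several cores, i.e. replicated processing units working simultaneously.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;
        for (int i = 0; i < N; i++) a[i] = 1.0;

        /* replication in space: iterations are divided among the cores */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }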
Relationships between languages and parallel architectures
SPMD (Single Procedure, Multiple Data):
a loop is split into N threads that work on different invocations of the same loop
the threads can execute the same code at different speeds
the parallel threads are synchronized at the end of the loop (barrier synchronization)
uses MIMD
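A minimal SPMD sketch with POSIX threads (the thread count and loop bounds are arbitrary choices): every thread runs the same procedure on its own slice of the loop and waits at a barrier at the end.

    #include <stdio.h>
    #include <pthread.h>

    #define NTHREADS 4
    #define N 1000

    static double a[N], b[N], c[N];
    static pthread_barrier_t barrier;

    void *spmd(void *arg) {
        long id = (long)arg;
        /* each thread handles a different chunk of the same loop */
        for (long i = id * (N / NTHREADS); i < (id + 1) * (N / NTHREADS); i++)
            c[i] = a[i] + b[i];
        /* threads may run at different speeds; synchronize here */
        pthread_barrier_wait(&barrier);
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, spmd, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }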
Data-parallel languages (e.g. DAP Fortran):
whole-array operations such as C = A + B
use SIMD