CS 15-319
Programming Models – Part I
Lecture 4, Jan 25, 2012
Today's Session
Programming Models – Part I
Announcement: the project update is due today.
Outline:
- Why parallelism?
- Parallel computer architectures
- Traditional models of parallel programming
- Examples of parallel processing
- Message Passing Interface (MPI)
- MapReduce
- Pregel, Dryad, and GraphLab
Consider a program in which 20 time units of the work cannot be parallelized and 80 units can. Even if you use 4 processors, the parallel portion shrinks from 80 units to 20, so the total drops from 100 units to 40: you cannot get a speedup of more than 2.5 times (the parallel version still takes 40% of the serial running time).

[Figure: serial vs. parallel execution; the 20-unit portion cannot be parallelized, the 80-unit portion is split across processes 1–4]
Parallel computer architectures fall into two classes: multi-chip multiprocessors and single-chip multiprocessors. In either class, the processors may share a single address space or each have an individual address space, and they reach memory and I/O through an interconnection network.

[Figure: multi-chip vs. single-chip multiprocessors — processors, shared memory, I/O, and the interconnection network]
In the shared address space model, a program starts with a serial part S1, spawns parallel tasks P1–P4 that run concurrently in the same address space, joins them, and finishes with a serial part S2.

[Figure: time lines of the serial program (S1, P1, P2, P3, P4, S2 in sequence) vs. the shared-address-space program (S1, spawn, P1–P4 in parallel, join, S2)]

Carnegie Mellon University in Qatar
Why Locks?
Unfortunately, threads in a shared memory model need to synchronize. Mutual exclusion requires that when there are multiple threads, only one thread is allowed to write to a shared memory location (or the critical section) at any time.

The reason is that the execution of the underlying instructions (ld, bnz, st) is not atomic (indivisible): without synchronization, Thread 0 and Thread 1 may be executing them at the same time and interleave their updates. Software mutual-exclusion schemes coordinate through shared variables, e.g.:

int turn;
int interested[n]; // initialized to 0

With no synchronization, concurrent updates to shared data are a problem.
Message Passing Interface (MPI) programs are the best fit for the message passing programming model.
In the message passing model, each process has its own address space. Every process executes its serial part S1, its own share P1 of the parallel work, and its serial part S2; processes 0–3 run on nodes 1–4 and cooperate only by exchanging messages.

[Figure: time line of the serial program (S1, P1–P4, S2) vs. processes 0–3 on nodes 1–4, each executing S1, P1, S2]
Example 1 — Sequential:

for (i = 0; i < 8; i++)
    a[i] = b[i] + c[i];

Message passing version (two processes):

if (id == 0)
    send_msg(P1, b[4..7], c[4..7]);
else
    recv_msg(P0, b[4..7], c[4..7]);
for (i = start_iter; i < end_iter; i++)
    a[i] = b[i] + c[i];

Example 2 — Sequential:

sum = 0;
for (i = 0; i < 8; i++)
    if (a[i] > 0)
        sum = sum + a[i];
Print sum;

Message passing version:

local_sum = 0;
for (i = start_iter; i < end_iter; i++)
    if (a[i] > 0)
        local_sum = local_sum + a[i];
if (id == 0) {
    recv_msg(P1, &local_sum1);
    sum = local_sum + local_sum1;
    Print sum;
}
else
    send_msg(P0, local_sum);

No mutual exclusion is required: each process updates only its private local_sum, and the partial results are combined by explicit messages.
In SPMD (Single Program, Multiple Data), all tasks run the same executable (a.out) on different parts of the data. MPMD (Multiple Program, Multiple Data) has two styles, the master/worker and the coupled analysis.

Example: an array a with elements at indices 1–6 is block-partitioned among the processes; each process works only on its own index range [is, ie].