
Page 1: MT-MPI: Multithreaded MPI for Many-Core Environments

MT-MPI: Multithreaded MPI for Many-Core Environments

Min Si 1,2, Antonio J. Peña 2, Pavan Balaji 2, Masamichi Takagi 3, Yutaka Ishikawa 1

1 University of Tokyo, 2 Argonne National Laboratory, 3 NEC Corporation

1

Page 2: MT-MPI: Multithreaded MPI for Many-Core Environments

Presentation Overview

• Background and Motivation
• Proposal and Challenges
• Design and Implementation
  – OpenMP Runtime Extension
  – MPI Internal Parallelism
• Evaluation
• Conclusion

2

Page 3: MT-MPI: Multithreaded MPI for Many-Core Environments

Many-core Architectures

• Massively parallel environments
• Intel® Xeon Phi coprocessor
  – 60 cores inside a single chip, 240 hardware threads
  – SELF-HOSTING in the next generation, NATIVE mode in the current version
• Blue Gene/Q
  – 16 cores per node, 64 hardware threads
• Many "light-weight" cores are becoming a common model

3

Page 4: MT-MPI: Multithreaded MPI for Many-Core Environments

MPI programming on Many-Core Architectures

4

Thread Single mode:

/* user computation */
MPI_Function( );
/* user computation */

Funneled / Serialized mode:

#pragma omp parallel
{ /* user computation */ }
MPI_Function( );
#pragma omp parallel
{ /* user computation */ }

Multithreading mode:

#pragma omp parallel
{
  /* user computation */
  MPI_Function( );
  /* user computation */
}

[Figure: in each mode, the MPI process alternates between parallel computation (COMP.) and MPI communication (MPI COMM.).]
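For reference (not part of the original slides), a complete funneled-mode program, with MPI_THREAD_FUNNELED requested explicitly, might look like the minimal sketch below; the buffer size and the broadcast root are arbitrary placeholders.

/* Minimal sketch of the Funneled mode: all threads compute, but only
 * the main thread issues MPI calls, outside the parallel regions. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    double buf[1024];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel for
    for (int i = 0; i < 1024; i++)
        buf[i] = i * 1.0;               /* user computation by all threads */

    /* only the main thread communicates */
    MPI_Bcast(buf, 1024, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    #pragma omp parallel for
    for (int i = 0; i < 1024; i++)
        buf[i] += rank;                 /* more user computation */

    if (rank == 0) printf("done\n");
    MPI_Finalize();
    return 0;
}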

Page 5: MT-MPI: Multithreaded MPI for Many-Core Environments

Problem in Funneled / Serialized mode

• Funneled / Serialized mode
  – Multiple threads are created for the user computation
  – Only a single thread issues MPI calls

#pragma omp parallel
{ /* user computation */ }
MPI_Function( );
#pragma omp parallel
{ /* user computation */ }

ISSUE:
1. Many threads are IDLE!
2. A single lightweight core delivers poor performance

5

Page 6: MT-MPI: Multithreaded MPI for Many-Core Environments

Our Approach

• Share the idle threads with the application inside MPI

#pragma omp parallel
{ /* user computation */ }
MPI_Function( )
{
  #pragma omp parallel
  { /* MPI internal task */ }
}
#pragma omp parallel
{ /* user computation */ }

6

Page 7: MT-MPI: Multithreaded MPI for Many-Core Environments

Challenges (1/2)

• Some parallel algorithms are not efficient with insufficient threads; a tradeoff is needed
• But the number of available threads is UNKNOWN!

#pragma omp parallel
{
  /* user computation */
  #pragma omp single        /* SINGLE SECTION */
  { /* MPI_Calls */ }
}                           /* implicit barrier */

7

Page 8: MT-MPI: Multithreaded MPI for Many-Core Environments

Challenges (2/2)

• Nested parallelism
  – Simply creates new Pthreads and offloads thread scheduling to the OS
  – Causes a thread OVERRUNNING issue

#pragma omp parallel          /* creates N Pthreads */
{
  #pragma omp single
  {
    #pragma omp parallel      /* creates N more Pthreads! */
    { … }
  }
}

We want to use ONLY the IDLE threads instead.

8
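To see the overrunning behavior concretely (an illustration, not from the slides), a small standalone OpenMP program with nested parallelism enabled behaves as sketched below; the thread counts are placeholders.

/* Sketch: nested parallelism spawns a brand-new thread team inside the
 * single region, even though the outer team's threads are already idle. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    omp_set_nested(1);                   /* allow nested parallel regions */
    #pragma omp parallel num_threads(4)
    {
        #pragma omp single
        {
            #pragma omp parallel num_threads(4)
            {
                /* 4 new threads are created here, so up to 8 threads
                 * exist while 3 outer threads just wait at the barrier */
                printf("inner thread %d of %d\n",
                       omp_get_thread_num(), omp_get_num_threads());
            }
        }
    }                                    /* implicit barrier of the outer team */
    return 0;
}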

Page 9: MT-MPI: Multithreaded MPI for Many-Core Environments

Design and Implementation

- OpenMP Runtime Extension
- MPI Internal Parallelism

Implementation is based on the Intel OpenMP runtime (version 20130412).

9

Page 10: MT-MPI: Multithreaded MPI for Many-Core Environments

Guaranteed Idle Threads vs. Temporarily Idle Threads

• Guaranteed Idle Threads
  – Guaranteed to stay idle until the current thread exits
• Temporarily Idle Threads
  – The current thread does not know when they may become active again

Example 1:
#pragma omp parallel
{
  #pragma omp single nowait
  { … }
  #pragma omp critical
  { … }
}

Example 2:
#pragma omp parallel
{
  #pragma omp single
  { … }
}   /* implicit barrier */

Example 3:
#pragma omp parallel
{
  #pragma omp critical
  { … }
}

10

Page 11: MT-MPI: Multithreaded MPI for Many-Core Environments

Expose Guaranteed Idle Threads

• MPI uses the Guaranteed Idle Threads to schedule its internal parallelism efficiently (e.g., change the algorithm, specify the number of threads)

#pragma omp parallel
#pragma omp single
{
  MPI_Function( )
  {
    num_idle_threads = omp_get_num_guaranteed_idle_threads( );
    if (num_idle_threads < N) {
      /* Sequential algorithm */
    } else {
      #pragma omp parallel num_threads(num_idle_threads)
      { … }
    }
  }
}

11

Page 12: MT-MPI: Multithreaded MPI for Many-Core Environments

Design and Implementation
- OpenMP Runtime Extension
- MPI Internal Parallelism
  1. Derived Datatype Related Functions
  2. Shared Memory Communication
  3. Network-specific Optimizations

Implementation is based on MPICH v3.0.4.

12

Page 13: MT-MPI: Multithreaded MPI for Many-Core Environments

1. Derived Datatype Packing Processing

• MPI_Pack / MPI_Unpack
• Communication using derived datatypes
  – Transfer non-contiguous data
  – Pack / unpack the data internally

#pragma omp parallel for
for (i = 0; i < count; i++) {
  dest[i] = src[i * stride];
}

[Figure: layout of a strided vector datatype, showing block_length, count, and stride over element indices 0–20.]

13
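For context (not on the original slide), packing a strided vector with a derived datatype might look like the minimal sketch below; the count, block length, and stride values are arbitrary placeholders, and the function name is made up.

/* Sketch: pack one strided column of a matrix with MPI_Type_vector. */
#include <mpi.h>
#include <stdlib.h>

void pack_column_example(MPI_Comm comm)
{
    enum { COUNT = 1024, STRIDE = 8 };
    double src[COUNT * STRIDE];          /* contents omitted in this sketch */
    MPI_Datatype vec;
    int pack_size, position = 0;
    void *packed;

    /* COUNT blocks of 1 double each, separated by STRIDE doubles */
    MPI_Type_vector(COUNT, 1, STRIDE, MPI_DOUBLE, &vec);
    MPI_Type_commit(&vec);

    MPI_Pack_size(1, vec, comm, &pack_size);
    packed = malloc(pack_size);

    /* MT-MPI would parallelize the internal copy loop of this call
     * using the idle threads, as described on the slide above */
    MPI_Pack(src, 1, vec, packed, pack_size, &position, comm);

    free(packed);
    MPI_Type_free(&vec);
}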

Page 14: MT-MPI: Multithreaded MPI for Many-Core Environments

2. Shared Memory Communication

• Original sequential algorithm
  – Shared user-space buffer between processes
  – Pipelined copy on both the sender side and the receiver side
• Parallel algorithm
  – Get as many available cells as we can
  – Parallelize the large data movement

[Figure: sender and receiver copy through a shared buffer of cells (Cell[0]–Cell[3]); (a) sequential pipelining, (b) parallel pipelining.]

14

Page 15: MT-MPI: Multithreaded MPI for Many-Core Environments

Sequential Pipelining vs. Parallelism

• Small data transfers (< 128 KB)
  – Thread synchronization overhead > parallel improvement
• Large data transfers
  – Data transferred using sequential fine-grained pipelining
  – Data transferred using parallelism with only a few threads (worse)
  – Data transferred using parallelism with many threads (better)

[Figure: data flows from the sender buffer through the shared buffer to the receiver buffer.]

15
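To make the parallel copy concrete (a sketch only, not the actual MPICH code), splitting one large cell copy across the idle threads could look like the following; the function name and cell handling are simplified assumptions.

/* Sketch: copy one large chunk into a shared-memory cell in parallel.
 * Each thread copies a disjoint, contiguous sub-range of the chunk. */
#include <omp.h>
#include <string.h>

void parallel_cell_copy(char *cell, const char *user_buf, size_t len,
                        int nthreads)
{
    #pragma omp parallel num_threads(nthreads)
    {
        int tid = omp_get_thread_num();
        int nth = omp_get_num_threads();
        size_t chunk = (len + nth - 1) / nth;   /* per-thread share */
        size_t off = (size_t)tid * chunk;
        if (off < len) {
            size_t n = (off + chunk > len) ? len - off : chunk;
            memcpy(cell + off, user_buf + off, n);
        }
    }
}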

Page 16: MT-MPI: Multithreaded MPI for Many-Core Environments

3. InfiniBand Communication

• Structures
  – IB context
  – Protection Domain
  – Queue Pair (1 QP per connection)
  – Completion Queue (shared by 1 or more QPs)
• RDMA communication
  – Post the RDMA operation to a QP
  – Poll completions from the CQ
• OpenMP contention issue

[Figure: processes P0–P2 share an HCA through QPs, a CQ, and a PD inside one IB context; in the MPICH stack (ADI3 / CH3 / nemesis with SHM, TCP, and IB netmods), the QP and CQ accesses are protected by critical sections.]

16
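For readers unfamiliar with the verbs API, the per-operation pattern that is serialized here looks roughly like the sketch below (standard libibverbs calls; the qp, cq, memory registration, remote address, and rkey are assumed to have been set up during connection establishment).

/* Sketch: post one RDMA write to a QP and poll its CQ for completion. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

int rdma_write_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                        struct ibv_mr *mr, void *buf, size_t len,
                        uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = (uint32_t)len, .lkey = mr->lkey
    };
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    struct ibv_wc wc;
    int n;

    wr.opcode = IBV_WR_RDMA_WRITE;
    wr.sg_list = &sge;
    wr.num_sge = 1;
    wr.send_flags = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))      /* post to the QP */
        return -1;

    do {                                       /* poll the CQ */
        n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);

    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}

Posts to different QPs can be issued by different threads, while a CQ shared by several QPs remains a serialization point, which matches the "QPs only" results shown later in the deck.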

Page 17: MT-MPI: Multithreaded MPI for Many-Core Environments

Evaluation

1. Derived Datatype Related Functions
2. Shared Memory Communication
3. Network-specific Optimizations

All our experiments are executed on the Stampede supercomputer at the Texas Advanced Computing Center (https://www.tacc.utexas.edu/stampede/).

17

Page 18: MT-MPI: Multithreaded MPI for Many-Core Environments

Derived Datatype Packing

Parallel packing of a 3D matrix of doubles. The datatype is built as Block → Vector → Hvector; the Vector level is the one parallelized.

Packing the X-Z plane with varying Z
  Graph data: fixed matrix volume 1 GB; fixed length of Y: 2 doubles; length of Z: graph legend.
  [Figure: speedup vs. number of threads (1–240); legend 256 / 1K / 4K / 16K / 64K / 256K and Ideal.]

Packing the Y-Z plane with varying Y
  Graph data: fixed matrix volume 1 GB; fixed length of X: 2 doubles; length of Y: graph legend.
  [Figure: speedup vs. number of threads (1–240); legend 256 / 1K / 4K / 16K / 64K / 256K and Ideal.]

18

Page 19: MT-MPI: Multithreaded MPI for Many-Core Environments

3D internode halo exchange using 64 MPI processes

[Figure: speedup vs. number of threads (1–240) for the four data sets below.]

Data sets (in doubles):
  Cube     512 x 512 x 512
  Large X  16K x 128 x 64
  Large Y  64 x 16K x 128
  Large Z  64 x 128 x 16K

Not strong scaling, BUT we are using IDLE RESOURCES!

19

Page 20: MT-MPI: Multithreaded MPI for Many-Core Environments

Hybrid MPI+OpenMP NAS Parallel MG benchmark

• V-cycle multi-grid algorithm to solve a 3D discrete Poisson equation.
• Halo exchanges with various dimension sizes, from 2 to 514 doubles, in class E with 64 MPI processes.

[Figure: communication time speedup and execution time speedup vs. number of threads (1–240). Graph data: class E using 64 MPI processes.]

20

Page 21: MT-MPI: Multithreaded MPI for Many-Core Environments

Shared Memory Communication

• OSU MPI micro-benchmark

[Figure: P2P bandwidth speedup vs. number of threads (1–120) for 64 KB, 256 KB, 1 MB, 4 MB, and 16 MB messages; latency and message rate show similar results.]

• Caused by poor sequential performance due to a too-small Eager/Rendezvous communication threshold on Xeon Phi, not by MT-MPI!
• Poor pipelining, but worse parallelism.

21

Page 22: MT-MPI: Multithreaded MPI for Many-Core Environments

One-sided Operations and IB netmod Optimization

• Micro-benchmark: one-to-all experiment using 65 processes
  – The root sends many MPI_Put operations to all the other 64 processes (64 IB QPs)

[Figure: root P0 connected to targets P1–P64 through one QP each; the QP handling is the parallelized part.]

[Figure: "Parallelized IB Communication" — speedup vs. number of threads (1–64) for 1000 / 2000 / 4000 / 8000 / 16000 operations, reaching 1.44 (ideal speedup = 1.61), and bandwidth improvement vs. number of threads, reaching 3.34 for "QPs only".]

Profile of the experiment issuing 16000 operations:
  Nthreads = 1: total 5.8 s, send-side processing (SP) 2.2 s, SP/Total = 37.9%

22

Page 23: MT-MPI: Multithreaded MPI for Many-Core Environments

One-sided Graph500 benchmark

• Every process issues many MPI_Accumulate operations to the other processes in every breadth-first search iteration.
• Scale 2^22, edge factor 16, 64 MPI processes.

[Figure: harmonic mean TEPS and improvement vs. number of threads (1–64).]

23

Page 24: MT-MPI: Multithreaded MPI for Many-Core Environments

Conclusion

• Many-core architectures
• The most popular Funneled / Serialized mode in the hybrid MPI + threads programming model
  – Many threads parallelize the user computation
  – Only a single thread issues MPI calls
• Threads are IDLE during MPI calls!
• We utilize these IDLE threads to parallelize MPI internal tasks and deliver better performance in various aspects:
  – Derived datatype packing processing
  – Shared memory communication
  – IB network communication

24

Page 25: MT-MPI: Multithreaded MPI for Many-Core Environments

Backup

25

Page 26: MT-MPI: Multithreaded MPI for Many-Core Environments

OpenMP Runtime Extension 2

• Waiting progress in the barrier
  – SPIN LOOP until timeout!
  – May cause OVERSUBSCRIPTION
• Solution: force waiting threads to enter a passive wait mode inside MPI
  – set_fast_yield (sched_yield)
  – set_fast_sleep (pthread_cond_wait)

/* Intel runtime barrier wait */
while (time < KMP_BLOCKTIME) {
  if (done) break;
  /* spin loop */
}
pthread_cond_wait (…);

/* usage inside MPI */
#pragma omp parallel
#pragma omp single
{
  set_fast_yield (1);
  #pragma omp parallel
  { … }
}

26

Page 27: MT-MPI: Multithreaded MPI for Many-Core Environments

OpenMP Runtime Extension 2

• set_fast_sleep vs. set_fast_yield
  – Test bed: Intel Xeon Phi card (SE10P, 61 cores)

[Figure: overhead (us) vs. KMP_BLOCKTIME (0–300) for set_fast_sleep and set_fast_yield with 1, 4, 16, 64, and 240 threads.]

27

Page 28: MT-MPI: Multithreaded MPI for Many-Core Environments

Prefetching issue when the compiler vectorizes non-contiguous data

[Figure: total execution cycles and L2 cache misses vs. stride (bytes), comparing the vectorized (vec) and non-vectorized (no-vec) versions.]

(a) Sequential implementation (not vectorized):
for (i = 0; i < count; i++) {
  *dest++ = *src;
  src += stride;
}

(b) Parallel implementation (vectorized):
#pragma omp parallel for
for (i = 0; i < count; i++) {
  dest[i] = src[i * stride];
}

28

Page 29: MT-MPI: Multithreaded MPI for Many-Core Environments

Intra-node Large Message Communication

• Sequential pipelining LMT
  – Shared user-space buffer
  – Pipelined copy on both the sender side and the receiver side

[Figure: sender and receiver exchanging data through a shared buffer of cells (Cell[0]–Cell[3]).]

Sender: gets an EMPTY cell from the shared buffer, copies data into this cell, and marks the cell FULL; then it fills the next cell.

Receiver: gets a FULL cell from the shared buffer, copies the data out, and marks the cell EMPTY; then it drains the next cell.

29
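A minimal sketch of this cell-based pipelining is shown below (simplified; the real MPICH LMT code handles synchronization flags, sizes, and memory ordering differently, and the cell count and size here are placeholders).

/* Sketch: sender side of the cell-based pipelined copy. */
#include <string.h>

#define NCELLS    4
#define CELL_SIZE (32 * 1024)

typedef struct {
    volatile int full;        /* 0 = EMPTY, 1 = FULL */
    size_t len;
    char data[CELL_SIZE];
} cell_t;

void sender_pipeline(cell_t cells[NCELLS], const char *buf, size_t total)
{
    size_t sent = 0;
    int c = 0;
    while (sent < total) {
        while (cells[c].full)              /* wait for an EMPTY cell */
            ;
        size_t n = (total - sent > CELL_SIZE) ? CELL_SIZE : total - sent;
        memcpy(cells[c].data, buf + sent, n);   /* copy into the cell */
        cells[c].len = n;
        cells[c].full = 1;                 /* mark FULL for the receiver */
        sent += n;
        c = (c + 1) % NCELLS;              /* move on to the next cell */
    }
}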

Page 30: MT-MPI: Multithreaded MPI for Many-Core Environments

Parallel InfiniBand Communication

Two-level parallel policies:
• Parallelize the operations to different IB CTXs
• Parallelize the operations to different CQs / QPs

[Figure: (a) parallelism across different IB contexts, each with its own QP and CQ; (b) parallelism across different CQs / QPs within one IB context; both share the HCA.]

30

Page 31: MT-MPI: Multithreaded MPI for Many-Core Environments

Parallelize InfiniBand Small Data Transfer

• 3 parallelism experiments based on ib_write_bw:
  1. IB contexts: 1 process per node, 64 IB CTXs per process, 1 QP + 1 CQ per IB CTX
  2. QPs and CQs: 1 process per node, 1 IB CTX per process, 64 QPs + 64 CQs per IB CTX
  3. QPs only: 1 process per node, 1 IB CTX per process, 64 QPs + 1 shared CQ per IB CTX

Test bed: Intel Xeon Phi SE10P coprocessor, InfiniBand FDR. Data size: 64 bytes.

[Figure: bandwidth improvement vs. number of threads (1–64) for the three configurations; improvements reach roughly 3.1x–3.6x at 32–64 threads.]

31

Page 32: MT-MPI: Multithreaded MPI for Many-Core Environments

Eager Message Transferring in IB netmod

• When sending many small messages
  – Limited IB resources: QPs, CQs, remote RDMA buffers
  – Most of the messages are enqueued into the SendQ
  – All SendQ messages are sent out in the wait progress
• Major steps in the wait progress
  – Clean up issued requests
  – Receiving (parallelizable)
    • Poll the RDMA buffer
    • Copy received messages out
  – Sending (parallelizable)
    • Copy sending messages from the user buffer
    • Issue the RDMA operation

[Figure: timeline of P0 and P1 exchanging SYNC messages; each process enqueues messages into its SendQ, sends some immediately, and drains the SendQ inside the wait progress, where the send and receive steps are parallelizable.]

32

Page 33: MT-MPI: Multithreaded MPI for Many-Core Environments

Parallel Eager Protocol in IB netmod

• Parallel policy: parallelize a large set of messages being sent to different connections (see the sketch after this slide)
  – Different QPs: sending processing
    • Copy sending messages from the user buffer
    • Issue the RDMA operation
  – Different RDMA buffers: receiving processing
    • Poll the RDMA buffer
    • Copy received messages out

33
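As a rough illustration of this policy (not the actual MPICH netmod code; conn_t, the queue layout, and flush_one_connection are invented for the sketch), the SendQ drain could be parallelized over connections like this:

/* Sketch: drain per-connection send queues in parallel with the idle
 * threads. Each connection (QP) is handled by at most one thread, so
 * a queue needs no lock while it is being flushed. */
#include <omp.h>
#include <stdio.h>

typedef struct { int qp_id; int pending; } conn_t;   /* placeholder connection */

/* Placeholder for the per-connection work: copy messages from the user
 * buffer and post the RDMA operations for this QP. */
static void flush_one_connection(conn_t *c)
{
    printf("thread %d flushing QP %d (%d msgs)\n",
           omp_get_thread_num(), c->qp_id, c->pending);
    c->pending = 0;
}

void parallel_sendq_flush(conn_t *conns, int nconns, int nidle)
{
    #pragma omp parallel for num_threads(nidle) schedule(dynamic)
    for (int i = 0; i < nconns; i++)
        flush_one_connection(&conns[i]);   /* independent per-QP work */
}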

Page 34: MT-MPI: Multithreaded MPI for Many-Core Environments

Example: One-sided Communication

• Feature
  – A large amount of small non-blocking RMA operations sent to many targets
  – Wait for ALL completions at the second synchronization call (MPI_Win_fence)
• MPICH implementation
  – Issues all the operations in the second synchronization call

[Figure: an origin process issues PUT 0–5 to two target processes between two MPI_Win_fence calls; two threads (Th 0, Th 1) issue the operations in parallel.]

34
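A minimal fence-based RMA pattern of this kind (standard MPI calls; the window contents, displacement scheme, and function name are placeholders) looks like the following; MPICH queues the puts and issues them at the closing fence.

/* Sketch: many small MPI_Put operations between two fences. */
#include <mpi.h>

void fence_puts_example(MPI_Win win, int nprocs, int myrank)
{
    double val = (double)myrank;

    MPI_Win_fence(0, win);                       /* open the access epoch */
    for (int target = 0; target < nprocs; target++) {
        if (target == myrank) continue;
        /* one small PUT per target, displacement = my rank */
        MPI_Put(&val, 1, MPI_DOUBLE, target,
                (MPI_Aint)myrank, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);                       /* all PUTs complete here */
}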

Page 35: MT-MPI: Multithreaded MPI for Many-Core Environments

Parallel One-sided Communication

• Challenges in parallelism
  – Global SendQ → group messages by target
  – Queue structures → store them in a ring array with head / tail pointers (a sketch of such a queue follows below)
  – Netmod internal messages (SYNC etc.) → enqueue them to another SendQ (SYNC-SendQ)

[Figure: process P0 with per-target send arrays (targets 0–3) and a separate SYNC-SendQ; messages go IN at the tail and OUT at the head.]

35
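For illustration only (the type and field names are invented, not MPICH's), a per-target ring-array send queue with head/tail pointers might look like:

/* Sketch: fixed-size ring array used as a per-target send queue.
 * Head/tail indices let the issuing thread dequeue while the
 * enqueuing path appends, without scanning a linked list. */
#include <stdbool.h>

#define RING_SIZE 256

typedef struct { void *msg; int len; } send_elem_t;

typedef struct {
    send_elem_t elems[RING_SIZE];
    int head;                   /* next element to issue (OUT) */
    int tail;                   /* next free slot (IN) */
} send_ring_t;

static bool ring_enqueue(send_ring_t *r, void *msg, int len)
{
    int next = (r->tail + 1) % RING_SIZE;
    if (next == r->head) return false;          /* ring is full */
    r->elems[r->tail].msg = msg;
    r->elems[r->tail].len = len;
    r->tail = next;
    return true;
}

static bool ring_dequeue(send_ring_t *r, send_elem_t *out)
{
    if (r->head == r->tail) return false;       /* ring is empty */
    *out = r->elems[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    return true;
}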

Page 36: MT-MPI: Multithreaded MPI for Many-Core Environments

Parallel One-sided Communication

• Optimization
  – Every operation is issued through a long critical section → issue all operations together
  – A large number of requests is created → create only one request

[Figure: RMA operations (PUT) are issued from CH3 into the IB netmod's SendQ through a critical section in the MPICH stack (ADI3 / CH3 / nemesis / IB).]

36

Page 37: MT-MPI: Multithreaded MPI for Many-Core Environments

Profiling

• Measured the execution time of the netmod send-side communication processing at the root process (SP)
  – Copy from the user buffer to a preregistered chunk
  – Posting of the operations to the IB network

[Figure: speedup vs. number of threads (1–64) for 1000 / 2000 / 4000 / 8000 / 16000 operations.]

Profile of the experiment issuing 16000 operations:

  Nthreads | Total (s) | SP (s) | SP/Total | Total speedup | SP speedup
  1        | 5.8       | 2.2    | 37.9%    | 1.0           | 1.0
  4        | 4.7       | 1.3    | 27.3%    | 1.2           | 1.7
  16       | 4.0       | 0.4    | 9.8%     | 1.4           | 5.0
  64       | 4.0       | 0.3    | 8.0%     | 1.4           | 6.9

SP shows the expected speedup, but the percentage of time spent in SP becomes smaller and smaller.

37