MPI use C language (1)

Speaker: 呂宗螢   Adviser: Prof. 梁文耀   Date: 2006/10/27

Embedded and Parallel Systems Lab


Outline

MPI Introduction
Steps for writing and running an MPI program
Basic concepts of writing parallel programs
Some basic MPI functions
The basic structure of an MPI function

MPI_COMM_WORLD
MPI point-to-point communication

Blocking Non-blocking

Message passing rules


Outline

Communication modes: Standard, Synchronous, Buffered, Ready

Blocking Message Passing: Hello.c

Non-Blocking Message Passing: Wait, Test, Isend-Irecv.c


MPI Introduction

MPI, the Message Passing Interface, is a standard for parallel computing environments that defines how processes exchange messages: not only between processes within a single computer, but also between processes on different computers across a network.

Its goal is to provide a portable and efficient message-passing standard, and it covers both distributed-memory and shared-memory architectures.

PVM (Parallel Virtual Machine) offers similar functionality, but MPI is now the more widely used of the two.

MPICH2


Steps for writing and running an MPI program

1. Start the MPI environment: mpdboot -n 4 -f mpd.hosts

2. Write the MPI program: vi hello.c

3. Compile: mpicc hello.c -o hello.o

4. Run the program: mpiexec -n 4 ./hello.o

5. Shut down MPI: mpdallexit


Basic concepts of writing parallel programs

Parallelization must be planned by the programmer.





Note that simply parallelizing a program does not by itself make it more efficient.


MPI program basic structure

#include "mpi.h"

MPI_Init();

/* do some work or call MPI functions, e.g. MPI_Send() / MPI_Recv() */

MPI_Finalize();


Some basic MPI functions

int MPI_Init(int *argc, char ***argv)
  Must be called before any other MPI function.
  Initializes MPI_COMM_WORLD and MPI_COMM_SELF.
  The command-line arguments (argc, argv) are made available to every process.

int MPI_Comm_rank(MPI_Comm comm, int *rank)
  Returns the calling process's own process ID (rank = process ID).

double MPI_Wtime()
  Returns the current wall-clock time.

int MPI_Finalize()
  Terminates the MPI execution environment; must be called after all work is complete.

int MPI_Abort(MPI_Comm comm, int errorcode)
  Aborts all MPI processes and forces the program to terminate.
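Put together, these basic functions already form a complete MPI program. The sketch below is illustrative rather than from the slides; it assumes an MPICH-style environment and compilation with mpicc:

```c
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);                /* must come before any other MPI call */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */

    t0 = MPI_Wtime();
    printf("Hello from rank %d\n", rank);  /* each process prints its own rank */
    t1 = MPI_Wtime();

    printf("rank %d: elapsed %f seconds\n", rank, t1 - t0);

    MPI_Finalize();                        /* must come after all MPI work */
    return 0;
}
```

Run with, for example, mpiexec -n 4 ./a.out; each of the four processes executes the same main() with a different rank.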


MPI_COMM_WORLD

MPI_COMM_WORLD is a communicator. Its main purpose is to identify all of the processes that have joined the parallel environment; the function calls processes use to communicate must take it as a parameter so that processes can find one another.


The basic structure of an MPI function

The return value indicates whether the MPI function completed successfully; the only exceptions are double MPI_Wtime() and double MPI_Wtick().

int result;
result = MPI_Function();   /* generic pattern */

function      int MPI_Comm_size(MPI_Comm comm, int *size)

purpose       gets the total number of processes in the communicator

parameters    comm: IN, the communicator (e.g. MPI_COMM_WORLD)
              size: OUT, the total number of processes

return value  int: returns MPI_SUCCESS (0) on success
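As a sketch of this calling convention (illustrative code, not from the slides, assuming an MPI environment), the return value can be checked and the job aborted on failure:

```c
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int size, result;

    MPI_Init(&argc, &argv);

    /* every MPI function except MPI_Wtime()/MPI_Wtick() returns an error code */
    result = MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (result != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Comm_size failed (code %d)\n", result);
        MPI_Abort(MPI_COMM_WORLD, result);  /* force all processes to exit */
    }
    printf("communicator holds %d process(es)\n", size);

    MPI_Finalize();
    return 0;
}
```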


Some MPI function error return values

MPI_SUCCESS      the MPI function completed successfully, with no error

MPI_ERR_COMM     invalid communicator, or the communicator is NULL

MPI_ERR_COUNT    invalid count argument

MPI_ERR_TYPE     invalid datatype, e.g. a datatype not defined by MPI

MPI_ERR_BUFFER   invalid buffer

MPI_ERR_ROOT     invalid root: the given rank (ID) is not in the communicator
                 (a valid root satisfies 0 <= root < communicator size)


MPI point-to-point communication

Blocking
  Send     MPI_Send(buffer, count, datatype, dest, tag, comm)
  Receive  MPI_Recv(buffer, count, datatype, source, tag, comm, status)

Non-Blocking
  Send     MPI_Isend(buffer, count, datatype, dest, tag, comm, request)
  Receive  MPI_Irecv(buffer, count, datatype, source, tag, comm, request)


MPI_Status

typedef struct MPI_Status {
    int count;
    int cancelled;
    int MPI_SOURCE;   // rank of the message's sender
    int MPI_TAG;      // tag attached by the sender
    int MPI_ERROR;    // error code
} MPI_Status;


How MPICH implements message passing


Blocking


Non-Blocking


Message passing rules

When sending and receiving messages, MPI guarantees ordering:

If one sender successfully sends two messages a and b, then receiver B, once it starts receiving, is guaranteed to receive a before b.

If two receives a and b are posted at the same time, they may both match messages from the same sender, but a will always complete before b.

In a multi-threaded program, however, this ordering cannot be guaranteed.

Also, if process 0 sends to process 2 while process 1 also sends to process 2, and process 2 posts only one receive, then only one of the two sends will complete.


MPI datatype          C datatype

MPI_CHAR              signed char
MPI_SHORT             signed short int
MPI_INT               signed int
MPI_LONG              signed long int
MPI_UNSIGNED_CHAR     unsigned char
MPI_UNSIGNED_SHORT    unsigned short int
MPI_UNSIGNED          unsigned int
MPI_UNSIGNED_LONG     unsigned long int
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
MPI_BYTE              8 binary digits
MPI_PACKED            data packed or unpacked with MPI_Pack() / MPI_Unpack()


Communication mode

Standard Synchronous Buffered Ready


Standard mode


Synchronous mode


Buffered mode


Ready mode


Blocking Message Passing

int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)


Blocking Message Passing

int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

int MPI_Buffer_attach(void* buffer, int size)

int MPI_Buffer_detach(void* buffer_addr, int* size)

int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
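These buffered-mode calls are meant to be used together. The following is an illustrative sketch (not from the slides, assuming an MPI environment); the buffer sizing follows the rule that the attached buffer must hold the message plus MPI_BSEND_OVERHEAD:

```c
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, data = 42;
    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;  /* message + bookkeeping */
    void *buffer = malloc(bufsize);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Buffer_attach(buffer, bufsize);  /* hand MPI the send buffer */

    if (rank == 0) {
        /* MPI_Bsend returns as soon as the message is copied into the buffer */
        MPI_Bsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status stat;
        int received, count;
        MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);
        MPI_Get_count(&stat, MPI_INT, &count);  /* how many ints actually arrived */
        printf("rank 1 received %d int(s), value %d\n", count, received);
    }

    MPI_Buffer_detach(&buffer, &bufsize);  /* blocks until buffered sends finish */
    free(buffer);
    MPI_Finalize();
    return 0;
}
```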


First program: Hello.c

Goal: create two processes that use blocking calls to exchange messages with each other, and measure the time spent sending and receiving.


hello.c

#include "mpi.h"
#include <stdio.h>
#include <string.h>   /* for strcpy() */
#define SIZE 20

int main(int argc, char *argv[])
{
    int numtasks, rank, dest, source, rc, count, tag = 1;
    char inmsg[SIZE];
    char outmsg[SIZE];
    double starttime, endtime;
    MPI_Status Stat;
    MPI_Datatype strtype;

    MPI_Init(&argc, &argv);                // initialize the MPI environment
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // get this process's ID

    MPI_Type_contiguous(SIZE, MPI_CHAR, &strtype);  // define a new "string" datatype
    MPI_Type_commit(&strtype);                      // commit the new datatype

    starttime = MPI_Wtime();  // record the current time


hello.c

    if (rank == 0) {
        dest = 1;
        source = 1;
        strcpy(outmsg, "Who are you?");

        // send a message to process 1
        rc = MPI_Send(outmsg, 1, strtype, dest, tag, MPI_COMM_WORLD);
        printf("process %d has sent message: %s\n", rank, outmsg);

        // receive the message from process 1
        rc = MPI_Recv(inmsg, 1, strtype, source, tag, MPI_COMM_WORLD, &Stat);
        printf("process %d has received: %s\n", rank, inmsg);
    } else if (rank == 1) {
        dest = 0;
        source = 0;
        strcpy(outmsg, "I am process 1");
        rc = MPI_Recv(inmsg, 1, strtype, source, tag, MPI_COMM_WORLD, &Stat);
        printf("process %d has received: %s\n", rank, inmsg);
        rc = MPI_Send(outmsg, 1, strtype, dest, tag, MPI_COMM_WORLD);
        printf("process %d has sent message: %s\n", rank, outmsg);
    }


hello.c

    endtime = MPI_Wtime();  // record the finish time

    // use MPI_CHAR to count how much data was actually received
    rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
    printf("Task %d: Received %d char(s) from task %d with tag %d and used time %f\n",
           rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG, endtime - starttime);

    MPI_Type_free(&strtype);  // free the "string" datatype
    MPI_Finalize();           // shut down MPI
}

1. Compile: mpicc hello.c -o hello.o

2. Run the program: mpiexec -n 2 ./hello.o


hello.c output

process 0 has sent message: Who are you?
process 1 has received: Who are you?
process 1 has sent message: I am process 1
Task 1: Received 20 char(s) from task 0 with tag 1 and used time 0.001302
process 0 has received: I am process 1
Task 0: Received 20 char(s) from task 1 with tag 1 and used time 0.002133


Non-blocking Message Passing

int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)

int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)


Wait

int MPI_Wait(MPI_Request *request, MPI_Status *status)

int MPI_Waitall(int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses)

int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index, MPI_Status *status)

int MPI_Waitsome(int incount, MPI_Request *array_of_requests, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)


Test

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)

int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag, MPI_Status *array_of_statuses)

int MPI_Testany(int count, MPI_Request *array_of_requests, int *index, int *flag, MPI_Status *status)

int MPI_Testsome(int incount, MPI_Request *array_of_requests, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)
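A common use of MPI_Test is to overlap computation with communication: keep computing while polling the request until the transfer completes. The sketch below is illustrative (the do_some_work() helper is hypothetical and stands in for any computation that does not touch buf):

```c
#include "mpi.h"

void do_some_work(void);  /* hypothetical: computation independent of buf */

void overlap_recv(int source, int *buf)
{
    MPI_Request req;
    MPI_Status stat;
    int done = 0;

    /* post the receive without blocking */
    MPI_Irecv(buf, 1, MPI_INT, source, 0, MPI_COMM_WORLD, &req);

    /* keep computing while the receive is still in flight */
    while (!done) {
        do_some_work();
        MPI_Test(&req, &done, &stat);  /* done becomes non-zero on completion */
    }
    /* buf is now safe to read */
}
```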


Isend-Irecv.c

Goal: each process receives the messages sent by the previous and the next process ID, and sends a message to its previous and next process, using non-blocking calls; it then tests whether the non-blocking operations have completed.


Isend-Irecv.c

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];
    int flag;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // neighbours in a ring topology, with wrap-around
    prev = rank - 1;
    next = rank + 1;
    if (rank == 0)              prev = numtasks - 1;
    if (rank == (numtasks - 1)) next = 0;


Isend-Irecv.c

    // non-blocking receive from the previous process; store the handle in reqs[0]
    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    // non-blocking receive from the next process
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);

    // non-blocking send to the previous process; store the handle in reqs[2]
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    MPI_Waitall(4, reqs, stats);  // wait for every handle in reqs to complete

    MPI_Test(&reqs[0], &flag, &stats[0]);  // has the first MPI_Irecv completed?
    printf("Process %d: has received data %d from previous process %d\n", rank, buf[0], prev);
    printf("Process %d: has received data %d from next process %d\n", rank, buf[1], next);
    printf("Process %d: test %d\n", rank, flag);

    MPI_Finalize();
}


Isend-Irecv.c output (4 processes)

Process 2: has received data 1 from previous process 1
Process 2: has received data 3 from next process 3
Process 2: test 1
Process 0: has received data 3 from previous process 3
Process 0: has received data 1 from next process 1
Process 0: test 1
Process 1: has received data 0 from previous process 0
Process 1: has received data 2 from next process 2
Process 1: test 1
Process 3: has received data 2 from previous process 2
Process 3: has received data 0 from next process 0
Process 3: test 1


The End

Thank you very much!
