Fundamentals of parallel MPI programming (C/C++)


  • Fundamentals of parallel MPI programming for C/C++

    Dang Nguyen Phuong ([email protected])

    November 23, 2013

    Contents

    1 Introduction

    2 MPI
    2.1 Overview
    2.2 Installing MPICH2
    2.3 Compiling and running programs with MPICH2

    3 Basic structure of an MPI program
    3.1 Program structure
    3.2 Basic concepts
    3.3 Hello world example
    3.4 Message-passing example

    4 MPI routines
    4.1 MPI environment management routines
    4.2 Data types
    4.3 Message-passing mechanisms
    4.4 Blocking message-passing routines
    4.5 Non-blocking message-passing routines
    4.6 Collective communication routines

    5 Some examples
    5.1 Computing the number pi
    5.2 Matrix multiplication

    References

  • Dang Nguyen Phuong, NMTP internal document

    1 Introduction

    Nowadays most computational programs are still designed to run on a single core, i.e. as serial computations. To run a program efficiently on a computer cluster or on a multi-core CPU, we need to parallelize it. The advantage of parallel computation is the ability to process many tasks at the same time. Parallel programming can be done by calling library functions (e.g. mpi.h) or through features integrated into data-parallel compilers, such as OpenMP in Fortran F90/F95 compilers.

    Parallel programming covers the design and implementation of programs so that they can run on parallel computer systems; in other words, it means parallelizing serial programs in order to solve a larger problem, to reduce execution time, or both. It focuses on decomposing the overall problem into smaller subtasks, assigning those subtasks to the processors, and synchronizing them to obtain the final result. The most important principle here is concurrency: handling many tasks or processes at the same time. Therefore, before parallelizing, we must first determine whether the problem can be parallelized at all (based on its data or its functionality). There are two main approaches to parallel programming:

    Implicit parallelism: the compiler or some other program automatically distributes the work to the processors.

    Explicit parallelism: the programmer partitions the program himself so that it can be executed in parallel.

    In addition, the programmer must also take load balancing into account: the processors should perform roughly equal amounts of work, and if one processor is overloaded, some of its work should be moved to a processor with a lighter load.

    A parallel programming model is a set of software techniques for expressing parallel algorithms and running applications on parallel systems. Such a model covers applications, languages, compilers, libraries, communication systems, and parallel I/O. In practice, no single parallel machine or work-distribution scheme is effective for every problem, so the programmer has to choose the right model, or mix several models, to develop a parallel application on a particular system.

    There are many parallel programming models in use today, such as multi-threading, message passing, data parallelism, and hybrid models. They can be classified along two criteria: process interaction and problem decomposition. By the first criterion there are two main models, shared memory and message passing. By the second there are also two, task parallelism and data parallelism.

    In the shared-memory model, all processes access common data through a shared region of memory.

    In the message-passing model, each process has its own local memory, and processes exchange data with each other through send and receive operations.

    Task parallelism distributes different tasks to different compute nodes; the data used by the tasks may be exactly the same.



    Data parallelism distributes data to different compute nodes to be processed simultaneously; the tasks at the nodes may be exactly the same.

    The message-passing model is one of the most widely used models in parallel computing today, and it is commonly applied to distributed systems. Its characteristics are:

    Threads use their own local memory during the computation.

    Multiple threads may share the same physical resource.

    Threads exchange data by sending and receiving messages.

    Data transfer usually requires coordination between the threads: for example, a send operation in one thread must be matched by a receive operation in another.

    This document aims to provide the basic knowledge needed to start writing parallel C/C++ programs using the message-passing mechanism with MPI-standard libraries, so that C/C++ programs can be executed on multi-core machines or computer clusters to improve computational performance. Throughout this document, the MPICH2 library is used to compile C/C++ programs on Linux.

    2 MPI

    2.1 Overview

    The message-passing model is one of the oldest and most widely used models in parallel programming. The two most popular toolkits for this model are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). These toolkits provide functions for exchanging information between the computational processes of a parallel computer system.

    MPI (Message Passing Interface) is a standard that describes the features and syntax of a parallel programming library. It was released in 1994 by the MPIF (Message Passing Interface Forum) and upgraded to the MPI-2 standard in 2001. Many libraries implement this standard, for example MPICH, OpenMPI, and LAM/MPI.

    MPICH2 is a free library of MPI-standard functions for message-passing parallel programming. It supports several programming languages (C/C++, Fortran, Python, ...) and runs on many operating systems (Windows, Linux, MacOS, ...).

    2.2 Installing MPICH2

    The MPICH2 package can be installed on all machines with the following command

    $ sudo apt-get install mpich2

    After MPICH2 has been installed, it must be configured before running parallel jobs. For versions 1.2.x and earlier the default process manager is MPD; from version 1.3.x onward the process manager is Hydra. The two managers are configured as follows:



    MPD. Create the two files mpd.hosts and .mpd.conf in the home directory (e.g. /home/phuong). The file mpd.hosts lists the names of the machines in the system, for example

    master
    node1
    node2
    node3

    For the .mpd.conf file, we first need to restrict its access permissions with

    $ chmod 600 .mpd.conf

    Then open the file and add the following line

    secretword=random_text_here

    To start MPD, type the following command on the master machine

    $ mpdboot -n N

    where N is the number of machines in the system.

    Hydra. Similar to MPD but simpler: we only need to create a single file named hosts in the directory /home/phuong containing the names of all machines in the system

    master
    node1
    node2
    node3

    2.3 Compiling and running programs with MPICH2

    Compiling. To compile an application with MPICH2, one of the following compiler wrappers can be used

    Language   Compiler wrapper
    C          mpicc
    C++        mpicxx, mpic++, mpiCC
    Fortran    mpif77, mpif90, mpifort

    For example, to compile an application written in C/C++, we can type

    $ mpicc -o helloworld helloworld.c

    where helloworld.c is the source file of the program and the -o option sets the name of the compiled executable, in this case helloworld.

    Running. If the MPICH2 version in use relies on the MPD process manager, then before running a program we must start MPD with the mpdboot command mentioned above, or with

    mpd &

    A program compiled with MPI can then be executed with

    mpirun -np N tenchuongtrinh

    or

    mpiexec -n N tenchuongtrinh



    where N is the number of parallel tasks to run and tenchuongtrinh is the name of the application to execute.

    Example:

    $ mpd &
    $ mpirun -np 8 helloworld

    3 Basic structure of an MPI program

    3.1 Program structure

    The basic structure of an MPI program is as follows:

    Declare headers, variables, prototypes, ...

    Begin program
    ...
    Initialize the MPI environment
    ...
    Terminate the MPI environment
    ...
    End program

    3.2 Basic concepts

    A parallel MPI program usually contains more than one executing task, also called a process. Tasks are distinguished by a task index, called the rank or task ID, an integer from 0 to (N-1) where N is the total number of MPI tasks running the program. For programs following the master/slave scheme, one master task controls the others, called slave (worker) tasks; the master usually has rank 0 and the workers have ranks 1 to (N-1). A set of MPI tasks running the same program is called a group, and a group of tasks that can exchange information with each other is called a communicator. At program start, the default communicator containing all executing tasks is MPI_COMM_WORLD.

    MPI tasks communicate with each other by sending and receiving messages. Each message has two parts, the data and a header; the header consists of:

    the rank of the sending task

    the rank of the receiving task

    the tag of the message

    the index of the communicator



    3.3 Hello world example

    To get familiar with writing an MPI program, we start with a simple example: a Hello world program in C. The steps are as follows:

    First, create a file named hello.c and open it with a plain-text editor (e.g. gedit, emacs, vim, ...)

    Declare the program and add the MPI header

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
    }

    Note that the MPI header (mpi.h) must be included in the file in order to call MPI routines.

    Initialize the MPI environment

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
    }

    The MPI_Init routine starts the MPI environment for executing parallel tasks; it returns an integer status value from the initialization.

    Call the routine that reports the number of parallel tasks

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int ntasks;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    }

    MPI_Comm_size returns the number of parallel tasks in the variable ntasks. The argument MPI_COMM_WORLD refers to the global communicator and is an integer constant.

    Call the routine that determines the rank of the task

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int ntasks, mytask;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);



        MPI_Comm_rank(MPI_COMM_WORLD, &mytask);
    }

    MPI_Comm_rank returns the rank of the task in the variable mytask; this value runs from 0 to ntasks-1 and is used to identify the task when coordinating sends and receives.

    Print the output to the screen

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int ntasks, mytask;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &mytask);
        printf("Hello world from task %d of %d\n", mytask, ntasks);
    }

    Terminate the MPI environment

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int ntasks, mytask;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &mytask);
        printf("Hello world from task %d of %d\n", mytask, ntasks);
        MPI_Finalize();
        return 0;
    }

    MPI_Finalize closes the MPI environment; however, parallel tasks that are still executing will continue to run. Any MPI routine called after MPI_Finalize has no effect and raises an error.

    We can also rewrite this Hello world program in C++, creating a new file hello.cc with the following content

    #include <iostream>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int ntasks, mytask;

        MPI::Init(argc, argv);
        ntasks = MPI::COMM_WORLD.Get_size();
        mytask = MPI::COMM_WORLD.Get_rank();
        std::cout << "Hello world from task " << mytask
                  << " of " << ntasks << std::endl;
        MPI::Finalize();
        return 0;
    }

    Note that the use of MPI calls in C and in C++ differs in two main respects:

    The C++ functions live in the MPI namespace.

    The arguments of the C++ functions are references instead of pointers as in the C functions. For example, the argc and argv arguments of MPI_Init are passed with a leading & in C but not in C++.

    3.4 Message-passing example

    The Hello world example introduced four basic MPI routines. In practice, many parallel MPI programs can be built with only six basic routines; besides the four above, we also need MPI_Send to send messages and MPI_Recv to receive messages between tasks. In C these two routines have the following signatures:

    MPI_Send (&buffer,count,datatype,destination,tag,communicator)

    MPI_Recv (&buffer,count,datatype,source,tag,communicator,&status)

    where
    buffer        array of data to send/receive
    count         number of elements in the array
    datatype      data type (e.g. MPI_INT, MPI_FLOAT, ...)
    destination   rank of the destination task (within the communicator)
    source        rank of the source task (within the communicator)
    tag           message tag (an integer)
    communicator  the set of tasks
    status        status of the message
    ierror        error code

    To better understand how these two routines are used, consider the following fixed-loop example (fixed_loop). In this example, MPI_Send is used to send the number of completed loop iterations from each worker task (ranks 1 to N-1) to the master task (rank 0). MPI_Recv is called N-1 times at the master to receive the N-1 values sent by the N-1 worker tasks. The first declarations are similar to those in the Hello world example

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int i, rank, ntasks, count, start, stop, nloops, total_nloops;

        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    }

    Suppose we want to execute a loop 1000 times; each task then performs 1000/ntasks iterations, where ntasks is the total number of tasks. We use the rank of each task to mark the segment of the loop that the task handles



    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int i, rank, ntasks, count, start, stop, nloops, total_nloops;

        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        count = 1000 / ntasks;
        start = rank * count;
        stop  = start + count;

        nloops = 0;
        for (i = start; i < stop; i++) {
            nloops++;
        }
    }

    If the task is the master, it receives the nloops values sent by the worker tasks and accumulates them

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int i, rank, ntasks, count, start, stop, nloops, total_nloops;

        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        count = 1000 / ntasks;
        start = rank * count;
        stop  = start + count;

        nloops = 0;
        for (i = start; i < stop; i++) {
            nloops++;
        }

        if (rank != 0) {
            /* worker tasks send their loop count to the master */
            MPI_Send(&nloops, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            /* the master accumulates its own count plus every worker's */
            total_nloops = nloops;
            for (i = 1; i < ntasks; i++) {
                MPI_Recv(&nloops, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                total_nloops += nloops;
            }

            /* 1000/ntasks truncates, so the master runs the leftover iterations */
            nloops = 0;
            for (i = total_nloops; i < 1000; i++) {
                nloops++;
            }
        }

        MPI_Finalize();
        return 0;
    }


    4 MPI routines

    4.1 MPI environment management routines

    These routines set up the environment for executing MPI calls, query the rank of a task, the MPI library, and so on.

    MPI_Init starts the MPI environment

    MPI_Init (&argc,&argv)
    Init (argc,argv)

    MPI_Comm_size returns the total number of MPI tasks executing in the communicator (e.g. in MPI_COMM_WORLD)

    MPI_Comm_size (comm,&size)
    Comm::Get_size()

    MPI_Comm_rank returns the rank of the task. Each task is initially assigned an integer from 0 to (N-1), where N is the total number of tasks in the communicator MPI_COMM_WORLD.

    MPI_Comm_rank (comm,&rank)
    Comm::Get_rank()

    MPI_Abort terminates all MPI processes

    MPI_Abort (comm,errorcode)
    Comm::Abort(errorcode)

    MPI_Get_processor_name returns the name of the processor

    MPI_Get_processor_name (&name,&resultlength)
    Get_processor_name(&name,resultlen)

    MPI_Initialized returns 1 if MPI_Init() has been called, and 0 otherwise

    MPI_Initialized (&flag)
    Initialized (&flag)

    MPI_Wtime returns the elapsed wall-clock time (in seconds) on the processor

    MPI_Wtime ()
    Wtime ()

    MPI_Wtick returns the resolution (in seconds) of MPI_Wtime()

    MPI_Wtick ()
    Wtick ()

    MPI_Finalize terminates the MPI environment

    MPI_Finalize ()
    Finalize ()

    Example:



    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int numtasks, rank, len, rc;
        char hostname[MPI_MAX_PROCESSOR_NAME];

        rc = MPI_Init(&argc, &argv);
        if (rc != MPI_SUCCESS) {
            printf("Error starting MPI program. Terminating.\n");
            MPI_Abort(MPI_COMM_WORLD, rc);
        }

        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(hostname, &len);
        printf("Number of tasks= %d My rank= %d Running on %s\n",
               numtasks, rank, hostname);

        /* do some work */

        MPI_Finalize();
    }

    4.2 Data types

    Some basic MPI data types are listed in the following table

    Name                  Data type
    MPI_CHAR              signed char
    MPI_WCHAR             wide char
    MPI_SHORT             signed short
    MPI_INT               signed int
    MPI_LONG              signed long
    MPI_LONG_LONG         signed long long
    MPI_UNSIGNED_CHAR     unsigned char
    MPI_UNSIGNED_SHORT    unsigned short
    MPI_UNSIGNED          unsigned int
    MPI_UNSIGNED_LONG     unsigned long
    MPI_FLOAT             float
    MPI_DOUBLE            double
    MPI_LONG_DOUBLE       long double
    MPI_C_COMPLEX         float _Complex
    MPI_C_DOUBLE_COMPLEX  double _Complex
    MPI_C_BOOL            bool
    MPI_INT8_T            int8_t
    MPI_INT16_T           int16_t
    MPI_INT32_T           int32_t
    MPI_INT64_T           int64_t
    MPI_UINT8_T           uint8_t
    MPI_UINT16_T          uint16_t
    MPI_UINT32_T          uint32_t
    MPI_UINT64_T          uint64_t
    MPI_BYTE              byte
    MPI_PACKED            packed data

    Users can also build their own data structures from these basic types. Such user-defined structured types are called derived data types. The routines for defining new data structures include:

    MPI_Type_contiguous creates a new type by repeating the old type count times.

    MPI_Type_contiguous (count,oldtype,&newtype)
    Datatype::Create_contiguous(count)



    MPI_Type_vector is similar to contiguous but with a fixed stride: the new type is formed by repeating a sequence of equally sized blocks of the old type at periodically spaced positions.

    MPI_Type_vector (count,blocklength,stride,oldtype,&newtype)
    Datatype::Create_vector(count,blocklength,stride)

    MPI_Type_indexed forms the new type as a sequence of blocks of the old type, where each block may contain a different number of copies of the old type.

    MPI_Type_indexed (count,blocklens[],offsets[],oldtype,&newtype)
    Datatype::Create_indexed(count,blocklens[],offsets[])

    MPI_Type_struct is similar to the above, but each block may be built from different old types.

    MPI_Type_struct (count,blocklens[],offsets[],oldtypes,&newtype)
    Datatype::Create_struct(count,blocklens[],offsets[],oldtypes[])

    Figure 1 shows some examples of these ways of creating new data structures. When transmitting data structures of mixed types, the MPI_Pack and MPI_Unpack routines can be used to pack the data before sending.

    Figure 1: Examples of creating new derived data types

    MPI_Type_extent returns the size (in bytes) of the data type

    MPI_Type_extent (datatype,&extent)
    Datatype::Get_extent(lb,extent)

    MPI_Type_commit registers the newly defined data type with the system

    MPI_Type_commit (&datatype)
    Datatype::Commit()



    MPI_Type_free releases the data type

    MPI_Type_free (&datatype)
    Datatype::Free()

    Example: creating a vector data type

    #include <stdio.h>
    #include <mpi.h>

    #define SIZE 4

    int main(int argc, char *argv[])
    {
        int numtasks, rank, source = 0, tag = 1, i;
        float a[SIZE][SIZE] =
            { 1.0,  2.0,  3.0,  4.0,
              5.0,  6.0,  7.0,  8.0,
              9.0, 10.0, 11.0, 12.0,
             13.0, 14.0, 15.0, 16.0};
        float b[SIZE];

        MPI_Status stat;
        MPI_Datatype columntype;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

        MPI_Type_vector(SIZE, 1, SIZE, MPI_FLOAT, &columntype);
        MPI_Type_commit(&columntype);

        if (numtasks == SIZE) {
            if (rank == 0) {
                /* send column i of the matrix to task i */
                for (i = 0; i < numtasks; i++)
                    MPI_Send(&a[0][i], 1, columntype, i, tag, MPI_COMM_WORLD);
            }
            MPI_Recv(b, SIZE, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &stat);
            printf("rank= %d  b= %3.1f %3.1f %3.1f %3.1f\n",
                   rank, b[0], b[1], b[2], b[3]);
        } else
            printf("Must specify %d processors. Terminating.\n", SIZE);

        MPI_Type_free(&columntype);
        MPI_Finalize();
    }


    4.3 Message-passing mechanisms

    Point-to-point communication in MPI can take place in several modes:

    Blocking: a send/receive call returns only once it is safe to reuse (for a send) or read (for a receive) the message buffer.

    Non-blocking: send/receive calls return immediately, regardless of whether the data have actually been completely sent or received. Whether the transfer has really happened is checked later with other routines of the MPI library.

    Synchronous: synchronous sending; the send can only complete once the matching receive has started.

    Buffered: a buffer is created to hold the data before they are sent; the user may overwrite the memory region holding the data without losing the data waiting to be sent.

    Ready: the send can only start when the matching receive is already posted.

    The table below summarizes the point-to-point communication modes and the corresponding send/receive routines; they are described in detail in the following sections

    Mode              Completion condition                        Blocking   Non-blocking
    Send              Message has been sent                       MPI_Send   MPI_Isend
    Receive           Message has been received                   MPI_Recv   MPI_Irecv
    Synchronous send  When the receive has started                MPI_Ssend  MPI_Issend
    Buffered send     Always completes, whether or not the
                      receive has started                         MPI_Bsend  MPI_Ibsend
    Ready send        Always completes, whether or not the
                      receive has completed                       MPI_Rsend  MPI_Irsend

    Collective communication involves all tasks within the scope of a communicator. The communication patterns of this mechanism (see Figure 2) include:

    Broadcast: the same data are sent from the root task to every other task in the communicator.

    Scatter: different data are sent from the root task to the other tasks in the communicator.

    Gather: different data are collected by the root task from all the other tasks in the communicator.

    Reduce: collects data from every task, reduces them, and stores the result in the root task or in all tasks.

    Figure 2: Illustration of the collective communication patterns



    4.4 Blocking message-passing routines

    Commonly used routines for blocking message passing include:

    MPI_Send sends basic information

    MPI_Send (&buf,count,datatype,dest,tag,comm)
    Comm::Send(&buf,count,datatype,dest,tag)

    MPI_Recv receives basic information

    MPI_Recv (&buf,count,datatype,source,tag,comm,&status)
    Comm::Recv(&buf,count,datatype,source,tag,status)

    MPI_Ssend sends synchronously: the call waits until the message has been received (the message is held until the send buffer can be reused and the destination process has started receiving)

    MPI_Ssend (&buf,count,datatype,dest,tag,comm)
    Comm::Ssend(&buf,count,datatype,dest,tag)

    MPI_Bsend creates a buffer in which the data are stored until they are sent; the call completes as soon as the data have been stored in the buffer.

    MPI_Bsend (&buf,count,datatype,dest,tag,comm)
    Comm::Bsend(&buf,count,datatype,dest,tag)

    MPI_Buffer_attach allocates buffer space for messages used by MPI_Bsend()

    MPI_Buffer_attach (&buffer,size)
    Attach_buffer(&buffer,size)

    MPI_Buffer_detach releases the buffer space used by MPI_Bsend()

    MPI_Buffer_detach (&buffer,size)
    Detach_buffer(&buffer,size)

    MPI_Rsend sends in ready mode; it should only be used when the programmer is certain that the matching receive has already been posted.

    MPI_Rsend (&buf,count,datatype,dest,tag,comm)
    Comm::Rsend(&buf,count,datatype,dest,tag)

    MPI_Sendrecv sends a message and is ready to receive a message from another task

    MPI_Sendrecv (&sendbuf,sendcount,sendtype,dest,sendtag,
                  &recvbuf,recvcount,recvtype,source,recvtag,comm,&status)
    Comm::Sendrecv(&sendbuf,sendcount,sendtype,dest,sendtag,
                   &recvbuf,recvcount,recvtype,source,recvtag,status)

    MPI_Wait waits until the send and receive operations have completed

    MPI_Wait (&request,&status)
    Request::Wait(status)



    MPI_Probe performs a blocking test for a message

    MPI_Probe (source,tag,comm,&status)
    Comm::Probe(source,tag,status)

    Example:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int numtasks, rank, dest, source, rc, count, tag = 1;
        char inmsg, outmsg = 'x';
        MPI_Status Stat;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            dest = 1;
            source = 1;
            rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
            rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD,
                          &Stat);
        } else if (rank == 1) {
            dest = 0;
            source = 0;
            rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD,
                          &Stat);
            rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        }

        rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
        printf("Task %d: Received %d char(s) from task %d with tag %d\n",
               rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG);

        MPI_Finalize();
    }

    4.5 Non-blocking message-passing routines

    Commonly used routines for non-blocking message passing include:

    MPI_Isend performs a non-blocking send, identifying a region of memory to act as a send buffer.

    MPI_Isend (&buf,count,datatype,dest,tag,comm,&request)
    Request Comm::Isend(&buf,count,datatype,dest,tag)

    MPI_Irecv performs a non-blocking receive, identifying a region of memory to act as a receive buffer.

    MPI_Irecv (&buf,count,datatype,source,tag,comm,&request)
    Request Comm::Irecv(&buf,count,datatype,source,tag)



    MPI_Issend performs a non-blocking synchronous send.

    MPI_Issend (&buf,count,datatype,dest,tag,comm,&request)
    Request Comm::Issend(&buf,count,datatype,dest,tag)

    MPI_Ibsend performs a non-blocking buffered send.

    MPI_Ibsend (&buf,count,datatype,dest,tag,comm,&request)
    Request Comm::Ibsend(&buf,count,datatype,dest,tag)

    MPI_Irsend performs a non-blocking ready-mode send.

    MPI_Irsend (&buf,count,datatype,dest,tag,comm,&request)
    Request Comm::Irsend(&buf,count,datatype,dest,tag)

    MPI_Test checks the completion status of the non-blocking send and receive routines Isend() and Irecv(). The request argument is the request variable used in the send or receive call; the flag argument returns 1 if the operation has completed and 0 otherwise.

    MPI_Test (&request,&flag,&status)
    Request::Test(status)

    MPI_Iprobe performs a non-blocking test for a message

    MPI_Iprobe (source,tag,comm,&flag,&status)
    Comm::Iprobe(source,tag,status)

    Example:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
        MPI_Request reqs[4];
        MPI_Status stats[4];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* neighbours in a ring topology */
        prev = rank - 1;
        next = rank + 1;
        if (rank == 0) prev = numtasks - 1;
        if (rank == (numtasks - 1)) next = 0;

        MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);

        MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

        { /* do some work */ }

        MPI_Waitall(4, reqs, stats);

        MPI_Finalize();
    }

    4.6 Collective communication routines

    Commonly used routines for collective communication include:

    MPI_Barrier is a synchronization (barrier) operation: a task reaching the barrier must wait until all other tasks on the same communicator have reached it (see Figure 3).

    MPI_Barrier (comm)
    Intracomm::Barrier()

    Figure 3: Illustration of the barrier operation

    MPI_Bcast sends a copy of a buffer of size count from the root task to every other process in the same communicator.

    MPI_Bcast (&buffer,count,datatype,root,comm)
    Intracomm::Bcast(&buffer,count,datatype,root)

    MPI_Scatter distributes the send buffer over all tasks; the buffer is divided into sendcnt pieces.

    MPI_Scatter (&sendbuf,sendcnt,sendtype,&recvbuf,recvcnt,recvtype,root,comm)
    Intracomm::Scatter(&sendbuf,sendcnt,sendtype,&recvbuf,recvcnt,recvtype,root)

    MPI_Gather assembles a receive buffer from the pieces of data gathered from all tasks.

    MPI_Gather (&sendbuf,sendcnt,sendtype,&recvbuf,recvcnt,recvtype,root,comm)
    Intracomm::Gather(&sendbuf,sendcnt,sendtype,&recvbuf,recvcnt,recvtype,root)

    MPI_Allgather is like MPI_Gather but copies the assembled buffer to all tasks.

    MPI_Allgather (&sendbuf,sendcnt,sendtype,&recvbuf,recvcount,recvtype,comm)
    Intracomm::Allgather(&sendbuf,sendcnt,sendtype,&recvbuf,recvcnt,recvtype)

    MPI_Reduce applies a reduction operator (the op argument) across all tasks and stores the result in a single task.

    MPI_Reduce (&sendbuf,&recvbuf,count,datatype,op,root,comm)
    Intracomm::Reduce(&sendbuf,&recvbuf,count,datatype,op,root)

    The reduction operators are: MPI_MAX (maximum), MPI_MIN (minimum), MPI_SUM (sum), MPI_PROD (product), MPI_LAND (logical AND), MPI_BAND (bitwise AND), MPI_LOR (logical OR), MPI_BOR (bitwise OR), MPI_LXOR (logical XOR), MPI_BXOR (bitwise XOR), MPI_MAXLOC (maximum value and its location), MPI_MINLOC (minimum value and its location).

    MPI_Allreduce is like MPI_Reduce but stores the result in all tasks.

    MPI_Allreduce (&sendbuf,&recvbuf,count,datatype,op,comm)
    Intracomm::Allreduce(&sendbuf,&recvbuf,count,datatype,op)

    MPI_Reduce_scatter is equivalent to applying MPI_Reduce followed by MPI_Scatter.

    MPI_Reduce_scatter (&sendbuf,&recvbuf,recvcount,datatype,op,comm)
    Intracomm::Reduce_scatter(&sendbuf,&recvbuf,recvcount[],datatype,op)

    MPI_Alltoall is equivalent to applying MPI_Scatter followed by MPI_Gather.

    MPI_Alltoall (&sendbuf,sendcount,sendtype,&recvbuf,recvcnt,recvtype,comm)
    Intracomm::Alltoall(&sendbuf,sendcount,sendtype,&recvbuf,recvcnt,recvtype)

    MPI_Scan performs a prefix (partial) reduction: task i receives the reduction of the values from tasks 0 through i.

    MPI_Scan (&sendbuf,&recvbuf,count,datatype,op,comm)
    Intracomm::Scan(&sendbuf,&recvbuf,count,datatype,op)

    Figure 4: Illustration of some collective communication routines

    Example:

    #include <stdio.h>
    #include <mpi.h>

    #define SIZE 4

    int main(int argc, char *argv[])
    {
        int numtasks, rank, sendcount, recvcount, source;
        float sendbuf[SIZE][SIZE] = {
            { 1.0,  2.0,  3.0,  4.0},
            { 5.0,  6.0,  7.0,  8.0},
            { 9.0, 10.0, 11.0, 12.0},
            {13.0, 14.0, 15.0, 16.0} };
        float recvbuf[SIZE];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

        if (numtasks == SIZE) {
            source = 1;
            sendcount = SIZE;
            recvcount = SIZE;
            MPI_Scatter(sendbuf, sendcount, MPI_FLOAT, recvbuf, recvcount,
                        MPI_FLOAT, source, MPI_COMM_WORLD);
            printf("rank= %d  Results: %f %f %f %f\n", rank, recvbuf[0],
                   recvbuf[1], recvbuf[2], recvbuf[3]);
        } else
            printf("Must specify %d processors. Terminating.\n", SIZE);

        MPI_Finalize();
    }

    5 Some examples

    5.1 Computing the number pi

    In this example we parallelize a program that computes the number pi. The value of pi can be defined by the integral

    pi = integral from 0 to 1 of f(x) dx ,  with f(x) = 4/(1 + x^2)      (1)

    This integral can be approximated numerically by the midpoint rule

    pi ~ (1/n) * sum_{i=1..n} f(x_i) ,  with x_i = (i - 1/2)/n           (2)

    This approximation is straightforward to express in C

    h = 1.0 / (double) n;
    sum = 0.0;
    for (i = 1; i <= n; i++) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;


    vi myid l ch s ca tc v ang thc thi.

    Kt qu chy t cc tc v s c tnh tng v lu tc v ch thng qua lnh MPI_Reduce

    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    The program code after adding the MPI parts:

    #include "mpi.h"
    #include <stdio.h>
    #include <math.h>

    int main(int argc, char *argv[])
    {
        int n, myid, numprocs, i;
        double mypi, pi, h, sum, x;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        while (1) {
            if (myid == 0) {
                printf("Enter the number of intervals: (0 quits) ");
                scanf("%d", &n);
            }
            MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
            if (n == 0)
                break;
            else {
                h = 1.0 / (double) n;
                sum = 0.0;
                for (i = myid + 1; i <= n; i += numprocs) {
                    x = h * ((double) i - 0.5);
                    sum += 4.0 / (1.0 + x * x);
                }
                mypi = h * sum;
                MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0,
                           MPI_COMM_WORLD);
                if (myid == 0)
                    printf("pi is approximately %.16f\n", pi);
            }
        }
        MPI_Finalize();
        return 0;
    }

    5.2 Example: matrix multiplication

    In this example we build a program that computes the product of two matrices in parallel. Suppose matrix C is the product of the two matrices A and B; they can be declared in C as follows:

    #define NRA 62
    #define NCA 15
    #define NCB 7

    double a[NRA][NCA], b[NCA][NCB], c[NRA][NCB];

    where NRA and NCA are respectively the number of rows and columns of matrix A, and NCB is the number of columns of matrix B (the number of rows of B equals NCA).

    What is special about this example is that the tasks (processes) are divided into two kinds, a master task and worker tasks, each kind performing different work. For convenience, we declare parameters representing the rank of the master task (MASTER) and the tags marking whether data was sent from the master task (FROM_MASTER) or from a worker task (FROM_WORKER).

    #define MASTER      0
    #define FROM_MASTER 1
    #define FROM_WORKER 2

    The master task's work consists of:

    Initializing the matrices A and B

    for (i = 0; i < NRA; i++)
        for (j = 0; j < NCA; j++)
            a[i][j] = i + j;
    for (i = 0; i < NCA; i++)
        for (j = 0; j < NCB; j++)
            b[i][j] = i * j;

    Dividing matrix A row-wise and sending each portion, together with the whole of matrix B, to the worker tasks

    /* Send matrix data to the worker tasks */
    averow = NRA / numworkers;
    extra  = NRA % numworkers;
    offset = 0;
    mtype = FROM_MASTER;
    for (dest = 1; dest <= numworkers; dest++) {
        rows = (dest <= extra) ? averow + 1 : averow;
        MPI_Send(&offset, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
        MPI_Send(&rows, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
        MPI_Send(&a[offset][0], rows*NCA, MPI_DOUBLE, dest, mtype,
                 MPI_COMM_WORLD);
        MPI_Send(&b, NCA*NCB, MPI_DOUBLE, dest, mtype, MPI_COMM_WORLD);
        offset = offset + rows;
    }

    Receiving the results from the worker tasks

    /* Receive results from worker tasks */
    mtype = FROM_WORKER;
    for (i = 1; i <= numworkers; i++) {
        source = i;
        MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&c[offset][0], rows*NCB, MPI_DOUBLE, source, mtype,
                 MPI_COMM_WORLD, &status);
    }

    Each worker task receives its portion of matrix A and the whole of matrix B, computes the corresponding rows of C, and sends the results back to the master:

    mtype = FROM_MASTER;
    MPI_Recv(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&a, rows*NCA, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&b, NCA*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);

    for (k = 0; k < NCB; k++)
        for (i = 0; i < rows; i++) {
            c[i][k] = 0.0;
            for (j = 0; j < NCA; j++)
                c[i][k] += a[i][j] * b[j][k];
        }

    mtype = FROM_WORKER;
    MPI_Send(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
    MPI_Send(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
    MPI_Send(&c, rows*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD);

    The complete matrix multiplication program is as follows:

    #include "mpi.h"
    #include <stdio.h>
    #include <stdlib.h>

    #define NRA 62          /* number of rows in matrix A */
    #define NCA 15          /* number of columns in A = rows in B */
    #define NCB 7           /* number of columns in matrix B */
    #define MASTER 0
    #define FROM_MASTER 1
    #define FROM_WORKER 2

    int main(int argc, char *argv[])
    {
        int numtasks, taskid, numworkers, source, dest, mtype, rows,
            averow, extra, offset, i, j, k;
        double a[NRA][NCA], b[NCA][NCB], c[NRA][NCB];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
        if (numtasks < 2) {
            printf("Need at least two MPI tasks. Quitting...\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
            exit(1);
        }
        numworkers = numtasks - 1;

        /* master task */
        if (taskid == MASTER) {
            printf("mpi_mm has started with %d tasks.\n", numtasks);
            printf("Initializing arrays...\n");
            for (i = 0; i < NRA; i++)
                for (j = 0; j < NCA; j++)
                    a[i][j] = i + j;
            for (i = 0; i < NCA; i++)
                for (j = 0; j < NCB; j++)
                    b[i][j] = i * j;

            /* Send matrix data to the worker tasks */
            averow = NRA / numworkers;
            extra  = NRA % numworkers;
            offset = 0;
            mtype = FROM_MASTER;
            for (dest = 1; dest <= numworkers; dest++) {
                rows = (dest <= extra) ? averow + 1 : averow;
                MPI_Send(&offset, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
                MPI_Send(&rows, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
                MPI_Send(&a[offset][0], rows*NCA, MPI_DOUBLE, dest, mtype,
                         MPI_COMM_WORLD);
                MPI_Send(&b, NCA*NCB, MPI_DOUBLE, dest, mtype, MPI_COMM_WORLD);
                offset = offset + rows;
            }

            /* Receive results from worker tasks */
            mtype = FROM_WORKER;
            for (i = 1; i <= numworkers; i++) {
                source = i;
                MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD,
                         &status);
                MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD,
                         &status);
                MPI_Recv(&c[offset][0], rows*NCB, MPI_DOUBLE, source, mtype,
                         MPI_COMM_WORLD, &status);
            }
            printf("Done.\n");
        }

        /* worker tasks */
        if (taskid > MASTER) {
            mtype = FROM_MASTER;
            MPI_Recv(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD,
                     &status);
            MPI_Recv(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD,
                     &status);
            MPI_Recv(&a, rows*NCA, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD,
                     &status);
            MPI_Recv(&b, NCA*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD,
                     &status);

            for (k = 0; k < NCB; k++)
                for (i = 0; i < rows; i++) {
                    c[i][k] = 0.0;
                    for (j = 0; j < NCA; j++)
                        c[i][k] += a[i][j] * b[j][k];
                }

            mtype = FROM_WORKER;
            MPI_Send(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
            MPI_Send(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
            MPI_Send(&c, rows*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD);
        }

        MPI_Finalize();
    }


    References

    [1] William Gropp et al., MPICH2 User's Guide, Version 1.0.6, Mathematics and Computer Science Division, Argonne National Laboratory, 2007.

    [2] Serrano Pereira, Building a simple Beowulf cluster with Ubuntu
    http://byobu.info/article/Building_a_simple_Beowulf_cluster_with_Ubuntu/

    [3] Blaise Barney, Message Passing Interface (MPI)
    https://computing.llnl.gov/tutorials/mpi/

    [4] Paul Burton, An Introduction to MPI Programming
    http://www.ecmwf.int/services/computing/training/material/hpcf/Intro_MPI_Programming.pdf

    [5] Stefano Cozzini, MPI tutorial, Democritos/ICTP course in Tools for computational physics, 2005
    http://www.democritos.it/events/computational_physics/lecture_stefano4.pdf

    [6] Ngô Văn Thanh, Tính toán song song (Parallel computing)
    http://iop.vast.ac.vn/~nvthanh/cours/parcomp/

    [7] https://www.surfsara.nl/systems/shared/mpi/mpi-intro

    [8] http://chryswoods.com/book/export/html/117

    [9] http://beige.ucs.indiana.edu/B673/node150.html

    [10] http://www.cs.indiana.edu/classes/b673/notes/mpi1.html

    [11] http://geco.mines.edu/workshop/class2/examples/mpi/index.html

    [12] http://www.mcs.anl.gov/research/projects/mpi/usingmpi/examples/simplempi/main.htm

