CUDA Optimization Ludan 滷蛋

Page 1: Cuda optimization

CUDA Optimization

Ludan 滷蛋

Page 2: Cuda optimization

Outline
● High occupancy
● Coalesced global memory transaction
● Shared memory access without bank conflicts
● Read only cache
● Little thread divergence and loop unrolling

Page 3: Cuda optimization

CUDA programming model
High occupancy lets us treat the latency of accessing the various memories as if it were invisible. Why is that?

Conceptually it is like buying fried chicken cutlets: when there are plenty of customers, then while one cutlet is in the fryer the owner can keep taking orders, bagging cutlets, and making change, staying busy the whole time. The owner is the GPU, the customers are the tasks, and frying a cutlet is a memory access. We want to fully utilize the GPU to hide memory latency !!! The more customers we bring in, the more fully loaded the owner stays; since he is never idle while a cutlet fries, the frying latency can effectively be ignored.

So how does the hardware actually implement this cutlet queue? Let's review the CUDA programming model.

Page 4: Cuda optimization

CUDA programming model
An example: Lena, whom everyone in image processing knows. When the photo is this large, I can cut it into four blocks, and every pixel inside those four blocks is a thread. Now suppose I want to add zero to every pixel value. What do we get? Still Lena, of course, nothing changes. But the point is that every thread adds zero to its own pixel, and they all do it at the same time. Doesn't that feel fast!!! That is the CUDA programming model !!!

Page 5: Cuda optimization

CUDA programming model
Each block of Lena is a thread block in CUDA, and each pixel inside a block of Lena is a thread.

I wonder whether you could understand the CUDA programming model from the picture on the right alone. XDD
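As a minimal sketch of this model (the kernel name addZero and the flat 8-bit image layout are assumptions of this sketch, not from the slides):

// One thread per pixel; each thread adds zero to its own pixel value.
__global__ void addZero(unsigned char* img, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // pixel column handled by this thread
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // pixel row handled by this thread
    if (x < width && y < height)
        img[y * width + x] += 0;                     // every pixel updated in parallel
}

// Launch: cut the image into thread blocks (the "Lena blocks").
// dim3 block(16, 16);
// dim3 grid((width + 15) / 16, (height + 15) / 16);
// addZero<<<grid, block>>>(d_img, width, height);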

Page 6: Cuda optimization

Gigathread scheduler
In plain words, high occupancy means "lots and lots of people lining up for cutlets". What does that have to do with Lena? Let's make the Lena photo even bigger.

Page 7: Cuda optimization

Gigathread scheduler
The size of each Lena block stays the same, so each block still contains the same number of pixels. But the photo got bigger, the problem got bigger, and so it can be divided into more Lena blocks.

Page 8: Cuda optimization

Gigathread scheduler
More Lena blocks means more customers, which means more tasks. What does that buy us? Imagine that my cutlet business originally had a single shop, but to take on more orders I open three shops on the same street. Then what? More cutlet shops correspond to more processors inside the GPU (the SMXs we will meet on the next slides). The owner no longer fries cutlets himself; instead he takes charge of handing each Lena block to a different shop to process.

The owner schedules the many groups of customers across the shops.

Page 9: Cuda optimization

Gigathread scheduler
Now let's look more closely at what the owner has to do once he has opened more shops.

The owner works hard at scheduling each Lena block to a shop, with the intent that no shop ever sits idle. Inside the GPU this job belongs to the GigaThread scheduler. Let's work our way from the outside in.

The owner schedules the many groups of customers across the shops.

Page 10: Cuda optimization

Gigathread scheduler
At the top level the distribution is done by the GigaThread scheduler, our owner. It hands each thread block (each Lena block) to a different SMX; whenever an SMX finishes a thread block, the GigaThread scheduler sends a waiting thread block to the idle SMX, until every thread block has been processed. In the Kepler architecture each SMX can hold at most 16 thread blocks at a time.

The GigaThread scheduler schedules each thread block onto an SMX.

Page 11: Cuda optimization

Gigathread scheduler
So conceptually, remember that the GigaThread scheduler assigns thread blocks to the SMXs; what an SMX consumes is thread blocks. It is also responsible for running multiple kernels (one GPU program invocation launched from the CPU is called a kernel) concurrently across the SMXs.
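The concurrent-kernel part can be exposed from the host with streams. A minimal sketch, assuming the two kernels and the device buffers d_x, d_y (placeholders) are independent of each other:

__global__ void kernelA(float* x) { x[threadIdx.x] *= 2.0f; }   // placeholder kernels
__global__ void kernelB(float* y) { y[threadIdx.x] += 1.0f; }

// host code: independent kernels issued into different streams may be run
// concurrently on the SMXs by the GigaThread scheduler, resources permitting
cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);
kernelA<<<1, 256, 0, s1>>>(d_x);
kernelB<<<1, 256, 0, s2>>>(d_y);
cudaStreamSynchronize(s1);
cudaStreamSynchronize(s2);
cudaStreamDestroy(s1);
cudaStreamDestroy(s2);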

Page 12: Cuda optimization

Warp scheduler
When each thread block (each Lena block) arrives at an SMX, it is split into groups of 32 threads (32 pixels). Each group is called a "warp", and the warp (32 threads) is the scheduling unit inside an SMX (inside a cutlet shop). Outside the shop the owner (the GigaThread scheduler) assigns thread blocks to SMXs; inside each SMX someone has to run the kitchen and schedule that shop's threads. The cook is the "warp scheduler".

The warp scheduler divides each thread block into multiple warps (32 threads each) and schedules them for execution.

[Figure: a thread block is divided into warps]

Page 13: Cuda optimization

Warp scheduler
To keep the terminology straight: the GigaThread engine assigns thread blocks to SMXs, and when an SMX finishes one thread block, the GigaThread engine sends it a new one. Inside the SMX, the warp scheduler takes the block's threads 32 at a time (a warp) and dispatches them for execution. This is why a thread block is recommended to contain a multiple of 32 threads.

The warp scheduler divides each thread block into multiple warps (32 threads each) and schedules them for execution.

A waiting thread block is assigned to the SMX by the GigaThread scheduler as soon as another thread block finishes on that SMX.

Page 14: Cuda optimization

Warp scheduler
Think about this: in the Nvidia Kepler architecture every SMX has 192 int/float ALUs (CUDA cores). With only a single warp scheduler grabbing 32 threads at a time, that would be very inefficient. The simplest way to raise efficiency is to have several warp schedulers: on Kepler each SMX has 4 warp schedulers, so at any one time they can grab 4 warps (128 threads) and feed them to the ALUs. This is also why each thread block is best composed of 128 threads or more.

The warp scheduler divides each thread block into multiple warps (32 threads each) and schedules them for execution.
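What that recommendation looks like in a launch configuration (myKernel, d_data and n are placeholder names, not from the slides):

// Pick a block size that is a multiple of 32 and at least 128 threads (4 warps).
const int threadsPerBlock = 128;
const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;   // enough blocks to cover n elements
myKernel<<<blocks, threadsPerBlock>>>(d_data, n);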

Page 15: Cuda optimization

High occupancy
Now we can get to the point: how does the GPU achieve high occupancy so that the latency of accessing the various memories can largely be ignored? Nvidia's warp scheduler can switch between warps very quickly. What does that mean? Whenever a warp stalls on a memory access, the warp scheduler immediately grabs another ready warp and runs it instead. Impressive, right!!

Ideally it looks like the figure on the left: whenever a warp stalls, switch to another ready warp, and all the latency disappears from view.

Page 16: Cuda optimization

High occupancy
So why can warps be switched so quickly? First, a digression on how to choose the number of threads per thread block. Two points so far: first, the thread block size should be a multiple of 32, because a warp consists of 32 threads; second, it should be at least 128 threads, because the 4 warp schedulers can grab 4 warps at once. To make warp switching painless, every warp is allotted its own fixed registers, shared memory and other hardware resources, but those resources are obviously not unlimited.

[Figure: an SMX holding many resident warps; each warp is allotted its own share of the registers / shared memory]

Page 17: Cuda optimization

High occupancy
Every warp is allotted its own fixed registers, shared memory and other hardware resources. As soon as a running warp has to stall on a memory access, the scheduler switches to another ready warp. Because each warp keeps its own registers and other resources, no extra loads and stores are needed to save or restore register or shared-memory state on a switch, and that is why switching is fast.

[Figure: an SMX holding many resident warps; each warp is allotted its own share of the registers / shared memory]

Page 18: Cuda optimization

High occupancy
What to watch out for: if a thread block contains too many threads, the hardware resources may not stretch far enough, and the number of warps that can actually be scheduled goes down (a few warps may eat up all the resources). This depends on the algorithm, so pay attention. Conversely, if a thread block contains very few threads, each thread gets plenty of resources, but there are not enough warps to feed the 4 warp schedulers, and capacity is wasted just the same.

[Figure: an SMX holding many resident warps; each warp is allotted its own share of the registers / shared memory]

Page 19: Cuda optimization

High occupancy
Because each SMX has limited hardware resources, on Kepler an SMX can hold at most 16 resident thread blocks and at most 64 resident warps at the same time. Do the division and the balanced case is 4 warps (128 threads) per thread block: 16 thread blocks adding up to 64 resident warps being scheduled on the SMX. Assuming the hardware resources are sufficient, note:

1. If a thread block consists of 2 warps, only 16 (thread blocks) * 2 (warps) = 32 warps can be resident, so the warp slots are not fully used.

2. If a thread block consists of 8 warps, then 8 warps * how many blocks = 64 warps? The answer is 8 thread blocks, so the block slots are not fully used even though the warp slots are. Whether that matters depends on the application (there are no warp slots left for another kernel that wants to run concurrently).

Page 20: Cuda optimization

High occupancy
Summary of high occupancy (using Kepler as the example):

1. Compose each thread block of at least 128 threads, in a multiple of 32.
2. Too many or too few threads per thread block both reduce the number of warps that can ultimately be scheduled.
3. Use nvvp (the visual profiler) to analyze occupancy.
4. Finally, from personal experience: if you cannot partition the problem into a large number of thread blocks (1,000 or more), then no matter how well you do the later optimizations, they will not help much. If you find yourself running into this all the time, maybe it is time to consider changing jobs XDD
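For point 3, nvvp reports achieved occupancy directly; newer CUDA toolkits (6.5+) can also report the theoretical limit from the host. A minimal host-side sketch, assuming a kernel named myKernel (placeholder) launched with 128-thread blocks and no dynamic shared memory:

// host code fragment; myKernel is a placeholder __global__ function
int blockSize = 128;
int maxBlocksPerSM = 0;
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&maxBlocksPerSM, myKernel, blockSize, 0);

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
float occupancy = (maxBlocksPerSM * blockSize / 32.0f)          // resident warps per SMX
                / (prop.maxThreadsPerMultiProcessor / 32.0f);   // warp slots per SMX (64 on Kepler)
// occupancy is the theoretical fraction of warp slots this kernel can keep filled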

Page 21: Cuda optimization

Memory hierarchy
If you have not changed jobs by this point, let's start talking about optimizing the various memories, beginning with the one with the largest latency: global memory.

This figure is really big; you can see that global memory (the DRAM) sits off-chip, beyond the L2 cache, so naturally its access latency is the longest →

Page 22: Cuda optimization

Memory hierarchy
Nvidia, generous as ever, always gives us options; this figure shows global memory more clearly.

The usual rule is this: by default, a miss in the L1 cache goes to the L2 cache, and only if the L2 cache also misses do we fetch the data from global memory. You can pass the compiler flag -Xptxas -dlcm=cg (e.g. nvcc -Xptxas -dlcm=cg kernel.cu) to turn off L1 caching of global loads; loads then go to the L2 cache first and fall through to global memory on a miss.

● On Maxwell, the L1 cache is no longer used for global loads.

Page 23: Cuda optimization

L1 cache
Let's look at bus utilization in different situations.

With L1 enabled (the L1 cache line is 128 bytes wide): a warp has 32 threads and each thread loads 4 bytes, 128 bytes in total. Whether the accesses are in order or permuted, as long as they fall within the same cache line, bus utilization is 100%.

Page 24: Cuda optimization

L1 cache
Let's look at bus utilization in different situations.

With L1 enabled (128-byte cache line): the 32 threads each load 4 consecutive bytes, 128 bytes in total, but the accessed data straddles two cache lines, so two lines must be fetched and bus utilization drops to 50%.

Page 25: Cuda optimization

L1 cache
Let's look at bus utilization in different situations.

With L1 enabled (128-byte cache line): all 32 threads of the warp request the same 4-byte word. Only one cache line is fetched, but of the 128 bytes moved only 4 are used, so bus utilization is 4/128 = 3.125%.

Page 26: Cuda optimization

L1 cache
Let's look at bus utilization in different situations.

With L1 enabled (128-byte cache line): the 32 threads load 4-byte words scattered across 32 different cache lines. Just as bad: 32 lines (4096 bytes) are fetched for 128 useful bytes, so bus utilization is about 3.125%.

Page 27: Cuda optimization

L2 cache
Let's look at bus utilization in different situations.

With L1 disabled (the L2 transaction segment is 32 bytes): the 32 threads of a warp load 32 consecutive 4-byte words, either in order or permuted but aligned. Bus utilization is again 100%, the same as with L1 (the four 32-byte segments come back in one transaction).

Page 28: Cuda optimization

L2 cache
Let's look at bus utilization in different situations.

With L1 disabled (32-byte segments): the 32 threads load 32 consecutive 4-byte words, but misaligned, so one extra segment is needed. Bus utilization is 128/160 = 80%, better than the 50% you get with L1.

Page 29: Cuda optimization

L2 cache
Let's look at bus utilization in different situations.

With L1 disabled (32-byte segments): all 32 threads load the same 4-byte word. Only one 32-byte segment is fetched and 4 bytes of it are used, so bus utilization is 4/32 = 12.5%, better than the 3.125% with L1.

Page 30: Cuda optimization

L2 cache
Let's look at bus utilization in different situations.

With L1 disabled (32-byte segments): the 32 threads load 4-byte words that fall in 32 different cache lines. 128 useful bytes out of 32 segments * 32 bytes = 1024 bytes fetched gives 12.5% bus utilization, still better than the roughly 3.125% with L1.
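As a concrete sketch of the difference (copyGood, copyBad and the stride of 32 are made up for illustration, not from the slides):

// Coalesced: consecutive threads of a warp read consecutive 4-byte words, so a warp
// touches one 128-byte line (or four 32-byte segments) -> close to 100% bus utilization.
__global__ void copyGood(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: consecutive threads read words 32 elements (128 bytes) apart, so every thread
// of the warp lands in a different cache line / segment -> very low bus utilization.
__global__ void copyBad(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * 32 < n) out[i] = in[i * 32];
}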

Page 31: Cuda optimization

Coalesced global memory transaction
Summary:

1. Align accesses to the cache line (the starting address should be a multiple of the cache-line size).
2. Have each warp access memory at contiguous addresses as much as possible.
3. Avoid accesses scattered across cache lines (consider texture memory instead).
4. If your problem can be made to fit the above, who would choose to be slower 一一"
5. Finally, a question about struct declarations: which of the two below is better?

struct { uint8_t r, g, b; } AoS[N];

struct { uint8_t r[N], g[N], b[N]; } SoA;

Page 32: Cuda optimization

Array of structures
It depends on the problem. In the case below, I want to access a whole pixel with all three channels, so the first declaration (array of structures) is better; as the layout below shows, each pixel's channels are contiguous.

struct { uint8_t r, g, b; } AoS[N]; ← this one

struct { uint8_t r[N], g[N], b[N]; } SoA;

[Layout: R G B | R G B | R G B | R G B ... read by threads T0 T1 T2 T3]

Page 33: Cuda optimization

Structure of arrays
If instead I want to access a single channel of the pixels, the second declaration (structure of arrays) is better; in the layout below, the values of one channel are contiguous.

struct { uint8_t r, g, b; } AoS[N];

struct { uint8_t r[N], g[N], b[N]; } SoA; ← this one

[Layout: one channel's values stored contiguously; threads T0 T1 T2 T3 read consecutive elements]
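A sketch of what this means in a kernel that touches only the red channel (all names here are illustrative, not from the slides):

struct PixelAoS { unsigned char r, g, b; };
struct ImageSoA { unsigned char *r, *g, *b; };   // three separate channel arrays

// SoA: thread i reads soa.r[i]; a warp reads 32 consecutive bytes -> coalesced.
__global__ void darkenRedSoA(ImageSoA soa, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) soa.r[i] /= 2;
}

// AoS: thread i reads aos[i].r; a warp reads every 3rd byte -> poorly coalesced
// when only one channel is needed.
__global__ void darkenRedAoS(PixelAoS* aos, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) aos[i].r /= 2;
}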

Page 34: Cuda optimization

Thread indexing
A quick review of a 1D grid of 1D blocks.

The special registers threadIdx.x / blockIdx.x take different values in different threads. This is why there is only one copy of the code, yet it can process data at different locations in parallel.

Ref : http://study.marearts.com/2015/03/meaning-of-threadidx-blockidx-blockdim.html
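A minimal sketch of the usual 1D global index built from those registers (fill, array and n are illustrative names):

__global__ void fill(int* array, int n)
{
    // one global index per thread: block offset + position within the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        array[i] = i;   // the code is identical for all threads, but each writes a different element
}
// launch example: fill<<<(n + 127) / 128, 128>>>(d_array, n);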

Page 35: Cuda optimization

Thread indexing
A quick review of a 1D grid of 2D blocks.

The figure on the right is a two-dimensional structure; think of the Lena photo cut into 2*3 blocks, with each block made of 3*2 threads.

Ref : http://study.marearts.com/2015/03/meaning-of-threadidx-blockidx-blockdim_12.html

Page 36: Cuda optimization

Thread indexing
An example of how the two-dimensional structure is mapped to distinct global addresses.

Ref : http://study.marearts.com/2015/03/meaning-of-threadidx-blockidx-blockdim_12.html
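A minimal sketch of that 2D-to-global mapping (fill2D, width and height are illustrative names); it is essentially the same formula the shared-memory examples use later:

__global__ void fill2D(int* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // column handled by this thread
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // row handled by this thread
    if (x < width && y < height) {
        int idx = y * width + x;                     // row-major global address
        out[idx] = idx;
    }
}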

Page 37: Cuda optimization

On-chip memory
On to shared memory. It is an on-chip memory, second only to the registers in speed; the SMEM in the figure is it. When your problem only reads and writes the data once, you gain nothing from it; but for repeated reads and writes, be sure to think of it.

Page 38: Cuda optimization

On-chip memory within each SMX
Shared memory is on-chip and second only to registers in speed; the SMEM in the figure is it, and every SMX has its own. Recall what this means: it is shared within a thread block, so the threads of the block's warps can exchange data through it. Registers, by contrast, are not shared between threads; a register is allocated to a single thread and cannot be used to share data with other threads.
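A minimal sketch of threads of one block exchanging data through shared memory (reverseBlock and BLOCK are illustrative; launch with BLOCK threads per block):

#define BLOCK 128

__global__ void reverseBlock(int* data)
{
    __shared__ int s_buf[BLOCK];                 // one on-chip buffer per thread block

    int i = threadIdx.x;
    s_buf[i] = data[blockIdx.x * BLOCK + i];     // each thread stages one element
    __syncthreads();                             // wait until the whole block has written

    // read an element that was written by another thread of the same block
    data[blockIdx.x * BLOCK + i] = s_buf[BLOCK - 1 - i];
}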

Page 39: Cuda optimization

Shared memory banks
There are 32 banks in total, and on Kepler the access width is 8 bytes. Why so many banks, and why 32? Remember how many threads make up a warp? Right, 32. So you can guess that the 32 banks are there to let the 32 threads of a warp access shared memory simultaneously. And indeed, if the 32 threads access 32 different banks at the same time, everything comes back in a single go.

Page 40: Cuda optimization

Bank conflicts
So 32 banks can serve 32 threads at the same time. But what if one bank has to serve two or more threads at once? That is a bank conflict. When does one bank end up serving two or more threads? In the example, identical colors mean the same bank, and since you would think I am all talk if I never showed any code, here is a concrete case. For the declaration below, words 0 to 31 are in 32 different banks and words 32 to 63 are in 32 different banks, but word 0 and word 32 are in the same bank, word 1 and word 33 are in the same bank, and so on: the mapping repeats every 32 words.

__shared__ int s_mem[64];

[Figure: words 0 .. 31 map to banks 0 .. 31; words 32 .. 63 wrap around onto the same banks]

Page 41: Cuda optimization

Bank conflicts
Now a toy program: does this access of the one-dimensional shared memory cause bank conflicts? Threads 0-15 access words 0-15, and threads 16-31 access those same words 0-15 again. With this admittedly silly example, do you see different threads hitting the same bank at the same time, and is that a bank conflict? The answer is no!!!

__shared__ int s_mem[64];
int num = s_mem[threadIdx.x % 16];

[Figure: T0 and T16 both read word 0, T1 and T17 read word 1, T2 and T18 read word 2, ...]

Page 42: Cuda optimization

Bank conflicts
Why is it not a conflict? Because when different threads access the same word, there is no bank conflict; that should be easy to accept. Different threads requesting the same word of the same bank is the broadcast mode.

__shared__ int s_mem[64];
int num = s_mem[threadIdx.x % 16];

[Figure: T0 and T16 both read word 0, T1 and T17 read word 1, T2 and T18 read word 2, ...]

Page 43: Cuda optimization

Bank conflicts
Now change the indexing a little so that each thread reads with a stride of two: T0 and T16, T1 and T17, ... then access the same bank but different words. Is that a bank conflict? This time the answer is yes!!!

In this situation the bank has to provide two rounds of service, so this is the pattern to avoid.

__shared__ int s_mem[64];
int num = s_mem[threadIdx.x * 2];

[Figure: T0 and T16 hit the same bank in different words, as do T1 and T17, T2 and T18, ...]

Page 44: Cuda optimization

Bank conflicts
Now a two-dimensional example: how do we access it without bank conflicts? In the code below we give every word a distinct id value; there are no bank conflicts because the threads of a warp each access a different column, and therefore a different bank.

__shared__ int s_tile[32][32];
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
int idx = x + y * (blockDim.x * gridDim.x);

s_tile[threadIdx.y][threadIdx.x] = idx;

[Figure: T0 T1 T2 ... T31 write along one row, one thread per column/bank]

Page 45: Cuda optimization

Bank conflicts
Swap threadIdx.x and threadIdx.y and we get bank conflicts: the threads of one warp now access the same column, and therefore the same bank, and things go sideways. Is there a way out?

__shared__ int s_tile[32][32];
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
int idx = x + y * (blockDim.x * gridDim.x);

s_tile[threadIdx.x][threadIdx.y] = idx;

[Figure: T0 T1 T2 ... T31 write down one column, all hitting the same bank]

Page 46: Cuda optimization

Bank conflicts
Add one column of padding, and just like that the bank conflicts are gone.

__shared__ int s_tile[32][33];

int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
int idx = x + y * (blockDim.x * gridDim.x);

s_tile[threadIdx.x][threadIdx.y] = idx;

[Figure: T0 T1 T2 ... T31 write down a column of the padded tile]

Oh man!! With one padding column added, each thread accesses a different bank.

Page 47: Cuda optimization

Shared memory access without bank conflicts
Shared memory has latency roughly twenty to thirty times lower than global memory, so whenever data is accessed two or more times, think of using it. Shared memory provides 32 banks to serve a warp (32 threads); a bank conflict occurs when two or more threads access different words that live in the same bank. Situations with bank conflicts should not be all that common, and when they do occur you can consider padding to avoid them. Do your best; if you find yourself hitting bank conflicts all the time, well, change jobs XD
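To make the padding trick concrete, here is a common sketch of the place it usually shows up, a shared-memory tile used for a matrix transpose (transpose, TILE and the 32x32 block shape are assumptions of this sketch, not taken from the slides):

#define TILE 32

// launch with dim3 block(TILE, TILE) and a grid covering the width x height input
__global__ void transpose(const float* in, float* out, int width, int height)
{
    __shared__ float tile[TILE][TILE + 1];       // +1 padding column -> no bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];    // coalesced read of a tile row
    __syncthreads();

    x = blockIdx.y * TILE + threadIdx.x;         // transposed block coordinates
    y = blockIdx.x * TILE + threadIdx.y;
    if (x < height && y < width)
        out[y * height + x] = tile[threadIdx.x][threadIdx.y];  // column read, conflict-free thanks to padding
}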

Page 48: Cuda optimization

Constant memory
Next up are two caches that are very easy to program: there are two kinds of read-only memory, and the first is constant memory. If you have written OpenGL you will know that the vertex shader applies all sorts of space transforms, projections and so on. A vertex shader really just processes vertex coordinates, and each ALU in an SMX may be handling one of those vertices. For every vertex, the transform and projection matrices are the same, which is why they are usually stored as uniforms. Their defining property is that they are only read, and every ALU processing vertices in the SMX uses the same uniform (every vertex uses the same matrices). Accordingly, constant memory in CUDA is shared across the thread blocks (the thread block being the unit handed to an SMX). Because it was originally designed for uniforms, this memory pays off when all the threads of a warp read the same address.

Page 49: Cuda optimization

Constant memory
Let's take a closer look at the lower half of the SMX: the uniform cache is where constant memory lives.

← The uniform cache backs constant memory; it is designed to broadcast a single memory address to all threads in a warp.

Page 50: Cuda optimization

Constant memory
So that nobody can say I am all talk, here is an example.

__constant__ int foo[1024];

// host code: cudaMemcpyToSymbol(foo, h_src, sizeof(int) * 1024);

int i = foo[threadIdx.x]; // bad: the threads are not reading the same address
int i = foo[10];          // effective: every thread reads the same address

Page 51: Cuda optimization

Instruction level parallelism
Before we get to texture memory, a word about ILP. In Kepler every warp scheduler has two instruction dispatch units, so each warp can issue two independent instructions at the same time.

Examples:

a = src[i];     // load a
d = a + e;      // depends on a: this add must wait for the load

a = src[i];     // load a
d = src[i + 1]; // a second load from an independent address

a = src[i];     // load a
d = b + c;      // independent add, good !!

Page 52: Cuda optimization

Instruction level parallelism
In Kepler every warp scheduler has 2 instruction dispatch units, so it can issue 2 instructions per cycle, and those 2 dispatches can choose among these execution paths:

● 192 cores for int and float
● 64 cores for double
● 32 load/store units
● 32 special function units
● 16 texture units

So instructions can be parallel too.

Page 53: Cuda optimization

Read only path
Before texture memory itself, an easy optimization trick: inside the SMX, reads that go through the texture (read-only) cache take a different path from ordinary memory accesses, so the two paths can proceed in parallel and be used to speed things up.

__global__ void i_am_kernel(int w, int h, const unsigned char* __restrict__ src, unsigned char* dst)
{
    unsigned char v1 = dst[threadIdx.x];
    unsigned char v2 = src[threadIdx.x];
    dst[threadIdx.x] = v1 + v2;
}

Adding const and __restrict__ tells the compiler the data may be served from the read-only cache, but it is not a guarantee.

Page 54: Cuda optimization

Read only path
Same idea: reads through the texture (read-only) cache in the SMX use a separate path from ordinary memory accesses, so the two can be overlapped for speed.

__global__ void i_am_kernel(int w, int h, const unsigned char* __restrict__ src, unsigned char* dst)
{
    unsigned char v1 = dst[threadIdx.x];
    unsigned char v2 = __ldg(&src[threadIdx.x]);
    dst[threadIdx.x] = v1 + v2;
}

Adding __ldg explicitly instructs the load to go through the read-only cache.

Page 55: Cuda optimization

Loop unrolling
Finally, a few source-level tricks. We can shift some work to compile time to relieve run time a little. For example:

int num = 5;
for (int i = 0; i < num; i++) {
    array[i] = i;
}

At run time this loop has to test whether i is less than num on every iteration; if num is large, that is num extra tests. The idea is simple: unroll the loop at compile time into something like

array[0] = 0;
array[1] = 1;
...
array[4] = 4;

Page 56: Cuda optimization

Loop unrolling
Unrolling 5 iterations by hand is fine, but unrolling 100 or 1000 that way would be hopeless, so the CUDA compiler provides a directive. Written as below, the compiler unrolls the loop for us at compile time; remember that num must be a number known at compile time, not one decided at run time.

const int num = 5;

#pragma unroll
for (int i = 0; i < num; i++) {
    b[i] = i;
}

Page 57: Cuda optimization

Loop unrolling
Combining instruction level parallelism with loop unrolling: first look at this example. After unrolling it looks like the figure on the right. Can it go even faster?

float a = 0.0f;
#pragma unroll
for (int i = 0; i < N; i++)
    a += logf(b[i]);

Page 58: Cuda optimization

Loop unrolling
Remember that every warp scheduler has 2 instruction dispatchers, so we can write 2-way ILP, as in the example below.

float a = 0.0f;
float a0 = 0.0f;
float a1 = 0.0f;

#pragma unroll
for (int i = 0; i < N; i += 2) {
    a0 += logf(b[i]);
    a1 += logf(b[i+1]);
}

a = a0 + a1;

Page 59: Cuda optimization

Branch divergence
One last thing to watch for. The GPU was originally designed for graphics and starts from a different philosophy than the CPU, so branch instructions such as if/else are constrained by the hardware architecture. Sometimes they are not efficient !! An example:

int i = 0;
if (3 < 5)
    i = 3;
else
    i = 5;

This code causes no divergence within a warp, because the condition makes all 32 threads of the warp do the same thing, namely i = 3;

[Figure: T0 T1 T2 T3 ... T31 all execute i = 3;]

Page 60: Cuda optimization

Branch divergence
Another example:

int i = 0;
if (threadIdx.x % 2 == 0)
    i = 0;
else
    i = 1;

This one does diverge: within the warp the if/else produces different instructions to execute, so the warp runs the branch twice. First the even threads 0, 2, 4, ..., 30 execute i = 0; while the odd threads wait; then the odd threads 1, 3, 5, ..., 31 execute i = 1; while the even threads wait.

[Figure: T0 T1 T2 T3 ... T31; the first pass executes i = 0; on the even threads, the second pass executes i = 1; on the odd threads]

Page 61: Cuda optimization

Branch divergence
So if things turn into the code below, the warp has to run this branch eight times, and if there are branches inside branches it gets really ugly. Avoid this kind of situation as much as possible; and if you keep running into it, same old advice: change jobs XDD

if      (threadIdx.x % 8 == 0) i = 0;
else if (threadIdx.x % 8 == 1) i = 1;
else if (threadIdx.x % 8 == 2) i = 2;
else if (threadIdx.x % 8 == 3) i = 3;
else if (threadIdx.x % 8 == 4) i = 4;
else if (threadIdx.x % 8 == 5) i = 5;
else if (threadIdx.x % 8 == 6) i = 6;
else if (threadIdx.x % 8 == 7) i = 7;
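In this particular case the whole chain only computes a remainder, so a divergence-free rewrite is a one-liner:

// all 32 threads of the warp execute this single arithmetic instruction together,
// instead of walking through eight predicated branches
int i = threadIdx.x % 8;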
