The Memory Hierarchy
CENG331: Introduction to Computer Systems, 10th Lecture
Instructor: Erol Sahin
Acknowledgement: Most of the slides are adapted from the ones prepared by R.E. Bryant and D.R. O'Hallaron of Carnegie-Mellon Univ.
– 2 –
Overview
Topics:
  Storage technologies and trends
  Locality of reference
  Caching in the memory hierarchy
– 3 –
Random-Access Memory (RAM)
Key features
  RAM is packaged as a chip.
  Basic storage unit is a cell (one bit per cell).
  Multiple RAM chips form a memory.
Static RAM (SRAM)
  Each cell stores a bit with a six-transistor circuit.
  Retains its value indefinitely, as long as it is kept powered.
  Relatively insensitive to disturbances such as electrical noise.
  Faster and more expensive than DRAM.
Dynamic RAM (DRAM)
  Each cell stores a bit with a capacitor and a transistor.
  Value must be refreshed every 10-100 ms.
  Sensitive to disturbances.
  Slower and cheaper than SRAM.
– 4 –
SRAM
Each bit in an SRAM is stored on four transistors
that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. A typical SRAM uses six MOSFETs to store each memory bit.
SRAM is more expensive, but faster and significantly less power hungry (especially idle) than DRAM. It is therefore used where either bandwidth or low power, or both, are principal considerations. SRAM is also easier to control (interface to) and generally more truly random access than modern types of DRAM. Due to a more complex internal structure, SRAM is less dense than DRAM and is therefore not used for high-capacity, low-cost applications such as the main memory in personal computers.
– 5 –
DRAM
DRAM is usually arranged in a square array, with one capacitor and one transistor per data-bit storage cell (a simple example is a 4 by 4 array of cells).
Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
The main memory (the "RAM") in personal computers is Dynamic RAM (DRAM), as is the "RAM" of home game consoles (PlayStation, Xbox 360 and Wii), laptop, notebook and workstation computers.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Unlike flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when power is removed. The transistors and capacitors used are extremely small—millions can fit on a single memory chip.
– 6 –
SRAM vs DRAM Summary

        Tran./bit   Access time   Persist?   Sensitive?   Cost    Applications
SRAM    6           1X            Yes        No           100X    Cache memories
DRAM    1           10X           No         Yes          1X      Main memories, frame buffers
– 7 –
Conventional DRAM Organization
d x w DRAM: dw total bits organized as d supercells of size w bits
[Figure: a 16 x 8 DRAM chip organized as a 4 x 4 array of supercells (rows 0-3, cols 0-3). The memory controller sends addresses over a 2-bit addr bus and moves data over an 8-bit data bus (to the CPU); a selected row is first copied into the internal row buffer. Supercell (2,1) is highlighted.]
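
As an aside, a tiny sketch (my own, with a hypothetical linear numbering) of how a supercell number maps to the (row, col) pair that the controller sends as RAS and then CAS over the shared 2-bit addr pins:

#include <stdio.h>

/* Hypothetical 16 x 8 DRAM: 16 supercells of 8 bits, laid out as 4 rows x 4 cols.
   The 2-bit addr pins are reused: first the row (RAS), then the column (CAS). */
#define ROWS 4
#define COLS 4

int main(void) {
    int supercell = 9;              /* linear supercell number 0..15 */
    int row = supercell / COLS;     /* sent on the addr pins with RAS */
    int col = supercell % COLS;     /* sent on the addr pins with CAS */
    printf("supercell %d -> (row %d, col %d)\n", supercell, row, col);
    /* supercell (2,1) from the figure corresponds to linear number 2*4 + 1 = 9 */
    return 0;
}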
– 8 –
Reading DRAM Supercell (2,1)
Step 1(a): Row access strobe (RAS) selects row 2.
Step 1(b): Row 2 is copied from the DRAM array to the internal row buffer.
[Figure: the memory controller sends RAS = 2 on the 2-bit addr bus of the 16 x 8 DRAM chip; row 2 is copied into the internal row buffer.]
– 9 –
Reading DRAM Supercell (2,1)
Step 2(a): Column access strobe (CAS) selects column 1.
Step 2(b): Supercell (2,1) is copied from the row buffer to the data lines, and eventually back to the CPU.
[Figure: the memory controller sends CAS = 1 on the addr bus; supercell (2,1) travels from the internal row buffer over the 8-bit data bus to the CPU.]
– 10 –
Memory Modules
[Figure: a 64 MB memory module consisting of eight 8M x 8 DRAMs (DRAM 0 through DRAM 7). The memory controller broadcasts addr (row = i, col = j) to all eight chips; each chip supplies one byte of the 64-bit doubleword at main memory address A: DRAM 0 supplies bits 0-7, DRAM 1 bits 8-15, ..., DRAM 7 bits 56-63.]
– 11 –
Enhanced DRAMs
All enhanced DRAMs are built around the conventional DRAM core.
Fast page mode DRAM (FPM DRAM)
  Access the contents of a row with [RAS, CAS, CAS, CAS, CAS] instead of [(RAS,CAS), (RAS,CAS), (RAS,CAS), (RAS,CAS)].
Extended data out DRAM (EDO DRAM)
  Enhanced FPM DRAM with more closely spaced CAS signals.
Synchronous DRAM (SDRAM)
  Driven with the rising clock edge instead of asynchronous control signals.
Double data-rate synchronous DRAM (DDR SDRAM)
  Enhancement of SDRAM that uses both clock edges as control signals.
Video RAM (VRAM)
  Like FPM DRAM, but output is produced by shifting the row buffer.
  Dual ported (allows concurrent reads and writes).
– 12 –
Nonvolatile Memories
DRAM and SRAM are volatile memories
  Lose information if powered off.
Nonvolatile memories retain their value even if powered off.
  Generic name is read-only memory (ROM).
  Misleading because some ROMs can be read and modified.
Types of ROMs
  Programmable ROM (PROM)
  Erasable programmable ROM (EPROM)
  Electrically erasable PROM (EEPROM)
  Flash memory
Firmware
  Program stored in a ROM
  Boot time code, BIOS (basic input/output system), graphics cards, disk controllers.
– 13 –
Typical Bus Structure Connecting CPU and Memory
A bus is a collection of parallel wires that carry address, data, and control signals.
Buses are typically shared by multiple devices.
[Figure: the CPU chip (register file, ALU, bus interface) connects over the system bus to an I/O bridge, which connects over the memory bus to main memory.]
– 14 –
Memory Read Transaction (1)
Load operation: movl A, %eax
CPU places address A on the memory bus.
[Figure: the bus interface drives A onto the system bus; main memory holds word x at address A; %eax is the destination register.]
– 15 –
Memory Read Transaction (2)
Load operation: movl A, %eax
Main memory reads A from the memory bus, retrieves word x, and places it on the bus.
[Figure: x travels from main memory through the I/O bridge onto the system bus toward the CPU's bus interface.]
– 16 –
Memory Read Transaction (3)
Load operation: movl A, %eax
CPU reads word x from the bus and copies it into register %eax.
[Figure: x arrives at the bus interface and is written into %eax in the register file.]
– 17 –
Memory Write Transaction (1)
Store operation: movl %eax, A
CPU places address A on the bus. Main memory reads it and waits for the corresponding data word to arrive.
[Figure: %eax holds word y; the bus interface drives A onto the system bus toward main memory.]
– 18 –
Memory Write Transaction (2)
Store operation: movl %eax, A
CPU places data word y on the bus.
[Figure: y travels from the bus interface over the system and memory buses toward main memory.]
– 19 –
Memory Write Transaction (3)
Store operation: movl %eax, A
Main memory reads data word y from the bus and stores it at address A.
[Figure: y is written into main memory at address A.]
– 20 –
Disk Geometry
Disks consist of platters, each with two surfaces.
Each surface consists of concentric rings called tracks.
Each track consists of sectors separated by gaps.
[Figure: a platter surface with concentric tracks around the spindle; track k is divided into sectors separated by gaps.]
– 21 –
Disk Geometry (Multiple-Platter View)
Aligned tracks form a cylinder.
[Figure: three platters (platter 0-2) on a common spindle give six surfaces (surface 0-5); cylinder k consists of the aligned track k on each surface.]
– 22 –
Disk Capacity
Capacity: maximum number of bits that can be stored.
  Vendors express capacity in units of gigabytes (GB), where 1 GB = 10^9 bytes.
Capacity is determined by these technology factors:
  Recording density (bits/in): number of bits that can be squeezed into a 1-inch segment of a track.
  Track density (tracks/in): number of tracks that can be squeezed into a 1-inch radial segment.
  Areal density (bits/in^2): product of recording density and track density.
Modern disks partition tracks into disjoint subsets called recording zones
  Each track in a zone has the same number of sectors, determined by the circumference of the innermost track.
  Each zone has a different number of sectors/track.
– 23 –
Computing Disk Capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x (# tracks/surface) x (# surfaces/platter) x (# platters/disk)

Example:
  512 bytes/sector
  300 sectors/track (on average)
  20,000 tracks/surface
  2 surfaces/platter
  5 platters/disk

Capacity = 512 x 300 x 20,000 x 2 x 5
         = 30,720,000,000 bytes
         = 30.72 GB
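
The same computation as a quick C sketch (using the example parameters above; 1 GB = 10^9 bytes):

#include <stdio.h>

int main(void) {
    /* Parameters from the example above */
    long long bytes_per_sector     = 512;
    long long sectors_per_track    = 300;     /* average */
    long long tracks_per_surface   = 20000;
    long long surfaces_per_platter = 2;
    long long platters_per_disk    = 5;

    long long capacity = bytes_per_sector * sectors_per_track *
                         tracks_per_surface * surfaces_per_platter *
                         platters_per_disk;

    /* Vendors use 1 GB = 10^9 bytes */
    printf("Capacity = %lld bytes = %.2f GB\n", capacity, capacity / 1e9);
    return 0;
}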
– 24 –
Disk Operation (Single-Platter View)
The disk surface spins at a fixed rotational rate.
The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air.
By moving radially, the arm can position the read/write head over any track.
[Figure: a single platter spinning about its spindle, with the arm sweeping the head across the tracks.]
– 25 –
Disk Operation (Multi-Platter View)
[Figure: multiple platters on one spindle; the read/write heads are attached to a single arm assembly and move in unison from cylinder to cylinder.]
– 26 –
Disk Access Time
Average time to access some target sector is approximated by:
  Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time (Tavg seek)
  Time to position heads over the cylinder containing the target sector.
  Typical Tavg seek = 9 ms
Rotational latency (Tavg rotation)
  Time waiting for the first bit of the target sector to pass under the r/w head.
  Tavg rotation = 1/2 x 1/RPM x 60 secs/1 min
Transfer time (Tavg transfer)
  Time to read the bits in the target sector.
  Tavg transfer = 1/RPM x 1/(avg # sectors/track) x 60 secs/1 min
– 27 –
Disk Access Time Example
Given:
  Rotational rate = 7,200 RPM
  Average seek time = 9 ms
  Avg # sectors/track = 400
Derived:
  Tavg rotation = 1/2 x (60 secs/7,200 RPM) x 1000 ms/sec = 4 ms
  Tavg transfer = (60 secs/7,200 RPM) x 1/400 x 1000 ms/sec = 0.02 ms
  Taccess = 9 ms + 4 ms + 0.02 ms
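
The same derivation as a quick C sketch (illustrative; note that 0.5 x 60/7,200 s is 4.17 ms, which the slide rounds to 4 ms):

#include <stdio.h>

int main(void) {
    double rpm = 7200.0;
    double t_seek_ms = 9.0;                 /* average seek time        */
    double sectors_per_track = 400.0;       /* average                  */

    double t_per_rev_ms = 60.0 / rpm * 1000.0;               /* one full revolution           */
    double t_rotation_ms = 0.5 * t_per_rev_ms;                /* wait half a revolution on avg */
    double t_transfer_ms = t_per_rev_ms / sectors_per_track;  /* read one sector               */

    printf("Tavg rotation = %.2f ms\n", t_rotation_ms);       /* ~4.17 ms */
    printf("Tavg transfer = %.2f ms\n", t_transfer_ms);       /* ~0.02 ms */
    printf("Taccess       = %.2f ms\n", t_seek_ms + t_rotation_ms + t_transfer_ms);
    return 0;
}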
Important points:
  Access time is dominated by seek time and rotational latency.
  The first bit in a sector is the most expensive; the rest are essentially free.
  SRAM access time is about 4 ns/doubleword, DRAM about 60 ns
    Disk is about 40,000 times slower than SRAM, and 2,500 times slower than DRAM.
– 28 –
Logical Disk Blocks
Modern disks present a simpler abstract view of the complex sector geometry:
  The set of available sectors is modeled as a sequence of b-sized logical blocks (0, 1, 2, ...)
Mapping between logical blocks and actual (physical) sectors
  Maintained by a hardware/firmware device called the disk controller.
  Converts requests for logical blocks into (surface, track, sector) triples.
  Allows the controller to set aside spare cylinders for each zone.
    Accounts for the difference between "formatted capacity" and "maximum capacity".
– 29 –
I/O Bus
[Figure: the CPU chip (register file, ALU, bus interface) connects via the system bus to an I/O bridge, which connects via the memory bus to main memory. The I/O bridge also attaches to the I/O bus, which hosts a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]
– 30 –
Reading a Disk Sector (1)
CPU initiates a disk read by writing a command, logical block number, and destination memory address to a port (address) associated with the disk controller.
[Figure: the command travels from the CPU over the system bus and I/O bus to the disk controller.]
– 31 –
Reading a Disk Sector (2)
Disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.
[Figure: data moves from the disk controller over the I/O bus and through the I/O bridge directly into main memory, without involving the CPU.]
– 32 –
Reading a Disk Sector (3)
When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., asserts a special "interrupt" pin on the CPU).
[Figure: the interrupt signal travels from the disk controller to the CPU chip.]
– 33 –
Storage Trends
(Culled from back issues of Byte and PC Magazine)

SRAM
metric              1980     1985    1990    1995    2000    2000:1980
$/MB              19,200    2,900     320     256     100          190
access (ns)          300      150      35      15       2          100

DRAM
metric              1980     1985    1990    1995    2000    2000:1980
$/MB               8,000      880     100      30       1        8,000
access (ns)          375      200     100      70      60            6
typical size (MB)  0.064    0.256       4      16      64        1,000

Disk
metric              1980     1985    1990    1995    2000    2000:1980
$/MB                 500      100       8    0.30    0.05       10,000
access (ms)           87       75      28      10       8           11
typical size (MB)      1       10     160   1,000   9,000        9,000
– 34 –
CPU Clock Rates

                    1980     1985    1990    1995    2000    2000:1980
processor           8080      286     386    Pent.   P-III
clock rate (MHz)       1        6      20     150     750          750
cycle time (ns)    1,000      166      50       6     1.6          750
– 35 –
The CPU-Memory Gap
The increasing gap between DRAM, disk, and CPU speeds.
[Figure: log-scale plot of time (ns, from 1 up to 100,000,000) versus year (1980-2000), showing disk seek time, DRAM access time, SRAM access time, and CPU cycle time; the curves diverge over time.]
– 36 –
Locality
Principle of Locality:
  Programs tend to reuse data and instructions near those they have used recently, or that were recently referenced themselves.
  Temporal locality: Recently referenced items are likely to be referenced in the near future.
  Spatial locality: Items with nearby addresses tend to be referenced close together in time.

Locality Example:

sum = 0;
for (i = 0; i < n; i++)
    sum += a[i];
return sum;

Data
  Reference array elements in succession (stride-1 reference pattern): spatial locality
  Reference sum each iteration: temporal locality
Instructions
  Reference instructions in sequence: spatial locality
  Cycle through loop repeatedly: temporal locality
– 37 –
Locality Example
Claim: Being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.
Question: Does this function have good locality?
int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
– 38 –
Locality Example
Question: Does this function have good locality?
int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
– 39 –
Locality Example
Question: Can you permute the loops so that the function scans the 3-d array a[] with a stride-1 reference pattern (and thus has good spatial locality)?
int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[k][i][j];
    return sum;
}
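
One possible answer (a sketch; loop bounds are kept exactly as in the original function):

int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    for (k = 0; k < N; k++)          /* first subscript of a[k][i][j] varies slowest */
        for (i = 0; i < M; i++)      /* middle subscript                             */
            for (j = 0; j < N; j++)  /* last subscript varies fastest: stride-1 scan */
                sum += a[k][i][j];
    return sum;
}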
– 40 –
Memory Hierarchies
Some fundamental and enduring properties of hardware and software:
  Fast storage technologies cost more per byte and have less capacity.
  The gap between CPU and main memory speed is widening.
  Well-written programs tend to exhibit good locality.
These fundamental properties complement each other beautifully.
They suggest an approach for organizing memory and storage systems known as a memory hierarchy.
– 41 –
An Example Memory Hierarchy
Smaller, faster, and costlier (per byte) storage devices sit at the top; larger, slower, and cheaper (per byte) storage devices sit at the bottom.
  L0: registers — CPU registers hold words retrieved from the L1 cache.
  L1: on-chip L1 cache (SRAM) — L1 cache holds cache lines retrieved from the L2 cache.
  L2: off-chip L2 cache (SRAM) — L2 cache holds cache lines retrieved from main memory.
  L3: main memory (DRAM) — Main memory holds disk blocks retrieved from local disks.
  L4: local secondary storage (local disks) — Local disks hold files retrieved from disks on remote network servers.
  L5: remote secondary storage (distributed file systems, Web servers)
– 42 –
Caches
Cache: A smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.
Fundamental idea of a memory hierarchy:
  For each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.
Why do memory hierarchies work?
  Programs tend to access the data at level k more often than they access the data at level k+1.
  Thus, the storage at level k+1 can be slower, and thus larger and cheaper per bit.
  Net effect: A large pool of memory that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.
– 43 –
Caching in a Memory Hierarchy
[Figure: the larger, slower, cheaper storage device at level k+1 is partitioned into blocks (0-15). The smaller, faster, more expensive device at level k caches a subset of those blocks (e.g., 4, 8, 9, 10, 14, 3). Data is copied between levels in block-sized transfer units.]
– 44 –
General Caching Concepts
Program needs object d, which is stored in some block b.
Cache hit
  Program finds b in the cache at level k. E.g., block 14.
Cache miss
  b is not at level k, so the level k cache must fetch it from level k+1. E.g., block 12.
  If the level k cache is full, then some current block must be replaced (evicted). Which one is the "victim"?
    Placement policy: where can the new block go? E.g., b mod 4
    Replacement policy: which block should be evicted? E.g., LRU
[Figure: requests for blocks 14 and 12 against a level k cache holding four blocks, backed by level k+1 holding blocks 0-15; the miss on block 12 evicts block 4.]
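
A toy sketch of the LRU replacement policy mentioned above (my own illustration, not code from the lecture): a small fully associative cache with four blocks, where a miss evicts the least recently used block.

#include <stdio.h>

#define WAYS 4

/* blocks[] holds block numbers, ordered from most to least recently used */
int blocks[WAYS] = {-1, -1, -1, -1};

void access(int b) {
    int i, j;
    for (i = 0; i < WAYS && blocks[i] != b; i++)
        ;
    if (i < WAYS) {
        printf("block %2d: hit\n", b);
    } else {
        printf("block %2d: miss%s\n", b,
               blocks[WAYS - 1] >= 0 ? " (evict LRU block)" : "");
        i = WAYS - 1;               /* victim is the least recently used slot */
    }
    for (j = i; j > 0; j--)         /* move b to the most-recently-used slot  */
        blocks[j] = blocks[j - 1];
    blocks[0] = b;
}

int main(void) {
    int trace[] = {14, 12, 14, 9, 3, 12};
    for (int k = 0; k < 6; k++)
        access(trace[k]);
    return 0;
}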
– 45 –
General Caching Concepts

Types of cache misses:
  Cold (compulsory) miss
    Cold misses occur because the cache is empty.
  Conflict miss
    Most caches limit blocks at level k+1 to a small subset (sometimes a singleton) of the block positions at level k.
      E.g., block i at level k+1 must be placed in block (i mod 4) at level k.
    Conflict misses occur when the level k cache is large enough, but multiple data objects all map to the same level k block.
      E.g., referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time.
  Capacity miss
    Occurs when the set of active cache blocks (working set) is larger than the cache.
– 46 –
Examples of Caching in the Hierarchy

Cache Type            What Cached           Where Cached          Latency (cycles)   Managed By
Registers             4-byte word           CPU registers                        0   Compiler
TLB                   Address translations  On-Chip TLB                          0   Hardware
L1 cache              32-byte block         On-Chip L1                           1   Hardware
L2 cache              32-byte block         Off-Chip L2                         10   Hardware
Virtual Memory        4-KB page             Main memory                        100   Hardware+OS
Buffer cache          Parts of files        Main memory                        100   OS
Network buffer cache  Parts of files        Local disk                  10,000,000   AFS/NFS client
Browser cache         Web pages             Local disk                  10,000,000   Web browser
Web cache             Web pages             Remote server disks      1,000,000,000   Web proxy server
Cache Memories
CENG331: Introduction to Computer Systems, 10th Lecture
Instructor: Erol Sahin
Acknowledgement: Most of the slides are adapted from the ones prepared by R.E. Bryant and D.R. O'Hallaron of Carnegie-Mellon Univ.
– 48 –
Overview
Topics:
  Generic cache memory organization
  Direct mapped caches
  Set associative caches
  Impact of caches on performance
– 49 –
Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
  Hold frequently accessed blocks of main memory.
CPU looks first for data in L1, then in L2, then in main memory.
Typical bus structure:
[Figure: the CPU chip contains the register file, ALU, L1 cache, and bus interface; the L2 cache sits on a dedicated cache bus; the bus interface connects via the system bus to the I/O bridge, which connects via the memory bus to main memory.]
– 50 –
Inserting an L1 Cache Between the CPU and Main Memory
The tiny, very fast CPU register file has room for four 4-byte words.
The small fast L1 cache has room for two 4-word blocks (line 0 and line 1).
The big slow main memory has room for many 4-word blocks (e.g., block 10 = a b c d, block 21 = p q r s, block 30 = w x y z, ...).
The transfer unit between the CPU register file and the cache is a 4-byte block.
The transfer unit between the cache and main memory is a 4-word block (16 bytes).
– 51 –
General Org of a Cache Memory
A cache is an array of S = 2^s sets.
Each set contains one or more lines (E lines per set).
Each line holds a block of B = 2^b bytes of data, plus 1 valid bit and t tag bits.
Cache size: C = B x E x S data bytes
[Figure: set 0 through set S-1, each drawn with E lines; each line shows its valid bit, tag, and bytes 0 through B-1 of the cache block.]
– 52 –
Addressing Caches
An m-bit address A is divided into three fields: <tag> (t bits), <set index> (s bits), and <block offset> (b bits).
The word at address A is in the cache if the tag bits in one of the <valid> lines in set <set index> match <tag>.
The word contents begin at offset <block offset> bytes from the beginning of the block.
[Figure: the s set-index bits of A select one of the S sets; the t tag bits are compared against the tag of every valid line in that set; the b offset bits locate the word within the block.]
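
A minimal sketch of the field extraction (my own illustration; s = 2 and b = 1 are chosen to match the small cache simulated a few slides below):

#include <stdio.h>

int main(void) {
    unsigned s = 2, b = 1;                 /* set-index and block-offset bits     */
    unsigned addr = 13;                    /* 4-bit address 1101 from the trace   */

    unsigned offset = addr & ((1u << b) - 1);         /* low b bits               */
    unsigned set    = (addr >> b) & ((1u << s) - 1);  /* next s bits              */
    unsigned tag    = addr >> (b + s);                /* remaining high bits      */

    printf("addr %u -> tag %u, set %u, offset %u\n", addr, tag, set, offset);
    /* 1101 -> tag 1, set 2, offset 1 */
    return 0;
}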
– 53 –
Direct-Mapped Cache
Simplest kind of cache
Characterized by exactly one line per set (E = 1).
[Figure: set 0 through set S-1, each containing a single line with a valid bit, a tag, and a cache block.]
– 54 –
Accessing Direct-Mapped Caches
Set selection
  Use the set index bits to determine the set of interest.
[Figure: the s set-index bits of the address (e.g., 00001) select set 1; the selected set holds one line with its valid bit, tag, and cache block.]
– 55 –
Accessing Direct-Mapped Caches
Line matching and word selection
  Line matching: Find a valid line in the selected set with a matching tag.
  Word selection: Then extract the word.
(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.
[Figure: selected set (i) holds a valid line with tag 0110 and a block containing words w0-w3 in bytes 0-7; the address's tag 0110 matches, and the block offset 100 selects the starting byte within the block.]
– 56 –
Direct-Mapped Cache Simulation
M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 entry/set
Address format: t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit

Address trace (reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

  (1)  0 [0000]  miss -> set 0: v = 1, tag = 0, M[0-1]
  (2)  1 [0001]  hit  -> set 0 already holds M[0-1]
  (3) 13 [1101]  miss -> set 2: v = 1, tag = 1, M[12-13]
  (4)  8 [1000]  miss -> set 0: v = 1, tag = 1, M[8-9] (replaces M[0-1])
  (5)  0 [0000]  miss -> set 0: v = 1, tag = 0, M[0-1] (replaces M[8-9])
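
A small C simulator for exactly this configuration (a sketch, not part of the lecture); running it reproduces the miss/hit pattern above:

#include <stdio.h>

#define S 4   /* sets */
#define B 2   /* bytes per block */

int main(void) {
    int valid[S] = {0};
    int tag[S];
    int trace[] = {0, 1, 13, 8, 0};

    for (int i = 0; i < 5; i++) {
        int addr = trace[i];
        int set = (addr / B) % S;   /* s = 2 set-index bits */
        int t   = addr / (B * S);   /* t = 1 tag bit        */

        if (valid[set] && tag[set] == t) {
            printf("%2d: hit  (set %d)\n", addr, set);
        } else {
            printf("%2d: miss (set %d, load M[%d-%d])\n",
                   addr, set, (addr / B) * B, (addr / B) * B + B - 1);
            valid[set] = 1;
            tag[set] = t;
        }
    }
    return 0;
}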
– 57 –
Why Use Middle Bits as Index?
High-Order Bit Indexing
  Adjacent memory lines would map to the same cache entry.
  Poor use of spatial locality.
Middle-Order Bit Indexing
  Consecutive memory lines map to different cache lines.
  Can hold a C-byte region of the address space in the cache at one time.
[Figure: a 4-line cache indexed by a 4-bit line address 0000-1111; with high-order bit indexing, runs of four consecutive lines all map to the same set, while with middle-order bit indexing consecutive lines cycle through the four sets.]
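
A tiny sketch (my own) contrasting the two indexing choices for the 4-line cache above: consecutive line numbers spread across all four sets with middle-order indexing but collide with high-order indexing.

#include <stdio.h>

int main(void) {
    /* 16 memory lines, 4-line direct-mapped cache: 4-bit line address, 2 index bits */
    for (int line = 0; line < 8; line++) {
        int middle = line % 4;          /* low/middle bits of the line address as the index */
        int high   = (line >> 2) % 4;   /* high-order bits of the line address as the index */
        printf("line %2d -> middle-bit set %d, high-bit set %d\n", line, middle, high);
    }
    /* Consecutive lines hit all 4 sets with middle-bit indexing,
       but lines 0-3 (and 4-7) all collide in one set with high-bit indexing. */
    return 0;
}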
– 58 –
Set Associative Caches
Characterized by more than one line per set.
[Figure: set 0 through set S-1, each containing E = 2 lines; each line has a valid bit, a tag, and a cache block.]
– 59 –
Accessing Set Associative Caches
Set selection
  Identical to direct-mapped cache.
[Figure: the s set-index bits of the address (e.g., 00001) select set 1, which contains two lines.]
– 60 –
Accessing Set Associative Caches
Line matching and word selection
  Must compare the tag in each valid line in the selected set.
(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.
[Figure: selected set (i) contains two valid lines with tags 1001 and 0110; the address's tag 0110 matches the second line, and the block offset 100 selects the starting byte within that block (words w0-w3, bytes 0-7).]
– 61 –
Multi-Level Caches
Options: separate data and instruction caches, or a unified cache.

              Regs     L1 d-cache /   Unified        Memory        disk
                       L1 i-cache     L2 Cache
size:         200 B    8-64 KB        1-4 MB SRAM    128 MB DRAM   30 GB
speed:        3 ns     3 ns           6 ns           60 ns         8 ms
$/Mbyte:                              $100/MB        $1.50/MB      $0.05/MB
line size:    8 B      32 B           32 B           8 KB

(larger, slower, cheaper from left to right)
[Figure: the processor's register file feeds separate L1 d- and i-caches, which feed a unified L2 cache, then main memory, then disk.]
– 62 –
Intel Pentium Cache Hierarchy
Processor chip:
  Regs.
  L1 Data: 16 KB, 4-way assoc., write-through, 32 B lines, 1 cycle latency
  L1 Instruction: 16 KB, 4-way assoc., 32 B lines
  L2 Unified: 128 KB - 2 MB, 4-way assoc., write-back, write allocate, 32 B lines
Main Memory: up to 4 GB
– 63 –
Intel i7 processor
Before, Intel Core 2 Duo and Quad processors had just an L1 and L2 cache.
The i7 features L1, L2, and shared L3 caches:
  64 KB L1 cache (32 KB Instruction, 32 KB Data) per core,
  1 MB of total L2 cache, and
  8 MB of L3 cache that is shared across all the cores.
That means that all Intel Core i7 processors have over 9 MB of memory right there on the 45 nm processor.
– 64 –
Intel i7 architecture
[Figure: four cores (Core 1-4), each with 32 KB L1-I, 32 KB L1-D, and 256 KB L2; an 8 MB shared L3; an integrated memory controller; and the QuickPath interconnect.]
  32 KB L1 instruction cache is 4-way set associative
  32 KB L1 data cache is 8-way set associative
  256 KB L2 unified i+d cache is 8-way set associative
  8 MB shared L3 cache is 16-way set associative
  All cache lines are 64 bytes in size
  MESI+F (forward) coherency
– 65 –
Hits
L1 cache hit 4 cycles
L2 cache hit 10 cycles
L3 cache hit, line unshared ~40 cycles
L3 cache hit, shared line in another core ~65 cycles
L3 cache hit, modified in another core ~75 cycles
Local DRAM ~60-180 cycles
Remote L3 cache or remote DRAM ~100-300 cycles
– 66 –
TLB cache
7-entry instruction TLB0, fully associative, maps 2 MB or 4 MB super pages
32-entry data TLB0, 4-way set associative, maps 2 MB or 4 MB super pages
64-entry instruction TLB, 4-way set associative, maps 4 KB pages
64-entry data TLB, 4-way set associative, maps 4 KB pages
512-entry shared second-level TLB, 4-way set associative, maps 4 KB pages
– 67 –
Cache Performance Metrics
Miss Rate
  Fraction of memory references not found in cache (misses/references)
  Typical numbers:
    3-10% for L1
    can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
  Time to deliver a line in the cache to the processor (includes time to determine whether the line is in the cache)
  Typical numbers:
    1 clock cycle for L1
    3-8 clock cycles for L2
Miss Penalty
  Additional time required because of a miss
    Typically 25-100 cycles for main memory
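
A common way to combine these metrics is the average memory access time, AMAT = hit time + miss rate x miss penalty. A quick sketch using values in the typical ranges above (the specific numbers are illustrative, not from the slide):

#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;    /* cycles, L1 hit                    */
    double miss_rate    = 0.05;   /* 5%, within the 3-10% range above  */
    double miss_penalty = 50.0;   /* cycles to service a miss          */

    double amat = hit_time + miss_rate * miss_penalty;
    printf("Average memory access time = %.1f cycles\n", amat);   /* 3.5 cycles */
    return 0;
}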
– 68 –
Writing Cache Friendly Code
Repeated references to variables are good (temporal locality)
Stride-1 reference patterns are good (spatial locality)
Examples: cold cache, 4-byte words, 4-word cache blocks

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100%
– 69 –
The Memory Mountain
Read throughput (read bandwidth)
  Number of bytes read from memory per second (MB/s)
Memory mountain
  Measured read throughput as a function of spatial and temporal locality.
  Compact way to characterize memory system performance.
– 70 –
Memory Mountain Test Function

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;   /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                        /* warm up the cache       */
    cycles = fcyc2(test, elems, stride, 0);     /* call test(elems,stride) */
    return (size / stride) / (cycles / Mhz);    /* convert cycles to MB/s  */
}
– 71 –
Memory Mountain Main Routine

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)   /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)   /* ... up to 8 MB */
#define MAXSTRIDE 16         /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];          /* The array we'll be traversing */

int main()
{
    int size;      /* Working set size (in bytes)  */
    int stride;    /* Stride (in array elements)   */
    double Mhz;    /* Clock frequency              */

    init_data(data, MAXELEMS);   /* Initialize each element in data to 1 */
    Mhz = mhz(0);                /* Estimate the clock frequency */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}
– 72 –
The Memory Mountain
[Figure: 3-D surface of read throughput (MB/s, 0-1200) versus stride (s1-s16, in words) and working set size (1 KB to 8 MB), measured on a 550 MHz Pentium III Xeon with 16 KB on-chip L1 d-cache, 16 KB on-chip L1 i-cache, and 512 KB off-chip unified L2 cache. Ridges of temporal locality mark the L1, L2, and main-memory regions; slopes of spatial locality fall off with increasing stride.]
– 73 –
Ridges of Temporal Locality
A slice through the memory mountain with stride = 1 illuminates the read throughputs of the different caches and memory.
[Figure: read throughput (MB/s, 0-1200) versus working set size (8 MB down to 1 KB); the curve steps down from the L1 cache region through the L2 cache region to the main memory region.]
– 74 –
A Slope of Spatial Locality
A slice through the memory mountain with size = 256 KB shows the cache block size.
[Figure: read throughput (MB/s, 0-800) versus stride (s1-s16, in words); throughput falls as the stride grows, flattening once there is one access per cache line.]
– 75 –
Matrix Multiplication Example
Major cache effects to consider
  Total cache size
    Exploit temporal locality and keep the working set small (e.g., by using blocking)
  Block size
    Exploit spatial locality
Description:
  Multiply N x N matrices
  O(N^3) total operations
  Accesses
    N reads per source element
    N values summed per destination
      but may be able to hold in register

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;                      /* Variable sum held in register */
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
– 76 –
Miss Rate Analysis for Matrix Multiply
Assume:
  Line size = 32 B (big enough for 4 64-bit words)
  Matrix dimension (N) is very large
    Approximate 1/N as 0.0
  Cache is not even big enough to hold multiple rows
Analysis method:
  Look at the access pattern of the inner loop
[Figure: matrices A, B, and C with the inner-loop indices i, j, and k marked on each.]
– 77 –
Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order
  each row in contiguous memory locations
Stepping through columns in one row:
  for (i = 0; i < N; i++)
      sum += a[0][i];
  accesses successive elements
  if block size (B) > 4 bytes, exploit spatial locality
    compulsory miss rate = 4 bytes / B
Stepping through rows in one column:
  for (i = 0; i < n; i++)
      sum += a[i][0];
  accesses distant elements
  no spatial locality!
    compulsory miss rate = 1 (i.e. 100%)
– 78 –
Matrix Multiplication (ijk)

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A row (i,*) row-wise, B column (*,j) column-wise, C element (i,j) fixed.

Misses per inner loop iteration:
    A      B      C
  0.25    1.0    0.0
– 79 –
Matrix Multiplication (jik)

/* jik */
for (j = 0; j < n; j++) {
    for (i = 0; i < n; i++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A row (i,*) row-wise, B column (*,j) column-wise, C element (i,j) fixed.

Misses per inner loop iteration:
    A      B      C
  0.25    1.0    0.0
– 80 –
Matrix Multiplication (kij)

/* kij */
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A element (i,k) fixed, B row (k,*) row-wise, C row (i,*) row-wise.

Misses per inner loop iteration:
    A      B      C
  0.0    0.25   0.25
– 81 –
Matrix Multiplication (ikj)

/* ikj */
for (i = 0; i < n; i++) {
    for (k = 0; k < n; k++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A element (i,k) fixed, B row (k,*) row-wise, C row (i,*) row-wise.

Misses per inner loop iteration:
    A      B      C
  0.0    0.25   0.25
– 82 –
Matrix Multiplication (jki)

/* jki */
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A column (*,k) column-wise, B element (k,j) fixed, C column (*,j) column-wise.

Misses per inner loop iteration:
    A      B      C
  1.0    0.0    1.0
– 83 –
Matrix Multiplication (kji)

/* kji */
for (k = 0; k < n; k++) {
    for (j = 0; j < n; j++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A column (*,k) column-wise, B element (k,j) fixed, C column (*,j) column-wise.

Misses per inner loop iteration:
    A      B      C
  1.0    0.0    1.0
– 84 –
Summary of Matrix Multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}
– 85 –
Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors. Code scheduling matters, too.
[Figure: cycles/iteration (0-60) versus array size n (25-400) for the six loop orderings kji, jki, kij, ikj, jik, ijk.]
– 86 –
Improving Temporal Locality by Blocking
Example: Blocked matrix multiplication
  "block" (in this context) does not mean "cache block". Instead, it means a sub-block within the matrix.
  Example: N = 8; sub-block size = 4

  [A11 A12]   [B11 B12]   [C11 C12]
  [A21 A22] x [B21 B22] = [C21 C22]

  C11 = A11*B11 + A12*B21    C12 = A11*B12 + A12*B22
  C21 = A21*B11 + A22*B21    C22 = A21*B12 + A22*B22

Key idea: Sub-blocks (i.e., Axy) can be treated just like scalars.
– 87 –
Blocked Matrix Multiply (bijk)

for (jj = 0; jj < n; jj += bsize) {
    for (i = 0; i < n; i++)
        for (j = jj; j < min(jj+bsize, n); j++)
            c[i][j] = 0.0;
    for (kk = 0; kk < n; kk += bsize) {
        for (i = 0; i < n; i++) {
            for (j = jj; j < min(jj+bsize, n); j++) {
                sum = 0.0;
                for (k = kk; k < min(kk+bsize, n); k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] += sum;
            }
        }
    }
}
– 88 –
Blocked Matrix Multiply Analysis
Innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and accumulates into a 1 x bsize sliver of C.
Loop over i steps through n row slivers of A & C, using the same block of B.
[Figure: the row sliver of A (accessed bsize times), the block of B (reused n times in succession), and the successively updated elements of the C sliver, positioned by i, kk, and jj.]

Innermost loop pair:

for (i = 0; i < n; i++) {
    for (j = jj; j < min(jj+bsize, n); j++) {
        sum = 0.0;
        for (k = kk; k < min(kk+bsize, n); k++) {
            sum += a[i][k] * b[k][j];
        }
        c[i][j] += sum;
    }
}
– 89 –
Pentium Blocked Matrix Multiply Performance
Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik)
  relatively insensitive to array size.
[Figure: cycles/iteration (0-60) versus array size n for kji, jki, kij, ikj, jik, ijk, bijk (bsize = 25), and bikj (bsize = 25).]
– 90 –
Concluding Observations
Programmer can optimize for cache performance
  How data structures are organized
  How data are accessed
    Nested loop structure
    Blocking is a general technique
All systems favor "cache friendly code"
  Getting absolute optimum performance is very platform specific
    Cache sizes, line sizes, associativities, etc.
  Can get most of the advantage with generic code
    Keep the working set reasonably small (temporal locality)
    Use small strides (spatial locality)