Chapter 7: Storage Systems

吳俊興, Department of Computer Science and Information Engineering, National University of Kaohsiung
December 2004
EEF011 Computer Architecture
Chapter 7. Storage Systems

7.1 Introduction
7.2 Types of Storage Devices
7.3 Busses - Connecting I/O Devices to CPU/Memory
7.4 Reliability, Availability and Dependability
7.5 RAID: Redundant Arrays of Inexpensive Disks
7.6 Errors and Failures in Real Systems
7.7 I/O Performance Measures
7.8 A Little Queuing Theory
7.9 Benchmarks of Storage Performance and Availability
7.1 Introduction

Motivation: who cares about I/O?
• The gap between CPU and I/O system performance keeps getting worse!
  – CPU performance improves about 55% per year
  – I/O system performance is limited by mechanical delays (disk I/O): it improves less than 10% per year (I/Os per second)
• Amdahl's Law: system speed-up is limited by the slowest part! Assume I/O takes 10% of overall system time:
  – 10x faster CPU => 5.26x overall performance (about 50% of the CPU gain is lost)
  – 100x faster CPU => 9.17x overall performance (about 90% of the CPU gain is lost)
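These figures follow directly from Amdahl's Law; a quick Python check, taking the CPU-bound fraction f = 0.9 as the part sped up by a factor S (the function name is ours):

  # Amdahl's Law: overall speedup when a fraction f is sped up by S
  def speedup(f, s):
      return 1 / ((1 - f) + f / s)

  print(speedup(0.9, 10))    # 1/(0.1 + 0.09)  ~= 5.26  (10x faster CPU)
  print(speedup(0.9, 100))   # 1/(0.1 + 0.009) ~= 9.17  (100x faster CPU)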
• I/O bottleneck:
  – Diminishing fraction of time spent in the CPU
  – Diminishing value of faster CPUs
• Magnetic disks (including hard disks and floppy disks) have dominated nonvolatile (permanent) storage since 1965
Who cares about CPUs?

• Why is it still important to keep CPUs busy rather than I/O devices ("CPU time"), now that CPUs are not costly?
  – Moore's Law leads both to large, fast CPUs and to very small, cheap CPUs
  – 2001 hypothesis: a 600 MHz PC is fast enough for office tools?
  – PC sales slow down once PCs are "fast enough", except for games and new applications?
• People care more about storing and communicating information than about calculating
  – "Information Technology" vs. "Computer Science"
  – 1960s and 1980s: Computing Revolution => execution time mattered
  – 1990s and 2000s: Information Age => dependability matters
I/O Systems
Storage Technology Drivers

• Driven by the prevailing computing paradigm
  – 1950s: migration from batch to on-line processing
  – 1990s: migration to ubiquitous computing
    • computers in phones, books, cars, video cameras, …
    • nationwide fiber-optic network with wireless tails
• Effects on the storage industry
  – Embedded storage: smaller, cheaper, more reliable, lower power
  – Data utilities: high-capacity, hierarchically managed storage
7.2 Types of Storage Devices

• Purpose
  – Long-term, nonvolatile storage
  – Large, inexpensive, slow level in the storage hierarchy
• Types
  – Magnetic storage: disk, floppy, tape
  – Optical storage: compact discs (CD), digital video/versatile discs (DVD)
  – Electrical storage: flash memory
• Bus interface
  – IDE: ATA, S-ATA
  – SCSI - Small Computer System Interface
  – USB - Universal Serial Bus
  – IEEE 1394, Fibre Channel, etc.
Magnetic Disks

Disk latency = Controller overhead + Seek Time + Rotation Time + Transfer Time

(figure: a platter stack with sectors, tracks, and cylinders; head, arm, and actuator; and the controller electronics with read/write caches and data/control paths)

• Seek Time depends on the number of tracks the arm must move across and on the seek speed of the disk
• Rotation Time depends on how fast the disk rotates and how far the sector is from the head
• Transfer Time depends on the data rate (bandwidth) of the disk (bit density) and on the size of the request
• Characteristics
  – Seek Time (~8 ms avg): positional latency + rotational latency
  – Transfer rate: 10-40 MByte/sec, in blocks
  – Capacity: approaching 400 Gigabytes, quadrupling every 2 years
• Example: 7200 RPM = 120 RPS => ~8.3 ms per revolution => avg. rotational latency ~4 ms
  128 sectors per track => ~0.065 ms per sector; 1 KB per sector => ~16 MB/s
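The worked example plugs into the latency formula above; a small Python sketch with illustrative parameter defaults (the 0.1 ms controller overhead is borrowed from the Barracuda specs later in this chapter):

  # Disk latency = controller overhead + seek + rotation + transfer (a sketch)
  def disk_access_ms(seek_ms=8.0, rpm=7200, xfer_mb_s=16.0, kbytes=1.0, ctrl_ms=0.1):
      rot_ms  = 0.5 * 60_000 / rpm    # avg rotational latency: half a revolution
      xfer_ms = kbytes / xfer_mb_s    # 1 KB at 16 MB/s ~= 0.0625 ms (1 MB = 1000 KB here)
      return ctrl_ms + seek_ms + rot_ms + xfer_ms

  print(disk_access_ms())             # ~12.3 ms, dominated by seek + rotation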
Data Rate: Inner vs. Outer Tracks

• To keep things simple, disks originally kept the same number of sectors per track
  – Since the outer track is longer, it had lower bits per inch (BPI)
• Competition pushed makers to keep BPI roughly the same for all tracks ("constant bit density")
  – More capacity per disk
  – More sectors per track toward the edge
  – Since the disk spins at constant speed, outer tracks have a faster data rate
• Bandwidth: outer track ~1.7x inner track!
  – The inner track has the highest density and the outer track the lowest, so density is not truly constant
  – The outer track is 2.1x the length of the inner track, but holds only 1.7x the bits
Photo of Disk Head, Arm, Actuator

(photo labels: actuator, arm, head, spindle, 12 platters)

State of the Art: Seagate Barracuda 180
• 181.6 GB, 3.5-inch disk
• 12 platters, 24 surfaces
• 24,247 cylinders
• 7,200 RPM (4.2 ms avg. rotational latency)
• 7.4/8.2 ms avg. seek (read/write)
• 64 down to 35 MB/s internal transfer rate (outer to inner track)
• 0.1 ms controller time
• 10.3 watts (idle)
1-inch disk drives!

• 2000 IBM Microdrive:
  – 1.7" x 1.4" x 0.2"
  – 1 GB, 3600 RPM, 5 MB/s, 15 ms seek
  – for digital cameras, PalmPCs?
• 2004 Apple iPod mini MP3 player
  – 4 GB, 249 USD, 3.6" x 2.0" x 0.5" case
  – full-size iPod: 20-40 GB, 299 USD, 1.8" hard drive
• 2006 Microdrive?
  – 9 GB, 50 MB/s!
Figure 7.2 Characteristics of three magnetic disks of 2000
Magnetic Disk History

(chart: areal density in Mbit/inch² and capacity of the unit shown in megabytes, over time)
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
Historical Perspective

• 1956 IBM RAMAC to early-1970s Winchester
  – Developed for mainframe computers; proprietary interfaces
  – Steady shrink in form factor: 27 in. to 14 in.
• Form factor and capacity drive the market, more than performance
• 1970s: mainframes => 14-inch-diameter disks
• 1980s: minicomputers and servers => 8-inch and 5.25-inch diameter
• Late 1980s/early 1990s: PCs and workstations
  – Mass-market disk drives become a reality
    • industry standards: SCSI, IDE
  – Pizza-box PCs => 3.5-inch-diameter disks
  – Laptops, notebooks => 2.5-inch disks
  – Palmtops didn't use disks, so 1.8-inch-diameter disks didn't make it
• 2000s:
  – 1 inch for cameras, camcorders, cell phones?
The Future of Magnetic Disks

Disk performance model/trends:
• Capacity: +100% per year (2x in 1.0 years)
• Transfer rate (BW): +40% per year (2x in 2.0 years)
• Rotation + seek time: -8% per year (halved in 10 years)
• MB/$: >100% per year (2x in 1.0 years), driven by fewer chips plus rising areal density
Areal Density

• Bits recorded along a track: the metric is Bits Per Inch (BPI)
• Number of tracks per surface: the metric is Tracks Per Inch (TPI)
• Disk designs brag about bit density per unit area: the metric is bits per square inch, called Areal Density
  – Areal Density = BPI x TPI
• Growth in areal density:
  – through about 1988: 29% per year (doubling every three years)
  – from 1988 to about 1996: 60% per year (quadrupling every three years)
  – from 1997 to 2001: 100% per year (doubling every year)
  Year    Areal Density (Mbit/inch²)
  1973        1.7
  1979        7.7
  1989         63
  1997      3,090
  2000     17,100
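A quick Python sanity check of the quoted growth rates against this table (note that the table's last interval ends in 2000, so it understates the 100%-per-year rate quoted through 2001):

  # Compound annual growth rate of areal density between table years
  density = {1973: 1.7, 1979: 7.7, 1989: 63, 1997: 3090, 2000: 17100}

  def cagr(y0, y1):
      return (density[y1] / density[y0]) ** (1 / (y1 - y0)) - 1

  print(f"{cagr(1973, 1979):.0%}")   # ~29% per year era
  print(f"{cagr(1989, 1997):.0%}")   # ~63% per year, close to the quoted 60%
  print(f"{cagr(1997, 2000):.0%}")   # ~77% per year over these table endpoints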
Figure 7.3 Price per PC disk by capacity (MBytes) between 1983 and 2001

• Prices fell by a factor of about 4000 in 18 years
• The price declines became steeper, from 30% per year to 100% per year
Figure 7.4 Price per GByte of PC disks over time, dropping by a factor of about 10,000 between 1983 and 2001

The center point is the median price. The spread from min to max grows over time due to the increasing price difference between ATA/IDE and SCSI disks.
Figure 7.5 Cost vs. access time for SRAM, DRAM, and magnetic disk

• DRAM latency is about 100,000 times lower than disk latency, although DRAM bandwidth is only about 50 times higher
• Note the two-order-of-magnitude gap in cost and the five-order-of-magnitude gap in access time between semiconductor memory and rotating magnetic disks
Magnetic Tapes

Tape vs. disk:
• Magnetic tape uses technology similar to disks, and hence historically has followed the same density improvements
• A disk head flies above the surface; a tape head lies on the surface
• Disks are fixed; tapes are removable
• Inherent cost-performance follows from the geometries:
  – fixed rotating platters with gaps (random access, limited area, 1 medium per reader), vs.
  – removable long strips wound on a spool (sequential access, "unlimited" length, multiple tapes per reader)
• Tape is usually 10-100x cheaper than disk per GB (good for backup), but in 2001 a 40 GB IDE disk and a 40 GB tape cost almost the same
• Helical scan (developed for VCRs, camcorders, etc.) spins the head at an angle to the tape, improving density by a factor of 20 to 50
Current Drawbacks of Tape

• Wear-out
  – Tape wear-out: hundreds of passes for helical, thousands for longitudinal
  – Head wear-out: about 2,000 hours for helical
  – Both must be accounted for in the economic/reliability model
• Compatibility problems
  – Readers must be compatible with multiple generations of media
  – Disks, by contrast, are a closed system
• Longer times
  – Long rewind, eject, load, and spin-up times
  – Long recovery time: restoring data from tape takes longer than from disk
• Designed for archival use; the market is small
  – PCs are a small market for tapes
  – The industry cannot afford a separate large research and development effort
• Get rid of tapes?
  – Replace them with networks and remote disks that replicate the data geographically
Automated Cartridge System: StorageTek Powderhorn 9310

• 6000 x 50 GB 9830 tapes = 300 TBytes in 2000 (uncompressed)
  – For scale, the Library of Congress in 1992: the ASCII of all its books = 30 TB
  – Exchanges up to 450 tapes per hour (8 secs/tape)
• 1.7 to 7.7 MByte/sec per reader, up to 10 readers
• 7.7 feet tall, 10.7 feet across; 8,200 pounds; 1.1 kilowatts
Flash Memory

• Flash memory: electronic solid-state storage
  – Written by inducing tunneling of charge from the transistor drain to a floating gate
  – The floating gate acts as a potential well that stores the charge; the charge cannot leave without an external force
  – Restricting writes to multi-kilobyte blocks increases memory capacity per chip by reducing the area dedicated to control
  – Two basic types in 2001 (timings compared in the sketch at the end of this slide):
    » NOR: 1-2 s to erase 64 KB to 128 KB blocks, 10 us/B to write, 0.1M write cycles
    » NAND: 5-6 ms to erase 4 KB to 8 KB blocks, 1.5 us/B to write, 1M write cycles
• Applications
  – Nonvolatile storage for embedded and wireless portable devices (e.g., mobile phones, PDAs, MP3 players, digital cameras)
  – Rewritable ROM: allows software to be upgraded without replacing chips
• Comparisons
  – Lower power consumption (<50 mW) than disks; can be sold in small capacities
  – Read access times comparable to DRAM: 150 ns for 128 Mbit parts and 65 ns for 16 Mbit parts in 2001
  – Writing is much slower and more complicated, sharing characteristics with older EPROM and EEPROM: blocks must be erased before being written
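A rough Python comparison of the NOR and NAND timings quoted above; a hedged sketch assuming a 64 KB erase block for NOR, a 4 KB block for NAND, and mid-range erase times:

  # Approximate time to erase-and-rewrite a 64 KB region, per the 2001 figures
  KB = 1024

  def rewrite_64kb_s(erase_s, erase_block_kb, write_us_per_byte):
      n_erase = 64 / erase_block_kb                  # blocks to erase
      return n_erase * erase_s + 64 * KB * write_us_per_byte * 1e-6

  print(rewrite_64kb_s(1.5,    64, 10.0))  # NOR:  ~2.2 s (one big erase + slow writes)
  print(rewrite_64kb_s(0.0055,  4,  1.5))  # NAND: ~0.19 s (16 small erases + faster writes)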
7.3 Busses - Connecting I/O Devices to CPU/Memory

• Interconnect = the glue that interfaces computer system components
• High-speed hardware interfaces + logical protocols
• Three broad classes: networks, channels, backplanes

               Network               Channel               Backplane
  Connects     machines              devices               chips
  Distance     > 1000 m              10 - 100 m            0.1 m
  Bandwidth    10 - 1000 Mb/s        40 - 1000 Mb/s        320 - 2000+ Mb/s
  Latency      high (> 1 ms)         medium                low (nanoseconds)
  Reliability  low (extensive CRC)   medium (byte parity)  high (byte parity)

• Networks are message-based, with narrow pathways and distributed arbitration; backplanes are memory-mapped, with wide pathways and centralized arbitration; channels fall in between
A Computer System with One Bus: the Backplane Bus

(diagram: Processor, Memory, and I/O Devices all attached to a single backplane bus)

• A single bus (the backplane bus) is used for:
  – Processor-to-memory communication
  – Communication between I/O devices and memory
• Advantages: simple and low cost
• Disadvantages: slow, and the bus can become a major bottleneck
• Example: IBM PC-AT
A Two-Bus System

(diagram: a processor-memory bus, with bus adaptors connecting it to several I/O buses)

• I/O buses tap into the processor-memory bus via bus adaptors:
  – Processor-memory bus: mainly for processor-memory traffic
  – I/O buses: provide expansion slots for I/O devices
• Example: Apple Macintosh II
  – NuBus: processor, memory, and a few selected I/O devices
  – SCSI bus: the rest of the I/O devices
A Three-Bus System

(diagram: a processor-memory bus, a backplane bus attached to it via a bus adaptor, and I/O buses hanging off the backplane bus)

• A small number of backplane buses tap into the processor-memory bus
  – The processor-memory bus is used only for processor-memory traffic
  – I/O buses are connected to the backplane bus
• Advantage: loading on the processor bus is greatly reduced
Separate Bus Architecture: North/South Bridges

(diagram: processor with "backside cache", memory bus, graphics bus, backplane bus, and I/O buses, joined by bus adaptors)

• Separate sets of pins for different functions
  – Memory bus
  – Caches
  – Graphics bus (for a fast frame buffer)
  – I/O buses connected to the backplane bus
• Advantages:
  – Buses can run at different speeds
  – Much less overall loading!
Example: Intel Chipset (82925XE MCH + ICH6R)

Memory Controller Hub (82925XE MCH):
• System bus: 1066/800 MHz
• Memory: 2 DIMMs per channel x 2 channels, DDR2-533/400, up to 4 GB
• Graphics interface: PCI Express x16

I/O Controller Hub (ICH6R):
• PCI support: 4 PCI Express ports
• Storage interface: ATA 100; SATA 150 x 4; UDMA
• USB: 8 USB 2.0 ports
• Audio: AC'97 20-bit audio
• I/O: SMBus 2.0 / GPIO
Examples: VIA Chipsets
• PT880 + VT8237 for Intel P4
• K8T890 + VT8251 for AMD Athlon 64
Bus Design

(diagram: a master and slaves connected by control, address, and data lines)

• Synchronous bus: includes a clock in the control lines
  – A fixed protocol for communication, relative to the clock
  – Advantage: involves very little logic and can run very fast
  – Disadvantages:
    • Every device on the bus must run at the same clock rate
    • To avoid clock skew, fast buses cannot be long
• Asynchronous bus: not clocked
  – Can accommodate a wide range of devices
  – Can be lengthened without worrying about clock skew
  – Requires a handshaking protocol (sketched below)
• Bus master: has the ability to control the bus; initiates transactions
• Bus slave: module activated by a transaction
• Bus communication protocol: specification of the sequence of events and timing requirements for transferring information
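A minimal sketch of the kind of handshaking an asynchronous bus needs: a four-phase req/ack exchange, modeled sequentially in Python (the signal names are illustrative, not from the text):

  # Four-phase asynchronous handshake between a bus master and a slave
  def async_transfer(word):
      bus = {"req": 0, "ack": 0, "data": None}
      bus["data"], bus["req"] = word, 1   # 1. master drives data, raises req
      latched = bus["data"]               # 2. slave sees req, latches data...
      bus["ack"] = 1                      #    ...and raises ack
      bus["req"] = 0                      # 3. master sees ack, drops req
      bus["ack"] = 0                      # 4. slave drops ack: ready for the next transfer
      return latched

  assert async_transfer(0xBEEF) == 0xBEEF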
Bus Design Options

Summary of Parallel and Serial I/O Buses
Interfacing I/O to CPU

The interface consists of setting up the device with the operation to be performed:
• Read or write
• Size of transfer
• Location on the device
• Location in memory
Then the device is triggered to start the operation; when the operation completes, the device interrupts.

Method 1: I/O instructions. I/O instructions (in, out) are distinct from memory-access instructions:
  LDD R0,D,P   <-- load R0 with the contents found at device D, port P

Method 2: memory-mapped I/O. Device registers are mapped to look like regular memory:
  LD R0,Mem1   <-- load R0 with the contents found at device D, port P
This works because initialization has correlated the device's registers with location Mem1.
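On a Unix-like system, memory-mapped device registers can even be reached from user space by mapping physical memory; a hedged Python sketch (the base address 0xFE000000 and the register offsets are hypothetical, and this requires root privileges and real hardware):

  # Memory-mapped I/O sketch: ordinary loads/stores to a mapped range reach the device
  import mmap, os, struct

  BASE = 0xFE000000                      # hypothetical physical base of device registers
  fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
  regs = mmap.mmap(fd, 4096, mmap.MAP_SHARED,
                   mmap.PROT_READ | mmap.PROT_WRITE, offset=BASE)

  def read_reg(off):                     # a load from "memory" is really a device read
      return struct.unpack_from("<I", regs, off)[0]

  def write_reg(off, value):             # a store to "memory" is really a device write
      struct.pack_into("<I", regs, off, value)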
Memory-Mapped I/O

(diagram: (1) the CPU issues an instruction to the I/O controller (IOC), naming the target device; (2) the IOC looks in memory for its command block, whose OP, address, count, and other fields say what to do, where to put the data, how much, and any special requests; (3) device-to/from-memory transfers are controlled by the IOC directly; (4) the IOC interrupts the CPU when done. The physical address space is divided among ROM, RAM, and I/O.)

• Some physical addresses are set aside; there is no real memory at these addresses
• Instead, when the processor sees these addresses, it knows to aim the operation at the I/O processor
Transfer Method 1: Programmed I/O (Polling)

(flowchart: the CPU repeatedly asks the IOC "is the data ready?"; when yes, it reads the data from the device and stores it to memory; it then checks "done?" and loops if not)

• The busy-wait loop is not an efficient way to use the CPU unless the device is very fast!
• But checks for I/O completion can be dispersed among computationally intensive code
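The polling loop above, as a small Python sketch; the toy device class stands in for the IOC's status and data registers and is invented purely for illustration:

  # Programmed I/O: the CPU busy-waits on a status flag, then moves the data itself
  class ToyDevice:
      """Hypothetical device: a ready flag plus a one-word data register."""
      def __init__(self, words):
          self.words = list(words)
      def data_ready(self):
          return bool(self.words)
      def read_data(self):
          return self.words.pop(0)

  def polled_read(dev):
      data = []
      while dev.words:                  # done?
          while not dev.data_ready():   # busy-wait: a real device is not always ready
              pass
          data.append(dev.read_data())  # read data, store data
      return data

  print(polled_read(ToyDevice([1, 2, 3])))   # [1, 2, 3]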
Device Interrupts

• An I/O interrupt is just like an exception, except:
  – An I/O interrupt is asynchronous
  – Further information needs to be conveyed
• An I/O interrupt is asynchronous with respect to instruction execution:
  – It is not associated with any particular instruction
  – It does not prevent any instruction from completing
  – So the processor can pick its own convenient point to take the interrupt
• An I/O interrupt is more complicated than an exception:
  – It must convey the identity of the device generating the interrupt
  – Interrupt requests can have different urgencies, so they must be prioritized
Device Interrupts (cont.)

(figure: an external interrupt arrives while a user program runs; the PC is saved, all interrupts are disabled, and the processor enters supervisor mode. The interrupt handler raises the priority, re-enables interrupts, and saves registers; it then performs the device transfer; finally it restores registers, clears the current interrupt, disables interrupts, restores the priority, and executes RTI, restoring the PC and returning to user mode.)

• Advantage:
  – User-program progress is halted only during the actual transfer
• Disadvantage: special hardware is needed to
  – Cause an interrupt (I/O device)
  – Detect an interrupt (processor)
  – Save the proper state to resume after the interrupt (processor)
Transfer Method 2: Interrupt-Driven Data Transfer

(diagram: (1) the device raises an I/O interrupt during the user program; the CPU (2) saves the PC and (3) jumps to the interrupt service address; (4) the interrupt service routine reads the data, stores it to memory, and returns with rti)

• User-program progress is halted only during the actual transfer; the interrupt-handler code does the transfer
• Device transfer rate = 10 MBytes/sec => 0.1 x 10^-6 sec/byte => 0.1 µsec/byte
  => 1000 bytes take 100 µsec; 1000 transfers x 100 µsec = 100 ms = 0.1 CPU seconds
• 1000 transfers at 1000 bytes each:
  – 1000 interrupts @ 2 µsec per interrupt
  – 1000 interrupt services @ 98 µsec each
  => 0.1 CPU seconds
• Still far from the device transfer rate, with half the time spent in interrupt overhead!
Delegating I/O Responsibility from the CPU: DMA

• Direct Memory Access (DMA):
  – External to the CPU
  – Acts as a master on the bus
  – Transfers blocks of data to or from memory without CPU intervention
• The CPU sends a starting address, direction, and length count to the IOC, then issues "start"
• The IOC provides handshake signals for the peripheral controller, plus memory addresses and handshake signals for memory
Transfer Method 3: Direct Memory Access (DMA)

• Time to do 1000 transfers at 1000 bytes each:
  – 1 DMA set-up sequence @ 50 µsec
  – 1 interrupt @ 2 µsec
  – 1 interrupt service sequence @ 48 µsec
  => 0.0001 seconds of CPU time
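Putting the interrupt-driven numbers (previous slide) and the DMA numbers side by side; a quick Python check of the arithmetic:

  # CPU time for 1000 transfers of 1000 bytes each, per the figures quoted
  n = 1000
  interrupt_driven_s = n * (2e-6 + 98e-6)   # per-transfer interrupt + service
  dma_s = 50e-6 + 2e-6 + 48e-6              # one set-up + one completion interrupt + service

  print(interrupt_driven_s)   # 0.1 s of CPU time
  print(dma_s)                # 0.0001 s: a 1000x reduction in CPU overhead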
(diagram: the CPU programs the DMA controller (DMAC) with the starting address, direction, and length, then issues "start"; the IOC/DMAC provides handshake signals for the peripheral controller, plus memory addresses and handshake signals for memory. In the memory-mapped address space, ROM sits at address 0, then RAM, with the peripherals' I/O buffers mapped at the top.)
7.5 RAID: Redundant Arrays of Inexpensive Disks

Use arrays of small disks? Katz and Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?

(figure: conventional designs use four disk form factors, 14", 10", 5.25", and 3.5", spanning low end to high end; a disk array uses a single design, the 3.5" disk)
Array Reliability

• Reliability of N disks = reliability of 1 disk ÷ N
  – 1,200,000-hour MTTF ÷ 100 disks = 12,000 hours
  – 1 year = 365 x 24 = 8,760 hours
  – The disk-system MTTF drops from about 140 years to about 1.4 years!
• Arrays (without redundancy) are too unreliable to be useful!
• Hot spares support reconstruction in parallel with access: very high media availability can be achieved
Redundant Arrays of Inexpensive Disks

Ideas:
• Files are "striped" across multiple spindles
• Redundancy yields high data availability
  – Disks will fail; contents are reconstructed from data redundantly stored in the array
  – Capacity penalty to store the redundant information
  – Bandwidth penalty to update it
• Techniques:
  – Mirroring/shadowing (high capacity cost)
  – Parity
Figure 7.17 Standard RAID Levels
RAID 1: Disk Mirroring/Shadowing

(figure: each disk in the recovery group is fully duplicated onto a "shadow" disk)

• Each disk is fully duplicated onto its "shadow": very high availability can be achieved
• Bandwidth sacrifice on writes: one logical write = two physical writes
• Reads may be optimized (either copy can serve them)
• Most expensive solution: 100% capacity overhead
• Targeted for high-I/O-rate, high-availability environments
• (RAID 2 is not interesting, so we skip it)
RAID 3: Bit-Interleaved Parity Disk

(figure: a logical record, e.g. 10010011, is striped bitwise across the data disks as physical records; a parity disk P is computed across the recovery group)

• Parity is computed across the recovery group to protect against hard-disk failures
• 33% capacity cost for parity in this configuration
• Wider arrays reduce the capacity cost, but decrease expected availability and increase reconstruction time
• Arms are logically synchronized and spindles rotationally synchronized:
  – logically a single high-capacity, high-transfer-rate disk
• Targeted for high-bandwidth applications: scientific computing, image processing
Inspiration for RAID 4

• RAID 3 relies on the parity disk to discover errors on a read, but every sector already has its own error-detection field
  – In RAID 3, every access (read or write) went to all disks
  – We can rely on the per-sector error-detection field to catch read errors, rather than on the parity disk
• How do we allow independent reads to different disks simultaneously?
• RAID 4 and RAID 5: block-interleaved parity and distributed block-interleaved parity
  – Parity is stored as blocks
  – Smaller accesses can proceed in parallel
  – A small write involves only the data disk and the parity disk, instead of all disks as in RAID 3
RAID 4: Block-Interleaved Parity

(figure: the insides of 5 disks, with all parity on a dedicated disk; logical disk addresses increase across each stripe:

  D0   D1   D2   D3   P
  D4   D5   D6   D7   P
  D8   D9   D10  D11  P
  D12  D13  D14  D15  P
  D16  D17  D18  D19  P
  D20  D21  D22  D23  P  ...)

• Example: small reads of D0 and D5 can proceed independently; a large write of D12-D15 updates a full stripe and its parity
• Targeted for a high I/O rate, with parity on a dedicated disk
Figure 7.18 Small write update on RAID 3 vs. RAID 4/5

• Assume 4 blocks of data and 1 block of parity; block D0 is rewritten as D0'
• RAID 3: read blocks D1, D2, and D3, combine them with the new D0' to calculate the new parity P', then write D0' and P'
• RAID 4/5: read only the old D0 and P, calculate P' = P xor D0 xor D0', then write D0' and P'
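The parity arithmetic behind Figure 7.18 is plain XOR; a minimal Python sketch of the RAID 4/5 small-write shortcut and of reconstruction after a disk loss:

  # Parity P = D0 ^ D1 ^ D2 ^ D3 (bytewise XOR); a small write needs only old D0 and P
  import os
  from functools import reduce

  def xor_blocks(*blocks):
      return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

  d = [os.urandom(16) for _ in range(4)]     # four data blocks
  p = xor_blocks(*d)                         # parity block

  d0_new = os.urandom(16)                    # small write: replace D0 with D0'
  p_new  = xor_blocks(p, d[0], d0_new)       # P' = P ^ D0 ^ D0': 2 reads + 2 writes
  assert p_new == xor_blocks(d0_new, d[1], d[2], d[3])

  # If disk 2 then fails, its contents are reconstructed from the survivors:
  assert xor_blocks(d0_new, d[1], d[3], p_new) == d[2]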
Inspiration for RAID 5

• RAID 4 works well for small reads; for small writes there are two options:
  – Option 1: read the other data disks, compute the new parity, and write it to the parity disk
  – Option 2: since P holds the old sum, compare the old data to the new data and add the difference into P
  – Either way, small writes are limited by the parity disk: writes to D0 and D5 must both also write to the P disk
• RAID 5: distributed block-interleaved parity
  – Distribute the parity blocks across the disks, removing the single-parity-disk bottleneck
• RAID 6: P+Q redundancy
  – The parity-based schemes in RAID 1-5 protect against only a single, self-identifying failure
  – A second check block on one extra disk, Q (using Reed-Solomon codes), allows recovery from a second failure
  – A small write takes 6 disk accesses instead of 4, to update both the P and Q information (additionally reading Q and writing Q')
RAID 5: Distributed Block-Interleaved Parity

(figure: the insides of 5 disks, with the parity block rotating across the disk columns; logical disk addresses increase across each stripe:

  D0   D1   D2   D3   P
  D4   D5   D6   P    D7
  D8   D9   P    D10  D11
  D12  P    D13  D14  D15
  P    D16  D17  D18  D19
  D20  D21  D22  D23  P  ...)

• Independent writes are possible because of the interleaved parity
• Example: a write to D0 and a write to D5 use disks 0, 1, 3, and 4, so they can proceed in parallel
• Targeted for a high I/O rate, with interleaved parity
System Availability: Orthogonal RAIDs

(figure: an array controller fans out to several string controllers, each driving a string of disks; data recovery groups run orthogonally to the strings)

• Data recovery group: the unit of data redundancy
• Redundant support components: fans, power supplies, controllers, cables
• End-to-end data integrity: internal parity-protected data paths
System-Level Availability

(figure: a fully dual-redundant system: two hosts, duplicated I/O controllers and array controllers, and duplicated paths down to the recovery groups of disks)

• Goal: no single points of failure
• With duplicated paths, higher performance can be obtained when there are no failures
Summary of RAID Techniques: the goal was performance; popularity is due to reliability of storage

• Disk mirroring/shadowing (RAID 1)
  – Each disk is fully duplicated onto its "shadow"
  – Logical write = two physical writes
  – 100% capacity overhead
• Parity data bandwidth array (RAID 3)
  – Parity computed horizontally across the recovery group
  – Logically a single high-data-bandwidth disk
• High-I/O-rate parity array (RAID 5)
  – Interleaved parity blocks
  – Independent reads and writes
  – Logical write = 2 reads + 2 writes
Summary

Chapter 7. Storage Systems
7.1 Introduction
7.2 Types of Storage Devices
7.3 Busses - Connecting I/O Devices to CPU/Memory
7.4 Reliability, Availability and Dependability
7.5 RAID: Redundant Arrays of Inexpensive Disks
7.6 Errors and Failures in Real Systems
7.7 I/O Performance Measures
7.8 A Little Queuing Theory
7.9 Benchmarks of Storage Performance and Availability