Virtual Memory



Next in the memory hierarchy
Motivations:
  to remove the programming burden of a small, limited amount of main memory
  to allow efficient and safe sharing of memory among multiple programs
    the amount of main memory vs. the total amount of memory required by all the programs:
    only a fraction of the total required memory is active at any time, so bring only those active parts into physical memory
  each program uses its own address space
  VM implements the mapping of a program's virtual address space to physical addresses


Motivations

Use physical DRAM as a cache for the disk
  Address space of a process can exceed physical memory size
  Sum of address spaces of multiple processes can exceed physical memory
Simplify memory management
  Multiple processes resident in main memory
    Each process with its own address space
    Only "active" code and data is actually in memory
  Allocate more memory to a process as needed
  Provide a virtually contiguous memory space
Provide protection
  One process can't interfere with another, because they operate in different address spaces
  A user process cannot access privileged information
  Different sections of address spaces have different permissions


Motivation 1: Caching

Use DRAM as a cache for the disk
Full address space is quite large:
  32-bit addresses: 4 billion (~4 x 10^9) bytes
  64-bit addresses: 16 quintillion (~16 x 10^18) bytes
Disk storage is ~300X cheaper than DRAM storage
  160 GB of DRAM: ~$32,000
  160 GB of disk: ~$100
To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk

[Figure: storage hierarchy with capacity and cost per level - SRAM 4 MB: ~$500; DRAM 1 GB: ~$200; disk 160 GB: ~$100]


DRAM vs. Disk

DRAM vs. disk is more extreme than SRAM vs. DRAM
Access latencies:
  DRAM: ~10X slower than SRAM
  Disk: ~100,000X slower than DRAM
Importance of exploiting spatial locality:
  Disk: first byte is ~100,000X slower than successive bytes
  DRAM: ~4X improvement for fast page-mode vs. regular accesses
Bottom line: design decisions for the DRAM cache are driven by the enormous cost of misses

[Figure: relative access latencies - SRAM 4 MB; DRAM 1 GB: ~10X slower; disk 160 GB: ~100,000X slower]


Cache for Disk: Design Parameters

Line (block) size? Large, since disk is better at transferring large blocks
Associativity? (Block placement) High, to minimize the miss rate
Block identification? Using tables
Write-through or write-back? (Write strategy) Write-back, since we can't afford to perform small writes to disk
Block replacement? Ideally a block not to be used in the future; in practice, based on LRU (Least Recently Used) (a sketch follows below)

Impact of design decisions for the DRAM cache (main memory)
  Miss rate: extremely low, << 1%
  Hit time: match DRAM performance
  Miss latency: very high, ~20 ms
  Tag storage overhead: low, relative to block size

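To make the replacement bullet above concrete, here is a minimal sketch of picking a victim frame by least-recent use. The Frame type and its last_use timestamp are illustrative assumptions, not anything from the slides; real systems only approximate LRU (for example, with reference bits) rather than keeping exact timestamps.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_FRAMES 8        /* illustrative: number of physical page frames */

typedef struct {
    int      valid;         /* frame currently holds a resident page        */
    uint64_t last_use;      /* hypothetical timestamp of most recent access */
} Frame;

/* Return the index of the least recently used resident frame,
 * preferring any free (invalid) frame first. */
size_t choose_victim(const Frame frames[], size_t n) {
    size_t victim = 0;
    for (size_t i = 0; i < n; i++) {
        if (!frames[i].valid)
            return i;                              /* free frame: no eviction needed   */
        if (frames[i].last_use < frames[victim].last_use)
            victim = i;                            /* older access => better LRU victim */
    }
    return victim;
}
```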


Locating an Object in a DRAM Cache

Each allocated page of virtual memory has an entry in the page table
  Mapping from virtual pages to physical pages
  A page table entry exists even if the page is not in memory; it then specifies the disk address, and the OS retrieves the information
The page fault handler places a page from disk in DRAM (a sketch follows the figure below)

[Figure: a page table maps object names (D, J, X, ...) to locations; entries marked 1 point into "main memory" (slots 0 .. N-1 holding data such as 243, 17, 105), while entries marked 0 give the object's location on disk]
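To make the lookup above concrete, here is a minimal C sketch of a page-table entry that records either a resident physical page or a disk address, with a stand-in for the OS fault handler. The types and the load path are illustrative assumptions, not the slides' (or any real OS's) data structures.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

typedef struct {
    int      in_memory;   /* 1: page resident in DRAM; 0: only on disk     */
    uint32_t ppn;         /* physical page number, valid when in_memory    */
    uint32_t disk_addr;   /* where the page lives on disk otherwise        */
} PTE;

/* Hypothetical stand-in for the OS page-fault handler: it would issue the
 * disk read, pick a physical frame, and update the entry. */
static void page_fault_handler(PTE *pte) {
    /* ... read PAGE_SIZE bytes from pte->disk_addr into a free frame ... */
    pte->ppn = 0;            /* frame chosen by the (omitted) allocator */
    pte->in_memory = 1;
}

/* Look up a virtual address: every allocated virtual page has an entry,
 * even when the page itself is out on disk. */
uint32_t lookup(PTE page_table[], uint32_t vaddr) {
    PTE *pte = &page_table[vaddr / PAGE_SIZE];
    if (!pte->in_memory)             /* miss: entry only names a disk address */
        page_fault_handler(pte);     /* bring the page into DRAM              */
    return pte->ppn * PAGE_SIZE + (vaddr % PAGE_SIZE);
}
```

Dividing by PAGE_SIZE here mirrors how the hardware would simply take the upper address bits as the virtual page number.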


A System with Physical Memory Only

Examples: most Cray machines, early PCs, nearly all embedded systems, etc.
Addresses generated by the CPU correspond directly to bytes in physical memory

[Figure: the CPU issues physical addresses that index memory locations 0 through N-1 directly]


System with Virtual Memory

Examples: workstations, servers, modern PCs, etc.
Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table)

[Figure: the CPU issues virtual addresses; an OS-managed page table translates them to physical addresses in memory, or to locations on disk]


Page Fault (Cache Miss)

What if an object is on disk rather than in memory?
  The page table entry indicates the virtual address is not in memory
  An OS exception handler is invoked to move data from disk into memory
    The current process suspends; others can resume
    The OS has full control over placement, etc.

[Figure: before the fault, the CPU's virtual address maps through the page table to a location on disk; after the fault, the page table maps it to a physical address in memory]


Servicing a Page Fault

(1) Processor signals the I/O controller: read a block of length P starting at disk address X and store it starting at memory address Y
(2) Read occurs via Direct Memory Access (DMA), under control of the I/O controller
(3) I/O controller signals completion: it interrupts the processor, and the OS resumes the suspended process

[Figure: processor, cache, memory, and I/O controller on the memory-I/O bus with disks attached - (1) the processor initiates the block read, (2) DMA transfer from disk to memory, (3) "read done" interrupt to the processor]


Motivation 2: Management

Multiple processes can reside in physical memory
Simplifies linking: each process sees the same basic format for its memory image

[Figure: Linux/x86 process memory image, from high to low addresses - kernel virtual memory (invisible to user code); user stack (%esp); memory-mapped region for shared libraries; runtime heap (grown via malloc, bounded by the "brk" pointer); uninitialized data (.bss); initialized data (.data); program text (.text); forbidden region at address 0]
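The layout above can be observed directly. This small, illustrative C program (not from the slides) prints the address of one object from each region; exact values vary by machine, but on a typical Linux/x86 system the stack address comes out highest and the text address lowest, with heap, .bss, and .data in between.

```c
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;        /* lives in .data */
int uninitialized_global;           /* lives in .bss  */

int main(void) {
    int  local = 0;                           /* lives on the stack        */
    int *heap_obj = malloc(sizeof *heap_obj); /* lives on the runtime heap */

    printf("text  (main)      : %p\n", (void *)main);
    printf(".data (init glob) : %p\n", (void *)&initialized_global);
    printf(".bss  (uninit)    : %p\n", (void *)&uninitialized_global);
    printf("heap  (malloc)    : %p\n", (void *)heap_obj);
    printf("stack (local var) : %p\n", (void *)&local);

    free(heap_obj);
    return 0;
}
```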


Virtual Address Space

Two processes can share physical pages by mapping them in their page tables
Separate virtual address spaces (separate page tables)
  Virtual and physical address spaces are divided into equal-sized blocks
  Blocks are called "pages" (both virtual and physical)
  Each process has its own virtual address space
  The OS controls how virtual pages are assigned to physical memory

[Figure: virtual address spaces (0 .. N-1) for process 1 and process 2, each with pages VP 1, VP 2, ..., mapped by address translation into the physical address space (DRAM, 0 .. M-1) at physical pages such as PP 2, PP 7, and PP 10; a shared physical page (e.g., read-only library code) can be mapped by both processes]


Memory Management

Allocating, deallocating, and moving memory
  Can be done by manipulating page tables
Allocating contiguous chunks of memory
  Can map a contiguous range of virtual addresses to disjoint ranges of physical addresses
Loading executable binaries
  Just fix up the page tables for the process
  Data in the binaries are paged in on demand
Protection
  Store protection information in page table entries
  Usually checked by hardware


Motivation 3: Protection

Each page table entry contains access-rights information
Hardware enforces this protection (trap into the OS if a violation occurs)

[Figure: per-process page tables with access-rights bits (SUP?, Read?, Write?) in each entry - process i maps VP 0 to PP 9 (read-only), VP 1 to PP 4 (read/write), and VP 2 to a page not in memory (no access); process j has its own table mapping VP 0 to PP 6, VP 1 to PP 9, and VP 2 to a page not in memory, with its own permission bits; a physical page (e.g., PP 9) can appear in both tables with different rights]
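The per-page access rights above are what user programs manipulate through calls like mprotect. A minimal POSIX sketch (assuming 4 KB pages and Linux-style mmap flags): map one writable page, touch it, then drop the write right; after the mprotect call, a store to the page would violate the entry's access bits, so the hardware would trap and the OS would deliver SIGSEGV.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t page = 4096;                       /* assumption: 4 KB pages      */
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';                               /* allowed: page is writable   */

    /* Clear the write right on this page; the kernel updates the PTE(s). */
    if (mprotect(p, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("read still works: %c\n", p[0]);   /* reading is still permitted  */
    /* p[0] = 'y';  <- would now trap: the hardware checks the access bits */

    munmap(p, page);
    return 0;
}
```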


VM Address Translation

Page size = 2^12 bytes = 4 KB
Number of physical pages = 2^18, so main memory = 2^(12+18) = 2^30 bytes = 1 GB
Virtual address space = 2^32 bytes = 4 GB


VM Address Translation: Hit

[Figure: on a hit, the processor issues virtual address a; the hardware address translation mechanism (part of the on-chip memory management unit, MMU) produces physical address a', which is sent to main memory]


VM Address Translation: Miss

[Figure: on a miss, the translation hardware in the on-chip MMU raises a page fault; the OS fault handler transfers the page from secondary memory into main memory (only on a miss), after which physical address a' is produced for virtual address a]


Page Table

[Figure: page table translation - an n-bit virtual address splits into a virtual page number (VPN, bits n-1..p) and a page offset (bits p-1..0); the page table base register plus the VPN (acting as the table index) selects an entry holding a valid bit, access bits, and a physical page number (PPN); if valid = 0 the page is not in memory; the m-bit physical address is the PPN concatenated with the unchanged page offset]
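A minimal sketch of the translation in the figure above, assuming the deck's 4 KB pages (p = 12) and an illustrative entry layout: the VPN indexes the table, the valid bit gates the access, and the PPN is concatenated with the unchanged page offset.

```c
#include <stdint.h>

#define PAGE_SHIFT 12u                        /* p = 12 -> 4 KB pages          */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

typedef struct {
    uint32_t valid  : 1;                      /* 0 => page not in memory       */
    uint32_t access : 3;                      /* illustrative permission bits  */
    uint32_t ppn    : 28;                     /* physical page number          */
} PageTableEntry;

/* Translate a virtual address against a single-level table whose base
 * would come from the page table base register. Returns the physical
 * address, or sets *fault when the valid bit is 0 (page fault). */
uint32_t translate(const PageTableEntry *table, uint32_t vaddr, int *fault) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;    /* VPN acts as the table index   */
    uint32_t offset = vaddr &  PAGE_MASK;     /* offset passes through as-is   */

    PageTableEntry pte = table[vpn];
    if (!pte.valid) { *fault = 1; return 0; } /* not in memory: page fault     */

    *fault = 0;
    return ((uint32_t)pte.ppn << PAGE_SHIFT) | offset;
}
```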


Multi-Level Page Table

Given: 4 KB (2^12) page size, 32-bit address space, 4-byte PTE
Problem: would need a 4 MB page table! (2^20 entries * 4 bytes)
Common solution: multi-level page tables, e.g., a 2-level table (P6)
  Level 1 table: 1024 entries, each of which points to a Level 2 page table
  Level 2 table: 1024 entries, each of which points to a page

[Figure: a Level 1 table whose entries point to Level 2 tables, whose entries in turn point to pages]
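A sketch of how a 32-bit virtual address is carved up in the 2-level scheme above: the top 10 bits index the level-1 table, the next 10 bits index a level-2 table, and the low 12 bits are the page offset. The entry layout and the pointer-based level-1 table below are illustrative assumptions, not the exact P6 in-memory format.

```c
#include <stdint.h>
#include <stddef.h>

/* 32-bit virtual address = 10-bit L1 index | 10-bit L2 index | 12-bit offset */
#define L1_INDEX(va)    (((va) >> 22) & 0x3FFu)
#define L2_INDEX(va)    (((va) >> 12) & 0x3FFu)
#define PAGE_OFFSET(va) ((va) & 0xFFFu)

/* Illustrative 4-byte level-2 entry: bit 0 = present, bits 31..12 = frame. */
#define PRESENT(e)  ((e) & 1u)
#define FRAME(e)    ((e) >> 12)

/* Level-1 table modeled as 1024 pointers to level-2 tables (NULL = absent);
 * each level-2 table holds 1024 four-byte entries. */
uint32_t walk(uint32_t *const l1[1024], uint32_t va, int *fault) {
    const uint32_t *l2 = l1[L1_INDEX(va)];
    if (l2 == NULL) { *fault = 1; return 0; }      /* no L2 table allocated */

    uint32_t pte = l2[L2_INDEX(va)];
    if (!PRESENT(pte)) { *fault = 1; return 0; }   /* page not mapped       */

    *fault = 0;
    return (FRAME(pte) << 12) | PAGE_OFFSET(va);   /* frame base + offset   */
}
```

Because level-2 tables are only allocated for regions that are actually mapped, a sparse address space no longer costs the full 4 MB of page table up front.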


TLB

"Translation Lookaside Buffer" (TLB)
  Small hardware cache in the MMU
  Maps virtual page numbers to physical page numbers
  Contains complete page table entries for a small number of pages

[Figure: the CPU's virtual address (VA) goes to a TLB lookup; on a hit the physical address (PA) goes straight to the cache/main memory and data returns; on a miss a full translation is performed first, then the access proceeds]
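A minimal software model of the flow in the figure above: check a small TLB first, and only on a miss fall back to the page-table walk, then fill the TLB. The 8-entry table, the modulo replacement choice, and the walk_page_table placeholder are all assumptions made for illustration.

```c
#include <stdint.h>

#define TLB_ENTRIES 8                      /* illustrative: tiny TLB        */
#define PAGE_SHIFT  12u

typedef struct { int valid; uint32_t vpn, ppn; } TLBEntry;

static TLBEntry tlb[TLB_ENTRIES];

/* Placeholder for the full page-table walk done on a TLB miss. */
static uint32_t walk_page_table(uint32_t vpn) {
    return vpn;                            /* identity mapping, sketch only */
}

uint32_t translate_with_tlb(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1u);

    /* TLB hit: the translation is found without touching the page table. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].ppn << PAGE_SHIFT) | offset;

    /* TLB miss: walk the page table, then cache the entry in the TLB. */
    uint32_t ppn = walk_page_table(vpn);
    tlb[vpn % TLB_ENTRIES] = (TLBEntry){ .valid = 1, .vpn = vpn, .ppn = ppn };
    return (ppn << PAGE_SHIFT) | offset;
}
```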


Address Translation with TLB

[Figure: the virtual address splits into a virtual page number and page offset; the VPN is compared against TLB entries (valid bit, tag, physical page number); on a TLB hit the physical page number and offset form the physical address, which is split into cache tag, index, and byte offset for the cache lookup that yields the data on a cache hit]


MIPS R2000 TLB

[Figure: MIPS R2000 address translation - the 32-bit virtual address (bits 31..0) splits into a 20-bit virtual page number and a 12-bit page offset; each TLB entry holds valid and dirty bits, a 20-bit tag, and a 20-bit physical page number; on a TLB hit the physical page number and page offset form the physical address, which the cache views as a 16-bit physical address tag, a 14-bit cache index, and a 2-bit byte offset; a cache hit returns 32 bits of data]
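The bit widths in the R2000 figure above can be captured directly as shifts and masks. The helper names below are merely illustrative, but the field boundaries (20-bit VPN, 12-bit offset; 16-bit tag, 14-bit index, 2-bit byte offset) come from the figure.

```c
#include <stdint.h>

/* Virtual address: 20-bit virtual page number | 12-bit page offset. */
uint32_t vpn(uint32_t va)         { return va >> 12; }
uint32_t page_offset(uint32_t va) { return va & 0xFFFu; }

/* Physical address = 20-bit physical page number | 12-bit page offset,
 * viewed by the cache as 16-bit tag | 14-bit index | 2-bit byte offset. */
uint32_t phys_addr(uint32_t ppn, uint32_t va) { return (ppn << 12) | page_offset(va); }
uint32_t cache_tag(uint32_t pa)   { return pa >> 16; }
uint32_t cache_index(uint32_t pa) { return (pa >> 2) & 0x3FFFu; }
uint32_t byte_offset(uint32_t pa) { return pa & 0x3u; }
```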


VM Summary

Programmer's view
  Large "flat" address space
    Can allocate large blocks of contiguous addresses
  Process "owns" the machine
    Has a private address space
    Unaffected by the behavior of other processes

System view
  Virtual address space created by mapping to a set of pages
    Need not be contiguous; allocated dynamically
    Protection enforced during address translation
  OS manages many processes simultaneously
    Continually switching among processes, especially when one must wait for resources (e.g., disk I/O to handle page faults)
