Operating Systems Principles: Memory Management
Lecture 8: Virtual Memory
Lecturer: 虞台文
Content
– Principles of Virtual Memory
– Implementations of Virtual Memory
  – Paging
  – Segmentation
  – Paging with Segmentation
  – Paging of System Tables
  – Translation Look-Aside Buffers
– Memory Allocation in Paged Systems
  – Global Page Replacement Algorithms
  – Local Page Replacement Algorithms
  – Load Control and Thrashing
  – Evaluation of Paging
Principles of Virtual Memory
The Virtual Memory
Virtual memory is a technique that allows processes to execute even when they are not entirely in memory, by means of automatic storage allocation upon request.
The term virtual memory refers to the abstraction of separating LOGICAL memory (memory as seen by the process) from PHYSICAL memory (memory as seen by the processor).
The programmer needs to be aware of only the logical memory space, while the operating system maintains two or more levels of physical memory space.
Principles of Virtual Memory
[Figure: a 4G virtual memory (addresses 00000000–FFFFFFFF) is translated by an address map onto a 64M physical memory (addresses 00000000–3FFFFFF).]
– For each process, the system creates the illusion of large contiguous memory space(s).
– Relevant portions of Virtual Memory (VM) are loaded automatically and transparently.
– The Address Map translates Virtual Addresses to Physical Addresses.
Approaches
Single-segment Virtual Memory:
– One area of words 0 … n−1
– Divided into fixed-size pages

Multiple-Segment Virtual Memory:
– Multiple areas of up to n words (0 … n−1) each
– Each holds a logical segment (e.g., a function or data structure)
– Each is contiguous or divided into pages
Main Issues in VM Design
Address mapping
– How to translate virtual addresses to physical addresses?
Placement
– Where to place a portion of VM needed by a process?
Replacement
– Which portion of VM to remove when space is needed?
Load control
– How many processes can be active at any one time?
Sharing
– How can processes share portions of their VMs?
Implementations of Virtual Memory
– Paging
– Segmentation
– Paging with Segmentation
– Paging of System Tables
– Translation Look-Aside Buffers
Paged Virtual Memory
[Figure: a virtual address (p, w) selects word w (offset) of page p in virtual memory; a physical address (f, w) selects word w of frame f in physical memory. Pages and frames have the same size, 2^n words.]
The size of physical memory is usually much smaller than the size of virtual memory.
Virtual & Physical Addresses

va = (p, w): page number p (|p| bits), offset w (|w| bits)
pa = (f, w): frame number f (|f| bits), offset w (|w| bits)

Virtual memory size: 2^(|p|+|w|)
Physical memory size: 2^(|f|+|w|)
As numbers: va = p·2^|w| + w, pa = f·2^|w| + w

The size of physical memory is usually much smaller than the size of virtual memory.
Address Mapping

[Figure: the address map translates virtual address (p, w) into physical address (f, w).]

Given (p, w), how to determine f from p?
When each process (pid = id) has its own virtual memory, the address map translates (id, p, w) into (f, w).

Given (id, p, w), how to determine f from (id, p)?
Frame Tables

[Figure: the frame table FT has one entry per frame, indexed by frame number f (0 … F−1); each entry records the pid (ID) and page number p currently held in that frame. The offset w then selects a word within frame f of physical memory.]

Each process (pid = id) has its own virtual memory.
Given (id, p, w), how to determine f from (id, p)?
Address Translation via Frame Table

address_map(id, p, w) {
    pa = UNDEFINED;
    for (f = 0; f < F; f++)
        if (FT[f].pid == id && FT[f].page == p)
            pa = f·2^|w| + w;   /* concatenate frame number f and offset w */
    return pa;
}
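As a concrete illustration, here is a minimal Python sketch of this frame-table search (the table contents, page size, and names are made up for the example, not taken from the slides):

```python
# Frame-table translation sketch: FT has one entry per physical frame,
# recording which (pid, page) currently occupies it; translation scans
# every entry linearly.

PAGE_SIZE = 4096          # assume 2^12-word frames (illustrative)
UNDEFINED = None

# FT[f] = (pid, page) resident in frame f (illustrative contents)
FT = [(7, 0), (3, 5), (7, 2), (3, 1)]

def address_map(pid, p, w):
    pa = UNDEFINED
    for f, (owner, page) in enumerate(FT):
        if owner == pid and page == p:
            pa = f * PAGE_SIZE + w   # concatenate frame number and offset
    return pa

print(address_map(7, 2, 100))   # frame 2 -> 2*4096 + 100 = 8292
print(address_map(3, 9, 0))     # page not resident -> None
```

The linear scan is exactly why real hardware must do this search associatively: software-style iteration over every frame on every reference would be far too slow.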
Disadvantages

– Inefficient: the mapping must be performed for every memory reference.
– Costly: the search must be done in parallel in hardware (i.e., associative memory).
– Sharing of pages is difficult or not possible.
Associative Memories as Translation Look-Aside Buffers

When memory size is large, frame tables tend to be quite large and cannot be kept in associative memory entirely.
To alleviate this, associative memories are used as translation look-aside buffers.
– To be detailed shortly.
Page Tables

[Figure: the Page Table Register (PTR) points to the page table of the current process; entry p of the table gives frame f, and offset w selects a word within that frame of physical memory.]

A page table keeps track of the current locations of all pages belonging to a given process.
At run time, the OS sets PTR to point at the page table of the current process.

address_map(p, w) {
    pa = *(PTR + p) + w;   /* the page-table entry holds the frame's base address */
    return pa;
}

Drawback: an extra memory access is needed for every read/write operation.
Solution: Translation Look-Aside Buffer.
Demand Paging

Pure Paging
– All pages of VM are loaded initially.
– Simple, but maximum size of VM = size of PM.

Demand Paging
– Pages are loaded as needed: “on demand”.
– An additional bit in the PT indicates a page’s presence/absence in memory.
– A “page fault” occurs when the referenced page is absent.

address_map(p, w) {
    if (resident(*(PTR + p))) {
        pa = *(PTR + p) + w;
        return pa;
    }
    else page_fault;
}

resident(m): true if the m-th page is in memory, false if it is missing.
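A minimal Python sketch of demand-paged translation, with a resident bit per page-table entry (the table contents and the PageFault exception are illustrative stand-ins for the slide's structures):

```python
# Demand-paging sketch: each page-table entry carries a resident bit;
# referencing a non-resident page raises a page fault, which the OS
# would service by loading the page and retrying the reference.

PAGE_SIZE = 4096

class PageFault(Exception):
    pass

# page table: page number -> (resident bit, frame number); illustrative
PT = {0: (True, 5), 1: (False, None), 2: (True, 9)}

def address_map(p, w):
    resident, frame = PT[p]
    if not resident:
        raise PageFault(p)           # OS loads the page, then retries
    return frame * PAGE_SIZE + w

print(address_map(2, 8))             # page 2 is in frame 9
```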
Segmentation

Multiple contiguous spaces (“segments”)
– A more natural match to program/data structure
– Easier sharing (Chapter 9)

va = (s, w) is mapped to pa (but there are no frames).
Where/how are segments placed in PM?
– Contiguous versus paged allocation
Contiguous Allocation Per Segment

Segment Tables

[Figure: the Segment Table Register (STR) points to the segment table; entry s gives the starting location of segment s in physical memory, and w is the offset within the segment.]

– Each segment is contiguous in PM.
– The Segment Table (ST) tracks starting locations.
– STR points to the ST.
– Address translation:

address_map(s, w) {
    if (resident(*(STR + s))) {
        pa = *(STR + s) + w;
        return pa;
    }
    else segment_fault;
}

Drawback: external fragmentation.
Paging with Segmentation

Each segment is divided into fixed-size pages; va = (s, p, w)
– |s| determines the number of segments (size of ST)
– |p| determines the number of pages per segment (size of PT)
– |w| determines the page size

Address translation:

address_map(s, p, w) {
    pa = *(*(STR + s) + p) + w;
    return pa;
}

Drawback: 2 extra memory references.
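The two-level lookup can be sketched in Python, with dicts standing in for the in-memory segment and page tables (all contents are illustrative):

```python
# Paging-with-segmentation sketch: the segment table maps s to a page
# table, which maps p to a frame number.  Each dict lookup stands in
# for one of the two extra memory references.

PAGE_SIZE = 256

ST = {                      # segment table: s -> page table (illustrative)
    0: {0: 3, 1: 7},        # page table: p -> frame
    1: {0: 2},
}

def address_map(s, p, w):
    pt = ST[s]              # 1st extra memory reference
    f = pt[p]               # 2nd extra memory reference
    return f * PAGE_SIZE + w

print(address_map(0, 1, 5))   # frame 7 -> 7*256 + 5 = 1797
```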
Paging of System Tables

The ST or PT may be too large to keep in PM.
– Divide the ST or PT into pages.
– Keep track of them with an additional page table.

Paging of the ST:
– The ST is divided into pages.
– A segment directory keeps track of the ST pages.

Address translation:

address_map(s1, s2, p, w) {
    pa = *(*(*(STR + s1) + s2) + p) + w;
    return pa;
}

Drawback: 3 extra memory references.
Translation Look-Aside Buffers (TLB)

Advantage of VM: users view memory in a logical sense.
Disadvantage of VM: extra memory accesses are needed.
Solution: the Translation Look-Aside Buffer (TLB), a special high-speed memory.

Basic idea of the TLB:
– Keep the most recent translations of virtual to physical addresses readily available for possible future use.
– An associative memory is employed as the buffer.
The buffer is searched associatively (in parallel).
When the search of (s, p) fails, the complete address translation is needed.
Replacement strategy: LRU.
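The TLB idea can be sketched in Python, with an LRU-ordered dictionary standing in for the associative memory (a real TLB searches all entries in parallel in hardware; the class name and toy page walk below are illustrative):

```python
# TLB sketch: a small LRU-ordered buffer of (s, p) -> frame translations
# in front of the full page-table walk.

from collections import OrderedDict

class TLB:
    def __init__(self, capacity, full_translation):
        self.capacity = capacity
        self.buf = OrderedDict()          # (s, p) -> frame, LRU first
        self.full_translation = full_translation
        self.misses = 0

    def lookup(self, s, p):
        key = (s, p)
        if key in self.buf:
            self.buf.move_to_end(key)     # hit: mark most recently used
            return self.buf[key]
        self.misses += 1                  # miss: do the complete walk
        f = self.full_translation(s, p)
        if len(self.buf) >= self.capacity:
            self.buf.popitem(last=False)  # evict least recently used
        self.buf[key] = f
        return f

tlb = TLB(2, lambda s, p: 100 * s + p)    # toy "full translation"
tlb.lookup(0, 1); tlb.lookup(0, 2); tlb.lookup(0, 1)   # last one hits
tlb.lookup(0, 3)                          # evicts (0, 2), the LRU entry
print(tlb.misses)                         # 3
```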
Memory Allocation in Paged Systems
Memory Allocation with Paging
Placement policy: where to allocate memory on request?
– Any free frame is OK (no external fragmentation).
– Keeping track of available space is sufficient, for both statically and dynamically allocated memory systems.

Replacement policy: which page(s) should be replaced on a page fault?
– Goal: minimize the number of page faults and/or the total number of pages loaded.
Global/Local Replacement Policies
Global replacement:
– Consider all resident pages (regardless of owner).

Local replacement:
– Consider only pages of the faulting process, i.e., the working set of the process.
Criteria for Performance Comparison

Tool: Reference String (RS)

RS = r0 r1 ... rt ... rT

rt: the page number referenced at time t.

Criteria:
1. The number of page faults
2. The total number of pages loaded:

IN_TOT(RS) = Σ(t = 0 to T) IN_t, where IN_t = 1 if a page fault occurs at time t, and 0 otherwise.

When each page fault loads exactly one page, the two criteria are equivalent; the number of page faults is used in the following discussions.
Global Page Replacement Algorithms
Optimal Replacement Algorithm (MIN)
– Accurate prediction needed
– Unrealizable

Random Replacement Algorithm
– Like playing a roulette wheel

FIFO Replacement Algorithm
– Simple and efficient
– Suffers from Belady’s anomaly

Least Recently Used Algorithm (LRU)
– Doesn’t suffer from Belady’s anomaly, but high overhead

Second-Chance Replacement Algorithm
– Economical version of LRU

Third-Chance Replacement Algorithm
– Economical version of LRU
– Also considers dirty pages
Optimal Replacement Algorithm (MIN)
Replace page that will not be referenced for the longest time in the future.
Problem: Reference String not known in advance.
Example: Optimal Replacement Algorithm (MIN)

RS = c a d b e b a b c d

Replace the page that will not be referenced for the longest time in the future. With four frames initially holding a, b, c, d:

– t = 5: e faults and replaces d (d’s next use is farthest in the future).
– t = 10: d faults and replaces one of the remaining pages (none is referenced again).

Total: 2 page faults.
Problem: the reference string is not known in advance.
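The MIN policy can be simulated in a few lines of Python; replaying the reference string with the four pre-loaded frames reproduces the 2 page faults (function names and the pre-load parameter are illustrative):

```python
# MIN (OPT) sketch: on a fault, evict the resident page whose next use
# lies farthest in the future (or never occurs again).

def min_faults(rs, capacity, resident=None):
    resident = set(resident or [])
    faults = 0
    for t, page in enumerate(rs):
        if page in resident:
            continue
        faults += 1
        if len(resident) >= capacity:
            def next_use(q):
                # distance to the next reference of q (inf if none)
                future = rs[t + 1:]
                return future.index(q) if q in future else float("inf")
            resident.discard(max(resident, key=next_use))
        resident.add(page)
    return faults

print(min_faults(list("cadbebabcd"), 4, resident="abcd"))   # 2
```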
Random Replacement

Program reference strings are never known in advance. Without any prior knowledge, a random replacement strategy can be applied.

Is there any prior knowledge available for common programs?
– Yes: the locality of reference.

Random replacement is simple, but it ignores this property.
The Principle of Locality
Most instructions are sequential
– Except for branch instructions
Most loops are short
– for loops
– while loops
Many data structures are accessed sequentially
– Arrays
– Files of records
FIFO Replacement Algorithm
FIFO: Replace the oldest page
– Assumes that pages residing longest in memory are least likely to be referenced in the future.
Easy, but may exhibit Belady’s anomaly, i.e.,
– Increasing the available memory can result in more page faults.
Example: FIFO Replacement Algorithm

RS = c a d b e b a b c d

Replace the oldest page. With four frames initially holding a, b, c, d (a the oldest):

– t = 5: e in, a out
– t = 7: a in, b out
– t = 8: b in, c out
– t = 9: c in, d out
– t = 10: d in, e out

Total: 5 page faults.
Example: Belady’s Anomaly

RS = d c b a d c e d c b a e c d c b a e

– With 2 frames: 17 page faults.
– With 3 frames: 14 page faults.
– With 4 frames: 15 page faults.

Increasing the memory from 3 to 4 frames increases the number of page faults from 14 to 15: Belady’s anomaly.
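A small FIFO simulator reproduces both the 5-fault example and the anomaly (14 faults with 3 frames versus 15 with 4 on the string above); the function name and pre-load parameter are illustrative:

```python
# FIFO replacement sketch: evict the page that has been resident longest.

from collections import deque

def fifo_faults(rs, capacity, preloaded=()):
    queue = deque(preloaded)        # oldest page at the left
    faults = 0
    for page in rs:
        if page in queue:
            continue
        faults += 1
        if len(queue) >= capacity:
            queue.popleft()         # evict the oldest page
        queue.append(page)
    return faults

rs = list("dcbadcedcbaecdcbae")
print(fifo_faults(rs, 3))   # 14 faults with 3 frames
print(fifo_faults(rs, 4))   # 15 faults with 4 frames: Belady's anomaly
print(fifo_faults(list("cadbebabcd"), 4, preloaded="abcd"))   # 5 faults
```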
Least Recently Used Replacement (LRU)

Replace the least recently used page
– Remove the page which has not been referenced for the longest time.
– Complies with the principle of locality.
Doesn’t suffer from Belady’s anomaly.
Example: Least Recently Used Replacement (LRU)

RS = c a d b e b a b c d

Replace the least recently used page. With four frames initially holding a, b, c, d (a the least recently used):

– t = 5: e in, c out
– t = 9: c in, d out
– t = 10: d in, e out

Total: 3 page faults.
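A short Python simulation of LRU, using an order-preserving dict as the recency queue, reproduces the example's 3 page faults (the software-queue approach shown here is exactly what the next slide calls too expensive for hardware):

```python
# LRU sketch: keep pages ordered by recency of use; evict the one at
# the least-recently-used end on a fault.

from collections import OrderedDict

def lru_faults(rs, capacity, preloaded=()):
    recency = OrderedDict((p, None) for p in preloaded)  # LRU at left
    faults = 0
    for page in rs:
        if page in recency:
            recency.move_to_end(page)       # now most recently used
            continue
        faults += 1
        if len(recency) >= capacity:
            recency.popitem(last=False)     # evict least recently used
        recency[page] = None
    return faults

print(lru_faults(list("cadbebabcd"), 4, preloaded="abcd"))   # 3
```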
LRU Implementation

Software queue: too expensive.
Time stamping:
– Stamp each referenced page with the current time.
– Replace the page with the oldest stamp.
Hardware capacitor with each frame:
– Charge at reference.
– Replace the page with the smallest charge.
n-bit aging register R = R_{n−1} R_{n−2} … R_1 R_0 with each frame:
– Set the left-most bit of the referenced page to 1.
– Shift all registers to the right at every reference, or periodically.
– Replace the page with the smallest value.
Second-Chance Algorithm

Approximates LRU. Implement a use bit u with each frame; set u = 1 when the page is referenced.
To select a page:
– If u == 0, select the page.
– Else, set u = 0 and consider the next frame.
A used page gets a second chance to stay in PM.
Also called the clock algorithm, since the search cycles through the page frames.
Example: Second-Chance Algorithm

RS = c a d b e b a b c d

To select a page: if u == 0, select it; else set u = 0 and consider the next frame, cycling through the page frames. With four frames initially holding a/1, b/1, c/1, d/1:

– t = 5: e in, a out (clearing every use bit on the way around)
– t = 7: a in, c out
– t = 9: c in, d out
– t = 10: d in, e out

Total: 4 page faults.
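The clock scan can be sketched in Python; replaying the example reproduces the 4 page faults (function name and frame representation are illustrative):

```python
# Second-chance (clock) sketch: scan frames circularly; a page with
# use bit 1 has it cleared and is spared once, a page with use bit 0
# is replaced.

def clock_faults(rs, preloaded):
    frames = [[p, 1] for p in preloaded]   # [page, use bit]
    hand = 0
    faults = 0
    for page in rs:
        hit = next((f for f in frames if f[0] == page), None)
        if hit:
            hit[1] = 1                     # referenced again
            continue
        faults += 1
        while frames[hand][1] == 1:        # spare used pages once
            frames[hand][1] = 0
            hand = (hand + 1) % len(frames)
        frames[hand] = [page, 1]           # replace the victim
        hand = (hand + 1) % len(frames)
    return faults

print(clock_faults(list("cadbebabcd"), "abcd"))   # 4
```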
Third-Chance Algorithm

The second-chance algorithm does not distinguish between read and write accesses, yet write accesses are more expensive (a modified page must be written back). Give modified pages a third chance:
– The u-bit is set at every reference (read and write).
– The w-bit is set at a write reference.
– To select a page, cycle through the frames, resetting bits until uw == 00:

    uw → uw
    11 → 01
    10 → 00
    01 → 00*   (* remember modification)
    00 → select

The * can be implemented by an additional bit.
Example: Third-Chance Algorithm

RS = c a^w d b^w e b a^w b c d   (a superscript w marks a write reference)

With four frames initially holding a/10, b/10, c/10, d/10 (bits uw):

– t = 5: e in, c out
– t = 9: c in, d out
– t = 10: d in, b out

Total: 3 page faults. Note that at t = 10 the modified page a is spared thanks to its third chance, while the clean page b is replaced.
Local Page Replacement Algorithms

Measurements indicate that every program needs a “minimum” set of pages.
– If too few, thrashing occurs.
– If too many, page frames are wasted.
The “minimum” varies over time.
How to determine and implement this “minimum”?
– It should depend only on the behavior of the process itself.
Local Page Replacement Algorithms
Optimal Page Replacement Algorithm (VMIN)
The Working Set Model (WS)
Page Fault Frequency Replacement Algorithm (PFF)
Optimal Page Replacement Algorithm (VMIN)

The method:
– Define a sliding window (t, t + τ); the window width τ is a parameter (a system constant).
– At any time t, maintain as resident all pages visible in the window.
Guaranteed to generate the smallest number of page faults for a given window width.
Example: Optimal Page Replacement Algorithm (VMIN)

RS = c c d b c e c e a d, τ = 3

Each page is kept resident only while it lies within the window (t, t + τ) of some reference to it; the example generates 5 page faults.

By increasing τ, the number of page faults can be reduced arbitrarily, of course at the expense of using more page frames.
VMIN is unrealizable, since the reference string is unavailable in advance.
Working Set Model

Use a trailing window (instead of a future window).
The working set W(t, τ) is the set of all pages referenced during the interval (t − τ, t) (instead of (t, t + τ)).
At time t:
– Remove all pages not in W(t, τ).
– A process may run only if its entire W(t, τ) is resident.
Example: Working Set Model

RS = e d a c c d b c e c e a d, τ = 3

The first three references e, d, a warm up the trailing window; over the remaining references c c d b c e c e a d there are 5 page faults.
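Counting working-set faults can be sketched in Python, treating W(t, τ) as the pages seen in the last τ + 1 references (a fault occurs when the referenced page is not in the working set maintained up to t − 1); replaying the example gives the 5 faults:

```python
# Working-set fault counter sketch: the resident set at time t is the
# set of pages referenced in the trailing window of tau + 1 references.

def ws_faults(rs, tau, warmup=()):
    history = list(warmup)               # references before t = 1
    faults = 0
    for page in rs:
        if page not in history[-(tau + 1):]:
            faults += 1                  # page outside W(t-1, tau)
        history.append(page)
    return faults

# warm-up e d a, then the example's string, with tau = 3
print(ws_faults(list("ccdbcecead"), 3, warmup="eda"))   # 5
```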
Approximate Working Set Model

Drawback: the exact working set is costly to implement. Approximations:
1. Associate an aging register with each page frame.
 – Set its left-most bit to 1 whenever the page is referenced.
 – Periodically shift all aging registers to the right.
 – Remove pages whose registers reach zero from the working set.
2. Associate a use bit and a time stamp with each page frame.
 – The use bit is turned on by hardware whenever the page is referenced.
 – Periodically, for each page frame: turn off the use bit if it is on and set the current time as its time stamp; compute the turn-off time toff of the page frame; remove the page from the working set if toff > tmax.
Page Fault Frequency (PFF) Replacement

Main objective: keep the page fault rate low.
Basic principle:
– If the time between the current (t_c) and the previous (t_{c−1}) page faults exceeds a critical value τ, all pages not referenced during that interval are removed from memory.

The PFF algorithm:
– If the time between page faults ≤ τ, grow the resident set by adding the new page.
– If the time between page faults > τ, shrink the resident set by adding the new page and removing all pages not referenced since the last page fault:

    resident(t_c) = resident(t_{c−1}) ∪ RS(t_{c−1}, t_c)   if t_c − t_{c−1} ≤ τ
    resident(t_c) = RS(t_{c−1}, t_c)                       otherwise

where RS(t_{c−1}, t_c) is the set of pages referenced in the interval (t_{c−1}, t_c].
Example: Page Fault Frequency (PFF) Replacement

RS = c c d b c e c e a d, τ = 2, with pages a, d, e initially resident

Faults occur at c (t = 1), b (t = 4), e (t = 6), a (t = 9), and d (t = 10): 5 page faults.
At t = 4 the gap since the previous fault exceeds τ, so the unreferenced pages a and e are removed; at t = 9 the unreferenced pages b and d are removed.
Load Control and Thrashing

Main issues:
– How to choose the degree of multiprogramming? Decrease? Increase?
– When the level is decreased, which process should be deactivated?
– When a process is created, or a suspended one is reactivated, which of its pages should be loaded? One or many?

Load control: the policy that sets the number and type of concurrent processes.
Thrashing: most of the system’s effort goes into moving pages between main and secondary memory, i.e., CPU utilization is low.
Load Control: Choosing the Degree of Multiprogramming

Local replacement:
– Each process has a well-defined resident set, e.g., under the working set model or PFF replacement.
– This automatically imposes a limit, i.e., up to the point where total memory is allocated.

Global replacement:
– No working set concept; use CPU utilization as the criterion.
– With too many processes, thrashing occurs.

[Figure: CPU utilization versus the degree of multiprogramming N; utilization rises to a peak at some N_max, then collapses as thrashing sets in. L = mean time between faults, S = mean page fault service time.]

How to determine the optimum, i.e., N_max?
Load Control: Choosing the Degree of Multiprogramming

How to determine N_max?

L = S criterion:
– The page fault service time S needs to keep up with the mean time between faults L.
50% criterion:
– CPU utilization is highest when the paging disk is 50% busy (found experimentally).
Clock load control:
– Scan the list of page frames to find a page to replace.
– If the pointer advance rate is too low, increase the multiprogramming level.
Load Control: Choosing the Process to Deactivate

– Lowest-priority process: consistent with the scheduling policy.
– Faulting process: eliminates the process that would be blocked anyway.
– Last process activated: the most recently activated process is considered least important.
– Smallest process: least expensive to swap in and out.
– Largest process: frees up the largest number of frames.
Load Control: Prepaging

Which pages should be loaded when a process is activated?
– Prepage the last resident set.
Evaluation of Paging

Advantages of paging:
– Simple placement strategy
– No external fragmentation

Parameters affecting the dynamic behavior of paged systems:
– Page size
– Available memory
[Figure: page faults versus the fraction of the program loaded.] A process requires a certain percentage of its pages within a short time after activation, so prepaging is important.

[Figure: page faults versus page size.] A smaller page size is beneficial; it also reduces memory waste due to internal fragmentation. However, small pages require larger page tables.

[Figure: page faults versus available memory.] W is the minimum amount of memory needed to avoid thrashing, so load control is important.