8/7/2019 Gurpreet Singh RA1805 Roll No 17
DESIGN
PROBLEM-2
SUBJECT CODE: CSE 366
SUBMITTED BY:
Name: Gurpreet Singh
Roll No: 17
Sec No: RA1805
Group No: G2
Reg. No: 10805721
Department of CSE

SUBMITTED TO:
Lec. Mr. Ramandeep Singh
LOVELY PROFESSIONAL UNIVERSITY
CONTENTS

1. CPU SCHEDULING
2. BRIEF INTRODUCTION
3. VARIOUS TYPES OF OPERATING SYSTEM SCHEDULERS
   a) Long Term Scheduler
   b) Mid Term Scheduler
   c) Short Term Scheduler
   Explanation for schedulers
4. DISPATCHER
5. SCHEDULING CRITERIA
   5.1 Scheduling Algorithms
   5.2 Goals for Scheduling
   5.3 Context Switching
   5.4 Context of a Process
6. PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING
   6.1 Types of Preemptive Scheduling
       a) Round Robin
       b) SRT
       c) Priority Based Preemptive
   6.2 Types of Non-Preemptive Scheduling
       a) FIFO
       b) Priority Based Non-Preemptive
       c) SJF (Shortest Job First)
7. MULTILEVEL FEEDBACK QUEUE SCHEDULING
8. PROS AND CONS OF DIFFERENT SCHEDULING ALGORITHMS
   8.1 FCFS
   8.2 SJF
   8.3 Fixed Priority Based Preemptive
   8.4 Round Robin Scheduling
   8.5 Multilevel Feedback Queue Scheduling
9. HOW TO CHOOSE SCHEDULING ALGORITHMS
10. OPERATING SYSTEM SCHEDULER IMPLEMENTATION
    10.1 Windows
    10.2 Mac OS
    10.3 Linux
    10.4 FreeBSD
    10.5 NetBSD
    10.6 Solaris
    Summary
11. COMPARISON BETWEEN OS SCHEDULERS
    11.1 Solaris 2 Scheduling
    11.2 Windows Scheduling
    11.3 Linux Scheduling
    11.4 Symmetric Multiprocessing in XP
    11.5 Comparison
    11.6 Diagrammatic Representation
12. MEMORY MANAGEMENT
    12.1 Introduction
        a) Requirements
        b) Relocation
        c) Protection
        d) Sharing
        e) Logical Organization
        f) Physical Organization
    12.2 DOS Memory Manager
    12.3 Mac Memory Managers
    12.4 Memory Management in Windows
    12.5 Memory Management in Linux
    12.6 Virtual Memory Areas
    12.7 Mac OS Memory Management
    12.8 Fragmentation
    12.9 Switcher
    How Is Virtual Memory Handled in Mac OS X?
13. CODE IN C FOR IMPLEMENTATION OF CPU ALGORITHMS AND MEMORY MANAGEMENT TECHNIQUES
INTRODUCTION
Scheduling is a key concept in computer multitasking, multiprocessing,
and real-time operating system design. Scheduling refers to the way
processes are assigned to run on the available CPUs, since there are
typically many more processes running than there are available CPUs.
This assignment is carried out by software known as the scheduler and
the dispatcher.
The scheduler is concerned mainly with:
CPU utilization - to keep the CPU as busy as possible.
Throughput - number of processes that complete their
execution per time unit.
Turnaround Time - Total time between submission of a
process and its completion.
Waiting time - amount of time a process has been waiting
in the ready queue.
Response time - amount of time it takes from when a
request was submitted until the first response is produced.
Fairness - Equal CPU time to each thread.
TYPES OF OPERATING
SYSTEM SCHEDULERS
Operating systems may feature up to 3 distinct types of
schedulers:
Long-term scheduler .
Mid-term or medium-term scheduler.
Short-term scheduler.
EXPLANATION
1. Long-term scheduler
The long-term, or admission, scheduler decides which jobs or
processes are to be admitted to the ready queue; that is, when
an attempt is made to execute a program, its admission to the set
of currently executing processes is either authorized or delayed by
the long-term scheduler.
2. Mid-term scheduler
The mid-term scheduler temporarily removes processes from
main memory and places them on secondary memory (such
as a disk drive) or vice versa. This is commonly referred to as
"swapping out" or "swapping in" (also incorrectly as
"paging out" or "paging in").
The mid-term scheduler may decide to swap out a process which has
not been active for some time.
SCHEDULING CRITERIA

Many criteria have been suggested for comparing CPU scheduling
algorithms. Which characteristics are used for comparison can make a
substantial difference in which algorithm is judged to be best.
The criteria for the CPU scheduling include the following:
CPU Utilization: We want to keep the CPU as busy as
possible.
Throughput: If the CPU is busy executing processes, then
work is being done. One measure of work is the number of
processes that are completed per time unit, called throughput.
For long processes, this rate may be one process per hour; for
short transactions, it may be 10 processes per second.
Turnaround time: From the point of view of a particular
process, the important criterion is how long it takes to execute
that process. The interval from the time of submission of a
process to the time of completion is the turnaround time.
Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.
Waiting time: The CPU scheduling algorithm does not
affect the amount of time during which a process executes or
does I/O; it affects only the amount of time that a process spends
waiting in the ready queue. Waiting time is the sum of the periods
spent waiting in the ready queue.
Response time: In an interactive system, turnaround
time may not be the best criterion. Often, a process can
produce some output fairly early and can continue
computing new results while previous results are being
output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This
measure, called response time, is the time it takes to start
responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output
device.
SCHEDULING ALGORITHMS

A multiprogramming operating system allows more than one
process to be loaded into the executable memory at a time
and for the loaded process to share the CPU using time-
multiplexing. Part of the reason for using multiprogramming is that
the operating system itself is implemented as one or more
processes, so there must be a way for the operating system and
application processes to share the CPU. Another main reason is
the need for processes to perform I/O operations in the
normal course of computation. Since I/O operations
ordinarily require orders of magnitude more time to
complete than do CPU instructions, multiprogramming
systems allocate the CPU to another process whenever a
process invokes an I/O operation.
GOALS FOR SCHEDULING
Make sure your scheduling strategy performs well on the following
criteria:
Utilization/Efficiency: keep the CPU busy 100% of the
time with useful work.
Throughput: maximize the number of jobs processed per
hour.
Turnaround time: from the time of submission to the time
of completion, minimize the time batch users must wait for
output.
Waiting time: Sum of times spent in ready queue -
Minimize this.
Response Time: time from submission till the first
response is produced, minimize response time for interactive
users.
Fairness: make sure each process gets a fair share of the
CPU.
Context Switching
Typically there are several tasks to perform in a computer
system.
So if one task requires some I/O operation, you want to initiate the
I/O operation and go on to the next task. You will come back to it
later.
This act of switching from one process to another is called a
"Context Switch"
When you return back to a process, you should resume where you
left off. For all practical purposes, this process should never know
there was a switch, and it should look like this was the only process
in the system.
To implement this, a context switch must do the following:

Save the context of the current process.
Select the next process to run.
Restore the context of the new process.
Context of a process
Program Counter.
Stack Pointer.
Registers.
Code + Data + Stack (also called Address Space).
Other state information maintained by the OS for the process
(open files, scheduling info, I/O devices being used etc.).
All this information is usually stored in a structure called Process Control Block
(PCB).
All the above has to be saved and restored.
Non-Preemptive vs Preemptive Scheduling

Non-Preemptive: Non-preemptive algorithms are designed
so that once a process enters the running state (is allocated the
processor), it is not removed from the processor until it has
completed its service time (or it explicitly yields the processor).
context_switch() is called only when the process terminates or
blocks.
Preemptive: Preemptive algorithms are driven by the
notion of prioritized computation. The process with the highest
priority should always be the one currently using the processor. If
a process is currently using the processor and a new process with
a higher priority enters the ready list, the process on the
processor should be removed and returned to the ready list until
it is once again the highest-priority process in the system.
1) Round Robin
Suppose we use 1 ms time slice: then compute-bound process gets
interrupted 9 times unnecessarily before I/O-bound process is runnable
LIMITATION: Round robin assumes that all processes are
equally important; each receives an equal portion of the CPU. This
sometimes produces bad results. Consider three processes that
start at the same time and each requires three time slices to finish.
Using FIFO how long does it take the average job to complete (what
is the average response time)? How about using round robin?
* Using FIFO, process A finishes after 3 slices, B 6, and C 9. The
average is (3+6+9)/3 = 6 slices.
* Using round robin, A finishes after 7 slices, B 8, and C 9. The
average is (7+8+9)/3 = 8 slices.
Processes with large CPU times make progress only with difficulty,
so this method is not used frequently.
Hence Round Robin is fair, but uniformly inefficient.
Solution: Introduce priority-based scheduling.

Round Robin is illustrated by the following example:
Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       1             4         1
P3       1             3         3
P4       3             2         2
P5       3             8         3
P6       4             6         1
P7       5             7         4
P8       5             6         3

Round Robin with time slice = 3
Solution: The Gantt chart is drawn as:
Waiting time for P1=0+6=6 ms
Waiting time for P2=(3+20)-1=22 ms
Waiting time for P3= 14-1 =13 ms
Waiting time for P4= 12-3=9 ms
Waiting time for P5= (17+10+6)-3 =30 ms
Waiting time for P6=(6+18)-4=20 ms
Waiting time for P7= (23+10+2)-5=30 ms
Waiting time for P8= (20+10)-5 =25 ms
Average Waiting Time = (6+22+13+9+30+20+30+25)/8
= 155/8 = 19.375 ms

Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 19.375 + 42/8 = 19.375 + 5.25 = 24.625 ms

OR (method 2)

Average Turnaround Time = (12 + (27-1) + (17-1) + (14-3) + (41-3) + (30-4) + (42-5) + (36-5))/8
= 197/8 = 24.625 ms

Average Response Time = 15.3 ms
2) Shortest Remaining Time (SRT): It is explained by the example
given below:
Process  Arrival Time  CPU Time
X        1             4
Y        0             6
Z        1             3

Solution: The Gantt chart for SRT is:

| Y | Z | X | Y  |
0   1   4   8   13

Average Waiting Time = (7+0+3)/3 = 3.33 ms
Average Turnaround Time = (13+3+7)/3 = 7.67 ms
Average Response Time = (0+0+3)/3 = 1 ms
3) Priority Based preemptive
The SJF algorithm is a special case of the general priority
scheduling algorithm. A priority is associated with each process and
the CPU is allocated to the process with the highest priority.
Equal-priority processes are scheduled in FCFS order. SJF is simply
a priority algorithm where the priority p is the inverse of the
predicted next CPU burst.
Run highest-priority processes first, use round-robin among
processes of equal priority. Re-insert process in run queue
behind all processes of greater or equal priority.
It allows the CPU to be given preferentially to important
processes.
The Scheduler adjusts dispatcher priorities to achieve the
desired overall priorities for the processes, e.g. one process
gets 90% of the CPU.
It is explained by the following example:
Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       7             10        1
P3       4             4         3
P4       1             10        2
P5       2             12        0

Solution:
The Gantt chart is drawn as below:
Waiting time for P1 = 0 + (24-2) = 22 ms
Waiting time for P2 = 14-7 = 7 ms
Waiting time for P3 = 38-4 = 34 ms
Waiting time for P4 = 28-1 = 27 ms
Waiting time for P5 = 2-2 = 0 ms

Average Waiting Time = (22+7+34+27+0)/5 = 90/5 = 18 ms
Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 18 + 42/5 = 18 + 8.4 = 26.4 ms
Comments: In priority scheduling, processes are allocated to the
CPU on the basis of an externally assigned priority. The key to the
performance of priority scheduling is in choosing priorities for the
processes.
Problem: Priority scheduling may cause low-priority processes to
starve
Solution (AGING): This starvation can be compensated for if the
priorities are internally computed. Suppose one parameter in the
priority assignment function is the amount of time the process has
been waiting. The longer a process waits, the higher its priority
becomes. This strategy tends to eliminate the starvation problem.
Non-preemptive scheduling:

In non-preemptive scheduling, once a process starts executing on the
processor, the processor cannot be taken back from it until the
process completes. That is, a process running on any processor is
not removed until it is finished.
Types of non preemptive scheduling
algorithms:
First in First Out (FIFO)
This is a Non-Preemptive scheduling algorithm. FIFO strategy
assigns priority to processes in the order in which they
request the processor. The process that requests the CPU first is
allocated the CPU first. When a process comes in, add its PCB to the
tail of ready queue. When running process terminates, dequeue the
process (PCB) at head of ready queue and run it.
In the FCFS algorithm, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is
easily managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is
free, it is allocated to the process at the head of the queue, and
the running process is then removed from the queue. The code for
FCFS scheduling is easy to write and understand.

Suppose we have the processes given in the example below:
Process  Arrival Time  CPU Time
P1       0             6
P2       0             10
P3       1             4
P4       4             10
P5       2             12

The Gantt chart is drawn as below:

Waiting time for P1 = 0
Waiting time for P2 = 6
Waiting time for P3 = (16-1) = 15
Waiting time for P4 = (20-4) = 16
Waiting time for P5 = (30-2) = 28

Average Waiting Time = (0+6+15+16+28)/5 = 65/5 = 13 ms
Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 13 + 42/5 = 13 + 8.4 = 21.4 ms

Comments: While the FIFO algorithm is easy to implement, it ignores
the service time request and all other criteria that may influence
the performance with respect to turnaround or waiting time.

Problem: One process can monopolize the CPU.
Solution: Limit the amount of time a process can run without a
context switch. This time is called a time slice.
2) Priority Based Non-Preemptive Algorithm

Process  Arrival Time  CPU Time  Priority
P1       0             6         2
P2       7             10        1
P3       4             4         3
P4       1             10        2
P5       2             12        0

Solution:
The Gantt chart is drawn as below:
Waiting time for P1 = 0 ms
Waiting time for P2=18-7=11 ms
Waiting time for P3= 38-4 =34 ms
Waiting time for P4= 28-1=27 ms
Waiting time for P5= 6-2 =4 ms
Average Waiting Time = (0+11+34+27+4)/5 = 76/5 = 15.2 ms
Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 15.2 + 42/5 = 15.2 + 8.4 = 23.6 ms
3) Shortest Job First (SJF):
In the shortest job first algorithm, associate with each process the
length of its next CPU burst, and use these lengths to schedule the
process with the shortest time.
SJF is optimal because it gives minimum average waiting time
for a given set of processes.
Maintain the Ready queue in order of increasing job lengths.
When a job comes in, insert it in the ready queue based on its
length. When current process is done, pick the one at the head of
the queue and run it.
This is provably the most optimal in terms of turnaround/response
time. But how do we find the length of a job? Make an estimate based
on past behavior. Say the estimated time (burst) for a process is
E0, and suppose the actual time is measured to be T0. Update the
estimate by taking a weighted sum of these two:

E1 = a*T0 + (1-a)*E0

and in general,

E(n+1) = a*Tn + (1-a)*En   (exponential average)

If a = 0, recent history gets no weight; if a = 1, past history gets
no weight. Typically a = 1/2. Expanding the recurrence,

E(n+1) = a*Tn + (1-a)*a*T(n-1) + ... + (1-a)^j * a * T(n-j) + ...

so older information has less weight.
Limitation of SJF:
The difficulty is knowing the length of the next CPU request
1) SJF Non-preemptive: It is explained by the example given below
(using the same five processes as in the FCFS example):
The Gantt chart is drawn as below:
Waiting time for P1=0 ms
Waiting time for P2=10 ms
Waiting time for P3= (6-1) =5 ms
Waiting time for P4= (20-4) =16 ms
Waiting time for P5= (30-2) =28 ms
Average Waiting Time = (0+10+5+16+28)/5 =59/5 =11.8 ms
Average Turnaround Time = Av. Waiting Time + Av. Execution Time
= 11.8 + 42/5 = 11.8 + 8.4 = 20.2 ms
Comments: SJF is proven optimal only when all jobs are available
simultaneously.
Problem: SJF minimizes the average wait time because it services
small processes before it services large ones. While it minimizes
average wait time, it may penalize processes with high service time
requests. If the ready list is saturated, then processes with large
service times tend to be left in the ready list while small processes
receive service. In extreme case, where the system has little idle
time, processes with large service times will never be served. This
total starvation of large processes may be a serious liability of this
algorithm.
Solution: Multi-Level Feedback Queues

Multi-Level Feedback Queue

Several queues arranged in some priority order.
Each queue could have a different scheduling discipline/time
quantum.
Lower quanta for higher priorities, generally.
It is basically defined by:
# of queues.
Scheduling algo for each queue.
When to upgrade a priority.
When to demote.
Attacks both efficiency and response time problems.
It gives a newly runnable process a high priority and a
very short time slice. If the process uses up the time slice without
blocking, then decrease its priority by 1 and double its next time
slice.

Often implemented by having a separate queue for each
priority.
How are priorities raised? By 1 if it doesn't use time
slice? What happens to a process that does a lot of
computation when it starts then waits for user input?
Need to boost priority a lot, quickly.
PROS AND CONS OF DIFFERENT SCHEDULING ALGORITHMS

Scheduling disciplines are algorithms used for distributing
resources among parties which simultaneously and asynchronously
request them. Scheduling disciplines are used in routers (to handle
packet traffic) as well as in operating systems (to share CPU time
among both threads and processes), disk drives (I/O scheduling),
printers (print spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to
minimize resource starvation and to ensure fairness amongst
the parties utilizing the resources. Scheduling deals with the
problem of deciding which of the outstanding requests is to be
allocated resources. There are many different scheduling
algorithms. In this section, we introduce several of them.
First In First Out

Also known as First Come, First Served (FCFS), this is the simplest
scheduling algorithm. FIFO simply queues processes in the order that
they arrive in the ready queue.

Since context switches only occur upon process termination, and no
reorganization of the process queue is required, scheduling overhead
is minimal.
Throughput can be low, since long processes can hog the
CPU
Turnaround time, waiting time and response time can be
high for the same reasons above.
No prioritization occurs, thus this system has trouble
meeting process deadlines.
The lack of prioritization does permit every process to
eventually complete, hence no starvation.
Shortest Remaining Time

Also known as Shortest Job First (SJF). With this strategy the
scheduler arranges processes with the least estimated processing
time remaining to be next in the queue. This requires advance
knowledge or estimations about the time required for a process to
complete.
If a shorter process arrives during another process'
execution, the currently running process may be
interrupted, dividing that process into two separate
computing blocks. This creates excess overhead through
additional context switching. The scheduler must also place
each incoming process into a specific place in the queue,
creating additional overhead.
This algorithm is designed for maximum throughput
in most scenarios.
Waiting time and response time increase as the
process' computational requirements increase. Since
turnaround time is based on waiting time plus processing
time, longer processes are significantly affected by this.
Overall waiting time is smaller than in FIFO, however, since no
process has to wait for the termination of the longest process.
No particular attention is given to deadlines; the
programmer can only attempt to make processes with deadlines
as short as possible.
Starvation is possible, especially in a busy system with
many small processes being run.
Fixed Priority Pre-emptive Scheduling
The O/S assigns a fixed priority rank to every process, and the
scheduler arranges the processes in the ready queue in order of
their priority. Lower priority processes get interrupted by incoming
higher priority processes.
Overhead is neither minimal, nor is it significant.
FPPS has no particular advantage in terms of throughput
over FIFO scheduling.
Waiting time and response time depend on the priority of the
process. Higher priority processes have smaller waiting and
response times.
Deadlines can be met by giving processes with deadlines a
higher priority.
Starvation of lower priority processes is possible with large
amounts of high priority processes queuing for CPU time.
Round-robin scheduling
The scheduler assigns a fixed time unit per process, and cycles
through them.
Round Robin scheduling involves extensive overhead,
especially with a small time unit.
Balanced throughput between FCFS and SJF, shorter
jobs are completed faster than in FCFS and longer
processes are completed faster than in SJF.
Fastest average response time, waiting time is
dependent on number of processes, and not average process
length.
Because of high waiting times, deadlines are rarely met
in a pure Round Robin system.
Starvation can never occur, since no priority is given. Order
of time unit allocation is based upon process arrival time, similar
to FCFS.
Multilevel Queue Scheduling

This is used for situations in which processes are easily divided into
different groups. For example, a common division is made between
foreground (interactive) processes and background (batch)
processes. These two types of processes have different response-
time requirements and so may have different scheduling needs.
Overview

Scheduling algorithm   CPU utilization  Throughput  Turnaround time  Response time  Deadline handling  Starvation free
First In First Out     Low              Low         High             Low            No                 Yes
Shortest Job First     Medium           High        Medium           Medium         No                 No
Priority based         Medium           Low         High             High           Yes                No
Round-robin            High             Medium      Medium           High           No                 Yes
Multilevel queue       High             High        Medium           Medium         Low                Yes
HOW TO CHOOSE SCHEDULING
ALGORITHM
When designing an operating system, a programmer must consider
which scheduling algorithm will perform best for the use the system
is going to see. There is no universal best scheduling
algorithm, and many operating systems use extended versions or
combinations of the scheduling algorithms above. For example,
Windows NT/XP/Vista uses a multilevel feedback queue: a combination
of fixed-priority preemptive scheduling, round-robin, and first in
first out. In this system, processes can dynamically increase or
decrease in priority depending on whether they have been serviced
already, or have been waiting extensively. Every
priority level is represented by its own queue, with round-robin
scheduling amongst the high priority processes and FIFO among the
lower ones. In this sense, response time is short for most processes,
and short but critical system processes get completed very quickly.
Since processes can only use one time unit of the round robin in the
highest priority queue, starvation can be a problem for longer high
priority processes.
OPERATING SYSTEM SCHEDULER IMPLEMENTATION
Windows
Very early MS-DOS and Microsoft Windows systems were non-
multitasking, and as such did not feature a scheduler. Windows
3.1x used a non-preemptive scheduler, meaning that it did not
interrupt programs. It relied on the program to end or tell the OS
that it didn't need the processor so that it could move on to another
process. This is usually called cooperative multitasking. Windows 95
introduced a rudimentary preemptive scheduler; however, for
legacy support it opted to let 16-bit applications run without
preemption.
Mac OS

Mac OS 9 uses cooperative scheduling for threads, where one
process controls multiple cooperative threads, and also provides
preemptive scheduling for MP tasks. The kernel schedules MP
tasks using a preemptive scheduling algorithm. All Process
Manager Processes run within a special MP task, called the
"blue task". Those processes are scheduled cooperatively, using
a round-robin scheduling algorithm; a process yields control of the
processor to another process by explicitly calling a blocking function
such as WaitNextEvent. Each process has its own copy of the Thread
Manager that schedules that process's threads cooperatively; a
thread yields control of the processor to another thread by
calling YieldToAnyThread or YieldToThread.
Mac OS X uses a multilevel feedback queue, with four
priority bands for threads - normal, system high priority,
kernel mode only, and real-time. Threads are scheduled
preemptively; Mac OS X also supports cooperatively scheduled
threads in its implementation of the Thread Manager in Carbon.
Linux

From version 2.5 of the kernel to version 2.6, Linux used a multilevel
feedback queue with priority levels ranging from 0 to 140. 0-99 are
reserved for real-time tasks and 100-140 are considered nice task
levels. For real-time tasks, the time quantum for switching
processes is approximately 200 ms, and for nice tasks
approximately 10 ms. The scheduler will run through the queue of
all ready processes, letting the highest priority processes go first
and run through their time slices, after which they will be placed in
an expired queue. When the active queue is empty the expired
queue will become the active queue and vice versa. From versions
2.6 to 2.6.23, the kernel used an O(1) scheduler. In version 2.6.23,
they replaced this method with the Completely Fair Scheduler that
uses red-black trees instead of queues.
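The active/expired queue arrangement described above can be modelled in a few lines. This is an illustrative sketch of the idea, not the kernel's code; the task names are invented.

```python
from collections import deque

def run_epoch(active):
    """One pass of the active/expired queue scheme used by the
    2.6 O(1) scheduler: each ready task runs its time slice and
    moves to the expired queue; when the active queue drains,
    the two queues swap roles for the next epoch."""
    expired = deque()
    order = []
    while active:
        task = active.popleft()
        order.append(task)       # "run" the task's time slice
        expired.append(task)     # slice used up -> expired queue
    # Active queue is empty: swap roles, O(1) regardless of task count.
    active, expired = expired, active
    return order, active

order, next_active = run_epoch(deque(["init", "editor", "daemon"]))
print(order)              # every task ran exactly once this epoch
print(list(next_active))  # all of them are ready for the next epoch
```

Because the swap is just an exchange of two queue pointers, picking the next epoch's runnable set costs constant time, which is where the "O(1)" name comes from.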
FreeBSD

FreeBSD uses a multilevel feedback queue with priorities ranging
from 0-255. 0-63 are reserved for interrupts, 64-127 for the top half
of the kernel, 128-159 for real-time user threads, 160-223 for time-
shared user threads, and 224-255 for idle user threads. Also, like
Linux, it uses the active queue setup, but it also has an idle queue.
NetBSD
NetBSD uses a multilevel feedback queue with priorities ranging
from 0-223. 0-63 are reserved for time-shared threads (default,
SCHED_OTHER policy), 64-95 for user threads which entered kernel
space, 96-128 for kernel threads, 128-191 for user real-time threads
(SCHED_FIFO and SCHED_RR policies), and 192-223 for software
interrupts.
Solaris
Solaris uses a multilevel feedback queue with priorities ranging from
0-169. 0-59 are reserved for time-shared threads, 60-99 for system
threads, 100-159 for real-time threads, and 160-169 for low priority
interrupts. Unlike Linux, when a process is done using its time
quantum, it's given a new priority and put back in the queue.
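The requeue-with-a-new-priority behaviour can be sketched as a tiny multilevel feedback queue. This is a deliberately simplified illustration: the three levels and the demote-one-level rule are assumptions for the example, not Solaris's actual dispatch tables.

```python
def mlfq_run(jobs, levels=3):
    """Tiny multilevel feedback queue: a job that uses up its
    quantum is demoted one level and requeued; higher levels
    always run first. jobs maps name -> number of quanta needed."""
    queues = [[] for _ in range(levels)]
    queues[0] = list(jobs)           # everyone starts at top priority
    remaining = dict(jobs)
    schedule = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        job = queues[level].pop(0)
        schedule.append((job, level))
        remaining[job] -= 1          # job consumed one quantum
        if remaining[job] > 0:       # not done: demote and requeue
            queues[min(level + 1, levels - 1)].append(job)
    return schedule

# A short job finishes at top priority; a CPU-bound job sinks.
print(mlfq_run({"short": 1, "long": 3}))
# -> [('short', 0), ('long', 0), ('long', 1), ('long', 2)]
```

The effect is the one the feedback queue is designed for: short, interactive work stays at high priority, while CPU-bound work drifts downward.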
SUMMARY:

Operating System | Preemption | Algorithm
Windows 3.1x | None | Cooperative scheduler
Windows 95, 98, Me | Half | Preemptive for 32-bit processes, cooperative scheduler for 16-bit processes
Windows NT (including 2000, XP, Vista, 7, and Server) | Yes | Multilevel feedback queue
Mac OS pre-9 | None | Cooperative scheduler
Mac OS 9 | Some | Preemptive for MP tasks, cooperative scheduler for processes and threads
Mac OS X | Yes | Multilevel feedback queue
Linux pre-2.6 | Yes | Multilevel feedback queue
Linux 2.6-2.6.23 | Yes | O(1) scheduler
Linux post-2.6.23 | Yes | Completely Fair Scheduler
Solaris | Yes | Multilevel feedback queue
NetBSD | Yes | Multilevel feedback queue
FreeBSD | Yes | Multilevel feedback queue
COMPARISON OF OPERATING
SYSTEM SCHEDULERS
1. Solaris 2 Scheduling
Priority-based process scheduling
Classes: real time, system, time sharing, interactive
Each class has different priority and scheduling algorithm
Each LWP assigns a scheduling class and priority
Time-sharing/interactive: multilevel feedback queue
Real-time processes run before a process in any other class
System class is reserved for kernel use (paging, scheduler)
The scheduling policy for the system class does not time-slice.
The selected thread runs on the CPU until it blocks, uses up its
time slice, or is preempted by a higher-priority thread.
When multiple threads have the same priority, they are scheduled
round-robin (RR).
Each class includes a set of priorities, but the scheduler converts the
class-specific priorities into global priorities.
2. Windows Scheduling

Overview: Displays all context switches by CPU, as shown in the
following screen shot.
Screen shot of a graph showing CPU scheduling zoomed to 500
microseconds
Graph Type: Event graph
Y-axis Units: CPUs
Required Flags: CSWITCH+DISPATCHER
Events Captured: Context switch events
Legend Description: Shows CPUs on the system.
Graph Description:
Shows all context switches for a time interval aggregated
by CPU. A tooltip displays detailed information on the
context switch including the call stack for the new thread.
Further information on the call stacks is available through
the summary tables. Summary tables are accessed by right-clicking
on the graph and choosing Summary Table.
Note Context switches can occur millions of times per
second. In order to display the discrete structure of the
context switch streams it is necessary to zoom to a short
time interval.
Interrupt CPU Usage
Overview: Displays CPU resources consumed by servicing
interrupts, as shown in the following screen shot.
Screen shot of a graph showing time as a percentage spent
servicing interrupts
Graph Type: Usage graph
Y-axis Units: Percentage of CPU usage
Required Flags: INTERRUPT
Events Captured: Service interrupt events
Legend Description: Active CPUs on the system
Graph Description:
This graph displays the percentage of the total CPU
resource each processor spends servicing device interrupts.
Notable points:
Priority-based preemptive scheduling
A running thread runs until it is preempted by a higher-
priority thread, terminates, exhausts its time quantum, or calls a
blocking system call.
32-level priority scheme:
Variable (1-15) and real-time (16-31) classes; priority 0 is
reserved for memory management.
A queue for each priority. The scheduler traverses the set of
queues from highest to lowest until it finds a thread that is ready
to run, and runs the idle thread when no thread is ready.
Base priority of each priority class:
The initial priority for a thread belonging to that class.
The priority of a variable-priority process adjusts over time:
Lowered (but not below the base priority) when its time quantum
runs out.
Boosted when it is released from a wait operation; the boost
level depends on the reason for the wait:
Waiting for keyboard I/O gets a large priority increase.
Waiting for disk I/O gets a moderate priority increase.
Processes in the foreground window get a higher priority.
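The decay-and-boost behaviour above can be sketched in code. The shape (decay one level per expired quantum, boost on wakeup, never below base, capped at the top of the variable range) follows the description in the text, but the specific boost values below are invented for illustration; the real values are Windows internals.

```python
def on_quantum_expired(priority, base_priority):
    """Variable-priority decay: drop one level, never below base."""
    return max(priority - 1, base_priority)

def on_wait_completed(priority, reason):
    """Priority boost on wakeup; the size depends on what the
    thread was waiting for (boost values here are invented)."""
    boosts = {"keyboard": 6, "disk": 1}      # large vs. moderate boost
    return min(priority + boosts.get(reason, 0), 15)  # cap at top of variable class

p = 8                                         # a thread in the variable class
p = on_quantum_expired(p, base_priority=8)    # already at its base: stays 8
p = on_wait_completed(p, "keyboard")          # big interactive boost
print(p)                                      # -> 14
```

The net effect is the interactivity heuristic the text describes: CPU-bound threads decay toward their base priority, while threads that block on user input keep getting boosted back up.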
Linux Scheduling
Separate Time-sharing and real-time scheduling
algorithms.
Allow only processes in user mode to be preempted.
A process may not be preempted while it is running in
kernel mode, even if a real-time process with a higher priority is
available to run.
Soft real-time system.
Time-sharing: Prioritized, credit-based scheduling.
The process with the most credits is selected.
When a timer interrupt occurs, the running process loses one
credit.
At zero credits, another process is selected.
When no runnable process has credits, ALL processes are
re-credited:
CREDITS = CREDITS * 0.5 + PRIORITY.
Priority: real-time > interactive > background.
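The re-crediting rule (CREDITS = CREDITS * 0.5 + PRIORITY) can be checked with a short worked example. Integer halving and the priority value 20 are assumptions for the illustration.

```python
def recredit(credits, priority):
    # CREDITS = CREDITS * 0.5 + PRIORITY (integer halving assumed)
    return credits // 2 + priority

# A process that keeps blocking (never spends its credits) converges
# toward 2 * PRIORITY: interactive processes gain an edge, but
# credits cannot grow without bound.
c = 0
history = []
for _ in range(6):
    c = recredit(c, priority=20)
    history.append(c)
print(history)   # -> [20, 30, 35, 37, 38, 39], bounded below 2 * 20
```

This shows why the rule favours interactive work: a process that has been waiting accumulates credits (up to a bound), so it wins the next selection over a CPU-bound process that just spent its own.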
Real-time scheduling.
Two real-time scheduling classes: FCFS (non-preemptive)
and RR (preemptive).
Plus a priority for each process.
Always runs the process with the highest priority.
On equal priority, runs the process that has been waiting
longest.
Symmetric multiprocessing in XP
Symmetric multiprocessing (SMP) is a technology that
allows a computer to use more than one processor. The most
common configuration of an SMP computer is one that uses
two processors. The two processors are used to complete
your computing tasks faster than a single processor. (Two
processors aren't necessarily twice as fast as a single
processor, though.)
In order for a computer to take advantage of a multiprocessor setup,
the software must be written for use with an SMP system. If a
program isn't written for SMP, it won't take advantage of SMP. Not
every program is written for SMP; the applications that are, such as
image-editing programs, video-editing suites, and databases, tend
to be processor intensive.
Operating systems also need to be written for SMP in order to use
multiple processors. In the Windows XP family, only XP Professional
supports SMP; XP Home does not. If you're a consumer with a dual-
processor PC at home, you have to buy XP Professional. Windows XP
Advanced Server also supports SMP.
In Microsoft's grand scheme, XP Professional is meant to replace
Windows 2000, which supports SMP. In fact, XP Professional uses the
same kernel as Windows 2000. XP Home is designed to replace
Windows Me as the consumer OS, and Windows Me does not support
SMP.
The difference between XP Professional and XP Home is more than
just $100 and SMP support. XP Professional has plenty of other
features not found in XP Home; some you'll use, others you won't
care about.
COMPARISON
1) Solaris 2 uses priority-based process scheduling.
2) Windows 2000 uses a priority-based preemptive scheduling
algorithm.
3) Linux provides two separate process-scheduling algorithms: one is
designed for time-sharing processes for fair preemptive scheduling
among multiple processes; the other is designed for real-time tasks.
a) For processes in the time-sharing class Linux uses a prioritized
credit-based algorithm.
b) Real-time scheduling: Linux implements two real-time scheduling
classes namely FCFS (First come first serve) and RR (Round Robin).
DIAGRAMMATIC REPRESENTATION
Solaris Scheduling
Windows XP Scheduling
Linux Scheduling
Constant order O(1) scheduling time
Two priority ranges: time-sharing and real-time
Real-time range from 0 to 99 and nice value from 100 to
140
Priorities and Time-slice length
List of Tasks Indexed According to Priorities
MEMORY MANAGEMENT
INTRODUCTION

Memory management is the act of managing computer
memory. In its simpler forms, this involves providing ways to
allocate portions of memory to programs at their request,
and freeing it for reuse when no longer needed. The
management of main memory is critical to the computer
system.
Virtual memory systems separate the memory addresses used by a
process from actual physical addresses, allowing separation of
processes and increasing the effectively available amount of
RAM using disk swapping. The quality of the virtual memory
manager can have a big impact on overall system performance.
Garbage collection is the automated allocation and
deallocation of computer memory resources for a program.
This is generally implemented at the programming language level
and is in opposition to manual memory management, the explicit
allocation and deallocation of computer memory resources.
Region-based memory management is an efficient variant of explicit
memory management that can deallocate large groups of objects
simultaneously.
Requirements
Memory management systems on multi-tasking operating
systems usually deal with the following issues.
Relocation
In systems with virtual memory, programs in memory must be able
to reside in different parts of the memory at different times. This is
because when the program is swapped back into memory after
being swapped out for a while, it cannot always be placed in the
same location. The virtual memory management unit must also deal
with concurrency. Memory management in the operating system
should therefore be able to relocate programs in memory and
handle memory references and addresses in the code of the
program so that they always point to the right location in memory.
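Relocation can be illustrated with the classic base/limit-register scheme: the program uses logical addresses starting at 0, and the hardware adds the current load address at every reference. This is a simplified sketch of the general idea, not any specific OS's mechanism; the addresses are invented.

```python
def translate(logical_addr, base, limit):
    """Base/limit relocation: every logical address is offset by
    the base register; the limit register provides protection."""
    if not 0 <= logical_addr < limit:
        raise ValueError("protection fault: address outside the process's region")
    return base + logical_addr

# The same program, swapped back in at a different location,
# still works: only the base register changes.
print(hex(translate(0x10, base=0x8000, limit=0x1000)))  # first placement
print(hex(translate(0x10, base=0x2000, limit=0x1000)))  # after swap-in elsewhere
```

Because the code never sees a physical address, the OS is free to place (and re-place) the process anywhere in memory.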
Protection
Processes should not be able to reference the memory for another
process without permission. This is called memory protection, and
prevents malicious or malfunctioning code in one program from
interfering with the operation of other running programs.
Sharing
Even though the memory for different processes is normally
protected from each other, different processes sometimes need to
be able to share information and therefore access the same part of
memory. Shared memory is one of the fastest techniques for inter-
process communication.
Logical organization
Programs are often organized in modules. Some of these modules
could be shared between different programs, some are read only
and some contain data that can be modified. The memory
management is responsible for handling this logical organization,
which is different from the physical linear address space. One way to
arrange this organization is segmentation.
Physical Organization
Memory is usually divided into fast primary storage and
slow secondary storage. Memory management in the operating
system handles moving information between these two levels of
memory.
DOS memory managers
In addition to standard memory management, the 640 KB barrier
of MS-DOS and compatible systems led to the development of
programs known as memory managers when PC main memories
started to be routinely larger than 640 KB in the late 1980s
(see conventional memory). These move portions of the operating
system outside their normal locations in order to increase the
amount of conventional or quasi-conventional memory available to
other applications. Examples are EMM386, which was part of the
standard installation in DOS's later versions, and QEMM. These
allowed use of memory above the 640 KB barrier, where memory
was normally reserved for ROMs, as well as upper and high memory.
Mac Memory Managers
In any program you write, you must ensure that you manage
resources effectively and efficiently. One such resource is your
program's memory. In an Objective-C program, you must make sure
that objects you create are disposed of when you no longer need
them.
In a complex system, it could be difficult to determine exactly
when you no longer need an object. Cocoa defines some rules
and principles that help make that determination easier.
Important: In Mac OS X v10.5 and later, you can use automatic
memory management by adopting garbage collection. This is
described in Garbage Collection Programming Guide. Garbage
collection is not available on iOS.
Memory Management Rules summarizes the rules for
object ownership and disposal.
Object Ownership and Disposal describes the primary
object-ownership policy.
Practical Memory Management gives a practical
perspective on memory management.
Autorelease Pools describes the use of autorelease
pools, a mechanism for deferred deallocation, in Cocoa
programs.
Accessor Methods describes how to implement
accessor methods.
Implementing Object Copy discusses issues related to
object copying, such as deciding whether to implement a deep or
shallow copy and approaches for implementing object copy in
your subclasses.
Memory Management of Core Foundation Objects in
Cocoa gives guidelines and techniques for memory
management of Core Foundation objects in Cocoa code.
Memory Management of Nib Objects discusses
memory management issues related to nib files.
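Cocoa's ownership rules rest on reference counting (retain/release): an object is kept alive as long as at least one owner has claimed it. The sketch below is a language-neutral model of that idea, not Objective-C or Cocoa code; the method names mirror Cocoa's only for illustration.

```python
class RefCounted:
    """Minimal retain/release model: an object is deallocated
    when its last owner relinquishes it."""
    def __init__(self):
        self.refcount = 1            # the creator owns the object
        self.deallocated = False

    def retain(self):
        self.refcount += 1           # claim ownership

    def release(self):
        self.refcount -= 1           # relinquish ownership
        if self.refcount == 0:
            self.deallocated = True  # no owners left: dispose

obj = RefCounted()
obj.retain()             # a second owner appears
obj.release()            # first owner done; still one owner left
obj.release()            # last owner done -> deallocated
print(obj.deallocated)   # -> True
```

The ownership rules in the documents listed above exist precisely to keep these retain/release calls balanced, so the count reaches zero exactly when the object is truly unneeded.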
Memory Management in Windows
This is one of three related technical articles ("Managing
Virtual Memory," "Managing Memory-Mapped Files," and
"Managing Heap Memory") that explain how to manage memory
in applications for Windows.
In each article, this introduction identifies the basic memory
components in the Windows programming model and
indicates which article to reference for specific areas of
interest.
The first version of the Microsoft Windows operating system
introduced a method of managing dynamic memory based
on a single global heap, which all applications and the system
share, and multiple, private local heaps, one for each application.
Local and global memory management functions were also
provided, offering extended features for this new memory
management system.
More recently, the Microsoft C run-time (CRT) libraries were
modified to include capabilities for managing these heaps in
Windows using native CRT functions such as malloc and free.
Consequently, developers are now left with a choice: learn the new
application programming interface (API) provided as part of
Windows, or stick to the portable, and typically familiar, CRT
functions for managing memory in applications written for Windows.
The Windows API offers three groups of functions for
managing memory in applications: memory-mapped file
functions, heap memory functions, and virtual memory
functions.
Figure 1. The Windows API provides different levels of memory management for versatility in application programming.
Table 1. Memory Management Functions

Memory set | System resource affected | Related technical article
Virtual memory functions | A process's virtual address space; system pagefile; system memory; hard disk space | "Managing Virtual Memory"
Memory-mapped file functions | A process's virtual address space; system pagefile; standard file I/O; system memory; hard disk space | "Managing Memory-Mapped Files"
Heap memory functions | A process's virtual address space; system memory; process heap resource structure | "Managing Heap Memory"
Global heap memory functions | A process's heap resource structure | "Managing Heap Memory"
Local heap memory functions | A process's heap resource structure | "Managing Heap Memory"
C run-time reference library | A process's heap resource structure | "Managing Heap Memory"
Kernel logical addresses

These make up the normal address space of the kernel.
These addresses map some portion (perhaps all) of main
memory and are often treated as if they were physical
addresses. On most architectures, logical addresses and
their associated physical addresses differ only by a constant
offset.
Logical addresses use the hardware's native pointer size
and, therefore, may be unable to address all of physical
memory on heavily equipped 32-bit systems. Logical addresses
are usually stored in variables of type unsigned long or void *.
Memory returned from kmalloc has a kernel logical address.
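The "constant offset" relationship between kernel logical and physical addresses can be modelled directly. The PAGE_OFFSET value below is the common 32-bit x86 3G/1G split, used here only as an example; the functions are simple models of the kernel's __pa/__va conversion, not the real macros.

```python
PAGE_OFFSET = 0xC0000000   # typical 3G/1G split on 32-bit x86 (example value)

def pa(logical_addr):
    """Model of __pa(): logical -> physical, a constant offset."""
    return logical_addr - PAGE_OFFSET

def va(physical_addr):
    """Model of __va(): physical -> logical (low memory only)."""
    return physical_addr + PAGE_OFFSET

addr = 0xC0100000
print(hex(pa(addr)))         # -> 0x100000
assert va(pa(addr)) == addr  # the two conversions are inverses on low memory
```

Because the mapping is a fixed offset, converting in either direction is a single subtraction or addition, which is why logical addresses can often be treated almost like physical ones.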
Kernel virtual addresses
Kernel virtual addresses are similar to logical addresses in that they
are a mapping from a kernel-space address to a physical address.
Kernel virtual addresses do not necessarily have the linear, one-to-
one mapping to physical addresses that characterize the logical
address space, however. All logical addresses are kernel virtual
addresses, but many kernel virtual addresses are not logical
addresses.
For example: Memory allocated by vmalloc has a virtual address
(but no direct physical mapping). The kmap function (described later
in this chapter) also returns virtual addresses. Virtual addresses are
usually stored in pointer variables.
DIAGRAM: Address types used in Linux
If you have a logical address, the macro __pa( ) (defined
in ) returns its associated physical address. Physical
addresses can be mapped back to logical addresses with __va( ),
but only for low-memory pages.
Different kernel functions require different types of addresses. It
would be nice if there were different C types defined, so that the
required address types were explicit, but we have no such luck. In
this chapter, we try to be clear on which types of addresses are
used where.
Physical Addresses and Pages
Physical memory is divided into discrete units called pages. Much of
the system's internal handling of memory is done on a per-page
basis. Page size varies from one architecture to the next, although
most systems currently use 4096-byte pages. The
constant PAGE_SIZE (defined in ) gives the page size
on any given architecture.
If you look at a memory address, virtual or physical, it is
divisible into a page number and an offset within the page.
If 4096-byte pages are being used, for example, the 12
least-significant bits are the offset, and the remaining,
higher bits indicate the page number. If you discard the
offset and shift the rest of the address to the right, the result is
called a page frame number (PFN). Shifting bits to convert between
page frame numbers and addresses is a fairly common operation; the
macro PAGE_SHIFT tells how many bits must be shifted to make this
conversion.
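With 4096-byte pages, splitting an address into page frame number and offset is pure bit arithmetic, directly mirroring the PAGE_SHIFT/PAGE_SIZE description above:

```python
PAGE_SHIFT = 12                  # 2**12 = 4096-byte pages
PAGE_SIZE = 1 << PAGE_SHIFT

def split_address(addr):
    """Return (page frame number, offset within the page)."""
    pfn = addr >> PAGE_SHIFT             # discard offset, shift right
    offset = addr & (PAGE_SIZE - 1)      # keep the low 12 bits
    return pfn, offset

pfn, offset = split_address(0x12345678)
print(hex(pfn), hex(offset))                        # -> 0x12345 0x678
assert (pfn << PAGE_SHIFT) | offset == 0x12345678   # lossless split
```

The reverse conversion (PFN back to an address) is the same shift in the other direction, which is why the kernel performs it so cheaply and so often.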
Virtual Memory Areas

The virtual memory area (VMA) is the kernel data structure used to
manage distinct regions of a process's address space. A VMA
represents a homogeneous region in the virtual memory of a
process: a contiguous range of virtual addresses that have the same
permission flags and are backed up by the same object (a file, say,
or swap space). It corresponds loosely to the concept of a
"segment," although it is better described as "a memory object with
its own properties." The memory map of a process is made up of (at
least) the following areas:
An area for the program's executable code (often called
text)
Multiple areas for data, including initialized data (that which
has an explicitly assigned value at the beginning of execution),
uninitialized data (BSS),[3] and the program stack
One area for each active memory mapping
The memory areas of a process can be seen by looking
in /proc/<pid>/maps (in which pid, of course, is replaced by a
process ID). /proc/self is a special case of /proc/<pid>, because it
always refers to the current process. As an example, here are a
couple of memory maps (to which we have added short comments
in italics):
# cat /proc/1/maps      look at init
08048000-0804e000 r-xp 00000000 03:01 64652   /sbin/init              text
0804e000-0804f000 rw-p 00006000 03:01 64652   /sbin/init              data
0804f000-08053000 rwxp 00000000 00:00 0       zero-mapped BSS
40000000-40015000 r-xp 00000000 03:01 96278   /lib/ld-2.3.2.so        text
40015000-40016000 rw-p 00014000 03:01 96278   /lib/ld-2.3.2.so        data
40016000-40017000 rw-p 00000000 00:00 0       BSS for ld.so
42000000-4212e000 r-xp 00000000 03:01 80290   /lib/tls/libc-2.3.2.so  text
4212e000-42131000 rw-p 0012e000 03:01 80290   /lib/tls/libc-2.3.2.so  data
42131000-42133000 rw-p 00000000 00:00 0       BSS for libc
bffff000-c0000000 rwxp 00000000 00:00 0       Stack segment
ffffe000-fffff000 ---p 00000000 00:00 0       vsyscall page

# rsh wolf cat /proc/self/maps   #### x86-64 (trimmed)
00400000-00405000 r-xp 00000000 03:01 1596291 /bin/cat                text
00504000-00505000 rw-p 00004000 03:01 1596291 /bin/cat                data
00505000-00526000 rwxp 00505000 00:00 0       bss
3252200000-3252214000 r-xp 00000000 03:01 1237890 /lib64/ld-2.3.3.so
3252300000-3252301000 r--p 00100000 03:01 1237890 /lib64/ld-2.3.3.so
3252301000-3252302000 rw-p 00101000 03:01 1237890 /lib64/ld-2.3.3.so
7fbfffe000-7fc0000000 rw-p 7fbfffe000 00:00 0 stack
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0 vsyscall
Mac OS memory management
"About This Computer" Mac OS 9.1 window showing the memory
consumption of each open application and the system software
itself.
Historically, the Mac OS used a form of memory management that
has fallen out of favour in modern systems. Criticism of this
approach was one of the key areas addressed by the change to Mac
OS X.
The original problem for the designers of the Macintosh was how to
make optimum use of the 128 KB of RAM that the machine was
equipped with.[1] Since at that time the machine could only run
one application program at a time, and there was
no fixed secondary storage, the designers implemented a simple
scheme which worked well with those particular constraints.
However, that design choice did not scale well with the development
of the machine, creating various difficulties for both programmers
and users.
FRAGMENTATION
The chief worry of the original designers appears to have
been fragmentation: repeated allocation and
deallocation of memory through pointers leads to many
small isolated areas of memory which cannot be used
because they are too small, even though the total free
memory may be sufficient to satisfy a particular request for memory.
To solve this, Apple's designers used the concept of a
relocatable handle: a reference to memory which allowed
the actual data referred to to be moved without invalidating
the handle.
Apple's scheme was simple: a handle was simply a pointer into a
(non-relocatable) table of further pointers, which in turn pointed to
the data. If a memory request required compaction of memory, this
was done and the table, called the master pointer block, was
updated. The machine implemented two areas
available for this scheme: the system heap (used for the OS), and
the application heap. As long as only one application at a time was
run, the system worked well. Since the entire application heap was
dissolved when the application quit, fragmentation was minimized.
Memory, or RAM, is handled differently in Mac OS X than it
was in earlier versions of the Mac OS. In earlier versions of
the Mac OS, each program had assigned to it an amount of
RAM the program could use. Users could turn on Virtual
Memory, which uses part of the system's hard drive as extra
RAM, if the system needed it.
In contrast, Mac OS X uses a completely different memory
management system. All programs can use an almost unlimited
amount of memory, which is allocated to the application on an
as-needed basis. Mac OS X will generously load as much of a
program into RAM as it can, even parts that may not
currently be in use.
This may inflate the amount of actual RAM being used by the
system. When RAM is needed, the system will swap or page out
those pieces not needed or not currently in use. It is
important to bear this in mind because a casual examination of
memory usage with the top command via the Terminal application
will reveal large amounts of RAM being used by applications. (The
Terminal application allows users to access the UNIX operating
system which is the foundation of Mac OS X.) When needed, the
system will dynamically allocate additional virtual memory, so there
is no need for users to try to tamper with how the system handles
additional memory needs. However, there is no substitute for having
additional physical RAM.
Most Macintoshes produced in the past few years have
shipped with either 128 or 256 MB of RAM. Although Apple
claims that the minimum amount of RAM that's needed to run Mac
OS X is 128 MB, users will find having at least 256 MB is necessary
to work in a productive way and having 512 MB is preferable.
Starting with Mac OS 10.4 (Tiger) the minimum will be raised
to 256 MB of RAM. Most new Macintoshes are shipping with
512 MB of RAM. For systems which have only 256 MB of RAM it is
advisable for users to have at least 512 MB of RAM in order to run
applications effectively.
Mac OS 10.5 (Leopard) requires at least 512 MB of RAM. Most
users will find that a minimum of 1 GB of RAM is desirable. Less
than 1 GB means the system will have to make use of
virtual memory, which will adversely affect system
performance.
For CPU Scheduling and Memory Management
#include <stdio.h>
#include <conio.h>
void roundrobin();
void fifo();
void prioritynonpre();
void sjf();
void fcfs();
void lru();
int main()
{
int choice1,choice2,choice3,choice4,choice5;
while(1)
{
//clrscr();
printf("\n\n\t ***** Welcome To CPU SCHEDULING and
MEMORY MANAGEMENT ALGO *****");
printf("\n\n Enter your choice : \n 1.For CPU Scheduling
algorithms\n 2.For Memory Management algorithms\n 0.For
EXIT \n Enter Your Choice: ");
scanf("%d",&choice1);
if(choice1 == 1)
{
//clrscr();
printf("\n\n Enter your choice:\n 1.For Pre-emptive\n
2.For Non-Preemptive\n 0.For To Exit \n Enter Your Choice:");
scanf("%d",&choice2);
if(choice2 == 1)
{
//clrscr();
printf("Enter your choice :\n 1.For Round Robin\n 0.For
To Exit\n Enter Your Choice:");
scanf("%d",&choice3);
if(choice3 == 1)
{
roundrobin();
}
else if(choice3 == 0)
{
break;
}
else
{
printf("\n\n\t ***** INVALID INPUT *****");
printf("\n\t Press any key to continue......");
getch();
}
}
else if(choice2 == 2)
{
//clrscr();
printf("\n\n Enter your choice:\n 1.For FCFS\n 3.For SJF\n
0.For to Exit \n Enter Your Choice:");
scanf("%d",&choice4);
if(choice4 == 1)
{
fifo();
}
else if(choice4 == 3)
{
sjf();
}
else if(choice4 == 0)
{
break;
}
else
{
printf("\n\n\t ***** INVALID INPUT *****");
printf("\n\t Press any key to continue......");
getch();
}
}
else if(choice2 == 0)
{
break;
}
else
{
printf("\n\n\t ***** INVALID INPUT *****");
printf("\n\t Press any key to continue......");
getch();
}
}
else if(choice1 == 2)
{
//clrscr();
printf("Enter your choice:\n 1.For FIFO ALGORITHM\n
2.For LRU ALGORITHM\n 0.For To Exit\n Enter Your Choice:");
scanf("%d",&choice5);
if(choice5 == 1)
{
fcfs();
}
else if(choice5 == 2)
{
lru();
}
else if(choice5 == 0)
{
break;
}
else
{
printf("\n\n\t ***** INVALID INPUT *****");
printf("\n\t Press any key to continue......");
getch();
}
}
else if(choice1 == 0)
{
break;
}
else
{
printf("\n\n\t ***** INVALID INPUT *****");
printf("\n\t Press any key to continue......");
getch();
}
}
getch();
return 0;
}
void sjf()
{
int burst[5],arrival[5],done[5],waiting[5];
int i,j,k,l=0,sum,total,min,max;
int temp;
float awt = 0.0;
sum = 0;
printf("\n\n\t\t ***** This SJF is for 5 Processes *****");
printf("\n\n\tEnter the details of the processes ");
for(i=0;i<5;i++)
{
printf("\nEnter the burst time : ");
scanf("%d",&burst[i]);
printf("Enter the arrival time : ");
scanf("%d",&arrival[i]);
done[i] = 0;
waiting[i] = 0;
}
i = 0;   /* the clock */
for(j=0;j<5;j++)
{
min = 9999;
for(k=0;k<5;k++)
{
if(done[k] == 0 && arrival[k] <= i && burst[k] < min)
{
min = burst[k];
l = k;
}
}
printf("\nProcessing process %d",l+1);
printf(" i = %d",i);
printf(" arrival = %d",arrival[l]);
temp = i - arrival[l];
i = i + burst[l];
done[l] = 1;
waiting[l] = temp;
}
awt = 0.0F;
printf("\nThe respective waiting times are : ");
for(i=0;i<5;i++)
{
printf(" %d",waiting[i]);
awt = awt + waiting[i];
}
awt = awt / 5;
printf("\n\n The average waiting time is %.3f",awt);
printf("\n\n Press any key to continue.....");
getch();
}
/* Alternate implementation (commented out):
int burst[4],arrival[4],done[4],waiting[4];
int i,j,sum,min;
int temp;
sum = 0;
printf("\n\n\t Enter the details of the processes:");
for(i=0;i
done[i] = 0;
waiting[i] = 0;
}
for(i=0;i
temp = i - arrival[j];
i = i + burst[j];
done[j] = 1;
waiting[j] = temp;
printf("\ntemp = %d",temp);
}
for(i=0;i<4;i++)
printf("\n waiting = %d",waiting[i]);
*/

void roundrobin()
{
int i,j,sum,burst[4],arrival[4],lefttime[4],waiting[4],last[4];
int gap=0;
float awt=0;
sum = 0;
// clrscr();
printf("\n\n *** This ROUNDROBIN ALGO works for 4
processes ***");
printf("\n\n\t Enter the details of the processes:");
for(i=0;i<4;i++)
{
printf("\nEnter the burst time : ");
scanf("%d",&burst[i]);
printf("Enter the arrival time : ");
scanf("%d",&arrival[i]);
lefttime[i] = burst[i];   /* time still needed by process i */
waiting[i] = 0;
last[i] = 0;
}
printf("\n Enter the interval time:");
scanf("%d",&gap);
for(i=0;i<4;i++)
{
sum = sum + burst[i];
}
j=0;
for(i=0;i<sum;)
{
if( lefttime[j] > 0 && arrival[j] <= i )
{
if(lefttime[j] <= gap)
{
//printf("\nProcessing p%d in less",j);
waiting[j] = waiting[j] + (i - last[j]);
i = i + lefttime[j];   /* run the process to completion */
lefttime[j] = 0;
last[j] = i;
//printf(" Waiting = %d",waiting[j]);
}
else if(lefttime[j] > gap)
{
//printf("\nProcessing p%d in more",j);
lefttime[j] = lefttime[j] - gap;
waiting[j] = waiting[j] + (i - last[j]);
i = i + gap;
last[j] = i;
}
}
j = (j + 1) % 4;   /* move to the next process in the ring */
}
awt = 0;
for(i=0;i<4;i++)
awt = awt + waiting[i];
awt = awt / 4;
printf("\n\n The average waiting time is %.3f",awt);
printf("\n\n Press any key to continue.....");
getch();
}
void lru()
{
int num,i,buf,page[100],buff[100],j,pagefault,rep,flag = 1,ind,abc[100];
int count;
int l,k,fla;
// clrscr();
printf("\n\n Enter the number of paging sequence you want to
enter:");
scanf("%d",&num);
printf("\n Enter the paging sequence:\n");
for(i=0;i<num;i++)
scanf("%d",&page[i]);
printf("\n Enter the buffer size:");
scanf("%d",&buf);
k = 0;
pagefault = 0;
for(i=0;i<num;i++)
{
flag = 1;
for(j=0;j<k;j++)
{
if(buff[j] == page[i])
{
flag = 0;
break;
}
}
if(flag == 1)
{
printf("\n *** I m here ***");
if(k < buf)
{
buff[k] = page[i];
k++;
pagefault++;
printf("\nNow pages are : ");
for(l=0;l<k;l++)
printf(" %d",buff[l]);
}
else
{
count = 0;
for(j=0;j<buf;j++)
{
abc[j] = 0;
}
for(l=i;l>=0;l--)   /* scan the reference string backwards */
{
fla = 1;   /* assume page[l] not seen yet this scan */
for(j=buf-1;j>=0;j--)
{
if(abc[j] == page[l])
{
fla = 0;
break;
}
}
if(fla == 1)
{
abc[count] = page[l];
count++;
}
if(count == (buf-1))
{
rep = abc[buf-1];
break;
}
}
for(l=0;l<buf;l++)
{
if(buff[l] == rep)
buff[l] = page[i];   /* evict the least recently used page */
}
pagefault++;
printf("\nNow pages are : ");
for(l=0;l<buf;l++)
printf(" %d",buff[l]);
}
}
}
printf("\n\n The number of page faults is %d",pagefault);
getch();
}
void fifo()
{
int k=0,ptime[25],n,s=0,i,sum=0;
char name[25][25];
float avg;
//clrscr();
printf ("\n\nEnter the no. of process:\t");
scanf ("%d",&n);
for(i=0;i<n;i++)
{
printf("\n Enter the process name : ");
scanf("%s",name[i]);
printf(" Enter the process time : ");
scanf("%d",&ptime[i]);
}
printf("\n \n");
printf("\n Process - Name \t Process - Time \n");
for(i=0;i<n;i++)
{
printf("\n %s \t\t\t %d",name[i],ptime[i]);
s+=ptime[i];
sum+=s;
}
avg=(float)sum/n;
printf("\n Average turnaround time is:\t");
printf("%.2f msec",avg);
printf("\n\n Press any key to continue.....");
getch();
}
void fcfs()
{
int num,i,buf,page[100],buff[100],j,pagefault,flag = 1,temp,k,l;
//clrscr();
printf("\n Enter the number of paging sequence you want to
enter:");
scanf("%d",&num);
printf("\n Enter the paging sequence:\n");
for(i=0;i<num;i++)
scanf("%d",&page[i]);
printf("\n Enter the buffer size:");
scanf("%d",&buf);
for(j=0;j<buf;j++)
buff[j] = -1;
pagefault = 0;
for(i=0;i<num;i++)
{
flag = 1;
for(j=0;j<buf;j++)
{
if(buff[j] == page[i])
{
flag = 0;
break;
}
}
if(flag == 1)
{
/* bubble the oldest page to the end of the buffer */
for(j=0;j<buf-1;j++)
{
temp = buff[j+1];
buff[j+1] = buff[j];
buff[j] = temp;
}
buff[buf-1] = page[i];
pagefault++;
printf("\n Now pages are : ");
for(l=0;l<buf;l++)
printf(" %d",buff[l]);
}
}
printf("\n\n The number of page faults is %d",pagefault);
getch();
}