Memory Management

The memory manager is the part of the operating system that manages the memory hierarchy.

It does the following:

• keeps track of which parts of memory are in use and which are free.

• allocates memory to processes when they need it.

• deallocates memory when a process is done with it.

• manages swapping between main memory and disk when main memory is too small to hold all the processes.

Memory Management Systems

Two classes of memory management systems:

1. Those that move processes back and forth between the main memory and the disk:

▪ swapping

▪ paging.

2. And those that do not:

▪ monoprogramming without swapping or paging

▪ multiprogramming with fixed partitions.

Monoprogramming without swapping or Paging

➢ Only one process at a time can be running.

➢ The main memory is shared between that program and the operating system.

➢ Three variations of this scheme are shown on page 183.

← As soon as the user types a command, the operating system copies the requested program from disk to the main memory and executes it.

← When the process finishes, the operating system displays a prompt character and waits for a new command.

← A new program overwrites the previous one.

Multiprogramming with Fixed Partitions

➢ The main memory is divided up into n (possibly unequal) partitions.

➢ The partitioning can be done manually when the system is started up.

➢ New jobs are either put into:

← a single input queue, or

← one of the multiple input queues (one queue for each partition).

➢ With one input queue for each partition, when a new job arrives, it is placed in the smallest partition large enough to hold it.

Disadvantage: the queue for a large partition may be empty whereas that for a small partition is full.

➢ With one input queue, two strategies can be used whenever a partition becomes free:

1) the job closest to the front of the queue that fits in it is loaded into the empty partition and run (a sketch of this strategy follows below), or

2) the whole input queue is searched for the largest job that fits.

Since strategy 2 discriminates against small jobs, one may either:

← keep a small partition around, or

← adopt a rule stating that a job that is eligible to run may not be skipped more than k times.
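A minimal sketch of strategy 1, assuming queued jobs are kept in a simple singly linked list; the struct and function names are illustrative, not from the text:

    #include <stddef.h>

    struct job {
        size_t size;               /* amount of memory the job needs */
        struct job *next;
    };

    /* Strategy 1: when a partition of partition_size becomes free, take the
     * job closest to the front of the queue that fits, unlink it, return it. */
    struct job *pick_job(struct job **queue, size_t partition_size)
    {
        for (struct job **pp = queue; *pp != NULL; pp = &(*pp)->next) {
            if ((*pp)->size <= partition_size) {
                struct job *chosen = *pp;
                *pp = chosen->next;        /* remove the chosen job from the queue */
                return chosen;
            }
        }
        return NULL;                       /* nothing in the queue fits */
    }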

Relocation and Protection in Multiprogramming with Fixed Partitions

▪ When the linker builds an executable module, it does not know the physical address of the location where the program will be in memory.

▪ Relocation refers to the mechanism used to match the addresses referenced in a program to the physical addresses in memory.

▪ One possible solution is to use relative addressing in the program and then modify the instructions as the program is loaded into memory, adding a constant (the load address) to each address.

▪ For this to be possible, the linker must include in the program a list or bitmap telling which program words are addresses to be relocated.

▪ There must also be a way to prevent processes from reading and writing memory locations belonging to other users:

One solution (used on IBM 360) follows:

- divide the memory into 2-KB blocks and assign a 4-bit protection code to each block.

- The PSW (Program Status Word) of the running process also contains a 4-bit key; a memory block may be accessed by a process only if the block's protection code matches the key in the PSW.
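A minimal sketch of the idea behind this key check; the memory size and helper names are assumptions for illustration, not the actual IBM 360 hardware interface:

    #include <stdint.h>

    #define BLOCK_SIZE 2048u                   /* 2-KB protection blocks */
    #define NUM_BLOCKS 128u                    /* 256 KB of memory in this sketch */

    static uint8_t block_code[NUM_BLOCKS];     /* 4-bit protection code per block */
    static uint8_t psw_key;                    /* 4-bit key of the running process */

    /* Conceptually what the hardware checks on each reference to a physical address. */
    int access_allowed(uint32_t addr)
    {
        uint32_t block = addr / BLOCK_SIZE;
        if (block >= NUM_BLOCKS)
            return 0;                          /* address outside physical memory */
        return block_code[block] == psw_key;   /* codes must match, otherwise trap */
    }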

An alternative approach to both relocation and protection is as follows:

- use a base register and a limit register: when a process is started, the base register is loaded with the address of the start of its partition, and the limit register is loaded with the length of the partition.

- Every memory address generated is added to the contents of the base register automatically before being sent to memory.

- Addresses are also checked against the limit to make sure that they are within a given partition.
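A minimal sketch of this base/limit scheme; the fault handler is a stand-in for the hardware trap into the operating system, and the names are illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t base_reg;    /* loaded with the start address of the partition */
    static uint32_t limit_reg;   /* loaded with the length of the partition */

    /* Stand-in for the hardware trap into the operating system. */
    static void protection_fault(uint32_t addr)
    {
        fprintf(stderr, "protection fault: address %lu outside partition\n",
                (unsigned long)addr);
        exit(1);
    }

    /* Conceptually what happens on every memory reference: check against the
     * limit, then add the base register before the address goes to memory. */
    uint32_t translate_base_limit(uint32_t addr)
    {
        if (addr >= limit_reg)
            protection_fault(addr);
        return base_reg + addr;
    }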

Swapping

Swapping consists of the following (refer to Figure 3-4, page 189):

- bringing each process into main memory in its entirety,

- running it for a while, and then putting it back on the disk.

▪ Note 1: with swapping

• the location of a partition,

• the number of partitions, and

• the size of a partition

vary dynamically.

▪ Note 2: With swapping

• multiple holes may be created in memory.

• It is possible to combine these holes into one big one by moving all processes downward as far as possible.

• This technique is known as memory compaction (a sketch follows below).

▪ Allocation of memory to a process: Figure 3-5, page 190.
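A minimal sketch of memory compaction, assuming each process is described by a (start, size) pair, the array is sorted by address, and memory contents can simply be copied; the names are illustrative:

    #include <string.h>
    #include <stddef.h>

    struct region {
        size_t start;     /* current start address of the process's memory */
        size_t size;      /* size of the process's memory */
    };

    /* Slide every process as far down as possible so the holes between them
     * merge into one big hole at the top of memory. */
    void compact(char *memory, struct region *procs, int nprocs)
    {
        size_t next_free = 0;
        for (int i = 0; i < nprocs; i++) {
            if (procs[i].start != next_free) {
                memmove(memory + next_free, memory + procs[i].start, procs[i].size);
                procs[i].start = next_free;    /* the process has been relocated */
            }
            next_free += procs[i].size;
        }
    }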

Memory Management (with swapping)

➢ Two ways are used to keep track of memory usage in a swapping system:

- using bitmaps (Figure 3-6, page 191), or

- using a linked list (Figure 3-6, page 191).
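A minimal sketch of the bitmap approach, assuming memory is divided into fixed-size allocation units with one bit per unit recording whether it is in use; the sizes and names are assumptions for illustration:

    #include <stdint.h>
    #include <stddef.h>

    #define UNIT_SIZE 4096u                   /* size of one allocation unit (assumed) */
    #define NUM_UNITS 1024u                   /* 4 MB of memory in this sketch */

    static uint8_t bitmap[NUM_UNITS / 8];     /* 1 bit per unit: 1 = in use, 0 = free */

    /* Mark all allocation units covering [addr, addr + len) as in use or free. */
    void mark_region(size_t addr, size_t len, int in_use)
    {
        if (len == 0)
            return;
        size_t first = addr / UNIT_SIZE;
        size_t last  = (addr + len - 1) / UNIT_SIZE;
        for (size_t u = first; u <= last && u < NUM_UNITS; u++) {
            if (in_use)
                bitmap[u / 8] |=  (uint8_t)(1u << (u % 8));
            else
                bitmap[u / 8] &= (uint8_t)~(1u << (u % 8));
        }
    }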

➢ Updating the list: Figure 3-7, page 192.

➢ Algorithms to allocate memory for a newly created process (or an existing process being swapped in from disk)

(Note: processes and holes are kept on a list sorted by addresses):

• first fit (a sketch follows after this list).

• next fit.

• best fit.

• worst fit.

• quick fit.

➢ All these algorithms can be sped up by maintaining separate lists for processes and holes.

➢ With separate lists, one can also implement quick fit, which maintains separate lists for holes of the commonly requested sizes.
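A minimal sketch of first fit over a list of segments sorted by address; the structure is an illustrative assumption, and a real allocator would also split a hole that is larger than the request into an allocated part and a new, smaller hole:

    #include <stddef.h>

    struct segment {
        size_t start;             /* start address of this piece of memory */
        size_t size;              /* its length */
        int    is_hole;           /* 1 = hole, 0 = occupied by a process */
        struct segment *next;     /* list kept sorted by address */
    };

    /* First fit: scan from the beginning of the list and take the first hole
     * big enough for the request.  (Splitting the leftover part of the hole
     * is omitted to keep the sketch short.) */
    size_t first_fit(struct segment *list, size_t request)
    {
        for (struct segment *s = list; s != NULL; s = s->next) {
            if (s->is_hole && s->size >= request) {
                s->is_hole = 0;               /* hand the hole to the process */
                return s->start;
            }
        }
        return (size_t)-1;                    /* no hole is large enough */
    }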

Virtual Memory

▪ The basic idea behind virtual memory is that the size of a program (text, data, and stack) may exceed the amount of physical memory allocated for it.

▪ It allows a program to run even when it can only be partially in main memory:

▪ the operating system keeps those parts of the program currently in use in main memory, and the rest on disk.

▪ Virtual memory can also work in a multiprogramming system. While a program is waiting for part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process.

▪ Addresses generated in a program are called virtual addresses and form the virtual address space.

▪ A virtual address is mapped into a physical address by the memory management unit (MMU).

▪ Most virtual memory systems use a technique called Paging.

Paging (page 197)

➢ In a paging system, the virtual address space is subdivided into units called pages, and the corresponding units in main memory are called page frames.

➢ Pages and page frames have the same size.

➢ Transfers between the main memory and the disk are always in units of a page.

Figure 3-9, page 197:

← the virtual address space has 64 KB (16 pages of 4 KB), and

← the physical memory has 32 KB (8 page frames of 4 KB)

➢ Some pages are physically in memory, but others are not.

➢ A bit in each page table entry (the present/absent bit) keeps track of which pages are physically in memory.

➢ A page table (Figure 3-10, page 199) keeps track of the mapping between pages and page frames:

← The 16 Pages are numbered: 0000 - 1111

← The 8 page frames are numbered: 000 - 111

← The offsets within a page or a page frame have 12 bits: 000000000000 - 111111111111

← In addition to the Present/absent bit, a page table entry also has a Modified bit, a Referenced bit, and a Protection bit.

← The Referenced bit is set on every reference to the page and the Modified bit is set whenever the page is written; the Referenced bit is cleared at each clock interrupt.

← In the simplest case,

o a virtual address consists of:

▪ the virtual page number (high-order bits) and

▪ an offset (low-order bits), and

o A physical address also consists of:

▪ A page frame number (high-order bits) and

▪ an offset (low-order bits).

o The virtual page number is used as an index into the page table to find the entry for that virtual page.

o From the page table entry, the corresponding page frame number is found and attached to the offset to get the corresponding physical address.
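A minimal sketch of this translation using the sizes of Figure 3-9 (4-KB pages, hence a 12-bit offset, 16 virtual pages, 8 page frames); the page table structure and the page_fault stub are illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define OFFSET_BITS 12                    /* 4-KB pages: offset = low 12 bits */
    #define NUM_PAGES   16                    /* 64-KB virtual address space */

    struct pte {
        uint8_t frame;                        /* page frame number (0..7 here) */
        uint8_t present;                      /* present/absent bit */
    };

    static struct pte page_table[NUM_PAGES];

    /* Stand-in for the trap into the operating system. */
    static void page_fault(unsigned page)
    {
        fprintf(stderr, "page fault on virtual page %u\n", page);
        exit(1);
    }

    /* What the MMU does: split the virtual address, look up the page table,
     * and glue the page frame number onto the unchanged offset. */
    uint32_t translate_page(uint16_t vaddr)
    {
        unsigned page   = vaddr >> OFFSET_BITS;               /* high-order 4 bits */
        unsigned offset = vaddr & ((1u << OFFSET_BITS) - 1);  /* low-order 12 bits */

        if (!page_table[page].present)
            page_fault(page);                 /* the page is not in memory */

        return ((uint32_t)page_table[page].frame << OFFSET_BITS) | offset;
    }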

➢ When the CPU executes an instruction that references a virtual memory location that is not physically in memory,

1. the CPU traps to the operating system; this trap is called a page fault.

2. The operating system must then replace a page currently in memory with the referenced page.

Page Replacement Algorithms

➢ When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page that has to be brought in.

➢ Note that if the contents of the page to be replaced have been modified, it must be rewritten to the disk.

The algorithms used for page replacement are:

1. The not recently used page replacement algorithm

2. The first-in first-out (FIFO) page replacement algorithm

3. The second chance page replacement algorithm

4. The clock page replacement algorithm

5. The least recently used (LRU) page replacement algorithm (see also: Not Frequently Used (NFU), a software simulation of LRU)

6. The working set page replacement algorithm

7. The WSClock page replacement algorithm
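As one concrete example, a minimal sketch of the second chance algorithm, which is FIFO plus an inspection of the Referenced bit; the frame structure and the write_back stub are illustrative assumptions:

    #include <stdio.h>

    struct frame {
        int page;                    /* virtual page currently held in this frame */
        int referenced;              /* R bit, set by the hardware on each access */
        int modified;                /* M bit, set when the page has been written */
        struct frame *next;          /* frames kept in a circular list, FIFO order */
    };

    /* Stand-in for writing a dirty page back to disk. */
    static void write_back(int page)
    {
        printf("writing page %d back to disk\n", page);
    }

    /* Second chance: look at the oldest frame; if its R bit is set, clear the
     * bit and move on (the page gets a second chance); otherwise evict that
     * page.  The chosen frame will receive the newly loaded page. */
    struct frame *second_chance(struct frame **oldest)
    {
        for (;;) {
            struct frame *f = *oldest;
            if (f->referenced) {
                f->referenced = 0;             /* give the page a second chance */
                *oldest = f->next;             /* it is now treated as newest   */
            } else {
                if (f->modified)
                    write_back(f->page);       /* dirty pages must be saved first */
                *oldest = f->next;             /* next frame becomes the oldest   */
                return f;
            }
        }
    }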

Segmentation (page 240)

Segmentation allows program text and data to be broken up into logically independent address spaces, and it aids sharing and protection.
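A minimal sketch of how a segmented address could be checked and translated, with each segment described by its own base and length; the segment table layout is an illustrative assumption:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct seg_entry {
        uint32_t base;     /* where the segment starts in physical memory */
        uint32_t length;   /* size of the segment */
    };

    #define NUM_SEGMENTS 4
    static struct seg_entry seg_table[NUM_SEGMENTS];   /* e.g. text, data, stack */

    /* A segmented virtual address is a (segment, offset) pair: the offset is
     * checked against the segment's length, then added to the segment's base. */
    uint32_t translate_segment(unsigned segment, uint32_t offset)
    {
        if (segment >= NUM_SEGMENTS || offset >= seg_table[segment].length) {
            fprintf(stderr, "segmentation violation\n");
            exit(1);
        }
        return seg_table[segment].base + offset;
    }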

Chapter 3 Questions

1.

a) What is the job of the memory manager?

b) Provide a description of each of the following memory management systems:

i) Monoprogramming without swapping or paging

ii) Multiprogramming with fixed partitions

iii) Swapping

iv) Virtual memory.

c) How are relocation and protection handled in a multiprogramming environment with fixed partitions?

d) Describe the concept of paging.

e) Describe the concept of memory segmentation.

f) How is memory organized in a palmtop computer or an embedded system?

g) What are the two ways that an operating system can use to keep track of which parts of memory are in use and which are not, in a memory management system with swapping?

h) Describe two algorithms to allocate memory for a newly created process in a memory management system with swapping.

i) Describe two algorithms for page replacement in a paging system.

2.

Do the following exercises: 4, 5, 6, 7, 15a, 28, 29, 30, 36, pages 254 to 258
