If an instruction takes 1 microsecond and a page fault takes an additional n microseconds, give a formula for the effective instruction time if page faults occur every k instructions.
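As a sanity check on such a formula, a minimal sketch (the numeric values 1000 and 10 below are arbitrary example inputs, not part of the problem):

```python
def effective_instruction_time(n_us, k):
    """Effective time per instruction, in microseconds, when every
    k-th instruction additionally incurs an n_us-microsecond page fault:
    the base 1 us cost plus the fault cost amortized over k instructions."""
    return 1 + n_us / k

# e.g. a 1000-microsecond fault every 10 instructions
print(effective_instruction_time(1000, 10))  # → 101.0
```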
Outline the steps that the operating system must take between the time a process incurs a page fault and the time that process is ready to run again. Be sure to indicate how any tables are updated. Assume that a free frame is available for the page.
In a paging system, a frame in memory can have four possible status values. Draw a state-transition diagram with these status values as states, showing the possible transitions between the states and the reason for each transition.
Consider the following page reference string for a process with four frames available to it. Assume that demand paging is used (no pages are preloaded). Show which page references cause a page fault for each of the following page replacement policies:
0 1 2 4 0 3 1 2
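The policies themselves are not listed here. As an illustration only, a minimal demand-paging simulator for the FIFO policy (one common choice for such exercises) with this string and four frames might look like:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Return the list of references that fault under FIFO replacement
    with demand paging (memory starts empty)."""
    frames = deque()                      # oldest resident page at the left
    faults = []
    for page in refs:
        if page not in frames:
            faults.append(page)           # this reference causes a fault
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

refs = [0, 1, 2, 4, 0, 3, 1, 2]
print(fifo_faults(refs, 4))  # → [0, 1, 2, 4, 3]
```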
The First-In, First-Out (FIFO) page replacement policy is relatively easy to implement and has low overhead. However, FIFO can easily replace a heavily used page. Design a simple modification to FIFO that is also easy to implement, has low overhead, and prevents a heavily used page from being replaced.
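One classic modification along these lines is the "second chance" scheme, sketched below purely as an illustration (not necessarily the exercise's intended answer): pages are kept in FIFO order, but a page whose hardware reference bit is set is recycled to the tail instead of being evicted.

```python
from collections import deque

def second_chance_evict(frames):
    """frames: deque of (page, referenced_bit) pairs, oldest at the left.
    Recycle referenced pages to the tail (clearing their bit) until an
    unreferenced page reaches the head; evict and return that page."""
    while True:
        page, referenced = frames.popleft()
        if referenced:
            frames.append((page, False))  # give the page a second chance
        else:
            return page                   # evict this page

frames = deque([("A", True), ("B", False), ("C", True)])
print(second_chance_evict(frames))  # → B  ("A" was referenced, so it is recycled)
```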
Assume we have the following text segment for a program:

Load R1 A
Load R2 A
Add R1 R2 R3
Store R3 A
If0 R3 Jump 4

The page size is two words. Each instruction is one word. In addition, the data segment for the program is located above the text segment. The data segment is two pages large. The array A[0-3] occupies one word per element. The array is located at the top of the data segment. The page numbering starts from 0. What is the page reference string generated when the above code is executed?
Suppose that a 32-bit virtual address is broken up into four fields, a, b, c, and d. The first three are used for a three-level page table system. The fourth field, d, is the offset. Does the number of pages depend on the sizes of all four fields? If not, which ones matter and which ones do not?
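On one reading of the question, the number of virtual pages is 2 raised to the combined width of the page-number fields a, b, and c, while d fixes the page size. A small sketch (the 8/6/6/12 split below is an arbitrary assumed example, not given in the problem):

```python
def virtual_pages(a_bits, b_bits, c_bits):
    """Number of virtual pages in a three-level scheme: the page number
    is the concatenation of the a, b, and c fields."""
    return 2 ** (a_bits + b_bits + c_bits)

# Example split of a 32-bit address: 8 + 6 + 6 page-number bits, 12 offset bits
print(virtual_pages(8, 6, 6))  # → 1048576 pages, each of 2**12 = 4096 bytes
```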
It has been observed that the number of instructions executed between page faults is directly proportional to the number of page frames allocated to a program. If the available memory is doubled, the mean interval between page faults is also doubled. Suppose that a normal instruction takes 1 microsec, but if a page fault occurs, it takes 2001 microsecs. If a program takes 60 sec to run, during which time it gets 15,000 page faults, how long would it take to run if twice as much memory were available?
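A worked sketch of the arithmetic, taking the stated proportionality at face value (so doubling memory halves the fault count):

```python
US_PER_SEC = 1_000_000

total_s = 60
faults = 15_000
fault_overhead_us = 2001 - 1      # extra cost of a faulting instruction, in us

overhead_s = faults * fault_overhead_us / US_PER_SEC  # 30 s lost to faults
compute_s = total_s - overhead_s                      # 30 s of useful work

# Twice the memory doubles the inter-fault interval, halving the fault count
new_faults = faults / 2
new_total_s = compute_s + new_faults * fault_overhead_us / US_PER_SEC
print(new_total_s)  # → 45.0 seconds
```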
Consider a machine that has a 24-bit address space and an 8192-byte page. The page table is entirely in hardware, with one 24-bit word per entry. When a process starts, the page table is copied to the hardware from memory, at one word every 10 microseconds. Each process runs for 100 milliseconds (including the time to load the page table). What fraction of the CPU time is devoted to loading the page tables?
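The quantities above plug together directly; a minimal sketch of the calculation:

```python
ADDRESS_BITS = 24
PAGE_SIZE = 8192                   # bytes
LOAD_US_PER_ENTRY = 10             # microseconds per page-table word
QUANTUM_US = 100_000               # 100 ms per process run

entries = 2 ** ADDRESS_BITS // PAGE_SIZE  # 2048 page-table entries
load_us = entries * LOAD_US_PER_ENTRY     # 20480 us to load the table
print(load_us / QUANTUM_US)  # → 0.2048, i.e. 20.48% of CPU time
```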