Consider a paging system with the page table stored in memory:
What kind of fragmentation do we have with paging? Given N processes in memory, how much fragmentation do we have?
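A minimal back-of-the-envelope sketch of the usual estimate (page size and process count are arbitrary assumptions): paging causes only internal fragmentation, roughly half a page per process on average.

```c
#include <stdio.h>

/* Back-of-the-envelope estimate of internal fragmentation under paging:
 * paging has no external fragmentation, but on average the last page of
 * each process is only half full, so N processes waste about N * P / 2
 * bytes (P = page size).  Numbers below are illustrative only. */
int main(void) {
    const unsigned long page_size = 4096;   /* assumed 4 KB pages       */
    const unsigned long n_procs   = 50;     /* assumed N = 50 processes */

    unsigned long expected_waste = n_procs * page_size / 2;
    printf("Expected internal fragmentation: %lu bytes (~%lu KB)\n",
           expected_waste, expected_waste / 1024);
    return 0;
}
```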
What is locality of reference? How does demand paging take advantage of it in terms of a process's working set?
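As an illustration (array dimensions are arbitrary), the classic row-major vs. column-major traversal shows how spatial locality concentrates references into a small working set of pages:

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int grid[ROWS][COLS];   /* stored row-major in C */

/* Good spatial locality: consecutive accesses touch adjacent addresses,
 * so each page (and cache line) is reused many times before eviction. */
long sum_row_major(void) {
    long sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += grid[r][c];
    return sum;
}

/* Poor spatial locality: each access jumps COLS * sizeof(int) bytes,
 * touching a different page/line almost every time. */
long sum_col_major(void) {
    long sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += grid[r][c];
    return sum;
}

int main(void) {
    printf("%ld %ld\n", sum_row_major(), sum_col_major());
    return 0;
}
```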
File system initialization:
What is the maximum number of disk I/O operations required to add a block to the end of a file under each of the following schemes: contiguous allocation, linked-list allocation with an index, and i-nodes? Assume the base file descriptor is currently cached in memory, but nothing else is.
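For the i-node case, a sketch of a hypothetical classic-UNIX-style i-node (field names, pointer counts, and block size are assumptions, not any particular file system) shows where the extra index-block I/Os come from:

```c
#include <stdint.h>
#include <stdio.h>

#define N_DIRECT   10
#define BLOCK_SIZE 4096ULL
#define PTR_SIZE   4ULL          /* bytes per block pointer */

/* Hypothetical classic-UNIX-style i-node; field names and counts are
 * assumptions, not any particular file system. */
struct inode {
    uint32_t size_bytes;
    uint32_t direct[N_DIRECT];   /* block numbers of the first data blocks */
    uint32_t single_indirect;    /* a block full of further block numbers  */
    uint32_t double_indirect;    /* a block of pointers to indirect blocks */
};

int main(void) {
    unsigned long long per_indirect = BLOCK_SIZE / PTR_SIZE;   /* 1024 */
    unsigned long long max_blocks =
        N_DIRECT + per_indirect + per_indirect * per_indirect;

    /* Appending past the direct pointers means the single (or double)
     * indirect block must also be read and written -- those are the
     * extra disk I/Os the question asks about. */
    printf("max file size: %llu blocks (~%llu MB)\n",
           max_blocks, max_blocks * BLOCK_SIZE / (1024ULL * 1024ULL));
    return 0;
}
```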
Consider a file system that supports aliases (links). Suppose we have:
/home/mark/file.txt
/user/local/temp.txt
where "temp.txt" and "file.txt" are intended to refer to the same set of data blocks on the disk (i.e., the same file). If "temp.txt" were a hard link, what would happen if "file.txt" were deleted? If "temp.txt" were a soft link, what would happen if "file.txt" were deleted?
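A small POSIX demo (using throwaway file names in the current directory) shows the behaviour the question is probing, via link(), symlink(), and unlink():

```c
#include <stdio.h>
#include <unistd.h>

/* File names here are illustrative only. */
static void check(const char *path) {
    FILE *f = fopen(path, "r");
    if (f) { printf("%s: still readable\n", path); fclose(f); }
    else     printf("%s: cannot be opened (dangling)\n", path);
}

int main(void) {
    FILE *f = fopen("file.txt", "w");  /* the original file */
    if (!f) return 1;
    fputs("hello\n", f);
    fclose(f);

    link("file.txt", "hard.txt");      /* hard link: a second directory
                                          entry for the same i-node      */
    symlink("file.txt", "soft.txt");   /* soft link: a tiny file that
                                          just stores the path            */

    unlink("file.txt");                /* delete the original name */

    check("hard.txt");   /* data survives: link count was 2, now 1       */
    check("soft.txt");   /* dangles: the path it names no longer exists  */

    unlink("hard.txt");  /* clean up */
    unlink("soft.txt");
    return 0;
}
```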
Why do large file-system blocks typically result in better file-system access times? Why do large blocks result in less efficient use of disk space?
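A rough model makes both effects concrete (assumed 9 ms seek, 4 ms rotational delay, 50 MB/s transfer, and one seek per block; all numbers are illustrative):

```c
#include <stdio.h>

/* Pessimistic model: one seek + rotational delay per block.  Many small
 * blocks mean many slow positioning operations; one big block wastes
 * most of its space on a small file. */
int main(void) {
    const double seek_ms = 9.0, rot_ms = 4.0, mb_per_s = 50.0;
    const double file_kb = 32.0, small_file_kb = 2.0;
    const double block_kb[] = { 1.0, 4.0, 32.0 };

    for (int i = 0; i < 3; i++) {
        double b = block_kb[i];
        double n = file_kb / b;                            /* blocks read    */
        double xfer_ms = (b / 1024.0) / mb_per_s * 1000;   /* per-block xfer */
        double total_ms = n * (seek_ms + rot_ms + xfer_ms);
        double waste_kb = (b > small_file_kb) ? b - small_file_kb : 0;
        printf("%5.0f KB blocks: read a 32 KB file in %6.1f ms, "
               "waste %4.0f KB on a 2 KB file\n", b, total_ms, waste_kb);
    }
    return 0;
}
```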
Explain the principles behind the "top half" and the "bottom half" of an interrupt handler.
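The following is a user-space analogy only, not kernel code: the signal handler stands in for the top half (minimal and fast, it only records that the event happened), and the main loop stands in for the bottom half (the deferred, lengthy processing).

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t event_pending = 0;

/* "Top half" analogue: do the absolute minimum and return quickly. */
static void top_half(int signo) {
    (void)signo;
    event_pending = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = top_half;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    puts("Press Ctrl-C to raise the 'interrupt'; Ctrl-\\ to quit.");
    for (;;) {
        pause();                 /* wait for a signal */
        if (event_pending) {
            event_pending = 0;
            /* "Bottom half" analogue: the slow work, done outside the
             * handler where it cannot block further signal delivery. */
            printf("bottom half: processing the deferred work...\n");
            sleep(1);
        }
    }
}
```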
What is the role of the device-independent layer of a device driver?
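A sketch of the underlying idea (all names here are invented): upper layers call one uniform interface, and a table of per-driver function pointers hides the differences between devices.

```c
#include <stdio.h>
#include <string.h>

struct dev_ops {
    const char *name;
    int (*read)(char *buf, int len);
};

static int console_read(char *buf, int len) {
    strncpy(buf, "data from console driver", len - 1);
    buf[len - 1] = '\0';
    return 0;
}

static int disk_read(char *buf, int len) {
    strncpy(buf, "data from disk driver", len - 1);
    buf[len - 1] = '\0';
    return 0;
}

static struct dev_ops devices[] = {
    { "console", console_read },
    { "disk",    disk_read    },
};

/* Device-independent entry point: naming, buffering, and uniform error
 * reporting would live here, identical for every device. */
int dev_read(const char *name, char *buf, int len) {
    for (unsigned i = 0; i < sizeof devices / sizeof devices[0]; i++)
        if (strcmp(devices[i].name, name) == 0)
            return devices[i].read(buf, len);
    return -1;   /* no such device */
}

int main(void) {
    char buf[64];
    if (dev_read("disk", buf, sizeof buf) == 0)
        printf("%s\n", buf);
    return 0;
}
```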
Can a disk have more cylinders than tracks? Can a disk have more sectors than tracks? Can a disk have more cylinders than platters?
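Toy geometry with made-up numbers, to make the counting relationships concrete:

```c
#include <stdio.h>

/* Assumed figures only: each platter has two recordable surfaces, each
 * surface contributes one track to every cylinder, and each track is
 * divided into many sectors. */
int main(void) {
    const long platters  = 4;
    const long surfaces  = platters * 2;
    const long cylinders = 10000;
    const long sectors_per_track = 400;

    long tracks  = cylinders * surfaces;   /* one track per surface per cylinder */
    long sectors = tracks * sectors_per_track;

    printf("platters=%ld cylinders=%ld tracks=%ld sectors=%ld\n",
           platters, cylinders, tracks, sectors);
    return 0;
}
```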
Explain why you can use "pure" LRU as a disk-cache replacement algorithm, but it is more difficult to use "pure" LRU as a memory (page) replacement algorithm.
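A sketch of exact LRU for a buffer cache (slot count and block numbers are arbitrary): it is feasible here because every access funnels through an explicit call the OS controls, whereas for page replacement the equivalent bookkeeping would have to happen on every load and store, which is impractical without special hardware.

```c
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4

static int cache[CACHE_SLOTS];   /* block numbers; index 0 = LRU end */
static int used = 0;

/* Every block access goes through this call, so the LRU ordering can be
 * maintained exactly in software. */
void bread(int block) {
    int i, found = 0;
    for (i = 0; i < used; i++)
        if (cache[i] == block) { found = 1; break; }

    if (found) {
        /* Remove from current position before reinserting at MRU end. */
        memmove(&cache[i], &cache[i + 1], (used - i - 1) * sizeof(int));
    } else if (used == CACHE_SLOTS) {
        printf("evict block %d (true LRU victim)\n", cache[0]);
        memmove(&cache[0], &cache[1], (CACHE_SLOTS - 1) * sizeof(int));
    } else {
        used++;
    }
    cache[used - 1] = block;     /* most recently used goes at the end */
}

int main(void) {
    int refs[] = { 1, 2, 3, 4, 1, 5, 2 };
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        bread(refs[i]);
    return 0;
}
```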