# 3013 Previous Questions of the Day

Thursday, October 2nd, 1997

Q: Consider the following paged memory system:

• Physical memory = 128 bytes
• Physical address space = 8 frames

Answer the following questions:

1. How many bits in an address?
2. How many bits for page number?
3. How many bits for page offset?
4. Can a logical address space have only 2 pages? How big would the page table be?

A:

• Physical memory = 128 bytes
• Physical address space = 8 frames

1. 7 (128 bytes = 2^7, so 7 bits are needed to address all of physical memory)
2. 3 (8 frames = 2^3)
3. 4 (7 address bits - 3 frame bits = 4 offset bits; each frame holds 128/8 = 16 bytes)
4. Yes, a logical address space can have only 2 pages. The page table would then need 2 entries, each holding a 3-bit frame number, so 2 pages * 3 bits = 6 bits. (If instead the logical address space were as large as the physical address space, the table would need 8 entries, or 24 bits.)

Tuesday, September 30th, 1997

Q: In class, we briefly mentioned three algorithms for allocating a free memory "hole" to an incoming process:

• First-fit
• Best-fit
• Worst-fit

Write pseudo-code for an implementation of the "best-fit" algorithm. Analyze the best-case and worst-case running times. What assumption could you make about the list of free holes that might improve the running time of your algorithm? What would then be the best-case running time?

A: Best-fit pseudo-code:

```
best = NULL;
q = head of list of free holes;
while (q != NULL) {
    if (q->size >= memory requested && (best == NULL || q->size < best->size)) {
        best = q;
    }
    q = q->next;    /* advance, or the loop never terminates */
}

if (best == NULL) {
    print "Cannot honor memory request"
} else {
    remove best from list
    give best to process
}
```

The best-case and worst-case running times are the same: O(n). You must search the entire list before you can be sure you have found the "best" hole.

If you assume the list is kept sorted from smallest to largest hole, the first hole that is large enough to satisfy the request is automatically the best fit, so the scan can stop there. The best-case running time then drops to O(1): the very first hole is large enough.

Friday, September 12th, 1997

Q: Consider the below code (presented in class) for a possible solution to the Dining Philosophers problem:

```
while (1) {
    /* think */
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
}
```

The above solution may result in "deadlock". Provide a solution that does not result in deadlock (if your solution results in "starvation", that is ok).

A: This solution has the philosophers check both forks before grabbing them.

```
void philosopher(int i) {
to be put here soon
}
```

Tuesday, September 9th, 1997

Q: Assume we have a producer and consumer that share the following variables:

```
item i;
item buffer[MAX];
int in, out;
```
The producer code looks like:
```
while (1) {
    ...
    produce an item in i
    ...
    while (((in + 1) % MAX) == out) { /* no-op */ }
    buffer[in] = i;
    in = (in + 1) % MAX;
}
```
The consumer code looks like:
```
while (1) {
    while (in == out) { /* no-op */ }
    i = buffer[out];
    out = (out + 1) % MAX;
    ...
    consume the item in i
    ...
}
```

Describe a race condition that will cause the above producer-consumer pair to fail. How many of the MAX buffer spots can be used at a time?

A: The above solution is correct and there are no race conditions that will cause it to fail. However, only MAX-1 buffer spots can be used at a time: one slot must always stay empty so that `in == out` unambiguously means "empty". If all MAX slots could be filled, a full buffer and an empty buffer would both satisfy `in == out`.

Monday, September 8th, 1997

Q: A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given N processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of N.

Hint: you might first think about 2 processes, then 3 processes, then 4 processes, and then try to generalize to N processes.

A: `N!`

Pronounced "N factorial" which is: N * (N-1) * (N-2) * ... * 2 * 1

Friday, September 5th, 1997

Q: Consider a round robin scheduler. Give an argument in favor of a small quantum. Give an argument in favor of a large quantum. Most round robin schedulers use a fixed-size quantum. Can you think of a reason that a system should allow the quantum to change?

A: If there is a mix of very short jobs and a few very long jobs, a small quantum will give a shorter average turn-around time. Moreover, a very small quantum will give almost `1/N` of the CPU to each process, where `N` is the number of processes. This is called "processor sharing."

If the jobs all have equal length, a large quantum will give a faster average turn-around time. Moreover, overall, there will be fewer context switches with a large quantum, hence less inefficiency.

A time-slice quantum is chosen beforehand, based on the predicted mix of processes that the system will run. Say a small quantum is chosen because many interactive users are expected. If, however, the system ends up having only one user who runs many "background" programs, that user may be better served by a larger quantum. The reverse may be true if the expected job mix is a single, non-interactive user. If we can change the length of the time quantum, we can better respond to a particular mix of processes.

In practice, it is difficult to determine such a mix and change the quantum of a running system. A rule of thumb is to try and have 80% of the jobs finish their CPU burst within one quantum.

Thursday, September 4th, 1997

Q: Typical context switch times are between 1 and 1000 microseconds. As you can tell, there is a great variance between fast and slow context switch times. How might you go about measuring the time of a context switch on the CCC Unix systems? How might you measure the context switch time if you had your own private Unix system?

A: There are probably several ways to tackle this problem. Here is one:

Imagine a dedicated machine where you are the only user (the second half of the question). If you ran the only process, you could have it do:

(pseudo-code follows)

```
begin

start = gettimeofday()
start_cs = getrusage()  /* cs short for context switch */
for (i = 1 to some big number)
    ; /* spin: do nothing */
end_cs = getrusage()
end = gettimeofday()

total = end - start
total_cs = end_cs - start_cs

/* at this point, we have time to run plus context switches */
end
```
(Note, there are many ways to get the time of execution of the program. Using `gettimeofday()` is only one. We could have used `getrusage()`, `/bin/time` ...)

Now, we run two copies of the above program. The time to run both copies will be twice as large PLUS the sum of the context switch times! Divide the extra time by the number of switches and voila, we have it! If the granularity of the clock is too coarse, we will need to make sure we have many switches in order to get our measurement.

On the CCC machines, the difficulty comes in getting a system "quiet" enough where we can be sure there were no other context switches to other users' processes while we ran our program. The more users there are on the system, the harder it is to guarantee there were no other switches (as you indicated). We may be able to run our measurement experiment in the early morning hours (when we hope the system is quieter). Having our own system is definitely much easier.

By the way, context switch times are nearly identical from switch to switch, because a fixed amount of work is done each time. This allows us to take a bunch of switches and a bunch of time and divide to get the time for 1 switch.