## CS 2005, B Term 1999 Data Structures and Programming Techniques Test 3 Solutions

Instructions.   Read each problem carefully before answering. Circle the selected answer(s), or write your solutions in ink in the spaces provided, as indicated below for each problem. Write neatly. Each problem is worth 60 points. All parts of each problem have equal point value. Good luck!


1. a) Fill in the five blanks below so that the resulting tree on the right is the right rotation of the tree rooted at 4 shown on the left.

```
    4                   2
   / \                 / \
  2   5               1   4
 / \                     / \
1   3                   3   5
```

b) Circle the correct response in each case:

• All AVL trees satisfy:
```
--> height-balance            search time is O(1) in the worst case
```

• The worst-case running time for binary search on an input array of size n is:
```
    O(2n)                     O(log2 n) <--
```

• The average search time for a hash table as a function of the load factor is:
```
--> an increasing function    a decreasing function
```

2. The simplest linear probing version of hashing discussed in this course is based on the following hash function:
```
template <class RecordType>
size_t Table<RecordType>::hash(int key) const
{
    return (key % CAPACITY);
}
```
and resolves collisions by using the following next_index() function:
```
template <class RecordType>
size_t Table<RecordType>::next_index(size_t index) const
{
    return ((index+1) % CAPACITY);
}
```
Assume that CAPACITY=19 and that a record with key value 40 is to be inserted into the storage array using the hashing algorithm referred to above. Fill in the blanks below:

a) The array position in which the algorithm initially attempts to store the new record is: 2

b) If array position 18 has been tried but a collision has occurred, the array position tried next is: 0

c) Give the maximum number of array positions that will be tried in the worst case: 19


3. Circle the best answer in each case.

(a) For large sizes of the input, it is better (faster) to use an algorithm whose running time is:

```
    O(n^2)               O(n log n) <--
```

(b) The quicksort algorithm as discussed in this course is:

```
--> recursive            iterative (non-recursive)
```

(c) The average search time for a hash table with load factor 2/3 using open addressing with simple linear probing is (recall that the formula for this gives T as the average of 1 and 1/(1-a)):

```
    3/2                  2 <--
```

(d) The worst-case running time for mergesort on an input array of size n is:

```
    O(n^2)               O(n log n) <--
```

(e) Of the following algorithms, the one that does not require a dynamic array for temporary storage is:

```
--> quicksort            mergesort
```

(f) For sparse graphs (with few edges relative to the number of pairs of vertices), the better choice of representation is:

```
    adjacency matrix     edge list <--
```

4. Fill in the following outline to obtain a complete implementation of the mergesort function as discussed in this course. Assume that a correct implementation of the merge function is available.
```
void mergesort(int data[ ], size_t n)
// Precondition: data is an array with at least n components.
// Postcondition: The elements of data have been rearranged so
// that data[0] <= data[1] <= ... <= data[n-1].
{
    size_t n1; // Size of the first subarray
    size_t n2; // Size of the second subarray

    if (n > 1)
    {
        // Compute sizes of the subarrays
        n1 = n / 2;
        n2 = n - n1;

        // Sort from data[0] through data[n1-1]
        mergesort(data, n1);

        // Sort from data[n1] to the end
        mergesort((data + n1), n2);

        // Merge the two sorted halves
        merge(data, n1, n2);
    }
}
```

5. Consider the graph shown here, which happens to be a tree:
```
       0
     /   \
    1     2
   / \   / \
  3   4 5   6
```

a) Give the depth-first listing of the graph.

```
Answer: 0, 1, 3, 4, 2, 5, 6
```

b) Give the breadth-first listing of the graph.

```
Answer: 0, 1, 2, 3, 4, 5, 6
```