
cs504, S99/00 Class 4

Recurrence Relations

Recurrence relations describe how one element of a sequence can be computed from one or more of the previous elements in the sequence. There is no single way to solve recurrence relations, but they do come in families - in that respect they are similar to their analogue in continuous mathematics, the differential equation. It is worthwhile classifying a recurrence relation, since that often gives clues on how to solve it. See the discussion in the Recurrence Relation Notes on the Notes page. While you are there, look at the examples of how to form recurrence relations from word problems and algorithms.

The rest of this class comprises examples of how to solve recurrence relations.

Look it up

Many of the recurrence relations you will run into have already been solved. N. J. A. Sloane has compiled an extensive on-line database of integer sequences.

Recurrence Relations and Summations

Every summation has an implicit recurrence relation, which can be found by applying the first step of the perturbation method for solving summations:

S(N) = a(0) + a(1) + ... + a(N)  -->  S(N) - S(N-1) = a(N);   S(0) = a(0)

Thus any linear, first-order recurrence relation with equal coefficients for the two terms can be solved - if the underlying summation can be solved.

Tables of solved series are contained in I.S. Gradshteyn and I.M. Ryzhik (Alan Jeffrey, ed.), Table of Integrals, Series, and Products, 5th Ed (1993, Academic Press) and other published resources. A CD-ROM version of the book is also available.

Thus the solution to the above recurrence relation is:

S(N) = S(0) + a(1) + a(2) + ... + a(N)

Notice that an initial condition, S(0) = a(0), has been added to the solution. The recurrence relation is first-order, so only one initial condition is required.

An Example - Hashing with Random Probing

Hashing algorithms are used to store data where the set of possible keys (the values used for accessing the data) exceeds the size of the data set. For example, WPI student IDs are nine digits - 123-45-6789. CS504 has fewer than 100 registered students. The simplest way to store the records is to make an array of size 10^9 - the number of possible student IDs - and simply use the ID to look up the data in the database.

This is somewhat inefficient. Suppose we store the number by the last two digits. Then only 100 bins are available so the data ought to fit:

The problem, of course, is that two or more of the students may have the same last two digits in their IDs. We want to calculate the expected value of C, the number of comparisons necessary to store the next data item in the data structure, assuming that N of the M bins are already filled. Clearly C is at least one - we look in the bin numbered by the last two digits of the ID to see if it is already occupied. If the bin is already occupied, we use a hashing function which provides a repeatable method for selecting alternative bins. There are many possible hashing functions - such as looking in subsequent bins until an empty one is found - but that is not what we are studying in this problem. We merely want to calculate the average number of comparisons necessary to find an empty bin.

The probability that K bins have been searched but an empty one still has not been found is:

(N/M)^K

Thus the probability that an empty bin is first found on the K-th comparison is:

(N/M)^(K-1) * (1 - N/M)

The expected value is:

E[C] = sum over K of K * (N/M)^(K-1) * (1 - N/M)

To make it easier to solve the summation, increase the upper limit to infinity:

E[C] = (1 - N/M) * sum from K=1 to infinity of K * (N/M)^(K-1)

We can calculate the summation by starting with the geometric series:

sum from K=0 to infinity of alpha^K = 1/(1 - alpha);   |alpha| < 1

Except for the restriction on its absolute value, alpha can be anything. Therefore we can differentiate both sides of the last equation with respect to alpha:

sum from K=1 to infinity of K * alpha^(K-1) = 1/(1 - alpha)^2
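As a numeric sanity check (a Python sketch, not part of the original notes), partial sums of the differentiated geometric series do converge to 1/(1-alpha)^2:

```python
# Numeric check that sum_{K>=1} K*alpha^(K-1) equals 1/(1-alpha)^2
# for |alpha| < 1, via a truncated partial sum.
def diff_geometric_partial_sum(alpha, terms=10_000):
    """Partial sum of K * alpha**(K-1) for K = 1..terms."""
    return sum(k * alpha ** (k - 1) for k in range(1, terms + 1))

alpha = 0.5
print(diff_geometric_partial_sum(alpha), 1.0 / (1.0 - alpha) ** 2)
```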

Thus the expected value of the number of comparisons, with alpha = N/M, is:

E[C] = (1 - alpha) / (1 - alpha)^2 = 1/(1 - alpha) = M/(M - N)

Notice that when the data structure is empty (N=0), one comparison is required; when the data structure is half full (N=M/2), an average of two comparisons is required; when 9/10 of the bins are filled (N=0.9M), an average of ten comparisons is required; and when the data structure is full (N=M), an infinite number of comparisons is required because none of them could possibly succeed.
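This result is easy to check by simulation. The sketch below is hypothetical code (the bin count M, fill count N, and trial count are arbitrary choices); it models each probe as an independent, uniformly random bin, and the measured average approaches M/(M-N):

```python
import random

def average_probes(M, N, trials=20_000, seed=1):
    """Average number of probes to find an empty bin when N of the M
    bins are filled and each probe picks a bin uniformly at random
    (with replacement) - a geometric distribution with p = 1 - N/M."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        probes = 1
        while rng.random() < N / M:   # this probe hit a filled bin
            probes += 1
        total += probes
    return total / trials

M, N = 100, 50
print(average_probes(M, N), M / (M - N))   # simulated average vs M/(M-N) = 2
```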

Linear Fixed-order Recurrence Relation with Constant Coefficients

The Notes page contains information on solving these recurrence relations. The technique there works for recurrence relations where the "forcing function" is polynomial in N, exponential in N, or a product of the two.

Summing Factors

This technique is the analogue of the integrating-factor technique for solving differential equations. The technique is applicable to any linear, first-order recurrence relation:

A(N) = B(N)*A(N-1) + C(N)

The two functions B(N) and C(N) can be almost any functions of N. The way we remove the B(N), so that this becomes a simple recurrence relation, is to make a substitution of variables:

X(N) = A(N) / (B(1)*B(2)*...*B(N));   X(0) = A(0)

With this substitution, the recurrence relation becomes:

X(N) = X(N-1) + C(N) / (B(1)*B(2)*...*B(N))

Now you see the "almost" mentioned above: if any of the B(N) is zero, the substitution divides by zero and the solution cannot be found. We can find the solution by performing a sequence of substitutions:

X(N) = X(N-1) + C(N)/(B(1)*...*B(N))
     = X(N-2) + C(N-1)/(B(1)*...*B(N-1)) + C(N)/(B(1)*...*B(N))
     = ...

When we take this sequence all the way back to the initial condition, the solution is:

X(N) = A(0) + sum from j=1 to N of C(j) / (B(1)*B(2)*...*B(j))

This is exactly the result we obtain from using the summing technique near the top of this page.

If we undo the substitution of variables, the solution is:

A(N) = B(1)*B(2)*...*B(N) * [ A(0) + sum from j=1 to N of C(j) / (B(1)*B(2)*...*B(j)) ]
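The closed form can be checked numerically. In this Python sketch (my own illustration; the sample coefficients B(N)=2, C(N)=1 are an arbitrary choice, matching the Tower of Hanoi example below), direct iteration of A(N) = B(N)*A(N-1) + C(N) agrees with the summing-factor formula:

```python
from math import prod

def iterate(B, C, A0, N):
    """Compute A(N) directly from A(N) = B(N)*A(N-1) + C(N)."""
    A = A0
    for n in range(1, N + 1):
        A = B(n) * A + C(n)
    return A

def summing_factor(B, C, A0, N):
    """Closed form: A(N) = prod(B(1..N)) * (A0 + sum_j C(j)/prod(B(1..j)))."""
    P = prod(B(k) for k in range(1, N + 1))
    S = sum(C(j) / prod(B(k) for k in range(1, j + 1)) for j in range(1, N + 1))
    return P * (A0 + S)

B = lambda n: 2       # sample coefficients - the Tower of Hanoi case
C = lambda n: 1
print(iterate(B, C, 0, 10), summing_factor(B, C, 0, 10))   # both 1023
```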

An Example - Tower of Hanoi

A key to success in analyzing algorithms is incremental thinking. Rather than trying to analyze an algorithm globally, we try to analyze the incremental cost of implementing the algorithm. Then we use discrete mathematical techniques to solve the global problem.

The tower of Hanoi is a classical computer science problem. There are three rods and one of them contains a stack of rings with the smallest on top and the largest on bottom

Rings can be moved one at a time, and a ring may only be placed on top of a larger ring. How many steps are required to move a stack of N rings from one rod to another? Imagine a sequence of solutions S(N), which counts the number of steps necessary to move N rings. The first two values are S(1) = 1 and S(2) = 3.

As N grows, it becomes increasingly difficult to imagine, and therefore to count, the number of steps. However, we can note that moving a stack of N rings has three stages: move the top N-1 rings to the third rod, move the N-th ring to the second rod, then move the N-1 smaller rings to sit on top of the N-th ring.

The number of steps to move N rings satisfies this recurrence relation:

S(N) = 2*S(N-1) + 1

Later in this course we will find that the solution to this recurrence relation is the sequence:

S(N) = 2^N - 1:   1, 3, 7, 15, 31, ...

We showed that the number of steps necessary to move a stack of N rings in the Tower of Hanoi problem satisfies:

S(N) = 2*S(N-1) + 1

We can fit this into the solution above by using these definitions, which include the fact that no steps are required if there are no rings:

B(N) = 2;   C(N) = 1;   S(0) = 0

The solution is:

S(N) = 2^N * (0 + sum from j=1 to N of 1/2^j) = 2^N * (1 - 2^(-N)) = 2^N - 1
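The three-stage decomposition translates directly into code. Here is a minimal Python sketch (not part of the original notes) that generates the moves and confirms the counts 1, 3, 7, 15, ... = 2^N - 1:

```python
def hanoi_moves(n, src="A", dst="B", spare="C", moves=None):
    """Recursively move n rings from src to dst, recording each move."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, src, spare, dst, moves)  # stage 1: park the top n-1 rings
    moves.append((src, dst))                    # stage 2: move the largest ring
    hanoi_moves(n - 1, spare, dst, src, moves)  # stage 3: restack the n-1 rings
    return moves

print([len(hanoi_moves(n)) for n in range(1, 6)])   # [1, 3, 7, 15, 31]
```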

Substitution of Variables

Let's find the solution to this recurrence relation:

The summing factor method can be used to solve this recurrence relation (but not easily). Or, we can use a trick. As with all tricks, it is not obvious. Multiply both sides by (N-1):

Now we can define a new sequence BN which is related to AN by this expression:

We substitute this into the original recurrence relation to obtain a new one for the sequence BN:

We also need to calculate the equivalent initial condition:

The solution can be calculated using the method in the Notes page or the summation method above:

When we undo the substitution, we obtain the final answer:

It is straightforward to see that this is, in fact, the correct solution to the original recurrence relation.

An Example - Binary Search

Assume we have an array with N sorted values in it. Assume that N is a power of 2, although that is not strictly necessary. How many comparisons are required to tell whether a number is in the array?

The first step is to look half way down the list. If the number is not found there, it is either above the half-way point (less than the number there) or below it. Now look at the half-way point of the part of the list which could contain the number. If this algorithm is applied recursively, a recurrence relation results for the number of comparisons:

C(N) = C(N/2) + 1;   C(1) = 1

The initial condition results from noting that a list of size one requires exactly one comparison - to see whether the number in the list is the number we are seeking.

Again, let's try a substitution of variables:

N = 2^K   -->   K = lg(N)

The recurrence relation becomes:

C(2^K) = C(2^(K-1)) + 1;   C(2^0) = 1

The problem with this sequence is that it only exists for powers of 2. There are gaps in the sequence, and that makes it hard to solve using any of the above techniques. If we can match this sequence, term-by-term, with another which has no gaps, then we can use the solution to the second sequence to find the solution to the first.

Create a sequence B(K) which is matched term-by-term with the first:

B(K) = C(2^K)   -->   B(K) - B(K-1) = 1;   B(0) = 1

The solution is:

B(K) = K + 1

When we undo the substitutions, we obtain the final answer:

C(N) = B(lg(N)) = lg(N) + 1
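A short Python sketch (my own; the comparison-counting convention - one comparison per halving step plus a final check of the remaining slot - is an assumption chosen to match the model above) shows the count coming out to lg(N) + 1 for N a power of two:

```python
def search_comparisons(arr, target):
    """Binary search over a sorted array, counting one comparison per
    halving step plus a final equality check of the remaining slot."""
    lo, hi = 0, len(arr)
    comparisons = 0
    while hi - lo > 1:
        comparisons += 1
        mid = (lo + hi) // 2
        if target < arr[mid]:
            hi = mid
        else:
            lo = mid
    comparisons += 1              # check the single remaining slot
    return comparisons, arr[lo] == target

arr = list(range(0, 32, 2))        # 16 sorted values
print(search_comparisons(arr, 8))  # (5, True): lg(16) + 1 = 5 comparisons
```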

Multiplication of N-Digit Integers

Integer multiplication can be performed using a divide and conquer technique. Decimal notation is just a shorthand for a sum of powers of ten:

3754 = 3*10^3 + 7*10^2 + 5*10^1 + 4*10^0

Note that the largest exponent, 3, is one less than the number of digits, N=4. When we multiply two of these N-digit numbers,

Figure showing the American way to multiply two 4-digit numbers: 2345*6789 = 15920205

it takes O(N^2) single-digit multiplications and O(N^2) single-digit additions to find the product. Suppose we break the problem into four smaller problems, each involving the multiplication of numbers with N/2 digits:

2345 * 6789 = (23*67)*10^4 + (23*89 + 45*67)*10^2 + (45*89)*10^0

This algorithm can be used recursively to reduce multiplication of two integers of any length to a sequence of single-digit multiplications. This certainly makes the algorithm simpler, but it does not reduce the computational complexity. At each stage in the recursion, the number of multiplications increases by a factor of four and the size of the numbers being multiplied decreases by a factor of two - so the number of single-digit multiplications in each of the multiplications goes down by a factor of four. These effects cancel and there is no net reduction in the number of single-digit multiplications. It is still of order O(N^2).

Note that the multiplications by factors of ten, shown above, are easy to achieve. Just add the appropriate numbers of zeros to the ends of the intermediate products.

A Multiplication Algorithm which requires Fewer Single-Digit Multiplications

The product of two N-digit integers can be written as the sum of four products of N/2-digit integers:

xy = (a*10^(N/2) + b)*(c*10^(N/2) + d) = ac*10^N + (ad + bc)*10^(N/2) + bd

Note, if the numbers are represented in a number system other than decimal, then that base would be substituted for the 10. Note, again, that four multiplications are required. Note, however, this identity:

(ad + bc) = (a+b)(c+d) - ac - bd

This shows that the central product can be replaced by a single product plus two additions of N/2-digit numbers and two subtractions of N-digit numbers. Note also that ac and bd have to be calculated anyway to produce the outer two products when multiplying x times y. Thus we can form three products of N/2-digit numbers and combine them to produce the product of two N-digit numbers:

P1 = ac;   P2 = bd;   P3 = (a+b)(c+d);   xy = (a*10^(N/2) + b)*(c*10^(N/2) + d) = P1*10^N + (P3 - P1 - P2)*10^(N/2) + P2


Example showing how to multiply two 4-digit numbers

But, this is a recursive algorithm so we can apply it to each of the products, too. Here is the first of the three subproducts.

Partial multiplication of leading pair of 2-digit numbers from last equation

As in this case, the recursion continues until only single-digit numbers are multiplied.
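The three-product scheme is easy to express recursively. This Python sketch (an illustration, not the notes' own code) implements it in base 10 and reproduces the worked example:

```python
def karatsuba(x, y):
    """Multiply non-negative integers using three recursive products,
    following the divide-and-conquer scheme above (base 10)."""
    if x < 10 or y < 10:                  # single-digit base case
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)          # x = a*10^half + b
    c, d = divmod(y, 10 ** half)          # y = c*10^half + d
    p1 = karatsuba(a, c)
    p2 = karatsuba(b, d)
    p3 = karatsuba(a + b, c + d)          # equals p1 + p2 + (ad + bc)
    return p1 * 10 ** (2 * half) + (p3 - p1 - p2) * 10 ** half + p2

print(karatsuba(2345, 6789))  # 15920205, matching the worked example above
```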

Analysis of the Multiplication Algorithm

We expect the algorithm shown above to require fewer single-digit multiplications since only three products are formed at each recursive step instead of four. There are also three additions of N-digit numbers. Here is an analysis of the computational complexity.

Let M(N) represent the number of single-digit multiplications required to multiply two N-digit integers. Then M(N) satisfies this recurrence relation:

M(N) - 3*M(N/2) = 0;   M(1) = 1

This can be solved using the techniques discussed above.

N = 2^K;   M(2^K) - 3*M(2^(K-1)) = 0;   S(K) = M(2^K);   S(K) - 3*S(K-1) = 0  -->  S(K) = A*3^K;   S(0) = M(1) = 1  -->  A = 1  -->  S(K) = 3^K;   M(2^K) = 3^K  -->  M(N) = 3^(lg N)

Use this theorem:

lg(a)*lg(b) = lg(b)*lg(a) - multiplication is commutative;
lg(b^lg(a)) = lg(a^lg(b)) - fundamental theorem of logarithms;
b^lg(a) = a^lg(b) - exponentiation preserves equality

to rewrite the solution to the recurrence relation.

M(N) = 3^lg(N) = N^lg(3);   lg(3) = ln(3)/ln(2) = log(3)/log(2) = 1.584963...;   M(N) = N^1.584963...

This algorithm is of order O(N^1.584963...). For large values of N, this is substantially less than the original algorithm, which was of order O(N^2).
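The recurrence for the multiplication count can also be tabulated directly. A small Python sketch (my own) confirms that M(N) = 3^lg(N) = N^lg(3) for N a power of two:

```python
from math import log2

def single_digit_mults(N):
    """M(N) = 3*M(N/2), M(1) = 1, for N a power of two."""
    return 1 if N == 1 else 3 * single_digit_mults(N // 2)

# Compare the recurrence against the closed form N^lg(3).
for k in range(5):
    N = 2 ** k
    print(N, single_digit_mults(N), round(N ** log2(3)))
```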

Merge Sort

Suppose we have two already-sorted sub-arrays of size N/2. We can combine them by comparing the top numbers, one pair at a time, and adding the smaller to a new array of size N.

Figure showing two stacks. The top-most elements - the ones which would be popped next - are being compared to decide which is smaller and should be pushed on to a third stack.

This operation is inherently recursive. An array of size N is split into two arrays of size N/2, which are recursively halved again and again. When the sub array sizes are one, a simple comparison between two numbers is made. There are many details involved in writing code for this algorithm - how to dynamically allocate and deallocate all of the subarrays, how to handle arrays whose sizes are not powers of two, etc. We will ignore these details and analyze the simple case when N is a power of two.

Let's calculate the number of binary comparisons required to merge sort an array of size N. Note, right away, that the best and worst cases are nearly the same: the arrangement of the numbers in the array has only a small effect on the number of comparisons each merge performs. So our calculation, based on the worst case of each merge, will be a good estimate for best, worst, and average cases.

Notice that N-1 comparisons are required, in the worst case, to merge the two sub-arrays of size N/2, since the last number doesn't have to be compared - it automatically goes at the end of the merged array. This is in addition to the number of comparisons required to sort the two arrays of size N/2. The recurrence relation for the number of comparisons is:

C(N) = 2*C(N/2) + N - 1;  C(1) = 0

The initial condition states that an array with only one number requires no comparisons. This recurrence relation can be solved using the above methods.

N = 2^K --> K = lg(N);   C(2^K) - 2*C(2^(K-1)) = 2^K - 1;   B(K) defined as C(2^K);   B(K) - 2*B(K-1) = 2^K - 1;   x - 2 = 0 --> x = 2 --> B(K; homogeneous) = A*2^K;   B(K; particular) = D*K*2^K + E;   C(1) = 0 --> C(2^0) = 0 --> B(0) = 0

The factor of K in the particular solution is needed because of the repeated root - 2^K is already part of the homogeneous solution. Solve for D and E by substituting the particular solution back into the recurrence relation:

D*K*2^K + E - 2*(D*(K-1)*2^(K-1) + E) = 2^K -1;  2^K*(D*K - D*K + D) + (E - 2*E)  = 2^K - 1;  D = 1;  E = 1;  B(K; particular) = K*2^K + 1

Now combine the homogeneous and particular solutions and use the initial condition to eliminate A.

B(K) = (A + K)*2^K + 1;  B(0) = A + 1 = 0  --> A = -1;  C(2^K) = (K-1)*2^K + 1;  C(N) = N*(lg(N) - 1) + 1 = O(N*lg(N))

Merge sorting is thus guaranteed to be of order N*lg(N) for all input data.
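The recurrence and its closed form can be compared directly. This Python sketch (my own illustration) tabulates C(N) = 2*C(N/2) + N - 1 against N*(lg(N) - 1) + 1:

```python
from math import log2

def merge_comparisons(N):
    """Worst-case comparison count: C(N) = 2*C(N/2) + N - 1, C(1) = 0."""
    return 0 if N == 1 else 2 * merge_comparisons(N // 2) + N - 1

# The recurrence and the closed form agree for every power of two.
for k in range(1, 6):
    N = 2 ** k
    print(N, merge_comparisons(N), int(N * (log2(N) - 1) + 1))
```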

More on Binomial Coefficients

Here are a few more identities involving the binomial coefficient - or choose function.

The second argument k must be an integer. When the first argument N is also an integer, we can use the definition based on factorials:

C(N, k) = N! / (k! * (N-k)!)

By applying the recursive definition of the factorial:

N! = N * (N-1)!

we can derive the absorption/extraction identity:

C(N, k) = (N/k) * C(N-1, k-1)

This identity is useful for incrementing or decrementing the function's arguments:

We used Pascal's triangle and combinatorial arguments to prove this identity:

C(N, k) = C(N-1, k) + C(N-1, k-1)

We can apply this identity recursively to decompose any specific binomial coefficient. Here is an example:

This can be generalized:

sum from k=0 to N of C(r+k, k) = C(r+N+1, N)

The above example corresponds to N=3, r=1. Later we will show that this identity applies even when r is not an integer.

Here are two other useful identities:

The second identity can be proven by using the factorial definition of the function.
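The integer-argument identities in this section - absorption/extraction and Pascal's rule - can be spot-checked with Python's math.comb (a quick sketch, not part of the original notes):

```python
from math import comb

# Absorption/extraction: k * C(N, k) == N * C(N-1, k-1)
# Pascal's rule:         C(N, k) == C(N-1, k) + C(N-1, k-1)
for N in range(1, 12):
    for k in range(1, N + 1):
        assert k * comb(N, k) == N * comb(N - 1, k - 1)
        assert comb(N, k) == comb(N - 1, k) + comb(N - 1, k - 1)

print("absorption and Pascal identities verified for N < 12")
```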

Newton's Method

As an example of non-integral arguments in binomial coefficients, look at Newton's method for calculating square roots.

The binomial expansion:

(1 + z)^r = sum from k=0 to infinity of C(r, k) * z^k

applies for all values of r, even non-integral. Note that the limits on the summation are infinite. We rely on the binomial coefficient having the value zero outside of the appropriate range as a way of limiting the number of terms.

To find the square root of a number (natural, rational, or real), factor out any perfect squares so the problem reduces to calculating the square root of a number of this form:

1 + z,   with |z| < 1

The first term is one - it's always one when the second argument is zero. To calculate the other terms, we need an alternate definition of the binomial coefficient - one which works even for non-integral first arguments:

C(r, k) = r*(r-1)*(r-2)*...*(r-k+1) / k!

The factors in the numerator decrement in sequence by one, and there are k of them. Here are the first few coefficients for r = 1/2:

C(1/2, 0) = 1;   C(1/2, 1) = 1/2;   C(1/2, 2) = -1/8;   C(1/2, 3) = 1/16

Thus the square root is:

sqrt(1 + z) = 1 + z/2 - z^2/8 + z^3/16 - ...

For example,

Three terms provide an answer accurate to about one part in 10,000.
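The series is easy to evaluate numerically. This Python sketch (my own; the generalized-coefficient helper and the choice z = 0.1 are illustrative) shows three terms matching the true square root to about one part in 10,000:

```python
from math import prod, sqrt

def binom(r, k):
    """Generalized binomial coefficient r*(r-1)*...*(r-k+1) / k!,
    valid even for non-integral r."""
    return prod(r - i for i in range(k)) / prod(range(1, k + 1))

def sqrt_series(z, terms):
    """Partial sum of the binomial expansion of (1 + z)**0.5."""
    return sum(binom(0.5, k) * z ** k for k in range(terms))

z = 0.1
print(sqrt_series(z, 3), sqrt(1 + z))   # three-term estimate vs true value
```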


Contents ©1994-2000, Micha Hofri, Michael Gennert, Stanley Selkow, and Norman Wittels
Updated 28Mar00 by NW