We briefly discussed the problem of matrix multiplication. See Section 7.6 in the text.
Note that matrix terminology varies. The terminology used here comes from an excellent introductory matrix book: A. J. Pettofrezzo, Matrices and Transformations (1966, Dover), ISBN 0-486-63634-8.
A matrix is a collection of numbers (or other objects) arranged in rows and columns. The usual notation for a matrix element is two subscripts giving the row and column number, in that order; here the element in row r and column c is written a(r, c).
A square matrix is one with the same number of rows and columns. A is a square matrix.
Each matrix element has a comatrix, which is the submatrix formed by eliminating the element's row and column. Here are two comatrices of the matrix A.
A matrix with only one row and column has no comatrix (or the comatrix is empty). The comatrix of a square matrix is also square, but its order (the number of rows and columns) is one less than that of the original matrix.
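As a small sketch (the function name and the 0-based indexing are my own choices, not from the text), the comatrix can be computed by copying every row and column except the ones being eliminated:

```python
# A minimal sketch, assuming the matrix is stored as a list of lists and
# indexed from 0 in code (the text numbers rows and columns from 1).
def comatrix(a, r, c):
    """Return the comatrix of element (r, c): a copy of the matrix with
    row r and column c eliminated."""
    return [[a[i][j] for j in range(len(a[i])) if j != c]
            for i in range(len(a)) if i != r]

# Example: eliminating row 0 and column 0 of a 3x3 matrix leaves a 2x2 comatrix.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(comatrix(A, 0, 0))    # [[5, 6], [8, 9]]
```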
There is an alternating sign associated with some matrix operations. It is calculated using the formula

    sign(r, c) = (-1)^(r + c)

where r and c are the row and column numbers. This produces a sign which is positive in the upper left corner of the matrix, and the sign changes with each step along a row or column. This can be represented by a sign matrix (shown here for a matrix of order 4):

    + - + -
    - + - +
    + - + -
    - + - +
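A matching sketch of the alternating sign (again with a function name of my own) prints exactly this pattern:

```python
# A small sketch using the text's 1-based row and column numbers.
def sign(r, c):
    return (-1) ** (r + c)

# Print the sign matrix for order 4: '+' in the upper left corner, with the
# sign alternating along every row and column.
for r in range(1, 5):
    print(' '.join('+' if sign(r, c) > 0 else '-' for c in range(1, 5)))
```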
The determinant is a single number which characterizes a square matrix. It has a recursive definition. If a matrix is of order one (a single number), its determinant is its value:

    |a(1,1)| = a(1,1)

Note the use of the enclosing vertical lines | | to denote a determinant. The minor of an element is the determinant of its comatrix:

    M(r, c) = the determinant of the comatrix of a(r, c)

The cofactor of an element is its minor times the sign from the sign matrix:

    C(r, c) = (-1)^(r + c) * M(r, c)
The determinant of a matrix of order n > 1 can be written as a sum of determinants of order n-1. Specifically, it is the sum of the products of all of the elements in any one row or column of the matrix times their respective cofactors:

    det(A) = a(1,1)*C(1,1) + a(1,2)*C(1,2) + ... + a(1,n)*C(1,n)    (first row)
    det(A) = a(3,1)*C(3,1) + a(3,2)*C(3,2) + ... + a(3,n)*C(3,n)    (third row)
    det(A) = a(1,2)*C(1,2) + a(2,2)*C(2,2) + ... + a(n,2)*C(n,2)    (second column)

These examples show the sum over the elements in the first row, the third row, and the second column, respectively. There are 2n such sums for a matrix of order n, and they all give the same value for the determinant. In practice, a program to calculate a determinant would pick a specific row or column, usually the first, for any matrix.
This algorithm is inherently recursive.
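Here is a hedged Python sketch of the whole recursive algorithm, expanding along the first row as described above. The function names (comatrix, minor, cofactor, det) are mine, not from the text, and the code uses 0-based indices even though the discussion above numbers rows and columns from 1:

```python
# A sketch of the recursive determinant algorithm, expanding along the first
# row. Indices are 0-based, which leaves the sign (-1)**(r + c) unchanged
# because the parity of r + c is the same as with 1-based numbering.
def comatrix(a, r, c):
    """The submatrix of a with row r and column c eliminated."""
    return [[a[i][j] for j in range(len(a[i])) if j != c]
            for i in range(len(a)) if i != r]

def minor(a, r, c):
    """The minor of element (r, c): the determinant of its comatrix."""
    return det(comatrix(a, r, c))

def cofactor(a, r, c):
    """The cofactor of element (r, c): its minor times the alternating sign."""
    return (-1) ** (r + c) * minor(a, r, c)

def det(a):
    """Determinant of a square matrix, by the recursive definition."""
    if len(a) == 1:                       # order one: the determinant is the value
        return a[0][0]
    # Expand along the first row; any row or column would give the same value.
    return sum(a[0][c] * cofactor(a, 0, c) for c in range(len(a)))

print(det([[1, 2],
           [3, 4]]))                      # 1*4 - 2*3 = -2
print(det([[2, 0, 1],
           [1, 3, 2],
           [1, 1, 1]]))                   # 0
```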
We used the determinant algorithm as an example of the summing factor method for solving recurrence relations, as discussed in the recurrence relation notes.
The algorithm for calculating the determinant of a matrix of order n forms the sum of n terms, each of which requires the calculation of the determinant of a matrix of order n-1. This leads to the recurrence relation for the "time" the algorithm takes:

    T(n) = n*T(n-1) + n
The right side of this equation can represent the n summations or the n increments in the loop which calculates the cofactors, and so on. In fact, almost anything we include in the calculation will be of order n, and we chose this value to make the solution easier. If you decide to change the right side of the recurrence relation, your calculation of the time will change somewhat.
This recurrence relation is linear and first-order, but the coefficients are not constant. Use the summing factor method with these definitions:

    a_n = 1,   b_n = n,   c_n = n

in the standard form a_n*T(n) = b_n*T(n-1) + c_n.
The initial condition, T(0) = 0, states that no time is required to calculate the determinant of an empty matrix. We will need the product of the b_n:

    b_1 * b_2 * ... * b_n = 1 * 2 * ... * n = n!
Dividing the recurrence through by n! (the summing factor is 1/n!) gives

    T(n)/n! = T(n-1)/(n-1)! + 1/(n-1)!

and summing this telescoping relation from 1 up to n, starting from T(0) = 0, yields the solution:

    T(n) = n! * ( 1/0! + 1/1! + 1/2! + ... + 1/(n-1)! )
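A quick numerical check (my own, not part of the notes) confirms that this closed form matches the recurrence for small n:

```python
# The closed form agrees with the recurrence T(n) = n*T(n-1) + n, T(0) = 0.
from math import factorial

def T_recurrence(n):
    return 0 if n == 0 else n * T_recurrence(n - 1) + n

def T_closed(n):
    return factorial(n) * sum(1 / factorial(k) for k in range(n))

for n in range(7):
    print(n, T_recurrence(n), round(T_closed(n)))
# Both columns give 0, 1, 4, 15, 64, 325, 1956.
```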
We can find an approximate answer for the last summation by using the Taylor series identity for e, the base of the natural logarithms:

    e = 1/0! + 1/1! + 1/2! + 1/3! + ...

For large n, the sum in parentheses above is very close to e.
The time for the determinant algorithm is therefore approximately:

    T(n) ≈ e * n!

so the running time grows in proportion to n!.
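The approximation can also be seen numerically. In this small illustration (assuming the recurrence T(n) = n*T(n-1) + n above), the ratio T(n)/n! settles down to e almost immediately:

```python
# Illustration only: the ratio T(n)/n! converges to e, so T(n) is about e * n!.
from math import e, factorial

def T(n):
    return 0 if n == 0 else n * T(n - 1) + n

for n in (2, 4, 8, 12):
    print(n, T(n) / factorial(n))
print("e =", e)       # the ratios approach 2.718281828...
```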
Just how big is n factorial? The Stirling approximation is:

    n! ≈ sqrt(2*pi*n) * (n/e)^n
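As a rough illustration (the values of n are my own choices), the Stirling formula is already within about one percent of n! for quite small n:

```python
# Illustration: Stirling's approximation vs. the exact factorial.
from math import e, factorial, pi, sqrt

for n in (5, 10, 20):
    stirling = sqrt(2 * pi * n) * (n / e) ** n
    print(n, factorial(n), round(stirling), round(stirling / factorial(n), 4))
# The ratio in the last column approaches 1 (about 0.9917 already at n = 10).
```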
How big is that last quantity, (n/e)^n? Where does it fit into the hierarchy of orders we studied in Chapter 2 of the text?
Try comparing it with
and with
for values of A that are both larger than e and smaller than e.
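As one hedged illustration of such a comparison, here is how the (n/e)^n factor from the Stirling formula stacks up against exponentials of the form A^n, with A = 2 (smaller than e) and A = 10 (larger than e) chosen as sample values:

```python
# A hedged illustration; A = 2 and A = 10 are sample values of my own choosing.
from math import e

for n in (5, 10, 20, 40):
    print(n, (n / e) ** n, 2 ** n, 10 ** n)
# For any fixed A, (n/e)**n eventually outgrows A**n, because n/e keeps growing
# while A stays put -- so n! (which is at least (n/e)**n) is not O(A**n).
```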