Last modified: 03/25/97
Much of this presentation is derived from the Wavelets Course given at SIGGRAPH 96. This course was organized by Peter Schröder of the California Institute of Technology and Wim Sweldens of Lucent Technologies Bell Laboratories.
The 3D surface material comes both from the above course notes and from the paper by Certain et al. entitled "Interactive Multiresolution Surface Viewing," presented at SIGGRAPH 96.
Wavelets, with their roots in signal processing and harmonic analysis, have had a significant impact in several areas of computer science. They have led to a number of efficient and easy to implement algorithms for use in such fields as:
The Haar wavelet is probably the simplest wavelet to understand. Consider two numbers a and b, neighboring samples of a sequence. Since neighboring samples tend to be correlated, we'd like to take advantage of that correlation. We use a simple linear transform to replace a and b by their average s = (a + b)/2 and difference d = b - a. No information is lost, since a and b can be recovered as a = s - d/2 and b = s + d/2.
We can then apply the same average/difference transform to the coarser signal, splitting it into a yet coarser signal and more details. This transform can be repeated until there is but a single signal value and a whole lot of detail values. The remaining signal value is the average of the entire signal, and can be thought of as the DC, or zero-frequency, component of the original signal.
With some fancy footwork, this transform can be applied and reversed in place with no additional buffer space, and requires only O(N) time due to its hierarchical nature.
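The steps above can be sketched in Java (hypothetical code, not the applet's actual source; for clarity it uses a scratch buffer at each level rather than the fully in-place trick mentioned above):

```java
// A minimal sketch of the 1D Haar transform. Assumes the input length
// is a power of two.
public class Haar1D {
    // Forward transform: repeatedly replace each pair (a, b) by its
    // average s = (a + b)/2 and difference d = b - a, then recurse on
    // the averages. Coefficients come back as {overall average,
    // coarsest detail, ..., finest details}.
    public static double[] forward(double[] signal) {
        double[] data = signal.clone();
        for (int len = data.length; len > 1; len /= 2) {
            double[] tmp = new double[len];
            for (int i = 0; i < len / 2; i++) {
                double a = data[2 * i], b = data[2 * i + 1];
                tmp[i] = (a + b) / 2.0;       // averages (coarser signal)
                tmp[len / 2 + i] = b - a;     // differences (details)
            }
            System.arraycopy(tmp, 0, data, 0, len);
        }
        return data;
    }

    // Inverse transform: recover a = s - d/2 and b = s + d/2 at each level.
    public static double[] inverse(double[] coeffs) {
        double[] data = coeffs.clone();
        for (int len = 2; len <= data.length; len *= 2) {
            double[] tmp = new double[len];
            for (int i = 0; i < len / 2; i++) {
                double s = data[i], d = data[len / 2 + i];
                tmp[2 * i] = s - d / 2.0;
                tmp[2 * i + 1] = s + d / 2.0;
            }
            System.arraycopy(tmp, 0, data, 0, len);
        }
        return data;
    }
}
```

Both loops touch each element a constant number of times per level, and the level sizes halve, so the total work is O(N) as claimed.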
Progressive transmission of an image can benefit from a 2D wavelet transform as described here.
The process is conceptually really simple. Given an image, we will create 4 new sub-images to replace it. (Assume that the image size is 2^n x 2^n pixels. This simplifies things a bit, as we will be able to symmetrically subdivide the image down to single pixels. But this requirement can be eliminated by scaling the image or enlarging it with 0's until it is 2^n x 2^n pixels. After decoding the image, these pre-processing steps should be reversed.)
To create these four sub-images, we break the original image into 4-pixel blocks, 2 pixels to a side. If the original image has size 2^n x 2^n, we should have 2^(2n-2) blocks. Now for each block, the top-right pixel goes directly into the top-right subimage, the bottom-left pixel goes directly into the bottom-left subimage, and the bottom-right pixel goes into the bottom-right subimage. So each of these 3 subimages will look like a coarse version of the original, containing 1/4 of the original pixels.
The top-left pixel of each block does not go directly into the top-left subimage. Rather, all 4 pixels of the block are averaged and placed into the top-left subimage. So the top-left subimage is effectively a scaled down version of the original image at 1/4 the original size. But it does not contain any of the original pixels itself (unless by chance).
This process can now be repeated for the top-left subimage. We break it down into four new sub-images in the same way, producing a new top-left subimage that is a scaled-down image 1/16 the original size. We repeat this process until the top-left subimage is only 1 pixel, whose color corresponds to the average of all pixels in the original image.
The newly encoded image lends itself to progressive transmission: the coarsest top-left subimage can be transmitted first, yielding a useful approximation, followed by the 3 corresponding detail subimages, whose original pixels are used to reconstruct the next finer top-left subimage. Reconstructing each 4-pixel block is simple. One pixel is grabbed from each of the top-right, bottom-left, and bottom-right subimages. Then the original top-left pixel is generated by taking the average value from the top-left subimage, multiplying it by 4, and subtracting out each of the other 3 original pixel values. This encoding is lossless, provided the averages are stored exactly rather than rounded to the original pixel precision.
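One analysis step of this scheme might be sketched as follows (hypothetical code, not the applet's source; doubles are used so the block averages stay exact):

```java
// A sketch of one level of the 2D transform described above, for a
// square grayscale image whose side is a power of two.
public class Wavelet2D {
    // One analysis step: returns {topLeft (averages), topRight,
    // bottomLeft, bottomRight}, each half the size of the input.
    public static double[][][] analyze(double[][] img) {
        int h = img.length / 2;
        double[][][] sub = new double[4][h][h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < h; x++) {
                double tl = img[2 * y][2 * x],     tr = img[2 * y][2 * x + 1];
                double bl = img[2 * y + 1][2 * x], br = img[2 * y + 1][2 * x + 1];
                sub[0][y][x] = (tl + tr + bl + br) / 4.0; // average of the block
                sub[1][y][x] = tr;                         // original pixels,
                sub[2][y][x] = bl;                         // copied verbatim
                sub[3][y][x] = br;
            }
        }
        return sub;
    }

    // Recover the original top-left pixel of a block: 4*avg - tr - bl - br.
    public static double topLeft(double avg, double tr, double bl, double br) {
        return 4.0 * avg - tr - bl - br;
    }
}
```

Repeating `analyze` on the returned top-left subimage produces the full pyramid; `topLeft` is the reconstruction step used by the decoder.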
I have written a simple Java applet that demonstrates this 2D wavelet transform. Here lies the source code of the applet infrastructure and of the transform filter itself.
Much of this material is inaccessible without a sturdy background in multiresolution analysis. I will attempt to digest it somewhat and regurgitate it for the benefit of those not already initiated into this discourse community.
The wavelet approach to progressive transmission of 3D surfaces is basically the same as that used for curves, images, or any signal. We must iteratively decompose the original, or high-resolution, surface into a low-resolution part and a detail part. We end up with the coarsest representation of the original surface, along with a sequence of details which can be used to reconstruct the original.
The low-resolution part can be obtained by generating new vertex positions as weighted averages of the original vertex positions. As this is a linear operation, it can be expressed as multiplication by a matrix A^j. The detail part can likewise be expressed as multiplication by a matrix B^j. This analysis is recursively applied to the low-resolution part until the coarsest representation is obtained.
"Analysis" filters A^j and B^j can be inverted to produce "synthesis" filters P^j and Q^j. This synthesis can be viewed as two steps: splitting each face into 4 subtriangles by introducing new vertices at the midpoints of existing edges, and perturbing the new collection of vertices according to the wavelet coefficients.
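In the usual multiresolution notation, one step of this filter bank can be written as follows (a sketch; here c^j denotes the column of level-j vertex positions and d^j the level-j wavelet coefficients):

```latex
\underbrace{c^{j-1} = A^j c^j, \qquad d^{j-1} = B^j c^j}_{\text{analysis}}
\qquad\qquad
\underbrace{c^j = P^j c^{j-1} + Q^j d^{j-1}}_{\text{synthesis}}
```

Applying the analysis step recursively to c^{j-1}, c^{j-2}, and so on yields the base mesh positions c^0 together with the detail sequence d^{J-1}, ..., d^0.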
A few goals guide the design of analysis and synthesis filters. These are:
Here is where it begins to get dense.
Consider our coarsest representation of the model. We'll call this the base mesh, or M^0. We create the next finer mesh, M^1, by subdividing each triangular face into four new faces. We do this by introducing new vertices at the midpoints of the parent face's three edges. In turn, these new faces can again be subdivided to create M^2. The simplest base mesh is a tetrahedron, but the choice is not fixed: the base mesh must be chosen to match the topology of the original model.
These meshes are nested when considered in 3-space, as all points of M^j are also points of M^(j+1). The nested meshes are used to define a sequence of nested function spaces. With each mesh M^j, we define V^j as the set of all continuous functions that are linear on each face of M^j. These spaces are nested since any function linear on the faces of M^j is also linear on the faces of M^(j+1). Each function in V^j maps points of M^0 into a real number:
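Concretely, a natural basis for V^j consists of "hat" functions, one per vertex of M^j (a sketch in assumed notation, with v^j_k denoting the k-th vertex of M^j):

```latex
\phi_i^j : M^0 \to \mathbb{R}, \qquad \phi_i^j(v_k^j) = \delta_{ik},
```

where each φ^j_i is linear on every face of M^j, taking the value 1 at vertex i and 0 at all other vertices.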
These hat functions can be used to construct polyhedral surfaces in 3-space:
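Writing φ^J_i for the hat function centered at vertex i of M^J, the surface takes the form (a sketch, with assumed notation; the coefficients c^J_i are 3-vectors):

```latex
S(x) = \sum_i c_i^J \, \phi_i^J(x), \qquad c_i^J \in \mathbb{R}^3.
```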
This defines a polyhedron with vertex positions c^J_i = (x^J_i, y^J_i, z^J_i) that is topologically equivalent to the base mesh M^0. The vertices of S have the connectivity of M^J, or the connectivity of M^0 recursively subdivided J times. This sort of mesh is said to have subdivision connectivity.
Expressing S(x) in wavelet form looks like this:
for chosen wavelets w^j_i(x). Now we must construct such wavelets.
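The two-part wavelet expansion has the shape (a plausible reconstruction in assumed notation, with φ^0_i the hat functions over the base mesh and d^j_i the 3-vector wavelet coefficients):

```latex
S(x) = \sum_i c_i^0 \, \phi_i^0(x)
     + \sum_{j=0}^{J-1} \sum_i d_i^j \, w_i^j(x).
```

The first sum is the coarse base-mesh approximation; the double sum adds back the detail at each subdivision level.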
We will construct a biorthogonal wavelet basis for polyhedral surfaces. It will have the following properties:
Consider a vertex i of M^(j+1) located at the midpoint of an edge e of M^j. The k-disk wavelet centered at vertex i is a function of the form
where N_k is a set of vertices of M^j in a neighborhood of vertex i. The neighborhoods are defined recursively: the neighborhood N_0 for the 0-disk wavelet consists of the endpoints of e; N_k contains the vertices of all triangles incident on N_(k-1).
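A k-disk wavelet of this kind can be sketched as (assumed notation; the weights α stand in for the coefficients derived in the paper):

```latex
w_i^j(x) = \phi_i^{j+1}(x) - \sum_{m \in N_k} \alpha_{i,m}\, \phi_m^j(x),
```

with the weights α_{i,m} chosen so that w^j_i is approximately orthogonal to the coarser space V^j; larger k gives better orthogonality at higher cost.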
Once all wavelet coefficients have been computed at level j, S^j(x) can be determined by subtracting out their collective contribution, since
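In assumed notation (w^j_i the level-j wavelets, d^j_i their coefficients), the relation referred to here can be sketched as:

```latex
S^j(x) = S^{j+1}(x) - \sum_i d_i^j \, w_i^j(x).
```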
Now that we are out of the woods mathematically, we can use these techniques to transform an arbitrary mesh made up of vertices, edges, and faces into a representation that is suitable for progressive transmission.
The techniques described above require a base mesh, M^0, and a parameterized description of the mesh. All we have is the original full-resolution mesh, M. Fortunately, we have at our disposal the remeshing algorithm of Eck et al., which, given an arbitrary mesh as input, will output a base mesh with relatively few faces, and a parameterization of the surface over that base mesh. Just what we need.
We can then apply the k-disk analysis described above. Once we have the wavelet coefficients, we sort them in order of decreasing magnitude. After transmitting the base mesh, we send the most influential coefficients first, as this results in quicker convergence toward the full-resolution model.
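Ordering coefficients by influence might look like this (a hypothetical helper, not code from the paper; the WaveletCoeff container and its fields are assumptions):

```java
// Sort 3-vector wavelet coefficients by decreasing magnitude so the
// most influential details are transmitted first.
public class CoeffSort {
    public static class WaveletCoeff {
        final int level, vertex;          // which wavelet this detail perturbs
        final double dx, dy, dz;          // the 3-vector coefficient
        public WaveletCoeff(int level, int vertex,
                            double dx, double dy, double dz) {
            this.level = level; this.vertex = vertex;
            this.dx = dx; this.dy = dy; this.dz = dz;
        }
        public double magnitude() {
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    // Largest-magnitude coefficients first.
    public static void sortByInfluence(WaveletCoeff[] coeffs) {
        java.util.Arrays.sort(coeffs,
            java.util.Comparator
                .comparingDouble(WaveletCoeff::magnitude)
                .reversed());
    }
}
```

The sender then streams the array front to back; the receiver applies each coefficient as it arrives, refining the mesh incrementally.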
As described by Certain et al., vertex color data can be represented by wavelet coefficients in much the same way as vertex position data.
See the SIGGRAPH 96 Visual Proceedings for a demonstration of these techniques. In addition to progressive transmission, the techniques are equally important for surface and texture compression, continuous level-of-detail control, and multiresolution editing.