Mobile Adaptive Distributed Graphics Framework (MADGRAF)
Mobile graphics, which involves running networked computer graphics applications
on mobile devices across wireless networks, is a fast-growing segment of the networking
and graphics industries. Running networked graphics applications in mobile environments faces
a fundamental conflict: graphics applications require large amounts of
memory, CPU cycles, battery power and disk space, while
mobile devices and wireless channels tend to be limited in these resources.
To mitigate these issues, some form of adaptation is necessary, driven by the
client device's capabilities, prevailing wireless network conditions, the characteristics
of the graphics application and user preferences. In this paper, we
describe the Mobile Adaptive Distributed Graphics Framework (MADGRAF), a graphics-aware middleware architecture
that makes it feasible to run complex 3D graphics applications on low-end mobile devices over wireless networks.
In MADGRAF, a server can perform mobile device-optimized pre-processing of complex graphics scenes in order
to speed up run time rendering, scale high-resolution meshes using polygon or image-based simplification,
progressively transmit compressed graphics files, conceal transmission errors by including
redundant bits or perform remote execution, all tailored to the client's capabilities.
MADGRAF exposes our Mobile Adaptive Distributed Graphics Language (MADGL), an API that
facilitates the programming and management of networked 3D graphics in mobile environments.
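The adaptation described above can be sketched as a simple decision procedure. The following is an illustrative sketch only, with hypothetical names (it is not the actual MADGL API): the server selects one of the delivery strategies named in the abstract based on the client's capabilities and the observed network conditions.

```python
# Hypothetical sketch of MADGRAF-style adaptation (names are illustrative,
# not the real MADGL API): pick a delivery strategy for a scene based on
# client capabilities and prevailing network conditions.

from dataclasses import dataclass

@dataclass
class ClientProfile:
    has_gpu: bool          # hardware 3D acceleration available
    memory_mb: int         # free memory on the device
    battery_pct: int       # remaining battery charge

@dataclass
class NetworkState:
    bandwidth_kbps: int
    loss_rate: float       # observed packet loss fraction

def choose_strategy(client: ClientProfile, net: NetworkState, mesh_polys: int) -> str:
    """Return one of the adaptation techniques described in the abstract."""
    if not client.has_gpu and mesh_polys > 100_000:
        return "remote-execution"          # offload rendering to the server
    if net.bandwidth_kbps < 256:
        return "mesh-simplification"       # send a reduced-polygon version
    if net.loss_rate > 0.05:
        return "redundant-transmission"    # add redundancy to conceal errors
    return "progressive-transmission"      # stream compressed scene progressively
```

The thresholds here are placeholders; in MADGRAF such decisions would also weigh the graphics application's characteristics and user preferences.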
Remote Mesa (R-Mesa)
Mobile devices have limited processing power and wireless
networks have limited bandwidth. A modern photorealistic graphics
application is resource-hungry: it consumes large amounts of CPU cycles and
memory, and network bandwidth if networked. Moreover, running such applications
on mobile devices drains their battery power in the process. The majority of
graphics computations are floating point operations, which run significantly
slower on mobile devices that lack floating point units or 3D
graphics accelerators. Proposed solutions such as input mesh
simplification are lossy and reduce photorealism. Remote execution,
wherein part of or the entire rendering process is offloaded to a powerful
surrogate server, is an attractive solution. We propose pipeline-splitting,
a paradigm whereby 15 sub-stages of the graphics
pipeline are isolated and instrumented with networking code such that they can run on either
a mobile client or a surrogate server. To validate our concepts,
we instrument Mesa3D, a popular implementation of the OpenGL graphics API,
to support pipeline-splitting, creating Remote Mesa (R-Mesa). We explore various
mappings of the graphics pipeline to the client and server while
monitoring key performance metrics such as total rendering time,
power consumption on the client and network usage and establish
conditions under which remote execution is an optimal solution. Our results
show that even with the incurred roundtrip delay, our remote execution
framework can improve rendering performance by up to 10 times when
rendering a moderate-sized graphics mesh file.
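The pipeline-splitting idea can be sketched in a few lines. This is an illustrative toy, not R-Mesa's implementation: the stage names below abbreviate the 15 instrumented sub-stages, and the networking code is abstracted into a single callable, so the only point being demonstrated is how one split point maps a prefix of the pipeline to the client and the rest to the surrogate server.

```python
# Illustrative sketch of pipeline-splitting (stage names abbreviated; R-Mesa
# isolates 15 sub-stages of the Mesa3D pipeline, only a few are shown here).
# Each stage either runs locally or its input is shipped to a surrogate server.

PIPELINE = ["transform", "lighting", "clipping", "rasterization", "texturing"]

def run_split(vertices, split_at, run_remote):
    """Run stages before index `split_at` on the client, the rest remotely.

    `run_remote(stage, data)` stands in for the networking code that R-Mesa
    instruments into each sub-stage; here it can be any callable.
    """
    data = vertices
    for index, stage in enumerate(PIPELINE):
        if index < split_at:
            data = f"{stage}({data})"          # stand-in for local computation
        else:
            data = run_remote(stage, data)     # offloaded to surrogate server
    return data

# Example: run transform and lighting locally, offload the rest.
result = run_split("mesh", split_at=2,
                   run_remote=lambda s, d: f"remote:{s}({d})")
```

Exploring the mappings described in the abstract then amounts to sweeping `split_at` while measuring rendering time, client power draw and network usage.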
PowerSpy
Battery power capacity has shown very little growth, especially
when compared with the exponential growth of CPU power, memory
and disk space. Hence, battery power is frequently the most
constraining resource on a mobile device. As a foundation
for optimizing application energy usage on mobile devices, it is increasingly
important to profile system-wide energy usage in order to
accurately determine where the energy is going.
Previous work on profiling energy usage has either required external
hardware multimeters, provided coarse grain results or required
modifications to the operating system and/or the profiled application.
We present PowerSpy, which tracks and reports the battery energy consumed by the
different threads of a monitored application, the operating
system, other applications and I/O devices in a
multi-threaded environment. Using PowerSpy, we are able to
measure the power consumption of five diverse applications
including a web browser, a VRML graphics browser, a compiler and a
video player, all without requiring modifications to the
applications' source code.
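One common way to split a measured energy budget across threads, which we sketch here purely for illustration (the function below is hypothetical, not PowerSpy's interface), is to charge each thread in proportion to the CPU time it consumed during the sampling interval.

```python
# Minimal sketch of per-thread energy attribution (illustrative only, not
# PowerSpy's actual mechanism or API): energy drawn from the battery over a
# sampling interval is charged to each thread in proportion to its CPU time.

def attribute_energy(interval_joules, thread_cpu_times):
    """Split one interval's energy across threads by CPU-time share.

    thread_cpu_times: dict mapping thread id -> CPU seconds in the interval.
    Returns a dict mapping thread id -> estimated joules.
    """
    total_cpu = sum(thread_cpu_times.values())
    if total_cpu == 0:
        return {tid: 0.0 for tid in thread_cpu_times}
    return {tid: interval_joules * t / total_cpu
            for tid, t in thread_cpu_times.items()}

# Example: 2 J consumed; the render thread used 3x the CPU of the UI thread.
usage = attribute_energy(2.0, {"render": 0.3, "ui": 0.1})
```

A real profiler must additionally account for energy spent in the kernel and in I/O devices on a thread's behalf, which is where the hardware-free accounting becomes challenging.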
Ubiquitous Scalable Graphics using Wavelets
Large meshes or images cannot be rendered at full
resolution on mobile devices such as cell phones,
PDAs and laptops since these devices have limited
storage, CPU, memory, display, and battery power. Wavelet-based
multiresolution analysis can represent meshes and graphics input at
multiple Levels of Detail (LODs). We propose UbiWave, a
wavelet-based framework for scalable transmission
of large meshes and graphics content to heterogeneous ubiquitous computing devices.
In UbiWave, a base representation and different levels
of wavelet coefficients are pre-generated at the server.
The optimal LOD is selected for and transmitted to each
mobile client based on its specifications and
wireless channel conditions, and the corresponding LOD
is reconstructed on the client. To save scarce resources on mobile devices,
we render graphics content at the lowest LOD that does not show
visual artifacts, called the point of indiscernibility (PoI). By rendering
content at the PoI instead of at the highest resolution, we are able to
save 61% of the decode time and 45% of the energy usage on the client.
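The base-plus-details structure described above can be illustrated with the unnormalized Haar transform on a 1-D signal. This toy is not UbiWave's actual codec; it only shows how a server-side decomposition lets a client reconstruct at whichever LOD its resources allow by consuming only a prefix of the detail bands.

```python
# Toy illustration (not UbiWave's codec) of wavelet LODs via the unnormalized
# Haar transform on a 1-D signal: the server stores a base value (average)
# plus per-level detail coefficients; a client rebuilds only as many levels
# as it can afford, yielding a coarser or finer LOD.

def haar_decompose(signal):
    """Return (base, details) where details[0] is the coarsest band."""
    details = []
    while len(signal) > 1:
        avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        details.append(diffs)
        signal = avgs
    details.reverse()          # coarsest level first
    return signal[0], details

def haar_reconstruct(base, details, levels):
    """Rebuild using only the first `levels` detail bands (the chosen LOD)."""
    signal = [base]
    for lvl in range(levels):
        signal = [v for a, d in zip(signal, details[lvl]) for v in (a + d, a - d)]
    return signal

base, details = haar_decompose([9, 7, 3, 5])
# Using all detail bands recovers the input; fewer bands give a coarser LOD.
full = haar_reconstruct(base, details, levels=len(details))
coarse = haar_reconstruct(base, details, levels=1)
```

The PoI then corresponds to the smallest `levels` at which the reconstruction is visually indistinguishable from `full`, and everything beyond it need not be transmitted or decoded.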
Real-Time Rendering of Iridescent Colors using Spherical Harmonics
Iridescent colors, such as those produced by diffraction, thin film interference,
dispersive refraction and scattering, arise from wavelength-dependent
Bi-directional Reflectance Distribution Functions (BRDFs).
Diffraction causes the shimmering colors seen on CD-ROMs and
interference causes the colors of oil slicks and soap bubbles.
Due to the expensive per-wavelength sampling required, rendering wavelength-dependent BRDFs has historically been
restricted to offline rendering techniques or real-time techniques
that use color ramps and simplified BRDFs. We present a generalized real-time
pipeline for physically accurate wavelength-dependent phenomena that
is independent of sampling cost, uses a wavelength-based color
representation, and supports High Dynamic Range (HDR) rendering. Our
pipeline converts the lighting environment and BRDF to per-wavelength
Spherical Harmonics (SH) coefficients, rotates and uploads
them to the rendering framework, then interactively renders the
lighting integral with traditional scene geometry. Our pipeline
is used to render diffraction and interference.
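The reason the SH projection makes this interactive can be shown in a few lines. The sketch below is illustrative, not the paper's pipeline: once the environment lighting and the (cosine-weighted) BRDF are projected onto the same orthonormal SH basis, the lighting integral over the sphere reduces to a dot product of their coefficient vectors, evaluated independently at each sampled wavelength, so the per-frame cost no longer depends on the number of directional samples.

```python
# Minimal sketch (illustrative, not the paper's renderer) of SH shading:
# with light and BRDF projected onto an orthonormal SH basis, the integral
# over the sphere collapses to a dot product of coefficient vectors,
# computed per wavelength.

def sh_shade(light_coeffs, brdf_coeffs):
    """Per-wavelength radiance as a dot product of SH coefficient vectors.

    light_coeffs, brdf_coeffs: dict mapping wavelength (nm) -> list of SH
    coefficients in the same basis order.
    """
    return {wl: sum(l * b for l, b in zip(light_coeffs[wl], brdf_coeffs[wl]))
            for wl in light_coeffs}

# Example with a 4-coefficient basis at two sampled wavelengths (values made up).
radiance = sh_shade(
    {450: [1.0, 0.2, 0.0, 0.1], 650: [0.8, 0.0, 0.3, 0.0]},
    {450: [0.5, 0.1, 0.4, 0.0], 650: [0.5, 0.2, 0.1, 0.3]},
)
```

In a full pipeline the resulting per-wavelength radiance would then be converted to an RGB or HDR color for display.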