Introduction to RenderMan

The computer graphics community is approaching an important threshold: the advent of photorealistic image synthesis on the desktop. Advances in workstation technology and in basic image-synthesis research have combined to allow affordable photorealism: the ability to produce pictures comparable in quality to those captured by a camera.

In issuing its RenderMan 3-D Scene Description Interface proposal, Pixar is embarking on a major effort to make the technology for photorealism accessible for a broad range of scientific visualization and computer-aided-design applications.

RenderMan Design Goals

The RenderMan interface is Pixar's proposed standard for describing such scenes to a photorealistic renderer.

RenderMan is an interface through which 3-D geometry and visual attributes are passed. 3-D geometry is expressed as collections of primitives such as polygons, curved surface patches, and NURBS. Constructive solid geometry may be used to assemble the primitives. Visual attributes and shading information are specified with a shading language. Modeling systems use the shading language to express custom materials, textures, light sources, and atmospheric effects. The interface provides a framework for a renderer to achieve a range of capabilities, from antialiasing to ray tracing to motion blur.
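As an illustration of how attributes, shaders, and constructive solid geometry come together, a RIB fragment along these lines could describe one solid cut from another (the shader name, parameter value, and dimensions here are illustrative, not prescribed by the interface):

```rib
# Attributes are bound to the geometry declared inside the block.
AttributeBegin
  Color [1.0 0.2 0.2]
  Surface "plastic" "roughness" [0.25]   # shader written in the Shading Language
  # Constructive solid geometry: subtract one solid from another.
  SolidBegin "difference"
    SolidBegin "primitive"
      Sphere 1 -1 1 360
    SolidEnd
    SolidBegin "primitive"
      Translate 0.5 0 0
      Sphere 0.75 -0.75 0.75 360
    SolidEnd
  SolidEnd
AttributeEnd
```

The set operations available to SolidBegin are "primitive", "union", "intersection", and "difference".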

The guiding design principle was to keep the interface complete but minimal, so that scene descriptions can be passed compactly and efficiently.


The RenderMan Interface is a standard interface between modeling programs and rendering programs capable of producing photorealistic-quality images. A rendering program implementing the RenderMan Interface differs from implementations of earlier graphics standards in several important respects.

The RenderMan Interface is designed so that the information needed to specify a photorealistic image can be passed to different rendering programs compactly and efficiently. The interface itself is designed to drive different hardware devices, software implementations and rendering algorithms. Many types of rendering systems are accommodated by this interface, including z-buffer-based, scanline-based, ray tracing, terrain rendering, molecule or sphere rendering and the Reyes rendering architecture. In order to achieve this, the interface does not specify how a picture is rendered, but instead specifies what picture is desired. The interface is designed to be used by both batch-oriented and real-time interactive rendering systems. Real-time rendering is accommodated by ensuring that all the information needed to draw a particular geometric primitive is available when the primitive is defined. Both batch and real-time rendering are accommodated by making limited use of inquiry functions and callbacks.

The RenderMan Interface is meant to be complete, but minimal, in its transfer of scene descriptions from modeling programs to rendering programs. The interface usually provides only a single way to communicate a parameter; it is expected that the modeling front end will provide other convenient variations. An example is color coordinate systems -- the RenderMan Interface supports multiple-component color models because a rendering program intrinsically computes with an n-component color model. However, the RenderMan Interface does not support all color coordinate systems because there are so many and because they must normally be immediately converted to the color representation used by the rendering program. Another example is geometric primitives -- the primitives defined by the RenderMan Interface are considered to be rendering primitives, not modeling primitives. The primitives were chosen either because special graphics algorithms or hardware is available to draw those primitives, or because they allow for a compact representation of a large database. The task of converting higher-level modeling primitives to rendering primitives must be done by the modeling program.

The RenderMan Interface is not designed to be a complete three-dimensional interactive programming environment. Such an environment would include many capabilities not addressed in this interface, among them:

  • screen space or two-dimensional primitives such as annotation text, markers, and 2-D lines and curves.
  • non-surface primitives such as 3-D lines and curves.
  • user-interface issues such as window systems, input devices, events, selecting, highlighting, and incremental redisplay.

The RenderMan Interface is a collection of procedures to transfer the description of a scene to the rendering program. A rendering program takes this input and produces an image. This image can be immediately displayed on a given display device or saved in an image file. The output image may contain color as well as coverage and depth information for postprocessing. Image files are also used to input texture maps. This document does not specify a "standard format" for image files.

The RenderMan Shading Language is a programming language for extending the predefined functionality of the RenderMan Interface. New materials and light sources can be created using this language. It is also used to specify deformations, special camera projections, and simple image-processing functions. All required shading functionality is also expressed in this language. A shading language is an essential part of a high-quality rendering program: no single lighting equation can ever hope to model the complexity of all possible materials.
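To make the idea concrete, here is a sketch of a simple surface shader in the Shading Language, modeled on the standard matte material; the shader name and parameter defaults are illustrative:

```rsl
/* A minimal surface shader: ambient plus diffuse reflection.
   Ka and Kd scale the two contributions. */
surface
simplematte(float Ka = 1, Kd = 1)
{
    /* Flip the shading normal toward the viewer if necessary. */
    normal Nf = faceforward(normalize(N), I);
    Oi = Os;                                       /* pass opacity through */
    Ci = Os * Cs * (Ka * ambient() + Kd * diffuse(Nf));
}
```

The globals N, I, Cs, Os, Ci, and Oi are part of the shading environment the renderer supplies to every surface shader.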

Features and Capabilities

The RenderMan Interface was designed in a top-down fashion by asking what information is needed to specify a scene in enough detail so that a photorealistic image can be created. Photorealistic image synthesis is quite challenging and many rendering programs cannot implement all of the features provided by the RenderMan Interface. This section describes which features are required and which are considered optional capabilities. The set of required features is extensive in order that application writers and end-users may reasonably expect basic compatibility between, and a high level of performance from, all implementations of the RenderMan Interface. Capabilities are optional only in situations where it is reasonable to expect that some rendering programs are algorithmically incapable of supporting that capability, or where the capability is so advanced that it is reasonable to expect that most rendering implementations will not be able to provide it.

Required Features

All rendering programs which implement the RenderMan Interface must implement the interface as specified in this document. Implementations which are provided as a linkable C library must provide entry points for all of the subroutines and functions, accepting the parameters described in this specification. All of the predefined types, variables, and constants (including the entire set of constant RtToken variables for the predefined string arguments to the various RenderMan Interface subroutines) must be provided. The C header file ri.h declares these data items.

Implementations which are provided as prelinked standalone applications must accept as input the complete RenderMan Interface Bytestream (RIB). Such implementations may also provide a complete RenderMan Interface library as above, which contains subroutine stubs whose only function is to generate RIB.

All rendering programs which implement the RenderMan Interface must support the full set of required features.

Rendering programs which implement the RenderMan Interface receive all of their data through the interface. There will be no additional subroutines required to control or provide data to the rendering program. Data items which are substantially similar to items already described in this specification will be supplied through the normal mechanisms, and not through any of the implementation-specific extension mechanisms (RiAttribute, RiGeometry, or RiOption). Rendering programs will not provide nonstandard alternatives to the existing mechanisms, such as an alternate language for programmable shading.
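Assuming a standalone implementation as described above, a complete minimal RIB stream it must accept could read as follows; the output filename, resolution, and scene contents are illustrative:

```rib
##RenderMan RIB-Structure 1.0
# Image options: a file with color and coverage channels.
Display "out.tiff" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [40]
WorldBegin
  LightSource "distantlight" 1 "intensity" [1.0]
  Surface "matte"
  Translate 0 0 5        # push the sphere in front of the camera
  Sphere 1 -1 1 360
WorldEnd
```

Image and camera options precede WorldBegin; everything between WorldBegin and WorldEnd describes the scene itself.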

Optional Capabilities

Rendering programs may also provide one or more of the following optional capabilities. If a capability is not provided by an implementation, a specific default is required. A subset of the full functionality of a capability may be provided by a rendering program. For example, a rendering program might implement Motion Blur, but only of simple transformations, or only using a limited range of shutter times. Rendering programs should describe their implementation of the following optional capabilities using the terminology in the following list.

Solid Modeling
The ability to define solid models as collections of surfaces and combine them using the set operations intersection, union, and difference.

Trim Curves
The ability to specify a subset of a parametric surface by giving a region in parameter space.

Level of Detail
The ability to specify several definitions of the same model and have one selected based on the estimated screen size of the model.

Motion Blur
The ability to process moving primitives and antialias them in time.

Depth of Field
The ability to simulate focusing at different depths.

Programmable Shading
The ability to perform shading calculations using user-supplied RenderMan Shading Language programs.

Special Camera Projections
The ability to perform nonstandard camera projections such as spherical or Omnimax projections.

Deformations
The ability to handle nonlinear transformations such as bends and twists.

Displacements
The ability to handle displacements.

Spectral Colors
The ability to calculate colors with an arbitrary number of spectral color samples.

Texture Mapping
The ability to index a texture map with the surface's texture coordinates.

Environment Mapping
The ability to model the environmental illumination by indexing a texture map with a direction vector.

Bump Mapping
The ability to perturb just surface normals by giving a displacement map.

Shadow Depth Mapping
The ability to index a shadow map with a position.

Volume Shading
The ability to attach and evaluate volumetric shading procedures.

Ray Tracing
The ability to evaluate global illumination models using ray tracing.

Radiosity
The ability to evaluate global illumination models using radiosity.

Area Light Sources
The ability to illuminate surfaces with area light sources.

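As a sketch of how two of these optional capabilities surface in RIB, Depth of Field and Motion Blur are requested through camera options and a motion block; the numeric values below are illustrative:

```rib
# Depth of Field: f-stop, focal length, focal distance.
DepthOfField 2.8 0.050 5
# Motion Blur: open and close the shutter over the frame interval.
Shutter 0 1
WorldBegin
  # Two samples of a moving transform, at shutter times 0 and 1.
  MotionBegin [0 1]
    Translate 0 0 5
    Translate 1 0 5
  MotionEnd
  Sphere 1 -1 1 360
WorldEnd
```

A renderer that supports only a subset of a capability (for example, Motion Blur of transformations but not of deforming geometry) should say so in the terminology of the list above.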
Copyright - Sudhir R Kaushik