Abstract
Interactive volume rendering in its standard formulation has become an increasingly important tool in many
application domains. In recent years several advanced volumetric illumination techniques to be used in interactive
scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of
producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects,
including varying shadowing and scattering effects. In this article, we review and classify the existing techniques
for advanced volumetric illumination. The classification will be conducted based on their technical realization,
their performance behavior as well as their perceptual capabilities. Based on the limitations revealed in this
review, we will define future challenges in the area of interactive advanced volumetric illumination.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image
Generation—Volume Rendering
DOI: 10.2312/conf/EG2012/stars/053-074
54 Jönsson et al. / STAR: Interactive Volume Illumination
where Li(~x, ω~i) is the incident radiance reaching ~x from direction ω~i, Le(~x) is the emissive radiance, and s(~x, ω~i, ω~o) describes the actual shading, which is dependent on both the incident light direction ω~i as well as the outgoing light direction ω~o. Furthermore, s(~x, ω~i, ω~o) is dependent on parameters which may vary based on ~x, as for instance the optical properties assigned through the transfer function. In the context of volume rendering, s(~x, ω~i, ω~o) is often written as:

s(~x, ω~i, ω~o) = τ(~x) · p(~x, ω~i, ω~o), (2)

where τ(~x) represents the extinction coefficient at position ~x and p(~x, ω~i, ω~o) is the phase function describing the scattering characteristics of the participating medium at the position ~x.

Scattering is the physical process which forces light to deviate from its straight trajectory. The reflection of light at a surface point is thus a scattering event. Depending on the material properties of the surface, incident photons are scattered in different directions. When using the ray-casting analogy, scattering can be considered as a redirection of a ray penetrating an object. Since scattering can also change a photon's frequency, a change in color becomes possible. Max [Max95] presents accurate solutions for simulating light scattering in participating media, i.e., how the light trajectory is changed when penetrating translucent materials. In classical computer graphics, single scattering accounts for light emitted from a light source directly onto a surface and reflected unimpededly into the observer's eye. Incident light thus comes directly from a light source. Multiple scattering describes the same concept, but incoming photons are scattered multiple times. To generate realistic images, both indirect light and multiple scattering events have to be taken into account. Both concepts are closely related; they only differ in their scale. Indirect illumination means that light is reflected multiple times by different objects in the scene. Multiple scattering refers to the probabilistic characteristics of a scattering event caused by a photon being reflected multiple times within an object. While in classical polygonal computer graphics light reflection on a surface is often modeled using bidirectional reflectance distribution functions (BRDFs), in volume rendering phase functions are used to model the scattering behavior. Such a phase function can be thought of as the spherical extension of a hemispherical BRDF. It defines the probability of a photon changing its direction of motion by an angle of θ. In interactive volume rendering, the Henyey-Greenstein model

G(θ, g) = (1 − g²) / (1 + g² − 2g cos θ)^(3/2)

is often used to incorporate phase functions. The parameter g ∈ [−1, 1] describes the anisotropy of the scattering event. A value g = 0 denotes that light is scattered equally in all directions. A positive value of g increases the probability of forward scattering. Accordingly, with a negative value backward scattering will become more likely. If g = 1, a photon will always pass through the point unaffectedly. If g = −1, it will deterministically be reflected into the direction it came from. Figure 2 shows the Henyey-Greenstein phase function model for different anisotropy parameters g.

Figure 2: The Henyey-Greenstein phase function plotted for different anisotropy parameters g.

When considering shadowing in this model, where the attenuation of external light traveling through the volume is incorporated, the incident radiance Li(~x, ω~i) can be defined as:

Li(~x, ω~i) = Ll · T(~xl, ~x). (3)

This definition is based on the standard volume rendering integral, where the light source is located at ~xl and has the radiance Ll, and T(~xl, ~x) is dependent on the transparency between ~xl and ~x. It describes the compositing of the optical properties assigned to all samples ~x′ along a ray and it incorporates emission and absorption, such that it can be written as:

L(~x, ω~o) = L0(~x0, ω~o) · T(~x0, ~x) + ∫_{~x0}^{~x} o(~x′) · T(~x′, ~x) d~x′. (4)

While L0(~x0, ω~o) depicts the background energy entering the volume, o(~x′) defines the optical properties at ~x′, and T(~xi, ~xj) is dependent on the optical depth and describes how light is attenuated when traveling through the volume:

T(~xi, ~xj) = e^(−∫_{~xi}^{~xj} κ(~x′) d~x′), (5)

According to Max, the standard volume rendering integral simulating absorption and emission can be extended with these definitions to support single scattering and shadowing:
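As an illustration of the model above, the following Python sketch (our own toy code, not from the reviewed work; all function names are ours) evaluates the Henyey-Greenstein function G(θ, g) and checks numerically that, after the usual 1/(4π) normalization, it integrates to one over the unit sphere for any anisotropy g:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function G(theta, g) as given in the text
    (without the 1/(4*pi) normalization factor)."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

def sphere_integral(g, n=20000):
    """Integrate G/(4*pi) over the unit sphere with the midpoint rule:
    (1/2) * integral_0^pi G(theta, g) * sin(theta) dtheta."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        total += henyey_greenstein(math.cos(theta), g) * math.sin(theta)
    return 0.5 * total * (math.pi / n)

# g = 0: isotropic scattering, G is constant 1 in every direction
assert abs(henyey_greenstein(0.5, 0.0) - 1.0) < 1e-12
# The normalized function integrates to one for any anisotropy g
for g in (-0.8, -0.3, 0.0, 0.3, 0.8):
    assert abs(sphere_integral(g) - 1.0) < 1e-3
```

The constant value at g = 0 reflects isotropic scattering, while the increasingly narrow peak for g close to ±1 corresponds to strongly forward or backward peaked scattering.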
Figure 3: Visual comparison when applying different volumetric illumination models. From left to right: gradient-based shad-
ing [Lev88], directional occlusion shading [SPH∗ 09], image plane sweep volume illumination [SYR11], shadow volume prop-
agation [RDRS10b] and spherical harmonic lighting [KJL∗ 12]. Apart from the used illumination model, all other rendering
parameters are constant.
propagate illumination by iteratively slicing through the volume, third, light space based ( ) techniques which project illumination as seen from the light source, fourth, lattice based ( ) techniques which compute the illumination directly on the volumetric data grid without applying sampling, and fifth, basis function based ( ) techniques which use a basis function representation of the illumination information. While this classification into five groups is solely performed based on the concepts used to compute the illumination itself, and not for the image generation, it might in some cases go hand in hand, e.g., when considering the slice based techniques, which are also bound to the slice based rendering paradigm. A comparison of the visual output of one representative technique of each of the five groups is shown in Figure 3. As can be seen, when changing the illumination model, the visual results might change drastically, even though most of the presented techniques are based on Max's formulation of a volumetric illumination model [Max95]. The most prominent visual differences are the intensities of shadows as well as their frequency, which is visible based on the blurriness of the shadow borders. Apart from the used illumination model, all other rendering parameters are the same in the shown images. Besides the five main groups supporting fully interactive volume rendering, we will also briefly cover ray-tracing based ( ) techniques, and those limited to isosurface illumination. While the introduced groups allow a sufficient classification for most available techniques, it should also be mentioned that some approaches could be classified into more than one group. For instance, the shadow volume propagation approach [RDRS10b], which we have classified as lattice-based, could also be classified as light space based, since the illumination propagation is performed based on the current light source position.

To be able to depict more details for comparison reasons, we will cover the properties of the most relevant techniques in Table 1 at the end of this article. The table shows the properties with respect to supported light sources, scattering capabilities, render and update performance, memory consumption, and the expected impact of a low signal-to-noise ratio. For rendering performance and memory consumption, the filled circles represent the quantity of rendering time or memory usage. An additional icon represents whether illumination information needs to be updated when changing the lighting or the transfer function. Thus, the algorithm performance for frame rendering and illumination updates becomes clear.

4. Algorithm Description

In this section, we will describe all techniques covered within this article in a formalized way. The section is structured based on the groups introduced in the previous section. For each group, we will provide details regarding the techniques themselves with respect to their technical capabilities as well as the supported illumination effects. All of these properties are discussed based on the property description given in the previous section. We will start with a brief explanation of each algorithm, where we relate the exploited illumination computations to Max's model (see Section 2) and provide a conceptual overview of the implementation. Then, we will discuss the illumination capabilities with respect to supported light sources and light interactions inside the volume, before addressing performance impact and memory consumption.

4.1. Local Region Based Techniques ( )

As the name implies, the techniques described in this subsection perform the illumination computation based on a local region.
Ldiff(~x, ω~i) = Ll · kd · max(∇τ(f(~x)) / |∇τ(f(~x))| · ω~i, 0). (7)
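The diffuse term of Eq. (7) can be sketched as follows. This is a toy Python version (our own naming and interface, not from the reviewed work) that approximates the gradient with central differences and normalizes it before the dot product with the light direction:

```python
import math

def gradient_central(vol, x, y, z):
    """Central-difference gradient of a scalar volume stored as
    vol[z][y][x]; assumes an interior voxel (hypothetical toy setup)."""
    gx = (vol[z][y][x + 1] - vol[z][y][x - 1]) * 0.5
    gy = (vol[z][y + 1][x] - vol[z][y - 1][x]) * 0.5
    gz = (vol[z + 1][y][x] - vol[z - 1][y][x]) * 0.5
    return (gx, gy, gz)

def diffuse(vol, x, y, z, omega_i, L_l=1.0, k_d=1.0):
    """Gradient-based diffuse shading in the spirit of Eq. (7):
    L_diff = L_l * k_d * max(n . omega_i, 0), where n is the
    normalized gradient used as a surface normal."""
    g = gradient_central(vol, x, y, z)
    norm = math.sqrt(sum(c * c for c in g)) or 1.0
    n = tuple(c / norm for c in g)
    return L_l * k_d * max(sum(a * b for a, b in zip(n, omega_i)), 0.0)

# Toy volume: density increases linearly along x, so the gradient
# points in +x everywhere.
vol = [[[float(x) for x in range(3)] for _ in range(3)] for _ in range(3)]
assert diffuse(vol, 1, 1, 1, (1.0, 0.0, 0.0)) == 1.0   # light along +x
assert diffuse(vol, 1, 1, 1, (-1.0, 0.0, 0.0)) == 0.0  # light from behind
```

As the text below notes, such difference operators amplify noise, which is why gradient-based shading degrades on modalities with a low SNR.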
boundary. However, often local difference operators are used for gradient computation, and thus the resulting gradients are subject to noise. Therefore, several approaches have been developed which improve the gradient quality. Since these techniques are beyond the scope of this article, we refer to the work presented by Hossain et al. [HAM11] as a starting point.

Gradient-based shading can be combined with all existing volume rendering paradigms, and it supports an arbitrary number of point and directional light sources. Due to the local nature, no shadows and scattering effects are supported. The rendering time, which depends on the used gradient computation scheme, is in general quite low. To further accelerate the rendering performance, gradients can also be precomputed and stored in a gradient volume accessed during rendering. When this precomputation is not performed, no additional memory is required. Clipping is directly supported, though the gradient along the clipping surfaces needs to be replaced with the clipping surface normal to avoid rendering artifacts [HLY09]. Besides the local nature of this approach, the gradient computation makes gradient-based shading highly dependent on the SNR of the rendered data. Therefore, gradient-based shading is not recommended for modalities suffering from a low SNR, such as magnetic resonance imaging (MRI) (see Figure 5), 3D ultrasound or seismic data [PBVG10].

Local Ambient Occlusion [HLY07] is a local approximation of ambient occlusion, which considers the voxels in a neighborhood around each voxel. Ambient occlusion goes back to the obscurance rendering model [ZIK98], which relates the luminance of a scene element to the degree of occlusion in its surroundings. To incorporate this information into the volume rendering equation, Hernell et al. replace the standard ambient term Lamb(~x) by:

Lamb(~x) = ∫_Ω ∫_{δ(~x, ωi, a)}^{δ(~x, ωi, RΩ)} gA(~x′) · T(~x′, δ(~x, ωi, a)) d~x′ dωi, (11)

where δ(~x, ωi, a) offsets the position by a along the direction ωi, and

gA(~x) = 1 / (RΩ − a). (12)

Reformulation of gA(~x) also allows to include varying emissive properties for different tissue types. When such an emissive mapping Le′(~x) is available, gA(~x) can be defined as follows:

gA(~x) = 1 / (RΩ − a) + Le′(~x). (13)

An application of the occlusion-only illumination is shown in Figure 6 (left), while Figure 6 (right) shows the application of the additional emissive term, which is set proportional to the signal strength of a coregistered functional magnetic resonance imaging (fMRI) signal [NEO∗10].

Figure 6: An application of local ambient occlusion in a medical context. Usage of the occlusion term allows to convey the 3D structure (left), while multimodal emissive lighting allows to integrate fMRI as additional modality (right). (images taken from [NEO∗10])

A straightforward implementation of local ambient occlusion would result in long rendering times, as for each ~x multiple rays need to be cast to determine the occlusion factor. Therefore, it has been implemented by exploiting a multiresolution approach [LLY06]. This flat blocking data structure represents different parts of the volume at different levels of detail. Thus, empty homogeneous regions can be stored at a very low level of detail, which requires less sampling operations when computing Lamb(~x), such that the ambient intensity Lamb(~x) is derived from the precomputed data:
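Going back to Eq. (11), the occlusion gathering can be sketched as follows. The Python below is a toy version with our own interface (not from the reviewed work), sampling only six axis-aligned rays instead of distributing many rays over the sphere:

```python
import math

# Six axis-aligned ray directions as a crude sampling of the sphere
# (a real implementation distributes many more rays over the sphere).
DIRS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def local_ambient_occlusion(kappa, x, y, z, a=1, r=4):
    """Toy version of Eq. (11): per voxel, average the transmittance
    along short rays through the neighborhood, skipping the first `a`
    steps to avoid self occlusion and stopping at radius `r`.
    `kappa(x, y, z)` returns the extinction coefficient (our own
    interface, not from the paper)."""
    total = 0.0
    for dx, dy, dz in DIRS:
        optical_depth = 0.0
        for step in range(a, r + 1):
            optical_depth += kappa(x + dx*step, y + dy*step, z + dz*step)
        total += math.exp(-optical_depth)   # T along this ray
    return total / len(DIRS)

empty = lambda x, y, z: 0.0   # unoccluded voxel: full ambient light
dense = lambda x, y, z: 1.0   # heavily occluded voxel
assert local_ambient_occlusion(empty, 0, 0, 0) == 1.0
assert local_ambient_occlusion(dense, 0, 0, 0) < 0.05
```

The nested loop over rays and steps is exactly the cost the multiresolution scheme above reduces: homogeneous regions can be traversed with far fewer samples.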
Li(~x, ω~i) = Ll · T(~xl, ~x). (17)

To include multiple scattering, a directed diffusion process is modeled through an angle dependent blur function. Thus, the indirect lighting is rewritten as:

Li(~x, ω~i) = Ll · T(~xl, ~x) + Ll · T(~xl, ~x) · Blur(θ), (18)

where θ is a user defined cone angle. In addition to this modified indirect lighting, which simulates multiple scattering, half angle slicing also exploits a surface shading factor S(~x). S(~x) can either be user defined through a mapping function or derived from the gradient magnitude at ~x. It is used to weight the degree of emission and surface shading, such that the illumination C(~x, ω~i, ω~o) at ~x is computed as:

C(~x, ω~i, ω~o) = Le(~x) · (1 − S(~x)) + s(~x, ω~i, ω~o) · S(~x). (19)

C(~x, ω~i, ω~o) is then used in the volume rendering integral as follows:

Due to the described synchronization behavior, half angle slicing is, as all techniques described in this subsection, bound to the slicing rendering paradigm. It supports one directional or point light source located outside the volume, and allows to produce shadows having hard, well defined borders (see Figure 1 (right)). By integrating a directed diffusion process in the indirect lighting computation, multiple scattering can be simulated, which allows a realistic representation of translucent materials. Half angle slicing performs two rendering passes for each slice, one from the point of view of the observer and one from that of the light source, and thus allows to achieve interactive frame rates. Since rendering and illumination computation are performed in a lockstep manner, besides the three buffers used for compositing, no intermediate storage is required. Clipping planes can be applied interactively.

Directional Occlusion Shading [SPH∗09] is another slice based technique that exploits front-to-back rendering of view aligned slices. In contrast to the half angle slicing approach discussed above, directional occlusion shading does not adapt the slicing axis with respect to the light position. Instead, it solely uses a phase function p(~x, ω~i, ω~o) to propagate
the light direction differs from the view direction. This ef-
fect is shown in Figure 7, where Figure 7 (middle) shows a
rendering where the light source is located in the camera po-
sition, while the light source has been moved to the left in
Figure 7 (right). Due to the more complex filter kernel, the
rendering time of this model is slightly higher than when us-
ing directional occlusion shading. The memory consumption
is equally low.
A comparison of deep shadow mapping, which allows semi-transparent occluders, with shadow rays and standard shadow mapping is shown in Figure 8. As can be seen, deep shadow maps introduce artifacts when thin occluder structures are present. The shadows of these structures show transparency effects although the occluders are opaque. This results from the fact that an approximation of the alpha function is exploited, and especially thin structures may diminish when approximating over too long depth intervals.

The deep shadow mapping technique is not bound to a specific rendering paradigm. It supports directional and point light sources, which should lie outside the volume since the shadow propagation is unidirectional. Due to the same reason, only a single light source is supported. Both rendering time and update of the deep shadow data structure, which needs to be done when lighting or transfer function changes, are interactive. The memory footprint of deep shadow mapping is directly proportional to the number of used layers.

Desgranges et al. [DEP05] also propose a gradient-free technique for shading and shadowing which is based on projective texturing. Their approach produces visually pleasing shadowing effects and can be applied interactively.

Image Plane Sweep Volume Illumination [SYR11] allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. By using sweeping, it becomes possible to reduce the illumination computation complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. The approach is based on a reformulation of the optical model that either considers only single scattering with a single light direction, or multiple scattering resulting from a rather high albedo [MC10]. To have shadowing effects of higher frequency and more diffuse scattering effects at the same time, these two formulations are combined. This is achieved by adding the local emission at ~x with the following scattering term:

g(~x, ωo) = (1 − a(~x)) · p(~x, ωl, ωo) · Iss(~x, ωl) + a(~x) · ∫_Ω p(~x, ωi, ωo) · Ims(~x, ωi) dωi. (25)

Here, ωl is the principal light direction and ωi are all directions from which light may enter. a(~x) is the albedo, which is the probability that light is scattered rather than absorbed. Thus, Iss(~x, ωl) describes the single scattering contribution, and Ims(~x, ωi) describes the multiple scattering contribution. Similar to the formulation presented by Max and Chen [MC10], the single scattering contribution is written as:

Iss(~x, ωl) = T(~xl, ~x) · L0 + ∫_{~xl}^{~x} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′. (26)

Multiple scattering is simulated by assuming the presence of a high albedo making multiple scattering predominant and allowing to approximate it by using a diffusion approach:

Ims(~x, ωi) = ∇diff · ∫_{~xb}^{~x} (1 / |~x′ − ~x|) · Le(~x′) d~x′. (27)

Here, ∇diff represents the diffusion approximation and 1/|~x′ − ~x| results in a weighting, such that more distant samples have less influence. ~xb represents the background position just lying outside the volume. In practice the computation of the scattering contributions can be simplified when assuming the presence of a forward peaked phase function, such that all relevant samples lie in direction of the light source. Thus, the computation can be performed in a synchronized manner, which is depicted by the following formulation for single scattering:

I′ss(~x, ωi) = T(~xl, ~x) · L0 + ∫_{~x′′}^{~x} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′ + ∫_{~xl}^{~x′′} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′, (28)

and a similar representation for the multiple scattering contribution:

I′ms(~x, ωi) = ∇diff · ∫_{~x′′}^{~x} (1 / |~x′ − ~x|) · Le(~x′) d~x′ + ∇diff · ∫_{~xb}^{~x′′} (1 / |~x′ − ~x|) · Le(~x′) d~x′. (29)

Image plane sweep volume illumination facilitates this splitting at ~x′′, which is based on the current state of the sweep line, by performing a synchronized ray marching. During this process, the view rays are processed in an order such that those being closer to the light source in image space are processed earlier. This is ensured by exploiting a modified plane sweeping approach, which iteratively computes the influence of structures when moving further away from the light source. To store the illumination information accumulated along all light rays between the samples of a specific sweep plane, a 2D illumination cache in form of a texture is used.
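The exactness of the split at ~x′′ in Eqs. (28) and (29) can be checked with a discrete version of the single scattering integral. The following Python sketch (made-up sample values and our own naming, not from the reviewed work) verifies that the two partial sums add up to the full contribution of Eq. (26):

```python
import math

def transmittance(kappa, i, j, h=0.1):
    """T(x_i, x_j) = exp(-h * sum of extinction between samples i and j)."""
    return math.exp(-h * sum(kappa[i:j]))

def single_scatter(kappa, emis, lo, hi, end, h=0.1):
    """Discrete version of the integral term in Eq. (26)/(28):
    sum over k in [lo, hi) of T(x_k, x_end) * tau(x_k) * Le(x_k) * h."""
    return sum(transmittance(kappa, k, end) * kappa[k] * emis[k] * h
               for k in range(lo, hi))

# Samples along one light ray (made-up extinction/emission values)
kappa = [0.3, 0.1, 0.5, 0.2, 0.4, 0.1, 0.3, 0.2]
emis  = [1.0, 0.2, 0.0, 0.8, 0.5, 0.1, 0.9, 0.3]
n, split = len(kappa), 5   # the split index plays the role of x''

full = single_scatter(kappa, emis, 0, n, n)
# Eq. (28): the part beyond the sweep line plus the already swept part
parts = (single_scatter(kappa, emis, split, n, n)
         + single_scatter(kappa, emis, 0, split, n))
assert abs(full - parts) < 1e-12
```

The split itself loses nothing; what the sweep gains is that the second term is already available from earlier processed rays when the sweep line reaches ~x′′.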
Image plane sweep volume illumination has been developed as a ray-based technique and thus can only be used with this rendering paradigm. It supports one point or directional light source, and simulates global shadowing and multiple scattering effects. Since all illumination computations are performed directly within a single rendering pass, it does not require any preprocessing and does not need to store intermediate results within an illumination volume, which results in a low memory footprint. The interactive application of clipping planes is inherently supported.

Shadow Splatting was first proposed in 2001 by Nulkar and Mueller [NM01]. As the name implies, it is bound to the splatting rendering paradigm, which is exploited to attenuate the lighting information as it travels through the volume. Therefore, a shadow volume is introduced, which is updated in a first splatting pass, during which light information is attenuated as it travels through the volume. In the second splatting pass, which is used for rendering, the shadow volume is fetched to acquire the incident illumination. The indirect light computation in the first pass can be described as:

Li(~x, ω~i) = Ll · T(~xl, ~x). (30)

The actual shading is performed using Phong shading, whereas the diffuse and the specular lighting intensity is replaced by the indirect light Li(~x, ω~i).

Zhang and Crawfis [ZC02, ZC03] present an algorithm which does not require storing a shadow volume in memory. Instead, their algorithm exploits two additional image buffers for handling illumination information. These buffers are used to attenuate the light during rendering, and to add its contribution to the final image. Whenever a contribution is added to the final image buffer seen from the camera, this contribution is also added to the shadow buffer as seen from the light source.

Shadow splatting can be used for multiple point light and distant light sources, as well as textured area lights. While the initial algorithm [ZC02] only supports hard shadows, a more recent extension integrates soft shadows [ZC03]. When computing shadows, the rendering time is approximately doubled with respect to regular splatting. The algorithm by Zhang and Crawfis [ZC02, ZC03] updates the 2D image buffers during rendering and thus requires no considerable update times, while the algorithm by Nulkar and Mueller [NM01] needs to update the shadow volume, which requires extra memory and update times. With a more recent extension, the integration of geometry also becomes possible [ZXC05].

4.4. Lattice Based Techniques ( )

As lattice based techniques, we classify all those approaches that compute the illumination directly based on the underlying lattice. While many of the previous approaches compute the illumination per sample during rendering, the lattice based techniques perform the computations per voxel, usually by exploiting the nature of the lattice. In this subsection, we also focus on those techniques which allow interactive data exploration, though it should be mentioned that other techniques working on non-regular grids exist [QXF∗07].

Shadow Volume Propagation is a lattice-based approach, where illumination information is propagated in a preprocessing through the regular grid defining the volumetric data set. The first algorithm realizing this principle has been proposed by Behrens and Ratering in the context of slice based volume rendering [BR98]. Their technique simulates shadows caused by attenuation of one distant light source. These shadows, which are stored within a shadow volume, are computed by attenuating illumination slice by slice through the volume. More recently, the lattice-based propagation concept has been exploited also by others. Ropinski et al. have exploited such a propagation to allow high frequency shadows and low frequency scattering simultaneously [RDRS10b]. Their approach has been proposed as a ray-casting-based technique that approximates the light propagation direction in order to allow efficient rendering. It facilitates a four channel illumination volume to store luminance and scattering information obtained from the volumetric dataset with respect to the currently set transfer function. Therefore, similar to Kniss et al. [KPH∗03], indirect lighting is incorporated by blurring the incoming light within a given cone centered about the incoming light direction. However, different to [KPH∗03], shadow volume propagation uses blurring only for the chromaticity and not the intensity of the light (luminance). Although this procedure is not physically correct, it helps to accomplish the goal of obtaining harder shadow borders, which according to Wanger improve the perception of spatial structures [WFG92, Wan92]. Thus, the underlying illumination model can be specified as follows:

L(~x, ω~o) = L0(~x0, ω~o) · T(~x0, ~x) + ∫_{~x0}^{~x} (Le(~x′) + s(~x′, ω~i, ω~o)) · T(~x′, ~x) d~x′, (31)

where s(~x, ω~i, ω~o) is calculated as the transport color t(~x), as assigned through the transfer function, multiplied by the chromaticity c(~x, ω~o) of the in-scattered light and modulated by the attenuated luminance li(~x, ω~i):

s(~x, ω~i, ω~o) = t(~x) · c(~x, ω~o) · li(~x, ω~i). (32)

The transport color t(~x) as well as the chromaticity c(~x, ω~o) are wavelength dependent, while the scalar li(~x, ω~i) describes the incident (achromatic) luminance.
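The slice-by-slice attenuation underlying these propagation approaches, i.e., Eq. (30) evaluated per slice in the style of Behrens and Ratering, can be sketched as follows (a 1D toy version with our own naming, not from the reviewed work):

```python
def propagate_shadow_volume(alpha):
    """Slice-by-slice attenuation of one distant light source entering
    at slice 0: shadow[s] holds the light arriving at slice s, i.e.
    L_l * T(x_l, x) from Eq. (30) with L_l = 1.
    `alpha[s]` is the per-slice opacity (a made-up input)."""
    light = 1.0
    shadow = []
    for a in alpha:
        shadow.append(light)    # light arriving at this slice
        light *= (1.0 - a)      # attenuate before the next slice
    return shadow

# Two half-opaque slices each remove half of the remaining light.
shadow = propagate_shadow_volume([0.0, 0.5, 0.5, 0.0])
assert shadow == [1.0, 1.0, 0.5, 0.25]
```

In a full implementation, each entry would be a 2D slice of the shadow volume and the multiplication a per-texel blend, but the propagation order is the same.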
Multiplying all together results in the incident radiance. Chromaticity c(~x, ω~o) is computed from the in-scattered light

c(~x, ω~o) = ∫_Ω τ(~x) · p(~x, ω~i′, ω~o) · c(~x, ω~i′) dω~i′, (33)

where Ω is the unit sphere centered around ~x. To allow the unidirectional illumination propagation, p(~x, ω~i′, ω~o) has been chosen to be a strongly forward peaked phase function:

p(~x, ω~i′, ω~o) = (ω~i′ · ω~o)^β if ω~i′ · ω~o < θ(~x), and 0 otherwise. (34)

The cone angle θ is used to control the amount of scattering and depends on the intensity at position ~x, and the phase function is a Phong lobe whose extent is controlled by the exponent β, restricted to the cone angle θ.

During rendering, the illumination volume is accessed to look up the luminance value and scattering color of the current voxel. The chromaticity c(~x, ω~o) and the luminance li(~x, ω~i) are fetched from the illumination volume and used within s(~x, ω~i, ω~o) as shown above. In cases where a high gradient magnitude is present, specular reflections and thus surface-based illumination is also integrated into s(~x, ω~i, ω~o).

Shadow volume propagation can be combined with various rendering paradigms, and does not require a noticeable amount of preprocessing, since the light propagation is carried out on the GPU. It supports point and directional light sources, whereby a more recent extension also supports area light sources [RDRS10a]. While the position of the light source is unconstrained, the number of light sources is limited to one, since only one illumination propagation direction is supported. By simulating the diffusion process, shadow volume propagation supports multiple scattering. Rendering times are quite low, since only one additional 3D texture fetch is required to obtain the illumination values. However, when the transfer function is changed, the illumination information needs to be updated and thus the propagation step needs to be recomputed. Nevertheless, this still supports interactive frame rates. Since the illumination volume is stored and processed on the GPU, a sufficient amount of graphics memory is required.

Piecewise Integration [HLY08] computes global light transport by dividing rays into k segments and evaluating the incoming radiance using:

Li(~x, ω~i) = L0 · ∏_{n=0}^{k} T(~xn, ~xn+1). (35)

Short rays are first cast towards the light source for each voxel, and the piecewise segments are stored in an intermediate 3D texture using a multiresolution data structure [LLY06]. Then, again for each voxel, global rays are sent towards the light source, now taking steps of size equal to the short rays and sampling from the piecewise segment 3D texture. In addition to direct illumination, first order in-scattering is approximated by treating scattering in the vicinity as emission. To preserve interactivity, the final radiance is progressively refined by sending one ray at a time and blending the result into a final 3D texture.

The piecewise integration technique can be combined with other rendering paradigms, even though it has been developed for volume ray-casting. It supports a single directional or point light source, which can be dynamically moved, and it includes an approximation of first order in-scattering. Rendering times are low, since only a single extra lookup is required to compute the incoming radiance at the sample position. As soon as the light source or transfer function changes, the radiance must be recomputed, which can be performed interactively and progressively. Three additional 3D textures are required for the computations; however, a multiresolution structure and lower resolution grid are used to alleviate some of the additional memory requirements.

Summed Area Table techniques were first exploited by Diaz et al. [DYV08] for computation of various illumination effects in interactive direct volume rendering. The 2D summed area table (SAT) is a data structure where each cell (x, y) stores the sum of every cell located between the initial position (0, 0) and (x, y), such that the 2D SAT for each pixel p can be computed as:

SATimage(x, y) = ∑_{i=0, j=0}^{x, y} p_{i,j}. (36)

A 2D SAT can be computed incrementally during a single pass, and the computation cost increases linearly with the number of cells. Once a 2D SAT has been constructed, an efficient evaluation of a region can be performed with only four texture accesses to the corners of the region to be evaluated. Diaz et al. [DYV08] created a method, known as Vicinity Occlusion Mapping (VOM), which effectively utilizes two SATs computed from the depth map, one which stores the accumulated depth for each pixel and one which stores the number of contributed values of the neighborhood of each pixel. The vicinity occlusion map is then used to incorporate view dependent ambient occlusion and halo effects in the volume rendered image. The condition for which a pixel should be occluded is determined by evaluating the average depth around that pixel. If this depth is larger than the depth of the pixel, the neighboring structures are further away, which means this structure is not occluded. If the structure in the current pixel is occluded, the amount of occlusion, i.e., how much the pixel should be darkened, is determined by the amount of differences of depths around the
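The single-pass construction and four-corner evaluation of Eq. (36) can be sketched in Python as follows (our own naming; a GPU implementation would operate on textures rather than nested lists):

```python
def build_sat(img):
    """2D summed area table: sat[y][x] = sum of img over the rectangle
    from (0, 0) to (x, y) inclusive, built in a single pass (Eq. (36))."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row = 0.0
        for x in range(w):
            row += img[y][x]                              # running row sum
            sat[y][x] = row + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def region_sum(sat, x0, y0, x1, y1):
    """Sum over the inclusive rectangle [x0, x1] x [y0, y1] using the
    four corner accesses mentioned in the text."""
    total = sat[y1][x1]
    if x0 > 0: total -= sat[y1][x0 - 1]
    if y0 > 0: total -= sat[y0 - 1][x1]
    if x0 > 0 and y0 > 0: total += sat[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
sat = build_sat(img)
assert region_sum(sat, 0, 0, 2, 2) == 45   # whole image
assert region_sum(sat, 1, 1, 2, 2) == 28   # 5 + 6 + 8 + 9
```

For vicinity occlusion mapping, the same construction would be applied to a depth map, and `region_sum` divided by the region size yields the average neighborhood depth used in the occlusion test.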
To produce images of high realism, stochastic Monte Carlo ray-tracing methods have also been applied in the area of interactive volume rendering. The benefit of these approaches is that they are directly coupled with physically based light transport. The first Monte Carlo based approach was presented by Rezk-Salama [RS07]. It supports shadowing and scattering, whereby interactivity is achieved by constraining the expensive scattering computations to selected surfaces only. Based on the gradient magnitude, it is decided whether a boundary surface, potentially initiating a scattering event, is present. The approach supports interactive frame rates and rendering of images with a high degree of realism.

More recently, Kroes et al. have presented an interactive Monte Carlo ray-tracer which does not limit the scattering to selected boundary surfaces [KPB11]. Their approach simulates several real-world factors influencing volumetric illumination, such as multiple arbitrarily shaped and textured area light sources, a real-world camera with lens and aperture, as well as complex material properties. This combination allows generating images showing a high degree of realism (see Figure 11). To adequately represent both volumetric and surface-like materials, their approach blends between a BRDF based and a phase function based material representation.

Figure 11: Monte Carlo ray-tracing allows incorporating gradient-based material properties and the modeling of camera apertures to generate highly realistic images. (Image taken from [KPB11], courtesy of Thomas Kroes)

4.7. Isosurface Based Techniques

Besides the algorithms discussed above, which can be combined with direct volume rendering and are therefore the main focus of this article, several approaches exist that allow advanced illumination when visualizing isosurfaces. While in direct volume rendering a transfer function may extract several structures having different intensities and thus optical properties, in the simplest form of isosurface rendering only a single isosurface with homogeneous optical properties is visible. This drastically reduces the combinations of materials which interact with each other and produce complex lighting effects. Several approaches have been proposed which exploit this reduced complexity and thus allow advanced illumination with higher rendering speeds and sometimes lower memory footprints. Vicinity Shading [Ste03], which simulates illumination of isosurfaces by taking into account neighboring voxels, was the first approach addressing advanced illumination for isosurfaces extracted from volume data. In a precomputation the vicinity of each voxel is analyzed and the resulting value, which represents the occlusion of the voxel, is stored in a shading texture which can be accessed during rendering. Wyman et al. [WPSH06] and Beason et al. [BGB∗06] focus on a spherical harmonic based precomputation, supporting the display of isosurfaces under static illumination conditions. Penner and Mitchell exploit an ambient occlusion approach to render smoothly shaded isosurfaces [PM08]. More recently, Banks and Beason have demonstrated how to decouple illumination sampling from isosurface generation in order to achieve sophisticated isosurface illumination results [BB09].

5. Perceptual Impact

Besides the technical insights and the achievable illumination effects described above, the perceptual impact of the current techniques is also of relevance. Although the perceptual qualities of advanced illumination techniques are frequently pointed out, remarkably little research has been done to investigate them in a systematic fashion. One study showing the perceptual benefits of advanced volumetric illumination models has been conducted with the goal to analyze the impact of the shadow volume propagation technique [RDRS10b]. The users participating in this study had to perform several tasks depending on the depth perception of static images, where one set was generated using gradient-based shading and the other one using shadow volume propagation. The results of the study indicate that depth perception is more accurate and faster when using shadow volume propagation.

Šoltészová et al. proposed and evaluated the perceptual impact of chromatic soft shadows in the context of interactive volume rendering [vPV11]. Inspired by illustrators, they replace the luminance decrease usually present in shadowed areas by a chromaticity blending. In their user study, they could show the perceptual benefit of these chromatic shadows with respect to depth and surface perception. However, their results also indicate that brighter shadows might have a negative impact on the perceptual qualities of an image, and should therefore be used with consideration.

More recently, Lindemann and Ropinski presented the results of a user study in which they investigated the perceptual impact of seven volumetric illumination techniques [LR11]. Within their study, depth and volume related tasks had to be performed. They could show that advanced volumetric illumination models make a difference when it is necessary to assess depth or size in images. However, they could not find a relation between the time used to perform a certain task and the used illumination model. The results of this study indicate that the directional occlusion shading model has the best perceptual capabilities [LR11].
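As a concrete illustration of the phase function handling underlying the Monte Carlo techniques discussed above: the Henyey-Greenstein model can be both evaluated and importance-sampled in closed form, which is why it is the common choice for volumetric scattering events. The sketch below uses the spherically normalized form (including the 1/4π factor) and the standard inverse-CDF sampling formula; the function names are illustrative and not taken from any of the cited systems.

```python
import math, random

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the sphere.
    g in [-1, 1] controls the scattering anisotropy."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def sample_hg_cos_theta(g, u):
    """Draw cos(theta) from the HG distribution via inverse-CDF sampling;
    u is a uniform random number in [0, 1)."""
    if abs(g) < 1e-3:               # near-isotropic: avoid dividing by ~0
        return 1.0 - 2.0 * u
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

# For a forward-scattering medium (g > 0), the mean cosine of the
# sampled scattering angles converges to g.
g = 0.6
rng = random.Random(42)
n = 200_000
mean_cos = sum(sample_hg_cos_theta(g, rng.random()) for _ in range(n)) / n
```

Blending such a phase function with a BRDF according to the local gradient magnitude, as in [KPB11], then amounts to choosing per scattering event whether the sampled direction comes from the phase function or from a surface reflection model.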
6. Usage Guidelines

To support application developers when choosing an advanced volumetric shading method, we provide a comparative overview of some capabilities within this section. Table 1 contains an overview of the properties discussed along with each algorithm description. However, in some cases more detailed information is necessary to compare and select techniques for specific application cases. Therefore, we provide the comparative charts shown in Figure 12.

Figure 12 (a) provides a comparative overview of the techniques belonging to the discussed groups with respect to illumination complexity and underlying rendering paradigm. To be able to assess the illumination capabilities of the incorporated techniques, the illumination complexity has been depicted as a continuous scale along the x-axis. It ranges from low complexity lighting, e.g., local shading with a single point light source, to highly complex lighting, which could for instance simulate scattering and shadowing as obtained from multiple area light sources. The y-axis in Figure 12, on the other hand, depicts which algorithmic rendering paradigm is supported. Thus, Figure 12 (a) depicts for each technique with which rendering paradigm it has been used and what lighting complexity has been achieved. We provide this chart as a reference, since we believe that the shown capabilities are important criteria to be assessed when choosing an advanced volumetric illumination model.

Figure 12 (b) depicts the scalability of the covered techniques in terms of growing image and data set size with respect to performance. Since the x-axis depicts scalability with respect to data size and the y-axis depicts scalability with respect to image size, the top right quadrant for instance contains all algorithms which allow a good performance in this respect. When selecting techniques in Figure 12 (a) based on the illumination complexity and the underlying rendering paradigm, Figure 12 (b) can be used to compare them regarding their scalability.

Figure 12: In (a) the illumination complexity of a method is shown in relation to the underlying rendering paradigm, whereby the illumination complexity increases from left to right. Techniques to the far left can for instance only handle point lights, while those to the far right can handle area light sources as well as multiple scattering. Plot (b) shows how each method scales in terms of performance with respect to large image and data size. Techniques in the upper right quadrant perform well independent of image and data size while methods in the lower left quadrant degrade substantially for large screen and data sizes. (Legend entries recoverable from the plots include: 9 Shadow Splatting, 10 Shadow Volume Propagation, 11 Piecewise Integration, 12 Summed Area Table 3D, 13 Extinction-based Shading, 14 Spherical Harmonics, 15 Monte Carlo ray-tracing.)

7. Future Challenges

Though the techniques presented in this article allow generating realistic volume rendered images at interactive frame rates, several challenges still remain to be addressed in future work. When reviewing existing techniques, it can be noticed that although the underlying optical models already contain simplifications, there is sometimes a gap between the model and the actual implementation. In the future it is necessary to narrow this gap by providing physically more accurate illumination models. The work done by Kroes et al. [KPB11] can be considered a first valuable step in this direction. One important ingredient necessary to provide further realism is the support of advanced material properties. Current techniques only support simplified phase function models and sometimes BRDFs, while more realistic volumetric material functions are not yet integrated.

Another challenging area which needs to be addressed in the future is the integration of several data sources. Although Nguyen et al. [NEO∗10] have presented an approach for integrating MRI and fMRI data, more general approaches potentially supporting multiple modalities are missing. Along these lines, the integration of volumetric and geometric information also needs more consideration. While the approach by Schott et al. [SMGB12] allows an integration in the context of slice based rendering, other volume rendering paradigms are not yet supported.

Apart from that, large data sets still pose a driving challenge, since the illumination computation takes a lot of time and, due to memory limitations, it is not possible to store large intermediate data sets.

8. Conclusions

Within this article, we have discussed the current state of the art regarding volumetric illumination models for interactive volume rendering. We have classified and explained the covered techniques and related their visual capabilities by using a uniform mathematical notation. Thus, the article can be used both as a reference for the existing techniques and as an aid for deciding which advanced volumetric illumination approaches should be used in specific cases. We hope that this helps researchers as well as application developers to make the right decisions early on and identify challenges for future work.

Acknowledgments

Many of the presented techniques have been integrated into the Voreen volume rendering engine (www.voreen.org), which is an open source visualization framework. The authors would like to thank Florian Lindemann for implementing the half angle slicing used to generate Figure 1 (right).

References

[BB09] Banks D. C., Beason K.: Decoupling illumination from isosurface generation using 4D light transport. IEEE Transactions on Visualization and Computer Graphics 15, 6 (2009), 1595–1602.

[BGB∗06] Beason K. M., Grant J., Banks D. C., Futch B., Hussaini M. Y.: Pre-computed illumination for isosurfaces. In Conference on Visualization and Data Analysis (2006), pp. 1–11.

[Bli77] Blinn J. F.: Models of light reflection for computer synthesized pictures. Proceedings of SIGGRAPH '77 11 (1977), 192–198.

[BR98] Behrens U., Ratering R.: Adding shadows to a texture-based volume renderer. In IEEE Int. Symp. on Volume Visualization (1998), pp. 39–46.
[CCF94] Cabral B., Cam N., Foran J.: Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Proceedings of the 1994 Symposium on Volume Visualization (1994), pp. 91–98.

[DE07] Desgranges P., Engel K.: US patent application 2007/0013696 A1: Fast ambient occlusion for direct volume rendering, 2007.

[DEP05] Desgranges P., Engel K., Paladini G.: Gradient-free shading: A new method for realistic interactive volume rendering. In Int. Fall Workshop on Vision, Modeling, and Visualization (2005), pp. 209–216.

[DK09] Dachsbacher C., Kautz J.: Real-time global illumination. In ACM SIGGRAPH Courses Program (2009).

[DVND10] Díaz J., Vázquez P.-P., Navazo I., Duguet F.: Real-time ambient occlusion and halos with summed area tables. Computers and Graphics 34 (2010), 337–350.

[DYV08] Díaz J., Yela H., Vázquez P.-P.: Vicinity occlusion maps - enhanced depth perception of volumetric models. In Computer Graphics Int. (2008), pp. 56–63.

[GP06] Gribble C. P., Parker S. G.: Enhancing interactive particle visualization with advanced shading models. In Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization (2006), pp. 111–118.

[HAM11] Hossain Z., Alim U. R., Möller T.: Toward high-quality gradient estimation on regular lattices. IEEE Transactions on Visualization and Computer Graphics 17 (2011), 426–439.

[HKSB06] Hadwiger M., Kratz A., Sigg C., Bühler K.: GPU-accelerated deep shadow maps for direct volume rendering. In ACM SIGGRAPH/EG Conference on Graphics Hardware (2006), pp. 27–28.

[HLY07] Hernell F., Ljung P., Ynnerman A.: Efficient ambient and emissive tissue illumination using local occlusion in multiresolution volume rendering. In IEEE/EG Int. Symp. on Volume Graphics (2007), pp. 1–8.

[HLY08] Hernell F., Ljung P., Ynnerman A.: Interactive global light propagation in direct volume rendering using local piecewise integration. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008).

[HLY09] Hernell F., Ljung P., Ynnerman A.: Local ambient occlusion in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 15, 2 (2009), 548–559.

[KJL∗12] Kronander J., Jönsson D., Löw J., Ljung P., Ynnerman A., Unger J.: Efficient visibility encoding for dynamic illumination in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 18, 3 (2012), 447–462.

[KKH02] Kniss J., Kindlmann G., Hansen C.: Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics 8 (2002), 270–285.

[KN01] Kim T.-Y., Neumann U.: Opacity shadow maps. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques (2001), pp. 177–182.

[KPB11] Kroes T., Post F. H., Botha C. P.: Interactive direct volume rendering with physically-based lighting. Preprint 2011-11, Delft University of Technology, 2011.

[KPH∗03] Kniss J., Premoze S., Hansen C., Shirley P., McPherson A.: A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics 9, 2 (2003), 150–162.

[KPHE02] Kniss J., Premoze S., Hansen C., Ebert D.: Interactive translucent volume rendering and procedural modeling. In IEEE Visualization 2002 (2002), pp. 109–116.

[KW03] Krüger J., Westermann R.: Acceleration techniques for GPU-based volume rendering. In Proceedings of IEEE Visualization '03 (2003).

[Lev88] Levoy M.: Display of surfaces from volume data. IEEE Computer Graphics and Applications 8 (1988), 29–37.

[LL94] Lacroute P., Levoy M.: Fast volume rendering using a shear-warp factorization of the viewing transformation. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (1994), pp. 451–458.

[LLY06] Ljung P., Lundström C., Ynnerman A.: Multiresolution interblock interpolation in direct volume rendering. In IEEE/EG Symposium on Visualization (2006), pp. 259–266.

[LM05] Li S., Mueller K.: Accelerated, high-quality refraction computations for volume graphics. In Volume Graphics 2005, Fourth International Workshop on (2005), pp. 73–229.

[LR10] Lindemann F., Ropinski T.: Advanced light material interaction for direct volume rendering. In IEEE/EG Int. Symp. on Volume Graphics (2010), pp. 101–108.

[LR11] Lindemann F., Ropinski T.: About the influence of illumination models on image comprehension in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 1922–1931.

[LV00] Lokovic T., Veach E.: Deep shadow maps. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (2000), pp. 385–392.

[Max95] Max N.: Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 1, 2 (1995), 99–108.

[MC10] Max N., Chen M.: Local and global illumination in the volume rendering integral. In Scientific Visualization: Advanced Concepts (2010), pp. 259–274.

[MMC99] Mueller K., Möller T., Crawfis R.: Splatting without the blur. In IEEE Visualization (1999), p. 61.

[MR10] Mess C., Ropinski T.: Efficient acquisition and clustering of local histograms for representing voxel neighborhoods. In IEEE/EG Volume Graphics (2010), pp. 117–124.

[NEO∗10] Nguyen T. K., Eklund A., Ohlsson H., Hernell F., Ljung P., Forsell C., Andersson M. T., Knutsson H., Ynnerman A.: Concurrent volume visualization of real-time fMRI. In IEEE/EG Volume Graphics (2010), pp. 53–60.

[NM01] Nulkar M., Mueller K.: Splatting with shadows. In Volume Graphics (2001).

[PBVG10] Patel D., Bruckner S., Viola I., Gröller M. E.: Seismic volume visualization for horizon extraction. In Proceedings of the IEEE Pacific Visualization Symposium 2010 (2010), pp. 73–80.

[PM08] Penner E., Mitchell R.: Isosurface ambient occlusion and soft shadows with filterable occlusion maps. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008), pp. 57–64.

[QXF∗07] Qiu F., Xu F., Fan Z., Neophytos N., Kaufman A., Mueller K.: Lattice-based volumetric global illumination. IEEE Transactions on Visualization and Computer Graphics 13 (2007), 1576–1583.

[RBV∗08] Ruiz M., Boada I., Viola I., Bruckner S., Feixas M., Sbert M.: Obscurance-based volume rendering framework. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008), pp. 113–120.

[RC06] Rodgman D., Chen M.: Refraction in volume graphics. Graphical Models 68 (2006), 432–450.

[RDRS10a] Ropinski T., Döring C., Rezk Salama C.: Advanced volume illumination with unconstrained light source positioning. IEEE Computer Graphics and Applications 30, 6 (2010), 29–41.

[RDRS10b] Ropinski T., Döring C., Rezk-Salama C.: Interactive volumetric lighting simulating scattering and shadowing. In IEEE Pacific Visualization (2010), pp. 169–176.

[Rit07] Ritschel T.: Fast GPU-based visibility computation for natural illumination of volume data sets. In Eurographics Short Paper Proceedings 2007 (2007), pp. 17–20.

[RKH08] Ropinski T., Kasten J., Hinrichs K. H.: Efficient shadows for GPU-based volume raycasting. In Int. Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (2008), pp. 17–24.

[RMSD∗08] Ropinski T., Meyer-Spradow J., Diepenbrock S., Mensmann J., Hinrichs K. H.: Interactive volume rendering with dynamic ambient occlusion and color bleeding. Computer Graphics Forum (Eurographics 2008) 27, 2 (2008), 567–576.

[RS07] Rezk-Salama C.: GPU-based Monte Carlo volume raycasting. In Pacific Graphics (2007).

[RSHRL09] Rezk Salama C., Hadwiger M., Ropinski T., Ljung P.: Advanced illumination techniques for GPU volume raycasting. In ACM SIGGRAPH Courses Program (2009).

[SHC∗09] Smelyanskiy M., Holmes D., Chhugani J., Larson A., Carmean D. M., Hanson D., Dubey P., Augustine K., Kim D., Kyker A., Lee V. W., Nguyen A. D., Seiler L., Robb R.: Mapping high-fidelity volume rendering for medical imaging to CPU, GPU and many-core architectures. IEEE Transactions on Visualization and Computer Graphics 15 (2009), 1563–1570.

[SKS02] Sloan P., Kautz J., Snyder J.: Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In Proceedings of SIGGRAPH '02 (2002), pp. 527–536.

[SKvW∗92] Segal M., Korobkin C., van Widenfelt R., Foran J., Haeberli P.: Fast shadows and lighting effects using texture mapping. In Proceedings of SIGGRAPH '92 (1992), pp. 249–252.

[SMGB12] Schott M., Martin T., Grosset A., Brownlee C.: Combined surface and volumetric occlusion shading. In Proceedings of PacificVis 2012 (2012).

[SMP11] Schlegel P., Makhinya M., Pajarola R.: Extinction-based shading and illumination in GPU volume ray-casting. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 1795–1802.

[SPH∗09] Schott M., Pegoraro V., Hansen C., Boulanger K., Bouatouch K.: A directional occlusion shading model for interactive direct volume rendering. Computer Graphics Forum (Proceedings of Eurographics/IEEE VGTC Symposium on Visualization 2009) 28, 3 (2009), 855–862.

[SSKE05] Stegmaier S., Strengert M., Klein T., Ertl T.: A simple and flexible volume rendering framework for graphics-hardware-based raycasting. In Fourth International Workshop on Volume Graphics 2005 (2005), pp. 187–241.

[Ste03] Stewart A. J.: Vicinity shading for enhanced perception of volumetric data. In IEEE Visualization (2003), pp. 355–362.

[SYR11] Sundén E., Ynnerman A., Ropinski T.: Image plane sweep volume illumination. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2125–2134.

[vPBV10] Šoltészová V., Patel D., Bruckner S., Viola I.: A multidirectional occlusion shading model for direct volume rendering. Computer Graphics Forum (Eurographics/IEEE VGTC Symp. on Visualization 2010) 29, 3 (2010), 883–891.

[vPV11] Šoltészová V., Patel D., Viola I.: Chromatic shadows for improved perception. In Proceedings of Non-Photorealistic Animation and Rendering (NPAR) (2011), pp. 105–115.

[Wan92] Wanger L. C.: The effect of shadow quality on the perception of spatial relationships in computer generated imagery. In SI3D '92: 1992 Symp. on Interactive 3D Graphics (1992), pp. 39–42.

[WFG92] Wanger L. C., Ferwerda J. A., Greenberg D. P.: Perceiving spatial relationships in computer-generated images. IEEE Computer Graphics and Applications 12, 3 (1992), 44–51, 54–58.

[Wil78] Williams L.: Casting curved shadows on curved surfaces. In Proceedings of SIGGRAPH '78 (1978), pp. 270–274.

[WPSH06] Wyman C., Parker S., Shirley P., Hansen C.: Interactive display of isosurfaces with global illumination. IEEE Transactions on Visualization and Computer Graphics 12 (2006), 186–196.

[ZC02] Zhang C., Crawfis R.: Volumetric shadows using splatting. In IEEE Visualization (2002), pp. 85–92.

[ZC03] Zhang C., Crawfis R.: Shadows and soft shadows with participating media using splatting. IEEE Transactions on Visualization and Computer Graphics 9, 2 (2003), 139–149.

[ZIK98] Zhukov S., Iones A., Kronin G.: An ambient light illumination model. In EGRW '98: Proceedings of the Eurographics Workshop on Rendering (1998), pp. 45–55.

[ZXC05] Zhang C., Xue D., Crawfis R.: Light propagation for mixed polygonal and volumetric data. In CGI '05: Proceedings of the Computer Graphics International 2005 (2005), pp. 249–256.