
EUROGRAPHICS 2012 / M.-P. Cani, F. Ganovelli
STAR – State of The Art Report

State of The Art Report on


Interactive Volume Rendering with Volumetric Illumination

Daniel Jönsson, Erik Sundén, Anders Ynnerman, and Timo Ropinski

Scientific Visualization Group, Linköping University, Sweden

Abstract
Interactive volume rendering in its standard formulation has become an increasingly important tool in many
application domains. In recent years, several advanced volumetric illumination techniques for use in interactive
scenarios have been proposed. These techniques claim perceptual benefits as well as the capability of producing
more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including
varying shadowing and scattering effects. In this article, we review and classify the existing techniques for
advanced volumetric illumination. The classification is conducted based on their technical realization, their
performance behavior as well as their perceptual capabilities. Based on the limitations revealed in this review,
we define future challenges in the area of interactive advanced volumetric illumination.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image
Generation—Volume Rendering

© The Eurographics Association 2012.
DOI: 10.2312/conf/EG2012/stars/053-074

1. Introduction

Interactive volume rendering as a subdomain of scientific visualization has matured in the last decade. Several approaches exist which allow a domain expert to visualize and explore volumetric data sets at interactive frame rates on standard graphics hardware. While the varying rendering paradigms underlying these approaches directly influence image quality [SHC∗09], in most cases the standard emission absorption model is used to incorporate illumination effects [Max95]. However, the emission absorption model is only capable of simulating local lighting effects, where light is emitted and absorbed locally at each sample point processed during rendering. This results in volume rendered images which convey the basic structures represented in a volumetric data set (see Figure 1 (left)). Though all structures are clearly visible in these images, information regarding the arrangement and size of structures is sometimes hard to convey. In the computer graphics community, more precisely the real-time rendering community, the quest for realistic interactive image synthesis has been one of the major goals since its early days [DK09]. While originally motivated mainly by the goal of producing more realistic imagery for computer games and animations, the perceptual benefits of more realistic representations could also be shown [WFG92, Wan92, GP06]. Consequently, in the past years the scientific visualization community has started to develop interactive volume rendering techniques which simulate not only local lighting effects, but also more realistic global effects [RSHRL09]. The existing approaches span a wide range in terms of underlying rendering paradigms, consumed hardware resources as well as lighting capabilities, and thus have different perceptual impacts [LR11]. In this article, we focus on advanced volume illumination techniques which allow interactive data exploration, i.e., rendering parameters can be changed interactively. When developing such techniques, several challenges need to be addressed:

• A volumetric optical model must be applied, which usually results in more complex computations due to the global nature of the illumination.
• Interactive transfer function updates must be supported. This requires that the resulting changes to the 3D structure of the data are incorporated during illumination.
• Graphics processing unit (GPU) algorithms need to be developed to allow interactivity.

When these challenges are successfully addressed, increasingly realistic volume rendered images can be generated interactively. As can be seen in Figure 1 (right), these images not only increase the degree of realism, but also support improved perceptual comprehension. While Figure 1 (left)

shows the overall structure of the rendered computed tomography (CT) data set of a human heart, the shadows added in Figure 1 (right) provide additional depth information.

Figure 1: Comparison of a volume rendered image using the local emission absorption model [Max95] (left), and the global half angle slicing technique [KPHE02] (right). Apart from the illumination model, all other image parameters are the same.

We will review and compare the existing techniques for advanced volumetric illumination with respect to their applicability, which we derive from their illumination capabilities, their performance behavior as well as their technical realization. Since this report addresses interactive advanced volumetric illumination techniques potentially applicable in scientific visualization, we do not consider methods requiring precomputations that do not permit changing the transfer function during rendering. The goal is to provide the reader with an overview of the field and to support design decisions when exploiting current concepts. We fulfill this by adapting a classification which takes into account the technical realization, the performance behavior as well as the supported illumination effects. One property considered in the classification is, for instance, the usage of preprocessing, because precomputation-based techniques are usually not applicable to large volumetric data sets, as the precomputed illumination volume would consume too much additional graphics memory. Another property is whether an approach is bound to a certain rendering paradigm, since this might hamper usage of a technique in a general manner. We have selected the covered properties such that the classification allows an applicability-driven grouping of the existing advanced volumetric illumination techniques. To provide the reader with a mechanism for choosing which advanced volumetric illumination model to use, we will describe these and other properties, as well as their implications, along with our classification. Furthermore, we will point out the shortcomings of the current approaches to demonstrate possibilities for future research.

The remainder of this article is structured as follows. In the next section we discuss the models used for simulating volumetric illumination. Our starting point is the seminal work presented by Max [Max95], with the emission absorption model, which we successively enrich to later on support global volumetric illumination effects. The reader interested mainly in the concepts underlying the implementations of the approaches covered in this article may skip Section 2 and directly proceed to Section 3, where we classify the covered techniques. Based on this classification, Section 4 discusses all covered techniques in greater detail. Since one of the main motivations for developing volumetric illumination models in the scientific visualization community is improved visual perception, we discuss recent findings in this area with respect to interactive volume rendering in Section 5. By considering both the technical capabilities of each technique and its perceptual outcome, we propose usage guidelines in Section 6. These guidelines have been formulated with the goal of supporting an application expert in choosing the right algorithm for a specific use case scenario. Furthermore, based on the observations made in Section 5 and the limitations identified throughout the article, we present future challenges in Section 7. Finally, the article concludes in Section 8.

2. Volumetric Illumination Model

In this section we derive the volume illumination model frequently exploited by the advanced volume illumination approaches described in the literature. The model is based on the optical model derived by Max [Max95] as well as the extensions more recently described by Max and Chen [MC10]. For clarity, we have included the definitions used within the model as a reference below.

Mathematical Notation

Ls(~x, ~ωo)      Radiance scattered from ~x into direction ~ωo.
Li(~x, ~ωi)      Radiance reaching ~x from direction ~ωi.
Le(~x)           Radiance isotropically emitted from ~x.
s(~x, ~ωi, ~ωo)  Shading function used at position ~x. Similar to a BRDF, it depends on the incoming light direction ~ωi and the outgoing light direction ~ωo.
p(~x, ~ωi, ~ωo)  Phase function used at position ~x. Like the shading function s, it depends on the incoming light direction ~ωi and the outgoing light direction ~ωo.
τ(~x)            Extinction occurring at ~x.
T(~xl, ~x)       Transparency between the light source position ~xl and the current position ~x.
Ll               Initial radiance as emitted by the light source.
L0(~x0, ~ωo)     Background intensity.

Similar to surface-based models, in volume rendering the lighting is computed per point, which is in our case a sample ~x along a viewing ray. The radiance Ls(~x, ~ωo) scattered from ~x inside the volume into direction ~ωo can be defined as:

Ls(~x, ~ωo) = s(~x, ~ωi, ~ωo) · Li(~x, ~ωi) + Le(~x),    (1)




Here, Li(~x, ~ωi) is the incident radiance reaching ~x from direction ~ωi, Le(~x) is the emissive radiance, and s(~x, ~ωi, ~ωo) describes the actual shading, which depends on both the incident light direction ~ωi and the outgoing light direction ~ωo. Furthermore, s(~x, ~ωi, ~ωo) depends on parameters which may vary based on ~x, for instance the optical properties assigned through the transfer function. In the context of volume rendering, s(~x, ~ωi, ~ωo) is often written as:

s(~x, ~ωi, ~ωo) = τ(~x) · p(~x, ~ωi, ~ωo),    (2)

where τ(~x) represents the extinction coefficient at position ~x and p(~x, ~ωi, ~ωo) is the phase function describing the scattering characteristics of the participating medium at the position ~x.

Scattering is the physical process which forces light to deviate from its straight trajectory. The reflection of light at a surface point is thus a scattering event. Depending on the material properties of the surface, incident photons are scattered in different directions. When using the ray-casting analogy, scattering can be considered as a redirection of a ray penetrating an object. Since scattering can also change a photon's frequency, a change in color becomes possible. Max [Max95] presents accurate solutions for simulating light scattering in participating media, i.e., for how the light trajectory is changed when penetrating translucent materials. In classical computer graphics, single scattering accounts for light emitted from a light source directly onto a surface and reflected unimpededly into the observer's eye. Incident light thus comes directly from a light source. Multiple scattering describes the same concept, but incoming photons are scattered multiple times. To generate realistic images, both indirect light and multiple scattering events have to be taken into account. Both concepts are closely related; they only differ in their scale. Indirect illumination means that light is reflected multiple times by different objects in the scene. Multiple scattering refers to the probabilistic characteristics of a scattering event caused by a photon being reflected multiple times within an object. While in classical polygonal computer graphics light reflection on a surface is often modeled using bidirectional reflectance distribution functions (BRDFs), in volume rendering phase functions are used to model the scattering behavior. Such a phase function can be thought of as the spherical extension of a hemispherical BRDF. It defines the probability of a photon changing its direction of motion by an angle of θ. In interactive volume rendering, the Henyey-Greenstein model

G(θ, g) = (1 − g²) / (1 + g² − 2g cos θ)^{3/2}

is often used to incorporate phase functions. The parameter g ∈ [−1, 1] describes the anisotropy of the scattering event. A value g = 0 denotes that light is scattered equally in all directions. A positive value of g increases the probability of forward scattering. Accordingly, with a negative value backward scattering becomes more likely. If g = 1, a photon always passes through the point unaffected; if g = −1, it is deterministically reflected into the direction it came from. Figure 2 shows the Henyey-Greenstein phase function model for different anisotropy parameters g.

Figure 2: The Henyey-Greenstein phase function plotted for different anisotropy parameters g.

When considering shadowing in this model, where the attenuation of external light traveling through the volume is incorporated, the incident radiance Li(~x, ~ωi) can be defined as:

Li(~x, ~ωi) = Ll · T(~xl, ~x).    (3)

This definition is based on the standard volume rendering integral, where the light source is located at ~xl and has the radiance Ll, and T(~xl, ~x) depends on the transparency between ~xl and ~x. The volume rendering integral describes the compositing of the optical properties assigned to all samples ~x′ along a ray, and it incorporates emission and absorption, such that it can be written as:

L(~x, ~ωo) = L0(~x0, ~ωo) · T(~x0, ~x) + ∫_{~x0}^{~x} o(~x′) · T(~x′, ~x) d~x′.    (4)

While L0(~x0, ~ωo) depicts the background energy entering the volume, o(~x′) defines the optical properties at ~x′, and T(~xi, ~xj) depends on the optical depth and describes how light is attenuated when traveling through the volume:

T(~xi, ~xj) = e^{−∫_{~xi}^{~xj} κ(~x′) d~x′},    (5)

where κ(~x′) specifies the absorption at ~x′. Since volumetric data is discrete, the volume rendering integral is in practice approximated numerically, usually by exploiting a Riemann sum.

According to Max, the standard volume rendering integral simulating absorption and emission can be extended with these definitions to support single scattering and shadowing; the extended integral is given in Equation (6).
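Before stating the extended integral, two of the quantities just introduced can be made concrete in code. The following Python sketch, an illustrative addition of this report with function names of our choosing, evaluates the Henyey-Greenstein model and approximates the transparency of Equation (5) by a Riemann sum over equidistant absorption samples:

```python
import math

def henyey_greenstein(cos_theta, g):
    """G(theta, g) = (1 - g^2) / (1 + g^2 - 2 g cos(theta))^(3/2)."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

def transparency(kappas, step):
    """Riemann-sum approximation of Equation (5): kappa sampled at
    equidistant positions between ~xi and ~xj, spaced by `step`."""
    return math.exp(-sum(kappas) * step)

# g = 0 scatters isotropically: the same value for every angle.
iso = henyey_greenstein(1.0, 0.0)    # == 1.0
# g > 0 favors the forward direction over the backward direction.
fwd = henyey_greenstein(1.0, 0.5)
bwd = henyey_greenstein(-1.0, 0.5)   # fwd > iso > bwd

# Homogeneous medium: T decays exponentially with optical depth.
T = transparency([0.5] * 10, step=0.1)   # exp(-0.5)
```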




L(~x, ~ωo) = L0(~x0, ~ωo) · T(~x0, ~x) + ∫_{~x0}^{~x} (Le(~x′) + s(~x′, ~ωi, ~ωo) · Li(~x′, ~ωi)) · T(~x′, ~x) d~x′,    (6)

with ~ωi = ~xl − ~x′, L0(~x0, ~ωo) being the background intensity, and ~x0 a point behind the volume. When extending this equation to support multiple scattering, it is necessary to integrate the scattering of light coming from all possible directions ~ωi on the unit sphere.

3. Algorithm Classification

To be able to give a comprehensive overview of current interactive advanced volumetric illumination techniques, we introduce a classification within this section which allows us to group and compare the covered approaches. The classification has been developed with the goal of providing application designers with decision support when choosing advanced volumetric illumination techniques for their needs. Thus, we first cover the most relevant algorithm properties that need to be considered when making such a decision.

The most limiting property of advanced volumetric illumination algorithms is the dependence on an underlying rendering paradigm. In the past, different paradigms allowing interactive volume rendering have been proposed. These paradigms range from a shear-warp transformation [LL94], over splatting [MMC99] and texture slicing [CCF94], to volume ray-casting [KW03]. Besides their different technical realizations, the visual quality of the rendered image also varies with respect to the used rendering paradigm [SHC∗09]. Since efficiency is of great importance in the area of interactive volumetric illumination, many approaches have been developed which gain performance by being closely bound to a specific rendering paradigm (e.g., [KPHE02, ZC02, ZC03, SPH∗09, vPBV10, SYR11]). Especially when integrating volumetric illumination into existing visualization systems, the underlying rendering paradigm of the existing approaches might be limiting when deciding which algorithm to use. Therefore, we will discuss this dependency when presenting the covered algorithms in Section 4.

Another property of major importance is which illumination effects are supported by a certain algorithm. This is specified on the one hand by the supported lighting and on the other hand by the light interaction and propagation inside the volume. The supported illumination can vary with respect to the number and types of the light sources. Supported light source types range from classical point and directional light sources over area and textured light sources. Since several algorithms focus on ambient visibility computation, we also consider an ambient light source, which is omnidirectional and homogeneous. As some algorithms are bound to certain rendering paradigms, they may also be subject to constraints regarding the light source position, e.g., [BR98, SPH∗09, vPBV10]. Finally, the number of supported light sources varies, where either one or many light sources are supported. The discussed algorithms also vary with respect to the light interaction and propagation inside the volume. While all algorithms support either local or global shadowing, though at different frequencies reaching from soft to hard shadows, scattering is not supported by all approaches. Those algorithms which support scattering may simulate either single or multiple scattering events. We will discuss these capabilities for each covered technique and relate the supported illumination effects to Max's optical models [Max95].

As in many other fields of computer graphics, applying advanced illumination techniques in volume rendering involves a trade-off between rendering performance and memory consumption. The two most extreme cases on this scale are entire precomputation of the illumination information, and recomputation on the fly for each frame. We will compare the techniques covered within this article with respect to this trade-off, by describing the performance impact of these techniques as well as the required memory consumption. Since different application scenarios vary with respect to the degree of interactivity, we will review their performance capabilities with respect to rendering and illumination updates. Some techniques trade illumination update times for frame rendering times and thus allow higher frame rates, as long as no illumination-critical parameter has been changed. We discuss rendering times as well as illumination update times, whereby we differentiate whether they are triggered by lighting or transfer function updates. Recomputations performed when the camera changes are considered as rendering time. The memory footprint of the covered techniques is also an important factor, since it is often limited by the available graphics memory. Some techniques precompute an illumination volume, e.g., [RDRS10b, SMP11], while others store much less or even no illumination data, e.g., [KPHE02, DEP05, SYR11].

Finally, it is important whether a volumetric illumination approach can be combined with clipping planes and whether it allows combining geometry with volumetric data. While clipping is frequently used in many standard volume rendering applications, the integration of polygonal geometry data is for instance important in flow visualization, when integrating pathlines in a volumetric data set.

To enable easier comparison of the existing techniques and the capabilities described above, we have decided to classify them into five groups based on the algorithmic concepts exploited to achieve the resulting illumination effects. The thus obtained groups are: first, local region based techniques, which consider only the local neighborhood around a voxel; second, slice based techniques, which




propagate illumination by iteratively slicing through the volume; third, light space based techniques, which project illumination as seen from the light source; fourth, lattice based techniques, which compute the illumination directly on the volumetric data grid without applying sampling; and fifth, basis function based techniques, which use a basis function representation of the illumination information.

Figure 3: Visual comparison when applying different volumetric illumination models. From left to right: gradient-based shading [Lev88], directional occlusion shading [SPH∗09], image plane sweep volume illumination [SYR11], shadow volume propagation [RDRS10b] and spherical harmonic lighting [KJL∗12]. Apart from the used illumination model, all other rendering parameters are constant.

While this classification into five groups is performed solely based on the concepts used to compute the illumination itself, and not on those used for image generation, the two might in some cases go hand in hand, e.g., for the slice based techniques, which are also bound to the slice based rendering paradigm. A comparison of the visual output of one representative technique from each of the five groups is shown in Figure 3. As can be seen, when changing the illumination model, the visual results might change drastically, even though most of the presented techniques are based on Max's formulation of a volumetric illumination model [Max95]. The most prominent visual differences are the intensities of shadows as well as their frequency, which is visible in the blurriness of the shadow borders. Apart from the used illumination model, all other rendering parameters are the same in the shown images. Besides the five main groups supporting fully interactive volume rendering, we will also briefly cover ray-tracing based techniques, and those limited to isosurface illumination. While the introduced groups allow a sufficient classification of most available techniques, it should also be mentioned that some approaches could be classified into more than one group. For instance, the shadow volume propagation approach [RDRS10b], which we have classified as lattice based, could also be classified as light space based, since the illumination propagation is performed based on the current light source position.

To be able to depict more details for comparison, we cover the properties of the most relevant techniques in Table 1 at the end of this article. The table shows the properties with respect to supported light sources, scattering capabilities, render and update performance, memory consumption, and the expected impact of a low signal-to-noise ratio. For rendering performance and memory consumption, the filled circles represent the quantity of rendering time or memory usage. An additional icon indicates whether illumination information needs to be updated when changing the lighting or the transfer function. Thus, the algorithm performance for frame rendering and illumination updates becomes clear.

4. Algorithm Description

In this section, we describe all techniques covered within this article in a formalized way. The section is structured based on the groups introduced in the previous section. For each group, we provide details regarding the techniques themselves with respect to their technical capabilities as well as the supported illumination effects. All of these properties are discussed based on the property description given in the previous section. We start with a brief explanation of each algorithm, where we relate the exploited illumination computations to Max's model (see Section 2) and provide a conceptual overview of the implementation. Then, we discuss the illumination capabilities with respect to supported light sources and light interactions inside the volume, before addressing performance impact and memory consumption.

4.1. Local Region Based Techniques

As the name implies, the techniques described in this subsection perform the illumination computation based on a local region. Thus, when relating volumetric illumination techniques to conventional polygonal computer graphics, these techniques can best be compared to local shading models. While the still most frequently used gradient-based shading [Lev88] exploits the concepts underlying the classical Blinn-Phong illumination model [Bli77], other techniques are more focused on the volumetric nature of the data to be rendered. The described techniques also vary with respect to their locality. Gradient-based shading relies on a gradient and thus takes, when for instance using finite difference gradient operators, only adjacent voxels into account. Other techniques, as for instance dynamic ambient occlusion [RMSD∗08], take bigger, though still local, regions into account. Thus, when using local region based techniques, only direct illumination is computed, i.e., local illumination not influenced by other parts of the scene. Hence, not every other part of the scene has to be considered when computing the illumination for the current object, and the rendering complexity is reduced from O(n²) to O(n), with n being the number of voxels.

Gradient-based volumetric shading [Lev88] is today still the most widely used shading technique in the context of interactive volume rendering. It exploits voxel gradients as surface normals to calculate local lighting effects. The actual illumination computation is done in a similar way as when using the Blinn-Phong illumination model [Bli77] in polygon-based graphics, whereas the surface normal is substituted by the gradient derived from the volumetric data. Figure 4 shows a comparison of gradient-based volumetric shading and the application of the standard emission and absorption model. As can be seen, surface details become more prominent when using gradient-based shading.

Figure 4: A CT scan of an elk rendered with emission and absorption only (top), and with gradient-based diffuse shading [Lev88] (bottom). Due to the incorporation of the gradient, surface structures emerge (bottom).

The local illumination at a point ~x is computed as the sum of the three supported reflection contributions: diffuse reflection, specular reflection and ambient lighting. Diffuse and specular reflections both depend on the normalized gradient ∇τ(f(~x)) / |∇τ(f(~x))|, where f(~x) is the intensity value given at position ~x. Writing ∇̂τ(f(~x)) for this normalized gradient, we can define the diffuse reflection as:

Ldiff(~x, ~ωi) = Ll · kd · max(∇̂τ(f(~x)) · ~ωi, 0).    (7)

Thus, we modulate the initial diffuse lighting Ll based on its incident angle and the current voxel's diffuse color kd. In contrast to diffuse reflections, specular reflections also depend on the viewing angle, and thus the outgoing light direction ~ωo. Therefore, ~ωo is also used to modulate the initial light intensity Ll and the voxel's specular color ks:

Lspec(~x, ~ωi, ~ωo) = Ll · ks · max(∇̂τ(f(~x)) · (~ωi + ~ωo)/2, 0)^α.    (8)

α is used to influence the shape of the highlight seen on surface-like structures. A rather large α results in a small, sharp highlight, while a smaller α results in a bigger, smoother highlight. To approximate indirect illumination effects, usually an ambient term is added. This ensures that voxels with gradients pointing away from the light source do not appear pitch black. Since this ambient term does not incorporate any spatial information, it is the easiest to compute:

Lamb(~x) = Ll · ka,    (9)

by only considering the initial light intensity Ll and the voxel's ambient color ka. More advanced techniques, which substitute Lamb(~x) with the ambient occlusion at position ~x, are covered below. To incorporate gradient-based shading into the volume rendering integral, the shading function s(~x, ~ωi, ~ωo) is replaced by:

s(~x, ~ωi, ~ωo) = τ(~x) · (Ldiff(~x, ~ωi) + Lspec(~x, ~ωi, ~ωo) + Lamb(~x)).    (10)

To also incorporate attenuation based on the distance between ~x and ~xl, the computed light intensity can be modulated. However, this distance-based weighting does not incorporate any voxels possibly occluding the way to the light source, which would result in shadowing. To solve this issue, volume illumination techniques not constrained to a local region need to be applied.

As the gradient ∇τ(f(~x)) is used in the computation of Ldiff(~x, ~ωi) and Lspec(~x, ~ωi, ~ωo), its quality is crucial for the visual quality of the rendering. Especially when dealing with a high degree of specularity, an artifact-free image can only be generated when the gradients are continuous along a boundary.
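Equations (7)-(10) translate almost literally into code. The following Python sketch is an illustrative addition of this report, with parameter names of our choosing; it follows the equations as written, including the unnormalized half-way vector (~ωi + ~ωo)/2 of Equation (8), which implementations often normalize instead:

```python
import numpy as np

def gradient_shading(grad, w_i, w_o, L_l, k_a, k_d, k_s, alpha):
    """Gradient-based Blinn-Phong shading, Equations (7)-(9).

    grad      -- gradient of tau(f(~x)), used as a surface normal
    w_i, w_o  -- unit incoming light / outgoing view directions
    """
    n = grad / np.linalg.norm(grad)                    # normalized gradient
    diffuse = L_l * k_d * max(np.dot(n, w_i), 0.0)     # Eq. (7)
    h = (w_i + w_o) / 2.0                              # half-way vector, Eq. (8)
    specular = L_l * k_s * max(np.dot(n, h), 0.0) ** alpha
    ambient = L_l * k_a                                # Eq. (9)
    return diffuse + specular + ambient

# Eq. (10): the shading function multiplies this sum by the
# extinction, i.e. s = tau_x * gradient_shading(...).
light = np.array([0.0, 0.0, 1.0])   # light shining along the gradient
view = np.array([0.0, 0.0, 1.0])
L = gradient_shading(np.array([0.0, 0.0, 2.0]), light, view,
                     L_l=1.0, k_a=0.1, k_d=0.6, k_s=0.3, alpha=8)
# L == 0.1 + 0.6 + 0.3 = 1.0
```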



Jönsson et al. / STAR: Interactive Volume Illumination 59

ωi , to avoid self occlusion. RΩ specifies the radius of the lo-


cal neighborhood for which the occlusion is computed, and
Figure 5: Gradient-based shading applied to a magnetic resonance imaging (MRI) data set (left). As can be seen in the closeup, noise is emphasized in the rendering (right).

boundary. However, often local difference operators are used for gradient computation, and thus the resulting gradients are subject to noise. Therefore, several approaches have been developed which improve the gradient quality. Since these techniques are beyond the scope of this article, we refer to the work presented by Hossain et al. [HAM11] as a starting point.

Gradient-based shading can be combined with all existing volume rendering paradigms, and it supports an arbitrary number of point and directional light sources. Due to its local nature, no shadows or scattering effects are supported. The rendering time, which depends on the used gradient computation scheme, is in general quite low. To further accelerate the rendering performance, gradients can also be precomputed and stored in a gradient volume accessed during rendering. When this precomputation is not performed, no additional memory is required. Clipping is directly supported, though the gradient along the clipping surfaces needs to be replaced with the clipping surface normal to avoid rendering artifacts [HLY09]. Besides the local nature of this approach, the gradient computation makes gradient-based shading highly dependent on the SNR of the rendered data. Therefore, gradient-based shading is not recommended for modalities suffering from a low SNR, such as magnetic resonance imaging (MRI) (see Figure 5), 3D ultrasound or seismic data [PBVG10].

Local Ambient Occlusion [HLY07] is a local approximation of ambient occlusion, which considers the voxels in a neighborhood around each voxel. Ambient occlusion goes back to the obscurance rendering model [ZIK98], which relates the luminance of a scene element to the degree of occlusion in its surroundings. To incorporate this information into the volume rendering equation, Hernell et al. replace the standard ambient term Lamb(~x) by:

    Lamb(~x) = ∫_Ω ∫_{δ(~x,ωi,RΩ)}^{δ(~x,ωi,a)} gA(~x′) · T(~x′, δ(~x, ωi, a)) d~x′ dωi,   (11)

where δ(~x, ωi, a) offsets the position by a along the direction ωi. gA(~x′) denotes the light contribution at each position ~x′ along a ray. Since the occlusion should be computed in an ambient manner, it needs to be integrated over all rays in the neighborhood Ω around a position ~x.

Different light contribution functions gA(~x) are presented. To achieve sharp shadow borders, the usage of a Dirac delta is proposed, which ensures that light contributes only at the boundary of Ω. Smoother results can be achieved when the ambient light source is considered volumetric. This results in adding a fractional light emission at each sample point:

    gA(~x) = 1 / (RΩ − a).   (12)

Reformulation of gA(~x) also allows to include varying emissive properties for different tissue types. When such an emissive mapping Le0(~x) is available, gA(~x) can be defined as follows:

    gA(~x) = 1 / (RΩ − a) + Le0(~x).   (13)

An application of the occlusion-only illumination is shown in Figure 6 (left), while Figure 6 (right) shows the application of the additional emissive term, which is set proportional to the signal strength of a coregistered functional magnetic resonance imaging (fMRI) signal [NEO∗10].

A straightforward implementation of local ambient occlusion would result in long rendering times, as for each ~x multiple rays need to be cast to determine the occlusion factor. Therefore, it has been implemented by exploiting a multiresolution approach [LLY06]. This flat blocking data structure represents different parts of the volume at different levels of detail.

Figure 6: An application of local ambient occlusion in a medical context. Usage of the occlusion term allows to convey the 3D structure (left), while multimodal emissive lighting allows to integrate fMRI as an additional modality (right). (images taken from [NEO∗10])
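As a concrete illustration of the local ambient occlusion estimate of Equations 11 and 12, the following sketch evaluates the term on the CPU for a procedurally given opacity field. The sampling setup (ray directions, step size, radii) and all names are illustrative assumptions for this example, not taken from [HLY07]:

```python
import math

def transmittance(opacities_so_far, step):
    # T ≈ exp(-sum of extinction samples times step size)
    return math.exp(-sum(opacities_so_far) * step)

def local_ambient_occlusion(opacity, x, directions, a=1.0, r_omega=8.0, step=1.0):
    """Estimate Lamb(x) by casting short rays through the neighborhood
    [a, r_omega] around x and weighting the fractional light emission
    g_A = 1/(r_omega - a) by the transmittance back toward x."""
    g_a = 1.0 / (r_omega - a)          # volumetric emission per sample (Eq. 12)
    total = 0.0
    for d in directions:
        seen = []                       # opacities encountered along this ray
        light = 0.0
        t = a
        while t <= r_omega:
            p = tuple(xi + t * di for xi, di in zip(x, d))
            seen.append(opacity(p))
            light += g_a * transmittance(seen, step) * step
            t += step
        total += light
    return total / len(directions)      # average over the neighborhood rays
```

With an empty volume the estimate approaches the unoccluded ambient intensity, while nearby opaque structures reduce it, which is exactly the obscurance behavior described above.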

© The Eurographics Association 2012.


60 Jönsson et al. / STAR: Interactive Volume Illumination

Thus, empty homogeneous regions can be stored at a very low level of detail, which requires fewer sampling operations when computing Lamb(~x).

Local ambient occlusion is not bound to a specific rendering paradigm. When combined with gradient-based lighting, it supports point and directional light sources, while the actual occlusion simulates an exclusive ambient light only. The rendering time is interactive, while the multiresolution data structure needs to be updated whenever the transfer function has been changed. The memory size of the multiresolution data structure depends on the size and homogeneity of the data set. In a follow-up work, more results and the usage of clipping planes during the local ambient occlusion computation are discussed [HLY09].

Ruiz et al. also exploit ambient illumination in their obscurance based volume rendering framework [RBV∗08]. To compute the ambient occlusion, Ruiz et al. perform a visibility test along the occlusion rays and evaluate different distance weightings. Besides advanced illumination, they also facilitate the occlusion to obtain illustrative renderings as well as for the computation of saliency maps.

Dynamic Ambient Occlusion [RMSD∗08] is a histogram based approach, which allows integration of ambient occlusion, color bleeding and basic scattering effects, as well as a simple glow effect that can be used to highlight regions within the volume. While the techniques presented in [HLY07] and [HLY09] exploit a ray-based approach to determine the degree of occlusion in the neighborhood of ~x, dynamic ambient occlusion uses a precomputation stage, where intensity distributions are computed for each voxel's neighborhood. The main idea of this approach is to modulate these intensity distributions with the transfer function during rendering, such that an integration over the modulated result allows to obtain the occlusion. The intensity distributions are stored as normalized local histograms. Storing one normalized local histogram for each voxel would result in a large amount of data which needs to be stored and accessed during rendering. Therefore, a similarity based clustering is performed on the histograms, whereby a vector quantization approach is exploited. After clustering, the local histograms representing a cluster are available as a 2D texture table, which is accessed during rendering. Additionally, a scalar volume data set is generated, which contains a cluster ID for each voxel and thus associates the voxel with the corresponding row storing the local histogram in the 2D texture.

During rendering, the precomputed information is used to support four different illumination effects: ambient occlusion, color bleeding, scattering and a simple glow effect. Therefore, the representative histogram is looked up for the current voxel and then modulated with the transfer function color to produce the final color. To integrate ambient occlusion into an isosurface based volume renderer, the gradient-based shading formulation described above is modulated, such that the ambient intensity Lamb(~x) is derived from the precomputed data:

    Lamb(~x) = (1.0 − Oenv(~x, ∇τ(f(~x)))) · ka,   (14)

where Oenv is the local occlusion factor, which is computed as:

    Oenv(~x, ∇τ(f(~x))) = 1 / ((2/3) · π · ΩR³) · Σ_{0≤j<2^b} τα(j) · LHj(~x).   (15)

Here, ΩR depicts the radius of the region for which the local histograms LH have been precomputed. b is the data bit depth and thus 2^b is the number of bins of the local histogram. The bins LHj are modulated based on the transparency τα(j) assigned to each intensity j. This operation is performed through texture blending on the GPU, and thus allows to update the occlusion information in real-time. To apply this approach to direct volume rendering, the environmental color Eenv is derived in a similar way as the occlusion factor Oenv:

    Eenv(~x, ∇τ(f(~x))) = 1 / ((2/3) · π · ΩR³) · Σ_{0≤j<2^b} τα(j) · τrgb(j) · LHj(~x),   (16)

where τrgb(j) represents the emissive color of voxels having the intensity j. During rendering, the current voxel's diffuse color coefficient kd is obtained by blending between the transfer function color τrgb and Eenv, using the occlusion factor Oenv as blend factor. Thus, color bleeding is simulated, as the current voxel's color is affected by its environment. To support a simple glow effect, an additional mapping function can be exploited, which is also used to modulate the local histograms stored in the precomputed 2D texture.

Dynamic ambient occlusion is not bound to a specific rendering paradigm, though in the original paper it has been introduced in the context of volume ray-casting [RMSD∗08]. Since it facilitates a gradient-based Blinn-Phong illumination model, which is enriched by ambient occlusion and color bleeding, it supports point, distant and ambient light sources. While an ambient light source is always exclusive, the technique can be used with several point and distant light sources, which are used to compute Ldiff and Lspec. Since the precomputed histograms are constrained to local regions, only local shadows are supported, which have a soft appearance. A basic scattering extension, exploiting one histogram for each halfspace defined by a voxel's gradient, is also supported. The rendering performance is interactive, as in comparison to standard gradient-based shading only two additional texture lookups need to be performed. The illumination update time is also very low, since the histogram modulation is performed through texture blending.
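To make the histogram modulation of Equation 15 concrete, the following sketch evaluates the occlusion factor for one voxel. The tiny 8-bin setup (b = 3), the cluster table and all names are illustrative assumptions for this example, not the original implementation:

```python
import math

def occlusion_factor(local_hist, tf_alpha, r_omega):
    """O_env: weight the local histogram bins LH_j by the transparency
    tau_alpha(j) of the current transfer function and normalize by the
    neighborhood volume (2/3) * pi * R^3 (cf. Eq. 15)."""
    norm = (2.0 / 3.0) * math.pi * r_omega ** 3
    return sum(tf_alpha[j] * local_hist[j] for j in range(len(local_hist))) / norm

# Illustrative clustered data: two representative histograms (the rows of
# the 2D texture table) and a per-voxel cluster ID volume.
cluster_histograms = [
    [20.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # cluster 0: low intensities
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 19.0],   # cluster 1: high intensities
]
voxel_cluster_id = {(0, 0, 0): 0, (1, 0, 0): 1}

def voxel_occlusion(voxel, tf_alpha, r_omega=2.0):
    # look up the representative histogram via the cluster ID volume
    hist = cluster_histograms[voxel_cluster_id[voxel]]
    return occlusion_factor(hist, tf_alpha, r_omega)
```

Changing tf_alpha only re-weights the precomputed bins without touching the histograms, which is why transfer function updates are cheap; on the GPU this weighting corresponds to the texture blending step mentioned above.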

For the precomputation stage, Ropinski et al. [RMSD∗08] report expensive processing times, which are required to compute the local histograms and to perform the clustering. In a more recent approach, Mess and Ropinski propose a CUDA-based approach which reduces the precomputation times by a factor of up to 10 [MR10]. During precomputation, one additional scalar cluster lookup volume as well as the 2D histogram texture are generated. Since the histogram computation takes a lot of processing time, interactive clipping is not supported, as it would affect the density distribution in the voxel neighborhoods.

4.2. Slice Based Techniques

The slice based techniques covered in this subsection are all bound to the slice based rendering paradigm, originally presented by Cabral et al. [CCF94]. This volume rendering paradigm exploits a stack consisting of a high number of rectangular polygons, which are used as a proxy geometry to represent a volumetric data set. Different varieties of this paradigm exist, though today image plane aligned polygons, to which the volumetric data set stored in a 3D texture is mapped, are most frequently used.

Half Angle Slicing was introduced by Kniss et al. [KPHE02]. It is the first of the slice based approaches which synchronizes the slicing used for rendering with the illumination computation, and the first integration of volumetric illumination models including indirect lighting and scattering within interactive applications. The indirect lighting is modeled as:

    Li(~x, ~ωi) = Ll · T(~xl, ~x).   (17)

To include multiple scattering, a directed diffusion process is modeled through an angle dependent blur function. Thus, the indirect lighting is rewritten as:

    Li(~x, ~ωi) = Ll · T(~xl, ~x) + Ll · T(~xl, ~x) · Blur(θ),   (18)

where θ is a user defined cone angle. In addition to this modified indirect lighting, which simulates multiple scattering, half angle slicing also exploits a surface shading factor S(~x). S(~x) can either be user defined through a mapping function or derived from the gradient magnitude at ~x. It is used to weight the degree of emission and surface shading, such that the illumination C(~x, ~ωi, ~ωo) at ~x is computed as:

    C(~x, ~ωi, ~ωo) = Le(~x) · (1 − S(~x)) + s(~x, ~ωi, ~ωo) · S(~x).   (19)

C(~x, ~ωi, ~ωo) is then used in the volume rendering integral as follows:

    L(~x, ~ωo) = L0(~x0, ~ωo) · T(~x0, ~x) + ∫_{~x0}^{~x} C(~x′, ~ωi, ~ωo) · Li(~x′, ~ωi) · T(~x′, ~x) d~x′.   (20)

In a follow-up work, Kniss et al. have proposed how to enhance their optical model by using phase functions [KPH∗03]. Therefore, the indirect lighting contribution is rewritten as:

    Li(~x, ~ωi) = Ll · p(~x, ~ωi, ~ωo) · T(~xl, ~x) + Ll · T(~xl, ~x) · Blur(θ).   (21)

The directional nature of the indirect lighting in Li(~x, ~ωi) supports an implementation where the rendering and the illumination computation are synchronized. Kniss et al. achieve this by exploiting the half angle vector introduced before [KKH02]. This technique chooses the slicing axis such that it is halfway between the light direction and the view direction, or the light direction and the inverted view direction if the light source is behind the volume. Thus, each slice can be rendered from the point of view of both the observer and the light, and no intermediate storage for the illumination information is required. The presented implementation uses three buffers for accumulating the illumination information and compositing along the view ray.

Due to the described synchronization behavior, half angle slicing is, as all techniques described in this subsection, bound to the slicing rendering paradigm. It supports one directional or point light source located outside the volume, and allows to produce shadows having hard, well defined borders (see Figure 1 (right)). By integrating a directed diffusion process in the indirect lighting computation, multiple scattering can be simulated, which allows a realistic representation of translucent materials. Half angle slicing performs two rendering passes for each slice, one from the point of view of the observer and one from that of the light source, and thus allows to achieve interactive frame rates. Since rendering and illumination computation are performed in a lockstep manner, besides the three buffers used for compositing, no intermediate storage is required. Clipping planes can be applied interactively.
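The half angle vector selection described above can be sketched as follows. The vector helpers are illustrative, and the sketch only covers the axis choice, not the synchronized slice rendering itself:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def half_angle_axis(view_dir, light_dir):
    """Choose the slicing axis halfway between the light direction and the
    view direction, or the inverted view direction if the light source is
    behind the volume (light and view directions oppose each other)."""
    v = normalize(view_dir)
    l = normalize(light_dir)
    if dot(v, l) < 0.0:
        v = tuple(-c for c in v)   # light behind: use inverted view direction
    return normalize(tuple(vc + lc for vc, lc in zip(v, l)))
```

Slices orthogonal to this axis can be rasterized from both the eye and the light in the same pass, which is what removes the need for an intermediate illumination volume.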

Figure 7: Directional occlusion shading [SPH∗09] integrates soft shadowing effects (left). Multidirectional occlusion shading [vPBV10] additionally supports different lighting positions (middle) and (right). (Images taken from [SPH∗09] and [vPBV10], courtesy of Mathias Schott and Veronika Šoltészová)

Directional Occlusion Shading [SPH∗09] is another slice based technique that exploits front-to-back rendering of view aligned slices. In contrast to the half angle slicing approach discussed above, directional occlusion shading does not adapt the slicing axis with respect to the light position. Instead, it solely uses a phase function p(~x, ~ωi, ~ω) to propagate occlusion information during the slice traversal from front to back:

    p(~x, ~ωi, ~ω) = { 0,  if ~ωi · ~ω < θ(~x);  1 / (2π · (1 − cos(θ))),  otherwise },   (22)

where ~ω is the viewing direction and the ~ωi lie in the cone whose tip points along ~ω. Similar to the approach by Kniss et al. [KPH∗03], this is a peaked phase function, in this case backward peaked, where the scattering direction is defined through the cone angle θ. This backward peaked behavior is important in order to support synchronized occlusion computation and rendering. Thus, p(~x, ~ωi, ~ω) can be used during the computation of the indirect lighting as follows:

    Li(~x, ~ω) = L0(~x0, ~ω) · ∫ p(~x, ~ωi, ~ω) · T(~xl′, ~x) d~ωi,   (23)

where the background intensity is modulated by p(~x, ~ωi, ~ω), which is evaluated over the contributions coming from all directions ~ωi over the sphere. Since p(~x, ~ωi, ~ω) is defined as a backward peaked phase function, in practice the integral only needs to be evaluated in an area defined by the cone angle θ. To incorporate the occlusion based indirect lighting into the volume rendering integral, Schott et al. propose to modulate it with a user defined scattering coefficient σs(~x):

    L(~x, ~ωo) = L0(~x0, ~ωo) · T(~x0, ~x) + ∫_{~x0}^{~x} σs(~x′) · Li(~x′, ~ωi, ~ω) · T(~x′, ~x) d~x′.   (24)

The implementation of occlusion based shading is similar to standard front-to-back slice rendering. However, similar as in half angle slicing [KPHE02], two additional occlusion buffers are used for compositing the lighting contribution.

Due to the fixed compositing order, directional occlusion shading is bound to the slice based rendering paradigm and supports only a single light source, which is located at the camera position. The iterative convolution allows to generate soft shadowing effects, and incorporating the phase function accounts for first order scattering effects (see Figure 7 (left) and (middle)). The performance is interactive and comparable to half angle slicing, and since no precomputation is used, no illumination updates need to be performed. Accordingly, the memory footprint is also low, as only the additional occlusion buffers need to be allocated. Finally, clipping planes can be used interactively.

Patel et al. apply concepts similar to directional occlusion shading when visualizing seismic volume data sets [PBVG10]. By using the incremental blurring operation modeled through the phase function, they are able to visualize seismic data sets without emphasizing the data inherent noise. A more recent extension by Schott et al. allows to integrate geometry and support light interactions between the geometry and the volumetric media [SMGB12].

Multidirectional Occlusion Shading, introduced by Šoltészová et al. [vPBV10], extends the directional occlusion technique [SPH∗09] to allow for a more flexible placement of the light source. Therefore, the convolution kernel, which is used for the iterative convolution performed during the slicing, is exchanged. Directional occlusion shading makes the assumption that the light source direction is aligned with the viewing direction, and therefore does not allow to change the light source position. Multidirectional occlusion shading avoids this assumption by introducing more complex convolution kernels. Instead of performing the convolution of the opacity buffer with a symmetrical disc-shaped kernel, Šoltészová et al. propose the usage of an elliptical kernel derived from a tilted cone c. In their paper, they derive the convolution kernel from the light source position and the aperture θ defining c. Due to the directional component introduced by the front-to-back slicing, the tilt angle of the cone is limited to lie in [0, π/2 − θ], which restricts the shape of the filter kernel from degenerating into hyperbolas or parabolas. The underlying illumination model is the same as with directional occlusion shading, whereas the indirect lighting contribution is modified by exploiting ellipse shaped filter kernels.

As standard directional occlusion shading, multidirectional occlusion shading uses front-to-back slice rendering, which tightly binds it to this rendering paradigm. Since the introduced filter kernels allow to model light sources at different positions, multiple light sources are supported. However, due to the directional nature, light sources must be positioned in the viewer's hemisphere. Shadowing and scattering capabilities are the same as with directional occlusion shading, i.e., soft shadows and first order scattering are supported.
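The iterative occlusion propagation that directional and multidirectional occlusion shading perform on the opacity buffer can be sketched in one image dimension. The box filter below is only a stand-in for the disc- and ellipse-shaped kernels of the actual techniques, and all names are illustrative:

```python
def blur(buffer, radius):
    """Box filter standing in for the cone-defined convolution kernel."""
    n = len(buffer)
    out = []
    for i in range(n):
        window = buffer[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def occlusion_sweep(slices, kernel_radius=1):
    """Front-to-back sweep: each slice is lit by the blurred transmittance
    of everything in front of it, then adds its own opacity to the
    occlusion buffer before the next slice is processed."""
    transmittance = [1.0] * len(slices[0])   # occlusion buffer, fully lit
    lit = []
    for slice_opacity in slices:
        incident = blur(transmittance, kernel_radius)
        lit.append([o * i for o, i in zip(slice_opacity, incident)])
        transmittance = [t * (1.0 - o)
                         for t, o in zip(transmittance, slice_opacity)]
    return lit
```

With a zero kernel radius the sweep produces hard attenuation only; a wider kernel lets occluders darken neighboring pixels in subsequent slices, which is the origin of the soft shadow borders. Tilting or reshaping the kernel, as in multidirectional occlusion shading, shifts where that darkening lands.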

However, since different light positions are supported, shadows can be made more prominent in the final image when the light direction differs from the view direction. This effect is shown in Figure 7, where Figure 7 (middle) shows a rendering where the light source is located in the camera position, while the light source has been moved to the left in Figure 7 (right). Due to the more complex filter kernel, the rendering time of this model is slightly higher than when using directional occlusion shading. The memory consumption is equally low.

4.3. Light Space Based Techniques

As light space based techniques we consider all those techniques where illumination information is directionally propagated with respect to the current light position. Unlike the slice based techniques, these techniques are independent of the underlying rendering paradigm. However, since illumination information is propagated along a specific direction, these techniques usually support only a single light source.

Deep Shadow Mapping has been developed to enable shadowing of complex, potentially semi-transparent, structures [LV00]. While standard shadow mapping [Wil78] supports basic shadowing by projecting a shadow map as seen from the light source [SKvW∗92], it does not support semi-transparent occluders, which often occur in volume rendering. In order to address semi-transparent structures, opacity shadow maps serve as a stack of shadow maps, which store alpha values instead of depth values in each shadow map [KN01]. Nevertheless, deep shadow maps are a more compact representation for semi-transparent occluders. They also consist of a stack of textures, but in contrast to the opacity shadow maps, an approximation to the shadow function is stored in these textures. Thus, it is possible to approximate shadows by using fewer hardware resources. Deep shadow mapping has first been applied to volume rendering by Hadwiger et al. [HKSB06]. An alternative approach has been presented by Ropinski et al. [RKH08]. While the original deep shadow map approach [LV00] stores the overall light intensity in each layer, in volume rendering it is advantageous to store the absorption given by the accumulated alpha value, in analogy to the volume rendering integral. Thus, for each shadow ray, the alpha function, i.e., the function describing the absorption, is analyzed and approximated by using linear functions.

Figure 8: The visible human head data set rendered without shadows (top left), with shadow rays (top right), with shadow mapping (bottom left) and with deep shadow maps (bottom right) (images taken from [RKH08]).

To have a termination criterion for the approximation of the shadowing function, the depth interval covered by each layer can be restricted. However, when it is determined that the currently analyzed voxels cannot be approximated sufficiently by a linear function, smaller depth intervals are considered. Thus, the approximation works as follows. Initially, the first hit point for each shadow ray is computed, similar as done for standard shadow mapping. Next, the distance of the first hit point to the light source and the alpha value for this position are stored within the first layer of the deep shadow map. At the first hit position, the alpha value usually equals zero. Starting from this first hit point, each shadow ray is traversed and it is checked iteratively whether the samples encountered so far can be approximated by a linear function. When processing a sample where this approximation would not be sufficient, i.e., a user defined error is exceeded, the distance of the previous sample to the light source as well as the accumulated alpha value at the previous sample are stored in the next layer of the deep shadow map. This is repeated until all layers of the deep shadow map have been created.

The error threshold used when generating the deep shadow map data structure is introduced to determine whether the currently analyzed samples can be approximated by a linear function. In analogy to the original deep shadow mapping technique [LV00], this error value constrains the variance of the approximation. This can be done by adding (resp. subtracting) the error value at each sample's position. When the alpha function does not lie anymore within the range given by the error threshold, a new segment to be approximated by a linear function is started. A too small error threshold results in a too close approximation, and more layers are needed to represent the shadow function.

Once the deep shadow map data structure is obtained, the resulting shadowing information can be used in different ways. A common approach is to use the shadowing information in order to diminish the diffuse reflectance Ldiff when using gradient-based shading. Nevertheless, it can also be used to directly modulate the emissive contribution Le at ~x.

A comparison of deep shadow mapping, which allows semi-transparent occluders, with shadow rays and standard shadow mapping is shown in Figure 8. As can be seen, deep shadow maps introduce artifacts when thin occluder structures are present. The shadows of these structures show transparency effects although the occluders are opaque. This results from the fact that an approximation of the alpha function is exploited, and especially thin structures may diminish when approximating over too long depth intervals.

The deep shadow mapping technique is not bound to a specific rendering paradigm. It supports directional and point light sources, which should lie outside the volume since the shadow propagation is unidirectional. For the same reason, only a single light source is supported. Both the rendering time and the update of the deep shadow data structure, which needs to be done when lighting or transfer function changes, are interactive. The memory footprint of deep shadow mapping is directly proportional to the number of used layers.

Desgranges et al. [DEP05] also propose a gradient-free technique for shading and shadowing which is based on projective texturing. Their approach produces visually pleasing shadowing effects and can be applied interactively.

Image Plane Sweep Volume Illumination [SYR11] allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. By using sweeping, it becomes possible to reduce the illumination computation complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. The approach is based on a reformulation of the optical model that either considers only single scattering with a single light direction, or multiple scattering resulting from a rather high albedo [MC10]. To have shadowing effects of higher frequency and more diffuse scattering effects at the same time, these two formulations are combined. This is achieved by adding to the local emission at ~x the following scattering term:

    g(~x, ωo) = (1 − a(~x)) · p(~x, ωl, ωo) · Iss(~x, ωl) + a(~x) · ∫ p(~x, ωi, ωo) · Ims(~x, ωi) dωi.   (25)

Here, ωl is the principal light direction and ωi are all directions from which light may enter. a(~x) is the albedo, which is the probability that light is scattered rather than absorbed. Thus, Iss(~x, ωl) describes the single scattering contribution, and Ims(~x, ωi) describes the multiple scattering contribution. Similar to the formulation presented by Max and Chen [MC10], the single scattering contribution is written as:

    Iss(~x, ωl) = T(~xl, ~x) · L0 + ∫_{~xl}^{~x} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′.   (26)

Multiple scattering is simulated by assuming the presence of a high albedo, making multiple scattering predominant and allowing to approximate it by using a diffusion approach:

    Ims(~x, ωi) = ∫_{~xb}^{~x} ∇diff · (1 / |~x′ − ~x|) · Le(~x′) d~x′.   (27)

Here, ∇diff represents the diffusion approximation and 1/|~x′ − ~x| results in a weighting such that more distant samples have less influence. ~xb represents the background position just lying outside the volume. In practice, the computation of the scattering contributions can be simplified when assuming the presence of a forward peaked phase function, such that all relevant samples lie in the direction of the light source. Thus, the computation can be performed in a synchronized manner, which is depicted by the following formulation for single scattering:

    I′ss(~x, ωi) = T(~xl, ~x) · L0 + ∫_{~x″}^{~x} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′ + ∫_{~xl}^{~x″} T(~x′, ~x) · τ(~x′) · Le(~x′) d~x′,   (28)

and a similar representation for the multiple scattering contribution:

    I′ms(~x, ωi) = ∫_{~x″}^{~x} ∇diff · (1 / |~x′ − ~x|) · Le(~x′) d~x′ + ∫_{~xb}^{~x″} ∇diff · (1 / |~x′ − ~x|) · Le(~x′) d~x′.   (29)

Image plane sweep volume illumination facilitates this splitting at ~x″, which is based on the current state of the sweep line, by performing a synchronized ray marching. During this process, the view rays are processed in an order such that those being closer to the light source in image space are processed earlier. This is ensured by exploiting a modified plane sweeping approach, which iteratively computes the influence of structures when moving further away from the light source. To store the illumination information accumulated along all light rays between the samples of a specific sweep plane, a 2D illumination cache in form of a texture is used.

Image plane sweep volume illumination has been developed as a ray-based technique and thus can only be used with this rendering paradigm. It supports one point or directional light source, and simulates global shadowing and multiple scattering effects. Since all illumination computations are performed directly within a single rendering pass, it does not require any preprocessing and does not need to store intermediate results within an illumination volume, which results in a low memory footprint. The interactive application of clipping planes is inherently supported.

Shadow Splatting has been proposed first in 2001 by Nulkar and Mueller [NM01]. As the name implies, it is bound to the splatting rendering paradigm, which is exploited to attenuate the lighting information as it travels through the volume. Therefore, a shadow volume is introduced, which is updated in a first splatting pass, during which light information is attenuated as it travels through the volume. In the second splatting pass, which is used for rendering, the shadow volume is fetched to acquire the incident illumination. The indirect light computation in the first pass can be described as:

    Li(~x, ~ωi) = Ll · T(~xl, ~x).   (30)

The actual shading is performed using Phong shading, whereas the diffuse and the specular lighting intensities are replaced by the indirect light Li(~x, ~ωi).

Zhang and Crawfis [ZC02, ZC03] present an algorithm which does not require to store a shadow volume in memory. Instead, their algorithm exploits two additional image buffers for handling illumination information. These buffers are used to attenuate the light during rendering, and to add its contribution to the final image. Whenever a contribution is added to the final image buffer seen from the camera, this contribution is also added to the shadow buffer as seen from the light source.

Shadow splatting can be used for multiple point light and distant light sources, as well as textured area lights. While the initial algorithm [ZC02] only supports hard shadows, a more recent extension integrates soft shadows [ZC03]. When computing shadows, the rendering time is approximately doubled with respect to regular splatting. The algorithm by Zhang and Crawfis [ZC02, ZC03] updates the 2D image buffers during rendering and thus requires no considerable update times, while the algorithm by Nulkar and Mueller [NM01] needs to update the shadow volume, which requires extra memory and update time. With a more recent extension, the integration of geometry also becomes possible [ZXC05].

4.4. Lattice Based Techniques

As lattice based techniques we classify all those approaches that compute the illumination directly based on the underlying lattice. While many of the previous approaches compute the illumination per sample during rendering, the lattice based techniques perform the computations per voxel, usually by exploiting the nature of the lattice. In this subsection, we also focus on those techniques which allow interactive data exploration, though it should be mentioned that other techniques working on non-regular grids exist [QXF∗07].

Shadow Volume Propagation is a lattice-based approach where illumination information is propagated in a preprocessing step through the regular grid defining the volumetric data set. The first algorithm realizing this principle has been proposed by Behrens and Ratering in the context of slice based volume rendering [BR98]. Their technique simulates shadows caused by the attenuation of one distant light source. These shadows, which are stored within a shadow volume, are computed by attenuating illumination slice by slice through the volume. More recently, the lattice-based propagation concept has been exploited also by others. Ropinski et al. have exploited such a propagation to allow high frequency shadows and low frequency scattering simultaneously [RDRS10b]. Their approach has been proposed as a ray-casting-based technique that approximates the light propagation direction in order to allow efficient rendering. It facilitates a four channel illumination volume to store luminance and scattering information obtained from the volumetric data set with respect to the currently set transfer function. Therefore, similar to Kniss et al. [KPH∗03], indirect lighting is incorporated by blurring the incoming light within a given cone centered about the incoming light direction. However, different to [KPH∗03], shadow volume propagation uses blurring only for the chromaticity and not the intensity of the light (luminance). Although this procedure is not physically correct, it helps to accomplish the goal of obtaining harder shadow borders, which according to Wanger improve the perception of spatial structures [WFG92, Wan92]. Thus, the underlying illumination model can be specified as follows:

    L(~x, ~ωo) = L0(~x0, ~ωo) · T(~x0, ~x) + ∫_{~x0}^{~x} (Le(~x′) + s(~x′, ~ωi, ~ωo)) · T(~x′, ~x) d~x′,   (31)

where s(~x, ~ωi, ~ωo) is calculated as the transport color t(~x), as assigned through the transfer function, multiplied by the chromaticity c(~x, ~ωo) of the in-scattered light and modulated by the attenuated luminance li(~x, ~ωi):

    s(~x, ~ωi, ~ωo) = t(~x) · c(~x, ~ωo) · li(~x, ~ωi).   (32)

The transport color t(~x) as well as the chromaticity c(~x, ~ωo) are wavelength dependent, while the scalar li(~x, ~ωi) describes the incident (achromatic) luminance.

© The Eurographics Association 2012.
66 Jönsson et al. / STAR: Interactive Volume Illumination

all together results in the incident radiance. Chromaticity c(x, ω_o) is computed from the in-scattered light

    c(x, \omega_o) = \int_{\Omega} \tau(x) \cdot p(x, \omega_i', \omega_o) \cdot c(x, \omega_i') \, d\omega_i',    (33)

where Ω is the unit sphere centered around x. To allow the unidirectional illumination propagation, p(x, ω_i', ω_o) has been chosen to be a strongly forward peaked phase function:

    p(x, \omega_i', \omega_o) = \begin{cases} (\omega_i' \cdot \omega_o)^{\beta} & \text{if } \omega_i' \cdot \omega_o < \theta(x) \\ 0 & \text{otherwise.} \end{cases}    (34)

The cone angle θ is used to control the amount of scattering and depends on the intensity at position x, and the phase function is a Phong lobe whose extent is controlled by the exponent β, restricted to the cone angle θ.

During rendering, the illumination volume is accessed to look up the luminance value and scattering color of the current voxel. The chromaticity c(x, ω_o) and the luminance l_i(x, ω_i) are fetched from the illumination volume and used within s(x, ω_i, ω_o) as shown above. In cases where a high gradient magnitude is present, specular reflections, and thus surface-based illumination, are also integrated into s(x, ω_i, ω_o).

Shadow volume propagation can be combined with various rendering paradigms, and does not require a noticeable amount of preprocessing, since the light propagation is carried out on the GPU. It supports point and directional light sources, whereby a more recent extension also supports area light sources [RDRS10a]. While the position of the light source is unconstrained, the number of light sources is limited to one, since only one illumination propagation direction is supported. By simulating the diffusion process, shadow volume propagation supports multiple scattering. Rendering times are quite low, since only one additional 3D texture fetch is required to obtain the illumination values. However, when the transfer function is changed, the illumination information needs to be updated and thus the propagation step needs to be recomputed. Nevertheless, this still supports interactive frame rates. Since the illumination volume is stored and processed on the GPU, a sufficient amount of graphics memory is required.

Piecewise Integration [HLY08] computes global light transport by dividing rays into k segments and evaluating the incoming radiance using:

    L_i(x, \omega_i) = L_0 \cdot \prod_{n=0}^{k} T(x_n, x_{n+1}).    (35)

Short rays are first cast towards the light source for each voxel, and the piecewise segments are stored in an intermediate 3D texture using a multiresolution data structure [LLY06]. Then, again for each voxel, global rays are sent towards the light source, now taking steps of size equal to the short rays and sampling from the piecewise segment 3D texture. In addition to direct illumination, first order in-scattering is approximated by treating scattering in the vicinity as emission. To preserve interactivity, the final radiance is progressively refined by sending one ray at a time and blending the result into a final 3D texture.

The piecewise integration technique can be combined with other rendering paradigms, even though it has been developed for volume ray-casting. It supports a single directional or point light source, which can be dynamically moved, and it includes an approximation of first order in-scattering. Rendering times are low, since only a single extra lookup is required to compute the incoming radiance at the sample position. As soon as the light source or transfer function changes, the radiance must be recomputed, which can be performed interactively and progressively. Three additional 3D textures are required for the computations; however, a multiresolution structure and a lower resolution grid are used to alleviate some of the additional memory requirements.

Summed Area Table techniques were first exploited by Diaz et al. [DYV08] for the computation of various illumination effects in interactive direct volume rendering. The 2D summed area table (SAT) is a data structure where each cell (x, y) stores the sum of every cell located between the initial position (0, 0) and (x, y), such that the 2D SAT for each pixel p can be computed as:

    SAT_{image}(x, y) = \sum_{i=0,\, j=0}^{x,\, y} p_{i,j}.    (36)

A 2D SAT can be computed incrementally during a single pass, and the computation cost increases linearly with the number of cells. Once a 2D SAT has been constructed, an efficient evaluation of a region can be performed with only four texture accesses at the corners of the region to be evaluated. Diaz et al. [DYV08] created a method, known as Vicinity Occlusion Mapping (VOM), which effectively utilizes two SATs computed from the depth map: one which stores the accumulated depth for each pixel, and one which stores the number of contributing values in the neighborhood of each pixel. The vicinity occlusion map is then used to incorporate view dependent ambient occlusion and halo effects in the volume rendered image. The condition for which a pixel should be occluded is determined by evaluating the average depth around that pixel. If this depth is larger than the depth of the pixel, the neighboring structures are further away, which means this structure is not occluded. If the structure in the current pixel is occluded, the amount of occlusion, i.e., how much the pixel should be darkened, is determined by the amount of differences of depths around the corresponding pixel. A halo effect can be achieved by also considering the average depth from the neighborhood. If the structure in the pixel lies outside the object, a halo color is rendered which decays with respect to the distance to the object.
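The single-pass SAT construction and the four-corner region query can be sketched in a few lines. The following NumPy illustration is our own (the helper names are hypothetical, not code from [DYV08] or [DVND10]); it also shows the average-depth query that the VOM occlusion test relies on:

```python
import numpy as np

def build_sat(img):
    # Summed area table: sat[y, x] holds the sum of all cells in the
    # rectangle spanned by (0, 0) and (x, y); built incrementally in
    # a single pass over the image.
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(sat, y0, x0, y1, x1):
    # Sum over the inclusive region [y0..y1] x [x0..x1] using only the
    # four corner lookups (inclusion-exclusion), independent of region size.
    total = sat[y1, x1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total

depth = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy depth map
sat = build_sat(depth)

# VOM-style test: average depth in the 3x3 vicinity of pixel (1, 1);
# comparing it against depth[1, 1] decides whether the pixel is occluded.
avg = region_sum(sat, 0, 0, 2, 2) / 9.0
occluded = avg > depth[1, 1]
```

The Density Summed Area Table discussed next lifts the same idea to 3D, where a box query requires the eight corner lookups of three-dimensional inclusion-exclusion instead of four.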
Density Summed Area Table [DVND10] utilizes a 3D SAT for ambient occlusion, calculated from the density values stored in the volume. This procedure is view independent and supports the incorporation of semi-transparent structures, while it is more accurate than the 2D image space approach [DYV08]. However, while only 8 texture accesses are needed to evaluate a region in a 3D SAT, the preprocessing is more computationally expensive and it results in a larger memory footprint. On the other hand, the 3D SAT does not need to be recomputed if the view changes. Desgranges and Engel also describe a SAT based inexpensive approximation of ambient light [DE07]. They combine ambient occlusion volumes from different filtered volumes into a composite occlusion volume.

Extinction-based Shading and Illumination is a recent technique by Schlegel et al. [SMP11], where a 3D SAT is used to model ambient occlusion, color bleeding, directional soft shadows and scattering effects by using the summation of the exponential extinction coefficient. As the summation of the exponential extinction coefficient in their model is order independent, the use of a 3D SAT and efficient aggregate summation decreases the computational cost significantly. The directional soft shadows and scattering effects are incorporated by estimating a cone defined by the light source. The approximation of that cone is done with a series of cuboids. Texture accesses in the SAT structure can only be performed axis-aligned; thus the cuboids are placed along the two axis-aligned planes with the largest projection size of the cone towards the light. Schlegel et al. do this by defining the primary cone axis to be the axis with the smallest angle to the vector towards the light source, and then defining two secondary axes (of four possibilities) which are afterwards used to form two axis-aligned planes. A selection of renderings using this method is shown in Figure 9.

Figure 9: Extinction-based Shading and Illumination [SMP11] allows interactive integration of local and global illumination properties and advanced light setups. (Images taken from [SMP11], courtesy of Philipp Schlegel)

Extinction-based Shading and Illumination is not limited to any specific rendering paradigm. The technique can handle point, directional and area light sources. For each additional light source, the extinction must be estimated, which means that rendering time increases with the number of light sources. In terms of memory consumption, an extra 3D texture is required for the SAT, which can be of lower resolution than the volume data while still producing convincing results.

4.5. Basis Function Based Techniques

The group of basis function based techniques covers those approaches where the light source radiance L_l and transparency (visibility) T(x_l, x) from the position x towards the light source position x_l are represented using a basis function to compute the incident radiance in equation 3. Since radiance and visibility are computed independently, light sources may be dynamically changed without an expensive visibility update. Furthermore, the light source composition may be complex, as the light sources are projected to another basis which is independent of the number of light sources. Up to now, the only basis functions used in the area of volume rendering are spherical harmonics. However, to be able to incorporate future techniques into our classification, we have decided to keep this group more general. There are two key properties of spherical harmonics which are important and thus worth special mention. The first is that efficient integral evaluation can be performed, which supports real-time computation of the incoming radiance over the sphere. The second property is that the representation can be rotated on the fly, and thus supports dynamic light sources. Since it is beyond the scope of this article to provide a detailed introduction to spherical harmonic based lighting, we focus on the volume rendering specific properties only, and refer to the work by Sloan et al. [SKS02] for further details. While spherical harmonics were first used in volume rendering by Beason et al. [BGB∗06] to enhance isosurface shading, in this section we cover algorithms applying spherical harmonics in direct volume rendering.

Spherical Harmonics have first been applied to direct volume rendering by Ritschel [Rit07]. He has applied spherical basis functions to simulate low-frequency shadowing effects. To enable interactive rendering, a GPU-based approach is exploited for the computation of the required spherical harmonic coefficients. Therefore, a multiresolution data structure similar to a volumetric mipmap is computed and stored on the GPU. Whenever the spherical harmonic coefficients need to be recomputed, this data structure is accessed and the ray traversal is performed with increasing step size. The increasing step size is realized by sampling the multiresolution data structure in different levels, whereby the level of detail decreases as the sampling proceeds further away from the voxel for which the coefficients need to be computed.
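The basic machinery these methods build on can be illustrated with a small, self-contained sketch; it is our own simplification (real spherical harmonics bands 0-1 only, Monte Carlo projection, none of it taken from [Rit07]). Once light and visibility are projected into the same orthonormal SH basis, their integral over the sphere collapses to a dot product of coefficient vectors, which is what makes the per-sample evaluation cheap at render time:

```python
import numpy as np

def sh_basis(d):
    # Real spherical harmonics, bands 0-1 (four coefficients) evaluated
    # for an array of unit directions d of shape (N, 3).
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return np.stack([0.282095 * np.ones_like(x),
                     0.488603 * y,
                     0.488603 * z,
                     0.488603 * x], axis=1)

def project(f_values, dirs):
    # Monte Carlo SH projection: c_k = (4*pi / N) * sum_i f(d_i) * Y_k(d_i).
    n = len(dirs)
    return (4.0 * np.pi / n) * (sh_basis(dirs) * f_values[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(20000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on the sphere

light = np.ones(len(dirs))                     # constant white environment light
visibility = (dirs[:, 2] > 0.0).astype(float)  # upper hemisphere unoccluded

c_light = project(light, dirs)
c_vis = project(visibility, dirs)

# The integral of light * visibility over the sphere reduces to a dot
# product of the coefficient vectors; the exact value here is 2*pi.
incident = np.dot(c_light, c_vis)
```

With only four coefficients the reconstruction is heavily band-limited, which mirrors the low-frequency restriction discussed later in this section; practical implementations use more bands and precompute the visibility coefficients per voxel.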

In a more recent paper, spherical harmonics were used to support more advanced material properties [LR10]. Therefore, the authors exploit the approach proposed by Ritschel [Rit07] to compute spherical harmonic coefficients. During ray-casting, the spherical integral over the product of an area light source and voxel occlusion is efficiently computed using the precomputed occlusion information. Furthermore, realistic light material interaction effects are integrated while still achieving interactive frame rates. By using a modified spherical harmonic projection approach, tailored specifically for volume rendering, it becomes possible to realize non cone-shaped phase functions in the area of interactive volume rendering. Thus, more realistic material representations become possible.

Figure 10: A computed tomography scan of a human torso rendered with diffuse gradient-based shading (left) and spherical harmonic lighting (right). (Images taken from [KJL∗12])

One problem with using a basis function such as spherical harmonics is the increased storage requirement. Higher frequencies can be represented when using more coefficients, thus obtaining better results. However, each coefficient needs to be stored to be used during rendering. Kronander et al. [KJL∗12] address this storage requirement issue and also decrease the time it takes to update the visibility. In their method, both volume data and spherical harmonic coefficients exploit the multiresolution technique of Ljung [LLY06]. The sparse representation of volume data and visibility allows substantial reductions with respect to memory requirements. To also decrease the visibility update time, a two pass approach is used. First, local visibility is computed over a small neighborhood by sending rays uniformly distributed on the sphere. Then, piecewise integration of the local visibility is performed in order to compute global visibility [HLY08]. Since both local and global visibility have been computed, the user can switch between them during rendering and thus reveal structures hidden by global features while still keeping spatial comprehension.

The spherical harmonics based methods discussed here are not bound to any specific rendering paradigm, although only volume ray-casting has been used by the techniques presented here. An arbitrary number of dynamic directional light sources is supported, i.e., environment maps. Area and point light sources are also supported, each adding an overhead, since the direction towards the sample along the ray changes and the spherical harmonics representation of the light source must be rotated. Very natural looking images can be generated due to the possibility of complex light setups and material representations (see Figure 10), in contrast to the hard shadows of techniques like half angle slicing. However, storage and computation time limit the accuracy of the spherical harmonics representation, thus restricting the illumination to low frequency effects. In general, light sources are constrained to be outside of the volume; however, when using local visibility, e.g., as in [KJL∗12], they may be positioned arbitrarily. The rendering time is interactive, only requiring some (in general 1-4) extra texture lookups and scalar products. The light source representation must be recomputed or rotated when the illumination changes, but this is generally fast and can be performed in real-time. However, the visibility must be recomputed when the transfer function or clip plane changes, which is expensive but can be performed interactively [KJL∗12]. The memory requirements are high, as each coefficient needs to be stored for the visibility in the volume. We refer to the work by Kronander et al. [KJL∗12] for an in-depth analysis of how to trade off memory size and image quality.

4.6. Ray-Tracing Based Techniques

The work discussed above focuses mainly on increasing the lighting realism in interactive volume rendering applications. This is often achieved by making assumptions regarding the lighting setup or the illumination propagation, e.g., assuming that only directional scattering occurs. As ray-tracing has been widely used in polygonal rendering to support physically correct and thus realistic images, a few authors have also investigated this direction in the area of volume rendering. Gradients, as used for the gradient-based shading approaches, can also be exploited to realize a volumetric ray-tracer supporting more physically correct illumination. One such approach, considering only first order rays to be traversed, has been proposed by Stegmaier et al. [SSKE05]. With their approach they are able to simulate mirror-like reflections and refraction, by casting a ray and computing its deflection. To simulate mirror-like reflections, they perform an environment texture lookup. More sophisticated refraction approaches have been presented by others [LM05, RC06].
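The deflection itself is plain geometric optics. A minimal sketch (our own, not code from [SSKE05]) that treats the normalized gradient as the local surface normal and applies Snell's law, with an illustrative refractive index:

```python
import numpy as np

def refract(incident, normal, eta):
    # Snell's law for a unit incident direction; eta = n1 / n2.
    # Returns None on total internal reflection.
    cos_i = -np.dot(normal, incident)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

def reflect(incident, normal):
    # Mirror-like reflection, e.g., for an environment texture lookup.
    return incident - 2.0 * np.dot(incident, normal) * normal

gradient = np.array([0.0, 0.0, 2.0])      # gradient sampled at the boundary
n = gradient / np.linalg.norm(gradient)   # gradient direction as normal
d = np.array([0.0, -np.sin(0.3), -np.cos(0.3)])  # incoming unit ray

t = refract(d, n, 1.0 / 1.33)  # ray entering a denser medium
r = reflect(d, n)
```

Both formulas match the conventions of common shading-language built-ins, so the same computation maps directly onto a GPU ray-caster.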

To produce images of high realism, stochastic Monte Carlo ray-tracing methods have also been applied in the area of interactive volume rendering. The benefit of these approaches is that they are directly coupled with physically based light transport. The first Monte Carlo based approach has been presented by Rezk-Salama [RS07]. It supports shadowing and scattering, whereby interactivity is achieved by constraining the expensive scattering computations to selected surfaces only. With respect to the gradient magnitude, it is decided if a boundary surface, potentially initiating a scattering event, is present. The approach supports interactive frame rates and the rendering of images with a high degree of realism.

More recently, Kroes et al. have presented an interactive Monte Carlo ray-tracer which does not limit the scattering to selected boundary surfaces [KPB11]. Their approach simulates several real-world factors influencing volumetric illumination, such as multiple arbitrarily shaped and textured area light sources, a real-world camera with lens and aperture, as well as complex material properties. This combination allows generating images showing a high degree of realism (see Figure 11). To adequately represent both volumetric and surface-like materials, their approach blends between a BRDF based and a phase function based material representation.

Figure 11: Monte Carlo ray-tracing allows to incorporate gradient-based material properties and the modeling of camera apertures to generate highly realistic images. (Image taken from [KPB11], courtesy of Thomas Kroes)

4.7. Isosurface Based Techniques

Besides the algorithms discussed above, which can be combined with direct volume rendering and are therefore the main focus of this article, several approaches exist that allow advanced illumination when visualizing isosurfaces. While in direct volume rendering a transfer function may extract several structures having different intensities and thus optical properties, in the simplest form of isosurface rendering only a single isosurface with homogeneous optical properties is visible. This drastically reduces the combinations of materials which interact with each other and produce complex lighting effects. Several approaches have been proposed which exploit this reduced complexity and thus allow advanced illumination with higher rendering speeds and sometimes lower memory footprints. Vicinity Shading [Ste03], which simulates illumination of isosurfaces by taking into account neighboring voxels, was the first approach addressing advanced illumination for isosurfaces extracted from volume data. In a precomputation, the vicinity of each voxel is analyzed and the resulting value, which represents the occlusion of the voxel, is stored in a shading texture which can be accessed during rendering. Wyman et al. [WPSH06] and Beason et al. [BGB∗06] focus on a spherical harmonic based precomputation, supporting the display of isosurfaces under static illumination conditions. Penner et al. exploit an ambient occlusion approach to render smoothly shaded isosurfaces [PM08]. More recently, Banks and Beason have demonstrated how to decouple illumination sampling from isosurface generation, in order to achieve sophisticated isosurface illumination results [BB09].

5. Perceptual Impact

Besides the technical insights and the achievable illumination effects described above, the perceptual impact of the current techniques is also of relevance. Although the perceptual qualities of advanced illumination techniques are frequently pointed out, remarkably little research has been done to investigate them in a systematic fashion. One study showing the perceptual benefits of advanced volumetric illumination models has been conducted with the goal to analyze the impact of the shadow volume propagation technique [RDRS10b]. The users participating in this study had to perform several tasks depending on the depth perception of static images, where one set was generated using gradient-based shading and the other one using shadow volume propagation. The results of the study indicate that depth perception is more accurate and performed faster when using shadow volume propagation.

Šoltészová et al. proposed and evaluated the perceptual impact of chromatic soft shadows in the context of interactive volume rendering [vPV11]. Inspired by illustrators, they replace the luminance decrease usually present in shadowed areas by a chromaticity blending. In their user study, they could show the perceptual benefit of these chromatic shadows with respect to depth and surface perception. However, their results also indicate that brighter shadows might have a negative impact on the perceptual qualities of an image, and should therefore be used with consideration.

More recently, Lindemann and Ropinski presented the results of a user study in which they have investigated the perceptual impact of seven volumetric illumination techniques [LR11]. Within their study, depth and volume related tasks had to be performed. They could show that advanced volumetric illumination models make a difference when it is necessary to assess depth or size in images. However, they could not find a relation between the time used to perform a certain task and the used illumination model. The results

of this study indicate that the directional occlusion shading model has the best perceptual capabilities [LR11].

6. Usage Guidelines

To support application developers when choosing an advanced volumetric shading method, we provide a comparative overview of some capabilities within this section. Table 1 contains an overview of the properties discussed along with each algorithm description. However, in some cases more detailed information is necessary to compare and select techniques for specific application cases. Therefore, we provide the comparative charts shown in Figure 12.

Figure 12 (a) provides a comparative overview of the techniques belonging to the discussed groups with respect to illumination complexity and underlying rendering paradigm. To be able to assess the illumination capabilities of the incorporated techniques, the illumination complexity has been depicted as a continuous scale along the x-axis. It ranges from low complexity lighting, e.g., local shading with a single point light source, to highly complex lighting, which could for instance simulate scattering and shadowing as obtained from multiple area light sources. The y-axis in Figure 12, on the other hand, depicts which algorithmic rendering paradigm is supported. Thus, Figure 12 (a) depicts for each technique with which rendering paradigm it has been used and what lighting complexity has been achieved. We provide this chart as a reference, since we believe that the shown capabilities are important criteria to be assessed when choosing an advanced volumetric illumination model.

Figure 12 (b) depicts the scalability of the covered techniques in terms of growing image and data set size with respect to performance. Since the x-axis depicts scalability with respect to data size and the y-axis depicts scalability with respect to image size, the top right quadrant for instance contains all algorithms which allow a good performance in this respect. When selecting techniques in Figure 12 (a) based on the illumination complexity and the underlying rendering paradigm, Figure 12 (b) can be used to compare them regarding their scalability.

7. Future Challenges

Though the techniques presented in this article allow generating realistic volume rendered images at interactive frame rates, several challenges still exist to be addressed in future work. When reviewing existing techniques, it can be noticed that although the underlying optical models already contain simplifications, there is sometimes a gap between the model and the actual implementation. In the future it is necessary to narrow this gap by providing physically more accurate illumination models. The work done by Kroes et al. [KPB11] can be considered as a first valuable step in this direction. One important ingredient necessary to provide further realism is the support of advanced material properties. Current techniques only support simplified phase function models and sometimes BRDFs, while more realistic volumetric material functions are not yet integrated.

Another challenging area which needs to be addressed in the future is the integration of several data sources. Although Nguyen et al. [NEO∗10] have presented an approach for integrating MRI and fMRI data, more general approaches potentially supporting multiple modalities are missing. Along these lines, the integration of volumetric and geometric information also needs more consideration. While the approach by Schott et al. [SMGB12] allows an integration in the context of slice based rendering, other volume rendering paradigms are not yet supported.

Apart from that, large data sets still pose a driving challenge, since the illumination computation takes a lot of time and, due to memory limitations, it is not possible to store large intermediate data sets.

8. Conclusions

Within this article, we have discussed the current state of the art regarding volumetric illumination models for interactive volume rendering. We have classified and explained the covered techniques and related their visual capabilities by using a uniform mathematical notation. Thus, the article can be used both as a reference for the existing techniques and as a guide for deciding which advanced volumetric illumination approaches should be used in specific cases. We hope that this helps researchers as well as application developers to make the right decisions early on and identify challenges for future work.

Acknowledgments

Many of the presented techniques have been integrated into the Voreen volume rendering engine (www.voreen.org), which is an open source visualization framework. The authors would like to thank Florian Lindemann for implementing the half angle slicing used to generate Figure 1 (right).

References

[BB09] Banks D. C., Beason K.: Decoupling illumination from isosurface generation using 4D light transport. IEEE Transactions on Visualization and Computer Graphics 15, 6 (2009), 1595-1602.

[BGB∗06] Beason K. M., Grant J., Banks D. C., Futch B., Hussaini M. Y.: Pre-computed illumination for isosurfaces. In Conference on Visualization and Data Analysis (2006), pp. 1-11.

[Bli77] Blinn J. F.: Models of light reflection for computer synthesized pictures. Proceedings of SIGGRAPH '77 11 (1977), 192-198.

[BR98] Behrens U., Ratering R.: Adding shadows to a texture-based volume renderer. In IEEE Int. Symp. on Volume Visualization (1998), pp. 39-46.

[Figure 12: two comparison charts. (a) plots each technique against illumination complexity (x-axis) and the underlying rendering paradigm (Splatting, Texture Slicing, Ray Casting); (b) plots scalability with data size against scalability with image size. Legend: 1 Gradient-based, 2 Local Ambient Occlusion, 3 Dynamic Ambient Occlusion, 4 Half Angle Slicing, 5 Directional Occlusion, 6 Multidirectional Occlusion, 7 Deep Shadow Mapping, 8 Image Plane Sweep, 9 Shadow Splatting, 10 Shadow Volume Propagation, 11 Piecewise Integration, 12 Summed Area Table 3D, 13 Extinction-based Shading, 14 Spherical Harmonics, 15 Monte Carlo ray-tracing.]

Figure 12: In (a) the illumination complexity of a method is shown in relation to the underlying rendering paradigm, whereby the illumination complexity increases from left to right. Techniques to the far left can for instance only handle point lights, while those to the far right can handle area light sources as well as multiple scattering. Plot (b) shows how each method scales in terms of performance with respect to large image and data size. Techniques in the upper right quadrant perform well independent of image and data size, while methods in the lower left quadrant degrade substantially for large screen and data sizes.

[CCF94] Cabral B., Cam N., Foran J.: Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Proceedings of the 1994 Symposium on Volume Visualization (1994), pp. 91-98.

[DE07] Desgranges P., Engel K.: US patent application 2007/0013696 A1: Fast ambient occlusion for direct volume rendering, 2007.

[DEP05] Desgranges P., Engel K., Paladini G.: Gradient-free shading: A new method for realistic interactive volume rendering. In Int. Fall Workshop on Vision, Modeling, and Visualization (2005), pp. 209-216.

[DK09] Dachsbacher C., Kautz J.: Real-time global illumination. In ACM SIGGRAPH Courses Program (2009).

[DVND10] Díaz J., Vázquez P.-P., Navazo I., Duguet F.: Real-time ambient occlusion and halos with summed area tables. Computers and Graphics 34 (2010), 337-350.

[DYV08] Díaz J., Yela H., Vázquez P.-P.: Vicinity occlusion maps - enhanced depth perception of volumetric models. In Computer Graphics Int. (2008), pp. 56-63.

[GP06] Gribble C. P., Parker S. G.: Enhancing interactive particle visualization with advanced shading models. In Proceedings of the 3rd symposium on Applied perception in graphics and visualization (2006), pp. 111-118.

[HAM11] Hossain Z., Alim U. R., Möller T.: Toward high-quality gradient estimation on regular lattices. IEEE Transactions on Visualization and Computer Graphics 17 (2011), 426-439.

[HKSB06] Hadwiger M., Kratz A., Sigg C., Bühler K.: GPU-accelerated deep shadow maps for direct volume rendering. In ACM SIGGRAPH/EG Conference on Graphics Hardware (2006), pp. 27-28.

[HLY07] Hernell F., Ljung P., Ynnerman A.: Efficient ambient and emissive tissue illumination using local occlusion in multiresolution volume rendering. In IEEE/EG Int. Symp. on Volume Graphics (2007), pp. 1-8.

[HLY08] Hernell F., Ljung P., Ynnerman A.: Interactive global light propagation in direct volume rendering using local piecewise integration. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008).

[HLY09] Hernell F., Ljung P., Ynnerman A.: Local ambient occlusion in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 15, 2 (2009), 548-559.

[KJL∗12] Kronander J., Jönsson D., Löw J., Ljung P., Ynnerman A., Unger J.: Efficient visibility encoding for dynamic illumination in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 18, 3 (2012), 447-462.

[KKH02] Kniss J., Kindlmann G., Hansen C.: Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics 8 (2002), 270-285.

[KN01] Kim T.-Y., Neumann U.: Opacity shadow maps. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques (2001), pp. 177-182.

[KPB11] Kroes T., Post F. H., Botha C. P.: Interactive Direct Volume Rendering with Physically-Based Lighting. Preprint 2011-11, Delft University of Technology, 2011.

[KPH∗03] Kniss J., Premoze S., Hansen C., Shirley P.,

[Max95] Max N.: Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 1, 2 (1995), 99-108.

[MC10] Max N., Chen M.: Local and global illumination in the volume rendering integral. In Scientific Visualization: Advanced Concepts (2010), pp. 259-274.

[MMC99] Mueller K., Möller T., Crawfis R.: Splatting without the blur. In IEEE Visualization (1999), p. 61.

[MR10] Meß C., Ropinski T.: Efficient acquisition and clustering of local histograms for representing voxel neighborhoods. In IEEE/EG Volume Graphics (2010), pp. 117-124.

[NEO∗10] Nguyen T. K., Eklund A., Ohlsson H., Hernell F., Ljung P., Forsell C., Andersson M. T., Knutsson H., Ynnerman A.: Concurrent volume visualization of real-time fMRI. In IEEE/EG Volume Graphics (2010), pp. 53-60.

[NM01] Nulkar M., Mueller K.: Splatting with shadows. In Volume Graphics (2001).

[PBVG10] Patel D., Bruckner S., Viola I., Gröller M. E.: Seismic volume visualization for horizon extraction. In Proceedings of the IEEE Pacific Visualization Symposium 2010 (2010), pp. 73-80.

[PM08] Penner E., Mitchell R.: Isosurface ambient occlusion and soft shadows with filterable occlusion maps. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008), pp. 57-64.

[QXF∗07] Qiu F., Xu F., Fan Z., Neophytos N., Kaufman A., Mueller K.: Lattice-based volumetric global illumination. IEEE Transactions on Visualization and Computer Graphics 13
(2007), 1576–1583. 13
M C P HERSON A.: A model for volume lighting and modeling.
IEEE Transations on Visualization and Computer Graphics 9, 2 [RBV∗ 08] RUIZ M., B OADA I., V IOLA I., B RUCKNER S.,
(2003), 150–162. 9, 10, 13, 22 F EIXAS M., S BERT M.: Obscurance-based volume rendering
framework. In IEEE/EG Int. Symp. on Volume and Point-Based
[KPHE02] K NISS J., P REMOZE S., H ANSEN C., E BERT D.: In-
Graphics (2008), pp. 113–120. 8
teractive translucent volume rendering and procedural modeling.
In IEEE Visualization 2002 (2002), pp. 109–116. 2, 4, 9, 10 [RC06] RODGMAN D., C HEN M.: Refraction in volume graph-
[KW03] K RÜGER J., W ESTERMANN R.: Acceleration tech- ics. Graph. Models 68 (2006), 432–450. 16
niques for GPU-based volume rendering. In Proceedings of IEEE [RDRS10a] ROPINSKI T., D ÖRING C., R EZK S ALAMA C.: Ad-
Visualization ’03 (2003). 4 vanced volume illumination with unconstrained light source posi-
[Lev88] L EVOY M.: Display of surfaces from volume data. IEEE tioning. IEEE Computer Graphics and Applications 30, 6 (2010),
Computer Graphics and Applications 8 (1988), 29–37. 5, 6, 22 29–41. 14, 22
[LL94] L ACROUTE P., L EVOY M.: Fast volume rendering using [RDRS10b] ROPINSKI T., D ÖRING C., R EZK -S ALAMA C.: In-
a shear-warp factorization of the viewing transformation. In Pro- teractive volumetric lighting simulating scattering and shadow-
ceedings of the 21st annual conference on Computer graphics ing. In IEEE Pacific Visualization (2010), pp. 169–176. 4, 5, 13,
and interactive techniques (1994), pp. 451–458. 4 17, 22
[LLY06] L JUNG P., L UNDSTRÖM C., Y NNERMAN A.: Multires- [Rit07] R ITSCHEL T.: Fast gpu-based visibility computation for
olution interblock interpolation in direct volume rendering. In natural illumination of volume data sets. In Eurographics Short
IEEE/EG Symposium on Visualization (2006), p. 259Ű266. 7, Paper Proceedings 2007 (2007), pp. 17–20. 15, 16, 22
14, 16 [RKH08] ROPINSKI T., K ASTEN J., H INRICHS K. H.: Efficient
[LM05] L I S., M UELLER K.: Accelerated, high-quality refrac- shadows for gpu-based volume raycasting. In Int. Conference in
tion computations for volume graphics. In Volume Graphics, Central Europe on Computer Graphics, Visualization and Com-
2005. Fourth International Workshop on (2005), pp. 73–229. 16 puter Vision (2008), pp. 17–24. 11
[LR10] L INDEMANN F., ROPINSKI T.: Advanced light material [RMSD∗ 08] ROPINSKI T., M EYER -S PRADOW J., D IEPEN -
interaction for direct volume rendering. In IEEE/EG Int. Symp. BROCK S., M ENSMANN J., H INRICHS K. H.: Interactive vol-
on Volume Graphics (2010), pp. 101–108. 16, 22 ume rendering with dynamic ambient occlusion and color bleed-
ing. Computer Graphics Forum (Eurographics 2008) 27, 2
[LR11] L INDEMANN F., ROPINSKI T.: About the influence of
(2008), 567–576. 6, 8, 9, 22
illumination models on image comprehension in direct volume
rendering. IEEE Transactions on Visualization and Computer [RS07] R EZK -S ALAMA C.: GPU-based monte-carlo volume ray-
Graphics 17, 12 (2011), 1922–1931. 1, 17, 18 casting. In Pacific Graphics (2007). 17
[LV00] L OKOVIC T., V EACH E.: Deep shadow maps. In Pro- [RSHRL09] R EZK S ALAMA C., H ADWIGER M., ROPINSKI T.,
ceedings of the 27th annual conference on Computer graphics L JUNG P.: Advanced illumination techniques for gpu volume
and interactive techniques (2000), pp. 385–392. 11 raycasting. In ACM SIGGRAPH Courses Program (2009). 1
[SHC∗09] SMELYANSKIY M., HOLMES D., CHHUGANI J., LARSON A., CARMEAN D. M., HANSON D., DUBEY P., AUGUSTINE K., KIM D., KYKER A., LEE V. W., NGUYEN A. D., SEILER L., ROBB R.: Mapping high-fidelity volume rendering for medical imaging to CPU, GPU and many-core architectures. IEEE Transactions on Visualization and Computer Graphics 15 (2009), 1563–1570.

[SKS02] SLOAN P., KAUTZ J., SNYDER J.: Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In Proceedings of SIGGRAPH ’02 (2002), pp. 527–536.

[SKvW∗92] SEGAL M., KOROBKIN C., VAN WIDENFELT R., FORAN J., HAEBERLI P.: Fast shadows and lighting effects using texture mapping. In Proceedings of the 19th annual conference on Computer graphics and interactive techniques (Proceedings of SIGGRAPH ’92) (1992), pp. 249–252.

[SMGB12] SCHOTT M., MARTIN T., GROSSET A., BROWNLEE C.: Combined surface and volumetric occlusion shading. In Proceedings of Pacific Vis 2012 (2012).

[SMP11] SCHLEGEL P., MAKHINYA M., PAJAROLA R.: Extinction-based shading and illumination in GPU volume ray-casting. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 1795–1802.

[SPH∗09] SCHOTT M., PEGORARO V., HANSEN C., BOULANGER K., BOUATOUCH K.: A directional occlusion shading model for interactive direct volume rendering. Computer Graphics Forum (Proceedings of Eurographics/IEEE VGTC Symposium on Visualization 2009) 28, 3 (2009), 855–862.

[SSKE05] STEGMAIER S., STRENGERT M., KLEIN T., ERTL T.: A simple and flexible volume rendering framework for graphics-hardware-based raycasting. In Fourth International Workshop on Volume Graphics 2005 (2005), pp. 187–241.

[Ste03] STEWART A. J.: Vicinity shading for enhanced perception of volumetric data. In IEEE Visualization (2003), pp. 355–362.

[SYR11] SUNDÉN E., YNNERMAN A., ROPINSKI T.: Image plane sweep volume illumination. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2125–2134.

[vPBV10] ŠOLTÉSZOVÁ V., PATEL D., BRUCKNER S., VIOLA I.: A multidirectional occlusion shading model for direct volume rendering. Computer Graphics Forum (Eurographics/IEEE VGTC Symp. on Visualization 2010) 29, 3 (2010), 883–891.

[vPV11] ŠOLTÉSZOVÁ V., PATEL D., VIOLA I.: Chromatic shadows for improved perception. In Proceedings of Non-Photorealistic Animation and Rendering (NPAR) (2011), pp. 105–115.

[Wan92] WANGER L. C.: The effect of shadow quality on the perception of spatial relationships in computer generated imagery. In SI3D ’92: 1992 Symp. on Interactive 3D Graphics (1992), pp. 39–42.

[WFG92] WANGER L. C., FERWERDA J. A., GREENBERG D. P.: Perceiving spatial relationships in computer-generated images. IEEE Comput. Graph. Appl. 12, 3 (1992), 44–51, 54–58.

[Wil78] WILLIAMS L.: Casting curved shadows on curved surfaces. In Proceedings of SIGGRAPH ’78 (1978), pp. 270–274.

[WPSH06] WYMAN C., PARKER S., SHIRLEY P., HANSEN C.: Interactive display of isosurfaces with global illumination. IEEE Transactions on Visualization and Computer Graphics 12 (2006), 186–196.

[ZC02] ZHANG C., CRAWFIS R.: Volumetric shadows using splatting. In IEEE Visualization (2002), pp. 85–92.

[ZC03] ZHANG C., CRAWFIS R.: Shadows and soft shadows with participating media using splatting. IEEE Transactions on Visualization and Computer Graphics 9, 2 (2003), 139–149.

[ZIK98] ZHUKOV S., IONES A., KRONIN G.: An ambient light illumination model. In EGRW ’98: Proceedings of the Eurographics Workshop on Rendering (1998), pp. 45–55.

[ZXC05] ZHANG C., XUE D., CRAWFIS R.: Light propagation for mixed polygonal and volumetric data. In CGI ’05: Proceedings of the Computer Graphics International 2005 (2005), pp. 249–256.
Table 1: Comparison between the different illumination methods.

Method                                              Light Type(s)    Location Constraint  #  Scattering
Gradient-based [Lev88]                              D, P             None                 N  –
Local Ambient Occlusion [HLY09]                     D, P, AMB        –                    –  –
Dynamic Ambient Occlusion [RMSD∗08]                 D, P, AMB        –                    –  S
Half Angle Slicing [KPH∗03]                         D, P             Outside              1  S, M
Directional Occlusion [SPH∗09]                      P                Headlight            1  S
Multidirectional Occlusion [vPBV10]                 P                View hemisphere      N  S, M
Deep Shadow Mapping [HKSB06]                        D, P             Outside              1  S, M
Image Plane Sweep [SYR11]                           D, P             None                 1  S, M
Shadow Splatting [ZC02, ZC03, ZXC05]                D, P, A          Outside              1  S, M
Piecewise Integration [HLY08]                       D, P             None                 1  S
Shadow Volume Propagation [BR98, RDRS10b, RDRS10a]  D, P, A          None                 1  S, M
Summed Area Table 3D [DVND10]                       AMB              –                    –  –
Extinction-based Shading [SMP11]                    D, P, A, AMB     None                 N  S
Spherical Harmonics [Rit07, LR10, KJL∗12]           D, P, A, T, AMB  Outside              N  S, M
Monte Carlo ray-tracing [KPB11]                     D, P, A, T, AMB  None                 N  S, M

Legend
Light Types: D (Directional), P (Point), A (Area), T (Textured), AMB (Ambient).
Light Source #: Number of supported light sources; – (not applicable), N (infinite).
Scattering: – (no scattering), S (Single), M (Multiple).
(The original table additionally rates each method with pictograms for render and update refresh time, memory consumption, update triggers (light source or transfer function changes), and SNR independence, i.e. sensitivity or insensitivity to noisy data; these symbol ratings are not reproducible in plain text.)
