0% found this document useful (0 votes)
11 views24 pages

Full Text 01

Research
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
11 views24 pages

Full Text 01

Research
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 24

State of the Art in Transfer Functions for Direct

Volume Rendering
Patric Ljung, Jens Krueger, Eduard Groeller, Markus Hadwiger, Charles D. Hansen
and Anders Ynnerman

The self-archived postprint version of this journal article is available at Linköping


University Institutional Repository (DiVA):
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130665

N.B.: When citing this work, cite the original publication.


Ljung, P., Krueger, J., Groeller, E., Hadwiger, M., Hansen, C. D., Ynnerman, A., (2016), State of the Art
in Transfer Functions for Direct Volume Rendering, Computer graphics forum (Print), 35(3), 669-
691. https://doi.org/10.1111/cgf.12934

Original publication available at:


https://doi.org/10.1111/cgf.12934
Copyright: Wiley (12 months)
http://eu.wiley.com/WileyCDA/
Eurographics Conference on Visualization (EuroVis) 2016 Volume 35 (2016), Number 3
K.-L. Ma, G. Santucci, and J. van Wijk
(Guest Editors)

State of the Art in Transfer Functions


for Direct Volume Rendering

Patric Ljung1 Jens Krüger2,3 Eduard Gröller4,5 Markus Hadwiger6 Charles D. Hansen3 Anders Ynnerman1

1 Linköping University, Sweden 2 CoViDAG, University of Duisburg-Essen


3 Scientific Computing and Imaging Institute, University of Utah 4 TU Wien, Austria 5 University of Bergen, Norway
6
King Abdullah University of Science and Technology

Abstract
A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role
in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the
data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an
expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The
purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead
to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research
into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user
interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development
of next generation TF tools and methodologies.
Categories and Subject Descriptors (according to ACM CCS): I.3.8 [Computer Graphics]: Applications—Volume Rendering I.3.7
[Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture I.4.10 [Computer
Graphics]: Image Representation—Volumetric

1. Introduction TF less powerful with respect to identifying relevant parts of the


data. The advantage of a TF is, however, a substantial performance
A transfer function (TF) maps volumetric data to optical properties
gain as classification based on the one-dimensional data range re-
and is part of the traditional visualization pipeline: data acquisi-
duces the complexity tremendously. This gain is the result of the
tion, processing, visual mapping, and rendering. Volumetric data is
three-dimensional domain being typically two orders of magnitude
considered to be a scalar function from a three-dimensional spatial
larger than the small data range. Histograms are an example of dis-
domain with a one-dimensional range (e.g., density, flow magnitude,
carding potentially complex spatial information and aggregating
etc.). Image generation involves mapping data samples through the
only the data values to binned-frequency information. In the same
TF, where they are given optical properties such as color and opacity,
spirit, TFs classify interesting parts of the data by considering data
and compositing them into the image.
values alone. The second functionality of a TF deals with specifying
A TF simultaneously defines (1) which parts of the data are es- optical properties for portions of the data range previously identified
sential to depict and (2) how to depict these, often small, portions as being relevant.
of the volumetric data. Considering the first step, a TF is a special,
but important, case of a segmentation or classification. With classifi- A survey of the works published in the field of TFs reveals that
cation, certain regions in a three-dimensional domain are identified in the three decades since the earliest published techniques for di-
to belong to the same material, such as bone, vessel, or soft tissue, rect volume rendering [KVH84, DCH88, Lev88], over a hundred
in medical imaging. A plethora of classification and segmentation studies have been published in the most influential journals and
algorithms have been developed over the last decades, with semi- conferences. In 2001, the Transfer Function Bake-Off examined
automatic approaches often tailored to specific application scenarios. four of the then most promising approaches to TF design [PLB∗ 01]:
Segmentation algorithms can be quite intricate since information trial and error with a TF editor; data-centric using computed metrics
from the three-dimensional spatial domain and the one-dimensional over the scalar field; data-centric using a material boundary model;
data range are taken into account. TFs in their basic form are, on the and image-centric using an organized sampling of exemplar images.
other hand, restricted to using only the data ranges. In comparison In 2010, Arens and Domik [AD10] authored a survey of TFs for
with general classification algorithms, this characteristic makes a volume rendering in which they subdivided TFs into the follow-

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John
Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

16 tended to show that the TF maintains its instrumental role in visual


14
data analysis by introducing novel and effective approaches to TF
design and use.
12

10 2. Transfer Function Fundamentals


8 In this STAR, we assume that the reader has an elementary under-
standing of the basic concepts involved in direct volume rendering
6
(DVR). For a general overview and introduction to volume render-
4 ing, we refer the reader to Engel et al. [EHK∗ 06], in particular the
sections on TFs in Chapters 4 and 10. This background section
2 provides the basic concepts defining the role of the TF and the com-
0
plexity that it abstracts away. The section also problematizes the
multiple roles of the TF as a material classifier and carrier of optical
1984
1985
1986
1987
1988
1989
1990
1991
1992
1993
1994
1995
1996
1997
1998
1999
2000
2001
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
2014
2015
2016
properties. An overview of the evolving and expanding role of the
Figure 1: Number of publications related to transfer functions over TF in the wider context of the DVR pipeline is also provided.
the last three decades. The starting point for our overview is the well known emission-
absorption volume rendering equation
Z b Rs
ing six categories: 1D data-based, gradient 2D, curvature-based, I= q(s)e− a κ(u)du
ds, (1)
a
size-based, texture-based, and distance-based. These six categories
were compared using automaticity, user-interaction, constraints, em- where I is the resulting light intensity after traversing the ray be-
phasis, memory consumption, and real-time capability. The survey tween the points a and b in the volume, and q(s) is the light con-
concluded that no single TF is generally applicable for all situa- tribution from the point s along the ray. The attenuation of the
tions, and expertise is required to determine the most suitable TF lightRcontribution along the ray is estimated using the optical depth
for specific data and/or application. τ = κ. The functions q(s) and τ(u) are defined by the properties
of the material(s) in the object of study and the transport of light
During the six years since the last published survey, significant through the material(s). The role of TFs in volume rendering is to
progress on TFs has been made. Figure 1 provides a histogram, not provide estimates of these two functions. It is important to realize
claimed to be complete, showing the numbers of papers published that the seemingly simple mapping from data to optical properties
on the topic of TFs during the past 32 years, including several hides layers of complexity. In its most rigorous interpretation, the
since the last survey in 2010. The goal of this state-of-the-art report TF would be based on the simulation of physical light transport
(STAR) is to fill the gap in the literature review since the last survey in participating media containing all the materials present in the
by bringing together a diverse group of researchers in the field of object of study. However, in practice, this is not possible due to
direct volume rendering. We provide an up-to-date overview of the computational limitations nor is it desirable in most applications.
entire area of TFs for direct volume rendering, from the earliest The data may not represent a physical object, and/or optical proper-
approaches to the most recent advances. ties corresponding to a certain data value are based on user design
principles rather than on a representation of the physical transport
1.1. Structure and categorization of light. Furthermore, TFs are exploration tools that are interactively
changed during volume exploration to emphasize regions of interest.
This STAR follows the categorization shown in Figure 2. Section 2 For more details on light transport in participating volumetric media,
reviews the fundamentals of TFs. Section 3 examines TFs from the we refer readers to the seminal paper by Max [Max95], in which
dimensionality and native data domain perspective. TFs can include optical models for DVR are discussed.
multiple derived attributes, discussed in Section 4, or aggregated
attributes, described in Section 5. These sections on data domain In its discrete version, Equation 1 can be viewed as a compositing
aspects relate mostly to the material classification component of operation over color contributions from sampled points along a ray.
the TF. Section 6 reviews research about how TFs can be used in With one such ray per pixel, originating in the image plane, the final
the rendering process to focus attention on relevant data, referred to pixel color is determined by the forward compositing equation
as visual mapping. Two usability aspects of the TF are covered in n i−1
the following sections. Methods and techniques for aiding the user I= ∑ Ci αi ∏ (1 − α j ), (2)
in TF settings, via levels of automation, are presented in Section 7. i=1 j=1
User interfaces, and the topics on how the user can interact with a where Ci is the color and αi is the fraction of incoming light that is
system to define a particular TF setting, are described in Section 8. absorbed at the discrete position i along the ray. This equation can
Some potential areas of future research are presented in Section 9, be solved front-to-back using the recursive forms
while Section 10 concludes the STAR. 0 0 0
Ci+1 = Ci + (1 − αi )Ci αi (3)
This STAR aims to provide a comprehensive account of trends 0 0 0
and approaches spanning the TF landscape. The STAR is also in- αi+1 = αi + (1 − αi )αi (4)

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Curvature
Surface Histogram clustering
One-dimensional Statistics and uncertainty Local frequency distribution
Two-dimensional Flow analysis Dimensionality reduction
Multidimensional Geometric relations Topology and skeleton
Domain specific

Dimensionality Derived Attributes Aggregated Attributes

Transfer Functions
Rendering User Interfaces
Automation
One-dimensional editors
Coloring and texturing
Two/multidimensional editors
Styles and shading Adapting presets
Parallel coordinates
Animation and temporal Semi-automatic
Brushing and painting
Sample reconstruction Automatic
Design galleries
High dynamic range Supervised machine learning
Higher-level interaction

Figure 2: The transfer function classification used in this STAR.

Opacity described the mapping from data values to optical properties using
a probabilistic view of material presence and mixture.

2.1. The dual role of the transfer function


As an illustration of the TF complexity, let us assume that the object
of study consists of a number of materials with different optical
Intensity properties. The TF is seen as a material classification tool in both
perspectives, but there is a difference in whether material probabil-
Figure 3: A standard user interface for 1D TF settings. Color ities are an implicit or explicit part of the mapping. In the explicit
and opacity are assigned to data ranges using piecewise linear TF case, the application of a TF is modeled as a two-step approach.
widgets. The background represents the histogram of binned scalar First, the sample value s is mapped to a set of material probabili-
attribute data. ties pm (s), where m is the index among the M materials. Then, the
material probabilities are used to combine the individual material
colors Cm = (rm , gm , bm , αm )T , which results in the sample color C.
Such an approach was employed in the initial DVR implementation
0
where Ci is the opacity-weighted accumulated color-contribution of by Drebin et al. [DCH88] and further thoroughly elaborated upon
0
ray samples 1 up to i, and αi is the accumulated opacity, with initial by Kniss et al. [KUS∗ 05] while studying the probabilistic classifi-
0 0 cation, and later by Lundström et al. [LLPY07], in the context of
color and opacity C1 = 0, α1 = 0.
uncertainty visualization. In the implicitly probabilistic view, the
Here the role of the TF is to perform a mapping of data values TF is seen as a direct mapping from sample value s to sample color
to optical properties, resulting in estimates of Ci and αi . Figure 3 C. This view is currently the dominant approach, and it is the view
shows a standard application of a TF to a data volume with a single represented in recent DVR literature [EHK∗ 06].
scalar. The histogram of the data serves as the basis for the 1D TF
widgets that are used to assign color and opacity to data ranges.
2.2. The evolving landscape of the transfer function
It is interesting to note that initial publications on volume ren-
The research compiled in this report demonstrates how the TF con-
dering consider a TF conceptually but do not use the explicit term.
cept has evolved into increasingly advanced TF designs and uses.
Kajiya and Von Herzen [KVH84] were among the first to introduce
As shown in Figure 4, the landscape in which the TF now resides
the concept of volume rendering of a 3D density field. They provided
goes far beyond its traditional domain in which the TF acts only as
an approximation of the full radiative scattering and, furthermore,
classifier and mapper of material properties.
showed that a simple forward scattering approach is insufficient to
render phenomena such as clouds. Levoy [Lev88] presented the first Data that needs to be visualized is becoming increasingly multi-
paper using the gradient magnitude to enhance the boundaries, or modal in nature, which enables improved classification capabilities.
rather suppress the interior of homogeneous regions in the volume. For example, modalities are combined to produce co-registered data
Although Levoy did not use the term TF, the mapping to colors and fields in the area of medicine. Relevant examples are found in the
opacities was described. In the same year, Drebin et al. [DCH88] increasing use of CT-PET and MR-PET, but also in the development

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Sampled
Object of Study
Attribute Data Traditional TF

Material
Visual Mapping Rendering
Classification

Derived
Attribute Data

Knowledge of
Materials and Visual Design
Properties

Figure 4: Embedding of the TF into object analysis, visual design, and image generation through rendering.

of multispectral CT, which will significantly increase the dimen- as well as in complex fields, with real and imaginary values, from
sionality of both sampled and derived attributes. The rapid move numerical simulations in many different domains. There are many
towards multimodality calls for TF approaches that can deal with examples of multivariate volumetric data, often from numerical
high-dimensional attribute data. Multidimensional data can be un- simulations in fluid dynamics, that create vector or tensor data, but
correlated, which opens up the possibility of using separable 1D also from seismic surveys and astrophysical numerical simulations.
TFs that are later combined to form complete multidimensional TFs. Multimodal data is also a common form, in which the data may be
represented on grids with different extent and alignment, such as
Another trend is the use of a TF as a tool for the expression of user
Ultrasound B-mode data together with CT data.
knowledge of material presence and properties, which transforms
the TF into a source of information that can be used to derive As the starting point for investigations in most application do-
further attribute data. High level expressions of knowledge codify mains is data, we initially examine the data perspective. The data on
the complex relations between data domains and the corresponding which the TF is operating defines the corresponding data domain,
TFs and indeed between different TF segments. A powerful scheme or simply the transfer function domain, i.e., the domain in which
is to define the TF as material presence components, which follows the function is defined, and the corresponding dimensionality. We
the ideas presented by Lindholm et al. [LLL∗ 10], where labels sub-categorize this section in terms of dimensionality, which covers
are assigned to TFs with semantics, indicating, for instance, bone, both the source dimensionality as well as simpler derived properties.
muscle, fat, etc. Several important contributions to the field rely Later we describe the subcategory of first- and second-order types
on the expression of knowledge encoded in the TF in ways that go of additional local attributes used for the TF, such as gradient mag-
beyond the traditional TF domain. Such approaches have been used, nitude, curvature, etc. In the following subsections, we survey the
for instance, in data reduction and multiresolution representations. literature on 1D, 2D, and multidimensional (MD) TFs and focus on
the material classification aspect of the TF, rather than the visual
Perhaps one of the most important aspects of the TF is the free-
mapping.
dom it provides users in designing visual appearance. The TF further
allows users to interactively alter the visual parameters and their
application to different parts of the object of study. The number of 3.1. One-dimensional data
attributes is increasing, and the usability of available TF tools is
improving. Thus, the TF is increasingly becoming an instrument for The simplest kind of TF has a one-dimensional domain where the
the exploration of scientific content of the data to produce, both sci- function is defined, i.e., it is a 1D TF operating on a scalar input
entifically relevant and aesthetically appealing, high quality images. value. This input is most commonly the scalar value given by the
The role of the TF as a design and workflow tool was already appar- scalar field comprising the input volume, such as the material density.
ent in the Transfer Function Bake-Off [PLB∗ 01], which analyzed The one-dimensional TF classifies the scalar data value, d, and
four approaches in the TF design and highlighted differences in the subsequently maps the material to an optical property for rendering:
workflow of the TF design process. q(d) = V(M(d)),
where M(·) is the material classification function and V(·) is the
3. Dimensionality visual mapping of the material.
Volumetric data is commonly presented as scalar, bi-variate, or mul- In the paper by Drebin et al. [DCH88], the concept of the TF is
tivariate. Examples of scalar data are medical data from CT scanners presented as a material classification with probabilities based on
and particle density fields from numerical simulations. Bi-variate the scalar value, and thereafter a visual mapping is applied. The
data is also found in medical imaging from dual energy CT scanners authors point out that material classification is a probabilistic and

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

not a binary decision, leading to the notion of material mixtures and


how to blend or mix the visual mappings in the presence of multiple
materials. See Section 2.1 for a discussion on the dual role of the
TF and Section 6 for visual mapping strategies.
Without additional attributes, the 1D TF is quite straightforward,
and in a strict interpretation, rare, if we consider the visual mapping
to be a part of the TF. Visual mapping often requires the normal
for shading operations and thus the normal constitutes an additional
input. One example of a strict 1D TF is the Gradient-Free Shading
described by Desgranges et al. [DEP05]. The noisy nature of 3D Figure 5: 2D TF texture combining material density and object
Ultrasound data is a good example of a case in which 1D TFs are (label) ID of segmented objects [HBH03]. Image courtesy Hadwiger.
suitable. However, it may be beneficial to include ray depth to pro- Copyright 2003 IEEE.
vide improved depth-cues as performed in the work of Srinivasan et
al. [SLMSC13]. Nevertheless, this issue is more of a visual mapping
aspect rather than one of classification.
The latter formulation defines a separable 2D TF, with examples
One-dimensional TFs are adequate in many cases of simulation such as gradient-based opacity modulation in which the second
data where measurement noise is low or even non-existent. Other dimension is used to improve visual appearance, which suppresses
examples include industrial CT scans, such as in Li et al. [LZY∗ 07b], interior homogeneous material regions and enhances boundaries.
where different materials of interest have few overlapping intensity The material classification power of a separable TF still corresponds
ranges. For medical image data, the 1D TF is often inadequate as to that of a 1D TF, but, by also including higher dimensions, provides
tissues have significant overlap in the intensity range, as described a significant enhancement to the visual appearance. Essentially, a
by Lundström et al. [LLY06a]. In addition, medical data is measured 1D TF is applied first, followed by multiplying the opacity with a
and relatively noisy, which further negatively impacts the ability of 1D function of gradient magnitude. This type of 2D TF is very easy
1D TFs to correctly classify different tissue types. to store, because it can be represented as two 1D functions instead
of as a full 2D function. In the case of gradient magnitude-weighted
Despite their shortcomings, 1D TFs are the most common form of opacity, the second function is described by a simple equation and
TFs, especially outside the visualization research community. One- does not need to be stored as a lookup table. A classic example is the
dimensional TFs are often the first tool available in software pack- gradient magnitude-weighted opacity modulation of the classified
ages providing volume rendering, as they are relatively easy to com- value. The seminal work by Levoy [Lev88] contains an example of
prehend for the novice or occasional user. Practically all production this type of 2D TF, but the approach cannot be considered as fully
visualization software, such as ParaView [HA04], VisIt [CBW∗ 12], separable.
or ImageVis3D [CIB15], support 1D TF editors. Constructing a 1D
TF is most commonly achieved by combining separate TF com- A very simple, yet important, type of non-separable 2D TF is
ponents, which simplifies several aspects, such as user interface the use of multiple 1D TFs for rendering segmented volume data.
interactions and adaptation to new datasets, as discussed in Castro See Figure 5 for an example of such a TF for a volume containing
et al. [CKLG98]. four segmented objects. In this case, one dimension is the scalar
value, and the second dimension is simply the ID of a segmented
object (also called a label ID). We note that this type of 2D TF is
3.2. Two-dimensional data not separable, but it also does not really constitute a “general” 2D
If the TF operates on an input with more than one dimension, it is TF, because multiple 1D TFs are simply combined into a 2D table
termed a 2D TF (for a bi-variate input) or an MD TF (for an input in a straightforward manner. However, this approach, or a variant
of multiple dimensions). thereof, is used in many volume renderers that render segmented data
[HBH03, BG05]. The basic approach can be extended further, for
A distinction needs to be made whether the TFs are separable or if example in style TFs for segmented volumes [BG07] (see Figure 11).
they are intrinsically high-dimensional. A separable 2D TF is defined In essence, this type of 2D TFs for segmented volumes delegates
as two separate 1D functions that are combined only after both 1D (most of) the material classification to the segmentation process that
functions have been applied separately, which is most commonly occurs prior to the visualization stage. Segmentation augments the
done via the tensor product. For example, the first dimension may unsegmented volume dataset with a label ID for each voxel, which
define the material classification, and the second dimension controls can then be directly fed as the second input into this type of TF.
some aspect of the visual mapping. Non-separable 2D TFs are
better able to classify materials or features in the data compared An example of a genuinely non-separable 2D TF is the value and
to separable 2D TFs, which are essentially two separate functions gradient magnitude mapping presented in Kniss et al. [KKH02],
combined afterward: which cannot be obtained as the tensor product of two 1D TFs. In this
work, 2D TF widgets (polygons) are defined using a graphical user
qnon-separable (d1 , d2 ) = V(M(d1 , d2 )), interface over the desired region in the 2D frequency distribution
plot, or 2D histogram (see Figure 6). It should be noted that in
and
contrast to a separable 2D TF, a non-separable 2D TF requires
qseparable (d1 , d2 ) = V(M(d1 ), d2 ). storing (or representing) a full 2D function, which means that it

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Classification spaces with related attributes, such as data value


range, range or distribution over first- and second-order derivatives,
sometimes allow parametrization of some dimensions. This can
make the TF definition easier [HPB∗ 10].
Industrial spectral X-ray CT data has been tackled by Amirkhanov
et al. [AFK∗ 14], who define MD TFs directly for the spectral data
to identify material compositions in industrial components. The
system also includes X-ray fluorescence spectral data. A color-mix
is computed based on the spectral energy of a spectral bin and its
associated color, and there is no explicit classification of materials.
Another form of MD TFs deals with the classification of color
data, as in the cases of the Visible Human Project, which generated
slice stacks of photographs. The challenge here is to produce a
useful volume rendering that reveals structures within the data, since
color ranges are highly overlapping and ambiguous. Morris and
Ebert [ME02] propose a method to map color data to opacity TFs
to reveal the interior structures of the data. They use boundary
Figure 6: The tooth dataset in which a general non-separable 2D enhancement techniques, which incorporate the first and second
TF (density vs. gradient magnitude) is used to more precisely select directional derivatives along the gradient direction. The CIEL*u*v*
features in the data [KKH02]. Image courtesy of Kniss. Copyright color space is chosen in their algorithms, and a deeper analysis
2002 IEEE. is provided in Ebert et al. [EMRY02]. For medical RGB color
data, Takanashi et al. [TLMM02] perform independent component
analysis (ICA) of the RGB-color histogram space. Their assumption
is that moving a clipping plane along the ICA axes allows the user to
usually requires storing a 2D array or a 2D texture. Both approaches discriminate different parts of the data. Muraki et al. [MNKT01] use
to non-separable TFs can also be combined in a straightforward a similar ICA-based approach to transfer colors from RGB datasets
manner, i.e., using one general 2D TF per segmented object, which to multichannel MRI volumes.
amounts to storing all 2D TFs in a higher-dimensional table.

4. Derived Attributes

3.3. Multidimensional data So far, we have described what could be considered the “traditional”
attributes of volume data that are used as the input to TFs in the
If the user is to manipulate the TF definition directly, moving be- form of scalar value (density), gradient magnitude, and object/label
yond 2D TFs immediately poses significant challenges in terms of ID (in the case of segmented data). In this section, we consider
user interfaces and cognitive comprehension. Much research and additional derived attributes that are often used.
work on MD TFs is, therefore, related to various forms of automa-
tion and advanced user interfaces, which are covered in Sections
7 and 8. Typical approaches include dimensional reduction, clus- 4.1. Curvature
tering and grouping, machine learning, and various user interface
The curvature of surfaces is an important attribute that characterizes
approaches such as parallel coordinates or direct slice or volume
their local shape. Even for volume rendering, the computation and
view interaction.
use of curvature makes sense. Usually, employing this attribute
From a material classification point of view, several methods means interpreting curvature as isosurface curvature at a specific
investigate different features, which include distributions around a position, which does not necessarily require computing an actual
point, such as radial basis functions (RBFs), or various geometric isosurface. The normalized gradient describes the local orientation,
primitives, such as box, pyramid, or ellipsoid primitives. Kniss namely the tangent plane, of the isosurface passing through any
et al. [KKH02] employ box and pyramid shapes, but only two point in a volume. The isovalue of this surface is obviously the
dimensions are shown at a time. Alper, Selver and Guzelis [SG09] same as the scalar value at the considered position. Similarly, the
use RBFs in the classification scheme, discussed in Section 7.2. curvature of the same isosurface can also be computed and used as
an input to a TF.
Methods using separable classifiers allow for easier definitions
but provide a weaker classification power in the MD data space. Different types of curvature measures can be computed at a point
Zhou et al. [ZSH12] discuss combinations of 2D primitives and 1D on a 2D surface. In the context of volume rendering, the two princi-
TFs. Here each 2D primitive has its own associated 1D TF where pal curvatures are usually computed. The curvatures consist of the
the data domain can be chosen by the user. Using separate 1D TF minimum κ1 and maximum κ2 principal curvature magnitudes, and
definitions for each widget, or region, defined in the 2D domain if desired, the corresponding principal curvature directions. Based
improves the classification power over using a single shared 1D TF. on these measures, other attributes, such as the Gaussian curvature
Additional user interface aspects are covered in Section 8.2. (κ1 κ2 ) or the mean curvature 12 (κ1 + κ2 ), can be derived as well.

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Hladuvka et al. [HKG00] have introduced the concept of 4.4. Flow analysis
curvature-based TFs with a 2D domain in the context of direct
Even though TFs are commonly used in the analysis of flow data,
volume rendering, where the curvatures are computed by locally fit-
not much work has dealt explicitly with the design of TFs for flow
ting surface descriptions. The curvature TF is a 2D TF that is indexed
visualization. Volume rendering is often used as a second step to
by (κ1 , κ2 ). More examples of curvature TFs are demonstrated in
reduce the clutter in dense texture methods, for instance. An exam-
Kindlmann et al. [KWTM03]. Their work presents a significantly
ple of this approach is presented by Falk and Weiskopf [FW08],
simplified computation of the two principal curvature magnitudes κ1
who apply volume rendering to 3D Line Integral Convolution. MD
and κ2 by using tri-cubic B-spline convolution filters. As Hadwiger
TFs are also of interest in dealing with multiple scalar feature iden-
et al. [HSS∗ 05] show, the isosurface curvature in a volume can also
tifiers, and Park et al. [PBL∗ 04] suggest using volume rendering
be computed in real-time using tri-cubic B-splines and texture map-
with MD TFs. They extract scalar values from the flow field, such
ping hardware. The authors also compute and visualize the principal
as velocity, gradient, curl, helicity, and divergence, and use these
curvature directions. The curvature attributes can also be used as
values as TF parameters, which results in an expressive and un-
additional input to focus and context visualizations, which has been
cluttered visualization. Svakhine et al. [SJEG05] present reduced
demonstrated by Krüger et al. [KSW06] and is further discussed in
user interaction by allowing only two variables to control color and
Section 6.2.1.
transparency. The long and cumbersome fine tuning of the trans-
fer function needed in previous work [PBL∗ 04] is thereby avoided.
4.2. Surface attributes Daniels et al. [DANS10] have explored similarities of vector fields
by encoding the distances in attribute space. The method is related
In most cases, TFs are used to map the scalar or multimodal volumes
to explicit TF approaches as it allows users to control paint strokes
to optical quantities, and the mapping is used for direct volume
and color choices on a canvas used for feature enhancement.
rendering. However, several approaches use TFs to analyze surfaces
in a volume directly. A more comprehensive overview of rendering aspects of flow
To inspect the geometrical variability of isosurfaces in scalar visualization is found in the state-of-the-art report on illustrative
fields with uncertainty information, Pfaffelmoser et al. [PRW11] flow visualization by Brambilla et al. [BCP∗ 12]. The report asserts
propose to study the mutability in the data using a stochastic uncer- that volume rendering is known to generate cluttering and occlusion
tainty model. To visualize this model, their isosurface ray-casting if used unwisely and is not well suited for conveying directional
approach uses a TF to color the surfaces based on the approximate information. They also conclude that volumetric data is often used
spatial deviation of possible surface points from the mean surface. in flow visualization to show scalar variables such as pressure or
Haidacher et al. [HBG11] use the TF domain to express surface temperature.
similarity and dissimilarity for multimodal volumetric datasets by
means of information theory measures. 4.5. Geometric relations
A simple yet highly effective attribute to include in the TF definition
4.3. Statistics and uncertainty
is the distance to some geometric entity, such as a point, parametric
A variety of statistics, usually characterizing local neighborhoods shape, or arbitrary geometry. Tappenbeck et al. [TPD06] propose
centered at the current voxel of interest, can be very powerful at- such an approach and discuss methods for TF specification and
tributes as input to 2D or MD transfer functions. One example is the suitable applications of it. The efficacy of this approach depends, to
work of Haidacher et al. [HPB∗ 10], who use a statistical domain a high degree, on the ease of specification, but has the potential to
defined by mean value and standard deviation, in which a 2D TF can be simplified with the automated generation of a point, or geometry,
be defined. In Section 5, we will describe further approaches that of reference. A related derived attribute is the feature scale, encoded
make use of volume statistics in the context of transfer functions. into a 3D scale field, that expresses the size of the local feature on
a per-voxel basis. This approach has been presented by Correa and
Kniss et al. [KUS∗ 05] describe an approach to interactively ex-
Ma [CM08]. The method yields convincing images showing distin-
plore the uncertainty and risk of surface boundaries. The decou-
guished features that are otherwise difficult to classify. Correa and
pling of classification and color mapping may point to application
Ma [CM09a] have continued to explore aspects of geometric rela-
specific solutions in situations where many volumes are involved
tions to improve classification of features of interest by introducing
simultaneously. In such cases, the TF interface could remain simple
the occlusion spectrum (Figure 7). The spectrum is based on ideas
and robust concerning scalability as the feature space is broken
of ambient occlusion and is able to enhance structures rather than
down into independent components. Lundström et al. [LLPY07]
boundaries, as is the case for intensity-gradient magnitude 2D TFs.
deal with uncertainty visualization through probabilistic animation
The ambient occlusion of a voxel is rationalized into a weighted
methods. It could be interesting to explore extensions of the proba-
average of the surrounding neighborhood.
bility representations into the temporal domain and to investigate the
general applicability of probabilistic animation. Soundararajan and Another TF attribute that is useful in some application domains
Schultz [SS15] model the uncertainty in probabilistic TFs resulting is orientation or direction. Fritz et al. [FHG∗ 09] have computed
from supervised learning approaches. This is an interesting direc- the orientations of steel fibers embedded in reinforced concrete
tion to inspect further possible applications of machine learning to obtained via industrial CT scanning. The resulting orientations can
volume visualization and TFs, such as adaptive, online, and transfer be parametrized with two Euler angles, where anti-podal points on
learning. the unit sphere are identified as describing the same orientation. The

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

In practice, however, any added quantity makes it more difficult


to create the TF. Visually, 1D TFs can be represented as a line
or curve, but in the case of 2D TFs, visualizing and modifying
polygons becomes significantly more complex. 3D TFs require
volume rendering of the TF itself, which in turn, requires “meta
transfer functions” for display, making the approach impractical. To
avoid these issues, and going beyond 3D TFs, several papers have
been published suggesting approaches for aggregating attributes
and reducing the complexity of visualizing and designing higher-
dimensional TFs.

5.1. Histogram clustering


To reduce the degrees of freedom in designing a TF, several ap-
proaches have been proposed to analyze and cluster the histogram
space. User interaction is simplified to select, weight, and modify
these clusters. Tzeng and Ma [TM04] propose using the iterative
Figure 7: The occlusion spectrum [CM09a] emphasizes the clas-
self-organizing data analysis technique to find the clusters in 2D
sification of structures within the data, rather than on boundaries
histogram space. With this algorithm, the user can choose opacity
between materials. Image courtesy of Correa. Copyright 2009 IEEE.
and color for each cluster (see Figure 9). Wang et al. [WCZ∗ 11]
have also worked with 2D histograms but propose modeling the
histogram space using a Gaussian mixture model. An elliptical TF
is assigned to each Gaussian, and the user interaction is simplified
to parameterizing these ellipsoids. Li et al. [LZY∗ 07a, LZY∗ 07b]
concentrate on industrial CT applications and 1D histograms. They
propose using stochastic methods to differentiate between the clus-
ters in the histogram.
Maciejewski et al. [MWCE09] describe a method for 2D clus-
Figure 8: Orientation as an attribute for a 2D TF [FHG∗ 09]: tering in a feature space, comprised of value and gradient. The
(a) The TF is defined in terms of Euler angles (here, identifying technique automatically generates a set of TF components that can
anti-podal points); (b) Specifying widgets in the TF domain selects be refined or filtered out later in the workflow. The method also
objects such as steel fibers according to orientation; (c) A directional incorporates temporal changes by building a histogram volume from
histogram on the unit half-sphere. Image courtesy of Fritz. Copyright the 2D feature space, which allows for a more consistent clustering
2009 IEEE. over time.
In a more general approach, more suitable for arbitrary attribute
spaces, Wang et al. [WZK12] use hierarchical clustering. From a
two angles are then used as inputs to several TF variants, for example preprocessing step, the user can select clusters in terms of subtrees in
a 2D TF including both angles, or a 2D TF for scalar density plus the modified dendrogram, and subsequently refine the TF selections
one selected angle, which is important in this particular application. in a more fine grained multidimensional space. Recent work by Ip et
See Figure 8 for an example. al. [IVJ12] demonstrates an algorithm for multilevel segmentation
of the intensity gradient 2D histogram that allows the user to select
We can view orientation as a specific texture property, and expand the most appropriate segments hierarchically for which they can
the set of attributes to include statistical measures of the textural generate separate TFs.
properties of the data. In Caban and Rheingans [CR08], the classifi-
cation domain was enriched to support differentiation of similar data
values present in different regions based on textural properties in the 5.2. Local frequency distribution
local neighborhood. Here, the textural properties are defined from Lundström et al. [LLY05, LLY06a] propose a set of techniques and
various first- and second-order statistics, such as mean, variance, tools based on local frequency distributions (LFDs). An exhaustive
energy, inertia, etc. More advanced textural properties and features peak finding approach aims to find all neighborhoods, data blocks, to
of the data, such as shape, as suggested by Praßni et al. [PRMH10], associate with a peak in the global histogram. To this end, the partial
require more advanced preprocessing and are therefore considered range histogram (PRH) is introduced in which each neighborhood
in Section 5.4. has its LFD footprint measured against a range that is automatically
generated from the current global peak searched. Neighborhoods
with a sufficiently large LFD footprint in this range are added to
5. Aggregated Attributes
the group, making up the PRH. After each iteration, the LFDs of
Conceptually, the introduction of additional quantities in the TF the current PRH are removed from the global histogram, revealing
definition makes it possible to discriminate more parts in a dataset. another peak, and the process is iterated until all neighborhoods have

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Figure 10: Separation of spongy bone and vessels. The left image
shows the results with a standard 1D TF, which renders bone and
vessels red, but a classifying 2D TF makes the vessels stand out from
the background [LLY06a]. Image courtesy of Lundström. Copyright
2006 IEEE.

al. [ŠBSG06] have extended this concept to general transfer func-


tions whose domain is the 2D LH space. They also introduce the
use of hierarchical clustering in the LH space for semi-automatic
TF design [ŠVG06].
To better capture the neighborhood properties around voxels,
Patel et al. [PHBG09] define the concept of moment curves. These
curves describe the moments in the first- and second-order statistical
attributes for each voxel. These attributes depict a point projected
back into a 2D space where TF primitives are defined, which yields
a more powerful way to identify features of interest.
Johnson and Huang [JH09] have implemented the concept of
local distributions in a very general and query-driven approach, in
which they define bins and clauses. With a bin, the user restricts the
domain of the distributions’ given intervals. Clauses are series of
Boolean predicates that compare bin variables to other quantities.
Figure 9: The clustered histogram is shown in the center. The Both bins and clauses are defined in a domain-specific language and
volume rendered images depict the corresponding regions in the can be changed arbitrarily. The color for each voxel is determined
spatial domain [TM04]. Image courtesy of Tzeng and Ma. by its predicate matchings and the opacity by the quality of this
matching.
Lindholm et al. [LLL∗ 10] have used LFDs to support a logic
been assigned. In the second part of the process, tissue classification framework based on labeling the LFDs as blood, gas, iodine, liver,
is aided by the neighborhoods’ LFD footprints, which are used in etc., which allows the creation of expressions to render iodine when
a competitive classification approach, to determine probability and close to, or not close to, liver tissue, for instance. These labels
tissue classification. An example of this approach is shown in Figure provide for the semantics of the TF components, and the logic is
10. In Lundström et al. [LYL∗ 06], LFDs are enhanced to appear then executed on the GPU during rendering.
more clearly in the global histogram, which improves peak visibility
in the user interface. This enhancement is achieved by promotion Similar to the approaches of Johnson and Huang [JH09] and
of local correlation by simply applying a power exponent, α, to Lindholm et al. [LLL∗ 10], where the TF specification is enhanced
the binned values in the local histogram. Promoting local peaks is by rules, Cai et al. [CNCO15] propose a rule-enhanced TF as well.
also helpful in peak detection and in automated TF adaption to new In this approach, the rules have been determined through training
datasets. The method was evaluated by experts on both CT and MR on segmented datasets to identify different tissues in the data when
data. The α-histogram is not as successful as the PRH approach in derived data attributes and LFDs are used.
adapting TFs.
5.3. Dimensionality reduction
Serlie et al. [STF∗ 03] have introduced the concept of LH his-
tograms for identifying material boundaries in the context of virtual Dimensionality reduction methods propose simplifying the complex-
colon cleansing in virtual colonoscopy. For each voxel, a respective ity of the TF design process by projecting high-dimensional TFs to
low (L) material value and a high (H) material value are determined a lower-dimensional space. Kniss et al. [KUS∗ 05] have presented
in the positive gradient direction for H, and in the negative gradient an approach for statistically quantitative volume visualization in
direction for L. The voxel then corresponds to a point in the 2D which a graph-based technique is used to achieve dimensionality
LH space, which denotes a material transition between a material reduction. In this work, uncertainty visualization and probabilistic
with value L and a material with value H, respectively. Šereda et classification take a central role (also see Section 4.3.)

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

De Moura Pinto and Freitas [dMPF07] propose using unsuper- Diffusion tensor imaging (DTI) is one such specific domain, and
vised learning, specifically in the form of self-organizing Kohonen DTI data is often visualized using other visualization techniques,
Maps to detect structure in the data. These maps are trained with such as glyph-based or with feature extraction approaches that in-
input voxel values and local derived quantities (such as derivatives clude DTI fibers. Kindlmann and Weinstein [KW99] present a DVR
and statistical measures). Linear combinations of cells in the Ko- method in which the tensor field is colored using hue-balls and
honen Map are then assigned to the corresponding voxels in the shaded with lit-tensors. A 2D barycentric space of anisotropy is
volume dataset, and the TF is applied to that map instead of the used to define the opacity of samples. In Kindlmann et al. [KWH00],
high-dimensional attribute space. the approach is studied in more detail, and the options are extended
to include barycentric color maps.
Haidacher et al. [HBKG08] aim to reduce multivariate data into
a single fused representation, which is then mapped via the well In more recent work, Bista et al. [BZGV14] present an approach
known value/gradient magnitude 2D TF. The reduction is carried for volume rendering of diffusion kurtosis imaging (DKI) data that
out using weighting based on point-wise mutual information. Kim et more clearly depicts microstructural characteristics of neural tissues.
al. [KSC∗ 10] follow a similar approach but use Isomap, local linear Spherical harmonics are used to color and shade samples in the
embedding, and principal component analysis to reduce the high- volume rendering based on the spatio-angular field of DKI.
dimensional MD-TF/multichannel data space and allow for straight
Seismic data is another domain with specific attributes that con-
2D TF application. Zhao and Kaufmann [ZK10] combine the local
stitutes many dimensions and thus requires advanced user interfaces
linear embedding method to reduce the dimensionality with a user
to control. Zhou and Hansen [ZH13,ZH14] present examples of this,
interface based on parallel coordinates. Dimensionality reduction
which are discussed in Section 8.4.1.
using Isomap has also been applied in Abbasloo et al. [AWHS16]
for the visualization of tensor normal distributions. Another approach, which resembles textural classification meth-
ods, or texture-based TFs, has been proposed by Alper Selver
[Sel15]. Brushlet expansion is applied to the original volume and
5.4. Topology and skeletons allows the analysis of the resulting quadrants in order to identify low-
The use of topological methods has recently been effective in visual- and high-frequency textures. By selecting quadrants and threshold-
ization and analysis of many types of data. These methods provide ing of the complex brushlet coefficients, specific textural properties
aggregated attributes that can be useful for TFs. In their early works, can be reconstructed in the volume. The quadrant selection can be
Fujishiro et al. [FAT99, FTAT00] used a hyper Reeb graph to auto- manual, atlas based, or based on machine learning.
matically generate TFs that emphasize critical isosurfaces, which
are surfaces corresponding to isovalues near a change in the surface 6. Rendering Aspects of the Transfer Function
topology. Takahashi et al. [TTF04] improve this approach by using
topological volume skeletonization, which improves the detection So far we have reviewed the dimensionality aspects of TFs, includ-
of critical field values and speeds up computation in comparison ing discussions on global and local attributes (direct, derived, and
with hyper Reeb graphs. The volume skeleton tree concept is also aggregated) that aim to improve the classification power of the TFs
used by Takeshima et al. [TTFN05] to generate transfer functions and to determine colors and opacities. In this section, we review
using inclusion levels, isosurface trajectory distances, and isosurface publications that deal with applying the TF and the rendering of
genera. This concept is further extended by Weber et al. [WDC∗ 07] the volume. Some of the techniques discussed in these publications
who give the user more control over the TF generation by enabling deal with unique data, such as the full-color Visible Human, or use
the specification of different TF parameters per topologically dis- TFs to achieve specific artistic or stylistic effects. Other publications
tinct feature. Furthermore, Zhou and Takatsuka [ZT09] use contour deal with enabling focus and context visualization by using specific
trees to partition the volume in subregions. These authors take par- methods and TF concepts.
ticular care to automatically assign matching colors and opacities to
those subregions. Instead of considering per-voxel features, Praßni et 6.1. Coloring and texturing
al. [PRMH10] have suggested assigning shape properties to features,
which aids the user in assigning material properties for different One interesting way of computing a TF is by attempting to apply
volumetric structures. The different structures are identified using realistic looking colors to gray level volume data by either color-
curve-skeleton analysis in a preprocessing step. Lastly, Xiang et ing voxels or utilizing texture patterns. Realistic looking volume
al. [XTY∗ 11] have introduced the skeleton-based graph cuts algo- renderings can be obtained by transferring the colors of a colored
rithm that enables effective and efficient classification of topological volume such as the photographic Visible Human dataset (obtained
structures used to assign localized transfer functions. from actual photographs) to a volume of a different modality. In
this way, realistically colored volumes can be rendered, although
no actual color information is known. A powerful approach for
5.5. Domain specific aggregation transferring colors is to train a neural network for this purpose, one
example being the work by Muraki et al. [MNKT01]. The authors
Although many papers have been written with a specific applica-
transfer colors from the Female Visible Human dataset to MRI data
tion domain in mind, such as medicine or engineering, most of the
by training an RBF network.
presented techniques are universally applicable. In some cases, how-
ever, the data exhibits very subtle and domain-specific structures Instead of transferring colors on only a per-voxel basis, larger
that require consideration through highly specialized methods. texture patterns can also be transferred. Approaches in this direction

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

A general concept is that of style transfer functions, which have


been introduced by Bruckner and Gröller [BG07]. Similarly to a
traditional TF, optical properties are assigned according to voxel
density. However, the optical properties are not simple colors and
opacities, but instead entire rendering styles. The different styles are
described using an image-based lighting model built on lit sphere
maps (Figure 11). Different styles can be interpolated, which allows
for combining them into a single illustrative volume rendering.
This concept was taken further by Rautek et al. [RBG07], who
have introduced the concept of determining a TF using semantic lay-
ers by mapping several volumetric attributes to multiple illustrative
visual styles. Semantic layers allow a domain expert to specify this
mapping in the natural language of the domain using a linguistic
specification. This specification is then mapped to the visual style
used for volume rendering by employing fuzzy logic techniques.
This concept was later extended to include interaction dependent
semantics [RBG08].
Csébfalvi et al. [CMH∗ 01] have proposed a model to render con-
tours of volumetric objects by combining two measures in a way that
is similar to a gradient magnitude-weighted TF. The first measure
ascertains how much a voxel corresponds to a surface, which is
determined from the gradient magnitude. The second measure estab-
lishes how much a voxel corresponds to the silhouette of an object,
which is determined from the angle (via the dot product) between
the gradient direction and the view direction. Both measures are
then multiplied to determine contours within the volume without
Figure 11: Segmented volume rendered with a MD style TF based requiring an explicit correspondence to isosurfaces.
on data value and object membership. The base of the image depicts Maximum intensity projection (MIP) is an alternative to composit-
the lit spheres for the different styles. Image courtesy of Bruckner et ing colors and opacities from a TF that uses the emission-absorption
al. [BG07]. volume rendering integral given in Section 2. MIP retains only the
maximum value encountered along each ray. This approach provides
the advantage of avoiding the need to specify a full TF. However,
MIP has the disadvantage of losing occlusion and depth cues. A way
differ by transferring patterns from either 2D texture maps or 3D to circumvent this disadvantage and to combine the advantages of
texture maps. For virtual colonoscopy applications, Shibolet and MIP and classical DVR approaches has been proposed by Bruckner
Cohen-Or [SCO98] present a two-step approach to map patterns and Gröller [BG09] with the maximum intensity difference accu-
from 2D textures to the 3D non-convex surface of the colon. In mulation (MIDA) approach. By using MIDA, it is also possible to
this way, a more realistic look of the colon is obtained. Lu and obtain a smooth seamless transition between MIP and DVR.
Ebert [LE05] describe a system for obtaining example-based volume
illustrations. They use colored example images to compute a colored As presented by Hernell et al. [HLY07], the TF can also de-
3D volume with a similar look using texture synthesis. They then fine multiple rendering properties. Here not only the traditional
transfer this look to a volume for rendering either slices or the whole absorption/scattering property is used but also light emission, where
volume. selected materials are treated as light sources as well. This method
works in combination with the ambient occlusion and global illumi-
Traditional color and opacity TFs can be combined with the appli- nation approaches.
cation of 3D texture patterns to the volume, as has been performed
by Manke and Wünsche [MW09] using texture TFs. This com- 6.2.1. Focus and context techniques
bination results in what they call texture-enhanced direct volume
rendering, which can be used to visualize supplementary data such TFs that contain information about segmented objects can be effi-
as material properties and additional data fields. ciently employed for focus+context approaches. The user’s attention
should be directed toward the focus, but the context should still be
available in a less emphasized and unobtrusive way.
6.2. Rendering styles and shading
As introduced by Viola et al. [VKG04, VKG05], by assigning
The general idea of a TF can also be extended to determine the different parts of a volume, a corresponding object importance value
rendering style according to volume properties. Approaches range leads to the concept of importance-driven volume rendering. These
from determining or modifying the shading model that is used, to importance values are then used in the volume rendering to estab-
general illustrative, non-photorealistic volume rendering techniques. lish a view dependent priority regarding the visibility of different

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

specialized methods, such as in the work of Fritz et al. [FHG∗ 09]


concerning the inspection of steel fiber reinforced sprayed con-
crete. Another popular field for illustrative volume rendering is
earth science, in particular the visualization of seismic data. Patel
et al. [PGTG07, PGT∗ 08] have presented an algorithm to generate
textbook like seismic illustrations. A number of layers are com-
bined, such as 2D textures and 3D volumes, and the 3D volumes are
blended with the 2D textures by 1D TFs.

6.3. Animation and temporal techniques


Animations serves as a natural means to convey time-dependent
Figure 12: Different combinations of context layers for fo-
data. Moreover, adding animations can be a very powerful technique
cus+context visualization in ClearView [KSW06]. Image courtesy
to provide insight into volumetric data, even if the data are not time
of Krüger. Copyright 2006 IEEE.
dependent.
One way of using animation techniques is to expose the uncer-
tainty in the data. Lundström et al. [LLPY07] have tackled this
objects. Doing so allows for an easy distinction to be made between
important problem by mapping the data uncertainty to an interactive
objects in the focus and objects in the context, corresponding to
animation of the volume. The TF is decomposed into material com-
more important and less important objects, respectively.
ponents. Each component can be given a probability separate from
Another approach involving focus and context is outlined in the the material property opacity. Material mappings are then animated
work by Hadwiger et al. [HBH03] who present GPU-based two-level to convey the probability of the material, that is, the probable extent
volume rendering of segmented volume data. In this work, different of the material.
segmented objects can have different TFs, different rendering modes
Another approach to incorporate animations is to morph between
(such as DVR or MIP), and different compositing modes. The latter
TFs, a technique proposed by Wong et al. [WWT09]. The user
capability enables two-level volume rendering, which comprises one
specifies a start-TF and an end-TF and the system automatically
local compositing mode per object, and a second global composit-
interpolates all intermediate TFs to obtain a full animation.
ing level that combines the contributions of different objects. This
scheme can be used to distinguish between focus and context, for The reverse concept, taking temporal data and mapping it into
example rendering the context using the volumetric object-contour a static view, is presented by Balabanian et al. [BVMG08], who
technique proposed by Csebfalvi et al. [CMH∗ 01] and rendering introduce temporal styles, so called style TFs, for time varying
the focus with standard DVR and a regular 1D TF. volume data. Their method condenses multiple time steps of a time
varying dataset into a single view, in which the TF provides different
Illustrative approaches can be integrated with focus and con-
styles at different time steps, allowing for the depiction of start/end
text. Bruckner et al. [BGKG05, BGKG06] employ a modification
conditions and internal transition points.
of the compositing equation to distinguish focus from context for
illustrative context-preserving volume rendering. This strategy en-
ables the efficient simultaneous visualization of interior as well as 6.4. Sample reconstruction
exterior structures in a volume dataset. The VolumeShop system de- The way in which a TF can or should be applied to volumetric
veloped by Bruckner and Gröller [BG05] combines many different data often depends on sampling and function reconstruction con-
focus+context and illustrative shading techniques into an applica- siderations. Two important considerations are: 1) how should a TF
tion for general interactive illustrative volume rendering. Different be applied to segmented data when considering the boundary be-
segmented objects can have different TFs and different shading tween different materials or segmented objects; and 2) how should
modes. Cutaway views and ghosting are other powerful methods for a TF be applied to down sampled volume data in the context of
obtaining volumetric illustrations. multiresolution volume rendering.
Another approach for focus+context rendering is blending multi- A feature aware approach for applying multiple 1D TFs in an ac-
ple focus and context layers into a single output image, as carried out curate way to a volume containing multiple materials is presented by
in the ClearView system of Krüger et al. [KSW06]. See Figure 12 Lindholm et al. [LJHY14]. This approach achieves a better preser-
for examples of combining different layers. vation of feature boundaries than previous work. The approach
estimates the local support of materials before performing multiple
6.2.2. Domain-specific examples of stylistic approaches
material specific reconstructions. This estimation prevents much
In some domains, a specific visualization style has been developed. of the misclassification traditionally associated with transitional re-
Domain scientists have developed excellent skills in interpreting gions when TFs are applied. A result example is shown in Figure 13.
these types of illustrations over multiple centuries.
Younesy et al. [YMC06] describe a subtle problem in multiresolu-
Medicine is one well known domain-specific example, and not tion volume rendering: the TF should be applied to the distribution
unexpectedly, a large number of papers cited in this STAR use med- or histogram conceptually associated with each voxel in a down-
ical data as examples. However, other research areas also require sampled volume, instead of to the down-sampled voxel value. In

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Kroes et al. [KPB12], a more general volume rendering pipeline has


been developed to support full Monte Carlo path tracing, which also
works with HDR.

7. Automation of Transfer Function Definition


Consulting the volume rendering and TF-related literature of the
last decade reveals a motivational statement that can be found in
many of the abstracts: Volume rendering has established itself as
a powerful visualization tool in many domains, but the necessary
design of TFs is a time consuming and tedious task. Consequently,
a significant amount of work is dedicated to the automatic and semi-
automatic generation of TFs. Based on the fundamental notions and
classifications of the TF concept, this section provides an overview
Figure 13: The left image shows the result of standard volume of the higher level aspects of TFs. This section also examines the
rendering with continuous reconstruction. On the right, the red improvement of functionality and efficiency by introducing automa-
“hull” is avoided by continuous reconstruction within each feature tization through data driven approaches as well as by utilizing user
while preventing interpolation between features as proposed by knowledge encoded through interaction.
Lindholm et al. [LJHY14]. Image courtesy of Lindholm. Copyright
2014 IEEE.
7.1. Adapting presets
Instead of starting the TF design process anew for every new dataset,
order to restrict memory usage, these authors substitute the his- Rezk-Salama et al. [RSHSG00] have proposed the reuse of existing,
togram associated with each voxel with the corresponding mean and manually designed TFs from previous similar datasets and adjusting
variance. The TF is then applied to pairs of mean and variance for them by non-linear distortion to fit the new data. To perform the dis-
each voxel, which leads to an improved quality in multiresolution tortion, the work has compared both the histograms of the datasets
volume rendering. and improved the initial results using Kindlmann and Durkin’s po-
sition function [KD98]. Castro et al. [CKLG98] have suggested to
The same problem is addressed by Sicat et al. [SKMH14], who ap- work with labeled components to further facilitate adjusting of pre-
ply the TF to a whole distribution of voxel values. However, instead sets at this component level. Rezk-Salama et al. [RSKK06] later re-
of being restricted to the mean and variance of a distribution, their fined their approach to utilize existing TFs. They have recommended
goal is to represent the distribution accurately, while simultaneously establishing a domain-specific semantic model of components in
keeping memory usage low. This aim is achieved by employing a the data and the manual definition of one or more 2D TF primitives
sparse representation with 4D Gaussian basis functions in a com- for each component. For a number of datasets, these primitives are
bined 4D space composed of a 3D volume domain plus a 1D TF adjusted manually to account for the variation between datasets. The
domain. adjustments are projected to one dimension using the first principal
In addition to representing data distributions in multiresolution component, and this single parameter is exposed to the novice user
volumes, a histogram for each voxel can also be used to encode un- as a slider.
certainty, such as the concept of hixels that is presented by Thomp- In extension of their previous work on LiveSync, a system to syn-
son et al. [TLB∗ 11]. Hixels provide a histogram of data values for chronize 2D slice views and volumetric views of medical datasets,
each voxel, which can be employed in the analysis and rendering of Kohlmann et al. [KBKG08] have presented a new workflow and
large-scale volume data. interaction technique to aid the TF generation. As a starting point,
Another way to take on the problem of multiresolution volume the user selects a point of interest (POI) in the slice view, and the
rendering is from the perspective of defining which optical properties system refines and adapts TF presets based on statistical properties
should be down-sampled in order to obtain lower resolutions while derived from the selected POI. The system provides additional sup-
retaining good quality. Kraus and Bürger [KB08] have compared portive features, such as smart clipping, to reveal volume views that
down-sampling RGBA volumes, applying an RGBA TF, to down- are not occluded.
sampling the corresponding extinction coefficients instead, which
led to better results.
7.2. Semi-automatic generation
Pursuing automatic and semi-automatic generation of TFs is the
6.5. High dynamic range rendering
ultimate goal in many application domains since they enable a
In a fashion similar to rendering in general, volume rendering can be more widespread use of volume rendering. In most situations, it
extended from a low dynamic range to a high dynamic range (HDR) makes sense to allow the user to change the TF manually even
rendering. The work of Yuan et al. [YNCP05, YNCP06] extends if it was generated automatically. Consequently, we distinguish
this concept to specifying opacities in the TF with higher precision, between automatic and semi-automatic methods by their capability
in addition to using a larger range for the colors. As outlined in to generate an (initial) TF with or without user intervention.

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Many different approaches have been used over the years to al. [PRMH10] have related shape properties to features, rather than
accomplish the goal of a semi-automatic TF generation. He et to voxels. This approach aids the user in the assignment of ma-
al. [HHKP96] were the first to propose a semi-automatic approach to terial properties for different volumetric structures. The different
TF design in 1996. They employed a “supervised genetic algorithm” structures are identified using a curve-skeleton analysis in a pre-
approach to find the best TF with either the user or the system itself, processing step. Ip et al. [IVJ12] have demonstrated an algorithm
automatically ranking generations to identify the fittest members to segment the 2D histogram and allow the user to select the most
for the next iteration. Early work on semi-automatic generation also appropriate segments hierarchically for which they can generate
includes the work by Castro et al. [CKLG98] who have proposed separate TFs.
a component-based weighted mixture to yield the complete TF.
With MD data, data with multiple attributes at each voxel, project-
Components can then be referred to at a higher abstraction level as
ing the high-dimensional space to lower dimensional spaces formed
different tissues, thus providing a semantic description. In adopting
by combinations of attributes can guide the user in the selection of
a novel view to the TF as a two-step process, Fang et al. [FBT98]
an appropriate TF. Such multivariate data has been addressed by Liu
have described a semi-automatic algorithm. In their implementation,
et al. [LWT∗ 14] who have used dynamic and animated projections
they first apply a transformation to the entire dataset using image en-
to guide the user through the attribute space. The user is presented
hancement and boundary detection operations, followed by a simple
with multiple projected views in the TF design process.
1D linear ramp to “color” the dataset.

Leveraging histograms is a common method for the semi-


7.3. Automatic generation
automatic generation of TFs. While focusing on material boundaries
in datasets, Kindlmann and Durkin [KD98] have considered the 3D The automatic generation of TFs does not require user interaction.
histogram volume defined by attribute value, first and second direc- For the visualization of uncertainty of isosurfaces, Pfaffelmoser
tional derivative. In this space, they have defined a model that has et al. [PRW11] have used an automatically generated TF to color
classified boundaries and then have used this model for automatic the surfaces based on the approximate spatial deviation of possible
opacity function generation. Roettger et al. [RBS05] have used a 2D surface points from the mean surface (see Section 4.2). The idea of
value/gradient histogram and x, y, z spatial coordinates of the cor- using Morse theory to automatically decompose the feature space
responding values to classify the dataset. Prauchner et al. [PFC05] of the volume into a set of cells has been presented by Wang et
have combined the Design Galleries [MAB∗ 97] concept with the al. [WZL∗ 12]. Using cell separability, cells that potentially contain
histogram approach of Kindlmann and Durkin to evaluate detailed important features are retained, whereas those most likely originat-
views of the TF, thus guiding the user in finding suitable TF settings. ing from noise are rejected. The remaining cells are transformed
Correa and Ma [CM09b] have aimed to automatically optimize the into a hierarchical representation by successively merging them. Fi-
visibility of certain value ranges. The ranges are selected by the nally, the user is presented with these cells and can assign color and
user through modulating the opacity to ensure that otherwise hidden opacity values to the individual cells or groups of cells. Fujishiro et
features are visible. The approach uses a visibility histogram along al. [FAT99] and Fujishiro et al. [FTAT00] have automated the color
the rays. Correa and Ma have later extended their work [CM11] by assignments for the TF with a starting point in 3D field topology and
adding another dimension to the visibility histogram and by also graph representations. Wang and Kaufman [WK12] have used an
extending the algorithm to a view independent iterative mode. importance function that the user can modify to automatically com-
pute the TF. The importance of objects in the volume is conveyed
Clustering with feature space is another method for the semi-
using the saliency of a color. Bramon et al. [BRB∗ 13] have used
automatic generation of TFs. Šereda et al. [ŠVG06] have employed
information theoretic strategies that are based on concepts, such
an approach based on agglomerative hierarchical clustering in the
as informativeness and informational divergence, to automatically
two-dimensional LH domain [ŠBSG06] for identifying material
define multimodal TFs.
boundaries. The clustering can be computed according to two al-
ternative similarity measures. The first measure clusters similar
boundaries, which is determined by their distance in LH space. The 7.4. Supervised machine learning
second measure clusters boundaries that are spatially connected
in the volume. The resulting cluster hierarchy can be explored by The design of a “good” TF requires some experience and do-
the user to facilitate semi-automatic TF design. Maciejewski et main knowledge. An interesting research question that arises is
al. [MWCE09] have proposed a method for 2D clustering of the if this knowledge can be gained through machine learning. Tzeng
feature space (value, gradient) that automatically generates a set of et al. [TLM05] have proposed to do exactly that. They have sug-
TF components (see Section 5.1). Specifically for abdominal visu- gested to let the domain scientist interact with the materials in a
alization, Selver and Guzelis [SG09] have designed a method that volume directly and hide the underlying classification through the
creates a histogram for each slice of a scan and puts those together use of machine learning. In this work, the user paints onto certain
to create a volume histogram stack. The peaks are then extracted au- areas (similar to a segmentation user interface), and the system uses
tomatically via an RBF network. Nodes are hierarchically generated machine learning to determine a multidimensional TF based on the
at different levels, which can be selected to form groups to which input. They claim that almost any machine learning technique, such
the user assigns TF materials. as artificial neural networks, support vector machines, Bayesian net-
works, or hidden Markov models, is applicable. The authors imple-
Rather than generating the TF semi-automatically, some au- mented neural networks and support vector machines. Both showed
thors propose a semi-automatic transfer of properties. Praßni et promising results without one being superior over the other. Inspired

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

by Tzeng et al. [TLM05], Soundararajan and Schultz [SS15] have


provided a more recent overview in which they studied the use of a
number of different classifiers. The conclusion from their work is
that the random forest approach is the recommended classification
technique. De Moura Pinto and Freitas [dMPF07] instead have pro-
posed the use of unsupervised learning methods (Kohonen Maps) to
automatically reduce the dimensionality of the TF space and assign
color and opacity to the resulting low-dimensional space. Wang
et al. [WCLC06] have assumed a set of parameters from expert
input that classifies volumes, termed an information bank. For a
new dataset, they used Kohonen Maps to find similarities with the
information bank. They generated a TF using the known entries Figure 14: Two Gaussian distributions that combined completely
from the information bank and adjusted those to the current dataset hide the separate peaks in a normal histogram whereas the α-
using a back-propagation network. histogram is able to recover the peaks [LYL∗ 06]. Image courtesy of
Lundström.

8. User Interfaces
Interactive volume rendering is a powerful tool for visual exploration the opacity axis, which better aligns with visual appearance in the
of volumetric data. Interaction and exploration are thus key aspects rendered image. Maciejewski et al. [MJW∗ 13] proposed the abstract
that converge in the user interface for the TF. As stated in the attribute space, a histogram showing a selected statistical measure
introduction, balancing between simplicity and classification power such as mean, standard deviation, or skewness. This space directs
has been the driving force in a substantial body of research in this the user towards peaks in the histogram with greater interest, com-
field. In this section, we discuss the user interface aspects of TFs pared to the standard frequency histogram. The authors also show
and provide an overview of what has been accomplished so far. how the abstract attribute space can be helpful in 2D density plots.
Often times for simulation data, floating point values are provided.
8.1. One-dimensional editors This defines a much larger range than typical color and opacity
ranges, which use 8-bits per channel. When the data range is greater
For the direct volume rendering task, the colors and material prop- than the resolution of the color channels, as in HDR rendering (see
erties are commonly compiled into textures and stored on the GPU Section 6.5), it is necessary to provide a user interface to allow for
where they can be efficiently applied in the process of making an the specification of opacities and colors in the higher range of the
image. In a simple approach, the user can place control points, or data. A user interface to control this mapping can aid the user in
draw a curve, to define colors and opacities, an example of which is highlighting details in the data. Yuan et al. [YNCP05, YNCP06]
shown in Figure 3. At each control point, a color is assigned, and the have proposed to use nonlinear scaling and zooming of the large
vertical position is used to set the opacity. An early example of such HDR TF domain to allow users to adjust the TF even for small scale
an editor is found in Cignoni et al. [CMPS97]. A year later, Castro features.
et al. [CKLG98] have published work on the specification of TFs for
medical data by combining separate components, each component
8.2. 2D and multidimensional editors
representing a tissue of interest. A component was expressed as
an envelope, or a primitive such as a trapezoid, and the combina- Taking the step from one-dimensional TF editors to two-dimensional
tion of all components resulted in the final lookup table. Castro et and multidimensional TF editors undoubtedly raises the complexity
al. carefully studied the effect on the final result for a range of drastically. Many users find that interacting with 1D TFs is difficult,
parameters defining the exact shape of the component. König and yet the classification power is not expressive enough for many do-
Gröller [KG01] also developed a component-based editor for the main needs. In this subsection, we review the literature on 2D and
VolumePro hardware. MD TFs and associated user interfaces.
Histograms are useful for the aggregation of attributes as de- Non-separable 2D TFs have mostly been supported by an editor
scribed in Section 5. They can also aid the user. For example, the exploiting a 2D space. Analogous to the 1D TF editor, and even
global frequency distribution is often shown as a histogram in the more so, the two-dimensional frequency distribution of data values
background of the editing viewport. The global histogram is some- becomes an essential aid in the user interface. As the two dimensions
times of moderate use and provides poor guidance as the features of of screen space are taken by the 2D data value domain, the 2D fre-
interest are often non-visible peaks, drowning in the overall mix of quency distributions of the data domain need to be represented as 2D
materials. One idea proposed by Lundström et al. [LYL∗ 06] is the α- histograms, or density plots, rather than 1D traditional histograms.
histogram, which is able to emphasize locally homogeneous regions Thus, in the discussion of 2D TF editors, both TF components and
in the global histogram (see Figure 14). The Visibility Histogram, 2D histogram are tightly coupled. Kniss et al. [KKH02] have pre-
introduced by Correa and Ma [CM11], is another approach to help sented a set of widgets and interactions tools to define the 2D TF.
the user define a TF by providing feedback of what information The gradient magnitude is most commonly used for the second di-
is lost in the volume for the current TF. Potts and Möller [PM04] mension, and the widgets presented are mostly geared towards that
also studied 1D TF editing. They suggested a logarithmic scale on type of data (see Figure 15). A set of 2D widgets, or parametrized

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Figure 16: Use of parallel coordinates to make range selections of


TF components for multivariate volumes. The center view shows a
temporal 2D histogram [AM07]. Image courtesy of Akibay.

Figure 15: The two-dimensional TF editor widgets allow the user Figure 17: Parallel coordinates with embedded multidimensional
to more precisely control the classification power of the 2D TF. scale plots for editing multidimensional TFs [GXY11]. Image cour-
The triangular widget can also be skewed to better fit the desired tesy of Guo. Copyright 2011 IEEE.
classification domain [KKH02]. Image courtesy of Kniss. Copyright
2002 IEEE.

simultaneous views of two time steps and brushing over the range
in the PC view (see Figure 16). In Zhao and Kaufman [ZK10], the
primitives, for 2D TF editors has also been presented by Rezk-
parallel coordinates have been used in addition to a dimension re-
Salama et al. [RSKK06] Zhou et al. [ZSH12] presented a technique
duction approach that employs a local linear embedding to simplify
for combining 2D and 1D TF editors, in a side-by-side manner, with
the TF design. Dimension reduction has also been used by Guo et
the goal to increase the expressiveness while minimizing user inter-
al. [GXY11] in combination with PC plots. They have integrated
face complexity. They argued that users have familiarity with 2D
multidimensional scale plots within the parallel coordinates plot to
and 1D TF editors, and combining them to achieve 3D TFs provides
allow the user to edit the TF settings in a single consistent view. An
a more effective solution without overwhelming users. A user is
example is shown in Figure 17. Guo et al. [GXY12] later extend
given the choice to select different 2D domains and an optional 1D
their previous work to combine PC plots with multidimensional
TF domain where a TF curve is defined for each widget created in
scaling to avoid the need to adjust parameters for each dimension
the 2D domain. The user can specify selections using widgets in the
separately. Liu and Shen [LS16] have proposed to use association
form of rectangles or freehand drawn areas using a lasso tool.
analysis for multivariate scientific data sets to generate a parallel
Another approach for user interfaces for high-dimensional TFs, coordinate plot. They introduce the concepts of uniqueness and
Lundström et al. [LLY06b] introduced the sorted histograms, for informativeness to select scalar values of interest.
2D and MD editors. Their technique maintains the simpler visual
approach along the style of the 1D TF histogram representation. The
additional attribute is color-coded and stacked in a form similar to 8.4. Brushing and painting
that of the 1D histogram. Selections on the value range can be done Another approach for the TF design is to allow the user to select, or
iteratively to cover additional dimensions of the TF. probe, the data, not unlike the eyedropper in image editing tools. A
TF component is inferred from the point, area, or region of selection.
8.3. Parallel coordinates This interaction scheme is also often combined with various forms
of automation in the TF generation. This approach can be applied
Reaching beyond 2D TF editing becomes highly intricate for any to either the volume rendering view or assisting views, essentially
direct primitive-based editing, such as an envelope or areal shape. slice views.
Several researchers have proposed Parallel Coordinates (PC) to deal
with TF specification. In Akibay and Ma [AM07], this visualiza-
8.4.1. Slice view interaction
tion technique has been used in combination with temporal density
plots and 2D histograms, to show the temporal development for In medical visualization, a DVR view is often combined with one or
time varying multivariate data. Their user interface has supported more slice views, or multiplanar reconstructions (MPRs). Selecting

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

a point or region of interest on such an MPR view is then quick and 8.5. Design galleries
easy, and it also extends to multidimensional data.
Providing exemplar images from specific TFs is another method for
Allowing the user to paint on 2D slices of the data can be used assisting the user in TF generation. Marks et al. [MAB∗ 97] intro-
as the input for training a machine learning algorithm. The inter- duced the concept of Design Galleries, in which the parameter-space
face proposed by Tzeng et al. [TLM03, TLM05] extracts a 10- is automatically explored, and thumbnail views are generated to give
dimensional attribute space and trains a neural network with the data the user feedback on the results for various parameter settings. This
samples brushed by the user. In their interface, the user can draw concept applies to many tasks in computer graphics and animation
strokes for data points to be included or excluded, which is an ap- as well as to the exploration of TF settings. The concept has been fur-
proach similar to that taken in segmentation interfaces. Soundarara- ther explored in various forms by Jankun-Kelly et al. [JKM00], who
jan and Schultz [SS15] use a similar interface in their compara- presented a collection of TF settings in the form of a spreadsheet.
tive study of classifiers. The user interface presented by Zhou and A two-level interaction approach has been presented by Prauchner
Hansen [ZH13] lets users paint line strokes on 2D multivariate et al. [PFC05], in which a set of automatically generated TFs are
slices. Via kernel density estimation, a multivariate TF component used, and renderings are shown in a gallery style. The approach
is automatically defined and can be further refined by brushing in was also explored by de Moura Pinto and Freitas [PF06]. The user
the volume view or via 2D Gaussians in a reduced 2D space. The could either fine-tune the selected TF from the gallery or use it as a
system also supports parallel coordinates manipulation. A similar basis for a stochastic evolutive search in the TF domain. Both papers
approach has been taken by Zhou and Hansen [ZH14], where ad- based the initial TF generation on Kindlemann and Durkin [KD98].
vanced selections for more complex areas of interest on slices of The technique was improved in de Moura Pinto and Freitas [PF08]
multivariate data are performed using a lasso tool that automatically by providing a history tree to track TF design evolution. The user
snaps to boundaries. Based on the multivariate attribute space, the interface was also improved.
selection defines a feature for which a TF definition is automatically
Wu and Qu [WQ07] presented a somewhat different approach but
generated.
allow the user to select some sample images rendered using prede-
8.4.2. Volume view interaction fined TFs. A new TF is then automatically generated by evaluating
intermediate results of a mix of source TFs that optimizes the result
Another approach to sample the data is to interact with the volume to match all source images. Guo et al. [GLY14] propose the Transfer
data directly. Examples are probe based or stroke based as in the Function Map, a meta-visualization to organize and utilize expert
work of Zhou and Hansen [ZH14]. However, this approach requires designed, existing TFs for a given dataset.
some form of initial simple TF setting to show part of the volume
data, which can be a simple gray value gradient. A recent approach with the intention to address novice users with
no prior knowledge of volume rendering and TFs was presented in
The probe tool used by Kniss et al. [KKH02] takes samples of Jönsson et al. [JFY15]. In this system, a novice user is faced with
the volume data on slicing planes displayed in the volume rendering, automatically partitioned ranges and assigned unique colors in a
which in turn result in widgets in the TF domain. The widgets can dynamic gallery. The user can then select to adjust the range and
be manipulated to change the volume rendering interactively. In this opacity and assign a color, using a touch interface and a two-level
way, the user can interact in both the volume domain and the TF interaction scheme.
domain, referred to as dual domain interaction.
An approach for stroke-based TF design has been presented by
8.6. Higher-level interaction
Ropinski et al. [RPSH08], in which the user draws directly on a
monochromatic view of the volume and selects features of interest In this subsection, we discuss work with a specific focus on the
by placing strokes near silhouettes. This approach results in TF com- workflow and guided interaction. An early taxonomy and rules for
ponents that later can be modified/combined and disabled/enabled to selecting colormaps in a general sense were presented in Bergman et
explore the volume. Although Zhou and Hansen [ZH13] primarily al. [BRT95] and implemented as a module, named PRAVDAColor,
painted on 2D slices, the system also allowed the user to further re- in the IBM Visualization Data Explorer, later OpenDX. This module
fine the TF definitions by placing strokes or lassos around features of provides a means to guide the user selecting an appropriate colormap
interest, which are then back-projected onto the volume. A very pow- based on the type of data and task at hand.
erful approach for interacting with volume data is the WYSIWYG
Castro et al. [CKLG98] proposed to add metadata to the TF
(What You See is What You Get) approach to TF design presented
components, such as component labels. The user can then employ
by Guo et al. [GMY11]. They demonstrated a set of intuitive tools to
the metadata to specify a TF by selecting components at a higher
enable interaction directly on the 3D volume rendered image. This
level, potentially selecting from a list, such as bone, fat, muscle. The
work was later extended by Guo and Yuan [GY13] to allow for the
authors further describe how components can be saved as presets and
design of local transfer functions, achieving expressiveness similar
automatically adjusted to new datasets. Reapplying existing presets
to classical segmentation.
and combinations of TF components could greatly improve the
There are, however, considerations to take into account when efficiency for the user. In Rezk-Salama et al. [RSKK06], semantics
interacting by drawing on a volume view. Selections in image space is also assigned to the TF components to allow the user to interact
correspond to a volumetric region, and thus the actual selection may with a high-level panel by adjusting the visual appearance of tissues.
need to use some thresholding of opacity or a similar operation to Selver et al. [SFKH07] further explored a complete system of presets
identify which voxel values are actually of interest. and storage of the TF components. An integration of the software

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

into a DICOM viewer has facilitated the evaluation of the system Robust classification algorithms typically produce a probability
by medical experts, and overall the TF design process has been of the classification. Simply using opacity modulation to represent
shortened. uncertainty combines these probabilities with the material properties
described by the TF. Effective methods for including quantifiable
Since TF specification is often related to domain queries, Natural
uncertainties or probabilities of classification will benefit the domain
Language Processing (NLP) may be desired. An interface using
users. The work described in Section 4.3 represents initial steps in
Natural Language Processing to allow the user to specify rendering
this promising direction.
styles in illustrative rendering is proposed in Rautek et al. [RBG07]
and extended in Rautek et al. [RBG08]. The system allows the user Sensitivity, Robustness, Reproducibility: In certain regions of
to select layered rendering styles with simple expressions, such as the domain, TFs can be very sensitive, whereas large changes in
"if principal curvature is not positive then contours are blueish." other regions do not produce large changes in output. Scanned
data often contains a certain variability across a slice or within a
stack of slices that make up the volume. Classification robustness to
9. Open Research Questions such variability has not been well explored. Techniques are needed
to cope with these different types of sensitivity. TFs provide the
While this survey reports on the extensive amount of research con-
mapping for visual results, but are often not directly quantifiable.
ducted on TFs, there remain open questions that call for future
Reproducibility would be enhanced with more quantitative results.
study. Many areas of future research are interesting but this section
In this respect, more work on the automated generation of TFs
describes some that are of high priority.
(Section 7) and user interfaces (Section 8) that is aware of sensitivity
Data Modality: TFs are very well suited for specific types of data and robustness is necessary.
such as computed tomography. This is not the case for other data
modalities. TFs have been less successful for scanning modalities Spatial Information: The simplest density-based TFs do not
such as MRI, fMRI, or ultrasound. It would be helpful to identify take spatial information into account. Various extensions include
data characteristics that describe which data is suitable for which local or more global spatial information in the TF specification. For
type of TF. The works described in Section 5.5 can provide starting a specific data and task at hand, it would be interesting to identify
points in this direction. how much spatial information is needed, from none to all spatial
information from the data domain. An open topic is to research and
Photometric Classification: In almost no case are the physical
possibly identify the "sweet spot" in going from a TF to a more
properties of the TF defined, meaning the material density per unit
elaborate classification/segmentation scheme. Various approaches
of physical length of the modeled material. In general, material
that investigate the wider local neighborhood have been discussed
properties are rarely specified, beyond a basic color and opacity. A
in Sections 4.5, 5.1, 5.4, and 6.4.
few papers deal with styles of the TF. These papers emphasize that
the shader or shading model being used can be controlled by the User Interfaces: Interacting with TFs remains difficult for end
TF. Essentially, two open aspects are given here. As the materials users. As the dimensionality exceeds 2D, such interaction becomes
present in the volume have physical properties, the TF should, rather, an even greater problem. If we consider the data-user-task design tri-
be defined in relation to these quantities. For example, opacity is angle, the user interface could address the data area, or the task area,
derived from material density, which clearly should be defined per or be focused on the user. User interfaces for TFs should support the
unit length. Similarly, the TF should provide the ability to define user during both specification and editing. Better interfaces might
photometric properties, such as the BRDF or subsurface scattering of provide more effective representations of features in either the data
the materials to be used. Some research has explored the description range or visual domain, or in combinations thereof. Higher-level
of various material properties (see Section 6.2), but such properties interaction techniques, described in Section 8.6, that decrease the
are not photometrically based. cognitive load on the user are a promising area that can bring TFs to
Physical Dimensions: Most volume rendering research is es- the next level of usability.
tablished on a unit scale of voxels, and not physical dimensions.
Scalability: Scalability concerns the size of the data domain,
Therefore, the result depends on the underlying sampling density.
the number of attributes, the number of datasets, etc. Often not a
As such, it leaves most relevant measures out of touch with end-
single data volume is investigated but rather entire ensembles or
users and domain applications, making it difficult to reproduce. For
longitudinal time series, perhaps consisting of hundreds of datasets.
example, finding a feature in a medical dataset, whose size is given
Examples of work on high-dimensional attribute data is described
like "three voxels wide", is entirely irrelevant. Medical features
in several parts of the report, such as in Section 5.3 and Section 7.4.
should have width expressed in a physical unit relating to human
Not only is there variability in a particular dataset, but collections
physiology. Neglecting the physical background is due to the on
of datasets may have a significant variability between individual
purpose simplified TF setup where material classification and visual
datasets. While maintaining quantifiable results, adaptability would
mapping are intertwined (see Section 3.1). The complex attributes
benefit the TF specification for such collections. Attributes are in-
as discussed in Section 4 and Section 5 might serve as an inspiration
creasingly available at each spatial position. Effective techniques for
to more fully consider the physical context of the given application.
addressing such datasets, perhaps through massive multidimensional
Quantifiable Uncertainty: Uncertainty quantification has been TFs, are required. In particular, new techniques may emerge from
an active field of research for quite some time, but, in general, data analytics that would benefit the TF specification for such data
uncertainty representation and visualization has not been solved. (Section 7).

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions

Transfer Functions for Visualizations in the Wild: The com- [AFK∗ 14] A MIRKHANOV A., F RÖHLER B., K ASTNER J., G RÖLLER
ponents and setup of a visualization can be easily controlled under E., H EINZL C.: InSpectr: Multi-modal exploration, visualization, and
laboratory conditions. However, this is not the case in real-world analysis of spectral data. Computer Graphics Forum (Proc. of EuroVis)
33, 3 (June 2014), 91–100. doi:10.1111/cgf.12365. 6
scenarios. Visualization will become increasingly concerned with
[AM07] A KIBAY H., M A K.-L.: A tri-space visualization interface
combining a dynamically changing number of datastreams with
for analyzing time-varying multivariate volume data. In Joint Euro-
limited scope, validity, and accuracy that provide complementing or graphics - IEEE VGTC Symposium on Visualization (2007), EuroVis’07,
even contradicting information. The number of streams displayed at Eurographics Association, pp. 115–122. doi:10.2312/VisSym/
the same time may vary considerably. Some of these streams will EuroVis07/115-122. 16
be volumetric data. How TFs can be adjusted to accommodate this [AWHS16] A BBASLOO A., W IENS V., H ERMANN M., S CHULTZ T.:
highly volatile situation, and to ensure expressivity while avoiding Visualizing tensor normal distributions at multiple levels of detail. IEEE
clutter and visual overload, will be a compelling research question. TVCG 22, 1 (Jan. 2016), 975–984. doi:10.1109/TVCG.2015.
2467031. 10
Starting points for this research could be some of the works in
[BCP∗ 12] B RAMBILLA A., C ARNECKY R., P EIKERT R., V IOLA I.,
Sections 5, 7.1, and 8.
H AUSER H.: Illustrative flow visualization: State of the art, trends and
challenges. In Eurographics – STARs (2012), Cani M.-P., Ganovelli F.,
(Eds.), Eurographics Association. doi:10.2312/conf/EG2012/
10. Conclusions stars/075-094. 7
This report has delivered a holistic account of the concept of TFs [BG05] B RUCKNER S., G RÖLLER M. E.: VolumeShop: An interactive
system for direct volume illustration. In IEEE Visualization (2005),
and systematically described the enabling part TFs play in making
pp. 671 – 678. doi:10.1109/VISUAL.2005.1532856. 5, 12
volume rendering one of the most successful visualization meth-
[BG07] B RUCKNER S., G RÖLLER M. E.: Style transfer functions for illus-
ods available. The report emphasizes the multifaceted use of the
trative volume rendering. Computer Graphics Forum (Proc. of Eurograph-
TF, ranging from a classification tool and representation of optical ics) 26, 3 (Sept. 2007), 715–724. doi:10.1111/j.1467-8659.
properties, to knowledge encoding and visual design, and as a tool 2007.01095.x. 5, 11
for interactive exploration. We conclude that research on TFs has [BG09] B RUCKNER S., G RÖLLER M. E.: Instant volume visualization
reached a level of maturity that merits this review of the research using maximum intensity difference accumulation. Computer Graphics
contributions made to the field over the past decades. Based on our Forum (Proc. of EuroVis) 28, 3 (2009), 775–782. doi:10.1111/j.
experience of research in the field, and working closely with domain 1467-8659.2009.01474.x. 11
experts, we have identified areas in which research is needed to [BGKG05] B RUCKNER S., G RIMM S., K ANITSAR A., G RÖLLER M. E.:
Illustrative context-preserving volume rendering. In Joint Eurographics -
take the next important steps to support emerging volumetric ex-
IEEE VGTC Symposium on Visualization (2005), EuroVis’05, Eurograph-
ploration concepts and further improve the utility of TF-based data ics Association, pp. 69–76. doi:10.2312/VisSym/EuroVis05/
exploration. 069-076. 12
[BGKG06] B RUCKNER S., G RIMM S., K ANITSAR A., G RÖLLER M. E.:
Illustrative context-preserving exploration of volume data. IEEE TVCG
Acknowledgments 12, 6 (2006), 1559–1569. doi:10.1109/TVCG.2006.96. 12
This research was partially supported by the Department of Energy, [BRB∗ 13] B RAMON R., RUIZ M., BARDERA A., B OADA I., F EIXAS
M., S BERT M.: Information theory-based automatic multimodal transfer
National Nuclear Security Administration, under Award Number(s) function design. IEEE Journal of Biomedical and Health Informatics 17,
DE-NA0002375, the DOE SciDAC Institute of Scalable Data Man- 4 (July 2013), 870–880. doi:10.1109/JBHI.2013.2263227. 14
agement Analysis and Visualization DOE DE-SC0007446, NIH: [BRT95] B ERGMAN L., ROGOWITZ B., T REINISH L.: A rule-based tool
Fluorender: An Imaging Tool for Visualization and Analysis of Con- for assisting colormap selection. In IEEE Visualization (1995), pp. 118–
focal Data as Applied to Zebrafish Research, R01-GM098151-01, 125, 444. doi:10.1109/VISUAL.1995.480803. 17
NASA NSSC-NNX16AB93A and NSF ACI-1339881, NSF IIS- [BVMG08] BALABANIAN J.-P., V IOLA I., M ÖLLER T., G RÖLLER
1162013. It was further partially supported by the Knut and Alice M. E.: Temporal styles for time-varying volume data, 2008.
Wallenberg foundation: Seeing Organ Function KAW.2013.0076. Poster presented at 3D Data Processing, Visualization, and Trans-
mission. URL: http://www.cg.tuwien.ac.at/research/
The Swedish Research Council: 2015-05462. Swedish e-Science
publications/2008/balabanian-2008-tst/. 12
Research Council (SeRC) and Excellence Center at Linköping -
[BZGV14] B ISTA S., Z HUO J., G ULLAPALLI R. P., VARSHNEY A.:
Lund in Information Technology (ELLIIT). This research was made Visualization of brain microstructure through spherical harmonics illumi-
possible in part by the Intel Visual Computing Institute, by the nation of high fidelity spatio-angular fields. IEEE TVCG (Proc. of Vis.) 20,
NIH/NCRR Center for Integrative Biomedical Computing, P41- 12 (Dec. 2014), 2516–2525. doi:10.1109/TVCG.2014.2346411.
RR12553-10, and by Award Number R01NR14852-01A1 from the 10
National Institute Nursing Research. It was further partially sup- [CBW∗ 12] C HILDS H., B RUGGER E., W HITLOCK B., M EREDITH J.,
ported by King Abdullah University of Science and Technology A HERN S., P UGMIRE D., B IAGAS K., M ILLER M., H ARRISON C.,
(KAUST) baseline funding. W EBER G. H., K RISHNAN H., F OGAL T., S ANDERSON A., G ARTH
C., B ETHEL E. W., C AMP D., RÜBEL O., D URANT M., FAVRE J. M.,
NÁVRATIL P.: VisIt: An end-user tool for visualizing and analyzing very
large data. In High Performance Visualization–Enabling Extreme-Scale
References Scientific Insight, Chapman & Hall/CRC Computational Science. CRC
[AD10] A RENS S., D OMIK G.: A survey of transfer functions suitable for Press, Oct 2012, pp. 357–372. 5
volume rendering. In Eurographics/IEEE VGTC Symposium on Volume [CIB15] CIBC:, 2015. ImageVis3D: An interactive visualization software
Graphics (2010), VG’10, Eurographics Association, pp. 77–83. doi: system for large-scale volume data. Scientific Computing and Imaging
10.2312/VG/VG10/077-083. 1 Institute (SCI). URL: http://www.imagevis3d.org. 5

c 2016 The Author(s)


Computer Graphics Forum c 2016 The Eurographics Association and John Wiley & Sons Ltd.
Patric Ljung et al. / STAR: Transfer Functions
