
Phace: Physics-based Face Modeling and Animation

ALEXANDRU-EUGEN ICHIM, EPFL


PETR KADLEČEK, Charles University in Prague / University of Utah
LADISLAV KAVAN, University of Utah
MARK PAULY, EPFL

Fig. 1. Physics-based simulation facilitates a number of advanced effects for facial animation, such as applying wind forces, fattening and slimming of the
face, wearing a VR headset, and even turning into a zombie.

We present a novel physics-based approach to facial animation. Contrary to commonly used generative methods, our solution computes facial expressions by minimizing a set of non-linear potential energies that model the physical interaction of passive flesh, active muscles, and rigid bone structures. By integrating collision and contact handling into the simulation, our algorithm avoids inconsistent poses commonly observed in generative methods such as blendshape rigs. A novel muscle activation model leads to a robust optimization that faithfully reproduces complex facial articulations. We show how person-specific simulation models can be built from a few expression scans with a minimal data acquisition process and an almost entirely automated processing pipeline. Our method supports temporal dynamics due to inertia or external forces, incorporates skin sliding to avoid unnatural stretching, and offers full control of the simulation parameters, which enables a variety of advanced animation effects. For example, slimming or fattening the face is achieved by simply scaling the volume of the soft tissue elements. We show a series of application demos, including artistic editing of the animation model, simulation of corrective facial surgery, and dynamic interaction with external forces and objects.

CCS Concepts: • Computing methodologies → Physical simulation;

Additional Key Words and Phrases: 3D avatar creation, facial animation, anatomical models, rigging

ACM Reference format:
Alexandru-Eugen Ichim, Petr Kadleček, Ladislav Kavan, and Mark Pauly. 2017. Phace: Physics-based Face Modeling and Animation. ACM Trans. Graph. 36, 4, Article 153 (July 2017), 14 pages.
DOI: http://dx.doi.org/10.1145/3072959.3073664

1 INTRODUCTION

Accurate simulation of facial motion is of paramount importance in computer animation for feature films and games, but also in medical applications such as regenerative and plastic surgery. Realistic facial animation has seen significant progress in recent years, largely due to novel algorithms for face tracking and improvements in acquisition technology (Klehm et al. 2015; von der Pahlen et al. 2014).

High-end facial animations are most commonly produced using a sophisticated data capture procedure in combination with algorithmic and manual data processing. While video-realistic animations can be created in this manner, the production effort is significant and costly. A main reason is that complex physical interactions are difficult to recreate with the commonly employed reduced model representations. For example, in blendshape rigs, collisions around the lip regions or inertial effects of the facial tissue are typically not accounted for. To remedy these shortcomings, artists often introduce hundreds of corrective shapes that need to be carefully sculpted and blended to achieve the desired effect in each specific animation sequence (Lewis et al. 2014).

Recent work (Barrielle et al. 2016; Ichim et al. 2016) proposes to avoid these shortcomings by augmenting the generative approach of blendshape animation with a simulation-based solution.

A key benefit of physics-based simulation is the ability to correctly handle collision and contact, both for internal contact of facial tissue or bones, as well as for collisions with external objects. In addition, secondary motion, such as inertial deformations or other time-dependent effects, can easily be integrated into the optimization pipeline.

One major difficulty in simulation-based approaches is to achieve the required level of realism, which is particularly challenging for facial animation, due to the heightened human sensitivity for facial motion perception (Bruce and Young 1986). Accurate simulation requires building a detailed volumetric face model that faithfully represents the shape and dynamics of the captured subject. However, acquiring such a volumetric face model is challenging. Volumetric data produced by CT or MRI scanners is often difficult to convert into a simulation-ready representation. So far, successful pioneering methods required a significant amount of manual editing (Sifakis et al. 2005), which makes them difficult to deploy at scale.

We approach this problem by combining easy-to-obtain facial surface scans with a template model that integrates rigid bone structures, active muscle tissues, and passive flesh, fat, and skin layers in a fully volumetric simulation model of the human face (see Figure 3). By scanning the subject in multiple facial poses, we obtain a representation of the geometry and expression dynamics of the acquired person. We then solve an inverse problem to estimate the activation parameters of the registered template rest pose in order to best reproduce the scanned expressions under activation.

We propose a novel muscle activation model in order to match the input scans more accurately. Unlike previous models that are constrained by fixed fiber directions, our model introduces additional degrees of freedom to support any deformation devoid of global rotation (since a muscle cannot rotate itself). This generalized model avoids the problem of relying on pre-determined fiber directions, which are often inaccurate.

Subsequently, we can create new animations driven by muscle activations using a forward physics simulation that incorporates collision handling, volume preservation, inertia, and external forces such as wind or gravity. Muscle activations can be computed from a temporal sequence of blendshape weights, which enables straightforward integration into existing animation environments.

Contributions. The main technical contributions of our work are:

• a novel muscle activation model that offers more flexibility than standard fiber-based models,
• a physics-based simulation method that retains realism even with significant external forces or substantial modifications of the face geometry and tissue material properties,
• an inverse modeling optimization to adapt the simulation template to a series of expression scans of a specific person.

An important feature of physics-based approaches is that their parameters can be controlled to achieve the desired effects. In our case, the parameters include the stiffness of simulation elements, their rest shape volume, the static bone structure, or the muscle activation parameters. This detailed control facilitates numerous new applications that are difficult to achieve with existing methods. Examples we show in this paper include

• slimming and fattening of the face by adapting the volume of soft tissue,
• simulation of corrective facial surgery, such as orthognathic surgery to correct for jaw malformations,
• dynamic interaction with external forces (e.g. wind) and objects (e.g. VR headsets),
• artistic editing of facial expression dynamics by modifying tissue stiffness or muscle behavior.

Overview. Figure 2 provides a visual summary of our physics-based face modeling and animation approach. Central to our method is a face template model that combines volumetric and surface elements as shown in Figure 3. Physics-based optimization is performed on a tetrahedralized volumetric model composed of rigid bones and deformable tissue. The latter is further separated into active muscles and passive flesh and skin. Muscles actively deform to drive the dynamic motion of the face model. In order to control the animation, we augment the volumetric template with a surface blendshape basis that represents the facial expression space. This also provides an interface to the surface scans used to build actor-specific simulation models.

Fig. 2. Schematic workflow of our method.

The core algorithmic components of our method are the inverse and forward physics simulation modules. Inverse physics is used in a model building stage to create a simulation-ready anatomical face model of a specific person. As input to this preprocessing stage, we assume a set of surface scans that are first transformed to a user-specific blendshape model. An anatomy transfer step warps the volumetric template towards the neutral expression of the blendshape model. Subsequently, our inverse physics solver computes suitable muscle activations of the simulation model to best approximate each expression blendshape.

Given the person-specific simulation model and corresponding muscle activation patterns, we can apply forward physics simulation to compute dynamic face articulations. This animation stage takes as input a temporal series of blendshape weights that are mapped to per-frame muscle activations. External effects such as gravity or object collisions can be incorporated in the simulation to support a wide range of dynamic effects.

The rest of the paper is organized as follows: we first put our work in context by discussing related work in Section 2. In Section 3 we present our simulation template model. Then we introduce the forward and inverse physics simulation algorithms in Sections 4 and 5, respectively. Section 6 explains how these components are integrated into the model building and animation stages. In Section 7 we analyze the behavior of our method and provide comparisons to previous work. We show several application demos in Section 8, before concluding with a discussion of limitations and future work.

2 RELATED WORK
Data-driven methods. A significant body of work in facial animation is based on data-driven techniques. Multi-view stereo acquisition systems are used extensively to acquire detailed geometry and texture models, e.g., (Alexander et al. 2010; Amberg et al. 2007; Beeler et al. 2010). Avatar creation based on simple cell-phone camera acquisition was proposed by Ichim et al. (2015), while depth sensors are often used to create 3D avatars suitable for realtime tracking, e.g., (Bouaziz et al. 2013; Li et al. 2013; Weise et al. 2011). These methods typically rely on data-driven priors to guide the reconstruction process, in particular morphable face models (Blanz and Vetter 1999) or multi-linear (tensor) decomposition (Cao et al. 2014; Vlasic et al. 2005). They can be used in combination with upsampling methods, for example to add subject-specific details such as wrinkles (Bermano et al. 2014). Data-driven methods typically do not capture dynamic effects, even though some recent progress on this front has been made in the case of full-body animation (Pons-Moll et al. 2015).

Advanced acquisition. In recent years we have witnessed significant improvements in the acquisition of facial performance and morphology, in particular detailed skin microstructure (Nagano et al. 2015), eyes (Bérard et al. 2016, 2014), eyelids (Bermano et al. 2015), hair (Hu et al. 2015), lips (Garrido et al. 2016) and teeth (Wu et al. 2016a). Modern methods can also capture medium-scale details (wrinkles) from monocular input in real-time (Cao et al. 2015). Anatomical constraints have proven useful in estimating the rigid transformation of the skull (rigid stabilization) (Beeler and Bradley 2014) and extracting detailed flesh deformations (Wu et al. 2016b).

Anatomical models. Early anatomical models were based on procedural models such as FFD (Chadwick et al. 1989). Procedural muscle models were also used in pioneering work on facial reconstruction (Kähler et al. 2003). Physics-based models of muscles and passive soft tissues were explored by Teran and colleagues (2003; 2005a; 2005b), later extended into a comprehensive biomechanical model of the upper body (Lee et al. 2009), and combined with fluid simulation to study swimming (Si et al. 2014).

Biomechanical modeling is a complex task and several software platforms support soft tissue simulation, such as Sofa (Allard et al. 2007), ArtiSynth (Lloyd et al. 2012), or FEBio (Maas et al. 2012). An important aspect of soft tissue modeling is the capture of material properties (Bickel et al. 2009) and their reproduction using modern fabrication methods such as 3D printing (Bickel et al. 2012).

Algorithmic and numerical aspects of soft tissue simulation continue to be a topic of active research; recently, Fan et al. (2014) proposed an Eulerian-on-Lagrangian method to simulate dynamic musculoskeletal systems, while Saito et al. (2015) applied Projective Dynamics (Bouaziz et al. 2014) to simulate hypertrophy or atrophy of the muscles or fat.

More recently, Kadlecek et al. (2016) studied the inverse problem of full-body modeling, inferring effects such as hypertrophy or atrophy of skeletal muscles from input 3D scans. Despite certain similarities to faces, a key difference is that full-body animation is characterized by muscles moving the bones, e.g., the biceps moving the elbow. In facial animation, the skeletal articulation is limited to the jaw bone, and facial expressions are created mainly by muscles pulling one another without any associated bone motion.

Physics-based face animation. The pioneering work of Sifakis et al. (2005) proposed a fully physics-based facial animation model, built from MRI and laser scan data of one specific subject. The key differences of our method are a more flexible muscle activation model combined with a more efficient inverse physics solver (Section 5). While Sifakis et al. (2005) also solve the inverse activation problem, their approach needs to invert a dense n × n matrix, where n is the number of activation variables. This limits their method to using only low-dimensional activations, such as one activation per muscle, as opposed to our approach that allows for a richer high-dimensional activation model. A key benefit of our approach is that building the simulation model for different people is an almost entirely automatic process, as opposed to the substantial manual work required in previous work (Sifakis et al. 2005). Without the need for detailed parameter tuning, our approach also simplifies facial modifications such as slimming/fattening or geometric edits of the rigid bones.

The problem of scaling physics-based animation to different subjects has also been addressed in Cong et al. (2015). They propose a method that uses only the neutral expression to adapt an anatomical face model to different characters, including fictional ones. However, they do not attempt to closely match specific expressions of the target character as in our approach.

Cong and colleagues (2016) introduce "art-directed muscles", i.e., blendshape models applied to the muscles. This approach caters to experienced visual artists who appreciate direct control over their anatomical rigs. However, the art-directed muscles lack translation and rotation invariance, which limits their ability to generalize, e.g., to significant facial modifications or large external forces inducing displacements of entire muscle groups. We propose a translation and rotation invariant muscle activation model and an automatic inverse physics procedure for inferring muscle activations from target expressions.
Fig. 3. Our template model consists of a volumetric representation of the tissue and bones (a), and a surface blendshape basis to represent the expression
space (d). Muscles are embedded into a non-conforming tetrahedral mesh discretization (b). We explicitly model jaw kinematics with a 5 DoF joint (c) and
utilize low-resolution geometry proxies for faster collision detection for the teeth region (e). Dynamic skin sliding is supported by introducing both sliding
(green) and fixed (red) constraints for bone-tissue connections (f).

Alternative approaches to physics-based facial animation use mass-spring systems (Ma et al. 2012) or finite element modeling of the face as an elastic thin shell (Barrielle et al. 2016). While these methods support certain types of physics-based effects, a surface-only approach does not correctly handle collisions or support volumetric face modifications, such as visualizing the outcome of facial surgery. Modeling interior tissue and bones is also important when the face is subjected to inertial or external forces that visibly expose the rigidity of the internal bone structure.

Volumetric blendshapes as proposed in Ichim et al. (2016) introduce energy terms attracting deformation gradients to their target values derived from input facial expressions of a given person. The volumetric blendshapes are translation invariant, but they lack rotation invariance, introducing similar artifacts as linear elasticity, especially in situations with large external forces (Figure 10). In this paper, we create a model compatible with traditional blendshape interfaces, but we push the anatomical realism further by utilizing a novel muscle activation model, separating active and passive soft tissue layers, and introducing sliding constraints to attach soft tissue to the bones. As a consequence, our model implements a variety of advanced animation effects and supports significant modifications of the face simulation model, which enables a number of new applications as demonstrated in Section 8.

3 TEMPLATE FACE MODEL

Our approach starts from a generic face model – an anatomical face template corresponding to an average human subject (see Figure 3). We created this model from a commercially available anatomical data set (Zygote 2016) that contains polygonal representations of the bones (the skull and the jaw, including teeth), skin (including a realistic model of the oral cavity), and 33 facial muscles. Using the winding-number method of Jacobson et al. (2013) we generate a tetrahedral mesh discretizing the soft tissue of the face. Our tet-mesh conforms to the skin and the bones, but not to the muscles, because a conforming discretization of the numerous thin facial muscles would require prohibitively many elements. Instead, we use a non-conforming discretization where every tetrahedron can represent multiple types of soft tissues. We distinguish between two types of soft tissues: active corresponds to muscles, while passive corresponds to subcutaneous fat, connective tissue and the skin, i.e., tissue that is not voluntarily activated by neural signals (Figure 3-b).

Up to the accuracy of the discretization, the active layer corresponds to the union of all facial muscles, while the passive layer forms the region between the active layer and the skin and fills in areas between the bones. Even though this model is not as accurate as modeling every muscle individually, it captures the key fact that the shape of the skin is affected by facial muscles only indirectly, i.e., the contracted muscles deform passive soft tissue, which consequently induces skin deformations.

Jaw kinematics. The relative motion of the jaw with respect to the skull contributes significantly to the final articulation of the face. The kinematics of the temporomandibular joint is non-trivial, consisting of both rotational and translational motion. In our model (see Figure 3-c), we define the major rotation axis (x-axis, corresponding to mouth opening) as the axis passing through the centers of the mandibular condyles. Halfway through the condyles, we define a perpendicular axis (y-axis) corresponding to vertical jaw rotation. The jaw does not normally rotate about the third orthogonal axis

(z-axis), but it can translate (slightly) in all three directions. This amounts to 5 DoFs for the jaw motion, expressed with respect to the skull, which is treated as a free rigid body (our model does not include the craniocervical junction). We concatenate the kinematic parameters of the jaw bone into a vector b ∈ R^5.
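To make the jaw parameterization concrete, the following Python sketch assembles a rigid jaw transform from b. This is an illustrative reconstruction rather than the authors' code: the rotation order, the use of the condyle midpoint as rotation center, and the parameter layout b = [rx, ry, tx, ty, tz] are our assumptions.

```python
# Hypothetical sketch: composing the 5-DoF jaw pose described above.
import numpy as np

def rotation(axis, angle):
    """Rodrigues' formula: rotation about a unit axis by an angle."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def jaw_transform(b, condyle_center, x_axis, y_axis):
    """4x4 jaw-to-skull transform: rotations about the condylar x-axis
    (mouth opening) and y-axis, plus a small translation; no z-rotation."""
    rx, ry, tx, ty, tz = b
    R = rotation(y_axis, ry) @ rotation(x_axis, rx)
    T = np.eye(4)
    T[:3, :3] = R
    # rotate about the condyle midpoint rather than the world origin
    T[:3, 3] = condyle_center - R @ condyle_center + np.array([tx, ty, tz])
    return T
```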
Template blendshapes. Given an anatomical model of the face, a natural control interface would be activation signals for all motor units. While biologically meaningful, such controls would not be user-friendly, because many motor units can affect a surface point in a complex, non-linear way. Instead, we augment our template model with a set of 48 blendshapes inspired by FACS (Ekman and Friesen 1977) that have been sculpted by an artist on our generic face model. These blendshapes are only defined on the skin as a basis for parametrizing the space of facial expressions. They provide no information about the internal deformations, which are handled by physics-based simulation (Section 4 and Section 5). This combination of surface blendshape basis and volumetric simulation model allows us to retain compatibility with commonly used blendshape controls, while offering the benefits of advanced physics-based simulation effects.

4 FORWARD PHYSICS

The goal of the forward physics algorithm is to compute the deformed soft tissue and resulting skin surface given bone kinematics and muscle activation parameters. We model the latter with a vector a (see "Active tissue" below) that represents the amount of
the jaw motion is controlled by muscle activations (in particular corresponds to the biological structure of muscles, the problem is
the masseter muscle) our model assumes the bones are directly con- that the exact muscle fiber directions are in general not known.
trolled kinematically and the muscle activations are used only to Medical imaging techniques such as diffusion tensor imaging are
create the facial expressions. prohibitively expensive and time consuming, and the signal quality
At the heart of our method is a physics-based model of soft tissue is limited.
elasticity including muscle activation. We define this model using Previous work in graphics (Saito et al. 2015) applied ad-hoc muscle
linear finite elements on our tet-mesh adapted for a given subject. fiber approximations which worked well for major skeletal muscles
Let x denote a vector stacking all degrees of freedom of the soft (such as the biceps), but are not sufficiently accurate for the delicate
tissue, i.e., the 3D coordinates of all nodes. facial muscles. Along with the exact location of muscle insertion
points, tuning of these parameters to obtain realistic facial expres-
Passive tissue. For passive tissue we define deformation energy
sions is possible, but tedious (Sifakis et al. 2005). To circumvent
pass
X
E pass (x) = min Wi µ ∥Fi (x) − Ri ∥F2 + these issues, we propose a different muscle activation model that
Ri ∈SO (3)
i does not require explicit knowledge of fiber directions, but relies on
pass
Wi λ(det(Fi (x)) − 1) 2 , (1) the elementary bio-mechanical fact that muscles can generate only
pass internal forces. In other words, an isolated muscle is not capable
where the index i goes over all tets and Wi≥ 0 denotes the of translating or rotating by itself (even though the muscle can of
volume of the i-th tetrahedron that is occupied by passive tissue, pre- course be translated or rotated due to contact with the surrounding
computed during template construction with Monte-Carlo sampling. tissues).
The first term in Eq. 1 corresponds to the commonly used co-rotated The property that the muscle cannot translate itself is already
elasticity (measure of deviation from rigid motion), while the second guaranteed by the translation invariance of the deformation gradi-
term models the resistance to changes of volume. Fi (x) denotes the ent operator Fi (x). Since a muscle tet should also not rotate itself,
deformation gradient, and Ri is an auxiliary rotation matrix used in we require the activation to be a symmetric 3 × 3 matrix. Every
the co-rotated model (Sifakis and Barbic 2012). µ and λ are material symmetric matrix in R3×3 has an eigendecomposition of the form
parameters that we set by default to µ = 1 and λ = 3. We can change QΛQT , where Q ∈ SO (3) and Λ ∈ R3×3 is diagonal. Therefore, the
these parameters to achieve specific effects as discussed in Section 8. symmetric activation matrix corresponds to non-uniform scaling
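To illustrate Eq. 1, the following minimal numpy sketch evaluates the passive energy of a single tetrahedron; the best-fit rotation R_i of the co-rotated term is obtained via polar decomposition (SVD). This is a didactic reconstruction, not the paper's implementation.

```python
# Didactic sketch of the per-tet passive energy in Eq. 1 (not the paper's
# code). rest, deformed: (4, 3) arrays of tet vertex positions;
# W is the passive volume W_i^pass of this tet.
import numpy as np

def deformation_gradient(rest, deformed):
    """F_i maps rest-state edge vectors to deformed edge vectors."""
    Dm = (rest[1:] - rest[0]).T        # 3x3 matrix of rest edges
    Ds = (deformed[1:] - deformed[0]).T
    return Ds @ np.linalg.inv(Dm)

def passive_energy(rest, deformed, W, mu=1.0, lam=3.0):
    F = deformation_gradient(rest, deformed)
    U, _, Vt = np.linalg.svd(F)        # polar decomposition via SVD
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        U[:, -1] = -U[:, -1]
    R = U @ Vt                         # best-fit rotation R_i
    corot = mu * np.linalg.norm(F - R, 'fro') ** 2
    vol = lam * (np.linalg.det(F) - 1.0) ** 2
    return W * (corot + vol)
```

Summing this quantity over all tets with the weights W_i^pass yields E^pass(x); the active energy introduced next only changes the target of the rotation fit.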
Active tissue. For tets corresponding to the active layer (muscles), we propose a novel activation model. Previous muscle models typically assume a given direction of muscle fibers along which the muscle contracts (Lee et al. 2009; Teran et al. 2005a). While this corresponds to the biological structure of muscles, the problem is that the exact muscle fiber directions are in general not known. Medical imaging techniques such as diffusion tensor imaging are prohibitively expensive and time consuming, and the signal quality is limited.

Previous work in graphics (Saito et al. 2015) applied ad-hoc muscle fiber approximations which worked well for major skeletal muscles (such as the biceps), but are not sufficiently accurate for the delicate facial muscles. Along with the exact location of muscle insertion points, tuning of these parameters to obtain realistic facial expressions is possible, but tedious (Sifakis et al. 2005). To circumvent these issues, we propose a different muscle activation model that does not require explicit knowledge of fiber directions, but relies on the elementary bio-mechanical fact that muscles can generate only internal forces. In other words, an isolated muscle is not capable of translating or rotating by itself (even though the muscle can of course be translated or rotated due to contact with the surrounding tissues).

The property that the muscle cannot translate itself is already guaranteed by the translation invariance of the deformation gradient operator F_i(x). Since a muscle tet should also not rotate itself, we require the activation to be a symmetric 3 × 3 matrix. Every symmetric matrix in R^{3×3} has an eigendecomposition of the form QΛQ^T, where Q ∈ SO(3) and Λ ∈ R^{3×3} is diagonal. Therefore, the symmetric activation matrix corresponds to non-uniform scaling (Λ) in an arbitrary orthonormal coordinate system (Q). In other words, the symmetric matrix represents pure distortion without any change of orientation (Shoemake and Duff 1992) (see Figure 4).

Fig. 4. Visualization of the capabilities of our 6-DoF activation model by squishing a cube, corresponding to a small sample of muscle tissue. The four examples correspond to activations a = [2.0, 0.7, 0.7, 0.0, 0.0, 0.0], [1.7, 0.7, 1.0, 0.0, -0.6, 0.0], [1.8, 1.2, 0.8, 0.2, 0.1, 0.6], and [0.9, 1.2, 1.7, 0.5, -0.5, 0.2].
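The pure-distortion property of symmetric activations can be checked directly. The sketch below maps the six activation parameters to a symmetric matrix and recovers the QΛQ^T factorization; the ordering of the six parameters is an assumption, chosen to match the cube examples of Figure 4.

```python
# Illustrative sketch of the 6-DoF activation matrix (parameter order assumed).
import numpy as np

def S(a):
    """Linear operator S : R^6 -> symmetric 3x3 matrices."""
    return np.array([[a[0], a[3], a[4]],
                     [a[3], a[1], a[5]],
                     [a[4], a[5], a[2]]])

A = S([2.0, 0.7, 0.7, 0.0, 0.0, 0.0])  # first activation shown in Figure 4
lam, Q = np.linalg.eigh(A)             # A = Q diag(lam) Q^T
print(lam)                             # non-uniform scale factors (Lambda)
print(np.linalg.det(A))                # 0.98, close to 1 (volume preserving)
```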
For each active tet, we define an activation vector a_i ∈ R^6 and use a linear operator S : R^6 → R^{3×3} to generate the corresponding symmetric matrix S(a_i) ∈ R^{3×3}. Muscles, like most biological soft tissue, are approximately incompressible, which means that det(S(a_i)) = det(QΛQ^T) = det(Λ) should be close to 1. However, to compensate for discretization errors, we do not enforce det(S(a_i)) = 1 strictly, but only as a soft constraint, as discussed in Section 5.

We use this activation model to define the deformation energy E^act(x, a) of active tissue, where a is a vector stacking the 6-dimensional activation parameters for all active tets. Specifically, we define:

E^{\text{act}}(x, a) = \sum_i \min_{R_i \in SO(3)} W_i^{\text{act}} \, \mu \, \| F_i(x) - R_i S(a_i) \|_F^2 + W_i^{\text{act}} \, \lambda \, (\det(F_i(x)) - \det(S(a_i)))^2,    (2)

where the index i goes over all tets and W_i^{act} ≥ 0 represents the volume of the i-th tet that corresponds to active tissue. Here the co-rotated term aims to find the rotation R_i that best aligns the deformation gradient F_i(x) with S(a_i). The second term encourages the volume ratio of the deformed tet (i.e., det(F_i(x))) to align with the volume ratio of the activation matrix det(S(a_i)), which should be close to 1, i.e., volume conserving.
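Analogously to the passive sketch above, the per-tet active energy of Eq. 2 can be evaluated as follows; the optimal R_i of the co-rotated term is the polar rotation of F_i(x) S(a_i)^T (again an illustrative sketch, reusing the operator S from the previous snippet).

```python
# Companion sketch for Eq. 2 (illustrative only; S is defined above, F is the
# deformation gradient of an active tet, W_act its active volume).
import numpy as np

def active_energy(F, a_i, W_act, mu=1.0, lam=3.0):
    Sa = S(a_i)                         # symmetric activation matrix
    U, _, Vt = np.linalg.svd(F @ Sa.T)  # optimal R_i: polar rotation of F S^T
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] = -U[:, -1]
    R = U @ Vt
    corot = mu * np.linalg.norm(F - R @ Sa, 'fro') ** 2
    vol = lam * (np.linalg.det(F) - np.linalg.det(Sa)) ** 2
    return W_act * (corot + vol)
```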
Bone attachments. Muscles are connected to the bones using a complex network of connective tissue, whose exact function is a matter of active research (Schleip et al. 2013). In animation, the visual importance of skin sliding is well recognized (Li et al. 2013). To distinguish areas where soft tissue is directly attached to the bones from areas where soft tissue slides over the bones, we create two types of constraints: 1) pin constraints and 2) sliding constraints. The pin constraints are straightforward to implement using Dirichlet boundary conditions. The sliding constraints are modeled as point-on-plane constraints on the tangent planes of the bone surfaces. We found this approximation to be sufficient even for curved regions, since the amount of sliding displacement is generally small.

Formally, we express both pin and sliding constraints using a function c(x, b) that depends also on the kinematic parameters b ∈ R^5 of the jaw bone. All of the constraints are satisfied if and only if c(x, b) = 0. We have manually distributed the pin and sliding constraints as shown in Figure 3-f. The constraint types were chosen to achieve realistic deformations. For example, for an eyebrow raise expression, the skin slides over the skull as illustrated in Figure 5.

Fig. 5. An eyebrow raise expression uses the skin sliding feature of our model. The blue arrows show the displacement of the contact vertices between the cranium and the flesh.

Quasi-static solution. In this section we discuss how to compute the quasi-static solution of the forward physics simulation, deferring the discussion of dynamics to Section 6. Quasi-statics means calculating a steady state where all dynamic motion has settled. The quasi-static regime is useful in generating static expressions and is particularly important when solving for muscle activations from observed shapes, as discussed in Section 5. Finding the steady state can be formulated as the following optimization problem:

\min_x \; E^{\text{pass}}(x) + E^{\text{act}}(x, a) + E^{\text{grav}}(x)
\text{subject to} \; c(x, b) = 0, \; p(x) \geq 0,    (3)

where E^grav(x) represents a linear gravitational potential (i.e., the familiar mgh). The inequality constraints p(x) are used to resolve penetrations (collision response) as follows. When collision detection finds a surface vertex penetrating the volumetric face model (see below for more details), an inequality constraint is appended to p. For each offending vertex we find its projection onto the surface and create a tangent plane at this point. The inequality constraint requires the vertex to lie in the half-space opposite the volume.

We solve Eq. 3 by alternating between an interior point solver used to minimize Eq. 3 for fixed collision constraints p, and collision detection to update p. We initially implemented a "homebrew" augmented Lagrangian solver, but ultimately decided to use the IPOPT package (Wächter and Biegler 2006), which has proven to be more robust and usually needs fewer iterations to converge.

Collisions – implementation details. We implemented collision handling between the lips and the teeth and between the upper and lower lip, which are the areas most prone to inter-penetration. Because the geometry of the teeth is quite detailed (and we are not aspiring to simulate, e.g., flossing, where the detail would be necessary), we start by creating proxy collision shapes for the upper and lower teeth (see Figure 3-e). The upper and lower lip geometries are already sufficiently smooth and we do not need any special collision proxy. We detect collisions using AABB hierarchies built for the upper and lower teeth proxy geometries and the upper and lower lips. Since lips are deforming, we recompute the AABB hierarchies at run-time. We did not use techniques to avoid or amortize this recomputation cost as this was not a bottleneck in our implementation.

For each of the collision proxies and the lips, we also manually define a "projection region", i.e., a subset of triangles where inter-penetrating vertices can be pushed to resolve collisions. Previous work considers all surface points as valid candidates for projection (McAdams et al. 2011). In our case this occasionally created problems, such as resolving lip-teeth collisions by projecting the lip vertices behind the teeth, i.e., inside the mouth, which is rarely a plausible solution. Instead of more complicated continuous collision detection, we therefore disallowed these implausible projections. For each of the projection regions, we compute another AABB hierarchy that is used to find the closest point in the projection region, i.e., the location where an inter-penetrated vertex will be pushed in order to resolve the collision. A collision handling example during animation is shown in Figure 6.

Fig. 6. Importance of collision handling. Without collisions, intersections between the teeth and the deformable tissue can occur (left). Our method correctly detects and handles the contact (right).
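The alternating scheme can be summarized by the following structural sketch. This is our paraphrase, not the authors' code: the paper uses IPOPT, for which SciPy's trust-constr solver stands in here, and total_energy, equality_c, and detect_collisions are placeholders for the quantities defined above.

```python
# Structural sketch of the alternating quasi-static solve of Eq. 3.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def quasi_static_solve(x0, b, total_energy, equality_c, detect_collisions,
                       outer_iters=10):
    x, planes = x0, []                    # planes: (vertex id, normal, offset)
    for _ in range(outer_iters):
        cons = [NonlinearConstraint(lambda x: equality_c(x, b), 0.0, 0.0)]
        for v, n, d in planes:            # p(x) >= 0: keep vertex v outside
            cons.append(NonlinearConstraint(
                lambda x, v=v, n=n, d=d: n @ x.reshape(-1, 3)[v] - d,
                0.0, np.inf))
        x = minimize(total_energy, x, method='trust-constr',
                     constraints=cons).x  # minimize Eq. 3 for fixed p
        new_planes = detect_collisions(x) # tangent planes at projections
        if not new_planes:
            break                         # no new penetrations: converged
        planes += new_planes
    return x
```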
5 INVERSE PHYSICS

The previous section explains how to compute face articulations for given bone positions and muscle activations. In this section we discuss the inverse problem. For a given target shape of the skin, we want to compute the corresponding bone parameters b and muscle activations a, which, when used in the forward simulation (Eq. 3), will produce a skin surface close to the input shape.

Optimization formulation. Let t denote the target vertex positions of the skin. The inverse modeling problem can be written as

\min_{x, a, b} \; \| Sx - Tt \|^2 + R^{\text{act}}(a) + R^{\text{sparse}}(a)
\text{subj. to} \; c(x, b) = 0, \; p(x) \geq 0,    (4)
\nabla_x E^{\text{pass}}(x) + \nabla_x E^{\text{act}}(x, a) + \nabla_x E^{\text{grav}}(x) = 0,

where R^act(a) and R^sparse(a) are regularization terms discussed below. The objective term ‖Sx − Tt‖^2 measures how close state x is to the target t. The matrix S selects the simulation nodes corresponding to the skin surface. In addition, S and T encode both position (point-to-point) and point-to-plane distance terms (Rusinkiewicz and Levoy 2001). The point-to-plane terms enable some amount of sliding (tangential motion), which is useful if we do not completely trust the correspondences represented by t. The last vector equality constraint describes the condition of quasi-static equilibrium, i.e., the sum of all forces (gradients with respect to x) is zero. Even though x is also an optimization variable, the desired outputs are the optimal values of muscle activations a and bone parameters b.

Regularization. Without regularization, the optimization of Eq. 4 can lead to over-fitting and anatomically implausible activations a. To provide an appropriate prior on activation patterns, we exploit the geometric structure of the muscles by estimating an approximate preferred contraction direction. Following Choi et al. (2013), we compute these directions by solving a Laplace equation and encode the corresponding orientations for the i-th tet as Q_i ∈ SO(3). The regularizing prior softly penalizes deviations in muscle contraction from the preferred direction and is defined as:

R^{\text{act}}(a) = \sum_{i,m} f_{i,m} \left\| Q_i^T \, \mathrm{diag}\!\left( \gamma_{m(i)}^{-1}, \sqrt{\gamma_{m(i)}}, \sqrt{\gamma_{m(i)}} \right) Q_i - S(a_i) \right\|_F^2,

where i sums over all active tets and m over all muscles. Since our tet mesh does not conform to the muscles, some tets may be occupied only partially by a muscle, or be occupied by several muscles. We calculate the fraction f_{i,m} ∈ [0, 1] of tet i occupied by muscle m by Monte-Carlo sampling (if an active tet contains some amount of passive tissue, we get \sum_m f_{i,m} < 1 and the regularization strength is proportionally reduced, as expected). The contraction parameters γ_m ≥ 0 are auxiliary variables representing the contraction of muscle m. Recall that S(a) is our symmetric muscle activation matrix introduced in Section 4. Intuitively, R^act(a) encourages all tets corresponding to a single muscle to contract in a uniform, volume-preserving way (because \gamma_{m(i)}^{-1} \sqrt{\gamma_{m(i)}} \sqrt{\gamma_{m(i)}} = 1).

In addition to the muscle-activation regularization term R^act, we found it beneficial to also include the following term to promote sparse muscle activations:

R^{\text{sparse}}(a) = \sum_m \sqrt{ \sum_i f_{i,m} \, \| a_i \|^2 }.    (5)

Specifically, this is a group sparsity term similar to L1 regularization, but applied to entire groups – in our case, muscles. This term encourages all activations corresponding to one muscle to remain zero unless contributing significantly to the result. We introduced this term to avoid small spurious activations of remote muscles, which is justified when our target shapes t correspond to traditional FACS-type blendshape models, which isolate individual action units. Figure 7 shows that compared to a naive L2 regularization approach, our method leads to sparser activations that are better aligned with the geometric structure of the muscles.

Fig. 7. Muscle activation regularization. Red lines indicate the direction and magnitude of the dominant muscle contraction, computed from the SVD of the activation matrix.

Numerical solution. As in Section 4, we use interior-point methods (Wächter and Biegler 2006) to solve the constrained optimization problem in Eq. 4. Our implementation of the Hessian of the Lagrangian of Eq. 4 ignores third-order derivatives of E (pretends they are zero), amounting to the commonly used Gauss-Newton approximation of the Hessian (Bickel et al. 2012; Sifakis et al. 2005). We alternate the interior point solver with collision detection that determines the non-penetration constraints p as in Section 4.

Even though the regularization term R^act could be directly incorporated into our optimization objective (adding γ_m as auxiliary variables), we found that this significantly increases the non-linearity of the problem and forces the non-linear solver to take many iterations, each making only slow progress towards the solution. To avoid this problem, we instead use a local-global approach (Sorkine and Alexa 2007). In the local step, the activations a are fixed and we compute optimal γ_m by finding roots of a 6-th order polynomial using the method of Brent (1971). In the global step, we call the interior point solver to optimize a for fixed γ_m, which is an easier optimization problem exhibiting fast convergence.

Figure 8 shows an example of an inverse physics solve for two blendshapes of a user-specific blendshape model, visualizing separately the effect of the jaw motion and the effect of muscle activations.

Fig. 8. Inverse physics finds jaw transformation and muscle activations that accurately reproduce the target blendshapes.
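For concreteness, the two regularizers can be evaluated as in the following numpy sketch. This is our own illustration: the data layout (per-tet activations acts, occupancy fractions frac[i, m], per-tet fiber frames Qs, per-muscle contractions gamma) is assumed, and S is the operator from Section 4.

```python
# Numpy sketch of the regularizers in Eqs. (4)-(5); data layout assumed:
# acts: (n_tets, 6), frac[i, m] = f_{i,m}, Qs: (n_tets, 3, 3), gamma: (n_muscles,).
import numpy as np

def R_sparse(acts, frac):
    """Group sparsity: sum_m sqrt(sum_i f_{i,m} * ||a_i||^2)."""
    norms2 = np.sum(acts ** 2, axis=1)
    return np.sum(np.sqrt(frac.T @ norms2))

def R_act(acts, frac, Qs, gamma):
    """Deviation of S(a_i) from a volume-preserving contraction by gamma_m
    along the preferred direction (first axis of the frame Q_i);
    note (1/g) * sqrt(g) * sqrt(g) = 1, so the target preserves volume."""
    total = 0.0
    for i in range(frac.shape[0]):
        Sa = S(acts[i])
        for m in np.nonzero(frac[i])[0]:
            g = gamma[m]
            target = Qs[i].T @ np.diag([1.0 / g, np.sqrt(g), np.sqrt(g)]) @ Qs[i]
            total += frac[i, m] * np.linalg.norm(target - Sa, 'fro') ** 2
    return total
```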
6 PHACE MODELING AND ANIMATION

In this section we explain how we integrate the optimization algorithms presented above into a complete system for creating and animating subject-specific face simulation models.

Model Building. We start by 3D scanning the face of our subject in a neutral expression and about 5-10 additional premeditated facial expressions using a multi-view stereo setup as described in Ichim et al. (2016). Each of the scans is approximately aligned with the skin of our template model (Section 3) using rigid registration (plus uniform scale). Then we apply non-rigid ICP (Rusinkiewicz and Levoy 2001) to find dense correspondences between the template skin and the target scan, guided with a few manually chosen markers as shown in the inset. We denote the registered skin surfaces as s_neut for the neutral and s_k for the k-th expression.

Next, we deform our volumetric template model such that its boundary (skin) aligns with s_neut. This is accomplished with Anatomy Transfer (Dicko et al. 2013; Ichim et al. 2016). Note that during this process the generic face model can deform freely, i.e., the shape and/or volume of all cells can change, including the bones (in contrast to the deformation model considered in Section 4). We then use Example-Based Facial Rigging (Li et al. 2010) to convert the registered expressions s_k to subject-specific blendshapes c_j, j = 1, ..., 48.

The processing steps so far essentially rely on existing methods to align the volumetric template to the neutral expression and to create the subject-specific blendshape model. We refer to the above cited papers for implementation details on these algorithms. After this geometric preprocessing, we now solve for activations a_j and jaw bone parameters b_j that correspond to each of the blendshapes c_j using the Inverse Physics optimization of Section 5.

Animation. To animate the created face model, we need to feed appropriate muscle activations and jaw bone parameters to the Forward Physics optimization of Section 4 for each animation frame. Given per-frame blendshape weights w = {w_1, ..., w_48}, we compute muscle activations as a = a_neut + \sum_j w_j (a_j - a_neut), where a_neut corresponds to neutral activations, i.e., each activation S(a_{neut,i}) = I ∈ R^{3×3}. Linear blending of the activation parameters is justified because there is no rotational component in symmetric matrices (Shoemake and Duff 1992). Similarly, we compute the blended jaw kinematics parameters b = \sum_j w_j b_j. While blending of rotation angles is in general not recommended, we found that for the limited range of rotations of the jaw this simple scheme does not produce any visible artifacts.
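The per-frame control mapping then reduces to a few lines (a sketch; the array shapes are our assumptions):

```python
# Sketch of the blending step: w: (48,) blendshape weights;
# a_shapes: (48, n_tets, 6) solved activations a_j; b_shapes: (48, 5) solved
# jaw parameters b_j; a_neut: (n_tets, 6) neutral activations.
import numpy as np

def blend_controls(w, a_shapes, b_shapes, a_neut):
    a = a_neut + np.tensordot(w, a_shapes - a_neut[None], axes=1)
    b = b_shapes.T @ w            # blended 5-DoF jaw parameters
    return a, b
```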
Dynamics. Adding inertia corresponds to a minor change of Eq. 3. We use the popular backward Euler integration, which in its optimization form (Liu et al. 2013) corresponds to augmenting the objective of Eq. 3 with the term \frac{1}{2} \| x - (x_n + h v_n) \|_M^2, where x_n and v_n are positions and velocities in the previous frame, h > 0 is the time step, and M is the mass matrix. We use a diagonal matrix M (mass lumping) with a soft tissue density of 1 g/cm³. The minimizer x of Eq. 3 then becomes the new state x_{n+1} and the new velocity is v_{n+1} = (x_{n+1} - x_n)/h. The main difference from the quasi-static solution is that the dynamic solution depends on the previous state (x_n, v_n), i.e., we need to execute the time steps in sequence. To add non-conservative external forces, such as wind, we proceed as in Projective Dynamics (Bouaziz et al. 2014) and change the additional term to \frac{1}{2} \| x - (x_n + h v_n + h^2 M^{-1} f_{\text{ext}}) \|_M^2. Here f_ext is the external force vector, e.g., a wind force is a function of triangle normal, area, and wind direction.
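One backward Euler step in this optimization form can be sketched as follows (illustrative only; masses holds the lumped per-DoF masses, and solve_eq3 stands in for the constrained minimization of Eq. 3 with the augmented objective):

```python
# Sketch of one dynamic time step (Liu et al. 2013 optimization form).
import numpy as np

def time_step(x_n, v_n, h, masses, f_ext, static_energy, solve_eq3):
    y = x_n + h * v_n + h * h * f_ext / masses   # inertial target x_n + h v_n + h^2 M^-1 f_ext
    def objective(x):
        d = x - y
        return 0.5 * np.sum(masses * d * d) + static_energy(x)
    x_next = solve_eq3(objective)                # subject to c(x, b) = 0, p(x) >= 0
    v_next = (x_next - x_n) / h                  # implicit velocity update
    return x_next, v_next
```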
Plasticity. To support effects such as fattening or slimming, we use a standard model of plastic deformations. Specifically, each total deformation gradient F_total(x) is assumed to be composed of an elastic deformation component and a plastic deformation component, i.e., F_total(x) = F_elast(x) F_plast or, equivalently, F_elast(x) = F_total(x) F_plast^{-1}. Note that F_plast does not depend on the current deformed state x. The deformation gradient F_i(x) used in Eq. 1 and Eq. 2 corresponds to the elastic deformation component, because plasticity is a separate process, e.g., tissue growth, which is decoupled from elastic deformations. Therefore, the only modification we need to make to account for plasticity is to replace the F_i(x) in Eq. 1 and Eq. 2 by F_i(x) F_plast,i^{-1}, where F_plast,i describes the plastic deformation of the i-th tet. In our system, we use only uniform scaling, i.e., F_plast,i = s_i I,
where s_i > 0 is a scaling coefficient (corresponding to growth for s_i > 1 and shrinking for 0 < s_i < 1). The settings of the s_i parameters for each tet depend on the effect we wish to achieve, as discussed in Section 8. Plasticity, as well as inertia and external forces, are applied in forward physics only.
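Under this uniform-scaling model, accounting for plasticity in the energies is a one-line change (sketch):

```python
# Sketch: with F_plast,i = s_i * I, the elastic deformation gradient used in
# Eqs. 1 and 2 is obtained by a single division.
def elastic_F(F_total, s_i):
    return F_total / s_i    # F_elast = F_total @ inv(s_i * I)
```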
7 EVALUATION

Before showing application results of our method in Section 8, we evaluate the behavior of our optimization algorithms and provide comparisons to previous work.

Muscle activation model. As mentioned in Section 4, previous methods constrain the deformation along muscle fibre directions (Lee et al. 2009; Saito et al. 2015; Sifakis et al. 2005; Teran et al. 2005a). In our experiments we found that muscle fiber directions can be unreliable and lack the flexibility to accurately reproduce all facial expressions. This insight triggered the design of our more general activation model. In Figure 9 we compare the results of inverse physics with our method and the previous fiber-restricted model, where fiber directions are computed from our geometric muscle models using the method of Choi et al. (2013) (these directions are shown in Figure 7). The active tetrahedra of the 1-DoF muscle model act based on the following constraint energy:

E^{\text{act}}_{\text{1DoF}}(x, a) = \sum_i \min_{R_i \in SO(3)} W_i^{\text{act}} \, \mu \, \| F_i(x) - R_i Q^T S_{\text{1DoF}}(a_i) Q \|_F^2 + W_i^{\text{act}} \, \lambda \, (\det(F_i(x)) - \det(S(a_i)))^2,

where Q encodes the muscle fiber orientations and S_{1DoF}(a_i) = diag(a_i, 1, 1). As Figure 9 illustrates, muscle activations constrained to the fiber directions fail to closely match the desired target shape, while our activation model leads to a much more accurate reconstruction of the target expression.

Fig. 9. Our 6-DoF muscle activation model (right) leads to more accurate reconstruction of the target expression (left) than previous 1-DoF fiber-aligned activation models (middle).

Fig. 10. A boxing punch to the nose results in artifacts with an elastic model lacking rotation invariance as in Ichim et al. (2016) (left). More realistic deformations are obtained with our rotation-invariant model (right).

Comparison to volumetric blendshapes. Defining a deformation model that is invariant under rigid motions is essential for correct tissue behavior. The volumetric blendshape approach of Ichim et al. (2016) lacks rotation invariance, which can lead to artifacts, e.g., when large rotations of the soft tissues are induced by external forces, such as the boxing punch shown in Figure 10. We propose rotation-invariant models for both passive and active soft tissue, leading to more realistic results. While we distinguish between passive and active tissue, previous work (Ichim et al. 2016) assumes that all soft tissue can activate. In addition, our approach includes a kinematic model for the jaw, whereas Ichim et al. (2016) only approximated the jaw by using a stiffer (but not exactly rigid) material. Finally, our method also allows for skin sliding, facilitating more realistic flesh deformations, especially in areas such as the forehead (Figure 5).

Model adaptations. Our approach supports animating a character after significant modifications of the neutral pose (e.g. slimming/fattening, bone modifications, see Section 8) using the same muscle activation patterns. One might argue that the same effects could be obtained by using deformation transfer (Sumner and Popović 2004) on traditional linear animation models. For example, similar modifications as the ones we propose could be applied on the surface mesh of the neutral blendshape. Deformation transfer on all expression blendshapes will then yield a new face rig that incorporates the desired changes. However, this approach has the significant drawback that the new blendshapes are not necessarily consistent with
the same blendshape weights, e.g., self-intersections easily occur as shown in Figure 11.

Fig. 11. Model adaptations such as increased lip volume are handled accurately in our approach, while deformation transfer (Sumner and Popović 2004) leads to self-intersections.

In addition, direct transfer of modifications to the neutral pose cannot account for the complex force interactions in the elastic tissue. For example, when increasing the volume of the lips, the expression dynamics will change as a consequence of the changed stress distribution. Our indirect approach, which solves for the facial pose given muscle activations, can accommodate such scenarios and leads to more natural expressions.

Statistics. The interior point solver of the forward physics optimization requires on average 8 iterations per frame to converge. This takes approx. 22 seconds including the collision detection update on a consumer laptop with a 3.1 GHz Intel Core i7 processor and 16 GB of main memory. The inverse problem needs approx. 15 iterations to compute the jaw transformation and muscle activations, averaging about 3 minutes per target shape. The volumetric face template model of the passive flesh and active muscles used for the results presented in this paper has 8098 vertices and 35626 tetrahedra. The active muscle layer covers approx. 27% of the entire flesh. The surface mesh model of the entire skin has 6393 vertices and 12644 faces.

8 APPLICATION DEMOS

We present a series of application demos to highlight the versatility of our approach. A key benefit of our physics-based simulation is that we can modify the static and dynamic parameters of the model to achieve a number of advanced animation effects that would be difficult to obtain with purely generative geometric methods. Please also refer to the accompanying video to better appreciate the dynamics of the animations.

All animation examples were driven by a temporal sequence of blendshape weights obtained from the performance capture system of Weise et al. (2011). The tracking software also provides a rigid body transformation T ∈ SE(3) corresponding to the global rotation and translation of the head, as well as pitch and yaw for each of the eyeballs, which are parented to the head transformation T.

Body mass index changes. Figure 12-a illustrates how an animated avatar can be modified to slim or fatten the person's face by adapting the plasticity scale for the soft tissue tets. As this adaptation alters the face geometry, simply re-animating the blendshape model would lead to unnatural expressions and visual artifacts caused by self-intersections. Our simulation approach avoids self-collisions and balances the stress distribution in the facial tissue while preserving the actuation forces, which leads to more plausible expressions and natural dynamics.

To create the scaling parameters s_i > 0, we start from a surface "fat map" painted by the user that specifies which areas of the face are more prone to fat accumulation. The values of the fat map are propagated into the volumetric tet-mesh by a diffusion process, similar to standard polygon-mesh diffusion flow ((Botsch et al. 2010), Chapter 4.2), but using the volumetric Laplacian instead of the surface Laplace-Beltrami operator. We apply forward Euler integration with the time step and number of steps adjusted by the user in an interactive graphical tool to achieve the desired volumetric propagation effect. We used the same fat map for both characters in Figure 12-a, uniformly scaled to achieve slimming or fattening. To account for the increased fat content in the soft tissue, we lower the stiffness µ to 0.8, 0.5, 0.3 for the three levels of fattening shown in Figure 12-a. For slimming, we keep the default stiffness µ = 1.
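A minimal version of this propagation step could look as follows. This is our stand-in: an explicit "umbrella" graph Laplacian on tet-mesh edges with the painted skin values held fixed; the paper's exact volumetric Laplacian weights and interactive tool are not reproduced.

```python
# Sketch of fat-map propagation by explicit diffusion on tet-mesh edges.
import numpy as np

def diffuse_fat_map(values, fixed_ids, fixed_vals, edges, n_steps, dt):
    """values: (n_verts,) scalar field; edges: (n_edges, 2) vertex indices;
    painted skin values stay fixed (Dirichlet). Needs dt * max_degree < 1."""
    for _ in range(n_steps):
        nb_sum = np.zeros_like(values)
        degree = np.zeros_like(values)
        np.add.at(nb_sum, edges[:, 0], values[edges[:, 1]])
        np.add.at(nb_sum, edges[:, 1], values[edges[:, 0]])
        np.add.at(degree, edges[:, 0], 1.0)
        np.add.at(degree, edges[:, 1], 1.0)
        values = values + dt * (nb_sum - degree * values)  # forward Euler step
        values[fixed_ids] = fixed_vals
    return values
```

The diffused per-vertex values can then be averaged per tet and mapped to the plastic scales s_i.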
Facial surgery. Figures 12-b and 12-c demonstrate potential applications in visualizing the possible outcomes of facial surgery, helping patients to choose between different versions of corrective or cosmetic procedures. Our method allows direct manipulation of the deformable soft tissue (e.g. lip fat injection) or the rigid bones (e.g. chin displacement). The simulation then provides a detailed visual preview of such interventions on the expression dynamics of the animated person. We modeled the lip fat injection using our plasticity model and diffusion tool, simulating the process of injecting filler material with a syringe through several points on the skin. To simulate the lower stiffness of the fat-like filler material, we decreased the soft tissue stiffness µ to 0.8 for the medium, and to 0.5 for the high lip volume effect (Figure 12-b). For the chin displacement we directly edited the bone using an interactive mesh modeling tool.

Inertia. Figure 13-a shows how our method incorporates inertial deformations in the dynamic simulation. Such secondary motion becomes particularly important in animations with strong accelerations, such as jumping, head shaking, or boxing.

Interaction with external forces and objects. Figure 13-b shows how an animation can be augmented with complex external force interactions produced by a dynamic wind field. Figure 13-d illustrates how a speech animation is affected when the subject is wearing a VR headset. Our contact resolution method adapts the face deformations to account for the collisions with the headset, creating non-linear bulging and wrinkling effects due to volume preservation of the facial tissue.
Fig. 12. Application Demos I: a) Body mass index changes and their impact on expressions. The original avatar is highlighted with dashed lines. More intense red in the fat map means more volume change of the corresponding face region. b) Results of lip injection, where affected tets are shown in green on the right. c) Effects of modifying the rigid bone structure of the chin.

Simulation of muscle paralysis. In Figure 13-c, we show how muscle activations can be modified to simulate Bell's palsy syndrome, where the affected person is unable to activate certain facial muscles. In this example, we marked the active muscles of the left half of the face to behave like passive tissue, which simulates the effect of partial facial paralysis.
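In terms of the volumes of Section 4, this edit can be sketched as moving the active volume of the affected tets into the passive term, so that the tissue still deforms but can no longer contract (the data layout is our assumption):

```python
# Sketch of the paralysis edit: re-label the affected tets as passive.
import numpy as np

def paralyze(W_act, W_pass, affected_tets):
    W_act, W_pass = W_act.copy(), W_pass.copy()
    W_pass[affected_tets] += W_act[affected_tets]  # tissue still deforms...
    W_act[affected_tets] = 0.0                     # ...but is never activated
    return W_act, W_pass
```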
Extreme face modifications. To push the limits of facial modifications, we created a virtual zombie character in Figure 13-e. We designed two texture maps to modulate the mass and stiffness (see Figure 13-e) and extrapolated their values into the volume using our diffusion tool. The idea was to increase the mass of the cheeks to create a flesh sagging effect, while increasing stiffness around the lips and the eyes to avoid excessive pulling of the flesh. The final µ values vary between 0.7 and 5.7 and the density varies between 1 and 3 g/cm³, achieving artistic "undead" effects.

9 LIMITATIONS AND FUTURE WORK

In our approach we rely solely on a generic volumetric template and a set of surface scans of the modeled person to derive the interior facial structure. This inherently limits the accuracy of our approach in terms of the true facial dynamics of the scanned actor. Getting access to the internal structure through volumetric scanning devices would allow building more faithful simulation models, but incurs a high acquisition cost. A potentially more practical approach for future work is to build a statistical model of the bone and tissue structures from a sufficiently large set of volumetric scans, similar
153:12 • Alexandru - Eugen Ichim, Petr Kadleček, Ladislav Kavan, and Mark Pauly

Fig. 13. a) Simulating inertia under sudden motion changes (e.g., jumping). b) Dynamic deformations in a wind force field. c) Simulating Bell’s Palsy affecting
half of the face of an actor. d) VR headset obstructing the full motion of expressions on the face. e) Artistic editing to create a zombie character by adapting
the mass and stiffness distribution as indicated in the color-coded maps.

to the morphable face models that have been successfully applied ways to combine our simulation model with procedural or data-
for the skin surface (Blanz and Vetter 1999). driven methods for wrinkle generation to further increase the visual
Detailed physical simulation is computationally involved and our realism of the animations.
method is currently not suitable for realtime animation. While com- Other avenues for future work include modeling and simulating
putational efficiency was not the main focus of our work, we believe hair, adding person-specific teeth models and a simulation of the
that significant speedups can be achieved, in particular by more tongue, and generalizing our model to full body simulations.
explicitly exploiting spatial and temporal coherence. In the context
of realtime animation, our approach could potentially be used to 10 CONCLUSION
automatically create corrective shapes for a given blendshape basis We propose a physics-based simulation approach to face animation
in an offline process. How to select an optimal set of such correc- that complements existing generative methods such as blendshapes.
tives based on a given simulation is an interesting avenue for future These purely geometric methods can produce artifacts such as self-
research. intersections in facial poses that were not specifically considered
Our tet-mesh discretization is currently too coarse to correctly model small-scale effects such as skin wrinkles. However, increasing the resolution to the appropriate scale would lead to prohibitive computation times. Therefore, in future work, we want to explore ways to combine our simulation model with procedural or data-driven methods for wrinkle generation to further increase the visual realism of the animations.

Other avenues for future work include modeling and simulating hair, adding person-specific teeth models and a simulation of the tongue, and generalizing our model to full body simulations.

10 CONCLUSION
We propose a physics-based simulation approach to face animation that complements existing generative methods such as blendshapes. These purely geometric methods can produce artifacts such as self-intersections in facial poses that were not specifically considered during the modeling of the blendshape basis – ensuring consistency in all possible linear combinations quickly becomes intractable. Even more challenging is the correct handling of dynamic effects such as interactions with external objects or inertial deformations.

We advocate the use of physics-based simulation as a principled solution to these issues. Our experiments validate that this approach leads to high-quality facial animations and facilitates new editing capabilities, e.g., by manipulating the face model's physical structure or its dynamic behavior. These advanced effects come at the cost of increased computational overhead. However, as computational power increases and algorithms are improved, we believe that the simulation-based approach to facial animation will become more and more viable in the future. This path certainly offers a rich set of opportunities for future research with applications not only in movies and games, but also in surgery simulation, interactive therapy, sports, or biomedical research.


ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-1617172 and IIS-1622360, the grant SVV-2017-260452, and GA UK 1524217. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge the support of Activision.

REFERENCES
Oleg Alexander, Mike Rogers, William Lambeth, Jen-Yuan Chiang, Wan-Chun Ma, Chuan-Chang Wang, and Paul Debevec. 2010. The digital Emily project: Achieving a photorealistic digital actor. Computer Graphics and Applications, IEEE 30, 4 (2010), 20–31.
Jérémie Allard, Stéphane Cotin, François Faure, Pierre-Jean Bensoussan, François Poyer, Christian Duriez, Hervé Delingette, and Laurent Grisoni. 2007. SOFA: an open source framework for medical simulation. In MMVR 15 - Medicine Meets Virtual Reality, Vol. 125. IOP Press, 13–18.
Brian Amberg, Andrew Blake, Andrew Fitzgibbon, Sami Romdhani, and Thomas Vetter. 2007. Reconstructing high quality face-surfaces using model based stereo. In ICCV 2007. IEEE, 1–8.
Vincent Barrielle, Nicolas Stoiber, and Cedric Cagniart. 2016. Blendforces, a Dynamic Framework for Facial Animation. Comp. Graph. Forum (2016).
Thabo Beeler, Bernd Bickel, Paul Beardsley, Bob Sumner, and Markus Gross. 2010. High-Quality Single-Shot Capture of Facial Geometry. ACM Trans. Graph. 29, 3 (2010), 40:1–40:9.
Thabo Beeler and Derek Bradley. 2014. Rigid stabilization of facial expressions. ACM Trans. Graph. 33, 4 (2014), 44.
Pascal Bérard, Derek Bradley, Markus Gross, and Thabo Beeler. 2016. Lightweight eye capture using a parametric model. ACM Trans. Graph. 35, 4 (2016), 117.
Pascal Bérard, Derek Bradley, Maurizio Nitti, Thabo Beeler, and Markus H Gross. 2014. High-quality capture of eyes. ACM Trans. Graph. 33, 6 (2014), 223.
Amit Bermano, Thabo Beeler, Yeara Kozlov, Derek Bradley, Bernd Bickel, and Markus Gross. 2015. Detailed spatio-temporal reconstruction of eyelids. ACM Trans. Graph. 34, 4 (2015), 44.
Amit H Bermano, Derek Bradley, Thabo Beeler, Fabio Zund, Derek Nowrouzezahrai, Ilya Baran, Olga Sorkine-Hornung, Hanspeter Pfister, Robert W Sumner, Bernd Bickel, and others. 2014. Facial performance enhancement using dynamic shape space analysis. ACM Trans. Graph. 33, 2 (2014), 13.
Bernd Bickel, Moritz Bächer, Miguel A Otaduy, Wojciech Matusik, Hanspeter Pfister, and Markus Gross. 2009. Capture and modeling of non-linear heterogeneous soft tissue. ACM Trans. Graph. 28, 3 (2009).
Bernd Bickel, Peter Kaufmann, Mélina Skouras, Bernhard Thomaszewski, Derek Bradley, Thabo Beeler, Phil Jackson, Steve Marschner, Wojciech Matusik, and Markus Gross. 2012. Physical face cloning. ACM Trans. Graph. 31, 4 (2012), 118.
Volker Blanz and Thomas Vetter. 1999. A morphable model for the synthesis of 3D faces. In Proc. of the 26th annual conf. on Comp. graph. and interactive techniques. 187–194.
Mario Botsch, Leif Kobbelt, Mark Pauly, Pierre Alliez, and Bruno Lévy. 2010. Polygon mesh processing. CRC Press.
Sofien Bouaziz, Sebastian Martin, Tiantian Liu, Ladislav Kavan, and Mark Pauly. 2014. Projective dynamics: fusing constraint projections for fast simulation. ACM Trans. Graph. 33, 4 (2014), 154.
Sofien Bouaziz, Yangang Wang, and Mark Pauly. 2013. Online modeling for realtime facial animation. ACM Trans. Graph. 32, 4 (2013), 40.
Richard P. Brent. 1971. An algorithm with guaranteed convergence for finding a zero of a function. Comput. J. 14, 4 (1971).
Vicki Bruce and Andy Young. 1986. Understanding face recognition. British Journal of Psychology 77, 3 (1986), 305–327.
Chen Cao, Derek Bradley, Kun Zhou, and Thabo Beeler. 2015. Real-time high-fidelity facial performance capture. ACM Trans. Graph. 34, 4 (2015), 46.
Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, and Kun Zhou. 2014. FaceWarehouse: a 3D facial expression database for visual computing. Visualization and Computer Graphics, IEEE Transactions on 20, 3 (2014), 413–425.
John E Chadwick, David R Haumann, and Richard E Parent. 1989. Layered construction for deformable animated characters. In ACM Siggraph Computer Graphics, Vol. 23. ACM, 243–252.
Hon Fai Choi and Silvia S Blemker. 2013. Skeletal muscle fascicle arrangements can be reconstructed using a Laplacian vector field simulation. PLoS ONE 8, 10 (2013), e77576.
Matthew Cong, Michael Bao, Kiran S Bhat, Ronald Fedkiw, and others. 2015. Fully automatic generation of anatomical face simulation models. In Proc. of the EG/SIGGRAPH Symposium on Comp. Anim. ACM, 175–183.
Matthew Cong, Kiran S Bhat, and Ronald Fedkiw. 2016. Art-directed muscle simulation for high-end facial animation. In Proc. of the EG/SIGGRAPH Symposium on Comp. Anim. 119–127.
Ali-Hamadi Dicko, Tiantian Liu, Benjamin Gilles, Ladislav Kavan, François Faure, Olivier Palombi, and Marie-Paule Cani. 2013. Anatomy transfer. ACM Trans. Graph. 32, 6 (2013), 188.
Paul Ekman and Wallace V Friesen. 1977. Facial action coding system. (1977).
Ye Fan, Joshua Litven, and Dinesh K Pai. 2014. Active volumetric musculoskeletal systems. ACM Trans. Graph. 33, 4 (2014), 152.
Pablo Garrido, Michael Zollhöfer, Chenglei Wu, Derek Bradley, Patrick Pérez, Thabo Beeler, and Christian Theobalt. 2016. Corrective 3D reconstruction of lips from monocular video. ACM Trans. Graph. 35, 6 (2016), 219.
Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li. 2015. Single-view hair modeling using a hairstyle database. ACM Trans. Graph. 34, 4 (2015).
Alexandru Ichim, Ladislav Kavan, Merlin Nimier-David, and Mark Pauly. 2016. Building and Animating User-Specific Volumetric Face Rigs. In Proc. of the EG/SIGGRAPH Symposium on Comp. Anim.
Alexandru Eugen Ichim, Sofien Bouaziz, and Mark Pauly. 2015. Dynamic 3D Avatar Creation from Hand-held Video Input. ACM Trans. Graph. (2015).
Alec Jacobson, Ladislav Kavan, and Olga Sorkine-Hornung. 2013. Robust inside-outside segmentation using generalized winding numbers. ACM Trans. Graph. 32, 4 (2013), 33.
Petr Kadlecek, Alexandru-Eugen Ichim, Tiantian Liu, Jaroslav Krivanek, and Ladislav Kavan. 2016. Reconstructing Personalized Anatomical Models for Physics-based Body Animation. ACM Trans. Graph. 35, 6 (2016).
Kolja Kähler, Jörg Haber, and Hans-Peter Seidel. 2003. Reanimating the dead: reconstruction of expressive faces from skull data. In ACM Trans. Graph., Vol. 22. ACM, 554–561.
Oliver Klehm, Fabrice Rousselle, Marios Papas, Derek Bradley, Christophe Hery, Bernd Bickel, Wojciech Jarosz, and Thabo Beeler. 2015. Recent advances in facial appearance capture. In Computer Graphics Forum, Vol. 34. 709–733.
Sung-Hee Lee, Eftychios Sifakis, and Demetri Terzopoulos. 2009. Comprehensive biomechanical modeling and simulation of the upper body. ACM Trans. Graph. 28, 4 (2009), 99.
John P Lewis, Ken Anjyo, Taehyun Rhee, Mengjie Zhang, Frederic H Pighin, and Zhigang Deng. 2014. Practice and Theory of Blendshape Facial Models. In Eurographics (State of the Art Reports). 199–218.
Duo Li, Shinjiro Sueda, Debanga R Neog, and Dinesh K Pai. 2013. Thin skin elastodynamics. ACM Trans. Graph. 32, 4 (2013), 49.
Hao Li, Thibaut Weise, and Mark Pauly. 2010. Example-based facial rigging. In ACM Trans. Graph., Vol. 29. ACM, 32.
Hao Li, Jihun Yu, Yuting Ye, and Chris Bregler. 2013. Realtime Facial Animation with On-the-fly Correctives. ACM Trans. Graph. 32, 4 (2013).
Tiantian Liu, Adam W. Bargteil, James F. O'Brien, and Ladislav Kavan. 2013. Fast Simulation of Mass-Spring Systems. ACM Trans. Graph. 32, 6 (2013), 209:1–7.
John E Lloyd, Ian Stavness, and Sidney Fels. 2012. ArtiSynth: A fast interactive biomechanical modeling toolkit combining multibody and finite element simulation. In Soft tissue biomechanical modeling for computer assisted surgery. Springer, 355–394.
Wan-Chun Ma, Yi-Hua Wang, Graham Fyffe, Bing-Yu Chen, and Paul Debevec. 2012. A blendshape model that incorporates physical interaction. Computer Animation and Virtual Worlds 23, 3-4 (2012).
Steve A Maas, Benjamin J Ellis, Gerard A Ateshian, and Jeffrey A Weiss. 2012. FEBio: finite elements for biomechanics. Journal of Biomechanical Engineering 134, 1 (2012), 011005.
Aleka McAdams, Yongning Zhu, Andrew Selle, Mark Empey, Rasmus Tamstorf, Joseph Teran, and Eftychios Sifakis. 2011. Efficient elasticity for character skinning with contact and collisions. In ACM Trans. Graph., Vol. 30. 37.
Koki Nagano, Graham Fyffe, Oleg Alexander, Jernej Barbic, Hao Li, Abhijeet Ghosh, and Paul Debevec. 2015. Skin microstructure deformation with displacement map convolution. (2015).


Gerard Pons-Moll, Javier Romero, Naureen Mahmood, and Michael J Black. 2015. Dyna: A model of dynamic human shape in motion. ACM Trans. Graph. 34, 4 (2015), 120.
Szymon Rusinkiewicz and Marc Levoy. 2001. Efficient variants of the ICP algorithm. In 3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on. IEEE, 145–152.
Shunsuke Saito, Zi-Ye Zhou, and Ladislav Kavan. 2015. Computational Bodybuilding: Anatomically-based Modeling of Human Bodies. ACM Trans. Graph. 34, 4 (2015).
Robert Schleip, Thomas W Findley, Leon Chaitow, and Peter Huijing. 2013. Fascia: the tensional network of the human body: the science and clinical applications in manual and movement therapy. Elsevier Health Sciences.
Ken Shoemake and Tom Duff. 1992. Matrix animation and polar decomposition. In Proceedings of the conference on Graphics interface, Vol. 92. Citeseer, 258–264.
Weiguang Si, Sung-Hee Lee, Eftychios Sifakis, and Demetri Terzopoulos. 2014. Realistic biomechanical simulation and control of human swimming. ACM Trans. Graph. 34, 1 (2014), 10.
Eftychios Sifakis and Jernej Barbic. 2012. FEM simulation of 3D deformable solids: a practitioner's guide to theory, discretization and model reduction. In ACM SIGGRAPH 2012 Courses. 20.
Eftychios Sifakis, Igor Neverov, and Ronald Fedkiw. 2005. Automatic determination of facial muscle activations from sparse motion capture marker data. In ACM Trans. Graph., Vol. 24. 417–425.
Olga Sorkine and Marc Alexa. 2007. As-rigid-as-possible surface modeling. In Symposium on Geometry Processing, Vol. 4.
Robert W Sumner and Jovan Popović. 2004. Deformation transfer for triangle meshes. In ACM Trans. Graph., Vol. 23. 399–405.
Joseph Teran, Sylvia Blemker, Victor Ng-Thow-Hing, and Ronald Fedkiw. 2003. Finite volume methods for the simulation of skeletal muscle. In Proc. of the EG/SIGGRAPH Symposium on Comp. Anim. Eurographics Association, 68–74.
Joseph Teran, Eftychios Sifakis, Silvia S Blemker, Victor Ng-Thow-Hing, Cynthia Lau, and Ronald Fedkiw. 2005a. Creating and simulating skeletal muscle from the visible human data set. Visualization and Computer Graphics, IEEE Transactions on 11, 3 (2005), 317–328.
Joseph Teran, Eftychios Sifakis, Geoffrey Irving, and Ronald Fedkiw. 2005b. Robust quasistatic finite elements and flesh simulation. In Proc. of the EG/SIGGRAPH Symposium on Comp. Anim. ACM.
Daniel Vlasic, Matthew Brand, Hanspeter Pfister, and Jovan Popović. 2005. Face transfer with multilinear models. In ACM Trans. Graph., Vol. 24. ACM, 426–433.
Javier von der Pahlen, Jorge Jimenez, Etienne Danvoye, Paul Debevec, Graham Fyffe, and Oleg Alexander. 2014. Digital Ira and Beyond: Creating Real-time Photoreal Digital Actors. In ACM SIGGRAPH 2014 Courses (SIGGRAPH '14). Article 1, 384 pages. DOI: https://doi.org/10.1145/2614028.2615407
Andreas Wächter and Lorenz T Biegler. 2006. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106, 1 (2006).
Thibaut Weise, Sofien Bouaziz, Hao Li, and Mark Pauly. 2011. Realtime performance-based facial animation. In ACM Trans. Graph., Vol. 30. ACM, 77.
Chenglei Wu, Derek Bradley, Pablo Garrido, Michael Zollhöfer, Christian Theobalt, Markus Gross, and Thabo Beeler. 2016a. Model-based Teeth Reconstruction. ACM Trans. Graph. 35, 6 (2016).
Chenglei Wu, Derek Bradley, Markus Gross, and Thabo Beeler. 2016b. An anatomically-constrained local deformation model for monocular face capture. ACM Trans. Graph. 35, 4 (2016), 115.
Zygote. 2016. Zygote Body. (2016). https://zygotebody.com [Online; accessed 28-Dec-2016].
