ON THE COVER
At first glance, the partnership of live-action director Gore Verbinski and legendary VFX studio ILM on an animated movie seems as out of place as, well, a Hawaiian shirt-clad chameleon in a western. But in the CG feature Rango, both pairings couldn't be more perfect. See pg. 10.
Director Gore Verbinski and ILM discuss
the making of Rango.
How audio enhances video games.
Focus on storage in the studio.
The challenges of posting reality TV.
Features
Claim Jumpers
10
Industrial Light & Magic partners up with director Gore Verbinski, drawing on
its VFX experience to create the CG animated feature Rango.
By Barbara Robertson
Commercial Success
21
This year's Super Bowl ads serve up a wide range of digital effects, including an all-CG epic-style invasion, dogs that are the life of the party, a grateful beaver, a black beetle on the go, a car heist that's over the top, and TV icons who show their team spirit.
By Karen Moltenbrey
Cry Wolf
32
A CG werewolf and digital sets help set the stage for a modern-day
retelling of Red Riding Hood.
By Barbara Robertson
Mother of Invention
36
In its last performance, ImageMovers Digital creates an out-of-this-world
CG experience, using its performance-capture technology for the animated
feature film Mars Needs Moms.
By Barbara Robertson
Recruitment
44
Double Negative's talent manager offers some career advice for those
seeking positions at DNeg as well as other VFX facilities.
Use a Web-enabled smartphone to access the stories tagged with QR codes
in this issue. If your phone does not have the required software, download
a reader free of charge at www.cgw.com/qr-code-app-info.aspx.
COVER STORY
The Sky's the Limit
28
Projecting and viewing stereoscopic 3D in domed
environments capitalizes on new technological
advancements to offer unique experiences.
March 2011, Vol. 34, Number 2. Innovations in visual computing for DCC professionals.
Departments
Editor's Note: Where's the Creativity?
2
Following some poor performances in past years, this year's Super Bowl
ads scored relatively high in terms of their creativity and VFX.
Spotlight
4
Products: Dell's Latitude laptops, tablet, OptiPlex desktops and small form-factor solution, and Precision workstations and mobile workstations. The Foundry's Nuke and NukeX Version 6.2. Vicon's T-Series cameras. Okino's CAD conversion system for SolidWorks 2011. Nvidia's NVS 300.
News: Workstation market continues steady growth. PC graphics chip shipments fall short of expectations.
Viewpoint
8
Social rendering.
Portfolio
42
Khalid Al-Muharraqi.
Back Products
47
Recent product releases.
What's a QR code? Find out in this month's Editor's Note.
Editor's Note
Where's the Creativity?
Karen Moltenbrey, Chief Editor
karen@cgw.com
The average price of a ticket to Super Bowl XLV: $4,700. The average price of a 30-second television commercial during the 2011 game: $3 million. But the real million-dollar question is, did ad agencies obtain the priceless results they were hoping those commercials would generate? At $100,000 per second, the follow-up question should be, did ad agencies make good use of vendors' dollars?

Summaries concerning the results of this yearly Ad Bowl have pointed to the dismal economy as the reason for the conservative approach to Super Bowl advertising of late, or, better said, the general lack of creative content in these million-dollar commercials. Assuredly, audiences can count on an "ahhh" moment from the annual Anheuser-Busch Clydesdale spot, a hearty laugh from one of the brewer's comical Bud Light scenes, or a chuckle from a Coke or Pepsi presentation. To fairly judge the caliber of the game's lineup, though, fans have to look beyond these all-star offerings and instead examine the remaining positions. Based on this assessment, the commercials scored fairly high this year, at least in my book.

Don't get me wrong. Super Bowl XLV brought many flubs, from Christina Aguilera's botched national anthem, to the Steelers' mistake-riddled first-half performance, to the lackluster Black Eyed Peas half-time show (much to my surprise). And a number of commercials fell short of their mark, as well. The Best Buy spot with odd couple Justin Bieber and Ozzy Osbourne was unimaginative, as was the GoDaddy.com spot, which continues to rely on sexy women to sell an unrelated product (without any humor or other much-needed hook). And then there was the backlash from the politically incorrect Timothy Hutton Groupon piece.

How did this super event turn into a circus? Money. When the first Super Bowl aired in 1967, a collective 41 million viewers watched the game. The average price of a 30-second spot: $40,000. Hardly chump change, though the big game among advertisers had not yet started. Nevertheless, there were nuggets of creativity in the ads that aired early on. Among them: the 1967 Noxzema spot featuring New York Jets' legendary quarterback Joe Namath, the 1980 Coke ad with Pittsburgh Steelers great Mean Joe Greene (still voted one of the all-time favorites, though it debuted months before the game), and the 1984 Apple Big Brother-themed spot.

Over the years, as the audience expanded, ad executives and vendors began stepping up their game, rolling out some memorable (and not so memorable) commercials. It's debatable whether the quality of the ads rose in conjunction with the price, however. Not in question, though, is how competitive the commercial event has become. Yet, somewhere along the way, ad execs appeared too focused on outdoing one another in terms of absurdity, not creativity. This year, many of them seemed to have dusted off their older playbooks, and with positive results. A number of the more interesting commercials required digital assistance (see "A Commercial Success," pg. 21). But cutting-edge VFX cannot go it alone. These commercials have to grab our attention and stay with us. Just as amazing imagery cannot carry a CG animated film without a good story, neither can smart digital work carry a catchy commercial that lacks a creative message or story.

This year, studios including The Mill, Framestore, and Animal Logic took to the field, lending their expertise and applying their digital magic to funny and/or imaginative Super Bowl XLV commercials. And the results were quite nice: what I expect a $3 million commercial to look like.
The Magazine for Digital Content Professionals
EDITORIAL
KAREN MOLTENBREY
Chief Editor
karen@cgw.com (603) 432-7568
CONTRIBUTING EDITORS
Courtney Howard, Jenny Donelan, Kathleen Maher,
George Maestri, Martin McEachern, Barbara Robertson
WILLIAM R. RITTWAGE
Publisher, President and CEO,
COP Communications
ADVERTISING SALES
JEFF VICTOR
Director of Sales, West Coast
jvictor@cgw.com
(847) 367-4073
GARY RHODES
Sales Manager, East Coast & International
grhodes@cgw.com
(631) 274-9530
KELLY RYAN
Marketing Coordinator
kryan@copcomm.com
(818) 291-1155
Editorial Office / LA Sales Office:
620 West Elk Avenue, Glendale, CA 91204
(800) 280-6446
CREATIVE SERVICES AND PRODUCTION
MICHAEL VIGGIANO
Art Director
mviggiano@copcomm.com
CUSTOMER SERVICE
csr@cgw.com
1-800-280-6446, Opt 3
ONLINE AND NEW MEDIA
Stan Belchev
sbelchev@copcomm.com
Computer Graphics World Magazine
is published by Computer Graphics World,
a COP Communications company.
Computer Graphics World does not verify any claims or other information
appearing in any of the advertisements contained in the publication, and
cannot take any responsibility for any losses or other damages incurred
by readers in reliance on such content.
Computer Graphics World cannot be held responsible for the
safekeeping or return of unsolicited articles, manuscripts, photographs,
illustrations or other materials. Address all subscription correspondence
to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204.
Subscriptions are available free to qualified individuals within the United
States. Non-qualified subscription rates: USA, $72 for 1 year, $98 for 2
years; Canadian subscriptions, $98 for 1 year and $136 for 2 years;
all other countries, $150 for 1 year and $208 for 2 years.
Digital subscriptions are available for $27 per year.
Subscribers can also contact customer service by calling
(800) 280 6446, opt 2 (publishing), opt 1 (subscriptions) or sending an
email to csr@cgw.com. Changes of address can be made online at
http://www.omeda.com/cgw/ by clicking on customer service assistance.
Postmaster: Send Address Changes to
Computer Graphics World, P.O. Box 3551,
Northbrook, IL 60065-3551
Please send customer service inquiries to
620 W. Elk Ave., Glendale, CA 91204
Social Rendering
By Adam McMahon

The RenderWeb rendering app works with the open-source Blender animation program. (Image courtesy Blender Foundation, www.blender.org.)

Despite all the advances in technology, software rendering is still slow. Of course, the big studios have their rendering farms, but what about small studios or hobbyist animators? For most of us, rendering times hinder our creative flow and cripple our production pipeline. When it comes to rendering large projects, we often have few options beyond the handful of computers in our offices and homes.
How can we get access to more rendering computers? Our friends, family, and colleagues have perfectly fine computers just sitting idle at their homes, and most likely they would be more than willing to share their resources. Yet, how can we quickly and easily utilize their computers to help render our animations?

While there have been a number of Internet-based rendering solutions over the years, they always seemed overly complex. For example, in order to volunteer, the user needs to download special software, install the software, link up projects, and so forth. While the technically minded are able to do this, we want to draw volunteers from all of our friends, regardless of their computer expertise. So, where can we find hundreds of online friends who might be willing to help us render? The answer, of course, is Facebook. By integrating the entire rendering experience within Facebook, we would have a platform that connects animators to a large pool of potential volunteers.
To address this goal, we are developing a new Facebook application called RenderWeb, which allows artists and animators to upload their animation projects. Once the projects are uploaded, Facebook users can volunteer to render by simply clicking on a Web link. The rendering occurs directly within the Web browser, and preview images are displayed to the volunteer. After the animations are rendered, the videos are made available for the entire community to watch and tag.
While we hope to soon integrate commercial renderers, RenderWeb is currently compatible with the popular Blender animation program. Blender was an ideal fit for this project: It is open source, available on multiple platforms, and has a small binary download (which is great for Web applications). Yet, it was no trivial task to get Blender to work within Facebook. Typically, Facebook applications utilize Web languages, such as Java, JavaScript, or Flash. However, Blender is written in the C programming language. While we could have rewritten Blender into a Web language, this would have led to a buggy and incomplete version of Blender. Instead, we decided to develop a distributed rendering platform based on Java. The rendering does not occur using Java, but instead Java acts as a communication layer between Blender and the RenderWeb server. Java detects a user's operating system, temporarily downloads the proper Blender version, executes Blender, and then pipes the images to the Web server. All this occurs without the user having to set up or configure anything. Utilizing this method, we do not need to make any modifications to Blender. In fact, the version of Blender utilized by RenderWeb is byte for byte identical to the version distributed by blender.org. Moreover, this modularized approach will allow us to integrate additional renderers into RenderWeb with little effort.
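The column describes that hand-off in prose only, but the flow is easy to picture. Below is a minimal, hypothetical Java sketch of such a volunteer client; the class, the download URLs, and the output file format are assumptions for illustration, not RenderWeb's actual code. It mirrors the steps above: check the operating system, fetch a matching stock Blender build, run it in background mode on one frame, and post the result back to a coordinating server.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of a RenderWeb-style volunteer client. Only the steps
// (detect OS, fetch an unmodified Blender, run it, send frames back) come
// from the article; everything else here is illustrative.
public class VolunteerClient {

    // Pick a Blender download that matches the volunteer's operating system.
    static String blenderUrlForOs() {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("win")) return "https://example.org/blender-windows.zip"; // placeholder URL
        if (os.contains("mac")) return "https://example.org/blender-macos.zip";   // placeholder URL
        return "https://example.org/blender-linux.tar.gz";                        // placeholder URL
    }

    // Download a file into a temporary directory (archive extraction omitted).
    static Path download(String url, String fileName) throws IOException {
        Path target = Files.createTempDirectory("renderweb").resolve(fileName);
        try (InputStream in = new URL(url).openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }

    // Run Blender headless on one frame of the project ("-b" is background mode).
    static Path renderFrame(Path blenderExe, Path sceneFile, int frame)
            throws IOException, InterruptedException {
        Path outDir = Files.createTempDirectory("frames");
        new ProcessBuilder(blenderExe.toString(), "-b", sceneFile.toString(),
                "-o", outDir.resolve("frame_####").toString(),
                "-f", Integer.toString(frame))
                .inheritIO().start().waitFor();
        return outDir.resolve(String.format("frame_%04d.png", frame)); // output format is an assumption
    }

    // Pipe the rendered image back to the coordinating server as an HTTP POST.
    static void upload(Path image, String serverUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(image, out);
        }
        conn.getResponseCode(); // force the request to complete
    }
}
```

A real browser-launched version would also need the applet plumbing, archive extraction, and error handling that this sketch omits.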
Although it is certainly interesting that Blender can be integrated into a Web browser, the true power comes when many instances are working together to render animations. In RenderWeb, the relationships within Facebook are used to direct the flow of computation from computers to specific projects. RenderWeb allocates projects based on the existing relationships within Facebook. When you are volunteering, a higher priority is given toward using your computer to render your friends' projects, as opposed to other projects in the queue. Thus, the more friends that an animator has, the higher the potential for computational power.
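That scheduling policy can be pictured with a small sketch. The Java fragment below is purely illustrative (RenderWeb's real scheduler is not published, and the class and field names are assumptions); it assumes the server already knows each volunteer's friend list and simply hands out friends' jobs before anything else, oldest first.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Illustrative friend-weighted job allocation; names and fields are assumptions.
class RenderJob {
    final String projectId;
    final String ownerId;    // Facebook user who uploaded the project
    final long submittedAt;  // millisecond timestamp, for oldest-first tie-breaking

    RenderJob(String projectId, String ownerId, long submittedAt) {
        this.projectId = projectId;
        this.ownerId = ownerId;
        this.submittedAt = submittedAt;
    }
}

class Scheduler {
    // Next job for a volunteer: projects owned by friends come first,
    // then everything else, each group in submission order.
    static Optional<RenderJob> nextJobFor(Set<String> friendsOfVolunteer, List<RenderJob> queue) {
        return queue.stream()
                .sorted(Comparator
                        .comparing((RenderJob job) -> !friendsOfVolunteer.contains(job.ownerId))
                        .thenComparingLong(job -> job.submittedAt))
                .findFirst();
    }
}
```

In practice the weighting could be softer (friends favored rather than strictly first), which matches the article's wording of "higher priority" rather than absolute precedence.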
While there will always be members of the community volunteering their computers, sometimes we have an immediate need for significant computational power. To address this need, RenderWeb allows you to automatically update your Facebook wall to inform your friends of projects that need to be rendered. This optional feature posts a link on your Facebook wall that allows your friends to render with just one click. Using this feature, you can easily notify your friends when you have a demanding project in the queue.
In a sense, RenderWeb is similar to cloud computing and online rendering farms. However, while those services are expensive, RenderWeb is free because your friends and the community will share the rendering load. Moreover, since RenderWeb is integrated within a social network, it has the added benefit of allowing you to share your animations with the community and bridge new contacts with other animators.
In summary, we believe that social network integration will be a game changer for Internet-based rendering. RenderWeb will connect animators with friends and communities that are willing to share resources.

If you would like to participate in this new arena of social rendering, join RenderWeb at apps.facebook.com/renderweb. You can upload your own Blender projects or volunteer your computers to render. With a shared community effort, rendering will no longer be a bottleneck in the pipeline. We will all have ample computational power to render everything that our production or creativity demands.

Adam McMahon, a PhD candidate at the University of Miami, is the founder of RenderWeb LLC. He can be reached at www.RenderWeb.org.
RenderWeb, which relies on social networking, offers users a free rendering method. (Image courtesy Doug James.)
ILM based Rango's CG characters, including (inset, from left to right) Rango leading the posse, Priscilla, and the Mariachi owls, as well as the town of Dirt (above) and the hot, dusty desert backdrop, on artwork from production designer Mark "Crash" McCreery.
Claim Jumpers
Industrial Light & Magic uses techniques and processes honed in visual effects work to create live-action director Gore Verbinski's first CG feature animation.
By Barbara Robertson
Images © 2011 Paramount Pictures.
Saddle up, CG cowboys. The fences are down, and the barn door flew open. We've seen filmmakers straddle the boundary between live-action and animated features for longer than great-grandma's chin whiskers, but we've never seen anything head out for new territory like Rango.

Directed by Gore Verbinski, designed by Mark "Crash" McCreery, and created at Industrial Light & Magic, the Paramount feature, produced by Blind Wink, GK Films, and Nickelodeon, is the first animated film for the live-action director. It's also the first animated film for the designer, and the first animated feature to move through ILM's visual effects pipeline. Did that mean that the director and artists mimicked an animation studio's pipeline and processes? Nope. Cain't say they did.

OK, then, did they adopt Robert Zemeckis's style of making an animated film using live-action techniques? Nope. Didn't go there, either. This film has no motion-captured performances.

Here's how it worked: The crew simply herded the wacky spaghetti western down the road as if it were a visual effects project and adapted to the scale of an animated film as needed. That makes Rango the first animated feature created with visual effects, and it opens the cattle gate to other such projects in the future.

"We all came from live action, and that was our common language," says Tim Alexander, visual effects supervisor. "As we got into the pipeline, we found things we could do better in terms of scale and continuity, but we kept our strengths."
Freaky Frontier
One of ILM's strengths is in creature animation, and boy howdy did they have creatures to animate: 130 individual characters and 50 rigged variations. Of those, 50 were hero characters, 26 were main characters. But the quantity didn't cause the studio to slack off. "The characters in this film are as detailed as the creatures we create for visual effects," Alexander says.

McCreery designed all the characters based on animals, but, with few exceptions, they act like humans, and all but a few wear multiple layers of clothing. "All the characters are really humans with an animal design motif layered over them," says Hal Hickel, animation supervisor. "The mayor acts like John Huston from Chinatown. He doesn't act like a turtle."

The star, Rango (Johnny Depp), is a chameleon who bounced out of his terrarium from the inside of a car traveling through the desert. As the film begins, he's free, alone, and lost.

But along comes Beans (Isla Fisher), a lovely lady lizard bobbing her way to town in a rickety wooden wagon filled with empty, jostling water bottles. She's holding the reins of a javelina, a crusty wild pig that's pulling the wagon, and she's cranky. Somehow, we've moved into a creature-sized world, and it's rough, tough, and dirty. Look-development supervisor Damian Steele describes the character design as a cross between Robert Crumb and Beatrix Potter. Others simply call it nasty.

When Rango first sees the town of Dirt on the horizon, it looks like two blocks of ramshackle old buildings lining either side of a main street rising from the hot desert.
Commercial Success
By Karen Moltenbrey
Coke: Siege
Fire and ice don't mix well. And that was certainly the case in the all-CG Coke "Siege" commercial, as two cultures (one fire, one ice) clash on an epic scale.
Set in a breathtaking, icy fantasy world, the 60-second cinematic story focuses on a battle between an army of fearsome fire warriors descending toward a peaceful community of ice-dwelling creatures. Accompanying the warriors is a huge fire-breathing dragon, which leaves a burning path of destruction in its wake and little doubt as to the likely outcome for the defenseless villagers, protected only by a tall, wooden wall surrounding their tranquil village. Suddenly, the city gates open and the villagers wheel out a sculpted ice dragon. With one blazing breath from the fire dragon, the ice sculpture melts to reveal a bottle of Coca-Cola, which is quickly consumed by the creature. The warrior general gestures for the dragon to attack the castle battlements, but instead of emitting a giant fireball, the dragon expels an explosion of harmless fireworks into the air. Confused and without their greatest weapon, the army beats a hasty retreat, leaving the villagers to celebrate their victory with bottles of Coke.
The commercial, produced by Nexus, contained an impressive expanse of CGI, from furry creatures, fleshy beings, and vast crowds, to the metal armor, fire, smoke, fireworks, ancient buildings and objects, towering snowy landscapes, rich forests, moody skies, and more. This expansive digital universe was built at Framestore.
According to Diarmid Harrison-Murray, Framestore VFX supervisor, the studio's brief was far from simple: Create a painterly-style epic film, set within a fantasy world. "The directors were keen that it be filled with lots of detail and richness in terms of the environments and the characters," he says. "Specifically, the visual brief was to create an animation that looked more like classic fantasy art than CG. That's not a look you get for free in CG."
Creating a Fantasy
Nexus, led by directors Fx & Mat, provided the initial concept art, which was then built out by Framestore. Some have compared the commercial's visuals to those from the new World of Warcraft trailer, others to the style of the movie Kung Fu Panda. Yet, unlike those projects, this one has a softer style to the computer graphics, achieved through matte paintings (crafted by London-based Painting Practice), textures, and techniques used in the final composites whereby the artists painted out some of the crisp CG detail to achieve the spot's fantasy-like aesthetic.
To create the different elements, the artists used a wide range of tools, including: Autodesk's Maya for modeling, with sculpting and some texturing done in both Autodesk's Mudbox and Pixologic's ZBrush; Maya for previs; Adobe's Photoshop for matte paintings; Side Effects' Houdini for the main effects work (pyrotechnics, including fire, smoke, and fireworks) and far background environments (terrain and forest generation), along with Maya for the closer hero-character environments; Houdini's Mantra and Maya's Mental Ray (from Mental Images) for rendering; Framestore's proprietary software for the fur creation and grooming; The Foundry's Nuke for compositing; and Apple's Final Cut for editing. Animation was completed using mostly keyframes in Maya, while facial animation was achieved using a blendshape approach with the expressions sculpted in Maya.
The animators augmented the character animation with motion capture performed at Framestore's in-house studio with a Vicon system, and crowd simulation created from Framestore's own particle-based system. "It wasn't a complex avoidance system, but it did the job in terms of cleverly managing the data and making sure there was enough variation in the crowds," Harrison-Murray explains. "It was created by a guy in our commercials division, and I am sure we will build on it and use it again in the future." For this project, the crowd system handled a group of 1000 warriors in one scene and approximately 11,000 in another.
The most difficult character to create was the younger hero of the ice dwellers. "He required the most iterations," says Harrison-Murray. "He was tricky; he had a cat-like look but couldn't look too primate-like, and he had to have the appearance of a good, honest, hard-working guy." Moreover, the character is covered in fur, about a half million hairs.
Another challenge was creating the fire. As Harrison-Murray explains, the directors wanted it to have the same dynamics and movement of real fire, albeit with a painterly aesthetic. Initially, the group produced the fire using a volumetric fluid renderer, but had to pull back on the rendering realism until the imagery blended well with the painterly world. "It's hard to play with that many variables [in the sim] to get it to look the way we did in the renderer," he notes.
The far background environments are mostly matte paintings. Sometimes they started as geometry and later were projected back onto the geometry in Nuke. The forest foregrounds, meanwhile, are CG, as are the burning trees. The city walls were built with geometry, with an overlay of matte at the end.
Framestore created the 60-second, all-CG commercial "Siege," which contains a full range of digital imagery, from detailed structures and mountainous terrain, to digital characters, creatures, and crowds, to smoke, fire, and fireworks.

Not surprising, all the different elements within the scenes added up to quite a few layers. "It's endless stuff," says Harrison-Murray.
"The fire and ice dragons. The matte behind for the sky and the mountains. A band of forest and trees. A wall of smoke. And then each of those elements had to be broken down into separate render layers. The snow had four, including diffuse and subsurface scattering, as did the trees. There were layers of CG crowds, and that was split into 10 different render passes. We had mid-ground, hand-animated surfaces for the Orc beast in the crowd. The hero Orcs came in with about eight or nine different comp layers. The ice dragon had lots and lots of different render layers, with glitter, specular, volumetric ice stuff. Plus, the foreground had interactive footprints where the armies had walked in the snow. Hero guys in the foreground. I lost count."

"There were many, many layers," Harrison-Murray adds. "It was tough on comp. We were pulling a lot of CG and content from different software and trying to give it a unified feel."
Group Effort
While the Framestore film and commercials divisions share tech, they each have separate pipelines due to the differences in the scale of the projects they encounter. "We need our tools to be lightweight and easily customizable," says Harrison-Murray. For "Siege," though, the commercials group borrowed from the film group; the most valuable commodity: people.
"They were quieter in film at the time, and we needed to ramp up quickly; we had to double our size, and we got some good guys to help us out," Harrison-Murray notes. Some of that assistance came from Houdini artists, who helped build the procedural forests with techniques used for Clash of the Titans. One of the major tech assets used from film R&D was the fur-grooming tool set, although the fur-rendering tools were not transferable since the film side uses Pixar's RenderMan, while the commercials group uses Mental Ray for rendering out the hair.
Yet, the help, whenever offered, was greatly appreciated, especially given the condensed timeframe of the commercial. "We had three months from start to finish," Harrison-Murray states. At the early stages, the project had a crew of a half-dozen, which ramped up to 60 at one point.
Whether it's the commercials or film group, Framestore is best known for its photorealistic CG characters set within live-action back plates. And while the team may have been taken out of its element for "Siege," the results are nonetheless stunning.
Bud Light: Dog Sitter

When it comes to Super Bowl commercials, Anheuser-Busch's Bud Light brand tends to get quite a bit of airtime, and this year was not any different. And when it came to creating the digital work, at least for this year's game spots, The Mill, headquartered in London, seemed to be part of nearly every 30- or 60-second play. In all, the facility took on 19 commercials among its London, New York, and Los Angeles offices, including one of the top favorites, Bud Light's "Dog Sitter."
The premise of the spot is simple: A guy dog sits for a friend, who leaves him a refrigerator full of beer, along with several canines that are really smart and will do whatever you tell them. Lots of beer plus smart, obedient dogs equals party (at least in the sitter's mind), during which the canines act as wait staff, pour drinks behind the bar, spin tunes like a DJ, and even man the barbecue grill.
No matter how smart the actors (in this case, the dogs) actually were, they needed digital assistance to pull off these human-like tricks. And that's where The Mill New York came in. In the spot, the dogs stand and walk upright on their hind legs, performing tasks such as holding trays, washing dishes, and flipping burgers with their front paws. So, most of the post work involved removing rigs from each scene and then adding in new arms and the objects with which they were interacting.
According to Tim Davies, VFX supervisor for the spot, for any profile dog shots, it was fairly easy to remove the rigs using a clean plate, as no part of the rig occluded the dog. But when the dogs walk toward the camera, two trainers holding a horizontal bar would stand either side of the dog. This rig, used to support the dog, would cover a large section of the animal's upper torso and forearms, requiring extensive cleanup work.
The canines in "Dog Sitter" required assistance from a trainer (top) and were filmed separately so the animals would not be distracted. The final scene (bottom) was a compilation of the various shots, including those with the human actors and the dogs, many of which required CG limb replacements.

Davies, also the lead (Autodesk) Flame artist on "Dog Sitter," was on set during the filming, and after each take, he would acquire a
plethora of high-resolution stills of the dogs' fur and textures using a Canon EOS 5D Mark II camera. "I asked the trainers to stand the dogs upright so I could get a nice, clean shot of their torsos without them being covered by the rigs," he adds. Those stills were then tracked over the rigs, so the rigs could be removed.
Helping Hands
Yet, this was not simply an easy job of rotoing and comping. "In a lot of the shots, we completely removed their arms as well and put new arms in, and added all the [serving] trays," says Davies.
For instance, at the beginning of the party sequence, a large dog answers the door while holding a tray of beers. "That dog needed a big rig with a three-inch bar, and two trainers to hold it up," notes Davies. "We found that when the dogs are standing upright on their back legs, they are breathing quite heavily. So you can't just track a still onto them. You have to animate the expansion and contraction of the actual still you are placing on top to simulate the breathing. Then you have to re-introduce the shadows and seamlessly blend the fur."
As the dogs walked, supported by the rigs, they often looked like they were leaning forward in a rigid position, which was corrected in Flame. Nearly all the dogs had to be rotoscoped from the scene anyway, because they had to be placed in front of or behind other objects or dogs in the scenes. "With the use of this roto, we were able to adjust the posture of the dogs by bending them at the waist, making them appear more upright," says Davies. There were also instances whereby the team added a gentle sway to the dog's upper body so it wouldn't look as though the dog was leaning on something fixed.
"Every single dog required a separate take," says Davies, noting that some of the animals did not work well in the same space as the other talent, whether human or canine. In the end, nearly every scene was made up of more than 10 passes. "That was key to the success, having each dog shot as a separate layer. We were able to choose the best of each dog's takes, and every dog's performance could be retimed for the best reaction." Davies offers the example of the bloodhound at the bar: "We had over 50 seconds of the dog looking up and down, which allowed us to slip the timing of this layer independently of the scene. This enabled the dog to look up at the girl and then back down at his beer glass while pouring, in perfect unison with the girl's actions."
In addition to the rig removal and comping, The Mill often had to replace and animate limbs for shots in which the dog's arm was holding onto something, for instance, the doorknob, the beer tap, or any of the beer trays. However, most of this was achieved by animating still photography. "I think the biggest success of the spot was that we decided to go with an in-camera approach," says Davies. "In the early stages of the project, there was talk about doing fully CG dogs. But that's tough and time-consuming; they tend to end up looking like CG dogs. It's not just the way they look, but more the way they move; they can look like animated cartoons."
Nevertheless, there were some instances when CG was unavoidable. For one, in which the dog is washing dishes, Davies says the crew was unable to pull off the shot in 2D because the motion and perspective needed was very complex, so the CG team created models of the dog's arms and upper body, and then supplied photoreal animated elements of these difficult tasks.
"The idea we were going for was that if these dogs are clever and well trained, maybe they could pull off these stunts," Davies explains. "We wanted to leave the question, Could these dogs really do that?"
Therefore, the motions were underplayed and restricted. And, whenever possible, real objects on poles and wires were shot for the interaction, as was the case when the dog is drying a glass with a towel. "We got some of the way there," Davies says. But not all the way: The team ended up incorporating a CG mug and dog limbs to complete the contact.
The Mill used Autodesk's Maya for the CG work in that shot, as well as for the dog flipping burgers on the grill. "We originally had the dog flipping the burger over, which twisted the dog's wrist around. It just felt like we pushed it into the unbelievable, so we ended up re-animating it at the last minute to simplify the movement to something more believable," explains Davies.
While some of the dog arms were re-created in CG, the trays and other objects were added in the Flame. "We were able to project the trays and bottle onto cards that were animated in 3D space, allowing proper perspective and parallax," notes Davies.
In all, the production company spent approximately three days on the shoot, and Davies notes that they were extremely patient since nearly every plate had to be set up for VFX. "We shot trays, we measured the dogs' height as they walked through the scene, and then built trays to match and wheeled them through the scene for the right focal and lighting references," he says. "We had all these elements, and it was plate after plate." The saving grace: the decision to go with a locked-off camera, which eliminated the need for motion control and camera tracking.
While all the dogs were shot on set, a few of the more scruffy pooches had a greenscreen placed behind them to ease the roto work, made even more complicated by all the hair.
In addition to Flame, the team used Autodesk's Smoke and Flare for the compositing, Combustion for the roto work, and Maya for the CG. Color grading was also handled by The Mill using FilmLight's Baselight.
In addition to "Dog Sitter," Davies worked on two other humorous Bud Light spots for the big game: "Product Placement" and "Severance Package." But it was "Dog Sitter" that tied for the top spot in the USA Today Super Bowl Ad Meter ranking.
This doggie DJ scene was among those with the highest number of layers. The canine in the foreground alone required substantial work: The dog had to stand on its hind legs and bob its head to the music, while its paw (with a digital assist) scrubbed the record back and forth.
Bridgestone: Carma

Animals are usually a sure crowd-pleaser when it comes to commercials, and indeed, a beaver that repays a motorist's act of kindness was well received during the Super Bowl break.

The spot, which was driven by the same crew that brought us "Scream" (featuring a CG squirrel) during Super Bowl XLII in 2008, opens as a beaver lugging a tree branch attempts to cross the road. In an ode to that previous piece, the panicked animal, too frightened to move, braces for impact with its paws outstretched and mouth agape. Seeing the helpless beaver, the man quickly swerves his car to avoid hitting it. The animal salutes him in a sign of gratitude. Six months later, we return to the same location, this time during a rainstorm, as the driver stops the car just in the nick of time, as a huge tree falls across the road. As the shaken driver gets out of his car, he sees that the bridge, visible in the original scene, has been washed away by the now-raging river. As relief sweeps over him, he sees the grateful beaver standing beside a newly chewed tree stump. This time, the rodent gives the man a chest bump.
"It was the same group (agency, director, creator) from the Scream spot three years ago, so we all knew one another," says Andy Boyd, VFX supervisor/lead 3D artist at Method Studios, which handled the post work for both productions. "The expectations, though, were high since Scream had looked so good. But for this, we were starting from the endpoint of all that other hard work. The good thing, though, is that technology has moved on, so what was really hard then is not as hard anymore in regard to the number of hairs on an animal, for instance. Before, when I went over 1GB of memory, my computer would crash. Now I use 24GB, and it never crashes."
Leave It to Beaver
Obviously, the majority of the work on "Carma" involved a computer-generated beaver. In fact, there were seven to eight versions of the 3D animal used in the 30-second spot, from the full hero beavers, to the half CG/half live-action animal used for the chest bump. There was even a real animal actor.
"The model looked fantastic, but we got lucky on set with the director [Kinka Usher], who spent a good amount of time trying to get the animal [to do what was needed], and that was a huge help," says Boyd. "In the two end shots, the real beaver almost did exactly what we wanted him to do, obviously not the chest bump, but pretty damn close. So even though we had the CG version, it is always good to use the real stuff as much as possible."
The digital model was built by Method's team using Autodesk's Maya and Pixologic's ZBrush, with rigging and animation done in Maya. The fur generation and rendering, however, was accomplished within Side Effects' Houdini.
According to Boyd, the crew carried over a lot of technology from "Scream," and adapted it to the beaver. "On Scream, it was the first time I did close-up fur stuff, the first time I had set up a fur system, so a lot of what I was doing on the screen was prototyping," he says. "I now had that technology experience behind me, so the work was more standard."
By applying lessons learned, the team was able to generate a CG beaver with five million-plus hairs, 4000 of which are guide hairs. In comparison, the CG squirrel contained closer to one million hairs in its pelt. "That would have been a really big deal on the squirrel. [The model] would have drowned in memory if we had that many back then," Boyd notes. Not so this time around, especially with a 64-bit operating system. "With the faster computers, you can add the proper number of hairs until it looks right, and you don't have to worry about the number, as long as you stay under five million," he adds.
Grooming, to achieve the desired clumping and texturing, was performed in Houdini and rendered with Mantra. This was done using two different approaches: Boyd groomed the beaver used in the sunny shots, while Brian Burke groomed the wet animal for the rainy shots. "It took on a totally different look and shape; it's a completely different groom. It could have been a completely different animal," says Boyd of the wet version. Despite the difference in styling, the model and rig were identical.
An animal actor named Waldo (top, at right) starred in some scenes and served as a photo reference for the realistic CG model (top, at left), which was created at Method using Maya, ZBrush, and Houdini. (Bottom) Scanline used its Flowline software to create the water sim near the end of the spot.
While the rig was not overly complicated, the artists did add a more complex muscle system with built-in dynamics, as they had done for the squirrel. As it turns out, the real beaver gave a fine performance, "but if they would have needed our model to do some more complicated movement, like walking across the road, we would have been ready," explains Boyd.
In the first two shots, a trained beaver named Waldo performed, carrying the tree branch. "He's as trained as a beaver can possibly be," says Boyd, noting that the group filmed the animal on set for reference. Waldo also stood up on his hindquarters, perfect for the chest bump shot, "but only if you waved food in front of his face," Boyd adds with a chuckle. For the chest bump, the artists used a CG arm and chest, which was composited into the live action; the artists also adjusted the head position and the eye line slightly for the completed shot. The remaining shots contained the all-CG model, while an animatronic was used for shot reference.
According to Boyd, the most challenging part of the project was the salute. "It falls into that weird ground of trying to get a creature to do something that it can't really do," he says. "[The work] can go into that uncomfortable place where [the model] looks real in the frame but it is doing something that you know it can't do." With this in mind, the team chose a subtle motion for the salute that was noticeable by viewers but didn't tread too far into that unreal zone.
The Pixel Farm's PFTrack was used for camera tracking. Matchmoving the beaver was done in Maya.
In addition to the beaver, the commercial also contains CG river shots amid the live action. Burke modeled the digital bridge and riverbed using Maya, while Scanline, a VFX company in Germany that specializes in fluid effects, generated the fast-moving river. "When we got the storyboards and saw the raging-river shots, we wanted the work to be the best, and Scanline does incredible water work," says Boyd. Scanline used its proprietary Flowline software to create the simulation, and Method's Jake Montgomery pre-comp'd it with the CG bridge and additional CG debris using The Foundry's Nuke. He then added atmospherics and finished the composite in Autodesk's Flame.
Aside from the simulation assist, the work was handled by three CG artists and one compositor at Method. "It was one of the best jobs I have done in terms of just being in a small team and having a lot of fun," says Boyd. "Everyone really enjoyed themselves, and we were lucky enough to have been given the time to explore ideas to best tell this story."
Volkswagen: Black Beetle

Every so often a commercial uses a catchy tune that stays with you for quite some time. Such was the case for Volkswagen's "Black Beetle." The commercial features a fully animated CG beetle (insect) that maneuvers its way around the various obstacles and terrain it encounters, doing so in an automotive style and to the beat of Ram Jam's "Black Betty." The spot is fun, engaging, and creative in both concept and design. "It's an action-packed car chase in the forest," says The Mill's Tom Bussell, who, along with Juan Brockhaus, served as lead 3D artists on the project.
Bussell and Brockhaus guided the spot from beginning to end, from storyboards, to supervising the live-action shoot, to final delivery, all of which spanned just six weeks. "When a project is predominantly based around animation, the clients have to take a huge leap of faith and trust us creatively as well as technically," says Bussell. "The reality of such a quick turnaround and so much CGI is that it only comes together in the final few days of the deadline."
The scene opens in a lush, wooded environment, as some black bugs meander along the ground, when suddenly a black beetle overtakes them, speeding along over the rocks and dirt, quickening its pace across a moss-covered branch spanning a creek. The bug rounds a corner, nearly careening into a centipede before catching air, and then buzzes past two praying mantises, cuts through fire ants on the march, again flies through the air above a field of grass and dandelion seed heads, before landing sideways on a rock. The screen grows dark as a white line assumes the shape of the black beetle, and of the new Volkswagen Beetle.
In contrast to the CG characters in the spot (the mantis, ants, dragonfly, centipede, caterpillar, and so forth), the environments are mostly live action, filmed in a studio. This was no miniature, though. "Christopher Glass and his team re-created a huge section of organic forest inside the studio in Hollywood that must have been 10x10 meters," describes Bussell. "Being on set felt like you were standing in a dense forest. They did a great job."
Beetle Mania
As for the insects, The Mill created eight main bugs, and in the case of the mantis and ants, tweaked the subsequently replicated models for variation. All the base models were generated using Autodesk's Softimage and then refined using the sculpting tools in Pixologic's ZBrush.
The biggest challenge, though, was getting the design of the hero beetle nailed down. "For all the other insects, we matched to how nature intended them to look. That was the easy part," says Bussell. "But in a car commercial with no actual car, there was a big design element to the hero beetle; we had to convey the right message about the car. We needed our beetle to subtly reference the new design without the insect feeling too engineered. If you look closely, you can make out subtle shapes in the shell that act as wheel arches, the eyes are headlamps, and the silhouette from the profile is very similar to the new car design."
Despite the commercial centering on the design, the look was not finalized until late in the project. "We needed to see the whole animation together before knowing how far to push the design of the beetle," adds Bussell. Making that task somewhat easier was the robust geometry-caching pipeline the crew used, which gave them the ability to change things up late into the project with little fuss for the animators, rendering group, or lighters, and enabled them to spend more time on the creative aspects of the 3D.

The Mill created the fun, lively spot for Volkswagen featuring computer-generated insects, including the main character: the black beetle.
Before the model was approved, the rigging team, led by Luis San Juan, began building a stable pipeline that would automate the rigs to the numerous legs of these insects. The centipede rig was designed so that once the animator started working on the body, the legs would subsequently move in the anatomically correct way. Nevertheless, the animators could override this movement with timed keyframes.
"Although our brief was to create an insect that behaves like a car, we felt it was important to stay anatomically correct in order for the animation to be believable," explains Bussell. To this end, the artists studied various BBC documentaries of insects, gathered slow-motion footage, and built the digital insects with this action in mind. "I know way too much about insects now!" says Bussell with a laugh.
In a complete 180 turn, though, the group also studied iconic car-chase scenes, with the reference ranging from Starsky and Hutch and The Fast and the Furious, to non-chase reference such as The Matrix bullet-time effect. "Each shot in the commercial, from the framing of the shot to the animation of the beetle, is based around similar concepts to those iconic film moments," notes Bussell.
While design and animation took some time to develop, the music track was set from day one, which was a tremendous help, says Bussell, because it meant that the editor had a track to cut to, and the artists had something to base their animation on. "This helped with the buildup to the end crescendo in which the beetle jumps off the log and flies through the air Starsky and Hutch style, landing on the rock and skidding to a halt," he adds.
Insects in Detail
Although the artists found a plethora of useful textures online, they took things a step further, contacting an expert at the Natural History Museum, who helped the team find the specific insects they were looking for. They then took high-res photos of the bugs and, using Adobe's Photoshop, applied those surfaces, along with some hand-painted textures, onto the CG models. "The trick was to just keep adding more and more detail," says Bussell. "Once the base model was created and the UVs unwrapped, we started applying the high-res textures. A final level of detail (pores and imperfections) was then added in ZBrush."
The insects were rendered in Softimage and Mental Images' Mental Ray. A Spheron camera was used on set to capture HDRIs from both a chrome ball (for reflections and high spots) and a gray ball (for shading and color temperature) at the same angle and with the same camera. Also, for every shot, the crew photographed plastic insect models on set. "We got funny looks from the crew, but it was a useful lighting guide," Bussell points out.
In addition to the beetle, the group focused on the animation occurring around the main insect. All these collective movements were achieved in Autodesk's Maya by a small team led by Johannes Richter, which added particle atmosphere to all the shots (from the pollen to the small flying insects) to help bring the shots to life. According to Bussell, the dust trails and debris elements provided the biggest challenge here, with the group using references of various elements, from radio-controlled cars skidding through dusty terrain, to a car driving through the desert. It all boiled down, though, to artistic license, since a bug the size of the CG beetle wouldn't ever kick up as much dust as it did in the spot. "That aside, we felt it was an important final touch that referenced back to the idea that this was a car chase," explains Bussell.
All these elements were then composited into the final shots using Autodesk's Flame and The Foundry's Nuke. The comp team also used Nuke to enhance the undergrowth and vegetation of the live-action backgrounds. The environment in one of the final shots, in which the beetle is flying through the air, was put together entirely in Nuke using still photos from the set.
So, what made this project so successful? Bussell says it boils down to a good idea from the very start. "I had the luxury of working on some of the really great iconic work in advertising over the years, and this one is right up there," he says. "Every artist at The Mill wanted to work on it. It's just one of those projects that has all the right ingredients from the start."

While the insects, like the two mantises above, are CG, the environments in the commercial are mostly organic, re-created in the studio and shot as live action.
Kia Optima: One Epic Ride

In contrast to the Volkswagen commercial, in Kia Optima's "One Epic Ride," the focus is on the vehicle throughout this wide-ranging adventure, which takes the audience from land, to sea, to a distant planet, and beyond.

The action starts off with all the suspense of a James Bond film, as a police officer impersonator makes off with a couple's Optima, leaving them handcuffed to his parked motorcycle. As the person drives along a coastal highway, a villain in a helicopter fires a high-tech magnet, lifts the car, and carries it out to sea to an awaiting yacht, where a handsome fellow surrounded by beautiful women eagerly awaits its arrival. Suddenly, in a nod to fantasy, Poseidon emerges from the water and grabs the vehicle, but only momentarily, as a green light from a hovering spaceship beams the car aboard. The scene cuts to a sparse, dusty landscape, where an alien takes the wheel. A time-warp portal opens, and the Optima is sucked through to the other side, where a Mayan chief receives this bounty from atop a pyramid, as tens of thousands of warriors cheer in appreciation of their new gift.
Sound a bit over the top? That's the intention, says executive producer Melanie Wickham from Animal Logic. "The purpose was for everyone to go to extraordinary lengths to get this car, with the antics getting more ridiculous as [the spot] moves along," she says.
While the assets for the 60-second commercial were built at Animal Logic's Sydney headquarters, the live action was shot in California at various locales, including a soundstage, with some members of the Australia team attending those shoots. Because of the short production schedule for the spot, accurate previsualization (created in Autodesk's Maya) was especially crucial to the spot's overall success, notes Matt Gidney, CG supervisor. Detailed concept work was equally important, as it gave the director, agency, and client a clear understanding of this design-intensive, multi-sequence, multi-location spot.

"Concept work is used to pitch something beautiful, but for us, it was important to establish direction as quickly as possible," says Gidney.
Extreme Elements
The spot incorporates many different sets and infuses many different genres, each with its own distinctive look.
began with live-action plates, though a good portion of the objects needed to support the story line were built in CG. Even the alien landscape is practical, shot near the Mojave Desert, albeit with digital moons augmenting the landscape; a 3D alien completes the scene. In addition to the practical backdrops, the commercial incorporates matte paintings and digital set extensions.

"We were faking quite a lot. Every shot was touched in some way," says Andy Brown, VFX supervisor.
The most obvious computer-generated elements (by way of the action) are the helicopter, boat, and, of course, Poseidon. Mostly, the star of the spot, the car, is practical, though at times, it, too, had to be built digitally.

Maya was used to create and animate the models; texturing was done in Maya and Adobe's Photoshop, with some experimentation conducted in The Foundry's Mari. Meanwhile, The Foundry's Nuke and Autodesk's Flame were employed for compositing. For tracking, the group used 2d3's Boujou. Rendering was done in Pixar's RenderMan and Animal Logic's MayaMan, the studio's Maya-to-RenderMan software.
One scene that especially challenged the artists, and of which they are most proud, is the one with the all-CG, water-simulated Poseidon.

During the past few years, Animal Logic has been developing ALF, its 3D software framework, and when it came time to tie up some loose ends with the water module, Gidney sat down with the studio's R&D team to determine the best solution to incorporate into the framework. After a test period during which the developers examined Side Effects' Houdini, Next Limit's RealFlow, and in-house solutions, Animal Logic committed to extending the functionality of the proprietary water modules for the ALF Nexus tool set.
"We decided that we could get more done within our own framework, because once the coding was done, we could iterate on the solutions quickly, saving expensive resource calculations," explains Gidney. "We just have to code a particular solution once. As a result, we were able to do some very large simulations distributed across the farm, producing vast amounts of data describing water, which we then could pass back into RenderMan." With this setup, explains Wickham, the team only has to write hooks into other software, such as Houdini, Maya, and Autodesk's Softimage, saving a great deal of time otherwise spent doing complex coding.
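ALF and its water modules are proprietary, but the general hand-off Gidney describes, a farm simulation whose point data is written out and later read back by RenderMan, can be sketched in generic terms. The snippet below is only a rough illustration under that assumption, not Animal Logic's code: the stand-in solver and file name are hypothetical, and the output uses RenderMan's standard Points primitive written into a RIB archive.

    # Hypothetical sketch: dump one frame of fluid-sim particles to a RIB
    # archive that a RenderMan render pass could later include.
    import random

    def simulate_particles(count=1000, seed=1):
        """Stand-in for a real water solver: returns (x, y, z) positions."""
        rng = random.Random(seed)
        return [(rng.uniform(-1, 1), rng.uniform(0, 2), rng.uniform(-1, 1))
                for _ in range(count)]

    def write_rib_points(path, positions, width=0.01):
        """Write the particles as a RenderMan Points primitive."""
        flat = " ".join("%.5f %.5f %.5f" % p for p in positions)
        with open(path, "w") as rib:
            rib.write("# one frame of water particles\n")
            rib.write('Points "P" [%s] "constantwidth" [%g]\n' % (flat, width))

    if __name__ == "__main__":
        write_rib_points("water_frame_0101.rib", simulate_particles())

In a real pipeline, each farm job would presumably handle its own slice of the simulation and write its own archive, which is roughly what distributing the work across the farm implies.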
That decision paid off when it came time to simulate the water for the Poseidon sequence. "We were able to get those sims out very quickly," says Wickham, noting that the studio shares the framework across its film and commercials divisions. In fact, the water module was used extensively in the animated feature Legend of the Guardians: The Owls of Ga'Hoole.

Animal Logic's teams in both Los Angeles and Sydney worked together on "Epic Ride." According to Brown, all the previs and shoot prep was done in LA, and the spot was posted in Sydney. As Wickham points out, "The agency was a little nervous sending [the work] down to Sydney, but with our review tools, it's now easy to work remotely." And that's a concept the crew proved true in epic style.

A mix of in-camera and digital elements fuels the unique "Epic Ride" spot. Animal Logic used its own water modules to create the all-CG Poseidon, while a range of off-the-shelf tools, including Maya, was used to create digital elements for scenes that also include live actors.
NFL: Best Fans Ever
Perennial advertising giants Budweiser and Coke are not the only brands known for creative Super Bowl commercials. In fact, the NFL has been coming up with smart plays of its own in recent years, including 2011's "Best Fans Ever," featuring digitally altered clips from a range of favorite television shows, past and present, in which the characters are re-dressed in team gear and football-centric elements are inserted into the scenes.

"We were tasked to create a story for the Super Bowl built around the experience that everyone shares," recalls editor Ryan McKenna from The Mill. The group settled on the concept of preparation, focusing on the anticipation and excitement of the big game.

A large crew from The Mill's New York office spent weeks sifting through mounds of television footage, from iconic series such as Seinfeld, Cheers, 90210, The Brady Bunch, and The Sopranos, as well as Glee, Family Guy, and more, looking for certain moments that had potential. That is, potential for the clip to be re-created into a fan moment. Those clips
were then placed into categories describing the scene: for instance, stars delivering one-liners, making entrances, eating food, and so on.
"The list of shows didn't really shrink much from what we started with," says McKenna. "There were very few 'no's." Having actor Henry Winkler and actor/director Ron Howard sign off from day one, minute one, on their Happy Days clips didn't hurt the NFL's cause, either. Other talent soon followed, though some with caveats. New Yorker Jerry Seinfeld would only agree if he was portrayed as a Giants fan, "which is what we wanted him to be anyway," says Ben Smith, creative director at The Mill-NY.
Real Fans
The fictional locations of the shows dictated which team those characters would root for: Cheers' Norm is dressed in a Patriots jersey; the Sopranos crew is decked out in Jets gear; The Dukes of Hazzard's General Lee sports a Falcons logo. The group decided early on, though, that the characters, who span nearly 40 years of television, would wear modern styles rather than those more appropriate for their period. According to Smith, this made the gags more obvious to the audience, despite the fact that the change was otherwise seamlessly integrated into the various shots.
Typically, the agency would first secure rights to the imagery, and after the edit was finalized and locked, the post team would then begin its work revising the clips. However, when the client handed The Mill this project, the group was already facing a late-running clock. So, The Mill crew had little choice but to begin post work on some of the clips that had been approved by the network, with the hope that the talent would sign off as well; if the rights were not granted, then the work was abandoned for another clip that fit into the edit.

"For this project, the edit didn't get locked for about six weeks," McKenna notes. As a result, there were times when the team would get about 90 percent finished with the effects, and a shot would change, requiring new editorial and, with it, new effects.
One Shot at a Time
In the end, the commercial contains approximately 40 altered clips, though the digital crew worked on far more than that; some, for one reason or another, didn't make it into the final spot. How the team approached each clip, however, varied. "No two shots were the same," says Smith. Some shots contained 2D elements filmed at The Mill using the studio's lighting and greenscreen setup, and then composited into the clip; others incorporated CG elements. This mix forced the team to take a brute-force approach: whatever the problems were in a shot, the artists had to deal with them in whatever method worked best for that clip. Sometimes the artists tried different solutions until one stuck.
"A lot of the camera work contained nodal pans and tilts, so there wasn't much 3D camera tracking to do," says Smith. "That made tracking and comping much easier." Often, the group had to take out the camera move, clean up the frame, composite the new imagery, and then add the camera move back in.
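The remove-the-move, fix-the-frame, restore-the-move workflow Smith describes for nodal pans can be approximated with a simple 2D stabilization. The sketch below is a generic illustration, not The Mill's pipeline; the track points and the pre-keyed patch are assumed inputs, and the composite step is a placeholder.

    # Rough sketch of "take out the camera move, fix the frame, add the
    # move back in" for a nodal pan/tilt, using OpenCV.
    import cv2
    import numpy as np

    def stabilize_fix_restore(frame, ref_pts, cur_pts, patch):
        """ref_pts/cur_pts: matching 2D track points; patch: pre-keyed element."""
        h, w = frame.shape[:2]
        ref = np.asarray(ref_pts, np.float32)
        cur = np.asarray(cur_pts, np.float32)
        # Estimate this frame's motion relative to the reference frame.
        M, _ = cv2.estimateAffinePartial2D(cur, ref)
        # Take out the camera move so clean-up happens on a steady plate.
        steady = cv2.warpAffine(frame, M, (w, h))
        steady = cv2.add(steady, patch)  # stand-in for the real composite
        # Put the original camera move back on.
        M_inv = cv2.invertAffineTransform(M)
        return cv2.warpAffine(steady, M_inv, (w, h))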
Then there were trickier shots that required CG, as with the Seinfeld clip of Jerry and Newman. "Even though the camera is just a pan, because their bodies are moving so much, we couldn't get away with a 2D approach. It had to be a 3D solution," explains Smith. "That involved tracking the camera, roto-animating both characters, and then building the jersey, hat, and jacket, and then lighting, rendering, and compositing as we normally would do. That's a long process. And just for a few seconds of a clip. Times 40 clips."
"What we learned on one shot couldn't be applied to another because it contained a whole other set of problems," says McKenna. "That's unusual for VFX shots, where it's usually one setup that is propagated through all the shots."
The work also entailed a great deal of cloth simulation, since most of the revisions involved clothing. "It was a challenge because we were dealing with a moving camera and a moving person," notes Smith. "Compositing 3D cloth next to live-action limbs, where a live-action hand meets a CG cuff, it had to track absolutely perfectly or there would be slipping."
For the cloth sim, the artists used Autodesk Maya's nCloth. They also employed Apple's Final Cut, as well as a mishmash of Autodesk's Maya, Mudbox, Softimage, and Flame; Science.D.Visions' 3DEqualizer; The Pixel Farm's PFTrack; The Foundry's Nuke; Adobe's After Effects; and FilmLight's Baselight for color grading.
According to Smith, the most difficult footage to work with was from 90210 because the quality was so poor. But then again, nearly every plate the group dealt with was a different format and quality. To make matters worse, for most of the shows, the group had to work from DVDs as opposed to higher-quality tapes due to the time crunch.
Even though the path to the final results took many twists and turns, in the end, The Mill's work on "Best Fans Ever" generated a large number of fans as well, from audiences as well as from the talent used in the clips. "I was sure we would end up with four shows [that would sign on]. The project just seemed too ambitious for the timeframe," says McKenna. "But this just goes to show the power of the NFL. People love it."
Karen Moltenbrey is chief editor
of Computer Graphics World.
The Mill team re-dressed actors and sets from approximately 40 iconic TV shows in NFL gear for an
NFL ad. The project required a tremendous amount of camera tracking, rotoscoping, and compositing,
though each clip required a unique approach.
Cry Wolf

In this modern fairy tale, a CG werewolf stalks Red Riding Hood by night, but during the day, pretends to be a friend, or, maybe, family. By Barbara Robertson
The visual effects shots in Red Riding Hood's modern retelling of the classic fairy tale total only approximately 230, but they include the pivotal character, the werewolf, and the medieval setting in which the wolf and Red live. Catherine Hardwicke directed the Warner Bros. romantic thriller that stars Amanda Seyfried as the red-hooded Valerie, Gary Oldman as the werewolf hunter Father Solomon, and Julie Christie as Red's grandmother.

Jeffrey A. Okun supervised the visual effects work, which included 79 shots of the always-CG werewolf created at Rhythm & Hues, and a 3D village and set extensions by Zoic Studios, with Soho VFX handling everything else. In addition, Paul Bolger at COS FX, hired to do temp work, eventually took on final shots. "His work was so good, we asked him to output at 2K," Okun says. He added debris to a shot in which someone flies through a bookshelf, made an ax fly through the air, did the eye-changing shots on the character that turns into the wolf, and others.
Which character? "It could be anyone in the village," Okun says coyly. Barbara Robertson, CGW West Coast editor, asked Okun to tell us more about the werewolf and the other visual effects shots in the film.

Does the werewolf transform from one of the characters in the village during the film?
We specifically decided we didn't want to show a transformation. He shows up reasonably formed every time. I was up for the challenge, but from the story point of view, it didn't make sense, and the expense didn't make sense. Even so, toward the end of the show, the studio asked us whether we'd again explore doing that. Instead, we used an old trick: When the wolf gets angry during a big fight scene in the daytime, we figure out who the wolf is because the eyes of the individual who is the wolf change.
Reasonably formed?
To economize, Catherine [Hardwicke] decided to introduce the wolf during an attack sequence with a series of blurs. The concept was brilliant, and the execution was doubly brilliant because of the work by my editor, Neil Greenberg, and Craig Talmy, Derek Spears, and others at Rhythm & Hues. We were able to find places where you can begin to see the wolf. We ramped up the action beyond what we thought we could afford, yet stayed within the budget and schedule. We ended up with an exciting sequence that reveals the wolf bit by bit based on the actions.
What does the werewolf look like?
Our wolf doesn't look like a wolf, exactly; it looks like our wolf. It has four paws, a snout, a tail, and short hair, almost like a greyhound. We discovered that we lost muscle definition with longer hair. Derek [Spears], the Rhythm & Hues supervisor, suggested porcupine quills on his shoulders to make the character look more lethal. Catherine didn't like quills, but we used something like that: spiky hair that looks like it has a lot of product in it. The face is based on a wolf's face, but the nose is more lethal-looking, and the teeth and gums look like they haven't been brushed in years. We added dried blood in the fur, and spittle and goo. The wolf eats a lot of people. And we spent a lot of time on the eyes, especially because we had to figure out how to get humanity into the eyes. The wolf's eyes are amber, and sometimes they glow. We could control the glow on a per-shot basis based on how menacing or kind he . . . or she . . . was. We studied a documentary about wolves that had phenomenal shots of the eyes. When the light's right and you see the wolf in profile, the depth is visible. So we had to do 3D eyes and add glow on a 2.5D basis because the glows have to come from deep inside. For a couple of shots, we made the eyes in 8K resolution. I doubt anyone will notice all this subtlety. But they'll feel it.
How did you come up with the design?
Catherine did an amazing amount of research. She found every werewolf from TV, film, and books. Digital Domain created a 3D wolf on a turntable from the concept art that Catherine and I presented to the studio to get approval. Once we got a green light on the film, Digital Domain wasn't available, and we hired Rhythm & Hues to flesh out
the design. The wolf has to do things a wolf can't do, so our challenge was to still keep him true to form, and Craig Talmy did an amazing job.
How does the werewolf act?
There's a sequence in the first attack where he was supposed to stand on two legs and corner Valerie. He looked horribly deformed. Wolves can't lift their elbows sideways; they can only go forward. So we had to find ways to make our wolf look right with the action they had planned and shot. We had to, and I'll create a new word here, "truthify" what they shot on set. Craig figured out some clever stuff, and it all clicked when Catherine, who is from Texas, decided with my help that the wolf should be more like a rodeo horse than an anthropomorphic wolf. We found motions we could justify from out-of-control rodeo horses and bulls, and feral hyenas, and we mixed them with feline actions. We had sleekness and grace countering frenzy and feral. And that was the key to making this wolf what it was. In the dye pool alley, when the wolf tries to persuade Valerie to come away with him, or her, we mixed feral crazy with reasoning and persuasiveness, and that ups the ante. And then in the final sequence, the wolf shows intelligence, patience, and the beginnings of insanity. The wolf's feral side bubbles up because the clock is ticking.
What about facial animation? Does the wolf talk?
We chose early on to have the wolf talk through telepathy because we wanted to keep him, or her, believable. Originally, we had a lot of animation and secondary animation with the facial animation; the face came alive in a realistic way. We had wind blowing in the fur. Subsurface muscles moving around to demonstrate agitation and frustration. But we discovered that it looked like we had failed to animate his mouth, like the animation was unfinished. So, we had to dial back the secondary animation to a degree. It was always a battle between how much we could do and not make it look like he should be talking.
How did you film the wolf on set?
We asked for a green-suited actor because it's always funny. Instead, the person the wolf ends up being was there doing the dialog, wearing a wire mask that put the snout and the eyes in the right place, and we had the person posed at the right eye-line height. Also, we had "fluffy," a Styrofoam cut-out of the wolf so people could understand how big the thing was; "stuffy," the 3D furred head and shoulders on a C-stand with eyes that lit up; "flatty," a flat piece of foam core in the shape of the wolf lying on the ground so people wouldn't invade his space; and a stunt performer wearing a wolf suit from a costume store, the wire head thing, and a wire tail. When we were blocking the scenes and working out the actions on set, the stunt guy was brilliant. He's in such good shape that he would really scare the actors, and that's what we needed.
Our procedure involved a number of passes that Catherine graciously let us have. For the action scenes, we shot a pass with the stunt guy and a witness camera. Then we'd shoot a pass with stuffy to get fur lighting reference. Then, we'd set up a laser eye line or put an X on a C-stand, and they'd do the scene with no wolf. During a sequence in the dye pool alley, we had such wild camera moves that we knew we couldn't re-create them, so we shot blank tiles and put them together after the fact, when we knew which part of the set they used. We shot the move with all the actors. Then the camera operator, Steve Campanelli, shot a blank version to the best of his recollection. And we went in with still cameras and HD cameras to fill in all the perspectives.
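The "blank tiles" Okun mentions are overlapping clean plates that get assembled later, once the camera path is known. One generic way to merge such tiles is a feature-matched homography; the sketch below is only an illustration of that idea, not the production's tool, and the inputs and thresholds are assumptions.

    # Minimal clean-plate stitching sketch: warp one "blank tile" onto a base
    # plate using ORB feature matches and a homography (OpenCV).
    import cv2
    import numpy as np

    def stitch_tiles(base, tile):
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(base, None)
        k2, d2 = orb.detectAndCompute(tile, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(tile, H, (base.shape[1], base.shape[0]))
        # Keep base pixels where they exist; fill empty areas from the tile.
        empty = (base.sum(axis=2) == 0)
        out = base.copy()
        out[empty] = warped[empty]
        return out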
Was the movie shot entirely on sets?
We shot the entire movie inside a tiny soundstage, although it's supposed to take place outdoors, in a village in a forest in the hills. We built the sets to have depth, even though they're up against a wall, by using forced perspective. We figured out what lenses the DP would use and how often we'd be shooting off the set based on the lenses, then figured out a way to not shoot off the set, and then said, "Let's throw away the budget for a moment and figure out what we would do if we were filming on location in a village in a forest." We included that in the visual effects package. Then we had them hang a neutral gray curtain off set. Most of the story takes place in the snow, so if you shoot a bit off set and the gray shows up, it will look like gray sky.
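Working out from the lens list how often the camera would see past the built set comes down to simple field-of-view geometry. The numbers in the sketch below are illustrative, not the production's: it estimates how wide the frame is at a given distance for a given focal length and sensor width, and checks that against the set width.

    # Field-of-view check: will this lens see past the edges of the set?
    # Focal lengths, sensor width, and set dimensions are made-up examples.
    import math

    def frame_width_at(distance_m, focal_mm, sensor_width_mm=24.9):
        """Horizontal footprint of the frame at a given subject distance."""
        h_fov = 2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm))
        return 2.0 * distance_m * math.tan(h_fov / 2.0)

    def shoots_off_set(distance_m, focal_mm, set_width_m):
        return frame_width_at(distance_m, focal_mm) > set_width_m

    if __name__ == "__main__":
        for focal in (18, 27, 50):  # hypothetical lens kit
            status = "off the set" if shoots_off_set(9.0, focal, 15.0) else "inside the set"
            print("%d mm: %s" % (focal, status))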
What we didn't take into account was that, because of the style of shooting and the shot schedule, a lot of lights had to be in place, and some were in front of the set. So we had a number of fix-it shots. For example, during Solomon's arrival, Steve [Campanelli, camera operator] brought the camera low and shot up, which put Gary Oldman's head inside a light. Zoic did a phenomenal roto job to fix that.
What kinds of set extensions did you do?
Zoic spent a great deal of time alongside Catherine designing the part of the village that we didn't build on set. Because it was 3D, they were able to drop it into shots and open up the feel of the village. When a shot felt claustrophobic, we also sometimes lowered the frame and added sky, trees, and mountains beyond, and the tops of houses. Grandma's house was on another ridiculously small stage. They put a white cyc all the way around it, with lights everywhere and trees. We removed all that, and instead of just getting rid of hot spots, we added a lake and mountains to open it up. We considered how they would shoot it if it were real, scaled down based on budget, and then looked for opportunities to add things back in.

Inside the tavern, for example, they had a small practical set and scenes with an awful lot of people. And again, we had lights directly behind people's heads, but without any motivation. So we put windows in the tavern, which fixed that and, oddly enough, makes the tavern feel better.
Did it feel like you were illustrating a fairy tale?
It did. On the visual effects end, we struggle between what looks cool and what serves the story best. It took a little while to get into Catherine's head to understand the look she was going for. Then, I understood it was a little bit fairy tale with an edge of reality. So, we'd put a flock of birds on the ground to distract the eye. Darken the left and right sides of the frame. Desaturate images and let the reds pop.
This sounds like more work than 200-plus shots suggests.
For me, it was some 500-odd shots. Zoic's set extensions would go to Rhythm & Hues to drop the wolf in, and then Soho would add the moon and sky. Zoic would build a city, and Rhythm & Hues would add the wolf. The three vendors shared shots, and I had to account for them as three separate shots because I had three payments. It was a moving target with a lot of pieces sliding in all directions. And Catherine [Hardwicke] likes to screen a lot. Every two weeks, we had a friends-and-family screening, and the studio did screenings, as well. So, every two weeks we had to have new or updated shots ready to drop in. That was a lot of temp work, and then that kicked into real work. So to me, it felt like a 1,500-shot show, it was that intense. All the moving pieces. Staying liquid and trying to be responsive to anything the director and studio asked for. And staying on budget. It was fun.