Computer animation and its constant leaps forward

Animation never ceases to evolve, seemingly with every new release. But where did it begin, and will it have unintended effects?

It’s hard to believe we’re in a world where the original Toy Story, the first fully computer-generated feature film, is more than 20 years old. Its computer animation was revolutionary for 1995, but just how much have things changed since then? On a nearly unimaginable scale.

For perhaps the best indicator, look at the first part of this recording of Steve Jobs, speaking at the American computer graphics conference SIGGRAPH in August 1995, shortly before Toy Story’s release. He’s probably best known to you for leading the company which gave you all those nifty music players, but he was also critical to Pixar’s rise.

Him talking reverently about 600 billion bytes of data (roughly 600GB) which, let’s face it, consumer hard drives exceeded years ago, gives you some idea of how far we’ve come in quite a short amount of time. But making an 80-minute feature when the most ambitious computer-animated shorts had barely passed the five-minute mark was an enormously risky undertaking. What enabled Pixar to do it, and what have the advances been since then?

Computer graphics was once the preserve of computer scientists, who as far back as the 1960s attempted to render images in the most rudimentary form. There were computer graphics companies, but almost none of them were working to create elements for motion pictures. Tron was a brave early experimenter in 1982, but faltered at the box office for its troubles.

The real breakthrough came with Lucasfilm’s Computer Animation Division, formed in 1979 and drawing in future Pixar leaders like Edwin Catmull and Ralph Guggenheim. It was a few years before Disney animator John Lasseter joined the group and produced a quirky 2-minute short, The Adventures Of André And Wally B. 1986’s Luxo Jr., the short about father-and-son desk lamps playing with a ball, caused shockwaves in the computer and animation communities.

1986 was also a big year in that the division was sold by George Lucas to Steve Jobs, who branded it Pixar, and it heralded the crucial RenderMan, ray tracing software (more on that fancy term later) which would become the standard not just for Pixar but for the visual effects industry as a whole, used on every Visual Effects Oscar nominee of the last 15 years.

After several more short films and adverts, including the Oscar-winning Tin Toy, it was only in 1991 that Pixar started working with Walt Disney Animation Studios (WDAS), then named Walt Disney Feature Animation, developing the CAPS system to digitally colour parts of Beauty And The Beast. After that film’s Best Picture Oscar nomination, a contract was worked out for a feature film.

Making Toy Story, or indeed any computer-animated film since, involved the usual extra-careful planning that animated films require: storyboarding, scripting and concept art before anything is committed to animation. On Toy Story, this prep alone took eighteen months.

The animation itself required collaboration between older methods and new; traditional hand-drawn animators were brought in to realise the characters, whilst computer scientists and engineers were on hand to write software and provide solutions to help create that vision.

The end result? A deceptively simple comedy about two toys quarrelling for favourite status in their owner’s bedroom but growing to understand each other. Toy Story arrived just as Walt Disney Animation Studios’ success had peaked with The Lion King, and as Pocahontas fell below the lofty expectations of its main advocate Jeffrey Katzenberg (more on him later). Of course it helped that the film itself was frankly better than could have been hoped for, and was almost the highest-grossing film of that year.

Suddenly, computer animation became the ‘in’ thing in Hollywood. Disney’s hand-drawn renaissance continued until the turn of the millennium, but within ten years of Toy Story, the computer division which WDAS had originally courted had outclassed it to such an extent that 2D theatrical animation was abandoned. Now, as Pixar have themselves taken a relative (important to stress that) downturn, Inside Out excepted, WDAS have begun their own run of smash-hits, of which Zootropolis is only the latest. With Frozen, the studio created an enormous, phenomenon-level hit akin to The Lion King almost twenty years earlier, and on a par with their Emeryville counterpart’s own Finding Nemo and Toy Story 3.

Appreciating that Pixar has been under Disney ownership since 2006 (an enormous $7 billion sale), it really has become a technological arms race amongst the various animation studios, each pushing the other to create the newest and most advanced animation yet.

Not just within Disney of course. Jeffrey Katzenberg left Disney and founded Dreamworks with Steven Spielberg and studio exec/record producer David Geffen, the three supplying the S, K and G of its name. Dreamworks’ computer animation efforts began in 1996 in collaboration with Pacific Data Images, a small studio which had survived the surge and slump in interest in CGI around Tron’s time. Their first feature film, Antz, came out just a month before Pixar’s own ant-centric film A Bug’s Life, and involved new facial animation tools with special focus on eye control. Just as well too, considering its main character was voiced by Mr. Idiosyncratic-dialogue himself, Woody Allen, and featured almost half-a-dozen other celebrity voices.

Or how about Blue Sky Studios, who in a distribution deal with Fox have one of animation’s most consistently lucrative franchises, the Ice Age films? By 2002, when the first film was released, they had more realistic rendering software than either Pixar or PDI/Dreamworks. Developed under Ice Age’s director Chris Wedge, the renderer exploited ray tracing better than anything before it, achieving lighting, shading and shadows that were more efficient yet ever subtler and more realistic.

Now explaining how rendering works is where things get complicated.

Imagine an animated scene with all its elements in place in a frame, like a live-action set with a camera ready to shoot, and only the lighting missing. You’d assume the simulated light rays would come from outside, bouncing off the scene elements and into the virtual camera to create the image, as in real life. Instead, the camera fires its own light rays, which ‘trace’ a path back to the simulated light sources and build the image that way.

Trying to light and render an entire virtual environment just for one shot was, for the time at least, not time-, resource- or cost-effective; concentrating on lighting and rendering only what’s in the frame was. Ray tracing isn’t necessarily better or worse than its close cousin, path tracing, but it can be a more streamlined way of rendering in that it only calculates lighting from direct (i.e. original) sources. Other effects like global illumination (how and where light rays are reflected by surfaces in a scene) are dealt with in separate computing equations, whereas path tracing needs more computing power but can deal with all of those equations at once, with arguably more realistic results.
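To make that idea concrete, here’s a deliberately tiny sketch of ray tracing in Python: one sphere, one point light, and a camera that fires a ray through each pixel and shades any hit from the direct light source only, exactly the ‘direct sources’ shortcut described above. Every name and constant is invented for illustration; production renderers like RenderMan are incomparably more sophisticated.

```python
import math

# A made-up scene: one sphere and one point light.
SPHERE_C = (0.0, 0.0, -3.0)   # sphere centre
SPHERE_R = 1.0                # sphere radius
LIGHT    = (2.0, 2.0, 0.0)    # point light position

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(v):
    l = math.sqrt(dot(v, v))
    return (v[0]/l, v[1]/l, v[2]/l)

def hit_sphere(origin, d):
    """Distance along unit ray d to the sphere, or None on a miss."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4 * c      # d is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(px, py, size=20):
    """Fire a ray from the camera through pixel (px, py) and shade the
    hit with direct Lambertian lighting only -- no bounced light."""
    x = (px + 0.5) / size * 2 - 1          # pixel -> image plane at z = -1
    y = 1 - (py + 0.5) / size * 2
    d = norm((x, y, -1.0))
    t = hit_sphere((0.0, 0.0, 0.0), d)
    if t is None:
        return 0.0                         # background: no light
    p = (d[0]*t, d[1]*t, d[2]*t)           # hit point
    n = norm(sub(p, SPHERE_C))             # surface normal
    l = norm(sub(LIGHT, p))                # direction to the light
    return max(0.0, dot(n, l))             # brightness from the direct source

# Render a tiny ASCII image: brighter characters where more light lands.
for py in range(20):
    print("".join(".:-=+*#%@"[min(8, int(trace(px, py) * 9))]
                  for px in range(20)))
```

Everything a path tracer adds, like light bouncing off the sphere onto other surfaces, would go where this sketch simply returns the direct lighting term.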

Still with us? Good.

And Ice Age, still a decent-looking film, was 14 years ago. It’s safe to say the animation studios’ programs and methods are now ridiculously sophisticated, and only getting better.

Dreamworks Animation’s main advancement in recent years has been Premo, formerly Emo (stop snickering at the back). It basically makes animators’ lives a lot easier and more flexible, letting them pose multiple characters in real time, where before Premo editing even one character’s position required time lags and data entry. Director Dean DeBlois credits this advance with being able to put thousands of elements at play in How To Train Your Dragon 2’s central battle scene. Premo is one of many software updates which have tried to redress the balance between powerful computer processes and animation artists applying their intuition, such as Pixar’s Presto or WDAS’ Nitro, the latter giving animators a realistic render as they’re working.

Disney’s Hyperion must surely be the motherboard (sorry) of all animation technology at the moment. Remember the bit about path tracing? Well, when aided by one of the world’s most powerful supercomputers, Hyperion is able to calculate more extensive and subtle lighting than ever before, with more than enough left over to create entire digital environments. Big Hero 6 was the first film made using Hyperion, and where before the fictional city of San Fransokyo would have been filled out by background paintings, now the entire city can be rendered and made functional, for the purpose of a more authentic view. You can watch their own video explanation of it here.

Natural effects are the hardest to animate, because everyone has a lifetime of real-life reference for how liquids, weather and humans look and interact. That’s why the first computer animated features tended to avoid such effects where they could. Toys were something Pixar was already innately familiar with animating, hence the focus of their first feature (notice the camera angle which means we don’t see the milk that a singed Woody dunks himself into).

Lasseter himself pointed out during Brave’s production the problem with animating specific clothing or hair: “The computer likes things that are completely geometric and in a film like that nothing is geometric.”

Brave meant overhauling Pixar’s animation software, and part of Presto’s magic was in giving animators the power to pose millions of elements at once, including hair. For Merida’s hair specifically, it was a simulator named Taz (yes, after that Taz) which treated each hair like a coiled wire which would extend when Merida moved but spring back when she stopped.
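A crude sketch of that coiled-wire idea: below, one strand of hair is modelled as a single damped spring in one dimension, whose tip trails behind its root (the character) while she runs and settles back to rest length once she stops. All names and constants here are invented for illustration; Pixar’s actual Taz simulator is vastly more elaborate.

```python
# Toy model of the 'coiled wire' hair idea: one strand as a damped 1D spring.
STIFFNESS = 40.0   # spring constant: how hard the hair pulls back
DAMPING   = 6.0    # drag: stops the hair oscillating forever
REST_LEN  = 1.0    # strand length at rest
DT        = 0.01   # simulation time step, in seconds

def simulate(roots):
    """Follow the hair tip as its root (the character) moves through the
    positions in `roots`. Returns the strand's length after each step."""
    tip, vel = roots[0] - REST_LEN, 0.0    # hair hangs behind the root
    lengths = []
    for root in roots:
        stretch = (root - tip) - REST_LEN  # positive = over-extended
        force = STIFFNESS * stretch - DAMPING * vel
        vel += force * DT                  # semi-implicit Euler, mass = 1
        tip += vel * DT
        lengths.append(root - tip)
    return lengths

# Merida runs at a constant speed for two seconds, then stops dead.
roots = [0.05 * i for i in range(200)] + [0.05 * 199] * 600
lengths = simulate(roots)
print(lengths[199])   # while moving: stretched well past rest length
print(lengths[-1])    # after stopping: sprung back towards rest length
```

The appeal of the spring model is exactly what the article describes: the strand extends naturally under motion and recovers on its own, with no animator posing each hair by hand.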

And where getting just Sulley’s fur right was an enormous challenge on Monsters, Inc., Zootropolis is at the point where it can work out how to render 9 million hairs on giraffes, a species which doesn’t even provide a prominent character for the story. And that species is just one of 64 featured in the film.

Yet even noting these advancements, Pixar’s most recent film The Good Dinosaur is, whatever you think of the finished film, the current animation pinnacle. The stats alone stagger: just one sequence, featuring Arlo being swept away and separated from his dad, involved 17 terabytes, more data than the entirety of Cars 2, made just four years earlier.

Working with comprehensive data gained from the US Geological Survey, the film overall features 900 natural effects shots, twice as many as any other Pixar film. Brave, three years before, had river shots in the dozens; The Good Dinosaur has over 200. The animators could actually create fully volumetric, interactive clouds instead of painting them in.

The results are utterly astonishing environments that met plenty of criticism for ‘clashing’ with the stylised character models, but to me that’s just an extension of the 2D animation tradition, where the backgrounds felt tactile and real amidst the animated foreground. Finding Dory will predictably only improve Pixar’s processes further: this instalment can actually have an octopus, for instance (albeit one missing a tentacle)!

There’s also a sense that, as digital effects become more prominent in live-action films than ever (we’re now at the point where we can actually debate whether Jon Favreau’s recent The Jungle Book is live-action or an animated film), the filmmaking craft in animation is gaining more respect. Sharon Calahan, Pixar’s resident cinematographer, gained ASC (American Society of Cinematographers) membership in 2014 based on her animation portfolio, a pretty staggering feat given how rarely observers think about cinematography in relation to animated films.

The animation studios have also received input from leading live-action cinematographers. The much-respected Roger Deakins gave Pixar animators brief advice regarding shots in the first twenty minutes of WALL-E, and has since consulted on several of Dreamworks’ films (How To Train Your Dragon, The Croods). Frequent Tarantino collaborator Robert Richardson also supervised Big Hero 6.

If this switch to animated reliance on computer science has a problem, it’s how much power it’s given to those few studios. Only they can afford continuous investment into researching and developing new software, as well as bigger budgets for ever grander 3D animated spectacles, the kind that steals audience hearts year after year. How can independents, even those staffed with really talented animators and technicians, hope to compete?

Perhaps the closest we’ve had to a CG outsider win in the Oscars’ Best Animated Feature category is George Miller’s Happy Feet, made through Australian studio Animal Logic, and that was distributed by Warner Bros. Otherwise it’s been Disney/Pixar/Dreamworks hegemony (don’t get me started on The LEGO Movie’s snub again…). 

The Irish studio Cartoon Saloon perhaps shows the way, going back to 2D basics with its sumptuous Song Of The Sea, but using digital tools to enable even more techniques. Director Tomm Moore told us that making the film as is, with moving watercolour effects and more, would have been impossible without them.

Now if all of this has a more satisfying flipside, it’s that computer animation is pretty good at exposing the shoddier efforts of those out to score a quick buck from a lucrative field. I’m talking about films like Foodfight!, basically Toy Story’s nightmarish opposite, where a supermarket comes alive at night. Throwing in celebrity voices like Charlie Sheen and Christopher Lloyd and hurling dozens of popular food brands on screen instead of doing anything interesting with them, it’s a cynical disgrace of a film whose production woes left it with horrific, blocky animation. In a perverse way, it testifies, better than any latest-and-greatest wizardry ever could, to the importance of actual, particular expertise in making animation.

Another concern is that as computing power grows, older productions which were once seen as pioneering become too dated to resonate, their pixels too obvious to retain the audience’s willing suspension of disbelief. Could The Good Dinosaur really look dated in 20 years’ time? It doesn’t seem possible, but then Toy Story was unlike anything audiences had seen up to that point, and look how far animation’s progressed since then. People in the industry often point out that human characters require some artistic twist to keep them from falling into the Uncanny Valley. And if Toy Story has aged noticeably, it’s in its human characters.

But the humans obviously aren’t the film’s attraction, by some distance. Just skip to about 21:10 of the above SIGGRAPH video, and the second Toy Story clip. Remember, this is probably the first time many in that audience had seen anything of Toy Story, and they don’t even have the full story for context.

Listen to how immediately they connect with Woody and Buzz’s bickering beyond the novelty of the animation, how genuinely they laugh at Buzz calling Woody, without irony, “a sad, strange little man”, and there’s why the film will never become dated. Toy Story didn’t coast on its novelty for immediate attention; it’s a classic because it used its innovative means to funny, tragic and unforgettable ends, alongside unrivalled voice acting and a perfect Randy Newman soundtrack. Computer animation will only continue to improve, especially with studios around the world to advance the form, but any film which follows Toy Story’s example has nothing to worry about.