Lifelike means convincing the audience that an animated character has intelligence, personality, and emotion while inhabiting a physical world.
Toy Story, the first full-length computer-generated animated feature film (released 1995), established itself as a visual benchmark for computer graphics hardware and software development. Soon after the film’s debut, graphics chip makers wanted to know how they could compute Toy-Story-quality imagery on a PC; game developers wanted to know how they could deliver Toy-Story-quality animation on game consoles; and robotics researchers wanted to know how they could build artificial intelligence into their machines to achieve Toy-Story-quality lifelike characters.
As we at Pixar tried to answer these questions, we also sought to create ever more complex scenes, more wondrous images, and more fluid characters. For A Bug’s Life (released 1998), we extended our lighting and shading methodology to depict the transparency and backlighting of an insect world. We developed new methods for modeling and animating large crowds of characters. And we embraced subdivision surfaces to build more flexible and organic characters.
Toy Story 2 (released November 1999) leveraged these developments, depicting the Toy Story world with far more detailed sets, visually richer texturing, and more sophisticated design and animation of human characters.
But any claim that the answers to these questions lie simply in more processing power, bandwidth, and memory obscures a more interesting truth. That’s why we focus here on how, and why, Pixar animators have made Buzz, Woody, Flik, and many other characters so lifelike.