                              GAME DESIGN: CHARACTER ANIMATION DESIGN
                                                       Gregory Francis
                                               Department of Computing Sciences
                                           Villanova University, Villanova, PA 19085
                                            CSC 3990 – Computing Research Topics
                                                 Gregory.francis@villanova.edu



ABSTRACT

Recent advances in technology and work in artificial intelligence have led to the development of more cognitive virtual characters in today's video games. As a result, the video games released today tend to take place in very realistic and fluid virtual environments. With procedures like rotoscoping having paved the way for modern methodologies, including cognitive modeling languages, character skin animation, and finite state machines, the animation and cognition with which video game characters are endowed are improving rapidly. This paper explains the most modern methods that developers use to animate their characters in order to create the realistic environments in which gamers immerse themselves during game play.

KEY WORDS
Character animation, game design, animation design


1. INTRODUCTION

In households today, video games are a very common sight. Video games such as Bungie's Halo 3 and Ubisoft's Assassin's Creed boast intricately designed character animations and intelligence [9]. But how do these characters inherit the actions that they perform? What is the technology that supports these lifelike behaviors, movements, and actions? This paper presents a historical overview ranging from the original methodologies for character animation, such as rotoscoping, through to the current techniques. Particular emphasis is placed on the technology now used by the programmers who implement today's popular and highly graphical games, including approaches such as motion capture design and cognitive modeling languages.


2. BACKGROUND/PAST HISTORY

Many now-antiquated approaches to character animation were not first seen in video games, since they pre-date the invention of computers. However, these older and highly manual techniques, first introduced by Walt Disney and other early animated filmmakers, set the stage for character animation as we know it with motion pictures like Snow White and the Seven Dwarfs. More recently, companies such as Pixar have further developed the technology used for animation with animated features such as Toy Story. Each of these very different movies used a form of motion capture to aid in the animation of their onscreen characters.

When Walt Disney Studios released Snow White in 1937, it used what is known today as rotoscoping to give actions to the characters in the motion picture. The first rotoscope was invented by Max Fleischer in 1915. "The device projected live-action film, a frame at a time, onto a light table, allowing cartoonists to trace the frame's image onto paper" [1]. Rotoscoping is still used in the entertainment field today to copy realistic motion from actor to cartoon; it is, in fact, the ancestor of motion capture.

In its day, rotoscoping was cutting-edge motion capture design, but as technology progressed, more advanced motion capture systems were designed. One such system was Op-Eye, which relied on blinking LEDs and a series of cameras. "They wired a body suit with the LEDs [...] Two cameras with special photo detectors returned the 2-D position of each LED in their fields of view. The computer then used the position information from the two cameras to obtain a 3-D world coordinate for each LED" [3]. Op-Eye was used in the Graphical Marionette project, a joint effort by MIT and the New York Institute of Technology to track the movements of the human body using motion capture. This project, conducted from 1982 to 1983, helped to bring about the advanced motion capture systems we know today.

Another early method of motion capture was used by Tom Calvert of Simon Fraser University from 1980 to 1983. He attached potentiometers at strategic points on the body in order to track human motion for a study in his biomechanics lab. "To track knee flexion, for instance, they strapped a sort of exoskeleton to each leg, positioning a potentiometer alongside each knee so as to bend in concert with the knee. The analog output was then converted to a digital form and fed to the computer animation system" [3].

In 1992, Brad deGraf introduced a different and much-improved type of motion capture. He called his system Alive!, and it eventually had the capability to track the motion of an actor's head, torso, feet, and hands [3]. This system helped to create the type of motion capture that aided in the creation of Toy Story. By the time Toy Story was completed in 1995, character animation had evolved substantially since rotoscoping; the film used a method of motion capture very similar to what we know today. Up until then, the human body had never really been used to drive the animation of characters in movies, let alone video games. For this reason, Toy Story was truly a groundbreaking movie [9].

3. CURRENT METHODOLOGIES

In today's gaming world, programmers use cutting-edge methods to animate virtual characters. These methods include a more advanced version of motion capture, character skins, cognitive modeling languages, and finite state machines.

3.1. MOTION CAPTURE DESIGN

One method that has been used consistently since the early days of character animation is "motion capture." Motion capture first involves the portrayal of deliberate actions by a human actor. These portrayed actions are captured by a system such that they may be translated to on-screen animation. Modern systems need a series of cameras as well as an actor outfitted in a suit designed for motion capture. The suit is equipped with either blinking LEDs or reflective balls; in either case, the camera system detects their positions as the actor moves.

By using this method to aid in the design of a character's motions and behaviors, programmers can create animations that maintain a very high degree of realism. The motions portrayed by a human are captured and translated to the virtual character, using a system very similar to deGraf's Alive! system. Sequences of motion capture clips may then be linked together and characterized as high-level behaviors [2]. Figure 1 shows an example of an actor's motion being portrayed in a virtual character.

Figure 1: Motion capture example from The Polar Express [11]
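
As a rough illustration of the camera-based tracking described above (and of the two-camera Op-Eye setup quoted in Section 2), the sketch below triangulates a single marker's 3-D position from its 2-D projections in two calibrated cameras. The projection matrices and marker position are made-up values chosen only for illustration; real capture systems use many cameras and proprietary calibration.

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one marker seen by two cameras.

        P1, P2  : 3x4 camera projection matrices (calibration assumed known).
        uv1, uv2: 2-D image coordinates of the same marker in each camera.
        Returns the estimated 3-D world position of the marker.
        """
        u1, v1 = uv1
        u2, v2 = uv2
        # Each view contributes two linear constraints on the homogeneous point X.
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # Solve A X = 0 by SVD; the answer is the right singular vector
        # belonging to the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                      # de-homogenize

    # Illustrative setup: two cameras looking at the scene from different sides.
    P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])     # camera 1
    R2 = np.array([[0.0, 0.0, 1.0],                                   # camera 2, rotated 90 degrees
                   [0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0]])
    P2 = np.hstack([R2, np.array([[0.0], [0.0], [5.0]])])

    marker = np.array([0.3, 1.1, 0.7])                                # "true" marker position

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    print(triangulate(P1, P2, project(P1, marker), project(P2, marker)))
    # -> approximately [0.3, 1.1, 0.7]
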

3.1.1. CAPTURING THE NUANCES OF HUMAN ACTION

Motion capture has other advantages that tend to be overlooked. While programmers have the ability to animate characters with complicated actions, including evasive maneuvers in certain situations, they can also portray many simple actions. For instance, a character may be designed with subtle motions that indicate some sort of emotion: gamers often find their characters shrugging to indicate ambivalence, and while this may seem insignificant, it is these minor actions that truly make a virtual character seem real. In his book, Michael Kipp argues that the simple actions humans perform on a daily basis are the ones most often left out, and that without these actions a character cannot be realistic. He separates the actions of humans, and ultimately of virtual characters, into six classes [9].

3.1.2. CLASSES OF ANIMATION

The six classes that Kipp mentions are adaptor, iconic, emblem, deictic, metaphoric, and beat. Each class has its own specific actions that fall under it. For instance, the adaptor class involves the touching of objects or of one's self, such as scratching one's cheek during a normal conversation [9].
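
One plausible way to make this classification concrete in a game engine is to tag each captured clip with the gesture class it belongs to, so that behavior code can later pick, say, an adaptor gesture during idle conversation. The enum and clip structure below are an illustrative sketch, not part of Kipp's system or of any particular engine; only the adaptor description comes from the paper, the other one-line comments follow common usage in the gesture-research literature.

    from dataclasses import dataclass
    from enum import Enum, auto

    class GestureClass(Enum):
        """Kipp's six gesture classes [9]."""
        ADAPTOR = auto()      # touching objects or one's self (e.g., scratching a cheek)
        ICONIC = auto()       # depicts a concrete object or action
        EMBLEM = auto()       # conventional sign with an agreed meaning
        DEICTIC = auto()      # pointing
        METAPHORIC = auto()   # depicts an abstract idea
        BEAT = auto()         # rhythmic movement accompanying speech

    @dataclass
    class MocapClip:
        name: str
        gesture_class: GestureClass
        duration_s: float

    # Hypothetical clip library; names and durations are made up.
    library = [
        MocapClip("shrug_01", GestureClass.EMBLEM, 1.2),
        MocapClip("scratch_cheek_01", GestureClass.ADAPTOR, 1.8),
        MocapClip("point_ahead_01", GestureClass.DEICTIC, 0.9),
    ]

    def clips_of(cls: GestureClass):
        return [c for c in library if c.gesture_class == cls]

    print([c.name for c in clips_of(GestureClass.ADAPTOR)])   # -> ['scratch_cheek_01']
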

Once these classes of animation have been captured, designers need characters to perform them; character skins aid in the degree of realism conveyed by characters.

3.2. CHARACTER SKINS

Though designers may be able to capture motions with a strikingly high degree of realism, there must be a realistic character to perform them. While many may not realize it, a great deal of detail is required for a character to seem human, and this includes the animation of tiny nuances like the skin of a virtual character. Character skins, or meshes, are typically laid over a hierarchical skeleton. Designers then implement the actions a character may perform during game play using motion compensation and deformation; both are applied to the skeleton beneath the character mesh so that as the skeleton moves, so too does the character skin. In the interactive systems of today's gaming industry, computational methods tend to dominate the means by which motion is derived [6].

3.2.1. LINEAR SKIN BLENDING

Many character skins today are implemented using linear skin blending. Though this method is efficient and advantageous for designers, it also has its pitfalls. The most prominent downfall of linear skin blending is the twisting and stretching of limbs. Figure 2 shows an example of limbs twisting and stretching at key areas in the character skin [6].

Figure 2: Example of the stretching and twisting of limbs [6]
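
For concreteness, linear skin blending (often called linear blend skinning or skeletal subspace deformation) computes each deformed vertex as a weighted sum of that vertex transformed by every bone that influences it: v' = sum_j w_j * M_j * v. The tiny two-bone "arm" below is an illustrative sketch with made-up weights and transforms, not code from [6].

    import numpy as np

    def rot_z(theta):
        """4x4 homogeneous rotation about the z-axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0],
                         [s,  c, 0, 0],
                         [0,  0, 1, 0],
                         [0,  0, 0, 1]])

    def translate(x, y, z):
        T = np.eye(4)
        T[:3, 3] = [x, y, z]
        return T

    def linear_blend_skinning(vertices, weights, bone_matrices):
        """v' = sum_j w_j * M_j * v for each vertex (the core of linear skin blending)."""
        out = []
        for v, w in zip(vertices, weights):
            vh = np.append(v, 1.0)                       # homogeneous vertex
            blended = sum(wj * (Mj @ vh) for wj, Mj in zip(w, bone_matrices))
            out.append(blended[:3])
        return np.array(out)

    # A two-bone chain: bone 0 is fixed, bone 1 is an elbow at x = 1 bent by 90 degrees.
    # Each M_j maps a rest-pose vertex into the posed position of bone j
    # (conceptually, M_j = posed_transform_j @ inverse(rest_transform_j)).
    elbow = translate(1, 0, 0) @ rot_z(np.pi / 2) @ translate(-1, 0, 0)
    bone_matrices = [np.eye(4), elbow]

    # Vertices along the arm and their per-bone weights (made-up values).
    vertices = np.array([[0.5, 0.1, 0.0],    # upper arm: follows bone 0
                         [1.0, 0.1, 0.0],    # elbow: split between the bones
                         [1.5, 0.1, 0.0]])   # forearm: follows bone 1
    weights = np.array([[1.0, 0.0],
                        [0.5, 0.5],
                        [0.0, 1.0]])

    print(linear_blend_skinning(vertices, weights, bone_matrices))

Note how the blended elbow vertex is pulled toward the inside of the joint: this collapse of the skin near hinge joints is exactly the artifact that the extension in the next section targets.
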


3.2.2. EXTENSION OF LINEAR SKIN BLENDING

Alex Mohr and Michael Gleicher have introduced an extension to linear skin blending that fixes its original pitfalls while maintaining efficiency. Along with remaining efficient, the method allows for the depiction of more detailed skins, including the bulging of characters' muscles. Mohr and Gleicher show that skin deformations in virtual characters tend to collapse around hinge joints such as the wrist, knee, and elbow. While this collapse is unavoidable with the original skin blending, their extension prevents it by simply adding another joint: extra joints are inserted at the locations where plain linear skin blending fails [6]. Figure 3 shows a desired design of a character skin, the pitfall of twisting limbs that comes with linear skin blending, and the manner in which this is fixed by the extension of linear skin blending [6].

Figure 3: Top row: desired outcome. Middle row: outcome via linear skin blending. Bottom row: outcome via extended linear skin blending, with the addition of one hinge joint to prevent limb twisting. [6]
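
The idea of inserting an extra joint can be sketched as follows: rather than blending only between a bone's untwisted and fully twisted transforms, a helper joint carries half of the rotation, so vertices near the joint are weighted toward an intermediate pose instead of averaging two extreme ones. The weights below are invented, and this is only an illustration of the general idea, not Mohr and Gleicher's actual joint-placement construction.

    import numpy as np

    def rot_x(theta):
        """4x4 homogeneous rotation about the x-axis (the limb's own axis here)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[1, 0,  0, 0],
                         [0, c, -s, 0],
                         [0, s,  c, 0],
                         [0, 0,  0, 1]])

    def skin_vertex(v, weights, matrices):
        """Linear blend skinning of a single vertex: v' = sum_j w_j * M_j * v."""
        vh = np.append(v, 1.0)
        return sum(w * (M @ vh) for w, M in zip(weights, matrices))[:3]

    v = np.array([1.0, 0.1, 0.0])      # skin vertex sitting 0.1 away from the bone axis
    full_twist = rot_x(np.pi)          # forearm twisted 180 degrees about its own axis
    half_twist = rot_x(np.pi / 2)      # inserted helper joint carrying half the twist

    # Plain linear skin blending: averaging the untwisted and fully twisted poses
    # collapses the vertex onto the bone axis (the limb twisting shown in Figure 3).
    print(skin_vertex(v, [0.5, 0.5], [np.eye(4), full_twist]))          # -> [1., 0., 0.]

    # Extended blending (sketch): some weight moves onto the half-twist helper joint,
    # so the vertex keeps some distance from the axis instead of collapsing.
    print(skin_vertex(v, [0.25, 0.5, 0.25], [np.eye(4), half_twist, full_twist]))
    # -> [1., 0., 0.05]
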
3.3. FINITE STATE MACHINES

After an actor's motions have been captured, designers need a way to store them so that a character can later search through them when determining which action to perform in a particular situation. This can be done using what is known as a finite state machine, or FSM. Once a character's motions have been captured and organized into high-level behaviors, these behaviors are linked with one another in the FSM [2]. Figure 4 shows an example of a simple FSM [2].

Figure 4: A simple FSM [2]
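
A minimal way to picture such an FSM is as a set of behavior states, each backed by one or more motion clips, plus the allowed transitions between them. The states and transitions below are invented for illustration; they are not the FSM of Figure 4.

    # Hypothetical behavior FSM: each state is a high-level behavior built from
    # motion capture clips, and each edge lists which behaviors may follow it.
    MOTION_FSM = {
        "idle":   ["walk", "crouch"],
        "walk":   ["idle", "run", "crouch"],
        "run":    ["walk", "jump"],
        "jump":   ["run", "walk"],
        "crouch": ["idle", "walk"],
    }

    def can_transition(current: str, nxt: str) -> bool:
        """A transition is legal only if the FSM lists it."""
        return nxt in MOTION_FSM.get(current, [])

    print(can_transition("idle", "walk"))   # True
    print(can_transition("idle", "jump"))   # False: must pass through walk and run
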
3.3.1. SEARCHING THROUGH THE FSM

Using search algorithms, designers give the character the ability to search through the FSM and plan a sequence of events and motions designed to meet a user-defined goal. In today's gaming industry, move-trees are used to generate these sequences. Move-trees are designed on a purely reactive basis, so that characters choose their actions based only on the current environment [2]. While it may seem that a specific FSM is required to animate each character in an environment, certain FSMs may be applied to a range of characters.
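
Building on the hypothetical FSM above, planning a sequence of behaviors toward a user-defined goal can be sketched as a simple graph search. A breadth-first search is used here purely for illustration; production move-trees and the planner described in [2] are more sophisticated.

    from collections import deque

    MOTION_FSM = {
        "idle":   ["walk", "crouch"],
        "walk":   ["idle", "run", "crouch"],
        "run":    ["walk", "jump"],
        "jump":   ["run", "walk"],
        "crouch": ["idle", "walk"],
    }

    def plan_behaviors(start: str, goal: str):
        """Breadth-first search for the shortest legal chain of behaviors."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in MOTION_FSM[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None   # the goal behavior is unreachable from the start state

    print(plan_behaviors("idle", "jump"))   # -> ['idle', 'walk', 'run', 'jump']
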
3.3.2. APPLICABILITY OF THE FSM

One of the major advantages of using an FSM to aid in the animation of a character in a gaming environment is that, in certain situations, one FSM can be used to animate many characters. This holds true even for characters that are not of the same species. For instance, Manfred Lau and James Kuffner used the same FSM to animate both a skateboarder and a horse placed in dynamic environments, as shown in Figure 5 [2].

Figure 5: Both the skateboarder and the horse are animated using the same FSM, even though they are different character species in different environments. [2]

The applicability of FSMs also aids the efficiency of developing character animation. For example, one FSM can be applied to a group of characters that are all designed to perform similar actions. This can be seen in many situations in games today. Take, for example, a first-person shooter in which groups of enemies must attack and eliminate an opponent: they may each be designed using the same FSM, so that they follow the same actions in order to seek out and destroy a certain target. Figure 6 shows a group of one hundred characters animated using a single FSM, along with a possible design for that FSM [2].

Figure 6: Top image depicts one hundred characters animated with the same FSM. Bottom image shows a possible design of that FSM. [2]

3.3.3. USING A GLOBAL APPROACH

Manfred Lau and James Kuffner introduce a new way to search through an FSM that increases its applicability as well as the apparent intelligence of the character. Their approach creates a search tree of motion states within the FSM and then decides among a group of possible sequences based on global characteristics before settling on the final motion. While avoiding a great many pitfalls, this approach also increases the applicability of the FSM; for example, an FSM designed for a skateboarder can also be used to design the motion of a horse. The algorithm may also be interrupted at any time to return a best-fit motion series if constraints call for it. The motions that are generated, however, may need to be smoothed afterwards.
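
The "interrupt at any time and return a best-fit sequence" property can be illustrated with a small anytime search that tracks the best plan found so far and stops when its budget runs out. The progress scores and the node budget are invented for illustration; this is not the actual planner of Lau and Kuffner [2].

    import heapq

    # Reuse the hypothetical behavior FSM from the earlier sketches.
    MOTION_FSM = {
        "idle":   ["walk", "crouch"],
        "walk":   ["idle", "run", "crouch"],
        "run":    ["walk", "jump"],
        "jump":   ["run", "walk"],
        "crouch": ["idle", "walk"],
    }

    # Invented "global" score: how much each behavior advances the character
    # toward some goal (e.g., metres covered toward a target position).
    PROGRESS = {"idle": 0.0, "crouch": 0.1, "walk": 1.0, "run": 2.5, "jump": 1.5}

    def anytime_plan(start: str, horizon: int, node_budget: int):
        """Best-first search over behavior sequences that can stop at any time.

        Returns the best sequence found so far when the budget is exhausted,
        ranked by total progress (higher is better).
        """
        best_seq, best_score = [start], 0.0
        frontier = [(-0.0, [start])]          # max-heap via negated scores
        expanded = 0
        while frontier and expanded < node_budget:
            neg_score, seq = heapq.heappop(frontier)
            score = -neg_score
            expanded += 1
            if score > best_score:            # remember the best-so-far plan
                best_seq, best_score = seq, score
            if len(seq) >= horizon:
                continue
            for nxt in MOTION_FSM[seq[-1]]:
                heapq.heappush(frontier, (-(score + PROGRESS[nxt]), seq + [nxt]))
        return best_seq, best_score

    # A generous budget finds a long, fast sequence; a tiny budget still
    # returns a usable (if worse) plan, which is the point of an anytime search.
    print(anytime_plan("idle", horizon=5, node_budget=200))
    print(anytime_plan("idle", horizon=5, node_budget=3))
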

3.4. COGNITIVE MODELING LANGUAGES

Animation designers are constantly looking for ways to make their characters more intelligent. One way to do this is to give them the ability to learn on their own, which can be done by means of cognitive modeling. By using cognitive modeling, programmers give their characters the ability to learn and plan their own actions. The method involves the animated character learning as it progresses: it stores information, remembering certain scenarios and how the knowledge was obtained, so that the knowledge can later be used to create its own actions in a new situation. Cognitive modeling may be broken down into the subcategories of domain knowledge specification and character direction [10].

3.4.1. DOMAIN KNOWLEDGE SPECIFICATION AND CHARACTER DIRECTION

Domain knowledge specification involves the character's understanding of its surroundings as a global aspect, along with the means by which those surroundings are subject to change. The aspects of a character's global computer-generated domain that are subject to change are known as "fluents." Character direction, on the other hand, involves the planning and choosing of a sequence of events for the character to perform in certain situations in order to obtain a user-specified end. For this purpose, a high-level programming language called the Cognitive Modeling Language, or CML, was developed [10].

3.4.2. ANIMATING CHARACTERS BASED ON "FLUENTS"

CML is based on the idea that "knowledge + directives = intelligent behavior." Since characters are armed with a number of actions, both primitive and complex, CML gives them the ability to choose between both types of actions based on the "fluents" of the global computer-generated domain. When an action is performed and a "fluent" changes, CML allows that fluent to take on a new specified value. This can be seen in the following CML pseudo code [10]:

    occurrence drop(x) results in !Holding(x);
    occurrence pickup(x) results in Holding(x);

With CML, designers have the ability to give intelligence to characters, as well as to cameras, in order to implement a more fluid and cognitive environment. Ultimately, when it comes to designing character animations, a cognitive model is deliberative, unlike a behavioral model, which is reactive. With the help of the high-level language CML, characters can analyze the preconditions and consequences of actions, allowing for a more realistic computer-generated world of game play. The following CML pseudo code determines how long it has been since the character last spoke [10]:

    occurrence tick results in silenceCount = n - 1
        when silenceCount = n && !Talking(A,B);
    occurrence stopTalk(A,B) results in silenceCount = ka;
    occurrence setCount results in silenceCount = ka;
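
For readers more comfortable with a general-purpose language, the effect axioms above can be mirrored as a tiny state-update sketch in which each action names the fluents it changes and the new values they take. This is only an illustration of the fluent idea, not an implementation of CML [10]; in particular, the silence counter here counts upward for readability, unlike the pseudo code above.

    # A toy world state made of fluents: facts that actions are allowed to change.
    state = {"Holding": set(), "silenceCount": 0, "Talking": False}

    def pickup(x):
        """occurrence pickup(x) results in Holding(x);"""
        state["Holding"].add(x)

    def drop(x):
        """occurrence drop(x) results in !Holding(x);"""
        state["Holding"].discard(x)

    def tick():
        """One simulation step: track how long it has been since anyone spoke."""
        if not state["Talking"]:
            state["silenceCount"] += 1    # counting upward here for readability

    pickup("sword")
    for _ in range(3):
        tick()
    drop("sword")
    print(state)   # {'Holding': set(), 'silenceCount': 3, 'Talking': False}
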
4. PROPOSED WORK

No matter how advanced our technology becomes, in any field there is always room to improve. This holds true for character animation design as well.

4.1. COMBINING THEM ALL

With the introduction, use, and advancement of CML, FSMs, and motion capture, developers now have the means and tools with which to endow their characters with intelligent actions and animations. But what if a developer were to combine these techniques in order to animate their characters? To do this, it would only be necessary to integrate CML into the process: since an FSM is already a combination of high-level behaviors designed and captured with the help of motion capture, it would be redundant to attempt to combine those two. Thus, the only combination necessary is between the FSM and CML.

4.1.1. INTEGRATING CML

Though integrating CML into an FSM may be cumbersome, the process could potentially bring about an abundance of improvements, as sketched below. Not only would characters have the realistic actions given to them by motion capture design, they would also have the intelligence to learn their own actions based upon the abilities that they are endowed with via CML. This would allow for a more realistic and fluid environment.
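
One way the proposed integration might look is to let a deliberative, CML-style layer pick the goal behavior from the current fluents, while the FSM from Section 3.3 still constrains which behaviors can legally follow one another. Everything here, the fluents, the goal rule, and the FSM itself, is invented for illustration and is only a sketch of the proposal.

    from collections import deque

    MOTION_FSM = {
        "idle":   ["walk", "crouch"],
        "walk":   ["idle", "run", "crouch"],
        "run":    ["walk", "jump"],
        "jump":   ["run", "walk"],
        "crouch": ["idle", "walk"],
    }

    def shortest_behavior_path(start, goal):
        """Plan through the FSM (same breadth-first idea as in Section 3.3.1)."""
        frontier, visited = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in MOTION_FSM[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def choose_goal(fluents):
        """Deliberative (CML-style) layer: map the current fluents to a goal behavior."""
        if fluents["UnderFire"]:
            return "crouch"            # take cover
        if fluents["EnemyVisible"]:
            return "run"               # close the distance
        return "idle"

    fluents = {"UnderFire": False, "EnemyVisible": True}
    goal = choose_goal(fluents)
    print(goal, shortest_behavior_path("idle", goal))   # run ['idle', 'walk', 'run']
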
4.1.2. LET THEM SEARCH ON THEIR OWN

Since characters designed using CML have the ability to learn actions on their own, why not allow them to search through the FSM independently as well? For characters designed with CML, the ability to learn is endowed to them by their developers. After being placed in an environment filled with "fluents," they must decide what to do in specific scenarios. Given that these characters can learn on their own, it seems feasible for them to search through an FSM on their own as well. Take, for example, a character whose sole purpose is to hide: the actions within the FSM may be to run in various directions or to crouch behind objects and take cover. When the situation calls for it, the character could search through the FSM independently and decide on its own what to do.

With the ability to search through an FSM independently, coupled with the intelligence to learn via CML, characters would seem more realistic. This would ultimately bring developers one step closer to what seems to be the end goal of creating a human-like virtual being. Unfortunately, these characters are still virtual, and oftentimes they do not make the smartest of decisions [2][10].

4.2. MISTAKE RECOGNITION

The virtual evolution of characters brings them ever closer to the goal of fully human-like behavior. However, there is still one ability that these entities lack: the ability to reflect upon a decision, change it, and improve it. That is not to say this is impossible. Using an FSM already enables developers to interrupt the search algorithm in order to output a best-fit motion sequence. This suggests interrupting a virtual character's action when it gets stuck in a loop known to gamers as a "glitch."

A glitch occurs when a virtual personification encounters an internal problem and gets "stuck" performing some miscellaneous action. Since an FSM search can be interrupted in order to produce a best-fit animation, it is plausible that if a "glitch" occurs, the animation could be stopped entirely. Upon stopping the animation, developers could tell the character to restart its entire process: reanalyze its surroundings, including the "fluents," restart the search through the FSM, and finally restart its course of action, as sketched below. With the ability to reflect upon and correct decisions, virtual characters again step towards the realm of reality [2][10].
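
A simple way to picture such mistake recognition is a watchdog that notices when the character keeps revisiting the same state without making progress and then forces a full restart of the sense-plan-act cycle. The loop-detection rule (the same state seen several times within a short window) is an invented heuristic, not a technique from [2] or [10].

    from collections import Counter, deque

    class GlitchWatchdog:
        """Flags a character that keeps revisiting one state without progressing."""

        def __init__(self, window: int = 10, max_repeats: int = 4):
            self.recent = deque(maxlen=window)   # sliding window of recent states
            self.max_repeats = max_repeats

        def observe(self, state: str) -> bool:
            """Record the current behavior state; return True if a glitch is suspected."""
            self.recent.append(state)
            most_common = Counter(self.recent).most_common(1)[0][1]
            return most_common >= self.max_repeats

    def restart_character(name: str):
        """Placeholder for: reanalyze fluents, re-search the FSM, restart acting."""
        print(f"{name}: glitch detected, replanning from scratch")

    watchdog = GlitchWatchdog()
    # A character bouncing back into "crouch" over and over (a stuck loop).
    for state in ["walk", "crouch", "walk", "crouch", "crouch", "crouch"]:
        if watchdog.observe(state):
            restart_character("guard_07")
            break
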
4.3. QUALIFICATIONS

With the experience I have gained from programming classes such as Computer Systems and Algorithms and Data Structures, a solid groundwork has been laid that would allow me to step up to learning something as complex as a CML. Coupled with an understanding of tree structures from extensive use of Java, I am able to grasp the idea of an FSM, seeing as in its simplest form it is a combination of loops in a tree-like structure. Though learning this type of programming would be a big step, I believe that the foundation laid by my past programming classes qualifies me to complete such a project.

4.4. TIMELINE

If I were able to conduct a project with the aim of completing the work I have proposed, I believe it would take about two months. Attempting to implement CML would be a time-consuming process and in itself would probably take anywhere from two weeks to one month. Once this has been done, at least another two weeks would be necessary to allow characters to search through their own FSMs without being told to do so. In the remaining few weeks, the final step of allowing virtual characters to fix their mistakes would be cumbersome; this process would take anywhere from two weeks to another month to accomplish. Overall, the timeline for such a project would range from two to three months total.

5. CONCLUSION

Although character animation continues to gain realism, there are limitations and restrictions on a virtual personification's cognition and animation. While we as humans possess the ability to learn and consistently increase our cognition seemingly without bound, a virtual character may only advance insofar as its creators endow it. The applications of character animation are many, ranging from the video gaming industry to the analysis of human motion for biological and biomechanical studies. Though some dismiss the advancement of character animation as seemingly useless to science, character animation can in fact teach us more about the way that we move. This can clearly be seen in the studies of Tom Calvert, with his use of potentiometers in the biomechanics lab of Simon Fraser University. The advantages of using character animation design to simulate human motion are truly beneficial to humankind: with the use of animation design we are able to figure out ways to improve ourselves physically. The subject also pertains to the military, where lifelike virtual characters allow us to simulate combat situations without putting humans in harm's way. Ultimately, with the advancement of finite state machines, character skins, cognitive modeling languages, and motion capture, we have the ability to benefit and improve our own well-being.

REFERENCES

[1] Alberto Menache. Understanding Motion Capture for Computer Animation and Video Games. Morgan Kaufmann Publishers, San Francisco, CA, 1995.

[2] Manfred Lau, James J. Kuffner. "Behavior Planning for Character Animation." Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2005, pages 1-6.

[3] David J. Sturman. "Character Motion Systems." SIGGRAPH 94, Course 9, 1999.

[4] John Lasseter. "Principles of Traditional Animation Applied to 3D Computer Animation." Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87), 1987, pages 35-44.

[5] N. Burtnyk, M. Wein. "Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation." Communications of the ACM, 19(10), pages 564-569, October 1976.

[6] Alex Mohr, Michael Gleicher. "Building Efficient, Accurate Character Skins from Examples." ACM Transactions on Graphics, 22(3) (Proceedings of ACM SIGGRAPH 2003), July 2003, pages 562-568.

[7] Katherine Pullen, Christoph Bregler. "Motion Capture Assisted Animation: Texturing and Synthesis." ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2002), 2002, page 93.

[8] Thomas Moeslund, Erik Granum. "A Survey of Computer Vision-Based Human Motion Capture." Computer Vision and Image Understanding, 81(3), 2001, pages 231-268.

[9] Michael Kipp. Gesture Generation by Imitation: From Human Behavior to Computer Character Animation. Dissertation.com, Boca Raton, Florida, 2005.

[10] John Funge, Xiaoyuan Tu, Demetri Terzopoulos. "Cognitive Modeling: Knowledge, Reasoning and Planning for Intelligent Characters." Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 99), 1999, pages 29-38.

[11] Rob McKaughan. "Motion Capture: Acting with Balls." February 22, 2006. Last accessed December 9, 2009. Image of motion capture used for The Polar Express. http://www.artisticwhim.com/blog/archives/2006/02/motion_capture_acting_with_bal.html

				