Cowe G A, Johnston A, 2002, "Analysing and imitating facial movement" Perception 31 ECVP Abstract Supplement
Analysing and imitating facial movement
G A Cowe, A Johnston
Faces are dynamic channels of communication, and it is therefore important to understand how this information is encoded and represented. Computer-generated faces, or avatars, are becoming increasingly sophisticated, but they remain visually unrealistic and their control remains problematic. Previous work has implemented complex three-dimensional polygonal models, often generated from laser scans, with intricate hard-coded muscle models for actuation of speech and expression. Driving the avatar through mimicry involves tracking a real actor's facial movements, usually through markers physically attached to the face or by locating natural feature boundaries. Here we show that principal components analysis of the dense optic-flow fields generated by facial movement delivers interpretable component movements of the face. This allows the comparison of natural varieties of movement with standard facial action coding systems such as FACS [Ekman and Friesen, 1978 Manual for the Facial Action Coding System (Palo Alto, CA: Consulting Psychologists Press)]. Optic flow is calculated using a biologically motivated optic-flow algorithm (Johnston et al, 1999 Proceedings of the Royal Society of London, Series B 266 509 - 518). Principal components are visualised by warping static images of the face. This fully automated technique yields a virtual avatar onto which the movements of an actor can automatically be projected for convincing performance-driven animation, with no need for markers.
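The core idea — treating each frame's dense optic-flow field as one long vector and extracting component movements with principal components analysis — can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' implementation: the flow fields here are synthetic stand-ins for flows that would come from an optic-flow algorithm such as the biologically motivated model cited in the abstract, and the frame size and number of planted "component movements" are arbitrary.

```python
import numpy as np

# Hypothetical setup: a short face video of n_frames frames, each with a
# dense (dx, dy) flow field over an h-by-w pixel grid, flattened to a vector.
rng = np.random.default_rng(0)
n_frames, h, w = 50, 32, 32

# Plant two synthetic "component movements" (stand-ins for, e.g., a mouth
# opening and a brow raise); each frame's flow is a weighted mix plus noise.
basis = rng.standard_normal((2, 2 * h * w))
weights = rng.standard_normal((n_frames, 2))
flows = weights @ basis + 0.01 * rng.standard_normal((n_frames, 2 * h * w))

# PCA via SVD of the mean-centred frames-by-flow matrix.
mean_flow = flows.mean(axis=0)
U, S, Vt = np.linalg.svd(flows - mean_flow, full_matrices=False)
components = Vt        # rows: principal flow fields (component movements)
scores = U * S         # per-frame loadings on each component movement

# Fraction of variance explained: the two planted movements should dominate,
# mirroring how interpretable facial movements emerge from real flow data.
var = S**2 / np.sum(S**2)
print(var[:3])
```

In the abstract's pipeline, each recovered component flow field would then be visualised by warping a static face image along it, and an actor's performance re-expressed as a trajectory of per-frame loadings.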