Character Animation Using Facial Gestures

Computer Facial Animation
          For centuries, humans have been fascinated with studying and replicating facial movements and expressions. Facial expressions convey social information and play a significant role in human communication. For animators, recreating human expressions and speech patterns is a time-intensive process that involves (1) techniques to gather animation data and (2) methods to apply the recorded data to a computer-generated character. In motion capture sessions, the movements of a live actor are sampled and recorded several times per second and then projected onto an animated character; in the older technique of rotoscoping, animators instead trace live-action footage frame by frame, matching the movements and actions of the recorded footage. Although motion capture is an industry-standard animation technique, as early as 2009 developers began exploring processes to achieve real-time facial tracking and expression transfer using facial recognition software.
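          A minimal sketch of that two-step process, using hypothetical capture data and plain Python, might look like the following: motion samples recorded several times per second are resampled at the project's frame rate and copied onto the character (here simply printed rather than driving a real rig).

# A minimal sketch, not any studio's pipeline: motion-capture data is a stream of
# timestamped samples, and animation is produced by resampling that stream at the
# project's frame rate and applying the values to the character's rig.
# The capture values below are hypothetical placeholders.
import numpy as np

FPS = 24  # render frame rate of the animated character

# Hypothetical capture: head-yaw angles (degrees) recorded at irregular times (seconds).
capture_times = np.array([0.00, 0.12, 0.25, 0.41, 0.55, 0.70])
head_yaw_deg  = np.array([0.0,  2.5,  6.0,  4.0,  1.0, -3.0])

# Resample the capture onto evenly spaced animation frames.
frame_times = np.arange(0.0, capture_times[-1], 1.0 / FPS)
yaw_per_frame = np.interp(frame_times, capture_times, head_yaw_deg)

for frame, yaw in enumerate(yaw_per_frame):
    # In a real pipeline this value would drive the character's head joint;
    # here we simply print the retargeted keyframe.
    print(f"frame {frame:03d}: head yaw = {yaw:+.2f} deg")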

Facial Recognition in Multimedia Applications
          The Faces feature of iPhoto, Apple’s digital photograph organization and editing application, was first introduced in 2009 and used facial recognition to automatically sort and organize photographs into albums. The software scanned each photograph for the presence of a human face (oval head shape, facial features, skin tone), and then analyzed the unique properties of each detected face (nose, mouth, hair color or lack of hair, distance between the eyes). After analysis, iPhoto created stacks of images based on similar facial features. In 2013, Apple’s Final Cut Pro X, a professional non-linear editing application, introduced an automatic tagging feature called Find People. The software analyzes a video clip for human faces, and then tags and sorts segments of the clip into keyword collections based on shot detection (one-person, two-person, or group; close-up, medium, or wide shot). With Apple, Facebook, and Google using face detection and recognition tools to auto-tag and catalogue photographs, the melding of computer-generated animation and facial recognition software became inevitable.
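          As an illustration of the first step such tools perform, the short Python sketch below scans a hypothetical folder of photographs for faces using OpenCV’s bundled Haar cascade. It is not Apple’s implementation; the later grouping-by-person step is only indicated in a comment.

# Illustrative only: a minimal face-detection pass over a folder of photographs
# using OpenCV's bundled Haar cascade. This shows the "find the faces" step that
# precedes any per-person grouping; the photo folder path is hypothetical.
import glob
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

for path in glob.glob("photos/*.jpg"):  # hypothetical photo library folder
    image = cv2.imread(path)
    if image is None:
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, width, height) rectangle around a candidate face.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{path}: {len(faces)} face(s) detected")
    # A Faces-style tool would then compute a feature vector for each face crop
    # and cluster similar vectors into per-person stacks.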

Character Animation Using Facial Gestures
          In 2015, Adobe Creative Cloud released a beta version of Character Animator, an application that allows users to animate characters created in Photoshop or Illustrator in real time, using a webcam, microphone, keyboard, mouse, or touch screen. The software uses a specific hierarchy and naming protocol for the character’s layers (head, eyes, mouth shapes, hair) so that facial expressions captured by the user’s webcam can trigger them. By looking directly into the webcam, the user sets a rest pose, and the software places tracking dots across the user’s face. The user can then activate the record feature of Character Animator, move and rotate their head, and make different facial expressions (blinking, talking); the character in the animation program mimics the user’s movements and actions in real time. Behavior parameters such as head tilt strength, arm swing sensitivity, and walking stride length can be adjusted in the Properties panel to modify how the character reacts to webcam and keyboard commands. Character animation scenes can be exported as independent video files or imported directly into After Effects or Premiere Pro for further compositing and editing.
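          The underlying technique, real-time webcam puppeteering, can be sketched with open-source tools. The Python example below (not Adobe’s code) uses MediaPipe Face Mesh to track facial landmarks in each webcam frame and converts simple measurements, such as mouth opening and head roll, into parameters a layered 2D character could respond to; the landmark indices used are approximate.

# A rough sketch of webcam-driven puppeteering, not Character Animator's internals:
# MediaPipe Face Mesh tracks facial landmarks per frame, and simple measurements are
# turned into control parameters. Landmark indices are approximate lip/eye points.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
capture = cv2.VideoCapture(0)  # assumes the default webcam at index 0

with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            # Vertical gap between upper and lower lip -> "mouth open" amount,
            # which a puppet could map to swapping mouth-shape layers.
            mouth_open = abs(lm[13].y - lm[14].y)
            # Height difference between the two outer eye corners -> head-roll cue.
            head_roll = lm[33].y - lm[263].y
            print(f"mouth_open={mouth_open:.3f}  head_roll={head_roll:+.3f}")
        cv2.imshow("tracking preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break

capture.release()
cv2.destroyAllWindows()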

Educational Use of Character Animation
          A virtual model interface can simulate any number of human interactions with a high degree of consistency and realism. Especially in the fields of psychology, history, law, and medicine, virtual characters can be used for patient communication, interactive conversations, history taking, clinical assessments, and more. Software and hardware advances are being leveraged to promote increased interaction between humans and machines through both voice and facial recognition. Artificially intelligent tutors in virtual labs track student facial expressions during class lectures and demonstrations and provide automated feedback (NMC Horizon Report Higher Education, 2017). Devices that understand gestures, facial expressions, and brain waves are also providing greater access to education for people with disabilities. Character Animator transitioned from beta to its first stable release on October 18, 2017, and promises to be the go-to application for educators and students interested in game development, virtual learning environments, and artificial intelligence.

Written by Kate Lee, Smith College Senior Media Producer
