In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
will defend his dissertation proposal
Real-Time Facial Performance Capture and Manipulation
Capturing and manipulating human facial performance has been a long-standing problem in computer graphics. Since human eyes are especially sensitive to facial performance and appearance, creating high-resolution 3D face models and realistic facial animations for films, VR systems, and games often involves complex setups and substantial post-editing by experienced artists. The goal of this dissertation is to achieve real-time, fine-scale facial performance acquisition and manipulation from a monocular RGB video. To this end, we first propose a system to photo-realistically transform facial expressions in real time. We capture facial expressions from monocular RGB video and then train an adapted CycleGAN model to transform the captured expression into the target expression. We also propose a real-time optimization framework that synchronizes the output lip motion with the source audio while ensuring temporal smoothness of the output sequence. This first system captures large-scale facial expressions but omits fine-scale details such as wrinkles. To capture such geometric details, we employ a shape-from-shading method to estimate the lighting, albedo, and per-vertex displacements of the face mesh, using a block coordinate descent algorithm to alternately compute each of them. To solve the underlying non-linear problem in real time, we design a data-parallel preconditioned conjugate gradient method in CUDA. We show that our method generates results on par with other state-of-the-art methods.
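The abstract's real-time solver rests on the preconditioned conjugate gradient (PCG) method. As a minimal illustration of that idea (not the author's data-parallel CUDA implementation), the sketch below shows a serial Jacobi-preconditioned CG solver in plain Python; the `matvec`, `pcg`, and test-system names are hypothetical.

```python
# Illustrative sketch only: a Jacobi-preconditioned conjugate gradient solver
# in plain Python, standing in for the data-parallel CUDA solver described in
# the abstract. All helper names here are hypothetical.

def matvec(A, x):
    """Dense matrix-vector product."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A using CG with a
    Jacobi (diagonal) preconditioner M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x (x = 0)
    m_inv = [1.0 / A[i][i] for i in range(n)]  # inverse of diagonal preconditioner
    z = [mi * ri for mi, ri in zip(m_inv, r)]  # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

In a data-parallel GPU setting, each of the vector updates and reductions above would map to a CUDA kernel, which is what makes the method attractive for real-time mesh optimization.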
Date: Thursday, November 15, 2018
Time: 1:00 PM
Place: PGH 550
Advisor: Dr. Zhigang Deng
Faculty, students, and the general public are invited.