Dissertation Proposal - University of Houston

Dissertation Proposal

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Yuhang Wu

will defend his dissertation proposal

Robust 3D Model Registration Based Compact Face Template Learning for Pose-Invariant Face Recognition


Abstract

In automatic face recognition systems, a human face is represented as an N-dimensional feature vector. This feature vector is called the 'face template'. When comparing facial images or videos, face templates are first extracted, and then the similarities among the templates are measured. The system outputs the identity of a face based on these similarities. Modern face recognition problems require identifying an individual from millions of candidate images, which imposes high requirements on the quality of the face template. A face template needs to be: (i) discriminative enough to distinguish between two subjects drawn from a large candidate pool and (ii) short enough for fast face retrieval. Intrinsically, this requires a face recognition system to filter out information that is irrelevant to identity and to compress the identity-relevant information into as compact a face template as possible. Most existing works generate face templates by learning discriminative mappings (e.g., convolutional neural networks) directly from raw facial images. These solutions achieve high face verification rates when the head poses in the facial images are near frontal; however, this is not the case when the head poses are far from frontal. In addition, the length of the template has rarely been taken into account during template generation, and it is hard to compare millions of templates in minutes with most existing solutions. To improve the performance of face recognition systems under large pose variations, a 3D facial model can be employed to frontalize the facial textures before template generation. This geometric transformation maps the facial texture onto a canonical plane, which reduces the identity-irrelevant variations in face templates that are detrimental to face identification.
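The matching step described above (extract templates, measure similarities, report the most similar identity) can be sketched as follows. This is a minimal illustrative example, not the proposal's actual matcher; the function name `identify` and the use of cosine similarity over L2-normalized vectors are assumptions for illustration.

```python
import numpy as np

def identify(probe, gallery):
    """Return (index, similarity) of the gallery template closest to the probe.

    probe: (d,) feature vector extracted from a query face.
    gallery: (n, d) matrix of enrolled face templates.
    Both are L2-normalized so the dot product equals cosine similarity.
    Hypothetical helper illustrating template-based identification.
    """
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = gallery @ probe          # one cosine score per gallery entry
    best = int(np.argmax(similarities))     # most similar enrolled identity
    return best, float(similarities[best])
```

Because identification scans the whole gallery, the cost of this dot-product comparison grows with both the gallery size n and the template length d, which is why the abstract argues for short templates.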
However, one prerequisite of face frontalization is that the 3D model must be accurately registered to the 2D image. Because model registration itself is sensitive to head pose variations and external occlusions, this requirement is not trivial to satisfy. On the other hand, to reduce the dimensionality of the face template, a trade-off between accuracy and compression rate needs to be made. However, it is still unclear how compact a face template can be, given a certain amount of training data and a pre-determined requirement on face identification accuracy. To tackle these challenges, the main contributions of this research are as follows: the author significantly improved the accuracy and robustness of 3D-2D model registration under large head pose variations, and thereby achieved statistically significant improvements in the accuracy of face identification. The author will also focus on reducing the dimensionality of the face template while preserving its discriminative capability. These two contributions will have a large impact on existing 3D-aided face recognition systems and will contribute to achieving high-accuracy, high-speed face matching. Concretely, the goal of this research is to achieve statistically significant improvements in the performance of face recognition systems using images and videos with challenging poses. The specific objectives are to:
1. Develop and evaluate 3D-2D model registration algorithms for template extraction that overcome the challenges of large pose variations and occlusions caused by accessories.
2. Develop and evaluate a method that generates a single compact template from a collection of media (images and videos) suitable for face matching.
3. Develop and evaluate a template matcher suitable for matching the templates developed in Objective 2.
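Objective 2 (a single compact template from a collection of media) can be sketched in its simplest form as pooling the per-image features and projecting them to a lower dimension. The pooling-by-mean strategy, the offline-learned projection matrix (e.g., from PCA), and the function name `compact_template` are assumptions for illustration, not the method proposed in this research.

```python
import numpy as np

def compact_template(features, projection):
    """Fuse per-image features for one subject into a single compact template.

    features: (m, d) array, one d-dimensional feature vector per image/frame.
    projection: (d, k) matrix with k << d, learned offline (e.g., by PCA).
    Returns a unit-norm k-dimensional template suitable for cosine matching.
    Hypothetical sketch of media pooling followed by dimensionality reduction.
    """
    pooled = features.mean(axis=0)               # average pooling over the media set
    compact = projection.T @ pooled              # compress from d to k dimensions
    return compact / np.linalg.norm(compact)     # normalize for similarity scoring
```

The trade-off discussed in the abstract appears here directly: a smaller k speeds up matching and shrinks storage, but discards variance that may carry identity information.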


Date: Friday, February 23, 2018
Time: 11:00 AM
Place: HBS 314
Advisor: Dr. Ioannis A. Kakadiaris

Faculty, students, and the general public are invited.