Ph.D. Dissertation Defense: Boyu Lu

Tuesday, April 2, 2019
4:00 p.m.-6:00 p.m.
AVW 4424
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense


Name: Boyu Lu

 
Advisory Committee:
Professor Rama Chellappa, Chair/Advisor
Professor Joseph JaJa
Professor Behtash Babadi
Dr. Carlos Castillo
Professor Ramani Duraiswami, Dean's Representative
 
Date/time: Tuesday, April 2, 2019 at 4:00 pm - 6:00 pm
 
Location: AVW 4424
 
Title: DOMAIN ADAPTATION FOR UNCONSTRAINED FACE VERIFICATION AND IDENTIFICATION
 
Abstract:
 
Face recognition has received consistent attention in the computer vision community for over two decades. While recent advances in deep convolutional neural networks (DCNNs) have pushed face recognition algorithms past human performance in most controlled settings, unconstrained face recognition performance is still far from satisfactory. This is mainly because the domain shift between training and test data is substantial when faces are captured under extreme pose, blur, or other covariate variations. In this dissertation, we study the effects of covariates and present approaches for mitigating the domain mismatch to improve the performance of unconstrained face verification and identification.
 
First, we study how covariates affect the performance of deep neural networks on the large-scale unconstrained face verification problem. We implement five state-of-the-art DCNNs and evaluate them on three challenging covariate datasets. In total, seven covariates are considered: pose (yaw and roll), age, facial hair, gender, indoor/outdoor setting, occlusion (nose, mouth, and forehead visibility), and skin tone. Some of the results confirm and extend the findings of previous studies, while others are new findings that were rarely reported before or did not show consistent trends. In addition, we demonstrate that, with the assistance of gender information, the quality of a pre-curated noisy large-scale face dataset can be further improved. After retraining the face recognition model on the curated data, we observe performance improvements at low False Acceptance Rates (FARs).
 
Second, we propose a metric learning method to alleviate the effects of pose on face verification performance. We learn a joint model for the face and pose verification tasks and explicitly discourage information sharing between the identity and pose metrics. Specifically, we enforce an orthogonality regularization constraint on the learned projection matrices for the two tasks, which makes the identity metric for face verification more pose-robust. Extensive experiments on three challenging unconstrained face datasets show promising results compared to state-of-the-art methods.
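For readers unfamiliar with this kind of constraint, the orthogonality penalty can be sketched as follows. This is a minimal NumPy illustration, not the dissertation's actual implementation; the matrix names and dimensions are assumed for the example.

```python
import numpy as np

def orthogonal_regularizer(w_id, w_pose):
    """Penalty ||W_id^T W_pose||_F^2: grows when the identity and pose
    projection matrices share directions, zero when they are orthogonal."""
    cross = w_id.T @ w_pose
    return float(np.sum(cross ** 2))

rng = np.random.default_rng(0)

# Random projections typically overlap, so the penalty is positive.
w_id   = rng.normal(size=(128, 64))   # identity projection (feature dim x embed dim)
w_pose = rng.normal(size=(128, 16))   # pose projection
overlap_penalty = orthogonal_regularizer(w_id, w_pose)

# Orthonormal, non-overlapping column spaces drive the penalty to zero.
q, _ = np.linalg.qr(rng.normal(size=(128, 80)))
zero_penalty = orthogonal_regularizer(q[:, :64], q[:, 64:])
```

Adding such a term to the training loss pushes the two projections toward disjoint subspaces, so pose variation has less room to leak into the identity metric.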
 
Third, since blur is an important factor that significantly degrades face recognition accuracy, we propose two approaches to reduce the performance drop induced by image blur. First, we present an incremental dictionary learning approach to mitigate the distribution difference between sharp training data and blurred test data. Selected blurred faces, called supportive samples, are used to build more discriminative classification models and act as a bridge connecting the two domains. Second, we propose an unsupervised face deblurring approach based on disentangled representations, where content encoders and blur encoders split a blurred image into content and blur features. An adversarial loss on the deblurred results encourages visually realistic faces. Extensive experiments on two challenging face datasets show promising results.
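The disentanglement idea above can be sketched schematically: a blurred image is encoded into separate content and blur codes, the decoder reconstructs from the content code only, and a generator-side adversarial loss rewards realistic outputs. The linear encoders, decoder, and discriminator score below are hypothetical stand-ins for illustration, not the dissertation's networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear stand-ins for the learned encoders and decoder.
W_content = rng.normal(size=(32, 256)) * 0.05   # content encoder weights
W_blur    = rng.normal(size=(8, 256)) * 0.05    # blur encoder weights
W_dec     = rng.normal(size=(256, 32)) * 0.05   # decoder weights (content code -> image)

def encode(x):
    """Split a blurred image into separate content and blur codes."""
    return W_content @ x, W_blur @ x

def decode(content_code):
    """Reconstruct a deblurred image from the content code only."""
    return W_dec @ content_code

def generator_adversarial_loss(d_score):
    """Non-saturating GAN loss -log(sigmoid(D(x))): low when the
    discriminator rates the deblurred face as realistic."""
    return float(-np.log(1.0 / (1.0 + np.exp(-d_score))))

x_blur = rng.normal(size=256)             # a blurred face image, flattened
content_code, blur_code = encode(x_blur)
x_deblurred = decode(content_code)        # decoder sees no blur information
g_loss = generator_adversarial_loss(0.5)  # score from a hypothetical discriminator
```

Because the decoder never receives the blur code, any blur information must flow through the blur encoder, which is what enforces the split.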

Finally, apart from the effects of covariates such as pose and blur, face verification performance often suffers from a mismatch between source and target domains introduced by the requirement that the subjects in the training and test sets be mutually exclusive. To tackle this problem, we propose a template adaptation method for template-based face verification. A template-specific metric is trained to adaptively learn the discriminative information between the test templates and a negative training set whose subjects are disjoint from those in the test templates. Extensive experiments on two challenging face verification datasets yield promising results compared to other competitive methods.
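The template-adaptation idea can be illustrated with a simple stand-in: fit a linear discriminant between one test template's features and the disjoint negative set, then score probes by how template-like they look. The Fisher-style discriminant and all feature dimensions below are assumptions for the sketch, not the dissertation's learned metric.

```python
import numpy as np

def template_adapted_scorer(template_feats, negative_feats):
    """Fit a template-specific linear discriminant between one test template
    and a negative training set of disjoint subjects (a Fisher-style
    stand-in for the learned template-specific metric)."""
    mu_t = template_feats.mean(axis=0)
    mu_n = negative_feats.mean(axis=0)
    pooled_var = np.vstack([template_feats, negative_feats]).var(axis=0) + 1e-6
    w = (mu_t - mu_n) / pooled_var             # discriminant direction
    b = -0.5 * w @ (mu_t + mu_n)               # midpoint bias
    return lambda probe: float(probe @ w + b)  # higher => more template-like

rng = np.random.default_rng(2)
negatives  = rng.normal(size=(200, 64))         # disjoint-subject negatives
template_a = rng.normal(loc=2.0, size=(5, 64))  # media of one test subject
probe_same = rng.normal(loc=2.0, size=64)       # same subject, new image
probe_diff = rng.normal(loc=-2.0, size=64)      # a different subject

score = template_adapted_scorer(template_a, negatives)
same_score = score(probe_same)
diff_score = score(probe_diff)
```

Because the discriminant is refit per template at test time, it adapts to whatever distinguishes that template from the negative population, which is the intuition behind template-specific metrics.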

Audience: Graduate 

 
