
Ph.D. Dissertation Defense: Upal Mahbub
Tuesday, August 14, 2018
11:00 a.m.
3450, A.V. Williams Building
For More Information:
Maria Hoo
301 405 3681

ANNOUNCEMENT:  Ph.D. Dissertation Defense

Name: Upal Mahbub
Committee:
Professor Rama Chellappa (Chair)
Professor K. J. Ray Liu
Professor Min Wu
Professor Tudor Dumitras
Professor Larry S. Davis, Dean's Representative



Recent advances in mobile technology have brought new authentication challenges to light. With smartphones increasingly used not only as communication devices but also as the port of entry for a wide variety of user accounts at different information-sensitivity levels, the need for hassle-free authentication is on the rise. Going beyond the traditional one-time authentication concept, active authentication (AA) schemes are emerging that authenticate users periodically in the background without requiring any user interaction. The purpose of this research is to explore different aspects of the AA problem and develop viable solutions by extracting unique biometric traits of the user from the wide variety of usage data obtained from smartphone sensors. The key aspects of our research are the development of different components of user verification algorithms based on (a) face images from the front camera and (b) data from modalities other than face.
Our work revealed interesting insights about users' faces captured by the front camera of a smartphone. Generic face detection algorithms do not perform well in the mobile domain due to the significant presence of occluded and partially visible faces. In this regard, we propose a face detection technique based on facial segments to handle the challenge of partial faces. Starting from a proposal-based topology, we are currently working towards developing fast, end-to-end, regression-based face detectors specifically for active authentication. The proposal-based approaches rely on the generation of face proposals that contain facial segment information. We have developed three increasingly accurate detectors, namely the Facial Segment-based Face Detector (FSFD), SegFace, and DeepSegFace, which perform binary classification on each proposal based on features learned from facial segments. Proposal generation can, however, be very time-consuming and is not truly necessary for the active authentication problem. Hence, we propose the Deep Regression-based User Image Detector (DRUID) network, which shifts from the classification to the regression paradigm to avoid the need for proposal generation. DeepSegFace and DRUID have unique network architectures with customized loss functions and utilize a novel data augmentation scheme to train on relatively small amounts of data. DRUID is very fast, as it outputs the bounding boxes for the face and the segments in a single pass. Being robust to occlusion by design, the facial segment-based face detection methods, especially DRUID, show superior performance over other state-of-the-art face detectors in terms of precision-recall and ROC curves on two mobile face datasets. Face-based verification of the device user is an even more challenging task, especially because of partially visible faces.
We have performed a benchmark evaluation of several state-of-the-art face verification techniques on a mobile dataset and established that the performance of current verification algorithms is far from satisfactory when it comes to active authentication.
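At test time, face verification techniques like those benchmarked above typically reduce to comparing a probe face embedding against an enrolled template and thresholding a similarity score. A minimal sketch of that final step (the function names, toy embeddings, and threshold value here are illustrative assumptions, not the dissertation's implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_embedding, probe_embedding, threshold=0.7):
    """Accept the probe as the enrolled user if similarity clears the threshold."""
    return cosine_similarity(enrolled_embedding, probe_embedding) >= threshold
```

In practice the threshold would be tuned on an evaluation set to trade false accepts against false rejects, which is exactly where partially visible faces make the score distributions hard to separate.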
We extended the concept of facial segments to facial attribute detection for partially visible faces. State-of-the-art methods for attribute detection almost always assume the presence of a full, unoccluded face; hence, their performance degrades for partially visible and occluded faces. We developed a deep convolutional neural network-based method named Segment-wise, Partial, Localized Inference in Training Facial Attribute Classification Ensembles (SPLITFACE) to perform attribute detection on partially occluded faces. Taking several facial segments and the full face as input, SPLITFACE takes a data-driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the use of committee machine techniques for combining local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can predict those attributes that are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent attribute detection methods, especially for partial faces.
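The committee-machine fusion described above can be sketched as a weighted vote over whichever segments are visible, so an occluded segment simply abstains. This toy version is illustrative only; the segment names, uniform weights, and threshold are assumptions, not SPLITFACE's actual fusion rule:

```python
def committee_predict(segment_scores, weights=None, threshold=0.5):
    """Fuse per-segment attribute scores into one binary decision.

    segment_scores maps segment name -> score in [0, 1], or None when the
    segment is occluded / not visible. Occluded segments are skipped, so
    the decision rests only on visible evidence.
    """
    weights = weights or {}
    total, norm = 0.0, 0.0
    for name, score in segment_scores.items():
        if score is None:
            continue  # occluded segment casts no vote
        w = weights.get(name, 1.0)
        total += w * score
        norm += w
    if norm == 0.0:
        return None  # nothing visible: abstain entirely
    return (total / norm) >= threshold

# An upper-face attribute can still be predicted from visible segments
# even though the mouth region is occluded:
print(committee_predict({"eyes": 0.9, "nose": 0.8, "mouth": None, "full": 0.6}))  # True
```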

The potential of other sensor data, such as touch dynamics, acceleration, rotation, and location information, has also been explored for user classification and verification. Our benchmark evaluations of touch gesture-based user verification and location-based next-place prediction established that, in a completely unconstrained setting, the state-of-the-art algorithms are barely useful. Aiming to discover a user's pattern of life, we processed the location traces into separate state-space models for each user and developed the Marginally Smoothed Hidden Markov Model (MSHMM) algorithm to authenticate the current user based on the last several states that the individual has traversed. The method takes into consideration the sparsity of the available data, the transition phases between states, the timing information, and unforeseen states. We looked deeper into the impact of unforeseen and unknown states in another work, where we evaluated the feasibility of users' application usage behavior as a potential solution to the active authentication problem. Our experiments show that it is essential to take unforeseen states into account when designing an authentication system with sparse data, and that marginal-smoothing techniques are very useful in this regard.
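The MSHMM formulation itself is not spelled out in this abstract, but the role smoothing plays with sparse traces can be illustrated with plain additive (Laplace) smoothing of Markov transition counts: every transition, including ones into states never observed during enrollment, keeps a nonzero probability, so a single unforeseen state cannot zero out the whole likelihood. This is a generic sketch under that assumption, not the MSHMM algorithm:

```python
import math
from collections import Counter, defaultdict

def fit_transitions(state_sequence, n_states, alpha=1.0):
    """Estimate a smoothed transition matrix from one user's location trace.

    alpha > 0 adds a pseudo-count to every transition, so rows for rarely
    or never visited states still define a proper distribution.
    """
    counts = defaultdict(Counter)
    for prev, nxt in zip(state_sequence, state_sequence[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for s in range(n_states):
        total = sum(counts[s].values()) + alpha * n_states
        probs[s] = {t: (counts[s][t] + alpha) / total for t in range(n_states)}
    return probs

def trace_log_likelihood(probs, trace):
    """Score how well a recent trace of states matches the enrolled model."""
    return sum(math.log(probs[p][n]) for p, n in zip(trace, trace[1:]))

# Enroll on a sparse trace over 3 location states, then compare a
# genuine-looking recent trace against an unlikely one:
model = fit_transitions([0, 1, 0, 1, 2], n_states=3)
print(trace_log_likelihood(model, [0, 1, 0, 1]) >
      trace_log_likelihood(model, [2, 2, 2, 2]))  # True
```

Authentication would then amount to thresholding the likelihood of the last several observed states under the enrolled user's model.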
In the final part of this dissertation, we describe some ongoing efforts and future directions of research. Merging the idea of facial segments with attention networks, we believe a robust user verification model can be developed that authenticates the user locally at different segments and then combines the local decisions into a global feature vector. Such a model would be useful in most authentication scenarios, especially those where partial faces need to be verified. We are also working on a novel feature extraction mechanism that enforces a ranking constraint on the feature vector, so that features are ranked according to importance. Our preliminary experiments on a car-model verification dataset are promising. This research might be instrumental in developing search protocols for large-scale data and might provide good insight for selecting the optimal size of the feature vector. Finally, we are working towards modeling the attribute detection problem as a reinforcement learning task in which the framework is trained to determine the next attribute to detect based on the attributes that have already been estimated. We expect this approach to reveal key dependencies between local and global attributes and to demonstrate how the network's decision for the next estimate is influenced by prior estimates.



This Event is For: Graduate • Faculty
