Remote Ph.D. Defense: Mingliang Chen

Tuesday, July 6, 2021
10:00 a.m.
Join Zoom Meeting: https://umd.zoom.us/j/98115974320
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Remote Ph.D. Defense

Name: Mingliang Chen
Committee:
Prof. Min Wu, Advisor/Chair
Prof. Dana Dachman-Soled
Prof. Furong Huang
Prof. Chau-Wai Wong
Prof. Michelle Mazurek, Dean’s Representative
 

Date/Time: Tuesday, July 6, 2021 at 10:00 a.m.

Location: Join Zoom Meeting: https://umd.zoom.us/j/98115974320
Title: Security Enhancement and Bias Mitigation for Emerging Sensing and Learning Systems
 
Abstract:
Artificial intelligence (AI) has been applied to a wide range of practical tasks in recent years, facilitating many aspects of our daily life. With AI-based sensing and learning systems, we can enjoy automated decision making, computer-assisted medical diagnosis, and health monitoring. As these algorithms enter human society and influence daily life, important issues such as access control, intellectual property protection, privacy protection, and fairness/equity should be considered when developing the algorithms, beyond successful performance alone. In this dissertation, we improve the design of emerging AI-based sensing and learning systems from the security and fairness perspectives.

The first part concerns the security protection of deep neural networks (DNNs). Trained DNN models are an emerging form of intellectual property for their owners and should be protected from unauthorized access and piracy to encourage healthy business investment and competition. Taking advantage of DNNs' intrinsic mechanisms, we propose a novel framework that provides access control to trained DNNs, so that only authorized users can utilize them properly, preventing piracy and illicit usage.
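To make the access-control idea concrete, here is a minimal Python sketch of one possible key-based scheme (the secret key, the permutation transform, and all names below are illustrative assumptions, not the dissertation's actual design):

    # Sketch: key-controlled input transform for DNN access control.
    # The model is assumed to be trained on key-transformed inputs, so only
    # users holding the secret key obtain meaningful predictions.
    import numpy as np

    SECRET_KEY = 42  # hypothetical key shared only with authorized users

    def keyed_transform(x: np.ndarray, key: int) -> np.ndarray:
        """Permute the flattened input with a key-seeded permutation."""
        rng = np.random.default_rng(key)
        perm = rng.permutation(x.size)
        return x.reshape(-1)[perm].reshape(x.shape)

    x = np.random.rand(28, 28)                # a raw input image
    x_auth = keyed_transform(x, SECRET_KEY)   # matches the model's training domain
    x_pirate = keyed_transform(x, 7)          # wrong key: scrambled input, degraded output

Under such a scheme, a pirated copy of the model is of little use without the key, since raw or wrongly transformed inputs fall outside the domain the model was trained on.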

The second part addresses privacy protection in facial videos. Remote photoplethysmography (rPPG) can collect a person's physiological signal while he or she sits in front of a camera, which raises privacy issues from two aspects. First, individual health conditions may be revealed from a facial recording by another party without the person's explicit consent. To address this physiological privacy issue, we develop PulseEdit, a novel and efficient algorithm that edits the physiological signals in facial videos without affecting their visual appearance, protecting the person's physiological signal from disclosure. On the other hand, R&D of rPPG technology also poses a risk of identity-privacy leakage: developing rPPG algorithms usually requires public benchmark facial datasets, yet facial videos are highly sensitive and carry a high risk of identity leakage. We develop an anonymization transform that removes sensitive visual information identifying an individual while preserving the physiological information needed for rPPG analysis.
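For context on the signal being protected, here is an illustrative Python sketch of basic rPPG extraction (a common green-channel-averaging approach under assumed parameters; background only, not PulseEdit or the anonymization transform):

    # Sketch: recover a pulse trace from a face-cropped video by averaging
    # skin pixels per frame and band-passing to the heart-rate range.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def rppg_signal(frames: np.ndarray, fps: float = 30.0) -> np.ndarray:
        """frames: (T, H, W, 3) face-cropped video; returns a 1-D pulse trace."""
        green = frames[:, :, :, 1].astype(float).mean(axis=(1, 2))  # spatial mean per frame
        green -= green.mean()                                       # remove DC component
        # Band-pass to a plausible heart-rate range, ~0.7-4 Hz (42-240 bpm).
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
        return filtfilt(b, a, green)

Because such a signal can be read off any ordinary facial video, both editing it out (PulseEdit) and anonymizing the face while keeping it (for dataset release) are meaningful privacy operations.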

In the last part, we investigate fairness in machine learning inference. Prior art has proposed various fairness definitions to ensure that decisions guided by machine learning models are equitable. Unfortunately, a "fair" model trained under these definitions is threshold sensitive, i.e., the fairness condition no longer holds once the decision threshold is tuned. To this end, we introduce the definition of threshold invariant fairness, which enforces equitable performance across different groups independent of the decision threshold.
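As a sketch of what threshold invariance asks for (notation assumed here, not quoted from the dissertation): with score s(X), label Y, group attribute A, and groups a and b, an equal-opportunity-style criterion would have to hold at every operating threshold, e.g.

    \Pr[ s(X) > \tau \mid Y = 1, A = a ] = \Pr[ s(X) > \tau \mid Y = 1, A = b ]  for all \tau,

so that tuning the decision threshold cannot break the fairness condition.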
 

Audience: Graduate Faculty
