Ph.D. Dissertation Defense: Chihuang Liu

Monday, December 20, 2021
12:30 p.m.
AVW 2328
Maria Hoo
301 405 3681
mch@umd.edu

 
Committee:
Professor Joseph JaJa (Chair)
Professor Min Wu
Professor Furong Huang
Professor Shuvra S. Bhattacharyya
Professor Dinesh Manocha (Dean’s representative)
 
 
Title: Reliable Machine Learning: Robustness, Calibration, and Reproducibility


Abstract: 
 
Modern machine learning algorithms are being applied to a rapidly increasing number of tasks and have demonstrated excellent performance. Despite these successes, it is a serious concern that these methods are not always reliable. They have been shown to be highly vulnerable to adversarial attacks, and to be over-confident even when they are not accurate. In this thesis, we focus on making machine learning algorithms more reliable in terms of robustness, calibration, and reproducibility.


In the first part of this thesis, we explore novel approaches to improving the adversarial robustness of deep neural networks. We present a method combining feature regularization and attention-based feature prioritization that encourages the model to learn and rely only on robust features that cannot be manipulated by adversarial perturbations. The resulting model is significantly more robust than models trained with existing methods.
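The feature-regularization idea above can be illustrated with a minimal numpy sketch. The loss form below (a mean squared distance between features of clean and adversarially perturbed inputs) is a hypothetical illustration of the general technique, not the dissertation's exact formulation:

```python
import numpy as np

def feature_regularization_loss(clean_feats, adv_feats):
    """Penalize divergence between features extracted from clean and
    adversarially perturbed inputs, encouraging the network to rely on
    features that adversarial perturbations cannot manipulate.
    (Hypothetical loss form for illustration.)"""
    return float(np.mean(np.sum((clean_feats - adv_feats) ** 2, axis=1)))

# Toy example: adversarial features shifted by 0.1 in every dimension.
clean = np.array([[1.0, 2.0], [0.5, -1.0]])
adv = clean + 0.1
loss = feature_regularization_loss(clean, adv)
```

In practice a term like this would be added to the standard classification loss during adversarial training, weighted by a tunable coefficient.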

In the second part of this thesis, we find that the current training scheme of using one-hot labels under cross-entropy loss is a major cause of the over-confident behavior of deep neural networks. We first propose a generalized definition of confidence calibration that requires the entire output distribution to be calibrated. This directly motivates a novel form of label smoothing, called class-similarity-based label smoothing, which approximates a distribution that is optimal for generalized confidence calibration. We show that a model trained with the proposed smoothed labels is significantly better calibrated than models trained with existing methods.
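To make the idea concrete, here is a minimal sketch of similarity-weighted label smoothing: instead of spreading the off-target probability mass uniformly (as standard label smoothing does), it is distributed in proportion to a class-similarity matrix. The function name, the normalization scheme, and the `alpha` parameter are illustrative assumptions, not the dissertation's exact algorithm:

```python
import numpy as np

def class_similarity_smooth_labels(y, similarity, alpha=0.1):
    """Build smoothed training targets where the off-target mass alpha is
    distributed according to class similarity rather than uniformly.
    y: integer class indices, shape (n,)
    similarity: (C, C) nonnegative class-similarity matrix
    (Hypothetical formulation illustrating the idea.)"""
    C = similarity.shape[0]
    targets = np.zeros((len(y), C))
    for i, c in enumerate(y):
        sim = similarity[c].copy()
        sim[c] = 0.0                  # exclude the true class itself
        weights = sim / sim.sum()     # normalize over the other classes
        targets[i] = alpha * weights  # spread alpha by similarity
        targets[i, c] = 1.0 - alpha   # true class keeps most of the mass
    return targets

# Toy example: 3 classes; class 0 is more similar to class 1 than to class 2.
sim = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
targets = class_similarity_smooth_labels(np.array([0, 2]), sim, alpha=0.1)
```

Each target row remains a valid probability distribution, and classes that are more similar to the true class receive a larger share of the smoothing mass.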

In the third part of this thesis, we propose an approach that improves the calibration performance of robust models. We first learn a representation space using prototypical learning, which bases its classification on the distances between the representation of a sample and the representations of the class prototypes. We then use this distance information to train a confidence prediction network that encourages the model to make calibrated predictions. We demonstrate through extensive experiments that our method improves the calibration performance of a model while maintaining comparable accuracy and adversarial robustness.
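The distance-based classification step can be sketched as follows. This is a generic prototypical-classification sketch (softmax over negative distances to class prototypes), assumed here for illustration; the confidence prediction network built on top of these distances is not shown:

```python
import numpy as np

def prototype_predict(x, prototypes):
    """Classify a representation x by its distance to each class prototype.
    A softmax over negative distances yields a confidence-like score.
    (Generic prototypical-learning sketch.)"""
    d = np.linalg.norm(prototypes - x, axis=1)  # distance to each prototype
    logits = -d                                 # closer prototype -> higher logit
    p = np.exp(logits - logits.max())           # numerically stable softmax
    p /= p.sum()
    return int(np.argmin(d)), p

# Toy example: two class prototypes in a 2-D representation space.
protos = np.array([[0.0, 0.0], [10.0, 0.0]])
cls, conf = prototype_predict(np.array([1.0, 0.0]), protos)
```

A separate confidence network could then consume the distance vector `d` to produce a calibrated confidence estimate.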

In the fourth part of this thesis, we tackle the problem of determining large-scale functional patterns for the whole brain from a group of fMRI subjects. Because of the non-linear nature of the signals and significant inter-subject variability, reliably extracting patterns that are reproducible across subjects is challenging. We propose a group-level model, called LEICA, that uses Laplacian eigenmaps as its main data-reduction step, preserving the correlation information in the original data as well as possible in a certain rigorous sense. The resulting nonlinear map is robust to noise in the data and to inter-subject variability. We show that LEICA detects functionally cohesive maps that are substantially more reproducible than those of state-of-the-art methods.
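The core data-reduction step can be illustrated with a textbook Laplacian-eigenmaps sketch: embed the nodes of a similarity graph using the bottom non-trivial eigenvectors of the graph Laplacian, so that strongly correlated points stay close in the embedding. This is the standard technique only; LEICA's full pipeline involves considerably more:

```python
import numpy as np

def laplacian_eigenmaps(W, dim=2):
    """Embed the nodes of a symmetric, nonnegative similarity graph W into
    `dim` dimensions using the bottom nonzero eigenvectors of the
    unnormalized graph Laplacian L = D - W.
    (Standard Laplacian-eigenmaps sketch.)"""
    D = np.diag(W.sum(axis=1))          # degree matrix
    L = D - W                           # graph Laplacian
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0).
    return vecs[:, 1:dim + 1]

# Toy example: a 4-node ring graph embedded into 2 dimensions.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
Y = laplacian_eigenmaps(W, dim=2)
```

For fMRI data, W would be built from voxel- or region-wise correlations, so the embedding preserves the correlation structure that downstream clustering depends on.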

 

Audience: Graduate, Faculty


 
