Ph.D. Research Proposal Exam: Zachary Lazri

Friday, October 27, 2023, 11:00 a.m.
2211 Kim Engineering Building (KEB)
Contact: Emily Irwin, 301-405-0680, eirwin@umd.edu

ANNOUNCEMENT: Ph.D. Research Proposal Exam

Name: Zachary McBride Lazri

Committee:

Min Wu (Chair)

Dana Dachman-Soled

Furong Huang

 

Date/time: 10/27/2023 11:00 a.m. to 1:00 p.m.

Location: 2211 Kim Engineering Building

Title: Analyzing Fairness in Machine Learning and Artificial Intelligence Applications

 

Abstract

Over the past decade, machine learning and artificial intelligence systems have become increasingly widespread in applications that affect our everyday lives. However, the unrestricted use of these systems in high-stakes applications that involve sensitive information has raised legal and ethical concerns because of their potential to perpetuate algorithmic bias in their decision-making processes. To properly address such biases, suitable definitions must be constructed to quantify bias in different applications; algorithms must be developed to mitigate it; and analyses must be performed to understand the extent to which an algorithm can do so. In this proposal, we aim to advance these objectives by proposing (1) an algorithmic framework for satisfying multiple fairness constraints while maintaining model accuracy and (2) a framework for analyzing the tradeoffs among different fairness definitions.

While many works have been devoted to applications that require different demographic groups to be treated fairly (inter-group fairness), algorithms that aim to satisfy inter-group fairness may inadvertently treat individuals within the same demographic group unfairly. To address this issue, the first part of this proposal introduces a formal definition of within-group fairness that maintains fairness among individuals within the same group. We then propose a preprocessing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy. The framework maps the feature vectors of members from different groups to an inter-group-fair canonical domain before feeding them into a scoring function. The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness. We apply this framework to the COMPAS risk assessment dataset and compare its performance in achieving inter-group and within-group fairness to two regularization-based methods.
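To illustrate the rank-preserving idea behind such a mapping, the hedged Python sketch below (illustrative only; it maps scalar scores rather than feature vectors, and the function name is hypothetical) sends each individual's raw score to its within-group quantile, so all groups share one canonical scale while each group's internal ordering is preserved:

    # Hypothetical sketch of a rank-preserving canonical mapping: scores are
    # replaced by within-group quantile ranks, so the processed scores of every
    # group live on a common [0, 1] scale (an inter-group-fair canonical domain)
    # while the relative ordering inside each group is untouched (within-group
    # fairness). This is a simplification for illustration, not the proposal's
    # actual feature-vector mapping.
    import numpy as np

    def to_canonical_quantiles(scores, groups):
        """Map raw scores to within-group quantile ranks in [0, 1]."""
        scores = np.asarray(scores, dtype=float)
        groups = np.asarray(groups)
        canonical = np.empty_like(scores)
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]
            order = scores[idx].argsort().argsort()        # within-group ranks
            canonical[idx] = (order + 1) / (len(idx) + 1)  # monotone, so order preserved
        return canonical

    # Example: two groups whose raw scores lie on different scales end up on a
    # common scale, with each group's internal ranking intact.
    raw   = [0.2, 0.5, 0.9, 3.0, 4.0, 7.5]
    group = ["A", "A", "A", "B", "B", "B"]
    print(to_canonical_quantiles(raw, group))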

In the second part of this proposal, we ask the question: “Is it possible for a model to accurately satisfy multiple definitions of fairness simultaneously?” Realizing that the answer may depend on the data available to a model, which may be limited under different privacy constraints, we propose a framework that models the tradeoff between accuracy and fairness under four practical scenarios that dictate the type of data available for analysis. In contrast to prior work that examines the outputs of a scoring function, our framework directly analyzes the joint distribution of the feature vector, class label, and sensitive attribute by constructing a discrete approximation from a dataset. By formulating multiple convex optimization problems, we perform an empirical analysis on a suite of fairness definitions that includes group and individual fairness. Experiments on three datasets demonstrate the utility of the proposed framework as a tool for quantifying the tradeoffs among different fairness notions and their distributional dependencies.
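As a hedged illustration of how such a tradeoff can be posed as a convex program (the cvxpy library and all numbers here are assumptions for the sketch, not the proposal's implementation), the example below maximizes expected accuracy over a toy discrete joint distribution of (feature, label, sensitive attribute) subject to a demographic-parity constraint with slack eps; sweeping eps traces an accuracy-fairness tradeoff curve of the kind the framework quantifies:

    # Hedged, self-contained sketch: an LP over a discrete joint distribution
    # P(x, y, a) with 3 feature values, a binary label, and 2 groups.
    # d[x, a] is the probability of a positive decision for cell (x, a).
    import numpy as np
    import cvxpy as cp

    P = np.random.default_rng(0).dirichlet(np.ones(3 * 2 * 2)).reshape(3, 2, 2)

    eps = 0.05                   # allowed demographic-parity gap (assumption)
    d = cp.Variable((3, 2))      # decision rule d[x, a] = P(decision = 1 | x, a)

    # Expected accuracy: agree with y=1 when deciding 1, with y=0 when deciding 0.
    acc = cp.sum(cp.multiply(P[:, 1, :], d) + cp.multiply(P[:, 0, :], 1 - d))

    # Per-group positive-decision rates E[d | a].
    rate = [cp.sum(cp.multiply(P[:, :, a].sum(axis=1), d[:, a])) / P[:, :, a].sum()
            for a in (0, 1)]

    prob = cp.Problem(cp.Maximize(acc),
                      [d >= 0, d <= 1, cp.abs(rate[0] - rate[1]) <= eps])
    prob.solve()
    print("best accuracy under parity gap", eps, ":", round(prob.value, 3))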

Building on this work, we plan to extend our research in the domain of algorithmic fairness in multiple directions to complete this dissertation. First, by modeling dynamic, time-varying systems in different application scenarios, such as college admissions or financial lending, we aim to investigate how the injection of interventions at different points in the pipeline can contribute to solutions that not only statically satisfy fairness definitions, but also lead to the long-term well-being of different demographic groups. Second, we aim to explore solutions for mitigating the algorithmic biases associated with skin tone commonly found in optical health monitoring applications.

Audience: Graduate Faculty
