Ph.D. Dissertation Defense: Zachary McBride Lazri

Contact: Maria Hoo
301-405-3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense

Name: Zachary McBride Lazri

Committee: 
Prof. Min Wu, Chair
Prof. Dana Dachman-Soled
Prof. Furong Huang
Prof. Michelle Mazurek
Prof. Sushant Ranadive, Dean's Representative

Date/Time: Monday, March 24, 2025 at 9:00 a.m.

Location: 2211 Kim Engineering Building
Zoom link: https://umd.zoom.us/j/8594620312?omn=99402663133

Title: Analyzing and Enhancing Algorithmic Fairness in Social Systems and Data-Restricted Applications

Abstract

Over the past decade, machine learning (ML) and artificial intelligence (AI) have become increasingly prevalent in applications that impact our daily lives. However, their use in high-stakes domains involving sensitive data raises significant ethical and legal concerns, particularly around algorithmic bias. Research on fairness in AI/ML (FairAI) seeks to address how AI/ML models' treatment of data may conflict with societal values. Addressing issues within this domain requires creating suitable definitions of bias, developing algorithms to mitigate it, and analyzing and evaluating their effectiveness. To facilitate this effort, this dissertation addresses key challenges in improving algorithmic fairness within social systems and data-restricted environments, aiming to ensure ethical model deployment in high-stakes settings.

The first part of this dissertation proposes an algorithm to achieve both inter-group and within-group fairness. While many studies focus on fairness across different demographic groups, algorithms designed for inter-group fairness can unintentionally treat individuals within the same group unfairly. To address this issue, we introduce the notion of within-group fairness and present a pre-processing framework that satisfies both inter- and within-group fairness with minimal accuracy loss. This framework maps feature vectors from different groups to a fair canonical domain before passing them through a scoring function, preserving the relative relationships among scores within the same demographic group to guarantee within-group fairness.
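To make the mapping idea concrete, below is a minimal sketch, assuming a rank-preserving, per-group quantile mapping into a shared canonical domain; it is an illustration of the general technique, not the dissertation's actual algorithm. Because the map is monotone within each group, the relative order of scores inside a group is preserved, while all groups end up on one common scale. The function names and the choice of reference distribution are assumptions for illustration.

```python
import numpy as np

def fit_group_cdfs(scores, groups):
    """Store each group's sorted scores; these define empirical CDFs."""
    return {g: np.sort(scores[groups == g]) for g in np.unique(groups)}

def to_canonical(scores, groups, group_cdfs, reference):
    """Map every score into the shared canonical domain through its
    within-group quantile. The map is monotone within each group, so
    the relative order of scores inside a group is preserved."""
    out = np.empty_like(scores, dtype=float)
    for g, sorted_scores in group_cdfs.items():
        mask = groups == g
        # empirical quantile of each score within its own group
        q = np.searchsorted(sorted_scores, scores[mask], side="right") / sorted_scores.size
        # read off the same quantile of the shared reference distribution
        out[mask] = np.quantile(reference, q)
    return out

# Toy usage: two groups whose raw score distributions are shifted apart.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
groups = np.repeat([0, 1], 500)
canonical = to_canonical(scores, groups, fit_group_cdfs(scores, groups),
                         reference=rng.uniform(0.0, 1.0, 10_000))
# After mapping, both groups share one canonical score distribution, so a
# single downstream scoring function treats them comparably across groups.
```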

The second part of this dissertation explores trade-offs in satisfying multiple fairness constraints in high-stakes, data-restricted decision-making contexts. While previous research has explored trade-offs between fairness and accuracy by analyzing model outputs, these studies do not consider how data restrictions impact a model's ability to satisfy fairness constraints. To fill this gap, we propose a framework that models fairness-accuracy trade-offs in data-restricted settings. Our framework analyzes the optimal Bayesian classifier's behavior using a discrete approximation of the data distribution, allowing us to isolate the effects of fairness constraints. Key insights include: (1) enforcing equal accuracy on imbalanced datasets can degrade performance under fairness constraints, (2) individual and group fairness often conflict, and (3) decorrelating sensitive attributes does not usually improve accuracy. These findings demonstrate that our framework provides an effective, structured approach for practitioners to assess fairness constraints in decision-making pipelines.
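As a hedged illustration of this style of analysis (a toy stand-in, not the dissertation's framework), the sketch below discretizes a joint distribution over (feature bin, group, label), computes the unconstrained Bayes-optimal rule, and then searches per-group thresholds for the most accurate rule meeting a demographic-parity constraint. The synthetic distribution, the 0.02 parity tolerance, and the grid search are all assumptions made for the example.

```python
import numpy as np

# Toy discretized joint distribution p[x, a, y] over feature bin x,
# sensitive attribute a, and label y -- a stand-in for the discrete
# approximation of the data distribution described above.
rng = np.random.default_rng(1)
n_bins = 20
p = rng.dirichlet(np.ones(n_bins * 2 * 2)).reshape(n_bins, 2, 2)

def accuracy(d, p):
    """Accuracy of a deterministic decision rule d[x, a] in {0, 1}."""
    x, a = np.indices(d.shape)
    return p[x, a, d].sum()

def parity_gap(d, p):
    """Demographic-parity gap |P(d=1 | a=0) - P(d=1 | a=1)|."""
    rate = lambda a: (p[:, a, :].sum(axis=1) * d[:, a]).sum() / p[:, a, :].sum()
    return abs(rate(0) - rate(1))

# Unconstrained Bayes-optimal rule: pick the likelier label in each cell.
bayes = (p[:, :, 1] > p[:, :, 0]).astype(int)

# Fairness-constrained rule: threshold the posterior P(y=1 | x, a)
# separately per group; keep the most accurate threshold pair whose
# parity gap stays within a 0.02 tolerance (an assumed budget).
post = p[:, :, 1] / p.sum(axis=2)
best_acc = -1.0
for t0 in np.linspace(0, 1, 51):
    for t1 in np.linspace(0, 1, 51):
        d = (post >= np.array([t0, t1])).astype(int)
        if parity_gap(d, p) <= 0.02:
            best_acc = max(best_acc, accuracy(d, p))

print(f"Bayes accuracy {accuracy(bayes, p):.3f}, gap {parity_gap(bayes, p):.3f}")
print(f"best parity-constrained accuracy {best_acc:.3f}")
```

Comparing the two printed accuracies exposes the cost of the fairness constraint directly at the level of the (discretized) distribution, with no trained model in the loop.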

The third part of this dissertation examines FairAI from a sustainability perspective by developing testbeds to validate fairness algorithms over time. It has been shown that applying fairness constraints in static problem formulations can have negative long-term effects on disadvantaged groups. To address this issue, emerging research focuses on creating fair solutions that persist over time. While many approaches treat fairness as a single-agent problem, real-world systems often involve multiple interacting entities that influence outcomes. By modeling these entities as agents, we can analyze their interventions and effects on system dynamics in a more flexible way. To this end, we introduce Multi-Agent Fair Environments (MAFEs), a framework for modeling social systems in which FairAI models can be tested, and we present and analyze three MAFEs, each modeling a distinct social system. Experimental results demonstrate the utility of our MAFEs as testbeds for developing multi-agent fair algorithms.
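The sketch below is a deliberately simplified, hypothetical example of what such an environment might look like, not one of the three MAFEs from the dissertation: a lender agent chooses per-group approval thresholds, and group-level creditworthiness drifts in response to lending outcomes, producing the long-term feedback that static fairness formulations miss. Only one decision-making agent is shown for brevity; additional agents (e.g., a regulator) would act through the same step interface. All class names, dynamics, and constants are assumptions.

```python
import numpy as np

class ToyLendingMAFE:
    """A hypothetical, simplified multi-agent fair environment: a lender
    sets per-group approval thresholds, and each group's creditworthiness
    drifts in response to lending outcomes over repeated rounds."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.mean = np.array([0.40, 0.60])  # per-group mean creditworthiness

    def step(self, thresholds):
        # 1000 applicants per group draw scores around their group mean.
        scores = self.rng.normal(self.mean, 0.1, size=(1000, 2))
        approved = scores >= thresholds       # lender agent's action
        repaid = scores > 0.5                 # stylized repayment rule
        # Feedback loop: repaid loans nudge the group mean up; defaults
        # among approved applicants nudge it down.
        for g in range(2):
            sel = approved[:, g]
            if sel.any():
                self.mean[g] += 0.01 * (repaid[sel, g].mean() - 0.5)
        profit = (approved & repaid).sum() - (approved & ~repaid).sum()
        return self.mean.copy(), profit

# Toy usage: a fixed group-blind policy, observed over 50 rounds.
env = ToyLendingMAFE()
for _ in range(50):
    state, profit = env.step(thresholds=np.array([0.5, 0.5]))
print("group means after 50 rounds:", state)
```

Running different threshold policies through such an environment is one way a testbed can surface long-run, group-level consequences that a single-shot fairness evaluation would not reveal.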

Audience: Graduate, Faculty
