Event
Ph.D. Proposal Exam: Purbesh Mitra
Tuesday, January 14, 2025
11:00 a.m.
AVW 2328 - Via Zoom
Maria Hoo
301 405 3681
mch@umd.edu
Ph.D. Proposal Exam
Name: Purbesh Mitra
Committee:
Prof. Sennur Ulukus (Chair)
Prof. Behtash Babadi
Prof. Kaiqing Zhang
Date/time: Tuesday, January 14, 2025 at 11:00 AM EST
Location: AVW 2328
via Zoom
Title: Timely Information Dissemination in Distributed Networks for Decentralized Learning and Inference
Abstract: We study information freshness in large networks of users, where the users communicate time-varying data for distributed learning. In particular, we focus on the timeliness aspect of the communication, which is essential for model convergence during the training phase of edge-intelligent systems. We consider different systems, such as single-source gossip networks, multi-source decentralized learning networks, and hierarchical federated learning networks. We address open problems regarding the characterization of timeliness, derive closed-form expressions for the long-term expected values, and formulate bounds on those values as the network size grows large.
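The abstract does not state the exact timeliness metric; a common choice in gossip-network analyses is the version age of information, sketched below under that assumption, where N_s(t) is the source's version counter and N_i(t) is the latest version held by node i.

```latex
% Version age of information (assumed metric; the abstract does not fix it).
% N_s(t): current version at the source; N_i(t): latest version at node i.
\[
  X_i(t) = N_s(t) - N_i(t), \qquad
  a_i = \lim_{T \to \infty} \frac{1}{T}\,
        \mathbb{E}\!\left[\int_0^T X_i(t)\,\mathrm{d}t\right].
\]
% The O(1) statements below say that a_i stays bounded by a constant
% as the network size n grows.
```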
In our first completed work, we consider the timeliness of a single-source gossip network, where the source generates dynamic information and the nodes of the network try to track that information closely. We show that if the nodes employ an opportunistic gossiping scheme, then the timeliness metric remains bounded as O(1) with respect to the network size n, i.e., a larger network size does not degrade the information dissemination performance. In our second completed work, we formulate lower bounds on the best possible age performance in gossip networks. We derive a semi-distributed gossiping scheme in which the freshest node in the network gossips with full capacity while all other nodes remain silent. This results in an O(1) average age, which is analytically shown to be the best achievable performance. Further, we develop a fully distributed scheme that encounters interference but still achieves an O(1) average age. In our third completed work, we consider a hierarchical federated learning network, where the users are grouped into a cloud-edge-client hierarchy. The users train their local models and transmit them to the edge servers via uplink communication; the edge models are then aggregated at the cloud servers. The learning is timely but asynchronous, i.e., it is coordinated in a timely manner by the edge servers, while the users train their models asynchronously. We show that, to guarantee convergence of the users as the network grows large, i.e., to satisfy the bounded staleness condition, the number of edge servers must be bounded as O(1) with respect to the total number of users n; thus, only a limited number of clusters can be accommodated in a particular setting.
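As a rough illustration of how such age metrics can be estimated, the following Python sketch runs a toy discrete-event simulation of a "freshest node gossips, all others silent" policy and reports the time-averaged version age. The rates, the single-target gossip rule, and the function name are illustrative assumptions, not the schemes analyzed in the completed works.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_freshest_gossip(n, lam_s=1.0, lam=1.0, t_end=5_000.0):
    """Toy sketch (assumed dynamics): the source generates a new version at
    rate lam_s and pushes its current version to a uniformly random node at
    total rate lam; only the currently freshest version is gossiped, at full
    rate lam, to a uniformly random node. Returns the time- and node-averaged
    version age."""
    src_version = 0
    versions = np.zeros(n, dtype=int)      # latest version held by each node
    t, age_integral = 0.0, 0.0
    rates = np.array([lam_s, lam, lam])    # three competing exponential clocks
    probs = rates / rates.sum()
    while t < t_end:
        dt = rng.exponential(1.0 / rates.sum())
        age_integral += (src_version - versions).sum() * dt   # age before event
        t += dt
        event = rng.choice(3, p=probs)
        if event == 0:                      # source generates a new version
            src_version += 1
        elif event == 1:                    # source pushes to a random node
            versions[rng.integers(n)] = src_version
        else:                               # freshest version is gossiped
            freshest = versions.max()
            target = rng.integers(n)
            versions[target] = max(versions[target], freshest)
    return age_integral / (t * n)

for n in (10, 100, 1000):
    print(n, round(simulate_freshest_gossip(n), 2))
```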
Next, we present three proposed works. In the first proposed work, we consider the timeliness performance of decentralized learning networks. In these networks, instead of relying on a centralized parameter server, each node gossips its locally trained model to its neighbors, where it is averaged successively. The local training process also continues asynchronously. Here, we aim to find the bounded staleness condition under network scaling. In our second proposed work, we consider a network with an asymmetric sparse topology. In this network, we aim to find the optimal source-to-network rate allocation that maintains fairness in the timeliness performance of the nodes. In this context, fairness is formulated as optimizing the worst-case user performance. Our third proposed work considers a time-varying network topology for decentralized learning, where the network users are mobile and thus connectivity changes over time. We aim to find sufficient gossip capacity conditions for model convergence in such a network.
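A minimal sketch of the gossip-averaging mechanism referenced in the first proposed work is given below, assuming a ring topology, quadratic local objectives, asynchronous node activations, and pairwise averaging with a random neighbor. All of these details (topology, objectives, step size, function names) are illustrative assumptions, not the proposal's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_sgd_step(w, grad_fn, lr=0.1):
    """One asynchronous local training step with a placeholder gradient."""
    return w - lr * grad_fn(w)

def gossip_average(models, adjacency, node):
    """Average node's model with one uniformly chosen neighbor (pairwise gossip)."""
    neighbors = np.flatnonzero(adjacency[node])
    if neighbors.size == 0:
        return
    peer = rng.choice(neighbors)
    avg = 0.5 * (models[node] + models[peer])
    models[node] = avg
    models[peer] = avg

# Toy setup: n nodes on a ring, each with objective 0.5 * ||w - target_i||^2.
n, d = 8, 4
adjacency = np.zeros((n, n), dtype=int)
for i in range(n):
    adjacency[i, (i - 1) % n] = adjacency[i, (i + 1) % n] = 1

targets = rng.normal(size=(n, d))            # each node's local optimum
models = rng.normal(size=(n, d))             # each node's local model

for step in range(2000):
    node = rng.integers(n)                   # asynchronous activation
    grad_fn = lambda w, i=node: w - targets[i]
    models[node] = local_sgd_step(models[node], grad_fn)
    gossip_average(models, adjacency, node)

print("disagreement across nodes:", np.linalg.norm(models - models.mean(axis=0)))
print("distance to global optimum:", np.linalg.norm(models.mean(axis=0) - targets.mean(axis=0)))
```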