Ph.D. Defense: Baturalp Buyukates

Monday, December 13, 2021
4:00 p.m.
AVW 2328
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT:  Ph.D. Defense
 
 
Name: Baturalp Buyukates
 
Committee:
Prof. Sennur Ulukus, Chair/Advisor
Prof. Behtash Babadi
Prof. Nuno Martins
Prof. Richard La
Prof. Radu Balan, Dean’s Representative
 
Date/Time: Monday, December 13, 2021 at 4 pm 
 
Location: AVW 2328
 
Title: Age of Information in Large Networks, Distributed Computation and Learning
 
 
Abstract:
 
We consider timely data delivery in real-time communication networks, which have gained significant importance with recent promising applications including augmented reality/virtual reality (AR/VR), autonomous vehicular networks, and smart factories. Age of information is a network performance metric introduced to guarantee access to fresh data in such systems. Motivated by the increasing connectivity of communication networks and the growing prominence of distributed computation and learning systems, in this dissertation we study age of information in large networks, with a particular focus on its scalability as the network size grows, as well as the design of distributed computation and learning systems that handle time-critical data using a potentially large number of worker nodes with as little age as possible.
  
First, we consider a multihop multicast network with a single source node sending time-sensitive updates to $n^L$ end nodes where $L$ denotes the number of hops. Updates from the source go through relay nodes in each hop to reach the end nodes. We show that by utilizing appropriate transmission stopping thresholds in each hop, the age of information at the end nodes can be made a constant independent of $n$. We then find the optimum stopping value for each hop for arbitrary shifted exponential link delays.
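
To illustrate why an earliest-$k$ stopping threshold keeps the age bounded, the following sketch (a Monte Carlo illustration with an assumed shift $c=1$, rate $\lambda=1$, and stopping fraction $k=n/2$, not the parameters optimized in the dissertation) estimates the expected time for the earliest $k$ of $n$ shifted exponential links to deliver an update; unlike waiting for all $n$ links, this time remains essentially constant as $n$ grows.

import numpy as np

rng = np.random.default_rng(0)

def earliest_k_time(n, k, c=1.0, lam=1.0, trials=2000):
    """Mean time until the earliest k of n shifted exponential
    link delays (shift c, rate lam) have been realized."""
    delays = c + rng.exponential(1.0 / lam, size=(trials, n))
    kth = np.partition(delays, k - 1, axis=1)[:, k - 1]   # k-th smallest delay
    return kth.mean()

for n in [10, 100, 1000, 5000]:
    k = n // 2   # stop after half of the end nodes are updated
    print(f"n={n:5d}  earliest-k: {earliest_k_time(n, k):.3f}  wait-for-all: {earliest_k_time(n, n):.3f}")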

Next, we focus on a single hop multicast network in which two update streams, called type I and type II updates, share the same network. We show that utilizing an earliest $k_1$ and earliest $k_2$ transmission scheme for type I and type II updates, respectively, prevents information staleness for both update streams. We find the optimum $k_1$ and $k_2$ stopping thresholds for arbitrary shifted exponential link delays that individually and jointly minimize the average age of both update streams.

Then, we consider the age scaling in a large peer-to-peer network consisting of $n$ randomly paired source-destination pairs. We first propose a three-phase transmission scheme which utilizes \emph{local cooperation} among the nodes along with \emph{mega update packets} and show that it achieves an average age scaling of $O(n^{\frac{1}{4}}\log n)$ per user as $n$ grows. Next, we show that, under a hierarchical implementation of the proposed scheme, an average age scaling of $O(n^{\alpha(h)}\log n)$ per user is achievable, where $h$ denotes the number of hierarchy levels and $\alpha(h) = \frac{1}{3\cdot2^h+1}$. The proposed hierarchical scheme asymptotically achieves a per-user average age scaling of $O(\log n)$.
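
A quick numerical check of the hierarchical exponent, using the expression $\alpha(h) = \frac{1}{3\cdot2^h+1}$ above, shows how each added hierarchy level shrinks the polynomial part of the per-user age, which is where the asymptotic $O(\log n)$ scaling comes from.

# alpha(h) = 1 / (3 * 2**h + 1); h = 0 recovers the flat scheme's n^(1/4) factor
for h in range(6):
    alpha = 1.0 / (3 * 2**h + 1)
    print(f"h = {h}: per-user age scales as O(n^{alpha:.4f} log n)")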

Next, we consider the version age of information scaling in gossip networks, where a total of $n$ nodes are clustered into distinct communities and nodes share their versions of the source information with their neighbors within each cluster. Assuming different topologies for the clusters, we show that per-node average version age scalings of $O(\sqrt{n})$, $O(n^{\frac{1}{3}})$, and $O(\log n)$ are achievable in disconnected, ring, and fully connected cluster models, respectively. We then increase connectivity across clusters by implementing a hierarchical gossip mechanism to further improve the version age scaling results, and find the version age-optimum cluster sizes for various settings.
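
The version age dynamics behind these results can be pictured with a small Gillespie-type simulation of a single fully connected cluster, a simplified special case of the clustered model above; the rates used below (source self-update rate, total source contact rate, and per-node gossip rate) are illustrative assumptions rather than the dissertation's setup.

import numpy as np

rng = np.random.default_rng(1)

def avg_version_age(n, lam_e=1.0, lam=1.0, horizon=5000.0):
    """Average per-node version age in a fully connected gossip network.
    The source's version increments at rate lam_e, the source pushes its
    version to a uniformly random node at total rate lam, and each node
    gossips its version to a uniformly random peer at total rate lam."""
    x = np.zeros(n)                 # x[i] = source version minus node i's version
    t, area = 0.0, 0.0
    total_rate = lam_e + lam + n * lam
    while t < horizon:
        dt = rng.exponential(1.0 / total_rate)
        area += x.sum() * dt        # integrate the version age over this interval
        t += dt
        u = rng.random() * total_rate
        if u < lam_e:               # source generates a new version
            x += 1.0
        elif u < lam_e + lam:       # source updates a random node
            x[rng.integers(n)] = 0.0
        else:                       # node j gossips its version to node i
            j, i = rng.choice(n, size=2, replace=False)
            x[i] = min(x[i], x[j])
    return area / (t * n)

for n in [8, 16, 32, 64]:
    print(f"n = {n:3d}: average version age ~ {avg_version_age(n):.2f}")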

Then, we consider a status updating system in which the update packets are data-rich and need to be processed further to extract the embedded information. This processing is handled in a distributed manner at a computation unit which consists of a master node and $n$ worker nodes. We investigate the age performance of uncoded and coded (repetition coded, MDS coded, and multi-message MDS (MM-MDS) coded) schemes in the presence of stragglers. We show that, asymptotically, the MM-MDS coded scheme outperforms the other schemes, and we characterize the age-optimal codes.
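
The straggler effect underlying these age results can be sketched by comparing the expected per-update computation time of the uncoded scheme, which must wait for all $n$ workers, with that of an $(n,k)$ MDS coded scheme, which only needs the fastest $k$; the shifted exponential worker-time model and the parameter values below are illustrative assumptions, and the repetition and MM-MDS variants are omitted for brevity.

import numpy as np

rng = np.random.default_rng(2)

def mean_finish_time(n, wait_for, shard, c=1.0, lam=1.0, trials=5000):
    """Mean time until `wait_for` of n workers finish, where each worker
    holds a `shard`-fraction of the job and its runtime is modeled as
    shard * (c + Exp(lam)), a shifted exponential."""
    times = shard * (c + rng.exponential(1.0 / lam, size=(trials, n)))
    return np.partition(times, wait_for - 1, axis=1)[:, wait_for - 1].mean()

for n in [50, 200, 1000]:
    k = int(0.8 * n)
    uncoded = mean_finish_time(n, n, 1.0 / n)      # wait for all n workers
    mds = mean_finish_time(n, k, 1.0 / k)          # (n, k) MDS: fastest k suffice
    print(f"n={n:4d}  uncoded: {uncoded:.4f}   MDS (k={k}): {mds:.4f}")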

Next, we study age of information in federated learning and propose a novel timely communication scheme specifically designed for learning applications that use highly temporal, rapidly changing client datasets, such as recommendation systems and next place forecasting tasks. The proposed timely communication scheme aims to incorporate time-critical client data into the global model updates with as little age as possible, while also accounting for limited client availability and communication resources. We show that, in addition to ensuring timeliness, the proposed policy significantly improves the average iteration time of training without hurting the convergence performance of the algorithm.
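
One way to picture an age-aware client selection rule is sketched below; this is a hypothetical illustration of the general idea, with made-up availability probabilities and a simple greedy rule, not the exact policy proposed in the dissertation. Each client's age counts the rounds since its data last entered the global model, and the scheduler favors the stalest available clients.

import numpy as np

rng = np.random.default_rng(3)

def select_clients(age, available, m):
    """Pick up to m available clients whose data is stalest (largest age)."""
    candidates = [i for i in range(len(age)) if available[i]]
    return sorted(candidates, key=lambda i: age[i], reverse=True)[:m]

n_clients, m, rounds = 20, 4, 10
age = np.zeros(n_clients, dtype=int)              # rounds since each client's data was used
for t in range(rounds):
    available = rng.random(n_clients) < 0.5       # limited client availability
    chosen = select_clients(age, available, m)
    age += 1                                      # every client's data gets one round older
    age[chosen] = 0                               # chosen clients' fresh data enters the model
    print(f"round {t}: selected clients {chosen}")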

Then, we consider utilizing the age of information metric to improve convergence performance in distributed learning with coded computation and partial recovery for straggler mitigation. We propose a novel age-based encoding framework that regulates the recovery frequency of the partial computations received from the workers. We show through several experiments on a linear regression problem that the proposed age-based encoding strategy significantly improves the convergence performance compared to conventional static encoding schemes.

Next, we propose a novel gradient coding (GC) scheme with dynamic clustering, called GC-DC, to improve the average iteration time over existing static gradient coding techniques. The proposed GC-DC scheme clusters the workers and applies the GC scheme in each cluster separately. Under a time-correlated straggling behavior for the workers, GC-DC dynamically forms the clusters based on past straggling behavior so as to distribute straggling workers across clusters as uniformly as possible. We show through extensive simulations on both homogeneous and heterogeneous worker profiles that the GC-DC scheme significantly improves the average iteration time compared to existing static GC schemes without any increase in the communication load.
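
The clustering step behind this idea can be sketched with a simple greedy rule: rank the workers by an estimate of their straggling probability obtained from past iterations and deal them out round-robin, so likely stragglers end up spread across clusters. The estimates below are made up for illustration, and the dissertation's actual GC-DC assignment procedure is more involved.

import numpy as np

rng = np.random.default_rng(4)

def dynamic_clusters(straggle_prob, n_clusters):
    """Sort workers from most to least likely to straggle (based on past
    behavior) and assign them round-robin, spreading likely stragglers
    across the clusters as uniformly as possible."""
    clusters = [[] for _ in range(n_clusters)]
    for rank, worker in enumerate(np.argsort(-straggle_prob)):
        clusters[rank % n_clusters].append(int(worker))
    return clusters

straggle_prob = rng.random(12)     # 12 workers with estimated straggling probabilities
for c, members in enumerate(dynamic_clusters(straggle_prob, 4)):
    print(f"cluster {c}: workers {members}")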

Finally, we study a status updating system with a single sampler which takes samples from an observed phenomenon and sends them to a monitor node through a single server that implements a blocking policy. We consider two scenarios with partial non-i.i.d.~components: Gilbert-Elliot service times and i.i.d.~interarrival times; and i.i.d.~service times and Gilbert-Elliot interarrival times. We characterize the average age experienced by the monitor node and determine the age-optimal state transition matrix for both of these scenarios.
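
The first scenario can be illustrated with a small simulation sketch; the modeling details below (Poisson arrivals, a blocking server that discards updates arriving while it is busy, exponential service whose rate is modulated by a two-state Markov chain that transitions after each completed service, and the particular parameter values) are illustrative assumptions rather than the exact setup analyzed in the dissertation.

import numpy as np

rng = np.random.default_rng(5)

def avg_age_ge_service(lam=1.0, mu=(2.0, 0.4), P=((0.9, 0.1), (0.3, 0.7)),
                       n_arrivals=100000):
    """Average age at the monitor with Poisson(lam) sample arrivals, a
    blocking server (samples arriving during a service are discarded),
    and exponential service at rate mu[state], where the two-state
    service chain follows transition matrix P after each service."""
    P = np.asarray(P)
    state, free_at, t = 0, 0.0, 0.0
    deliveries = []                     # (delivery time, generation time) of served samples
    for _ in range(n_arrivals):
        t += rng.exponential(1.0 / lam)
        if t >= free_at:                # server idle: accept this sample
            done = t + rng.exponential(1.0 / mu[state])
            deliveries.append((done, t))
            free_at = done
            state = rng.choice(2, p=P[state])
    d, g = (np.array(v) for v in zip(*deliveries))
    gaps = np.diff(d)                   # age grows linearly between deliveries
    area = np.sum(gaps * (d[:-1] - g[:-1]) + 0.5 * gaps**2)
    return area / (d[-1] - d[0])

print(f"average age ~ {avg_age_ge_service():.3f}")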
 

Audience: Graduate Faculty
