Ph.D. Dissertation Defense - Sagnik Bhattacharya

Wednesday, May 13, 2026
3:30 p.m.
AVW 2168
Souad Nejjar
301 405 8135
snejjar@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense

 

NAME: Sagnik Bhattacharya

Committee:
Professor Prakash Narayan (Chair/Advisor)
Professor Saikat Guha
Professor Alexander Barg
Professor Behtash Babadi
Professor Victor Albert
Professor Aravind Srinivasan (Dean's Representative)


Date/Time: Wednesday, May 13, 2026, 3:30 - 5:00 p.m.

Location: AVW 2168

Title: Markov Random Fields for Spatial Sampling, Learning and Compression: Classical and Quantum

Abstract:
Our networked and data-driven world contains vast amounts of spatially correlated, time-varying data that must be processed efficiently. Physical, computational, and storage constraints often limit data collection, and network resources must be used efficiently, requiring us to infer network-wide structures from local samples and solve global network tasks by combining solutions to local subproblems. Such problems arise in fields as diverse as mobile networks, robot swarms, biological neural networks, the Internet-of-Things (IoT), social networks and the emerging quantum internet.

In this dissertation, we use a class of Markov random fields, based on graphical models, to treat these disparate applications through a unified, mathematical lens. These graphical models treat the individual signals---neurons, robots/drones or components of the quantum internet---as nodes in a network, and model their relationships by connections among them. Such models leverage a structural property that many real networks possess: activities of nearby nodes are more correlated than those that are far apart.

To address the core challenge of quantifying the dependence among multiple random data signals, we investigate a measure of mutual statistical dependence called shared information (SI), which generalizes to multiple terminals Claude Shannon's foundational notion of mutual information, a measure of correlation between two random signals.
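For concreteness, one formulation of a multiterminal dependence measure of this kind that appears in the information theory literature is the minimum, over all nontrivial partitions pi of the signal set, of (sum over blocks A of H(X_A) minus the joint entropy) divided by (|pi| - 1). The sketch below brute-forces this quantity for a toy discrete distribution; the formula and all helper names are illustrative assumptions, not the dissertation's definitions.

```python
from math import log2

def entropy(pmf):
    # Shannon entropy (in bits) of a pmf given as {outcome: probability}
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    # marginal pmf of the coordinates listed in idx
    m = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

def partitions(items):
    # recursively enumerate all set partitions of `items`
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def shared_information(joint, m):
    # minimize over all partitions with at least two blocks
    h_all = entropy(joint)
    best = float("inf")
    for pi in partitions(list(range(m))):
        if len(pi) < 2:
            continue
        val = (sum(entropy(marginal(joint, A)) for A in pi) - h_all) / (len(pi) - 1)
        best = min(best, val)
    return best

# three perfectly correlated fair bits carry exactly 1 bit of shared information
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(shared_information(joint, 3))  # 1.0
```

The brute-force search over partitions is exponential in the number of signals, which is precisely the obstacle addressed by the structural results in thrust (d) below.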

This dissertation has five main thrusts.

(a) Spatial sampling: We provide fundamental bounds for the best single-shot lossy compression rate possible when selecting the most informative signals from a large correlated dataset, and in using the sampled signals to reconstruct the entirety of the signals. We analyze the interplay among spatial sampling rates and the tasks of compression, learning and inference, and the impact of different forms of sampling mechanisms on the tradeoff between sampling rate and the performance of algorithms for these tasks. We consider two broad forms of randomized sampling; in (the more complex) closed-loop sampling, the sampled set depends on the signals, whereas in open-loop sampling it is (conditionally) probabilistically independent of the signals. The latter is useful in an IoT setting, where sensors often lack adequate computational resources to run complex algorithms.
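The open-loop versus closed-loop distinction can be illustrated with a minimal sketch; the specific selection rules below (uniform random indices, and a largest-magnitude heuristic) are illustrative assumptions, not the dissertation's sampling mechanisms.

```python
import random

def open_loop_sample(n, k, rng):
    # open loop: the sampled index set is drawn independently of the
    # signal values, so it can be fixed before any data is observed
    return set(rng.sample(range(n), k))

def closed_loop_sample(signals, k):
    # closed loop: the sampled set depends on the realized signals;
    # here, an illustrative rule keeping the k largest-magnitude entries
    order = sorted(range(len(signals)), key=lambda i: abs(signals[i]), reverse=True)
    return set(order[:k])

signals = [0.1, -5.0, 2.0, 0.0]
print(open_loop_sample(len(signals), 2, random.Random(0)))
print(closed_loop_sample(signals, 2))  # {1, 2}
```

The computational asymmetry is visible even in this toy: the open-loop sampler never inspects the data, which is why it suits resource-limited IoT sensors.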

(b) A chain rule for SI and new models in network common randomness generation: Akin to the chain rule for mutual information, we obtain a chain rule for SI in terms of conditional SI that could potentially lead to more applications of SI in network information theory. Along the way, we treat common randomness generation together with function computation requirements, and obtain new formulations and upper and lower bounds for the combined problem. Also, introducing a theoretical model that involves an all-knowing but rate-limited oracle, we propose algorithms and derive fundamental limits for network common randomness generation under secrecy requirements. Our results are potentially useful in analyzing mobile networks assisted by a satellite that sees the entire network state. We show precisely how assistance from an oracle terminal increases the amount of common randomness that can be generated by the network, and provide both converse upper bounds and achievability schemes.

(c) Applications in network quantum information processing: Inspired by SI and the framework for classical network secret key agreement, we examine new models, algorithms and fundamental limits that allow information-theoretic analysis of multiterminal entanglement distillation in quantum networks. Specifically, we introduce as a starting point a model of quantum terminals sharing noiseless pairwise entanglement. The terminals use local operations and classical communication (LOCC) to share noiseless multiterminal entangled GHZ states; this communication setup is fundamental to current distributed quantum protocols. Our upper bounds for the entanglement distillation rate improve upon those previously known, and can be used to prove optimality of some entanglement distillation protocols.

(d) Computation of SI based on its structural properties: We show that the complex optimization (over all nontrivial partitions of the signals) defining SI takes a particularly simple form for several special network structures. In tree-structured Markov random fields, which are widely used to approximate complex graphs for efficient decoding of Turbo and LDPC codes in 4G, 5G and WiFi networks, we show that SI takes a particularly simple form (a partition into two components). We further show that the proof technique used for trees and the resulting structural insights generalize to wider classes of nontree models. In particular, we introduce a class of graphical models called cliqueylons that are potentially useful in analyzing mobile networks and robot swarms. In each case, the search space of optimization over partitions is significantly reduced, especially for sparse graphs.
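To give a rough sense of the scale of this reduction: the number of set partitions of n signals is the Bell number B(n), while, under the illustrative assumption that the tree result lets one examine only bipartitions obtained by cutting a single edge, a tree on n nodes yields just n - 1 candidates. A small sketch of the comparison:

```python
def bell(n):
    # Bell numbers via the Bell triangle: B(n) counts the set
    # partitions of an n-element set
    row = [1]
    for _ in range(n - 1):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[-1]

# all partitions vs. single-edge-cut bipartitions of a tree on n nodes
for n in (5, 10, 15):
    print(n, bell(n), n - 1)  # e.g. n=15: 1,382,958,545 vs. 14
```

Even for modest networks the gap is enormous, which is what makes the structural characterization practically relevant for sparse graphs.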

(e) Estimating SI from data: We use the aforementioned structural insights to estimate shared information from data when the underlying probability distribution is not known, using new bandit algorithms that take data correlations into account. This is a hitherto underexplored class of bandit problems for sequential estimation that presents an intriguing direction for further research.

Throughout, we emphasize how the conditional independence information encoded by Markov random fields manifests itself in the structural properties of SI and optimal algorithms for the specific tasks.

Audience: Graduate, Faculty
