Ph.D. Dissertation Defense: Aneesh Raghavan

Friday, November 8, 2019
12:00 p.m.-2:00 p.m.
AVW 2168
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense  
 
 
Name: Aneesh Raghavan
 
Committee Members:
Professor John S. Baras, Chair/Advisor 
Professor Prakash Narayan
Professor Armand Makowski 
Professor Eyad Abed
Professor Benjamin Kedem
 
Date: Friday, November 8, 2019, 12:00 p.m. to 2:00 p.m.
 
Location: AVW 2168

 
Abstract:
Networked multi-agent systems have become an integral part of many engineering systems. Collaborative decision making in multi-agent systems poses many challenges. In this thesis, we study the impact of information and its availability to agents on collaborative decision making in multi-agent systems.

We first consider the problem of detecting Markov and Gaussian models from observed data using two observers. We consider two Markov chains and two observers. Each observer observes a different function of the state of the true, unknown Markov chain. Given the observations, the aim is to determine which of the two Markov chains generated them. We formulate a block binary hypothesis testing problem for each observer and show that each observer's decision is a function of its local likelihood ratio. We present a consensus scheme for the observers to agree on their beliefs, and prove the asymptotic convergence of the consensus decision to the true hypothesis. A similar framework is considered for the detection of Gaussian models using two observers. A sequential hypothesis testing problem is formulated for each observer and solved using the local likelihood ratio. We present a consensus scheme that accounts for the random and asymmetric stopping times of the observers. The notion of "value of information" is introduced to understand the "usefulness" of the information exchanged to achieve consensus.
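For a rough flavor of the likelihood-ratio-plus-consensus idea, the following minimal Python sketch (not the algorithm from the thesis) assumes two hypothetical candidate transition matrices P0 and P1, lets each observer see a different segment of the state sequence rather than a different function of the state, and uses a simple averaging rule as the consensus step:

import numpy as np

# Minimal sketch: two candidate Markov chains (transition matrices P0, P1);
# each observer forms a local log-likelihood ratio from its observed state
# sequence, and the observers agree on the sign of the average ratio.

def log_likelihood(states, P, pi):
    ll = np.log(pi[states[0]])
    for s, t in zip(states[:-1], states[1:]):
        ll += np.log(P[s, t])
    return ll

def local_belief(states, P0, P1, pi):
    # Positive favours hypothesis H1, negative favours H0.
    return log_likelihood(states, P1, pi) - log_likelihood(states, P0, pi)

def consensus(beliefs):
    # Simple averaging rule over the exchanged local beliefs.
    return 1 if np.mean(beliefs) > 0 else 0

P0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical chains
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])
pi = np.array([0.5, 0.5])

rng = np.random.default_rng(0)
states = [0]
for _ in range(200):                       # data generated under H1
    states.append(rng.choice(2, p=P1[states[-1]]))
states = np.array(states)

beliefs = [local_belief(states[:100], P0, P1, pi),
           local_belief(states[100:], P0, P1, pi)]
print("consensus decision:", consensus(beliefs))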

Next, we consider the binary hypothesis testing problem with two observers. There are two possible states of nature, and each observer collects observations that are statistically related to the true state. The two observers are assumed to be synchronous. Given the observations, their objective is to collaboratively find the true state of nature. We consider centralized and decentralized approaches to the problem. Each approach has two phases: (1) probability space construction: the true hypothesis is known, and observations are collected to build empirical joint distributions between the hypothesis and the observations; (2) given a new set of observations, hypothesis testing problems are formulated for the observers to find their individual beliefs about the true hypothesis. Consensus schemes for the observers to agree on their beliefs about the true hypothesis are presented. The rate of decay of the probability of error in the centralized approach is compared with the rate of decay of the probability of agreement on the wrong belief in the decentralized approach. Numerical results comparing the two approaches are presented.
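The two-phase structure can be illustrated with a short centralized sketch; the alphabet, correlation model, smoothing, and sample sizes below are assumptions for illustration, not choices from the thesis:

import numpy as np

rng = np.random.default_rng(1)

def empirical_joint(pairs, n_vals):
    # Phase 1: empirical joint distribution of (y1, y2) with additive smoothing.
    counts = np.ones((n_vals, n_vals))
    for y1, y2 in pairs:
        counts[y1, y2] += 1
    return counts / counts.sum()

def centralized_decision(new_pairs, q0, q1):
    # Phase 2: log-likelihood ratio test using the learned joint distributions.
    llr = sum(np.log(q1[y1, y2]) - np.log(q0[y1, y2]) for y1, y2 in new_pairs)
    return 1 if llr > 0 else 0

def sample_pairs(h, n):
    # Hypothetical observation model: under H1 the two observers' measurements
    # are more strongly correlated than under H0.
    y1 = rng.integers(0, 2, size=n)
    flip = rng.random(n) < (0.1 if h == 1 else 0.4)
    y2 = np.where(flip, 1 - y1, y1)
    return list(zip(y1, y2))

q0 = empirical_joint(sample_pairs(0, 5000), 2)   # training with known hypothesis
q1 = empirical_joint(sample_pairs(1, 5000), 2)
print("decision on new data:", centralized_decision(sample_pairs(1, 50), q0, q1))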

Not all propositions from an agent's set of events in a multi-agent system may be simultaneously verifiable. We study the concepts of \textit{event-state-operation structure} and \textit{relationship of incompatibility} from the literature and use them as tools to study the structure of the set of events. We present an example from multi-agent hypothesis testing in which the set of events does not form a Boolean algebra but forms an ortholattice. A possible construction of a 'noncommutative probability space', accounting for \textit{incompatible events} (events that cannot be simultaneously verified), is discussed. As a possible decision-making problem in such a probability space, we consider the binary hypothesis testing problem and present two approaches to it. In the first approach, we represent the available data as measurements modeled via projection-valued measures (PVMs) and recover the results of the underlying detection problem solved using classical probability models. In the second approach, we represent the measurements using positive operator-valued measures (POVMs). We prove that the minimum probability of error achieved in the second approach is the same as in the first.
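For a numerical flavor of minimum-error discrimination in this operator setting, the standard Helstrom-bound calculation below (a toy example not taken from the thesis; the two states and equal priors are hypothetical) computes the minimum probability of error for telling two density matrices apart, which an optimal projective measurement attains:

import numpy as np

def min_error_probability(rho0, rho1, p0=0.5, p1=0.5):
    # Minimum probability of error for discriminating rho0 and rho1:
    # P_e = 0.5 * (1 - || p1*rho1 - p0*rho0 ||_1), attained by projecting onto
    # the positive and negative eigenspaces of p1*rho1 - p0*rho0.
    eigvals = np.linalg.eigvalsh(p1 * rho1 - p0 * rho0)
    return 0.5 * (1.0 - np.sum(np.abs(eigvals)))

# Hypothetical example: two pure states with overlap cos(theta).
theta = np.pi / 6
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
rho0 = np.outer(psi0, psi0)
rho1 = np.outer(psi1, psi1)
print("minimum probability of error:", min_error_probability(rho0, rho1))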

Finally, we consider the binary hypothesis testing problem with learning of empirical distributions. The true distributions of the observations under either hypothesis are unknown, so empirical distributions are estimated from observations. A sequence of detection problems is solved using the sequence of empirical distributions. We show that the information state and optimal detection cost under the empirical distributions converge to the information state and optimal detection cost under the true distributions. Numerical results on the convergence of the optimal detection cost are presented.
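A minimal numerical sketch of this kind of convergence (with a hypothetical four-letter alphabet and made-up true distributions, not data from the thesis) compares the plug-in Bayes detection cost computed from empirical distributions against the cost under the true distributions as the sample size grows:

import numpy as np

rng = np.random.default_rng(2)
support = np.arange(4)
p0_true = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical true distributions
p1_true = np.array([0.1, 0.2, 0.3, 0.4])

def bayes_cost(p0, p1, prior0=0.5):
    # Minimum probability of error for one observation with priors (prior0, 1 - prior0).
    return np.sum(np.minimum(prior0 * p0, (1 - prior0) * p1))

def empirical(dist, n):
    samples = rng.choice(support, size=n, p=dist)
    counts = np.bincount(samples, minlength=len(support)) + 1.0   # smoothing
    return counts / counts.sum()

true_cost = bayes_cost(p0_true, p1_true)
for n in (10, 100, 1000, 10000):
    est_cost = bayes_cost(empirical(p0_true, n), empirical(p1_true, n))
    print(f"n={n:5d}  |cost gap|={abs(est_cost - true_cost):.4f}")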
 
 

Audience: Graduate, Faculty
