ANNOUNCEMENT: Ph.D. Research Proposal Exam
Name: David Hartman
Committee:
Professor John S. Baras (Chair)
Professor Eyad H. Abed
Professor Steven I. Marcus
Date/time: Friday, June 5, 2020 at 10:00 AM
Title: Sensor Scheduling, Decentralized Control, and Reinforcement Learning for Robust Problems
In this proposal, we explore three areas: sensor sampling and scheduling in the Kalman Filter and robust estimation setting; distributed and decentralized robust control; and robust control for unknown dynamics.
Area 1: Sensor Sampling and Scheduling in Kalman and Robust Filtering
Sensor networks used in applications such as environmental monitoring and target tracking face limitations on battery life and bandwidth capacity. Hence, we must use the available resources frugally while minimizing the estimation error. The error criterion can be either the minimum mean square error, as in the Kalman Filter case, or the min-max error, as in the robust setting. Limited sensor resources can be enforced by capping the number of times a sensor is activated; this is the sensor sampling problem. Alternatively, we can cap the number of sensors active at any given time; this is the sensor scheduling problem. Both problems can be formulated as mixed-integer convex programs. The main question is whether the structure of the problem lends itself to more efficient algorithms for solving the optimization.
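To make the scheduling setup concrete, the following is a minimal sketch (not the proposal's method) of Kalman-filter covariance propagation with a cap on active sensors per step, using a greedy heuristic in place of the mixed-integer formulation. All system matrices and the candidate sensor models are hypothetical toy values.

```python
import numpy as np

# Hypothetical toy system: 2 states, 3 candidate sensors, at most k_max active per step.
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # state transition (assumed)
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
H = [np.array([[1.0, 0.0]]),               # candidate sensor observation models
     np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]])]
R = [np.array([[0.1]])] * 3                # measurement noise covariances
k_max = 1                                  # scheduling cap: sensors active per step

P = np.eye(2)                              # initial estimation error covariance
schedule = []
for t in range(20):
    P = A @ P @ A.T + Q                    # prediction step
    # Greedy scheduling: activate the k_max sensors that most reduce trace(P).
    chosen = []
    for _ in range(k_max):
        best, best_P = None, None
        for i in range(len(H)):
            if i in chosen:
                continue
            S = H[i] @ P @ H[i].T + R[i]               # innovation covariance
            K = P @ H[i].T @ np.linalg.inv(S)          # Kalman gain
            P_upd = (np.eye(2) - K @ H[i]) @ P         # covariance update
            if best is None or np.trace(P_upd) < np.trace(best_P):
                best, best_P = i, P_upd
        chosen.append(best)
        P = best_P                          # commit the best sensor's update
    schedule.append(chosen)
```

The greedy rule is only a baseline; the question in the text is whether the exact mixed-integer convex program admits structure that beats such heuristics.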
Area 2: Distributed Estimation and Decentralized Control in Robust Setting
We next investigate a sensor scheduling problem in the setting of distributed robust estimation, where the goal is for the sensors to reach consensus on the state estimate. Furthermore, we investigate a decentralized problem in the setting of robust control. Decentralized, as opposed to distributed, means there is no communication between the agents; in this information pattern, every agent makes its decision independently. We restrict the problem to two kinds of controller feedback solutions: (1) dynamic feedback and (2) state feedback. We also consider two kinds of dynamics: (1) each agent has an independent state, and (2) a single state is shared by all agents. The main question here is whether we can find an LMI solution and/or a coupled Riccati equation solution to the decentralized robust problem.
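For intuition on the consensus goal in the distributed setting, here is a minimal sketch (not the proposal's estimator) of averaging consensus: sensors holding noisy local estimates repeatedly average with their neighbors and converge to a common value. The ring topology and weight matrix are assumptions for illustration.

```python
import numpy as np

# Hypothetical setup: 4 sensors, each with a noisy local estimate of a scalar state.
np.random.seed(0)
n = 4
x_true = 1.0
estimates = x_true + 0.2 * np.random.randn(n)   # local estimates

# Doubly stochastic weight matrix for an assumed ring graph (Metropolis-style weights).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

avg = estimates.mean()
for _ in range(100):
    estimates = W @ estimates                    # one round of neighbor averaging

# With a doubly stochastic W on a connected graph, all entries of `estimates`
# converge to the average of the initial local estimates.
```

The robust distributed problem replaces this plain averaging with scheduled, min-max filtering, but the consensus target is the same.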
Area 3: Robust Control with Unknown Dynamics
The dynamics of a control problem are not always known. Lastly, we investigate discrete-time robust control problems when the dynamics are unknown (the model-free setting). There are two problems to investigate in this part. The first is an LQ zero-sum game in which the matrices of the dynamics are unknown. It is known that the optimal controller and disturbance in such a problem are linear functions of the state and are therefore parameterizable by matrices. We can thus solve a min-max problem in which the variables are restricted to a space of matrices rather than an unrestricted function space, using simulated gradients. The main question here is whether we can prove global convergence when simulated gradients are used instead of exact gradients. The second problem relaxes the LQ assumption, and with it the assumption that the controller and disturbance solutions are parameterizable by matrices. In this case, we search for policies over the controller and disturbance function spaces using approximate dynamic programming. The main question here is whether we can find a set of basis functions for parameterizing the value function that leads to a reasonable solution to the min-max problem.
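To illustrate the simulated-gradient idea, the following is a minimal sketch of zeroth-order policy search on a toy scalar LQR problem (no disturbance player, noiseless rollouts, assumed dynamics); the gradient of the cost with respect to the linear gain is estimated purely from cost evaluations, never from the model.

```python
import numpy as np

# Hypothetical scalar system x' = a*x + b*u with quadratic stage cost q*x^2 + r*u^2.
np.random.seed(1)
a, b, q, r = 0.9, 1.0, 1.0, 0.1
T = 50  # rollout horizon

def cost(k, x0=1.0):
    """Finite-horizon LQ cost under the linear policy u = -k * x."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += q * x**2 + r * u**2
        x = a * x + b * u
    return J

k, lr, delta = 0.0, 5e-3, 0.05
for _ in range(2000):
    d = np.random.choice([-1.0, 1.0])       # random perturbation direction
    # Two-point zeroth-order ("simulated") gradient estimate from rollouts only.
    g = (cost(k + delta * d) - cost(k - delta * d)) / (2 * delta) * d
    k -= lr * g                              # gradient step on the gain
```

For these toy values the gain approaches the Riccati-optimal gain (about 0.82) even though the update never touches a or b; the open question in the text is proving global convergence of this scheme in the zero-sum (min-max) game.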