Ph.D. Research Proposal: Sai Sandeep Damera

Tuesday, March 10, 2026
10:00 a.m.
AVW 1146

 
ANNOUNCEMENT: Ph.D. Research Proposal Exam

 

Name: Sai Sandeep Damera

 

Committee:

Professor John S. Baras (Chair)

Professor Calin Belta

Professor Dinesh Manocha


Date/time: 10:00 a.m. - 11:00 a.m., Tuesday, March 10, 2026

 

Location: AVW 1146

 

Title: Learning-enabled Control for Trusted Autonomy

 

Abstract: Trusted Autonomy requires that robots operating in safety-critical, unstructured environments simultaneously achieve robust performance in high-dimensional state and action spaces, provide verifiable safety guarantees under formal constraints, and actively manage perceptual uncertainty through strategic information gathering. Sampling-based predictive control, particularly Model Predictive Path Integral (MPPI) control, has become a cornerstone of modern robotics because of its ability to handle nonlinear dynamics, non-convex costs, and contact-rich physics without linearization. However, as robotic systems scale in complexity, brute-force sampling fails: the effective sample size decays exponentially with dimensionality, and independent samples collapse onto a single mode in multimodal cost landscapes. Generative Model Predictive Control addresses the dimensionality problem by replacing random Gaussian sampling with learned generative priors, but standard implementations rely on passive priors and heuristic safety filters that lack formal assurance.
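As a concrete illustration of the sampling scheme discussed above, here is a minimal MPPI update on a toy double-integrator reaching task (an illustrative sketch, not the proposal's implementation; the dynamics, cost, and all parameter values are assumptions chosen for the example):

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, K=256, lam=1.0, sigma=0.5, rng=None):
    """One MPPI update: sample K perturbed control sequences, roll them out,
    and re-weight the nominal sequence by exponentiated trajectory cost."""
    if rng is None:
        rng = np.random.default_rng(0)
    H, m = u_nom.shape
    eps = rng.normal(0.0, sigma, size=(K, H, m))   # Gaussian control perturbations
    costs = np.empty(K)
    for k in range(K):
        x, c = x0.copy(), 0.0
        for t in range(H):
            x = dynamics(x, u_nom[t] + eps[k, t])
            c += cost(x)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)       # path-integral importance weights
    w /= w.sum()
    return u_nom + np.einsum("k,khm->hm", w, eps)  # weighted perturbation update

# Toy planar double-integrator: state (px, py, vx, vy), control is acceleration.
dt, goal = 0.1, np.array([1.0, 0.0])
dynamics = lambda x, u: x + dt * np.concatenate([x[2:], u])
cost = lambda x: np.sum((x[:2] - goal) ** 2) + 0.01 * np.sum(x[2:] ** 2)

u = np.zeros((20, 2))   # nominal control sequence over a 20-step horizon
x0 = np.zeros(4)
for _ in range(30):
    u = mppi_step(x0, u, dynamics, cost)
```

The normalized weights `w` also expose the pathology the abstract mentions: the effective sample size `1 / np.sum(w**2)` shrinks toward 1 as dimensionality grows, which is exactly the failure mode that motivates replacing raw Gaussian sampling with learned priors.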

In this proposal, we present a rigorous learning-enabled control framework built on two interlocking pillars. The first, Differentiable Programming for Assured Autonomy, adapts classical assurance mechanisms into end-to-end differentiable components that operate natively within generative control architectures. The key claim is that the mere existence of a differentiable simulator is insufficient; the entire algorithmic stack, including safety monitors, logical specifications, and perception modules, must be differentiable so that physically consistent gradients can steer generative priors toward safety, logical satisfaction, and information gain. The second, Fokker-Planck Unification, recognizes that sampling-based control, generative modeling, and particle-based transport all manipulate probability densities through controlled evolution under the Fokker-Planck equation. This shared density-evolution framework provides a unified mathematical language connecting every method in the proposal.
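For reference, the shared density evolution underlying the second pillar is the standard Fokker-Planck (forward Kolmogorov) equation for a controlled diffusion $dx = f(x,u)\,dt + \sigma(x)\,dW$:

```latex
\frac{\partial \rho}{\partial t}
  = -\nabla \cdot \bigl(\rho\, f(x,u)\bigr)
  + \frac{1}{2}\sum_{i,j}\frac{\partial^{2}}{\partial x_i \,\partial x_j}
    \bigl(D_{ij}(x)\,\rho\bigr),
\qquad D = \sigma\sigma^{\top}.
```

Sampling-based controllers, diffusion-style generative models, and particle transport methods each correspond to a particular choice of drift $f$ and diffusion $D$ steering this density.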

We structure the investigation around five problems that progressively construct this framework. The first two problems make sampling and safety differentiable: gradient-based refinement of MPPI samples with conformally calibrated trust regions (Problem 1), followed by differentiable safety fields that steer a learned generative prior toward collision-free trajectories in a prediction-correction architecture (Problem 2). Problem 3 extends this to complex temporal missions by making Signal Temporal Logic specifications differentiable, enabling logical satisfaction gradients to flow through the physics engine. Problem 4 provides the theoretical backbone: reformulating control as Wasserstein gradient flow over distributions, where kernel-mediated particle interactions prevent mode collapse and maintain multimodal diversity. Finally, Problem 5 closes the perception-action loop by connecting the framework to Active Inference, synthesizing behaviors that simultaneously complete tasks, satisfy constraints, and actively reduce uncertainty through information-seeking.
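The kernel-mediated particle interactions of Problem 4 can be sketched with a Stein variational gradient descent (SVGD) step, one standard instance of a Wasserstein-flow-style particle update (an illustrative stand-in, not the proposal's algorithm; the bimodal target and all parameters are assumptions). The repulsive grad-kernel term is what prevents the mode collapse described above:

```python
import numpy as np

def grad_log_p(x):
    """Score of an equal-weight 1-D Gaussian mixture with modes at -2 and +2."""
    a = np.exp(-0.5 * (x + 2) ** 2)
    b = np.exp(-0.5 * (x - 2) ** 2)
    return (-(x + 2) * a - (x - 2) * b) / (a + b)

def svgd_step(x, step=0.2, h=1.0):
    """One SVGD update: the kernel-smoothed score pulls particles toward high
    density, while the grad-kernel term pushes them apart, maintaining
    multimodal diversity."""
    diff = x[:, None] - x[None, :]        # pairwise differences x_i - x_j
    K = np.exp(-diff ** 2 / (2 * h))      # RBF kernel matrix
    phi = (K @ grad_log_p(x) + (diff * K).sum(axis=1) / h) / len(x)
    return x + step * phi

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.1, size=100)        # all particles start in one cluster
for _ in range(300):
    x = svgd_step(x)
```

Independent gradient ascent from this initialization would send every particle into a single basin; with the kernel coupling, the final particle set covers both modes.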

Collectively, these contributions establish a mathematically coherent and computationally tractable end-to-end differentiable framework for Trusted Autonomy, enabling high-dimensional robots to combine the expressiveness of learned generative priors with the formal guarantees of control theory.

Audience: Graduate, Faculty
