Event
Ph.D. Dissertation Defense: Senthil Hariharan Arul
Tuesday, August 26, 2025
2:00 p.m.
AVW 1146
Souad Nejjar
301 405 8135
snejjar@umd.edu
ANNOUNCEMENT: Ph.D. Dissertation Defense
Name: Senthil Hariharan Arul
Committee:
Professor Dinesh Manocha (Chair)
Professor Pratap Tokekar
Professor Kaiqing Zhang
Professor Yiannis Aloimonos
Professor Huan Xu (Dean's Representative)
Date/time: Tuesday, August 26, 2025 at 2:00 PM
Location: AVW 1146
Title: Safe and Efficient Navigation for Single- and Multi-Robot Autonomy in Complex Environments
Abstract:
Autonomous robots are increasingly deployed in applications such as autonomous driving, warehouse automation, search and rescue, last-mile delivery, and household service robotics. A fundamental requirement across these domains is the ability to navigate reliably and safely while operating alongside other decision-making agents such as humans, pets, and other robots. Achieving this in real-world environments requires algorithms that can operate under sensing uncertainty, adapt to dynamic surroundings, and maintain safety by avoiding collisions with obstacles along the robot's path.
Our research presents multiple novel algorithms addressing both single- and multi-robot navigation. A unifying aspect of these approaches is that, in both domains, robots are independent decision-makers: each robot plans for itself based on its local observations and goal, without centralized coordination. In the multi-robot domain, we present our contributions to decentralized navigation for quadrotor swarms that account for agent dynamics, rotor downwash effects, localization uncertainty, and safety constraints. We also discuss local planners for navigating multiple autonomous ground vehicles in dense scenarios, maintaining safety guarantees while resolving deadlocks and reducing congestion, as well as learning-based approaches that jointly learn navigation policies and selective inter-agent communication strategies to improve navigation. In the single-robot domain, we discuss planners for navigating household environments with narrow corridors, dynamic agents such as pedestrians and pets, and imperfect localization, while reducing deadlocks and maintaining probabilistic safety guarantees. We also present an object goal navigation method that leverages vision–language models to enable robots to interpret natural language commands and locate target objects in cluttered indoor environments.
We evaluate our methods in complex simulation and real-world scenarios with physical robots, observing improvements over state-of-the-art approaches, including up to a 24% increase in success rate for reinforcement learning–based navigation, a 3–6x reduction in collisions in quadrotor swarm navigation, fewer deadlocks, and the maintenance of probabilistic safety guarantees in single-robot navigation. Our novel algorithms take on the order of tens of milliseconds per planning cycle, enabling real-time execution, and our multi-robot methods are evaluated in scenarios with up to 50–100 robots.