Ph.D. Research Proposal Exam: Mohamed Bashir Dafaalla Elnoor

Friday, May 9, 2025
12:00 p.m.
AVW 2328
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Research Proposal Exam

 

Name: Mohamed Bashir Dafaalla Elnoor

 

Committee:

Professor Dinesh Manocha (Chair)

Professor Pratap Tokekar

Professor Kaiqing Zhang

 

Date/time: Friday, May 9, 2025, 12:00 PM – 2:00 PM

Location: Via Zoom

Zoom Link: umd.zoom.us/my/dmanocha

 

Title: Towards Robust and Efficient Multi-modal Perception for Autonomous Navigation

 

Abstract:

Autonomous navigation plays a critical role across a wide range of robotic applications such as logistics, agriculture, hospitals, surveillance, and disaster response. However, when deployed in unstructured and dynamic environments, robots must overcome challenges related to uncertain terrain properties, sensor degradation, and the need for contextual reasoning. We investigate these challenges through a sequence of multimodal perception methods that integrate vision, LiDAR, proprioception, and language models to enable robust and efficient navigation. We begin with ProNav, which estimates terrain traversability using proprioceptive signals to enhance stability and predict failures across vegetated, rocky, and granular terrains. Building on this, we introduce AMCO, which combines visual and proprioceptive data into an adaptive cost mapping framework to guide gait and velocity selection in real time. While large Vision-Language Models (VLMs) offer strong semantic reasoning capabilities, their high computational cost and latency make real-time deployment on robots challenging. To address this, we propose VLM-GroNav, which refines VLM predictions by grounding them in physical interaction, enabling improved path planning across slippery and deformable terrains. Most recently, we developed Vi-LAD, which distills the reasoning abilities of large VLMs into a lightweight, attention-aware model that supports real-time, socially compliant navigation on resource-constrained robots. We deploy and validate these methods across diverse platforms, including Clearpath Husky, Boston Dynamics Spot, and Ghost Vision 60, in a range of challenging environments.

 

Audience: Faculty 


 
