Ph.D. Research Proposal Exam: Ruwanthi Abeysekara
Thursday, February 6, 2025
12:30 p.m.
AVW 1146
Contact: Maria Hoo, 301-405-3681, mch@umd.edu
ANNOUNCEMENT: Ph.D. Research Proposal Exam
Name: Ruwanthi Abeysekara
Committee:
Professor Behtash Babadi (Chair)
Professor Jonathan Z. Simon
Professor Shihab Shamma
Date: February 6, 2025, 12:30 p.m.–1:30 p.m.
Location: AVW 1146
Title: Dynamic Neural Models for Real-Time Auditory and Cognitive Processing in Complex Environments
Abstract:
Understanding how the brain processes information and adapts to real-world challenges is a
central question in neuroscience. This work brings together cutting-edge methods in neural
connectivity analysis, auditory attention decoding, and brain response modeling to address
key open questions in the field.
We start by introducing a computationally efficient method, based on variational inference,
to analyze how neural networks in the brain change over time. This approach helps us uncover
how the brain reconfigures itself to support flexible thinking—like remembering information or
making decisions—when faced with different tasks or stimuli.
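To make the modeling setup concrete, here is a minimal sketch of one common formulation of
time-varying connectivity (the specific parameterization is an assumption for illustration,
not necessarily the exact model in this work): the neural activity x_t evolves under a
connectivity matrix A_t that itself drifts slowly over time,
\[
x_t = A_t\, x_{t-1} + \varepsilon_t, \qquad
\operatorname{vec}(A_t) = \operatorname{vec}(A_{t-1}) + \eta_t,
\]
with Gaussian noise terms \(\varepsilon_t\) and \(\eta_t\). Variational inference then
approximates the posterior over the whole trajectory \(A_{1:T}\) with a tractable family
\(q\) chosen to maximize the evidence lower bound
\[
\mathcal{L}(q) = \mathbb{E}_q\big[\log p(x_{1:T}, A_{1:T})\big]
               - \mathbb{E}_q\big[\log q(A_{1:T})\big],
\]
which is typically far cheaper to optimize than sampling-based posterior inference.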
Building on this, we tackle the challenge of decoding auditory attention in environments
with multiple speakers. By exploring how the accuracy of attention decoding changes with
different decision window lengths, we identify the trade-offs that influence real-time applications.
Using magnetoencephalography (MEG) data, we also evaluate popular decoding algorithms,
revealing their strengths and limitations for real-world use.
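As an illustration of the decision-window trade-off, the sketch below shows one widely used
correlation-based decoding scheme (stimulus reconstruction with a linear backward model); the
least-squares decoder and all names here are illustrative assumptions, not necessarily the
specific algorithms evaluated in this work.

    import numpy as np

    def fit_decoder(meg, envelope):
        # Least-squares backward model: weights mapping MEG channels
        # (meg: samples x channels) to the attended speech envelope.
        return np.linalg.lstsq(meg, envelope, rcond=None)[0]

    def decode_attention(meg, env_a, env_b, decoder, win_len):
        # Classify the attended speaker in consecutive decision windows.
        # Shorter windows give faster decisions but noisier correlations.
        labels = []
        for start in range(0, meg.shape[0] - win_len + 1, win_len):
            sl = slice(start, start + win_len)
            rec = meg[sl] @ decoder  # reconstructed envelope
            r_a = np.corrcoef(rec, env_a[sl])[0, 1]
            r_b = np.corrcoef(rec, env_b[sl])[0, 1]
            labels.append('A' if r_a >= r_b else 'B')
        return labels

Sweeping win_len in such a scheme traces out the accuracy-versus-latency curve that
motivates the trade-off analysis above.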
Lastly, we take initial steps toward addressing the nonstationarity of brain activity in the
“cocktail party problem,” which concerns how the brain selectively focuses on a single sound
source in noisy, multi-speaker environments. To explore this, we extend the widely used temporal
response function (TRF) model by incorporating a Hidden Markov Model (HMM). This new
approach aims to capture the dynamic and switching interactions in auditory neural processing,
including transitions between different network-level stages of attention.
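Schematically (this particular form is an assumption for illustration), the standard TRF
treats the neural response r(t) as a convolution of the stimulus envelope s(t) with a
response kernel h, and the HMM extension lets that kernel switch among a discrete set of
latent states:
\[
r(t) = \sum_{\tau} h_{z(t)}(\tau)\, s(t - \tau) + \varepsilon(t),
\qquad
p\big(z(t) = k \mid z(t-1) = j\big) = P_{jk},
\]
so that each hidden state z(t) indexes its own response function \(h_k\), and the transition
matrix \(P\) captures switches between attentional stages.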
Together, these contributions form a unified framework for studying and decoding neural
activity in dynamic, non-stationary environments. The insights gained could inform advances
in hearing aids and brain-computer interfaces, and deepen our understanding of cognitive
processes.