Ph.D. Dissertation Defense: Xiaomin Wu

Friday, December 9, 2022
12:00 p.m.
IRB 4107
Emily Irwin
301 405 0680
eirwin@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense

Name: Xiaomin Wu

Committee:

Professor Shuvra S. Bhattacharyya, Chair/Advisor
Professor Rong Chen, Co-Chair/Co-Advisor
Professor Manoj Franklin
Professor Behtash Babadi
Professor Aravind Srinivasan, Dean's Representative

Date/Time: Friday, December 9, 2022, 12:00-2:00 p.m.

Location: IRB 4107

Title: Efficient Machine Learning Techniques for Neural Decoding Systems

Abstract: In this thesis, we explore efficient machine learning techniques for calcium-imaging-based neural decoding in two directions: first, techniques for pruning neural network models to reduce computational complexity and memory cost while retaining high accuracy; second, new techniques for converting graph-based input into low-dimensional vector form, which can be processed more efficiently by conventional neural network models.

Neural decoding is an important step in connecting brain activity to behavior
--- e.g., predicting movement based on acquired neural signals. Important
application areas for neural decoding include brain-machine interfaces and
neuromodulation. For application areas such as these, both real-time
processing of neural signals and high-quality information extraction from
the signals. Calcium imaging is a modality that is of increasing interest for
studying brain activity.  Miniature calcium imaging is a neuroimaging modality
that can observe cells in behaving animals with high spatial and temporal
resolution, and with the capability to provide chronic imaging.  Compared to
alternative modalities, calcium imaging has potential to enable improved neural
decoding accuracy.  However, processing calcium images in real-time is a
challenging task as it involves multiple time-consuming stages: neuron
detection, motion correction, and signal extraction. Traditional neural
decoding methods, such as those based on Wiener and Kalman filters, are fast;
however, they are outperformed in terms of accuracy by recently developed deep
neural network (DNN) models. While DNNs provide improved accuracy, they involve high computational complexity, which exacerbates the challenge of real-time processing. Addressing the challenges of high-accuracy, real-time, DNN-based neural decoding is the central objective of this research.

As a first step in addressing these challenges, we have developed the
NeuroGRS system. NeuroGRS is designed to explore design spaces for compact DNN models and optimize the computational complexity of the models subject to
accuracy constraints. GRS, which stands for Greedy inter-layer order with
Random Selection of intra-layer units, is an algorithm that we have developed
for deriving compact DNN structures. We have demonstrated the effectiveness of
GRS to transform DNN models into more compact forms that significantly reduce
processing and storage complexity while retaining high accuracy.
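The NeuroGRS implementation is not reproduced here; as a rough, hypothetical sketch of the GRS idea --- a greedy sweep over layers combined with random selection of intra-layer units, accepting a removal only when the retrained candidate still meets the accuracy constraint --- the toy below uses sets of unit ids as a stand-in for DNN layers. The names `grs_prune`, `toy_accuracy`, and the evaluation model are all illustrative, not the actual NeuroGRS code.

```python
import random

def grs_prune(layers, evaluate, min_acc, seed=0):
    """Toy sketch of greedy inter-layer pruning with random selection
    of intra-layer units.  `layers` is a list of sets of unit ids (a
    stand-in for DNN layers); `evaluate` returns the validation
    accuracy of the (re)trained candidate structure."""
    rng = random.Random(seed)
    layers = [set(layer) for layer in layers]
    improved = True
    while improved:
        improved = False
        # Greedy inter-layer order: sweep the layers, keeping any
        # single-unit removal that satisfies the accuracy constraint.
        for li in range(len(layers)):
            if len(layers[li]) <= 1:
                continue
            unit = rng.choice(sorted(layers[li]))  # random intra-layer pick
            candidate = [set(s) for s in layers]
            candidate[li].discard(unit)
            # Validate (and, in a real system, retrain) the candidate.
            if evaluate(candidate) >= min_acc:
                layers = candidate
                improved = True
    return [len(s) for s in layers]

# Toy accuracy model: stays high until too many units are pruned away.
def toy_accuracy(layers):
    return 0.95 if sum(len(s) for s in layers) >= 40 else 0.50

compact = grs_prune([range(64), range(32)], toy_accuracy, min_acc=0.90)
print(compact)  # layer widths pruned down to the accuracy boundary
```

Note that every candidate removal triggers a validation (and, in a real system, retraining) step; this per-unit cost is what makes the plain greedy approach expensive for large models.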

While NeuroGRS provides useful new capabilities for deriving
compact DNN models subject to accuracy constraints, the approach has
a significant limitation in the context of neural decoding. This limitation
is its lack of scalability to large DNNs. Large DNNs arise naturally
in neural decoding applications when the brain model under investigation
involves a large number of neurons. As the size of the input DNN increases,
NeuroGRS becomes prohibitively expensive in terms of computational
time. To address this limitation, we have performed a detailed experimental analysis of how pruned solutions evolve as GRS operates, and we have used insights from this analysis to develop a new DNN pruning algorithm called Jump GRS (JGRS). JGRS maintains similar levels of model quality --- in terms of predictive accuracy --- as GRS while operating much more efficiently and being able to handle much larger DNNs under reasonable amounts of time and reasonable computational resources. Jump GRS incorporates a mechanism that bypasses (``jumps over'') validation and retraining during carefully-selected iterations of the pruning process. We demonstrate the advantages and improved scalability of JGRS compared to GRS through extensive experiments in the context of DNNs for neural decoding.

We have also developed methods for raising the level of abstraction in the
signal representation used for calcium imaging analysis. As a central part of
this work, we invented the WGEVIA (Weighted Graph Embedding with Vertex
Identity Awareness) algorithm, which enables DNN-based processing of neuron
activity that is represented in the form of microcircuits. In contrast to traditional representations of neural signals, which involve spiking signals, a microcircuit representation is a graphical representation. Each vertex in a microcircuit corresponds to a neuron, and each edge carries a weight that captures information about firing relationships between the neurons associated with the vertices that are incident to the edge.  Our experiments demonstrate that WGEVIA is effective at extracting information from microcircuits. Moreover,
raising the level of abstraction to microcircuit analysis has the potential to
enable more powerful signal extraction under limited processing time and
resources.
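WGEVIA itself is not reproduced here; as an illustrative stand-in for the graph-to-vector conversion it performs, the hypothetical sketch below maps a small weighted microcircuit (vertices are neurons, edge weights capture firing relationships) to a fixed-length vector via weighted-degree statistics, which a conventional DNN could then process. The function name and the chosen features are assumptions for illustration only.

```python
def microcircuit_to_vector(num_neurons, weighted_edges, bins=6):
    """Embed a weighted microcircuit graph into a fixed-length vector.

    Illustrative stand-in (not the WGEVIA algorithm): each vertex is a
    neuron and each edge weight captures a firing relationship; the
    graph is summarized by a weighted-degree histogram plus moments so
    that graphs of any size map to vectors of the same dimension.
    """
    degrees = [0.0] * num_neurons
    for u, v, w in weighted_edges:      # undirected weighted edges
        degrees[u] += w
        degrees[v] += w
    top = max(degrees) if max(degrees) > 0 else 1.0
    hist = [0] * bins
    for d in degrees:                   # histogram of weighted degrees
        hist[min(int(bins * d / top), bins - 1)] += 1
    mean = sum(degrees) / num_neurons
    std = (sum((d - mean) ** 2 for d in degrees) / num_neurons) ** 0.5
    return [h / num_neurons for h in hist] + [mean, std]

# A toy 4-neuron microcircuit with firing-relationship edge weights.
edges = [(0, 1, 0.9), (1, 2, 0.4), (2, 3, 0.7), (0, 3, 0.2)]
vec = microcircuit_to_vector(4, edges)
print(len(vec))  # fixed embedding length: bins + 2 = 8
```

Because the output length is independent of graph size, microcircuits with different neuron counts all land in the same input space, which is what lets a conventional fixed-input DNN consume them.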

Audience: Public, Graduate, Faculty
