Event
Ph.D. Dissertation Defense: Madhura Pathegama
Wednesday, July 2, 2025
3:00 p.m.
AVW 2168
Maria Hoo
301 405 3681
mch@umd.edu
ANNOUNCEMENT: Ph.D. Dissertation Defense
Name: Madhura Pathegama
Committee:
Professor Alexander Barg (Chair)
Professor Prakash Narayan
Professor Sanghamitra Dutta
Professor Perinkulam Krishnaprasad
Professor Radu Balan (Dean's representative)
Date/time: Wednesday, July 2, 2025 at 3:00 p.m.
Location: AVW 2168
Title: CODE SMOOTHING, UNIFORM DISTRIBUTIONS, AND APPLICATIONS
Abstract:
This dissertation investigates the problem of approximating uniform distributions over Hamming spaces, with applications to secure communication, randomness extraction, and computational hardness reductions. The central technique used in this study is code smoothing, which involves applying additive noise to a uniformly random codeword, producing an output distribution that closely approximates uniformity. We characterize the code rate thresholds required to achieve strong uniformity guarantees, quantified using Rényi divergence. Our results show that random linear codes, as well as structured families such as Reed-Muller and LDPC codes, are highly effective at smoothing. Building on these code families, we develop practical and effective coding schemes for use in wiretap channel settings.
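Concretely, code smoothing can be pictured as follows (the notation here is illustrative rather than taken from the dissertation): a uniformly random codeword $X$ of a code $C \subseteq \mathbb{F}_2^n$ is perturbed by an independent noise vector $Z$, for instance with i.i.d. Bernoulli($p$) coordinates, and one asks how close the law of $X + Z$,
$$ P_{X+Z}(y) \;=\; \frac{1}{|C|} \sum_{x \in C} \Pr[Z = y - x], \qquad y \in \mathbb{F}_2^n, $$
is to the uniform distribution $U_n$ on $\mathbb{F}_2^n$, measured for example by the Rényi divergence
$$ D_\alpha\big(P_{X+Z} \,\|\, U_n\big) \;=\; \frac{1}{\alpha - 1} \log \sum_{y \in \mathbb{F}_2^n} P_{X+Z}(y)^{\alpha}\, U_n(y)^{1-\alpha}, \qquad \alpha > 1. $$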
We also study uniformity guarantees achievable through hash functions, using techniques closely related to code smoothing. In particular, we show that certain classes of linear and $k$-universal hash families provide strong uniformity guarantees, quantified by Rényi divergence. This enables the extraction of near-uniform randomness that meets the stringent security requirements of modern cryptography.
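As a point of reference (standard definitions, stated here with illustrative notation), a family $\mathcal{H}$ of functions $h : \{0,1\}^n \to \{0,1\}^m$ is $k$-universal in the strong (i.e., $k$-wise uniform) sense if for all distinct inputs $x_1, \dots, x_k$ and all outputs $y_1, \dots, y_k$,
$$ \Pr_{h \sim \mathcal{H}}\big[ h(x_1) = y_1, \dots, h(x_k) = y_k \big] = 2^{-km}, $$
and a linear family arises, for example, from $h_A(x) = Ax$ with $A$ drawn uniformly from $\mathbb{F}_2^{m \times n}$.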
We further apply smoothing techniques to investigate cryptographic reductions, with a focus on the computational hardness of the Learning Parity with Noise (LPN) problem—a foundational problem in lightweight cryptography. While prior work has explored reductions from the decoding problem to LPN to establish its hardness, our work sharpens this connection by delineating the parameter regimes where such reductions are viable, thus clarifying both the possibilities and the limitations of this approach.
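For context, an instance of LPN with secret $s \in \mathbb{F}_2^n$ and noise rate $\eta \in (0, 1/2)$ provides samples of the form
$$ \big(a_i,\ \langle a_i, s \rangle \oplus e_i\big), \qquad a_i \sim \mathrm{Unif}(\mathbb{F}_2^n), \quad e_i \sim \mathrm{Bernoulli}(\eta), $$
and asks for the recovery of $s$; equivalently, it is the problem of decoding a random linear code from noisy parity samples, which is what makes reductions between decoding and LPN natural to study.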
Together, these contributions demonstrate how smoothing serves as a unifying analytical tool across coding theory, cryptography, and computer science, enabling both practical constructions and theoretical insights.