Information and Coding Theory Seminar  


Abstracts of talks

Spring 2007

2/07 Arya Mazumdar (UMD), Interleavers in Parallel Turbo Codes (Part 1)

ABSTRACT:
Interleavers are the central issue in the design of parallel concatenated Turbo codes. We will consider interleaver design from the point of view of maximizing the minimum distance. We will compare random interleavers, weakly random interleavers, and deterministic interleavers. We will show that a cycle-free factor graph of a Turbo code ensures a high minimum distance. We will prove results and describe open problems related to optimal interleaver construction.
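For concreteness, here is a minimal Python sketch (not from the talk; the block length and the quadratic-permutation-polynomial coefficients are illustrative choices, and QPP maps are only one well-known family of deterministic interleavers) contrasting a uniformly random interleaver with a simple deterministic one:

import random

def random_interleaver(n, seed=None):
    """Return a uniformly random permutation of {0, ..., n-1}."""
    rng = random.Random(seed)
    pi = list(range(n))
    rng.shuffle(pi)
    return pi

def qpp_interleaver(n, f1, f2):
    """Quadratic permutation polynomial map pi(i) = (f1*i + f2*i^2) mod n.
    The coefficients must be chosen so the map is a bijection; we simply check."""
    pi = [(f1 * i + f2 * i * i) % n for i in range(n)]
    assert len(set(pi)) == n, "coefficients do not yield a permutation"
    return pi

def interleave(block, pi):
    """Permute an information block before the second constituent encoder."""
    return [block[p] for p in pi]

if __name__ == "__main__":
    n = 40                                        # illustrative block length
    print(random_interleaver(n, seed=1)[:10])     # a random interleaver
    print(qpp_interleaver(n, f1=3, f2=10)[:10])   # a deterministic interleaver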

2/14 Cancelled because of snow

2/21 Arya Mazumdar (UMD), Interleavers in Parallel Turbo Codes (Part 2)

ABSTRACT:
Same abstract as Part 1 (2/07); this talk continues the material presented there.

2/28 Punarbasu Purkayastha (UMD), Bounds on codes in the NRT-metric space

ABSTRACT:
We will introduce the Niederreiter-Rosenbloom-Tsfasman (NRT) metric space (the ordered Hamming space), and derive some bounds on the size of codes in this space as a function of the minimum distance.
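For orientation, a standard way the space is defined (our notation; the talk may use a different normalization): a vector in F_q^{rn} is viewed as n blocks of length r, the weight of a block (x_1, ..., x_r) is 0 if the block is zero and \max\{ j : x_j \neq 0 \} otherwise, and the NRT weight of the vector is the sum of its block weights. With the induced distance, the Singleton-type bound for a code C of minimum distance d reads

\[
|C| \le q^{\,rn - d + 1}.
\]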

3/7 Wei Kang (UMD), A new data processing inequality and its applications in multi-user information theory (Part 1)

ABSTRACT:
In the study of multi-user information theory, and more specifically the problems of distributed source and channel coding, one often needs to characterize the joint probability distribution of a pair of random variables satisfying an n-letter Markov chain. The exact solution to this problem is intractable. We propose a new data processing inequality, which provides a single-letter necessary condition for the n-letter Markov chain. This result yields outer bounds for various distributed source and channel coding problems, e.g., the multi-terminal rate-distortion region and the multiple access channel with correlated sources.
In this talk, I will focus on the multi-terminal rate-distortion region. I will first introduce the background and the existing single-letter outer bounds of Berger-Tung-Housewright and Wagner-Anantharam. Then I will present our two new outer bounds. The first one is tighter than the previous outer bounds, but it involves an n-letter Markov chain. By applying our new data processing inequality, we obtain the second outer bound, which is in single-letter form and potentially improves on the previous results.
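For context, the classical (single-letter) data processing inequality states that if X \to Y \to Z form a Markov chain, then

\[
I(X;Z) \le I(X;Y).
\]

The inequality proposed in the talk generalizes this idea so as to extract a single-letter necessary condition from an n-letter Markov chain; its precise form is the subject of the talk.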

3/14, 3/21 no seminar

3/28 Wei Kang (UMD), A new data processing inequality and its applications in multi-user information theory (Part 2)

ABSTRACT:
Same abstract as Part 1 (3/7); this talk continues the material presented there.

4/04 Ravi Tandon (UMD)
Dependence balance bounds for single-output two-way channels

ABSTRACT:
If, in a transmission, the inputs of a single-output two-way channel exhibit some interdependence, this dependence must have been created during earlier transmissions. The idea that no more dependence can be consumed than produced is used to obtain new upper bounds on the capacity region of the discrete memoryless single-output two-way channel. With these upper bounds, it is shown that Shannon's inner bound region is the capacity region for channels in a certain class. Analogous results are obtained for the multiple access channel with noiseless feedback.
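As a rough sketch (standard statements, not necessarily in the exact form used in the talk): Shannon's inner bound for the single-output two-way channel with common output Y is the convex closure of the rate pairs satisfying

\[
R_1 \le I(X_1; Y \mid X_2), \qquad R_2 \le I(X_2; Y \mid X_1)
\]

over independent inputs p(x_1)p(x_2), while the "no more dependence consumed than produced" idea is typically formalized by restricting the distributions over which an outer bound is evaluated through a constraint of the form

\[
I(X_1; X_2) \le I(X_1; X_2 \mid Y).
\]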

4/11 Nan Liu (UMD)
Sending a Bi-Variate Gaussian Source over a Gaussian MAC

ABSTRACT:
In this talk, we consider a problem where a memoryless bi-variate Gaussian source is to be transmitted over an additive white Gaussian multiple-access channel with two transmitting terminals and one receiving terminal. The first transmitter sees only the first source component and the second transmitter sees only the second source component. We are interested in the pair of mean squared-error distortions at which the receiving terminal can reproduce each of the source components. We demonstrate that in the symmetric case, below a certain signal-to-noise ratio (SNR) threshold determined by the source correlation, uncoded communication is optimal. For SNRs above this threshold we present outer and inner bounds on the achievable distortions.
References:
[1] A. Lapidoth and S. Tinguely, "Sending a bi-variate Gaussian source over a Gaussian MAC," in Proc. IEEE International Symposium on Information Theory (ISIT), Seattle, WA, USA, 2006.
[2] S. Bross, A. Lapidoth, and S. Tinguely, "Superimposed coded and uncoded transmissions of a Gaussian source over the Gaussian channel," in Proc. IEEE International Symposium on Information Theory (ISIT), Seattle, WA, USA, 2006.
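As a small worked example of the uncoded scheme mentioned above (our normalization, not necessarily the one used in [1]): let S_1, S_2 be zero-mean jointly Gaussian with variance \sigma^2 and correlation \rho, let transmitter k send X_k = \sqrt{P/\sigma^2}\, S_k, and let the receiver observe Y = X_1 + X_2 + Z with Z \sim \mathcal{N}(0, N). The linear MMSE estimate of S_1 from Y then achieves the distortion

\[
D = \sigma^2 \left( 1 - \frac{P(1+\rho)^2}{2P(1+\rho) + N} \right),
\]

and by symmetry the same holds for S_2; the threshold result characterizes when no coded scheme can do better.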

4/25 Prasanth Anthapadmanabhan (UMD)
A Strong Converse for the "Decode-one" Multiple-Access Channel

ABSTRACT:
We introduce the "decode-one" multiple-access channel, in which the decoder is required to output the message of only one of the senders, as opposed to decoding all transmitted messages as in an ordinary multiple-access channel (MAC). In addition, all senders use the same codebook. This type of channel arises in collusion-secure fingerprinting. A strong converse technique due to Ahlswede and Dueck will be used to establish an upper bound on the capacity of the discrete memoryless decode-one MAC with respect to the average probability of error.
REFERENCES:
[1] R. Ahlswede, "An elementary proof of the strong converse theorem for the multiple-access channel," J. Combinatorics, Information and System Sciences, vol. 7, no. 3, pp. 216-230, 1982.
[2] G. Dueck, "The strong converse to the coding theorem for the multiple-access channel," J. Combinatorics, Information and System Sciences, vol. 6, no. 3, pp. 187-196, 1981.
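To fix notation (our formalization of the setup described above, not necessarily the speaker's exact definitions): both senders use a common codebook \{x(1), \dots, x(M)\} of blocklength n, sender k transmits x(w_k), and the decoder outputs a single estimate \hat{W}. The average probability of error is

\[
P_e = \frac{1}{M^2} \sum_{w_1=1}^{M} \sum_{w_2=1}^{M} \Pr\!\left[ \hat{W} \notin \{w_1, w_2\} \,\middle|\, W_1 = w_1,\ W_2 = w_2 \right],
\]

and a strong converse, in the usual sense, asserts that at rates above capacity P_e tends to one rather than merely being bounded away from zero.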

5/02 Guang Han (UMD)
Thresholds of Monotone Properties

ABSTRACT:
A graph property is monotone if it is preserved under edge addition. In this talk, we will prove the existence of a sharp threshold for every monotone property of Bernoulli random graphs. We will examine the Margulis-Russo identity, which is crucial to the proof. Some extensions of this result will also be discussed.
References:
E. Friedgut and G. Kalai, "Every monotone graph property has a sharp threshold," Proc. Amer. Math. Soc. 124 (1996), 2993-3002.
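For reference, the Margulis-Russo identity in its usual form (stated here for orientation): if A is a monotone increasing event determined by independent Bernoulli(p) edge variables, then

\[
\frac{d}{dp} \Pr_p[A] = \sum_{e} \Pr_p\!\left[ e \text{ is pivotal for } A \right],
\]

where an edge e is pivotal if flipping its state changes whether A occurs.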

5/09 Jun Chen (IBM)
On the Duality Between Slepian-Wolf Coding and Channel Coding

ABSTRACT:
Slepian-Wolf coding, also known as distributed lossless source coding, is of fundamental importance for many emerging applications. In this talk, we will discuss the intimate connections between Slepian-Wolf coding and channel coding. We show that there exist two different dualities between Slepian-Wolf coding and channel coding: type-level duality and linear codebook-level duality. Together, these two dualities provide a comprehensive picture of Slepian-Wolf coding and clarify many subtle differences between linear block codes, fixed-rate nonlinear codes, and variable-rate codes. The implications of this work for Slepian-Wolf code design will also be discussed.
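For background (the classical Slepian-Wolf theorem, not a result of the talk): two correlated memoryless sources X_1, X_2 can be compressed separately at rates (R_1, R_2) and reconstructed losslessly at a common decoder if and only if

\[
R_1 \ge H(X_1 \mid X_2), \qquad R_2 \ge H(X_2 \mid X_1), \qquad R_1 + R_2 \ge H(X_1, X_2).
\]

The dualities discussed in the talk concern how codes for this problem relate to channel codes.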
Bio: Jun Chen received the M.S. and Ph.D. degrees in electrical and computer engineering from Cornell University in 2003 and 2005, respectively.
He was a Postdoctoral Research Associate in the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign from 2005 to 2006. He is currently a Josef Raviv Memorial Postdoctoral Fellow at the IBM Thomas J. Watson Research Center.