# 2008 Research Summary

## Structural Gain in Network Communication Problems


Bobak Anthony Nazer and Michael Gastpar

Random coding arguments are at the core of most channel capacity achievability proofs. The basic idea is as follows. First, choose several random variables with an appropriate joint distribution. Then, generate high-dimensional codebooks with entries drawn i.i.d. according to this joint distribution. Finally, analyze the error performance of this strategy in expectation and use this to show the existence of at least one good set of codebooks.
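As a toy illustration of the random-coding idea (not code from the original work), a binary random codebook can be drawn with i.i.d. entries; the message index simply selects a row. The blocklength and rate here are arbitrary assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random codebook: 2^k codewords of blocklength n, entries drawn
# i.i.d. Bernoulli(1/2) (a binary-symmetric toy setting).
k, n = 4, 16
codebook = rng.integers(0, 2, size=(2**k, n))

def encode(message_index):
    # Each message index maps to one randomly drawn codeword.
    return codebook[message_index]
```

In an achievability proof one would then bound the average error probability over this random draw and conclude that at least one realization of the codebook performs at least as well as the average.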

Amazingly, this proof technique takes us quite far in network information theory. It has been used to find the capacity region of the multiple-access channel, stochastically degraded broadcast channel, and physically degraded relay channel to name a few. However, an elegant multiterminal problem developed by Korner and Marton showed that random code constructions are not always sufficient; structured random codes, such as linear or lattice codes, may be required [1]. The essential feature of linear (and lattice) codes is that the sum of two codewords is always a codeword itself. This allows one to perform linear operations on codewords themselves and get a meaningful result.
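The closure property of linear codes can be checked directly in a small sketch. The generator matrix below is a hypothetical example, not one from the cited papers; the point is only that encoding is matrix multiplication over GF(2), so the mod-2 sum of any two codewords is itself a codeword:

```python
import numpy as np

# Generator matrix of a toy binary linear code: k=3 info bits, n=6 code bits.
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def encode(msg):
    # Codeword = msg * G over GF(2).
    return (np.array(msg) @ G) % 2

# Enumerate all 2^3 messages and collect the codeword set.
msgs = np.array([[(i >> j) & 1 for j in range(3)] for i in range(8)])
codewords = {tuple(encode(m)) for m in msgs}

# Closure under addition: (m1 + m2) G = m1 G + m2 G over GF(2),
# so the sum of two codewords is again a codeword.
c1, c2 = encode(msgs[3]), encode(msgs[5])
assert tuple((c1 + c2) % 2) in codewords
```

A random codebook, by contrast, almost never satisfies this property: the entrywise sum of two randomly drawn rows is overwhelmingly unlikely to equal any third row.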

In previous work, we showed that structured codes can be extremely useful for computing functions over multiple-access channels [2]. If the multiple-access channel is a noisy linear function of its inputs, then using the same linear (or lattice) code at each encoder can convert the channel into a reliable computation unit. The computation rate of the system increases in proportion to the number of users, regardless of any correlations between the observations. We call this improvement structural gain, as it comes from taking advantage of the algebraic structure of the problem rather than from choosing an appropriate probability distribution. Similar gains can be shown in non-linear problems as well.
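The mechanism behind this gain can be sketched in an idealized, noiseless version of the setup (the generator matrix and the binary adder channel here are simplifying assumptions, not the Gaussian model analyzed in [2]). Because both encoders use the same linear code, the channel's sum of the two codewords is itself the codeword of the sum of the messages:

```python
import numpy as np

# Both encoders use the SAME toy linear code (shared generator matrix G).
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])

def encode(msg):
    return (np.array(msg) @ G) % 2

m1, m2 = [1, 0], [1, 1]
x1, x2 = encode(m1), encode(m2)

# Idealized noiseless mod-2 adder multiple-access channel.
y = (x1 + x2) % 2

# By linearity, the received word IS the codeword of m1 + m2 (mod 2),
# so the receiver decodes the sum directly, without ever recovering
# m1 and m2 individually.
assert np.array_equal(y, encode((np.array(m1) + np.array(m2)) % 2))
```

With noise added, the receiver runs an ordinary single-user decoder on y; the key point is that it only ever needs to decode one codeword, the one representing the desired function, rather than both messages.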

We have also shown that structured codes can improve performance even when we are only interested in communicating bits across a network [2-4]. For instance, in a network coding scenario, we may only need to send a function of the messages down certain paths in the network. Our tools allow us to compute these functions "on-the-air" directly, rather than having to collect the individual messages and then compute the function [4]. We are currently working on generalizing our coding scheme beyond network coding and distributed computation. In essence, our scheme allows one to harness interference in networks to perform distributed linear processing reliably. Interference is usually perceived as an obstacle to communication, but with structured codes it can become a boon.

Figure 1: In a random codebook, the sum of two codewords is not a codeword. However, in a lattice codebook the sum of two codewords is always a codeword.

- [1] J. Korner and K. Marton, "How to Encode the Modulo-Two Sum of Binary Sources," *IEEE Transactions on Information Theory*, Vol. 25, No. 2, March 1979, pp. 219-221.
- [2] B. Nazer and M. Gastpar, "Computation over Multiple-Access Channels," *IEEE Transactions on Information Theory, Special Issue on Models, Theory, and Codes for Relaying and Cooperation in Communication Networks*, Vol. 53, No. 10, October 2007, pp. 3498-3516.
- [3] B. Nazer and M. Gastpar, "The Case for Structured Random Codes in Network Communication Theorems," *Information Theory Workshop (ITW 2007)*, Lake Tahoe, CA, September 2007.
- [4] B. Nazer and M. Gastpar, "Lattice Coding Increases Multicast Rates for Gaussian Multiple-Access Networks," *Proceedings of the 45th Annual Allerton Conference on Communication, Control and Computation*, Monticello, IL, September 2007.