Abstracts for Kannan Ramchandran

The EECS Research Summary for 2003


List Viterbi Decoding with Continuous Error Detection for Inter-Symbol Interference Channels

Dragan Petrovic
(Professors Borivoje Nikolic and Kannan Ramchandran)

In this project, we propose and analyze the list Viterbi algorithm (LVA) with an arithmetic coding-based continuous error detection (CED) scheme for transmission over inter-symbol interference (ISI) channels. We have shown that the system localizes error occurrences to the end of the transmission and that the proportion of bits where errors are likely to occur decays to zero as the inverse of the number of transmitted bits. We analytically derived an upper bound on the bit error rate (BER) of the LVA-CED system as a function of the redundancy added by the CED code, the number of bits transmitted, and the BER when the standard maximum likelihood detector is used on the same channel. We also analytically derived the number of paths required by the LVA for a target error rate. To show the benefits of these theoretical results in a practical setting, we have applied the system to high-order partial-response magnetic recording channels. Simulations show that this system yields a 2 dB improvement over maximum transition run (MTR) encoded EEPR4 partial response decoding with maximum likelihood sequence detection (EEPRML) at a BER of 2 x 10^-6 in additive white Gaussian noise, and confirm the theoretical predictions.
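
The abstract does not spell out the CED mechanism; one common arithmetic-coding formulation (recalled here as background, not as a claim of this project) reserves a "forbidden" symbol that is never encoded, which makes the redundancy/detection trade-off explicit:

    % Forbidden-symbol continuous error detection (illustrative formulation)
    \[
        r \;=\; -\log_2(1-\epsilon) \quad \text{bits of added redundancy per symbol},
        \qquad
        P_{\text{miss}}(n) \;\approx\; (1-\epsilon)^{n},
    \]
    where $\epsilon$ is the probability mass reserved for the forbidden symbol and
    $P_{\text{miss}}(n)$ is the probability that a decoding error remains undetected
    after $n$ further decoded symbols, so the detection delay shrinks as the added
    redundancy grows.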


Figure 1: Proposed LVA-CED system for magnetic recording

[1]
D. Petrovic, B. Nikolic, and K. Ramchandran, “List Viterbi Decoding with Continuous Error Detection for Magnetic Recording Channels,” Proc. IEEE Global Telecommunications Conf. (GLOBECOM), San Antonio, TX, November 2001.

More information (http://basics.eecs.berkeley.edu/) or

Send mail to the author : (dragan@eecs.berkeley.edu)

Data Funneling: Routing with Aggregation and Compression for Wireless Sensor Networks

Dragan Petrovic and Rahul Shah
(Professors Jan M. Rabaey and Kannan Ramchandran)
(DARPA) F29601-99-1-0169 and (NSF) CCR-0219722

This work considers the problem of minimizing the amount of communication needed to send readings from a set of sensors to a single destination in energy-constrained wireless networks. Substantial gains can be obtained by using packet aggregation techniques while routing. The routing algorithm we developed, called data funneling, allows the network to considerably reduce the amount of energy spent on communication setup and control, an important concern in low data-rate communication. This is achieved by sending only one data stream from a group of sensors to the destination, instead of having an individual data stream from each sensor to the destination. This strategy also decreases the probability of packet collisions on the wireless medium, because the total number of packets is reduced by incorporating the information of many small packets into a few large ones. Additional gains can be realized by efficient, lossless compression of the data, achieved by encoding information in the ordering of the sensors’ packets. This “coding by ordering” scheme suppresses certain readings and encodes their values in the ordering of the remaining packets. Used together, these techniques can more than halve the energy spent in communication.
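
The "coding by ordering" idea can be sketched as follows (function names are hypothetical, and the actual scheme in [1] also decides which readings may be suppressed): with n distinct packets, the choice of one of the n! orderings carries an extra log2(n!) bits at no transmission cost.

    from math import factorial

    def encode_by_ordering(packets, value):
        """Order `packets` so that the permutation index (Lehmer code) equals `value`.
        Packet IDs must be distinct and comparable; any integer in [0, n!) fits."""
        n = len(packets)
        assert 0 <= value < factorial(n)
        pool = sorted(packets)
        ordered = []
        for i in range(n, 0, -1):
            idx, value = divmod(value, factorial(i - 1))
            ordered.append(pool.pop(idx))
        return ordered

    def decode_from_ordering(ordered):
        """Recover the integer encoded in the observed packet order."""
        pool = sorted(ordered)
        value = 0
        for i, p in enumerate(ordered):
            idx = pool.index(p)
            value += idx * factorial(len(ordered) - 1 - i)
            pool.pop(idx)
        return value

    # Example: 6 packets carry log2(6!) ~ 9.5 extra bits in their ordering,
    # e.g. the (suppressed) reading of a seventh sensor.
    order = encode_by_ordering([10, 11, 12, 13, 14, 15], 500)
    assert decode_from_ordering(order) == 500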

[1]
D. Petrovic, R. C. Shah, K. Ramchandran, and J. Rabaey, "Data Funneling: Routing with Aggregation and Compression for Wireless Sensor Networks," IEEE ICC Int. Workshop on Sensor Network Protocols and Applications, 2003 (submitted).

More information (http://bwrc.eecs.berkeley.edu/Research/Pico_Radio/Default.htm) or

Send mail to the author : (dragan@eecs.berkeley.edu)

Capacity of Multiple Antenna Gaussian Broadcast Channel at High SNR

Vinod Prabhakaran
(Professors Kannan Ramchandran and David N. C. Tse)
(DARPA) F30602-00-2-053, MICRO with matching grant from Qualcomm, (NSF) ANI-981456, (NSF) CCR-01-1878, and (NSF) CCR-009607

The downlink problem in a wireless system with multiple transmit antennas and multiple users can be modeled by a multiple input multiple output Gaussian broadcast channel. The capacity region of this channel is still an open problem. Caire and Shamai [1] applied a technique known as "dirty-paper coding," due to Costa [2], to find an achievable region. It was later shown [3] that the maximum sum rate of this region is also the maximum possible sum rate. It is widely believed that this achievable region is in fact optimal. We attempt to show that this conjecture is true at high SNRs. We hope that solving this problem would give insights into the general problem.
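
As background, Costa's "writing on dirty paper" result [2], on which the achievable region of [1] is built, can be stated as follows (a standard result recalled here, not a contribution of this project):

    % Costa's writing-on-dirty-paper result (background)
    \[
        Y = X + S + Z, \qquad \mathbb{E}[X^2] \le P, \qquad Z \sim \mathcal{N}(0, N),
    \]
    with the interference $S$ known non-causally at the transmitter but not at the
    receiver. The capacity is
    \[
        C \;=\; \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right),
    \]
    the same as if $S$ were absent; this is why dirty-paper coding can pre-cancel
    inter-user interference on the multi-antenna Gaussian broadcast channel.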

[1]
G. Caire and S. Shamai, "On the Achievable Throughput of a Multiantenna Gaussian Broadcast Channel," IEEE Trans. Information Theory (submitted).
[2]
M. Costa, "Writing on Dirty Paper," IEEE Trans. Information Theory, Vol. 29, May 1983.
[3]
P. Viswanath and D. Tse, "Sum Capacity of the Multiple Antenna Gaussian Broadcast Channel and Uplink-Downlink Duality," IEEE Trans. Information Theory (submitted).

Send mail to the author : (vinodmp@eecs.berkeley.edu)

High-Resolution Acquisition Methods for Wideband Communication Systems

Irena Maravic1
(Professors Kannan Ramchandran and Martin Vetterli)
Swiss National Science Foundation

Sampling theory has been a topic of extensive research over the past decade, leading to a reexamination of some of the foundations of Shannon's theory and to the development of more general formulations with immediate relevance to signal processing and communications. Recently, it has been shown that it is possible to develop sampling schemes for a large class of non-bandlimited signals, namely certain signals with a finite rate of innovation [1,2]. A common feature of these signals is that they allow a parametric representation with a finite number of degrees of freedom and can be perfectly reconstructed from a finite set of samples. One application of our sampling results that we consider is the problem of timing synchronization and channel estimation in wideband communication systems, such as ultra-wideband and CDMA systems.

Synchronization in wideband communication systems is a crucial task that imposes serious restrictions on system performance. A vast literature has appeared recently addressing both algorithmic and implementation issues of various synchronization techniques, with a clear trend toward eliminating analog components as much as possible and performing all processing digitally. Even though many high-performance schemes have already been proposed, their application in real-time systems is often not feasible due to their high computational complexity. Furthermore, almost all of them operate at the Nyquist sampling rate, which requires very fast and expensive A/D converters and therefore incurs high power consumption. This problem becomes critical in ultra-wideband systems, where digital-oriented solutions require A/D converters operating in the gigahertz range.

We propose a low-complexity timing synchronization method that uses low-rate uniform sampling and well-developed algorithmic solutions. Specifically, we extend some of our sampling results [2] to the problem of multipath delay estimation in wideband channels and develop an algorithm for estimating unknown propagation delays from a low-dimensional subspace of a received signal [1]. Our approach leads to reduced computational requirements and lower power consumption compared to existing techniques, thus allowing for a practical hardware implementation and all the benefits of a digital design.
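
A sketch of the kind of low-rate estimation involved, using the generic annihilating-filter (Prony-type) recipe from the finite-rate-of-innovation literature; this is an illustration under simplifying assumptions, not necessarily the exact algorithm of [1]:

    import numpy as np

    def estimate_delays(fourier_coeffs, K, T=1.0):
        """Annihilating-filter sketch: given consecutive Fourier coefficients
        X[m] = sum_k a_k * exp(-2j*pi*m*t_k/T), m = 0..M-1, with M >= 2K,
        recover the K multipath delays t_k."""
        X = np.asarray(fourier_coeffs)
        M = len(X)
        # Toeplitz system A h = -b whose solution h gives the annihilating
        # filter H(z) = 1 + h1 z^-1 + ... + hK z^-K.
        A = np.array([[X[m - i] for i in range(1, K + 1)] for m in range(K, M)])
        b = X[K:M]
        h, *_ = np.linalg.lstsq(A, -b, rcond=None)
        roots = np.roots(np.concatenate(([1.0], h)))
        # Each root equals exp(-2j*pi*t_k/T); read off the delays.
        t = np.mod(-np.angle(roots) * T / (2 * np.pi), T)
        return np.sort(t)

    # Example: two paths with delays 0.2 and 0.55 (arbitrary units), amplitudes 1 and 0.6.
    t_true, a = np.array([0.2, 0.55]), np.array([1.0, 0.6])
    m = np.arange(6)
    X = (a * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
    print(estimate_delays(X, K=2))   # ~ [0.2, 0.55]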

[1]
I. Maravic, M. Vetterli, and K. Ramchandran, “High Resolution Acquisition Methods for Wideband Communication Systems,” ICASSP, 2003 (submitted).
[2]
I. Maravic and M. Vetterli, “Digital DS-CDMA Receivers Working Below the Chip Rate,” Proc. Asilomar Conf. Signals, Systems and Computers, Pacific Grove, CA, November 2002.
1Visiting Scholar, Swiss Federal Institute of Technology, Lausanne

More information (http://lcavwww.epfl.ch/~irena) or

Send mail to the author : (maravic@eecs.berkeley.edu)

Compression of Encrypted Data

Mark Johnson
(Professors Kannan Ramchandran and David A. Wagner)
Microsoft and Philips

When a data source is to be transmitted across an insecure, bandwidth-constrained channel, the standard solution is to first compress the data and then encrypt it. We examine the problem of reversing the order of these steps, first encrypting and then compressing. Such a scheme could be used in a scenario where the data generator and the compressor are not co-located, and the link between them is vulnerable to eavesdropping.

We encrypt a binary data sequence by forming the mod-2 sum with a pseudo-random binary key. Because the decoder will also have access to the same key, we can compress the encrypted data using distributed source coding principles. The DISCUS framework [1] provides a constructive method for generating codes that approach the Slepian-Wolf bound. By modifying the DISCUS decoding procedure, we should be able to incorporate more complex encryption algorithms into our scheme.
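
A toy version of the encrypt-then-compress chain, using a (7,4) Hamming parity-check matrix in place of the stronger DISCUS codes (names and parameters are illustrative):

    import numpy as np

    # Source X: 7 bits, assumed to contain at most one 1 (highly compressible).
    # Encryption: Y = X xor K with a pseudo-random key K.
    # Compression: send only the 3-bit syndrome s = H Y (mod 2).
    # Decoding: with the key as side information, find the lowest-weight X
    # consistent with the received syndrome.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(y):                     # encrypted bits -> 3-bit syndrome
        return H.dot(y) % 2

    def decompress(s, key):              # syndrome + key -> source estimate
        target = (s + H.dot(key)) % 2    # syndrome that X itself must have
        best = None
        for x in range(2 ** 7):          # brute force; a real decoder uses code structure
            cand = np.array([(x >> i) & 1 for i in range(7)])
            if np.array_equal(H.dot(cand) % 2, target):
                if best is None or cand.sum() < best.sum():
                    best = cand
        return best

    rng = np.random.default_rng(0)
    key = rng.integers(0, 2, 7)
    x = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block
    y = (x + key) % 2                    # encrypt
    s = compress(y)                      # 7 encrypted bits -> 3 transmitted bits
    assert np.array_equal(decompress(s, key), x)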


Figure 1: Secure compression framework

[1]
S. S. Pradhan and K. Ramchandran, "Distributed Source Coding Using Syndromes (DISCUS): Design and Construction," Proc. Data Compression Conference, Snowbird, UT, March 1999.

More information (http://basics.eecs.berkeley.edu/) or

Send mail to the author : (mjohnson@eecs.berkeley.edu)

Distributed Compression of Correlated Audio Sources

Abhik Majumdar
(Professor Kannan Ramchandran)
Intel Corporation

We address the problem of compressing audio sources that are noisy filtered versions of the same audio signal. Exploiting the correlations between the remote sources will lead to better quality while maintaining the same transmission rate, as compared to a scheme that neglects these correlations. The objective is to develop algorithms for distributed compression of these correlated audio sources to attempt to achieve the gains predicted in theory [1]. The algorithms are based on the distributed source coding using syndromes (DISCUS) [2] framework and incorporate the use of perceptual masks, which are common in the compression of audio signals. We are currently in the process of implementing our algorithm. Initial results have been very encouraging.

We expect our algorithms to operate in a very power- and bandwidth-constrained environment. As such, our algorithms are designed to operate in medium to low bit rate regimes. Furthermore, we allow little or no communication among the sensors. Since the sensors may also have stringent power requirements and low computing power, the algorithms need to have sufficiently low complexity and also need to be optimized for the specific architectures of the audio sensors.


Figure 1: System set-up

[1]
A. D. Wyner and J. Ziv, "The Rate Distortion Function for Source Coding with Side Information at the Decoder," IEEE Trans. Information Theory, Vol. 22, January 1976.
[2]
S. S. Pradhan and K. Ramchandran, "Distributed Source Coding Using Syndromes (DISCUS): Design and Construction," Proc. IEEE Data Compression Conf., Snowbird, UT, March 1999.

More information (http://www.cs.berkeley.edu/~abhik) or

Send mail to the author : (abhik@eecs.berkeley.edu)

Multimedia Distribution over Wireless Networks

Abhik Majumdar
(Professor Kannan Ramchandran)
Intel

We consider the problem of seamless distribution of multimedia data over high-bandwidth wireless networks. Specifically, we focus on the popular IEEE 802.11b wireless LAN. In addition to wireless channel related challenges, such as fluctuations in channel quality and high bit error rates, there is also the problem of heterogeneity among receivers, since each user will have different channel conditions, power limitations, processing capabilities, etc. We investigate novel algorithms for real-time multimedia transmission in such scenarios. This problem is complicated by the close interaction of system components, including source coding, channel coding, networking, and the transport layer. Here, we address an integrated, end-to-end implementation of a multimedia dissemination system, rather than optimizing a component in isolation. To tackle this problem, we use progressive source coding together with the unequal error protection ideas presented in [1,2] to make the video bit stream insensitive to the position of packet loss. For the multicast case, we formulate the problem as an optimization of a maximum-regret cost function across the multicast user space [3]. We have developed a modular server that runs on a Linux platform and presents a transparent interface, enabling it to support application-layer error resilience with a wide range of video coders and transport protocols. We have successfully integrated the server with a real-time version of the packetization and transcoding ideas presented in [2]. We multicast the video stream to receivers with different channel characteristics and observe graceful quality degradation with variations in packet loss rate and bandwidth. The end-to-end system operates in the 3 Mb/s regime and can be configured to run at higher rates. For details please refer to [3].

[1]
A. Albanese, J. Blomer, J. Edmonds, M. Luby, and M. Sudan, "Priority Encoded Transmission," IEEE Trans. Information Theory, November 1996.
[2]
K.-W. Lee, R. Puri, T.-E. Kim, K. Ramchandran, and V. Bhargavan, "An Integrated Source Coding and Congestion Control Framework for Video Streaming in the Internet," Proc. INFOCOM, March 2000.
[3]
A. Majumdar, D. G. Sachs, I. Kozintsev, K. Ramchandran, and M. Yeung, "Multicast and Unicast Real-Time Video Streaming over Wireless LANs," IEEE Trans. Circuits and Systems for Video Technology, Vol. 12, June 2002.

More information (http://www.cs.berkeley.edu/~abhik) or

Send mail to the author : (abhik@eecs.berkeley.edu)

VISDOM: Video Streaming Using Distributed Encoding over Multiple Servers

Abhik Majumdar and Rohit Puri1
(Professor Kannan Ramchandran)
Intel Corporation and MICRO

In this project, we address the problem of streaming real-time video from multiple servers in a distributed manner. We invoke a scalable video encoding format (e.g., H.263+, MPEG-4 FGS (fine granular scalability)) and use a network-friendly distributed streaming algorithm. The interface between the video content and the network is provided by a distributed packetization strategy. This packetization approach uses an FEC-based multiple description coding framework that is dynamically matched, in a rate-distortion efficient manner, to both the video source content and the network channel conditions.
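
A toy packetization in the spirit of FEC-based multiple description coding, with n = 3 descriptions (the specific codes, repetition plus single parity, and the function names are illustrative choices, not VISDOM's actual packetizer): any k received descriptions recover the k most important layers of the scalable bitstream.

    # Layer 1 (most important) is protected by a (3,1) repetition code,
    # layer 2 by a (3,2) single-parity code, layer 3 gets no protection.
    def packetize(layer1, layer2a, layer2b, layer3a, layer3b, layer3c):
        p2 = layer2a ^ layer2b                      # XOR parity for layer 2
        return [(layer1, layer2a, layer3a),         # description 1
                (layer1, layer2b, layer3b),         # description 2
                (layer1, p2,      layer3c)]         # description 3

    def recover(received):                          # dict: description index -> packet
        layers = {1: None, 2: None, 3: None}
        k = len(received)
        any_pkt = next(iter(received.values()))
        layers[1] = any_pkt[0]                      # layer 1 is in every description
        if k >= 2:                                  # layer 2 from any two descriptions
            if 1 in received and 2 in received:
                layers[2] = (received[1][1], received[2][1])
            elif 1 in received:
                layers[2] = (received[1][1], received[1][1] ^ received[3][1])
            else:
                layers[2] = (received[2][1] ^ received[3][1], received[2][1])
        if k == 3:                                  # layer 3 only if everything arrives
            layers[3] = tuple(received[i][2] for i in (1, 2, 3))
        return layers

    pkts = packetize(0xA, 0x1, 0x2, 0x3, 0x4, 0x5)
    print(recover({1: pkts[0], 3: pkts[2]}))        # any 2 packets -> layers 1 and 2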

Due to the diversity effect of multiple servers, our distributed algorithm is immune to single points of failure, provides natural load-balancing on the multiple servers, and provides a graceful degradation in quality with an increase in multi-server load or a decrease in network throughput.

We have developed an algorithm to deliver a nearly constant perceptual video quality to the client by streaming simultaneously from multiple servers leading to a satisfying user experience. The algorithm accomplishes this task while minimizing the net aggregate bandwidth used from all the servers put together. The input video sequence is assumed to be a scalable video bit-stream. Our solution is inherently distributed, so that the servers need not communicate for encoding purposes. Synchronization among the servers is achieved through the receiver.

We have implemented our distributed video streaming algorithm and are currently running tests over the Internet. We are in the process of developing solutions for multicast. For details please refer to [1].


Figure 1: Block diagram of the VISDOM system

[1]
A. Majumdar, R. Puri, and K. Ramchandran, "Distributed Multimedia Transmission from Multiple Servers," Int. Conf. Image Processing, Rochester, NY, 2002.
1Postdoctoral Researcher

More information (http://www.cs.berkeley.edu/~abhik) or

Send mail to the author : (abhik@eecs.berkeley.edu)

Denoising via Recursive Wavelet Thresholding

Alyson Fletcher
(Professor Kannan Ramchandran)
Intel and (NSF) CCR 0096071

Wavelet thresholding has been a popular technique for denoising since the seminal work by Donoho and Johnstone [1]. The basic principle of wavelet thresholding is to identify and zero out wavelet coefficients of a signal which are likely to contain mostly noise. By preserving the most significant coefficients, wavelet thresholding preserves important highpass features of a signal such as discontinuities. This property is useful, for example, in image denoising to maintain the sharpness of the edges in an image [2].

It has long been recognized that the wavelet transform is only periodically shift-invariant, so denoising by wavelet thresholding is itself only periodically shift-invariant. Thus, different choices of shifting, denoising, and shifting back give different denoised estimates, and there is no simple way to choose the best from among them. Coifman and Donoho [3] introduced cycle spinning, a technique that estimates the true signal as the linear average of the individual estimates obtained by wavelet-thresholding translated versions of the noisy signal.

In this work, it is demonstrated that such an average can be dramatically improved upon [4]. In its conceptually simplest form, the proposed algorithm applies wavelet thresholding recursively by repeatedly translating the current estimate, denoising it via basic wavelet thresholding, and translating back. (Significant computational improvements are possible.) Exploiting the convergence properties of projections, the proposed algorithm can be regarded as a sequence of denoising projections that converge to the projection of the original noisy signal onto a small subspace containing the true signal. Certain local and global convergence properties are proven, and simulations on piecewise polynomial signals show marked improvement over both basic wavelet thresholding and standard cycle spinning. Figure 1 shows the improvement in denoising typically provided by our algorithm when the true signal is a piecewise polynomial. See [4] for details. Further work involves investigating extensions to sets of wavelet bases and sets of observations.
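
A minimal sketch of the recursive shift-threshold-unshift idea, using the PyWavelets library (the wavelet, threshold value, and number of passes are illustrative choices, not those of [4]):

    import numpy as np
    import pywt

    def wavelet_threshold(x, wavelet="db4", thresh=0.5):
        """Basic (hard) wavelet thresholding of a 1D signal."""
        coeffs = pywt.wavedec(x, wavelet, mode="periodization")
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet, mode="periodization")

    def recursive_cycle_spin(noisy, n_passes=20, wavelet="db4", thresh=0.5):
        """Recursive cycle spinning sketch: repeatedly shift the current estimate,
        denoise it, and shift back, so each pass acts like a further projection
        instead of the estimates simply being averaged."""
        est = np.asarray(noisy, dtype=float)
        n = len(est)
        for k in range(n_passes):
            shift = k % n
            shifted = np.roll(est, shift)
            denoised = wavelet_threshold(shifted, wavelet, thresh)
            est = np.roll(denoised, -shift)
        return est

    # Example: piecewise-constant signal in Gaussian noise.
    rng = np.random.default_rng(1)
    clean = np.concatenate([np.zeros(128), np.ones(128), 0.3 * np.ones(256)])
    noisy = clean + 0.2 * rng.standard_normal(clean.size)
    print(np.mean((recursive_cycle_spin(noisy) - clean) ** 2),
          np.mean((wavelet_threshold(noisy) - clean) ** 2))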


Figure 1: Piecewise polynomial denoising results

[1]
D. L. Donoho and I. M. Johnstone, "Ideal Spatial Adaptation via Wavelet Shrinkage," Biometrika, Vol. 81, 1994.
[2]
Y. Xu, J. B. Weaver, D. M. Healy, Jr., and J. Lu, "Wavelet Transform Domain Filters: A Spatially Selective Noise Filtration Technique," IEEE Trans. Image Processing, Vol. 3, No. 6, November 1994.
[3]
R. R. Coifman and D. L. Donoho, "Translation-Invariant De-Noising," ed. A. Antoniadis and G. Oppenheim, Wavelets and Statistics, Springer Lecture Notes in Statistics, Vol. 103, Springer-Verlag, New York, 1995.
[4]
A. K. Fletcher, K. Ramchandran, and V. K. Goyal, "Wavelet Denoising by Recursive Cycle Spinning," Proc. IEEE Int. Conf. Image Processing, Rochester, NY, September 2002.

More information (http://www.eecs.berkeley.edu/~alyson) or

Send mail to the author : (alyson@eecs.berkeley.edu)

Multimedia Error Concealment by Projection onto Convex Sets (POCS)-Based Methods

Andrew Cheng1 and Rohit Puri2
(Professor Kannan Ramchandran)
Intel

In this work we present an error concealment scheme for damaged spatial blocks in image/video sequences. A promising error concealment approach in this scenario is based on the projection onto convex sets (POCS) method [1]. If the two convex sets intersect, the algorithm guarantees a reconstructed block that is smoothly connected with the adjacent blocks without altering the correctly received adjacent blocks. If the two sets do not intersect, the algorithm oscillates between two points, each of which comes close to satisfying both constraints. The POCS method has the additional benefit of edge continuity and preservation via edge detection and adaptive filters. However, one shortcoming of the algorithm is that it assumes a damaged block is surrounded by undamaged blocks. In this work we deal with the case where damaged blocks are often contiguous or located on the boundary of the image. This has potential application in some recently proposed low-complexity video coding algorithms [2,3] that rely on the coding-with-side-information approach [4] to achieve compression, as opposed to conventional motion compensation based approaches. We seek to generalize the POCS algorithm to incorporate these cases. Furthermore, we are investigating various methods of providing an initial estimate of the missing block for the POCS algorithm. Simple averaging techniques have been remarkably successful for individual missing blocks, but more elaborate methods are needed as the number of contiguous missing blocks increases.
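
A minimal sketch of the alternating projections, assuming a generic smoothness constraint (the set of spatially bandlimited images) rather than the edge-adaptive filters of [1]; function names and parameters are illustrative:

    import numpy as np

    def pocs_conceal(image, missing_mask, n_iters=50, keep_frac=0.25):
        """Alternate projections between (a) the set of images that agree with the
        correctly received pixels and (b) the set of bandlimited images
        (ideal low-pass in the DFT domain)."""
        est = image.copy().astype(float)
        est[missing_mask] = image[~missing_mask].mean()   # crude initial estimate
        h, w = image.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        lp = (np.abs(fy) <= keep_frac / 2) & (np.abs(fx) <= keep_frac / 2)
        for _ in range(n_iters):
            # Projection onto the bandlimited set: zero out high frequencies.
            est = np.real(np.fft.ifft2(np.fft.fft2(est) * lp))
            # Projection onto the data-consistency set: restore the known pixels.
            est[~missing_mask] = image[~missing_mask]
        return est

    # Example: a smooth ramp image with an 8x8 block lost in the middle.
    y, x = np.mgrid[0:32, 0:32]
    img = (x + y) / 62.0
    mask = np.zeros_like(img, dtype=bool)
    mask[12:20, 12:20] = True
    recon = pocs_conceal(img, mask)
    print(np.abs(recon[mask] - img[mask]).max())   # worst-case error in the concealed block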

[1]
H. Sun and W. Kwok, “Concealment of Damaged Block Transform Coded Images Using Projections onto Convex Sets,” IEEE Trans. Image Processing, Vol. 4, No. 4, April 1995.
[2]
R. Puri and K. Ramchandran, “PRISM: A New Robust Video Coding Architecture based on Distributed Compression Principles,” Allerton Conf. Communication, Control, and Computing, Allerton, IL, October 2002.
[3]
A. Aaron and B. Girod, “Wyner-Ziv Coding of Motion Video,” Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, November 2002.
[4]
D. Slepian and J. K. Wolf, “Noiseless Coding of Correlated Information Sources,” IEEE Trans. Information Theory, Vol. 19, July 1973.
1Undergraduate (EECS)
2Postdoctoral Researcher

Send mail to the author : (drewc@eecs.berkeley.edu)

Distributed Sampling for Sensor Networks

Animesh Kumar and Prakash Ishwar1
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

In the context of bandlimited signals, there are two basic ways to sample a signal. One is the PCM method, in which the sampling rate is slightly above the Nyquist rate and each sample is quantized with a precise A/D converter. To reduce this cost, there are schemes that use a one-bit A/D converter with oversampling. Recently, it was shown [1,2] that the oversampling and PCM methods achieve similar accuracy at the same bit rate. (PCM accuracy is so far the best known result.) This indicates that one should be able to trade off oversampling against A/D precision. We have some partial results in this area. First, the dither-based schemes of [1] were generalized to trade off average oversampling against the accuracy of the A/D converter. Moreover, the one-bit dither method and the PCM method turn out to be special cases of our scheme, so it also connects two seemingly unrelated approaches. We also have a distributed and energy-efficient algorithm for sampling a one-dimensional bandlimited field.
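
A toy illustration of the precision-versus-oversampling trade-off; this simple random-dither scheme is neither the dither-based construction of [1] nor the generalization studied in this project:

    import numpy as np

    # With subtractive dither uniform on [-1, 1], a single comparator output
    # sign(x + d) has mean x for |x| <= 1, so low-pass filtering the one-bit
    # stream recovers the signal, and the error shrinks as the rate grows.
    def one_bit_dithered_adc(x, rng):
        dither = rng.uniform(-1, 1, size=x.size)
        return np.sign(x + dither)                      # one bit per sample

    rng = np.random.default_rng(0)
    signal_fn = lambda t: 0.8 * np.sin(2 * np.pi * t)   # bandlimited test signal, |x| <= 1
    for oversampling in (64, 1024):
        t = np.arange(0, 1, 1.0 / oversampling)
        bits = one_bit_dithered_adc(signal_fn(t), rng)
        win = oversampling // 16                        # moving-average low-pass filter
        est = np.convolve(bits, np.ones(win) / win, mode="same")
        print(oversampling, np.mean((est - signal_fn(t)) ** 2))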

A further related problem that we are pursuing is the distributed sampling of signals with a finite rate of innovation [3]. This requires selecting suitable models, developing sampling schemes and algorithms, and analyzing them. Our main goals are high-quality reconstruction and low energy consumption.


Figure 1: An example of conservation of bits principle

[1]
Z. Cvetkovic and I. Daubechies, "Single-Bit Oversampled A/D Conversion with Exponential Accuracy in the Bit-Rate," Proc. Data Compression Conf., Snowbird, UT, March 2000.
[2]
Z. Cvetkovic, I. Daubechies, and B. F. Logan, "Interpolation of Bandlimited Functions from Quantized Irregular Samples," Proc. Data Compression Conf., Snowbird, UT, April 2002.
[3]
M. Vetterli, P. Marziliano, and T. Blu, "Sampling Signals with Finite Rate of Innovation," IEEE Trans. Signal Processing, Vol. 50, No. 6, June 2002.
1Postdoctoral Researcher

Send mail to the author : (animesh@eecs.berkeley.edu)

LDPC Codes Can Approach the Slepian-Wolf Bound for General Binary Sources

Daniel Schonberg and Sandeep Pradhan1
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

We propose a constructive framework for distributed lossless source coding of two binary sources that have arbitrary correlation structures (Figure 1), i.e., not just symmetric correlation structures of the form Y = X + N, where X and N are independent. Our proposed framework accommodates the important special case of the absence of any correlation between the two sources, wherein it becomes an entropy coding scheme for a single source (Figure 2). This framework is obtained by combining low-density parity-check (LDPC) codes with the distributed source coding using syndromes (DISCUS) framework of [1]. The combined algorithm is powerful enough to approach the Slepian-Wolf bound for two memoryless binary sources and the entropy rate for a single memoryless binary source.


Figure 1: Block diagram of symmetric DISCUS. Two sources are compressed independently and reconstructed at the receiver.

Figure 2: Symmetric DISCUS construction used for lossless coding, or in the special case of zero correlation between sources.

[1]
S. S. Pradhan and K. Ramchandran, "Distributed Source Coding Using Syndromes (DISCUS): Design and Construction," Proc. IEEE Data Compression Conf., Snowbird, UT, March 1999.
1Professor, University of Michigan

Send mail to the author : (dschonbe@eecs.berkeley.edu)

Tracking and Exploiting Correlations in Dense Sensor Networks

Dragan Petrovic and Jim Chou
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538, (DARPA) F29601-99-1-0169, and (NSF) CCR-0219722

We propose a novel approach to reducing energy consumption in sensor networks using a distributed adaptive signal processing framework and efficient algorithms. While the topic of energy-aware routing to alleviate energy consumption in sensor networks has received attention recently, in this project we propose an approach that is orthogonal to previous methods. Specifically, we propose a distributed way of continuously exploiting existing correlations in sensor data, based on adaptive signal processing and distributed source coding principles. Our approach enables sensor nodes to blindly compress their readings with respect to one another, without the need for explicit and energy-expensive inter-sensor communication to effect this compression. Furthermore, the distributed algorithm used by each sensor node is low in complexity and easy to implement, while an adaptive filtering framework is used to continuously learn the relevant correlation structures in the sensor data. Our simulations show the power of the proposed algorithms, revealing their potential to effect significant energy savings (15-40%) for typical sensor data corresponding to a multitude of sensor modalities.
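
The blind-compression idea can be sketched as follows (a simplified modulo/coset version; the real system uses channel-coded cosets and an adaptive filter to track the correlation, and the names below are illustrative):

    import numpy as np

    # A sensor reading x is sent as x mod M, where M is chosen just large enough
    # that the decoder's prediction y is within M/2 of x; the decoder then picks
    # the unique value congruent to the received index that is closest to y.
    def encode(x, M):
        return x % M                              # ceil(log2 M) bits instead of full precision

    def decode(idx, y_pred, M):
        # Closest value to the prediction y_pred that is congruent to idx (mod M).
        k = int(np.round((y_pred - idx) / M))
        return int(idx + M * k)

    rng = np.random.default_rng(2)
    x_prev = 500
    max_err = 4                                   # tracked bound on |x - prediction|
    for _ in range(5):
        x = x_prev + rng.integers(-max_err, max_err + 1)   # slowly varying reading
        y_pred = x_prev                           # decoder predicts from the past reading
        M = 2 * max_err + 1                       # coset size from the tracked correlation
        idx = encode(x, M)
        x_hat = decode(idx, y_pred, M)
        assert x_hat == x                         # exact recovery from only ceil(log2 M) bits
        x_prev = x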


Figure 1: Distributed source coding setup: node with reading X must compress the reading to close to H(X|Y) without knowing Y

Figure 2: Many sensor nodes reporting to a data-gathering node are clustered according to the correlations of their readings.

[1]
J. Chou, D. Petrovic, and K. Ramchandran, "Tracking and Exploiting Correlations in Dense Sensor Networks," Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, November 2002.

More information (http://basics.eecs.berkeley.edu/) or

Send mail to the author : (dragan@eecs.berkeley.edu)

Audio Data Hiding with Application to Surround Sound

Jim Chou, Dan Sachs1, Doug Jones2, William Wu3, and Hansen Bow4
(Professor Kannan Ramchandran)

There has been a lot of interest recently in applying channel coding with side information (CCSI) concepts to data hiding. Part of the interest stems from the fact that information theoretic bounds can be derived for such systems. In this project, we model the audio data embedding problem as a parallel CCSI problem. This is done by dividing the audio spectrum into frequency bins and then treating each bin as a separate CCSI channel. A perceptual mask is derived from the audio signal to determine the amount of power to use in each channel. It acts as a "water-filling" formula by determining the amount of distortion that can be introduced into each frequency bin without introducing audible distortion. As a result, our data embedding scheme will be imperceptible to the human ear. We will show in this project how a variable-rate trellis construction can be used in conjunction with a perceptual mask to embed data. We will also show how the perceptual mask can be seamlessly conveyed from encoder to decoder. An exciting application for our audio data embedding solution is to embed data within the audio signal that will enable surround sound at the receiver. The resulting surround sound system will be better than existing surround sound systems that are based on manipulating two stereo channels to derive the other surround channels.
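
A scalar quantization-index-modulation sketch of the embedding step (the project's actual construction is a variable-rate trellis code; the perceptual-mask values and function names below are made up for illustration):

    import numpy as np

    # Each spectral coefficient is quantized onto one of two interleaved lattices,
    # chosen by the bit to embed; the step size comes from the perceptual mask, so
    # bins with larger masking thresholds absorb more embedding distortion.
    def embed(coeffs, bits, mask):
        delta = mask                                    # per-bin quantization step
        offsets = np.where(bits == 1, delta / 2, 0.0)   # bit selects the coset
        return np.round((coeffs - offsets) / delta) * delta + offsets

    def extract(received, mask):
        delta = mask
        d0 = np.abs(received - np.round(received / delta) * delta)
        shifted = received - delta / 2
        d1 = np.abs(shifted - np.round(shifted / delta) * delta)
        return (d1 < d0).astype(int)                    # nearer coset wins

    rng = np.random.default_rng(3)
    coeffs = rng.normal(0, 1, 16)                       # stand-in spectral coefficients
    mask = np.full(16, 0.2)                             # stand-in perceptual mask
    bits = rng.integers(0, 2, 16)
    marked = embed(coeffs, bits, mask)
    noisy = marked + rng.normal(0, 0.01, 16)            # mild channel noise
    errors = int(np.sum(extract(noisy, mask) != bits))
    print(errors, "bit errors out of", bits.size)       # expect 0 at this noise level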

[1]
J. Chou and K. Ramchandran, "Robust Audio Data Hiding for Noisy Channels," Proc. ICASSP, Salt Lake City, UT, May 2001.
1Visiting Scholar, University of Illinois at Urbana-Champaign
2Visiting Professor, University of Illinois at Urbana-Champaign
3Undergraduate (EECS), University of California at Berkeley
4Undergraduate (EECS), University of California at Berkeley

Send mail to the author : (jimchou@eecs.berkeley.edu)

Turbo Coded Trellis-based Constructions for Channel Coding with Side Information

Jim Chou and S. Sandeep Pradhan1
(Professor Kannan Ramchandran)
(NSF) CCR-0208883

Many current applications, such as data hiding and watermarking, can be posed as the problem of channel coding with side information. There has been considerable interest in designing codes that try to attain the theoretical capacity of this problem. In order to achieve capacity, one should choose a powerful channel codebook that partitions into powerful source codebooks; the data to be embedded index the source codebook partition. The constructions that exist in the literature, however, are typically based on powerful channel codebooks and weak source codebooks, and hence remain at a considerable gap from capacity. In this project, we present two methods of construction that are based on a powerful channel codebook (i.e., turbo codes) and powerful source codebook partitions (i.e., trellis coded quantization) to try to bridge the gap to capacity.

1Visiting Professor, University of Michigan

Send mail to the author : (jimchou@eecs.berkeley.edu)

Connexions: DSP Education for a Networked World

Julius Kusuma1 and Richard G. Baraniuk2
(Professor Kannan Ramchandran)

Connexions is a new approach to authoring, teaching, and learning that aims to fully exploit modern information technology. Available free of charge to anyone under open-content and open-source licenses, Connexions offers custom-tailored, current course material, is adaptable to a wide range of learning styles, and encourages students to explore the links among courses and disciplines.

In contrast to the traditional process of textbook writing and publishing, Connexions fosters world-wide, cross-institution communities of authors, instructors, and students, who collaboratively and dynamically fashion "modules" from which the courses are constructed.

We believe the ideas and philosophy embodied by Connexions have the potential to be pedagogically sound, both time- and cost-efficient, and fun.

[1]
R. G. Baraniuk, J. Kusuma, K. Ramchandran, et al., "Connexions: DSP Education for a Networked World," Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Orlando, FL, May 2002.
1MIT
2Professor, Rice University

More information (http://cnx.rice.edu) or

Send mail to the author : (kusuma@mit.edu)

Rate-Constrained Robust Estimation for Unreliable Sensor Networks

Prakash Ishwar1, Rohit Puri2, and S. Sandeep Pradhan3
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

We study the problem of rate-constrained robust estimation of noisy sensor measurements in an unreliable sensor network. Noisy measurements of physical process X (e.g., temperature) made by individual sensors need to be communicated to a central observer over unreliable channels. Limited power resources of the sensors place a strong limit on the resolution with which the noisy sensor readings can be transmitted. Both the sensors and the communication links can break down. However, the central observer is guaranteed to reliably receive readings from a minimum of k out of n sensors. The goal of the central observer is to estimate the observed physical process from the received transmissions where a specified distortion metric provides an objective measure of estimation quality. In this work, we derive an information-theoretic achievable rate-distortion region for this problem when the sensor measurement noise statistics are symmetric with respect to the sensors.

In the special case of a Gaussian sensor network, i.e., when the source is i.i.d. Gaussian, the sensor noise processes are additive i.i.d. Gaussian, and the distortion metric is mean squared error, we have the following remarkable result. When any k out of the n unreliable sensor transmissions are received, the central decoder's estimation quality can be as good as the best reconstruction quality achievable by deploying only k reliable sensors whose transmissions are all received by the central decoding unit. Furthermore, when more than k of the n sensor transmissions are received, the estimation quality strictly improves.


Figure 1: An unreliable sensor network. A noisy version of source X is observed at each of the n sensors. The noisy observations Y1,...,Yn are encoded and the encoded indices I1,...,In are transmitted. The network delivers some m >= k indices. The decoder obtains the best possible reconstruction of X from the received indices.

Figure 2: Random code construction. n independent random codebooks of block length l are constructed, each containing 2lR' codewords. Each codebook is randomly partitioned into 2lR bins each with approximately 2l(R'-R) codewords.

Figure 3: Structure of sensor encoders that can achieve rate-distortion performance. The encoder of each sensor consists of a rate R' quantizer, for block length l, followed by a rate R random binning function.

Figure 4: Structure of the central observer decoder that can achieve rate-distortion performance. The central observer first decodes the received indices to intermediate representations of length l, and then forms the best estimate of the source. Here, m >= k.
1Postdoctoral Researcher
2Postdoctoral Researcher
3Professor, University of Michigan, Ann Arbor

Send mail to the author : (ishwar@eecs.berkeley.edu)

n-Channel Multiple Descriptions: Theory and Constructions

Rohit Puri1 and S. Sandeep Pradhan2
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538

Multiple description (MD) source coding has recently emerged as an attractive framework for robust multimedia transmission over unreliable channels, such as the Internet and the wireless medium. The basic idea in MD coding is to generate multiple descriptions of the source, such that each description independently describes the source with a certain fidelity, and when more than one description is available, they can be synergistically combined to enhance the quality.

In this work, we present new achievable rate regions and code constructions for the symmetric n-channel multiple descriptions (MD) coding problem for n>2. Our approach is inspired by unexplored connections between MD and the problem of distributed source coding.

We describe the underlying information-theoretic framework, and then formulate practical code constructions based on scalar/vector quantizers and linear channel codes to emulate the information theoretic performance.

[1]
S. S. Pradhan, R. Puri, and K. Ramchandran, "(n,k) Source Channel Erasure Codes: Can Parity Bits also Refine Quality?" Proc. Conf. Information Sciences and Systems, Baltimore, MD, March 2001.
[2]
R. Puri, S. S. Pradhan, and K. Ramchandran, "n-Channel Multiple Descriptions: Theory and Constructions," Data Compression Conf., Snowbird, UT, April 2002.
[3]
R. Puri, S. S. Pradhan, and K. Ramchandran, "n-Channel Symmetric Multiple Descriptions: New Rate Regions," Int. Symp. Information Theory, Lausanne, Switzerland, July 2002.
1Postdoctoral Researcher
2Professor, University of Michigan, Ann Arbor

Send mail to the author : (rpuri@eecs.berkeley.edu)

Seamless Digital Upgrade of Analog Transmission Systems Using Coding with Side Information

Rohit Puri1 and Sandeep Pradhan2
(Professor Kannan Ramchandran)
MICRO 00-081

We study the problem of seamless digital upgrade of analog transmission systems in an information-theoretic setting. Our notion of seamlessness is twofold: (1) backward compatibility with existing analog receivers, and (2) no extra digital spectrum for the upgrade. In this work, we consider Gaussian sources with memory (parallel Gaussian source model) and arbitrarily colored Gaussian channels (parallel Gaussian channel model), with the goal of optimizing (under a squared-error metric) the delivered analog and digital signal quality under a fixed total transmission power constraint. We show that a constructive scheme consisting of the cascade of source coding with side information (SCSI) and channel coding with side information (CCSI) achieves the jointly optimal end-to-end delivered digital and analog signal qualities, corresponding to the peak deliverable digital upgrade quality.

[1]
R. Puri, K. Ramchandran, and S. S. Pradhan, "On Seamless Digital Upgrade of Analog Transmission Systems Using Coding with Side Information," Allerton Conf. Communication, Control, and Computing, Allerton, IL, October 2002.
1Postdoctoral Researcher
2Professor, University of Michigan Ann Arbor

Send mail to the author : (rpuri@eecs.berkeley.edu)

Duality between MIMO Source and Channel Coding

Sandeep Pradhan1
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

We address duality in a variety of multiple-input-multiple-output (MIMO) source and channel coding problems of interest under different scenarios of "one-sided" inter-terminal collaboration at either the transmitter or at the receiver, including certain cases of duality between (1) broadcast channel coding and distributed source coding, and (2) multi-access channel coding and multiple-descriptions source coding. Our notion of duality in this project is in a functional sense, where the optimal encoder mapping for a MIMO source coding problem becomes identical to the optimal decoder mapping for the dual MIMO channel coding problem, and vice versa. For ease of illustration we give the formulation only for two-input-two-output systems, which can be easily extended to the MIMO case. We present the precise mathematical conditions under which these encoder-decoder mappings are swappable in the two dual MIMO problems, identifying the key roles played by the source distortion and channel cost measures respectively in the MIMO source and channel coding problems in capturing this duality.

Since the first observation by Shannon in 1959 [1] that source and channel coding problems can be studied as information-theoretic duals of each other, a number of researchers have made significant contributions to furthering this understanding, as noted in excellent textbooks such as [2,3]. In [4], this duality has been studied in the context of quantization and modulation. As multiuser information theory has gained significant interest recently, there has been a corresponding interest in extending this duality to multiuser scenarios. Duality between source and channel coding with side information was first reported in [5], and later studied in [6].

A mathematical formulation of the functional duality between conventional point-to-point source and channel coding problems was given in [7], where the important roles of distortion and cost measures for these two problems were highlighted, inspired by seemingly unrelated work on the optimality of uncoded transmission in [8].

This concept was then extended to the case of source and channel coding with side information. The notion of duality addressed in [7] is in a functional sense, i.e., the optimal encoder-decoder mappings for one problem become the optimal decoder-encoder mappings for the dual problem. The precise conditions under which such swappable mappings are feasible involve dual relationships between distortion and cost measures in the source and channel coding problems respectively. Study of this functional duality serves two important purposes: (1) it provides new insights into these problems from the different perspectives of source and channel coding, and allows for cross-leveraging of advances in the individual fields; (2) more importantly, it provides a basis for sharing efficient constructions of the encoder and decoder functions in the two problems, e.g., through the use of structured algebraic codes, turbo-like codes, trellis-based codes, etc.

In this work we extend this notion of functional duality to more instances of MIMO source and channel coding problems. We study various MIMO structures admitting different scenarios of collaboration among multi-terminal inputs and/or outputs. (To keep the exposition simple, in this work we describe only two-input-two-output systems.) The collaboration scenarios we consider involve those where either the multi-terminal encoders or the multi-terminal decoders can collaborate, i.e., be joint, but not both. (The case of collaboration at both ends degenerates to point-to-point MIMO systems.) Under this one-sided collaboration abstraction, we address four problems of interest in source and channel coding: (1) distributed source coding; (2) broadcast channel coding; (3) multiple description source coding with no excess sum-rate; and (4) multiple access channel coding with independent message sets. In (1) and (4), the decoders collaborate, whereas in (2) and (3), the encoders collaborate. These four problems have been studied in the literature extensively. In this project we point out that for a given distributed source coding problem, under certain cases, we can obtain a specific dual broadcast channel coding problem and vice versa. Similarly, for a given multiple description coding problem, we can find a specific dual multiple access channel coding problem and vice versa.

[1]
C. E. Shannon, "Coding Theorems for a Discrete Source with a Fidelity Criterion," IRE Nat. Conv. Rec., Vol. 4, 1959.
[2]
I. Csiszar and J. Korner, Information Theory: Coding Theorems for Discrete Memoryless Sources, New York, Academic Press, 1981.
[3]
T. M. Cover and J. A. Thomas, Elements of Information Theory, New York, John Wiley and Sons, 1991.
[4]
M. V. Eyuboglu and G. D. Forney, "Lattice and Trellis Quantization with Lattice and Trellis-bounded Codebooks--High Rate Theory for Memoryless Sources," IEEE Trans. Information Theory, Vol. 39, January 1993.
[5]
J. Chou, S. S. Pradhan, and K. Ramchandran, "On the Duality between Distributed Source Coding and Data Hiding," Proc. Asilomar Conf. on Signals, Systems, and Computers, November 1999.
[6]
M. Chiang and T. M. Cover, "Unified Duality between Channel Capacity and Rate Distortion with Side Information," Proc. Int. Symp. Information Theory, Washington, DC, June 2001.
[7]
S. S. Pradhan, J. Chou, and K. Ramchandran, Duality between Source and Channel Coding with Side Information, UC Berkeley Electronics Research Laboratory, Memorandum No. UCB/ERL M01/34, December 2001.
[8]
M. Gastpar, B. Rimoldi, and M. Vetterli, "To Code or Not to Code," Proc. IEEE Int. Symp. Information Theory, Sorrento, Italy, June 2000.
1Professor, University of Michigan

More information (http://www.eecs.umich.edu/~pradhanv/) or

Send mail to the author : (pradhanv@eecs.umich.edu)

Duality between Source and Channel Coding

Sandeep Pradhan1 and Jim Chou
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

We explore a mathematical characterization of the functional duality between classical source and channel coding, formulating the precise conditions under which the optimal encoder for one problem is functionally identical to the optimal decoder for the other problem. We then extend this functional duality to the case of coding with side information. We consider several examples which correspond to both discrete-valued and continuous-valued cases to illustrate our formulation.

Classical source coding under a rate-distortion (R-D) constraint and channel coding under a channel cost constraint have long been considered information-theoretic duals of each other (not to be confused with Lagrangian duality in optimization theory), starting from Shannon's landmark paper in 1959. In the source coding (rate-distortion) problem, the goal is to find, for a given source distribution, the conditional distribution between the input source and the output reconstruction that minimizes the mutual information [1] between the two, subject to a target distortion constraint corresponding to a given distortion metric. In the channel coding (capacity-cost) problem, the goal is to find, for a given channel input-output conditional distribution, the channel input distribution that maximizes the mutual information between input and output, subject to a target channel cost (or power) constraint corresponding to a channel cost metric. Cover and Thomas [1] point out that these two problems are information-theoretic duals of each other. This duality has also been exploited in the formulation of numerical optimization algorithms for the channel capacity and rate-distortion problems, respectively, via the Blahut-Arimoto algorithm. In [1] (see Section 13.5), a packing versus covering interpretation of duality in Gaussian source and channel coding is formulated to illustrate this duality. Indeed, some solutions to practical source coding (channel coding) problems have been inspired by solutions to channel coding (source coding): e.g., trellis-coded quantization in source coding was inspired by trellis-coded modulation in channel coding.

We address the duality between source and channel coding with the objective of providing a mathematical characterization of their functional duality. To be specific, given an optimal source (channel) coding scheme (encoder and decoder), we detail the conditions under which such a scheme is a functional dual to a channel (respectively source) coding scheme (encoder and decoder), in the sense that the optimal encoder mapping for one problem is functionally identical to the optimal decoder mapping for the other problem. We show in the sequel that this functional duality is enabled through appropriate choices of metrics and constraints for the dual source (or channel) coding problem. Our inspiration for this formulation is derived from recent work by Gastpar et al. on the optimality conditions for uncoded communications [2]. We then extend these results to coding with side information and obtain the conditions under which the source and channel coding with side information are exact functional duals.
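
As background for the capacity-cost and rate-distortion problems mentioned above, a short sketch of the Blahut-Arimoto capacity iteration is included below (a standard textbook algorithm, not code from this project; the channel and names are illustrative):

    import numpy as np

    def blahut_arimoto_capacity(P, n_iters=200):
        """Blahut-Arimoto iteration for the capacity of a discrete memoryless channel.
        P[x, y] = channel transition probability p(y|x)."""
        nx, ny = P.shape
        p = np.full(nx, 1.0 / nx)                 # input distribution, start uniform
        for _ in range(n_iters):
            q = p[:, None] * P                    # joint p(x) p(y|x)
            q /= q.sum(axis=0, keepdims=True)     # posterior q(x|y)
            logs = np.sum(P * np.log(q + 1e-300), axis=1)
            p = np.exp(logs - logs.max())
            p /= p.sum()                          # new input distribution
        # Capacity = I(X;Y) at the final input distribution, in bits.
        py = p @ P
        return np.sum(p[:, None] * P * np.log2(P / py[None, :] + 1e-300))

    # Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits.
    bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(blahut_arimoto_capacity(bsc))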

[1]
T. M. Cover and J. A. Thomas, Elements of Information Theory, New York: John Wiley and Sons, 1991.
[2]
M. Gastpar, B. Rimoldi, and M. Vetterli, "To Code or Not to Code," Proc. IEEE Int. Symp. Information Theory, Sorrento, Italy, June 2000.
1Professor, University of Michigan

More information (http://www.eecs.umich.edu/~pradhanv/) or

Send mail to the author : (pradhanv@eecs.umich.edu)

Multi-User Successive Refinement

Sandeep Pradhan1
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722

In this work we propose the concept of successive refinement of information for multiple users. We give achievable rate-distortion regions for the Gaussian source. The performance of the proposed scheme is shown to be superior to conventional approaches based on multiplexed solutions of optimal point-to-point successively refinable transmission strategies. We also provide a more universal approach to multiuser successive refinement based on the Wyner-Ziv method of coding with side-information, where the source reconstruction based on the base layer is treated as side-information during the refinement phase.

With the exponential growth in communication bandwidth and computing speed, more users are being networked than ever before for the seamless exchange of information. Furthermore, heterogeneity of connectivity in the networked world is ever-increasing: the bandwidth of a communication link might vary from a few tens of kilobits per second for hand-held devices to a few megabits per second for high-speed ADSL data links. In such scenarios, the management of information storage and transmission has to incorporate the interaction of information signals of multiple users. This provides the motivation for the study undertaken in this work.

Consider an encoder wishing to transmit an information source X to two users, as shown in Figure 1. The two receivers are connected to the encoder through errorless channels of unequal capacities. We wish to have a successively refinable transmission of information to these two receivers. We assume a broadcast model in which only a part of the information transmitted to the user with the higher-capacity link is received by the user with the lower-capacity link. This is thus a generalization of the successive refinement of information for a single user. Since the capacities of the channels are unequal, the refinements delivered to the users must also be unequal and proportional to the capacities of the individual links. Note that this work deals with source coding only, and the individual links are assumed to be error-free.


Figure 1: Multiuser successive refinement: the encoder sends an information sequence at rate R bits/sample. This information is intended for both the decoders. Each decoder has access to a quantized version of X.
1Professor, University of Michigan

More information (http://www.eecs.umich.edu/~pradhanv/) or

Send mail to the author : (pradhanv@eecs.umich.edu)

Compression for Microarray Images

Wei Wang and Bin Yu1
(Professor Kannan Ramchandran)
(NSF) CCR-0208883

Microarrays are a powerful tool in bioinformatics because they enable quantitative measurement of gene expression levels in the transcription phase for thousands of genes simultaneously. Even as DNA sequencing techniques become automated, understanding the function of genes remains a huge challenge. For example, given that most cells in our body carry the same set of genetic material, how do cells differentiate? How are complex biological pathways genetically regulated? What are the genetic causes of disease?

Microarrays are a new and evolving technology. Experiments are costly and large quantities of 40 MB image files are generated routinely, yet there are no standardized methods for image processing and information extraction. Consequently, compression is a necessary tool, and introduces an interesting question: what is the effect of compression loss on extracted features?

We propose SLOCO, a practical compression scheme that is lossless as well as lossy with refinement. We show empirically that the error in extracted gene expression levels caused by our compression loss is smaller than the variability across repeated experiments, which in turn is smaller than the variability caused by different image processing methods [1]. In fact, compression has a denoising effect on the extracted features. We further study rate-distortion performance using multiple nonlinear distortion measures.


Figure 1: This display shows 0.5% of an inkjet microarray image in false color: red indicates high intensities and blue indicates low intensities. Each spot corresponds to one gene.

[1]
R. Jornsten, W. Wang, B. Yu, and K. Ramchandran, "Microarray Image Compression: SLOCO and the Effect of Information Loss," EURASIP Signal Processing Journal: Special Issue on Genomic Signal Processing (to appear).
1Outside Adviser (non-EECS), Statistics

Send mail to the author : (wangwei@eecs.berkeley.edu)