
Electrical Engineering and Computer Sciences

COLLEGE OF ENGINEERING

UC Berkeley


2009 Research Summary

Distributed Source Coding Based Robust Low-Latency Video Transmission

Vinod Prabhakaran, Jiajun Wang and Kannan Ramchandran

Real-time video transmission over lossy networks has been widely studied in both academia and industry and has a broad range of applications. In this project, we focus on applications with particularly stringent delay constraints. Examples include TV over cell phones, video surveillance, and conversational services such as video telephony and distance learning. To enable this class of applications, we need a highly compressed bitstream that is robust to transmission losses under a low-latency constraint.

Today's hybrid video coders achieve high compression efficiency by exploiting temporal redundancy. However, this requires that the encoder maintain a deterministic copy of the decoded video. Various techniques, such as forward error correction (FEC), have been adopted to ensure this synchronization over channels with transmission losses. This can be wasteful under a low-latency constraint, especially if the channel losses are bursty.

We propose an alternative approach motivated by ideas of distributed source coding (DSC) from the information theory literature. In the DSC-based approach, instead of maintaining this deterministic synchronization, decoding succeeds as long as the statistical correlation between the encoder's and the decoder's reconstructions of the video is known at both the encoder and the decoder.
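The coset-partitioning idea behind DSC can be illustrated with a toy example (not the project's actual codec): the encoder sends only the 2-bit syndrome of a 3-bit source with respect to the (3,1) repetition code, and the decoder recovers the source exactly using correlated side information, here assumed to differ from the source in at most one bit.

```python
# Toy syndrome-based (Wyner-Ziv style) coding sketch. Illustrative only:
# the encoder sends a 2-bit syndrome instead of the 3-bit source, and the
# decoder picks the coset member closest to its side information y, which
# is assumed to be within Hamming distance 1 of the source x.
from itertools import product

H = [(1, 1, 0), (0, 1, 1)]  # parity-check matrix of the repetition code {000, 111}

def syndrome(x):
    """2-bit syndrome of a 3-bit tuple x."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def encode(x):
    return syndrome(x)          # 2 bits sent instead of 3

def decode(s, y):
    # Coset members are Hamming distance 3 apart, so side information
    # within distance 1 of x identifies x uniquely.
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: hamming(x, y))

x = (1, 0, 1)                   # source block
y = (1, 0, 0)                   # decoder's predictor, differs in <= 1 bit
assert decode(encode(x), y) == x
```

Note that the encoder never sees `y`; it only needs to know the correlation model (here, "at most one bit flip") to size the cosets.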

Previous work on video coding based on DSC principles [1,2] focused on achieving high robustness under a low-latency constraint with low encoding complexity. In this project, we pose and answer the question of how to achieve higher compression efficiency while maintaining robustness under the same tight latency constraint, if we are allowed to increase encoder complexity and perform motion search at the encoder. The key insight is to associate each encoding block with multiple predictors instead of one best predictor. The compressed bitstream can then be thought of as consisting of two parts: (1) a baseline layer that carries just enough information to recover the current block when there is no channel loss, and (2) a robustness layer that provides the additional information needed to decode from a different predictor when the best predictor is corrupted by channel loss. This layering is well suited to DSC because we always send information about the source rather than a residual signal: the information about the source (the video block to be encoded) contained in the baseline layer is reused when decoding from a worse predictor. By exploiting the fact that the robustness layer is needed only when the best predictor is corrupted, we can compress it further. (See [3] for more details.)
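The two-layer structure can be sketched with a simple nested-coset toy model (illustrative numbers, not the paper's actual construction): a source sample is encoded as a fine coset index (the baseline, sufficient for a good predictor) plus one extra bit that coarsens the coset spacing enough to decode from a worse fallback predictor.

```python
# Layered coset sketch of the baseline + robustness idea. Assumed toy model:
# source x in [0, 16); the best predictor is within +/-1 of x, so coset
# spacing 4 suffices (baseline: x mod 4, 2 bits); a fallback predictor is
# within +/-3, needing spacing 8, supplied by one robustness-layer bit.

def encode(x):
    baseline = x % 4             # 2 bits: enough for the best predictor
    robustness = (x // 4) % 2    # 1 extra bit: coarser coset for a worse predictor
    return baseline, robustness

def decode_baseline(baseline, y):
    # Closest value to predictor y among values congruent to baseline mod 4.
    return min((v for v in range(16) if v % 4 == baseline),
               key=lambda v: abs(v - y))

def decode_robust(baseline, robustness, y):
    # Both layers together pin down x mod 8, tolerating a worse predictor.
    target = baseline + 4 * robustness
    return min((v for v in range(16) if v % 8 == target),
               key=lambda v: abs(v - y))

x = 9
b, r = encode(x)
assert decode_baseline(b, 10) == x      # best predictor intact: baseline suffices
assert decode_robust(b, r, 7) == x      # best predictor lost: add robustness bit
```

The point mirrored from the text: the baseline bits are information about the source itself, so they remain useful unchanged when the decoder must fall back to a worse predictor; the robustness layer only tops them up.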

Our current work uses an H.263+-style motion estimation tool, and we plan to adopt the more advanced motion estimation of H.264 (see [4] for results on the baseline layer). This will allow us to obtain higher-quality predictors and thus improve compression efficiency in both the baseline and the robustness layers.

Figure 1: Performance of different systems over a simulated Gilbert-Elliott channel, which captures the bursty nature of wireless channels. The DSC-based system recovers quickly in quality after a burst of packet drops.
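The Gilbert-Elliott model referenced in the figure is a two-state Markov chain (a "good" and a "bad" state with different loss probabilities) that produces bursty packet losses. A minimal simulator sketch, with illustrative parameter values not taken from the paper:

```python
# Minimal Gilbert-Elliott two-state Markov packet-loss simulator.
# Parameters (transition and per-state loss probabilities) are illustrative.
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.4, loss_good=0.01, loss_bad=0.7, seed=0):
    """Return a list of n booleans: True means the packet was lost."""
    rng = random.Random(seed)
    state = "good"
    losses = []
    for _ in range(n):
        # State transitions make losses cluster into bursts.
        if state == "good" and rng.random() < p_gb:
            state = "bad"                # enter a loss burst
        elif state == "bad" and rng.random() < p_bg:
            state = "good"               # burst ends
        p_loss = loss_bad if state == "bad" else loss_good
        losses.append(rng.random() < p_loss)
    return losses

trace = gilbert_elliott(1000)
print(f"overall loss rate: {sum(trace) / len(trace):.3f}")
```

Feeding such a trace to a decoder simulation is one way to reproduce the burst-recovery comparison shown in the figure.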

[1] R. Puri and K. Ramchandran, "PRISM: A New Robust Video Coding Architecture Based on Distributed Compression Principles," Allerton Conf. Communication, Control and Computing, Allerton, IL, October 2002.
[2] R. Puri, A. Majumdar, P. Ishwar, and K. Ramchandran, "Distributed Video Coding in Wireless Sensor Networks," IEEE Signal Processing Magazine, July 2006.
[3] J. Wang, V. Prabhakaran, and K. Ramchandran, "Syndrome-Based Robust Video Transmission over Networks with Bursty Losses," Int. Conf. Image Processing, Atlanta, GA, October 2006.
[4] S. Milani, J. Wang, and K. Ramchandran, "Achieving H.264-like Compression Efficiency with Distributed Video Coding," VCIP, San Jose, CA, January 2007 (to appear).