Electrical Engineering and Computer Sciences

UC Berkeley

2008 Research Summary

Robust Distributed Multi-View Video Compression for Wireless Camera Networks

Chuohao Yeo, Jiajun Wang and Kannan Ramchandran

We investigate the problem of compressing and transmitting video from multiple camera sensors in a robust and distributed fashion over wireless packet erasure channels. This is a challenging problem that requires taking into account the error characteristics and bandwidth constraints of the wireless channel, the limitations of the sensor mote platform, and stringent latency constraints imposed by monitoring applications.

Recognizing the strong correlation between overlapping camera views, and leveraging the decoder's ability to operate jointly on all views, we aim to exploit inter-camera redundancy effectively. We address two main issues: (1) performing reconstruction at the decoder even when the encoder has no access to neighboring views; and (2) performing image correspondence in a distributed fashion. We propose two novel techniques that address these issues by building on PRISM, a video coding framework based on distributed source coding [1]. The first, view synthesis search [2], performs disparity estimation and view interpolation to generate a predicted view, and then performs a small search around it to generate predictors. The second, decoder disparity search [3], uses epipolar geometry to generate predictors efficiently from neighboring views. Our experimental results demonstrate that the proposed approach gives better reconstruction quality in the presence of packet losses than baseline approaches such as Motion JPEG (MJPEG), H.263+ with FEC, and H.263+ with random intra refresh.
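The distributed source coding principle behind PRISM can be illustrated with a scalar toy example (this is only a sketch of the binning idea, not the actual PRISM codec, which operates on transform coefficients with channel codes; all names here are our own): the encoder transmits only a coset index of its sample, and the decoder recovers the sample by choosing the coset member closest to its side information, such as a predictor from a neighboring view.

```python
def coset_index(x, step=4):
    """Encoder: send only x's index within a coarse coset partition.
    This costs log2(step) bits instead of a full description of x."""
    return x % step

def decode_with_side_info(idx, y, step=4):
    """Decoder: pick the coset member closest to the side information y.
    Recovery is exact whenever |x - y| < step / 2."""
    k = round((y - idx) / step)
    return idx + step * k
```

For example, with sample `x = 10` and side information `y = 11`, the encoder sends `coset_index(10) == 2` (2 bits for `step = 4`) and the decoder recovers 10 exactly; the encoder never needs access to `y`.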

Figure 1
Figure 1: Problem set-up for robust multi-view video compression

Figure 2
Figure 2: Illustration of disparity search. The decoder searches for a suitable predictor along the epipolar line in the available camera view.
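A minimal sketch of such a search, under assumptions we introduce only for illustration: grayscale NumPy images, a known fundamental matrix `F`, uniform sampling of candidate positions along the epipolar line, and a sum-of-absolute-differences (SAD) matcher. The function names and sampling scheme are hypothetical, not the authors' implementation.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l = F @ x in the other view, for homogeneous pixel x."""
    l = F @ x
    return l / np.linalg.norm(l[:2])  # normalize so (a, b) is a unit normal

def disparity_search(block, ref_view, F, x_center, block_size=8, n_steps=32):
    """Slide a window along the epipolar line in ref_view and return the
    candidate block with the smallest SAD against the target block."""
    a, b, c = epipolar_line(F, np.array([x_center[0], x_center[1], 1.0]))
    h, w = ref_view.shape
    best_sad, best_block = np.inf, None
    for u in np.linspace(0, w - block_size, n_steps).astype(int):
        # row coordinate on the epipolar line at column u (assumes b != 0)
        v = int(round(-(a * u + c) / b))
        if v < 0 or v + block_size > h:
            continue
        cand = ref_view[v:v + block_size, u:u + block_size]
        sad = np.abs(cand.astype(int) - block.astype(int)).sum()
        if sad < best_sad:
            best_sad, best_block = sad, cand
    return best_block, best_sad
```

The one-dimensional search is the point: epipolar geometry collapses the 2D correspondence problem to a line, so the decoder can generate predictors from a neighboring view without a full 2D motion search.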

Figure 3
Figure 3: Illustration of view synthesis search. The decoder first synthesizes the view for camera 1 using cameras 0 and 2. It then searches around the co-located position for a suitable predictor.
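This two-step procedure can be sketched as follows, under strong simplifying assumptions of our own: rectified cameras on a linear array, a single global disparity, and view interpolation by averaging shifted neighbor views (the cited papers perform per-block disparity estimation; this is only a toy stand-in).

```python
import numpy as np

def synthesize_view(view0, view2, disparity):
    """Toy interpolation of the middle view: shift the left and right
    neighbor views toward each other by half the disparity, then average."""
    d = disparity // 2
    left = np.roll(view0, d, axis=1)
    right = np.roll(view2, -d, axis=1)
    return ((left.astype(int) + right.astype(int)) // 2).astype(view0.dtype)

def view_synthesis_search(block, synth, top, left, radius=2):
    """Small SAD search around the co-located block in the synthesized view."""
    bs = block.shape[0]
    h, w = synth.shape
    best = (np.inf, None)
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            v, u = top + dv, left + du
            if v < 0 or u < 0 or v + bs > h or u + bs > w:
                continue
            cand = synth[v:v + bs, u:u + bs]
            sad = np.abs(cand.astype(int) - block.astype(int)).sum()
            if sad < best[0]:
                best = (sad, cand)
    return best
```

Because the synthesized view already accounts for inter-camera geometry, the subsequent predictor search can stay small (a few pixels around the co-located position), which keeps the decoder-side cost modest.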

[1] R. Puri and K. Ramchandran, "PRISM: A New Robust Video Coding Architecture Based on Distributed Compression Principles," Proc. Allerton Conference on Communication, Control, and Computing, 2002.
[2] C. Yeo and K. Ramchandran, "Robust Distributed Multi-View Video Compression for Wireless Camera Networks," Proc. SPIE Visual Communications and Image Processing, January 2007.
[3] C. Yeo, J. Wang, and K. Ramchandran, "View Synthesis for Robust Distributed Video Compression in Wireless Camera Networks," Proc. IEEE International Conference on Image Processing, September 2007.