BAD 2006

Bay Area Day on Signals, Information, and Control

June 2, 2006

Abstracts

Jose Carmena

Emerging Directions in Brain-Machine Interfaces

The advent of multi-electrode recordings and brain-machine interfaces (BMIs) has provided a powerful tool for the development of neuroprosthetic systems. BMIs use brain-derived signals to control artificial devices such as computer cursors and robots. By recording the electrical activity of hundreds of neurons from multiple cortical areas in subjects performing motor tasks, we can study the spatio-temporal patterns of neural activity and quantify the neurophysiological changes occurring in cortical networks, in both manual and brain-control modes of operation. In previous work at Duke University we demonstrated that monkeys can learn to reach and grasp virtual objects by controlling a robot arm through a BMI using visual feedback, even in the absence of overt arm movements. Learning to operate the BMI is paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMI are incorporated into motor and sensory cortical representations. Additionally, we have applied primate BMI technology in the human intraoperative setting to assess the feasibility of using neurons in subcortical motor areas to drive a BMI. We recently showed that acute neuronal recordings from subcortical structures of the human brain, specifically the motor thalamus and subthalamic nucleus, can predict motor function and therefore have the potential to provide informative control signals in a human BMI. While significant breakthroughs have been achieved in recent years and the field is rapidly taking off, challenges remain before BMI technology fully reaches the clinical realm. In this talk I will give an introduction to the field of BMIs, followed by an outline of the emerging directions the field is taking towards the development of neuroprosthetic devices for the impaired.
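The claim that recorded neural activity "can predict motor function" is typically tested by fitting a linear decoder from binned firing rates to movement variables. A self-contained sketch on synthetic data (my own illustration, not the study's recordings; the linear tuning model, neuron count, and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for binned spike counts: 200 time bins, 20 neurons,
# each neuron linearly tuned to a 2-D movement velocity, plus noise.
T, N = 200, 20
vel = rng.standard_normal((T, 2))
tuning = rng.standard_normal((2, N))
rates = vel @ tuning + 0.1 * rng.standard_normal((T, N))

# Fit a linear decoder by least squares on the first half of the data,
# then predict velocity on the held-out second half.
W, *_ = np.linalg.lstsq(rates[:T // 2], vel[:T // 2], rcond=None)
pred = rates[T // 2:] @ W
corr = np.corrcoef(pred[:, 0], vel[T // 2:, 0])[0, 1]
print(f"held-out velocity correlation: {corr:.2f}")
```

The held-out correlation is the usual figure of merit for whether a neural population carries an informative control signal.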

Bernd Girod

Distributed Video Coding

Distributed coding is a new paradigm for video compression, based on Slepian and Wolf’s and Wyner and Ziv’s information-theoretic results from the 1970s. This talk reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. Wyner-Ziv coding protects the video waveform rather than the compressed bit-stream and thus achieves graceful degradation under deteriorating channel conditions without a layered signal representation.

Joint work with Anne Aaron, Shantanu Rane, David Rebollo-Monedero, and David Varodayan

http://www.stanford.edu/~bgirod/pdfs/DistributedVideoCoding-IEEEProc.pdf
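The encoder/decoder asymmetry of coding with side information can be seen in a toy Slepian-Wolf-style example (my own illustration, not the codecs from the talk): the encoder sends only the 3-bit Hamming syndrome of a 7-bit block, and the decoder recovers all 7 bits by combining the syndrome with correlated side information.

```python
# Syndrome of the (7,4) Hamming code: column i of the parity-check
# matrix is the 3-bit binary expansion of i+1, so a single-bit error's
# syndrome directly names the error position.
def syndrome(v):
    s = 0
    for i, bit in enumerate(v):
        if bit:
            s ^= i + 1
    return s

def wz_decode(s, y):
    """Recover x from its 3-bit syndrome s plus side information y,
    assuming x and y disagree in at most one bit position."""
    d = s ^ syndrome(y)          # syndrome of the error pattern x XOR y
    x_hat = list(y)
    if d:
        x_hat[d - 1] ^= 1        # flip the position the syndrome names
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]        # arbitrary 7-bit source block
y = list(x); y[2] ^= 1           # decoder's side information: one bit off
assert wz_decode(syndrome(x), y) == x   # 3 transmitted bits recover all 7
```

All inter-signal correlation is exploited at the decoder; the encoder computes only a syndrome, which is the intuition behind shifting complexity from video encoder to decoder.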

Ashish Goel

Multi-Objective Network Protocols

Different variants of TCP's congestion control can be viewed as maximizing different utility functions. We will address the following question: could there be a single version of TCP that simultaneously optimizes all reasonable utility functions? We will first define a canonical class of utility functions (symmetric, non-decreasing, and concave). We will then present centralized algorithms that obtain a single solution which is an O(log n) approximation, simultaneously, for all canonical utility functions. Finally, we will outline how the centralized algorithms can be converted into a primal-dual, TCP-like protocol. These techniques are applicable to a wide variety of convex optimization problems.

Our protocol requires per-flow state at each router. Whether this kind of simultaneous optimization can be achieved without per-flow state remains an important open problem.
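Why no single allocation works for all canonical utilities is visible even on a tiny network. A minimal illustration (my own toy example, using the standard alpha-fair family, which lies in the canonical class): on a two-link line network, the optimal split between one long flow and two short flows shifts with the utility function.

```python
import math

def alpha_utility(x, alpha):
    """Alpha-fair utilities: non-decreasing and concave for alpha >= 0,
    so each one belongs to the canonical class."""
    if x <= 0:
        return -math.inf
    return math.log(x) if alpha == 1 else x ** (1 - alpha) / (1 - alpha)

def best_split(alpha, steps=20000):
    """Two-link line network with unit capacities: one long flow gets
    rate 1-x on both links, two short flows get x each. Grid-search
    the x that maximizes aggregate utility."""
    best_x, best_u = 0.5, -math.inf
    for i in range(1, steps):
        x = i / steps
        u = alpha_utility(1 - x, alpha) + 2 * alpha_utility(x, alpha)
        if u > best_u:
            best_u, best_x = u, x
    return best_x

for a in (0.5, 1, 8):
    print(f"alpha={a}: each short flow gets {best_split(a):.3f}")
```

The optimizers disagree (x = 0.8 for alpha = 0.5, x = 2/3 for proportional fairness, x near 0.52 as alpha grows toward max-min), which is exactly why a single allocation can only hope to approximate all canonical utilities simultaneously.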

Ramesh Johari

Interdomain Routing: An Impossibility Theorem

Thousands of competing autonomous systems must cooperate with each other to provide global Internet connectivity. Each autonomous system (AS) encodes various economic, business, and performance decisions in its routing policy. The current interdomain routing system enables each AS to express policy using *rankings* that determine how each router in the AS chooses among different routes to a destination, and *filters* that determine which routes are hidden from each neighboring AS. Because the Internet is composed of many independent, competing networks, the interdomain routing system should provide *autonomy*, allowing network operators to set their rankings independently, and to have no constraints on allowed filters. This paper studies routing protocol stability under these conditions. We prove that, when providers can set rankings and filters autonomously, guaranteeing that the routing system will converge to a stable path assignment imposes strong restrictions on the rankings ASes are allowed to choose. We discuss the implications of these results for the future of interdomain routing.
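The instability that autonomous rankings can cause is reproducible in a few lines using the classic "bad gadget" of Griffin, Shepherd, and Wilfong (a standard example, not necessarily this paper's construction; synchronous rounds are a simplification of real, asynchronous BGP):

```python
# Three ASes (1, 2, 3) and one destination 0. Each AS ranks the path
# through its clockwise neighbor above its own direct link.
PREFS = {
    1: [(1, 2, 0), (1, 0)],
    2: [(2, 3, 0), (2, 0)],
    3: [(3, 1, 0), (3, 0)],
}

def feasible(path, routes):
    """A path is usable if it is the direct link, or if its next hop
    currently routes along exactly the path's tail."""
    if len(path) == 2:
        return True
    return routes.get(path[1]) == path[1:]

def step(routes):
    """One synchronous round: each AS picks its highest-ranked usable path."""
    return {i: next(p for p in ranked if feasible(p, routes))
            for i, ranked in PREFS.items()}

direct = {i: (i, 0) for i in PREFS}
r1 = step(direct)   # every AS grabs the route via its neighbor...
r2 = step(r1)       # ...which invalidates all the tails, so all fall back
assert r1 != direct and r2 == direct   # period-2 oscillation, never stable
```

No filter is misconfigured here; the oscillation comes purely from the rankings, which is the flavor of restriction the impossibility theorem makes precise.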

Erik Ordentlich

On the Computational Complexity of 2D Maximum Likelihood Sequence Detection

We consider a two-dimensional version of the classical maximum-likelihood sequence detection problem for a binary antipodal signal that is corrupted by linear intersymbol interference (ISI) and then passed through a memoryless channel. For one-dimensional signals and fixed ISI, this detection problem is well known to be efficiently solvable using the Viterbi algorithm. We show that the two-dimensional version is, in general, intractable, in the sense that a decision formulation of the problem is NP-complete for a certain fixed two-dimensional ISI and a memoryless channel with errors and erasures. We also extend the result to the additive white Gaussian noise channel with the same fixed ISI as in the previous case. Finally, we show how our proof of NP-completeness for the two-dimensional case can be used to prove the NP-completeness of multiuser detection under a Toeplitz constraint, shown by Verdú to be equivalent to a variant of one-dimensional maximum-likelihood sequence detection when the ISI is not fixed, thereby proving a conjecture of Verdú.

This is joint work with Ron Roth (Technion and HP Labs).
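For contrast with the 2D hardness result, the tractable one-dimensional case can be sketched directly: a Viterbi trellis whose state is the previous symbol solves ML sequence detection for a 2-tap ISI channel in linear time (the tap values and noise level below are illustrative choices, not parameters from the paper).

```python
import random

H_ISI = [1.0, 0.5]  # illustrative fixed 2-tap ISI response (h0, h1)

def channel(x, noise_std, rng):
    """Antipodal symbols (+/-1) through linear ISI plus Gaussian noise."""
    out, prev = [], 0.0
    for s in x:
        out.append(H_ISI[0] * s + H_ISI[1] * prev + rng.gauss(0, noise_std))
        prev = s
    return out

def viterbi(r):
    """ML sequence detection: dynamic programming over a trellis whose
    state is the previously transmitted symbol (channel starts at rest)."""
    paths = {0.0: (0.0, [])}   # state -> (accumulated cost, best path)
    for rk in r:
        new = {}
        for prev, (cost, path) in paths.items():
            for s in (-1, 1):
                m = cost + (rk - (H_ISI[0] * s + H_ISI[1] * prev)) ** 2
                if s not in new or m < new[s][0]:
                    new[s] = (m, path + [s])
        paths = new
    return min(paths.values(), key=lambda t: t[0])[1]

rng = random.Random(1)
x = [rng.choice((-1, 1)) for _ in range(64)]
assert viterbi(channel(x, 0.1, rng)) == x   # exact recovery at low noise
```

In 2D there is no analogous linear ordering of states, which is what the NP-completeness reduction exploits.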

Anant Sahai

Balancing Forward and Feedback Error Correction for Erasure Channels

One of the key engineering questions in communication and networking is to find the appropriate balance of forward and feedback error correction. This talk explores this question from an information-theoretic perspective for the simplest possible model: the erasure channel. Constant-rate traffic is considered with a fixed end-to-end latency requirement, and the tradeoff between rate, latency, and the probability of "error" is explored in the asymptotic regime of large end-to-end latency. After first reviewing the extreme cases of no feedback and perfect instantaneous high-rate feedback, some limitations on the feedback path will be explored: first small round-trip delays on the feedback path, and then a throttled-back feedback data rate. I show that even a tiny amount of delayed feedback is enough to approach the perfect-feedback bounds in the asymptotic regime of large end-to-end latency requirements. The standard retransmission-request strategy is a bad idea, and it is hoped that the structure of asymptotically optimal codes sheds some light on the right balance between feedback and forward error correction. Though self-contained, this is the second talk in a trilogy. The first part was given 12/12/05 at Berkeley and 05/11/06 at ISL. The third part will be given on 07/07/06 at the Kailath Colloquium.
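The rate cost of forward versus feedback error correction can be made concrete with a toy single-packet accounting (my own simplified model, not the talk's coding schemes): blind repetition and deadline-limited retransmission reach the same failure probability, but feedback slashes the average number of channel uses.

```python
def fec_cost(p, deadline):
    """No feedback: blindly repeat the packet in every one of the
    `deadline` slots. Returns (slots used, deadline-miss probability)."""
    return deadline, p ** deadline

def arq_cost(p, deadline):
    """Perfect per-slot feedback: retransmit only after an erasure,
    stopping at the deadline. Same miss probability, fewer slots."""
    exp_slots = sum(k * (1 - p) * p ** (k - 1)
                    for k in range(1, deadline + 1))
    exp_slots += deadline * p ** deadline   # every attempt erased
    return exp_slots, p ** deadline

p, D = 0.1, 4   # illustrative erasure probability and latency budget
print("FEC:", fec_cost(p, D))
print("ARQ:", arq_cost(p, D))
```

With p = 0.1 and a 4-slot budget, both schemes miss the deadline with probability 10^-4, but feedback cuts the expected slot usage from 4 to about 1.11; quantifying how little feedback suffices to keep most of this gain is the subject of the talk.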

David Tse

Data and Voice over Wireless Channels: Segregated or Aggregated?

A central problem in wireless communication is to ensure reliability over an unreliable fading channel. Traditionally two separate approaches have been adopted for delay-tolerant data traffic and for real-time traffic such as voice. Reliability for data traffic is achieved mainly via a retransmission protocol. Real-time traffic does not have the luxury to wait for retransmissions, and conservative use of the channel resources is needed to ensure that information can get through with high probability at the first attempt. In designing wireless networks which support both of these very different traffic types, a natural approach is to segregate them onto different frequency bands or different time slots. In this talk, we argue that there is a performance advantage in aggregating them on the same time/frequency resource. Using the diversity-multiplexing tradeoff framework, we characterize precisely the performance gain achievable using this approach. For a wide variety of wireless channels, we show that Gaussian superposition coding between the two types of traffic is optimal, by establishing a connection with the problem of broadcast channels with degraded message sets.

This is joint work with Suhas Diggavi at EPFL.
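A static AWGN toy example (my own, with hand-picked numbers; the talk's analysis concerns fading channels via the diversity-multiplexing tradeoff) shows why aggregation can dominate segregation: on a two-user Gaussian broadcast channel, superposition coding beats time-sharing at both rates simultaneously.

```python
import math

def superposition_rates(P, n_strong, n_weak, beta):
    """Gaussian superposition: a fraction beta of the power carries the
    weak (voice-like) user's message, decoded treating the rest as noise;
    the strong user cancels it before decoding its own message."""
    r_weak = 0.5 * math.log2(1 + beta * P / ((1 - beta) * P + n_weak))
    r_strong = 0.5 * math.log2(1 + (1 - beta) * P / n_strong)
    return r_strong, r_weak

def timeshare_rates(P, n_strong, n_weak, t):
    """Segregation baseline: a fraction t of the slots serve the strong
    user alone, the remainder serve the weak user alone."""
    return (t * 0.5 * math.log2(1 + P / n_strong),
            (1 - t) * 0.5 * math.log2(1 + P / n_weak))

ts = timeshare_rates(100, 1, 100, 0.5)        # split the resource
sp = superposition_rates(100, 1, 100, 0.59)   # share it, beta hand-picked
assert sp[0] > ts[0] and sp[1] > ts[1]        # aggregation dominates
```

At these numbers the weak user keeps slightly more than its time-shared rate while the strong user's rate jumps from about 1.66 to about 2.70 bits per channel use.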
