Multipart papers and conference papers that are subsumed by journal versions are listed together to avoid double-counting. (Click here for an official list that does double-count.) Very closely related conference papers are also listed together. Ones with some overlap, but not complete inclusion, are listed separately. Please click here to see this list without the brief comments about each paper. An alternative categorization of these papers by conceptual area can be found here for the more info-theoretic work and here for the more signal-processing/wireless-oriented work. Four categories are used to reflect my own current personal feelings regarding the significance of the papers:
An alternative perspective can be found using a Google Scholar search for my work. A few key papers get missed in that search, so this and this and this and this and this should bring them up.
Delay is the most basic cost communication systems must pay for reliability. This paper shows that, in the presence of feedback, the reliability function for bits with fixed delay can be very different from the reliability function for blocks with fixed block-length. This corrects a result of Pinsker (in png graphics form) that claims otherwise. This paper also puts much of the prior work on feedback and reliability into perspective.
As a part of this work, we derive a general upper bound (the uncertainty-focusing bound) on the probability of bit-error with respect to delay and show that the bound is tight for the Binary Erasure Channel and any channel with nonzero zero-error capacity with feedback. This bound is much higher than the corresponding bound for block codes. We also give a different bound for cases without feedback and show that it can be beaten with feedback for generic channels. This illustrates how codes with feedback can exploit an additional degree of freedom (flow control) in order to improve reliability with respect to end-to-end latency.
The uncertainty-focusing bound itself first appeared (though without that name) in my student Tunc Simsek's doctoral thesis, but there are many other results in both the paper and the thesis itself.
Shows the equivalence (reductions in both directions) between stabilization of linear systems over a noisy communication channel and anytime reliable communication with noiseless feedback. This shows the fundamental importance of the delay reliability function. For necessity, this paper gives an explicit reduction from the communication problem to the control problem using a Cantor set construction. On the sufficiency side, this paper uses interesting ideas based on "writing-on-dirty-paper" and precoding for communicating information through a control system to simulate a noiseless feedback link where none is explicitly present.
The paper also gives random constructions for memoryless observers and their corresponding controllers. Along the way, it shows how to use the equivalence between control and communication to generalize Schalkwijk/Kailath's scheme for the AWGN channel to the sequential setting. The paper closes with a general discussion of equivalences and complexity-hierarchies in communication problems where noisy channels are the critical resources rather than computational operations.
Part II extends the results in Part I to linear control problems with a vector-valued state. It introduces the anytime rate region and shows that even in a seemingly point-to-point setting, giving different reliabilities to different bitstreams can be fundamentally important. The paper also addresses the issue of intrinsic delays in control systems. Finally, by using the vector case results in the context of an erasure channel, it shows that differentiated service at the communication layer can be required to satisfy control application demands.
This paper asks what it should mean for an encoding and decoding system to be capacity-achieving in the context of power-limited communication over the classic AWGN channel. The idea is to unify the problem by considering the power spent in the encoding and decoding as well as the transmission, and to ask the question in an asymptotic sense of arbitrarily low probability of error and arbitrarily cheap encoding and decoding. This seemingly minor change nevertheless results in a somewhat surprising fact: traditional coding approaches that achieve an error exponent with respect to a complexity parameter are not in fact capacity-achieving if the decoding power required is linear in the complexity parameter. At least double-exponential reductions in the probability of error with the decoding power seem to be required!
We introduce a simple model for a general iterative parallel decoder and ask how the power spent in the receiver for decoding trades off with the power used at the transmitter. For this model, it is shown that the total power spent in the transmitter and receiver combined must go to infinity as the end-to-end probability of error tends to zero. This is illustrated visually using the concept of the waterslide curve. This result rules out a strong sense of capacity-achieving codes.
However, we can support a weaker sense of capacity-achieving that focuses only on the transmitter power. The lower bounds here suggest that iterative decoding can result in codes that require only a bounded amount of transmit power even as the required probability of error tends to zero. However, it seems that the asymptotically optimal power used at the transmitter does depend on the application setting and the technology used in the receiver to implement the decoder. It is generally slightly different from the predictions made by using Shannon capacity alone, and this can be interpreted as a soft counterpart to the computational cutoff rate that applies for iterative decoding. This effect is significant for communication over moderate ranges of a few tens of meters and becomes even more pronounced as the range gets shorter.
This group of five papers looks at the Witsenhausen counterexample through information-theoretic eyes. The journal version on top introduces a vector version of Witsenhausen's classic counterexample and shows how strategies related to joint source/channel encoding as well as Dirty-Paper coding can be adapted to become nonlinear control strategies. Quite surprisingly, the problem turns out to be related to other classical information-theoretic problems and has an interpretation in terms of suppressing interference in communication systems. This is perhaps the simplest unsolved "point-to-point" information theory problem.
Even so, from a distributed control perspective we are able to obtain many interesting results. First, we get the first improvement over Witsenhausen's lower bound in 40 years. Second, we're able to show that in an asymptotic sense, we can give approximately optimal strategies in terms of control performance that are no more than a factor of 2 from the optimal regardless of the values of the problem parameters. This is a significant improvement from the prior bound of infinity. The bound is then improved in the 2010 ITW Submission to be better than 1.5, and moreover, tight in the case of zero second-stage cost.
The results are then extended to the non-asymptotic case in two ways. The 2009 CDC submission gives an elementary proof for the scalar case in the spirit of Witsenhausen's original bound, but the more illuminating and general finite-dimensional case is addressed in the submission to ConCom. There it is shown that lattice-based nonlinear control strategies achieve performance within a constant factor of optimal for all dimensions, where the constant is around 17 for the scalar case. The result also illuminates how finite-dimensional information-theoretic bounds can be derived as composite "shadows" of infinite-dimensional ones.
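For the curious reader, the advantage of quantization-based nonlinear strategies in the scalar Witsenhausen problem is easy to see numerically. The following is my own illustrative Monte Carlo sketch using the classic benchmark parameters k = 0.2, σ = 5 and a simple nearest-lattice-point decoder (suboptimal; the true MMSE decoder does slightly better), not the lattice schemes from the papers themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
k, sigma, n = 0.2, 5.0, 200_000
x0 = rng.normal(0.0, sigma, n)   # initial state seen by the first controller
w = rng.normal(0.0, 1.0, n)      # observation noise at the second controller

# Affine strategies u1 = g*x0 have a closed-form two-stage cost:
# control cost k^2 g^2 sigma^2 plus the MMSE of x1 = (1+g)x0 seen through unit noise.
def affine_cost(g):
    s = (1 + g) ** 2 * sigma ** 2
    return k ** 2 * g ** 2 * sigma ** 2 + s / (1 + s)

best_affine = min(affine_cost(g) for g in np.linspace(-1.5, 0.5, 2001))

# Quantization strategy: force x1 onto the lattice b*Z, then decode by
# rounding the noisy observation to the nearest lattice point.
def quant_cost(b):
    x1 = b * np.round(x0 / b)
    y = x1 + w
    xhat = b * np.round(y / b)
    return k ** 2 * np.mean((x1 - x0) ** 2) + np.mean((x1 - xhat) ** 2)

best_quant = min(quant_cost(b) for b in np.linspace(3.0, 10.0, 15))

print(f"best affine cost ≈ {best_affine:.3f}")   # about 0.96
print(f"quantization cost ≈ {best_quant:.3f}")   # far lower
```

Even this crude lattice strategy beats every affine strategy by a wide margin, which is the heart of the counterexample.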
The 2008 CDC conference paper also considers lossless source coding and shows that the complexity of encoding plus that of decoding must satisfy a fundamental limit in order to achieve a desired probability of bit error.
This paper studies the hard deadline problem for streaming data in the context of packet erasure channels (a semi-practical model for networks). Instead of perfect feedback, it has unreliable feedback that must pass through an erasure channel itself. The uncertainty-focusing bound can still be asymptotically achieved at high enough rates. This approach also works for binary erasure channels in both directions (so sequence numbers are out of the question). Perhaps most interestingly, this work shows that even if the system is "half-duplex" in the sense that a feedback transmission occurs at the cost of giving up a forward one, it is still worth spending some of the channel uses on feedback because this can reduce the end-to-end delay by a very large factor relative to forward-only error correction. A side-effect of the techniques is to show that the symmetric uncertainty-focusing bound is achievable with perfect feedback at high rates for any DMC whose matrix contains a zero.
Takes an errors and erasures perspective on streaming communication corresponding to a case of communicating a bitstream with "soft" end-to-end deadlines and access to a feedback channel. Detected errors (erasures) are presumed to be relatively low cost and only undetected errors are penalized. In this context, we show that once again block-coding imposes an artificial limit to system performance. Not only can the Burnashev bound be beaten, but the system can be made robust to noise in the feedback link, as long as the feedback capacity is larger than the target exponent for the probability of undetected errors. The robustness result shows that the theoretical reliability gains from feedback are real and not artifacts of unrealistic models.
On the surface, this pair of papers provides an alternative proof of the converse to the classical separation theorems for rate-distortion and conditional rate-distortion. Rather than showing that the separation architecture (source coding followed by channel coding) achieves an information theoretic bound and is hence optimal, this paper establishes the existence of direct reductions between the relevant communication problems to demonstrate an equivalence at the operational level. This is done by introducing a new AVC model that is related to problems in steganography and watermarking. Alternatively, it can be considered a generalization of the traditional coding theory (minimum distance) perspective with Hamming distortion replaced by an arbitrary additive distortion measure.
More fundamentally, this is an attempt to take a serious information-theoretic look at the problem of modularity (abstraction) and interfaces. The traditional separation theorem is used to justify a particular layering and this paper essentially argues that the same argument can be used to justify many other architectures.
This work explores the impact of uncertainties about the noise and fading in a cognitive radio system that is attempting to detect the presence of a weak primary transmission within the band. We show that under even weak uncertainty, robust detection becomes impossible below a certain SNR, regardless of how many samples we take. This work considers noncoherent, coherent, and feature-detection based strategies for detecting signals and shows that they are all afflicted with SNR Walls, although some are better than others. The capacity/robustness tradeoff is described as a way to compare various detection strategies, and noise-calibration is discussed as a key tool for improving robustness. By example, it is shown that cyclostationary feature detection is suboptimal since it does not achieve all the noise-calibration gains that are possible in a system.
This work studies the reliability function for block communication with noisy feedback relative to expected block-length -- the soft-deadline problem for block messages. Two major issues are addressed: how the noisy feedback can be used to reduce the probability of undetected error, and how synchronization can be maintained between the encoder and decoder for the retransmissions that must inevitably occur.
The final paper shows that at all rates, up to half the Burnashev bound is attainable using any noisy DMC in the feedback path, as long as it has enough capacity. At rates close to capacity, the achievable reliability functions approach the Burnashev bound. The key ideas are to combine erasure-decoding and randomly hashed feedback to get the most utility out of the noisy feedback link, and to use anytime codes over the feedback channel to maintain synchronization between encoder and decoder.
The first paper in this sequence was the first paper to show robustness of reliability increases to noise in the feedback link for DMCs and made the connection to anytime codes for synchronization.
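For context, the Burnashev bound that these papers measure themselves against is E(R) = C1(1 − R/C), where C1 is the largest divergence between two output rows of the channel matrix. Here is a small sketch evaluating it for a binary symmetric channel (the BSC example is my illustration; the papers treat general DMCs):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def burnashev_exponent(p, rate):
    """Burnashev bound E(R) = C1 * (1 - R/C) for a BSC(p), 0 <= rate <= C.
    For the BSC, C1 = D(Ber(p) || Ber(1-p)) = (1-2p) log2((1-p)/p)."""
    capacity = 1 - h2(p)
    c1 = (1 - 2 * p) * math.log2((1 - p) / p)
    return c1 * (1 - rate / capacity)

p = 0.1
C = 1 - h2(p)
for r in (0.0, C / 2, C):
    print(f"R = {r:.3f}: E(R) = {burnashev_exponent(p, r):.3f}")
```

Note that unlike the sphere-packing bound, this line hits zero exactly at capacity with a nonzero slope, which is what makes feedback reliability near capacity so striking.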
This group of papers aimed at a general EECS readership gives a unified treatment of the challenges in sensing spectrum holes. The Proceedings paper was the first paper to really tackle the spatial nature of spectrum holes and it proposes two key metrics for spectrum sensing: the "Fear of Harmful Interference" that captures the sense of safety for primary users as well as the "Weighted Probability of Area Recovered" that captures the sense of performance for secondary users. The book chapter interprets these in the classical binary hypothesis testing framework. The advantage of these metrics is that they can incorporate asymmetric uncertainties between the primary and the secondary without being overly constraining to the architectures for sensing. The Proceedings paper also gives a way to express fading uncertainty and shows how cooperation can improve performance. Finally, it shows how multiband sensing can help resolve certain uncertainties. The book chapter shows how this perspective brings out the critical role of diversity even in single-user problems.
After reviewing anytime (delay-universal) reliability and relating it to classical results on tree codes, this paper solves the lossy source-coding problem for exponentially unstable Markov processes, completely characterizes the nature of information within them, and shows that not all bits are alike even for a single source with a simple additive difference-distortion measure. Resolves a long-standing open problem in information theory (completing progress made by Berger and Gray in the 70s) and demonstrates the usefulness of the information-embedding and equivalence perspective in understanding information.
This resolves the rate-distortion function for Berger's classic source-coding game with a switcher that is allowed non-causal access to the underlying random source realizations. The idea here was to find a way to understand what can happen in situations where the uncertain source is neither completely free nor completely random. (For conceptual motivation, consider a video camera that is being pointed by a non-blind human. Even if independent activities are happening in different directions, the human is pointing the camera based on knowing what is going on.) Modeling this as a random sampling of the different independent activities is clearly an over-idealization of the problem that assumes that the human's goals are entirely orthogonal to the source-code's goals. Universal coding can tell you what to do from an algorithmic point of view, but this paper complements that by telling you how much rate you need to provision for such scenarios. We give a single-letter optimization that computes the relevant rate-distortion function.
Summarizes our initial set of findings regarding opportunistic spectrum use by using sensing to locate unused bands. Identifies robustness to unknown or undermodeled uncertainty as the major issue constraining system design. Shows how position and fading uncertainties give rise to a "sensing link budget" that also depends on usage characteristics. Shows how noise and interference uncertainty give rise to fundamental limits on sensitivity that also depend on the degree to which the signal to be detected is known. Explores the need for cooperation both within a system and among nearby different secondary systems in order to overcome some of these limits.
In particular, the presence of unknown interference from nearby secondary systems is potentially the dominant term in the uncertainty limiting our ability to identify which band is empty. Conceptualizing this in terms of fairness, the paper proposes the need for a sensing MAC protocol to limit the uncertainty about interference. Shows that this effect makes fair opportunistic use practically impossible using radiometer-based sensing, and quantifies the gains possible using coherent processing. Closes with an interpretation in terms of a complexity-conservation principle.
The Allerton conference paper was one of the first papers in the modern cognitive radio and opportunistic spectrum sharing literature. In addition to preliminary thoughts on power control, it also includes SNR Wall results for quantized signals.
Considers the streaming problem for distributed lossless source-coding. The role of feedback is played by the receiver side-information. Upper and lower bounds are given on the reliability function with fixed delay for this problem, and it is shown that despite the Slepian-Wolf result in terms of achievable rates, the delay performance of distributed coding can be much worse than non-distributed coding. This turns out to be related to an earlier result by Jelinek in the context of buffer overflows.
This is a pair of short papers for general audiences with some overlap between them. They give a general overview of spectrum sharing by cognitive radios. The idea of "overhead reduction" is used as the unifying concept and they include some very nice figures. The first is notable for the appearance of semi-empirical data on the magnitude of spectrum holes in the United States Television Bands based on data crunched from the FCC Database of television channels. (The numbers themselves are a bit obsolete already with the Nov 4th FCC ruling changing things slightly. We've got an update on the true numbers.) The second paper expands on the issue of identity and incentives and takes a market-oriented perspective to balance the opportunistic perspective in the first column. The column for SP Magazine does not include an abstract, so here is one below:
The radios of tomorrow will be capable of frequency agility. This is good because under the current static system of frequency assignment a great deal of spectrum remains underused. Using the FCC database of television towers and ITU propagation models, we see that about half of the TV channels are safely available for medium-range use across the United States. However, if we try to detect these opportunities using single radios acting alone, then only about half of these opportunities will typically show up as being guaranteed to be safe. This is because the fear of fading forces radios to try and rule out even very weakly-received channels from use. Such detection of weak channels has its own signal processing challenges that arise from the uncertainty in the noise models. The fading uncertainty can be overcome by using cooperative sensing approaches, but this leads to another problem --- how can cooperative sensing approaches ever be certified and regulated? To avoid putting the government in the precarious position of trying to prove the correctness of code and protocols, we study the overhead that would be imposed by a more reactive system of spectrum policing and punishment. There are two prongs to this strategy. First, radios must be certified to be appropriately vulnerable to punishment in the form of spectrum jails where the more bands a radio wants to expand into, the more severe are the jail sentences. This leads to overhead in the form of innocent radios being wrongfully convicted and thus sitting in jails for a time. The second is for radios to explicitly encode their identity in their transmission patterns to both allow themselves to be more easily caught as well as prevent wrongful conviction by allowing themselves to be distinguished from other suspects. This leads to overhead in the form of spectrum that goes unused for data transmission because it is being implicitly used to convey identity information.
Explores the benefits and limits of within-system cooperation in detecting unused bands for opportunistic use by cognitive radios. Quantifies the benefits of such cooperation in terms of the individual sensitivities of the cognitive radios themselves. Shows the fundamental limits to the cooperative gain that is imposed by allowing a fraction of untrusted nodes.
This gives a computable and non-trivial upper bound on the dirty-paper capacity when there is an unknown block fade on the "dirt." This builds on a technique of Khisti, Wornell, and Lapidoth. In many cases, our bound shows that almost the entire DPC gain is wiped out. In this sense, it is a somewhat surprising result since it is qualitatively different from the usual results for compound channels. As a result, this paper takes a skeptical look at the possibility for "cognitive radio" systems using bands that are already being used by primary transmitters. It says that it is not enough for a cognitive transmitter to decode and know the transmitted interference signal; it must also know the complex phase at which this signal arrives at the receiver.
This short paper shows how to transform classical low-complexity sequential decoding algorithms from channel coding into low complexity controllers that work with the memoryless random observers from the "anytime for control" journal paper. While this simplicity comes at the cost of performance (the uncertainty-focusing bound is not attained), the scheme here allowed for the first long run simulations of provably stable control over truly noisy channels. Prior schemes all had either exponentially growing complexity or no provable stability guarantees. The underlying analysis can be found in this other paper.
This is a semi-serious paper that gives a very general recipe for boosting the reliability function with block-length for network channel coding problems with noiseless feedback (MAC, broadcast, relay, etc.). Because the scheme is absurd, it essentially argues that block-coding reliability functions in such a context are uninteresting since the average nature of the power constraint lets us boost them as large as we would like even in the context of fixed block-lengths. On the technical side, it does resolve a longstanding question left unanswered in Ozarow's classical paper --- namely, what is the reliability function for the Gaussian MAC with feedback at points other than the symmetric rate point.
This is an application of the "fortification" ideas of the "delay not blocks" journal paper within the stabilization context of the "anytime for control" journal paper. This shows how to hit the uncertainty-focusing bound's predicted stabilization performance bound for cases when a small noiseless "flow-control" channel is available to supplement the main noisy feedback channel. It uses event-based sampling to both simplify the combined system and to overcome a technical requirement for finite channel output alphabets. This elucidates the role of the implicit feedback that comes back through the plant itself.
This is a follow-on paper to this paper that evaluates the minimum energy required to communicate to two users simultaneously over short ranges. It also shows how to get better upper-bounds to the error-exponent regions for broadcast channels.
This paper actually goes through the FCC ruling legalizing white-space devices and uses real census data and real TV tower data to estimate how much white space is there. It also shows what the underlying policy tradeoffs are in an easy-to-understand way that gets to the heart of the problem.
This paper addresses the issue of Faulhaber's "hit and run" radios. Dynamic spectrum sharing (whether by markets or cognitive radios) has a problem: how can you make sure that the radios are playing by the rules? A priori certification imposes a heavy hand of regulation, but to make incentives/penalties work, there must be a way to identify offenders. This paper explores a simple MAC-layer approach to identity.
This paper shows that simply using multiple antennas in sensing does not free a single user from fundamental limits on sensitivity. It introduces the idea of "event-based sensing" that tries to find temporary holes in a strong primary user's use of the band. For event-based sensing, a new kind of diversity turns out to be important: interference diversity. This exploits the fact that unintentional emitters and other sources of possibly strong interference tend to be local to a single cognitive user while the primary has a much larger footprint.
This paper asks how far we can move from an a priori model of spectrum regulation in which systems are guaranteed by design not to cause interference to an a posteriori model in which systems are deterred from causing harmful interference through incentives and penalties. A toy game-theoretic model is introduced and used to show that if any such approach aims to have a universal set of incentives, the punishment for cheating must go beyond simply banning the offending device from the band in which it cheats for some extent of time. Instead, the system should have to stake its own band as a kind of virtual collateral against cheating. When cognitive use is viewed as a bandwidth-expanding strategy, bounds can be derived that reveal how good sensing and other enforcement parameters have to be to deter cheating and minimize overhead.
This paper shows that macro-scale features (much larger than the uncertainty) can be used to eliminate SNR walls entirely if we can design primary signals to have such features. This suggests that the fundamental tradeoff is not between primary capacity and secondary robustness but between primary capacity and secondary sensing delay.
This is a serious look at the problem of speeding up the journal review process (taking more than 2 years for many disciplines) by incentivizing timely reviewing. The tension is explored between providing public reputations to encourage good citizenship and keeping the identity of referees anonymous. This is very preliminary work but it gives us hope.
This paper proposes multiband sensing as a way to improve overall sensing robustness and use cooperation in a fundamentally different way. The first insight is to realize that there are two dimensions of "sparsity" in physical environments that current approaches to spectrum sensing do not exploit. First, while shadowing is poorly modeled and potentially varies on a very slow scale geographically, it is not very frequency selective. Second, the economics of real-estate and zoning laws push primary users to share a few towers with each one hosting many transmissions. By sensing multiple bands at once, a radio node can estimate its local shadowing environment and the collective thereby knows which sensor measurements to trust at runtime. The second insight is to capture the idea of political mistrust and incentive misalignment between primary and secondary users by using different uncertainty models while evaluating different metrics. The probability of harmful interference can be evaluated against a more uncertain model while the probability of finding open spectrum can be evaluated against a higher-fidelity model.
This is a short accompaniment to this paper that evaluates the minimum energy required to communicate over short ranges when the bandwidth used is arbitrary.
This shows how to generalize the uncertainty-focusing bound in the lossless source-coding context to the multistream (unequal error protection) context.
This shows how to robustly achieve the empirical first-order capacity of channels whose states are "modeled" as an individual sequence. Our contribution here was to greatly reduce the amount of feedback (relative to a scheme of Shayevitz and Feder) to asymptotically nothing by adapting the chunk/block based coding strategies that we had used earlier for improving the reliability of fixed-delay coding using unreliable feedback for erasure channels.
This work gives universal non-asymptotic bounds on reliability functions for lossless source coding that depend only on the alphabet size, rate, and upper-bound on the entropy. We realized that it is very natural to just know that a compound channel has a certain minimum capacity or a compound source has an upper bound on its entropy. However, such interesting sets of channels and sources are decidedly not convex, and so the usual saddle-point approaches do not give useful bounds. It is also not enough to say that asymptotically we will do as well at runtime as a code tuned to the distribution. The reliability function is most useful at design-time to adequately provision a system to make sure that the desired performance will be obtained robustly. We adapted an approach out of a problem of Gallager, but had to correct a nontrivial error along the way since a subtle non-convexity was overlooked in his original solutions. Because the bounds developed here also hold non-asymptotically, they can also be turned around to apply when alphabet sizes grow with block-length.
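As a concrete instance of the kind of reliability function involved, the Gallager-style random-binning exponent for fixed-rate lossless coding of a memoryless source is max over ρ in [0,1] of ρR − E0(ρ), with E0(ρ) = (1+ρ) log2 Σ p^(1/(1+ρ)). The Bernoulli example and the grid search below are my own illustrative choices, not the universal bounds of the paper:

```python
import math

def e0(probs, rho):
    """Gallager's source-coding E0 function, in bits."""
    return (1 + rho) * math.log2(sum(p ** (1 / (1 + rho)) for p in probs))

def binning_exponent(probs, rate, grid=2000):
    """Random-binning error exponent at a given rate (bits/symbol);
    positive exactly when the rate exceeds the source entropy."""
    return max(rho * rate - e0(probs, rho)
               for rho in (i / grid for i in range(grid + 1)))

probs = [0.2, 0.8]                         # Bernoulli(0.2) source
H = -sum(p * math.log2(p) for p in probs)  # entropy, about 0.722 bits
print(f"E at R = H:   {binning_exponent(probs, H):.4f}")
print(f"E at R = 0.9: {binning_exponent(probs, 0.9):.4f}")
```

The exponent is zero right at the entropy and grows with the provisioned rate margin, which is why a design-time upper bound on the entropy is exactly the information needed to provision a system.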
This extends some of the results of this paper to the context of lossy coding with a per-letter peak-distortion constraint.
Most of this paper is dedicated to showing that it is possible to achieve the random-coding error exponent with fixed delay for streaming data over compound DMCs (i.e., channels known only to lie within some uncertain set). The interesting aspect from a control context is that it gives an easy-to-check sufficient condition for the stabilizability of an unstable system over such compound channels. This sufficient condition only depends on the compound channel capacity and the alphabet size.
This shows that the need for cooperation increases significantly when attempting to share spectrum with primary users that have smaller footprints. Relevant to the case of wireless microphones in the television bands.
This paper shows how sequential decoding (stack algorithm) can be applied to decode distributed source codes where the side-information is only available at the receiver. The arguments extend directly to joint source-channel coding for distributed settings.
A new approach to dealing with strong narrowband in-band interference while attempting to detect very weak almost-periodic signals. It involves an approximate approach, tailored to software-defined radios, that works flexibly with very little computational impact.
This shows how to use an approximation idea to dramatically reduce the computational burden of searching for a known set of wideband signals without having a very precise frequency reference. This turns out to be the core problem in very low SNR GPS, and the algorithms given here were the foundations of the GPS approach we developed in the software-defined radio context.
Looks at the source-coding analog of the "anytime" or delay-universal idea. Here, the channels are noiseless and we show how to do distributed compression of streaming sources at all rates within the Slepian-Wolf rate region. These results hold for memoryless processes and the same error exponents with delay can be achieved for both a known model decoded using ML as well as universally when the model probabilities are unknown. Because the code is universal over delay, it allows us to achieve asymptotically zero probability of error with delay for all generic sources.
In this sense, it provides a variable-delay analog of truly lossless communication that continues to hold for generic distributed sources. This can be interpreted as an answer to problem posed by Slepian and Wolf in their original paper. The technical innovation here is a sequential form of "binning" and an approach to dealing with the many possible error events in a distributed setting.
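The binning idea itself is easy to illustrate in the simpler block setting (this toy is my own sketch, not the sequential scheme from the paper): hash the source word down to a short bin index, and let the decoder pick the word in that bin closest to its side information.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 16, 10, 0.05   # block length, bin bits (rate 10/16), correlation noise

H = rng.integers(0, 2, size=(m, n))       # random binning (hash) matrix over GF(2)
x = rng.integers(0, 2, size=n)            # source word
y = x ^ (rng.random(n) < p).astype(int)   # side information: x through a BSC(p)

bin_index = H @ x % 2                     # encoder transmits only these m bits

# Decoder: enumerate all 2^n words, keep those that hash to the same bin,
# and pick the one closest in Hamming distance to the side information.
words = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
in_bin = words[np.all(words @ H.T % 2 == bin_index, axis=1)]
decoded = in_bin[np.argmin((in_bin ^ y).sum(axis=1))]

# x is in its own bin, so the decoded word is at least as close to y as x is;
# with rate above H(X|Y), it is typically x itself.
print("decoded == x:", bool(np.array_equal(decoded, x)))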
This expands on the "delay not blocks" journal paper by considering joint source/channel coding in which the channel is noisy and the source is nonuniform. In the block-coding case, it had been established earlier by Csiszar that traditional separation fails to achieve the best possible block exponents. We give new (and higher) upper bounds and achievable schemes when a hard end-to-end delay constraint is considered. When feedback is available, we give even higher bounds as well as a scheme for the erasure channel that meets the bounds. This scheme suggests that for many channels, separation is actually possible in the engineering sense by using appropriate variable-length codes at the interface. It also effectively says that the delay-reliability gains we get by moving to nonblock codes with feedback for channel coding are not merely a manifestation of the traditional joint-vs-separate gap in block coding.
This short note builds upon our prior work in this TAPAS paper as well as this ICC paper and our earlier papers at WirelessCom and Allerton. The focus here is on the sensor network aspect of enabling cognitive radio.
This paper builds upon this ICC paper and our earlier papers at WirelessCom and Allerton. The focus here is on the minimum amount of coordination required to enable cognitive radio. The need to control uncertainty about interference from nearby cognitive radio nodes during spectrum sensing is identified as a major issue.
Explores whether it is worth knowing the codebook from which interference was generated, under the assumption that the interference signal is too weak (or too high rate) to decode correctly. Uses a simple genie-aided argument and the Gaussian MAC converse to argue that in such cases, there is no advantage to knowing the codebook.
Gives a cute anytime generalization of orthogonal signaling and shows how one can theoretically achieve maximally energy-efficient communication in a delay-universal (receivers get to pick their desired delays, it is not imposed by the transmitter) way. The code itself is based on a variation of pulse position modulation and the results here are partially generalized to Verdu's capacity per unit cost formulation. The interesting feature here is how simple the analysis and proof of optimality becomes for this infinite-bandwidth channel without feedback. This paper is currently being augmented with a (sadly complicated to analyze) scheme for the case with feedback.
A dull-sounding title (it was originally titled: "Ultrastacked refinement, frequency-following probes, sub-millisecond chunking, and mixed references for position determination"), but it really is a host of new adaptive approximate signal processing techniques to enable very fast GPS operation in challenging environments. It represents a new way of thinking about this sort of adaptive signal processing problem that combines ideas from computer science with traditional SP algorithms in the context of flexible software-defined radios for GPS.
An approach to adaptive interpolation suited for software defined radio GPS which operates slower than real-time in challenging environments and hence must use data more carefully than traditional approaches to the problem.
Studies Cramer-Rao bounds for localization in large UWB-based sensor networks in both the anchored and anchor-free (no absolute position references) cases. Shows that working in purely local coordinates sometimes gives better performance. Gives locally computable upper and lower bounds to the CRB that depend only on the neighboring nodes in the network. The goal of this research was to develop bounds that would help a network and/or its nodes to decide what kinds of algorithms they need to deploy based on the localization environment that they find themselves in.
This paper studies sequential source coding in the average entropy rate and average distortion context rather than looking at an explicit representation in terms of bits. The problem is cast as an optimization problem on an appropriate convex set of probability measures. Existence and properties of optimal sequential codes are explored. A sequential rate distortion theorem is proved and a construction given to show that in general, a "causality gap" exists: sequentiality will not come for free.
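For reference, the sequentiality constraint can be written as a restriction to causal reconstruction kernels. This is a sketch of the standard formulation, not necessarily the paper's exact notation:

```latex
% Sequential (causal) rate distortion: reconstructions may depend only on
% past and present source symbols, never on future ones.
R^{\mathrm{seq}}(D) \;=\; \lim_{n\to\infty} \min \frac{1}{n}\, I(X^n;\hat{X}^n),
\qquad
p(\hat{x}^n \mid x^n) \;=\; \prod_{t=1}^{n} p(\hat{x}_t \mid x^t, \hat{x}^{t-1}),
\qquad
\frac{1}{n}\sum_{t=1}^{n} \mathbb{E}\, d(X_t,\hat{X}_t) \le D .
```

The causality gap is then the nonnegative difference between this quantity and the classical (noncausal) rate-distortion function R(D).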
Shows how anytime (delay-universal) reliability can be achieved over a degraded broadcast channel using ML decoding and no feedback. Adapts the techniques of our Slepian-Wolf paper to this context.
Calculates the exact anytime capacity for the Gilbert-Elliott (GE) channel with noiseless feedback of both the output and the channel state. Shows that the probability of error can be made to decrease doubly exponentially with delay at all rates below capacity and that furthermore, this can be done in a delay-universal, or anytime, fashion. A simpler form of the hybrid-control-based approach used in this paper from Allerton 2004.
Takes a problem introduced by Jack Wolf, and gives it a secrecy dimension. Uses the idea of leaking information to explore in what sense "binning" is all that we can do in network source coding scenarios. Has a partial result showing that any entropically efficient scheme for broadcast source coding must be "like binning" in that it must leak significant information to eavesdroppers, even if the source code has noise-free access to the recipient side information.
This paper explores the limits on power scaling in the cognitive radio setting. Fundamentally, it establishes the required detection capability of a cognitive radio in order to be able to transmit at a certain level of power. In general, the radio must be able to detect substantially weaker signals, and the tradeoff rule depends on how many such secondary cognitive radios are going to be in operation, as well as the nature of their power requirements. In particular, we explore the effect of the heterogeneous propagation loss functions likely to occur in practice.
Shows how anytime (delay-universal) reliability can be achieved over multiple-access (MAC) channels using ML decoding and no feedback. Adapts the techniques of our Slepian-Wolf paper to this context.
Shows how to take data and use it to correct for systematic biases that might be introduced in a low SNR GPS environment by looking at the typical additional delay introduced by multipath in and around buildings.
Studies Cramer-Rao bounds for tracking objects in dense UWB-based sensor networks in the high SNR regime. Explores the idea of using channel estimates from the UWB communication system to position objects without tags by fusing multipath data assuming specular reflections. For a dense network, the wireless channels are not independent since the paths interact with the same objects. Gives asymptotic bounds for both centralized and decentralized processing and also gives an order-optimal algorithm for both cases. Proposes a heuristic solution to the problem of multiple objects.
Shows how to use approximate signal processing techniques and linear programming for GPS in order to do very precise and flexible region based digital restrictions management (DRM) in which the demands on the device are roughly proportional to the specificity of the region requirement.
Calculates the exact feedback anytime reliabilities for packet erasure channels where the packets can be of variable length but data must be communicated explicitly within the body of the packet. Shows the reliability advantage to having only an average constraint on the packet length as opposed to having to use fixed sized packets.
Considers a channel that either adds white Gaussian noise or independently erases the input. Calculates the feedback anytime reliability function for the channel and shows how to achieve it by using a hybrid control strategy involving both a queue and a time-varying linear system.
Examines the control of linear systems over noisy communication channels, with a particular focus on the LQG problem. Presents the sequential rate distortion framework and gives information patterns for which it tightly characterizes the achievable performance for the control system.
An approach to combine information from different acquired GPS signals to help speed up the acquisition of additional ones. This works through a linear programming formulation, though the key benefit is in the reduction of frequency uncertainty once any individual GPS signal has been acquired.
A systems perspective on how to put the various algorithmic components together to do GPS.
This pair of patents describes how to use knowledge of the GPS data message to enable longer coherent integration.
This article surveys the algorithms that I helped develop while at Enuvis to do very low SNR signal detection when the signal is a known GPS transmission. The algorithmic framework we developed was reflective and adaptive. The algorithms tracked the uncertainty facing the system and switched modes based on efficiency considerations. That enabled rapid acquisition of strong signals while still allowing for slower acquisition of weaker ones. The algorithms developed here are an order of magnitude better than previous work. The work here is absolutely first rate but this paper does not do it justice. The technical details are buried in the patents.
Re-examines Witsenhausen's famous counterexample showing that linear control need not be optimal for LQG problems without a classical information pattern. Uses ideas from quantization to show that there are nonlinear strategies that can do asymptotically infinitely better than the best linear strategies in a particular limit of problem parameters.
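The flavor of the quantization argument is easy to reproduce numerically. The Monte Carlo sketch below uses Witsenhausen's classic parameters (sigma = 5, k = 0.2) and compares one representative linear point against a simple two-point quantizer; the specific strategies and estimators are illustrative choices, not the paper's asymptotic construction:

```python
import math
import random

random.seed(0)

# Witsenhausen's counterexample: x0 ~ N(0, sigma^2); controller 1 pays
# k^2 * u1^2 to move the state to x1 = x0 + u1; controller 2 observes
# y = x1 + N(0, 1) and pays (x1 - u2)^2.
SIGMA, K, TRIALS = 5.0, 0.2, 100_000

def avg_cost(u1_fn, u2_fn):
    """Monte Carlo estimate of E[k^2 u1^2 + (x1 - u2)^2]."""
    total = 0.0
    for _ in range(TRIALS):
        x0 = random.gauss(0.0, SIGMA)
        u1 = u1_fn(x0)
        x1 = x0 + u1
        y = x1 + random.gauss(0.0, 1.0)
        total += K * K * u1 * u1 + (x1 - u2_fn(y)) ** 2
    return total / TRIALS

# A representative linear point: u1 = 0 with the matched linear MMSE
# estimator of x1 (close to, but not exactly, the best linear strategy).
linear = avg_cost(lambda x0: 0.0,
                  lambda y: (SIGMA**2 / (SIGMA**2 + 1.0)) * y)

# Two-point quantization: force x1 to +/- sigma, then use the matched
# MMSE estimator sigma * tanh(sigma * y).
quantized = avg_cost(lambda x0: math.copysign(SIGMA, x0) - x0,
                     lambda y: SIGMA * math.tanh(SIGMA * y))

print(f"linear ~ {linear:.3f}, quantized ~ {quantized:.3f}")
```

Even this crude two-point quantizer already beats the linear strategy by roughly a factor of two at these parameters; the asymptotic result in the paper shows the ratio can be made unbounded along a suitable sequence of problems.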
Special thanks to our past and present research sponsors: