Given today's application demands and computational landscape, it does not always make sense to think of signal processing as a fixed sequence of blocks performing a particular task. My own interest in this area comes from the wireless side, particularly as it relates to questions of spectrum sharing and cognitive radio. I believe that much signal processing for such applications will have to be both robust and adaptive.
By "robust" signal processing, I mean processing that worries about the question that needs to be asked as well as what needs to be done to answer it. I have explored this aspect in the framework of cognitive radio, where we have shown that the level at which we need to detect a primary user depends critically on what we wish to do. The question "is the band clear to use?" means something different when we are aiming to use a very low transmit power as compared to when we want to transmit at higher power. The common concern is to robustly achieve the aim of non-interference with the primary user.
By "adaptive," I am not referring to the traditional adaptation of filter weights. Instead, I mean that the algorithm needs to adapt its approach to the difficulty of the problem and to the computational resources it has access to. For example, in the cognitive radio problem of detecting the use of a band, it makes sense to first look at the in-band received energy, since that is an easy way of detecting powerful users. After some time, it makes sense to switch to coherently searching for known pilot signals, since that performs better at low SNRs.
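As a toy illustration of this two-stage idea, a cheap radiometer pass followed by a coherent pilot correlation might look like the sketch below. Every number here (the pilot sequence, thresholds, SNRs) is invented for the example; it is a sketch of the strategy, not any particular deployed detector.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096                     # samples in one sensing window
noise_power = 1.0            # assumed perfectly known here for simplicity
pilot = rng.choice([-1.0, 1.0], size=N)   # hypothetical known pilot sequence

def energy_detect(x, margin=1.15):
    """Stage 1 radiometer: cheap, catches powerful users quickly."""
    return float(np.mean(x ** 2)) > margin * noise_power

def coherent_detect(x, thresh=4.0):
    """Stage 2 matched filter: coherent integration against the known
    pilot trades sensing time for sensitivity at low SNR."""
    stat = abs(float(np.dot(pilot, x))) / np.sqrt(N * noise_power)
    return stat > thresh

noise = rng.normal(0.0, np.sqrt(noise_power), size=N)
weak_rx = 0.15 * pilot + noise    # primary at roughly -16 dB SNR
strong_rx = 1.0 * pilot + noise   # primary at 0 dB SNR

found_strong_by_energy = energy_detect(strong_rx)   # easy catch for stage 1
found_weak_by_energy = energy_detect(weak_rx)       # buried below the margin
found_weak_coherently = coherent_detect(weak_rx)    # pilot integration wins
```

The switch between the two modes is exactly the kind of adaptation meant above: start with the cheap test, and spend the coherent-integration effort only when the easy answer does not arrive.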
On the computational side, I am also interested in adaptively shifting computations in a system from weaker nodes to more capable ones when that makes sense. For example, if a wireless access point wants to set up a long-range mesh using a band that it believes is unused, it can do some of the required processing on its own. But if we know that the access point will only perform this function when it has a laptop connected to it, it may not make sense to build the worst-case signal-processing functionality into the access point. Rather, it could recruit the more powerful laptop to help out with the occasional difficult computation, since it is in both of their interests to complete the computation fast.
In tomorrow's world, the old conception of the roles of infrastructure and consumer devices might need radical rethinking. Some infrastructure could be frequency-independent with sophisticated sensors while the computational loads are offloaded onto more capable consumer devices!
Summarizes our major findings regarding opportunistic spectrum use by using sensing to locate unused bands. Identifies the major effects, the idea of a sensing link budget, the fundamental limits on sensitivity, the need for within-system cooperation, the role for cooperation among systems, and the power and limits of coherent detection strategies to overcome these limits. Part II focuses on reviewing cyclostationary feature detectors, discussing the key hardware limitations, and then showing how they might be overcome using time-domain or spatial-domain filtering techniques.
We take some baby steps towards formalizing the issues involved in Spectrum Zoning (one of the key tasks assigned to the FCC, the NTIA, and the World Radiocommunication Conference) as distinct from the problem of actually assigning bands to users. The key challenge is to understand why flexibility is useful, and we give a formulation in which it is sometimes rational to operate away from the Pareto-optimal curve because the flexibility is "worth it." This paper also has some preliminary maps showing just how useful the TV whitespaces might be.
This paper goes through the FCC ruling legalizing white-space devices and uses real census data and real TV-tower data to estimate how much white space there is. It also shows what the underlying policy tradeoffs are in an easy-to-understand way that gets to the heart of the problem.
This is a pair of short papers for general audiences with some overlap between them. They give a general overview of spectrum sharing by cognitive radios, with "overhead reduction" used as the unifying concept, and they include some very nice figures. The first is notable for the appearance of semi-empirical data on the magnitude of spectrum holes in the United States television bands, based on data crunched from the FCC database of television channels. (The numbers themselves are a bit obsolete already with the Nov 4th FCC ruling changing things slightly. We've got an update above.) The second paper expands on the issue of identity and incentives and takes a market-oriented perspective to balance the opportunistic perspective in the column. The DSP column does not include an abstract, so here is one:

The radios of tomorrow will be capable of frequency agility. This is good because under the current static system of frequency assignment a great deal of spectrum remains underused. Using the FCC database of television towers and ITU propagation models, we see that about half of the TV channels are safely available for medium-range use across the United States. However, if we try to detect these opportunities using single radios acting alone, then only half of these opportunities will typically show up as being guaranteed to be safe. This is because the fear of fading forces us to try to rule out even very weak channels from use. Such detection of weak channels has its own signal processing challenges that arise from uncertainty in the noise models. The fading uncertainty can be overcome by using cooperative sensing approaches, but this leads to another problem --- how can cooperative sensing approaches ever be certified and regulated?
To avoid putting the government in the precarious position of trying to prove the correctness of code and protocols, we study the overhead that would be imposed by a more reactive system of spectrum policing and punishment. There are two prongs to this strategy. First, radios must be certified to be appropriately vulnerable to punishment in the form of spectrum jails where the more bands a radio wants to expand into, the more severe are the jail sentences. This leads to overhead in the form of innocent radios being wrongfully convicted and thus sitting in jails for a time. The second is for radios to explicitly encode their identity in their transmission patterns to both allow themselves to be more easily caught as well as prevent wrongful conviction by allowing themselves to be distinguished from other suspects. This leads to overhead in the form of spectrum that goes unused for data transmission because it is being implicitly used to convey identity information.
This paper, aimed at a general EECS readership, addresses the question of Spectrum Holes and focuses on their spatial nature. It proposes two key metrics for spectrum sensing: the "Fear of Harmful Interference," which captures the sense of safety for primary users, and the "Weighted Probability of Area Recovered," which captures the sense of performance for secondary users. The advantage of these metrics is that they can incorporate asymmetric uncertainties between the primary and the secondary without being overly constraining on the architectures for sensing. The paper also gives a way to express fading uncertainty and shows how cooperation can improve performance. Finally, it shows how multiband sensing can help resolve certain uncertainties.
This paper shows that macro-scale features (much larger than the uncertainty) can be used to eliminate SNR walls entirely if we can design primary signals to have such features. This suggests that the fundamental tradeoff is not between primary capacity and secondary robustness but between primary capacity and secondary sensing delay.
This paper asks how far we can move from an a priori model of spectrum regulation, in which systems are guaranteed by design not to cause interference, to an a posteriori model, in which systems are given incentives and penalties so that they choose not to cause harmful interference. A toy game-theoretic model is introduced and used to show that if any such approach aims to have a universal set of incentives, the punishment for cheating must go beyond simply banning the offending device from the band in which it cheats for some length of time. Instead, the system should have to stake its own band as a kind of virtual collateral against cheating. When cognitive use is viewed as a bandwidth-expanding strategy, bounds can be derived that reveal how good sensing and other enforcement parameters have to be to deter cheating and minimize overhead.
This paper addresses the issue of Faulhaber's "hit and run" radios. Dynamic spectrum sharing (whether by markets or cognitive radios) has a problem: how can you make sure that the radios are playing by the rules? A priori certification imposes a heavy hand of regulation, but to make incentives and penalties work, there must be a way to identify offenders. This paper explores a simple MAC-layer approach to identity.
This family of partially overlapping papers explores the impact of uncertainties about the noise and fading in a cognitive radio system that is attempting to detect the presence of a weak primary transmission within the band. We show that under even weak uncertainty, robust detection becomes impossible below a certain SNR, regardless of how many samples we take. This work considers noncoherent, coherent, and feature-detection based strategies for detecting signals and shows that they are all afflicted with SNR walls, although some are better than others. The capacity/robustness tradeoff is described as a way to compare various detection strategies, and noise calibration is discussed as a key tool for improving robustness. By example, it is shown that standard cyclostationary feature detection is suboptimal since it does not achieve all the noise-calibration gains that are possible in a system. With noise calibration, the uncertain color of the noise becomes more important than the uncertainty in the overall noise intensity.
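For the radiometer, the wall admits a simple closed form: if the noise power is only known to lie in an interval [σ²/ρ, ρσ²], then below SNR = (ρ² − 1)/ρ no number of samples can make the energy detector robust. A small calculator makes the severity of the effect concrete (the 1 dB and 0.1 dB uncertainty figures are just illustrative inputs):

```python
import math

def radiometer_snr_wall_db(noise_uncertainty_db):
    """SNR wall for an energy detector whose noise power is only known to
    lie in [sigma^2 / rho, rho * sigma^2]: below (rho^2 - 1) / rho, no
    amount of averaging makes the detector robust."""
    rho = 10.0 ** (noise_uncertainty_db / 10.0)
    return 10.0 * math.log10((rho ** 2 - 1.0) / rho)

# Even a 1 dB noise-power uncertainty puts the wall around -3.3 dB SNR;
# shrinking the uncertainty to 0.1 dB only pushes it down near -13.4 dB.
wall_1db = radiometer_snr_wall_db(1.0)
wall_01db = radiometer_snr_wall_db(0.1)
```

The point of the calculation is that calibration effort buys sensitivity only logarithmically: each tenfold reduction in noise uncertainty lowers the wall by roughly 10 dB, which is why the papers above emphasize coherent processing and noise calibration rather than longer averaging.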
This paper shows that simply using multiple antennas in sensing does not free a single user from fundamental limits on sensitivity. It introduces the idea of "event-based sensing" that tries to find temporary holes in a strong primary user's use of the band. For event-based sensing, a new kind of diversity turns out to be important: interference diversity. This exploits the fact that unintentional emitters and other sources of possibly strong interference tend to be local to a single cognitive user while the primary has a much larger footprint.
This paper takes a skeptical look at the possibility of "cognitive radio" systems using bands that are already being used by primary transmitters by exploiting writing-on-dirty-paper style ideas. A bound is given for DPC coding with uncertain channel phase information, and this bound is shown to essentially wipe out all the DPC coding gain in many cases. It says that it is not enough for a cognitive transmitter to decode and know the interfering signal; it must also know the complex phase at which this signal arrives at the receiver.
These papers propose multiband sensing as a way to improve overall sensing robustness and use cooperation in a fundamentally different way. The first insight is to realize that there are two dimensions of "sparsity" in physical environments that current approaches to spectrum sensing do not exploit. First, while shadowing is poorly modeled and potentially varies on a very slow scale geographically, it is not very frequency selective. Second, the economics of real estate and zoning laws push primary users to share a few towers, with each one hosting many transmissions. By sensing multiple bands at once, a radio node can estimate its local shadowing environment, and the collective thereby knows which sensor measurements to trust at runtime. The second insight is to capture the idea of political mistrust and incentive misalignment between primary and secondary users by using different uncertainty models while evaluating different metrics. The probability of harmful interference can be evaluated against a more uncertain model while the probability of finding open spectrum can be evaluated against a higher-fidelity model.
This shows that the need for cooperation increases significantly when attempting to share spectrum with primary users that have smaller footprints.
This short note builds upon our prior work in this TAPAS paper as well as this ICC paper and our earlier papers at WirelessCom and Allerton. The focus here is on the sensor network aspect of enabling cognitive radio.
This paper builds upon this ICC paper and our earlier papers at WirelessCom and Allerton. The focus here is on the minimum amount of coordination required to enable cognitive radio. The need to control uncertainty about interference from nearby cognitive radio nodes during spectrum sensing is identified as a major issue.
Explores whether it is worth knowing the codebook that interference has come from, under the assumption that the interference signal is too weak (or too high rate) to be decoded correctly. Uses a simple genie-aided argument and the Gaussian MAC converse to argue that in such cases, there is no advantage to knowing the codebook. Readers might find the associated presentation a useful accompaniment.
Explores the need for cooperation among nearby but distinct secondary systems in order to robustly detect unused bands for opportunistic use. Identifies the presence of unknown interference from nearby secondary systems as the dominant term in the uncertainty limiting our ability to identify which band is empty. Conceptualizes this in terms of fairness, and proposes a sensing MAC protocol to limit the uncertainty about interference. Shows that this effect makes fair opportunistic use practically impossible using radiometer-based sensing, and quantifies the gains possible using coherent processing. Closes with an interpretation in terms of a complexity-conservation principle.
Explores the benefits and limits of within-system cooperation in detecting unused bands for opportunistic use by cognitive radios. Quantifies the benefits of such cooperation in terms of the individual sensitivities of the cognitive radios themselves. Shows the fundamental limits on the cooperative gain that are imposed by allowing a fraction of untrusted nodes.
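The fading-diversity side of this cooperative gain is easy to sketch with a Monte-Carlo experiment. The model below (independent Rayleigh fading across radios and an OR-rule fusion of their decisions) is a deliberate simplification for illustration, not the paper's full setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def all_faded_prob(n_sensors, margin_db, trials=200_000):
    """Probability that every one of n_sensors independently Rayleigh-faded
    radios receives the primary below its detection threshold, when the
    average received power sits margin_db above that threshold."""
    margin = 10.0 ** (margin_db / 10.0)
    # Rayleigh amplitude fading -> exponentially distributed power gain (mean 1).
    gains = rng.exponential(1.0, size=(trials, n_sensors))
    return float(np.mean(np.all(gains * margin < 1.0, axis=1)))

p1 = all_faded_prob(1, 5.0)   # a lone radio is deeply faded fairly often
p5 = all_faded_prob(5, 5.0)   # five cooperating radios: exponentially rarer
```

Under an OR rule, the collective misses only when every radio fades simultaneously, so the miss probability falls exponentially in the number of (trusted, independent) radios; this is why cooperation can substitute for per-radio sensitivity margin, and why the fraction of untrusted nodes caps the achievable gain.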
This paper explores the limits on power scaling in the cognitive radio setting. Fundamentally, it establishes the detection capability that a cognitive radio requires in order to be able to transmit at a certain power level. In general, the radio must be able to detect substantially weaker signals, and the tradeoff rule depends on how many such secondary cognitive radios are going to be in operation, as well as the nature of their power requirements. In particular, we explore the effect of the heterogeneous propagation loss functions likely to occur in practice.
Most of the results here are covered in greater detail within Niels Hoven's MS Thesis.
A first-principles analysis of the basic problem of cognitive radio: detecting unused bands so that secondary users can use them. Shows that the SNR at which we must detect depends on the power at which we wish to transmit as well as the degree of protection desired for the primary users. Shows that without explicit pilot or training sequences transmitted by the primary system, knowledge of just the primary modulation scheme is almost useless to the secondary users, in that performance is as bad as simple radiometry. Shows how the energy detector is not robust to receiver noise uncertainty in low SNR environments, and that if the receiver is quantized, this lack of robustness extends to all possible detectors.
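For intuition on why radiometry gets expensive as the target detection SNR drops, the classical sample-count estimate for an energy detector with perfectly known noise power scales like 1/SNR². A back-of-the-envelope version is below; the constant factor depends on modeling conventions (real vs. complex samples, exact test statistic), so only the scaling should be trusted:

```python
from statistics import NormalDist

def radiometer_samples(snr, pfa=0.01, pmd=0.01):
    """Approximate low-SNR sample count for an energy detector with
    perfectly known noise power. snr is linear (0.01 = -20 dB); the
    key point is the 1/snr^2 growth, not the constant out front."""
    qinv = lambda p: NormalDist().inv_cdf(1.0 - p)   # Gaussian tail inverse
    return 2.0 * (qinv(pfa) + qinv(pmd)) ** 2 / snr ** 2

# Dropping the target SNR by 10 dB costs 100x more samples.
ratio = radiometer_samples(0.001) / radiometer_samples(0.01)
```

This quadratic cost is only the benign part of the story: once the noise power itself is uncertain, no sample count suffices below the wall, which is exactly the robustness failure the paper identifies.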
This set of slides from a subsequent presentation might also prove useful for readers interested in this subject.
Studies Cramer-Rao bounds for localization in large UWB-based sensor networks in both the anchored and anchor-free (no absolute position references) cases. Shows that working in purely local coordinates sometimes gives better performance. Gives locally computable upper and lower bounds to the CRB that depend only on the neighboring nodes in the network. The goal of this research was to develop bounds that would help a network and/or its nodes to decide what kinds of algorithms they need to deploy based on the localization environment that they find themselves in.
Studies Cramer-Rao bounds for tracking objects in dense UWB-based sensor networks in the high SNR regime. Explores the idea of using channel estimates from the UWB communication system to position objects without tags by fusing multipath data assuming specular reflections. In a dense network, the wireless channels are not independent since the paths interact with the same objects. Gives asymptotic bounds for both centralized and decentralized processing and also gives an order-optimal algorithm for both cases. Proposes a heuristic solution to the problem of multiple objects.
This article surveys the algorithms that I helped develop while at Enuvis to do very low SNR signal detection when the signal is a known GPS transmission. The algorithmic framework we developed was reflective and adaptive. The algorithms tracked the uncertainty facing the system and switched modes based on efficiency considerations. That enabled rapid acquisition of strong signals while still allowing for slower acquisition of weaker ones. The algorithms developed here are an order of magnitude better than previous work.
A dull-sounding title (it was originally titled: "Ultrastacked refinement, frequency-following probes, sub-millisecond chunking, and mixed references for position determination"), but it really is a host of new adaptive approximate signal processing techniques to enable very fast GPS operation in challenging environments. It represents a new way of thinking about this sort of adaptive signal processing problem, one that combines ideas from computer science and traditional SP algorithms in the context of flexible software-defined radios for GPS.
An approach to adaptive interpolation suited for software-defined-radio GPS, which operates slower than real time in challenging environments and hence must use data more carefully than traditional approaches to the problem.
Shows how to take data and use it to correct for systematic biases that might be introduced in a low SNR GPS environment, by looking at the typical additional delay introduced by multipath in building environments.
Shows how to use approximate signal processing techniques and linear programming for GPS in order to do very precise and flexible region-based digital restrictions management (DRM), in which the demands on the device are roughly proportional to the specificity of the region requirement.
A new approach to dealing with strong in-band narrowband interference while attempting to detect very weak almost-periodic signals. It involves an approximate approach, tailored to software-defined radios, that works flexibly with very little computational impact.
This shows how to use an approximation idea to dramatically reduce the computational burden of searching for a known set of wideband signals without a very precise frequency reference. This turns out to be the core problem in very low SNR GPS, and the algorithms given here were the foundation of the GPS approach we took in the software-defined-radio context.
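One standard building block in this kind of search is worth sketching: an FFT evaluates the correlation against all N possible code delays at once, turning an O(N²) delay scan into O(N log N) work per frequency hypothesis. The code, delay, and noise level below are invented for illustration; this is the generic trick, not the paper's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
code = rng.choice([-1.0, 1.0], size=N)   # stand-in for a known PN code

# Received signal: the code circularly delayed by 200 chips, buried in noise.
true_delay = 200
rx = np.roll(code, true_delay) + rng.normal(0.0, 3.0, size=N)

def acquire(rx, code):
    """Evaluate all N circular correlation lags at once via the FFT and
    return the lag with the largest correlation magnitude."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
    return int(np.argmax(np.abs(corr)))

estimated_delay = acquire(rx, code)   # recovers the 200-chip delay
```

In the actual low-SNR GPS problem this correlation sits inside an outer loop over frequency hypotheses, and it is that outer search which the approximation ideas in the paper attack.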
An approach to combine information from different acquired GPS signals to help speed up the acquisition of additional ones. This works through a linear programming formulation, though the key benefit is in the reduction of frequency uncertainty once any individual GPS signal has been acquired.
A systems perspective on how to put the various algorithmic components together to do GPS.
This pair of patents describes how to use knowledge of the data message to enable longer coherent integration.
Special thanks to our past and present research sponsors: