Real-time wireless link reliability estimation is a fundamental building block for self-organization of multihop sensor networks. Observed connectivity at low power is more chaotic and unpredictable than in wireless LANs, and available resources are severely constrained. We seek estimators that react quickly to large changes, yet are stable, have a small memory footprint, and are simple to compute. We create a simple model that generates link loss characteristics similar to empirical traces collected under different contexts. With this model, we simulate a variety of estimators, and use the simple exponentially weighted moving average (EWMA) estimator as a basis for comparison. We find that recently proposed flip-flop estimators are not superior; however, our cascaded EWMA on windowed averaging is very effective.
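As a concrete illustration of the kind of estimator compared in this study, the sketch below cascades an EWMA on top of windowed packet-reception averages. The window size, gain, and class name are illustrative assumptions, not the tuned parameters or implementation from the study.

```python
class WindowedEWMAEstimator:
    """Sketch of an EWMA cascaded on windowed packet-reception averages.

    Packet successes are averaged over a fixed window, and an EWMA is then
    applied to the sequence of window means.  Window size and gain are
    illustrative choices, not the tuned values from the study.
    """

    def __init__(self, window=30, alpha=0.6):
        self.window = window      # packets per averaging window
        self.alpha = alpha        # EWMA gain applied to window means
        self.received = 0
        self.expected = 0
        self.estimate = None      # current link reception-rate estimate

    def packet_event(self, success):
        """Record one expected packet; success=True if it actually arrived."""
        self.expected += 1
        if success:
            self.received += 1
        if self.expected == self.window:
            mean = self.received / self.expected
            if self.estimate is None:
                self.estimate = mean                      # seed with first window
            else:
                self.estimate = (self.alpha * self.estimate
                                 + (1 - self.alpha) * mean)
            self.received = self.expected = 0
        return self.estimate
```

Because updates happen only at window boundaries, such an estimator stays stable between windows, reacts within one window to large changes, and needs only a few words of state per neighbor.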
Motivated by our technical report on the empirical study of epidemic algorithms in large-scale multihop networks, we have pursued the idea of neighborhood discovery based on passive link quality estimation, and the use of such information in different distributed algorithms for routing tree construction. One interesting routing algorithm we have developed seeks to obtain the most reliable routing path from each node to the base station in a distributed manner, based on local link quality estimates. Initial results, ranging from static analysis to low-level simulations in TOSSIM, were promising and motivate further investigation against other distributed routing algorithms, such as naïve broadcast-based routing and shortest path routing over reliable neighbors. Lacking a visually rich simulator, we adopted Vanderbilt's Matlab simulator for TinyOS (Prowler) and enhanced it with features that allow detailed study of routing algorithms and GUI support to visualize the routing tree evolution. To capture a more realistic model of the radio channel for the simulator, we performed new empirical experiments on the new MICA platform to create a probabilistic model of the radio channel and packet collisions. Current results of this work were presented at the Berkeley NEST retreat, Intel Berkeley Research Lab, and the NEST PI meeting.
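The most-reliable-path idea above can be captured compactly as a Dijkstra-style computation that maximizes the product of estimated link success probabilities. The sketch below is a centralized stand-in (with assumed data structures and symmetric link qualities) for the distributed computation the nodes actually perform.

```python
import heapq

def max_reliability_routes(links, base_station):
    """Most-reliable route from every node to the base station (sketch).

    `links` maps (u, v) to the estimated packet success probability of the
    link, assumed symmetric (both directions present with the same value).
    This Dijkstra-style computation maximizes the product of link qualities
    along each path; the distributed algorithm described above converges to
    the same fixed point using only local link estimates.
    """
    reliability = {n: 0.0 for edge in links for n in edge}
    next_hop = {n: None for n in reliability}
    reliability[base_station] = 1.0
    heap = [(-1.0, base_station)]
    while heap:
        neg_r, u = heapq.heappop(heap)
        r = -neg_r
        if r < reliability[u]:
            continue                      # stale heap entry
        for (a, b), quality in links.items():
            if a == u and r * quality > reliability[b]:
                reliability[b] = r * quality
                next_hop[b] = u           # b forwards toward the base via u
                heapq.heappush(heap, (-r * quality, b))
    return reliability, next_hop
```

Here `reliability[n]` is the end-to-end delivery probability of node n's best route, and `next_hop` holds the parent pointers that define the routing tree.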
The ability to use wireless sensor networks to monitor habitats on the scale of the organism can provide higher resolution data and profound results for scientists. In order to study the feasibility of deploying wireless sensor networks for habitat monitoring and retrieving useful data, we performed an in-depth study of real-world habitat monitoring on Great Duck Island, Maine. We developed a set of system design requirements that cover the hardware design of the nodes, the design of the sensor network, and the capabilities for remote data access and management. A serious concern is sensor node power management. Scientists need to be able to collect data for an entire breeding season without replacing batteries or disrupting the animals in their natural habitat. By evaluating the power consumption of all aspects of the node during operation, we designed a strict power budget and management scheme. We designed a system architecture to address general habitat monitoring requirements and deployed an instance of the architecture for monitoring seabird nesting. The currently deployed network consists of 32 nodes on a small island off the coast of Maine streaming useful live data onto the web. The application-driven exercise serves to focus our efforts for further work in data sampling, communications, network retasking, and health monitoring in the field of wireless sensor networks. We are developing other operating system services including power management, watchdog functionality, and time synchronization.

Figure 1: The system architecture for habitat monitoring applications

Figure 2: The Mica wireless sensor node with the Mica weather board developed for environmental monitoring applications
FlashBack is a peer-to-peer replication middleware designed specifically for power-constrained devices. FlashBack provides a reliable storage layer for personal area networks (PANs) by spreading backup data among peers. It is designed to be scalable, fault-tolerant and opportunistic in the presence of fixed infrastructure. It also leverages heterogeneity within the PANs. This project examines the challenges presented by mobile personal devices. We propose the FlashBack architecture and algorithms, and present experimental results that illustrate FlashBack's performance characteristics.
1Staff, Intel Research Laboratory at Seattle
2Staff, Intel Research Laboratory at Seattle
A few years ago, file sharing utilities were the first to explore the power of peer-to-peer computing. Protocols such as Napster, Gnutella, Freenet, and KaZaa provided users with an easy way to share popular files, documents, and other data. First generation protocols had their limitations. Freenet and Gnutella did not provide guaranteed results. Napster had limited scalability due to its use of centralized servers, and Gnutella's scalability was limited by the use of broadcast query messages. All of these protocols relied on the popularity (and resulting large number of replicas) of documents to make them easily accessible via search.
The second generation of peer-to-peer systems, including Tapestry [1,2], Chord [3], content-addressable networks (CAN) [4], and Pastry [5], provides reliable decentralized object location and routing (DOLR) functionality while keeping only local state logarithmic in the network size. These systems scale well by design, and under non-failure conditions, guarantee that queries find the desired object if it exists in the network.
To exploit this deterministic file location property, we've designed and implemented Interweave, a fully decentralized P2P file location utility on top of Tapestry. Interweave allows users to advertise local files by publishing IDs through the Tapestry object location layer, where IDs are generated from specified keywords. Users can do multi-field searches on multiple keywords or filenames, and prune their results further with constraints on file size and last modified date. Most importantly, search results are precise, and a document can easily be found, even if there is only one copy residing in a global network. Finally, Interweave leverages Tapestry's network locality to limit its results by the network distance between the client and each replica.
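To make the keyword-publishing mechanism concrete, here is a minimal sketch in which keywords are hashed to GUIDs and multi-keyword queries are answered by intersecting the per-keyword result sets. A plain dictionary stands in for the Tapestry object location layer, and the descriptor fields are assumptions for illustration.

```python
import hashlib

def keyword_guid(keyword):
    """Hash a keyword to a GUID, as IDs are commonly derived in DOLR systems."""
    return hashlib.sha1(keyword.lower().encode()).hexdigest()

class KeywordIndex:
    """Illustrative stand-in for publishing file IDs through Tapestry.

    A real deployment publishes each (GUID, descriptor) pair into the
    Tapestry object location layer; here a dictionary plays that role so the
    multi-keyword search logic can be shown end to end.  Descriptor fields
    ("name", "size") are assumptions for illustration.
    """

    def __init__(self):
        self.index = {}   # GUID -> list of file descriptors

    def publish(self, descriptor, keywords):
        for kw in keywords:
            self.index.setdefault(keyword_guid(kw), []).append(descriptor)

    def search(self, keywords, min_size=0, max_size=float("inf")):
        """Intersect per-keyword results, then prune by file size."""
        per_keyword = [
            {d["name"]: d for d in self.index.get(keyword_guid(kw), [])}
            for kw in keywords
        ]
        if not per_keyword:
            return []
        common = set(per_keyword[0])
        for names in per_keyword[1:]:
            common &= set(names)
        return [per_keyword[0][n] for n in common
                if min_size <= per_keyword[0][n]["size"] <= max_size]
```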
The Interweave design includes no points of centralization and no inherent scalability limitations, paving the way for its deployment on a large/global user base. The current implementation is feature complete, and interoperates with the current Tapestry Java implementation.
Current work revolves around improving the usability of the Interweave client, and performance optimizations to reduce the number of results returned (and the associated bandwidth usage) of popular requests.
Instant messaging (IM) is a new and useful way to communicate between networked users. Existing solutions such as AOL IM, MSN Messenger, ICQ, and Yahoo Messenger are generally centralized protocols with limited reliability and scalability properties.
In Shuttle, we take the traditional IM design, and implement it on a purely decentralized peer-to-peer infrastructure (Tapestry [1,2]). In the resulting protocol, users find their friends using a fully distributed lookup service rather than a centralized server. Communication is purely between peers. Contact lists are encrypted and distributed into the network storage layer of Tapestry, making them accessible from anywhere on the network.
Shuttle can also leverage the fault-tolerant routing aspects of Tapestry to route around congested and failed links to deliver messages more reliably. Finally, the completely server-less design of Shuttle means the system architecture does not limit scaling of the system to any sized network.
Shuttle runs on the current Java implementation of Tapestry. Plans are underway to deploy it as a long-running service on an "always-up" Tapestry network anchored by a set of local Berkeley machines.
In today's chaotic Internet, data and services are mobile and replicated widely for availability, scalability, durability, and locality. Components within this global infrastructure interact in novel, rich, and complex ways, greatly stressing traditional approaches to location and routing. This article explores an alternative to traditional approaches, Tapestry [1, 2], which provides a peer-to-peer overlay location and routing infrastructure offering efficient, scalable, location-independent routing of arbitrary size messages directly to the closest copy of an object or a service using only localized resources. Tapestry supports a generic decentralized object location and routing (DOLR) API using a self-repairing, soft-state based fault-tolerant routing layer.
Using the Tapestry architecture, we have built two Tapestry implementations: a C-based simulator that can simulate up to 10,000 nodes; and a Java-based implementation that supports an efficient virtual node mode, where multiple virtual Tapestry nodes execute on a single physical machine. We have deployed the Java implementation on the large-scale PlanetLab [3] network testbed, which consists of approximately 100 machines at 42 academic and corporate institutions on three continents (North America, Europe, and Australia).
Using virtualization, we are performing detailed evaluation of the performance of Tapestry, by comparing the efficiency of overlay operations on Tapestry to idealized performance metrics, and analyzing Tapestry's large-scale behavior under highly dynamic, high-load scenarios, where client and server nodes may enter or leave the network in very large groups, and the rate of change in network membership is high. Finally, we are evaluating the usability and generality of Tapestry's extensible up-call interface.
Existing P2P overlay networks [1-5] utilize scalable routing tables to map unique identifiers to network locations as distributed hash tables (DHT), and to locate nearby copies of object replicas as a decentralized object location and routing (DOLR) layer. They allow network applications such as distributed file systems and distributed web caches to efficiently locate and manage object replicas across a wide-area network.
While these systems excel at locating objects and object replicas, they rely on known globally unique identifiers (GUID) for each object, commonly generated by applying a secure hash function over the data. This is problematic for text-based objects, where replicas are often similar but not identical. Replicas then can have different IDs, complicating their management.
To address this problem, we propose an approximate text addressing (ATA) layer, which uses a combination of block text fingerprints and P2P object location to find matches between highly similar data objects. To validate the ATA architecture, we design a decentralized spam-filtering application that leverages ATA to accurately identify junk email messages despite formatting differences and evasion efforts by marketing firms.
For evaluation, we use probability models to predict the accuracy of our fingerprint vector scheme, and confirm those predictions by running experiments on real email traces. We measure both the accuracy rate and the false positive rate, and determine the size of fingerprint vector necessary to achieve an acceptable level of accuracy. We also simulate the spam filter application, and measure the success rate of matching a "marked" spam message as a function of both the percentage of nodes marking the message and the number of hops a request travels before termination. This allows us to tune the tradeoff between accuracy and network bandwidth for any given "marking rate" of a spam message.
Future work includes the implementation of the decentralized spam filter, and its deployment on the Tapestry network. We also plan to repeat our experiments on larger email datasets, while further validating the ATA design with the design and analysis of additional applications.
Figure 1: Fingerprint vector: a fingerprint vector is generated from the set of checksums of all substrings of length L, post-processed with sort, selection, and reverse operations.
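A minimal sketch of such a fingerprint vector is shown below, following the figure caption. The checksum function, substring length, vector size, and selection rule are illustrative assumptions rather than the parameters used in the project.

```python
import zlib

def fingerprint_vector(text, L=64, vector_size=10):
    """Fingerprint vector for approximate text addressing (sketch).

    Following the figure: checksum every length-L substring, sort the
    checksums, keep a small deterministic selection, and reverse the selected
    values to spread them over the ID space.  The checksum function, L,
    vector_size, and the selection rule are illustrative assumptions.
    """
    checksums = sorted(
        zlib.crc32(text[i:i + L].encode())
        for i in range(max(1, len(text) - L + 1))
    )
    selected = checksums[:vector_size]          # deterministic selection
    return [format(c, "08x")[::-1] for c in selected]
```

Two near-duplicate spam messages share most of their substrings, so their fingerprint vectors overlap with high probability even after small edits, which is what lets ATA match similar-but-not-identical objects.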
I am currently investigating how introspection can be used in conjunction with code and data migration techniques to improve an Internet service's ability to adapt to changing network and resource conditions. Many of today's Internet services, such as e-mail, perform poorly when the connection to the server has high latency or low bandwidth. We have also shown that tomorrow's Internet services, based on peer-to-peer technologies, exhibit this same sensitivity to changes in network conditions. This sensitivity is usually due to inefficient use of the network connection.
Code and data migration techniques can be used to decrease this sensitivity by placing frequently used data objects nearer to the client, and by sending code fragments to run on the server to avoid round trip times. We are investigating how introspection can be used to build models of user access, so the infrastructure can predict expected user performance when code and data migration is used. Based on these models, the infrastructure can then decide when to employ the various code and data migration techniques to optimize a given metric.
Communication reliability is a desired property in computer networks. One key technology to increase the reliability of a communication path is to provision a disjoint backup path. One of the main challenges in implementing this technique is that two paths that are disjoint at the IP or overlay layer may share the same physical links. As a result, although we may select a disjoint backup path at the overlay layer, one physical link failure may cause the failure of both the primary and the backup paths.
In this project, we propose a solution to address this problem. The main idea is to take into account the correlated link failure at the overlay layer. More precisely, our goal is to find a route for the backup path to minimize the joint path failure probability between the primary and the backup paths. To demonstrate the feasibility of our approach, we perform extensive evaluations under both single and double link failure models. Our results show that, in terms of robustness, our approach is near optimal and is up to 60% better than no backup path reservation and is up to 30% better than using the traditional shortest disjoint path algorithm to select the backup path.
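One way to make the objective concrete: given the mapping from overlay links to the physical links they traverse, the joint failure probability of a primary/backup pair can be estimated and candidate backups compared. The sketch below uses a Monte Carlo estimate under an assumed independent physical-link-failure model; the names and data structures are illustrative, not the evaluation code used in the project.

```python
import random

def joint_failure_prob(primary, backup, phys_map, p_fail, trials=10000):
    """Monte Carlo estimate of P(primary and backup both fail) -- a sketch.

    `primary` and `backup` are lists of overlay links, `phys_map` maps each
    overlay link to the set of physical links it traverses, and `p_fail`
    gives each physical link's failure probability.  Physical links are
    assumed to fail independently; all names are illustrative.
    """
    phys_links = {pl for ol in primary + backup for pl in phys_map[ol]}
    both_failed = 0
    for _ in range(trials):
        down = {pl for pl in phys_links if random.random() < p_fail[pl]}
        primary_up = all(phys_map[ol].isdisjoint(down) for ol in primary)
        backup_up = all(phys_map[ol].isdisjoint(down) for ol in backup)
        if not primary_up and not backup_up:
            both_failed += 1
    return both_failed / trials

def best_backup(primary, candidates, phys_map, p_fail):
    """Choose the candidate backup path with the lowest joint failure probability."""
    return min(candidates,
               key=lambda b: joint_failure_prob(primary, b, phys_map, p_fail))
```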
Novel value-added services will be the driving force behind future communication networks and the mobile Internet. In this context, we consider the idea of service composition, where independent service components deployed and managed by providers are assembled to enable a novel composed service. Two simple examples of composition are shown in Figure 1.
We address several challenges with respect to availability as well as performance of composed services. Using a trace-based study, we find that it is possible to define a useful notion of failure: an outage lasting several tens of seconds or more. Such long outages can be predicted with timeouts of much shorter duration--about 1.2-1.8 seconds on most Internet paths--and such timeouts happen quite infrequently in absolute terms, only about once an hour or less. These results point to the usefulness and feasibility of a failure detection mechanism.
We develop an architecture for service composition based on an overlay network of service clusters. The overlay network is virtual-circuit based, which means we can recover from failures without waiting for the failure information to propagate and stabilize across the network. We deploy a composed text-to-speech service application as well as our recovery mechanisms in a wide-area testbed and run long-running experiments to evaluate the improvement in availability due to our recovery algorithms. We observe dramatic improvements in availability: in quantitative terms, we are able to reduce system downtime by factors of 2-10 in many cases. This shows the usefulness of our architecture from the point of view of the end-user.
Figure 1: Examples of service composition
Many carriers are migrating away from SONET ring restoration toward mesh restoration through intelligent O-E-O cross-connects (XC). In mesh restoration, when a failure occurs, traffic along the service path needs to be quickly rerouted to the restoration path. Due to the nature of distributed protocols, several restoration paths may compete for limited capacity on the same link, which forces some setup attempts to crank back and try again.
Crank back can occur either because different paths compete for the same channel, which we call glare, or because there is not enough free bandwidth available at all. Our research aims to develop mechanisms to reduce these two sources of crank back. We developed channel selection methods [1] that considerably reduced glare while at the same time maintaining large groups of contiguous channels (for high bandwidth connections) across multiple parallel links. In addition, we proposed and analyzed a hybrid solution [2] that utilized a centralized restoration path server to optimize restoration path selection, yet utilized distributed control to compute service paths and set up service/restoration paths. Our objective was to achieve very fast provisioning, fast restoration upon network failure, and efficient use of capacity. To support this, we also presented an efficient restoration path selection algorithm.
Large enterprises connect their locations together by building a network (sometimes called an Intranet) out of leased, private communications lines. Although these lines are dedicated to the company's traffic, and thus provide good quality of service, they are very expensive. Because of this, they are not always overprovisioned. HP's Systems Research Center (Palo Alto, California) has developed the delta routing protocol, which allows nodes on a company's network to communicate with each other using both private lines as well as virtual private network (VPN) tunnels over the shared Internet. Each node in the network makes a local decision as to which path (the private network or the shared Internet) is best and forwards data accordingly. Because each node uses only local information, it is possible that it will forward traffic to a part of the network that is congested. In this case, that distant node will have to forward such "excess" traffic over VPN tunnels rather than the private network.
Using this protocol as a base, our contribution is to install measurement agents into each node in the company's network to perform traffic matrix estimation in order to make smarter forwarding decisions. A traffic matrix is a data structure that captures the amount of traffic flowing between each pair of points in the network. With such data, nodes would be able to avoid sending traffic to a portion of the network that is congested. Instead, they would be able to tunnel the traffic around the congestion to a point on the private network from which it can continue to the destination. Unfortunately, in a large network it is impractical to give each node a consistent notion of the traffic matrix. Instead, we propose that nodes directly measure the flows that they forward and use estimates for flows that travel through other parts of the network. These estimates come from periodic flow measurements broadcast by other nodes. A private network using delta routing should be able to respond to congestion when it occurs by shedding excess load to the less predictable, but more flexible, shared Internet.
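A minimal sketch of the per-node bookkeeping this implies is shown below: locally forwarded flows are measured directly, flows handled elsewhere are filled in from periodic broadcasts, and stale remote estimates are discarded. The class, field names, and staleness policy are assumptions for illustration, not the agent being built in ns.

```python
import time

class TrafficMatrixEstimator:
    """Per-node view of the traffic matrix (illustrative sketch).

    Flows the node forwards itself are measured directly; flows handled
    elsewhere are filled in from estimates periodically broadcast by other
    nodes, and stale remote estimates are ignored.
    """

    def __init__(self, max_age=60.0):
        self.max_age = max_age
        self.local = {}     # (src, dst) -> rate measured at this node
        self.remote = {}    # (src, dst) -> (rate, timestamp) from broadcasts

    def record_local(self, src, dst, rate):
        self.local[(src, dst)] = rate

    def merge_broadcast(self, estimates):
        now = time.time()
        for (src, dst), rate in estimates.items():
            self.remote[(src, dst)] = (rate, now)

    def rate(self, src, dst):
        if (src, dst) in self.local:                  # local measurement wins
            return self.local[(src, dst)]
        rate, ts = self.remote.get((src, dst), (0.0, 0.0))
        return rate if time.time() - ts <= self.max_age else 0.0

    def estimated_load(self, flows_on_link):
        """Estimated load on a link, given the flows known to traverse it."""
        return sum(self.rate(s, d) for (s, d) in flows_on_link)
```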
The delta routing protocol is currently implemented in the ns simulator, and the improved protocol, which includes flow measurement and traffic-matrix estimation functionality, is in progress. Simulations will be performed both on artificially generated networks and on a model of Compaq's private network.
1HP Systems Research Center
Congestion pricing, which varies prices according to load, allocates scarce resources more efficiently by modifying user behavior. We study its effectiveness and user acceptance for allocating bandwidth at a local area network access link. We conducted two iterations of prototyping, deployment, and evaluation with about ten users. In the first iteration, we offered users three sizes of connectivity and varied their prices. We found that limiting users' usage with different bandwidth sizes is not well suited to dealing with short bursts. In the second iteration, we offered users three levels of quality of service (QoS) that differed in the degree of traffic smoothing and charged once every 15 minutes. We found that this scheme is effective because we can easily entice users to select a lower QoS, one with more smoothing applied to their traffic, by increasing the price for higher quality. In a survey, users stated that they would be willing to use the scheme because they only need to make a purchasing decision at most once every 15 minutes. Through simulations, we show that if half of the users in a large network can be enticed to have their traffic smoothed, then the burstiness at the network's access link can be reduced by 20-30%.
We propose OverQoS, an architecture for providing Internet QoS using overlay networks. OverQoS empowers third-party providers to offer enhanced network services to their customers using the notion of a controlled loss virtual link (CLVL). The CLVL abstraction bounds the loss-rate experienced by the overlay traffic in order to eliminate the unexpected losses due to cross traffic in the underlying network.
We build a CLVL abstraction along every overlay link and treat all flows traversing the link as part of a bundle. OverQoS uses specific scheduling policies at the overlay nodes to distribute the available bandwidth and the loss amongst the competing flows in a bundle. We show that OverQoS can provide differential rate allocations, statistical bandwidth and loss assurances, and enable explicit-rate congestion control algorithms.
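To make the bundle abstraction concrete, the sketch below distributes a controlled-loss virtual link's bandwidth among the flows of a bundle in proportion to assigned weights and concentrates the bundle's bounded loss on the lowest-weight flows. The weighting and loss-assignment policies are illustrative stand-ins, not the specific scheduling policies used by OverQoS.

```python
def allocate_bundle(bandwidth, loss_budget, flows):
    """Split a CLVL bundle's bandwidth and bounded loss among flows (sketch).

    `flows` maps flow id to a weight.  Bandwidth is shared in proportion to
    weight, and the bundle's bounded loss is assigned to the lowest-weight
    flows first, so higher-weight flows see an effectively lossless virtual
    link.  Both policies are illustrative, not OverQoS's own schedulers.
    """
    total_weight = sum(flows.values())
    rates = {f: bandwidth * w / total_weight for f, w in flows.items()}
    losses, remaining = {}, loss_budget
    for f, _w in sorted(flows.items(), key=lambda kv: kv[1]):  # lowest weight first
        assigned = min(remaining, rates[f])   # a flow cannot lose more than it sends
        losses[f] = assigned
        remaining -= assigned
    return rates, losses
```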
1Professor
2Professor, MIT
3Professor
Failure resilience is one of the desired properties of communication networks. Many schemes have been proposed to deal with failures in the optical layer; however, most existing protection schemes consider only single link failures. This is unrealistic, because it implicitly imposes a dependency between link failures (at most one link may fail at a time). Our work aims to remove this constraint and develop a new protection scheme for a more general model of independent link failures.
The problem can be formulated as follows: for a given primary path, find a backup path with minimum cost in terms of reserved bandwidth while providing a hard constraint on the protection success rate. We first model the problem using integer linear programming. Then we propose an approximation algorithm that can reduce the communication and computation overhead while guaranteeing the hard constraint on the protection success rate.
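Under the independent-failure model, the protection success rate of a candidate backup path has a simple closed form, which already suggests the flavor of the feasibility check involved. The sketch below filters candidate backups by a target success rate and picks the cheapest one; it is an illustrative brute-force filter over given candidates, not the ILP formulation or the approximation algorithm described above.

```python
from math import prod

def protection_success_rate(backup_links, p_fail):
    """Probability the backup path is fully operational, assuming independent
    link failures; p_fail maps each link to its failure probability."""
    return prod(1.0 - p_fail[link] for link in backup_links)

def cheapest_feasible_backup(candidates, p_fail, bandwidth_cost, target):
    """Keep candidate backup paths that meet the target protection success
    rate and return the one reserving the least bandwidth (illustrative)."""
    feasible = [p for p in candidates
                if protection_success_rate(p, p_fail) >= target]
    if not feasible:
        return None
    return min(feasible, key=lambda p: sum(bandwidth_cost[l] for l in p))
```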
In our preliminary experiments, we investigate the performance of our algorithm in a network with 78 nodes and 122 links. The network topology, which is also used in the ROLEX simulation, is obtained from AT&T. We evaluate the performance with respect to protection success rate, lightpath blocking rate, and wavelength usage ratio under various link failure rates and target protection success rates. Preliminary results show that our algorithm can always guarantee the target protection rate with a high bandwidth utilization rate. In addition, we compare our algorithm with exclusive path protection and shared path protection schemes. Our algorithm achieves a lower request blocking ratio and more efficient bandwidth usage while meeting the predetermined target protection success rate.
To provide good QoS to calls and maximize network resource utilization in IP Telephony networks, it is necessary to design a differentiated service architecture that includes call admission and call redirection policies. In this work, we first study a call admission policy based on congestion sensitive pricing. The scheme uses congestion pricing to preferentially admit users who place a higher value on making a call while simultaneously maintaining a high utilization of IP Telephony Gateway (ITG) voice ports. Second, we design a call redirection policy to select the best ITG to serve the call. The policy balances load to improve network efficiency and incorporates QoS sensitivity to improve call quality. We present a performance study of the call admission and the redirection policies using both analytical and simulation models. The techniques studied in this paper can be combined into a single resource management solution that can improve network resource utilization, provide differentiated service, and maximize provider revenue.
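As an illustration of congestion-sensitive admission, the sketch below raises the per-call price with voice-port utilization and admits a call only when the user's declared value meets the current price. The price curve, parameters, and function signature are assumptions for illustration, not the policy analyzed in this work.

```python
def admit_call(active_calls, capacity, user_value,
               base_price=1.0, sensitivity=4.0):
    """Congestion-sensitive call admission (illustrative sketch).

    The per-call price rises with the utilization of the gateway's voice
    ports, so under congestion only users who place a high value on the call
    are admitted, while ports stay highly utilized.  All parameters are
    illustrative assumptions.
    """
    utilization = active_calls / capacity
    price = base_price * (1.0 + sensitivity * utilization ** 2)
    admitted = active_calls < capacity and user_value >= price
    return admitted, price
```

At low load almost every call clears the price; near saturation only high-value calls are admitted, which is the behavior the pricing-based admission policy aims for.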
1Professor, UC Davis
Several protocols have been proposed to enable efficient multi-point communication (multicast or broadcast) both at the IP layer and at the application or overlay layer. However, no single protocol is universally deployed. In fact, we believe that none of the existing broadcast protocols will become the sole dominant Internet broadcasting technology. Instead, we expect islands of non-interoperable broadcast connectivity to exist. To address this, we propose an architecture that enables the composition of different multicast-capable networks, each with their own multicast or broadcast protocols, to provide an end-to-end broadcast service. We call this architecture the broadcast federation. In order to achieve high throughput scalability, our implementation is based on a clustered broadcast gateway (which is the centerpiece of the federation system). We design the broadcast federation architecture, implement the clustered broadcast gateway, and obtain results on the scalability of these gateways.
1AT&T Research, Menlo Park, CA.
An increasing number of ASes connect to the Internet through the BGP inter-domain routing protocol. With increasing stress on the scale of this system and increasing reliance on Internet connectivity, more participants want additional functionality from inter-domain routing that BGP was not designed to handle. For example, we believe that the recent trend toward multihomed stub networks reflects an intent to achieve fault-tolerant and load-balanced connectivity to the Internet. However, BGP today offers route fail-over times as long as 15 minutes, and very limited control over incoming traffic across multiple wide-area paths. The research literature and the news media increasingly call for ways to stem malicious or erroneous routing announcements. We propose a policy control architecture, OPCA, that runs as an overlay network on top of BGP. OPCA will allow ASes to negotiate and apply policy changes to routing that address these issues. The proposed architecture and protocol will co-exist and interact with the existing routing infrastructure and will allow for a scalable rollout of the protocol.
1Professor, UC Davis
It is known that alternatives to the normal Internet routes could be used to improve connectivity. This can be achieved through well-placed overlay routers. In order to prevent interference with each other, different overlay flows would benefit from knowledge of bottlenecks that they share. We are currently investigating techniques by which we can decide whether two flows share the same bottleneck. Our goal is to make this decision based on the correlation of the throughputs observed by the flows. Detecting such correlation is complicated because the short-term dependence of TCP throughput on packet drops is not very well understood. Also, the packet drops themselves may not be evenly distributed among flows in the case of drop-tail queues.
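The sketch below shows the basic signal such a technique would examine: the sample correlation between two flows' throughput time series, compared against a threshold. Whether correlation of a given magnitude (or sign) reliably indicates a shared bottleneck is exactly the open question described above, so the threshold test is only an illustrative placeholder.

```python
from statistics import mean, stdev

def throughput_correlation(x, y):
    """Sample Pearson correlation of two equal-length throughput series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def likely_shared_bottleneck(x, y, threshold=0.5):
    """Placeholder decision rule: flag a shared bottleneck when the throughput
    correlation exceeds a threshold.  The threshold, and indeed whether raw
    throughput correlation is the right signal, are assumptions."""
    return throughput_correlation(x, y) >= threshold
```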
The current optical infrastructure has abundant bandwidth but limited services; a new service infrastructure is needed. Current optical networking is limited in computation and applications. We are looking at several service and architectural issues, such as: ways to build a light path among service providers; methods of composing the light path across technologies, i.e., Access, Metro, and Long Haul; ways to share the expensive infrastructure among service providers; and finding the right available optical resource among providers. It is hard to add computation to the core of the optical infrastructure, so we seek to determine what types of intelligence can be added to the edge of the optical core. We are also investigating how to build a service directory of storage, backup, computation, optical VPN, and available lambdas.
Examples of services for research are: setting up the right optical link for grid computing and finding the best instance of a storage area network (SAN); disaster recovery over Metro, including storage and computation; providing an optical VPN on demand to a remote site needing 10 Gb/s for 25 minutes; providing a link with HDTV quality (for a remote medical operation lasting two hours); and providing periodic network backup capabilities (preferably to a backup bunker) without the need for local tapes. Finally, it is important to let applications negotiate resource availability and control the creation of the network path.
The optical networking infrastructure and DWDM have changed dramatically in the last few years. The bandwidth in the core (much of it not yet used) has become a commodity, changing many networking assumptions. Recent advances in optical technologies have created a radical mismatch between the optical transmission world and the electrical forwarding/routing world. Currently, a single strand of optical fiber can transmit more bandwidth than the entire Internet core. Furthermore, some of the basic characteristics of optical networking are different from traditional L3 networking.
This impedance mismatch has many dimensions. One example is the bottlenecks located at the peering points between current ASes and ISPs: peering is done at L3 and above, whereas new sets of services can be built at L1-L2, where a dynamic light path can be created. Provisioning is a long and inefficient process. New technologies have emerged to provide the capabilities of automatically switched optical networks (ASON) in the Long Haul and resilient packet rings (RPR) in the Metro, but the capabilities of these new optical technologies are limited by the current Internet architecture. We are analyzing solutions to bridge the mismatch between optical and electrical transmission. One of our fundamental research goals is to determine what is needed to build a new Internet architecture that utilizes the new advances in optical networking.
Recently there has been increasing deployment of content distribution networks (CDNs) that offer hosting services to Web content providers. In this project, we first compare the uncooperative pulling of Web content used by commercial CDNs with cooperative pushing. Our results show that the latter can achieve comparable user-perceived performance with only 4-5% of the replication and update traffic of the former scheme. We therefore explore how to efficiently push content to CDN nodes. Using trace-driven simulation, we show that replicating content in units of URLs can yield a 60-70% reduction in clients' latency, compared to replicating in units of Web sites. However, such fine-grained replication is very expensive to perform.
To address this issue, we propose to replicate content in units of clusters, each containing objects that are likely to be requested by clients that are topologically close. To this end, we describe three clustering techniques, and use various topologies and several large Web server traces to evaluate their performance. Our results show that cluster-based replication achieves a 40-60% improvement over per-Web-site replication. In addition, by adjusting the number of clusters, we can smoothly trade off the management and computation cost for better client performance.
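One way to form such clusters is to describe each URL by a sparse vector of request counts per client group (e.g., per routing prefix) and group URLs whose vectors are strongly correlated. The greedy cosine-similarity clustering below is an illustrative sketch under that assumption, not one of the three clustering techniques evaluated in the project.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two sparse access vectors (dicts)."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_urls(access, threshold=0.8):
    """Greedy clustering of URLs by client-access similarity (sketch).

    `access` maps each URL to a sparse vector of request counts keyed by
    client group (e.g., routing prefix).  URLs requested by roughly the same
    set of topologically close clients end up in the same replication unit.
    The greedy strategy and threshold are illustrative choices.
    """
    clusters = []                     # list of (centroid, [urls])
    for url, vec in access.items():
        best, best_sim = None, threshold
        for centroid, members in clusters:
            sim = cosine(centroid, vec)
            if sim >= best_sim:
                best, best_sim = (centroid, members), sim
        if best is None:
            clusters.append((dict(vec), [url]))
        else:
            centroid, members = best
            members.append(url)
            for k, v in vec.items():  # fold the URL into the centroid
                centroid[k] = centroid.get(k, 0) + v
    return [members for _, members in clusters]
```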
To adapt to changes in users' access patterns, we also explore incremental clustering, which adaptively adds new documents to the existing content clusters. We examine both offline and online incremental clustering, where the former assumes access history is available while the latter predicts access patterns based on the hyperlink structure. Our results show that offline incremental clustering yields performance close to that of complete re-clustering at much lower overhead. Online incremental clustering and replication reduce the retrieval cost by a factor of 4.6-8 compared to no replication and random replication, making them especially useful for improving document availability during flash crowds.
1Microsoft Research
2Undergraduate (EECS)
Estimating end-to-end Internet distance can benefit many applications and services, such as efficient overlay construction, overlay routing and location, and peer-to-peer systems. While there have been many proposals on Internet distance estimation, how to design and build a scalable, accurate, and dynamic monitoring system is still an open issue. To this end, we propose an overlay distance monitoring system, Internet Iso-bar, which yields good estimation accuracy with much better scalability and far less overhead for online monitoring than the existing systems. Internet Iso-bar clusters hosts based on the similarity of their perceived network distance, and chooses the center of each cluster as a monitor site. The distance between two hosts is estimated using inter- or intra-cluster distances. Evaluation using real Internet measurements shows that Internet Iso-bar achieves high estimation accuracy and stability with much smaller measurement overhead than Global Network Positioning (GNP) [1]. Furthermore, by adjusting the number of clusters, we can smoothly trade off the measurement and management cost for better estimation accuracy.
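The estimation step itself is simple once the clustering is in place: two hosts in different clusters are assigned the measured distance between their clusters' monitors, and hosts in the same cluster get an intra-cluster estimate. The sketch below illustrates this lookup; the data structures and the intra-cluster fallback are assumptions, not the Internet Iso-bar implementation.

```python
class IsoBarEstimator:
    """Cluster-based distance estimation in the spirit of Internet Iso-bar.

    Hosts are grouped into clusters of similar perceived distance, and only
    the cluster centers (monitors) keep measuring one another.  A pair of
    hosts in different clusters is assigned the measured monitor-to-monitor
    distance; hosts in the same cluster get an intra-cluster estimate.
    """

    def __init__(self, cluster_of, monitor_of, monitor_rtt, intra_rtt):
        self.cluster_of = cluster_of      # host -> cluster id
        self.monitor_of = monitor_of      # cluster id -> monitor host
        self.monitor_rtt = monitor_rtt    # (monitor, monitor) -> measured RTT
        self.intra_rtt = intra_rtt        # cluster id -> typical internal RTT

    def estimate(self, a, b):
        ca, cb = self.cluster_of[a], self.cluster_of[b]
        if ca == cb:
            return self.intra_rtt[ca]
        ma, mb = self.monitor_of[ca], self.monitor_of[cb]
        return self.monitor_rtt.get((ma, mb), self.monitor_rtt.get((mb, ma)))
```

Because only the monitors measure the full mesh, measurement overhead grows with the number of clusters rather than the number of hosts, which is where the scalability gain over per-host probing comes from.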
Route flap damping is a widely deployed mechanism in core routers that limits the widespread propagation of unstable BGP routing information. Originally designed to suppress route changes caused by link flaps, flap damping attempts to distinguish persistently unstable routes from routes that occasionally fail. It is considered to be a major contributor to the stability of the Internet routing system.
We show in this project that, surprisingly, route flap damping can significantly exacerbate the convergence times of relatively stable routes. For example, a route to a prefix that is withdrawn exactly once and re-announced can be suppressed for up to an hour (using the currently recommended RIPE damping parameters). We show that such abnormal behavior fundamentally arises from the interaction of flap damping with BGP path exploration during route withdrawal. We study this interaction using a simple analytical model, and use simulations to understand the impact of various BGP parameters on its occurrence. Finally, we outline a preliminary proposal to modify route flap damping (based on ignoring the monotonic route changes produced by path exploration) that removes the undesired interaction in all the topologies we studied.
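The interaction can be seen with nothing more than the standard penalty bookkeeping: each update adds a fixed penalty that decays exponentially with a configured half-life, and the route is suppressed once the penalty crosses a cutoff. In the sketch below (parameter values are illustrative, not the RIPE recommendations), the burst of updates produced by path exploration after a single withdrawal is enough to push the penalty past the cutoff.

```python
import math

def is_suppressed(update_times, half_life=900.0, penalty_per_update=1000.0,
                  cutoff=3000.0):
    """Route flap damping penalty bookkeeping (illustrative parameters).

    Each update adds a fixed penalty; the penalty decays exponentially with
    the configured half-life, and the route is suppressed once the penalty
    exceeds the cutoff.  A burst of path-exploration updates after a single
    withdrawal can cross the cutoff even though the route flapped only once.
    """
    penalty, last = 0.0, None
    for t in update_times:
        if last is not None:
            penalty *= math.exp(-(t - last) * math.log(2) / half_life)
        penalty += penalty_per_update
        last = t
    return penalty > cutoff

# One withdrawal followed by path exploration: four updates within 30 seconds.
print(is_suppressed([0.0, 8.0, 17.0, 29.0]))   # -> True: the route is damped
```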
1Professor, USC
2Professor, UC San Diego
This work considers the problem of minimizing the amount of communication needed to send readings from a set of sensors to a single destination in energy-constrained wireless networks. Substantial gains can be obtained by using packet aggregation techniques while routing. The routing algorithm we developed, called data funneling, allows the network to considerably reduce the amount of energy spent on communication setup and control, an important concern in low data-rate communication. This is achieved by sending only one data stream from a group of sensors to the destination, instead of having an individual data stream from each sensor to the destination. This strategy also decreases the probability of packet collisions on the wireless medium, because the total number of packets is reduced by incorporating the information of many small packets into a few large ones. Additional gains can be realized by efficient, lossless compression of the data, achieved by encoding information in the ordering of the sensors' packets. This "coding by ordering" scheme compresses data by suppressing certain readings and encoding their values in the ordering of the remaining packets. Used together, these techniques can more than halve the energy spent on communication.
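The "coding by ordering" idea can be illustrated with the factorial number system: n distinguishable packets admit n! transmission orders, so up to log2(n!) bits of suppressed readings can ride in the ordering itself. The encoder/decoder below is a minimal sketch of this principle, assuming the receiver knows a canonical packet order; it is not necessarily the exact scheme used by data funneling.

```python
from math import factorial

def encode_by_ordering(packets, value):
    """Encode an integer in the transmission order of `packets`.

    Uses the factorial number system (Lehmer code): n packets admit n!
    orderings, so any value in [0, n!) can be carried in the ordering.
    """
    packets, n = list(packets), len(packets)
    assert 0 <= value < factorial(n), "value does not fit in the ordering"
    order = []
    for i in range(n, 0, -1):
        index, value = divmod(value, factorial(i - 1))
        order.append(packets.pop(index))
    return order

def decode_from_ordering(received, canonical):
    """Recover the encoded integer; the receiver must know the canonical order."""
    remaining, value = list(canonical), 0
    for pkt in received:
        index = remaining.index(pkt)
        value += index * factorial(len(remaining) - 1)
        remaining.remove(pkt)
    return value
```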
The topology of a randomly deployed sensor network can have a dramatic impact on the network's performance, as well as its lifetime. In a radio network without topology control, every node uses the same transmission power to send packets to its neighbors, with a neighbor defined as any node that can be directly reached by this node.
This approach has some major problems. In many envisioned real-world scenarios, the positions of the sensors in a network cannot be pre-arranged. In areas where node density is high, a node may have many neighbors, while in areas where node density is low, node degree is very small. Nodes with few neighbors may become bottlenecks: the upper-layer routing protocol is biased toward relaying more traffic through them, so they die quickly and disconnect the network. High-degree nodes, on the other hand, expend unnecessary energy and generate significant interference in their neighborhoods.
In this research we propose a novel distributed topology control protocol called zone-based topology control. It is a two-phase procedure that generates a topology in which every node in the network has roughly the same number of neighbors. Our results demonstrate substantial energy savings.
At this time we are evaluating the performance and stability of the proposed protocol in real-world scenarios.
The PicoRadio Project at the Berkeley Wireless Research Center (BWRC) is focusing on networks of extremely small self-powered nodes communicating via radio. A PicoRadio network must be self-configuring, power-aware, and scalable, so the media access and network protocols are necessarily complex. The goal of the project is to build a single-chip version of a node in which much of the protocol implementation is to be hard-coded on the IC.
In order to ensure that the protocols work before the chip is built, we have implemented the protocol suite on the PicoRadio test bed: a network of nodes built from off-the-shelf components with configurable elements such as a processor and programmable logic. The test bed emulates fairly closely the conditions expected when the single-chip nodes are deployed, so we can gain a detailed understanding of the functionality of the protocols, the behavior of the network under dynamic conditions, and the parts of the protocol suite that should be parameterized.
The test bed implementation has already proven to be a valuable tool for understanding network behavior. The implementation team has performed a large set of measurements under varying conditions, and is continuing to gather data. Lessons learned from previous studies have been incorporated into the protocol set, for instance, the stability of the media access protocol in the presence of rapidly changing radio channels has been greatly improved. These lessons are currently being applied to the implementation of a preliminary version of the final PicoRadio node.
1Staff
2Postdoctoral Researcher
3Undergraduate (EECS)
4Visiting Researcher, Swiss Federal Institute of Technology, Lausanne
5Undergraduate (EECS)
This work considers the problem of localization in low-cost, wireless sensor networks. Localization refers to the process by which the nodes in a sensor network discover their geographical location. In a typical sensor network only a few nodes know their position a priori. Thus, I am researching localization algorithms by which the unknown nodes in the network can calculate their positions using information from the known nodes. A good localization solution has the following requirements. First, it must be scalable and distributed. Second, a solution has to be tolerant to errors in the distance measurement. (Errors will be present in the distance measurements due to the difficulty of accurately measuring distance in environments with multi-path reflections.) Third, the solution must converge to accurate location estimates in many different and time-varying environments with constrained communication and computation resources. In other words, a robust solution is required.
The overall goal of this project is to research and evaluate several existing localization algorithms, then extend these algorithms to increase their robustness and incorporate features such as current error estimation, early termination when error bounds are low, and better convergence guarantees.
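A common building block for such algorithms is atomic multilateration: estimating a position by least squares from noisy range measurements to nodes with known positions. The sketch below shows the standard linearized form of that step; it is a generic illustration, not one of the specific algorithms under evaluation.

```python
import numpy as np

def multilaterate(anchors, distances):
    """Least-squares position estimate from known anchors (generic sketch).

    `anchors` is an (n, 2) array of known positions and `distances` holds the
    noisy range measurements to each of them (n >= 3).  The circle equations
    are linearized by subtracting the first one, giving an overdetermined
    linear system solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position     # estimated (x, y)
```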
PicoRadio is a ubiquitous network of ultra-low-power, low-cost wireless sensor nodes. Each node contains a complete protocol stack based on the OSI model. My research is on implementing the medium access control (MAC) layer of this stack. The MAC layer handles initialization and maintenance management, such as assigning local addresses, keeping the one-hop network topology, and processing control broadcast messages; it also provides the data path between the network layer and the physical layer for broadcast and unicast messages, using FIFO buffers. The MAC uses CSMA and beaconing mechanisms. It discovers neighbors in the initialization phase, maintains a neighbor lookup table, forwards neighbor information to the network layer, keeps a list of two-hop neighbors' addresses, assigns its own local address randomly, computes its own location, performs periodic maintenance, issues random backoffs, and designates channels for transmission. In order to capture the functionality of the MAC, Stateflow in Simulink was chosen as the implementation tool; its underlying model of computation is suitable for simulating control-dominated applications like the MAC. We have been focusing on functional simulations of a PicoNode network in Simulink for the purpose of protocol validation and verification. Current effort is on system design using Simulink Stateflow, considering issues in the development of the MAC layer such as power management, power control, and the CSMA sensing method. The Stateflow description has been converted into VHDL using the SSHAFT Translator; VHDL verification will be done in BEE, and automatic synthesis down to an ASIC using the SSHAFT design flow will complete this project.

Figure 1: Design flow
The goal of the PicoRadio project is to build a wireless sensor network that is versatile, self-organizing, dynamically reconfigurable, and multi-functional. The primary constraint in building the nodes is the extremely low energy budget. This imposes tight constraints on the entire design, and instead of the traditional layered protocol design, we are tightly integrating the protocol stack vertically for maximum energy optimization. The network layer of the PicoNode protocol stack has two primary functions: routing and addressing of nodes.
Addressing of nodes is based on the geographical positions of nodes. Queries in sensor networks are typically not concerned with a particular node, rather they are directed to any nodes which satisfy particular criteria of position, type, etc. This leads to our concept of class-based addressing, which is a triplet consisting of location, node type, and node sub-type. This eliminates the overhead of assigning and maintaining fixed addresses across the network as well as the problem of distributing them dynamically.
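Class-based addressing lends itself to a very small matching predicate: a query names a geographic region, a node type, and an optional sub-type, and any node satisfying all three fields is a valid destination. The sketch below illustrates this; the field names and region representation are assumptions for illustration.

```python
def matches(query, node):
    """Class-based address match (sketch; field names are illustrative).

    A query names a rectangular region, a node type, and an optional
    sub-type; any node whose location falls in the region and whose type
    fields agree is a valid destination for the packet.
    """
    (xmin, ymin, xmax, ymax), qtype, qsubtype = query
    x, y = node["location"]
    return (xmin <= x <= xmax and ymin <= y <= ymax
            and (qtype is None or node["type"] == qtype)
            and (qsubtype is None or node["subtype"] == qsubtype))
```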
For the routing of packets across the network, we have developed an energy-efficient protocol we call "energy aware routing." This scheme takes the view that trying to optimize every route to consume the least amount of energy is not in the best interest of the network. Instead, we try to optimize the power consumption of the entire network and in the process maximize its lifetime. Thus, we find a set of routes on demand and choose between them in a probabilistic fashion. The choice is based on a metric that combines both transmit/receive power and residual energy at a node. This means that all nodes across the network participate more actively in routing, and a few nodes do not die out just because they are in a good position to forward packets. Simulations verify this behavior and show that network lifetime increases substantially.
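The probabilistic choice can be sketched as follows: each candidate route gets a cost that grows with its communication energy and shrinks with the residual energy along it, and routes are selected with probability inversely proportional to that cost. The cost form, weights, and data layout below are illustrative assumptions, not the exact metric used by the protocol.

```python
import random

def route_probabilities(routes, alpha=1.0, beta=1.0):
    """Forwarding probabilities for candidate routes (illustrative sketch).

    Each route carries a communication energy cost `e_comm` and the minimum
    residual energy `e_res` along it.  A route's cost grows with the energy
    it burns and shrinks as its nodes have more energy left; routes are then
    chosen with probability inversely proportional to cost.
    """
    cost = {r: (info["e_comm"] ** alpha) / (info["e_res"] ** beta)
            for r, info in routes.items()}
    inverse = {r: 1.0 / c for r, c in cost.items()}
    total = sum(inverse.values())
    return {r: w / total for r, w in inverse.items()}

def pick_route(routes):
    """Pick one route at random according to its forwarding probability."""
    probabilities = route_probabilities(routes)
    threshold, cumulative = random.random(), 0.0
    for route, p in probabilities.items():
        cumulative += p
        if threshold <= cumulative:
            return route
    return route   # guard against floating-point round-off
```

With this rule a cheap, energy-rich route carries most of the traffic, but alternative routes still see occasional use, so no single well-placed node is drained first.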
In addition, we are working on defining the capacity of sensor networks at the network level. This capacity is basically determined by a set of nodes that are the most energy constrained, forming a cutset. The challenge is to find a routing scheme that can route packets in such a way as to minimize the energy consumption of nodes in this set. The probabilistic scheme discussed earlier performs this task to some extent, but more work is needed to actually find the optimal scheme.
Sensor networks (SNs) have emerged as the bridge between the Internet and the physical world, as the driver of the next potential economic revolution, and as soon to be the largest producer of data and information. Therefore, the development of intellectual property protection (IPP) for SNs is of prime importance.
We propose the first IPP approach for data and information generated in sensor networks. Contrary to the current state-of-the-art of image and audio watermarking, we embed signatures before and/or during the process of data acquisition and processing. The key idea is to impose additional conditions that correspond to the cryptographically encoded author/owner's signature during the data acquisition or processing in such a way that the quality of recording or processing is not impacted, or is only minimally impacted.
The technique is generic in that it is applicable to a number of tasks in SNs, including sensor design, deployment, and data fusion. It enables real-time watermark checking and can be used in conjunction with a variety of watermarking protocols. It can also be applied to any collected multimodal data or data generated for simulation purposes. Finally, the technique is adaptive, optimization-intensive, and resilient against attack. Through simulation and prototyping we demonstrate the effectiveness of active watermarking on three important sensor network tasks: location discovery, navigation, and light characterization.
1Outside Adviser (non-EECS), UCLA
Network discovery is a fundamental task in wireless ad hoc (sensor) networks. The goal is to identify a part of the network with a particular user-specified set of graph-theoretic and/or geometric properties. Representative examples of network discovery tasks include finding the shortest path between two points, a path or spanning tree through a specified subset of points, all nodes in a given area, and areas that are not covered (holes and obstacles). We are developing efficient localized algorithms for all of these problems using the same basic systematic approach. The key new insight and building block is that, at each step of the process, the goal is to visit nodes that provide new knowledge about the network structure and are also likely to have neighbors that will facilitate the discovery process. This likelihood is measured using a variety of weighted Monte Carlo integral calculations that evaluate newly discovered areas. Although the algorithms are localized, they are able to efficiently build or estimate global objective functions.
The algorithms are flexible in the sense that they can be easily adapted to a variety of network, communication, and cost models. The effectiveness of the approach and algorithms is demonstrated on benchmarks using statistical scaling algorithms that quantify the tradeoff between the quality of the solution and the energy cost of the algorithm.
1Outside Adviser (non-EECS), UCLA
Scaling can be defined as the quantitative characterization of properties, events, and mechanisms as the instance of the object under consideration increases its size. In many fields, from physics and computational biology to multiprocessor systems and the Internet, scaling is widely studied. Often, the key prerequisite for the application of an architecture, piece of system software, or an algorithm, is how it scales. Until now, scaling has not been systematically studied in the context of wireless ad hoc sensor networks.
From the design point of view, the key novelty is that our goal is not just to analyze how already existing algorithms and mechanisms scale, but also to develop insights and techniques for the development of scalable algorithms. From a methodological point of view, the main novelty is the use of statistical techniques for analysis of scaling, including non-parametric modeling and validation, sampling, correlation, and perturbation. Special emphasis is placed on localized algorithms. We introduce several generic techniques for the development of localized algorithms and study how they scale.
1Outside Adviser (non-EECS), UCLA
The demand for network resources is steadily increasing. End user experience with a network service is often limited by the weakest link along the path from the client to the server. With the client-server computing model still dominant in today's Internet, traffic tends to converge at the edge of the network, creating performance bottlenecks. As a consequence, performance is still a crucial part of any Internet service.
The performance issue becomes even more important in the face of the proliferation of mobile (Internet) services. Mobile devices often rely on wireless Internet connections that have much more limited bandwidth and are much less reliable. Workload characteristics and usage patterns are substantially different from those of traditional wired and desktop applications. Yet all of these have to work with technologies that were conceived in the context of wired services. This poses significant challenges to the designers, developers, and deployers of mobile services.
In this research, we begin by surveying existing mobile services, and identify and examine specific performance issues faced by these services. In particular, we study the service models, system architectures, and most importantly, the workload characterizations of existing mobile services and their implications for today's Internet architecture, applications, algorithms, and protocols. Finally, we are beginning a study of iMobile (a.k.a. AT&T Mobile Network)--a mobile service platform that allows mobile devices to communicate with each other and to securely access corporate and Internet contents and services. We are working with AT&T to collect and analyze trace data from online iMobile gateways and servers.
Structured peer-to-peer overlay networks have recently gained popularity as a platform for the construction of resilient, large-scale distributed systems [1-5]. Structured overlays conform to a specific graph structure that allows them to locate objects by exchanging O(log N) messages, where N is the number of nodes in the overlay.
Structured overlays can be used to construct scalable, robust, decentralized services such as distributed hash tables and application level multicast. These services in turn promise to enable novel classes of highly scalable, resilient, distributed applications, including cooperative archival storage, cooperative content distribution, and messaging.
Currently, each structured overlay protocol exports a different API with subtly different semantics. Thus, application designers must understand the intricacies of each protocol to decide which system best meets their needs. As a result, applications are locked into one system and unable to leverage innovations in other protocols. Moreover, the semantic differences make a comparative evaluation of different protocol designs difficult.
This work is an initial attempt to identify the fundamental abstractions provided by structured overlays and to define an API for the common services they provide. Such an API should be easily implemented by overlay protocols and allow efficient implementation of a wide range of applications. We believe that a common API will accelerate the adoption of structured overlays, facilitate independent innovation in overlay protocols, services, and applications, and permit direct experimental comparisons between systems.
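As a flavor of what such a common abstraction might look like, the sketch below defines a minimal key-based routing interface with routing and upcall hooks. It is purely illustrative: the method names and semantics are assumptions, not the API defined by this work.

```python
from abc import ABC, abstractmethod

class StructuredOverlay(ABC):
    """Illustrative common interface for structured overlays.

    A sketch of the kind of key-based routing abstraction such a layer might
    expose; the method names and semantics are assumptions, not the API
    defined by this work.
    """

    @abstractmethod
    def route(self, key, message, hint=None):
        """Forward `message` toward the node currently responsible for `key`."""

    @abstractmethod
    def neighbor_set(self, max_rank):
        """Return this node's overlay neighbors, e.g., for replication."""

    def forward(self, key, message, next_hop):
        """Upcall at each intermediate hop; applications may modify or
        redirect the message before it continues (override as needed)."""

    def deliver(self, key, message):
        """Upcall invoked when this node is the root for `key` (override)."""
```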
The wireless token ring protocol (WTRP) is a medium access control (MAC) protocol for wireless networks. The MAC protocol, through which mobile stations share a common broadcast channel, is essential in wireless networks. In an IEEE 802.11 network, contention among stations is not homogeneous, due to the existence of hidden terminals, partially connected network topologies, and random access; consequently, quality of service (QoS) cannot be provided. WTRP supports guaranteed QoS in terms of bounded latency and reserved bandwidth, which are crucial real-time constraints for the applications. WTRP is efficient in the sense that it reduces the number of retransmissions due to collisions. It is fair in the sense that each station uses the channel for an equal amount of time: stations take turns transmitting and are forced to give up the right to transmit after transmitting for a specified amount of time. It is a distributed protocol that supports many topologies, since not all stations need to be connected to each other or to a central station. WTRP is robust against single node failures and recovers gracefully from multiple simultaneous faults. WTRP has applications to inter-access-point coordination in ITS DSRC, safety-critical vehicle-to-vehicle networking, and home networking, and it provides extensions to sensor networks and Mobile IP.
1Postdoctoral Researcher
2Professor
Wireless communications have evolved rapidly in the recent decade. The increasing desire to provide connectivity for mobile computers and communication devices is fueling interest in wireless networks. Future communications will employ high-speed wireless systems in the local area, low-speed wireless systems in the wide area, and high-capacity wired media in the metropolitan environment.
In order to offer broadband communication services and provide universal connectivity to mobile users, standards for wireless local area networks (WLANs) have been designed, along with approaches for interconnecting these WLANs with existing wired LANs and wide-area wireless networks. Study group 802.11 was formed under an IEEE project to recommend an international standard for WLANs. The scope of the 802.11 study group is to develop MAC and physical layer standards for wireless connectivity within a local area. IEEE adopted the first standard [1] in 1997 and revised it in 1999.
Although IEEE 802.11 can be extended to a multihop architecture, it is currently implemented for a single-hop architecture. Deploying a large number of access points (APs), for example in a densely populated metropolitan area, is costly. There are other kinds of networks, namely packet radio or ad hoc networks, in which no APs are needed. One of the advantages of these networks is low cost, because no infrastructure is needed and the networks can be deployed instantly. However, ad hoc networks may be limited to specialized applications, such as battlefields and traveling groups, due to the vulnerability of paths composed of many mobile stations. Nevertheless, this vulnerability can be greatly reduced if the number of wireless hops is limited and station mobility is not high.
This project proposes having mobile stations forward information when they are neither the initial transmitter nor the final receiver.
We propose a novel channel access scheme that exploits the application-specific characteristics of sensor networks to meet their power, real-time deadline, fairness, and congestion control requirements. The defining characteristic of such a sensor network is that the destination of all data packets is a central data collector, usually called an access point, which has unlimited power, whereas the sensor nodes must remain alive for several years on a single battery. Our protocol, PEDAMACS, uses the access point to directly synchronize and schedule all the nodes in the network by increasing its transmission power. After learning each node's topology information (its neighbors and its next hop toward the access point) in topology learning and topology collection phases, the access point explicitly schedules the node transmissions and announces this schedule to all the nodes. Assuming that the nodes generate packets periodically at the same rate, we define the goal of the scheduling algorithm as minimizing the time necessary for all packets to reach the access point when each node starts with exactly one packet. After proving the NP-completeness of the problem, we developed a polynomial time algorithm that guarantees an upper bound on the maximum delay experienced by the packets, proportional to the number of nodes in the network. Simulations performed in TOSSIM, a simulation environment for TinyOS, show the efficiency of the proposed scheme compared to a conventional random access scheme in terms of power and delay.
The study of the constraint optimization routing (COR) problem is motivated by recent research efforts on the QoS routing problem in communication networks. QoS routing raises a series of problems including link state advertisement, path selection, admission control, path setup, and path maintenance. The COR problem focuses on finding a path through the network that satisfies the QoS requirements. Conventional routing protocols characterize network links by a single metric, such as hop count, and protocols like RIP or OSPF find the shortest path based on that metric. For routing that supports QoS requirements, the link model must include multiple parameters such as bandwidth, cost, delay, delay jitter, and loss probability. Routing protocols then become more complicated because they must find paths that satisfy multiple constraints.
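As a minimal, hedged illustration of what multi-constraint path selection involves (not the algorithm studied in this project), the Python sketch below treats bandwidth as a bottleneck constraint handled by pruning and delay as an additive constraint checked on the least-cost path. This is a heuristic simplification, since multi-constraint routing is NP-hard in general; the link data are invented.

# Hedged illustration of COR-style path selection: bandwidth is a bottleneck
# (min) constraint handled by pruning, delay is an additive constraint
# checked on the resulting least-cost path. Link data are invented.
import heapq

def cor_path(graph, src, dst, min_bw, max_delay):
    """graph: {u: [(v, cost, bw, delay), ...]}"""
    # prune links that cannot satisfy the bandwidth requirement
    pruned = {u: [(v, c, d) for (v, c, bw, d) in adj if bw >= min_bw]
              for u, adj in graph.items()}
    # least-cost search (Dijkstra) over the pruned graph, tracking delay
    heap, seen = [(0, 0, src, [src])], set()
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return (path, cost, delay) if delay <= max_delay else None
        if u in seen:
            continue
        seen.add(u)
        for v, c, d in pruned.get(u, []):
            if v not in seen:
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None

g = {'a': [('b', 1, 10, 5), ('c', 4, 2, 1)],
     'b': [('c', 1, 10, 5)],
     'c': []}
print(cor_path(g, 'a', 'c', min_bw=5, max_delay=12))   # a-b-c meets both constraints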
The COR problem has applications in a variety of areas such as transportation system management, decision making, and mission planning. For example, in the Mixed Initiative Control for Automa-teams (MICA) project, unmanned aerial vehicles (UAVs) must destroy valuable targets while facing possible attacks from those targets, limited fuel, and other obstacles. This path planning problem can be readily abstracted as a COR problem with constraints on hazard, path length, and so on.
A recent research effort is to build a dynamic routing algorithm with traffic engineering (DRATE) framework on the structure of the COR problem. In this framework, flows arrive and are routed one by one using specific link metric sets. These link metric sets are computed in real time from dynamic flow information so that traffic engineering objectives are achieved. The key techniques are (1) mapping the traffic engineering objectives appropriately into link metric sets, and (2) integrating multiple objectives into the COR structure.
I have been studying the structural properties of shortest path algorithms. The problem can be thought of as an inverse of the classical shortest path problem. In particular, assume we are given a network of nodes, e.g., a network of autonomous systems (ASes) in the context of BGP, each of which has totally ordered preferences over all possible paths to a specified destination. The question we attempt to answer is whether link costs exist such that the ordering induced by those costs coincides with the originally given order.
We derive necessary and sufficient conditions, in terms of the original path order, that characterize when such costs exist. Interestingly, in some cases where appropriate link costs do not exist, the optimal routes could nevertheless have been computed efficiently and in a distributed fashion, via some variant of the Bellman-Ford routing algorithm, had such costs existed. It turns out that one can extend the notion of a link cost to a transformation, with an appropriately defined action on the integers. The classical shortest path algorithm is a particular case of this formulation, in which each integer corresponds to a transformation that acts by translation by the associated integer. This extended formulation of shortest paths allows us to find extended link costs that match the given path preferences in cases where the classical formulation fails. Currently, I am looking at methods for efficiently implementing these extended link costs.
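A hedged sketch of the idea, not an implementation from this work: in the generalized Bellman-Ford relaxation below, each link carries a transformation acting on path labels, and the classical additive cost appears alongside a bottleneck-style transformation. The particular functions, and the implicit assumption that transformations are monotone so the relaxation converges, are illustrative choices.

# Hedged sketch of "extended link costs": each link carries a transformation
# acting on path labels; a Bellman-Ford-style relaxation propagates labels
# toward the destination. Classical shortest paths are the special case where
# every transformation is translation by an integer.
import math

def generalized_bellman_ford(nodes, links, dest, rounds=None):
    """links: {(u, v): f} where f maps the label at v to a label at u via (u, v)."""
    label = {n: math.inf for n in nodes}
    label[dest] = 0
    for _ in range(rounds or len(nodes) - 1):
        for (u, v), f in links.items():
            if label[v] < math.inf:
                label[u] = min(label[u], f(label[v]))
    return label

nodes = ['a', 'b', 'c', 'd']
links = {
    ('a', 'b'): lambda x: x + 1,          # classical additive cost
    ('b', 'd'): lambda x: x + 2,
    ('a', 'c'): lambda x: max(x, 5),      # bottleneck-style transformation
    ('c', 'd'): lambda x: max(x, 1),
}
print(generalized_bellman_ford(nodes, links, dest='d'))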
An application of this problem arises in the design of routing algorithms that are robust to node or link failures. In general, the system operator has a fallback plan of alternative routes to be followed when failures occur. When failures do occur, the surviving nodes should not need to solve a new optimization problem to determine how failed routes are rerouted. Instead, by running a standard shortest path algorithm, e.g., Bellman-Ford, with extended link costs, each node may be able to find the optimal rerouting on its own.
In the future, I plan to study the properties of extended classes of shortest path algorithms. Although a single set of extended costs matching arbitrary preferences over an arbitrary network topology exists, its implementation is intractable. What one wants is the construction of a universal set of extended costs that always yields a solution when the path preferences possess a pre-specified structure. Thus, I plan to devise a set of theoretical tools that give constructible solutions for the given preference structures.
The tremendous success of 802.11b wireless networks ensures that such deployments will become ubiquitous in the near future. An ad hoc wireless network based on ubiquitous, low-cost 802.11b interfaces would be an ideal candidate for disaster relief applications when traditional networks fail to provide adequate service. However, 802.11b was not designed with QoS in mind, and it is not trivial to provide QoS guarantees over such networks. To bridge this gap, we investigate the problem of providing soft QoS guarantees in a small 802.11b ad hoc wireless network.
One of the major challenges in providing QoS is deciding routes for application traffic. At the heart of this routing problem is finding the right model for describing the shared nature of the wireless medium. We used a simplified model for the delay and bandwidth constraints of the shared medium based on the notion of a cell. We showed, through an implementation, that this simple model captures the important characteristics of the shared-medium constraints specific to wireless routing. The simplicity of the model allows us to use an integer linear program to find the optimal routes. Furthermore, the routes chosen using the simple model are verified by a faster-than-real-time simulation that can use more sophisticated interference models, which provide more accurate results but are much harder to analyze. To cope with the inevitable differences between the simple model and the more accurate MAC layer simulation model, we designed a control loop that searches for and adapts feasible QoS routes, starting from those computed with the simple model, by incorporating the simulator in the loop. In addition, to compensate for the lack of MAC layer reservation capabilities, we introduced a simple distributed scheduling algorithm that rate-limits best-effort traffic to protect the QoS requested by high-priority applications. Finally, we completed a proof-of-concept implementation of all of the above features to show that our architecture is both self-contained and sufficient to provide QoS in an ad hoc wireless network using only off-the-shelf 802.11b network equipment.
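As a rough illustration of the cell-based feasibility idea (not the actual model or integer linear program used in this work), the Python sketch below groups mutually interfering links into a cell and requires the aggregate rate routed through the cell to stay within its capacity; the cell layout, flow rates, and capacity are invented.

# Hedged sketch of a cell-style feasibility check: links that interfere with
# one another form a "cell", and the aggregate rate routed through a cell
# must stay within that cell's usable capacity.
def routes_feasible(flows, cells, capacity):
    """flows: {flow_id: (rate, [links on its route])}
    cells: {cell_id: set of links sharing the medium}
    capacity: {cell_id: usable rate of that cell}"""
    load = {c: 0.0 for c in cells}
    for rate, route in flows.values():
        for link in route:
            for c, members in cells.items():
                if link in members:
                    load[c] += rate          # every hop in a cell consumes airtime
    return all(load[c] <= capacity[c] for c in cells), load

flows = {'voip1': (0.5, [('a', 'b'), ('b', 'c')]),
         'video': (1.0, [('d', 'b'), ('b', 'c')])}
cells = {'cell1': {('a', 'b'), ('b', 'c'), ('d', 'b')}}
capacity = {'cell1': 4.0}
print(routes_feasible(flows, cells, capacity))   # (True, {'cell1': 3.0})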
Recent measurements have shown that peer-to-peer file transfer volumes comprise almost half of the traffic carried across metropolitan area networks. What is alarming is that peer-to-peer applications have existed for less than five years. What is even more surprising is that these peer-to-peer communities continue to thrive even though measurements show that only a small fraction of the users are willing to share their content with other users.
In this research, we develop models that predict this unexpected behavior. Users clearly act in self-interest, often at the expense of other users. Cooperation, however, is highly desirable, since all users benefit from a more diverse selection of media. Furthermore, network performance, e.g., lower link loads and shorter download times, should only improve as sharing increases, and better network performance should only improve a user's utility.
The problem of predicting and driving the behavior of self-interested, independent decision makers has been well studied in game theory. We have developed simple repeated game models and found subgame-perfect equilibria in which all users cooperate. Users may punish each other, either by blocking a free-riding user or by announcing to other users that they should avoid that user. In this way users have an incentive to cooperate.
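A toy, hedged illustration of the punishment mechanism, not our actual model: the Python sketch below runs a two-player repeated sharing game in which a grim-trigger strategy shares until the other player is caught free-riding and blocks them thereafter. The payoff values and discount factor are invented.

# Hedged toy model of punishment in a repeated sharing game: grim trigger
# shares until the opponent free-rides, then blocks them forever.
def payoff(my_share, other_share):
    # sharing costs 1 unit of upload; receiving shared content is worth 3
    return (3 if other_share else 0) - (1 if my_share else 0)

def repeated_game(strategy_a, strategy_b, rounds=20, discount=0.9):
    hist_a, hist_b, total_a, total_b = [], [], 0.0, 0.0
    for t in range(rounds):
        a = strategy_a(hist_b)            # decide based on opponent's history
        b = strategy_b(hist_a)
        total_a += (discount ** t) * payoff(a, b)
        total_b += (discount ** t) * payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

grim = lambda opp_hist: all(opp_hist)     # cooperate until betrayed
free_rider = lambda opp_hist: False       # never shares
print(repeated_game(grim, grim))          # mutual cooperation pays off
print(repeated_game(grim, free_rider))    # the free rider is blocked after round 1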
Questions remain, however, about constructing models in which the desirable equilibrium is unique, so that under any initial conditions all users eventually choose actions leading to an equilibrium of full cooperation.
The number of deployed 802.11 wireless access point devices has grown dramatically in recent years. In many cases these devices are deployed by individuals or institutions for their own private use rather than to provide wireless access service to other parties. These private access points are often underutilized and could potentially serve users who are foreign to the access point's owner but lie within its communication radius. Such a service would be especially valuable for mobile users traveling through an area where they cannot establish Internet connectivity in any other way. However, access point owners would incur costs in offering this service, including any performance degradation due to the additional traffic from foreign users and the possible security risk of allowing foreign users access to the access point and potentially to other hosts connected to it. Because of this, access point owners would need incentives to voluntarily offer service to foreign users. One possible incentive is payment from the foreign user to the owner of the access point. A problem with this model is that the foreign user and the access point owner cannot trust each other. For example, if the foreign user were to pay in advance for a service, she cannot be certain that the access point owner would actually provide the service after receiving payment. Likewise, if the foreign user agreed to pay after receiving service for some time, the access point owner cannot be certain that the foreign user would actually deliver on the promised payment. Our work focuses on formulating this situation as a game-theoretic model and using the model to find payment schemes that incentivize access point owners to allow foreign users onto their access points, that incentivize both parties to carry out any agreed exchange of payment for service, and that require as little communication between the two parties as possible.
Today's Internet is composed of many interconnected autonomous networks, or domains, that operate independently according to their own sets of policies. However, providing end-to-end quality of service (QoS) requires a joint effort by all the networks. One approach used in practice is built upon service level agreements (SLAs) between domains: one network offers others forwarding services with certain guarantees on QoS, and the concatenation of these SLAs then ensures end-to-end QoS for end users.
Our work focuses on the application of voice over IP (VoIP) and illustrates how measurement-based admission control and SLAs can be designed to provide end-to-end QoS in a dynamic, fully decentralized way. A game-theoretic model for the proposed scheme is developed to demonstrate its feasibility and some of its interesting properties.
In many network routing situations, it is necessary to calculate a protection path in addition to a normal path. Applications like MPLS networks and optical networks require backup paths that satisfy various constraints of bandwidth, link or node selection, and ease of configuration.
In this research, we first demonstrate the premise that the algorithmic treatment of backup path calculation, configuration, and maintenance should differ from that accorded to normal paths. We demonstrate this through a new hysteresis-based scheme for handling dynamic demands in a network, in which assigning different parameters to normal and backup paths enables better utilization of network resources.
The chief contribution of this work is an infrastructure for managing normal and protection paths in a network through a modular suite of algorithms. We present several novel mechanisms to enhance overall performance: (1) a distributed method of separately calculating normal and backup paths in the network using link state information; (2) sophisticated per-link state maintained at each node, which allows optimal sharing of backup bandwidth; (3) piggybacking of link state information on the reservation ack/nack packets, implementing a very low-intensity link state update mechanism that may be deployed where global link state updates are too costly; and (4) asynchronous dynamic reorganization of backup paths to reduce congestion in the network. We apply Markov decision process analysis to quantitatively determine the reduction in blocking probability under various network loads.
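As a hedged illustration of why per-link state enables backup bandwidth sharing (a simplification, not the per-link state actually maintained by our mechanisms): under a single-failure assumption, the protection reservation on a link is the worst case over possible failed links rather than the sum over all backups routed through it. The demands and paths in the Python sketch below are invented.

# Hedged illustration of backup-bandwidth sharing under a single-failure
# assumption: reserve the worst case over failures, not the sum of all backups.
from collections import defaultdict

def backup_reservation(demands):
    """demands: list of (bandwidth, primary_links, backup_links)."""
    # needed[l][f] = backup bandwidth required on link l if link f fails
    needed = defaultdict(lambda: defaultdict(float))
    for bw, primary, backup in demands:
        for f in primary:
            for l in backup:
                needed[l][f] += bw
    # only one failure at a time -> reserve the maximum over failures
    return {l: max(per_failure.values()) for l, per_failure in needed.items()}

demands = [
    (5.0, [('a', 'b')], [('a', 'c'), ('c', 'b')]),
    (3.0, [('d', 'e')], [('d', 'c'), ('c', 'b')]),   # disjoint primary, shared backup link
]
print(backup_reservation(demands))
# link ('c','b') needs max(5, 3) = 5, not 5 + 3 = 8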
We have also developed a simulation model that may be used to verify and evaluate the algorithms. The modular simulation model allows specific components to be plugged in or out and has hooks to capture statistics. It enables us to quantify the gains achieved by adopting the mechanisms listed above, e.g., the reduction in blocking probability and in specific link loads under fixed total loads applied across source-destination pairs in the network.
Many networking scenarios today require the calculation of a backup path in addition to a normal path through the network. Furthermore, both paths are required to satisfy quality (i.e., bandwidth) constraints. Traditional methods for calculating backup paths often do not account for the sharing of bandwidth among different backup paths, and consequently utilize network resources poorly.
In this work, we first present a centralized algorithm that exactly accounts for bandwidth sharing on each link across the various backup paths, and hence minimizes the bandwidth reserved for protection. We take into account that disjoint normal paths can share their backup bandwidth, and that normal bandwidth no longer needs to be reserved along the remaining links of a path once one of its links fails, thereby achieving optimal reuse of protection bandwidth.
Finally, we present a distributed algorithm for the same problem; it yields a suboptimal solution, but one that is still far more efficient than the naive method.
Developing networks that provide some guarantee of QoS is important for accommodating real-time, expedited forwarding (EF) applications such as VoIP and video. These latency-sensitive applications are not worth using over a network if the provided quality is poor. For instance, telephony traffic can be carried over the Internet in the form of VoIP, but it can also be carried over the public switched telephone network at a guaranteed quality; for VoIP to be a competitive alternative, it needs similar guarantees of expected quality. Poor quality for these real-time applications, in terms of delay, results from congestion in the network.
My research investigates a scheme that provides QoS for such EF traffic by adding packet marking capability to routers. Within a network, certain routers can mark packets at the onset of congestion. Through such marking, network resources can be shared by both EF and best-effort (BE) flows. EF flows are governed by admission control that responds to the number of packets marked by the network, so an EF flow is admitted only if the routers it will use have the resources available to provide the QoS it needs. BE flows are governed by TCP with ECN, in which a marked packet is treated as a dropped packet, so a BE flow will not saturate the network with its transmissions, leaving adequate resources to serve the EF flows.
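A minimal, hedged sketch of the admission decision described above, not the actual admission control algorithm: a new EF flow is admitted only if the fraction of marked packets observed over a probe window stays below a threshold. The threshold and probe size in the Python sketch below are invented knobs.

# Hedged sketch of marking-driven admission control for a new EF flow.
def admit_ef_flow(marked_flags, mark_threshold=0.02):
    """marked_flags: sequence of booleans, one per probe packet,
    True if the packet was observed with a congestion mark."""
    if not marked_flags:
        return False                       # no probe feedback, be conservative
    mark_fraction = sum(marked_flags) / len(marked_flags)
    return mark_fraction <= mark_threshold

# e.g. 1 marked packet out of 100 probes -> admitted
print(admit_ef_flow([False] * 99 + [True]))        # True
print(admit_ef_flow([True] * 10 + [False] * 90))   # False, path is congested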
Past work includes theoretical analysis and simulation results obtained with ns. Current work involves creating an implementation using laptops as wireless routers, on which more extensive tests can be run over a small network of such routers.
Streaming media applications are becoming increasingly popular on the Internet. For streaming applications, the media data must be encoded without the benefit of knowing the state of the channel during transmission. For this reason, in streaming media systems the encoding must be flexible, and the server must adaptively select and transmit the appropriate data to the client as a function of the state of the network. Since many media streaming applications use a pre-roll buffer, there is considerable flexibility in packet scheduling, which significantly affects rate-distortion performance [1,2].
In our study, we analyze the expected global distortion in video streaming assuming error concealment at the receiver. We also propose a simple rate-distortion optimization method, applicable in real time, based on the effective bandwidth approach. We evaluate the effective bandwidth of VBR video data by computing the autocovariance of the data size per group of pictures (GOP), taking into account the variance of channel bandwidth fluctuations. By assigning different priorities to motion vectors and texture, we effectively obtain SNR scalability [3]. We also adopt temporal scalability, which results in graceful degradation over channels with heavy loss. Experimental results show that our scheduling algorithm, motion-texture discrimination, and temporal scalability yield an improvement of several dB in PSNR compared to conventional sequential transmission of video data without scalability.
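For concreteness, the hedged Python sketch below computes the GOP-size autocovariance mentioned above; the final effective-bandwidth line is a simple mean-plus-margin stand-in rather than the formula used in our study, and the GOP sizes are invented.

# Hedged sketch: GOP-size statistics for VBR video. The autocovariance is
# standard; the "effective bandwidth" estimate is a simple stand-in.
import numpy as np

def gop_autocovariance(gop_sizes, max_lag=5):
    x = np.asarray(gop_sizes, dtype=float)
    mean, n = x.mean(), len(x)
    return [float(np.mean((x[:n - k] - mean) * (x[k:] - mean)))
            for k in range(max_lag + 1)]

def effective_bandwidth_estimate(gop_sizes, gop_duration, safety=1.5):
    x = np.asarray(gop_sizes, dtype=float)
    # bits per GOP converted to a rate, padded by a variance-based margin
    return (x.mean() + safety * x.std()) / gop_duration

gops = [120e3, 95e3, 180e3, 110e3, 160e3, 90e3]     # invented GOP sizes in bits
print(gop_autocovariance(gops, max_lag=2))
print(effective_bandwidth_estimate(gops, gop_duration=0.5))  # bits per second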