FlashBack is a peer-to-peer replication middleware designed for power-constrained devices. It provides a reliable storage layer for personal area networks (PANs) by spreading backup data among peers. FlashBack is scalable, fault-tolerant, and opportunistic in the presence of fixed infrastructure, and it leverages heterogeneity within the PAN. This project examines the challenges presented by mobile personal devices. We propose the FlashBack architecture and algorithms, and present experimental results that illustrate FlashBack's performance characteristics.
1Staff, Intel Research Laboratory at Seattle
2Staff, Intel Research Laboratory at Seattle
This project examines the feasibility of peer-to-peer full-text keyword search of the Web. Two classes of keyword search techniques are in use or have been proposed: flooding of queries over an overlay network (as in Gnutella), and intersection of index lists stored in a distributed hash table (DHT). A simple model of capacity and workload suggests that the Internet does not have enough capacity to make naive use of either of these techniques attractive for Web search. We then present a number of existing and novel optimizations, such as caching and precomputation, Bloom filters, gap compression, and adaptive set intersections with clustering. We estimate their effect on performance, and conclude that in combination these optimizations would bring the problem to within an order of magnitude of feasibility. To achieve the last order of magnitude, we propose using Fagin's algorithm (FA) in conjunction with a ranking function to generate incremental results. This may allow savings in communication if users terminate their queries early.
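As a concrete illustration of one of the optimizations above, the following sketch shows gap compression of a sorted posting list: document IDs are stored as differences ("gaps") between consecutive entries, which are small and pack well into a variable-byte encoding. This is a generic textbook technique, not the specific encoding the project uses.

```python
def encode_gaps(doc_ids):
    """Delta-encode a sorted list of doc IDs, then varint-encode each gap."""
    out = bytearray()
    prev = 0
    for doc_id in doc_ids:
        gap = doc_id - prev
        prev = doc_id
        # Variable-byte encoding: 7 data bits per byte, high bit means "more".
        while gap >= 0x80:
            out.append((gap & 0x7F) | 0x80)
            gap >>= 7
        out.append(gap)
    return bytes(out)

def decode_gaps(data):
    """Invert encode_gaps, recovering the original sorted doc IDs."""
    doc_ids, cur, shift, prev = [], 0, 0, 0
    for byte in data:
        cur |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += cur
            doc_ids.append(prev)
            cur, shift = 0, 0
    return doc_ids

posting_list = [3, 17, 18, 1000, 1001, 250000]
compressed = encode_gaps(posting_list)
assert decode_gaps(compressed) == posting_list
```

Because gaps within a dense posting list are far smaller than absolute IDs, the six 32-bit IDs above compress to nine bytes instead of twenty-four.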
We intend to build a DHT-based search engine, and compare its performance characteristics with Gnutella and KaZaA.
Recent work in the networking and OS research communities has developed (and continues to improve) highly scalable peer-to-peer systems called distributed hash tables (DHTs). These systems can provide the foundation for designing a massively distributed query engine. Such a query engine can be used not only to query over a wide-area network (tapping into distributed resources), but also to query data that naturally resides in the network, without the need to transport the data to a central collection point. We believe such a system will be useful in many applications, particularly network monitoring.
DHTs are able to map a key (a sequence of bits) to a node in the network even as nodes simultaneously enter, leave, and fail. Above this mapping, the common hash table functions, put() and get(), can be implemented. We extend this interface slightly to allow for query processing.
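The layering described above can be sketched as follows. This is a toy model (not the project's actual interface): a key-to-node mapping via consistent hashing, put()/get() on top of it, and a hypothetical lscan() extension of the kind a query processor needs to read a "table" scattered across the network.

```python
import hashlib
from bisect import bisect_left

class ToyDHT:
    """Toy key-to-node mapping with put()/get() layered above it."""

    def __init__(self, node_names):
        # Consistent hashing: each node owns a point on a ring of ids.
        self.ring = sorted((self._hash(n), n) for n in node_names)
        self.store = {n: {} for n in node_names}

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

    def _owner(self, key):
        # First node clockwise of the key's point on the ring.
        idx = bisect_left(self.ring, (self._hash(key), ""))
        return self.ring[idx % len(self.ring)][1]

    def put(self, namespace, key, value):
        self.store[self._owner(namespace + "/" + key)][(namespace, key)] = value

    def get(self, namespace, key):
        return self.store[self._owner(namespace + "/" + key)].get((namespace, key))

    def lscan(self, namespace):
        """Hypothetical query-processing extension: iterate every tuple
        in a namespace, as an operator scanning a base relation would."""
        for node_store in self.store.values():
            for (ns, key), value in node_store.items():
                if ns == namespace:
                    yield key, value

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put("flows", "src1", 42)
assert dht.get("flows", "src1") == 42
```

A real DHT would route these calls over the network rather than consult a shared dictionary; the sketch only shows where the extended interface sits relative to put() and get().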
We use existing hash-based distributed/parallel relational join algorithms (such as the symmetric hash join) while relaxing traditional consistency semantics (ACID), allowing the system to scale to over ten thousand machines in simulation and (so far) dozens in an actual implementation on the Millennium Cluster.
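For readers unfamiliar with it, the symmetric hash join mentioned above can be sketched in a few lines: both inputs are hashed as tuples arrive, and each arrival probes the opposite hash table, so the operator is pipelined and non-blocking. The single-stream driver below is an illustrative simplification of the distributed version.

```python
from collections import defaultdict

def symmetric_hash_join(stream):
    """stream yields ('L', key, payload) or ('R', key, payload) tuples in
    arrival order; join results are emitted as soon as a match exists,
    with no blocking build phase."""
    left, right = defaultdict(list), defaultdict(list)
    for side, key, payload in stream:
        if side == 'L':
            left[key].append(payload)
            for other in right[key]:      # probe the opposite hash table
                yield key, payload, other
        else:
            right[key].append(payload)
            for other in left[key]:
                yield key, other, payload

stream = [('L', 1, 'a'), ('R', 1, 'x'), ('R', 2, 'y'),
          ('L', 2, 'b'), ('L', 1, 'c')]
assert list(symmetric_hash_join(stream)) == [
    (1, 'a', 'x'), (2, 'b', 'y'), (1, 'c', 'x')]
```

The symmetry is what makes the operator attractive here: neither input must be fully materialized before results flow, which matters when inputs are unbounded network streams.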
Our primary research focus is on supporting continuous queries, which appear to be a natural fit for the class of applications we are interested in supporting. In addition to relational joins, aggregation is also needed for many of the applications. Key properties for any operator include low communication cost, little or no synchronization, and non-blocking execution. These properties open the door to questions of dynamic query optimization, approximate answers (such as online aggregation), and multi-query sharing.
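A toy illustration of the non-blocking property and of online aggregation, both mentioned above: a running average that emits a refined estimate after every input tuple rather than blocking until the stream ends. This is a generic illustration, not the project's operator.

```python
def online_average(stream):
    """Non-blocking aggregate: yield an approximate answer after every
    tuple, in the spirit of online aggregation."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
        yield total / count   # running estimate, refined as data arrives

estimates = list(online_average([10, 20, 30, 40]))
assert estimates == [10.0, 15.0, 20.0, 25.0]
```

On an unbounded monitoring stream the final, exact answer may never arrive, so an operator that produces usable intermediate estimates is essential for continuous queries.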
1International Computer Science Institute
Communication reliability is a desired property in computer networks. One key technique for increasing the reliability of a communication path is to provision a disjoint backup path. One of the main challenges in implementing this technique is that two paths that are disjoint at the IP or overlay layer may share the same physical links. As a result, although we may select a disjoint backup path at the overlay layer, one physical link failure may cause the failure of both the primary and the backup paths.
In this project, we propose a solution to this problem. The main idea is to take correlated link failures into account at the overlay layer. More precisely, our goal is to find a route for the backup path that minimizes the joint path failure probability of the primary and backup paths. To demonstrate the feasibility of our approach, we perform extensive evaluations under both single and double link failure models. Our results show that, in terms of robustness, our approach is near optimal: it is up to 60% better than reserving no backup path and up to 30% better than selecting the backup path with the traditional shortest disjoint path algorithm.
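The objective above can be sketched under the single-link failure model: the primary and backup both fail only when the failed physical link is shared by both overlay paths, so the best backup is the candidate that minimizes the total failure probability of the physical links it shares with the primary. The topology, overlay-to-physical mapping, and probabilities below are illustrative assumptions, not the paper's data.

```python
def joint_failure_prob(primary, backup, overlay_to_physical, link_fail_prob):
    """Probability, under the single-link failure model, that one
    physical link failure takes down both overlay paths: the sum of
    failure probabilities over their shared physical links."""
    phys = lambda path: {l for hop in path for l in overlay_to_physical[hop]}
    shared = phys(primary) & phys(backup)
    return sum(link_fail_prob[l] for l in shared)

def pick_backup(primary, candidates, overlay_to_physical, link_fail_prob):
    """Choose the candidate backup with minimal joint failure probability."""
    return min(candidates,
               key=lambda b: joint_failure_prob(
                   primary, b, overlay_to_physical, link_fail_prob))

# Illustrative example: overlay hops map onto physical links.
overlay_to_physical = {'A-B': ['p1'], 'B-C': ['p2'],        # primary
                       'A-D': ['p1', 'p3'], 'D-C': ['p4'],  # shares p1
                       'A-E': ['p5'], 'E-C': ['p6']}        # fully disjoint
link_fail_prob = {'p1': 0.02, 'p2': 0.01, 'p3': 0.01,
                  'p4': 0.01, 'p5': 0.03, 'p6': 0.03}
primary = ['A-B', 'B-C']
best = pick_backup(primary, [['A-D', 'D-C'], ['A-E', 'E-C']],
                   overlay_to_physical, link_fail_prob)
assert best == ['A-E', 'E-C']
```

Note that a naive overlay-disjoint choice could still pick the path through D, since it is disjoint at the overlay layer despite sharing physical link p1 with the primary.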
Structured peer-to-peer overlay networks have recently gained popularity as a platform for the construction of resilient, large-scale distributed systems [1-5]. Structured overlays conform to a specific graph structure that allows them to locate objects by exchanging O(log N) messages, where N is the number of nodes in the overlay.
Structured overlays can be used to construct scalable, robust, decentralized services such as distributed hash tables and application level multicast. These services in turn promise to enable novel classes of highly scalable, resilient, distributed applications, including cooperative archival storage, cooperative content distribution, and messaging.
Currently, each structured overlay protocol exports a different API with subtly different semantics. Thus, application designers must understand the intricacies of each protocol to decide which system best meets their needs. As a result, applications are locked into one system and unable to leverage innovations in other protocols. Moreover, the semantic differences make a comparative evaluation of different protocol designs difficult.
This work is an initial attempt to identify the fundamental abstractions provided by structured overlays and to define an API for the common services they provide. Such an API should be easily implemented by overlay protocols and allow efficient implementation of a wide range of applications. We believe that a common API will accelerate the adoption of structured overlays, facilitate independent innovation in overlay protocols, services, and applications, and permit direct experimental comparisons between systems.
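One plausible shape for such a common API is a key-based routing tier that every overlay protocol implements, plus upcalls that services and applications hook into. The method names and signatures below are illustrative assumptions for this sketch, not a claim about any particular protocol's interface.

```python
from abc import ABC, abstractmethod

class KeyBasedRouting(ABC):
    """Interface each structured overlay would implement."""

    @abstractmethod
    def route(self, key, msg, hint=None):
        """Deliver msg toward the node responsible for key; hint may
        suggest a first hop to try."""

    @abstractmethod
    def local_lookup(self, key, num, safe):
        """Return up to num next hops toward key from local routing state."""

class Application(ABC):
    """Upcalls the overlay invokes on the layer above, so a DHT,
    multicast tree, or archival store is written once against this
    interface and runs over any conforming overlay."""

    def forward(self, key, msg, next_hop):
        """Called at each intermediate node; may modify or veto routing."""
        return key, msg, next_hop

    def deliver(self, key, msg):
        """Called at the node responsible for key."""

# Minimal concrete overlay stub showing how a protocol would plug in.
class StubOverlay(KeyBasedRouting):
    def route(self, key, msg, hint=None):
        return ("routed", key, msg)

    def local_lookup(self, key, num, safe):
        return []
```

The point of the abstraction is the separation: protocols compete below the interface, while services and applications above it remain portable across overlays.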