
Electrical Engineering and Computer Sciences

COLLEGE OF ENGINEERING

UC Berkeley

Optimization of File Migration In Distributed Systems

Øivind Kure

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-88-413
April 1988

http://www.eecs.berkeley.edu/Pubs/TechRpts/1988/CSD-88-413.pdf

This dissertation presents and evaluates several new algorithms that improve the performance of distributed file systems by migrating or copying files as necessary between system nodes. The results are based on analysis of file reference patterns from three commercial installations, modeling, and trace-driven simulation.

The first part of the dissertation is an exploratory analysis of how shared user files are referenced. We find that although few files are shared, they are opened frequently, and account for a large fraction of the I/O traffic to user files. The reference pattern to shared files is not easily characterized, and varies widely among files. A batch Poisson process with geometric batch size is determined to be the most appropriate model.
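The batch Poisson model named above can be sketched in a few lines: batches of opens arrive as a Poisson process (exponential inter-arrival times), and each batch's size is geometrically distributed. This is an illustrative simulation only; the rate and batch parameters below are invented, not values from the dissertation's trace analysis.

```python
import random

def batch_poisson_stream(rate, batch_p, horizon, seed=0):
    """Generate (time, batch_size) open events: batch arrivals follow a
    Poisson process with the given rate, and each batch's size is
    geometric with success probability batch_p (so mean size 1/batch_p)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)      # exponential inter-arrival time
        if t > horizon:
            break
        size = 1
        while rng.random() > batch_p:   # geometric batch size, >= 1
            size += 1
        events.append((t, size))
    return events

stream = batch_poisson_stream(rate=2.0, batch_p=0.4, horizon=100.0)
```

The heavier the clustering of opens observed in a trace, the smaller `batch_p` must be to match it, since mean batch size is `1/batch_p`.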

Based on the exploratory analysis, we developed several algorithms for file migration and replication. The algorithms evaluated include those based on our file reference pattern analysis, as well as simple strategies such as static placement and movement on reference, and optimal look-ahead migration and placement. We found that only a few files should be migrated or replicated, but replication or migration can substantially reduce the network traffic (up to 63% for replication and 36% for migration, relative to static placement).

A policy based on a batch Poisson process with geometric batch size has the best performance when replication is not allowed. It uses as decision variables the fraction of a file accessed per open, the number of references from a user, and the number of changes in locality.
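As a rough illustration of how the three decision variables named above might combine (not the dissertation's actual policy; all threshold values are invented), a migration test could take this shape:

```python
def should_migrate(fraction_per_open, remote_refs, locality_changes,
                   min_fraction=0.5, min_refs=10, max_changes=3):
    """Hypothetical migration test on the three decision variables named
    in the text: migrate a file toward a remote node only when opens read
    a large fraction of the file, the remote user references it often,
    and its locality has not shifted too many times already.  The
    thresholds here are illustrative, not values from the dissertation."""
    return (fraction_per_open >= min_fraction
            and remote_refs >= min_refs
            and locality_changes <= max_changes)

# A hot, stably-located file read in large chunks is a migration candidate.
should_migrate(0.8, 25, 1)
```

The third variable acts as a brake: a file whose locality has already changed many times is likely to bounce between nodes, so migrating it again would add traffic rather than save it.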

By replicating files, the network traffic can be reduced further compared to migration alone (up to 42%). Whether the additional copies should be invalidated or updated when the file is updated depends on the installation and the rules for placing users at nodes. The algorithms with the best performance use the average reference rate, the number of consecutive opens in update mode, and the time since the node started using the file as the decision variables. By comparing our realizable algorithms with optimal unrealizable algorithms, we show that it is unlikely that other migration or replication algorithms can achieve a substantially better performance.
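The replication decision can be sketched the same way, again purely as an illustration of combining the three variables the text names; the thresholds and the exact form of the rule are invented, not taken from the dissertation:

```python
def should_replicate(avg_ref_rate, consecutive_update_opens, time_using_file,
                     min_rate=1.0, max_update_run=2, min_age=60.0):
    """Hypothetical replication test on the three decision variables named
    in the text: replicate when the node references the file often, recent
    opens are not dominated by updates (each update forces the extra copy
    to be invalidated or propagated), and the node has used the file long
    enough for the rate estimate to be stable.  Thresholds are invented."""
    return (avg_ref_rate >= min_rate
            and consecutive_update_opens <= max_update_run
            and time_using_file >= min_age)
```

The update-run check reflects the trade-off stated above: a replica of a frequently updated file costs invalidation or update traffic that can outweigh the read traffic it saves.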


BibTeX citation:

@techreport{Kure:CSD-88-413,
    Author = {Kure, {\O}ivind},
    Title = {Optimization of File Migration In Distributed Systems},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1988},
    Month = {Apr},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1988/5861.html},
    Number = {UCB/CSD-88-413},
    Abstract = {This dissertation presents and evaluates several new algorithms that improve the performance of distributed file systems by migrating or copying files as necessary between system nodes. The results are based on analysis of file reference patterns from three commercial installations, modeling, and trace-driven simulation.

The first part of the dissertation is an exploratory analysis of how shared user files are referenced. We find that although few files are shared, they are opened frequently, and account for a large fraction of the I/O traffic to user files. The reference pattern to shared files is not easily characterized, and varies widely among files. A batch Poisson process with geometric batch size is determined to be the most appropriate model.

Based on the exploratory analysis, we developed several algorithms for file migration and replication. The algorithms evaluated include those based on our file reference pattern analysis, as well as simple strategies such as static placement and movement on reference, and optimal look-ahead migration and placement. We found that only a few files should be migrated or replicated, but replication or migration can substantially reduce the network traffic (up to 63% for replication and 36% for migration, relative to static placement).

A policy based on a batch Poisson process with geometric batch size has the best performance when replication is not allowed. It uses as decision variables the fraction of a file accessed per open, the number of references from a user, and the number of changes in locality.

By replicating files, the network traffic can be reduced further compared to migration alone (up to 42%). Whether the additional copies should be invalidated or updated when the file is updated depends on the installation and the rules for placing users at nodes. The algorithms with the best performance use the average reference rate, the number of consecutive opens in update mode, and the time since the node started using the file as the decision variables. By comparing our realizable algorithms with optimal unrealizable algorithms, we show that it is unlikely that other migration or replication algorithms can achieve a substantially better performance.}
}

EndNote citation:

%0 Report
%A Kure, Øivind
%T Optimization of File Migration In Distributed Systems
%I EECS Department, University of California, Berkeley
%D 1988
%@ UCB/CSD-88-413
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1988/5861.html
%F Kure:CSD-88-413