Nick Lanham

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2014-46

May 1, 2014

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-46.pdf

<p>Recent years have seen an ever-widening gulf develop between access times for data stored in memory and data stored on disk. Concurrently, growth in main-memory sizes has led to large gains in the popularity of database systems that keep their working sets primarily in memory. These systems assume either that all data is always in memory, or that access to disk, managed by a standard buffer pool, will suffice.</p>

<p>However, with data sizes growing steadily, and more quickly than available main memory, it is clear that all in-memory systems will need some way to move data to a cold backing store.</p>

<p>This paper proposes a new online, statistics-based, batch-oriented technique that allows an RDBMS to leverage cold storage to increase data capacity without unduly impacting query performance. Our solution couples well with semantic knowledge about an application, making it easy to take advantage of application-specific access patterns. We develop a number of techniques for efficient statistics gathering and management, for moving data to cold storage, and for querying data in cold storage. We show that this approach fits well into the main-memory model, and that it performs well on industry-standard benchmarks as well as on an Enterprise Resource Planning benchmark we have developed.</p>
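The online, statistics-based, batch-oriented aging described above can be sketched as a minimal hot/cold classifier: per-tuple access statistics are gathered online over a time window, and tuples whose access counts fall below a threshold are moved to cold storage in one batch. All names, thresholds, and the windowing policy here are illustrative assumptions, not the report's actual design.

```python
import time
from collections import defaultdict


class AccessTracker:
    """Gathers per-tuple access counts within a time window and flags
    rarely accessed tuples as candidates for cold storage.
    (Illustrative sketch; thresholds and policy are assumptions.)"""

    def __init__(self, cold_threshold=2, window_seconds=60.0):
        self.cold_threshold = cold_threshold
        self.window_seconds = window_seconds
        self.counts = defaultdict(int)       # tuple id -> accesses this window
        self.window_start = time.monotonic()

    def record_access(self, tuple_id):
        """Called on each read/write; this is the online statistics step."""
        self.counts[tuple_id] += 1

    def cold_candidates(self, all_ids):
        """Ids accessed fewer than cold_threshold times this window."""
        return [tid for tid in all_ids
                if self.counts.get(tid, 0) < self.cold_threshold]


def age_batch(hot_store, cold_store, tracker):
    """Move every cold candidate from the in-memory store to the cold
    backing store in a single batch, then reset the window statistics."""
    for tid in tracker.cold_candidates(list(hot_store)):
        cold_store[tid] = hot_store.pop(tid)
    tracker.counts.clear()
    tracker.window_start = time.monotonic()
```

A batch pass like `age_batch` amortizes the cost of movement to cold storage instead of evicting tuple by tuple; a real system would also consult application-specific semantic hints before demoting a tuple.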

Advisor: Michael Franklin


BibTeX citation:

@mastersthesis{Lanham:EECS-2014-46,
    Author= {Lanham, Nick},
    Editor= {Kraska, Tim and Franklin, Michael},
    Title= {Methuselah: Intelligent Data Aging},
    School= {EECS Department, University of California, Berkeley},
    Year= {2014},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-46.html},
    Number= {UCB/EECS-2014-46},
    Abstract= {<p>Recent years have seen an ever-widening gulf develop between access times for data stored in memory and data stored on disk. Concurrently, growth in main-memory sizes has led to large gains in the popularity of database systems that keep their working sets primarily in memory. These systems assume either that all data is always in memory, or that access to disk, managed by a standard buffer pool, will suffice.</p>

<p>However, with data sizes growing steadily, and more quickly than available main memory, it is clear that all in-memory systems will need some way to move data to a cold backing store.</p>

<p>This paper proposes a new online, statistics-based, batch-oriented technique that allows an RDBMS to leverage cold storage to increase data capacity without unduly impacting query performance. Our solution couples well with semantic knowledge about an application, making it easy to take advantage of application-specific access patterns. We develop a number of techniques for efficient statistics gathering and management, for moving data to cold storage, and for querying data in cold storage. We show that this approach fits well into the main-memory model, and that it performs well on industry-standard benchmarks as well as on an Enterprise Resource Planning benchmark we have developed.</p>},
}

EndNote citation:

%0 Thesis
%A Lanham, Nick 
%E Kraska, Tim 
%E Franklin, Michael 
%T Methuselah: Intelligent Data Aging
%I EECS Department, University of California, Berkeley
%D 2014
%8 May 1
%@ UCB/EECS-2014-46
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-46.html
%F Lanham:EECS-2014-46