SRAM Leakage-Power Optimization Framework: a System Level Approach

Animesh Kumar

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2008-182

December 19, 2008

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-182.pdf

SRAM leakage-power is a significant fraction of the total power consumption on a chip. Various system-level techniques have been proposed to reduce this leakage-power by reducing (scaling) the supply voltage. SRAM supply voltage scaling reduces the leakage-power, but it increases the stored-data failure rate due to well-known failure mechanisms such as soft-errors.

This work studies SRAM leakage-power reduction using system-level design techniques, under a data-reliability constraint. A statistical (probabilistic) setup is used to model failure mechanisms like soft-errors and process-variations, and error-probability is used as the metric for reliability. Error models that combine various SRAM cell failure mechanisms are developed. In this probabilistic setup, the bit-error probability increases as the supply voltage is reduced, but it can be compensated for by suitable choices of error-correction code and data-refresh (scrubbing) rate. The trade-offs between leakage-power, supply voltage reduction, data-refresh rate, error-correction code, and decoding error probability are studied. The leakage-power -- including redundancy overhead, coding power, and data-refresh power -- is set as the cost-function, and an error-probability target is set as the constraint. The cost-function is minimized subject to the constraint over the choices of data-refresh rate, error-correction code, and supply voltage. Simulation results and circuit-level leakage-power reduction estimates obtained with this optimization procedure are presented.
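To make the structure of this optimization concrete, the following Python sketch illustrates the kind of constrained search the framework performs. It is a minimal illustration under assumed models: the leakage function leak(), the failure-rate function rate(), the refresh-power term, and the candidate code list are hypothetical stand-ins, not the report's calibrated circuit models or parameters.

from math import comb, exp

TARGET = 1e-9                                   # decoding error-probability constraint (assumed)
CODES = [(1, 1, 1), (31, 26, 3), (63, 57, 3)]   # [n, k, d] candidates: uncoded plus Hamming codes

def leak(v):
    # Assumed per-cell leakage-power model, increasing in supply voltage v.
    return v * exp(4.0 * (v - 1.0))

def rate(v):
    # Assumed per-bit failure rate (soft-errors etc.), rising as v scales down.
    return 1e-6 * exp(8.0 * (1.0 - v))

def word_error(n, d, p):
    # A distance-d code fails when more than t = (d-1)//2 of its n bits flip.
    t = (d - 1) // 2
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

best = None
for n, k, d in CODES:
    for v in [x / 100 for x in range(30, 101, 5)]:     # supply-voltage grid
        for T in [10.0**e for e in range(-3, 4)]:      # data-refresh period grid
            p = 1 - exp(-rate(v) * T)                  # bit-error probability per refresh interval
            if word_error(n, d, p) > TARGET:
                continue                               # reliability constraint violated
            # Cost: leakage-power per useful bit (n/k redundancy overhead)
            # plus an assumed refresh-power term proportional to 1/T.
            cost = leak(v) * n / k + 1e-4 / T
            if best is None or cost < best[0]:
                best = (cost, v, T, (n, k, d))

print('minimum leakage-power per useful bit:', best)

The brute-force grid search mirrors the trade-off described above: error-correction coding lets the supply voltage drop further, and more frequent data-refresh bounds the bit-error probability accumulated per interval, at the cost of redundancy and refresh power.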

Experimental results are presented for the special case of low duty-cycle applications like sensor nodes, where retention of stored data at the lowest possible leakage-power is the only target. Each SRAM cell has a threshold parameter called the data-retention voltage (DRV), above which the stored bit can be retained reliably. The DRV exhibits systematic and random variation due to process technology. Using the proposed optimization method, the retention supply voltage is selected to minimize the leakage-power per useful bit. A fundamental lower bound on the leakage-power per bit, which takes the DRV distribution into account, is established. For experimentally observed DRV-distributions from custom-built SRAM chips, a [31, 26, 3] Hamming code based retention scheme achieves a significant fraction of the leakage-power reduction predicted by this fundamental limit. These results are verified on twenty-four experimental chips manufactured in an industrial 90nm CMOS process.
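As a rough illustration of such a retention scheme, the sketch below picks the lowest supply voltage at which a [31, 26, 3] Hamming block, which corrects one error, still meets a block failure-probability target, and reports the resulting leakage-power per useful bit. The Gaussian DRV parameters, the target, and the leakage model are assumptions for illustration, not the measured 90nm data from the report.

from math import comb, erf, exp, sqrt

N, K, T_CORR = 31, 26, 1          # [31, 26, 3] Hamming code corrects one error per block
DRV_MU, DRV_SIGMA = 0.20, 0.03    # assumed Gaussian DRV mean/spread, in volts
TARGET = 1e-6                     # block failure-probability target (assumed)

def p_cell_fail(v):
    # Probability that a cell's DRV exceeds the supply v (Gaussian tail, assumed).
    return 0.5 * (1 - erf((v - DRV_MU) / (DRV_SIGMA * sqrt(2))))

def p_block_fail(v):
    # The block fails when more than one of its 31 cells has DRV above v.
    p = p_cell_fail(v)
    return 1 - sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(T_CORR + 1))

def leak(v):
    # Assumed per-cell leakage-power model, increasing in v.
    return v * exp(4.0 * (v - 1.0))

# leak() is increasing in v, so the smallest voltage that meets the
# reliability target minimizes the leakage-power per useful bit.
for v in [x / 1000 for x in range(150, 500)]:
    if p_block_fail(v) <= TARGET:
        print('retention voltage %.3f V, leakage-power per useful bit %.4g' % (v, leak(v) * N / K))
        break

The n/k = 31/26 factor accounts for the redundancy overhead: the code trades a roughly 19% increase in stored bits for a supply voltage well below the worst-case DRV, which is where the net leakage-power saving comes from.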

Advisor: Kannan Ramchandran


BibTeX citation:

@phdthesis{Kumar:EECS-2008-182,
    Author= {Kumar, Animesh},
    Title= {SRAM Leakage-Power Optimization Framework: a System Level Approach},
    School= {EECS Department, University of California, Berkeley},
    Year= {2008},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-182.html},
    Number= {UCB/EECS-2008-182},
    Abstract= {SRAM leakage-power is a significant fraction of the total power consumption on a chip. Various system-level techniques have been proposed to reduce this leakage-power by reducing (scaling) the supply voltage. SRAM supply voltage scaling reduces the leakage-power, but it increases the stored-data failure rate due to well-known failure mechanisms such as soft-errors.

This work studies SRAM leakage-power reduction using system-level design techniques, under a data-reliability constraint. A statistical (probabilistic) setup is used to model failure mechanisms like soft-errors and process-variations, and error-probability is used as the metric for reliability. Error models that combine various SRAM cell failure mechanisms are developed. In this probabilistic setup, the bit-error probability increases as the supply voltage is reduced, but it can be compensated for by suitable choices of error-correction code and data-refresh (scrubbing) rate. The trade-offs between leakage-power, supply voltage reduction, data-refresh rate, error-correction code, and decoding error probability are studied. The leakage-power -- including redundancy overhead, coding power, and data-refresh power -- is set as the cost-function, and an error-probability target is set as the constraint. The cost-function is minimized subject to the constraint over the choices of data-refresh rate, error-correction code, and supply voltage. Simulation results and circuit-level leakage-power reduction estimates obtained with this optimization procedure are presented.

Experimental results are presented for the special case of low duty-cycle applications like sensor nodes, where retention of stored data at the lowest possible leakage-power is the only target. Each SRAM cell has a threshold parameter called the data-retention voltage (DRV), above which the stored bit can be retained reliably. The DRV exhibits systematic and random variation due to process technology. Using the proposed optimization method, the retention supply voltage is selected to minimize the leakage-power per useful bit. A fundamental lower bound on the leakage-power per bit, which takes the DRV distribution into account, is established. For experimentally observed DRV-distributions from custom-built SRAM chips, a [31, 26, 3] Hamming code based retention scheme achieves a significant fraction of the leakage-power reduction predicted by this fundamental limit. These results are verified on twenty-four experimental chips manufactured in an industrial 90nm CMOS process.},
}

EndNote citation:

%0 Thesis
%A Kumar, Animesh 
%T SRAM Leakage-Power Optimization Framework: a System Level Approach
%I EECS Department, University of California, Berkeley
%D 2008
%8 December 19
%@ UCB/EECS-2008-182
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-182.html
%F Kumar:EECS-2008-182