Electrical Engineering and Computer Sciences

COLLEGE OF ENGINEERING

UC Berkeley

Depth of Field Postprocessing For Layered Scenes Using Constant-Time Rectangle Spreading

Todd Jerome Kosloff, Michael Tao and Brian A. Barsky

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2008-187
December 30, 2008

http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-187.pdf

Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, everything appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts and limitations in the type of blur. In this paper, we present a new image filter, based on rectangle spreading, that runs in constant time per pixel. When used in a layered depth of field framework, it eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the point spread function.
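The constant-time property comes from spreading each pixel as a rectangle without writing every pixel it covers: mark the rectangle's four corners in an accumulation buffer with signed values, then recover the filled rectangles with a single 2D prefix sum (the inverse of a summed-area-table query). The sketch below is an illustrative reading of that idea, not the report's implementation; the function name, the per-pixel square footprint, and the normalization buffer are assumptions for the example.

```python
import numpy as np

def rectangle_spread(image, radii):
    """Illustrative constant-time-per-pixel rectangle spreading (a sketch,
    not the report's code). Each pixel spreads its intensity over a square
    of half-width radii[y, x]. Instead of writing the full rectangle
    (O(r^2) work per pixel), we mark its four corners in an accumulation
    buffer; a single 2D prefix sum then turns the marks into filled
    rectangles. A parallel normalization buffer records total weight so
    overlapping rectangles average correctly."""
    h, w = image.shape
    acc = np.zeros((h + 1, w + 1))   # spread intensities (corner marks)
    norm = np.zeros((h + 1, w + 1))  # spread weights, same corner scheme
    for y in range(h):
        for x in range(w):
            r = int(radii[y, x])
            # Rectangle clipped to the image bounds.
            x0, y0 = max(x - r, 0), max(y - r, 0)
            x1, y1 = min(x + r + 1, w), min(y + r + 1, h)
            area = (x1 - x0) * (y1 - y0)
            v = image[y, x] / area
            # Four signed corner marks; prefix summing reconstructs a
            # constant value v over [y0, y1) x [x0, x1).
            acc[y0, x0] += v; acc[y0, x1] -= v
            acc[y1, x0] -= v; acc[y1, x1] += v
            wgt = 1.0 / area
            norm[y0, x0] += wgt; norm[y0, x1] -= wgt
            norm[y1, x0] -= wgt; norm[y1, x1] += wgt
    # One integration pass recovers the summed rectangles.
    acc = np.cumsum(np.cumsum(acc, axis=0), axis=1)[:h, :w]
    norm = np.cumsum(np.cumsum(norm, axis=0), axis=1)[:h, :w]
    return acc / np.maximum(norm, 1e-12)
```

With all radii zero each pixel spreads only onto itself and the filter is the identity; larger radii produce a box-shaped (rectangular) point spread function per pixel at the same cost, which is what makes per-pixel blur sizes cheap.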


BibTeX citation:

@techreport{Kosloff:EECS-2008-187,
    Author = {Kosloff, Todd Jerome and Tao, Michael and Barsky, Brian A.},
    Title = {Depth of Field Postprocessing For Layered Scenes Using Constant-Time Rectangle Spreading},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2008},
    Month = {Dec},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-187.html},
    Number = {UCB/EECS-2008-187},
    Abstract = {Control over what is in focus and what is not in focus in an image is
an important artistic tool. The range of depth in a 3D scene that is
imaged in sufficient focus through an optics system, such as a camera
lens, is called depth of field. Without depth of field, everything
appears completely in sharp focus, leading to an unnatural, overly
crisp appearance. Current techniques for rendering depth of field in
computer graphics are either slow or suffer from artifacts and limitations
in the type of blur. In this paper, we present a new image
filter based on rectangle spreading which is constant time per pixel.
When used in a layered depth of field framework, it eliminates the
intensity leakage and depth discontinuity artifacts that occur in previous
methods. We also present several extensions to our rectangle
spreading method to allow flexibility in the appearance of the blur
through control over the point spread function.}
}

EndNote citation:

%0 Report
%A Kosloff, Todd Jerome
%A Tao, Michael
%A Barsky, Brian A.
%T Depth of Field Postprocessing For Layered Scenes Using Constant-Time Rectangle Spreading
%I EECS Department, University of California, Berkeley
%D 2008
%8 December 30
%@ UCB/EECS-2008-187
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-187.html
%F Kosloff:EECS-2008-187