Michael Wish Tao

U.C. Berkeley, Ph.D. Candidate
Computational Photography and Computer Vision
Adviser: Ravi Ramamoorthi


545 Soda Hall
Berkeley, CA 94720

e-mail: mtao@berkeley.edu
cell: 510-461-2770



2010

Error-Tolerant Image Compositing

European Conference on Computer Vision (ECCV), 2010
Michael W. Tao, Micah K. Johnson, and Sylvain Paris. "Error-Tolerant Image Compositing". In European Conference on Computer Vision (ECCV), 2010.
[official webpage] [paper] [oral presentation]

Abstract

Gradient-domain compositing is an essential tool in computer vision and its applications, e.g., seamless cloning, panorama stitching, shadow removal, scene completion, and reshuffling. While easy to implement, these gradient-domain techniques often generate bleeding artifacts where the composited image regions do not match. One option is to modify the region boundary to minimize such mismatches. However, this option may not always be sufficient or applicable, e.g., the user or algorithm may not allow the selection to be altered. We propose a new approach to gradient-domain compositing that is robust to inaccuracies and prevents color bleeding without changing the boundary location. Our approach improves standard gradient-domain compositing in two ways. First, we define the boundary gradients such that the produced gradient field is nearly integrable. Second, we control the integration process to concentrate residuals where they are less conspicuous. We show that our approach can be formulated as a standard least-squares problem that can be solved with a sparse linear system akin to the classical Poisson equation. We demonstrate results on a variety of scenes. The visual quality and run-time complexity compare favorably to those of other approaches.
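For readers unfamiliar with the baseline this paper improves on: standard gradient-domain compositing solves a discrete Poisson equation as a sparse least-squares system. The Python/SciPy sketch below shows only that baseline; the paper's contributions (nearly integrable boundary gradients and residual-concentrating integration) are not implemented here. The function name and the assumptions (float grayscale images, a mask that does not touch the image border) are illustrative, not from the paper.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_composite(source, target, mask):
    """Baseline gradient-domain compositing: inside the mask the result's
    Laplacian matches the source's; at the boundary it matches the target.
    Assumes float grayscale arrays and a mask away from the image border."""
    h, w = target.shape
    ys, xs = np.nonzero(mask)
    idx = -np.ones((h, w), dtype=int)
    idx[ys, xs] = np.arange(len(ys))  # map masked pixels to unknown indices

    n = len(ys)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        b[k] = 4.0 * source[y, x]  # discrete Laplacian of the source
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] -= source[ny, nx]
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0  # interior neighbor: unknown
            else:
                b[k] += target[ny, nx]    # boundary neighbor: Dirichlet term

    result = target.copy()
    result[ys, xs] = spsolve(A.tocsr(), b)  # one sparse solve per channel
    return result

When the source and target gradients disagree along the boundary, this baseline is exactly where the bleeding artifacts described in the abstract arise, which is what the paper's modified boundary gradients and weighted integration address.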

2009

Depth of Field Postprocessing for Layered Scenes Using Constant-time Rectangle Spreading

Graphics Interface, 2009
Todd J. Kosloff, Michael W. Tao, and Brian A. Barsky. "Depth of Field Postprocessing for Layered Scenes Using Constant-time Rectangle Spreading". In Proceedings of Graphics Interface, 2009.
[official webpage] [paper]

Abstract

Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading which is constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the PSF.
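The constant-time trick can be sketched as follows: rather than splatting each pixel's box PSF directly (cost proportional to the PSF area), write signed marks at the four corners of each blur rectangle into an accumulation buffer, then recover all rectangles at once with a single 2D prefix-sum pass. The Python sketch below assumes a single grayscale layer and a per-pixel integer blur radius; the function name and the simple weight normalization are illustrative, and the paper's layered framework and PSF-shaping extensions are omitted.

import numpy as np

def rectangle_spread(image, radius):
    """Blur `image` with a per-pixel box PSF of half-width `radius[y, x]`
    using O(1) splatting work per pixel plus one prefix-sum pass."""
    h, w = image.shape
    acc = np.zeros((h + 1, w + 1))     # signed corner marks for intensity
    weight = np.zeros((h + 1, w + 1))  # corner marks for normalization
    for y in range(h):
        for x in range(w):
            r = int(radius[y, x])
            y0, y1 = max(y - r, 0), min(y + r + 1, h)
            x0, x1 = max(x - r, 0), min(x + r + 1, w)
            area = (y1 - y0) * (x1 - x0)
            # Energy-conserving PSF: each pixel spreads its intensity
            # evenly over its rectangle via four corner marks.
            for buf, val in ((acc, image[y, x] / area), (weight, 1.0 / area)):
                buf[y0, x0] += val
                buf[y0, x1] -= val
                buf[y1, x0] -= val
                buf[y1, x1] += val
    # One summed-area (2D prefix-sum) pass turns corner marks into
    # filled rectangles, realizing every PSF simultaneously.
    blurred = np.cumsum(np.cumsum(acc, axis=0), axis=1)[:h, :w]
    norm = np.cumsum(np.cumsum(weight, axis=0), axis=1)[:h, :w]
    return blurred / np.maximum(norm, 1e-8)

Because the per-pixel cost is four buffer writes regardless of the blur radius, the running time is independent of how large the PSFs are, which is what makes the filter constant time per pixel.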


© 2010 Michael Tao. All Rights Reserved.