Hole Filling in Images

Siddharth Jain
(Professor Avideh Zakhor)

At the Video and Image Processing Laboratory (VIP Lab), we are engaged in automated 3D model generation for urban environments [1]. Ground-based modeling involves a setup of two 2D laser scanners and a digital camera mounted on top of a truck. As we drive the truck through a city, the laser scans give us depth information via the LIDAR time-of-flight principle. These laser scans are then subjected to accurate localization and 3D data processing algorithms to create a 3D mesh of the urban environment. The resulting mesh is then texture mapped with camera images to produce photo-realistic models.

However, objects such as trees, cars, lampposts, etc., occlude parts of the buildings from the laser scanners and the digital camera, and thus leave holes both in the geometry (mesh) and the texture (camera images). For a 3D model, the user should be able to view the building facade that was hidden behind a tree or some other obstacle. Hole filling in geometry is discussed in [1,2].

We present a simple and efficient method for hole filling in images. Given an image with regions of unknown RGB values (holes), our task is to determine those values (fill the holes) as sensibly as we can from the information available in the rest of the image. We use our method to fill holes in the texture atlases generated during automated 3D modeling of urban environments. Hole filling can also be used for other applications such as restoring old and damaged photographs, removing objects from images, and special effects.

We first fill in regions of low frequency with a pass of 1D horizontal interpolation: for each row, we interpolate the missing columns (corresponding to the holes) if they lie in a region of low frequency. This is followed by a pass of 1D vertical interpolation. We then employ a copy-paste method based on the texture synthesis idea in [3], illustrated in Figure 1: we take a window around the hole, find a matching region in the image, and fill the hole by copying the matching region and pasting it over the hole.
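The horizontal interpolation pass can be sketched as follows. This is a single-channel illustration of the idea, not the authors' implementation; the `max_gradient` threshold used as the low-frequency test is an assumed parameter, and the vertical pass is the same procedure applied to the transposed image.

```python
import numpy as np

def fill_low_frequency_rows(img, mask, max_gradient=10.0):
    """One pass of 1D horizontal interpolation (illustrative sketch).

    img  : 2D float array of intensities (one channel for simplicity).
    mask : 2D bool array, True where the pixel is a hole.

    A run of hole pixels in a row is filled by linear interpolation only
    when the known pixels flanking it differ by less than `max_gradient`
    (an assumed test for "the run lies in a low-frequency region").
    Returns the partially filled image and the mask of remaining holes.
    """
    out = img.copy()
    remaining = mask.copy()
    h, w = img.shape
    for y in range(h):
        x = 0
        while x < w:
            if not mask[y, x]:
                x += 1
                continue
            start = x                      # first hole column of this run
            while x < w and mask[y, x]:
                x += 1
            end = x                        # first known column after the run
            if start == 0 or end == w:
                continue                   # run touches the border; skip
            left, right = out[y, start - 1], out[y, end]
            if abs(right - left) < max_gradient:   # low-frequency test
                t = (np.arange(start, end) - (start - 1)) / (end - start + 1)
                out[y, start:end] = left + t * (right - left)
                remaining[y, start:end] = False
    return out, remaining
```

A vertical pass then handles runs that this pass skipped, e.g. by calling the same function on `out.T` and `remaining.T`.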

The approach is found to work well on most images and does not suffer from the limitations of local inpainting in traditional hole-filling schemes [4]. Figure 2 shows part of a texture atlas with holes marked in red. Figure 3 shows the hole-filled image.
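The copy-paste step described above can be sketched as a window search. The abstract does not specify the matching criterion, so this sketch assumes sum-of-squared-differences over the window's known pixels, with an assumed window margin `win` and search `stride`; it operates on a single channel for brevity.

```python
import numpy as np

def copy_paste_fill(img, mask, win=15, stride=4):
    """Illustrative sketch of the copy-paste step (not the authors' code).

    Takes the window bounding the hole (padded by roughly win/2), scans the
    image for a fully known region whose pixels best match the window's
    known pixels (SSD), and copies that region's pixels into the hole.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return out
    # window around the hole, clipped to the image bounds
    y0 = max(ys.min() - win // 2, 0); y1 = min(ys.max() + win // 2 + 1, h)
    x0 = max(xs.min() - win // 2, 0); x1 = min(xs.max() + win // 2 + 1, w)
    patch = out[y0:y1, x0:x1]              # view into `out`
    hole = mask[y0:y1, x0:x1]
    known = ~hole
    ph, pw = patch.shape
    best, best_yx = np.inf, None
    for yy in range(0, h - ph + 1, stride):
        for xx in range(0, w - pw + 1, stride):
            if mask[yy:yy + ph, xx:xx + pw].any():
                continue                   # candidate must be fully known
            cand = out[yy:yy + ph, xx:xx + pw]
            ssd = ((cand - patch)[known] ** 2).sum()
            if ssd < best:
                best, best_yx = ssd, (yy, xx)
    if best_yx is not None:
        yy, xx = best_yx
        src = out[yy:yy + ph, xx:xx + pw]
        patch[hole] = src[hole]            # paste over the hole only
    return out
```

Copying only the hole pixels (rather than the whole window) keeps the known pixels around the hole untouched; on repetitive facade textures the best-matching window is usually another period of the same pattern.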


Figure 1: Illustrating the copy-paste method

Figure 2: Part of a texture atlas with holes

Figure 3: Hole-filled image

[1] C. Frueh, "Automated 3D Model Generation for Urban Environments," PhD thesis.
[2] C. Frueh and A. Zakhor, "Data Processing Algorithms for Generating Textured Facade Meshes from Laser Scans and Camera Images," Int. Symp. 3D Processing, Visualization, and Transmission, Padua, Italy, June 2002.
[3] A. Efros and W. Freeman, "Image Quilting for Texture Synthesis and Transfer," Proc. SIGGRAPH, Los Angeles, CA, August 2001.
[4] G. Sapiro, M. Bertalmio, V. Caselles, and C. Ballester, "Image Inpainting," Proc. SIGGRAPH, New Orleans, LA, July 2000.

More information: http://www.eecs.berkeley.edu/~morpheus

Send mail to the author: morpheus@eecs.berkeley.edu
