Fast, Automated 3D Model Reconstruction for Urban Environments

Christian Frueh1
(Professor Avideh Zakhor)
(ARO) DAAD19-00-1-0352

The focus of this project is the fast reconstruction of photo-realistic 3D city models, in order to facilitate interactive walk-, drive-, and fly-throughs. The modeling effort occurs at two levels: (1) ground-based modeling of building facades using 2D laser scanners and a video camera, and (2) airborne modeling using airborne laser scans and aerial images. We are currently working on merging these two models into a single one.

For ground-based modeling, we have built an acquisition system that captures 3D information from building facades. Our experimental setup consists of a truck equipped with a color camera and two fast, inexpensive 2D laser scanners. One scanner is mounted vertically to scan the building facades; the other is mounted horizontally and captures 180° scans while the truck travels on city streets under normal traffic conditions. The horizontal scans are used to estimate the vehicle's motion via scan matching, and the relative motion estimates are concatenated into an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is corrected using probabilistic Monte Carlo localization. Specifically, the final global pose is obtained by matching the ground-based horizontal laser scans to an aerial photograph or a digital surface map (DSM) serving as a global map. Figure 1 shows the reconstructed acquisition path superimposed on a digital surface map of Berkeley.
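The path integration step above can be sketched as follows: each scan-matching result is a relative motion estimate in the vehicle's previous frame, and chaining these estimates by 2D pose composition yields the initial path. This is a minimal illustration of the idea, not the project's actual code; function names and the pose convention (x, y, heading) are assumptions.

```python
import math

def compose(pose, delta):
    """Compose a global pose (x, y, theta) with a relative motion
    estimate (dx, dy, dtheta) expressed in the previous pose's frame.
    Illustrative sketch; not the project's actual implementation."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def concatenate(deltas, start=(0.0, 0.0, 0.0)):
    """Chain relative scan-matching estimates into an initial path.
    Small per-step errors accumulate as drift, which is why the path
    is later corrected against a global map (aerial photo or DSM)."""
    path = [start]
    for d in deltas:
        path.append(compose(path[-1], d))
    return path

# Example: four unit steps, each followed by a 90-degree turn,
# bring the vehicle back to its starting position.
path = concatenate([(1.0, 0.0, math.pi / 2)] * 4)
```

Because each step's error propagates into all later poses, even a small rotational bias bends the whole trajectory, motivating the global Monte Carlo correction described above.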

We have developed a set of completely automated data processing algorithms to handle the large data size and to cope with imperfections and non-idealities inherent in laser scanning, such as occlusions and reflections from glass surfaces. Dominant building structures are detected, and points are classified into a foreground layer (trees, cars, etc.) and a background layer (building facades). Large holes in the facades, caused by occlusion from foreground objects, are filled in by adaptive interpolation techniques; further processing removes isolated points and fills remaining small holes. The processed scans are triangulated and texture-mapped using the camera images. Applying our technique, we have reconstructed photo-realistic, texture-mapped 3D facade models of five downtown Berkeley city blocks, as shown in Figures 2 and 3. For this highly detailed model, the acquisition time was 11 minutes, and the total computation time for the completely automated reconstruction was 151 minutes.

For airborne modeling, airborne laser scans are acquired and resampled to obtain a digital surface map (DSM) containing roof and terrain shape complementary to the ground-based facades. The DSM is processed to sharpen edges and remove erroneous points, then triangulated and texture-mapped with an aerial image to obtain an airborne surface mesh, as shown in Figure 4 for the east Berkeley campus. Finally, the facade models are merged with the DSM by removing redundant parts and filling gaps. The result is a 3D city model usable for both walk- and fly-throughs, as shown in Figure 5.
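The hole-filling idea can be illustrated with a toy example: within one row of facade depth samples, short runs of missing values (occluded points) are bridged by linear interpolation between their valid neighbors, while long gaps are left for later processing. This is a simplified sketch under assumed conventions (missing samples encoded as None, a fixed gap threshold), not the adaptive interpolation actually used in the project.

```python
def fill_holes(row, max_gap=8):
    """Fill short runs of missing depth values (None) in a scan row by
    linear interpolation between the bounding valid samples. Runs longer
    than max_gap, or touching the row border, are left open.
    Illustrative sketch only; the project's method is adaptive."""
    out = list(row)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1                      # find end of the gap
            gap = j - i
            if 0 < i and j < n and gap <= max_gap:
                a, b = out[i - 1], out[j]   # bounding valid depths
                for k in range(gap):
                    out[i + k] = a + (b - a) * (k + 1) / (gap + 1)
            i = j
        else:
            i += 1
    return out

# Example: a two-sample occlusion between depths 1.0 and 4.0
# is bridged linearly.
filled = fill_holes([1.0, None, None, 4.0])
```

In the real system, the interpolation adapts to local structure rather than always interpolating linearly, so that filled regions remain consistent with the surrounding facade geometry.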


Figure 1: Reconstructed acquisition path in Berkeley

Figure 2: Downtown Berkeley facade models

Figure 3: Downtown Berkeley facade models

Figure 4: East Berkeley campus

Figure 5: Merged model as seen from a bird's eye view

1Postdoctoral Researcher

More information: http://www-video.eecs.berkeley.edu/~frueh

Send mail to the author: frueh@eecs.berkeley.edu
