Fast, Photorealistic, Automated, 3D Modeling of Cities Using Ground and Airborne Sensors
Matthew Carlberg, James Andrews and Avideh Zakhor
Three-dimensional modeling of objects, scenes, and urban environments, consisting of the geometry and texture of visible surfaces, is useful in a variety of applications. In particular, 3D modeling of urban environments is applicable to urban planning, training and simulation for disaster scenarios, virtual heritage conservation, and combating urban terrorism. Over the past three years, the Video and Image Processing group at UC Berkeley has focused on developing techniques for automated 3D model generation of urban environments so as to enable virtual, yet photorealistic, walk-throughs, drive-throughs, and fly-throughs.
To this end, we have developed two sets of modeling techniques: ground-based and airborne. Our ground-based method uses a vehicle equipped with 2D laser scanners and a digital camera to acquire data, to be processed offline, while driving on public roads under normal traffic conditions. Unlike previous approaches to urban modeling, this approach acquires data in a continuous "drive-by scanning" fashion rather than a "stop-and-go" one. Associated with the ground-based data set is a set of algorithms we have developed to process the data and reconstruct the model in a fast, automated way; at the heart of these algorithms are Monte Carlo localization schemes that determine the position of our acquisition vehicle fairly accurately over long driving distances. Our airborne model is constructed from airborne laser data acquired by an airplane flying over the region of interest, together with aerial images taken from a helicopter at oblique angles. We have developed merging algorithms to combine the ground-based and airborne models into one fused model, which can then be used for virtual walk-throughs, drive-throughs, and fly-throughs.
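The Monte Carlo localization idea can be illustrated with a minimal particle filter. The sketch below is an illustrative toy, not our actual implementation: it tracks a 2D vehicle position using odometry controls and range measurements to a single known landmark (a stand-in for full laser-scan matching), and all noise parameters and the landmark position are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_localization(particles, control, measurement, landmark,
                             motion_noise=0.1, meas_noise=0.5):
    """One predict/update/resample cycle of a particle filter.
    particles: (N, 2) array of hypothesized 2D vehicle positions."""
    n = len(particles)
    # Predict: apply the odometry control with additive Gaussian noise.
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight each particle by the likelihood of the observed range
    # to a known landmark (a toy stand-in for matching a full laser scan).
    predicted_range = np.linalg.norm(particles - landmark, axis=1)
    weights = np.exp(-0.5 * ((predicted_range - measurement) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]

# usage: the vehicle drives east in 0.5 m steps toward a landmark at (5, 0)
particles = rng.normal([0.0, 0.0], 1.0, size=(500, 2))
landmark = np.array([5.0, 0.0])
true_pos = np.array([0.0, 0.0])
for _ in range(10):
    control = np.array([0.5, 0.0])
    true_pos = true_pos + control
    measurement = np.linalg.norm(true_pos - landmark)
    particles = monte_carlo_localization(particles, control, measurement, landmark)
estimate = particles.mean(axis=0)
```

Repeating the predict/update/resample cycle over the drive is what lets the particle cloud stay locked onto the vehicle pose over long distances, even though any single range measurement is ambiguous.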
In this project, we will leverage our expertise in 3D modeling of urban and non-urban terrain to develop a new class of computationally scalable, multi-resolution techniques for urban environments. Specifically, we will take a two-step approach to the 3D terrain modeling problem: in the first step, we analyze, segment, and classify the 3D point cloud data into various "components"; in the second step, we model each component separately using the representation best suited to it. This approach, which we refer to as "Hybrid," has many desirable properties. The segmentation can divide the data into geometrically and semantically meaningful components, and cutting along the sharp discontinuities common in urban environments provides a natural way to handle them. Smooth and rapidly varying regions can be separated into different pieces, allowing us to model each with appropriate tools and/or model parameters. Objects with non-genus-0 topologies can be handled separately from 2D height fields, resulting in a more accurate yet compact representation. Finally, the two-step procedure can be iterated to yield progressively more accurate models.
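As an illustration of the two-step Hybrid idea, the sketch below segments a toy point cloud into a smooth near-ground component and protruding structures, then represents the terrain compactly as a 2D height field while keeping the structures as explicit geometry. The grid cell size, height threshold, and minimum-height segmentation rule are assumptions chosen for this example, not the project's actual algorithms.

```python
import numpy as np

def segment_and_model(points, cell=1.0, height_thresh=0.3):
    """Illustrative two-step 'Hybrid' pipeline:
    step 1 segments points into near-ground vs protruding components;
    step 2 models the ground as a compact 2D height field and keeps
    protruding structures as an explicit point set."""
    # Step 1: segmentation. Bin points into a horizontal grid and take the
    # minimum z per cell as a crude local ground estimate.
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                      # shift grid indices to start at 0
    shape = ij.max(axis=0) + 1
    ground_z = np.full(shape, np.inf)
    np.minimum.at(ground_z, (ij[:, 0], ij[:, 1]), points[:, 2])
    # A point is "ground" if it lies close to its cell's minimum height.
    is_ground = points[:, 2] - ground_z[ij[:, 0], ij[:, 1]] < height_thresh
    # Step 2: per-component representation.
    height_field = ground_z          # compact grid for the smooth terrain
    structures = points[~is_ground]  # explicit geometry for buildings etc.
    return height_field, structures

# usage: a flat 10x10 m patch with one 5 m "building" column
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 10, (2000, 2)), rng.normal(0, 0.05, 2000)])
tower = np.column_stack([rng.uniform(4, 5, (200, 2)), rng.uniform(0.5, 5.0, 200)])
pts = np.vstack([ground, tower])
hf, structs = segment_and_model(pts)
```

The payoff of the split is that each component gets a representation matched to its geometry: the 2000 terrain points collapse into a small grid, while the building points, which a height field would smear out, stay as a separate component to be meshed on their own.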
For the first step of the Hybrid model, which deals with 3D data analysis, we plan to leverage our past work on segmentation via machine learning techniques, slippage analysis, structure discovery such as symmetry detection, and global topological techniques. For the second step, which deals with the representation of individual components, we plan to explore the use of subdivision surfaces, T-NURBs, and the extension of Morse structures to urban terrains. In deriving the algorithms for these steps, we will place special emphasis on topological/geometric multi-scale representations rather than traditional signal-processing-based multi-resolution techniques. We plan to integrate the resulting models within GIS systems and compare them against existing systems such as Janus for visibility computations.
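Subdivision surfaces, one of the candidate representations above, can be sketched with a single step of Loop subdivision on a triangle mesh. This is a textbook illustration rather than our planned implementation, and it assumes a closed (watertight) mesh with no boundary edges, so every edge borders exactly two triangles.

```python
import numpy as np
from collections import defaultdict

def loop_subdivide(verts, faces):
    """One step of Loop subdivision for a closed triangle mesh.
    verts: (V, 3) float array; faces: (F, 3) int array."""
    verts = np.asarray(verts, float)
    # Map each undirected edge to the vertices opposite it in its two triangles.
    edge_opp = defaultdict(list)
    for f in faces:
        for i in range(3):
            a, b = sorted((f[i], f[(i + 1) % 3]))
            edge_opp[(a, b)].append(f[(i + 2) % 3])
    # New edge points: 3/8 of the edge endpoints plus 1/8 of the opposites.
    edge_pt, new_verts = {}, []
    for (a, b), opp in edge_opp.items():
        p = 3/8 * (verts[a] + verts[b]) + 1/8 * (verts[opp[0]] + verts[opp[1]])
        edge_pt[(a, b)] = len(verts) + len(new_verts)
        new_verts.append(p)
    # Reposition original vertices using Loop's valence-dependent beta weight.
    neighbors = defaultdict(set)
    for a, b in edge_opp:
        neighbors[a].add(b); neighbors[b].add(a)
    moved = verts.copy()
    for v, nbrs in neighbors.items():
        n = len(nbrs)
        beta = (1/n) * (5/8 - (3/8 + 1/4 * np.cos(2*np.pi/n))**2)
        moved[v] = (1 - n*beta) * verts[v] + beta * sum(verts[u] for u in nbrs)
    # Each triangle splits into four smaller triangles.
    out_faces = []
    for f in faces:
        e = [edge_pt[tuple(sorted((f[i], f[(i + 1) % 3])))] for i in range(3)]
        out_faces += [[f[0], e[0], e[2]], [f[1], e[1], e[0]],
                      [f[2], e[2], e[1]], [e[0], e[1], e[2]]]
    return np.vstack([moved, new_verts]), np.array(out_faces)

# usage: subdivide a tetrahedron once (4 faces become 16)
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
faces = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
v2, f2 = loop_subdivide(verts, faces)
```

Iterating this step produces progressively smoother surfaces from a coarse control mesh, which is what makes subdivision attractive as a compact, multi-scale representation of smooth components.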