Interactive Visualization of Large, Detailed City Models

Ali Lakhia, Louie Lu, and Chris Frueh
(Professor Avideh Zakhor)
(MURI) DAAD19-00-1-0352

We are developing an interactive rendering engine to visualize large, detailed city models acquired with laser scanners and digital cameras. These models are considerably more complex than those created by hand or acquired via semi-automatic methods, and require significant complexity reduction for interactive visualization. In particular, huge quantities of texture data expose bottlenecks in the graphics pipeline that are not addressed by previous rendering algorithms.

Our rendering engine is based on three separate strategies that enable large and complex 3D models to be rendered interactively: (1) frustum culling, (2) levels of detail (LODs), and (3) data management. We extend LODs into a hierarchical structure and implement an algorithm that traverses the hierarchy to render each frame. Our system adaptively selects the appropriate LODs by dividing a fixed per-frame time budget among them. Lastly, our LODs are managed in memory to avoid swapping: only the coarsest LODs are kept resident, while the remaining LODs are incrementally pre-fetched from disk by a background thread according to a priority heuristic. Experimental results demonstrate the effectiveness of our approach.
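The time-budgeted LOD selection described above can be sketched as a greedy traversal of the LOD hierarchy: starting from the coarsest node, keep refining the node whose refinement yields the greatest benefit, as long as the finer children still fit in the frame's time budget. This is a minimal illustration, not the system's actual implementation; the `cost` and `benefit` fields, the greedy ordering, and all names here are assumptions for the sketch.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Hypothetical node in a hierarchical LOD structure. Each node stores an
// estimated render cost (e.g., milliseconds) and a heuristic benefit of
// refining it into its finer children (e.g., screen-space error reduced).
struct LodNode {
    double cost;                     // estimated time to render this LOD
    double benefit;                  // gain from refining into children
    std::vector<LodNode*> children;  // finer LODs (empty at the finest level)
};

// Greedily refine the most beneficial nodes while the total estimated
// render time stays within the per-frame budget. Returns the "cut" of
// the hierarchy: the set of nodes to render this frame.
std::vector<const LodNode*> selectLods(const LodNode& root, double budgetMs) {
    auto cmp = [](const LodNode* a, const LodNode* b) {
        return a->benefit < b->benefit;  // highest benefit refined first
    };
    std::priority_queue<const LodNode*, std::vector<const LodNode*>,
                        decltype(cmp)> open(cmp);
    open.push(&root);
    double spent = root.cost;  // running cost of the current cut
    std::vector<const LodNode*> cut;

    while (!open.empty()) {
        const LodNode* n = open.top();
        open.pop();
        double childCost = 0;
        for (const LodNode* c : n->children) childCost += c->cost;
        // Refine only if swapping this node for its children fits the budget.
        if (!n->children.empty() && spent - n->cost + childCost <= budgetMs) {
            spent += childCost - n->cost;
            for (LodNode* c : n->children) open.push(c);
        } else {
            cut.push_back(n);  // render this node at its current detail
        }
    }
    return cut;
}
```

With a generous budget the traversal descends to the finer children; with a tight budget it stops at the coarse root, which is the behavior that keeps frame times bounded.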

Figure 1: The highest-level-of-detail texture map of a block in Berkeley, with one in every ten triangles overlaid in blue. Note that the texture is not tiled and the triangles are not simplified or collapsed into larger ones.

Figure 2: The entire city model, as seen through our rendering system, has 27 blocks. Each block consists mostly of the building facades and some of the street.

Figure 3: Close-up of one of the blocks in our city model.

C. Frueh and A. Zakhor, "Fast 3D Model Generation in Urban Environments," Int. Conf. Multisensor Fusion and Integration for Intelligent Systems, Baden-Baden, Germany, August 2001.
