We develop three-dimensional shape contexts as part of an approach to 3D object recognition from point clouds. 3D shape contexts are semi-local descriptions of object shape centered at points on an object's surface, and are a natural extension of the 2D shape contexts introduced by Belongie, Malik, and Puzicha for recognition in 2D images. A 3D shape context is a joint histogram of point density parameterized by radius, azimuth, and elevation. These features are similar in spirit to spin images, which have shown good performance in 3D object recognition tasks; however, spin images are two-dimensional descriptors that sum over the azimuth angle, whereas shape contexts preserve information in all three dimensions.
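The binning described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bin counts, radial extent, and use of logarithmic radial spacing are assumptions chosen for the example.

```python
import numpy as np

def shape_context_3d(points, center, r_min=0.1, r_max=2.0,
                     n_r=5, n_az=12, n_el=6):
    """Joint histogram of neighbor density in (radius, azimuth, elevation)
    bins around `center`. Bin counts and radii are illustrative choices."""
    d = points - center
    r = np.linalg.norm(d, axis=1)
    # keep neighbors inside the support sphere, excluding the center itself
    mask = (r >= r_min) & (r <= r_max)
    d, r = d[mask], r[mask]
    az = np.arctan2(d[:, 1], d[:, 0])            # azimuth in [-pi, pi)
    el = np.arcsin(np.clip(d[:, 2] / r, -1, 1))  # elevation in [-pi/2, pi/2]
    # logarithmically spaced radial bins (an assumed design choice)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    i_r = np.clip(np.searchsorted(r_edges, r, side='right') - 1, 0, n_r - 1)
    i_az = np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    i_el = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)
    hist = np.zeros((n_r, n_az, n_el))
    np.add.at(hist, (i_r, i_az, i_el), 1)
    return hist
```

Each in-range neighbor increments exactly one (radius, azimuth, elevation) bin, so the histogram total equals the number of neighbors in the support sphere.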
To recognize objects, we compute shape contexts at a few randomly chosen points in a query scene and find their nearest neighbors in a stored set of shape contexts computed at sample points on 3D object models. The model with the smallest combined distance is taken as the best match. Because finding nearest neighbors in high dimensions is computationally expensive, we explore clustering and locality-sensitive hashing to speed up the search while maintaining accuracy. Results are shown for both full 3D models and simulated range data.
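The matching step above can be sketched as follows. This is a hedged illustration: brute-force Euclidean search stands in for the clustering and locality-sensitive hashing speedups, and the function and variable names are hypothetical.

```python
import numpy as np

def best_match(query_descs, model_descs):
    """Score each model by summing, over the query descriptors, the distance
    to that model's nearest stored descriptor; return the model with the
    smallest total. `query_descs` is an (n, d) array of flattened descriptors;
    `model_descs` maps model name -> (m, d) array of stored descriptors."""
    scores = {}
    for name, descs in model_descs.items():
        # pairwise Euclidean distances: query descriptors x model descriptors
        dists = np.linalg.norm(query_descs[:, None, :] - descs[None, :, :],
                               axis=2)
        # combined distance = sum of each query point's nearest-neighbor distance
        scores[name] = dists.min(axis=1).sum()
    return min(scores, key=scores.get)
```

In practice the brute-force distance computation is the bottleneck, which is what motivates replacing it with clustered representatives or hash-based approximate search.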
Figure 1: Visualization of the histogram bins of the 3D shape context