With the increasing popularity and availability of devices like personal digital assistants (PDAs), pen computing is becoming more familiar to end users. However, many applications are still designed with moded windows, icons, menus, pointer (WIMP) interfaces that treat the pen as a pointing device rather than using it for what it does best: sketching. We are interested in providing a solid infrastructure and a set of utilities for developing sketch-based user interfaces for a variety of applications.
Sketching is simple and natural, making it especially desirable for conceptual design, whether on an individual basis or in a collaborative environment. By embedding recognition engines in sketch-based programs, the resulting drawings can be interpreted and processed. Various computations can then be applied to recognized sketches, thereby fully leveraging the power of electronic design tools while preserving the ease of sketching.
Currently, we are working on multi-stroke symbol recognition for a class of shapes commonly used in slide creation and diagram editing. Our technique is independent of stroke order, number, and direction, and is invariant to rotation, scaling, translation, and reflection. We take a statistical approach, using local features to capture shape information and the relative position of strokes, thus utilizing both structural and statistical information learned from examples. Furthermore, the recognition system is adaptive: it learns from user corrections. It also provides feedback so that the user can better understand why a misrecognition took place and can adapt their drawing style accordingly.
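To make the invariance properties concrete, the sketch below illustrates one simple way such a descriptor can be built. This is not the actual feature set described above, only a minimal, hypothetical example: pooling the points of all strokes (discarding stroke order and direction), normalizing away translation and scale via the centroid and RMS radius, and then histogramming point distances from the centroid, which is unaffected by rotation and reflection.

```python
import math

def normalize_points(points):
    """Translate to the centroid and scale to unit RMS radius.
    points: a flat list of (x, y) tuples pooled from all strokes,
    so the result is independent of stroke order and direction."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    rms = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                        for x, y in points) / n)
    return [((x - cx) / rms, (y - cy) / rms) for x, y in points]

def radial_histogram(points, bins=8):
    """Rotation- and reflection-invariant descriptor: a normalized
    histogram of point distances from the centroid. After
    normalize_points, these distances are also unchanged by
    translation and uniform scaling of the original symbol."""
    pts = normalize_points(points)
    hist = [0.0] * bins
    for x, y in pts:
        r = math.hypot(x, y)          # distance from centroid
        idx = min(int(r / 2.0 * bins), bins - 1)  # radii lie mostly in [0, 2)
        hist[idx] += 1.0
    total = sum(hist)
    return [h / total for h in hist]
```

A real recognizer would combine many such local features and feed them to a trained classifier; the point here is only that a descriptor computed this way yields the same value for a symbol drawn at any position, size, or orientation.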