Context-Aware Word Prediction

Jeffrey Heer
(Professors Anind Dey and Jennifer Mankoff)
(NSF) IIS-0205644

The fluid, everyday communication of natural language that many of us take for granted eludes many persons with disabilities. People afflicted with conditions such as ALS (most notably Stephen Hawking) must depend on text-entry and speech synthesis systems to communicate. An integral component of such systems is word prediction software. By attempting to guess the speaker's intended word before it is completed, word prediction systems aim to reduce input time and accelerate communication.

Word prediction based on language modeling (e.g., trigram models) has proven quite useful for reducing the number of keystrokes needed by disabled users. We hypothesize, however, that by taking into account the user's context, further improvements in word prediction might be realized. In particular, we propose modeling a conversation as a dynamic topic-driven process, using both linguistic history and sensed context data (such as location and time of day) to infer the most likely topics. Words, in turn, are then predicted by the inferred topics as well as the conversation history. In essence, we hope to capture (in some small part) both the sequential regularities of language and the underlying semantics.
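As a rough illustration of the proposed approach, the sketch below interpolates an n-gram model (a bigram model, for brevity) with a topic-conditioned unigram model, where the topic distribution stands in for beliefs inferred from sensed context. All data, probabilities, and the interpolation weight here are hypothetical toy values, not part of the actual system.

```python
# Toy sketch: context-aware word prediction by interpolating an
# n-gram model with topic-conditioned word probabilities.
# All counts, topics, and weights below are illustrative assumptions.

# Assumed bigram counts from a small conversation history.
bigram_counts = {
    ("want", "to"): 3,
    ("want", "coffee"): 1,
    ("to", "eat"): 2,
    ("to", "go"): 1,
}

# Assumed topic-conditioned word probabilities; in the proposed system,
# topics would be inferred from linguistic history and sensed context
# such as location and time of day.
topic_word_probs = {
    "food": {"eat": 0.5, "coffee": 0.3, "lunch": 0.2},
    "travel": {"go": 0.6, "bus": 0.4},
}

def bigram_prob(prev, word):
    """Maximum-likelihood bigram probability P(word | prev)."""
    total = sum(c for (p, _), c in bigram_counts.items() if p == prev)
    return bigram_counts.get((prev, word), 0) / total if total else 0.0

def predict(prev, topic_dist, candidates, lam=0.6):
    """Rank candidate words by a linear interpolation of the bigram
    model and the topic model, weighted by topic beliefs."""
    scores = {}
    for w in candidates:
        p_topic = sum(topic_dist[t] * topic_word_probs[t].get(w, 0.0)
                      for t in topic_dist)
        scores[w] = lam * bigram_prob(prev, w) + (1 - lam) * p_topic
    return max(scores, key=scores.get)

# Suppose sensed context (e.g., location = cafeteria, time = noon)
# yields a belief that the conversation topic is "food".
topic_dist = {"food": 0.8, "travel": 0.2}
print(predict("to", topic_dist, ["eat", "go", "coffee"]))  # -> eat
```

Here the sequential regularities are captured by the bigram term and the underlying semantics by the topic term; a full system would use richer n-gram models and probabilistic inference over the context data to obtain the topic distribution.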

Our goals are to realize improved models for word prediction and to explore the use of probabilistic reasoning as a tool for modeling and performing inference on sensed context data. While our primary emphasis is on augmented communication, we believe our work will also have relevance to related efforts in context-aware computing, language modeling, and speech recognition.
