Context-aware computing is an effort to use sensed attributes of an environment to provide enriched support for activities. For example, an application might provide relevant services based upon your location or the identity of your companions. As low-level architectural support for context-aware computing matures [1,2], we are ready to explore more general and powerful means of access to context data. Information required by a context-aware application may be spread across a number of different repositories, partitioned by any number of physical, organizational, or privacy boundaries. What is needed is a mechanism for context-aware applications to issue context-based queries without having to explicitly manage the complex storage layout and access policies of the underlying data.
To address this need, we are developing Liquid, a distributed query processing system intended to both simplify and enhance the next generation of context-aware applications. Liquid will allow applications to issue long-standing queries in a simple declarative language and to monitor continuously changing query results. Our system is targeted at supporting two primary features: (1) continuous (persistent) queries sensitive to the dynamic nature of context (e.g., issuer changes location), and (2) queries with approximate results, where result substitutions can be made by exploiting relationships between repositories (e.g., a floor's temperature data is substituted for missing room temperature data). It is our hope that the Liquid system will provide a solid base for building advanced context-aware applications.
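The substitution idea can be illustrated with a small sketch. The repository names, sensor values, and function below are hypothetical, not Liquid's actual interface: a room-temperature query falls back to the enclosing floor's reading when the room's own data is missing, and the result is flagged so an application can distinguish exact answers from approximations.

```python
# Simulated repositories, partitioned as they might be across a building.
room_temps = {"room-410": 21.5}            # room-412's sensor is offline
floor_temps = {"floor-4": 22.0}
room_to_floor = {"room-410": "floor-4", "room-412": "floor-4"}

def query_temperature(room):
    """Return (value, exact?) for a room, substituting floor data if needed."""
    if room in room_temps:
        return room_temps[room], True
    floor = room_to_floor.get(room)
    if floor in floor_temps:
        return floor_temps[floor], False   # approximate substitution
    return None, False

print(query_temperature("room-410"))  # exact room reading
print(query_temperature("room-412"))  # floor value substituted, flagged inexact
```

A continuous query would re-evaluate this lookup as repository contents change, pushing updated results to the issuing application.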
The fluid, everyday communication of natural language that many of us take for granted eludes many persons with disabilities. People afflicted with conditions such as ALS (most notably Stephen Hawking) must depend on text-entry and speech-synthesis systems to communicate. An integral component of such systems is word prediction software. By attempting to guess the speaker's intended word before it is completed, word prediction systems aim to reduce input time and accelerate communication.
Word prediction based on language modeling (e.g., trigram models) has proven quite useful for reducing the number of keystrokes needed by disabled users. We hypothesize, however, that by taking into account the user's context, further improvements in word prediction might be realized. In particular, we propose modeling a conversation as a dynamic topic-driven process, using both linguistic history and sensed context data (such as location and time of day) to infer the most likely topics. Words, in turn, are then predicted by the inferred topics as well as the conversation history. In essence, we hope to capture (in some small part) both the sequential regularities of language and the underlying semantics.
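The topic-driven idea can be sketched as a simple mixture model. The contexts, topics, and probabilities below are purely illustrative, and a real system would interpolate this topic score with an n-gram model over the conversation history: sensed context yields a distribution over topics, and candidate words are scored by mixing per-topic word distributions with it.

```python
# P(topic | sensed context) -- illustrative numbers only.
topic_given_context = {
    "cafeteria": {"food": 0.8, "weather": 0.2},
    "bus-stop":  {"food": 0.1, "weather": 0.9},
}
# P(word | topic) -- illustrative numbers only.
word_given_topic = {
    "food":    {"lunch": 0.5, "menu": 0.4, "rain": 0.1},
    "weather": {"lunch": 0.1, "menu": 0.1, "rain": 0.8},
}

def predict(context, candidates):
    """Rank candidates by sum over topics of P(w | t) * P(t | context)."""
    topics = topic_given_context[context]
    def score(w):
        return sum(p_t * word_given_topic[t].get(w, 0.0)
                   for t, p_t in topics.items())
    return max(candidates, key=score)

print(predict("cafeteria", ["lunch", "rain"]))  # "lunch"
print(predict("bus-stop",  ["lunch", "rain"]))  # "rain"
```

Even this toy version shows the intended effect: the same linguistic history yields different predictions when the sensed context shifts the topic distribution.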
Our goals are to realize improved models for word prediction and to explore the use of probabilistic reasoning as a tool for modeling and performing inference on sensed context data. While our primary emphasis is on augmented communication, we believe our work will also have relevance to related efforts in context-aware computing, language modeling, and speech recognition.
The Healthy Cities Ambient Display project will design a public ambient display, to be installed in a busy plaza, public transportation center, or market, that shows the "health" of the city as characterized by various statistics. Ambient displays are ubiquitous computing devices that give a continuous stream of information in a peripheral, non-obtrusive way. We have interviewed and surveyed a number of Berkeley residents to gain a better understanding of what they think a "healthy" city is. From these responses, we will create a display that monitors the status of information sources relevant to the health of the city and presents this information to the residents of Berkeley.
Ambient display research is a new but burgeoning field studying the design and evaluation of systems that provide non-critical information in the periphery of human attention. Different displays receive input from different sources and render output in a variety of ways. However, because little middleware exists to support ambient display development, developers must write the code that translates input to output from scratch. To correct this problem, we are designing, implementing, and evaluating an ambient display toolkit. We approach this problem by first outlining the design space of ambient displays, classifying several previously built displays by input and output type. We then examine the code used in those displays to identify additional patterns. Next, we develop an architecture and a library of functions to support these patterns. As a final step, we rebuild a few existing ambient displays using the toolkit to evaluate its effectiveness, and then iterate on the toolkit design based on our findings.
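One recurring input-to-output translation pattern might be factored out as follows. The class and function names here are hypothetical, not the toolkit's actual API: a raw sensor value is normalized to the unit interval, and any display that understands normalized intensity can render it.

```python
def normalize(value, lo, hi):
    """Map a raw sensor value into the unit interval, clamped at the ends."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

class TextGlowDisplay:
    """Stand-in for an ambient output, e.g., a lamp's brightness level."""
    def render(self, intensity):
        return "glow level: {:.2f}".format(intensity)

def drive(display, raw_value, lo, hi):
    """Connect any input range to any display via the normalized value."""
    return display.render(normalize(raw_value, lo, hi))

# e.g., bus arrivals per hour (0-60) mapped onto a lamp's brightness
print(drive(TextGlowDisplay(), raw_value=45, lo=0, hi=60))
```

Because input normalization and output rendering are decoupled, a new display or a new sensor can be swapped in without rewriting the translation step.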
We are investigating the role of usability in everyday privacy, which signifies an individual's regular exposure to and control over the disclosure of personal information in ubiquitous computing environments. The near-continuous and sensitive nature of everyday privacy necessitates usable, consistent interaction mechanisms for managing it. Toward that end, we are designing and evaluating a user interface for managing everyday privacy in ubicomp. Our design is based on the notion that the identity of the information recipient is the primary determinant of the quality and quantity of personal information an individual prefers to disclose and, further, that an individual's disclosure preferences regarding a given recipient can vary by situation.
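The recipient-first model can be sketched as a preference lookup with situational overrides. The recipients, situations, and granularity labels below are hypothetical examples, not our interface design: each recipient has a default disclosure level, which particular situations may refine.

```python
# Disclosure preferences keyed primarily by recipient identity,
# with per-situation overrides (all names illustrative).
preferences = {
    "spouse":   {"default": "exact-location"},
    "employer": {"default": "city-only", "working-hours": "exact-location"},
    "stranger": {"default": "undisclosed"},
}

def disclosure(recipient, situation):
    """Return the granularity of personal data disclosed to a recipient."""
    prefs = preferences.get(recipient, {"default": "undisclosed"})
    return prefs.get(situation, prefs["default"])

print(disclosure("employer", "working-hours"))  # "exact-location"
print(disclosure("employer", "weekend"))        # "city-only"
```

The user interface's job, then, is to make this small table of defaults and overrides visible and editable without demanding constant attention.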
Ambient displays are a new type of pervasive computing device that presents non-critical information in the periphery of a user's attention. These devices are useful because they do not demand attention, so a person can be aware of more information without being overburdened by it. Getting information from an ambient display requires little thought, allowing people to focus on other tasks. The very characteristics that make ambient displays a useful interface innovation also make them difficult to evaluate: traditional evaluation techniques used in human-computer interaction do not apply well to ambient displays. Our goal is to assess the pros and cons of different techniques for testing the effectiveness of an ambient display, and then to determine the best techniques for evaluating these displays by conducting evaluation studies. We begin with a literature survey and analysis of the available evaluation techniques. In parallel, we will design an ambient display that addresses the needs of people who must continuously monitor many sources of information; one example is a display that lets restaurant servers see the status of food preparation by glancing at a visualization of remaining preparation times. With an ambient display and knowledge of evaluation methods in hand, we will select one or more techniques to use in a summative study of the display. The design and results of the study will guide further research on the evaluation of ambient displays and improve our ability to design effective displays.
In 1997 there were 227,000 deaf people in the US who could not rely on hearing to stay aware of sound. Instead, they use alternative techniques, such as sensing vibrations or watching flashing lights, to substitute for aural awareness in the workplace. However, a gap remains between the experience of a hearing individual and that of a deaf person. Our work describes the design and evaluation of a peripheral display that provides the deaf with awareness of sound in an office environment, helping to close that gap. Conceptual drawings of sound by hearing participants, exploration with paper prototypes, interviews, and surveys formed the basis for our current design.
We implemented the two prototypes shown in Figures 1 and 2. One is based on a spectrograph, a tool commonly used by speech therapists that represents the pitch and intensity of sound over time. The other depicts position and amplitude over time. We evaluated them in a dual-task experiment with eight deaf participants and found that, with both systems, participants could peripherally identify notification sounds such as a door knock or a telephone ring while performing a visual primary task. Participants had significantly higher identification rates with the visualization that represented position. Neither visualization caused significant distraction as measured by performance on the primary task. This work has been received with much enthusiasm by members of the deaf community and may ultimately result in a system that better supports sound awareness for the deaf in situations of fixed visual focus.
Figure 1: A cellular phone ring as represented by our spectrograph visualization. In this visualization, height is mapped to frequency and color to intensity (blue = quiet; red = loud). The temporal aspect is depicted by animating the visualization from right to left. A cellular phone ring is recognizable by its regular frequency and amplitude pattern, typical of mechanical sounds.
Figure 2: A cellular phone ring as represented by our ripples visualization. A top view map of the room appears in white. The rings denote the position of a sound source in a room. The size and color of rings indicate the amplitude of the sound. Frequency does not appear in this visualization. A user can infer a sound source from its location. In this case, the participant was told the phone was on the desk. Thus, a sound coming from the desk would probably be the phone.
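The spectrograph mapping described in Figure 1 can be sketched in code. The frequency range, grid size, and color thresholds below are illustrative choices, not the values used in our prototype: each frame of (frequency, intensity) estimates becomes one column of colored cells, with frequency mapped to height and intensity mapped to a quiet-blue-to-loud-red color; successive columns scroll right to left.

```python
def color_for(intensity):
    """Map a 0..1 intensity to a color name (blue = quiet, red = loud)."""
    if intensity < 0.33:
        return "blue"
    if intensity < 0.66:
        return "purple"
    return "red"

def column(frame, max_freq_hz=4000, rows=8):
    """Turn (freq_hz, intensity) pairs into a column of (row, color) cells."""
    cells = []
    for freq, intensity in frame:
        row = min(rows - 1, int(freq / max_freq_hz * rows))
        cells.append((row, color_for(intensity)))
    return cells

# A ring tone: strong, regular energy around 1 kHz and 2 kHz
print(column([(1000, 0.9), (2000, 0.7)]))
```

The regularity a viewer learns to recognize (e.g., a phone ring's repeating pattern) emerges from the sequence of such columns over time.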
Accessibility of technology for persons with disabilities is a significant challenge facing design engineers. Disabled users may have vision, speech, motor, or cognitive impairments that require special hardware and software to make their computers more accessible. The TALK project focuses on accessible technologies for persons with motor and speech impairments.
TALK comprises a web accessibility project and a word prediction project. The web accessibility project aims to let users with only single-switch input navigate the web and to take advantage of context when filling in web forms. The word prediction project examines the performance of word prediction, character prediction, and abbreviation expansion techniques through user testing. This portion of the project also studies how communication occurs for persons with both speech and motor impairments, and aims to determine where and how technology could improve that communication. We are also continuing the work of the Augmented Wheelchair project by examining how context-aware computing can support other aspects of the daily lives of wheelchair users.
We are interested in encouraging conversation by providing a means for people to discover mutual interests. Conversations engender knowledge of one's community, which in turn encourages collaboration and social awareness. To support these broad goals, we have designed a system that surfaces implicit relationships among people cohabiting an environment equipped with ubiquitous sensors and displays. Sensors in this environment track people's interactions with documents, places, and other people. Another component analyzes this contextual information to discover specific relationships between people. To present found relationships, we employ a composite system integrating a public ambient display, which provides aggregate, abstract information, and a PDA display, which presents more specific information (Figures 1 and 2). The public ambient display notifies users in the space of the existence of relationships, and the PDA supports inquiry and communication.
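The relationship-discovery step can be sketched as set overlap over sensed interactions. The people, items, similarity measure (Jaccard), and threshold below are illustrative assumptions, not our system's actual analysis: each person's tracked documents, places, and contacts form a set, and sufficient overlap between two sets suggests a mutual interest worth presenting.

```python
def jaccard(a, b):
    """Overlap between two interaction sets, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Sensed interactions per person (documents, places) -- illustrative data.
interactions = {
    "alice": {"paper-42", "cafe", "room-510"},
    "bob":   {"paper-42", "cafe", "gym"},
    "carol": {"gym"},
}

def related_pairs(data, threshold=0.4):
    """Return pairs of people whose interaction overlap crosses a threshold."""
    names = sorted(data)
    return [(p, q) for i, p in enumerate(names) for q in names[i + 1:]
            if jaccard(data[p], data[q]) >= threshold]

print(related_pairs(interactions))  # [("alice", "bob")]
```

Pairs that cross the threshold would feed the ambient display in aggregate form, while the specific shared items back the detail shown on the PDA.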
We are deploying this system in several spaces and evaluating its impact. Before deployment, we use interviews and contextual inquiries to gauge the communication processes in each space. Then, during deployment, we use surveys and direct observation to discover how the system changes group communication.
Figure 1: An ambient public display showing files (left), people (center), and places (right) of mutual interest
Figure 2: A PDA display showing a list of related people
Nutrition has a major impact on health, including on diseases such as heart disease, osteoporosis, and cancer. Our work is designed to help people keep track of the nutritional content of the foods they have eaten. Using shopping receipt data, our application generates ambiguous suggestions for more nutritious purchases that could help supplement missing nutrients.
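The receipt-to-suggestion step can be sketched as a coverage computation. The foods, nutrient mappings, and targets below are invented for illustration, not real nutritional data or our system's database: items on a receipt are mapped to the nutrients they supply, and foods covering the remaining gaps are suggested.

```python
# Which nutrients each food supplies -- illustrative data only.
nutrients_in = {
    "white bread": {"carbohydrates"},
    "cola":        {"carbohydrates"},
    "spinach":     {"iron", "fiber"},
    "milk":        {"calcium"},
}
targets = {"carbohydrates", "iron", "fiber", "calcium"}

def suggest(receipt_items):
    """Suggest foods supplying nutrients missing from a receipt's items."""
    covered = set()
    for item in receipt_items:
        covered |= nutrients_in.get(item, set())
    missing = targets - covered
    return sorted(food for food, ns in nutrients_in.items() if ns & missing)

print(suggest(["white bread", "cola"]))  # ["milk", "spinach"]
```

A deployed system would of course work from a much larger food-nutrient database and weigh quantities rather than simple set membership.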
Our goal is to contribute a better understanding of how a sensor-based application can be integrated into everyday life. To do this, we chose an approach that can easily be replicated for many users, deployed, and tested for months at a time. We are currently conducting a diary study to provide data on which we can train our prediction algorithms. A formative user study suggested that receipts may provide enough information to also estimate what people are actually eating, as opposed to simply what they are purchasing. We are also interviewing and observing people's shopping and food-management habits to further inform the system design.