Context-aware computing is an effort to use sensed attributes of an environment to provide enriched support for activities. For example, an application might provide relevant services based upon your location or the identity of your companions. As low-level architectural support for context-aware computing matures [1,2], we are ready to explore more general and powerful means of access to context data. Information required by a context-aware application may be spread across a number of different repositories, partitioned by any number of physical, organizational, or privacy boundaries. What is needed is a mechanism for context-aware applications to issue context-based queries without having to explicitly manage the complex storage layout and access policies of the underlying data.
To address this need, we are developing Liquid, a distributed query processing system intended to both simplify and enhance the next generation of context-aware applications. Liquid will allow applications to issue long-standing queries in a simple declarative language and to monitor continuously changing query results. Our system is targeted at supporting two primary features: (1) continuous (persistent) queries sensitive to the dynamic nature of context (e.g., issuer changes location), and (2) queries with approximate results, where result substitutions can be made by exploiting relationships between repositories (e.g., a floor's temperature data is substituted for missing room temperature data). It is our hope that the Liquid system will provide a solid base for building advanced context-aware applications.
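The substitution mechanism described above can be sketched in a few lines. This is an illustrative Python sketch, not Liquid's actual API: the repository names, the containment map, and the function signature are all invented for the example. The idea is that when the exact repository (a room's temperature sensor) has no data, a related repository (the enclosing floor) answers instead, with the result flagged as approximate.

```python
# Hypothetical sketch of Liquid-style approximate query resolution:
# if a room's temperature reading is missing, substitute the enclosing
# floor's reading and tag the result as approximate. All names here
# (repositories, the containment map) are illustrative.

FLOOR_OF = {"room-510": "floor-5", "room-511": "floor-5"}  # containment relation

def resolve_temperature(target, repositories):
    """Return (value, exact?) for a temperature query, falling back
    to a related repository when the exact one has no data."""
    if target in repositories:
        return repositories[target], True
    parent = FLOOR_OF.get(target)
    if parent in repositories:
        return repositories[parent], False  # approximate substitution
    return None, False

repos = {"floor-5": 21.5}          # room-510's own sensor is offline
value, exact = resolve_temperature("room-510", repos)
```

A continuous query would re-run this resolution whenever a relevant repository changes, pushing updated results to the issuing application.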
The field of ubiquitous computing (Ubicomp) is still in its infancy. Within the field, there is no standard set of methodologies for evaluating Ubicomp systems. System designers benefit from performing multiple iterations, and from having working functionality in a system before the final implementation, in order to get feedback on design issues. This need gives rise to prototyping techniques such as Wizard of Oz. Insight is a set of tools to support the Wizard of Oz prototyping of Ubicomp systems and the evaluation of those systems, using data collected in user studies. It is composed of the context event logger, a tool that allows a wizard in a Wizard of Oz scenario to simulate a sensor network capturing events in its environment, and the context event analyzer, a tool for showing higher-level aggregates of lower-level context data.
Our plan is to use the logger and analyzer in evaluating two iterations of an application for industrial-sized kitchens that tracks and helps users locate food items. In the first iteration, we applied paper-prototyping techniques along with Wizard of Oz simulation of sensor input using the logger. Our second iteration will involve an interactive prototype that will still depend on simulating actual sensors, but which will have networked computer screens and a simple database to actually respond to users without the need for a person (wizard) to simulate application behavior. After user studies of the prototype from each iteration, we will employ the analyzer and examine the events logged, looking for events that identify design flaws. Our goal is to collect evidence demonstrating the strengths and flaws of both methods in Ubicomp design and evaluation.
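The logger/analyzer split above can be illustrated with a minimal sketch. This is not Insight's actual interface: the event fields, the kitchen sensor names, and the aggregate chosen (per-item movement counts) are invented for the example. It shows the core pattern of a wizard appending low-level simulated sensor events and an analyzer rolling them up into a higher-level view.

```python
# Illustrative sketch (not Insight's actual API): a wizard logs simulated
# sensor events with timestamps, and an analyzer aggregates the low-level
# events into a higher-level summary, e.g. how often each item was moved.

from collections import Counter

log = []  # the context event log the wizard appends to

def log_event(t, sensor, item, action):
    """Record one simulated sensor event, as a wizard would."""
    log.append({"t": t, "sensor": sensor, "item": item, "action": action})

def movements_per_item(events):
    """Higher-level aggregate: how often each food item was moved."""
    return Counter(e["item"] for e in events if e["action"] == "moved")

log_event(0, "shelf-3", "flour", "moved")
log_event(5, "fridge-1", "milk", "moved")
log_event(9, "shelf-3", "flour", "moved")
counts = movements_per_item(log)
```

In a user study, an analyst would scan aggregates like these for patterns that point at design flaws, such as an item that is repeatedly moved but never found.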
In 1997 there were 227,000 deaf people in the US who could not rely on hearing to gain awareness of sound. Instead, they use alternative awareness techniques, such as sensing vibrations and watching for flashing lights, to substitute for the aural sensing of sound in the workplace. However, there remains a gap between the experience of a hearing individual and that of a deaf person. Our work describes the design and evaluation of a peripheral display that provides the deaf with awareness of sound in an office environment, helping to close that gap. Conceptual drawings of sound by hearing participants, exploration with paper prototypes, interviews, and surveys formed the basis for our current design.
We implemented two prototypes, shown in Figures 1 and 2. One is based on a spectrograph, a tool commonly used by speech therapists that represents the pitch and intensity of sound over time. The other depicts position and amplitude over time. We evaluated them in a dual-task experiment with eight deaf participants and found that they were able to peripherally identify notification sounds, such as a door knock or telephone ring, with both systems while performing a visual primary task. Participants had significantly higher identification rates with the visualization that represented position. Neither visualization caused a significant amount of distraction as measured by performance on the primary task. This work has been received with much enthusiasm by members of the deaf community and may ultimately result in a system that better supports sound awareness for the deaf in situations of fixed visual focus.
Figure 1: A cellular phone ring as represented by our spectrograph visualization. In this visualization, height is mapped to frequency, color to intensity (blue = quiet; red = loud). The temporal aspect is depicted by having the visualization animate from right to left. A cellular phone ring is recognizable by a regular frequency amplitude pattern. This is typical of mechanical sounds.
Figure 2: A cellular phone ring as represented by our ripples visualization. A top view map of the room appears in white. The rings denote the position of a sound source in a room. The size and color of rings indicate the amplitude of the sound. Frequency does not appear in this visualization. A user can infer a sound source from its location. In this case, the participant was told the phone was on the desk. Thus, a sound coming from the desk would probably be the phone.
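The frequency-to-height, intensity-to-color mapping described in Figure 1 can be sketched as follows. This is a simplified illustration, not the prototype's implementation: the column height and the linear blue-to-red ramp are assumptions made for the example, and the real system animates successive columns from right to left.

```python
# Minimal sketch of the spectrograph mapping: each spectral bin's index
# (a proxy for frequency) sets its vertical position, and its intensity
# sets a color on a blue (quiet) to red (loud) scale. Values are
# illustrative, not taken from the actual prototype.

def intensity_to_color(level):
    """Map a 0..1 intensity to an RGB triple, blue (quiet) to red (loud)."""
    level = max(0.0, min(1.0, level))
    return (int(255 * level), 0, int(255 * (1 - level)))

def column_pixels(spectrum, height=100):
    """Place each bin at a height proportional to its frequency index and
    color it by intensity; returns (y, rgb) pairs for one animation column."""
    n = len(spectrum)
    return [(int(i * height / n), intensity_to_color(v))
            for i, v in enumerate(spectrum)]

pixels = column_pixels([0.0, 0.5, 1.0], height=90)
```

A regular, repeating pattern across successive columns is what makes a mechanical sound like a phone ring recognizable in this representation.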
User interface designers are increasingly faced with the challenge of targeting multi-device, multimodal applications, but do not have tools to support them. This work proposes an informal prototyping tool, named CrossWeaver, which implements the programming by illustration (PBI) technique, enabling non-programmer designers to build multimodal, multi-device user interface prototypes, test those prototypes with end users, and collect valuable feedback informing iterative design.
PBI is a technique for user interface prototyping that involves building executable prototypes from example sketches. PBI has its origin in the informal interface approach, supporting natural human input, such as sketching, while minimizing recognition and transformation of the input. PBI also uses programming by demonstration techniques, enabling a working application to be built by an end-user based on concrete examples, in this case design sketches. CrossWeaver extends informal user interface and programming by demonstration research to multimodal, multi-device applications, enabling a designer to create and test a multi-device, multimodal prototype from a set of example-sketched storyboards.
Figure 1: Screenshot of the initial CrossWeaver prototype
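The storyboard-to-prototype step can be sketched as a small state machine. This is a hypothetical illustration of the PBI idea, not CrossWeaver's implementation: the class, the sketch names, and the input-event tuples are all invented for the example. Each sketch is a state, and each link the designer draws becomes a transition keyed by an input modality and event.

```python
# Hypothetical sketch of a storyboard run as a state machine: sketches
# are states, and designer-drawn links become transitions keyed by an
# input mode (speech, pen, key). Names are illustrative only.

class StoryboardPrototype:
    def __init__(self, start):
        self.current = start
        self.transitions = {}      # (sketch, input_event) -> next sketch

    def link(self, sketch, event, target):
        """Record a designer-drawn link from one sketch to another."""
        self.transitions[(sketch, event)] = target

    def handle(self, event):
        """Advance to the linked sketch, or stay put if no link matches."""
        self.current = self.transitions.get((self.current, event), self.current)
        return self.current

proto = StoryboardPrototype("home-sketch")
proto.link("home-sketch", ("speech", "show map"), "map-sketch")
proto.link("map-sketch", ("pen", "tap-back"), "home-sketch")
proto.handle(("speech", "show map"))
```

Because transitions are keyed by modality as well as event, the same storyboard can be tested against pen input on a PDA and speech input on a phone.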
People often use a variety of computing devices, such as PCs, PDAs, and cell phones, to access the same information. The user interface to this information needs to differ for each device because of different input and output constraints. Currently, designers of such multi-device user interfaces either have to design a UI separately for each device, which is time consuming, or use a program to generate interfaces automatically, which often results in awkward interfaces.
We are creating a system called Damask to better support multi-device UI design. With Damask, the designer will design a UI for one device by sketching the design and by specifying which design patterns the interface uses. The patterns will help Damask generate user interfaces optimized for the other target devices. The generated interfaces will be of sufficient quality so that it will be more convenient to use Damask than to design each of the other interfaces separately, and the ease with which designers will be able to create designs will encourage them to engage in iterative design.
Figure 1: Damask's proposed user interface
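The pattern-based retargeting idea can be sketched as a lookup from patterns to per-device realizations. This is an invented illustration, not Damask's pattern catalog or generation algorithm: the pattern names, device names, and outline strings are all assumptions for the example.

```python
# Illustrative sketch of pattern-based retargeting: a design declares
# which patterns it uses, and each pattern carries per-device
# realizations, letting a tool regenerate one sketched design for
# another device. All names here are invented for the example.

PATTERNS = {
    "clear-entry-points": {"pc": "home page with labeled sections",
                           "phone": "short menu of top tasks"},
    "search-action-module": {"pc": "search box in page header",
                             "phone": "search item in menu"},
}

def generate_ui(used_patterns, device):
    """Assemble a device-specific UI outline from the patterns the
    designer tagged in the original sketch."""
    return [PATTERNS[p][device] for p in used_patterns]

phone_ui = generate_ui(["clear-entry-points", "search-action-module"], "phone")
```

The point of routing generation through patterns, rather than through the raw sketch, is that each pattern encodes how a piece of the design should adapt to each device's constraints.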
We conducted an ethnographic study in which we observed and interviewed several professional web designers. This study showed that the process of designing a web site involves an iterative progression from less detailed to more detailed representations of the site. For example, designers often create site maps early in the process, which are high-level representations of a site in which each page or set of pages is depicted as a label. They then proceed to create storyboards of interaction sequences, which employ minimal page-level detail and focus instead on the navigational elements required to get from one page to another. Later still, designers create schematics and mock-ups, which are different representations of individual pages.
These were the primary observations that led to the design and implementation of DENIM, a system to assist web designers in the early stages of information, navigation, and interaction design. DENIM is an informal pen-based system that allows designers to quickly sketch web pages, create links among them, and interact with them in a run mode. The different ways of viewing a web site, from site map to storyboard to individual pages, are integrated through the use of zooming.
More information is available through the Group for User Interface Research web site at http://guir.berkeley.edu.
Figure 1: The DENIM system
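The zoom integration described above can be sketched as choosing a representation by zoom factor. This is a toy illustration, not DENIM's implementation: the thresholds, level names, and the 0..1 zoom scale are assumptions made for the example.

```python
# Toy sketch of DENIM's zoom idea: one site model, shown at different
# levels of detail depending on the zoom factor. Thresholds and level
# descriptions are illustrative, not DENIM's actual values.

LEVELS = [(0.2, "site map: pages as labels"),
          (0.6, "storyboard: pages plus navigation arrows"),
          (1.0, "page: full sketched detail")]

def view_for_zoom(zoom):
    """Pick the representation appropriate to a zoom factor in (0, 1]."""
    for threshold, description in LEVELS:
        if zoom <= threshold:
            return description
    return LEVELS[-1][1]

view = view_for_zoom(0.5)
```

Zooming smoothly across these thresholds is what lets a designer move between the site map, storyboard, and page representations without switching tools.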
Context-aware applications are computer systems that make use of implicitly gathered information, such as a person's identity, location, and activity. This is in contrast to traditional computer systems that require explicit user interaction for all input.
This work is addressing two different but related problems. The first is organizing and managing the sensors, data, and services in a meaningful way. The second is doing all of this in a privacy-sensitive manner that provides end-users with greater control and feedback over what information is being collected about them and how that information is being used.
The main abstraction we are developing is InfoSpace. InfoSpaces are repositories of context information designed to be analogous to web sites. That is, in the same way that many people create and manage personal web sites, they would create and manage personal InfoSpaces. While a person would only have one logical InfoSpace, they may have several InfoSpaces that physically reside on multiple devices, thus providing people with high availability even when mobile.
However, systems that collect highly personal information like this are always strongly criticized because of potential privacy threats. To address these legitimate concerns, we are integrating several privacy mechanisms, including basic access control to limit queries, the option to return intentionally ambiguous results, privacy tags for specifying privacy preferences on data that flows from one InfoSpace to another, and user interfaces for helping end-users understand who has been accessing their data.
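The "intentionally ambiguous results" mechanism can be sketched as answering location queries at a requester-dependent granularity. This is a hedged illustration, not the system's actual interface: the precision ladder, the policy table, and the names are all invented for the example. It also shows the access log that would feed the feedback user interfaces mentioned above.

```python
# Hedged sketch of privacy-sensitive disclosure: an InfoSpace answers a
# location query at the precision the owner's policy grants each
# requester, and records the access for later feedback. Names and the
# precision ladder are illustrative only.

LOCATION = {"room": "Soda 523", "building": "Soda Hall", "city": "Berkeley"}
POLICY = {"alice": "room", "boss": "building"}   # unlisted requesters: city only

access_log = []  # who asked, and what granularity they received

def query_location(requester):
    """Return the owner's location at the granularity the requester is
    allowed to see, logging the access for end-user feedback."""
    granularity = POLICY.get(requester, "city")
    access_log.append((requester, granularity))
    return LOCATION[granularity]

answer = query_location("boss")   # blurred to building level
```

Returning a coarser but truthful answer, rather than refusing outright, lets a person stay reachable without revealing exactly where they are.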
In our previous studies into web design, we found that pens, paper, walls, and tables were often used for explaining, developing, and communicating ideas during the early phases of design. These wall-scale, paper-based design practices inspired The Designers’ Outpost, a tangible user interface that combines the affordances of paper and large physical workspaces with the advantages of electronic media to support information design. With Outpost, users collaboratively author web site information architectures on an electronic whiteboard using physical media (Post-it notes and images), structuring and annotating that information with electronic pens. This interaction is enabled by a touch-sensitive SMART Board augmented with a rear-mounted video camera for capturing movement and a front-mounted high-resolution camera for capturing ink.
The electronic representation gives us three main advantages: the ability to support fluid transitions to other tools, such as DENIM, support for history, and remote collaboration.
We have recently developed a remote collaboration system based on The Designers’ Outpost. The system provides a distributed shared workspace that employs physical Post-it notes as interaction primitives. We implement and evaluate two mechanisms for awareness: transient ink input for gestures and a blue shadow of the remote collaborator for presence.
Figure 1: Users collaborate remotely using physical artifacts. Notes that are physical on this board correspond to electronic notes in Figure 2.
Figure 2: Notes on this board are electronic versions of the physical notes in Figure 1.
Our contextual inquiry into the practices of oral historians unearthed a curious incongruity: while oral historians consider interview recordings to be a central historical artifact, these recordings sit unused after a written transcript is produced. We hypothesized that this is largely because books are more usable than recordings, so we created Books with Voices: barcode-augmented paper transcripts enabling fast, random access to digital video interviews on a PDA. We present quantitative results of an evaluation of this tangible interface with 13 participants. They found this lightweight, structured access to original recordings to be useful, offering substantial benefits with minimal overhead. Oral historians found a level of emotion in the video not available in the printed transcript. The video also helped readers clarify the text and observe nonverbal cues.
Figure 1: Accessing digital video by scanning transcripts
Figure 2: PDA video display of oral histories
Figure 3: Augmented paper transcripts produced by Books with Voices; from an oral history with Professor Carlo Séquin
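The barcode-to-video link at the heart of the system can be sketched as a simple lookup. This is an invented illustration, not the system's implementation: the passage IDs, the timecode table, and the handler name are all assumptions for the example. Each printed barcode encodes a transcript passage identifier, which the PDA resolves to a playback offset in the interview video.

```python
# Illustrative sketch of scan-to-seek: a scanned barcode yields a
# passage ID, which maps to a timecode (in seconds) in the digital
# video of the interview. IDs and offsets are invented.

TIMECODES = {"seq-017": 754.0, "seq-018": 791.5}  # passage -> seconds

def on_scan(passage_id):
    """Resolve a scanned barcode to a playback offset, or None if the
    passage has no associated recording."""
    return TIMECODES.get(passage_id)

offset = on_scan("seq-017")   # seek the PDA's video player here
```

Random access through the printed page is what keeps the overhead minimal: the reader stays in the transcript and only dips into the recording at the moments that matter.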