Chapter 15: Human-Computer Interaction

The EECS Research Summary for 2003


Livenotes: A Collaborative Note-Taking Application

Matthew Kam
(Professor John F. Canny)
Center for Innovative Learning Technologies

Livenotes is a collaborative note-taking Java program that runs on handheld wireless tablets. Using the shared whiteboard medium integral to Livenotes, students can record and annotate one another's lecture notes on their respective tablets in real time.
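
As a rough illustration of the underlying mechanism, the minimal Java sketch below shows the kind of client-server stroke relay a shared whiteboard of this sort can be built on; the class names are our own invention, not Livenotes code. Each tablet sends ink strokes to a relay server, which forwards them to every connected tablet so annotations appear on all screens in real time.

    import java.io.*;
    import java.net.*;
    import java.util.*;

    // Hypothetical relay server for a shared whiteboard (not Livenotes itself).
    public class StrokeRelayServer {
        private final List<ObjectOutputStream> clients =
            Collections.synchronizedList(new ArrayList<>());

        public static void main(String[] args) throws IOException {
            new StrokeRelayServer().serve(5000);
        }

        void serve(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket tablet = server.accept();
                    clients.add(new ObjectOutputStream(tablet.getOutputStream()));
                    new Thread(() -> relay(tablet)).start(); // one reader per tablet
                }
            }
        }

        // Read serialized strokes from one tablet and forward each stroke to
        // every connected tablet, so annotations show up everywhere at once.
        void relay(Socket tablet) {
            try (ObjectInputStream in = new ObjectInputStream(tablet.getInputStream())) {
                while (true) {
                    Object stroke = in.readObject();
                    synchronized (clients) {
                        for (ObjectOutputStream out : clients) {
                            out.writeObject(stroke);
                            out.flush();
                        }
                    }
                }
            } catch (IOException | ClassNotFoundException e) {
                // tablet disconnected; a real system would remove its stream
            }
        }
    }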

We are developing Livenotes to facilitate peer learning within small groups of students in conventional classrooms, because small group learning is a proven pedagogical method for enhancing student attention, participation, and understanding [1].

The next steps in our research are to:

(1) Introduce more user-interface affordances and features;

(2) Migrate from client-server to peer-to-peer;

(3) Deploy in additional class settings; and

(4) Evaluate, both qualitatively and quantitatively, the learning effectiveness that arises from using Livenotes.

Livenotes is undertaken in collaboration with Daniel Glaser, Alastair Iles, Edwin Mach, Ian Wang, and Hailing Xu, under the supervision of Professor John Canny. A Spring 2003 deployment will be carried out in collaboration with Professors Ellen Do and Mark Gross at the University of Washington, Seattle. Orna Tarshish is a past contributor.

[1]
A. Iles, D. Glaser, M. Kam, and J. Canny, "Learning via Distributed Dialogue: Livenotes and Handheld Wireless Technology," Proc. Conf. Computer Support for Collaborative Learning, January 2002.

More information: http://www.cs.berkeley.edu/~mattkam
Send mail to the author: mattkam@eecs.berkeley.edu

Activity-based Computing

Yitao Duan
(Professor John F. Canny)
(NSF) EIA-0122599

When working in shared physical spaces, individuals develop a rich sense of awareness that greatly facilitates their collaboration toward a common goal. They gather and share information freely; they gain a sense of what others know, which allows them to ask the right person for help; and they come to understand others' goals, which gives them a richer sense of purpose. This richness is largely missing in electronic contexts, and we propose a methodology for recapturing it. Our approach, called activity-based computing (ABC), draws its principles primarily from activity theory, which divides human behavior into a hierarchy of activities, actions, and operations. We observe that computer systems today are action-based, which leaves them unaware of high-level activity context and motive. ABC performs activity-level analysis using probabilistic models and tacit data mining, and provides efficient visualization as a user interface.
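
As a concrete illustration, activity theory's three-level hierarchy can be rendered as a simple data structure. The Java below is our own sketch, not the ABC system; it shows the activity-level motive that, on this analysis, action-based systems never represent.

    import java.util.*;

    // Illustrative rendering of activity theory's hierarchy (names invented).
    class Operation {
        final String name;
        Operation(String name) { this.name = name; }
    }

    class Action {                       // directed at a concrete goal
        final String goal;
        final List<Operation> operations = new ArrayList<>();
        Action(String goal) { this.goal = goal; }
    }

    class Activity {                     // driven by a high-level motive
        final String motive;
        final List<Action> actions = new ArrayList<>();
        Activity(String motive) { this.motive = motive; }
    }

    public class ActivityHierarchyDemo {
        public static void main(String[] args) {
            Activity writePaper = new Activity("publish research results");
            Action revise = new Action("revise section 3");
            revise.operations.add(new Operation("open editor"));
            revise.operations.add(new Operation("type"));
            writePaper.actions.add(revise);
            // An action-based system sees only "open editor" and "type";
            // activity-level analysis recovers the motive behind them.
            System.out.println("Motive behind 'type': " + writePaper.motive);
        }
    }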


Send mail to the author: duan@eecs.berkeley.edu

Toward Trustworthy Ubiquitous Computing Environments

Yitao Duan
(Professor John F. Canny)
(NSF) EIA-0122599

In a ubiquitous computing environment, sensors actively collect data, much of which can be very sensitive. Protecting this private data is central to establishing a trust relationship between users and the environment. A few challenges make ubicomp security different from other system protection: (1) the environment is often unfamiliar to the users, who will not have the kind of trust relationship with its owners, appropriate for handling private information, that they might have with their local system administrator; (2) data is often generated dynamically, streams at high rates, and must be processed in real time; and (3) users' access rights change dynamically according to their relationship with the mechanisms by which data is generated. For example, a number of users can form an ad hoc group and record their meeting using a camera administered by the environment; they should have access only to the video produced during the meeting period. We are investigating schemes for protecting user data in a ubicomp environment. The key principle we propose is "data discretion": access to information is granted only to individuals who would have "real-world" access to the data. We have devised a protocol based on hybrid secret-key and public-key cryptography to enforce this principle. Our protocol allows for legitimate sharing and collaboration, yet prevents anyone from physically tracking users, thus protecting user anonymity and privacy.
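
The Java sketch below illustrates the general hybrid pattern, using the standard javax.crypto API; it is a simplification under our own assumptions, not the authors' protocol. Bulk data is encrypted once under a symmetric session key, and that key is then wrapped under the public key of each user who had real-world access, such as each participant in the recorded meeting.

    import java.security.*;
    import javax.crypto.*;

    public class DataDiscretionSketch {
        public static void main(String[] args) throws GeneralSecurityException {
            byte[] video = "sensor data captured during the meeting".getBytes();

            // One symmetric key per recording session (fast enough for streams).
            SecretKey sessionKey = KeyGenerator.getInstance("AES").generateKey();
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, sessionKey);
            byte[] ciphertext = aes.doFinal(video);

            // Wrap the session key under each participant's public key, so only
            // people who were actually in the meeting can recover the recording.
            KeyPair participant = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, participant.getPublic());
            byte[] wrappedKey = rsa.wrap(sessionKey);

            System.out.println("ciphertext bytes: " + ciphertext.length
                + ", wrapped key bytes: " + wrappedKey.length);
        }
    }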


Send mail to the author: duan@eecs.berkeley.edu

Cosmetic Errors of Beginning Scheme Programmers

Clint Ryan
(Professor Michael Clancy)

"Cosmetic errors" are significant errors that produce results that, at least to beginning programmers, appear to be essentially correct except for minor formatting issues. An example common among students in CS 3, "Introduction to Symbolic Computing," is a procedure that should return a number, such as 4, but actually returns a list with a number as its only element, such as (4). Another example is a procedure that returns exactly the list that it should, except that it contains a null list as its last element. Many students consider procedures such as these to be essentially correct. Others seem not to even notice the difference, although the list (2 3 5 7 ()) is clearly not the list (2 3 5 7).

The goal of this research is to determine why students continue to make these mistakes even after they have a good understanding of the different Scheme data types, and to develop exercises that help students stop making them. While there appear to be a number of different reasons for these errors, some of the most common involve the way students understand null or one-item lists and empty strings (called empty words in CS 3). For example, some students view empty lists and strings as "nothing," which can safely be ignored when it shows up in another list and used as a return value when a procedure is given arguments outside its range. Others, including a number of very good students, believe that one-element lists are useless and that Scheme automatically converts them to the element they contain.

Guided by informal interviews and assessments given in summer and fall 2002, we will assess and interview CS 3 students about null and one-element lists and empty words at the start of the spring 2003 semester. Using the data we gather, we intend to target certain cosmetic errors that typically occur later in the semester. If this proves successful, we can design activities that address cosmetic errors throughout the semester.


Send mail to the author: ryanc@eecs.berkeley.edu

The UC-WISE Project


(Professor Michael Clancy)
CITRIS

The UC-WISE (University of California Web-based Instruction for Science and Engineering) project is designing a system for integrating technology into the instruction of entry-level science and engineering courses. This system, based on the WISE learning environment developed in Berkeley's School of Education (http://wise.berkeley.edu), will deliver functional content in the form of dynamic Java-based tools for computer programming, modeling, and many other learning activities. All of these student activities will be integrated into a Web-based learning environment that links to the course calendar, the course syllabus, and a database that stores all student work and supports instructor assessments. The goal of this effort is to research the most effective ways of integrating computer technology into our courses, replacing the traditional lecture with a more dynamic role for the instructor as tutor or learning partner.

The system includes a "master curriculum," a database of richly annotated course "learning objects," e.g., exercises, projects, assessment questions, and video lecture segments. It will incorporate three major components for loading and accessing the database: (1) the curriculum builder, with which master teachers populate and annotate the master curriculum; (2) the course customizer, which guides a prospective instructor in forming courses based on material in the master curriculum; and (3) the course portal, in which the constructed course is delivered to students.

In summer 2002, we ran CS 3 (the introductory programming course for nonmajors) in an all-lab format using a prototype version of the course portal. Results were exciting: students and staff loved the format, and the students did quite well compared to CS 3 students in earlier semesters. (The summer results are described in [1].)

Current work proceeds in two areas: curriculum and system development. At present we are concentrating on the likely uses of the metadata annotating each learning object in the database, and on tools that will assist users of the curriculum builder and the course customizer. Developer subgroups are also designing tools for student collaboration and for Scheme programming. The curriculum group is reviewing and tuning the summer curriculum, exploring the development of other CS lower-division courses in this format, designing a curriculum for tutors, considering how to make use of the CS 3 curriculum in the self-paced CS 3S, and investigating the possibility of online exams.

[1]
M. Clancy, M. C. Linn, C. Ryan, J. Slotta, and N. Titterton, "New Roles for Students, Instructors, and Computers in a Lab-based Introductory Programming Course," Proc. Technical Symp. CS Education, published as SIGCSE Bulletin, Vol. 35, No. 1, 2003.

More information: http://wiser.cs.berkeley.edu/
Send mail to the author: clancy@eecs.berkeley.edu

Liquid: Context-Aware Distributed Queries

Alan Newberger, Christopher Beckmann, Jeffrey Heer, and Jason I. Hong
(Professors Anind Dey, James A. Landay, and Jennifer Mankoff)
(NSF) IIS-0205644

Context-aware computing is an effort to use sensed attributes of an environment to provide enriched support for activities. For example, an application might provide relevant services based upon your location or the identity of your companions. As low-level architectural support for context-aware computing matures [1,2], we are ready to explore more general and powerful means of access to context data. Information required by a context-aware application may be spread across a number of different repositories, partitioned by any number of physical, organizational, or privacy boundaries. What is needed is a mechanism for context-aware applications to issue context-based queries without having to explicitly manage the complex storage layout and access policies of the underlying data.

To address this need, we are developing Liquid, a distributed query processing system intended to both simplify and enhance the next generation of context-aware applications. Liquid will allow applications to issue long-standing queries in a simple declarative language and to monitor continuously changing query results. Our system is targeted at supporting two primary features: (1) continuous (persistent) queries sensitive to the dynamic nature of context (e.g., issuer changes location), and (2) queries with approximate results, where result substitutions can be made by exploiting relationships between repositories (e.g., a floor's temperature data is substituted for missing room temperature data). It is our hope that the Liquid system will provide a solid base for building advanced context-aware applications.
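
As a rough illustration of what such an interface might feel like, the sketch below shows a hypothetical continuous-query API in Java. The query syntax, engine interface, and callback are our own assumptions, not Liquid's actual design: an application registers a long-standing declarative query and is called back whenever the (possibly approximate) result changes.

    import java.util.function.Consumer;

    interface QueryHandle { void cancel(); }

    interface ContextQueryEngine {
        // Registers a continuous query; onResult is re-invoked whenever the
        // distributed repositories produce a changed (possibly approximate) result.
        QueryHandle issue(String query, Consumer<String> onResult);
    }

    public class LiquidSketch {
        public static void main(String[] args) {
            // Toy in-memory engine standing in for the distributed system.
            ContextQueryEngine engine = (query, onResult) -> {
                onResult.accept("room 525: 21.5 C (substituted from floor average)");
                return () -> { };   // handle whose cancel() does nothing here
            };
            // A persistent, declarative query; the result would keep updating
            // as the issuer moves between rooms.
            QueryHandle handle = engine.issue(
                "SELECT temperature WHERE room = current_room(issuer)",
                update -> System.out.println("update: " + update));
            handle.cancel();
        }
    }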

[1]
A. K. Dey, D. Salber, and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications," Human-Computer Interaction Journal, Vol. 16, No. 2-4, 2001.
[2]
J. I. Hong and J. A. Landay, "An Infrastructure Approach to Context-Aware Computing," Human-Computer Interaction, Vol. 16, No. 2-4, 2001.

More information: http://guir.berkeley.edu/projects/cfabric/
Send mail to the author: jheer@cs.berkeley.edu

Insight: Tool Support for User-Centered Ubicomp Prototyping and Evaluation

Alan Liu1 and Peter Khooshabeh2
(Professors Anind Dey and James A. Landay)

The field of ubiquitous computing (Ubicomp) is still in its infancy, and there is no standard set of methodologies for evaluating Ubicomp systems. System designers benefit from performing multiple design iterations and from having working functionality in a system before the final implementation, so that they can get feedback on design issues. This gives rise to prototyping techniques such as Wizard of Oz. Insight is a set of tools that supports Wizard of Oz prototyping of Ubicomp systems and the evaluation of those systems using data collected in user studies. It is composed of the context event logger, a tool that allows a wizard in a Wizard of Oz scenario to simulate a sensor network capturing events in its environment, and the context event analyzer, a tool for showing higher-level aggregates of lower-level context data.

Our plan is to use the logger and analyzer in evaluating two iterations of an application for industrial-sized kitchens that tracks and helps users locate food items. In the first iteration, we applied paper-prototyping techniques along with Wizard of Oz simulation of sensor input using the logger. Our second iteration will involve an interactive prototype that will still depend on simulating actual sensors, but which will have networked computer screens and a simple database to actually respond to users without the need for a person (wizard) to simulate application behavior. After user studies of the prototype from each iteration, we will employ the analyzer and examine the events logged, looking for events that identify design flaws. Our goal is to collect evidence demonstrating the strengths and flaws of both methods in Ubicomp design and evaluation.

[1]
S. Consolvo, L. Arnstein, and B. Franza, "User Study Techniques in the Design and Evaluation of a Ubicomp Environment," Proc. Int. Conf. Ubiquitous Computing, September 2002.
1Undergraduate (EECS)
2Undergraduate (non-EECS)

Send mail to the author: lliu@eecs.berkeley.edu

Context-Aware Word Prediction

Jeffrey Heer
(Professors Anind Dey and Jennifer Mankoff)
(NSF) IIS-0205644

The fluid, everyday communication of natural language that many of us take for granted eludes many persons with disabilities. People afflicted with conditions such as ALS (most notably Stephen Hawking) must depend on text-entry and speech-synthesis systems to communicate. An integral component of such systems is word prediction software. By attempting to guess the speaker's intended word before it is completed, word prediction systems aim to reduce input time and accelerate communication.

Word prediction based on language modeling (e.g., trigram models) has proven quite useful for reducing the number of keystrokes needed by disabled users. We hypothesize, however, that by taking into account the user's context, further improvements in word prediction might be realized. In particular, we propose modeling a conversation as a dynamic topic-driven process, using both linguistic history and sensed context data (such as location and time of day) to infer the most likely topics. Words, in turn, are then predicted by the inferred topics as well as the conversation history. In essence, we hope to capture (in some small part) both the sequential regularities of language and the underlying semantics.
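
To make the mixture concrete, the toy Java sketch below scores candidate next words by interpolating a bigram language model with a topic prior derived from sensed context. The corpus, topic weights, and interpolation constant are invented for illustration, and the model is far simpler than the ones proposed above.

    import java.util.*;

    public class TopicWordPredictor {
        static Map<String, Map<String, Integer>> bigrams = new HashMap<>();

        static void train(String[] words) {
            for (int i = 0; i + 1 < words.length; i++)
                bigrams.computeIfAbsent(words[i], k -> new HashMap<>())
                       .merge(words[i + 1], 1, Integer::sum);
        }

        // score(w) = lambda * P(w | previous word) + (1 - lambda) * P(w | topic)
        static String predict(String prev, Map<String, Double> topicPrior, double lambda) {
            Map<String, Integer> counts = bigrams.getOrDefault(prev, Map.of());
            double total = counts.values().stream().mapToInt(Integer::intValue).sum();
            Set<String> vocab = new HashSet<>(counts.keySet());
            vocab.addAll(topicPrior.keySet());
            String best = null;
            double bestScore = -1;
            for (String w : vocab) {
                double pLm = total == 0 ? 0 : counts.getOrDefault(w, 0) / total;
                double score = lambda * pLm + (1 - lambda) * topicPrior.getOrDefault(w, 0.0);
                if (score > bestScore) { bestScore = score; best = w; }
            }
            return best;
        }

        public static void main(String[] args) {
            train("i would like a cup of coffee please".split(" "));
            // Location sensed as a cafe raises food- and drink-related words.
            Map<String, Double> cafeTopic = Map.of("coffee", 0.3, "tea", 0.2);
            System.out.println(predict("of", cafeTopic, 0.7)); // -> "coffee"
        }
    }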

Our goals are to realize improved models for word prediction and to explore the use of probabilistic reasoning as a tool for modeling and performing inference on sensed context data. While our primary emphasis is on augmented communication, we believe our work will also have relevance to related efforts in context-aware computing, language modeling, and speech recognition.


More information: http://www.cs.berkeley.edu/projects/io/augmented-wheelchair.html
Send mail to the author: jheer@cs.berkeley.edu

Ambient Display of Healthy City Information

Morgan Ames1 and Chinmayi Bettadapur2
(Professors Anind Dey and Jennifer Mankoff)

The Healthy Cities ambient display project is designing a public ambient display, to be placed in a busy plaza, public transportation center, or market, that shows the "health" of the city as characterized by various statistics. Ambient displays are ubiquitous computing devices that give a continuous stream of information in a peripheral, non-obtrusive way. We have interviewed and surveyed a number of Berkeley residents to gain a better understanding of what they think a "healthy" city is. From these responses, we will create a display that monitors the status of information sources relevant to the health of the city and presents this information to the residents of Berkeley.

1Undergraduate (EECS)
2Undergraduate (EECS)

More information: http://kettle.cs.berkeley.edu/ambient/10
Send mail to the author: dey@eecs.berkeley.edu

Toolkit Support for Ambient Displays

Scott Carter and Tara Matthews
(Professors Anind Dey and Jennifer Mankoff)

Ambient display research is a new but burgeoning field studying the design and evaluation of systems that provide non-critical information to the periphery of human attention [1]. Different displays receive their input from different sources and display output in a variety of ways. However, because little middleware exists to support ambient display development, developers must rewrite from scratch the code that translates input to output. To correct this problem, we are designing, implementing, and evaluating an ambient display toolkit. We approach this problem by first outlining the design space of ambient displays, classifying several previously built displays by input and output type. We then look at the code used in those displays to determine additional patterns. Next, we develop an architecture and a library of functions to support these patterns. As a final step, we rebuild a few existing ambient displays using the toolkit to evaluate its effectiveness, and then iterate the toolkit design based on our findings.
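
The sketch below illustrates, with invented names, the input-to-output pattern such a toolkit might factor out: any input source publishes a normalized value to any number of attached displays, so developers no longer rewrite the translation code for each new display. This is our own schematic, not the toolkit's actual architecture.

    import java.util.*;
    import java.util.function.DoubleConsumer;

    // Sources publish a normalized level; displays render it however they like.
    abstract class InputSource {
        private final List<DoubleConsumer> displays = new ArrayList<>();
        void attach(DoubleConsumer display) { displays.add(display); }
        // Subclasses call this with a value normalized to the range [0, 1].
        protected void publish(double level) { displays.forEach(d -> d.accept(level)); }
    }

    class BusArrivalSource extends InputSource {
        // Map "minutes until the bus" onto urgency: closer bus, higher level.
        void onMinutesAway(int minutes) { publish(Math.max(0.0, 1 - minutes / 30.0)); }
    }

    public class AmbientToolkitSketch {
        public static void main(String[] args) {
            BusArrivalSource bus = new BusArrivalSource();
            // Two very different output media reuse the same input untouched.
            bus.attach(level -> System.out.println("lamp brightness: " + level));
            bus.attach(level -> System.out.println("chime volume: " + level));
            bus.onMinutesAway(6);   // prints 0.8 on both displays
        }
    }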

[1]
M. Weiser and J. S. Brown, "Designing Calm Technology," PowerGrid Journal, Vol. 1.01, July 1996.

More information: http://www.cs.berkeley.edu/projects/jmankoff/ambient/
Send mail to the author: sacarter@cs.berkeley.edu

Everyday Privacy Management in Ubiquitous Computing

Scott Lederer, Christopher Beckmann, and Karen Teng1
(Professors Anind Dey and Jennifer Mankoff)
NDSEG Fellowship

We are investigating the role of usability in everyday privacy, which signifies an individual's regular exposure to and control over the disclosure of personal information in ubiquitous computing environments. The near-continuous and sensitive nature of everyday privacy necessitates usable, consistent interaction mechanisms for managing it. Toward that end, we are designing and evaluating a user interface for managing everyday privacy in ubicomp. Our design is based on the notion that the identity of the information recipient is the primary determinant of the quality and quantity of personal information an individual prefers to disclose and, further, that an individual's disclosure preferences regarding a given recipient can vary by situation.

1Undergraduate (EECS)

More information: http://guir.berkeley.edu/projects/end_user_privacy/
Send mail to the author: lederer@cs.berkeley.edu

Ambient Display Evaluation

Tara Matthews, Scott Carter, Edward De Guzman, Morgan Ames1, Chinmayi Bettadapur2, Gary Hsieh3, and Mira Sutijono
(Professors Anind Dey and Jennifer Mankoff)

Ambient displays are a new type of pervasive computing device that presents information in non-critical ways, allowing users to absorb it in the periphery of their attention. These devices are useful because they do not demand attention, so a person can be aware of more information without being overburdened by it [1]. Getting information from an ambient display requires little thought, allowing people to focus on other tasks. The very characteristics that make ambient displays a useful interface innovation, however, also make them difficult to evaluate: traditional evaluation techniques used in human-computer interaction do not apply well to ambient displays.

Our goal is to assess the pros and cons of different evaluation techniques for testing the effectiveness of an ambient display, and then to determine the best techniques for evaluating these displays by conducting evaluation studies. We will begin with a literature survey and analysis of the available evaluation techniques. In parallel, we will design an ambient display that addresses the needs of people who must continuously monitor many sources of information; one example is a display that lets restaurant servers see the status of food preparation by quickly glancing at a visualization of the remaining preparation times. With an ambient display and knowledge of evaluation methods, we will select one or more techniques to use in a summative study of the display. The design and results of the study will guide further research on the evaluation of ambient displays and improve our ability to design effective displays.

[1]
M. Weiser and J. S. Brown, "Designing Calm Technology," PowerGrid Journal, Vol. 1.01, July 1996.
1Undergraduate (EECS)
2Undergraduate (EECS)
3Undergraduate (EECS)

More information: http://www.cs.berkeley.edu/projects/jmankoff/ambient/
Send mail to the author: tmatthew@eecs.berkeley.edu

Tangible Instant Messaging

Gaurav Bhalotia, Anthony Gagliano1, Elizabeth Yang2, and Margaret Yau3
(Professor Anind Dey)

Since its introduction in 1996, the use of instant messenger (IM) tools has grown exponentially. IM supports two key human needs: presence information and instant communication. However, IM users can meet these needs only when their peers have the technology available. Limiting factors include peers who do not have computers, peers who do not have a continuous connection at home, and peers who have a continuous connection but are not sitting in front of their computers at all times. Our goal is to provide people with access to awareness information and instant messages via tangible objects: devices that users can carry with them while mobile, or can situate throughout their environment to provide information as users move within it. We are investigating a number of display techniques for these devices, including fabric and paint that change color when exposed to heat or light, physical objects with moving parts, and tiny displays.

1Undergraduate (EECS)
2Undergraduate (EECS)
3Undergraduate (EECS)

More information: http://www.cs.berkeley.edu/~dey
Send mail to the author: dey@eecs.berkeley.edu

Prototyping Tools for Context-Aware Applications

Tim Sohn1 and Alan Newberger
(Professor Anind Dey)
(NSF) IIS-0205644

The emergence of context-aware applications, those that take into account the context of the user and the environment, has demonstrated the potential for rich interaction with the surrounding environment. Such applications prove challenging to construct and control, both for users and for programmers. While a deployed application may take advantage of real-time data sensed in the environment, such data is often not available during development. It can be cumbersome and error-prone for a programmer to manually manage the access and retrieval of available data. At run time it can be unclear how an application is configured, and it is usually difficult or impossible to change that configuration.

We are investigating approaches to mitigate these problems in context-aware application development and usage. Our objective is to establish a working and usable environment to support programmers as they build and prototype context-aware applications. This would involve developing new interaction techniques for developing applications either through a graphical user interface (GUI) or other tangible means, and an interface into a real or simulated system.

Existing low-level infrastructures promote reuse of sensors and provide heterogeneous access to environment data [1]. Building on this research, we are constructing prototyping tools that allow developers to rapidly build and simulate context-aware applications with or without access to an actual instrumented environment. A prototyping system must address many technical challenges, ranging from a specification language and interface methods to distributed computing issues and exposing the functionality of a complex sensor-based system to programmers and end-users. The tools we are constructing integrate with rule-based primitives that manage all interaction with low-level infrastructures on behalf of an application. Rules may be inspected and modified at run time to provide feedback on and control of applications, again managed on the applications' behalf with no explicit additional programming.
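
A schematic Java sketch of the rule-based primitive idea follows; the names and API are hypothetical, not the system's actual interface. Rules are condition-action pairs that sit between the application and the sensing infrastructure, and because they are ordinary data they can be listed, inspected, and swapped at run time.

    import java.util.*;
    import java.util.function.Predicate;

    class Rule {
        final String description;
        final Predicate<Map<String, String>> condition;
        final Runnable action;
        Rule(String d, Predicate<Map<String, String>> c, Runnable a) {
            description = d; condition = c; action = a;
        }
    }

    public class RuleEngineSketch {
        static final List<Rule> rules = new ArrayList<>();

        // Called by the infrastructure whenever sensed context changes.
        static void onContextEvent(Map<String, String> context) {
            for (Rule r : rules)
                if (r.condition.test(context)) r.action.run();
        }

        public static void main(String[] args) {
            rules.add(new Rule("mute phone in meetings",
                ctx -> "meeting-room".equals(ctx.get("location")),
                () -> System.out.println("phone muted")));
            // Run-time inspection: show users why the application behaves as it does.
            rules.forEach(r -> System.out.println("active rule: " + r.description));
            onContextEvent(Map.of("location", "meeting-room"));
        }
    }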

Our goal is to provide high-level primitives in context-aware infrastructures that will make applications easier to build, maintain, and modify by both programmers and end-users.

[1]
A. K. Dey, D. Salber, and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications," Human-Computer Interaction Journal, Vol. 16, No. 2-4, 2001.
1Undergraduate (EECS)

More information: http://www.cs.berkeley.edu/~dey/context.html
Send mail to the author: tsohn@eecs.berkeley.edu

Flexible Searching Using Faceted, Hierarchical Metadata

Ka-Ping Yee, Kirsten Swearingen1, Kevin Chen, Kevin Li2, Paul Daniell, and Brycen Chun3
(Professor Marti A. Hearst --SIMS)
(NSF/CAREER) IIS-9984741

The FLAMENCO project (FLexible information Access using MEtadata in Novel Combinations) is exploring a new method for searching large online collections. Our goal is to design a search interface that supports task-oriented search and offers users a "browsing the shelves" experience. The Flamenco interface integrates search and browsing, uses query previews to guide users, dynamically presents metadata to organize the search results and suggest next steps, and offers multiple methods for expanding and refining a search.
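
The query-preview idea can be made concrete with a short sketch: for the current result set, count how many items fall under each metadata facet value, so that every refinement link can be shown with the number of results it leads to. The Java below uses invented data and is only illustrative of the mechanism, not Flamenco's implementation.

    import java.util.*;

    public class FacetPreviewSketch {
        record Item(String name, Map<String, String> facets) { }

        // For each facet, count the items in the result set under each value.
        static Map<String, Map<String, Integer>> previews(List<Item> results) {
            Map<String, Map<String, Integer>> counts = new TreeMap<>();
            for (Item item : results)
                item.facets().forEach((facet, value) ->
                    counts.computeIfAbsent(facet, f -> new TreeMap<>())
                          .merge(value, 1, Integer::sum));
            return counts;
        }

        public static void main(String[] args) {
            List<Item> results = List.of(
                new Item("Palais Garnier", Map.of("location", "Paris", "type", "opera house")),
                new Item("Ryoan-ji", Map.of("location", "Kyoto", "type", "garden")),
                new Item("Villandry", Map.of("location", "Loire", "type", "garden")));
            // {location={Kyoto=1, Loire=1, Paris=1}, type={garden=2, opera house=1}}
            System.out.println(previews(results));
        }
    }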

We have conducted a series of usability studies, which have helped refine the interface design and demonstrated that users strongly prefer the metadata-based approach over a baseline, Google-style image search interface.

Our methods and tools may be applied to any collection of images or text documents that has been classified using faceted metadata. So far we have applied this approach to architecture images, plant and animal images, fine arts images, Epinions product information, a portion of the MEDLINE medical texts, and OCR'd tobacco industry documents.

The FLAMENCO group is currently developing methods to optimize the system's speed, offer users search history and personalization features, and enable the addition and classification of new items in a collection.


Figure 1: The Flamenco image browser. Categories available within the current result set are displayed on the left. This image set is constrained by location and structure type, and grouped by style.

Figure 2: Search results for "garden." The top portion of the screen is devoted to presenting the categories that produced the result set displayed on the lower portion of the screen. Users have the option of disambiguating their text query, or exploring all the results that contain their search term.

Figure 3: View of an individual item, with contextualized links for expanding the query in several conceptual directions, to see all "entrance spaces" or all "opera houses," or, more broadly, all buildings related to the "performing arts."

[1]
M. Hearst, J. English, R. Sinha, K. Swearingen, and P. Yee, "Finding the Flow in Website Search," Communications of the ACM, Vol. 45, No. 9, September 2002.
[2]
K.-P. Yee, K. Swearingen, K. Li, and M. Hearst, "Faceted Metadata for Image Search and Browsing" (submitted).
[3]
M. Hearst, "Next Generation Web Search: Setting Our Sites," IEEE Data Engineering Bulletin, Vol. 23, No. 3, September 2000.
[4]
A. Elliott, "Flamenco Image Browser: Using Metadata to Improve Image Search During Architectural Design," Proc. ACM CHI Conf. Companion, Seattle, WA, April 2001.
1Staff
2Undergraduate (EECS)
3Undergraduate (EECS)

More information: http://flamenco.sims.berkeley.edu
Send mail to the author: kirstens@sims.berkeley.edu

Peephole Displays: Handheld Computers as Virtual Windows

Ka-Ping Yee
(Professor Marti A. Hearst --SIMS)
(NSF) 9984741 and (NSF) EIA-0122599

The small size of handheld computers makes them conveniently mobile, but limits the amount of information that can be shown on their screens. This work introduces "peephole displays," an interaction metaphor in which the handheld computer is a movable window on a larger virtual workspace anchored to the user’s physical reference frame. Peephole displays enable new forms of two-handed interaction for simultaneously navigating and manipulating information, including the ability to create and edit objects larger than the screen and the ability to drag and drop in three dimensions. I developed four iterations of the peephole hardware, and built and tested several peephole-augmented applications including a drawing program, a map viewer, and a calendar. A user study of 24 participants shows that the peephole technique can be more effective than current methods for navigating large information spaces on handheld computers.
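
The core of the technique is a direct mapping from the tracked physical position of the device to the portion of the virtual workspace that is drawn, so the workspace appears anchored to the world while the device moves. The Java sketch below, with assumed screen dimensions and tracker units, illustrates that mapping; it is a simplification, not the actual system.

    public class PeepholeViewport {
        static final int SCREEN_W = 160, SCREEN_H = 160;   // handheld screen, px

        // Device displacement (mm, from the tracker) -> workspace window origin.
        static int[] visibleOrigin(double deviceXmm, double deviceYmm, double pxPerMm) {
            return new int[] { (int) (deviceXmm * pxPerMm), (int) (deviceYmm * pxPerMm) };
        }

        public static void main(String[] args) {
            int[] o = visibleOrigin(30, -12, 4.0);  // moved right 30 mm, up 12 mm
            System.out.printf("draw workspace region (%d,%d)-(%d,%d)%n",
                o[0], o[1], o[0] + SCREEN_W, o[1] + SCREEN_H);
        }
    }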


Figure 1: Viewing a map. These images were made by blending two photographs taken from the same viewpoint. The position of the device is tracked and the display scrolls to produce the illusion of a movable view on a large street map floating in space. Notice how Gravier St., visible in both views, maintains a fixed position with respect to the outside world.

Figure 2: Note-taking on a large workspace. By using both hands together, a user can continue to write beyond the bounds of the screen.

Figure 3: Drawing on a large workspace. By using both hands together, a user can draw a figure larger than the screen in a single, natural stroke.

Figure 4: Viewing a calendar. Conventional PDA calendar programs have a day view, a week view, and a month view; the peephole calendar combines the strengths of all three into a single modeless view. This combined view is shown on the lower plane and an overview of the entire year is shown on an upper plane. The user can lift the display to switch to the year view and thereby navigate to a different month.

Figure 5: 3D drag-and-drop. The clipboard plane is located in space above the drawing plane. To transfer items to and from the clipboard, the user simply picks an item and lifts or lowers the display. Unlike conventional clipboards, this clipboard can contain more than one item and permits its contents to be viewed.

More information: http://zesty.ca/
Send mail to the author: pingster@cs.berkeley.edu

User Interaction Design for Secure Systems

Ka-Ping Yee
(Professor Marti A. Hearst --SIMS)

The security of any system that is configured or operated by human beings depends on the information conveyed by the user interface, the decisions of the users, and the interpretation of their actions. This work establishes some starting points for reasoning about security from a user-centered perspective: it proposes to model systems in terms of actors and actions, and introduces the concept of the subjective actor-ability state. Ten principles for secure interaction design are identified; examples of real-world problems illustrate and justify the principles.

The results of this work come from discussing design challenges and user experiences at length with designers and users of software intended to be secure. After much debate and several iterations of refinement, we have formed the following set of design principles:

  1. Path of least resistance: the most natural way to do any task should also be the most secure way.
  2. Appropriate boundaries: the interface should expose, and the system should enforce, distinctions between objects and between actions along boundaries that matter to the user.
  3. Explicit authorization: a user's authorities must only be provided to other actors as a result of an explicit user action that is understood to imply granting.
  4. Visibility: the interface should allow the user to easily review any active actors and authority relationships that would affect security-relevant decisions.
  5. Revocability: the interface should allow the user to easily revoke authorities that the user has granted, wherever revocation is possible.
  6. Expected ability: the interface must not give the user the impression that it is possible to do something that cannot actually be done.
  7. Trusted path: the interface must provide an unspoofable and faithful communication channel between the user and any entity trusted to manipulate authorities on the user's behalf.
  8. Identifiability: the interface should enforce that distinct objects and distinct actions have unspoofably identifiable and distinguishable representations.
  9. Expressiveness: the interface should provide enough expressive power to (a) describe a safe security policy without undue difficulty and (b) allow users to express security policies in terms that fit their goals.
  10. Clarity: the effect of any security-relevant action must be clearly apparent to the user before the action is taken.


More information: http://zesty.ca/
Send mail to the author: pingster@cs.berkeley.edu

From Data to Display: The Design and Evaluation of a Peripheral Sound Display for the Deaf

Wai-ling Ho-Ching
(Professors James A. Landay and Jennifer Mankoff)

In 1997 there were 227,000 deaf people in the US who could not use auditory sensing to gain awareness of sound. Instead, they rely on alternative techniques, such as sensing vibrations and watching flashing lights, to substitute for hearing sound in the workplace. However, there remains a gap between the experience of a hearing individual and that of a deaf person. Our work describes the design and evaluation of a peripheral display that provides the deaf with awareness of sound in an office environment, helping to close that gap. Conceptual drawings of sound by hearing participants, exploration with paper prototypes, interviews, and surveys formed the basis for our current design.

We implemented the two prototypes shown in Figures 1 and 2. One is based on a spectrograph, a tool commonly used by speech therapists, that represents the pitch and intensity of sound over time. The other depicts position and amplitude over time. We evaluated them in a dual-task experiment with eight deaf participants and found that they were able to peripherally identify notification sounds, such as a door knock or telephone ring, with both systems while performing a visual primary task. Participants had significantly higher identification rates with the visualization that represented position. Neither visualization caused a significant amount of distraction in terms of performance on the primary task. This work [1] has been received with much enthusiasm by members of the deaf community and may ultimately result in a system that better supports sound awareness for the deaf in situations of fixed visual focus.


Figure 1: A cellular phone ring as represented by our spectrograph visualization. In this visualization, height is mapped to frequency, color to intensity (blue = quiet; red = loud). The temporal aspect is depicted by having the visualization animate from right to left. A cellular phone ring is recognizable by a regular frequency amplitude pattern. This is typical of mechanical sounds.

Figure 2: A cellular phone ring as represented by our ripples visualization. A top view map of the room appears in white. The rings denote the position of a sound source in a room. The size and color of rings indicate the amplitude of the sound. Frequency does not appear in this visualization. A user can infer a sound source from its location. In this case, the participant was told the phone was on the desk. Thus, a sound coming from the desk would probably be the phone.

[1]
F. W. Ho-Ching, J. Mankoff, and J. A. Landay, "From Data to Display: The Design and Evaluation of a Peripheral Sound Display for the Deaf," CHI (submitted). Also, UC Berkeley Computer Science Division Report No. UCB/CSD 02/1204, October 2002. Available online: http://www.cs.berkeley.edu/~wai-ling/pubs/chi2003long-submitted.pdf.

More information: http://guir.berkeley.edu/projects/ic2hear
Send mail to the author: wai-ling@cs.berkeley.edu

Multimodal, Multi-Device Prototyping Using Programming by Illustration

Anoop Sinha
(Professor James A. Landay)
NSF Graduate Fellowship and (NSF) 9985111

User interface designers are increasingly faced with the challenge of targeting multi-device, multimodal applications, but do not have tools to support them. This work proposes an informal prototyping tool, named CrossWeaver, which implements the programming by illustration (PBI) technique, enabling non-programmer designers to build multimodal, multi-device user interface prototypes, test those prototypes with end users, and collect valuable feedback informing iterative design.

PBI is a technique for user interface prototyping that involves building executable prototypes from example sketches. PBI has its origin in the informal interface approach [1], supporting natural human input, such as sketching, while minimizing recognition and transformation of the input. PBI also uses programming by demonstration techniques [2], enabling a working application to be built by an end-user based on concrete examples, in this case design sketches. CrossWeaver extends informal user interface and programming by demonstration research to multimodal, multi-device applications, enabling a designer to create and test a multi-device, multimodal prototype from a set of example-sketched storyboards.


Figure 1: Screenshot of the initial CrossWeaver prototype

[1]
J. A. Landay and B. A. Myers, "Sketching Interfaces: Toward More Human Interface Design," IEEE Computer, Vol. 34, No. 3, 2001.
[2]
A. Cypher, ed., "Watch What I Do: Programming by Demonstration," D. C. Halbert et al., ed., MIT Press, Cambridge, MA, 1993.

More information: http://guir.berkeley.edu/projects/crossweaver/
Send mail to the author: aks@eecs.berkeley.edu

Damask: Supporting Early-Stage Multi-Device UI Design Using Patterns

James Lin
(Professor James A. Landay)
(NSF) 9985111

People often use a variety of computing devices, such as PCs, PDAs, and cell phones, to access the same information. The user interface to this information needs to be different for each device, due to different input and output constraints. Currently, designers of such multi-device user interfaces either have to design a UI separately for each device, which is time consuming, or use a program to generate the interfaces automatically, which often results in interfaces that are awkward.

We are creating a system called Damask [1] to better support multi-device UI design. With Damask, the designer will design a UI for one device by sketching the design and by specifying which design patterns the interface uses. The patterns will help Damask generate user interfaces optimized for the other target devices. The generated interfaces will be of sufficient quality so that it will be more convenient to use Damask than to design each of the other interfaces separately, and the ease with which designers will be able to create designs will encourage them to engage in iterative design.


Figure 1: Damask's proposed user interface

[1]
J. Lin and J. A. Landay, "Damask: A Tool for Early-Stage Design and Prototyping of Multi-Device User Interfaces," Int. Conf. Distributed Multimedia Systems Workshop on Visual Computing, San Francisco, CA, September 2002.

More information: http://guir.berkeley.edu/projects/damask/
Send mail to the author: jimlin@eecs.berkeley.edu

DENIM: Finding a Tighter Fit between Tools and Practice for Web Site Design

James Lin, Mark Newman, Yang Li1, and Marc Ringuette2
(Professor James A. Landay)

We conducted an ethnographic study [1] in which we observed and interviewed several professional web designers. This study showed that the process of designing a web site involves an iterative progression from less detailed to more detailed representations of the site. For example, designers often create site maps early in the process, which are high-level representations of a site in which each page or set of pages is depicted as a label. They then proceed to create storyboards of interaction sequences, which employ minimal page-level detail and focus instead on the navigational elements required to get from one page to another. Later still, designers create schematics and mock-ups, which are different representations of individual pages.

These were the primary observations that led to the design and implementation of DENIM [2], a system to assist web designers in the early stages of information, navigation, and interaction design. DENIM is an informal pen-based system that allows designers to quickly sketch web pages, create links among them, and interact with them in a run mode. The different ways of viewing a web site, from site map to storyboard to individual pages, are integrated through the use of zooming.

More information is available through the Group for User Interface Research web site at http://guir.berkeley.edu.


Figure 1: The DENIM system

[1]
M. W. Newman and J. A. Landay, "Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice," Designing Interactive Systems, New York, NY, August 2000.
[2]
J. Lin, M. W. Newman, J. I. Hong, and J. A. Landay, "DENIM: Finding a Tighter Fit between Tools and Practice for Web Site Design," CHI Letters: Human Factors in Computing Systems, The Hague, The Netherlands, April 2000.
1Postdoctoral Researcher
2Staff

More information: http://guir.berkeley.edu/projects/denim/
Send mail to the author: jimlin@eecs.berkeley.edu

Privacy-Sensitive Infrastructure Support for Context-Awareness

Jason I. Hong, Chris Beckmann, Jeff Heer, Xiaodong Jiang, and Alan Newberger
(Professor James A. Landay)
(NSF) IIS-0205644

Context-aware applications are computer systems that make use of implicitly gathered information, such as a person's identity, location, and activity. This is in contrast to traditional computer systems that require explicit user interaction for all input.

This work is addressing two different but related problems. The first is organizing and managing the sensors, data, and services in a meaningful way. The second is doing all of this in a privacy-sensitive manner that provides end-users with greater control and feedback over what information is being collected about them and how that information is being used.

The main abstraction we are developing is the InfoSpace. InfoSpaces are repositories of context information designed to be analogous to web sites: in the same way that many people create and manage personal web sites, they would create and manage personal InfoSpaces. While a person would have only one logical InfoSpace, it may physically reside on multiple devices, providing high availability even when the person is mobile.

However, systems that collect highly personal information like this are always strongly criticized because of potential privacy threats. To address these legitimate concerns, we are integrating several privacy mechanisms, including basic access control to limit queries, the option to return intentionally ambiguous results, privacy tags for specifying privacy preferences on data that flows from one InfoSpace to another, and user interfaces for helping end-users understand who has been accessing their data.
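
The sketch below illustrates two of these mechanisms, basic access control and intentionally ambiguous results, in schematic Java. The tag structure and all names are our own invention for illustration, not the InfoSpace implementation.

    import java.util.Set;

    public class InfoSpaceSketch {
        record Tagged(String value, Set<String> allowedSpaces, boolean blurOthers) { }

        static String query(Tagged datum, String requestingSpace) {
            if (datum.allowedSpaces().contains(requestingSpace))
                return datum.value();                  // full-precision disclosure
            if (datum.blurOthers())
                return coarsen(datum.value());         // intentionally ambiguous result
            return null;                               // access denied outright
        }

        static String coarsen(String room) {           // "Soda 525" -> "Soda"
            return room.split(" ")[0];
        }

        public static void main(String[] args) {
            Tagged myLocation = new Tagged("Soda 525", Set.of("family"), true);
            System.out.println(query(myLocation, "family"));     // Soda 525
            System.out.println(query(myLocation, "advertiser")); // Soda
        }
    }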


More information: http://guir.berkeley.edu/projects/cfabric/
Send mail to the author: jasonh@eecs.berkeley.edu

The Designers’ Outpost: A Tangible Interface for Collaborative Web Site Design

Katherine Everitt, Scott Klemmer, and Robert Lee1
(Professor James A. Landay)
(NSF) IIS-0084367

In our previous studies into web design [1], we found that pens, paper, walls, and tables were often used for explaining, developing, and communicating ideas during the early phases of design. These wall-scale paper-based design practices inspired The Designers’ Outpost [2], a tangible user interface that combines the affordances of paper and large physical workspaces with the advantages of electronic media to support information design. With Outpost, users collaboratively author web site information architectures on an electronic whiteboard using physical media (post-it notes and images), structuring and annotating that information with electronic pens. This interaction is enabled by a touch-sensitive SMART board augmented with a rear-mounted video camera for capturing movement and a front-mounted high-resolution camera for capturing ink.

The electronic representation gives us three main advantages: the ability to support fluid transitions to other tools, such as DENIM [3], support for history [4], and remote collaboration [5].

We have recently developed a remote collaboration system [5] based on The Designers’ Outpost. The system provides a distributed shared workspace that employs physical post-it notes as interaction primitives. We implement and evaluate two mechanisms for awareness: transient ink input for gestures and a blue shadow of the remote collaborator for presence.


Figure 1: Users collaborate remotely using physical artifacts. Notes that are digital on this board correspond to electronic notes in Figure 2.

Figure 2: Notes on this board are electronic versions of the physical notes in Figure 1.

[1]
M. W. Newman and J. A. Landay, "Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice," Proc. Designing Interactive Systems, New York, NY, August 2000.
[2]
S. R. Klemmer, M. W. Newman, R. Farrell, M. Bilezikjian, and J. A. Landay, "The Designers’ Outpost: A Tangible Interface for Collaborative Web Site Design," ACM Symp. User Interface Software and Technology, CHI Letters, Vol. 3, No. 2, 2001.
[3]
J. Lin, M. W. Newman, J. I. Hong, and J. A. Landay, "DENIM: Finding a Tighter Fit between Tools and Practice for Web Site Design," CHI Human Factors in Computing Systems, CHI Letters, Vol. 2, No. 1, 2000.
[4]
S. R. Klemmer, M. Thomsen, E. Phelps-Goodman, and J. A. Landay, Where Do Web Sites Come From? Capturing and Interacting with Design History, UC Berkeley Computer Science Division, Report No. UCB/CSD 01/1157, October 2001.
[5]
K. M. Everitt, S. R. Klemmer, R. Lee, and J. A. Landay, "Two Worlds Apart: Bridging the Gap Between Physical and Virtual Media for Distributed Design Collaboration," CHI, 2003 (submitted). Also, UC Berkeley Computer Science Division Report No. UCB/CSD 02/1201, 2002.
1Undergraduate (EECS)

More information: http://guir.berkeley.edu/outpost/
Send mail to the author: everitt@eecs.berkeley.edu

Books with Voices: Paper Transcripts as a Tangible Interface to Oral Histories

Scott R. Klemmer, Jamey Graham1, and Gregory J. Wolff2
(Professor James A. Landay)

Our contextual inquiry into the practices of oral historians unearthed a curious incongruity: while oral historians consider interview recordings to be a central historical artifact, these recordings sit unused after a written transcript is produced. We hypothesized that this is largely because books are more usable than recordings, so we created Books with Voices [1]: barcode-augmented paper transcripts enabling fast, random access to digital video interviews on a PDA. We present quantitative results of an evaluation of this tangible interface with 13 participants. They found this lightweight, structured access to original recordings to be useful, offering substantial benefits with minimal overhead. Oral historians found a level of emotion in the video not available in the printed transcript. The video also helped readers clarify the text and observe nonverbal cues.
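
At its core, such a system needs a lookup from each printed barcode to a location in the recorded interview. The Java sketch below is our own schematic guess at that lookup, with invented identifiers and data, not the actual Books with Voices implementation.

    import java.util.Map;

    public class TranscriptBarcodeSketch {
        // barcode payload -> { video file, start offset in seconds } (invented data)
        static final Map<String, String[]> INDEX = Map.of(
            "seq-0042", new String[] { "interview-1.mpg", "754" });

        // Invoked when the reader scans the barcode printed beside a paragraph.
        static void onScan(String barcode) {
            String[] target = INDEX.get(barcode);
            if (target != null)
                System.out.println("seek " + target[0] + " to " + target[1] + " s");
        }

        public static void main(String[] args) {
            onScan("seq-0042");
        }
    }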


Figure 1: Accessing digital video by scanning transcripts

Figure 2: PDA video display of oral histories

Figure 3: Augmented paper transcripts produced by Books with Voices; from an oral history with Professor Carlo Séquin

[1]
S. R. Klemmer, J. Graham, G. J. Wolff, and J. A. Landay, Books with Voices: Paper Transcripts as a Tangible Interface to Oral Histories, UC Berkeley Computer Science Division, Report No. UCB/CSD 02/1199, September 2002.
[2]
J. M. Graham and J. J. Hull, "Video Paper: A Paper-based Interface for Skimming and Watching Video," Int. Conf. Consumer Electronics, Los Angeles, CA, July 2002.
1Ricoh Innovations, Inc.
2Ricoh Innovations, Inc.

More information: http://guir.berkeley.edu/oral-history
Send mail to the author: srk@cs.berkeley.edu

TALK: Technology Advancing Living

Holly Fait, Carol Pai1, and Tony Lai2
(Professor Jennifer Mankoff)
(NSF) IIS-0205644 and (NSF) 020921

Accessibility of technology for persons with disabilities is a significant challenge facing design engineers. Disabled users may have vision, speech, motor, or cognitive impairments that require special hardware and software to make their computers more accessible. The TALK project focuses on accessible technologies for persons with motor and speech impairments.

TALK comprises a web accessibility project and a word prediction project. The web accessibility project aims to allow users with only single-switch input to navigate the web and to take advantage of context when filling in web forms. The word prediction project evaluates the performance of word prediction, character prediction, and abbreviation expansion techniques through user testing. This portion of the project also looks at how communication occurs for persons with both speech and motor impairments, and aims to determine where, and with what technology, that communication could be improved. We are also continuing the work of the Augmented Wheelchair project by examining how context-aware computing can support other aspects of the daily lives of wheelchair users [1].

[1]
A. Dey, J. Mankoff, G. Abowd, and S. Carter, "Distributed Mediation of Ambiguous Context in Aware Environments," User Interface Software and Technology, Paris, France, October 2002.
[2]
J. Mankoff, A. Dey, U. Batra, and M. Moore, "Web Accessibility for Low Bandwidth Input," ASSETS Int. Conf. Assistive Technologies, Edinburgh, Scotland, July 2002.
[3]
M. Y. Ivory, J. Mankoff, and A. Le, "Using Automated Tools to Improve Web Site Usage by Users with Diverse Abilities," Information Technology and Society (to appear).
1Undergraduate (EECS)
2Undergraduate (EECS)

More information: http://guir.berkeley.edu/projects/
Send mail to the author: hfait@eecs.berkeley.edu

Representing and Supporting Action on Buried Relationships in Smart Environments

Scott Carter and Mimi Yang1
(Professor Jennifer Mankoff)
Hewlett-Packard

We are interested in encouraging conversation by providing a means for people to discover mutual interests. Conversations engender knowledge of one's community, which in turn encourages collaboration and social awareness. To support these broad goals, we have designed a system that reveals implicit relationships among people cohabiting an environment equipped with ubiquitous sensors and displays [1]. Sensors in this environment track people's interactions with documents, places, and other people. Another component analyzes this contextual information to discover specific relationships between people. To present the relationships it finds, we employ a composite system integrating a public ambient display, which provides aggregate, abstract information, with a PDA display, which shows more specific information (Figures 1 and 2). The public ambient display notifies users in the space of the existence of relationships, and the PDA supports inquiry and communication.

We are deploying this system to several spaces and are evaluating its impact. Before deploying the system, we use interviews and contextual inquiries to gauge the communication processes in deployment spaces. Then, during deployment, we use surveys and direct observation to discover how the system changes group communication.


Figure 1: An ambient public display showing files (left), people (center), and places (right) of mutual interest

Figure 2: A PDA display showing a list of related people

[1]
S. Carter, J. Mankoff, and P. Goddi, "Representing and Supporting Action on Buried Relationships in Smart Environments," Conf. Computer Supported Cooperative Work, New Orleans, LA, November 2002.
1Undergraduate (EECS)

More information: http://www.madpickle.com/scott/hebb/
Send mail to the author: sacarter@cs.berkeley.edu

Supporting Shopper Nutrition

Tu Tran1, Doris Lin2, Danqing Wu3, Eric Park4, and Gary Hsieh5
(Professor Jennifer Mankoff)

Nutrition has a major impact on health, including on diseases such as heart disease, osteoporosis, and cancer. Our work is designed to help people keep track of the nutritional content of the foods they have eaten. We use shopping receipts to generate suggestions about healthier food items that could help to supplement missing nutrients: our application, based on shopping receipt data, provides access to ambiguous suggestions for more nutritious purchases.

Our goal is to contribute a better understanding of how a sensor-based application can be integrated in everyday life. To do this, we chose an approach that can easily be replicated for many users, deployed, and tested for months at a time. We are currently in the process of conducting a diary study that can provide data on which we can train our prediction algorithms. We conducted a formative user study that suggested that receipts may provide enough information to extend our work by also estimating what people are actually eating, as opposed to simply what they are purchasing. We are also interviewing and observing people's shopping and food managing habits to further inform the system design.

[1]
J. Mankoff, G. Hsieh, H. C. Hung, S. Lee, and E. Nitao, "Using Low-Cost Sensing to Support Nutritional Awareness," Proc. Ubicomp, Goteborg, Sweden, October 2002.
1Graduate Student (non-EECS), Mills College
2Undergraduate (EECS)
3Undergraduate (EECS)
4Undergraduate (EECS)
5Undergraduate (EECS)

More information: http://www.cs.berkeley.edu/projects/jmankoff/nutrition/
Send mail to the author: jmankoff@eecs.berkeley.edu

Prosody-based Automatic Detection of Annoyance and Frustration in Human-Computer Dialog

Jeremy Ang, Elizabeth Shriberg1, and Andreas Stolcke2
(Professor Nelson H. Morgan)
(DARPA) ROAR N66001-99-D-8504, DARPA Communicator Project at ICSI and University of Washington, (NASA) NCC 2-1256, and (NSF) IRI-9619921

We investigate the use of prosody for the detection of frustration and annoyance in natural human-computer dialog. In addition to prosodic features, we examine the contribution of language model information and speaking "style." Results show that a prosodic model can predict whether an utterance is neutral versus "annoyed or frustrated" with an accuracy on par with that of human interlabeler agreement. Accuracy increases when discriminating only "frustrated" from other utterances, and when using only those utterances on which labelers originally agreed. Furthermore, prosodic model accuracy degrades only slightly when using recognized versus true words. Language model features, even if based on true words, are relatively poor predictors of frustration. Finally, we find that hyperarticulation is not a good predictor of emotion; the two phenomena often occur independently.

1Staff, ICSI, SRI International
2Staff, ICSI, SRI International

More information: http://www.icsi.berkeley.edu/~jca
Send mail to the author: jca@eecs.berkeley.edu