:: projects :: (or see a selected list)

:: UC Berkeley CS
Enhancing Cross-Device Interaction Scripting with Interactive Illustrations
(2016)
DemoScript is a technique that automatically analyzes a cross-device interaction program while it is being written. It visually illustrates the step-by-step execution of a selected portion of the program, or of the entire program, with a novel, automatically generated cross-device storyboard visualization. In addition to helping developers understand the behavior of the program, DemoScript also allows developers to revise their program by interactively manipulating the cross-device storyboard.
  • CHI2016 full paper [pdf]
  • (Best Paper Award)
  • [Demo Video]
  • Intern work at Google Research
Weave: Scripting Cross-Device Wearable Interaction
(2015)
Weave is a framework for developers to create cross-device wearable interaction by scripting. Weave provides a set of high-level APIs, based on JavaScript, for developers to easily distribute UI output and combine sensing events and user input across mobile and wearable devices. Weave allows developers to focus on their target interaction behaviors and manipulate devices based on their capabilities and affordances rather than low-level specifications. Weave also contributes an integrated authoring environment for developers to program and test cross-device behaviors and, when ready, deploy these behaviors to its runtime environment on users’ ad-hoc network of devices.
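To make the idea concrete, here is a minimal, self-contained sketch of capability-based cross-device scripting in the spirit of Weave. All names (Device, makeDevice, select) and the event model are illustrative assumptions for this sketch, not the actual Weave API, which is JavaScript-based.

    // Illustrative sketch only; not the actual Weave API.
    interface Device {
      name: string;
      capabilities: Set<string>;                      // e.g. "shake", "display", "touch"
      show(message: string): void;                    // distribute UI output to this device
      on(event: string, handler: () => void): void;   // subscribe to a sensing/input event
      emit(event: string): void;                      // simulate a sensor event for testing
    }

    function makeDevice(name: string, caps: string[]): Device {
      const handlers = new Map<string, Array<() => void>>();
      return {
        name,
        capabilities: new Set(caps),
        show: (msg) => console.log(`[${name}] ${msg}`),
        on: (event, handler) => {
          handlers.set(event, [...(handlers.get(event) ?? []), handler]);
        },
        emit: (event) => (handlers.get(event) ?? []).forEach((h) => h()),
      };
    }

    // Select devices by capability instead of hard-coding a particular phone or watch.
    const devices = [
      makeDevice("watch", ["shake", "display"]),
      makeDevice("phone", ["touch", "display", "camera"]),
    ];
    const select = (cap: string) => devices.filter((d) => d.capabilities.has(cap));

    // "Script": shaking any wearable shows a notification on every display-capable device.
    select("shake").forEach((wearable) =>
      wearable.on("shake", () =>
        select("display").forEach((d) => d.show(`shake detected on ${wearable.name}`))));

    devices[0].emit("shake");   // prints on both the watch and the phone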
DemoWiz: Re-Performing Software Demonstrations for a Live Presentation
(2014)
DemoWiz is a system with a refined workflow that helps presenters capture software demonstrations, edit and rehearse them, and re-perform them for an engaging live presentation. DemoWiz visualizes input events and guides presenters to see what's coming up by overlaying visual annotations of events on the screencast recording where the events occur. It also provides lightweight editing for presenters to adjust video playback speed, pause frames, and add text notes.
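A minimal sketch of the anticipation idea: each logged input event becomes an overlay that appears slightly before the event happens in the recording, so the presenter can see what is coming. The event and overlay fields and the lead time are assumptions made for this sketch, not DemoWiz's actual design.

    // Illustrative sketch only; field names and lead time are assumptions.
    interface InputEvent { time: number; kind: string; x: number; y: number; }   // seconds, pixels
    interface Overlay { showAt: number; hideAt: number; label: string; x: number; y: number; }

    function buildOverlays(events: InputEvent[], leadSeconds = 1.5): Overlay[] {
      return events.map((e) => ({
        showAt: Math.max(0, e.time - leadSeconds),   // appear ahead of the recorded event
        hideAt: e.time,                              // disappear once the event plays
        label: e.kind,                               // e.g. "click", "drag", "keypress"
        x: e.x,
        y: e.y,
      }));
    }

    console.log(buildOverlays([{ time: 12.0, kind: "click", x: 640, y: 360 }]));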
  • CHI2014 full paper
  • [Demo Video]
  • Intern work at Microsoft Research
DemoCut: Generating Concise Instructional Videos for Physical Demonstrations
(2012-2013)
DemoCut is a semi-automatic video editing system that improves the quality of amateur instructional videos for physical tasks. DemoCut asks users to mark key moments in a recorded demonstration using a set of marker types derived from our formative study. Based on these markers, the system uses audio and video analysis to automatically organize the video into meaningful segments and apply appropriate video editing effects.
MixT: Automatic Generation of Step-by-Step Mixed Media Tutorials
(2011-2012)
MixT is a system that automatically generates step-by-step mixed media tutorials from user demonstrations. MixT segments screen-capture video into steps using logs of application commands and input events, applies video compositing techniques to focus on salient information, and highlights interactions through mouse trails.
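The core segmentation idea can be sketched as follows: each logged application command opens a tutorial step whose video clip runs until the next command. The log format and field names here are assumptions for illustration, not MixT's actual data model.

    // Illustrative sketch only; the log format is an assumption.
    interface CommandEvent { time: number; command: string; }   // seconds into the recording
    interface Step { command: string; start: number; end: number; }

    function segmentIntoSteps(log: CommandEvent[], videoLength: number): Step[] {
      const sorted = [...log].sort((a, b) => a.time - b.time);
      return sorted.map((e, i) => ({
        command: e.command,
        start: e.time,
        // each step's clip runs until the next command, or to the end of the video
        end: i + 1 < sorted.length ? sorted[i + 1].time : videoLength,
      }));
    }

    const steps = segmentIntoSteps(
      [{ time: 2.0, command: "Select Brush" },
       { time: 9.5, command: "Apply Filter > Blur" }],
      30);
    console.log(steps);   // one mixed-media step per command, each with its video span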
  • UIST2012 full paper [pdf] [slides]
  • CHI2012 wip paper [pdf]
  • [Demo Video]
  • Joint work with Adobe Creative Technologies Lab.
  • Teamwork with Sally Ahn (CS MS) and Amanda Ren (CS BS).
Kinectograph: Body-Tracking Camera Control for Demonstration Videos
(2012-2013)
A large community of users creates and shares how-to videos online. It is often difficult for the authors of these videos to control camera focus, view, and position while performing their physical tasks. Kinectograph is a recording device that automatically pans and tilts to follow specific body parts, e.g., hands, of a user in a video. It utilizes a Kinect depth sensor to track skeletal data and adjusts the camera angle via a 2D pan-tilt gimbal mount. Users control and configure Kinectograph through a tablet application with real-time video preview.
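A rough sketch of the follow-the-body idea: the tracked joint's offset from the image center feeds a simple proportional controller that nudges the pan-tilt mount. The gain, coordinate convention, and servo interface are assumptions for this sketch, not the actual Kinectograph implementation.

    // Illustrative sketch only; gains and coordinates are assumptions.
    interface Joint { x: number; y: number; }            // normalized image coords in [0, 1]
    interface Gimbal { panDeg: number; tiltDeg: number; }

    function followJoint(joint: Joint, gimbal: Gimbal, gain = 20): Gimbal {
      const errX = joint.x - 0.5;                        // how far the joint is from center
      const errY = joint.y - 0.5;
      return {
        panDeg: gimbal.panDeg + gain * errX,             // pan toward the joint horizontally
        tiltDeg: gimbal.tiltDeg - gain * errY,           // tilt toward the joint vertically
      };
    }

    let gimbal: Gimbal = { panDeg: 0, tiltDeg: 0 };
    gimbal = followJoint({ x: 0.8, y: 0.4 }, gimbal);    // hand tracked right of center
    console.log(gimbal);                                 // mount pans right and tilts up a bit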
  • CHI2013 wip paper [pdf]
  • Also shown at Maker Faire 2013.
  • [Demo Video]
  • Teamwork with Derrick Cheng (CS MS) and Taeil Kwak (iSchool MS).
:: MIT Media Lab
Raconteur: From Chat to Stories
(2009-2010)
People who are not professional storytellers usually have difficulty composing travel photos and videos into a coherent and engaging story. Put the same person in a conversation with a friend, however, and the story suddenly comes alive. Raconteur is a system for conversational storytelling that encourages people to make coherent points. It performs natural language processing in real time on a text chat between a storyteller and a viewer and recommends appropriate media items from a library. A large commonsense knowledge base and a novel commonsense inference technique are used to identify story patterns.
  • IUI2011 full paper [pdf] [slides]
  • CHI2011 note [pdf]
  • IUI2010 short paper [pdf]
  • M.S. thesis at MIT, September 2010
  • [Demo Video]
  • Press:
    - Global Views Monthly (遠見雜誌) vol. 289, July 2010, pp. 226-227 (in Chinese)
    - Ozsvald's blog (02/07/10) [link]
Designing Interactive Narrative for Children
(2009-2010)
Based on the story "Guess How Much I Love You" by S. McBratney and A. Jeram, this interactive narrative system aims to teach young children the concepts of measurement and comparison through the conversation between two characters. It explores the design space of enhancing interactive narrative with a commonsense knowledge base to understand players' intentions and dynamically generate relevant narration.
  • ELO2010 full paper [pdf]
Goal-Oriented Interfaces for Consumer Electronics
(2008)
Consumer electronics devices are becoming more complicated and are intimidating to users. These devices know nothing about everyday life or human goals, so they present menus and options that are irrelevant to what the user is trying to do. Roadie interprets the user's intentions using commonsense reasoning and helps the device display information relevant to reaching the user's goal. We connected the Roadie interface to real consumer electronics devices: a television, a set-top box, and a smartphone.
Burn Your Memory Away: One-time Use Video Capture and Storage Device to Encourage Memory Appreciation
(2009)
Modern ease of access to technology enables many of us to obsessively document our lives. However, much of the captured digital content is disregarded and forgotten on storage devices, with no concern for cost or decay. PY-ROM is a prototype design of a matchstick-like video recording and storage device that burns itself away after being used. It encourages designers to consider lifecycles and human-computer relationships by integrating physical properties into digitally augmented everyday objects.
  • CHI2009 alt.chi paper [pdf]
  • [Demo Video]
  • Press:
    - Technology Review blog (04/13/09) [web] [pdf]
    - ACM TechNews (04/15/09) [web] [pdf]
  • Teamwork with Xiao Xiao, Keywon Chung, and Carnaven Chiu.
Stress OutSourced: A Haptic Social Network via Crowdsourcing
(2009)
Stress OutSourced (SOS) is a peer-to-peer network that allows anonymous users to send each other therapeutic massages to relieve stress. By applying the emerging concept of crowdsourcing to haptic therapy, SOS brings physical and affective dimensions to our already networked lifestyle while preserving the privacy of its members.
  • CHI2009 alt.chi paper [pdf]
  • Also shown at SIGGRAPH 2010.
  • [Demo Video]
  • [Project page]
  • Press:
    - Fashioning Technology [web] (05/13/09)
    - Makezine blog [web] (05/17/09)
    - talk2myShirt [web] (05/26/09)
    - TechNewsDaily [link] via MSNBC [link] (07/27/10)
    - gizmag [link] (07/30/10)
    - Gizmodiva [link] (08/03/10)
  • Teamwork with Keywon Chung, Carnaven Chiu, and Xiao Xiao.
:: UbiComp Lab, National Taiwan University
Calorie-Aware Kitchen
(2006-2008)
During the cooking process, family cooks are often unaware of how many calories go into the meals they prepare. This work presents a smart kitchen that uses Ubicomp technology to improve home cooking by showing the calorie content of the ingredients used in a meal as it is prepared. With this information, family cooks can more effectively control the calories of a meal based on family needs. Our kitchen uses sensors to track the calories in food ingredients and provides real-time feedback on these values to users.
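The underlying accounting is simple: each weight-sensor reading for a recognized ingredient is converted to calories and added to a running total for the meal, which drives the real-time display. The calorie densities and reading format below are made-up examples for this sketch, not data from the actual kitchen.

    // Illustrative sketch only; densities and the reading format are assumptions.
    const kcalPer100g: Record<string, number> = { rice: 130, pork: 242, cabbage: 25 };

    interface ScaleReading { ingredient: string; grams: number; }   // from a weight sensor

    // Running total of calories for everything added to the meal so far.
    function mealCalories(readings: ScaleReading[]): number {
      return readings.reduce(
        (total, r) => total + (kcalPer100g[r.ingredient] ?? 0) * r.grams / 100, 0);
    }

    console.log(mealCalories([{ ingredient: "rice", grams: 300 },
                              { ingredient: "pork", grams: 150 }]));   // 753 kcal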
  • Persuasive 2008 paper [pdf]
  • CHI 2007 wip paper (People's Choice Award) [pdf]
  • UbiComp 2006 paper [pdf]
  • IEEE Pervasive Computing Magazine 2010 paper [pdf]
  • M.S. thesis 2008 [pdf] [slides]
  • [Demo Video]
  • [Cooking Video Example]
  • Press:
    - Serious Games Market [link]
    - Insight (台大智活) [link] (in Chinese)
    - Economic Daily News (經濟日報) [link] (in Chinese)
  • Teamwork with Jenhao Chen
Ubicomp Technologies for Play-Based Occupational Therapy
(2007-2008)
Ubicomp technologies can assist parents and occupational therapists in modifying behaviors in young children. In occupational therapy, an effective means of motivating behavior change in children is to design playful activities that leverage children's desire to play. By embedding digital technology into such activities, Ubicomp can enhance the effectiveness of play-based occupational therapy. We demonstrated two playful activity designs targeting slow eating and tooth-brushing behaviors in young children.
  • IEEE Pervasive Computing Magazine 2009 paper [pdf]
  • CHI2008 paper [pdf]
Designing Smart Everyday Objects
(2007)
We surveyed and classified smart everyday objects along two relations: between an object's new digital functions and its traditional functions, and between its new digital object-human interaction and its traditional object-human interaction. We then attempted to map out a design space for the digital functions and interactions of smart everyday objects.
  • HCI International 2007 paper [pdf]
  • Teamwork with Jenhao Chen
Being a Food Vendor in Nightmarket
(2006)
We developed a multi-modal, interactive role-playing game that lets people experience selling food in a disruptive night market environment.
  • Nightmarket2006 workshop demo
  • [Demo Video] by Unique Business News TV (非凡新聞台), Taiwan, June 2006.
  • Teamwork with Denny Tsai and Jack Lin