Research

I created this tag cloud using Wordle; the input comes from some of my recent papers.
My research goal is to build smart robots that operate autonomously in the real world. Ideally, a robot should:
- Adapt to novel situations. This is commonly referred to as closing the gap between research prototypes and real-world robotics.
- Plan its actions by making proper use of its skills to meet long-term goals.
- Be robust to failures and to the general unpredictability of the real world. Sensor fusion is a key component of such robustness.
I have used several robotics platforms for both navigation and manipulation tasks. My work has taken me to amazing places around the world, and even though I have been involved in many different projects, my goals have never changed.
I am currently working on bridging the gap between symbolic and continuous planning using discrete task decomposition and continuous optimization. I am also working on visual-haptic sensor fusion to allow a robot to choose the appropriate manipulation technique based on past haptic perception and current visual feedback.
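To give a flavor of the fusion idea, here is a minimal sketch (not the actual system): past haptic outcomes are stored keyed by visual features, and the technique whose stored examples best match the current view is chosen. All names, features, and the nearest-neighbor rule below are hypothetical.

```python
# Hypothetical sketch: choose a manipulation technique from current visual
# features using past haptic experience. Names and feature vectors are
# illustrative, not the actual system.
import numpy as np

class HapticMemory:
    """Stores (visual features, technique that worked) pairs from past trials."""
    def __init__(self):
        self.features = []    # visual feature vectors observed in the past
        self.techniques = []  # manipulation technique that succeeded each time

    def record(self, visual_features, technique):
        self.features.append(np.asarray(visual_features, dtype=float))
        self.techniques.append(technique)

    def choose_technique(self, current_view, k=3):
        """Pick the technique most common among the k visually closest trials."""
        dists = [np.linalg.norm(f - current_view) for f in self.features]
        nearest = np.argsort(dists)[:k]
        votes = [self.techniques[i] for i in nearest]
        return max(set(votes), key=votes.count)

memory = HapticMemory()
memory.record([0.9, 0.1], "pinch_grasp")   # e.g. small rigid object
memory.record([0.2, 0.8], "power_grasp")   # e.g. large deformable object
memory.record([0.85, 0.15], "pinch_grasp")
print(memory.choose_technique(np.array([0.8, 0.2])))  # -> "pinch_grasp"
```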
When I arrived at UC Berkeley I became involved in an exciting project to enable grounded language acquisition in a robot via visual and haptic interaction with the world. I coordinated the efforts of several groups at both UC Berkeley and the University of Pennsylvania to deploy a working demo on a PR2 robot that:
- Detects and recognizes objects in front of it.
- Infers spatial relations between objects (a toy sketch of this step follows the list).
- Performs grasping and placing actions.
- Uses tactile sensors to characterize objects with human-like adjectives.
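As an illustration of the spatial-relations step only (the demo's actual method is not shown here), coarse relations such as "left of" or "above" can be read off 2D bounding boxes. Everything below, including the detections, is hypothetical.

```python
# Illustrative sketch: inferring coarse spatial relations between two detected
# objects from their 2D bounding boxes (x, y, width, height). The actual PR2
# demo may use a different representation.

def center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def spatial_relation(box_a, box_b):
    """Return a coarse relation of object A with respect to object B."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = ax - bx, ay - by
    if abs(dx) > abs(dy):
        return "right of" if dx > 0 else "left of"
    # Image y grows downward, so a smaller y means higher in the scene.
    return "below" if dy > 0 else "above"

cup = (300, 120, 40, 60)    # hypothetical detections
plate = (100, 150, 120, 30)
print("cup is", spatial_relation(cup, plate), "plate")  # -> cup is right of plate
```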
A video of the demo is available on the videos page.
I worked on a project in compositional skill building, studying how a robot can autonomously create new skills or improve the ones it is already provided with. Drawing inspiration from software engineering practices, I developed an algorithm that allowed a robot to leverage existing, well-defined and tested modules to build skills of increasing complexity. The example scenario I used was a PR2 robot learning to move its base so that it could grasp an object. This is a challenging problem in that, when obstacles are considered, no closed-form mathematical solution exists, and common solutions involve expensive searches or stochastic approaches. Using object detection and motion planning as basic building blocks, the robot autonomously developed the capability to navigate a room-sized environment to reach and grasp a target object. As an extension of this work, I modified the algorithm to allow the robot to improve the previously developed skill, so that objects placed in hard-to-reach locations or even out of sight could successfully be reached.
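A minimal sketch of the compositional idea, assuming skills can be chained through a shared state: independently tested building blocks are composed into a higher-level skill. The module names and interfaces below are hypothetical stubs, not the actual PR2 code or my algorithm.

```python
# Hedged sketch: treat tested capabilities as building blocks and compose
# them into a higher-level skill. All modules are illustrative stubs.

def compose(*steps):
    """Chain skills: each step consumes and returns the shared state."""
    def skill(state):
        for step in steps:
            state = step(state)
        return state
    return skill

# Basic, independently tested building blocks (stubs for illustration).
def detect_object(state):
    state["target_pose"] = (2.0, 1.5)          # pretend the detector found it
    return state

def plan_base_motion(state):
    state["base_goal"] = state["target_pose"]  # drive the base near the target
    return state

def grasp(state):
    state["grasped"] = True
    return state

# A new skill built from existing blocks: navigate the room, then grasp.
navigate_and_grasp = compose(detect_object, plan_base_motion, grasp)
print(navigate_and_grasp({}))
```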
Previously, I investigated the conditions under which a robot exhibits emergent behaviors, i.e. behaviors that deviate from the programmed ones but give the robot unforeseen and useful capabilities. I used Kolmogorov's theory of algorithmic complexity as a tool to measure and study behavioral emergence, and I applied this theory to prove that complexity can improve the performance of an agent placed in a hostile and unpredictable environment.
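Kolmogorov complexity is uncomputable in general, so in practice it is approximated. A standard computable proxy (not necessarily the estimator used in this work) is the compressed length of a behavior trace:

```python
# Illustrative only: approximating the Kolmogorov complexity of a behavior
# trace by its compressed length. This standard proxy is not necessarily the
# estimator used in the original work.
import zlib

def complexity_estimate(actions):
    """Compressed size (bytes) of an action sequence: lower means more regular."""
    trace = ",".join(actions).encode("utf-8")
    return len(zlib.compress(trace, 9))

repetitive = ["forward"] * 50                            # highly regular behavior
varied = [f"turn_{i % 7}_{i % 11}" for i in range(50)]   # less regular behavior
print(complexity_estimate(repetitive), "<", complexity_estimate(varied))
```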
During my Ph.D. I developed algorithms to build effective indoor and outdoor robotic tour guides. Although robotic guides had been previously developed for indoor environments, little or no work had been done for outdoor environments. This proved to be a challenging problem, from both a perception and a control point of view. Together with my colleagues, I developed an adaptive algorithm that used stereo vision and image processing techniques to segment drivable terrain and to plan a path accordingly. This allowed our robotic tour guide to reliably navigate spaces where common range sensors could not detect obstacles (e.g. a pool or foliage). Since the robot was expected to operate in a large environment with poor wireless coverage, I had to program it to make the best use of its sensors without human intervention.
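To illustrate the general stereo-segmentation idea only (not our exact algorithm), one can compute a disparity map with a standard block matcher and mark as drivable the pixels whose disparity matches what a flat ground plane would produce; a pool or an obstacle deviates from that expectation. The matcher parameters, tolerance, and ground model below are illustrative.

```python
# Hedged sketch of stereo-based drivable-terrain segmentation; parameters and
# the flat-ground model are illustrative, not the tour guide's actual code.
import cv2
import numpy as np

def drivable_mask(left_gray, right_gray, ground_disparity_per_row, tolerance=3.0):
    """Flag pixels whose disparity matches the expected flat-ground disparity.

    ground_disparity_per_row: expected disparity of a flat ground plane at each
    image row (shape (H, 1)), precomputed from the camera geometry.
    """
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Pixels near the expected ground disparity are drivable; anything
    # protruding (obstacle) or missing (hole, water) deviates from it.
    return np.abs(disparity - ground_disparity_per_row) < tolerance
```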