Care and Feeding of the Internal Model

Brent Gillespie

U. of Michigan

Abstract

When considering how to build a machine that learns, a reasonable starting point is to consider how the human brain and body learn, especially how they learn to solve motor tasks. We have been conducting human subject studies using haptic interfaces to virtual environments to understand how the central nervous system uses sensory input and past experience to solve manipulation tasks, and whether internal models might be involved. We ask subjects to drive resonant systems, to balance underactuated and unstable systems, to anticipate changes in load while lifting objects, and to throw virtual balls at targets. We meter the visual and haptic feedback, use covert condition changes to probe dependence on expectations, and occasionally back-drive the human hand and arm to determine driving-point impedance. We have found evidence that internal models are used for the integration of visual and haptic feedback and for rapid tuning of parameters within a feedforward controller. We have also shown that training in one task can lead to performance improvements in parametrically related tasks even without specific practice. The most reasonable explanation for such an outcome is a mental model that is similarly parameterized; a memory map within the brain now seems a less likely explanation. Interestingly, the concept of a model computing somewhere inside the brain strikes most neuroscientists as ludicrous; nevertheless, the idea is taking hold in motor control. In addition to reviewing results from our lab in this talk, I will survey the field of human motor learning and attempt to extract implications for the field of machine learning.
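To make the phrase "rapid tuning of parameters within a feedforward controller" concrete, here is a minimal sketch, not the author's experimental code: an adaptive internal model of a point-mass load, where a single estimated parameter (the mass) is tuned trial by trial from sensory prediction error. All names, masses, and gains below are illustrative assumptions, not values from the studies described.

    # A minimal sketch (illustrative, not the author's experiments): a
    # feedforward controller wrapped around an adaptive internal model
    # of a point-mass load. The estimate m_hat, the learning gain, and
    # the lift task are all assumed for illustration.

    def simulate_lift(force, true_mass, duration=1.0):
        """Apply a constant force to a point mass; return final velocity."""
        return force * duration / true_mass  # v = F*t/m for a point mass

    def adapt_internal_model(true_mass=2.0, v_desired=1.0, trials=8, gain=0.5):
        """Tune the internal model's mass estimate from trial-by-trial error."""
        duration = 1.0   # seconds of force application per lift
        m_hat = 1.0      # internal model's initial guess at the load's mass
        for trial in range(trials):
            # Feedforward command from the internal (inverse) model:
            # F = m_hat * desired acceleration. No feedback during the lift.
            force = m_hat * v_desired / duration
            v_actual = simulate_lift(force, true_mass, duration)
            error = v_desired - v_actual   # sensory prediction error
            m_hat += gain * error          # rapid parameter tuning
            print(f"trial {trial}: m_hat={m_hat:.3f}, error={error:+.3f}")

    if __name__ == "__main__":
        adapt_internal_model()

With these illustrative numbers the mass estimate converges geometrically toward the true value within a few trials, echoing the kind of rapid, expectation-driven adaptation probed in the load-anticipation experiments above.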