Using Control Theory to Make Safety Guarantees About Learned Dynamics and Behaviors
Jeremy Gillula and Claire Tomlin
NDSEG Fellowship and National Science Foundation
For some time now, machine learning methods have been widely used in perception for autonomous robots. While there are many results describing the theoretical performance of machine learning techniques with regard to accuracy or convergence rates, relatively little work has been done on developing formal guarantees about their stability and robustness. As a result, many machine learning techniques are limited to situations where safety and robustness are not critical for success. (For example, a machine learning algorithm may learn to drive a car in a high-traffic area with excellent empirical results. However, without formal guarantees of stability and robustness, it would be unsafe to let such an algorithm control a car on a highway populated by ordinary vehicles and their drivers.) Motivated by examples like this, we have begun developing formal methods for proving the stability and robustness of particular machine learning algorithms. In particular, our approach is to apply control-theoretic techniques to the world of machine learning.