Title: Bridging Control Theory and Machine Learning
Date: Wednesday, March 21, 2018
Abstract: The design of modern intelligent systems relies heavily on techniques developed in the control and machine learning communities. On one hand, control techniques are crucial for safety-critical systems; robustness to uncertainty and disturbances is typically achieved through a model-based design equipped with sensing, actuation, and feedback. On the other hand, learning techniques have achieved state-of-the-art performance on a variety of artificial intelligence tasks (computer vision, natural language processing, and the game of Go). The development of next-generation intelligent systems such as self-driving cars, advanced robotics, and smart buildings requires leveraging these control and learning techniques in an efficient and safe manner.
This talk will focus on fundamental connections between robust control and machine learning. Specifically, we will present a control perspective on the empirical risk minimization (ERM) problem in machine learning. ERM is a central topic of machine learning research, and is typically solved using first-order optimization methods that are developed and analyzed in a case-by-case manner. First, we will discuss how to adapt robust control theory to automate the analysis of such optimization methods, including the gradient descent method, Nesterov's accelerated method, stochastic gradient descent (SGD), stochastic average gradient (SAG), SAGA, Finito, stochastic dual coordinate ascent (SDCA), stochastic variance reduction gradient (SVRG), and SGD with momentum. Next, we will show how to apply classical control design tools to develop new robust accelerated methods for ERM problems. Finally, we will conclude with a long-term research vision on the general connections between our proposed control-oriented tools and reinforcement learning methods.
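To make the setting concrete, here is a minimal sketch (not taken from the talk) of an ERM problem solved by two of the first-order methods mentioned above: plain gradient descent and a momentum-type update. The control perspective views iterations like the momentum update below as a linear dynamical system in feedback with the gradient; the least-squares data, step size, and momentum coefficient here are hypothetical choices for illustration only.

```python
import numpy as np

# Hypothetical ERM instance: least-squares empirical risk
#   f(x) = (1/2n) * ||A x - b||^2  over n samples.
rng = np.random.default_rng(0)
n_samples, n_features = 50, 5
A = rng.standard_normal((n_samples, n_features))
b = rng.standard_normal(n_samples)

def grad(x):
    # Gradient of the empirical risk f(x).
    return A.T @ (A @ x - b) / n_samples

# Step size chosen from the Lipschitz constant of the gradient
# (largest eigenvalue of the Hessian A^T A / n).
L = np.linalg.eigvalsh(A.T @ A / n_samples).max()
alpha = 1.0 / L

# Plain gradient descent: x_{k+1} = x_k - alpha * grad(x_k).
x = np.zeros(n_features)
for _ in range(500):
    x = x - alpha * grad(x)

# Momentum (heavy-ball) update, which control-theoretic analyses model
# as a linear system in feedback with the nonlinearity grad(.):
#   y_{k+1} = y_k - alpha * grad(y_k) + beta * (y_k - y_{k-1}).
beta = 0.5
y = y_prev = np.zeros(n_features)
for _ in range(500):
    y, y_prev = y - alpha * grad(y) + beta * (y - y_prev), y

# Both iterates approach the least-squares minimizer.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star), np.linalg.norm(y - x_star))
```

Writing the momentum iteration in the state-space form above is exactly what lets tools such as integral quadratic constraints certify convergence rates automatically, rather than deriving a fresh proof for each method.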
Biography: Bin Hu received his B.Sc. in Theoretical and Applied Mechanics from the University of Science and Technology of China in 2008, and his M.S. in Computational Mechanics from Carnegie Mellon University in 2010. He received his Ph.D. in Aerospace Engineering and Mechanics from the University of Minnesota in 2016, advised by Peter Seiler. He is currently a postdoctoral researcher in the optimization group of the Wisconsin Institute for Discovery at the University of Wisconsin-Madison. He is interested in building fundamental connections between the techniques used in the control and machine learning communities. His current research focuses on tailoring robust control theory (integral quadratic constraints, dissipation inequalities, jump system theory, etc.) to automate the analysis and design of stochastic optimization methods for large-scale learning tasks. He is also particularly interested in the connections between model-based control and model-free reinforcement learning.