ESE6180: Learning, Dynamics and Control
Instructor: Ingvar Ziemann
Email: ingvarz@seas.upenn.edu
Teaching Assistants: Bruce Lee (brucele@seas.upenn.edu), Thomas Zhang (ttz2@seas.upenn.edu)
Lectures: MW noon-1:30pm in GLAB101
Office Hours: M 2:30pm-4pm, W 10am-11:30am, both in Towne M70
Overview
This course will provide students an introduction to the emerging area at the intersection of machine learning, dynamics, and control. We will investigate machine learning and data-driven algorithms that interact with the physical world, with an emphasis on a holistic understanding of the interplay between concepts from machine learning (e.g., generalization, sample complexity), probability and statistics (e.g., concentration, information-theoretic lower bounds), and dynamical systems and control theory (e.g., feedback, stability, observability). Topics of study will include learning models of dynamical systems and time series and then using these models to meet performance objectives. We will also study fundamental limits to performance (what is the best we can do within our model?) and leverage these to devise data-collection schemes that are optimal for downstream control applications.
About the course
Prerequisites: This is an advanced theory-intensive course. A solid foundation in linear systems (at the level of ESE 500), probability theory (at the level of ESE 530), and optimization (at the level of ESE 605), as well as mathematical maturity (comfort with reading and writing proofs) is required. Familiarity with Python is helpful, but not required. Undergraduates need permission.
This course is ideal for advanced graduate students who are interested in applying novel research concepts to their own work. By the end of this course, students will be ready to start doing research in the Learning for Dynamics and Control (L4DC) space.
Tentative List of Topics
Introduction, History, Overview and Administrative Stuff
Part 1: Probability, Statistics and System Identification
- The Chernoff Bound and Mean Estimation
- Chernoff, Bernstein and Linear Regression
- Covering, the Union Bound and Learning in \(\mathbb{R}^d\)
- The Hanson-Wright Inequality
- Learning Linear Dynamical Systems
- Interplay of Stability and Learning
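To give a flavor of the system-identification material in Part 1, here is a minimal sketch (not course-provided code) of learning a linear dynamical system \(x_{t+1} = A x_t + w_t\) from a single trajectory via ordinary least squares; the matrices, trajectory length, and noise level are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 500
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.0, 0.0, 0.7]])  # stable: spectral radius < 1

# Roll out one trajectory driven by Gaussian process noise.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

# Least-squares estimate: A_hat minimizes sum_t ||x_{t+1} - A x_t||^2.
# lstsq solves X[:-1] @ M = X[1:], so the estimate of A is M^T.
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = M.T

print("estimation error:", np.linalg.norm(A_hat - A_true, ord=2))
```

The course studies how this error scales with the trajectory length and how the stability of \(A\) affects the rate.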
Part 2: Learning to Control the Linear Quadratic Regulator
- Dynamic Programming Solution to LQR
- Offline Learning of the Linear Quadratic Regulator
- Riccati Equation Perturbation Bounds
- Learning to Control
- Policy Gradient Methods
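As a preview of Part 2, the dynamic-programming solution to LQR can be sketched by iterating the discrete-time Riccati equation to a fixed point; the system matrices below are illustrative placeholders, not from the course.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double integrator (dt = 0.1)
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state cost
R = np.eye(1)                            # input cost

# Value iteration on the discrete-time Riccati equation.
P = Q.copy()
for _ in range(1000):
    P_next = (Q + A.T @ P @ A
              - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A))
    if np.linalg.norm(P_next - P) < 1e-10:
        P = P_next
        break
    P = P_next

# Optimal state feedback u_t = -K x_t.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The closed loop A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print("closed-loop spectral radius:", rho)
```

The learning-to-control portion of the course asks how much this solution degrades when \(A\) and \(B\) are replaced by estimates, which is where the Riccati perturbation bounds enter.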
Part 3: Statistical Optimality and Experiment Design
- Information-Theoretic Lower Bounds
- Active Learning for the Linear Quadratic Regulator
Part 4: Advanced Topics
- Self-Normalized Martingales
- Nonlinear Regression and Nonlinear Dynamics
Grading
Homework (60%): there will be five (5) homework assignments. An initial homework assignment, Homework 1, will be handed out on the first day of class and will be worth 12%. Homework 1 is mandatory and must be completed to a satisfactory level: it is used to check your knowledge of the prerequisites. The remaining four homework assignments will each also be worth 12%.
Each hand-in must be written up in LaTeX in single column style in the article document class.
We ask that you write out detailed and rigorous solutions.
You get six (6) free late days in total; beyond that, no late assignments will be graded.
You are allowed, even encouraged, to work on homework in small groups, but you must write up your own solutions and code to hand in – please indicate who you collaborated with on each assignment.
Each homework problem will be graded on a scale of 0-4.
Homeworks are submitted on Canvas and deadlines are 11:59pm ET.
Final project (40%): students will be expected to work on a theory-focused project (in groups of up to 2 students). See the project page for details.
Note that these weights are approximate, and we reserve the right to change them later.
Code of Academic Integrity: All students are expected to adhere to the University’s Code of Academic Integrity.