
Assuring Safety under Uncertainty in Learning-Based Control Systems


Cheng, Richard (2021) Assuring Safety under Uncertainty in Learning-Based Control Systems. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/9kye-rn93.


Learning-based controllers have recently shown impressive results on a range of robotic tasks in well-defined environments, such as solving a Rubik's cube and sorting objects in a bin. These advancements promise to enable a host of new capabilities for complex robotic systems. However, such controllers cannot yet be deployed in highly uncertain environments due to significant issues of learning reliability, robustness, and safety.

To overcome these issues, this thesis proposes new methods for integrating model information (e.g. model-based control priors) into the reinforcement learning framework, which is crucial to ensuring reliability and safety. I show, both empirically and theoretically, that this model information greatly reduces variance in learning and can effectively constrain the policy search space, thus enabling significant improvements in sample complexity for the underlying RL algorithms. Furthermore, by leveraging control barrier functions and Gaussian process uncertainty models, I show how system safety can be maintained under uncertainty without interfering with the learning process (e.g. distorting the policy gradients).
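The control-barrier-function idea described above can be illustrated with a minimal sketch: a safety filter that minimally modifies a learned action so the system never leaves a safe set. This is an illustrative toy, not the thesis's actual formulation; the 1-D integrator dynamics, the names `X_MAX`, `ALPHA`, and `safety_filter`, and the closed-form filter are all assumptions chosen so the quadratic program reduces to a single `min`.

```python
# Hypothetical 1-D example (not from the thesis): dynamics x_dot = u,
# barrier h(x) = X_MAX - x, so the safe set is {x : h(x) >= 0}.
X_MAX = 1.0   # illustrative position limit
ALPHA = 2.0   # class-K gain in the CBF condition h_dot >= -ALPHA * h(x)

def h(x):
    """Barrier function: nonnegative exactly on the safe set."""
    return X_MAX - x

def safety_filter(x, u_rl):
    """Minimally modify the learned action u_rl to satisfy the CBF condition.

    For x_dot = u and h(x) = X_MAX - x, the condition h_dot >= -ALPHA * h(x)
    reduces to u <= ALPHA * h(x), so the usual CBF quadratic program has the
    closed-form solution min(u_rl, ALPHA * h(x)).
    """
    return min(u_rl, ALPHA * h(x))

# Simulate: the learned policy always pushes toward the boundary, but the
# filter only intervenes near it, leaving the action untouched elsewhere.
x, dt = 0.0, 0.01
for _ in range(1000):
    u = safety_filter(x, u_rl=5.0)
    x += u * dt
```

In this toy, the state converges toward `X_MAX` from below without ever crossing it, which mirrors the abstract's point that safety can be enforced as a minimal correction rather than by overriding the learned policy outright.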

The last part of the thesis discusses fundamental limitations that arise when utilizing machine learning to derive safety guarantees. In particular, I show that widely used uncertainty models can be highly inaccurate when predicting rare events, and examine the implications of this for safe learning. To overcome some of these limitations, a novel framework is developed based on assume-guarantee contracts in order to ensure safety in multi-agent environments involving humans. The proposed approach utilizes contracts to impose loose responsibilities, learned from data, on agents in the environment. Imposing these responsibilities on agents, rather than treating their uncertainty as a purely random process, allows us to achieve both safety and efficiency in interactions.
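The assume-guarantee structure mentioned above can be sketched as a pair of predicates over observed behavior: an agent's guarantee is only required to hold when its assumption about the other agents holds. This is an illustrative sketch, not the thesis's formalism; the `Contract` class and the pedestrian-vehicle scenario are hypothetical.

```python
# Illustrative assume-guarantee contract (hypothetical, not the thesis's
# formal definition): a contract is satisfied on a trace if the guarantee
# holds whenever the assumption holds (and vacuously otherwise).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assumption: Callable[[dict], bool]  # what this agent assumes of others
    guarantee: Callable[[dict], bool]   # what this agent promises in return

    def satisfied(self, trace: dict) -> bool:
        # Vacuous satisfaction when the assumption is violated: the agent
        # is not held responsible if others broke their end of the contract.
        return self.guarantee(trace) if self.assumption(trace) else True

# Hypothetical scenario: if the vehicle keeps its speed below a limit
# (assumption), the pedestrian stays out of the roadway (guarantee).
contract = Contract(
    assumption=lambda t: t["vehicle_speed"] <= 10.0,
    guarantee=lambda t: not t["pedestrian_in_road"],
)
```

Framing responsibilities this way, rather than modeling other agents as pure noise, is what lets a planner rule out behaviors the contract forbids and act efficiently within the remaining uncertainty.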

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Reinforcement Learning, Control, Assured Safety, Uncertainty Modeling, Control Barrier Functions
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Mechanical Engineering
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Burdick, Joel Wakeman
Thesis Committee:
  • Murray, Richard M. (chair)
  • Ames, Aaron D.
  • Yue, Yisong
  • Burdick, Joel Wakeman
Defense Date: 15 December 2020
Non-Caltech Author Email: richardcheng805 (AT)
Funding Agency (Grant Number):
  • Defense Advanced Research Projects Agency (DARPA): UNSPECIFIED
  • General Atomics Electromagnetic Systems Group (GA-EMS): UNSPECIFIED
Record Number: CaltechTHESIS:01052021-195655093
Persistent URL:
Related URLs:
  • Article adapted for Chapter 3
  • Article adapted for Chapter 4
  • Article adapted for Chapter 4
ORCID:
  • Cheng, Richard: 0000-0001-8301-9169
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 14046
Deposited By: Richard Cheng
Deposited On: 13 Jan 2021 16:56
Last Modified: 02 Nov 2021 00:12

Thesis Files

PDF - Final Version (see Usage Policy)

