
Robust Safety-Critical Control: A Lyapunov and Barrier Approach


Taylor, Andrew James (2023) Robust Safety-Critical Control: A Lyapunov and Barrier Approach. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/bpht-by81.


Accompanying the technological advances of the past decade has been the promise of widespread growth of autonomous systems into nearly all domains of human society, including manufacturing, transportation, and healthcare. At the same time, several tragic failures have revealed the potential risks of expanding autonomous systems into everyday life, and indicate that safety must be accounted for in the design of control systems.

This thesis seeks to develop a theory of robust safety-critical control for autonomous systems. This theory is built upon the foundational tools of Control Lyapunov Functions (CLFs) and Control Barrier Functions (CBFs), which provide a powerful paradigm for the design of model-based safety-critical controllers. The dependence of CLF- and CBF-based controllers on a system model makes them susceptible to modeling inaccuracies, potentially resulting in unsafe behavior when these controllers are deployed on real-world systems.
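For readers unfamiliar with these tools, the standard pointwise conditions can be sketched as follows (this is the usual control-affine setting with assumed notation, not text quoted from the thesis):

```latex
% Standard CLF and CBF conditions for a control-affine system
% \dot{x} = f(x) + g(x)u  (sketch under assumed notation).
% A CLF V certifies stabilizability; a CBF h certifies that the safe set
% C = \{x : h(x) \ge 0\} can be rendered forward invariant.
\begin{align*}
  \text{CLF } V:\quad & \inf_{u \in U} \big[ L_f V(x) + L_g V(x)\,u \big] \le -\gamma(V(x)), \\
  \text{CBF } h:\quad & \sup_{u \in U} \big[ L_f h(x) + L_g h(x)\,u \big] \ge -\alpha(h(x)),
\end{align*}
% where \gamma and \alpha are class-K (extended) functions and L_f, L_g
% denote Lie derivatives of the scalar function along f and g.
```

Any controller that satisfies these inequalities pointwise inherits the corresponding stability or safety certificate, which is what makes them a natural constraint set for optimization-based control synthesis.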

In this thesis I present methods for resolving four classes of model inaccuracies, referred to as model error, disturbances, measurement error, and input sampling, which are challenges commonly faced when designing controllers for robotic systems. The proposed methods are unified by their shared use of CLFs and CBFs to produce controllers possessing rigorous and robust safety guarantees that can be demonstrated in simulation and experimentally. A hallmark of these methods is a focus on enabling control synthesis through convex optimization, which ensures that controllers can be computed efficiently on real-world robotic hardware platforms.
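To make the convex-optimization viewpoint concrete, here is a minimal sketch of a CBF-based "safety filter" quadratic program, in my own notation rather than the thesis's: minimize the deviation from a desired input subject to the affine CBF constraint. With one input and one constraint the QP reduces to a closed-form projection onto a half-space, so no numerical solver is needed.

```python
# Minimal CBF-QP safety filter (illustrative sketch, not code from the
# thesis). For a scalar input, the QP
#   min_u (u - u_des)^2   s.t.   L_f h(x) + L_g h(x) u + alpha*h(x) >= 0
# is solved in closed form by projecting u_des onto the feasible half-space.

def cbf_qp_filter(u_des: float, lfh: float, lgh: float, h: float,
                  alpha: float = 1.0) -> float:
    """Return the input closest to u_des that satisfies the CBF constraint."""
    slack = lfh + lgh * u_des + alpha * h
    if slack >= 0.0 or lgh == 0.0:
        return u_des                          # constraint inactive (or vacuous)
    return u_des - slack * lgh / (lgh * lgh)  # project onto the constraint boundary

# Single integrator x' = u with h(x) = x: far from the boundary the desired
# input passes through; near it, the input is modified just enough.
print(cbf_qp_filter(0.3, 0.0, 1.0, 0.5))    # inactive: u_des returned
print(cbf_qp_filter(-2.0, 0.0, 1.0, 0.5))   # active: input clipped
```

With multiple inputs or constraints the same problem remains a convex QP solvable by off-the-shelf solvers, which is what makes this class of controllers practical on hardware.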

In addressing model error, I consider both data-driven learning approaches and adaptive control approaches. I present three episodic learning frameworks that iteratively augment existing CLF- and CBF-based controllers specified via convex optimization problems to improve the stability and safety properties of a system, which I demonstrate in simulation and experimentally. I also establish a relationship between the degradation of stability and safety properties and the magnitude of residual learning error, through the perspective of Input-to-State Stability (ISS) and Input-to-State Safety (ISSf). Lastly, I develop an adaptive safety-critical control framework for systems with parametric model error through the notion of adaptive CBFs.

In addressing disturbances, I resolve challenges in balancing performance and robustness with ISSf-based controllers through the notion of Tunable Input-to-State Safety (TISSf), which permits prioritizing robustness to disturbances only when safety requirements are close to being violated. I demonstrate the capabilities of TISSf-based control design experimentally on an autonomous semi-trailer truck system that is subject to input disturbances due to complex unmodeled actuator dynamics. Lastly, I develop a framework for achieving ISSf-like finite-time safety guarantees for discrete-time systems subject to stochastic disturbances through the use of CBFs and convex optimization.

In addressing measurement error, I develop the notion of Measurement-Robust CBFs (MR-CBFs), which permit control synthesis through convex optimization in the presence of imperfect measurements. I demonstrate the capability of MR-CBFs on an experimental Segway system using a vision-based measurement system, validating the tractability of using controllers specified through increasingly complex classes of convex optimization problems on real-world systems. Lastly, I present an application of Preference Based Learning (PBL) in tuning the robustness parameters of a CBF-based controller, demonstrating the first use of PBL with CBFs and providing a tool for tuning the safety and performance of the robust controllers proposed in this thesis.

In addressing input sampling, I consider both sampled-data and event-triggered paradigms for modeling input sampling. I provide a method for synthesizing CLF-based controllers for sampled-data systems by integrating feedback linearization with approximate discrete-time models, leading to a significant improvement over continuous-time CLF-based controllers implemented with input sampling. I then develop a framework for achieving safety of sampled-data systems via approximate discrete-time models and the notions of practical safety and Sampled-Data CBFs (SD-CBFs), which I demonstrate with convex-optimization-based controllers in simulation. Lastly, I develop a method for event-triggered safety-critical control that uses ISSf to achieve safety while satisfying the requirement of a minimum interevent time.
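The event-triggered idea above can be illustrated with a toy simulation (my own sketch; the trigger law and guarantees in the thesis differ in detail). The input is held constant between events, and a new event fires only when the barrier value has decayed by a fixed fraction since the last update, so the controller recomputes far less often than the simulation steps while the state stays safe.

```python
# Toy event-triggered safety-critical control (illustrative sketch, not
# the thesis's trigger law). Single integrator with drift x' = -1 + u and
# barrier h(x) = x; the drift pushes the state toward the boundary h = 0.

def simulate(x0=2.0, alpha=1.0, rel_margin=0.2, dt=0.01, steps=500):
    """Hold u between events; re-trigger when h decays by rel_margin."""
    x, u, h_last, events = x0, 0.0, float("inf"), 0
    for _ in range(steps):
        h = x
        if h < (1.0 - rel_margin) * h_last:  # event trigger fires
            u = max(0.0, 1.0 - alpha * h)    # enforce h' >= -alpha*h at the event
            h_last = h
            events += 1
        x += (-1.0 + u) * dt                 # zero-order hold between events
    return x, events

x_final, events = simulate()
print(f"barrier stayed positive: {x_final > 0}, control updates: {events}")
```

Because the trigger is relative to the last barrier value, events cluster only as the state nears the boundary; bounding the interevent time from below is exactly the additional requirement the ISSf-based method addresses.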

Collectively, these contributions constitute a significant advance in the theory of robust safety-critical control by establishing a framework, unified by the use of CLFs and CBFs in conjunction with convex optimization, that addresses a wide class of challenges faced in the design of safety-critical control systems.

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: control; robotics; machine learning; optimization; robust; safety-critical; Lyapunov; barrier
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Control and Dynamical Systems
Awards: Thomas A. Tisch Prize for Graduate Teaching in Computing and Mathematical Sciences, 2020.
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Ames, Aaron D.
Group:AMBER Laboratory
Thesis Committee:
  • Yue, Yisong (chair)
  • Murray, Richard M.
  • Burdick, Joel Wakeman
  • Ames, Aaron D.
Defense Date: 22 May 2023
Funding Agency: Defense Advanced Research Projects Agency (DARPA)
Grant Number: HR00111890035
Record Number: CaltechTHESIS:06022023-032907616
Persistent URL:
Related URLs:
  • Learning with Control Lyapunov Functions for Uncertain Robotic Systems, presented in Section 3.3
  • Control Lyapunov Perspective on Episodic Learning via Projection to State Stability, presented in Section 3.4
  • Safety with Control Barrier Functions, presented in Section 3.9
  • for Safety-Critical Control with Control Barrier Functions, presented in Section 3.5
  • Event Triggered Control via Input-to-State Safe Barrier Functions, presented in Section 6.5
  • Control Barrier Perspective on Episodic Learning via Projection-to-State Safety, presented in Section 3.6
  • Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions, presented in Section 5.2
  • Learning for Safe Bipedal Locomotion with Control Barrier Functions and Projection-to-State Safety, presented in Section 3.6
  • control barrier functions: Certainty in safety with uncertainty in state, presented in Section 5.3
  • Controller Synthesis With Tunable Input-to-State Safe Control Barrier Functions, presented in Section 4.3
  • Stabilization With Control Lyapunov Functions via Quadratically Constrained Quadratic Programs, presented in Section 6.3
  • robust data-driven control synthesis for nonlinear systems with actuation uncertainty, presented in Section 3.8
  • Preference-Based Learning for Safety-Critical Control, presented in Section 5.4
  • of Sampled-Data Systems with Control Barrier Functions via Approximate Discrete Time Models, presented in Section 6.4
  • Barrier Functions and Input-to-State Safety with Application to Automated Vehicles, presented in Sections 4.3 and 4.4
  • Safety under Stochastic Uncertainty with Discrete-Time Control Barrier Functions, presented in Section 4.5
ORCID:
  • Taylor, Andrew James: 0000-0002-5990-590X
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 15275
Deposited By: Andrew Taylor
Deposited On: 02 Jun 2023 23:32
Last Modified: 16 Jun 2023 22:44

Thesis Files

PDF - Final Version
See Usage Policy.

