
Nonlinear optimal control: a receding horizon approach


Primbs, James A. (1999) Nonlinear optimal control: a receding horizon approach. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/4AD2-0T48.


As advances in computing power forge ahead at an unparalleled rate, an increasingly compelling question that spans nearly every discipline is how best to exploit these advances. At one extreme, a tempting approach is to throw as much computational power at a problem as possible. Unfortunately, this is rarely a justifiable approach unless one has some theoretical guarantee of the efficacy of the computations. At the other extreme, not taking advantage of available computing power is unnecessarily limiting. In general, it is only through a careful inspection of the strengths and weaknesses of all available approaches that an optimal balance between analysis and computation is achieved. This thesis addresses the delicate interaction between theory and computation in the context of optimal control.

Solving the nonlinear optimal control problem exactly is known to be prohibitively difficult, both analytically and computationally. Nevertheless, a number of alternative (suboptimal) approaches have been developed. Many of these techniques approach the problem from an off-line, analytical point of view, designing a controller based on a detailed analysis of the system dynamics. A concept particularly amenable to this point of view is that of a control Lyapunov function; techniques built on it extend the Lyapunov methodology to control systems. In contrast, so-called receding horizon techniques rely purely on on-line computation to determine a control law. While offering an alternative method of attacking the optimal control problem, receding horizon implementations often lack solid theoretical stability guarantees.

In this thesis, we uncover a synergistic relationship between control Lyapunov function based schemes and on-line receding horizon style computation. These connections derive from the classical Hamilton-Jacobi-Bellman and Euler-Lagrange approaches to optimal control. By returning to these roots, a broad class of control Lyapunov schemes are shown to admit natural extensions to receding horizon schemes, which benefit from the performance advantages of on-line computation. From the receding horizon point of view, a control Lyapunov function not only supplies the theoretical stability guarantees that receding horizon control typically lacks, but also unexpectedly eases many of the difficult implementation requirements associated with on-line computation. After these schemes are developed for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated on a simple model of a longitudinal flight control system. The schemes are then extended to time-varying and input constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design.
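The receding horizon idea described above can be illustrated with a small sketch. This is not the thesis's formulation: the scalar dynamics, quadratic cost, crude grid search, horizon length, and the choice of control Lyapunov function V(x) = p*x^2 (the Riccati solution for the associated linear-quadratic problem) are all illustrative assumptions. The key structural points it shows are (1) a finite-horizon problem is re-solved at every step with only the first control applied, and (2) the CLF enters as the terminal cost.

```python
import itertools

# Toy receding horizon controller with a CLF terminal cost (illustrative
# assumptions throughout; not the thesis's actual system or solver).
DT = 0.1          # discretization step
HORIZON = 3       # number of steps optimized at each instant
P = 1 + 2 ** 0.5  # CLF weight: solution of the scalar Riccati equation 2p + 1 - p^2 = 0
U_GRID = [i * 0.25 for i in range(-8, 9)]  # crude search grid over controls

def step(x, u):
    """One Euler step of the (open-loop unstable) toy dynamics xdot = x + u."""
    return x + DT * (x + u)

def horizon_cost(x, controls):
    """Discretized stage costs over the horizon plus the CLF terminal penalty."""
    cost = 0.0
    for u in controls:
        cost += DT * (x * x + u * u)
        x = step(x, u)
    return cost + P * x * x  # terminal cost V(x) = p*x^2

def receding_horizon_control(x):
    """Optimize over the whole horizon, but apply only the first control."""
    best = min(itertools.product(U_GRID, repeat=HORIZON),
               key=lambda seq: horizon_cost(x, seq))
    return best[0]

# Closed loop: re-solve the finite-horizon problem at every step.
x = 1.0
for _ in range(30):
    x = step(x, receding_horizon_control(x))
print(round(x, 3))  # state is driven toward the origin
```

The grid search stands in for the on-line optimization that a real implementation would perform with a nonlinear programming solver; the terminal CLF is what supplies the stability guarantee that a bare finite-horizon truncation lacks.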

Item Type:Thesis (Dissertation (Ph.D.))
Subject Keywords:control Lyapunov function; model predictive control; moving horizon control; nonlinear optimal control; pointwise min-norm control; receding horizon control; Sontag's formula
Degree Grantor:California Institute of Technology
Division:Engineering and Applied Science
Major Option:Control and Dynamical Systems
Thesis Availability:Public (worldwide access)
Research Advisor(s):
  • Doyle, John Comstock
Thesis Committee:
  • Doyle, John Comstock (chair)
  • Krener, Arthur
  • Marsden, Jerrold E.
  • Chandy, K. Mani
  • Murray, Richard M.
Defense Date:1 January 1999
Non-Caltech Author Email:japrimbs (AT)
Record Number:CaltechETD:etd-10172005-103315
Persistent URL:
Default Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:4124
Deposited By: Imported from ETD-db
Deposited On:17 Oct 2005
Last Modified:21 Dec 2019 03:08

Thesis Files

PDF (Primbs_ja_1999.pdf) - Final Version
See Usage Policy.

