CaltechTHESIS
  A Caltech Library Service

Various algorithms for optimization and learning in adaptive systems

Citation

Anderson, Brooke P. (1993) Various algorithms for optimization and learning in adaptive systems. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/rnjw-jy11. https://resolver.caltech.edu/CaltechTHESIS:04102013-153335605

Abstract

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems, and it illustrates how the underlying hardware shapes the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.
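
As a concrete illustration of the kind of scheme described above, here is a minimal Python sketch of unsupervised noise filtering with a feed-forward network trained by back-propagation: a small bottleneck autoencoder is trained to reproduce its own noisy input, so the only training signal is the raw data itself. The architecture, layer sizes, learning rate, and toy signal below are illustrative assumptions, not the thesis's actual network.

    # Sketch: unsupervised noise filtering via a bottleneck autoencoder.
    # The network is trained by back-propagation with the noisy input as
    # its own target; the narrow hidden layer forces it to keep signal
    # structure and discard noise. Details here are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 16, 4                      # bottleneck narrower than input
    W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden weights
    W2 = rng.normal(0, 0.1, (n_in, n_hid))   # hidden -> output weights
    lr = 0.01

    def signal(t):
        # toy underlying signal: a slow sinusoid sampled in windows of n_in
        return np.sin(0.3 * (t + np.arange(n_in)))

    for step in range(5000):
        x = signal(step) + 0.3 * rng.normal(size=n_in)  # noise-corrupted input
        h = np.tanh(W1 @ x)                             # hidden activations
        y = W2 @ h                                      # reconstruction
        e = y - x                                       # self-supervised error
        # back-propagation of the gradients of 0.5*||y - x||^2
        gW2 = np.outer(e, h)
        gW1 = np.outer((W2.T @ e) * (1 - h**2), x)
        W2 -= lr * gW2
        W1 -= lr * gW1
    # After training, y is a smoothed estimate of the clean signal.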

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.
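
A toy Python sketch of the acetylcholine idea follows, under the common modeling assumption (stated here as an assumption, not taken from the thesis text) that acetylcholine suppresses recurrent, intrinsic connections during learning: patterns are stored by a Hebbian rule while recurrent feedback is gated off, and recall runs the full recurrent dynamics. All names and parameters are illustrative.

    # Sketch: learning and recall as separate neurochemical modes.
    # During learning, "acetylcholine" zeroes the recurrent feedback so
    # stored patterns do not interfere with new ones; during recall it is
    # absent and the recurrent weights complete noisy cues.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 64
    patterns = (rng.random((3, N)) < 0.5).astype(float) * 2 - 1  # +/-1 "odors"

    W = np.zeros((N, N))
    ach_learning_gain = 0.0   # ACh present: recurrent feedback suppressed
    for p in patterns:
        # Hebbian storage driven by the afferent pattern alone; recurrent
        # input is scaled by ach_learning_gain (zero while ACh is "on")
        net = p + ach_learning_gain * (W @ p)
        W += np.outer(np.sign(net), p) / N
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=10):
        # ACh absent: full recurrent dynamics complete the pattern
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
        return s

    noisy = patterns[0].copy()
    flip = rng.choice(N, size=10, replace=False)
    noisy[flip] *= -1
    print("overlap after recall:", recall(noisy) @ patterns[0] / N)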

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
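
The following Python sketch shows one discrete-time software analogue of such a noise-correlation scheme: each parameter is perturbed by locally generated noise, the resulting change in a single scalar error (playing the role of the one global signal) is broadcast, and multiplying that change by each parameter's own noise yields an estimate of the gradient used for noisy gradient descent. The toy error function and constants are assumptions for illustration, not the analog circuits of the thesis.

    # Sketch: gradient estimation by noise injection and correlation.
    # No explicit gradient calculation is needed; each parameter only
    # sees its own noise and the broadcast change in the scalar error.
    import numpy as np

    rng = np.random.default_rng(2)

    def error(w):
        # stand-in for the adaptive system's scalar performance measure
        return np.sum((w - np.array([1.0, -2.0, 0.5]))**2)

    w = np.zeros(3)
    sigma, lr = 0.01, 0.1
    for step in range(2000):
        noise = sigma * rng.normal(size=w.shape)   # local noise generators
        dE = error(w + noise) - error(w)           # one global scalar signal
        grad_est = (dE / sigma**2) * noise         # correlate noise with dE
        w -= lr * grad_est                         # noisy gradient descent
    print("learned parameters:", w)                # approaches [1.0, -2.0, 0.5]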

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Computation and Neural Systems
Degree Grantor: California Institute of Technology
Division: Biology
Major Option: Computation and Neural Systems
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Hopfield, John J.
Thesis Committee:
  • Unknown, Unknown
Defense Date: 29 September 1992
Record Number: CaltechTHESIS:04102013-153335605
Persistent URL: https://resolver.caltech.edu/CaltechTHESIS:04102013-153335605
DOI: 10.7907/rnjw-jy11
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 7606
Collection: CaltechTHESIS
Deposited On: 10 Apr 2013 23:07
Last Modified: 09 Nov 2022 19:20

Thesis Files

PDF - Final Version (14MB). See Usage Policy.
