
Linear Maps with Point Rules: Applications to Pattern Classification and Associative Memory

Citation

Venkatesh, Santosh Subramanyam (1987) Linear Maps with Point Rules: Applications to Pattern Classification and Associative Memory. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/1YSB-Q028. https://resolver.caltech.edu/CaltechETD:etd-03052008-095021

Abstract

Generalisations of linear discriminant functions are introduced to tackle problems in pattern classification and associative memory. The concept of a point rule is defined, and compositions of global linear maps with point rules are incorporated in two distinct structural forms, feedforward and feedback, to increase classification flexibility at a modest increase in complexity. Three performance measures are utilised, and measures of consistency are established.
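[Editorial illustration.] One natural reading of the abstract is that a point rule is a memoryless, componentwise nonlinearity applied to the output of a global linear map. The following is a minimal Python sketch, under that assumption, of the two structural forms; the function names (square_law, threshold, feedforward, feedback) are illustrative and are not the thesis's notation.

    import numpy as np

    def square_law(u):
        return u ** 2                       # componentwise square-law point rule

    def threshold(u):
        return np.where(u >= 0, 1.0, -1.0)  # componentwise threshold point rule

    def feedforward(W, x, point_rule):
        """One feedforward stage: a point rule applied to a global linear map of x."""
        return point_rule(W @ x)

    def feedback(W, x, point_rule, iterations=10):
        """Feedback form: the same composition iterated, ideally toward a fixed point."""
        for _ in range(iterations):
            x = point_rule(W @ x)
        return x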

Feedforward pattern classification systems based on multi-channel machines are introduced. The concept of independent channels is defined and used to generate independent features. The statistics of multi-channel classifiers are characterised, and specific applications of these structures are considered. It is demonstrated that image classification invariant to rotation and shift is possible using multi-channel machines incorporating a square-law point rule. The general form of the rotation-invariant classifier is obtained. The existence of optimal solutions is demonstrated, and good sub-optimal systems are introduced and characterised. Threshold point rules are utilised to generate a class of low-cost binary filters which yield excellent classification performance. Performance degradation is characterised as a function of statistical side-lobe fluctuations, finite system space-bandwidth, and noise.
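[Editorial illustration.] As one hedged example of why a square-law point rule is useful for invariant classification: the squared magnitude of a Fourier transform (a square-law applied in the transform plane) is unchanged by image shifts, since a shift alters only the phase of the spectrum. The sketch below demonstrates shift invariance only; it does not reproduce the thesis's specific rotation-invariant construction.

    import numpy as np

    def power_spectrum_feature(image):
        """Shift-invariant feature: square-law point rule applied to Fourier amplitudes.
        A circular spatial shift multiplies the spectrum by a unit-modulus phase
        factor, which the squaring removes."""
        return np.abs(np.fft.fft2(image)) ** 2

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    shifted = np.roll(img, shift=(5, 7), axis=(0, 1))   # cyclic shift of the image
    assert np.allclose(power_spectrum_feature(img), power_spectrum_feature(shifted))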

Simplified neural network models are considered as feedback systems utilising a linear map and a threshold point rule. The efficacy of these models is determined for the associative storage and recall of memories. A precise definition of the associative storage capacity of these structures is provided. The capacity of these networks under various algorithms is rigorously derived, and optimal algorithms are proposed. The ultimate storage capacity of neural networks is rigorously characterised. Extensions incorporating higher-order networks are considered, yielding considerable increases in capacity.
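[Editorial illustration.] A minimal Python sketch of the feedback structure described here (a linear map followed by a threshold point rule, iterated), using the classical outer-product (Hebbian) storage rule purely as an illustration. The thesis analyses the capacity of such networks under various storage algorithms; this sketch does not reproduce its specific algorithms or capacity results.

    import numpy as np

    def store(memories):
        """Outer-product (Hebbian) storage of +/-1 memory vectors; one classical
        algorithm for networks of this type, shown here only for illustration."""
        n = memories.shape[1]
        W = memories.T @ memories / n
        np.fill_diagonal(W, 0.0)            # no self-coupling
        return W

    def recall(W, probe, iterations=20):
        """Feedback recall: linear map followed by a threshold point rule, iterated."""
        x = probe.copy()
        for _ in range(iterations):
            x = np.where(W @ x >= 0, 1.0, -1.0)
        return x

    rng = np.random.default_rng(1)
    memories = rng.choice([-1.0, 1.0], size=(5, 200))   # 5 random +/-1 memories, n = 200
    W = store(memories)
    noisy = memories[0] * rng.choice([1.0, -1.0], size=200, p=[0.9, 0.1])  # flip ~10% of bits
    recovered = recall(W, noisy)
    print(np.mean(recovered == memories[0]))            # fraction of components recovered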

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Electrical Engineering
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Electrical Engineering
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Psaltis, Demetri
Thesis Committee:
  • Psaltis, Demetri (chair)
  • Posner, Edward C.
  • McEliece, Robert J.
  • Abu-Mostafa, Yaser S.
  • Franklin, Joel N.
Defense Date: 20 August 1986
Record Number: CaltechETD:etd-03052008-095021
Persistent URL: https://resolver.caltech.edu/CaltechETD:etd-03052008-095021
DOI: 10.7907/1YSB-Q028
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 883
Collection: CaltechTHESIS
Deposited By: Imported from ETD-db
Deposited On: 14 Mar 2008
Last Modified: 21 Dec 2019 01:29

Thesis Files

PDF (Final Version), 9MB. See Usage Policy.