Venkatesh, Santosh Subramanyam (1987) Linear maps with point rules: applications to pattern classification and associative memory. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-03052008-095021
Generalisations of linear discriminant functions are introduced to tackle problems in pattern classification and associative memory. The concept of a point rule is defined, and compositions of global linear maps with point rules are incorporated in two distinct structural forms (feedforward and feedback) to increase classification flexibility at little added complexity. Three performance measures are utilised, and measures of consistency are established.
Feedforward pattern classification systems based on multi-channel machines are introduced. The concept of independent channels is defined and used to generate independent features. The statistics of multi-channel classifiers are characterised, and specific applications of these structures are considered. It is demonstrated that image classification invariant to image rotation and shift is possible using multi-channel machines incorporating a square-law point rule. The general form of the rotation-invariant classifier is obtained. The existence of optimal solutions is demonstrated, and good sub-optimal systems are introduced and characterised. Threshold point rules are utilised to generate a class of low-cost binary filters which yield excellent classification performance. Performance degradation is characterised as a function of statistical side-lobe fluctuations, finite system space-bandwidth, and noise.
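The composition described above (a global linear map followed by a square-law point rule) can be illustrated with a minimal NumPy sketch. Here the linear map is taken to be the 2-D discrete Fourier transform, so the pointwise square law discards phase and yields features invariant to circular shifts of the input; this is an assumed, simplified instance for illustration, not the general construction analysed in the thesis.

```python
import numpy as np

def square_law_features(image):
    """Shift-invariant features from a linear map + square-law point rule.

    Global linear map: the 2-D DFT. Point rule: the pointwise square
    law |.|**2, which discards phase. A circular shift of the input
    only changes the phase of the DFT, so the features are unchanged.
    (Illustrative sketch only; the choice of DFT is an assumption.)
    """
    F = np.fft.fft2(image)   # global linear map
    return np.abs(F) ** 2    # square-law point rule, applied pointwise

# Demonstrate shift invariance on a random test image.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))  # circular shift
print(np.allclose(square_law_features(img), square_law_features(shifted)))
```

The same two-stage pattern (linear map, then a fixed pointwise nonlinearity) underlies the threshold point rules mentioned above; only the nonlinearity changes.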
Simplified neural network models are considered as feedback systems utilising a linear map and a threshold point rule. The efficacy of these models is determined for the associative storage and recall of memories. A precise definition of the associative storage capacity of these structures is provided. The capacity of these networks under various algorithms is rigorously derived, and optimal algorithms are proposed. The ultimate storage capacity of neural networks is rigorously characterised. Extensions are considered incorporating higher-order networks, yielding considerable increases in capacity.
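The feedback structure described above can be sketched as follows: a linear map (here an outer-product, Hebb-type weight matrix, one common choice for such models, assumed for illustration) followed by a sign-threshold point rule, iterated until the state settles. Mutually orthogonal ±1 patterns are used so that exact recall from a corrupted probe is guaranteed; the thesis analyses capacity for general memory sets, which this sketch does not attempt.

```python
import numpy as np

def store(patterns):
    """Outer-product (Hebb-type) linear map for +/-1 patterns.

    W = (1/n) * sum_i m_i m_i^T with zero diagonal (no self-coupling).
    This storage rule is an illustrative assumption, not the only
    algorithm considered in the thesis.
    """
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, iters=10):
    """Feedback iteration: linear map followed by a threshold point rule."""
    for _ in range(iters):
        x = np.sign(W @ x)
        x[x == 0] = 1  # break ties deterministically
    return x

# Three mutually orthogonal +/-1 memories from a 64x64 Hadamard matrix.
H = np.array([[1]])
for _ in range(6):
    H = np.block([[H, H], [H, -H]])
mems = H[:3].astype(float)

W = store(mems)
probe = mems[0].copy()
probe[:5] *= -1          # corrupt 5 of the 64 bits
out = recall(W, probe)
print(np.array_equal(out, mems[0]))  # the stored memory is recovered
```

For orthogonal memories the crosstalk term is small enough that one pass of the threshold rule already restores the corrupted bits; for random memories the number that can be stored reliably is exactly the capacity question the thesis makes precise.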
Item Type: Thesis (Dissertation (Ph.D.))
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Electrical Engineering
Thesis Availability: Restricted to Caltech community only
Defense Date: 20 August 1986
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
Deposited By: Imported from ETD-db
Deposited On: 14 Mar 2008
Last Modified: 26 Dec 2012 02:33