Fyfe, William John Andrew (1992) Invariance hints and the VC dimension. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-07202007-075240
We are interested in having a neural network learn an unknown function f. If the function satisfies an invariant of some sort, such as being an odd function, then we want to take advantage of this information rather than have the network deduce the invariant from examples of f.
The invariant might be defined in terms of an explicit transformation of the input space under which f is constant. In this case it is possible to build a network that necessarily satisfies the invariant.
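As a hypothetical illustration (not taken from the thesis), suppose the transformation is the reflection x -> -x, so that f(-x) = f(x). A network whose output averages an unconstrained sub-network over the transformation satisfies the invariant for every setting of its weights, as in the following sketch:

```python
import numpy as np

# Hypothetical sketch: a network constrained by construction to be invariant
# under the reflection x -> -x. The sub-network g is unconstrained; the
# symmetrized output net(x) = (g(x) + g(-x)) / 2 is invariant for any weights.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)   # hidden layer
W2 = rng.normal(size=(1, 16))                            # output layer

def g(x):
    """Unconstrained one-hidden-layer sub-network."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h).item()

def net(x):
    """Symmetrized output; satisfies net(-x) == net(x) automatically."""
    return 0.5 * (g(x) + g(-x))

assert np.isclose(net(0.7), net(-0.7))   # the invariant holds exactly
```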
In general, we define the invariant in terms of a partition of the input space such that if x, x' are in the same partition element then f(x) = f(x'). An example of the invariant would be a pair (x, x') taken from a single partition element. We can combine examples of the invariant with examples of the function in the learning process. The goal is to substitute examples of the invariant for examples of the function; the extent to which we can actually do this depends on the appropriate VC dimensions. Simulations verify, at least in simple cases, that examples of the invariant do aid the learning process.
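A minimal sketch of this combined learning process, under assumed details not given in the abstract (the invariant is symmetry under x -> -x, so each partition element is the pair {x, -x}; the target f, the network architecture, and the loss weighting are all illustrative choices):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sketch: training on a few labelled examples of f together with
# unlabelled "invariance hint" pairs (x, x') drawn from one partition element.
net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

# Labelled examples of the unknown function (here f(x) = cos(x), for illustration).
x_f = torch.tensor([[0.2], [0.9], [1.5]])
y_f = torch.cos(x_f)

# Examples of the invariant: pairs (x, x') from the same partition element,
# so f(x) = f(x') even though neither value is known.
x_h = torch.rand(32, 1) * 3.0 - 1.5
x_h_prime = -x_h

for step in range(2000):
    opt.zero_grad()
    loss_f = ((net(x_f) - y_f) ** 2).mean()             # error on function examples
    loss_h = ((net(x_h) - net(x_h_prime)) ** 2).mean()  # error on invariance hints
    (loss_f + loss_h).backward()
    opt.step()
```

The hint term asks only that the network give equal outputs on the two members of each pair, so the invariance examples require no labels and can be generated freely.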
Item Type: Thesis (Dissertation (Ph.D.))
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Computer Science
Thesis Availability: Restricted to Caltech community only
Defense Date: 26 May 1992
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
Deposited By: Imported from ETD-db
Deposited On: 20 Jul 2007
Last Modified: 26 Dec 2012 02:55