Zeng, Zheng (1994) Recurrent neural networks for grammatical inference. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-12112007-090004
In this thesis, various artificial recurrent neural network models are investigated for the problem of inferring grammar rules from a finite set of example "sentences." A general discrete network framework and its corresponding learning algorithm are presented and studied in detail on three different types of grammars.
The first type of grammar considered is the regular grammar. Experiments with conventional analog recurrent networks learning regular grammars demonstrate the unstable behavior of such networks when processing very long strings after training. A new network structure is constructed that forces recurrent networks to learn stable states by discretizing the internal feedback signals. To train such discrete networks, a "pseudo-gradient" learning rule is applied.
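The discretization idea can be illustrated with a minimal sketch: the hidden state is thresholded to {0, 1} on the forward pass, while the underlying analog sigmoid activation is kept available so that a pseudo-gradient can be propagated through it. The function names, shapes, and weights below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discrete_step(W, h_prev, x):
    """One step of a hypothetical discrete recurrent unit.

    The analog activation s would be used when forming the pseudo-gradient;
    the hard-thresholded state h is what is actually fed back.
    """
    a = W @ np.concatenate([h_prev, x])   # analog pre-activation
    s = sigmoid(a)                        # analog activation (for gradients)
    h = (s > 0.5).astype(float)           # discretized state fed back
    return h, s

H, X = 4, 2                               # illustrative sizes
W = rng.normal(size=(H, H + X))
h = np.zeros(H)
for symbol in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h, s = discrete_step(W, h, symbol)
print(h)  # every entry is exactly 0.0 or 1.0: the state set is finite
```

Because the fed-back state is drawn from a finite set, the trained network's behavior on arbitrarily long strings cannot drift, which is the stability property the thesis establishes.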
As an extension of the discrete network model, an external discrete stack is added to accommodate the inference of context-free grammars. A composite error function is devised to handle the various situations that arise during learning. The pseudo-gradient method is likewise extended to train such a network to learn context-free grammars, with the added ability to operate on the external stack.
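The role of the external stack can be sketched with a hand-coded stand-in for the trained controller: at each input symbol the controller selects a stack action (push, pop, or no-op), and acceptance is read off the final stack. Recognizing balanced parentheses, a classic context-free language, serves as the illustration; the thesis's networks learn such action policies rather than having them hand-set.

```python
def recognize_balanced(string):
    """Stand-in for a trained stack-augmented controller.

    '(' triggers a push, ')' triggers a pop (rejecting on an empty
    stack), and the string is accepted iff the stack ends empty.
    """
    stack = []
    for ch in string:
        if ch == "(":          # controller action: push
            stack.append(ch)
        elif ch == ")":        # controller action: pop
            if not stack:      # popping an empty stack: reject
                return False
            stack.pop()
    return not stack           # accept iff the stack is empty

print(recognize_balanced("(()())"))  # True
print(recognize_balanced("(()"))     # False
```

No finite-state memory can check this language for arbitrary nesting depth, which is why the discrete network alone does not suffice and the external stack is needed.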
Another extension of the discrete network structure is made for the purpose of learning probabilistic finite-state grammars. The network consists of a discrete portion, intended to represent the structure of the grammar, and an analog portion, intended to represent the transition probabilities. Two criteria by which the network can verify the correctness of its solution during training are proposed, and a theoretical analysis of the necessary and sufficient conditions for a correct solution is presented.
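The discrete/analog split can be illustrated as follows: the discrete part is a fixed transition structure (here a small two-state automaton), and the analog part is the set of transition probabilities, here estimated from sample strings by normalized counts. Both the automaton and the samples are invented for illustration; the thesis learns both parts with the discrete-network machinery rather than by counting.

```python
from collections import Counter, defaultdict

# Discrete portion (assumed, illustrative): state -> symbol -> next state
structure = {
    0: {"a": 0, "b": 1},
    1: {"a": 1, "b": 0},
}

# Analog portion: transition probabilities estimated from sample strings
counts = defaultdict(Counter)
for sample in ["aab", "abba", "bb", "aba"]:
    state = 0
    for sym in sample:
        counts[state][sym] += 1          # tally symbol emissions per state
        state = structure[state][sym]    # follow the discrete structure

probs = {q: {sym: c / sum(cnt.values()) for sym, c in cnt.items()}
         for q, cnt in counts.items()}
print(probs)  # per-state distributions; each sums to 1
```

Separating structure from probabilities is what lets the discrete portion stay provably stable while the analog portion carries the continuous parameters.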
Experimental results show that the discrete network models match the capabilities of their analog counterparts in learning the various grammars, with the added advantage of being provably stable.
Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: recurrent neural network
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Electrical Engineering
Thesis Availability: Public (worldwide access)
Defense Date: 10 May 1994
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
Deposited By: Imported from ETD-db
Deposited On: 12 Dec 2007
Last Modified: 12 May 2014 16:34