Citation
Frank, Timothy S. (1998) Computing with spiking neurons. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/vf21-gw62. https://resolver.caltech.edu/CaltechETD:etd-02042008-110206
Abstract
This thesis explores methods for computing with spikes. A spiking neuron model (SNM) is developed, which uses relatively few variables. A neuron's state is completely determined by the amount of neurotransmitter at its input synapses and the time since it last produced a spike. A spike is treated as a discrete event, which triggers the release of neurotransmitter at the neuron's output synapses. Neurotransmitter affects the voltage potentials of postsynaptic neurons.
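As a rough illustration of this state representation (a minimal sketch only; the parameter names, decay dynamics, and threshold rule below are illustrative assumptions, not the equations from the dissertation), such a neuron might be written in Python as:

```python
import numpy as np

class SpikingNeuron:
    """Minimal sketch of a spiking neuron whose state is only (a) the
    neurotransmitter present at each input synapse and (b) the time since
    its last spike.  All constants and dynamics here are illustrative."""

    def __init__(self, n_inputs, weights, threshold=1.0, decay=0.9, refractory=3):
        self.nt = np.zeros(n_inputs)        # neurotransmitter at each input synapse
        self.weights = np.asarray(weights)  # signed synaptic efficacies
        self.threshold = threshold
        self.decay = decay                  # per-step neurotransmitter decay
        self.refractory = refractory        # steps during which no new spike can occur
        self.time_since_spike = refractory  # start outside the refractory period

    def step(self, input_spikes):
        """Advance one time step; input_spikes is a 0/1 vector of presynaptic events."""
        # A presynaptic spike is a discrete event that deposits neurotransmitter.
        self.nt = self.decay * self.nt + np.asarray(input_spikes, dtype=float)
        self.time_since_spike += 1
        # Neurotransmitter drives the postsynaptic potential of this neuron.
        potential = float(self.weights @ self.nt)
        if potential >= self.threshold and self.time_since_spike > self.refractory:
            self.time_since_spike = 0       # emit a spike (a discrete output event)
            return 1
        return 0
```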
The SNM is able to duplicate many of the properties of biological neurons, including: latency, refractory periods, and oscillatory spiking behavior, thus indicating that it is sufficiently complex to duplicate many of the computations performed by real neurons. Although the inspiration for the SNM comes from biology, the purpose of this research is to develop better computational devices.
Several single neuron building blocks are designed to perform useful functions, such as: a high gain response, a memory oscillator, a bounded threshold response, and an identity or inverse response. These single neuron building blocks are then used in larger networks to accomplish more complex tasks including: synchronizing input stimuli, recognizing spiking patterns, evaluating Boolean logic expressions, memorizing spike patterns, counting input spikes, multiplexing signals, comparing spike patterns, and recalling an associative memory.
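For instance, a single thresholded unit can act like a Boolean AND gate when its threshold is reachable only by near-coincident spikes on both inputs. Reusing the illustrative SpikingNeuron class from the sketch above (a standard coincidence-detection construction, not necessarily the one developed in the thesis):

```python
# Hypothetical AND-like response: two excitatory synapses, with the threshold
# set so that only simultaneous input spikes push the potential over it.
and_neuron = SpikingNeuron(n_inputs=2, weights=[0.6, 0.6], threshold=1.0)
print(and_neuron.step([1, 0]))  # 0: a lone input spike stays below threshold

and_neuron = SpikingNeuron(n_inputs=2, weights=[0.6, 0.6], threshold=1.0)
print(and_neuron.step([1, 1]))  # 1: coincident spikes exceed threshold
```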
When using the SNM, there are several possible methods for encoding information within a spike train. With synchronous spike patterns, each spike can encode a single bit. The strength of an input stimulus may be retained within the output phase of a spike or logarithmically encoded in the neurotransmitter released at a synapse. And when two sensory neurons receive the same input signal, the time duration of the stimulus can be linearly encoded within their phase differences, while the strength of the input signal is logarithmically encoded in their firing rates.
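As a worked example of the logarithmic rate code (the constants k and r0 below are illustrative assumptions, not values from the thesis), doubling the stimulus strength adds a fixed increment to the firing rate, so equal ratios of strength map to equal differences in rate:

```python
import numpy as np

# Hypothetical logarithmic rate code: firing rate grows with the log of
# stimulus strength; k and r0 are illustrative constants.
k, r0 = 10.0, 5.0

def rate_from_strength(strength):
    return r0 + k * np.log(strength)

def strength_from_rate(rate):
    return np.exp((rate - r0) / k)

for s in (1.0, 2.0, 4.0, 8.0):
    r = rate_from_strength(s)
    print(f"strength {s:>4}: rate {r:5.2f} Hz -> decoded strength "
          f"{strength_from_rate(r):.2f}")
```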
Learning may also be incorporated into an SNM network. A special feedforward network architecture is presented, in which each neuron has either an inhibitory or excitatory effect on all of the neurons to which it connects. A new learning rule is developed to train this network to respond to any combination of input spike patterns.
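The thesis's specific rule is not reproduced in this abstract. As a rough sketch of what training such a sign-constrained feedforward layer might look like, a perceptron-style update that clips each presynaptic neuron's outgoing weights to remain purely excitatory or purely inhibitory (an assumption mirroring the architecture described above, not the thesis's exact formulation) could be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 3

# Each presynaptic neuron is globally excitatory (+1) or inhibitory (-1),
# so all of its outgoing weights share that sign.
sign = rng.choice([-1.0, 1.0], size=n_in)
w = np.abs(rng.normal(0.1, 0.05, size=(n_out, n_in))) * sign

def forward(x):
    """Binary response of each output unit to a binary input spike pattern x."""
    return (w @ x >= 1.0).astype(float)

def train_step(x, target, lr=0.05):
    """Perceptron-like update; weights are clipped so each column keeps its sign."""
    global w
    err = target - forward(x)           # +1: should have fired, -1: fired wrongly
    w += lr * np.outer(err, x)
    w = np.where(sign > 0, np.clip(w, 0.0, None), np.clip(w, None, 0.0))
```

Repeated calls to train_step with (input pattern, target response) pairs would then shape the layer's response while preserving the excitatory/inhibitory character of each input neuron.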
Item Type: | Thesis (Dissertation (Ph.D.)) |
---|---|
Degree Grantor: | California Institute of Technology |
Division: | Engineering and Applied Science |
Major Option: | Applied Physics |
Thesis Availability: | Public (worldwide access) |
Research Advisor(s): | |
Thesis Committee: | |
Defense Date: | 27 May 1998 |
Record Number: | CaltechETD:etd-02042008-110206 |
Persistent URL: | https://resolver.caltech.edu/CaltechETD:etd-02042008-110206 |
DOI: | 10.7907/vf21-gw62 |
Default Usage Policy: | No commercial reproduction, distribution, display or performance rights in this work are provided. |
ID Code: | 495 |
Collection: | CaltechTHESIS |
Deposited By: | Imported from ETD-db |
Deposited On: | 20 Feb 2008 |
Last Modified: | 16 Apr 2021 23:20 |
Thesis Files
PDF (Frank_ts_1998.pdf) - Final Version. See Usage Policy. 17MB