Abstract
All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of nonideal activation functions, which are truncated, asymmetric, and have a nonstandard gain; the restriction of the network parameters to non-negative values; and the limited accuracy of the weights. A backpropagation-based learning rule is presented that compensates for these nonidealities and enables the implementation of all-optical multilayer perceptrons in which learning occurs under computer control. The good performance of this learning rule, even with a small number of weight levels, is illustrated by a series of computer simulations incorporating the nonidealities.
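The nonidealities listed in the abstract can be modeled concretely. The following is a minimal Python sketch of a forward pass through such a network; all function names, parameter values (gain, truncation interval, number of weight levels), and the layer sizes are illustrative assumptions, not taken from the paper, and the paper's actual learning rule is not reproduced here.

```python
import numpy as np

def truncated_sigmoid(x, gain=3.0, lo=0.1, hi=0.9):
    """Nonideal activation: a sigmoid with a nonstandard gain whose
    output is truncated (clipped) to [lo, hi]. Assumed model of an
    optical activation; asymmetry could be added via an offset."""
    return np.clip(1.0 / (1.0 + np.exp(-gain * x)), lo, hi)

def discretize(w, levels=8, w_max=1.0):
    """Quantize weights to `levels` equally spaced non-negative values
    in [0, w_max], modeling both the limited weight accuracy and the
    non-negativity constraint of an all-optical implementation."""
    step = w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, 0.0, w_max)

# Forward pass of a hypothetical 2-3-1 perceptron: the network only
# ever "sees" the discretized, non-negative weight copies.
rng = np.random.default_rng(42)
W1 = discretize(rng.uniform(0.0, 1.0, size=(3, 2)))
W2 = discretize(rng.uniform(0.0, 1.0, size=(1, 3)))
x = np.array([0.2, 0.8])
h = truncated_sigmoid(W1 @ x)   # hidden activations, confined to [0.1, 0.9]
y = truncated_sigmoid(W2 @ h)   # network output, confined to [0.1, 0.9]
```

In a computer-controlled learning scheme of the kind the abstract describes, the gradient computation would run off-line on continuous weight copies, with the discretized values written back to the optical hardware; this sketch shows only the constrained forward path.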
Original language | English |
---|---|
Pages (from-to) | 1305-1315 |
Number of pages | 11 |
Journal | Optical Engineering |
Volume | 37 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 1998 |
Keywords
- Activation function
- Backpropagation
- Liquid crystal light valve
- Neural network
- Non-negative neural networks
- Optical multilayer perceptron
- Weight discretization