Abstract
All-optical multilayer perceptrons differ in several ways from the ideal neural network model. Examples include non-ideal activation functions that are truncated, asymmetric, and have a non-standard gain; the restriction of network parameters to non-negative values; and the use of limited-accuracy weights. In this paper an adaptation of the backpropagation learning rule is presented that compensates for these three non-idealities. The good performance of this learning rule is illustrated by a series of experiments. The algorithm enables the implementation of all-optical multilayer perceptrons in which learning occurs under the control of a computer.
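The abstract does not give the adapted learning rule itself, but the three constraints it names can be illustrated with a minimal NumPy sketch of one backpropagation step: a truncated sigmoid with non-standard gain, weights projected onto non-negative values, and weights rounded to a limited-accuracy grid. The gain, truncation bounds, weight range, and 8-bit resolution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

GAIN, LO, HI = 2.0, 0.05, 0.95   # assumed non-standard gain and truncation bounds
LEVELS, W_MAX = 256, 4.0         # assumed 8-bit weight resolution and weight range

def act(x):
    # Non-ideal activation: sigmoid with gain GAIN, truncated to [LO, HI].
    return np.clip(1.0 / (1.0 + np.exp(-GAIN * x)), LO, HI)

def act_deriv(x):
    # Derivative of the non-ideal activation; zero where the output is truncated.
    s = 1.0 / (1.0 + np.exp(-GAIN * x))
    d = GAIN * s * (1.0 - s)
    d[(s <= LO) | (s >= HI)] = 0.0
    return d

def project(w):
    # Restrict weights to non-negative values on a fixed quantization grid.
    step = W_MAX / (LEVELS - 1)
    return np.clip(np.round(w / step) * step, 0.0, W_MAX)

# Tiny 2-3-1 perceptron; weights start non-negative and quantized.
W1 = project(rng.uniform(0.0, 1.0, (3, 2)))
W2 = project(rng.uniform(0.0, 1.0, (1, 3)))

x = np.array([0.2, 0.8])
target = np.array([0.6])

# Forward pass through the non-ideal activations.
z1 = W1 @ x; h = act(z1)
z2 = W2 @ h; y = act(z2)

# Backpropagation using the truncated activation's derivative.
lr = 0.5
d2 = (y - target) * act_deriv(z2)
d1 = (W2.T @ d2) * act_deriv(z1)

# Gradient step, then projection back onto the feasible (non-negative,
# quantized) weight set -- one simple way to honor the hardware constraints.
W2 = project(W2 - lr * np.outer(d2, h))
W1 = project(W1 - lr * np.outer(d1, x))
```

After every update the weights remain implementable by the optical hardware: non-negative, bounded, and on the quantization grid. This projected-gradient scheme is only one plausible realization of such a compensation strategy, not the paper's specific algorithm.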
Original language | English |
---|---|
Pages | 8 |
Number of pages | 8 |
Publication status | Published - 1996 |
Event | Proceedings of the 1996 1st International Symposium on Neuro-Fuzzy Systems, AT'96 - Lausanne, Switzerland Duration: 29 Aug 1996 → 31 Aug 1996 |
Conference
Conference | Proceedings of the 1996 1st International Symposium on Neuro-Fuzzy Systems, AT'96 |
---|---|
City | Lausanne, Switzerland |
Period | 29/08/1996 → 31/08/1996 |