Abstract
It is shown that the gain of the sigmoidal activation function used in backpropagation neural networks can be eliminated, since there is a well-defined relationship between the gain, the learning rate, and the set of initial weights. Similarly, it is also possible to eliminate the learning rate by adjusting the gain and the initial weights. The relationship is proven, extended to several variants of the backpropagation learning rule, and applied to hardware implementations of neural networks.
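The abstract does not reproduce the relationship itself, but it can be derived by the chain rule for the gained sigmoid, σ_γ(x) = 1/(1 + e^(−γx)) = σ₁(γx): a network trained with gain γ, learning rate η, and initial weights w₀ evolves identically to one with gain 1, learning rate ηγ², and initial weights γw₀. The Python sketch below is a hypothetical illustration, not code from the paper; it checks this equivalence numerically for a single sigmoid unit trained by gradient descent on squared error.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Sigmoid with gain: sigma_gain(x) = 1 / (1 + exp(-gain * x))
    return 1.0 / (1.0 + np.exp(-gain * x))

def train(w, gain, lr, X, y, epochs=50):
    # Plain gradient descent on squared error for one sigmoid unit.
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            a = sigmoid(np.dot(w, xi), gain)
            # dE/dw = (a - y) * gain * a * (1 - a) * x
            w = w - lr * (a - yi) * gain * a * (1.0 - a) * xi
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(float)
w0 = rng.normal(size=3)           # shared set of initial weights

g, eta = 2.5, 0.1                 # illustrative gain and learning rate
w_gained = train(w0.copy(), gain=g, lr=eta, X=X, y=y)
w_scaled = train(g * w0, gain=1.0, lr=eta * g**2, X=X, y=y)

# The gain-g network's weights, scaled by g, track the gain-1
# network trained with learning rate eta * g**2 step for step.
print("equivalent:", np.allclose(g * w_gained, w_scaled))
```

Running the sketch prints `equivalent: True`: the two training runs stay in exact correspondence (up to floating-point rounding), which is the sense in which the gain is eliminable.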
| Original language | English |
| --- | --- |
| Pages | 365-368 |
| Number of pages | 4 |
| Publication status | Published - 1995 |
| Event | Proceedings of the 1995 IEEE International Conference on Neural Networks. Part 1 (of 6), Perth, Australia, 27 Nov 1995 → 1 Dec 1995 |