Improved weight initialization for deep and narrow feedforward neural network
KEYWORDS
Initialization
Activation function
Feedforward neural network
Feed forward
Backpropagation
DOI: 10.1016/j.neunet.2024.106362
Publication Date: 2024-05-03
AUTHORS (4)
ABSTRACT
Appropriate weight initialization settings, along with the ReLU activation function, have become cornerstones of modern deep learning, enabling the training and deployment of highly effective and efficient neural network models across diverse areas of artificial intelligence. The problem of the "dying ReLU," where neurons become inactive and yield zero output, presents a significant challenge in the training of deep networks with the ReLU activation function. Theoretical research and various methods have been introduced to address this problem. However, even with these efforts, training remains challenging for extremely deep and narrow feedforward networks. In this paper, we propose a novel weight initialization method to address this issue. We establish several properties of our initial weight matrix and demonstrate how they enable the effective propagation of signal vectors. Through a series of experiments and comparisons with existing methods, we demonstrate the effectiveness of the proposed method.
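The "dying ReLU" failure mode described in the abstract is easy to reproduce numerically. Below is a minimal illustrative sketch, not the paper's proposed initialization: a deep, narrow feedforward network with standard He-normal initialization, in which forward signal vectors frequently collapse to all zeros. The width, depth, sample count, and choice of He-normal init are our own illustrative assumptions.

```python
# Illustrative sketch of the "dying ReLU" problem (NOT the paper's method).
# Width, depth, and He-normal initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
width, depth, n_samples = 4, 50, 1000  # extremely narrow and deep network

h = rng.standard_normal((n_samples, width))  # input signal vectors
for _ in range(depth):
    # He-normal initialization: W_ij ~ N(0, 2 / fan_in), the standard choice for ReLU.
    W = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
    h = np.maximum(h @ W.T, 0.0)  # affine map followed by ReLU

# Once a signal vector becomes all zeros it stays zero: ReLU(W @ 0) = 0.
# With symmetric pre-activations, a layer kills the whole vector with
# probability roughly (1/2) ** width, so survival decays geometrically in depth.
dead = np.mean(np.all(h == 0.0, axis=1))
print(f"inputs with an all-zero signal after {depth} layers: {100 * dead:.1f}%")
```

Under this rough model the per-layer survival probability is about 15/16 for width 4, so after 50 layers only a few percent of inputs are expected to retain a nonzero signal. This geometric collapse is what an initialization scheme for deep, narrow ReLU networks must prevent.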