Layer-Wise Interpretation of Deep Neural Networks Using Identity Initialization

Keywords: Initialization, Interpretability, Perceptron, Multilayer perceptron, Feature
DOI: 10.48550/arxiv.2102.13333
Publication Date: 2021-01-01
ABSTRACT
The interpretability of neural networks (NNs) is a challenging but essential topic for transparency in the decision-making process using machine learning. One of the reasons for this lack of interpretability is random weight initialization, where the input is randomly embedded into a different feature space in each layer. In this paper, we propose an interpretation method for a deep multilayer perceptron, which is the most general architecture of NNs, based on identity initialization (namely, initialization using identity matrices). The proposed method allows us to analyze the contribution of each neuron to classification and the class likelihood in each hidden layer. As a property of identity initialization, the weight matrices remain near the identity even after training. This enables us to treat the change of features from layer to layer as a contribution to classification. Furthermore, we can separate the output of each hidden layer into a feature map and a map that depicts the class likelihood, by adding extra dimensions according to the number of classes, thereby allowing the calculation of recognition accuracy in each hidden layer and thus revealing the roles of the independent layers, such as feature extraction and classification.
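The following is a minimal PyTorch sketch of the initialization scheme the abstract describes, under assumed details not taken from the paper: square hidden layers of width in_dim + n_classes whose weight matrices start as identities, zero-padded inputs, ReLU activations, and the trailing n_classes units of each layer read out as per-layer class scores. The class name IdentityInitMLP and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn


class IdentityInitMLP(nn.Module):
    """Sketch of an MLP whose square weight matrices start as identity.

    Hypothetical layout: the input (in_dim) is zero-padded with one extra
    dimension per class, so every hidden layer has width in_dim + n_classes
    and the last n_classes units can be read as class scores at any depth.
    """

    def __init__(self, in_dim: int, n_classes: int, n_layers: int = 4):
        super().__init__()
        width = in_dim + n_classes
        self.n_classes = n_classes
        self.layers = nn.ModuleList()
        for _ in range(n_layers):
            layer = nn.Linear(width, width, bias=True)
            nn.init.eye_(layer.weight)   # W = I at initialization
            nn.init.zeros_(layer.bias)
            self.layers.append(layer)

    def forward(self, x: torch.Tensor):
        # Pad the input with zeros in the extra class dimensions.
        pad = torch.zeros(x.size(0), self.n_classes, device=x.device)
        h = torch.cat([x, pad], dim=1)
        per_layer_scores = []
        for layer in self.layers:
            h = torch.relu(layer(h))
            # Read the trailing n_classes units as this layer's class scores.
            per_layer_scores.append(h[:, -self.n_classes:])
        return per_layer_scores  # one score map per hidden layer


# Usage: per-layer class scores for a batch of 32 inputs of dimension 784.
model = IdentityInitMLP(in_dim=784, n_classes=10)
scores = model(torch.randn(32, 784))
print([s.shape for s in scores])  # four tensors of shape (32, 10)
```

Under this assumed layout, the per-layer score tensors can be compared against labels to estimate recognition accuracy at each depth, and the difference between consecutive hidden activations can be inspected as each layer's change of features.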