Understanding the role of individual units in a deep neural network
DOI:
10.1073/pnas.1907375117
Publication Date:
2020-09-01T20:45:27Z
AUTHORS (6)
ABSTRACT
Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
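The core measurement behind this kind of per-unit analysis can be illustrated with a short sketch. The snippet below is not the authors' released code; it is a minimal, self-contained illustration, using NumPy and synthetic arrays in place of real CNN activations and segmentation masks, of how one hidden unit can be scored against one visual concept: binarize the unit's activation maps at a high quantile, then compute intersection-over-union with the concept's segmentation masks. The function name `unit_concept_iou`, the quantile value, and the array shapes are illustrative assumptions.

```python
# Minimal sketch of unit/concept matching in the style of network dissection,
# using synthetic data. In practice, `activations` would come from one channel
# of a CNN layer evaluated over a dataset, and `concept_masks` from a semantic
# segmentation of the same images.
import numpy as np

rng = np.random.default_rng(0)

def unit_concept_iou(activations, concept_masks, quantile=0.99):
    """Score how well a single unit matches a single concept.

    activations:   float array (N, H, W) of the unit's activation maps.
    concept_masks: bool array  (N, H, W) marking where the concept appears.
    The maps are binarized at a high activation quantile, then compared to the
    concept masks with intersection-over-union (IoU).
    """
    threshold = np.quantile(activations, quantile)       # keep top activations
    unit_masks = activations > threshold                  # binarized unit maps
    intersection = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

# Synthetic stand-ins: 8 "images", 7x7 feature maps, one concept mask each.
activations = rng.random((8, 7, 7))
concept_masks = rng.random((8, 7, 7)) > 0.9

print(f"IoU between unit and concept: {unit_concept_iou(activations, concept_masks):.3f}")
```

The GAN analysis described in the abstract runs the same per-unit logic in the other direction: rather than scoring a unit against a concept, small sets of units in an intermediate feature map are zeroed out or forced on, and the resulting change in the generated scene is observed.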