Implicitly Defined Layers in Neural Networks
TOPICS: Backpropagation, Feedforward neural networks
DOI: 10.48550/arxiv.2003.01822
Publication Date: 2020-01-01
AUTHORS (5)
ABSTRACT
In conventional formulations of multilayer feedforward neural networks, the individual layers are customarily defined by explicit functions. In this paper we demonstrate that defining individual layers in a network implicitly can provide much richer representations than the standard explicit one, consequently enabling a vastly broader class of end-to-end trainable architectures. We present a general framework for implicitly defined layers, where much of the theoretical analysis of such layers can be addressed through the implicit function theorem. We also show how implicitly defined layers can be seamlessly incorporated into existing machine learning libraries, in particular with respect to current automatic differentiation techniques used in backpropagation-based training. Finally, we demonstrate the versatility and relevance of our proposed approach on a number of diverse example problems, with promising results.
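To make the idea concrete, the following is a minimal sketch (not the paper's own implementation) of an implicitly defined layer in PyTorch. The layer output z is the root of F(x, z) = tanh(Wz + x) - z = 0, found by fixed-point iteration in the forward pass; the backward pass applies the implicit function theorem, dz/dx = -(dF/dz)^{-1} dF/dx, instead of backpropagating through the solver iterations. The choice of F, the solver, and all names here are illustrative assumptions.

```python
import torch

class ImplicitTanhLayer(torch.autograd.Function):
    """Layer defined implicitly by the root of F(x, z) = tanh(W z + x) - z.

    Forward: solve the fixed point z = tanh(W z + x) by simple iteration.
    Backward: apply the implicit function theorem, so no intermediate
    solver states need to be stored for autodiff.
    """

    @staticmethod
    def forward(ctx, x, W):
        z = torch.zeros_like(x)
        for _ in range(100):              # naive fixed-point iteration
            z = torch.tanh(W @ z + x)
        ctx.save_for_backward(x, W, z)
        return z

    @staticmethod
    def backward(ctx, grad_z):
        x, W, z = ctx.saved_tensors
        s = 1.0 - z ** 2                  # tanh'(W z + x) at the fixed point
        Jz = s[:, None] * W - torch.eye(len(x))   # dF/dz = diag(s) W - I
        u = torch.linalg.solve(Jz.T, -grad_z)     # adjoint linear system
        grad_x = s * u                    # dL/dx = (dF/dx)^T u, dF/dx = diag(s)
        grad_W = torch.outer(s * u, z)    # dL/dW from u^T (dF/dW)
        return grad_x, grad_W

# Usage: gradients flow through the implicit layer like any other op.
x = torch.randn(8, requires_grad=True)
W = (0.1 * torch.randn(8, 8)).requires_grad_()   # scaled so iteration contracts
z = ImplicitTanhLayer.apply(x, W)
z.sum().backward()
print(x.grad.shape, W.grad.shape)
```

Note the practical appeal this sketch illustrates: the backward pass needs only the converged solution z and one linear solve, so memory cost is independent of the number of forward solver iterations.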