Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules
Hebbian theory
MNIST database
Learning rule
Competitive learning
Leabra
Supervised Learning
DOI:
10.48550/arXiv.2103.10252
Publication Date:
2021-01-01
AUTHORS (4)
ABSTRACT
The brain performs unsupervised learning and (perhaps) simultaneous supervised learning. This raises the question as to whether a hybrid of these methods will produce better learning. Inspired by the rich space of Hebbian rules, we set out to directly learn the unsupervised rule on local information that best augments a supervised signal. We present the Hebbian-augmented training algorithm (HAT) for combining gradient-based learning with an unsupervised rule on pre-synaptic activity, post-synaptic activities, and current weights. We test HAT's effect on a simple problem (Fashion-MNIST) and find consistently higher performance than supervised learning alone. This finding provides empirical evidence that unsupervised learning on synaptic activities provides a strong signal that can be used to augment gradient-based methods. We further find that the meta-learned update rule is a time-varying function; thus, it is difficult to pinpoint an interpretable Hebbian rule that aids in training. We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence.
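As a rough illustration of the training scheme the abstract describes, the sketch below combines a supervised gradient step with an additive update produced by a small learned rule g(pre-synaptic activity, post-synaptic activity, current weight). This is not the authors' implementation; the names LocalRule, hat_step, and eta_local are illustrative, the architecture and hyperparameters are arbitrary, and the meta-learning loop that trains the rule's own parameters (by optimizing the learner's subsequent performance) is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalRule(nn.Module):
    # Small learned network g(pre, post, w) -> per-synapse weight change.
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, pre, post, w):
        # pre: (batch, n_in), post: (batch, n_out), w: (n_out, n_in)
        b, n_in = pre.shape
        n_out = post.shape[1]
        pre_e  = pre.unsqueeze(1).expand(b, n_out, n_in)
        post_e = post.unsqueeze(2).expand(b, n_out, n_in)
        w_e    = w.unsqueeze(0).expand(b, n_out, n_in)
        feats  = torch.stack([pre_e, post_e, w_e], dim=-1)   # (b, n_out, n_in, 3)
        return self.net(feats).squeeze(-1).mean(dim=0)       # (n_out, n_in)

def hat_step(layer, rule, x, y, opt, eta_local=1e-3):
    # One Hebbian-augmented step: a supervised gradient step on the
    # cross-entropy loss, followed by an additive update from the local rule.
    logits = layer(x)
    post = torch.relu(logits).detach()          # post-synaptic activity
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()                                  # gradient-based component
    with torch.no_grad():                       # unsupervised local component
        layer.weight += eta_local * rule(x, post, layer.weight)
    return loss.item()

# Illustrative usage on random data shaped like flattened Fashion-MNIST images.
layer = nn.Linear(784, 10)
rule = LocalRule()
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(hat_step(layer, rule, x, y, opt))

Because the rule receives only pre-synaptic activity, post-synaptic activity, and the current weight, it is a strictly local update in the Hebbian spirit; in the paper this rule is itself meta-learned rather than hand-designed.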