Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings

Keywords: Iterated function, Principle of compositionality, Representation
DOI: 10.48550/arXiv.2310.18777
Publication Date: 2023-01-01
ABSTRACT
Compositional generalization, the ability of an agent to generalize to unseen combinations of latent factors, is easy for humans but hard for deep neural networks. A line of research in cognitive science has hypothesized a process, ``iterated learning,'' to help explain how human language developed this ability; the theory rests on simultaneous pressures towards compressibility (when an ignorant agent learns from an informed one) and expressivity (when it uses the representation for downstream tasks). Inspired by this process, we propose to improve the compositional generalization of deep networks by using iterated learning on models with simplicial embeddings, which can approximately discretize representations. This approach is further motivated by an analysis of compositionality based on Kolmogorov complexity. We show that this combination of changes improves compositional generalization over other approaches, demonstrating these improvements both on vision tasks with well-understood latent factors and on real molecular graph prediction tasks where the latent structure is unknown.
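To illustrate the discretization mechanism the abstract refers to, below is a minimal sketch of a simplicial embedding layer: the representation is projected into L groups of V coordinates and a softmax is applied within each group, so each group lies on a probability simplex and, at low temperature, approaches a one-hot (discrete) code. The function name, the projection matrix W, and the dimensions L and V here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def simplicial_embedding(z, W, L, V, temperature=1.0):
    """Map a representation z to L groups of V softmax-normalized
    coordinates (one point per probability simplex).

    z: (d,) input representation; W: (d, L*V) projection matrix
    (both hypothetical names for this sketch). Lower temperature
    pushes each group closer to a one-hot, i.e. discrete, code.
    Returns an (L*V,) vector whose every V-sized chunk sums to 1.
    """
    logits = (z @ W).reshape(L, V) / temperature
    # Numerically stable softmax within each group.
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.reshape(L * V)

# Toy usage: a 16-dim representation mapped to 4 groups of 8 choices.
rng = np.random.default_rng(0)
z = rng.normal(size=16)
W = rng.normal(size=(16, 4 * 8))
e = simplicial_embedding(z, W, L=4, V=8)
```

In the iterated-learning setup described above, such approximately discrete group codes act like "words": the compressibility pressure (a fresh model imitating a trained one) favors simple, reusable codes, while the downstream task supplies the expressivity pressure.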