Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?
Transfer of learning
DOI:
10.48550/arxiv.2212.06645
Publication Date:
2022-01-01
AUTHORS (3)
ABSTRACT
Traditional multi-task learning architectures train a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which otherwise can lead to negative transfer. A new type of multi-task learning within NLP homogenizes the architecture as a shared encoder and language model decoder, and it does surprisingly well across a range of diverse tasks. Does this new architecture suffer from task conflicts that require specialized training algorithms? We study how factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer remain surprisingly constant across architectures.
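The abstract frames task conflict as interference between per-task updates to the shared parameters. As a minimal sketch (not the paper's experimental setup), one common proxy for such conflict is the cosine similarity between per-task gradients on the shared encoder: strongly negative values indicate that the tasks pull the shared parameters in opposing directions. The toy model, random data, and loss below are illustrative assumptions, written in PyTorch.

# Sketch: measure task conflict as gradient cosine similarity on shared parameters.
# Toy model and random data are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# "Traditional" multi-task setup: shared encoder, task-specific heads.
shared_encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
task_heads = nn.ModuleList([nn.Linear(32, 4), nn.Linear(32, 4)])

def task_grad(task_id: int, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Flattened gradient of one task's loss w.r.t. the shared encoder."""
    logits = task_heads[task_id](shared_encoder(x))
    loss = F.cross_entropy(logits, y)
    grads = torch.autograd.grad(loss, list(shared_encoder.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

# Toy batches for two tasks (random inputs and labels).
x0, y0 = torch.randn(8, 16), torch.randint(0, 4, (8,))
x1, y1 = torch.randn(8, 16), torch.randint(0, 4, (8,))

g0, g1 = task_grad(0, x0, y0), task_grad(1, x1, y1)
conflict = F.cosine_similarity(g0, g1, dim=0)
print(f"gradient cosine similarity between tasks: {conflict.item():+.3f}")

In a text-to-text model the same measurement applies, except that all parameters are shared and the "tasks" differ only in their input and target text; the paper's question is whether this shift changes how often such conflicts arise.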