Cross-Task Generalization via Natural Language Crowdsourcing Instructions

Concepts: Crowdsourcing, Schema, Natural language understanding
DOI: 10.18653/v1/2022.acl-long.244
Publication Date: 2022-06-03T01:34:53Z
ABSTRACT
Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples. Despite the success of conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks). A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it. To study this, we introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from the crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones. We adopt generative pre-trained language models to encode task-specific instructions along with the input and generate the task output. Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19% better for models utilizing instructions). These models, however, are far behind an estimated performance upperbound, indicating significant room for more progress in this direction.
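
The instruction-encoding step described in the abstract can be illustrated with a minimal sketch: the task's natural-language instructions are concatenated with an instance input and fed to a generative pre-trained seq2seq model, which then generates the output. The sketch below assumes the Hugging Face transformers library and a BART-style checkpoint; the field names and prompt layout are illustrative assumptions, not the paper's exact encoding.

# Minimal sketch (not the authors' released code): encode task instructions plus an
# instance input with a generative pre-trained LM and generate the output.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-base"  # any seq2seq checkpoint works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical task entry in the spirit of the paper's unified schema
# (a definition plus a positive example); exact field names are assumptions.
task = {
    "definition": "Answer the question using the information in the passage.",
    "positive_example": {
        "input": "Passage: The sky is blue. Question: What color is the sky?",
        "output": "blue",
    },
}
instance_input = ("Passage: Water boils at 100 C at sea level. "
                  "Question: At what temperature does water boil at sea level?")

# Encode the task-specific instructions along with the instance input.
prompt = (
    f"Definition: {task['definition']}\n"
    f"Example input: {task['positive_example']['input']}\n"
    f"Example output: {task['positive_example']['output']}\n"
    f"Input: {instance_input}\n"
    f"Output:"
)

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

In the cross-task setup, a model would be fine-tuned on such (prompt, output) pairs for a set of seen tasks and then evaluated, without further training, on prompts built from the instructions of held-out unseen tasks.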