Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations

FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL)
DOI: 10.48550/arxiv.2308.16349
Publication Date: 2023-01-01
ABSTRACT
We introduce Affective Visual Dialog, an emotion explanation and reasoning task as a testbed for research on understanding the formation of emotions in visually grounded conversations. The task involves three skills: (1) Dialog-based Question Answering, (2) Dialog-based Emotion Prediction, and (3) emotion explanation generation based on the dialog. Our key contribution is the collection of a large-scale dataset, dubbed AffectVisDial, consisting of 50K 10-turn visually grounded dialogs as well as concluding emotion attributions and dialog-informed textual explanations, amounting to a total of 27,180 working hours. We explain our design decisions in collecting the dataset and describe the questioner and answerer tasks associated with the participants in the conversation. We train and demonstrate solid Affective Visual Dialog baselines adapted from state-of-the-art models. Remarkably, the responses generated by our models show promising emotional reasoning abilities in response to the visually grounded conversations. Our project page is available at https://affective-visual-dialog.github.io.
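To make the dataset description concrete, the sketch below shows one possible way to represent a single AffectVisDial example in Python, based only on what the abstract states (a 10-turn visually grounded dialog, a concluding emotion attribution, and a dialog-informed textual explanation). The class and field names are hypothetical and are not taken from the released dataset or code.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DialogTurn:
        # One question-answer exchange in the visually grounded dialog.
        question: str
        answer: str

    @dataclass
    class AffectVisDialExample:
        # Hypothetical schema; field names are illustrative, not official.
        image_id: str                                             # identifier of the grounding image
        dialog: List[DialogTurn] = field(default_factory=list)   # the 10 Q&A turns
        emotion: str = ""                                         # concluding emotion attribution
        explanation: str = ""                                     # dialog-informed textual explanation

    # The three skills named in the abstract map onto this structure:
    # (1) answer questions given the dialog history, (2) predict `emotion`
    # from the dialog, and (3) generate `explanation` conditioned on the dialog.
    example = AffectVisDialExample(
        image_id="example_0001",
        dialog=[DialogTurn("What is happening in the scene?", "A crowd is celebrating.")],
        emotion="excitement",
        explanation="The lively celebration described in the dialog suggests excitement.",
    )
    print(example.emotion)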