Predicting Emotions in User-Generated Videos
DOI:
10.1609/aaai.v28i1.8724
Publication Date:
2022-06-23T10:47:30Z
AUTHORS (3)
ABSTRACT
User-generated video collections have been expanding rapidly in recent years, and systems for the automatic analysis of these collections are in high demand. While extensive research efforts have been devoted to recognizing semantics like "birthday party" and "skiing", few attempts have been made to understand the emotions carried by videos, e.g., "joy" and "sadness". In this paper, we propose a comprehensive computational framework for predicting emotions in user-generated videos. We first introduce a rigorously designed dataset collected from popular video-sharing websites with manual annotations, which can serve as a valuable benchmark for future research. A large set of features is extracted from this dataset, ranging from low-level visual descriptors and audio features to high-level semantic attributes. Results of our experiments indicate that combining multiple types of features, such as the joint use of audio and visual clues, is important, and that attribute features, such as those containing sentiment-level semantics, are very effective.
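To illustrate the kind of multi-feature combination the abstract describes, below is a minimal late-fusion sketch: one classifier per feature type, with predictions fused by averaging class probabilities. The random stand-in features, the feature dimensions, the choice of linear SVMs, and the averaging fusion rule are all illustrative assumptions for this sketch, not the paper's exact pipeline.

```python
# Minimal late-fusion sketch for video emotion prediction.
# Assumptions (not from the paper): synthetic random features,
# linear SVMs per feature type, probability-averaging fusion.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_videos, n_classes = 200, 3  # toy setup, e.g. "joy", "sadness", "anger"

# Stand-ins for per-video features of the three types mentioned above.
features = {
    "visual": rng.normal(size=(n_videos, 128)),    # low-level visual descriptors
    "audio": rng.normal(size=(n_videos, 64)),      # audio features (e.g., MFCC stats)
    "attribute": rng.normal(size=(n_videos, 32)),  # semantic/sentiment attributes
}
labels = rng.integers(0, n_classes, size=n_videos)

idx_train, idx_test = train_test_split(
    np.arange(n_videos), test_size=0.3, random_state=0
)

# Train one probabilistic SVM per feature type, collect test probabilities.
probs = []
for name, X in features.items():
    clf = SVC(kernel="linear", probability=True, random_state=0)
    clf.fit(X[idx_train], labels[idx_train])
    probs.append(clf.predict_proba(X[idx_test]))

# Fuse by averaging the per-model class probabilities (a common baseline).
fused = np.mean(probs, axis=0)
pred = fused.argmax(axis=1)
print("fused accuracy:", (pred == labels[idx_test]).mean())
```

With real (non-random) features, comparing the fused accuracy against each single-feature classifier is the standard way to check whether combining feature types helps, which is the effect the abstract reports.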
CITATIONS (55)