Automatic Recognition of Facial Displays of Unfelt Emotions
FOS: Computer and information sciences
Computer Science - Computer Vision and Pattern Recognition (cs.CV)
02 engineering and technology
0202 electrical engineering, electronic engineering, information engineering
Emotion recognition
Face recognition
Facial expression recognition
Human face recognition (Computer science)
Feature extraction
Face
Psychology
Observers
Trajectory
Affective computing
Human behaviour analysis
Unfelt facial expression of emotion
DOI:
10.48550/arxiv.1707.04061
Publication Date:
2021-04-01
AUTHORS (8)
ABSTRACT
Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotion states. We show that, overall, the problem of recognizing whether facial movements express authentic emotions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. Results of the proposed model show that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
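The abstract describes aggregating deep features along fiducial (facial-landmark) trajectories over a video clip. The snippet below is a minimal sketch of that general idea only, not the authors' implementation: the function name, the patch size, and the average-pooling choices are illustrative assumptions, assuming per-frame deep feature maps from some CNN and per-frame landmark coordinates are already available.

# Minimal sketch (not the authors' code): pool deep features sampled along
# facial landmark (fiducial) trajectories across a video clip.
import numpy as np

def aggregate_along_trajectories(frames, landmarks_per_frame, feature_maps, patch=3):
    """frames: list of HxWx3 images; landmarks_per_frame: list of (K, 2) arrays
    of (x, y) fiducial points per frame; feature_maps: list of (H', W', C) deep
    feature maps aligned with each frame. Returns a (K, C) clip descriptor
    obtained by pooling features along each landmark's trajectory."""
    K = landmarks_per_frame[0].shape[0]
    C = feature_maps[0].shape[-1]
    descriptor = np.zeros((K, C), dtype=np.float32)
    for lm, fmap, frame in zip(landmarks_per_frame, feature_maps, frames):
        # Scale factors from image coordinates to feature-map coordinates.
        h_ratio = fmap.shape[0] / frame.shape[0]
        w_ratio = fmap.shape[1] / frame.shape[1]
        for k, (x, y) in enumerate(lm):
            # Map the landmark into the feature map and pool a small patch around it.
            r = int(np.clip(round(y * h_ratio), 0, fmap.shape[0] - 1))
            c = int(np.clip(round(x * w_ratio), 0, fmap.shape[1] - 1))
            r0, r1 = max(r - patch, 0), min(r + patch + 1, fmap.shape[0])
            c0, c1 = max(c - patch, 0), min(c + patch + 1, fmap.shape[1])
            descriptor[k] += fmap[r0:r1, c0:c1].mean(axis=(0, 1))
    # Temporal average pooling over the clip.
    return descriptor / len(frames)

Pooling only around landmark trajectories, rather than over the whole face, keeps the representation focused on the short, subtle movements that the abstract identifies as discriminative between felt and unfelt expressions.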