- Emotion and Mood Recognition
- Face recognition and analysis
- Social Robot Interaction and HRI
- Face and Expression Recognition
- Color perception and design
- Face Recognition and Perception
- Mental Health Research Topics
- Evolutionary Psychology and Human Behavior
- Digital Mental Health Interventions
- Speech and Audio Processing
- Gaze Tracking and Assistive Technology
- Hand Gesture Recognition Systems
- Personality Traits and Psychology
- AI in Service Interactions
- Music and Audio Processing
- Human Pose and Action Recognition
- Resilience and Mental Health
- Context-Aware Activity Recognition Systems
- Speech and dialogue systems
- Action Observation and Synchronization
- EEG and Brain-Computer Interfaces
- Ethics and Social Impacts of AI
- Mind wandering and attention
- Video Surveillance and Tracking Methods
- Advanced Image and Video Retrieval Techniques
University of Cambridge
2016-2025
PRG S&Tech (South Korea)
2024
Institute of Electrical and Electronics Engineers
2021
Signal Processing (United States)
2021
Fundación INTRAS
2020
Middle East Technical University
2005-2020
Politecnico di Milano
2020
RWTH Aachen University
2020
Technological Institute of Castilla y León
2020
University of Glasgow
2019
Automatic affect analysis has attracted great interest in various contexts, including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, ...
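To make the decomposition concrete, here is a minimal sketch of such a pipeline as composable stages; the function names (register_face, represent, recognize) and their trivial bodies are illustrative placeholders, not the surveyed methods:

```python
import numpy as np

# Illustrative decomposition of an affect analysis pipeline into the
# components the survey names: registration, representation, recognition.
# All function bodies are placeholder stand-ins, not the surveyed methods.

def register_face(frame: np.ndarray) -> np.ndarray:
    """Align the face to a canonical frame (here: identity placeholder)."""
    return frame

def represent(face: np.ndarray) -> np.ndarray:
    """Encode the registered face as a feature vector (here: raw pixels)."""
    return face.ravel().astype(float)

def recognize(features: np.ndarray) -> str:
    """Map features to an affect label (here: a trivial threshold rule)."""
    return "smile" if features.mean() > 0.5 else "neutral"

frame = np.random.rand(64, 64)   # stand-in for a cropped face image
print(recognize(represent(register_face(frame))))
```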
Past research in the analysis of human affect has focused on recognition of prototypic expressions of six basic emotions based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect. Converging with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for ...
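A toy illustration of this kind of cue fusion for continuous prediction, using synthetic features and scikit-learn; the feature dimensions and the fusion-by-concatenation choice are assumptions of the sketch, not the paper's actual setup:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
face = rng.normal(size=(n, 20))      # facial expression features (synthetic)
shoulder = rng.normal(size=(n, 6))   # shoulder gesture features (synthetic)
audio = rng.normal(size=(n, 12))     # audio cues (synthetic)
valence = face[:, 0] + 0.5 * audio[:, 0] + 0.1 * rng.normal(size=n)

X = np.hstack([face, shoulder, audio])   # fuse cues by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, valence, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```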
Recognition and analysis of human emotions have attracted a lot of interest in the past two decades and have been researched extensively in neuroscience, psychology, cognitive sciences, and computer sciences. Most of the research on machine analysis of human emotion has focused on recognition of prototypic expressions of six basic emotions based on data that has been posed on demand and acquired in laboratory settings. More recently, there has been a shift toward recognition of affective displays recorded in naturalistic settings as driven by real-world applications. This shift in affective computing research is aimed toward subtle, ...
Despite major advances within the affective computing research field, modelling, analysing, interpreting and responding to naturalistic human behaviour still remains a challenge for automated systems, as emotions are complex constructs with fuzzy boundaries and substantial individual variations in expression and experience. Thus, a small number of discrete categories (e.g., happiness and sadness) may not reflect the subtlety and complexity of the affective states conveyed by such rich sources of information. Therefore, behavioural ...
Automatic distinction between posed and spontaneous expressions is an unsolved problem. Previous studies in the cognitive sciences indicated that the automatic separation of posed from spontaneous expressions is possible using the face modality alone. However, little is known about the information contained in head and shoulder motion. In this work, we propose to (i) distinguish between posed and spontaneous smiles by fusing the head, face, and shoulder modalities, (ii) investigate which modalities carry important information and how the modalities relate to each other, and (iii) examine to what extent the temporal dynamics of these signals ...
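A rough sketch of the per-modality vs. fused comparison on synthetic data; the feature dimensions, the logistic classifiers, and the probability-averaging fusion are illustrative stand-ins for the paper's actual method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
y = rng.integers(0, 2, size=n)                 # 0 = posed, 1 = spontaneous
mods = {                                        # synthetic per-modality features
    "head": rng.normal(size=(n, 5)) + 0.6 * y[:, None],
    "face": rng.normal(size=(n, 8)) + 1.0 * y[:, None],
    "shoulder": rng.normal(size=(n, 4)) + 0.3 * y[:, None],
}
idx_tr, idx_te = train_test_split(np.arange(n), random_state=1)

probs = []
for name, X in mods.items():
    clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
    print(f"{name:8s} accuracy: {clf.score(X[idx_te], y[idx_te]):.2f}")
    probs.append(clf.predict_proba(X[idx_te])[:, 1])

fused = (np.mean(probs, axis=0) > 0.5).astype(int)  # average the probabilities
print(f"fused    accuracy: {(fused == y[idx_te]).mean():.2f}")
```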
This study examined the predictive power of personal resources (i.e., self-esteem, optimism, and perceived control), severity of earthquake experience (material and human loss, and threat), and coping self-efficacy (CSE) on general distress, intrusion, and avoidance symptoms among survivors of the 1999 Marmara earthquake in Turkey. Specifically, we expected that CSE would mediate the links between personal resources, earthquake experience, and distress. Survivors (N = 336) filled out various measures of exposure, CSE, and distress. Results of path analyses indicated ...
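The mediation hypothesis can be sketched with two least-squares regressions; the coefficients below are fitted on synthetic data, not the study's, and the simple product-of-paths indirect effect is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 336                                    # sample size reported in the study
resources = rng.normal(size=n)             # personal resources (synthetic)
cse = 0.5 * resources + rng.normal(size=n)          # coping self-efficacy
distress = -0.6 * cse + 0.1 * resources + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares coefficients (intercept excluded from the return)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(cse, resources)[0]                   # path a: resources -> CSE
b, c_prime = ols(distress, cse, resources)   # path b and direct effect c'
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```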
Psychologists have long explored the mechanisms with which humans recognize other humans' affective states from modalities such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of segments called onset, apex, and offset. Stemming from these findings, computer scientists, over the past 15 ...
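A minimal sketch of segmenting an expression-intensity signal into these temporal phases; the synthetic curve and the velocity threshold are assumptions of the sketch, not a method from the literature:

```python
import numpy as np

t = np.linspace(0, 1, 200)
intensity = np.sin(np.pi * t) ** 2           # rise, peak, decay (synthetic)
velocity = np.gradient(intensity, t)

eps = 0.2                                    # "flat" threshold (assumed)
segment = np.full(t.shape, "neutral", dtype=object)
segment[velocity > eps] = "onset"            # intensity increasing
segment[velocity < -eps] = "offset"          # intensity decreasing
segment[(np.abs(velocity) <= eps) & (intensity > 0.8)] = "apex"  # flat peak

for name in ("onset", "apex", "offset"):
    idx = np.flatnonzero(segment == name)
    print(f"{name:6s}: frames {idx[0]}-{idx[-1]} ({idx.size} frames)")
```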
To be able to develop and test robust affective multimodal systems, researchers need access to novel databases containing representative samples of human multi-modal expressive behavior. The creation of such databases requires a major effort in the definition of representative behaviors, the choice of expressive modalities, and the collection and labeling of large amounts of data. At present, public databases only exist for single expressive modalities such as facial expression analysis. There also exist a number of gesture databases of static and dynamic hand postures and gestures. However, there is not a readily ...
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving and producing the behaviors required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting which seems particularly suited for the study of such behavior, since it requires only very limited verbal understanding on the part of the machine. ...
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information, first at a feature level, in which the data from both modalities are combined before classification, and later at a decision level, in which we integrate the outputs of the monomodal systems by the use of suitable criteria. We then evaluate these fusion approaches in terms of performance over recognition based on a single modality ...
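The two fusion strategies can be contrasted in a few lines on synthetic data; the classifiers and the probability-averaging criterion are illustrative choices, not the paper's exact ones:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, classes = 900, 4                          # synthetic emotion labels
y = rng.integers(0, classes, size=n)
one_hot = np.eye(classes)[y]
face = rng.normal(size=(n, 10))              # face features (synthetic)
face[:, :classes] += 1.5 * one_hot
body = rng.normal(size=(n, 6))               # body gesture features (synthetic)
body[:, :classes] += 0.8 * one_hot
idx_tr, idx_te = train_test_split(np.arange(n), random_state=3)

# Feature-level fusion: concatenate both modalities before classification.
X = np.hstack([face, body])
feat = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
print(f"feature-level fusion accuracy: {feat.score(X[idx_te], y[idx_te]):.2f}")

# Decision-level fusion: average the monomodal class probabilities.
p = np.mean([LogisticRegression(max_iter=1000).fit(m[idx_tr], y[idx_tr])
             .predict_proba(m[idx_te]) for m in (face, body)], axis=0)
print(f"decision-level fusion accuracy: {(p.argmax(1) == y[idx_te]).mean():.2f}")
```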
In this paper we introduce a novel dataset, the Multimodal Human-Human-Robot-Interactions (MHHRI) dataset, with the aim of studying personality simultaneously in human-human interactions (HHI) and human-robot interactions (HRI) and its relationship with engagement. The data was collected during a controlled interaction study where dyadic interactions between two human participants and triadic interactions between two human participants and a robot took place, with interactants asking a set of personal questions to each other. Interactions were recorded using static and dynamic cameras as well as biosensors, ...
Engagement is crucial to designing intelligent systems that can adapt to the characteristics of their users. This paper focuses on the automatic analysis and classification of engagement based on the humans' and robot's personality profiles in a triadic human-human-robot interaction setting. More explicitly, we present a study that involves two participants interacting with a humanoid robot, and investigate how the participants' personalities can be used together to predict the engagement state of each participant. The fully automatic system is first trained on Big ...
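A toy version of such a personality-to-engagement predictor; the feature layout, the synthetic engagement rule, and the random-forest classifier are assumptions of the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
p_self = rng.uniform(1, 5, size=(n, 5))    # Big Five of the participant
p_other = rng.uniform(1, 5, size=(n, 5))   # Big Five of the other participant
p_robot = rng.uniform(1, 5, size=(n, 5))   # robot's scripted personality
X = np.hstack([p_self, p_other, p_robot])

# Synthetic rule: extraverted participants with extraverted robots engage more.
engaged = ((p_self[:, 0] + p_robot[:, 0]) / 2
           + rng.normal(0, 0.5, n) > 3).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=4)
print(f"CV accuracy: {cross_val_score(clf, X, engaged, cv=5).mean():.2f}")
```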
The activations of Facial Action Units (AUs) mutually influence one another. While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display. This paper proposes an AU relationship modelling approach that deep learns a unique graph to describe the relationships of the target facial display. Our approach first encodes each AU's activation status and its association with other AUs into a node feature. Then, it learns multi-dimensional edge features to describe multiple task-specific relationship cues between AUs. ...
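One message-passing step over such a graph might look as follows; the dimensions, the softmax weighting, and the single linear transform are illustrative simplifications, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(5)
n_aus, d_node, d_edge = 5, 8, 4       # illustrative sizes, not the paper's
node = rng.normal(size=(n_aus, d_node))           # per-AU node features
edge = rng.normal(size=(n_aus, n_aus, d_edge))    # multi-dimensional edge features
W_e = rng.normal(size=(d_edge, 1))                # maps each edge to a scalar weight
W_n = rng.normal(size=(d_node, d_node))

# One message-passing step: every AU aggregates the other AUs' features,
# weighted by a learned function of the pairwise edge features.
weights = (edge @ W_e).squeeze(-1)                # (n_aus, n_aus) relation weights
weights = np.exp(weights) / np.exp(weights).sum(axis=1, keepdims=True)
node_updated = np.tanh(weights @ node @ W_n)      # refined per-AU features
print(node_updated.shape)                         # (5, 8)
```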
The World Health Organization recommends that employers take action to protect and promote mental well-being at work. However, the extent to which these recommended practices can be implemented in the workplace is limited by a lack of resources and personnel availability. Robots have shown great potential for promoting well-being, and the gradual adoption of such assistive technology may allow employers to overcome the aforementioned resource barriers. This paper presents the first study that investigates the deployment and use of two different ...
Personality determines a wide variety of human daily and working behaviours, and is crucial for understanding human internal and external states. In recent years, a large number of automatic personality computing approaches have been developed to predict either the apparent or self-reported personality of a subject based on non-verbal audio-visual behaviours. However, the majority of them suffer from complex and dataset-specific pre-processing steps and model training tricks. In the absence of a standardized benchmark with consistent experimental ...
The last decade has shown a growing interest in robots as well-being coaches. However, insightful guidelines for the design of robots as coaches to promote mental well-being have not yet been proposed. This article details design and ethical recommendations based on a qualitative analysis drawing on a grounded theory approach, which was conducted with a three-step iterative process that included user-centered studies involving robotic coaches, namely: (1) a user-centred study with 11 participants consisting of both prospective users who had ...
The extraction of descriptive features from sequences of faces is a fundamental problem in facial expression analysis. Facial expressions are represented by psychologists as a combination of elementary movements known as action units: each movement is localised and its intensity is specified with a score that is small when the movement is subtle and large when the movement is pronounced. Inspired by this approach, we propose a novel data-driven feature extraction framework that represents facial expression variations as a linear combination of basis functions, whose coefficients are proportional to movement intensity. ...
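A small numeric illustration of the idea: project a movement signal onto a basis of localised functions and read intensities off the least-squares coefficients. The Gaussian basis here is a stand-in for the learned basis, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 100
t = np.linspace(0, 1, T)
# Basis of localised bumps (Gaussians); the paper's learned basis differs.
centers = np.linspace(0.1, 0.9, 6)
B = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.01)    # (T, 6)

# A facial movement signal: one pronounced bump, one subtle bump, plus noise.
signal = 2.0 * B[:, 1] + 0.4 * B[:, 4] + 0.05 * rng.normal(size=T)

# Least-squares coefficients: proportional to each movement's intensity.
coef, *_ = np.linalg.lstsq(B, signal, rcond=None)
print(np.round(coef, 2))    # large at index 1, small at index 4
```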
In this paper we propose a supervised initialization scheme for cascaded face alignment based on explicit head pose estimation. We first investigate the failure cases of most state-of-the-art approaches and observe that these failures often share one common global property, i.e. the head pose variation is usually large. Inspired by this, we propose a deep convolutional network model for reliable and accurate head pose estimation. Instead of using the mean shape, or randomly selected shapes, for initialisation, we propose two schemes for generating initialisation: the first relies on projecting ...
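A schematic of pose-conditioned initialisation; estimate_yaw is a placeholder for the paper's CNN pose estimator, and selecting a per-pose-bin mean shape is a simplified stand-in for the two generation schemes described:

```python
import numpy as np

rng = np.random.default_rng(7)
# Mean landmark shapes per coarse head-pose bin (synthetic stand-ins for
# shapes learned from training data; 68 landmarks, x/y flattened).
pose_bins = np.array([-45.0, 0.0, 45.0])           # yaw bin centres in degrees
bin_shapes = rng.normal(size=(3, 68 * 2))

def estimate_yaw(image: np.ndarray) -> float:
    """Placeholder for the CNN head-pose estimator described in the paper."""
    return 30.0                                     # pretend the CNN said 30 deg

def initial_shape(image: np.ndarray) -> np.ndarray:
    """Pose-conditioned initialisation instead of the global mean shape."""
    yaw = estimate_yaw(image)
    nearest = np.argmin(np.abs(pose_bins - yaw))    # closest pose bin
    return bin_shapes[nearest]

print(initial_shape(np.zeros((128, 128))).shape)    # (136,): cascade start point
```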
In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Differently from existing works, we obtain visual-only and audio-only annotations for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal ...
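A minimal time-continuous prediction sketch: stacking a short frame history gives a regressor temporal context; the window length, the synthetic features, and the ridge model are assumptions of the sketch, not the paper's framework:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
T, d, lag = 2000, 6, 5
feats = rng.normal(size=(T, d))                     # per-frame audio-visual cues
trait = np.convolve(feats[:, 0], np.ones(20) / 20,  # smooth, slowly varying
                    mode="same")                    # target impression trace

# Stack a short history of frames so the regressor sees temporal context.
X = np.hstack([np.roll(feats, k, axis=0) for k in range(lag)])[lag:]
y = trait[lag:]
split = int(0.8 * len(y))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
print(f"held-out R^2: {model.score(X[split:], y[split:]):.2f}")
```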
As Facial Expression Recognition (FER) systems become integrated into our daily lives, these systems need to prioritise making fair decisions instead of aiming only at higher individual accuracy scores. Ranging from surveillance to diagnosing the mental and emotional health conditions of individuals, these systems need to balance the accuracy vs. fairness trade-off to make decisions that do not unjustly discriminate against specific under-represented demographic groups. Identifying bias as a critical problem in facial analysis systems, different methods have ...
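The accuracy-vs-fairness tension can be quantified with per-group metrics; the demographic split, the biased stand-in predictor, and the accuracy-gap measure below are illustrative assumptions, not a method from the paper:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
group = rng.integers(0, 2, size=n)         # synthetic demographic attribute
y_true = rng.integers(0, 2, size=n)        # ground-truth expression label
# A biased stand-in classifier: noisier predictions for group 1.
flip = rng.random(n) < np.where(group == 1, 0.30, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

overall = (y_pred == y_true).mean()
per_group = [(y_pred[group == g] == y_true[group == g]).mean() for g in (0, 1)]
print(f"overall accuracy: {overall:.2f}")
print(f"group accuracies: {per_group[0]:.2f} vs {per_group[1]:.2f}; "
      f"fairness gap: {abs(per_group[0] - per_group[1]):.2f}")
```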