- Emotion and Mood Recognition
- Color Perception and Design
- Music and Audio Processing
- Visual Perception and Processing Mechanisms
- Hearing Impairment and Communication
- Speech and Audio Processing
- Social Robot Interaction and HRI
- Face Recognition and Perception
- Emotions and Moral Behavior
- Multisensory Perception and Integration
- Language, Discourse, Communication Strategies
- Speech Recognition and Synthesis
- Speech and Dialogue Systems
- Digital Communication and Language
- Face and Expression Recognition
- Neuroscience and Music Perception
- Language, Metaphor, and Cognition
- Advanced Text Analysis Techniques
- Subtitles and Audiovisual Media
- Neural Dynamics and Brain Function
- Aesthetic Perception and Analysis
- Phonetics and Phonology Research
- Action Observation and Synchronization
- 3D Surveying and Cultural Heritage
- Humor Studies and Applications
University of Canterbury
2024
Queen's University Belfast
2011-2023
National University of Ireland
1986-2010
Queen's University
2003-2009
University of Stirling
1994
Google (United States)
1986
Royal Society of Medicine
1982
Two channels have been distinguished in human interaction: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Both linguistics and technology have invested enormous efforts in understanding the first, explicit channel, but the second is not as well understood. Understanding the other party's emotions is one of the key tasks associated with the second, implicit channel. To tackle that task, signal processing and analysis techniques have to be developed, while, at the same time, consolidating...
SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behavior) and Semi-automatic SAL (in which users' experience approximated interacting with a machine). We then recorded user...
Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence and arousal. In addition, psychologists and psychiatrists take the observation of expressive facial and vocal cues into account while evaluating a patient's condition. Depression could result in dampened facial expressions, avoiding eye contact, and using short sentences with flat intonation. It is in this context that we...
The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing under strictly the same conditions. The goal is to provide a common benchmark test set for multi-modal information processing and to bring together the depression and emotion recognition communities, as well as the audio, video and physiological processing communities, to compare the relative merits of the various approaches...
Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use the observation of facial expressions and vocal cues in their evaluation of a patient's level of depression. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This...
Class-based emotion recognition from speech, as performed in most works up to now, entails many restrictions for practical applications. Human emotion is a continuum, and an automatic emotion recognition system must be able to recognise it as such. We present a novel approach to continuous emotion recognition based on Long Short-Term Memory Recurrent Neural Networks, which include modelling of long-range dependencies between observations and thus outperform techniques like Support-Vector Regression. Transferring the innovative concept of additionally modelling emotional...
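The core idea of the abstract above, an LSTM producing one continuous regression output per frame so that the cell state can carry long-range context across observations, can be sketched as follows. This is not the authors' implementation; it is a minimal NumPy sketch of an untrained LSTM forward pass with a linear readout, and all names (`lstm_step`, `run_sequence`, dimension `H`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; the four gates are slices of a single fused affine map."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # shape (4*H,)
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2*H])                # forget gate
    o = sigmoid(z[2*H:3*H])              # output gate
    g = np.tanh(z[3*H:4*H])              # candidate cell update
    c = f * c_prev + i * g               # cell state carries long-range context
    h = o * np.tanh(c)                   # emitted hidden state
    return h, c

def run_sequence(xs, H):
    """Emit one continuous value (e.g. arousal) per frame of feature vectors xs.
    Weights are random here; a real system would train them with backprop."""
    rng = np.random.default_rng(0)
    D = xs.shape[1]
    W = rng.standard_normal((4 * H, D)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    w_out = rng.standard_normal(H) * 0.1     # linear readout to one dimension
    h, c = np.zeros(H), np.zeros(H)
    preds = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        preds.append(w_out @ h)
    return np.array(preds)
```

The contrast with frame-wise Support-Vector Regression is visible in the recurrence: each prediction depends on the whole preceding sequence through `h` and `c`, not on the current feature vector alone.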
Despite major advances within the affective computing research field, modelling, analysing, interpreting and responding to naturalistic human behaviour still remains a challenge for automated systems, as emotions are complex constructs with fuzzy boundaries and substantial individual variations in expression and experience. Thus, a small number of discrete categories (e.g., happiness and sadness) may not reflect the subtlety and complexity of the affective states conveyed by such rich sources of information. Therefore, behavioural...
We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts, in sequence, four roles designed to evoke emotional reactions. The operator and the user are seated in separate rooms; they see each other through teleprompter screens and hear each other through speakers. To allow high-quality recording, the user is captured by five high-resolution, high-framerate cameras and by microphones. All sensor information is recorded synchronously, with an accuracy of 25 μs. In total, we recorded 20 participants, for a total...
The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) "Real-life Depression, and Affect" will be the seventh competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as to compare the relative merits of the various approaches on real-life...
The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition" is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the health and emotion recognition communities, as well as to compare the relative merits...
We present the second Audio-Visual Emotion recognition Challenge and workshop (AVEC 2012), which aims to bring together researchers from the audio and video analysis communities around the topic of emotion recognition. The goal of the challenge is to recognise four continuously valued affective dimensions: arousal, expectancy, power, and valence. There are two sub-challenges: in the Fully Continuous Sub-Challenge, participants have to predict the values of the four dimensions at every moment during the recordings, while for the Word-Level Sub-Challenge a single...
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with the competences in perceiving and producing behaviors required to sustain conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting which seems particularly suited for the study of such behavior, since it requires only very limited verbal understanding on the part of the machine....
We present the first Audio-Visual+ Emotion recognition Challenge and workshop (AV+EC 2015), aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological emotion analysis. This is the 5th event in the AVEC series, but the very first that bridges across audio, video and physiological data. The goal is to provide a common benchmark test set for multimodal information processing, to bring together the respective communities, and to compare the relative merits of the three approaches under well-defined and strictly comparable conditions and establish...
The Audio/Visual Emotion Challenge and Workshop (AVEC 2018) "Bipolar Disorder, and Cross-cultural Affect Recognition" is the eighth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the health and emotion recognition communities, as well as to compare the relative merits of the various approaches on...
Representing everyday emotional states computationally is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask humans to assign an absolute value of intensity to each behavior they observe. Psychological theories and evidence from multiple disciplines, including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based (relative) values to subjective notions is better aligned with...
Computational representation of everyday emotional states is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask people to assign a value of intensity or a class to each behavior they observe. Psychological theories and evidence from multiple disciplines, including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based values to subjective notions is better aligned with the underlying...
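The reference-based annotation idea above, judging emotion relative to a reference rather than on an absolute scale, can be illustrated with a tiny transformation from an absolute intensity trace to ordinal labels between consecutive frames. This is an illustrative sketch, not the authors' method; the function name `to_ordinal` and the stability threshold `eps` are hypothetical choices.

```python
def to_ordinal(trace, eps=0.05):
    """Convert an absolute annotation trace into relative labels between
    consecutive frames: +1 rising, -1 falling, 0 stable (change within eps).
    Downstream models can then learn from these ordinal comparisons instead
    of the noisy absolute values."""
    labels = []
    for prev, cur in zip(trace, trace[1:]):
        d = cur - prev
        labels.append(1 if d > eps else (-1 if d < -eps else 0))
    return labels
```

For example, the trace `[0.1, 0.5, 0.5, 0.2]` becomes `[1, 0, -1]`: the relative labels survive even if two annotators disagree on the absolute level, which is one argument for the ordinal view.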