Roddy Cowie

ORCID: 0000-0003-3480-2223
Research Areas
  • Emotion and Mood Recognition
  • Color perception and design
  • Music and Audio Processing
  • Visual perception and processing mechanisms
  • Hearing Impairment and Communication
  • Speech and Audio Processing
  • Social Robot Interaction and HRI
  • Face Recognition and Perception
  • Emotions and Moral Behavior
  • Multisensory perception and integration
  • Language, Discourse, Communication Strategies
  • Speech Recognition and Synthesis
  • Speech and dialogue systems
  • Digital Communication and Language
  • Face and Expression Recognition
  • Neuroscience and Music Perception
  • Language, Metaphor, and Cognition
  • Advanced Text Analysis Techniques
  • Subtitles and Audiovisual Media
  • Neural dynamics and brain function
  • Aesthetic Perception and Analysis
  • Phonetics and Phonology Research
  • Action Observation and Synchronization
  • 3D Surveying and Cultural Heritage
  • Humor Studies and Applications

University of Canterbury
2024

Queen's University Belfast
2011-2023

National University of Ireland
1986-2010

Queen's University
2003-2009

University of Stirling
1994

Google (United States)
1986

Royal Society of Medicine
1982

Two channels have been distinguished in human interaction: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Both linguistics and technology have invested enormous efforts in understanding the first, explicit channel, but the second is not as well understood. Understanding the other party's emotions is one of the key tasks associated with the second, implicit channel. To tackle that task, signal processing and analysis techniques have to be developed, while, at the same time, consolidating...

10.1109/79.911197 article EN IEEE Signal Processing Magazine 2001-01-01
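
This line of work is associated with representing emotion in a continuous two-dimensional activation-evaluation space rather than as discrete labels. As a minimal illustration of that idea (not the paper's method; the archetype coordinates below are hypothetical), a point in valence-activation space can be mapped to its nearest categorical archetype:

```python
import math

# Hypothetical archetype coordinates in a valence-activation space,
# both axes scaled to [-1, 1]; the values are illustrative only.
ARCHETYPES = {
    "happy":   (0.8, 0.5),
    "angry":   (-0.6, 0.8),
    "sad":     (-0.7, -0.5),
    "relaxed": (0.6, -0.4),
    "neutral": (0.0, 0.0),
}

def nearest_category(valence: float, activation: float) -> str:
    """Map a point in the 2-D emotion space to the closest archetype."""
    return min(
        ARCHETYPES,
        key=lambda name: math.dist((valence, activation), ARCHETYPES[name]),
    )

print(nearest_category(0.7, 0.4))  # -> happy
```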

SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behavior) and Semi-automatic SAL (in which users' experience approximated interacting with a machine). We then recorded user...

10.1109/t-affc.2011.20 article EN IEEE Transactions on Affective Computing 2011-07-26

Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence and arousal. In addition, psychologists and psychiatrists take the observation of expressive facial and vocal cues into account while evaluating a patient's condition. Depression could result in dampened facial expressions, avoiding eye contact, and using short sentences with flat intonation. It is in this context that we...

10.1145/2512530.2512533 article EN 2013-10-17

The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing under strictly the same conditions. The goal is to provide a common benchmark test set for multi-modal information processing and to bring together the depression and emotion recognition communities, as well as the audio and video processing communities, to compare the relative merits of the various approaches...

10.1145/2988257.2988258 preprint EN 2016-10-12

Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use, in their evaluation of a patient's level of depression, the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This...

10.1145/2661806.2661807 article EN 2014-11-03

Class-based emotion recognition from speech, as performed in most works up to now, entails many restrictions for practical applications. Human emotion is a continuum, and an automatic emotion recognition system must be able to recognise it as such. We present a novel approach to continuous emotion recognition based on Long Short-Term Memory Recurrent Neural Networks, which include modelling of long-range dependencies between observations and thus outperform techniques like Support-Vector Regression. Transferring the innovative concept of additionally modelling emotional...

10.21437/interspeech.2008-192 article EN Interspeech 2008 2008-09-22
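
As a rough sketch of the technique the abstract names (frame-level regression of continuous emotion with an LSTM recurrent network), the following PyTorch snippet shows the general shape; it is not the authors' architecture, and the 39-dimensional feature vector and hidden size are assumptions:

```python
import torch
import torch.nn as nn

class ContinuousEmotionLSTM(nn.Module):
    """Regress per-frame activation/valence from acoustic features."""

    def __init__(self, n_features: int = 39, hidden: int = 128):
        super().__init__()
        # The recurrence lets each prediction depend on long-range context.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # two continuous dimensions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)   # (batch, time, hidden)
        return self.head(out)   # (batch, time, 2)

model = ContinuousEmotionLSTM()
frames = torch.randn(4, 100, 39)  # 4 utterances, 100 frames each
predictions = model(frames)       # per-frame emotion estimates
```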

Despite major advances within the affective computing research field, modelling, analysing, interpreting and responding to naturalistic human behaviour still remains a challenge for automated systems, as emotions are complex constructs with fuzzy boundaries and substantial individual variations in expression and experience. Thus, a small number of discrete categories (e.g., happiness and sadness) may not reflect the subtlety and complexity of the affective states conveyed by such rich sources of information. Therefore, behavioural...

10.1109/fg.2011.5771357 article EN 2011-03-01

We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts, in sequence, four roles designed to evoke emotional reactions. The operator and the user are seated in separate rooms; they see each other through teleprompter screens and hear each other through speakers. To allow high-quality recording, they are recorded by five high-resolution, high-framerate cameras and by microphones. All sensor information is recorded synchronously, with an accuracy of 25 μs. In total, we recorded 20 participants, for a total...

10.1109/icme.2010.5583006 article EN 2010-07-01

The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) "Real-life depression, and affect" will be the seventh competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches from real-life...

10.1145/3133944.3133953 preprint EN 2017-10-20

The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) 'State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition' is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the health and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits...

10.1145/3347320.3357688 article EN 2019-10-15

We present the second Audio-Visual Emotion recognition Challenge and workshop (AVEC 2012), which aims to bring together researchers from the audio and video analysis communities around the topic of emotion recognition. The goal of the challenge is to recognise four continuously valued affective dimensions: arousal, expectancy, power, and valence. There are two sub-challenges: in the Fully Continuous Sub-Challenge, participants have to predict the values of the dimensions at every moment during the recordings, while for the Word-Level Sub-Challenge a single...

10.1145/2388676.2388776 article EN 2012-10-22
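
Challenges of this kind typically score continuous predictions with a correlation-based measure between predicted and gold-standard traces. A small sketch of that style of evaluation (synthetic traces; not the official challenge scoring code):

```python
import numpy as np

def trace_correlation(pred: np.ndarray, gold: np.ndarray) -> float:
    """Pearson correlation between predicted and gold-standard traces."""
    return float(np.corrcoef(pred, gold)[0, 1])

# Synthetic example: a gold arousal trace and a slightly lagged,
# noisy prediction for one recording.
t = np.linspace(0, 10, 500)
gold = np.sin(t)
pred = np.sin(t + 0.1) + 0.05 * np.random.randn(t.size)

# A challenge-style score would average this over recordings
# and over the affective dimensions.
print(f"arousal correlation: {trace_correlation(pred, gold):.3f}")
```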

This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving and producing the behaviors required to sustain conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting which seems particularly suited for the study of such behavior since it requires only very limited verbal understanding on the part of the machine...

10.1109/t-affc.2011.34 article EN IEEE Transactions on Affective Computing 2011-10-13

We present the first Audio-Visual+ Emotion recognition Challenge and workshop (AV+EC 2015), aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological emotion analysis. This is the 5th event in the AVEC series, but the very first that bridges across audio, video and physiological data. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the respective communities, to compare the relative merits of the three approaches under well-defined and strictly comparable conditions, and to establish...

10.1145/2808196.2811642 article EN 2015-10-13

The Audio/Visual Emotion Challenge and Workshop (AVEC 2018) "Bipolar disorder, and cross-cultural affect recognition" is the eighth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions. The goal is to provide a common benchmark test set for multimodal information processing and to bring together the health and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches from...

10.1145/3266302.3266316 preprint EN 2018-10-15

Representing computationally everyday emotional states is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask humans to assign an absolute value of intensity to each behavior they observe. Psychological theories and evidence from multiple disciplines, including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based (relative) values to subjective notions is better aligned with...

10.1109/acii.2017.8273608 article EN 2017-10-01
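
The core proposal here is to replace absolute intensity ratings with reference-based (relative) judgments. A minimal sketch of such a conversion, assuming hypothetical ratings and a made-up tie margin:

```python
from itertools import combinations

# Hypothetical absolute intensity ratings for five observed behaviors.
ratings = {"clip_a": 4.2, "clip_b": 3.9, "clip_c": 4.3,
           "clip_d": 1.5, "clip_e": 2.8}

def to_relative(ratings: dict[str, float], margin: float = 0.5):
    """Re-express absolute ratings as pairwise preferences.

    Pairs closer than `margin` are treated as ties, reflecting the
    view that fine absolute distinctions are unreliable.
    """
    prefs = []
    for a, b in combinations(ratings, 2):
        diff = ratings[a] - ratings[b]
        if abs(diff) >= margin:
            prefs.append((a, b) if diff > 0 else (b, a))  # (higher, lower)
    return prefs

for higher, lower in to_relative(ratings):
    print(f"{higher} > {lower}")
```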

The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing under strictly the same conditions. The goal is to provide a common benchmark test set for multi-modal information processing and to bring together the depression and emotion recognition communities, as well as the audio and video processing communities, to compare the relative merits of the various approaches...

10.48550/arxiv.1605.01600 preprint EN other-oa arXiv (Cornell University) 2016-01-01

Computational representation of everyday emotional states is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask people to assign a value of intensity or a class to each behavior they observe. Psychological theories and evidence from multiple disciplines, including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based values to subjective notions is better aligned with the underlying...

10.1109/taffc.2018.2879512 article EN publisher-specific-oa IEEE Transactions on Affective Computing 2018-11-06