Catherine Pélachaud

ORCID: 0000-0003-1008-0799
Research Areas
  • Social Robot Interaction and HRI
  • Speech and Dialogue Systems
  • Human Motion and Animation
  • AI in Service Interactions
  • Emotion and Mood Recognition
  • Language, Metaphor, and Cognition
  • Multi-Agent Systems and Negotiation
  • Face Recognition and Analysis
  • Human Pose and Action Recognition
  • Hand Gesture Recognition Systems
  • Action Observation and Synchronization
  • Video Analysis and Summarization
  • Hearing Impairment and Communication
  • Language, Discourse, Communication Strategies
  • Multimodal Machine Learning Applications
  • Natural Language Processing Techniques
  • Artificial Intelligence in Games
  • Face Recognition and Perception
  • Robotics and Automated Systems
  • Tactile and Sensory Interactions
  • Speech and Audio Processing
  • Virtual Reality Applications and Impacts
  • Linguistics and Discourse Analysis
  • Emotions and Moral Behavior
  • Psychiatry, Mental Health, Neuroscience

Sorbonne Université
2013-2025

Centre National de la Recherche Scientifique
2016-2025

Institut Systèmes Intelligents et de Robotique
2005-2024

Sorbonne Paris Cité
2024

Université Paris 1 Panthéon-Sorbonne
2020-2022

Université Sorbonne Nouvelle
2019-2021

Laboratoire Traitement et Communication de l’Information
2010-2019

Télécom Paris
2009-2018

Université Paris 8
2002-2018

Multimedia University
2011-2018

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive generators for lip motion, eye gaze, head motion, and arm gestures. Coordinated arm, wrist, and hand motions are invoked to create semantically...

10.1145/192161.192272 article EN 1994-01-01

Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesis of social behavior. Modeling investigates the laws and principles underlying social interaction, analysis explores approaches for automatically understanding social exchanges recorded with different sensors, and synthesis studies techniques for the generation of social behavior via various forms of embodiment. For each of the above aspects, the survey includes...

10.1109/t-affc.2011.27 article EN IEEE Transactions on Affective Computing 2011-08-25

This article reports results from a program that produces high-quality animation of facial expressions and head movements, as automatically as possible, in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Toward this end, we have produced a high-level programming language for three-dimensional (3-D) facial expressions. We have been concerned primarily with conveying...

10.1207/s15516709cog2001_1 article EN Cognitive Science 1996-01-01

This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with the competences in perceiving and producing behaviors required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting which seems particularly suited for the study of such behavior since it requires only very limited verbal understanding on the part of the machine...

10.1109/t-affc.2011.34 article EN IEEE Transactions on Affective Computing 2011-10-13

10.1016/j.specom.2008.04.009 article EN Speech Communication 2008-05-18

In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We are using the taxonomy of communicative functions developed by Isabella Poggi [22] to specify the behavior of the agent. Based on this representation, a markup language, the Affective Presentation Markup Language (APML), has been defined to drive the animation of the agent [4]. Lately, we have been working on creating no longer a generic agent but an individual...

10.1145/1101149.1101301 article EN 2005-11-06

The term “believability” is often used to describe expectations concerning virtual agents. In this paper, we analyze which factors influence the believability of an agent acting as a software assistant. We consider several such factors: embodiment, communicative behavior, and emotional capabilities. We conduct a perceptive study on the role of plausible and/or appropriate emotional displays in relation to believability. We also investigate how people judge the agent and whether it provokes social reactions in humans toward it. Finally,...

10.1162/pres_a_00065 article EN PRESENCE Virtual and Augmented Reality 2011-10-01

We present a novel multi-lingual database of natural dyadic novice-expert interactions, named NoXi, featuring screen-mediated human interactions in the context of information exchange and retrieval. NoXi is designed to provide spontaneous interactions with an emphasis on adaptive behaviors in unexpected situations (e.g. conversational interruptions). A rich set of audio-visual data, as well as continuous and discrete annotations, are publicly available through a web interface. Descriptors include low level social signals...

10.1145/3136755.3136780 article EN 2017-11-03