Elisabeth André

ORCID: 0000-0002-2367-162X
Research Areas
  • Social Robot Interaction and HRI
  • Speech and Dialogue Systems
  • Emotion and Mood Recognition
  • Innovative Human-Technology Interaction
  • Human Motion and Animation
  • AI in Service Interactions
  • Multi-Agent Systems and Negotiation
  • Interactive and Immersive Displays
  • Artificial Intelligence in Games
  • Virtual Reality Applications and Impacts
  • Video Analysis and Summarization
  • Multimedia Communication and Technology
  • Context-Aware Activity Recognition Systems
  • Natural Language Processing Techniques
  • Explainable Artificial Intelligence (XAI)
  • Language, Metaphor, and Cognition
  • Gaze Tracking and Assistive Technology
  • Speech and Audio Processing
  • Music and Audio Processing
  • Semantic Web and Ontologies
  • EEG and Brain-Computer Interfaces
  • Speech Recognition and Synthesis
  • Human-Automation Interaction and Safety
  • Ethics and Social Impacts of AI
  • Tactile and Sensory Interactions

University of Augsburg
2016-2025

Augsburg University
2012-2024

German Research Centre for Artificial Intelligence
1993-2024

University Hospital Bonn
2024

University of Bonn
2024

Stellenbosch University
2024

Tygerberg Hospital
2024

National Human Genome Research Institute
2024

National Institutes of Health
2024

Imperial College London
2024

Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and not always extracted in a similar fashion. With many independent teams working on different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods, allowing appropriate comparison of results across studies and the potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard...

10.1109/taffc.2015.2457417 article EN cc-by IEEE Transactions on Affective Computing 2015-07-16
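A shared extraction standard amounts to fixing, once, which functionals are computed over which per-frame contours and how. A minimal sketch of that idea in Python, where the contour values and the four functionals are illustrative assumptions, not the parameter set the paper proposes:

```python
import math

def acoustic_functionals(contour):
    """Summarize a per-frame acoustic contour (e.g. pitch in Hz) with a
    fixed, documented set of functionals so results stay comparable
    across studies. The four functionals here are illustrative only."""
    n = len(contour)
    mean = sum(contour) / n
    variance = sum((x - mean) ** 2 for x in contour) / n
    return {
        "mean": mean,
        "stddev": math.sqrt(variance),
        "range": max(contour) - min(contour),
        "slope": (contour[-1] - contour[0]) / (n - 1),  # crude linear trend
    }

# Hypothetical 5-frame pitch contour in Hz.
feats = acoustic_functionals([120.0, 125.0, 130.0, 128.0, 135.0])
```

Two implementations that agree on these definitions can compare numbers directly, which is exactly what a shared extraction standard buys.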

Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a dataset to feature-based multiclass classification. In order to collect data from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate...

10.1109/tpami.2008.26 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2008-09-29

Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual channels such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented system, including data analysis and classification. For collecting signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain the electromyogram, electrocardiogram, skin...

10.1109/icme.2005.1521579 article EN 2005-10-24

Discusses some of the key issues that must be addressed in creating virtual humans, or androids. As a first step, we overview the state of the art and available tools in three areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a virtual human is still a daunting task, but the building blocks are getting bigger and better every day.

10.1109/mis.2002.1024753 article EN IEEE Intelligent Systems 2002-07-01

Abstract The facial gestalt (overall morphology) is a characteristic clinical feature in many genetic disorders that is often essential for suspecting and establishing a specific diagnosis. Therefore, publishing images of individuals affected by pathogenic variants in disease-associated genes has been an important part of scientific communication. Furthermore, medical imaging data are also crucial for teaching and for training deep-learning models such as GestaltMatcher. However, such data are sparsely available, and sharing patient...

10.1038/s41431-025-01787-z article EN cc-by European Journal of Human Genetics 2025-01-15

We present a data-mining experiment on feature selection for automatic emotion recognition. Starting from more than 1000 features derived from pitch, energy, and MFCC time series, the features most relevant with respect to the data are selected from this set by removing correlated features. The acted and realistic emotions analyzed show significant differences. All units of analysis are computed automatically, and we also contrast them with manual units of analysis. A higher degree of automation did not prove to be a disadvantage in terms of recognition accuracy.

10.1109/icme.2005.1521463 article EN 2005-10-24
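The correlation filter described above can be sketched as a greedy pass over the feature set: a feature survives only if it is not strongly correlated with any feature already kept. The feature names and the 0.95 threshold below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def drop_correlated(features, threshold=0.95):
    """Greedy correlation filter: keep a feature only if its |r| with
    every previously kept feature stays below the threshold.
    `features` maps feature names to equal-length value lists."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

kept = drop_correlated({
    "pitch_mean": [1.0, 2.0, 3.0, 4.0],
    "pitch_mean_x2": [2.0, 4.0, 6.0, 8.0],  # perfectly correlated copy
    "energy_mean": [5.0, 1.0, 4.0, 2.0],
})
```

The redundant scaled copy is discarded while the independent energy feature survives, shrinking the 1000-plus candidate set without losing information.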

In this paper the development of an electromyogram (EMG) based interface for hand gesture recognition is presented. To recognize control signs in the gestures, we used a single-channel EMG sensor positioned on the inside of the forearm. In addition to common statistical features such as variance, mean value, and standard deviation, features were also calculated from the time and frequency domain, including Fourier region length, zero crossings, occurrences, etc. For realizing real-time classification while assuring acceptable accuracy,...

10.1145/1378773.1378778 article EN 2008-01-13
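The statistical features named in the abstract are cheap enough to compute per analysis window for real-time use. A sketch, assuming a zero-centered raw EMG signal; the window values are invented for illustration:

```python
def emg_window_features(window):
    """Time-domain features over one EMG analysis window: the statistical
    values named in the abstract plus a zero-crossing count, a standard
    EMG feature for zero-centered signals."""
    n = len(window)
    mean = sum(window) / n
    variance = sum((x - mean) ** 2 for x in window) / n
    zero_crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return {
        "mean": mean,
        "variance": variance,
        "std": variance ** 0.5,
        "zero_crossings": zero_crossings,
    }

# Hypothetical 5-sample window of a zero-centered EMG signal.
feats = emg_window_features([0.5, -0.2, 0.3, -0.4, 0.1])
```

In a streaming setup these features would be recomputed over a sliding window and fed to the classifier each frame.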

Automatic detection and interpretation of social signals carried by voice, gestures, mimics, etc. will play a key role for next-generation interfaces, as it paves the way towards a more intuitive and natural human-computer interaction. The paper at hand introduces Social Signal Interpretation (SSI), a framework for real-time recognition of social signals. SSI supports a large range of sensor devices, filter and feature algorithms, as well as machine learning and pattern recognition tools. It encourages developers to add new components using...

10.1145/2502081.2502223 article EN 2013-10-21

The study at hand aims at the development of a multimodal, ensemble-based system for emotion recognition. Special attention is given to a problem often neglected: missing data in one or more modalities. In offline evaluation the issue can be easily solved by excluding those parts of the corpus where one or more channels are corrupted or not suitable for evaluation. In real applications, however, we cannot neglect the challenge and have to find adequate ways to handle it. To address this, we do not expect the examined channels to be completely available at all times...

10.1109/t-affc.2011.12 article EN IEEE Transactions on Affective Computing 2011-06-17
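One adequate way to handle a temporarily missing channel is late fusion over whichever modalities did produce an estimate. The scheme below, averaged class scores with None marking an unavailable channel, is a simplified illustration, not the ensemble from the paper:

```python
def fuse_available(channel_predictions):
    """Late fusion that tolerates missing modalities: sum class scores
    over the channels that produced a prediction (None marks a modality
    that is currently unavailable) and return the winning label."""
    scores, available = {}, 0
    for prediction in channel_predictions.values():
        if prediction is None:  # e.g. face tracking lost in this segment
            continue
        available += 1
        for label, p in prediction.items():
            scores[label] = scores.get(label, 0.0) + p
    if available == 0:
        raise ValueError("no modality available")
    return max(scores, key=scores.get)

decision = fuse_available({
    "audio": {"joy": 0.7, "anger": 0.3},
    "video": None,  # channel dropped out, fusion proceeds without it
    "biosignals": {"joy": 0.4, "anger": 0.6},
})
```

Because the decision is taken over available channels only, a dropped sensor degrades the ensemble gracefully instead of invalidating the whole segment.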

The ability to display emotions is a key feature in human communication, and also for robots that are expected to interact with humans in social environments. For expressions based on body movement and signals other than facial expressions, like sound, no common grounds have been established so far. Based on psychological research on the expression and perception of emotional stimuli, we created eight different expressional designs for the emotions Anger, Sadness, Fear, and Joy, consisting of Movements, Sounds, and Eye Colors. In a large...

10.1109/roman.2011.6005263 article EN IEEE RO-MAN 2011-07-01

While the research area of artificial intelligence benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development led to an ongoing resurgence of explainable artificial intelligence (XAI), which aims to reduce the opaqueness of those black-box models. However, much current XAI research is focused on practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents within...

10.1145/3308532.3329441 article EN 2019-07-01

Abstract While the research area of artificial intelligence benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI) designs on perceived trust. For this purpose, we conducted a user study based on a simple speech recognition system for keyword classification. As a result of the experiment, we found that the integration...

10.1007/s12193-020-00332-0 article EN cc-by Journal on Multimodal User Interfaces 2020-07-09

With the ongoing rise of machine learning, the need for methods explaining decisions made by artificial intelligence systems is becoming a more and more important topic. Especially for image classification tasks, many state-of-the-art tools that explain such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems try to enable counterfactual reasoning by modifying the input in such a way that the classifier would have made a different prediction. By doing so, users are equipped with a completely different kind of explanatory...

10.3389/frai.2022.825565 article EN cc-by Frontiers in Artificial Intelligence 2022-04-08
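The core operation, modifying the input until the classifier's decision flips while keeping the change small, can be sketched for a single feature. The toy classifier and step sizes below are assumptions for illustration, not the system evaluated in the paper:

```python
def counterfactual_1d(x, classify, feature, step=0.1, max_delta=5.0):
    """Scan perturbations of one input feature, smallest magnitude first,
    and return the first modified input that flips the classifier's
    label (a toy stand-in for a full counterfactual explanation system)."""
    original = classify(x)
    k = 1
    while k * step <= max_delta:
        for delta in (k * step, -k * step):
            candidate = list(x)
            candidate[feature] += delta
            if classify(candidate) != original:
                return candidate
        k += 1
    return None  # no label flip found within the search budget

# Toy linear classifier: the label depends only on the first feature.
classify = lambda v: "cat" if v[0] > 1.0 else "dog"
cf = counterfactual_1d([0.7, 3.0], classify, feature=0)
```

The returned input differs minimally from the original, which is what lets a user reason in the form "had this feature been slightly larger, the prediction would have changed".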

Rare genetic disorders affect more than 6% of the global population. Reaching a diagnosis is challenging because rare disorders are very diverse. Many disorders have recognizable facial features that are hints for clinicians to diagnose patients. Previous work, such as GestaltMatcher, utilized representation vectors produced by a DCNN similar to AlexNet to match patients in a high-dimensional feature space and support the diagnosis of "unseen" ultra-rare disorders. However, the architecture and dataset used for transfer learning in GestaltMatcher have become...

10.1109/wacv56688.2023.00499 article EN 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023-01-01
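Matching "unseen" disorders in a representation space reduces to nearest-neighbour search over encoder outputs. A minimal sketch with cosine similarity; the two-dimensional vectors and syndrome names are invented stand-ins for real DCNN representations:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_patient(query, gallery):
    """Return the gallery entry whose representation vector is most
    similar to the query patient's vector."""
    return max(gallery, key=lambda name: cosine(query, gallery[name]))

# Invented 2-D stand-ins for high-dimensional representation vectors.
gallery = {"syndrome_A": [1.0, 0.0], "syndrome_B": [0.0, 1.0]}
best = match_patient([0.9, 0.1], gallery)
```

Because matching happens in the feature space rather than over class labels, a query can be placed near patients with a disorder the encoder never saw during training.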

Speech is the fundamental mode of human communication, and its synthesis has long been a core priority in human–computer interaction research. In recent years, machines have managed to master the art of generating speech that is understandable by humans. However, the linguistic content of an utterance encompasses only part of its meaning. Affect, or expressivity, has the capacity to turn speech into a medium capable of conveying intimate thoughts, feelings, and emotions, aspects that are essential for engaging and naturalistic interpersonal...

10.1109/jproc.2023.3250266 article EN Proceedings of the IEEE 2023-03-10