Stéphane Ayache

ORCID: 0000-0003-2982-7127

Research Areas
  • Advanced Image and Video Retrieval Techniques
  • Video Analysis and Summarization
  • Image Retrieval and Classification Techniques
  • Domain Adaptation and Few-Shot Learning
  • Multimodal Machine Learning Applications
  • Natural Language Processing Techniques
  • Music and Audio Processing
  • Machine Learning and Algorithms
  • Topic Modeling
  • Ear Surgery and Otitis Media
  • Explainable Artificial Intelligence (XAI)
  • Machine Learning and Data Classification
  • Human Pose and Action Recognition
  • Tensor decomposition and applications
  • Sparse and Compressive Sensing Techniques
  • Neuroscience and Music Perception
  • Face recognition and analysis
  • Machine Learning in Materials Science
  • Speech Recognition and Synthesis
  • Hearing Loss and Rehabilitation
  • Anomaly Detection Techniques and Applications
  • Thermodynamics and calorimetric analyses
  • Algorithms and Data Compression
  • Social Robot Interaction and HRI
  • Adversarial Robustness in Machine Learning

Laboratoire d’Informatique et Systèmes
2018-2024

Centre National de la Recherche Scientifique
2008-2024

Aix-Marseille Université
2011-2024

Centrale Marseille
2023-2024

Institut de Neurosciences de la Timone
2024

Université de Toulon
2019-2023

Centre Hospitalier de Cannes
2021

Laboratoire d’Informatique Fondamentale de Marseille
2008-2020

Innate Pharma (France)
2011-2017

Centre d’Immunologie de Marseille-Luminy
2012

Explainability and interpretability are two critical aspects of decision support systems. Despite their importance, it is only recently that researchers have started to explore these aspects. This paper provides an introduction to explainability in the context of apparent personality recognition. To the best of our knowledge, this is the first effort in that direction. We describe a challenge we organized on first impressions analysis from video. We analyze in detail the newly introduced data set, the evaluation protocol, and the proposed solutions...

10.1109/taffc.2020.2973984 article EN IEEE Transactions on Affective Computing 2020-02-14

This paper reviews and discusses research advances on "explainable machine learning" in computer vision. We focus on a particular area of the "Looking at People" (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision communities aware of the importance of developing explanatory mechanisms for computer-assisted decision-making applications, such as automating recruitment. Judgments based on personality traits are being made routinely by human resource...

10.1109/ijcnn.2017.7966320 article EN 2017 International Joint Conference on Neural Networks (IJCNN) 2017-05-01

Methods based on class activation maps (CAM) provide a simple mechanism to interpret predictions of convolutional neural networks by using linear combinations of feature maps as saliency maps. By contrast, masking-based methods optimize a saliency map directly in the image space, or learn it by training another network on additional data. In this work we introduce Opti-CAM, combining ideas from CAM-based and masking-based approaches. Our saliency map is a linear combination of feature maps, where the weights are optimized per image such that the logit of the masked image for a given...

10.1016/j.cviu.2024.104101 article EN cc-by-nc Computer Vision and Image Understanding 2024-08-08
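
The abstract above describes the core of Opti-CAM: a saliency map built as a linear combination of feature maps whose channel weights are optimized, per image, so that the masked image maximizes the target-class logit. Below is a minimal PyTorch sketch of that idea; the `model` and `feature_extractor` callables, the softmax over channel weights, and the optimizer settings are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def opti_cam_sketch(model, feature_extractor, image, target_class,
                    steps=50, lr=0.1):
    """Illustrative sketch of the Opti-CAM idea: optimize per-image channel
    weights over fixed feature maps so that the masked image maximizes the
    target-class logit. `feature_extractor` is assumed to return the last
    convolutional feature maps with shape (1, K, h, w)."""
    with torch.no_grad():
        feats = feature_extractor(image)        # (1, K, h, w), kept fixed
    # one weight per channel, optimized for this single image
    w = torch.zeros(feats.shape[1], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        # saliency map = convex combination of feature maps (softmax weights)
        weights = torch.softmax(w, dim=0).view(1, -1, 1, 1)
        sal = (weights * feats).sum(dim=1, keepdim=True)     # (1, 1, h, w)
        sal = F.interpolate(sal, size=image.shape[-2:],
                            mode="bilinear", align_corners=False)
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
        logit = model(image * sal)[0, target_class]  # logit of masked image
        loss = -logit                                # maximize the logit
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sal.detach()
```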

In recent years, joint text-image embeddings have significantly improved thanks to the development of transformer-based Vision-Language models. Despite these advances, we still need to better understand the representations produced by those models. In this paper, we compare pre-trained and fine-tuned representations at a vision, language and multimodal level. To that end, we use a set of probing tasks to evaluate the performance of state-of-the-art Vision-Language models and introduce new datasets specifically designed for probing. These datasets are carefully designed to address a range...

10.1609/aaai.v36i10.21375 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28
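
A common way to realize the probing methodology sketched in the abstract is to train a lightweight linear classifier on frozen embeddings and read its accuracy as a measure of how linearly decodable a target property is from the representation. A minimal sketch under that assumption follows; the probe type and data names are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_probe(train_emb, train_labels, test_emb, test_labels):
    """Illustrative probing setup: a linear classifier trained on frozen
    embeddings; probe accuracy indicates how linearly decodable the target
    property (e.g. an object attribute or relation) is."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_emb, train_labels)
    return accuracy_score(test_labels, clf.predict(test_emb))

# hypothetical usage: Z_* are embeddings extracted from a frozen
# vision-language model, y_* the probing-task labels
# acc = linear_probe(Z_train, y_train, Z_test, y_test)
```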

Explainability and interpretability are two critical aspects of decision support systems. Within computer vision, they are especially relevant in certain tasks related to human behavior analysis, such as health care applications. Despite their importance, it is only recently that researchers have started to explore these aspects. This paper provides an introduction to explainability in the context of computer vision, with an emphasis on looking-at-people tasks. Specifically, we review and study those mechanisms in the context of first impressions analysis. To the best...

10.48550/arxiv.1802.00745 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Many cancers cannot be detected early due to the lack of effective disease biomarkers, leading to poor prognosis. We applied an existing biophysical technology, nanoDSF, in a novel way to answer this unmet biomedical need. We developed a breakthrough digital biomarker method for cancer detection based on AI-classification of plasma denaturation profiles (PDPs) obtained by this technology. PDPs from 300 samples of patients with melanoma, brain, digestive or lung cancer were automatically distinguished from healthy ones with an accuracy of 94%....

10.1101/2025.03.26.25324680 preprint EN medRxiv (Cold Spring Harbor Laboratory) 2025-03-27
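
The abstract does not name a specific classifier, but the overall recipe it describes, supervised classification of 1D plasma denaturation profiles, can be sketched with a generic scikit-learn pipeline. Everything below (profile length, classifier choice, placeholder data) is a hypothetical illustration, not the study's actual setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical shapes: each plasma denaturation profile (PDP) is a 1D
# fluorescence-vs-temperature curve sampled at fixed temperature points.
X = np.random.rand(300, 500)        # placeholder: 300 profiles, 500 samples each
y = np.random.randint(0, 2, 300)    # placeholder labels: cancer vs. healthy

clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200))
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(scores.mean())
```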

10.1155/2007/56928 article EN EURASIP Journal on Image and Video Processing 2007-01-01

10.2139/ssrn.4476687 preprint EN 2023-01-01

Understanding how a learned black box works is of crucial interest for the future of Machine Learning. In this paper, we pioneer the question of the global interpretability of learned models that assign numerical values to symbolic sequential data. To tackle that task, we propose a spectral algorithm for the extraction of weighted automata (WA) from such black boxes. This algorithm does not require access to a dataset or to the inner representation of the black box: the inferred model can be obtained solely by querying the black box, feeding it with inputs and analyzing its outputs....

10.48550/arxiv.1810.05741 preprint EN cc-by arXiv (Cornell University) 2018-01-01
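
The extraction described above can be sketched concretely with the standard spectral method: query the black box f to fill Hankel blocks over a basis of prefixes and suffixes, factorize with an SVD, and read off the weighted-automaton parameters. The basis choice and target rank below are assumptions; this is a sketch of the generic spectral algorithm, not necessarily the paper's exact procedure.

```python
import numpy as np

def extract_wa(f, alphabet, prefixes, suffixes, rank):
    """Illustrative spectral extraction of a weighted automaton from a black
    box f: str -> float. `prefixes` and `suffixes` are lists of strings that
    must both contain the empty string ""; `rank` is the target number of
    states (an assumption of this sketch)."""
    # Hankel block filled purely by querying the black box
    H = np.array([[f(p + s) for s in suffixes] for p in prefixes])
    U, D, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:rank].T                            # top-`rank` right singular vectors
    P = np.linalg.pinv(H @ V)                  # shared pseudo-inverse factor
    # per-symbol transition matrices from the shifted Hankel blocks
    A = {}
    for a in alphabet:
        Ha = np.array([[f(p + a + s) for s in suffixes] for p in prefixes])
        A[a] = P @ Ha @ V
    alpha1 = H[prefixes.index("")] @ V         # initial weights (empty-prefix row)
    alpha_inf = P @ H[:, suffixes.index("")]   # final weights (empty-suffix column)
    return alpha1, A, alpha_inf

def wa_value(alpha1, A, alpha_inf, word):
    """Value computed by the extracted WA on a word (sequence of symbols)."""
    v = alpha1
    for a in word:
        v = v @ A[a]
    return v @ alpha_inf
```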

Glioblastoma is the most frequent and aggressive primary brain tumor. Its diagnosis is based on resection or biopsy, which can be especially difficult or dangerous in case of deep location or patient comorbidities. Monitoring disease evolution and progression also requires repeated biopsies, which are often not feasible. Therefore, there is an urgent need to develop biomarkers to diagnose and follow glioblastoma in a minimally invasive way. In the present study, we describe a novel cancer detection method based on plasma denaturation...

10.3390/cancers13061294 article EN Cancers 2021-03-15

In this article, we present two models to jointly and automatically generate the head, facial and gaze movements of a virtual agent from acoustic speech features. Two architectures are explored: a Generative Adversarial Network and an Encoder-Decoder. Head orientation is generated as 3D coordinates, while facial expressions are generated using action units based on the facial action coding system. A large corpus of almost 4 hours of videos, involving 89 different speakers, is used to train our models. We extract the visual features from these videos using existing...

10.1145/3536220.3558806 article EN INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION 2022-11-04
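
Of the two architectures mentioned, the Encoder-Decoder is the simpler to sketch: a sequence model that consumes acoustic feature frames and emits per-frame head coordinates and action-unit activations. The layer sizes, feature dimensions, and GRU choice below are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class Speech2Motion(nn.Module):
    """Illustrative encoder-decoder sketch (not the paper's exact architecture):
    maps a sequence of acoustic feature frames to per-frame head orientations
    (3D coordinates) and facial action-unit activations."""
    def __init__(self, n_audio=40, hidden=128, n_aus=17):
        super().__init__()
        self.encoder = nn.GRU(n_audio, hidden, batch_first=True,
                              bidirectional=True)
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)      # head orientation as 3D coordinates
        self.aus = nn.Linear(hidden, n_aus)   # facial expressions as action units

    def forward(self, audio):                 # audio: (batch, frames, n_audio)
        enc, _ = self.encoder(audio)
        dec, _ = self.decoder(enc)
        return self.head(dec), self.aus(dec)

# hypothetical usage with 40-dim acoustic frames (e.g. MFCC-like features)
model = Speech2Motion()
rot, aus = model(torch.randn(2, 100, 40))    # shapes: (2,100,3) and (2,100,17)
```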

Glioblastoma (GBM) is the most frequent and aggressive primary brain tumor in adults. Recently, we demonstrated that plasma denaturation profiles of glioblastoma patients obtained using Differential Scanning Fluorimetry can be automatically distinguished from those of healthy controls with the help of Artificial Intelligence (AI). Here, we used a set of machine-learning algorithms to classify the profiles according to their EGFR status. We found that the Adaboost AI is able to discriminate EGFR alterations in GBM with an 81.5% accuracy. Our study shows the use...

10.3390/cancers15030760 article EN Cancers 2023-01-26
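
The classifier named in this abstract, AdaBoost, is available off the shelf in scikit-learn, so the reported setup can be sketched in a few lines. The profile length, sample count, and labels below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one plasma denaturation profile per GBM patient,
# with a binary EGFR status label (altered vs. wild-type).
X = np.random.rand(120, 500)      # hypothetical: 120 profiles, 500 temp. points
y = np.random.randint(0, 2, 120)  # hypothetical EGFR status labels

clf = AdaBoostClassifier(n_estimators=100)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```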