Guillaume Jeanneret

ORCID: 0000-0002-0055-7816
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Explainable Artificial Intelligence (XAI)
  • Advanced Neural Network Applications
  • Domain Adaptation and Few-Shot Learning
  • Cell Image Analysis Techniques
  • Visual Attention and Saliency Detection
  • Generative Adversarial Networks and Image Synthesis
  • Integrated Circuits and Semiconductor Failure Analysis
  • SARS-CoV-2 detection and testing
  • SARS-CoV-2 and COVID-19 Research
  • Human Pose and Action Recognition
  • Video Surveillance and Tracking Methods
  • 3D Shape Modeling and Analysis
  • Human Motion and Animation
  • Biosensors and Analytical Detection
  • Interdisciplinary Research and Collaboration
  • Topic Modeling
  • Spectroscopy Techniques in Biomedical and Chemical Research
  • Scientific Computing and Data Management
  • Image Processing Techniques and Applications
  • Image Enhancement Techniques
  • Machine Learning in Materials Science
  • Privacy-Preserving Technologies in Data
  • Machine Learning in Healthcare

Université de Caen Normandie
2023-2024

Centre National de la Recherche Scientifique
2023-2024

École Nationale Supérieure d'Ingénieurs de Caen
2023-2024

GREYC
2023-2024

Universidad de Los Andes
2020-2022

King Abdullah University of Science and Technology
2021

Universidad de Los Andes
2019

This paper addresses the challenge of generating Counterfactual Explanations (CEs), which involves identifying and modifying the fewest necessary features to alter a classifier's prediction for a given image. Our proposed method, Text-to-Image Models (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, this approach requires solely the image and its prediction, omitting the need for the classifier's structure, parameters, or gradients. Before generating counterfactuals, TIME introduces two distinct...

10.1109/wacv57701.2024.00469 article EN 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024-01-03
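
To illustrate what "black-box" access means in the abstract above, the sketch below wraps a classifier behind a predict-only interface (image in, predicted label out), which is the only information the method is said to consume; the wrapped model and preprocessing are placeholders, and the distillation and counterfactual steps themselves are omitted.

    import torch

    class BlackBoxClassifier:
        """Expose only (image -> predicted label), hiding weights and gradients.

        Mirrors the access assumption described above; the wrapped model is a
        placeholder for any image classifier.
        """
        def __init__(self, model):
            self._model = model.eval()

        @torch.no_grad()
        def predict(self, image: torch.Tensor) -> int:
            logits = self._model(image.unsqueeze(0))
            return int(logits.argmax(dim=-1).item())

    # A counterfactual pipeline built on this interface may only call .predict(),
    # e.g. to label images synthesized by a text-to-image model for distillation.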

Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations, regardless of their characteristics. Yet, adversarial attacks cannot be used directly from a counterfactual explanation perspective, as such perturbations are perceived as noise and not as actionable, understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations, without modifying the classifiers to explain. The proposed...

10.1109/cvpr52729.2023.01576 preprint EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01
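
For context on the "related goal" mentioned above, the sketch below shows only the plain adversarial ingredient: a projected gradient attack that flips a classifier's label under an L-infinity budget. The paper's contribution lies in turning such perturbations into semantically meaningful changes, which this fragment does not attempt; step sizes and budgets are illustrative.

    import torch
    import torch.nn.functional as F

    def pgd_flip(model, x, y, eps=8/255, alpha=2/255, steps=40):
        """Minimal untargeted PGD: find a small perturbation that changes the label."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
                x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv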

Massive molecular testing for COVID-19 has been pointed out as fundamental to moderate the spread of the pandemic. Pooling methods can enhance testing efficiency, but they are viable only at low incidences of the disease. We propose Smart Pooling, a machine learning method that uses clinical and sociodemographic data from patients to increase the efficiency of informed Dorfman pooled testing by arranging samples into all-negative pools. To do this, we ran an automated process to train numerous models on a retrospective dataset of more than 8000...

10.1038/s41598-022-10128-9 article EN cc-by Scientific Reports 2022-04-20
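
As a point of reference for the efficiency argument above, the expected number of tests per sample under classical two-stage Dorfman pooling has a closed form. The sketch below assumes a fixed pool size and a uniform prevalence, a simplification of the informed setting the paper studies, where a model reorders samples so that low-risk samples share pools.

    # Expected tests per sample under two-stage Dorfman pooling, assuming
    # independent samples with uniform prevalence p and pool size n.

    def dorfman_tests_per_sample(p: float, n: int) -> float:
        """One pool test per n samples, plus n retests if the pool is positive."""
        prob_pool_positive = 1.0 - (1.0 - p) ** n
        return 1.0 / n + prob_pool_positive

    if __name__ == "__main__":
        for p in (0.01, 0.05, 0.10, 0.30):
            best = min(range(2, 33), key=lambda n: dorfman_tests_per_sample(p, n))
            print(f"prevalence {p:.2f}: best pool size {best}, "
                  f"{dorfman_tests_per_sample(p, best):.3f} tests/sample")

At 30% prevalence the optimum collapses toward individual testing, which is why pooling alone is viable only at low incidence, as noted above.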

Mixed reality applications require tracking the user's full-body motion to enable an immersive experience. However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to the variability in lower body configurations. We propose BoDiffusion, a generative diffusion model for motion synthesis that tackles this under-constrained problem. We present a time and space conditioning scheme that allows the model to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To...

10.1109/iccvw60793.2023.00456 article EN 2023-10-02
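
A toy sketch of conditioning a denoiser on sparse tracker signals, in the spirit of the abstract above; the shapes, the concatenation-style conditioning, and the tracker dimensionality (head plus two hands) are assumptions made for illustration and do not reproduce BoDiffusion's actual architecture.

    import torch
    import torch.nn as nn

    class SparseConditionedDenoiser(nn.Module):
        """Toy denoiser that conditions full-body joint tokens on sparse trackers."""
        def __init__(self, n_joints=22, joint_dim=6, tracker_dim=18, hidden=256):
            super().__init__()
            self.embed = nn.Linear(joint_dim + tracker_dim + 1, hidden)
            self.backbone = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True),
                num_layers=2,
            )
            self.out = nn.Linear(hidden, joint_dim)

        def forward(self, noisy_joints, sparse_trackers, t):
            # noisy_joints: (B, J, joint_dim); sparse_trackers: (B, tracker_dim); t: (B,)
            B, J, _ = noisy_joints.shape
            cond = sparse_trackers[:, None, :].expand(B, J, -1)
            t_emb = t.float()[:, None, None].expand(B, J, 1)
            h = self.embed(torch.cat([noisy_joints, cond, t_emb], dim=-1))
            return self.out(self.backbone(h))  # predicted noise for the diffusion step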

Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model performance, its effects on adversarial robustness have not been studied. Here, we present a comprehensive empirical study of the impact of TTE, in the form of widely-used image transforms, on adversarial robustness. We show that TTE...

10.1109/iccvw54120.2021.00015 article EN 2021-10-01
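
A minimal sketch of the test-time transformation ensembling idea described above, assuming a generic PyTorch classifier; the transform bank (identity, horizontal flip, small shifts) is illustrative rather than the exact set studied in the paper.

    import torch
    import torch.nn.functional as F

    def tte_predict(model, x):
        """Average softmax predictions over a small bank of input transforms.

        x: batch of images with shape (B, C, H, W).
        """
        transforms = [
            lambda t: t,
            lambda t: torch.flip(t, dims=[-1]),
            lambda t: torch.roll(t, shifts=4, dims=-1),
            lambda t: torch.roll(t, shifts=-4, dims=-1),
        ]
        probs = torch.stack([F.softmax(model(tf(x)), dim=-1) for tf in transforms])
        return probs.mean(dim=0)  # ensembled class probabilities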

Summary Background: COVID-19 is an acute respiratory illness caused by the novel coronavirus SARS-CoV-2. The disease has rapidly spread to most countries and territories, with 14·2 million confirmed infections and 602,037 deaths as of July 19th, 2020. Massive molecular testing for COVID-19 has been pointed out as fundamental to moderate the spread of the disease. Pooling methods can enhance testing efficiency, but they are viable only at very low incidences of the disease. We propose Smart Pooling, a machine learning method that uses clinical and sociodemographic data...

10.1101/2020.07.13.20152983 preprint EN cc-by-nc-nd medRxiv (Cold Spring Harbor Laboratory) 2020-07-15

Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable. In this paper, we propose DiME, a method allowing the generation of counterfactual images using recent diffusion models. By leveraging the guided generative process, our proposed methodology shows how to use the gradients of the target classifier to generate counterfactual input instances. Further, we analyze how current approaches evaluate spurious correlations and extend the evaluation measurements by proposing new...

10.48550/arxiv.2203.15636 preprint EN cc-by arXiv (Cornell University) 2022-01-01
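
A schematic of the classifier-guided step referenced above, loosely following standard classifier guidance for diffusion models; the `diffusion.p_mean_variance` helper, the guidance scale, and the way DiME actually handles gradients on noisy images are simplified placeholders, not the paper's exact procedure.

    import torch

    def guided_denoise_step(diffusion, classifier, x_t, t, target_class, scale=3.0):
        """One reverse-diffusion step nudged toward the target (counterfactual) class.

        `diffusion.p_mean_variance` stands in for whatever routine returns the
        model's predicted mean and variance for x_{t-1} given x_t.
        """
        mean, var = diffusion.p_mean_variance(x_t, t)

        with torch.enable_grad():
            x_in = x_t.detach().requires_grad_(True)
            log_probs = torch.log_softmax(classifier(x_in), dim=-1)
            selected = log_probs[range(x_in.shape[0]), target_class].sum()
            grad = torch.autograd.grad(selected, x_in)[0]

        # Shift the predicted mean along the classifier gradient (classifier guidance).
        guided_mean = mean + scale * var * grad
        return guided_mean + var.sqrt() * torch.randn_like(x_t)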


10.2139/ssrn.4768660 preprint EN 2024-01-01

10.1016/j.cviu.2024.104207 article EN Computer Vision and Image Understanding 2024-10-30

We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect on robustness against adversarial attacks of replacing the first layers of various architectures with Gabor layers, i.e., convolutional layers whose filters are based on learnable Gabor parameters. We observe that the enhanced architectures gain a consistent boost in robustness over regular models and preserve high generalizing test performance, even though these layers come at a negligible increase in the number of parameters. We then exploit the closed form expression to...

10.48550/arxiv.1912.05661 preprint EN cc-by arXiv (Cornell University) 2019-01-01
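
To make the "closed form expression" concrete, here is one way a Gabor-parameterized convolutional filter bank can be built; which parameters are learnable and how channels are handled are assumptions for illustration, not the paper's exact layer.

    import math
    import torch

    def gabor_bank(num_filters=16, size=7, sigma=2.0, lam=4.0, gamma=0.5, psi=0.0):
        """Build a (num_filters, 1, size, size) bank of Gabor filters at evenly
        spaced orientations. In a Gabor layer, sigma/lam/gamma/psi/theta would
        typically be learnable parameters rather than fixed constants."""
        half = size // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij",
        )
        filters = []
        for k in range(num_filters):
            theta = math.pi * k / num_filters
            x_r = xs * math.cos(theta) + ys * math.sin(theta)
            y_r = -xs * math.sin(theta) + ys * math.cos(theta)
            g = torch.exp(-(x_r**2 + (gamma * y_r) ** 2) / (2 * sigma**2)) \
                * torch.cos(2 * math.pi * x_r / lam + psi)
            filters.append(g)
        return torch.stack(filters).unsqueeze(1)  # usable as a Conv2d weight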

Adversarial Robustness is a growing field that evidences the brittleness of neural networks. Although the literature on adversarial robustness is vast, a dimension is missing in these studies: assessing how severe the mistakes are. We call this notion "Adversarial Severity", since it quantifies the downstream impact of adversarial corruptions by computing the semantic error between the misclassification and the proper label. We propose to study the effects of adversarial noise by measuring Severity on a large-scale dataset: iNaturalist-H. Our contributions are:...

10.1109/iccvw54120.2021.00013 article EN 2021-10-01
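
The "semantic error" mentioned above is, in hierarchical-classification work, commonly measured as the height of the lowest common ancestor of the predicted and true labels in a class taxonomy. The toy taxonomy and helper below are illustrative only, not the iNaturalist-H hierarchy.

    # Severity of a mistake as the height of the lowest common ancestor (LCA)
    # of the predicted and true labels in a class hierarchy (made-up taxonomy).

    PARENT = {
        "husky": "dog", "beagle": "dog", "tabby": "cat",
        "dog": "mammal", "cat": "mammal", "mammal": "animal", "animal": None,
    }

    def ancestors(label):
        chain = [label]
        while PARENT.get(chain[-1]) is not None:
            chain.append(PARENT[chain[-1]])
        return chain

    def severity(pred, true):
        """0 for a correct prediction, larger for semantically distant mistakes."""
        true_chain = ancestors(true)
        for height, node in enumerate(ancestors(pred)):
            if node in true_chain:
                return max(height, true_chain.index(node))
        return max(len(ancestors(pred)), len(true_chain))

    print(severity("husky", "husky"))   # 0
    print(severity("beagle", "husky"))  # 1 (siblings under "dog")
    print(severity("tabby", "husky"))   # 2 (labels meet at "mammal")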

Mixed reality applications require tracking the user's full-body motion to enable an immersive experience. However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to the variability in lower body configurations. We propose BoDiffusion, a generative diffusion model for motion synthesis that tackles this under-constrained problem. We present a time and space conditioning scheme that allows the model to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To...

10.48550/arxiv.2304.11118 preprint EN cc-by-nc-sa arXiv (Cornell University) 2023-01-01

This paper addresses the challenge of generating Counterfactual Explanations (CEs), which involves identifying and modifying the fewest necessary features to alter a classifier's prediction for a given image. Our proposed method, Text-to-Image Models (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, this approach requires solely the image and its prediction, omitting the need for the classifier's structure, parameters, or gradients. Before generating counterfactuals, TIME introduces two distinct...

10.48550/arxiv.2309.07944 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations, regardless of their characteristics. Yet, adversarial attacks cannot be used directly from a counterfactual explanation perspective, as such perturbations are perceived as noise and not as actionable, understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations, without modifying the classifiers to explain. The proposed...

10.48550/arxiv.2303.09962 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Instance-level video segmentation requires a solid integration of spatial and temporal information. However, current methods rely mostly on domain-specific information (online learning) to produce accurate instance-level segmentations. We propose a novel approach that relies exclusively on generic spatio-temporal attention cues. Our strategy, named Multi-Attention Instance Network (MAIN), overcomes challenging segmentation scenarios over arbitrary videos without modelling sequence- or instance-specific...

10.48550/arxiv.1904.05847 preprint EN other-oa arXiv (Cornell University) 2019-01-01
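
A toy illustration of generic spatio-temporal attention over video features, in the spirit of the cues mentioned above; the gating design below is an illustrative stand-in, not MAIN's actual attention modules.

    import torch
    import torch.nn as nn

    class SpatioTemporalAttention(nn.Module):
        """Apply a per-pixel spatial gate and a per-frame temporal gate to
        video features of shape (B, T, C, H, W)."""
        def __init__(self, channels):
            super().__init__()
            self.spatial = nn.Conv2d(channels, 1, kernel_size=1)
            self.temporal = nn.Linear(channels, 1)

        def forward(self, feats):
            B, T, C, H, W = feats.shape
            flat = feats.reshape(B * T, C, H, W)
            s_gate = torch.sigmoid(self.spatial(flat)).reshape(B, T, 1, H, W)
            t_gate = torch.softmax(
                self.temporal(feats.mean(dim=(-2, -1))).squeeze(-1), dim=1
            ).reshape(B, T, 1, 1, 1)
            return feats * s_gate * t_gate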

Abstract: Massive molecular testing for COVID-19 has been pointed out as fundamental to moderate the spread of the pandemic. Pooling methods can enhance testing efficiency, but they are viable only at low incidences of the disease. We propose Smart Pooling, a machine learning method that uses clinical and sociodemographic data from patients to increase the efficiency of informed Dorfman pooled testing by arranging samples into all-negative pools. To do this, we ran an automated process to train numerous models on a retrospective dataset of more than...

10.21203/rs.3.rs-805716/v1 preprint EN cc-by Research Square (Research Square) 2021-08-20