Dennis Vetter

ORCID: 0000-0002-5977-5535
Research Areas
  • Artificial Intelligence in Healthcare and Education
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Complex Systems and Decision Making
  • Radiology practices and education
  • Imbalanced Data Classification Techniques
  • Machine Learning in Healthcare
  • Cardiac Arrest and Resuscitation
  • Surgical Simulation and Training
  • Adversarial Robustness in Machine Learning
  • Disaster Response and Management
  • Machine Learning and Algorithms
  • AI and HR Technologies
  • COVID-19 diagnosis using AI
  • Simulation-Based Education in Healthcare
  • Innovations in Medical Education
  • Anomaly Detection Techniques and Applications
  • Advanced Vision and Imaging
  • Cutaneous Melanoma Detection and Management

Goethe University Frankfurt
2021-2023

University of Minnesota
2014

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for an AI-powered Clinical Decision Support System (CDSS) applied to a concrete use case, namely a CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of CDSSs, allowing for abstractions...

10.1371/journal.pdig.0000016 article EN cc-by PLOS Digital Health 2022-02-17

This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The AI system explains decisions made by deep learning networks analyzing images of skin lesions. The trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising...

10.3389/fhumd.2021.688152 article EN cc-by Frontiers in Human Dynamics 2021-07-13

Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI also brings risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust...

10.3389/fhumd.2021.673104 article EN cc-by Frontiers in Human Dynamics 2021-07-08

This article's main contributions are twofold: 1) to demonstrate how to apply the general European Union's High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the domain of healthcare, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in patients, developed and verified by...

10.1109/tts.2022.3195114 article EN cc-by-nc-nd IEEE Transactions on Technology and Society 2022-07-29

Abstract Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate trustworthiness at different stages of the AI lifecycle, including intended use, design, and development...

10.1007/s44206-023-00063-1 article EN cc-by Digital Society 2023-09-09

This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union's High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI. It illustrates for both researchers...

10.48550/arxiv.2206.09887 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01

This report shares the experiences, results, and lessons learned in conducting a pilot project, "Responsible use of AI", in cooperation with the Province of Friesland, Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations (BZK)) (both in The Netherlands), and a group of members of the Z-Inspection® Initiative. The pilot took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm of the province of Fryslân was assessed. The AI maps heathland and grassland by means of satellite...

10.48550/arxiv.2404.14366 preprint EN arXiv (Cornell University) 2024-04-22

Knowledge distillation (KD) remains challenging due to the opaque nature of the knowledge transfer process from a Teacher to a Student, making it difficult to address certain issues related to KD. To address this, we proposed UniCAM, a novel gradient-based visual explanation method, which effectively interprets the knowledge learned during KD. Our experimental results demonstrate that with the guidance of the Teacher's knowledge, the Student model becomes more efficient, learning relevant features while discarding those that are not relevant. We refer...

10.48550/arxiv.2412.13943 preprint EN arXiv (Cornell University) 2024-12-18

Assessing the trustworthiness of artificial intelligence systems requires knowledge from many different disciplines. These disciplines do not necessarily share concepts between them and might use words with different meanings, or even use the same words differently. Additionally, experts might not be aware of specialized terms readily used in other disciplines. Therefore, a core challenge of the assessment process is to identify when experts talk about the same problem but use different terminologies. In other words, the challenge is to group descriptions (a.k.a. issues) with the same semantic meaning but described using...

10.48550/arxiv.2208.04608 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01
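The grouping problem sketched in this abstract can be illustrated with a minimal similarity-based clustering pass. This sketch is an illustrative assumption, not the preprint's actual method: it uses a bag-of-words cosine similarity where a real pipeline would likely use learned sentence embeddings, and the function names and threshold are invented for the example.

```python
import math
from collections import Counter

def embed(text):
    # Bag-of-words vector: a crude stand-in for a sentence embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_issues(issues, threshold=0.5):
    # Greedy single-pass clustering: attach each issue description to
    # the first group whose representative is similar enough, otherwise
    # start a new group.
    groups = []  # list of (representative vector, member texts)
    for text in issues:
        vec = embed(text)
        for rep_vec, members in groups:
            if cosine(vec, rep_vec) >= threshold:
                members.append(text)
                break
        else:
            groups.append((vec, [text]))
    return [members for _, members in groups]
```

With this sketch, two rewordings of the same issue ("model output is hard to explain" vs. "output of the model is hard to explain") land in one group, while an unrelated issue about biased training data starts its own.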

Abstract. While didactic training is a crucial element of education in the health care profession, current technologies leveraging low-cost data acquisition and processing may provide an attractive alternative means for rapid, objective assessment of foundational skills. When these technologies are leveraged towards improving the procedural surgical skill set, there is a strong opportunity for enhancing current practices. While a proctor will still play a role in the refinement of clinical judgment, affordable options for rapid skills assessment may serve as an avenue...

10.5194/ms-5-17-2014 article EN cc-by Mechanical Sciences 2014-02-03