- Explainable Artificial Intelligence (XAI)
- Cutaneous Melanoma Detection and Management
- Cell Image Analysis Techniques
- AI in Cancer Detection
- Privacy-Preserving Technologies in Data
- Adversarial Robustness in Machine Learning
- Artificial Intelligence in Healthcare and Education
- Handwritten Text Recognition Techniques
- Scientific Computing and Data Management
- Machine Learning in Healthcare
- Cryptography and Data Security
- Rough Sets and Fuzzy Logic
- Data Management and Algorithms
- Advanced Image and Video Retrieval Techniques
- Bayesian Modeling and Causal Inference
- Pressure Ulcer Prevention and Management
- Machine Learning and Data Classification
- Anomaly Detection Techniques and Applications
- Data Quality and Management
- Nonmelanoma Skin Cancer Studies
- Wound Healing and Treatments
- Multimodal Machine Learning Applications
- Diabetic Foot Ulcer Assessment and Management
- Ethics and Social Impacts of AI
- Medical Imaging and Analysis
German Research Centre for Artificial Intelligence
2019-2024
Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
2021-2024
University of Kaiserslautern
2020-2022
Pforzheim University of Applied Sciences
2020
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks that analyze images of skin lesions. The trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the designers and their managers. Ethical, legal, and technical issues potentially arising...
The field of explainable AI (XAI) has quickly become a thriving and prolific community. However, a silent, recurrent, and acknowledged issue in this area is the lack of consensus regarding its terminology. In particular, each new contribution seems to rely on its own (and often intuitive) version of terms like "explanation" and "interpretation". Such disarray encumbers the consolidation of advances towards the fulfillment of scientific and regulatory demands, e.g., when comparing methods or establishing their compliance w.r.t....
Deep learning-based medical image classifiers have shown remarkable prowess in various application areas like ophthalmology, dermatology, pathology, and radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD) systems in real clinical setups is severely limited, primarily because their decision-making process remains largely obscure. This work aims at elucidating a deep classifier by verifying that the model learns and utilizes similar disease-related concepts as described and employed...
Diabetic foot is a common complication associated with diabetes mellitus (DM), leading to ulcerations in the feet. Due to diabetic neuropathy, most patients have reduced sensitivity to pain. As a result, minor injuries go unnoticed and progress into ulcers. The timely detection of potential ulceration points and early intervention are crucial for preventing amputation. Changes in plantar temperature are one of the early signs of ulceration. Previous studies focused on either binary classification or grading DM severity, but...
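The plantar-temperature screening idea can be sketched as a minimal rule-based baseline. The 2.2 °C contralateral-asymmetry threshold below is a commonly cited heuristic from the diabetic-foot thermography literature, used here as an assumption for illustration; it is not the method proposed in the paper.

```python
def at_risk(temp_left: float, temp_right: float, threshold: float = 2.2) -> bool:
    """Flag a plantar region as at-risk of ulceration when the temperature
    difference between corresponding points on the left and right foot
    exceeds the threshold (in degrees Celsius).

    The 2.2 C default is a widely cited screening heuristic (an assumption
    here, not the paper's classifier).
    """
    return abs(temp_left - temp_right) > threshold
```

A learned classifier on full thermograms would replace this rule, but the asymmetry signal is the same physiological cue such models are expected to pick up.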
Since the advent of deep learning (DL), the field has witnessed a continuous stream of innovations. However, the translation of these advancements into practical applications has not kept pace, particularly in safety-critical domains where artificial intelligence (AI) must meet stringent regulatory and ethical standards. This is underscored by ongoing research in eXplainable AI (XAI) and privacy-preserving machine learning (PPML), which seek to address some of the limitations associated with opaque, data-intensive models. Despite...
With the advent of machine learning in applications of critical infrastructure such as healthcare and energy, privacy is a growing concern in the minds of stakeholders. It is pivotal to ensure that neither the model nor the data can be used by attackers to extract sensitive information about individuals or to harm whole societies through exploitation of infrastructure. The applicability of machine learning in these domains is mostly limited due to a lack of trust regarding transparency and privacy constraints. Various safety-critical use cases (mostly relying on...
Artificial Intelligence (AI) has achieved remarkable success in image generation, analysis, and language modeling, making data-driven techniques increasingly relevant to practical real-world applications and promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithmic accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces...
Since the mid-2010s, the era of Deep Learning (DL) has continued to this day, bringing forth new superlatives and innovations each year. Nevertheless, the speed with which these innovations translate into real applications lags behind this fast pace. Safety-critical applications, in particular, underlie strict regulatory and ethical requirements that need to be taken care of and are still active areas of debate. eXplainable AI (XAI) and privacy-preserving machine learning (PPML) are both crucial research fields, aiming at mitigating some of the drawbacks...
In this paper, we study the performance evaluation of state-of-the-art object detection models for the task of bibliographic reference extraction from document images. The motivation for the task at hand is inspired by how humans perceive a page containing references. Humans can easily distinguish between different references just by exploiting the layout, at a glimpse of an eye and without understanding the content. Existing systems are purely based on textual features. By contrast, we employed four state-of-the-art object detection models and compared their text...
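Layout-based reference detection of this kind is typically scored with intersection-over-union (IoU) between predicted and ground-truth reference boxes. A minimal sketch follows; the function name and the `(x1, y1, x2, y2)` box convention are illustrative assumptions, not details from the paper.

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detected reference is usually counted as correct when its IoU with a ground-truth box exceeds a fixed threshold (0.5 is a common choice).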
The remarkable success of modern image-based AI methods and the resulting interest in their application to critical decision-making processes have led to a surge of efforts to make such intelligent systems transparent and explainable. The need for explainability does not stem only from ethical and moral grounds but also from stricter legislation around the world mandating clear and justifiable explanations for any decision taken or assisted by AI. This holds especially in the medical context, where Computer-Aided Diagnosis can have a direct influence...
Early detection of skin cancers like melanoma is crucial to ensure high chances of survival for patients. Clinical application of Deep Learning (DL)-based Decision Support Systems (DSS) for skin cancer screening has the potential to improve the quality of patient care. The majority of work in the medical AI community focuses on a diagnosis setting that is mainly relevant for autonomous operation. Practical decision support should, however, go beyond plain diagnosis and provide explanations. This paper provides an overview of works towards...
Trustworthiness is a major prerequisite for the safe application of opaque deep learning models in high-stakes domains like medicine. Understanding the decision-making process not only contributes to fostering trust but might also reveal previously unknown decision criteria within these complex models that could advance the state of medical research. The discovery of decision-relevant concepts from black-box models is a particularly challenging task. This study proposes Concept Discovery through Latent Diffusion-based Counterfactual...
Privacy preservation is of key importance for the transition of modern deep learning algorithms into everyday applications dealing with sensitive data, such as healthcare, finance, and several other domains of critical infrastructure. One major impediment to research in computer science is the considerable time investment required to set up experiments and their evaluation. In the domain of privacy-preserving machine learning, this is aggravated by the dispersion of implementations throughout frameworks and libraries. This work introduces...
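As an illustration of the kind of primitive such PPML experiments are built on, the following sketches the classic (ε, δ)-differentially-private Gaussian mechanism. This is a generic textbook calibration, not code from the framework introduced in the paper.

```python
import math
import random


def gaussian_mechanism(value: float, sensitivity: float,
                       epsilon: float, delta: float,
                       rng: random.Random) -> float:
    """Release `value` with Gaussian noise calibrated for (epsilon, delta)-DP.

    Uses the classic analytic calibration
        sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    valid for epsilon <= 1. `sensitivity` is the L2 sensitivity of the
    query being released.
    """
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return value + rng.gauss(0.0, sigma)
```

In practice such a mechanism is applied to aggregate statistics or clipped gradients; the noise is unbiased, so repeated releases average back to the true value while each individual release stays private.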
One principal impediment to the successful deployment of AI-based Computer-Aided Diagnosis (CAD) systems in clinical workflows is their lack of transparent decision making. Although commonly used eXplainable AI methods provide some insight into these opaque algorithms, such explanations are usually convoluted and not readily comprehensible except by highly trained experts. The explanation of decisions regarding the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying...
It is generally believed that the human visual system is biased towards the recognition of shapes rather than textures. This assumption has led to a growing body of work aiming to align deep models' decision-making processes with the fundamental properties of human vision. The reliance on shape features is primarily expected to improve the robustness of these models under covariate shift. In this paper, we revisit the significance of shape biases for the classification of skin lesion images. Our analysis shows that different datasets exhibit...
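One common way to probe a model's shape-versus-texture reliance is to low-pass filter its inputs so that high-frequency texture is attenuated while coarse shape survives, then compare accuracy on original and filtered images. Below is a minimal sketch using a simple mean filter; this is an assumed illustration of the general technique, not the exact procedure from the paper.

```python
import numpy as np


def suppress_texture(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Crude texture suppression: a k x k mean filter applied per channel
    to an H x W x C image. Coarse structure (shape) survives; fine,
    high-frequency texture is attenuated."""
    pad = k // 2
    # Edge padding keeps the output the same size as the input.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    # Sum the k*k shifted copies, then normalize.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1], :]
    return out / (k * k)
```

Feeding `suppress_texture(x)` instead of `x` to a trained classifier and measuring the accuracy drop gives a rough estimate of how much the model depends on texture cues.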