- Explainable Artificial Intelligence (XAI)
- Artificial Intelligence in Healthcare and Education
- Decision-Making and Behavioral Economics
- Psychology of Moral and Emotional Judgment
- Blood Pressure and Hypertension Studies
- SARS-CoV-2 and COVID-19 Research
- Machine Learning in Healthcare
- COVID-19 Clinical Research Studies
- COVID-19 Diagnosis Using AI
- Imbalanced Data Classification Techniques
- Ethics and Social Impacts of AI
- Flow Experience in Various Fields
- Safety Warnings and Signage
- Artificial Intelligence in Law
- Neurological Disorders and Treatments
- Adversarial Robustness in Machine Learning
- Mental Health via Writing
- Cutaneous Melanoma Detection and Management
- Acupuncture Treatment Research Studies
- Acute Ischemic Stroke Management
- Chronic Disease Management Strategies
- Mind Wandering and Attention
- Cerebrovascular and Carotid Artery Diseases
- Death Anxiety and Social Exclusion
- Sentiment Analysis and Opinion Mining
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo"
2022-2025
Confederazione Nazionale dell'Artigianato e Della Piccola e Media Impresa
2023
National Research Council
2021-2022
Azienda Socio Sanitaria Territoriale Grande Ospedale Metropolitano Niguarda
2018-2021
Azienda Socio Sanitaria Territoriale Lariana
2020
University of Trento
2019
The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two different cases: a case where the DSS explains its suggestion and a case where it does not. We examined the weight of advice,...
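The "weight of advice" mentioned above is a standard judge-advisor measure: the shift from the initial to the final judgment, relative to the distance between the initial judgment and the advice. A minimal sketch (the example values are illustrative, not from the study):

```python
def weight_of_advice(initial, advice, final):
    """WoA = (final - initial) / (advice - initial).

    1.0 means the judge fully adopted the advice, 0.0 means it was
    ignored. Undefined when the advice equals the initial estimate,
    so we return None in that case.
    """
    if advice == initial:
        return None
    return (final - initial) / (advice - initial)

# A clinician first estimates 40, the DSS suggests 60, and the
# revised estimate is 55: WoA = 15 / 20 = 0.75
print(weight_of_advice(40, 60, 55))  # → 0.75
```

Values above 0.5 indicate the advice was weighted more heavily than the judge's own initial estimate.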
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, this second aspect has so far received limited attention in the literature. Effective explanation interfaces are fundamental for allowing human decision-makers to take advantage of, and effectively oversee, high-risk AI systems. Following an iterative design approach, we present the first cycle...
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between model interpretability and performance, our work underscores the significance of XAI methods in enhancing...
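The core idea behind local rule-based explanation can be sketched in a few lines: perturb the instance to explain, label the resulting neighbourhood with the black-box model, and fit a simple, human-readable rule to those labels. The sketch below is a much-simplified stand-in (LORE itself generates the neighbourhood genetically and fits a full decision tree), and the `black_box` rule is hypothetical, not a model from the paper:

```python
import random

def black_box(age, bp):
    # Stand-in for an opaque clinical classifier (hypothetical rule).
    return 1 if age > 50 and bp > 140 else 0

def local_rule(instance, n_samples=500, seed=0):
    """Perturb `instance`, label the neighbourhood with the black box,
    then search for the single-feature threshold rule that best
    reproduces those labels locally."""
    rng = random.Random(seed)
    age0, bp0 = instance
    neigh = [(age0 + rng.gauss(0, 10), bp0 + rng.gauss(0, 20))
             for _ in range(n_samples)]
    labels = [black_box(a, b) for a, b in neigh]
    best_acc, best_rule = 0.0, None
    for idx, name in ((0, "age"), (1, "bp")):
        for thr in sorted({round(p[idx]) for p in neigh}):
            preds = [1 if p[idx] > thr else 0 for p in neigh]
            acc = sum(p == l for p, l in zip(preds, labels)) / n_samples
            if acc > best_acc:
                best_acc, best_rule = acc, f"{name} > {thr}"
    return best_rule, best_acc

rule, acc = local_rule((55, 150))
print(rule, round(acc, 2))
```

The returned rule ("feature > threshold") is the kind of statement a clinician can inspect directly, which is the interpretability payoff the abstract refers to.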
A crucial challenge in critical settings like medical diagnosis is making the deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated only on broad classifiers and fail to address complex, real-world issues such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain the model's...
Reverse Transcription-Polymerase Chain Reaction (RT-PCR) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) diagnosis currently requires quite a long time span. A quicker and more efficient diagnostic tool in emergency departments could improve management during this global crisis. Our main goal was assessing the accuracy of artificial intelligence in predicting the results of RT-PCR for SARS-CoV-2, using basic information at hand in all emergency departments. This is a retrospective study carried out...
This article's main contributions are twofold: 1) to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the domain of healthcare, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by...
Explainable AI (XAI) provides methods to understand non-interpretable machine learning models. However, we have little knowledge about what legal experts expect from these explanations, including their compliance with, and value under, European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal practitioners towards XAI, with a specific focus on the General Data Protection Regulation. The study consists of an online...
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim at improving the trust and confidence of users towards automatic AI decision systems in the field of skin lesion diagnosis by customizing an existing...
Objective Through a hospital-based SARS-CoV-2 molecular and serological screening, we evaluated the effectiveness of two months of lockdown and of surveillance in Milan, Lombardy, the first area to be overwhelmed by the COVID-19 pandemic during March-April 2020. Methods All subjects presenting at the major hospital of Milan from May 11 to July 5, 2020, underwent screening with chemiluminescent assays. Those admitted were further tested with RT-PCR. Results The cumulative anti-N IgG seroprevalence in the 2753 subjects analyzed was 5.1% (95%CI =...
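A 95% confidence interval for a seroprevalence estimate like the one above is typically derived from the binomial proportion. The sketch below uses the normal-approximation (Wald) interval on the reported point estimate; the paper's exact bounds may differ slightly if an exact or Wilson interval was used:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion:
    p_hat ± z * sqrt(p_hat * (1 - p_hat) / n)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Point estimate 5.1% over n = 2753 screened subjects, as reported above.
lo, hi = wald_ci(0.051, 2753)
print(f"{lo:.3f}-{hi:.3f}")  # → 0.043-0.059
```

For proportions this close to zero, the Wilson interval is usually preferred because the Wald interval can undercover, but at n = 2753 the two agree closely.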
Detecting and characterizing people with mental disorders is an important task that could help the work of different healthcare professionals. Sometimes, a diagnosis for specific mental disorders requires a long time, possibly causing problems, because being diagnosed can give access to support groups, treatment programs, and medications that might help patients. In this paper, we study the problem of exploiting supervised learning approaches, based on users' psychometric profiles extracted from Reddit posts, to detect users dealing...
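The supervised setup described above reduces to classifying fixed-length psychometric feature vectors. A minimal nearest-centroid sketch (the feature values, their meaning, and the labels below are all hypothetical placeholders for profiles extracted from posts):

```python
import math

# Hypothetical psychometric features per user (e.g., scores derived
# from their posts); label 1 marks users from a condition-related
# community, label 0 a control group.
train = [
    ([0.9, 0.2, 0.7], 1), ([0.8, 0.3, 0.6], 1), ([0.7, 0.1, 0.8], 1),
    ([0.2, 0.8, 0.3], 0), ([0.1, 0.9, 0.2], 0), ([0.3, 0.7, 0.1], 0),
]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(data):
    """Nearest-centroid classifier: one mean feature vector per class."""
    return {c: centroid([x for x, y in data if y == c])
            for c in {y for _, y in data}}

def predict(model, x):
    # Assign the class whose centroid is closest in Euclidean distance.
    return min(model, key=lambda c: math.dist(model[c], x))

model = fit(train)
print(predict(model, [0.85, 0.25, 0.65]))  # → 1
```

In practice one would use a richer model and proper train/test splits, but the pipeline shape (profiles in, binary label out) is the same.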
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we analyze a case study on skin lesion images, where we customize an existing XAI approach for explaining a model able to recognize different types...
Flow is a precious mental state for achieving high sports performance. It is defined as an emotional state with specific valence and arousal levels. However, a viable detection system that could provide information about it in real time has not yet been recognized. The prospective work presented here aims at the creation of an online flow detection framework. A supervised machine learning model will be trained to predict valence and arousal levels, both on already existing databases and on freshly collected physiological data. As a final result, the definition...
Artificial Intelligence (AI) is increasingly used to build Decision Support Systems (DSS) across many domains. This paper describes a series of experiments designed to observe the human response to different characteristics of a DSS, such as accuracy and bias, particularly the extent to which participants rely on the DSS and the performance they achieve. In our experiments, participants play a simple online game inspired by so-called "wildcat" (i.e., exploratory) drilling for oil. The landscape has two layers: a visible layer describing...
Objective: Data regarding the prevalence and clinical management of hypertensive emergencies and urgencies are lacking and heterogeneous. Our goal is to characterize patients with hypertensive emergencies and urgencies admitted to the emergency department (ED) of Niguarda hospital. In this population, we also want to evaluate factors associated with organ damage, adherence to guidelines, and the impact of blood pressure (BP) on short-term (admission and in-hospital mortality) and medium-term (recurrence) outcomes. Design and method: We performed a single-centre retrospective study...