- Machine Learning in Healthcare
- Artificial Intelligence in Healthcare and Education
- Explainable Artificial Intelligence (XAI)
- Ethics and Social Impacts of AI
- Law, AI, and Intellectual Property
- Biomedical Text Mining and Ontologies
- Machine Learning and Data Classification
- Human Mobility and Location-Based Analysis
- Artificial Intelligence in Healthcare
- Topic Modeling
- COVID-19 Epidemiological Studies
- Complex Systems and Time Series Analysis
- Opinion Dynamics and Social Influence
- Radiology Practices and Education
- Data-Driven Disease Surveillance
- COVID-19 Diagnosis Using AI
- Nonlinear Dynamics and Pattern Formation
Joint Research Centre
2023-2024
University of Pisa
2022-2023
Scuola Normale Superiore
2019-2022
Institute for Scientific Interchange
2017
University of Turin
2017
Several recent advancements in Machine Learning involve black-box models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem by providing explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label...
The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two different cases: a case where the DSS explains its suggestion and a case where it does not. We examined the weight of advice,...
The proposed EU regulation for Artificial Intelligence (AI), the AI Act, has sparked some debate about the role of explainable AI (XAI) in high-risk AI systems. Some argue that black-box models will have to be replaced with transparent ones; others argue that using XAI techniques might help in achieving compliance. This work aims to bring clarity as regards XAI in the context of the AI Act and focuses in particular on the requirements for transparency and human oversight. After outlining the key points of the debate and describing the current limitations of XAI techniques, this paper...
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective explanation user interfaces are fundamental for allowing human decision-makers to take advantage of and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle...
The pervasive application of algorithmic decision-making is raising concerns on the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of model bias is a very delicate task that should be tackled with care, involving domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system (DSS). In this scenario, the healthcare...
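The core idea of auditing a black-box clinical model for subgroup bias can be illustrated with a minimal sketch: compare the model's error rate on each patient subgroup against its overall error rate and flag the outliers for expert review. All names, the synthetic data, and the flagging threshold below are illustrative assumptions, not the FairLens methodology itself.

```python
import random

random.seed(0)

# Hypothetical audit data: each record carries a patient subgroup, the
# black-box model's prediction, and the ground-truth label (all synthetic).
records = [
    {"group": random.choice(["age<40", "40-65", "age>65"]),
     "pred": random.random() > 0.5,
     "label": random.random() > 0.5}
    for _ in range(1000)
]

def subgroup_error_rates(records):
    """Return the model's misclassification rate for each subgroup."""
    totals, errors = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (r["pred"] != r["label"])
    return {g: errors[g] / totals[g] for g in totals}

rates = subgroup_error_rates(records)
overall = sum(r["pred"] != r["label"] for r in records) / len(records)

# Flag subgroups whose error rate exceeds the overall rate by a margin;
# 0.05 is an arbitrary illustrative threshold, not FairLens's criterion.
flagged = {g: e for g, e in rates.items() if e > overall + 0.05}
```

In practice the flagged subgroups would then be handed to domain experts, together with explanations of why the model errs on them, rather than acted on automatically.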
The recent availability of large-scale call detail record data has substantially improved our ability to quantify human travel patterns, with broad applications in epidemiology. Notwithstanding a number of successful case studies, previous works have shown that using different mobility data sources, such as mobile phone data or census surveys, to parametrize infectious disease models can generate divergent outcomes. Thus, it remains unclear to what extent epidemic modelling results may vary when different proxies for...
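The sensitivity of epidemic models to the choice of mobility proxy can be illustrated with a minimal sketch: a two-patch SIR metapopulation model run twice with the same disease parameters but two different (hypothetical) mobility matrices. All parameter values and both matrices are illustrative assumptions, not the data or model of the study.

```python
def simulate(mobility, beta=0.3, gamma=0.1, days=200):
    """Discrete-time SIR in two patches coupled by a row-stochastic
    mobility matrix (mobility[j][i] = fraction of patch j's residents
    present in patch i); returns the final attack rate. This is a
    deliberately simplified mixing scheme for illustration."""
    S, I, R = [0.999, 1.0], [0.001, 0.0], [0.0, 0.0]
    for _ in range(days):
        # infectious fraction effectively present in each patch after mixing
        eff_I = [sum(mobility[j][i] * I[j] for j in range(2)) for i in range(2)]
        new_inf = [beta * S[i] * eff_I[i] for i in range(2)]
        new_rec = [gamma * I[i] for i in range(2)]
        for i in range(2):
            S[i] -= new_inf[i]
            I[i] += new_inf[i] - new_rec[i]
            R[i] += new_rec[i]
    return sum(R) / 2.0  # fraction of the total population ever infected

# Two hypothetical proxies for mobility between the same pair of regions:
phone_proxy  = [[0.90, 0.10], [0.10, 0.90]]   # e.g. call detail records
census_proxy = [[0.99, 0.01], [0.01, 0.99]]   # e.g. commuting survey

attack_phone = simulate(phone_proxy)
attack_census = simulate(census_proxy)
```

Even in this toy setting the two proxies produce different epidemic trajectories, since the stronger coupling in the phone-based matrix seeds the initially uninfected patch earlier; comparing such outputs across proxies is the kind of question the study addresses at scale.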
This article's main contributions are twofold: 1) to demonstrate how to apply the general European Union's High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the domain of healthcare, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by...
With the upcoming enforcement of the EU AI Act, the documentation of high-risk AI systems and their risk management information will become a legal requirement playing a pivotal role in the demonstration of compliance. Despite its importance, there is a lack of standards and guidelines to assist with drawing up documentation aligned with the AI Act. This paper aims to address this gap by providing an in-depth analysis of the Act's provisions regarding technical documentation, wherein we particularly focus on risk management. On the basis of this analysis, we propose AI Cards as...