Cecilia Panigutti

ORCID: 0000-0002-6552-787X
Research Areas
  • Machine Learning in Healthcare
  • Artificial Intelligence in Healthcare and Education
  • Explainable Artificial Intelligence (XAI)
  • Ethics and Social Impacts of AI
  • Law, AI, and Intellectual Property
  • Biomedical Text Mining and Ontologies
  • Machine Learning and Data Classification
  • Human Mobility and Location-Based Analysis
  • Artificial Intelligence in Healthcare
  • Topic Modeling
  • COVID-19 epidemiological studies
  • Complex Systems and Time Series Analysis
  • Opinion Dynamics and Social Influence
  • Radiology practices and education
  • Data-Driven Disease Surveillance
  • COVID-19 diagnosis using AI
  • Nonlinear Dynamics and Pattern Formation

Joint Research Centre
2023-2024

University of Pisa
2022-2023

Scuola Normale Superiore
2019-2022

Institute for Scientific Interchange
2017

University of Turin
2017

Several recent advancements in Machine Learning involve black-box models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem by providing human-interpretable explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label...

10.1145/3351095.3372855 article EN 2020-01-27
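The abstract above is about providing human-interpretable explanations for black-box models. A minimal, generic sketch of one common XAI idea, a global surrogate model (this is not the paper's ontology-based method; the dataset, classifiers, and parameters below are illustrative assumptions):

```python
# Sketch: approximate a black-box classifier with an interpretable surrogate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "black box": an ensemble whose decisions are hard to inspect directly.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit a shallow decision tree to mimic the black box's predictions,
# yielding human-readable rules that approximate its behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.2f}")
```

A high fidelity score suggests the extracted rules are a faithful (if simplified) description of the black box on this data.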

The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two different cases: a case where the DSS explains its suggestion and a case where it does not. We examined the weight of advice,...

10.1145/3491102.3502104 article EN CHI Conference on Human Factors in Computing Systems 2022-04-28

The proposed EU regulation for Artificial Intelligence (AI), the AI Act, has sparked some debate about the role of explainable AI (XAI) in high-risk AI systems. Some argue that black-box models will have to be replaced with transparent ones; others argue that using XAI techniques might help in achieving compliance. This work aims to bring clarity as regards XAI in the context of the AI Act and focuses in particular on the requirements for transparency and human oversight. After outlining the key points of the debate and describing the current limitations of XAI techniques, this paper...

10.1145/3593013.3594069 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2023-06-12

eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, this second aspect has received limited attention so far in the literature. Effective explanation user interfaces are fundamental for allowing human decision-makers to take advantage of and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle...

10.1145/3587271 article EN ACM Transactions on Interactive Intelligent Systems 2023-03-14

The pervasive application of algorithmic decision-making is raising concerns on the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of model bias is a very delicate task that should be tackled with care, involving domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system (DSS). In this scenario, the healthcare...

10.1016/j.ipm.2021.102657 article EN cc-by-nc-nd Information Processing & Management 2021-06-22
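The core idea behind a bias audit like the one described above can be sketched as comparing a model's error rates across patient subgroups to flag under-served groups. The following is an illustrative toy example, not the FairLens implementation; the subgroup labels, error probabilities, and simulated predictions are assumptions:

```python
# Sketch: detect a subgroup on which a model makes disproportionate errors.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # hypothetical subgroup label (e.g. an age band)
y_true = rng.integers(0, 2, n)   # hypothetical ground-truth outcomes

# Simulate a model that errs more often on group 1 (30% vs 10% flip rate).
flip = rng.random(n) < np.where(group == 1, 0.30, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

# Audit: per-group error rates reveal the disparity.
err = {}
for g in (0, 1):
    mask = group == g
    err[g] = (y_pred[mask] != y_true[mask]).mean()
    print(f"group {g}: error rate {err[g]:.2f}")
```

In a real audit, the flagged subgroups would then be examined with domain experts to explain and mitigate the disparity.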

The recent availability of large-scale call detail record data has substantially improved our ability to quantify human travel patterns, with broad applications in epidemiology. Notwithstanding a number of successful case studies, previous works have shown that using different mobility data sources, such as mobile phone records or census surveys, to parametrize infectious disease models can generate divergent outcomes. Thus, it remains unclear to what extent epidemic modelling results may vary when different proxies for...

10.1098/rsos.160950 article EN cc-by Royal Society Open Science 2017-05-01
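The sensitivity described above can be illustrated with a toy compartmental model: two hypothetical mobility proxies that imply different transmission rates lead to different epidemic outcomes. This is a generic sketch, not the paper's model; the beta values standing in for the two parametrizations are arbitrary assumptions:

```python
# Sketch: a discrete-time SIR model run under two different transmission
# rates, standing in for parametrizations from two mobility data sources.
def sir(beta, gamma=0.1, days=200, i0=1e-3):
    """Return (peak infected fraction, final epidemic size)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i   # new infections this step
        new_rec = gamma * i      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak, r

# Hypothetical parametrizations from two mobility proxies.
peak_phone, final_phone = sir(beta=0.30)    # e.g. phone-data-derived contacts
peak_census, final_census = sir(beta=0.22)  # e.g. census-derived contacts

print(f"phone-data proxy: peak={peak_phone:.2f}, final size={final_phone:.2f}")
print(f"census proxy:     peak={peak_census:.2f}, final size={final_census:.2f}")
```

Even this toy model shows how the choice of mobility proxy propagates into materially different peak and final-size estimates.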

This article's main contributions are twofold: 1) to demonstrate how to apply the general European Union's High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the domain of healthcare and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by...

10.1109/tts.2022.3195114 article EN cc-by-nc-nd IEEE Transactions on Technology and Society 2022-07-29

With the upcoming enforcement of the EU AI Act, the documentation of high-risk AI systems and their risk management information will become a legal requirement playing a pivotal role in the demonstration of compliance. Despite its importance, there is a lack of standards and guidelines to assist with drawing up documentation aligned with the AI Act. This paper aims to address this gap by providing an in-depth analysis of the AI Act's provisions regarding technical documentation, wherein we particularly focus on AI risk management. On the basis of this analysis, we propose AI Cards as...

10.31219/osf.io/6dxgt preprint EN 2024-06-27

With the upcoming enforcement of the EU AI Act, the documentation of high-risk AI systems and their risk management information will become a legal requirement playing a pivotal role in the demonstration of compliance. Despite its importance, there is a lack of standards and guidelines to assist with drawing up documentation aligned with the AI Act. This paper aims to address this gap by providing an in-depth analysis of the AI Act's provisions regarding technical documentation, wherein we particularly focus on AI risk management. On the basis of this analysis, we propose AI Cards as...

10.48550/arxiv.2406.18211 preprint EN arXiv (Cornell University) 2024-06-26

The pervasive application of algorithmic decision-making is raising concerns on the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of biased models is a very delicate task which should be tackled with care, involving domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system....

10.48550/arxiv.2011.04049 preprint EN other-oa arXiv (Cornell University) 2020-01-01