- Explainable Artificial Intelligence (XAI)
- Adversarial Robustness in Machine Learning
- Privacy, Security, and Data Protection
- Ethics and Social Impacts of AI
- Mobile Crowdsensing and Crowdsourcing
- Cell Image Analysis Techniques
- Machine Learning in Healthcare
- Advanced Neural Network Applications
King's College London
2023
University of Edinburgh
2021
Explanations for AI are a crucial part of autonomous systems: they increase users' confidence, provide an interpretation of an otherwise black-box system, and can serve as an interface between the user and the system. They are expected to become mandatory for all systems influencing people (see, for example, the upcoming EU AI Act). While explanations for image classifiers have so far focused on explaining images of objects, such as those in ImageNet, there is an important area of application for them, namely healthcare. In this paper we focus on a particular healthcare use...
Algorithmic systems are increasingly deployed to make decisions that people used to make. Perceptions of these systems can significantly influence their adoption, yet, broadly speaking, users' understanding of their internal workings is limited. To explore perceptions of algorithmic systems, we developed a prototype e-recruitment system called Algorithm Playground, which offers users a look behind the scenes of such systems and provides "how" and "why" explanations of how job applicants are ranked by algorithms. Using an online study with...
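As a rough illustration of the distinction between "how" and "why" explanations for an algorithmic ranking, the sketch below scores applicants with a simple weighted model and attaches per-feature contributions ("how") and a pairwise comparison ("why"). This is an illustrative assumption, not the Algorithm Playground implementation; the feature names, weights, and applicants are hypothetical.

```python
# Hypothetical sketch: a linear scoring model with "how" and "why" explanations.
# Feature names and weights are illustrative assumptions, not from the paper.
from dataclasses import dataclass

WEIGHTS = {"years_experience": 0.5, "skill_match": 0.35, "education_level": 0.15}

@dataclass
class Applicant:
    name: str
    features: dict  # feature values normalized to [0, 1]

def score(applicant: Applicant) -> float:
    return sum(WEIGHTS[f] * applicant.features[f] for f in WEIGHTS)

def how_explanation(applicant: Applicant) -> dict:
    # "How": how much each feature contributed to the final score.
    return {f: round(WEIGHTS[f] * applicant.features[f], 3) for f in WEIGHTS}

def why_explanation(applicant: Applicant, other: Applicant) -> str:
    # "Why": which feature most drives the rank difference between two applicants.
    diffs = {f: (applicant.features[f] - other.features[f]) * WEIGHTS[f] for f in WEIGHTS}
    top = max(diffs, key=lambda f: abs(diffs[f]))
    direction = "higher" if diffs[top] > 0 else "lower"
    return f"{applicant.name} ranks {direction} than {other.name} mainly due to {top}."

applicants = [
    Applicant("A", {"years_experience": 0.8, "skill_match": 0.6, "education_level": 0.9}),
    Applicant("B", {"years_experience": 0.5, "skill_match": 0.9, "education_level": 0.7}),
]
ranked = sorted(applicants, key=score, reverse=True)
for a in ranked:
    print(a.name, round(score(a), 3), how_explanation(a))
print(why_explanation(ranked[0], ranked[1]))
```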
Using large pre-trained models for image recognition tasks is becoming increasingly common owing to the well-acknowledged success of recent models like vision transformers and CNN-based architectures such as VGG and ResNet. The high accuracy of these models on benchmark datasets has translated into their practical use across many domains, including safety-critical applications such as autonomous driving and medical diagnostics. Despite their widespread use, these models have been shown to be fragile to changes in their operating environment, bringing their robustness into question. There is an...
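A minimal sketch of probing the fragility mentioned above: it compares a pre-trained classifier's top-1 prediction on a clean input against the same input with additive Gaussian noise, a simple stand-in for a change in operating environment. This assumes a torchvision ResNet-50 and a dummy input tensor; it is not the evaluation protocol of the paper.

```python
# Sketch of a robustness probe for a pre-trained image classifier.
# The noise level and the use of a random dummy input are assumptions.
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")
model.eval()

def predicts_same(model, image, noise_std=0.05):
    """Return True if the top-1 prediction is unchanged under Gaussian noise."""
    with torch.no_grad():
        clean_pred = model(image).argmax(dim=1)
        noisy_pred = model(image + noise_std * torch.randn_like(image)).argmax(dim=1)
    return bool((clean_pred == noisy_pred).all())

# A random tensor stands in for a normalized 224x224 RGB image batch.
dummy_image = torch.randn(1, 3, 224, 224)
print("prediction stable under noise:", predicts_same(model, dummy_image))
```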