- Semantic Web and Ontologies
- Service-Oriented Architecture and Web Services
- Multi-Agent Systems and Negotiation
- AI-based Problem Solving and Planning
- Explainable Artificial Intelligence (XAI)
- Logic, Reasoning, and Knowledge
- Scientific Computing and Data Management
- Advanced Database Systems and Queries
- Distributed and Parallel Computing Systems
- Data Quality and Management
- Software Engineering Research
- Business Process Modeling and Analysis
- Topic Modeling
- Big Data and Business Intelligence
- Bayesian Modeling and Causal Inference
- Adversarial Robustness in Machine Learning
- Energy Efficient Wireless Sensor Networks
- Natural Language Processing Techniques
- Software Reliability and Analysis Research
- Software Engineering Techniques and Practices
- Peer-to-Peer Network Technologies
- Anomaly Detection Techniques and Applications
- Speech and Dialogue Systems
- Advanced Text Analysis Techniques
- Optimization and Search Problems
Cardiff University
2015-2024
Institute for Security Studies
2019
University of Aberdeen
1999-2008
University of Manchester
2007
BT Group (United Kingdom)
2006
University of Southampton
2006
Concordia University
1990-2003
Binghamton University
2002
King's College Hospital
1996
Université Savoie Mont Blanc
1994
Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, they continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process - incorporating these networks into mission-critical processes such as medical diagnosis, planning and control - requires a level of trust to be associated with the machine output. Typically, statistical...
Summary Recent rapid progress in machine learning (ML), particularly so‐called ‘deep learning’, has led to a resurgence of interest in the explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML‐based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.
Saliency maps are a popular approach to creating post-hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights the important pixels. Despite the proliferation of such methods, little effort has been made to quantify how good these methods are at capturing the truly relevant pixels (i.e. their “fidelity”). We therefore investigate existing metrics for evaluating fidelity (i.e. fidelity metrics). We find there is...
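A common way to probe saliency-map fidelity, in the spirit of the abstract above, is a deletion test: remove pixels in order of decreasing saliency and watch how fast the classifier's score falls. The sketch below is illustrative only (the `model` callable, greyscale images, and zero-masking are assumptions, not the paper's actual metric):

```python
import numpy as np

def deletion_curve_score(model, image, saliency, steps=10):
    """Deletion-style fidelity check for a saliency map.

    model: callable taking a 2-D image array and returning a scalar score.
    image, saliency: 2-D arrays of the same shape.
    Pixels are zeroed out most-salient-first; a faithful map should make
    the score drop steeply early, giving a LOW mean deletion-curve value.
    """
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    perturbed = image.copy()
    flat = perturbed.reshape(-1)                # view into perturbed
    scores = [float(model(perturbed))]
    chunk = max(1, order.size // steps)
    for i in range(0, order.size, chunk):
        flat[order[i:i + chunk]] = 0.0          # grey out next pixel batch
        scores.append(float(model(perturbed)))
    return float(np.mean(scores))               # lower = higher fidelity
```

A saliency map that correctly ranks the pixels the model actually uses will yield a lower curve score than one that ranks irrelevant pixels first.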
Social media has become extremely influential when it comes to policy making in modern societies, especially in the western world, where platforms such as Twitter allow users to follow politicians, thus making citizens more involved in political discussion. In the same vein, politicians use Twitter to express their opinions, debate with others on current topics, and promote their agendas, aiming to influence voter behaviour. In this paper, we attempt to analyse the tweets of politicians from three European countries and explore the virality of their tweets. Previous...
Anomalies such as redundant, contradictory, and deficient knowledge in a knowledge base are symptoms of probable errors. Detecting anomalies is a well-established method for verifying knowledge-based systems. Although many tools have been developed to perform anomaly detection, several important issues have been neglected, especially the theoretical foundations and computational limitations of detection methods, and analyses of their utility in practical use. This article addresses these issues by presenting a theoretical foundation and empirical results...
In this paper we investigate the role of idioms in automated approaches to sentiment analysis. To estimate the degree to which their inclusion as features may potentially improve the results of traditional sentiment analysis, we compared our results against two such methods. First, to support their use as features, we collected a set of 580 idioms that are relevant to sentiment analysis, i.e. ones that can be mapped to an emotion. These mappings were then obtained using a web-based crowdsourcing approach. The quality of the crowdsourced information is demonstrated with high agreement among five independent annotators...
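The core idea above, treating idiom-to-emotion mappings as extra sentiment features, can be sketched minimally as a lexicon lookup. The entries below are hypothetical examples; the paper's actual 580 crowdsourced mappings are not reproduced here:

```python
# Hypothetical idiom-to-emotion lexicon (illustrative entries only)
IDIOM_EMOTIONS = {
    "over the moon": "joy",
    "down in the dumps": "sadness",
    "at the end of my tether": "anger",
}

def idiom_features(text):
    """Return the emotions of any known idioms found in the text,
    for use as features alongside word-level sentiment cues."""
    lowered = text.lower()
    return [emotion for idiom, emotion in IDIOM_EMOTIONS.items()
            if idiom in lowered]
```

A real system would append these emotion labels to a conventional bag-of-words feature vector before classification.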
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no consensus over what is meant by 'explainable' and 'interpretable'. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software...
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom it is interpretable. We describe a model intended to help answer this question, by identifying the different roles that agents can fulfill in relation to a machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals, and the implications for defining interpretability. Finally, we make suggestions for how our model could...
Abstract: Expert system evaluation comprises verification, validation and user acceptance testing. The nature of expert systems requires that they be evaluated carefully, and that detailed methodologies for their development be devised. This paper attempts to give practical guidance for the evaluation phase of development. Empirical and logical methods are surveyed, and both types of method are applied in a case study. The study provides insights as to the relative effectiveness of the methods, and demonstrates that empirical testing and logical verification are complementary to each other....
The authors believe that current knowledge management practice significantly under-utilizes knowledge engineering technology, despite recent efforts to promote its use. They focus on two processes: using knowledge acquisition processes to capture structured knowledge systematically; and using knowledge representation technology to store the knowledge, preserving important relationships that are far richer than those possible in conventional databases. To demonstrate the usefulness of these processes, we present a case study in which drilling optimization...
The KRAFT project aims to investigate how a distributed architecture can support the transformation and reuse of a particular class of knowledge, namely constraints, and to fuse this knowledge so as to gain added value, by using it for constraint solving or data retrieval.
Knowledge fusion refers to the process of locating and extracting knowledge from multiple, heterogeneous on-line sources, and transforming it so that its union can be applied in problem-solving. The KRAFT project has defined a generic agent-based architecture to support the fusion of knowledge in the form of constraints expressed against an object data model. It employs three kinds of agent: facilitators locate appropriate sources of knowledge; wrappers transform knowledge into a homogeneous constraint interchange format; mediators fuse the knowledge together with associated...
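The three agent roles described above (facilitator, wrapper, mediator) can be sketched as a toy pipeline. The source names, constraint tuples, and string interchange format below are illustrative stand-ins, not KRAFT's actual constraint interchange format:

```python
# Hypothetical constraint sources, each with its own native tuple format
SOURCES = {
    "vendor_db": [("price", "<", 100)],
    "policy_kb": [("ram", ">=", 8)],
}

def facilitator():
    """Locate appropriate sources of constraint knowledge."""
    return list(SOURCES)

def wrapper(source):
    """Transform one source's constraints into a homogeneous
    interchange format (here: simple strings)."""
    return [f"{attr} {op} {val}" for attr, op, val in SOURCES[source]]

def mediator(sources):
    """Fuse the wrapped constraints from all located sources."""
    fused = []
    for source in sources:
        fused.extend(wrapper(source))
    return fused
```

Chaining the roles, `mediator(facilitator())`, yields a single fused constraint set ready for a solver or query engine.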
Abstract This paper surveys the verification of expert system knowledge bases by detecting anomalies. Such anomalies are highly indicative of errors in the knowledge base. The paper is in two parts. The first part describes four types of anomaly: redundancy, ambivalence, circularity, and deficiency. We consider rule bases which are based on first-order logic, and explain the anomalies in terms of the syntax and semantics of logic. The second part presents a review of five programs that have been built to detect various subsets of the anomalies. To provide a framework for comparing the capabilities of the tools, we...
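Two of the four anomaly types named above, redundancy and circularity, can be illustrated on a simplified propositional rule base (the representation as antecedent-set/consequent pairs is an assumption for this sketch, far simpler than the first-order rule bases the survey considers):

```python
def find_anomalies(rules):
    """Detect redundancy and circularity in a propositional rule base.

    rules: list of (antecedent_atoms, consequent_atom) pairs.
    - redundancy: the same rule appears more than once
    - circularity: a chain of rules leads from an atom back to itself
    """
    anomalies = []
    seen = set()
    graph = {}  # atom -> atoms derivable from rules mentioning it
    for ante, cons in rules:
        key = (frozenset(ante), cons)
        if key in seen:
            anomalies.append(("redundancy", key))
        seen.add(key)
        for atom in ante:
            graph.setdefault(atom, set()).add(cons)

    def cycles_back(start):
        # depth-first search for a path from start back to start
        stack, visited = [start], set()
        while stack:
            for nxt in graph.get(stack.pop(), ()):
                if nxt == start:
                    return True
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append(nxt)
        return False

    for atom in graph:
        if cycles_back(atom):
            anomalies.append(("circularity", atom))
    return anomalies
```

Real verification tools of the kind surveyed also check ambivalence (rules deriving contradictory conclusions) and deficiency (inputs for which no rule fires), which need the logic's semantics rather than just the dependency graph.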
The ability to create reliable, scalable virtual organisations (VOs) on demand in a dynamic, open and competitive environment is one of the challenges that underlie Grid computing. In response, in the CONOISE-G project, we are developing an infrastructure to support robust and resilient virtual organisation formation and operation. Specifically, it provides mechanisms to assure the effective operation of agent-based VOs in the face of disruptive and potentially malicious entities in open environments. In this paper, we describe the system, and outline its use in VO...