Tim Miller

ORCID: 0000-0003-4908-6063
Research Areas
  • Multi-Agent Systems and Negotiation
  • Explainable Artificial Intelligence (XAI)
  • Logic, Reasoning, and Knowledge
  • Topic Modeling
  • Semantic Web and Ontologies
  • Open Education and E-Learning
  • Natural Language Processing Techniques
  • Software Testing and Debugging Techniques
  • AI-based Problem Solving and Planning
  • Adversarial Robustness in Machine Learning
  • Machine Learning in Healthcare
  • Software Engineering Research
  • Software Reliability and Analysis Research
  • Ethics and Social Impacts of AI
  • Speech and Dialogue Systems
  • Library Collection Development and Digital Resources
  • Artificial Intelligence in Healthcare and Education
  • Advanced Software Engineering Methodologies
  • Innovative Human-Technology Interaction
  • Reinforcement Learning in Robotics
  • Parasitic Infections and Diagnostics
  • Business Process Modeling and Analysis
  • Parasites and Host Interactions
  • Bayesian Modeling and Causal Inference
  • Formal Methods in Verification

The University of Queensland
2002-2024

Center Point
2024

The University of Melbourne
2014-2023

Boston Children's Hospital
2017-2023

Harvard University
2017-2023

Cardinal Health (United States)
2023

Georgia Institute of Technology
2021

Harvard University Press
2021

United Nations Department of Economic and Social Affairs
2020

Noblis
2016-2018

10.1016/j.artint.2018.07.007 article EN publisher-specific-oa Artificial Intelligence 2018-10-27

Although considerable work has been done in recent years to drive the state of the art in facial recognition towards operation on fully unconstrained imagery, research has always been restricted by a lack of datasets in the public domain. In addition, traditional biometrics experiments, such as single-image verification and closed-set identification, do not adequately evaluate the ways in which face recognition systems are used in practice. The IARPA Janus Benchmark-C (IJB-C) dataset advances the goal of robust unconstrained face recognition, improving upon the previous public-domain IJB-B...

10.1109/icb2018.2018.00033 article EN 2018-02-01

Despite the importance of rigorous testing data for evaluating face recognition algorithms, all major publicly available faces-in-the-wild datasets are constrained by the use of a commodity face detector, which limits, among other conditions, the pose, occlusion, expression, and illumination variations they contain. In 2015, the NIST IJB-A dataset, which consists of 500 subjects, was released to mitigate these constraints. However, the relatively low number of impostor and genuine matches per split in its protocol limits the evaluation of an algorithm at...

10.1109/cvprw.2017.87 article EN 2017-07-01

Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, how can we promote them, and how can we assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, interpersonal trust (i.e., trust between people) as defined by sociologists. The model rests on two key...

10.1145/3442188.3445923 article EN 2021-03-01

Abstract Objective To conduct a systematic scoping review of explainable artificial intelligence (XAI) models that use real-world electronic health record data, categorize these techniques according to different biomedical applications, identify gaps in current studies, and suggest future research directions. Materials and Methods We searched MEDLINE, IEEE Xplore, and the Association for Computing Machinery (ACM) Digital Library for relevant papers published between January 1, 2009 and May 2019. We summarized...

10.1093/jamia/ocaa053 article EN Journal of the American Medical Informatics Association 2020-04-08

Prominent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events, and use these to explain why new events happen by referring to counterfactuals: things that did not happen. In this paper, we derive explanations of behaviour for model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables...

10.1609/aaai.v34i03.5631 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03
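The abstract above describes learning a structural causal model from an agent's experience and using it to answer contrastive "why" questions about its actions. A minimal sketch of that idea, assuming a hypothetical logged-transition format and a crude per-variable average-effect model (the variable names, actions, and environment below are illustrative, not from the paper):

```python
from collections import defaultdict

# Hypothetical setup: an agent's transitions are logged as
# (state_vars, action, next_state_vars) dicts. We fit a crude
# "structural equation" per variable: the average change each
# action causes to that variable.
def fit_structural_model(transitions):
    effects = defaultdict(list)  # (action, var) -> observed deltas
    for state, action, nxt in transitions:
        for var in state:
            effects[(action, var)].append(nxt[var] - state[var])
    return {k: sum(v) / len(v) for k, v in effects.items()}

def explain_why(model, action, foil, var):
    """Contrastive 'Why action rather than foil?' for one variable."""
    a, f = model[(action, var)], model[(foil, var)]
    return (f"'{action}' changes {var} by {a:+.1f} on average, "
            f"whereas '{foil}' changes it by {f:+.1f}")

transitions = [
    ({"fuel": 10}, "move", {"fuel": 9}),
    ({"fuel": 9}, "move", {"fuel": 8}),
    ({"fuel": 8}, "refuel", {"fuel": 12}),
]
model = fit_structural_model(transitions)
print(explain_why(model, "refuel", "move", "fuel"))
```

The paper's actual model is richer (it learns structural equations over state variables and generates counterfactual-based explanations); this only shows the shape of learning cause-effect summaries from experience and phrasing an answer against a foil.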

In his seminal book `The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams, Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers, rather than interaction designers, are in charge of design decisions. As a result, programmers design software for themselves rather than for their target audience, a phenomenon he refers to as the `inmates running the asylum'. This paper argues that explainable AI risks a similar fate. While...

10.48550/arxiv.1712.00547 preprint EN other-oa arXiv (Cornell University) 2017-01-01

This paper presents a model of contrastive explanation using structural causal models. The topic of causal explanation in artificial intelligence has gathered interest in recent years, as researchers and practitioners aim to increase trust in and understanding of intelligent decision-making. While different sub-fields have looked into this problem, each with a sub-field-specific view, there are few models that capture explanation more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual...

10.1017/s0269888921000102 article EN The Knowledge Engineering Review 2021-01-01
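The structural-causal-model machinery the abstract above relies on can be sketched very compactly: each endogenous variable is a function of its parents, and an intervention replaces an equation with a constant (Pearl's do-operator). The toy variables and equations below are illustrative, not from the paper:

```python
# A toy structural causal model. Each endogenous variable is a
# function of the values computed so far; intervening on a variable
# replaces its equation with a constant.
def evaluate(scm, interventions=None):
    interventions = interventions or {}
    values = {}
    for var, fn in scm.items():  # assumes topological order
        values[var] = interventions.get(var, fn(values))
    return values

scm = {
    "rain":      lambda v: 1,
    "sprinkler": lambda v: 1 - v["rain"],        # on only when dry
    "wet_grass": lambda v: max(v["rain"], v["sprinkler"]),
}

actual = evaluate(scm)
counterfactual = evaluate(scm, {"rain": 0})  # "had it not rained..."
print(actual["wet_grass"], counterfactual["wet_grass"])  # → 1 1
```

Note that the grass is wet in both the actual and the counterfactual world, because the sprinkler compensates when the rain is removed: rain is not a simple but-for cause here. Cases like this are what motivate the more refined definitions of actual cause that the abstract builds on.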

Online exam supervision technologies have recently generated significant controversy and concern. Their use is now booming due to growing demand for online courses and off-campus assessment options amid COVID-19 lockdowns. Online proctoring technologies purport to effectively oversee students sitting exams by using artificial intelligence (AI) systems supplemented by human invigilators. Such technologies have alarmed some, who see them as a "Big Brother-like" threat to liberty and privacy, and as potentially unfair and discriminatory. However, universities...

10.1007/s13347-021-00476-1 article EN public-domain Philosophy & Technology 2021-08-31

In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making. In early decision support systems, it was assumed that we could give people recommendations, that they would consider them, and that they would then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or, perhaps even worse, follow them blindly even when the recommendations are wrong. Explainable artificial intelligence mitigates this by helping people to understand how and why models make certain...

10.1145/3593013.3594001 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2023-06-12

Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as the beliefs of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent, with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability of one agent to reason as if it were another. We formally characterize our...

10.1609/aaai.v29i1.9665 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2015-03-04

Human syntactic processing shows many signs of taking place within a general-purpose short-term memory. But this kind of memory is known to have a severely constrained storage capacity, possibly as few as three or four distinct elements. This article describes a model of syntactic processing that operates successfully within these severe constraints, by recognizing constituents in a right-corner transformed representation (a variant of left-corner parsing) and mapping this representation onto random variables in a Hierarchic Hidden Markov Model, a factored time-series...

10.1162/coli.2010.36.1.36100 article EN cc-by-nc-nd Computational Linguistics 2010-01-11

As the use of algorithmic systems in high-stakes decision-making increases, the ability to contest such decisions is being recognised as an important safeguard for individuals. Yet, there is little guidance on what `contestability'--the ability to contest decisions--requires in relation to algorithmic decision-making. Recent research presents different conceptualisations of contestability in algorithmic decision-making. We contribute to this growing body of work by describing and analysing the perspectives of people and organisations who made submissions in response to Australia's proposed...

10.1145/3449180 article EN Proceedings of the ACM on Human-Computer Interaction 2021-04-13

In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made, but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people’s preference for, and perception toward, directive explanations through online studies, one...

10.1145/3579363 article EN ACM Transactions on Interactive Intelligent Systems 2023-01-12
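A directive-specific explanation, as defined in the abstract above, names concrete actions that would change the model's decision. A minimal sketch of generating one by searching over an action set, smallest combinations first (the loan model, features, and actions here are entirely hypothetical, not from the article):

```python
from itertools import combinations

# Hypothetical loan-approval model and action set; a directive-specific
# explanation is a set of actions that flips the decision.
def approved(applicant):
    return applicant["income"] >= 50 and applicant["debt"] <= 20

ACTIONS = {
    "take a second job":    {"income": +15},
    "pay down credit card": {"debt": -10},
}

def directive_explanation(applicant):
    """Search subsets of actions, smallest first; return the first
    subset that yields the desired (approved) outcome."""
    names = list(ACTIONS)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            candidate = dict(applicant)
            for name in combo:
                for feature, delta in ACTIONS[name].items():
                    candidate[feature] += delta
            if approved(candidate):
                return list(combo)
    return None  # no combination of known actions flips the decision

print(directive_explanation({"income": 40, "debt": 25}))
# → ['take a second job', 'pay down credit card']
```

An already-approved applicant gets the empty list (nothing to do), and exhaustive subset search is only viable for small action sets; the article's computational methods address generation more generally.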

Test case prioritization is the process of ordering the execution of test cases to achieve a certain goal, such as increasing the rate of fault detection. Increasing the rate of fault detection can provide earlier feedback to system developers, improving fault-fixing activity and, ultimately, software delivery. Many existing techniques assume that tests can be run in any order. However, due to the functional dependencies that may exist between some test cases (that is, one test case must be executed before another), this is often not the case. In this paper, we present...

10.1109/tse.2012.26 article EN IEEE Transactions on Software Engineering 2012-04-27
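The core constraint in the abstract above, prioritizing for early fault detection while never scheduling a test before its functional prerequisites, can be sketched with a dependency-aware greedy loop. The test names, value scores, and dependency graph below are illustrative and assume an acyclic dependency relation:

```python
# Dependency-aware prioritization sketch: repeatedly pick the
# highest-value test whose prerequisites have already been scheduled.
# Assumes the dependency graph is acyclic.
def prioritize(values, deps):
    ordered, scheduled = [], set()
    remaining = set(values)
    while remaining:
        ready = [t for t in remaining if deps.get(t, set()) <= scheduled]
        best = max(ready, key=lambda t: values[t])
        ordered.append(best)
        scheduled.add(best)
        remaining.remove(best)
    return ordered

values = {"login": 3, "add_item": 5, "checkout": 9, "search": 4}
deps = {"add_item": {"login"}, "checkout": {"add_item"}}
print(prioritize(values, deps))
# → ['search', 'login', 'add_item', 'checkout']
```

Note how `checkout`, the highest-value test, still runs last because its transitive prerequisites must be scheduled first; this is the tension between value-driven ordering and dependencies that the paper's technique addresses.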

Software testing remains the most widely used approach to verification in industry today, consuming between 30-50 percent of the entire development cost. Test input selection for intelligent agents presents a problem due to the very fact that the agents are intended to operate robustly under conditions that developers did not consider and would therefore be unlikely to test. Using methods that automatically generate and execute tests is one way to provide coverage of many such conditions without significantly increasing cost. However, using automatic...

10.1109/tse.2013.10 article EN IEEE Transactions on Software Engineering 2013-08-28

This is the fourth in a series of essays about "explainable AI." Previous essays laid out the theoretical and empirical foundations. This essay focuses on Deep Nets, and considers methods for allowing system users to generate self-explanations. This is accomplished by exploring how Deep Net systems perform when they are operating at their "boundary conditions." Inspired by recent research into adversarial examples that demonstrate the weaknesses of Deep Nets, we invert the purpose of these examples and argue that spoofing can be used as a tool to answer contrastive...

10.1109/mis.2018.033001421 article EN IEEE Intelligent Systems 2018-05-01

Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, how can we promote them, and how can we assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people). The model rests on two key properties: vulnerability...

10.48550/arxiv.2010.07487 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs important domains. Recent work on explanations through feature importance of approximate linear has moved from input-level features (pixels or segments) to mid-layer maps the form concept activation vectors (CAVs). CAVs contain concept-level information and could be learned via clustering. In this work, we...

10.1609/aaai.v35i13.17389 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18
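The abstract above derives concept activation vectors (CAVs) by clustering mid-layer activations. A toy sketch of the clustering step, with 2-D points standing in for high-dimensional feature maps and cluster centroids standing in for CAVs (all data below is made up; the paper's actual pipeline operates on CNN feature maps):

```python
# Toy sketch: derive "concept vectors" by k-means clustering of
# mid-layer activations. Real feature maps are high-dimensional;
# these 2-D activations are purely illustrative.
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=10):
    centroids = list(points[:k])  # naive init, fine for this toy data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

activations = [(0.1, 0.9), (0.9, 0.1), (0.2, 1.0), (1.0, 0.2)]
cavs = kmeans(activations, k=2)
# Crude concept score: projection of a new activation onto each CAV.
score = lambda act, cav: sum(a * c for a, c in zip(act, cav))
print([score((0.15, 0.95), cav) for cav in cavs])
```

The new activation scores much higher against the centroid of its own cluster, which is the intuition behind using CAVs for concept-level importance; the paper adds the machinery for attributing a model's prediction to such concepts.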