Lars Henry Berge Olsen

ORCID: 0009-0006-9360-6993
Research Areas
  • Explainable Artificial Intelligence (XAI)
  • Machine Learning and Data Classification
  • Bayesian Modeling and Causal Inference
  • Statistical Methods and Inference
  • Sports Analytics and Performance
  • Machine Learning in Healthcare
  • Face and Expression Recognition
  • Anomaly Detection Techniques and Applications
  • Photoreceptor and optogenetics research
  • Forecasting Techniques and Applications

University of Oslo
1970-2024

Shapley values are a popular model-agnostic explanation framework for explaining predictions made by complex machine learning models. The framework provides feature contribution scores that sum to the predicted response and represent each feature's importance. However, the computation of exact Shapley values is computationally expensive due to estimating an exponential amount of non-trivial conditional expectations. The KernelSHAP framework enables us to approximate the Shapley values using a sampled subset of weighted conditional expectations. We propose three main novel contributions: stabilizing...

10.48550/arxiv.2410.04883 preprint EN arXiv (Cornell University) 2024-10-07
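The abstract above refers to KernelSHAP's weighted subset sampling. A minimal sketch of the standard Shapley kernel weight for a coalition of size s among M features (the function name is illustrative, not from the paper):

```python
from math import comb

def shapley_kernel_weight(M, s):
    """Shapley kernel weight for a coalition of size s out of M features.

    The empty and full coalitions receive infinite weight in theory;
    in practice they are enforced as hard constraints in the weighted
    least-squares problem KernelSHAP solves.
    """
    if s == 0 or s == M:
        return float("inf")
    return (M - 1) / (comb(M, s) * s * (M - s))

# Intermediate coalition sizes get the lowest weight, so sampling
# schemes concentrate on small and near-full coalitions.
weights = [shapley_kernel_weight(4, s) for s in range(5)]
```

This is only the weighting scheme; the paper's contributions concern stabilizing the sampling and estimation on top of it.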

Shapley values originated in cooperative game theory but are extensively used today as a model-agnostic explanation framework to explain predictions made by complex machine learning models in the industry and academia. There are several algorithmic approaches for computing different versions of Shapley value explanations. Here, we focus on conditional Shapley values for predictive models fitted to tabular data. Estimating precise conditional Shapley values is difficult as they require the estimation of non-trivial conditional expectations. In this article, we develop new methods,...

10.48550/arxiv.2305.09536 preprint EN other-oa arXiv (Cornell University) 2023-01-01
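The conditional Shapley values discussed above are the classical Shapley formula with a contribution function v(S) = E[f(x) | x_S = x_S*], where the conditional expectations are estimated numerically. A minimal sketch under the assumption that we already have some sampler for the unobserved features given the observed ones (`sample_conditional` is a hypothetical callable, not an API from the paper):

```python
import itertools
from math import factorial

def shapley_values(f, x_star, sample_conditional, M, n_mc=100):
    """Exact-coalition Shapley values with Monte Carlo contribution function.

    f: prediction function taking a full feature vector (length M).
    x_star: the observation to explain.
    sample_conditional: hypothetical callable (S, x_star) -> full feature
        vector, drawing the features NOT in S given x_star restricted to S.
    """
    def v(S):
        # v(S) = E[f(x) | x_S = x_star_S], estimated by Monte Carlo.
        if len(S) == M:
            return f(x_star)
        return sum(f(sample_conditional(S, x_star)) for _ in range(n_mc)) / n_mc

    phi = [0.0] * M
    for j in range(M):
        others = [i for i in range(M) if i != j]
        for r in range(M):
            for S in itertools.combinations(others, r):
                # Classical Shapley weight |S|! (M - |S| - 1)! / M!
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                phi[j] += w * (v(set(S) | {j}) - v(set(S)))
    return phi
```

The exponential loop over coalitions is exactly the cost the papers above address; accurate samplers for the conditional distributions are the other hard part.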

Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical foundation in the field of cooperative game theory. Precise Shapley value estimates for dependent data rely on accurate modeling of the dependencies between all feature combinations. In this paper, we use a variational autoencoder with arbitrary conditioning (VAEAC) to model all feature dependencies simultaneously. We demonstrate through...

10.48550/arxiv.2111.13507 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Shapley values are extensively used in explainable artificial intelligence (XAI) as a framework to explain predictions made by complex machine learning (ML) models. In this work, we focus on conditional Shapley values for predictive models fitted to tabular data and explain the prediction $f(\boldsymbol{x}^{*})$ of a single observation $\boldsymbol{x}^{*}$ at a time. Numerous Shapley value estimation methods have been proposed and empirically compared on an average basis in the XAI literature. However, less focus has been devoted to analyzing the precision of...

10.48550/arxiv.2312.03485 preprint EN other-oa arXiv (Cornell University) 2023-01-01