Tim Räz

ORCID: 0000-0002-8464-4190
Research Areas
  • Philosophy and History of Science
  • Explainable Artificial Intelligence (XAI)
  • Ethics and Social Impacts of AI
  • Adversarial Robustness in Machine Learning
  • Relativity and Gravitational Theory
  • History and Theory of Mathematics
  • Quantum Mechanics and Applications
  • Philosophy and Theoretical Science
  • Science and Climate Studies
  • Experimental Behavioral Economics Studies
  • Computability, Logic, AI Algorithms
  • Psychology of Moral and Emotional Judgment
  • Bayesian Modeling and Causal Inference
  • Multi-Criteria Decision Making
  • Evolution and Genetic Dynamics
  • Reliability and Agreement in Measurement
  • Ecosystem dynamics and resilience
  • Mobile Crowdsensing and Crowdsourcing
  • Risk Perception and Management
  • Advanced Statistical Methods and Models
  • Generative Adversarial Networks and Image Synthesis
  • Mathematical and Theoretical Analysis
  • Machine Learning and Data Classification
  • Law, Economics, and Judicial Systems
  • Free Will and Agency

University of Bern
2021-2024

University of Zurich
2020-2021

University of Konstanz
2014-2017

University of Lausanne
2012-2014

Praktisk-Teologiske Seminar
2013

Abstract In computer science, there are efforts to make machine learning more interpretable or explainable, and thus to better understand the underlying models and algorithms and their behavior. But what exactly is interpretability, and how can it be achieved? Such questions lead into philosophical waters because the answers depend on what explanation and understanding are, issues that have been central to the philosophy of science. In this paper, we review the recent literature on interpretability. We propose a systematization in...

10.1111/phc3.12830 article EN Philosophy Compass 2022-04-19

The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and the methods of explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the "interpretability spectrum". The reasons why some models, such as linear models and decision trees, are highly interpretable will be examined, as well as how more general models, such as MARS and GAM, retain some degree of interpretability. It is found that while there...

10.1016/j.shpsa.2023.12.007 article EN cc-by Studies in History and Philosophy of Science Part A 2024-01-03
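As a hedged illustration of the additive structure that, according to the paper above, lets GAMs retain a degree of interpretability (the standard textbook form of a generalized additive model, not a formulation taken from the paper itself):

```latex
% Generalized additive model (GAM): the (transformed) expected response
% is a sum of univariate component functions, so each feature's
% contribution f_j(x_j) can be inspected and plotted in isolation.
g\!\left(\mathbb{E}[Y \mid x_1, \dots, x_p]\right)
  \;=\; \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_p(x_p)
```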

Abstract Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of the DNNs themselves. In the present paper, we will argue, contra Sullivan, that the current lack of understanding of DNNs does limit our ability to understand with DNNs. Sullivan's claim hinges on which notion of understanding is at play. If we employ a weak notion of understanding,...

10.1007/s10670-022-00605-y article EN cc-by Erkenntnis 2022-08-07

This paper critically examines arguments against independence, a measure of group fairness also known as statistical parity and demographic parity. In recent discussions in computer science, some have maintained that independence is not a suitable measure of fairness. This position is at least partially based on two influential papers (Dwork et al., 2012; Hardt et al., 2016) that provide arguments against independence. We revisit these arguments, and we find that the case against independence is rather weak. We also give arguments in favor of independence, showing that it plays a distinctive role in considerations...

10.1145/3442188.3445876 preprint EN 2021-02-25
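For reference, a minimal formal statement of the independence criterion discussed above (the standard definition of statistical/demographic parity, not a formulation specific to this paper), for a binary predictor Ŷ and a protected attribute A:

```latex
% Independence (statistical / demographic parity): the prediction
% \hat{Y} is statistically independent of the protected attribute A,
% i.e. positive prediction rates agree across groups.
\hat{Y} \perp A
\quad\Longleftrightarrow\quad
P(\hat{Y}=1 \mid A=a) \;=\; P(\hat{Y}=1 \mid A=b)
\quad \text{for all groups } a, b .
```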

Abstract This paper argues that a notion of statistical explanation, based on Salmon's statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon's model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The resulting explanation is general,...

10.1017/psa.2021.12 article EN Philosophy of Science 2022-01-01
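As a hedged sketch of the Information Bottleneck objective referred to above (the standard formulation due to Tishby and co-authors; the paper's precise statement of the generalization may differ): a representation T of the input X is chosen so as to trade off compression of X against preservation of information about the target Y.

```latex
% Information Bottleneck: choose the stochastic encoding p(t | x)
% that compresses X (small I(X;T)) while retaining information
% about Y (large I(T;Y)); \beta controls the trade-off.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```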

Abstract Parameterization and parameter tuning are central aspects of climate modeling, and there is widespread consensus that these procedures involve certain subjective elements. Even if the use of such elements is not necessarily epistemically problematic, there is an intuitive appeal for replacing them with more objective (automated) methods, such as machine learning. Relying on several case studies, we argue that, while machine learning techniques may help to improve climate model parameterization in certain ways, they still require...

10.1007/s10584-023-03532-1 article EN cc-by Climatic Change 2023-07-18

Journal Article: On the Application of the Honeycomb Conjecture to the Bee's Honeycomb. Tim Räz, University of Lausanne, Department of Philosophy, Quartier UNIL-Dorigny, CH-1015 Lausanne, Switzerland. tim.raz@unil.ch. Philosophia Mathematica, Volume 21, Issue 3, October 2013, Pages 351–360, https://doi.org/10.1093/philmat/nkt022. Published: 14 June 2013

10.1093/philmat/nkt022 article EN Philosophia Mathematica 2013-06-14

Michael Weisberg and Kenneth Reisman argue that the Volterra Principle can be derived from multiple predator-prey models and that it is, therefore, a prime example of robustness analysis. In the current article, I give new results regarding the Volterra Principle, extending Weisberg's and Reisman's work, and I discuss the consequences of these results: we do not end up with multiple, independent models, but rather with one general model. I identify the kind of situation in which this generalization approach may occur, and I analyze the generalized principle from an explanatory perspective.

10.1086/693874 article EN Philosophy of Science 2017-09-13
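For context, a hedged reminder of the classical Lotka-Volterra predator-prey model from which the Volterra Principle is usually derived (the textbook form, not the generalized models developed in the paper above):

```latex
% Classical Lotka-Volterra predator-prey model:
% x = prey density, y = predator density;
% \alpha, \beta, \gamma, \delta > 0 are interaction parameters.
\frac{dx}{dt} = x(\alpha - \beta y), \qquad
\frac{dy}{dt} = y(\delta x - \gamma)
```

Roughly, the Volterra Principle states that a general reduction of both populations (for instance by a biocide) shifts the average population levels in favor of the prey.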

10.1016/j.shpsb.2015.01.004 article EN Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 2015-02-01

10.1007/s13194-012-0060-z article EN European Journal for Philosophy of Science 2012-09-24

10.1007/s11229-016-1014-3 article EN Synthese 2016-01-30

10.1007/s13194-017-0189-x article EN European Journal for Philosophy of Science 2017-11-08

Individual fairness requires that similar individuals are treated similarly. It is supposed to prevent unfair treatment of individuals on the subgroup level and to overcome the problem that group fairness measures are susceptible to manipulation or gerrymandering. The goal of the present paper is to explore the extent to which individual fairness itself can be gerrymandered. It will be proved that individual fairness can be gerrymandered in the context of predicting scores. Then, it is argued that individual fairness is a very weak notion for some choices of feature space and metric. Finally, it is discussed which properties of (individual) fairness measures are desirable.

10.1016/j.artint.2023.104035 article EN cc-by-nc Artificial Intelligence 2023-10-24
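For reference, the Lipschitz-style formulation of individual fairness from Dwork et al. (2012) that this line of work builds on (a standard formulation; the gerrymandering constructions of the paper above are not reproduced here). A mapping M from individuals to (distributions over) outcomes treats similar individuals similarly when:

```latex
% Individual fairness as a Lipschitz condition (Dwork et al., 2012):
% d is a similarity metric on individuals, D a distance on outcome
% distributions; similar individuals x, y receive similar treatment.
D\!\left(M(x), M(y)\right) \;\le\; d(x, y)
\qquad \text{for all individuals } x, y .
```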

I raise an objection to Stewart Shapiro's version of ante rem structuralism: I show that it is in conflict with mathematical practice. Shapiro introduced so-called 'finite cardinal structures' to illustrate features of ante rem structuralism. I establish that these structures have a well-known counterpart in mathematics, but that this counterpart is incompatible with ante rem structuralism. Furthermore, there is good reason why, according to mathematical practice, these structures do not behave as conceived by ante rem structuralism.

10.1093/philmat/nku034 article EN Philosophia Mathematica 2014-12-23

This paper investigates the inter-rater reliability of risk assessment instruments (RAIs). The main question is whether different, socially salient groups are affected differently by a lack of inter-rater reliability of RAIs, that is, whether mistakes with respect to different groups affect them differently. This is investigated in a simulation study based on the COMPAS dataset. A controlled degree of noise is injected into the input data of a predictive model; the noise can be interpreted as a synthetic rater that makes mistakes. The main finding is that there are systematic differences in output between sign...

10.1145/3630106.3658544 article EN 2024 ACM Conference on Fairness, Accountability, and Transparency 2024-06-03
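A minimal sketch of the kind of noise-injection experiment described above, assuming a generic tabular dataset with a binary protected attribute; the data, noise model, and classifier below are illustrative stand-ins and are not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative synthetic data standing in for a recidivism dataset:
# X are input features, a is a binary protected attribute, y the label.
n, d = 2000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train a predictive model on the clean data.
model = LogisticRegression().fit(X, y)
baseline = model.predict(X)

# "Synthetic rater": inject a controlled degree of Gaussian noise into
# the inputs and record how often the prediction flips for a group.
def disagreement_rate(noise_scale, mask):
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    flipped = model.predict(X_noisy) != baseline
    return flipped[mask].mean()

for scale in (0.1, 0.3, 0.5):
    r0 = disagreement_rate(scale, a == 0)
    r1 = disagreement_rate(scale, a == 1)
    print(f"noise={scale:.1f}  flip rate group 0: {r0:.3f}  group 1: {r1:.3f}")
```

Systematic differences between the two groups' flip rates would indicate that the synthetic rater's mistakes affect the groups differently.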

In a recent paper, Aidan Lyon and Mark Colyvan have proposed an explanation of the structure of the bee's honeycomb based on the mathematical Honeycomb Conjecture. This explanation has instantly become one of the standard examples in the philosophical debate on mathematical explanations of physical phenomena. In this critical note, I argue that the explanation is not scientifically adequate. The reason for this is that the explanation fails to do justice to the essentially three-dimensional structure of the honeycomb.

10.1093/philmat/nkt022 article EN Philosophia Mathematica 2013-06-14

Impossibility results show that important fairness measures (independence, separation, sufficiency) cannot be satisfied at the same time under reasonable assumptions. This paper explores whether we can satisfy and/or improve these measures simultaneously to a certain degree. We introduce information-theoretic formulations of the fairness measures and define degrees of fairness based on these formulations. The information-theoretic expressions suggest unexplored theoretical relations between the three fairness measures. In the experimental part, we use the information-theoretic expressions as regularizers to obtain...

10.1609/aaai.v36i11.21450 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28
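As a hedged illustration of the kind of information-theoretic formulations the abstract above refers to (these mutual-information characterizations of the exact criteria are standard; the paper's precise degree definitions and regularizers may differ), for prediction Ŷ, protected attribute A, and label Y:

```latex
% Exact fairness criteria as vanishing (conditional) mutual information;
% the corresponding nonnegative quantities can be read as degrees of
% unfairness and used as regularization terms during training.
\text{independence:} \;\; I(\hat{Y}; A) = 0, \qquad
\text{separation:} \;\; I(\hat{Y}; A \mid Y) = 0, \qquad
\text{sufficiency:} \;\; I(Y; A \mid \hat{Y}) = 0
```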

Abstract "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS) is a risk assessment instrument used in the criminal justice system in the USA. COMPAS has sparked a lively debate about fairness that continues to this day. However, this debate has so far not been widely received in the German-speaking context. In this contribution, risk assessment with COMPAS is first presented and discussed systematically. It then turns to three important aspects...

10.1007/s11757-022-00741-9 article DE cc-by Forensische Psychiatrie Psychologie Kriminologie 2022-10-05

Abstract The present paper examines the recidivism risk assessment instrument FOTRES, addressing the questions whether FOTRES provides us with an adequate understanding of recidivism risk, whether we actually understand the instrument itself, and whether it is fair. The evaluation uses the criteria of empirical accuracy, representational and domain validity, intelligibility, and fairness. The evaluation of FOTRES is compared to that of COMPAS, a different, much-discussed risk assessment instrument. The paper argues that FOTRES performs poorly in comparison to COMPAS with respect to some of the criteria, and that both instruments do not show satisfactory...

10.1007/s43681-022-00223-y article EN cc-by AI and Ethics 2022-10-06

The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and the methods of explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the 'interpretability spectrum'. The reasons why some models, such as linear models and decision trees, are highly interpretable will be examined, as well as how more general models, such as MARS and GAM, retain some degree of interpretability. I find that while there...

10.48550/arxiv.2211.13617 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01