- Philosophy and History of Science
- Explainable Artificial Intelligence (XAI)
- Ethics and Social Impacts of AI
- Adversarial Robustness in Machine Learning
- Relativity and Gravitational Theory
- History and Theory of Mathematics
- Quantum Mechanics and Applications
- Philosophy and Theoretical Science
- Science and Climate Studies
- Experimental Behavioral Economics Studies
- Computability, Logic, AI Algorithms
- Psychology of Moral and Emotional Judgment
- Bayesian Modeling and Causal Inference
- Multi-Criteria Decision Making
- Evolution and Genetic Dynamics
- Reliability and Agreement in Measurement
- Ecosystem Dynamics and Resilience
- Mobile Crowdsensing and Crowdsourcing
- Risk Perception and Management
- Advanced Statistical Methods and Models
- Generative Adversarial Networks and Image Synthesis
- Mathematical and Theoretical Analysis
- Machine Learning and Data Classification
- Law, Economics, and Judicial Systems
- Free Will and Agency
University of Bern
2021-2024
University of Zurich
2020-2021
University of Konstanz
2014-2017
University of Lausanne
2012-2014
Praktisk-Teologiske Seminar
2013
Abstract In computer science, there are efforts to make machine learning more interpretable or explainable, and thus to better understand the underlying models and algorithms and their behavior. But what exactly is interpretability, and how can it be achieved? Such questions lead into philosophical waters, because the answers depend on what explanation and understanding are, issues that have been central to the philosophy of science. In this paper, we review recent literature on interpretability. We propose a systematization in...
The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as deep neural networks, and methods of explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the "interpretability spectrum". The reasons why some models, such as linear models and decision trees, are highly interpretable will be examined, as well as how more general models, such as MARS and GAM, retain some degree of interpretability. It is found that while there...
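For orientation, a textbook sketch (not taken from the paper) of why additive models such as GAMs retain a degree of interpretability: each feature enters the prediction through its own component function, which can be inspected in isolation.

```latex
% Generalized additive model (standard textbook form): each feature x_j
% contributes through its own smooth function f_j, which can be plotted and
% examined separately; this additive structure is what preserves a degree of
% interpretability even when the individual f_j are flexible.
g\bigl(\mathbb{E}[Y \mid x]\bigr) \;=\; \beta_0 + f_1(x_1) + f_2(x_2) + \dots + f_p(x_p)
```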
Abstract Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of the DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. Sullivan's claim hinges on which notion of understanding is at play. If we employ a weak notion of understanding,...
This paper critically examines arguments against independence, a measure of group fairness also known as statistical parity and demographic parity. In recent discussions in computer science, some have maintained that independence is not a suitable measure of fairness. This position is at least partially based on two influential papers (Dwork et al., 2012; Hardt et al., 2016) that provide arguments against independence. We revisit these arguments, and we find that the case against independence is rather weak. We also give arguments in favor of independence, showing that it plays a distinctive role in considerations...
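For reference, the standard definition of independence (statistical or demographic parity) for a binary predictor, as it is usually stated in the fairness literature (a sketch, not a quotation from the paper):

```latex
% Independence (statistical / demographic parity): the prediction \hat{Y} is
% statistically independent of the protected attribute A, i.e., all groups
% receive positive predictions at the same rate.
P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b) \qquad \text{for all groups } a, b
```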
Abstract This paper argues that a notion of statistical explanation, based on Salmon's statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon's model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The resulting explanation is general,...
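For orientation, the standard form of the Information Bottleneck objective referred to in the abstract (a sketch under the usual formulation, not the paper's own notation):

```latex
% Information Bottleneck: choose a stochastic representation T of the input X
% that is maximally compressed (small I(X;T)) while retaining information
% about the target Y (large I(T;Y)); \beta trades off the two goals.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
% A sufficient statistic T satisfies I(T;Y) = I(X;Y); a minimal sufficient
% statistic additionally minimizes I(X;T), the limiting case that the abstract
% links to homogeneous partitions.
```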
Abstract Parameterization and parameter tuning are central aspects of climate modeling, and there is widespread consensus that these procedures involve certain subjective elements. Even if the use of such subjective elements is not necessarily epistemically problematic, there is an intuitive appeal in replacing them with more objective (automated) methods, such as machine learning. Relying on several case studies, we argue that, while machine learning techniques may help to improve climate model parameterization in certain ways, they still require...
Journal Article: On the Application of the Honeycomb Conjecture to the Bee's Honeycomb. Tim Räz, University of Lausanne, Department of Philosophy, Quartier UNIL-Dorigny, CH-1015 Lausanne, Switzerland. tim.raz@unil.ch. Philosophia Mathematica, Volume 21, Issue 3, October 2013, Pages 351–360, https://doi.org/10.1093/philmat/nkt022. Published: 14 June 2013.
Michael Weisberg and Kenneth Reisman argue that the Volterra Principle can be derived from multiple predator-prey models and that it is, therefore, a prime example for robustness analysis. In the current article, I give new results regarding the Volterra Principle, extending Weisberg's and Reisman's work, and I discuss the consequences of these results for robustness analysis: we do not end up with multiple, independent models, but rather with one general model. I identify the kind of situation in which this generalization approach may occur, and I analyze the generalized Volterra Principle from an explanatory perspective.
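For context, the simplest predator-prey model from which the Volterra Principle is standardly derived (a textbook Lotka-Volterra sketch, not the generalized model developed in the article):

```latex
% Lotka-Volterra predator-prey model: x = prey density, y = predator density.
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y
% Nontrivial equilibrium: (x^*, y^*) = (\gamma/\delta, \; \alpha/\beta).
% Volterra Principle: a general biocide that lowers the prey growth rate \alpha
% and raises the predator death rate \gamma shifts the equilibrium in favor of
% the prey (x^* increases) and against the predator (y^* decreases).
```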
Individual fairness requires that similar individuals are treated similarly. It is supposed to prevent the unfair treatment of individuals on the subgroup level and to overcome the problem that group fairness measures are susceptible to manipulation or gerrymandering. The goal of the present paper is to explore the extent to which individual fairness itself can be gerrymandered. It will be proved that individual fairness can be gerrymandered in the context of predicting scores. Then, it is argued that individual fairness is a very weak notion for some choices of feature space and metric. Finally, it is discussed which properties of (individual) fairness measures are desirable.
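The formalization of individual fairness at issue here goes back to Dwork et al. (2012), who state it as a Lipschitz condition; a sketch for reference, with the metric d treated as given:

```latex
% Individual fairness (Dwork et al., 2012): a randomized classifier M maps
% similar individuals to similar output distributions.
D\bigl(M(x), M(x')\bigr) \;\le\; d(x, x') \qquad \text{for all individuals } x, x'
% Here d is a task-specific similarity metric on the feature space and D is a
% distance between output distributions; the abstract argues that for some
% choices of feature space and metric this is a very weak notion.
```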
I raise an objection to Stewart Shapiro's version of ante rem structuralism: I show that it is in conflict with mathematical practice. Shapiro introduced so-called 'finite cardinal structures' to illustrate features of ante rem structuralism. I establish that these structures have a well-known counterpart in mathematics, but that this counterpart is incompatible with ante rem structuralism. Furthermore, there is good reason why, according to mathematical practice, these structures do not behave as conceived by Shapiro.
This paper investigates the inter-rater reliability of risk assessment instruments (RAIs). The main question is whether different, socially salient groups are affected differently by a lack of inter-rater reliability of RAIs, that is, whether mistakes with respect to different groups affect them differently. This question is investigated in a simulation study using the COMPAS dataset. A controlled degree of noise is injected into the input data of a predictive model; the noise can be interpreted as a synthetic rater that makes mistakes. The finding is that there are systematic differences in output between sign...
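A minimal sketch of the kind of simulation described, under assumptions of my own (synthetic data and a stand-in logistic model rather than the COMPAS dataset; names such as `group` and `sigma` are hypothetical): noise injected into the inputs acts as a synthetic rater, and disagreement with the clean predictions is compared across groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))                      # input features
group = rng.integers(0, 2, size=n)               # socially salient group label
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)           # stand-in predictive model
clean_pred = model.predict(X)

sigma = 0.3                                      # controlled degree of noise
X_noisy = X + rng.normal(scale=sigma, size=X.shape)
noisy_pred = model.predict(X_noisy)              # "synthetic rater" that makes mistakes

# Disagreement between clean and noisy predictions, per group: a simple proxy
# for how a lack of inter-rater reliability plays out across groups.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: disagreement rate = {np.mean(clean_pred[mask] != noisy_pred[mask]):.3f}")
```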
In a recent paper, Aidan Lyon and Mark Colyvan have proposed an explanation of the structure of the bee's honeycomb based on the mathematical Honeycomb Conjecture. This explanation has instantly become one of the standard examples in the philosophical debate on mathematical explanations of physical phenomena. In this critical note, I argue that the explanation is not scientifically adequate. The reason for this is that the explanation fails to do justice to the essentially three-dimensional structure of the bee's honeycomb.
Impossibility results show that important fairness measures (independence, separation, sufficiency) cannot be satisfied at the same time under reasonable assumptions. This paper explores whether we can satisfy and/or improve these measures simultaneously to a certain degree. We introduce information-theoretic formulations of the measures and define degrees of fairness based on these formulations. The formulations suggest unexplored theoretical relations between the three measures. In the experimental part, we use the information-theoretic expressions as regularizers to obtain...
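A plausible way to read the information-theoretic formulations and their use as regularizers (a sketch of the general pattern with weights \lambda_i of my own, not the paper's exact definitions):

```latex
% Information-theoretic formulations of the three fairness measures, for
% prediction \hat{Y}, target Y, and protected attribute A:
%   independence:  I(\hat{Y}; A) = 0
%   separation:    I(\hat{Y}; A \mid Y) = 0
%   sufficiency:   I(Y; A \mid \hat{Y}) = 0
% Degrees of fairness correspond to how small these quantities are; used as
% regularizers they yield an objective of the form
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{pred}}
  \;+\; \lambda_1\, I(\hat{Y}; A)
  \;+\; \lambda_2\, I(\hat{Y}; A \mid Y)
  \;+\; \lambda_3\, I(Y; A \mid \hat{Y})
```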
Abstract "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS) is a risk assessment instrument that is used in the criminal justice system in the USA. COMPAS has sparked a lively discussion about fairness that continues to this day. However, this discussion has so far not been widely received in the German-speaking context. In this contribution, risk assessment with COMPAS is first presented systematically and discussed. The contribution then addresses three important aspects...
Abstract The present paper examines the recidivism risk assessment instrument FOTRES, addressing the questions of whether FOTRES provides us with an adequate understanding of recidivism risk, whether we actually understand FOTRES itself, and whether FOTRES is fair. The evaluation uses the criteria of empirical accuracy, representational accuracy, domain validity, intelligibility, and fairness. This evaluation is compared to that of COMPAS, a different, much-discussed risk assessment instrument. The paper argues that FOTRES performs poorly in comparison to COMPAS with respect to some of the criteria, and that both instruments do not show satisfactory...