- Adversarial Robustness in Machine Learning
- Neural dynamics and brain function
- Ethics and Social Impacts of AI
- Visual perception and processing mechanisms
- Neural Networks and Applications
- Explainable Artificial Intelligence (XAI)
- Neurobiology and Insect Physiology Research
- Domain Adaptation and Few-Shot Learning
- Law, AI, and Intellectual Property
- Categorization, perception, and language
- Digitalization, Law, and Regulation
- Open Source Software Innovations
- Ethics in Clinical Research
- Advanced Memory and Neural Computing
- Morphological variations and asymmetry
- Plant and Biological Electrophysiology Studies
- History of Science and Medicine
- Big Data and Business Intelligence
- Medical Imaging and Analysis
- Advanced MRI Techniques and Applications
- Multisensory perception and integration
- Medical Imaging Techniques and Applications
- Bernstein Center for Computational Neuroscience Tübingen (2025)
- University of Tübingen (2019-2025)
- Max Planck Institute for Intelligent Systems (2017-2020)
- Max Planck Society (2019)
Abstract: Uncertainty is intrinsic to perception. Neural circuits that process sensory information must therefore also represent the reliability of this information. How they do so is a topic of debate. We propose a model of visual cortex in which average neural response strength encodes stimulus features, while cross-neuron variability in response gain encodes the uncertainty of these features. To test this model, we studied spiking activity of neurons in macaque V1 and V2 elicited by repeated presentations of stimuli whose uncertainty was manipulated...
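The proposed readout (mean response strength for the stimulus feature, cross-neuron variability in gain for its uncertainty) can be sketched in a few lines. This is an illustrative toy, not the paper's estimator; the function and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize_population(spike_counts):
    """spike_counts: (n_trials, n_neurons) responses to one stimulus.
    Returns (mean_strength, gain_variability):
      mean_strength     ~ average population response, read out as the feature
      gain_variability  ~ cross-neuron spread of per-trial gains, read out as uncertainty
    """
    mean_strength = spike_counts.mean()
    # per-neuron gain on each trial: response relative to that neuron's own average
    neuron_mean = spike_counts.mean(axis=0)
    gains = spike_counts / np.maximum(neuron_mean, 1e-9)
    # spread of gains across neurons, averaged over trials
    gain_variability = gains.std(axis=1).mean()
    return mean_strength, gain_variability
```

Under this sketch, a population with uniform gains signals low uncertainty, while one whose per-trial gains scatter across neurons signals high uncertainty, even at the same mean rate.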
Abstract: Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call "fairness hacking" for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on ML algorithms, as well as the broader community interested in fair AI practices....
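The core mechanism behind metric shopping of this kind is that different fairness definitions can disagree on the same classifier, so one can report whichever looks best. A minimal toy example (data and names ours) where demographic parity is perfectly satisfied while equal opportunity is maximally violated:

```python
import numpy as np

# Group A: the classifier flags exactly the true positives
y_a, yhat_a = np.array([1, 1, 0, 0]), np.array([1, 1, 0, 0])
# Group B: identical positive rate, but every true positive is missed
y_b, yhat_b = np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])

def positive_rate(yhat):
    return yhat.mean()                      # demographic-parity ingredient

def true_positive_rate(y, yhat):
    return yhat[y == 1].mean()              # equal-opportunity ingredient

dp_gap = abs(positive_rate(yhat_a) - positive_rate(yhat_b))                       # 0.0
eo_gap = abs(true_positive_rate(y_a, yhat_a) - true_positive_rate(y_b, yhat_b))   # 1.0
```

Reporting only `dp_gap` would make this classifier look perfectly fair; `eo_gap` reveals it is maximally unfair to group B's qualified individuals.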
What constitutes a fair decision? This question is not only difficult for humans but becomes more challenging when Artificial Intelligence (AI) models are used. In light of discriminatory algorithmic behaviors, the EU has recently passed the AI Act, which mandates specific rules for AI models, incorporating both traditional legal non-discrimination regulations and machine-learning-based fairness concepts. This paper aims to bridge these two different concepts in the AI Act through: First, a high-level introduction...
A central problem in cognitive science and behavioural neuroscience, as well as in machine learning and artificial intelligence research, is to ascertain whether two or more decision makers (be they brains or algorithms) use the same strategy. Accuracy alone cannot distinguish between strategies: two systems may achieve similar accuracy with very different strategies. The need to differentiate beyond accuracy is particularly pressing if both systems are near ceiling performance, like Convolutional Neural Networks (CNNs) and humans on visual...
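One established way to compare strategies beyond accuracy is trial-by-trial error consistency: Cohen's kappa computed on which trials each system gets right, corrected for the agreement expected from the two accuracies alone. A minimal sketch (function name ours):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa on trial-wise correctness of two decision makers.
    1.0  -> identical error patterns
    0.0  -> no more agreement than expected from accuracies alone
    <0   -> systematically disagreeing error patterns
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    # observed agreement: both right or both wrong on the same trial
    c_obs = np.mean(correct_a == correct_b)
    # agreement expected by chance given each system's accuracy
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (c_obs - c_exp) / (1 - c_exp)
```

Two systems at 50% accuracy that err on exactly the same trials score 1.0; the same accuracies with perfectly complementary errors score -1.0.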
Considerable practical interest exists in being able to automatically determine whether a recorded magnetic resonance image is affected by motion artifacts caused by patient movements during scanning. Existing approaches usually rely on the use of navigators or external sensors to detect and track motion during acquisition. In this work, we present an algorithm based on convolutional neural networks that enables fully automated detection of motion artifacts in MR scans without special hardware requirements. The approach is data-driven and uses...
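The abstract describes a navigator-free, data-driven CNN classifier operating directly on the images. As a rough illustration of the patch-level idea only (untrained, all names ours, and not the paper's architecture), a single convolutional layer with ReLU, global average pooling, and a logistic output can be written in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation of one image with one kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

class TinyMotionDetector:
    """Conv -> ReLU -> global average pool -> logistic output.
    Weights are random here; in practice they would be trained on
    slices labelled motion-corrupted vs. clean."""
    def __init__(self, n_filters=4, ksize=3):
        self.kernels = rng.normal(0, 0.1, (n_filters, ksize, ksize))
        self.w = rng.normal(0, 0.1, n_filters)
        self.b = 0.0

    def predict_proba(self, img):
        # one pooled activation per filter, then a logistic read-out
        feats = np.array([np.maximum(conv2d_valid(img, k), 0).mean()
                          for k in self.kernels])
        logit = feats @ self.w + self.b
        return 1.0 / (1.0 + np.exp(-logit))
```

The output is a probability that a slice is motion-corrupted; a real detector would stack many such layers and learn the kernels from labelled scans.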
One of the most important tasks for humans is the attribution of causes and effects in all walks of life. The first systematic study of the visual perception of causality—often referred to as phenomenal causality—was done by Albert Michotte using his now well-known launching events paradigm. Launching events are the seeming collision and seeming transfer of movement between two objects—abstract, featureless stimuli ("objects") in Michotte's original experiments. Here, we examine the relation of causal ratings in Michotte's setting to collisions in a photorealistically...
Industry involvement in the machine learning (ML) community seems to be increasing. However, the quantitative scale and ethical implications of this influence are rather unknown. For this purpose, we have not only carried out an informed analysis of the field, but also inspected all papers of the main ML conferences NeurIPS, CVPR, and ICML of the last 5 years - almost 11,000 in total. Our statistical approach focuses on conflicts of interest, innovation, and gender equality. We obtained four main findings: (1) Academic-corporate collaborations...
When does a digital image resemble reality? The relevance of this question increases as the generation of synthetic images -- so-called deep fakes -- becomes increasingly popular. Deep fakes have gained much attention for a number of reasons -- among others, due to their potential to disrupt the political climate. In order to mitigate these threats, the EU AI Act implements specific transparency regulations for generating synthetic content or manipulating existing content. However, the distinction between real and fake images is difficult even from a computer vision...
"The power of a generalization system follows directly from its biases" (Mitchell 1980). Today, CNNs are incredibly powerful generalisation systems -- but to what degree have we understood how their inductive bias influences model decisions? We here attempt to disentangle the various aspects that determine how a model decides. In particular, we ask: what makes one model decide differently from another? In a meticulously controlled setting, we find (1.) that irrespective of network architecture or objective (e.g. self-supervised,...