Andrej Tschalzev

ORCID: 0000-0002-0638-5744
Research Areas
  • Explainable Artificial Intelligence (XAI)
  • Adversarial Robustness in Machine Learning
  • Machine Learning and Data Classification
  • Neural Networks and Applications
  • Acute Ischemic Stroke Management
  • Health Systems, Economic Evaluations, Quality of Life
  • Advanced Machining and Optimization Techniques
  • Clinical practice guidelines implementation
  • Non-Destructive Testing Techniques
  • Stroke Rehabilitation and Recovery
  • Air Quality Monitoring and Forecasting
  • Bayesian Modeling and Causal Inference
  • Advanced Data Processing Techniques
  • Telemedicine and Telehealth Implementation
  • Computational Physics and Python Applications
  • Industrial Vision Systems and Defect Detection

Affiliations
  • Institute for Enterprise Systems (2024)
  • University of Mannheim (2024)

Abstract Background and Purpose: Investigating the cost-effectiveness of future mobile stroke unit (MSU) services with respect to local idiosyncrasies is essential for enabling large-scale implementation of MSU services. The aim of this study was to assess MSU services in varying urban German settings and modes of operation. Methods: Costs of different operating times together with personnel configurations were simulated. Different possible catchment zones, ischemic stroke incidence, circadian distribution, rates of alternative diagnoses, as...

10.1111/ene.16514 article EN cc-by European Journal of Neurology 2024-11-06
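
The scenario simulation described in the abstract lends itself to a simple sketch: annual service cost as a function of operating window and crew cost, divided by the expected number of treated patients derived from catchment population, incidence, and circadian coverage. All figures and parameter names below are illustrative placeholders, not values from the study.

```python
# Minimal sketch of a scenario-based MSU cost simulation (hypothetical figures,
# not the study's actual parameters).
from itertools import product

VEHICLE_FIXED_COST = 250_000   # assumed annual fixed cost (EUR)
STAFF_COST_PER_HOUR = 180      # assumed cost of one on-duty crew (EUR/h)

# Fraction of ischemic strokes occurring within each operating window; the study
# derives this from the local circadian distribution. Placeholder values here.
CIRCADIAN_COVERAGE = {"8h": 0.55, "12h": 0.75, "24h": 1.00}

def simulate(catchment_pop, incidence_per_100k, window, dispatch_rate=0.8):
    """Annual cost per MSU-treated ischemic stroke patient for one scenario."""
    hours_per_year = {"8h": 8, "12h": 12, "24h": 24}[window] * 365
    annual_cost = VEHICLE_FIXED_COST + STAFF_COST_PER_HOUR * hours_per_year
    expected_patients = (catchment_pop / 100_000 * incidence_per_100k
                         * CIRCADIAN_COVERAGE[window] * dispatch_rate)
    return annual_cost / expected_patients

for pop, window in product([300_000, 500_000], ["8h", "12h", "24h"]):
    cost = simulate(pop, incidence_per_100k=200, window=window)
    print(f"catchment={pop:>7,} window={window:>3}: {cost:,.0f} EUR/patient")
```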

Mobile stroke units (MSU) have been demonstrated to improve prehospital care in metropolitan and rural regions. Due to geographical and social structural idiosyncrasies of the German city of Mannheim, the concepts of established MSU services are not directly applicable to the Mannheim initiative. The aim of the present analysis was to identify major determinants that need to be considered when initially setting up a local MSU service. Local statistics from 2015 to 2021 were analyzed for the circadian distribution of strokes and incidence rates...

10.3389/fneur.2024.1358145 article EN cc-by Frontiers in Neurology 2024-02-29
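
A minimal sketch of one such determinant analysis: deriving the circadian distribution of stroke onsets from timestamped case records. The column name, sample data, and operating window below are hypothetical, not the study's actual data schema.

```python
# Illustrative sketch: circadian stroke distribution from onset timestamps.
import pandas as pd

df = pd.DataFrame({"onset_time": pd.to_datetime([
    "2021-03-01 07:45", "2021-03-01 11:20", "2021-03-02 14:05",
    "2021-03-02 22:40", "2021-03-03 09:10",
])})

# Share of strokes per hour of day -- the basis for choosing MSU operating times.
hourly_share = (df["onset_time"].dt.hour
                  .value_counts(normalize=True)
                  .sort_index())
print(hourly_share)

# Coverage of a hypothetical 7:00-19:00 operating window.
in_window = df["onset_time"].dt.hour.between(7, 18)
print(f"strokes within window: {in_window.mean():.0%}")
```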

Abstract We consider generating explanations for neural networks in cases where the network’s training data is not accessible, for instance due to privacy or safety issues. Recently, Interpretation Nets ($\mathcal{I}$-Nets) have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to training data. They formulate interpretation as a machine learning task that maps network...

10.1007/s10994-023-06428-4 article EN cc-by Machine Learning 2024-01-10
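
The core I-Net idea, a meta-model trained to map a network's flattened parameters to the representation of an interpretable surrogate, can be sketched as follows. The weight-encoding function, the ridge-regression meta-model, and all dimensions are illustrative stand-ins for the neural architectures used in the paper.

```python
# Conceptual sketch of an I-Net: learn a mapping from network parameters to
# the parameters of an interpretable surrogate (here a linear function; the
# paper targets richer representations such as decision trees).
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 32))  # fixed: how function coefficients end up encoded in weights

def network_params(w_true):
    """Stand-in for training a network on data generated by a function with
    coefficients w_true, then flattening the trained weights."""
    return np.tanh(w_true @ M) + 0.01 * rng.normal(size=32)

# Meta-dataset: flattened network parameters -> interpretable representation.
W_true = rng.normal(size=(1000, 8))
Theta = np.stack([network_params(w) for w in W_true])

# The I-Net: trained once on the meta-dataset; afterwards it can interpret new
# networks without touching their (possibly private) training data. Ridge
# regression is a minimal stand-in for the neural I-Net of the paper.
lam = 1e-3
inet = np.linalg.solve(Theta.T @ Theta + lam * np.eye(32), Theta.T @ W_true)

theta_new = network_params(rng.normal(size=8))  # parameters of an unseen network
print("recovered surrogate coefficients:", theta_new @ inet)
```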

Neural networks often assume independence among input data samples, disregarding correlations arising from inherent clustering patterns in real-world datasets (e.g., due to different sites or repeated measurements). Recently, mixed effects neural networks (MENNs), which separate cluster-specific 'random effects' from cluster-invariant 'fixed effects', have been proposed to improve generalization and interpretability for clustered data. However, existing methods only allow approximate quantification of cluster effects and are...

10.48550/arxiv.2407.01115 preprint EN arXiv (Cornell University) 2024-07-01
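
A minimal sketch of the mixed-effects decomposition the abstract describes: a shared 'fixed effects' network plus a per-cluster random intercept, regularized toward zero as a stand-in for a Gaussian prior on the random effects. The architecture and hyperparameters are illustrative assumptions, not the paper's method.

```python
# Mixed effects neural network sketch: fixed effects shared across clusters,
# random effects specific to each cluster.
import torch
import torch.nn as nn

class MixedEffectsNet(nn.Module):
    def __init__(self, n_features, n_clusters):
        super().__init__()
        # Fixed effects: one network shared by all clusters.
        self.fixed = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
        # Random effects: a learned intercept per cluster, kept small via an
        # L2 penalty (stand-in for the Gaussian prior u_c ~ N(0, s^2)).
        self.random_intercept = nn.Embedding(n_clusters, 1)
        nn.init.zeros_(self.random_intercept.weight)

    def forward(self, x, cluster_id):
        return self.fixed(x) + self.random_intercept(cluster_id)

model = MixedEffectsNet(n_features=10, n_clusters=5)
x = torch.randn(64, 10)
c = torch.randint(0, 5, (64,))   # cluster membership of each sample
y = torch.randn(64, 1)

pred = model(x, c)
loss = nn.functional.mse_loss(pred, y) \
       + 1e-2 * model.random_intercept.weight.pow(2).mean()
loss.backward()
print(loss.item())
```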

Tabular data is prevalent in real-world machine learning applications, and new models for supervised learning on tabular data are frequently proposed. Comparative studies assessing the performance of these models typically consist of model-centric evaluation setups with overly standardized preprocessing. This paper demonstrates that such evaluations are biased, as real-world modeling pipelines often require dataset-specific preprocessing and feature engineering. Therefore, we propose a data-centric evaluation framework. We select 10 relevant datasets...

10.48550/arxiv.2407.02112 preprint EN arXiv (Cornell University) 2024-07-02
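
The bias the paper identifies can be illustrated with a toy comparison: the same model is scored once under a one-size-fits-all pipeline and once under dataset-specific preprocessing. The dataset, the skew transform, and the pipeline registry below are invented for illustration.

```python
# Sketch of the data-centric evaluation idea: standardized vs. dataset-specific
# preprocessing for the same model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X = np.exp(X / 3)  # pretend this dataset has skewed, positive-valued features

PIPELINES = {
    # One-size-fits-all preprocessing, as in model-centric benchmarks.
    "standardized": make_pipeline(StandardScaler()),
    # Dataset-specific choice: undo the skew before scaling.
    "dataset-specific": make_pipeline(FunctionTransformer(np.log1p),
                                      StandardScaler()),
}

for label, prep in PIPELINES.items():
    pipe = make_pipeline(prep, GradientBoostingClassifier(random_state=0))
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{label:>17}: AUC = {auc:.3f}")
```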

Neural networks often assume independence among input data samples, disregarding correlations arising from inherent clustering patterns in real-world datasets (e.g., due to different sites or repeated measurements). Recently, mixed effects neural networks (MENNs), which separate cluster-specific 'random effects' from cluster-invariant 'fixed effects', have been proposed to improve generalization and interpretability for clustered data. However, existing methods only allow approximate quantification of cluster effects and are...

10.24963/ijcai.2024/555 article EN 2024-07-26

We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to the data. They formulate interpretation as a machine learning task that maps network representations (parameters) to a representation of an interpretable function. In this paper, we extend the $\mathcal{I}$-Net framework...

10.48550/arxiv.2206.04891 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01