- Statistical Methods in Clinical Trials
- Statistical Methods and Inference
- Optimal Experimental Design Methods
- Advanced Statistical Methods and Models
- Statistical Methods and Bayesian Inference
- Bayesian Methods and Mixture Models
- Financial Risk and Volatility Modeling
- Statistical Distribution Estimation and Applications
- Advanced Causal Inference Techniques
- Fault Detection and Control Systems
- Sensory Analysis and Statistical Methods
- Probability and Risk Models
- Neural Networks and Applications
- Multiple Sclerosis Research Studies
- Probability and Statistical Research
- Data-Driven Disease Surveillance
- Machine Learning and Data Classification
- Genetic and Phenotypic Traits in Livestock
- Lymphoma Diagnosis and Treatment
- Machine Learning and Algorithms
- Neutropenia and Cancer Infections
- Multiple Myeloma Research and Treatments
- Health Systems, Economic Evaluations, Quality of Life
- Hematopoietic Stem Cell Transplantation
- Ferroptosis and Cancer Prognosis
Otto-von-Guericke University Magdeburg
2023-2024
TU Dortmund University
2018-2023
Max Delbrück Center
2021
Charité - Universitätsmedizin Berlin
2021
Humboldt-Universität zu Berlin
2021
Freie Universität Berlin
2021
University Hospital of Basel
2021
University of Basel
2021
University of California, Irvine
2021
Heinrich Heine University Düsseldorf
2018-2019
Weighted logrank tests are a popular tool for analysing right-censored survival data from two independent samples. Each of these tests is optimal against a certain hazard alternative; for example, the classical logrank test is optimal under proportional hazards. But which weight function should be used in practical applications? We address this question by a flexible combination idea leading to a testing procedure with broader power. Besides the test's asymptotic exactness and consistency, its power behaviour under local alternatives...
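As a minimal illustration of the weighted logrank statistic discussed above (a generic sketch, not the paper's combination procedure), the following computes the standardized two-sample statistic for two classic weight choices; the function name and the toy data are hypothetical:

```python
import numpy as np

def weighted_logrank(time1, event1, time2, event2, weight="logrank"):
    """Two-sample weighted logrank test statistic (illustrative sketch).

    weight: "logrank" (w = 1) or "gehan" (w = number at risk),
    two classic weight choices. Returns the standardized statistic Z.
    """
    time = np.concatenate([time1, time2])
    event = np.concatenate([event1, event2]).astype(bool)
    group = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])

    num, var = 0.0, 0.0
    for t in np.unique(time[event]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()                     # total at risk just before t
        n1 = (at_risk & (group == 0)).sum()   # at risk in group 1
        d = (event & (time == t)).sum()       # events at t
        d1 = (event & (time == t) & (group == 0)).sum()
        w = 1.0 if weight == "logrank" else float(n)
        num += w * (d1 - d * n1 / n)          # observed minus expected
        if n > 1:
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var)
```

A large |Z| indicates a difference between the two hazard functions; which weight has good power depends on the (unknown) alternative, which is exactly the motivation for combining several weights.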
<h3>Objective</h3> To investigate the association of combined serum neurofilament light chain (sNfL) and retinal optical coherence tomography (OCT) measurements with future disease activity in patients with early multiple sclerosis (MS). <h3>Methods</h3> We analyzed sNfL by single molecule array technology and performed OCT in a prospective cohort of 78 patients with clinically isolated syndrome or relapsing-remitting MS with a median (interquartile range) follow-up of 23.9 (23.3–24.7) months. Patients were grouped into those...
Several methods in survival analysis are based on the proportional hazards assumption. However, this assumption is very restrictive and often not justifiable in practice. Therefore, effect estimands that do not rely on it are highly desirable in practical applications. One popular example is the restricted mean survival time (RMST). It is defined as the area under the survival curve up to a prespecified time point and, thus, summarizes the survival curve into a meaningful estimand. For two-sample comparisons based on the RMST, previous research found an inflation of the type I error...
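The RMST definition above translates directly into code: estimate the survival curve (here via a plain Kaplan-Meier estimator) and integrate the resulting step function up to the cutoff tau. This is a self-contained sketch of the estimand, not the inference procedure studied in the paper:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimator; returns event times and survival values."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    ts = np.unique(time[event])
    surv, s = [], 1.0
    for t in ts:
        n = (time >= t).sum()            # at risk just before t
        d = ((time == t) & event).sum()  # events at t
        s *= 1 - d / n
        surv.append(s)
    return ts, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    ts, surv = kaplan_meier(time, event)
    grid = np.concatenate([[0.0], ts[ts < tau], [tau]])
    vals = np.concatenate([[1.0], surv[ts < tau]])  # S(t) on each interval
    return float(np.sum(np.diff(grid) * vals))
```

Because the RMST is just an integral of the survival function, it stays interpretable (mean survival time up to tau) even when hazards cross.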
We propose inference procedures for general factorial designs with time-to-event endpoints. Similar to additive Aalen models, null hypotheses are formulated in terms of cumulative hazards. Deviations are measured by quadratic forms of Nelson-Aalen-type integrals. Different from existing approaches, this allows us to work without restrictive model assumptions such as proportional hazards. In particular, crossing survival or hazard curves can be detected without a significant loss of power. For a distribution-free application, the...
Abstract Purpose Infections due to severe neutropenia are the most common therapy-associated causes of mortality in patients with acute myeloid leukemia (AML). New strategies to lessen the severity and duration of neutropenia are needed. Methods Cytarabine is commonly used for AML consolidation therapy; we compared high- and intermediate-dose cytarabine administration on days 1, 2, 3 (AC-123) versus days 1, 3, 5 (AC-135) as consolidation therapy for AML. Recently, clinical trials demonstrated that high-dose AC-123 resulted in a shortened white blood...
Functional data analysis is becoming increasingly popular for studying data from real-valued random functions. Nevertheless, there is a lack of multiple testing procedures for such data. These are particularly important in factorial designs to compare different groups or to infer factor effects. We propose a new class of tests for arbitrary linear hypotheses in general factorial designs with functional data. Our methods allow global as well as multiple inference for both univariate and multivariate mean functions without assuming particular error distributions nor...
Factorial analyses offer a powerful nonparametric means to detect main or interaction effects among multiple treatments. For survival outcomes, for example from clinical trials, such techniques can be adopted for comparing reasonable quantifications of treatment effects. The key difficulty to solve in survival analysis concerns the proper handling of censoring. So far, all existing factorial analyses for survival data have been developed under the independent censoring assumption, which is too strong for many applications. As a solution,...
Abstract Population means and standard deviations are the most common estimands to quantify effects in factorial layouts. In fact, most statistical procedures for such designs are built toward inferring means or contrasts thereof. For more robust analyses, we consider the population median, the interquartile range (IQR) and, more generally, quantile combinations as estimands, for which we formulate null hypotheses and calculate compatible confidence regions. Based upon simultaneous multivariate central limit theorems and corresponding resampling results,...
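To make the idea of median-based inference concrete, here is a simple two-group permutation test for equality of medians; it is a generic sketch under the assumption of exchangeability between groups, not the paper's (resampling-based) procedure for general factorial layouts:

```python
import numpy as np

def median_diff_perm_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value for H0: median(x) == median(y) (sketch)."""
    rng = np.random.default_rng(seed)
    obs = abs(np.median(x) - np.median(y))      # observed statistic
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # re-split the pooled data
        stat = abs(np.median(perm[:n]) - np.median(perm[n:]))
        count += stat >= obs
    return (count + 1) / (n_perm + 1)           # add-one correction
```

The same recipe extends to IQRs or other quantile combinations by swapping the statistic.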
Recent observations, especially in cancer immunotherapy clinical trials with time-to-event outcomes, show that the commonly used proportional hazard assumption is often not justifiable, hampering an appropriate analysis of the data by hazard ratios. An attractive alternative is given by the restricted mean survival time (RMST), which does not rely on any model assumption and can always be interpreted intuitively. Since methods for the RMST based on asymptotic theory suffer from inflated type-I error under small sample...
Abstract The multivariate coefficient of variation (MCV) is an attractive and easy-to-interpret effect size for the dispersion in multivariate data. Recently, the first inference methods for the MCV were proposed for general factorial designs. However, they are primarily derived for one special variant, while several reasonable proposals exist. Moreover, when rejecting a global null hypothesis, a more in-depth analysis is of interest to find the significant contrasts of the MCV. This paper concerns extending the nonparametric permutation procedure...
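One of the "several reasonable proposals" for the MCV is Van Valen's variant, which is straightforward to compute; the sketch below shows only this estimand (the function name is hypothetical, and this is not the paper's inference method):

```python
import numpy as np

def mcv_van_valen(x):
    """Van Valen's multivariate coefficient of variation:
    sqrt(trace(Cov)) / ||mean|| for an (n, d) data matrix x."""
    x = np.asarray(x, dtype=float)
    mu = x.mean(axis=0)                     # sample mean vector
    cov = np.cov(x, rowvar=False)           # sample covariance matrix
    return float(np.sqrt(np.trace(cov)) / np.linalg.norm(mu))
```

Other variants (e.g. determinant-based ones) replace the trace by a different scalar summary of the covariance matrix, which is exactly why inference methods should not be tied to a single choice.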
Factorial survival designs with right-censored observations are commonly inferred by Cox regression and explained by means of hazard ratios. However, in case of non-proportional hazards, their interpretation can become cumbersome, especially for clinicians. We therefore offer an alternative: median survival times are used to estimate treatment and interaction effects, and null hypotheses are formulated as contrasts of their population versions. Permutation-based tests and confidence regions are proposed and shown to be asymptotically valid. Their...
Remote magnetic navigation (RMN) facilitates ventricular arrhythmia (VA) ablation. This study aimed to evaluate the long-term efficacy of RMN-guided ablation for ventricular tachycardia (VT) and premature ventricular contractions (PVC). A total of 176 consecutive patients (mean age 53.23 ± 17.55 years, 37% female) underwent VA ablation for PVC (132 patients, 75%) or VT (44 patients, 25%). The cohort consisted of 119 patients (68%) with idiopathic VA, 31 (18%) with ischemic cardiomyopathy (ICM), and 26 (15%) with dilated cardiomyopathy (DCM). Recurrence was observed in 69 (39%,...
A frequent problem in statistical science is how to properly handle missing data in matched paired observations. There is a large body of literature coping with the univariate case. Yet, ongoing technological progress in measuring biological systems raises the need for addressing more complex data, for example, graphs, strings, and probability distributions. To fill this gap, this article proposes new estimators of the maximum mean discrepancy (MMD) for pairs of such data. These can detect differences in distributions under different...
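For readers unfamiliar with the MMD, here is the standard biased (V-statistic) estimate with a Gaussian kernel for complete, unpaired samples; it is background for the abstract above, not the paper's estimators for matched pairs with missing data:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel matrix between rows of a and rows of b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd_squared(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between x and y."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy
```

The MMD is zero exactly when the two distributions coincide (for a characteristic kernel), which is what makes it a natural basis for two-sample tests on structured data.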
A nonparametric goodness-of-fit test for random variables with values in a separable Hilbert space is investigated. To verify the null hypothesis that the data come from a specific distribution, an integral-type test statistic based on the Cramér-von-Mises statistic is suggested. The convergence in distribution of the test statistic under the null hypothesis is proved and the test's consistency is concluded. Moreover, properties under local alternatives are discussed. Applications are given for data of huge but finite dimension and for functional data in infinite dimensional spaces. The general approach enables...
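The classic real-valued Cramér-von-Mises statistic that the Hilbert-space construction generalizes can be computed in a few lines; this is the textbook one-sample version, shown only to fix ideas:

```python
import numpy as np

def cramer_von_mises(x, cdf):
    """Classic one-sample Cramér-von-Mises statistic against a given CDF,
    using the computational formula 1/(12n) + sum((2i-1)/(2n) - F(x_(i)))^2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    u = np.array([cdf(v) for v in x])       # probability transform of order stats
    i = np.arange(1, n + 1)
    return float(1 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2))
```

The minimum value 1/(12n) is attained when the sample sits exactly at the quantiles (2i-1)/(2n) of the hypothesized distribution.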
The area between two survival curves is an intuitive test statistic for the classical two-sample testing problem. We propose a bootstrap version of it for assessing the overall homogeneity of these curves. Our approach allows ties in the data as well as independent right censoring, which may differ between the groups. The asymptotic distributions of the test statistic and its bootstrap counterpart are derived under the null hypothesis, and their consistency is proven for general alternatives. We demonstrate the finite sample superiority of the proposed test over some existing methods...
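The test statistic itself, the integrated absolute difference between two Kaplan-Meier curves up to a cutoff tau, can be sketched as follows; the bootstrap calibration from the abstract is omitted, and the helper names are hypothetical:

```python
import numpy as np

def km(time, event):
    """Kaplan-Meier survival estimate at the distinct event times."""
    t = np.asarray(time, dtype=float)
    e = np.asarray(event, dtype=bool)
    ts = np.unique(t[e])
    s, out = 1.0, []
    for u in ts:
        s *= 1 - ((t == u) & e).sum() / (t >= u).sum()
        out.append(s)
    return ts, np.array(out)

def area_between_curves(t1, e1, t2, e2, tau):
    """Integrated absolute difference of the two KM step curves on [0, tau]."""
    def step(ts, sv, grid):
        # value of the right-continuous step function at each grid point
        idx = np.searchsorted(ts, grid, side="right")
        return np.concatenate([[1.0], sv])[idx]
    ts1, s1 = km(t1, e1)
    ts2, s2 = km(t2, e2)
    grid = np.unique(np.concatenate([[0.0], ts1, ts2, [tau]]))
    grid = grid[grid <= tau]
    v1 = step(ts1, s1, grid[:-1])
    v2 = step(ts2, s2, grid[:-1])
    return float(np.sum(np.diff(grid) * np.abs(v1 - v2)))
```

Unlike a logrank statistic, this area measure accumulates any discrepancy between the curves, so it remains sensitive when the curves cross.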
While there exist several inferential methods for analyzing functional data in factorial designs, there is a lack of statistical tests that are (i) valid in general designs, (ii) valid under non-restrictive assumptions on the data generating process, and (iii) allow for coherent post-hoc analyses. In particular, most existing methods assume Gaussianity or equal covariance functions across groups (homoscedasticity), are only applicable to specific study designs, or do not allow the evaluation of interactions. Moreover, all available strategies are designed...
The family of goodness-of-fit tests based on $\Phi$-divergences is known to be optimal for detecting signals hidden in high-dimensional noise data when the heterogeneous normal mixture model is underlying. This family includes Tukey's popular higher criticism test and the famous Berk–Jones test. In this paper we address the open question whether the tests' optimality is still present beyond this prime model. On the one hand, we transfer the optimality to different models, for example, the heteroscedastic normal mixture and general Gaussian...
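Tukey's higher criticism statistic mentioned above compares the ordered p-values to their expected ranks under the global null; the following is one common variant of the statistic (several restrictions of the maximization range appear in the literature, so treat the masking rule as an assumption):

```python
import numpy as np

def higher_criticism(pvals):
    """Tukey's higher criticism statistic (one common variant)."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    # standardized deviation of the empirical CDF from the uniform null
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    # restrict to p-values where the denominator does not degenerate
    mask = (p > 1.0 / n) & (p < 1 - 1.0 / n)
    return float(hc[mask].max())
```

Large values of the statistic indicate that unusually many small p-values are present, i.e. a sparse signal spread over many coordinates.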
Abstract This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a Hilbert space. To address this problem, a Cramér–von-Mises type test statistic is applied and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties derived are asymptotic exactness under the null...
Much effort has been made to control the "false discovery rate" (FDR) when $m$ hypotheses are tested simultaneously. The FDR is the expectation of the "false discovery proportion" $\text{FDP}=V/R$ given by the ratio of the number of false rejections $V$ and the number of all rejections $R$. In this paper, we take a closer look at the FDP for adaptive linear step-up multiple tests. These tests extend the well known Benjamini and Hochberg test by estimating the unknown amount $m_{0}$ of true null hypotheses. We give exact finite sample formulas for higher moments of the FDP and, in...
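An adaptive linear step-up test of the kind studied above can be sketched by plugging an estimate of $m_0$ into the Benjamini-Hochberg thresholds; the Storey-type plug-in used here is one common choice among the adaptive variants, not necessarily the one analyzed in the paper:

```python
import numpy as np

def adaptive_bh(pvals, alpha=0.05, lam=0.5):
    """Adaptive linear step-up test: Benjamini-Hochberg with a Storey-type
    plug-in estimate of the number m0 of true null hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    # p-values above lambda come mostly from true nulls
    m0_hat = min(m, (1 + np.sum(p > lam)) / (1 - lam))
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m0_hat   # adaptive step-up thresholds
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                         # reject the k smallest p-values
    return reject
```

Replacing `m0_hat` by `m` recovers the classical (non-adaptive) Benjamini-Hochberg procedure; the adaptive version gains power when many hypotheses are false.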