- Philosophy and History of Science
- Statistics Education and Methodologies
- Science and Climate Studies
- Bayesian Modeling and Causal Inference
- Meta-analysis and systematic reviews
- Statistical Methods in Clinical Trials
- Risk Perception and Management
- Probability and Statistical Research
- Historical Economic and Social Studies
- Advanced Statistical Methods and Models
- Epistemology, Ethics, and Metaphysics
- Statistical Mechanics and Entropy
- Mental Health Research Topics
- Forecasting Techniques and Applications
- Explainable Artificial Intelligence (XAI)
- Decision-Making and Behavioral Economics
- Law, Economics, and Judicial Systems
- Biomedical Text Mining and Ontologies
- Economic Theory and Institutions
- Pragmatism in Philosophy and Education
- Race, Genetics, and Society
- Philosophy, Science, and History
- Health Systems, Economic Evaluations, Quality of Life
- American Constitutional Law and Politics
- Data Analysis with R
- Virginia Tech (2013-2023)
- Cambridge University Press (2010)
- Philosophy of Science Association (1995)
- Hydro One, Canada (1990)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that their relevance is to ensure that only hypotheses that have passed...
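As a minimal numerical sketch of the pre-data quantities at issue (ours, not from the paper; all values are illustrative), here is how the type I error rate and power of a one-sided z-test are fixed before any data are observed:

```python
# Minimal sketch (illustrative, not from the paper) of the pre-data error
# probabilities of a one-sided z-test of H0: mu <= 0 vs H1: mu > 0 with
# known sigma. sigma, n, alpha, and mu1 are all assumed values.
from scipy.stats import norm

sigma, n, alpha = 1.0, 25, 0.05
se = sigma / n**0.5

# Type I error is fixed by choosing the cutoff c_alpha so that
# P(reject H0; mu = 0) = alpha.
c_alpha = norm.ppf(1 - alpha) * se

# Power against a specific alternative mu1:
# P(reject H0; mu = mu1) = P(Xbar > c_alpha; mu1).
mu1 = 0.5
power = 1 - norm.cdf((c_alpha - mu1) / se)

print(f"cutoff = {c_alpha:.3f}, type I error = {alpha}, "
      f"power at mu = {mu1}: {power:.3f}")  # ~0.329, 0.05, ~0.804
```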
While many philosophers of science have accorded special evidential significance to tests whose results are “novel facts”, there continues to be disagreement over both the definition of novelty and why it should matter. The view favored by Giere, Lakatos, Worrall, and others is that of use-novelty: an accordance between evidence e and hypothesis h provides a genuine test of h only if e was not used in h's construction. I argue that what lies behind the intuition that use-novelty matters is a deeper requirement of severity, and I set out a criterion of severity akin to a notion of a test's...
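The severity criterion gestured at here is developed formally in later work on error statistics; as a rough illustration under our own assumed numbers (not the paper's), a post-data severity assessment for a one-sided z-test can be computed like this:

```python
# Rough illustration (ours, not from the paper) of a severity-style
# post-data assessment for a one-sided z-test: how well does an observed
# mean x0 warrant the claim mu > mu1?
from scipy.stats import norm

sigma, n = 1.0, 25
se = sigma / n**0.5
x0 = 0.4  # observed sample mean (assumed value)

# SEV(mu > mu1) = P(Xbar <= x0; mu = mu1): the probability of a result
# *less* in accord with "mu > mu1" if mu were exactly mu1.
for mu1 in (0.0, 0.2, 0.4):
    sev = norm.cdf((x0 - mu1) / se)
    print(f"SEV(mu > {mu1}) = {sev:.3f}")  # 0.977, 0.841, 0.500
```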
The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether those assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, and risk assessment. Philosophical scrutiny can help disentangle ‘practical’ problems of model...
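As a sketch of the kind of assumption-checking the abstract calls for (our own illustration; the data, model, and particular checks are assumptions, not the paper's), one might probe the normality and independence of regression residuals before trusting the model's reported error probabilities:

```python
# Illustrative sketch (ours): probe two assumptions behind a simple linear
# regression, normality and independence of the errors, using the residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # synthetic data

slope, intercept, *_ = stats.linregress(x, y)
resid = y - (intercept + slope * x)

# Shapiro-Wilk probes the normality assumption of the residuals.
w, p_norm = stats.shapiro(resid)

# Lag-1 autocorrelation of the residuals probes the independence assumption.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

print(f"Shapiro-Wilk p = {p_norm:.3f}, lag-1 residual autocorr = {r1:.3f}")
```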
While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame the tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the tools, and that in fact the purported remedies themselves risk damaging science. We argue that banning the use of p-value...
The error statistical account of testing uses statistical considerations, not to provide a measure of the probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
While orthodox (Neyman-Pearson) statistical tests enjoy widespread use in science, the philosophical controversy over their appropriateness for obtaining scientific knowledge remains unresolved. I shall suggest an explanation and a resolution of this controversy. The source of the controversy, I argue, is that the tests are typically interpreted as rules for making optimal decisions as to how to behave, where optimality is measured by the frequency of errors the test would commit in a long series of trials. Most philosophers of statistics,...
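To make the "long series of trials" reading concrete, a short simulation (our illustration, with assumed settings) shows the sense in which a test's type I error rate is a long-run frequency of erroneous rejections:

```python
# Small simulation (ours) of the behavioral reading of an N-P test: over
# many repetitions under H0, the rejection frequency approaches the
# nominal type I error rate alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 25, 100_000

# Reject H0: mu <= 0 when the sample mean exceeds this cutoff (sigma = 1).
cutoff = norm.ppf(1 - alpha) / np.sqrt(n)

# Simulate many samples drawn under H0 (mu = 0) and count rejections.
xbars = rng.normal(loc=0.0, scale=1.0, size=(trials, n)).mean(axis=1)
print(f"empirical rejection rate under H0: {(xbars > cutoff).mean():.4f}")
# ~0.05, matching the nominal alpha
```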
An essential component of inference based on familiar frequentist notions, such as $p$-values, significance and confidence levels, is the relevant sampling distribution. This feature results in violations of a principle known as the strong likelihood principle (SLP), the focus of this paper. In particular, if outcomes $\mathbf{x}^{\ast}$ and $\mathbf{y}^{\ast}$ from experiments $E_{1}$ and $E_{2}$ (both with unknown parameter $\theta$) have different probability models $f_{1}(\cdot)$, $f_{2}(\cdot)$, then even though...
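The tension can be seen in the standard binomial versus negative binomial example from the SLP literature (our coding of the textbook numbers, not the paper's own): the two experiments yield proportional likelihoods for $\theta$ yet different p-values, because their sampling distributions differ.

```python
# Classic SLP illustration (standard in the literature; our coding):
# E1: observe 9 successes in n = 12 Bernoulli(theta) trials.
# E2: sample until the 3rd failure, which also yields 9 successes.
# The likelihood functions for theta are proportional, yet the p-values
# for testing H0: theta = 0.5 against theta > 0.5 differ.
from scipy.stats import binom, nbinom

theta0 = 0.5

# E1: p-value = P(X >= 9; theta0) under Binomial(12, theta0).
p1 = binom.sf(8, 12, theta0)

# E2: Y = number of successes before the 3rd failure, so in scipy's
# parameterization Y ~ nbinom(r=3, p = 1 - theta). p-value = P(Y >= 9).
p2 = nbinom.sf(8, 3, 1 - theta0)

print(f"binomial p = {p1:.4f}, negative binomial p = {p2:.4f}")
# ~0.0730 vs ~0.0327: same likelihood, different error probabilities.
```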
The key problem in the controversy over group selection is that of defining a criterion that identifies a distinct causal process irreducible to individual selection. We aim to clarify this problem and to formulate an adequate model. We distinguish two types of models, labeling them type I and type II models. Type I models are invoked to explain differences among groups in their respective rates of production of contained individuals; type II models, differences in their respective rates of producing new groups. Taking Elliott Sober's model as an exemplar, we argue that although type I models have some biological importance—they force...
Recently, a number of statistical reformers have argued for conceptualizing significance testing as analogous to diagnostic testing, with a "base rate" of true nulls and the test's error probabilities used to compute a "positive predictive value" or "false discovery rate". These quantities are then used to critique scientific practice. We argue against this: these quantities are not relevant for evaluating tests, they add to the confusion over tests, and they take the focus away from what matters: the evidence.
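For concreteness, the diagnostic-screening computation the paper argues against looks like this (illustrative inputs, ours; the paper's objection is that these quantities are beside the point for evaluating tests, not that the arithmetic is wrong):

```python
# The reformers' diagnostic-screening quantities, computed from their own
# ingredients: an assumed "base rate" of true nulls, a significance level
# alpha, and a power. All three inputs are illustrative assumptions.
base_rate_true_nulls = 0.9
alpha, power = 0.05, 0.8

# Overall probability that a test rejects, by total probability.
p_reject = (base_rate_true_nulls * alpha
            + (1 - base_rate_true_nulls) * power)

# "False discovery rate": P(null true | rejection), by Bayes' theorem.
fdr = base_rate_true_nulls * alpha / p_reject
ppv = 1 - fdr  # "positive predictive value"

print(f"PPV = {ppv:.3f}, FDR = {fdr:.3f}")  # 0.640, 0.360 with these inputs
```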