Esther Maassen

ORCID: 0000-0003-3288-5424
Research Areas
  • Mental Health Research Topics
  • Meta-analysis and systematic reviews
  • Psychometric Methodologies and Testing
  • Scientific Computing and Data Management
  • Advanced Causal Inference Techniques
  • Behavioral Health and Interventions
  • Cognitive Science and Mapping
  • Privacy, Security, and Data Protection
  • Complex Network Analysis Techniques
  • Privacy-Preserving Technologies in Data
  • Digital Mental Health Interventions
  • Data Quality and Management
  • Machine Learning in Materials Science
  • Biomedical Text Mining and Ontologies
  • Impact of Technology on Adolescents
  • Biomedical and Engineering Education
  • Data-Driven Disease Surveillance
  • Cognitive and psychological constructs research
  • X-ray Diffraction in Crystallography
  • Cognitive Abilities and Testing
  • Forecasting Techniques and Applications
  • Advanced Statistical Modeling Techniques
  • World Systems and Global Transformations
  • Religion and Society Interactions
  • Mobile Health and mHealth Applications

Tilburg University
2018-2024

Self-report scales are widely used in psychology to compare means of latent constructs across groups, experimental conditions, or time points. However, for these comparisons to be meaningful and unbiased, the scales must demonstrate measurement invariance (MI) across the compared time points or (experimental) groups. MI testing determines whether the constructs are measured equivalently across groups and time, which is essential for valid comparisons. We conducted a systematic review of 426 articles with openly available data to (a) examine common practices...

10.1037/met0000624 article EN Psychological Methods 2023-12-25

To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses, based on the information given in the meta-analyses, and whether recomputations altered the overall results of the meta-analyses. Results showed that almost half (k = 224) of all sampled effect sizes could not be reproduced from the information reported in the meta-analysis, mostly because of incomplete or missing information on how effect sizes from primary studies were selected and computed. Overall, this led to small discrepancies in the computation of mean effect sizes,...

10.1371/journal.pone.0233107 article EN cc-by PLoS ONE 2020-05-27
Richard Klein Michelangelo Vianello Fred Hasselman Byron G. Adams Reginald B. Adams and 95 more Sinan Alper Mark Aveyard Jordan Axt Mayowa T. Babalola Štěpán Bahník Mihály Berkics Michael J. Bernstein Daniel R. Berry Olga Białobrzeska Konrad Bocian Mark Brandt Robert Busching Huajian Cai Fanny Cambier Katarzyna Cantarero Cheryl L. Carmichael Zeynep Cemalcılar Jesse Chandler Jen‐Ho Chang Armand Chatard Eva CHEN Winnee Cheong David C. Cicero Sharon Coen Jennifer A. Coleman Brian Collisson Morgan Conway Katherine S. Corker Paul Curran Fiery Cushman Ilker Dalgar William E. Davis Maaike Jolise de Bruijn Marieke de Vries Thierry Devos Canay Doğulu Nerisa Dozo Kristin Nicole Dukes Yarrow Dunham Kevin Durrheim Matthew J. Easterbrook Charles R. Ebersole John E. Edlund Alexander Scott English Anja Eller Carolyn Finck Miguel-Ángel Freyre Michael Friedman Natalia Frankowska Elisa Maria Galliani Tanuka Ghoshal Steffen Robert Giessner Tripat Gill Timo Gnambs Ángel Gómez Roberto González Jesse Graham Jon Grahe Ivan Grahek Eva G. T. Green Kakul Hai Matthew Haigh Elizabeth L. Haines Michael P. Hall Marie E. Heffernan Joshua A. Hicks Petr Houdek Marije van der Hulst Jeffrey R. Huntsinger Ho Phi Huynh Hans IJzerman Yoel Inbar Åse Innes-Ker William Jiménez‐Leal Melissa‐Sue John Jennifer A. Joy-Gaba Roza Gizem Kamiloglu Andreas Kappes Heather Barry Kappes Serdar Karabatı Haruna Karick Victor N. Keller Anna Kende Nicolas Kervyn Goran Knežević Carrie Kovacs Lacy E. Krueger German Kurapov Jaime L. Kurtz Daniël Lakens Ljiljana B. Lazarević Carmel Levitan Neil A. Lewis Samuel Lins Esther Maassen

We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples comprising 15,305 total participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), fifteen (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict...

10.31234/osf.io/9654g preprint EN 2018-11-19

Self-report scales are widely used in psychology to compare means of latent constructs across groups, experimental conditions, or time points. However, for these comparisons to be meaningful and unbiased, the scales must demonstrate measurement invariance (MI) across the compared time points or (experimental) groups. MI testing determines whether the constructs are measured equivalently across groups and time, which is essential for valid comparisons. We conducted a systematic review of 426 articles with openly available data to (1) examine common practices...

10.31234/osf.io/n3f5u preprint EN 2022-07-26

Purpose: Intensive longitudinal studies, in which participants complete questionnaires multiple times a day over an extended period, are increasingly popular in the social sciences in general and in quality-of-life research in particular. The intensive longitudinal methods allow for studying the dynamics of constructs (e.g., how much patient-reported outcomes vary across time). These methods promise higher ecological validity and lower recall bias than traditional methods that question participants only once, since the high frequency of measurements means their...

10.1007/s11136-024-03678-0 article EN cc-by Quality of Life Research 2024-06-13

Purpose: Intensive longitudinal studies, in which participants complete questionnaires multiple times a day over an extended period, are increasingly popular in the social sciences in general and in quality-of-life research in particular. The intensive longitudinal methods allow for studying the dynamics of constructs (e.g., how much patient-reported outcomes (PROs) vary across time). These methods promise higher ecological validity and lower recall bias than traditional methods that question participants only once, since the high frequency of measurements means their...

10.31234/osf.io/uat5r preprint EN 2023-06-02

In determining the need to directly replicate, it is crucial to first verify the original results through independent reanalysis of the data. Original results that appear erroneous and cannot be reproduced by reanalysis offer little evidence to begin with, thereby diminishing the need to replicate. Sharing data and analysis scripts is essential to ensure reproducibility.

10.31234/osf.io/fuzkh article EN 2018-01-22

To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses, based on the information given in the meta-analyses, and whether recomputations altered the overall results of the meta-analyses.

10.31234/osf.io/g5ryh preprint EN 2019-10-17

Given the many benefits of sharing data, an increasing number of psychological researchers publicly share the data underlying their research via online repositories. While this is undoubtedly a positive scientific development that enables greater verification and re-use, it is important to protect the interests and confidentiality of participants while doing so. This is particularly relevant when studying sensitive topics, for example those related to health, religion, politics, or sexual behaviors. We systematically assessed the risk...

10.31234/osf.io/ybzu9 preprint EN 2022-03-29

10.1016/j.jarmac.2019.06.007 article EN Journal of Applied Research in Memory and Cognition 2019-09-01

To improve the way in which research is currently conducted and communicated, Hartgerink and van Zelst (2018) have recently suggested a new communication infrastructure. In their vision, research output is communicated continuously, "as-you-go," as opposed to the current system, where output is communicated only after the entire research cycle has been completed (i.e., "after-the-fact"). This offers a host of advantages, including more transparency and more rapid dissemination of output. To examine the viability of this system, I aim to build a functioning implementation of the proposed...

10.31222/osf.io/289p5 preprint EN 2019-07-17

This is our commentary on the "What Is IQ? Life Beyond 'General Intelligence'" paper by Kovacs and Conway (2019; https://doi.org/10.1177/0963721419827275). We advocate for a latent variable approach in experimental research, and illustrate both the conceptual and statistical benefits of using this approach in the context of research on cognition. Benefits include the availability of more information about the constructs being measured, the ability to check for specific or general effects of covariates, and larger power to detect mean differences between groups.

10.31234/osf.io/gq6xs preprint EN 2019-09-26