Responsible Research Assessment II: A specific proposal for hiring and promotion in psychology
DOI:
10.31234/osf.io/5yexm_v2
Publication Date:
2025-05-05T13:25:26Z
AUTHORS (6)
ABSTRACT
Traditional metric indicators of scientific productivity (e.g., journal impact factor; h-index) have been heavily criticized for being invalid and for fueling a culture that focuses on the quantity, rather than the quality, of a person's output. There is now widespread demand for viable alternatives to current academic evaluation practices. In a previous report, we laid out four basic principles for more responsible research assessment in hiring and promotion processes (Schönbrodt et al., 2024). The present paper offers a specific proposal for how these principles may be implemented in practice: We argue in favor of broadening the range of relevant contributions and propose a set of concrete quality criteria (including a ready-to-use online tool) for articles. These criteria are intended to be used primarily in the first phase of the process. Their function is to help establish a minimum threshold of methodological rigor (both empirical and theoretical) that candidates need to pass in order to be considered further for hiring or promotion. In contrast, the second phase of the process addresses the actual content of candidates' output and necessarily relies on narrative means of assessment. The debate over ways of replacing current indicators with ones that relate more closely to research quality continues. Its course and outcome will depend on the willingness of researchers to get involved and shape it.