Keeping Users Engaged During Repeated Administration of the Same Questionnaire: Using Large Language Models to Reliably Diversify Questions
Longitudinal Study
Criterion validity
Depression
Repeated measures design
DOI:
10.48550/arXiv.2311.12707
Publication Date:
2023-11-21
AUTHORS (6)
ABSTRACT
Standardized, validated questionnaires are vital tools in research and healthcare, offering dependable self-report data. Prior work has revealed that virtual agent-administered questionnaires are almost equivalent to self-administered ones in an electronic form. Despite being an engaging method, repeated use in longitudinal or pre-post studies can induce respondent fatigue, impacting data quality via response biases and decreased response rates. We propose using large language models (LLMs) to generate diverse questionnaire versions while retaining good psychometric properties. In a longitudinal study, participants interacted with our agent system and responded daily for two weeks to one of the following questionnaires: a standardized depression questionnaire, question variants generated by LLMs, or variants accompanied by LLM-generated small talk. The responses were compared to a validated depression questionnaire. Psychometric testing revealed consistent covariation between the external criterion and the focal measure administered across the three conditions, demonstrating the reliability and validity of the LLM-generated variants. Participants found the variants significantly less repetitive than repeated administrations of the same questionnaire. Our findings highlight the potential of LLM-generated question variants to invigorate questionnaires, fostering engagement and interest without compromising their validity.
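The abstract does not detail the generation pipeline, but the core technique it describes, prompting an LLM to rephrase standardized items while preserving their meaning and response scale, can be sketched as follows. This is a minimal sketch assuming the OpenAI chat completions API; the model name, prompt wording, and PHQ-style example items are illustrative stand-ins, not the authors' actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# PHQ-style depression item stems used here only as illustrative inputs.
ITEMS = [
    "Little interest or pleasure in doing things",
    "Feeling down, depressed, or hopeless",
]

def generate_variant(item: str) -> str:
    """Ask the LLM to paraphrase one questionnaire item while preserving
    its clinical meaning, time frame, and response-scale compatibility."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; the paper's exact model is not given here
        messages=[
            {"role": "system",
             "content": ("Rephrase the following depression questionnaire item. "
                         "Keep the same meaning and time frame, and keep it "
                         "answerable on a 0-3 frequency scale. "
                         "Return only the rephrased item.")},
            {"role": "user", "content": item},
        ],
        temperature=0.9,  # higher temperature encourages varied phrasings
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for item in ITEMS:
        print(f"{item!r} -> {generate_variant(item)!r}")
```

Sampling at a higher temperature yields different phrasings across daily administrations; as the study emphasizes, each variant would still need psychometric screening before use.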
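The validity check the abstract summarizes, consistent covariation between the focal measure and an external criterion across conditions, amounts at its simplest to comparing correlations. The sketch below uses fabricated illustrative numbers and a plain Pearson correlation; it is a stand-in for, not a reproduction of, the paper's psychometric analysis.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical daily totals for one condition: the agent-administered focal
# measure and a validated external criterion. Values are made up.
focal = np.array([6, 7, 5, 8, 9, 6, 7, 10, 8, 7, 6, 9, 8, 7])
criterion = np.array([5, 8, 6, 9, 10, 6, 8, 11, 9, 8, 7, 10, 9, 8])

# Criterion validity: the focal measure should covary strongly with the
# established criterion. Similar r values across the three conditions
# (standard items, LLM variants, variants plus small talk) would support
# the validity of the LLM-generated variants.
r, p = pearsonr(focal, criterion)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```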