Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding
KEYWORDS
Curiosity
Qualitative analysis
Task Analysis
Qualitative property
DOI:
10.1145/3581754.3584136
Publication Date:
2023-03-26T22:12:25Z
AUTHORS (5)
ABSTRACT
Qualitative analysis of textual contents unpacks rich and valuable information by assigning labels to the data. However, this process is often labor-intensive, particularly when working with large datasets. While recent AI-based tools demonstrate utility, researchers may not have readily available AI resources and expertise, let alone be challenged by the limited generalizability of those task-specific models. In this study, we explored the use of large language models (LLMs) in supporting deductive coding, a major category of qualitative analysis where researchers use pre-determined codebooks to label the data into a fixed set of codes. Instead of training task-specific models, a pre-trained LLM could be used directly for various tasks without fine-tuning through prompt learning. Using a curiosity-driven questions coding task as a case study, we found that, by combining GPT-3 with expert-drafted codebooks, our proposed approach achieved fair to substantial agreements with expert-coded results. We lay out challenges and opportunities in using LLMs to support qualitative coding and beyond.
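The prompt-learning setup described in the abstract (combining an expert-drafted codebook with an LLM for deductive coding) can be sketched as follows. This is an illustrative sketch only: the codebook entries, function name, and prompt wording are invented assumptions, not the authors' actual codebook or prompts, and the string produced would still need to be sent to an LLM such as GPT-3 via its API.

```python
# Hypothetical example: build a deductive-coding prompt from an
# expert-drafted codebook. The codes, definitions, and sample
# question below are illustrative, not from the paper.

# A codebook maps each code name to its definition.
CODEBOOK = {
    "factual": "The question seeks a verifiable fact or definition.",
    "causal": "The question asks why or how something happens.",
    "hypothetical": "The question poses an imagined or counterfactual scenario.",
}

def build_prompt(codebook: dict, question: str) -> str:
    """Format the codebook and one data item into a single prompt string,
    asking the model to assign exactly one code from the fixed set."""
    lines = ["Assign exactly one code to the question below.", "", "Codebook:"]
    for code, definition in codebook.items():
        lines.append(f"- {code}: {definition}")
    lines += ["", f"Question: {question}", "Code:"]
    return "\n".join(lines)

# The resulting string would be submitted to a pre-trained LLM;
# the model's completion is taken as the assigned code.
prompt = build_prompt(CODEBOOK, "Why does ice float on water?")
print(prompt)
```

Because the codebook is supplied in the prompt rather than learned by fine-tuning, swapping in a different codebook requires no model training, which is the generalizability advantage the abstract highlights.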