Iterative LLM-Guided Sampling and Expert-annotated Benchmark Corpus for Harmful Suicide Content Detection (Preprint)

DOI: 10.2196/preprints.73725 Publication Date: 2025-03-25T20:12:21Z
ABSTRACT
BACKGROUND Harmful suicide content on the internet poses significant risks, as it can induce suicidal thoughts and behaviors, particularly among vulnerable populations. Despite global efforts, existing moderation approaches remain insufficient, especially in high-risk regions such as South Korea, which has the highest suicide rate among OECD countries. Previous research has primarily focused on assessing the suicide risk of individuals rather than the harmfulness of the content itself, highlighting a gap in automated detection systems for harmful suicide content.

OBJECTIVE In this study, we aimed to develop an AI-driven system for classifying online suicide-related content into five levels: illegal, harmful, potentially harmful, harmless, and non-suicide-related. We also constructed a multi-modal benchmark dataset with expert annotations to improve content moderation and to help AI models detect and regulate harmful content more effectively.

METHODS We collected 43,244 user-generated posts from various online sources, including social media, Q&A platforms, and online communities. To reduce the workload on human annotators, GPT-4 was used for pre-annotation, filtering and categorizing content before manual review by medical professionals. A task description document ensured consistency in classification. The result was a benchmark dataset of 452 manually labeled entries, released in both Korean and English versions, to support AI-based moderation. We also evaluated zero-shot and few-shot learning to determine the best AI approach for detecting harmful content.

RESULTS On the multi-modal benchmark, GPT-4 achieved the highest F1 scores (66.46 for illegal and 77.09 for harmful content detection). Supplying image descriptions improved classification accuracy, while using raw images directly slightly decreased performance. Few-shot learning substantially enhanced detection, demonstrating that small but high-quality datasets can improve AI-driven moderation. However, translation challenges were observed, particularly for suicide-related slang and abbreviations, which were sometimes conveyed inaccurately in the English benchmark.

CONCLUSIONS This study provides a high-quality benchmark for AI-based suicide content detection and shows that LLMs can effectively assist in content moderation while reducing the burden on human moderators. Future work will focus on enhancing real-time detection and improving the handling of subtle or disguised harmful content.
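To make the LLM pre-annotation step more concrete, the sketch below shows one way a post could be sent to a chat model for zero-shot classification into the five levels named in the abstract. It is an illustrative assumption, not the authors' actual pipeline: the prompt wording, the fallback label, the classify_post helper, and the use of the OpenAI Python SDK with a "gpt-4" model name are all placeholders.

# Minimal zero-shot classification sketch (assumed setup, not the study's exact prompts).
# Requires the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

# Five-level scheme described in the abstract.
LABELS = ["illegal", "harmful", "potentially harmful", "harmless", "non-suicide-related"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_post(text: str) -> str:
    """Ask the model to assign one of the five harmfulness levels to a post."""
    prompt = (
        "Classify the following online post into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n"
        "Reply with the category name only.\n\n"
        f"Post:\n{text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",   # the study reports using GPT-4; any chat model would work here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output for pre-annotation
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to a conservative label if the reply is not one of the expected categories.
    return answer if answer in LABELS else "potentially harmful"

# Example usage:
# print(classify_post("Example user-generated post text"))

A few-shot variant would simply prepend a handful of expert-labeled example posts to the prompt; the abstract reports that this markedly improved detection.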