Human-Centered Privacy Research in the Age of Large Language Models
DOI:
10.48550/arxiv.2402.01994
Publication Date:
2024-02-02
AUTHORS (6)
ABSTRACT
The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns. To date, research on these concerns has been model-centered: exploring how LLMs lead to risks like memorization, or how they can be used to infer personal characteristics about people from their content. We argue that there is a need for more research focusing on the human aspect of these issues: e.g., how design paradigms affect users' disclosure behaviors, users' mental models of and preferences for privacy controls, and tools and artifacts that empower end-users to reclaim ownership over their data. To build usable, efficient, and privacy-friendly systems powered by models with imperfect privacy properties, our goal is to initiate discussions and outline an agenda for conducting human-centered research on privacy issues in LLM-powered systems. This Special Interest Group (SIG) aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, and any other related domains to share their perspectives and experiences on this problem, and to help the community establish a collective understanding of the challenges, opportunities, methods, and strategies for collaborating with researchers outside HCI.