Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs

DOI: 10.48550/arxiv.2401.00905 Publication Date: 2024-01-01
ABSTRACT
The emergence of large language models (LLMs) has significantly accelerated the development of a wide range of applications across various fields. There is a growing trend toward building specialized platforms on top of LLMs, such as the newly introduced custom GPTs by OpenAI. While custom GPTs provide functionalities like web browsing and code execution, they also introduce significant security threats. In this paper, we conduct a comprehensive analysis of the security and privacy issues arising from the custom GPT platform. Our systematic examination categorizes potential attack scenarios into three threat models based on the role of the malicious actor, and identifies the critical data exchange channels in custom GPTs. Utilizing the STRIDE threat modeling framework, we identify 26 potential attack vectors, with 19 being partially or fully validated in real-world settings. Our findings emphasize the urgent need for robust security and privacy measures in the custom GPT ecosystem, especially in light of the forthcoming launch of the official GPT store.
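The STRIDE-based categorization described above can be illustrated with a minimal sketch. The channel names and the threats assigned to them below are hypothetical examples for illustration only, not the paper's actual 26 attack vectors.

```python
from enum import Enum

class Stride(Enum):
    # The six STRIDE threat categories
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# Hypothetical data-exchange channels in a custom GPT, each mapped to
# example STRIDE threats (illustrative assumptions, not the paper's results).
CHANNELS = {
    "user_prompt": [Stride.SPOOFING, Stride.INFORMATION_DISCLOSURE],
    "web_browsing": [Stride.TAMPERING, Stride.INFORMATION_DISCLOSURE],
    "code_execution": [Stride.ELEVATION_OF_PRIVILEGE, Stride.DENIAL_OF_SERVICE],
}

def threats_for(channel: str) -> list[str]:
    """Return the STRIDE category names mapped to a data channel."""
    return [t.value for t in CHANNELS.get(channel, [])]

print(threats_for("code_execution"))
```

In this style of analysis, each channel is examined against all six categories, and a channel/category pair becomes a candidate attack vector when a plausible attack scenario exists for it.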