A Call for Built-in Biosecurity Safeguards for Generative AI Tools
DOI: 10.20944/preprints202503.1761.v1
Publication Date: 2025-03-28
AUTHORS (11)
ABSTRACT
The rapid adoption of generative AI (GenAI) in biotechnology offers immense potential but also raises serious safety concerns. AI models for protein engineering, genome editing, and molecular synthesis can be misused to enhance viral virulence, design toxins, or modify human embryos, while ethical and policy discussions lag behind technological advances. This Correspondence calls for proactive, built-in, AI-native safeguards within GenAI tools. With further research and development, emerging AI safety technologies (watermarking, alignment, anti-jailbreak methods, and unlearning) can complement governance policies and provide scalable biosecurity solutions. We also stress the global community's role in researching, developing, testing, and implementing these measures to ensure the responsible deployment of GenAI in biotechnology.
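
To make the watermarking idea concrete, the sketch below applies a well-known keyed "green-list" scheme from the text-generation literature (Kirchenbauer et al., 2023) to a toy nucleotide vocabulary. It is a minimal illustration under stated assumptions, not the authors' proposal: the vocabulary, secret key, and parameters (GAMMA, DELTA) are all hypothetical, and a real biosequence watermark would additionally need to preserve biological function while remaining detectable.

"""
Minimal sketch of a keyed green-list watermark for a generative
sequence model, adapted to a toy nucleotide vocabulary. All names
and parameters are illustrative assumptions.
"""
import hashlib
import math
import random

VOCAB = "ACGT"            # toy nucleotide vocabulary (hypothetical)
SECRET_KEY = b"demo-key"  # hypothetical secret shared by generator and detector
GAMMA = 0.5               # fraction of vocabulary placed on the green list
DELTA = 4.0               # logit boost applied to green tokens during sampling


def green_list(prev_token: str) -> set[str]:
    """Derive a keyed pseudo-random green list from the previous token."""
    digest = hashlib.sha256(SECRET_KEY + prev_token.encode()).digest()
    rng = random.Random(digest)
    k = max(1, int(GAMMA * len(VOCAB)))
    return set(rng.sample(VOCAB, k))


def generate(length: int, seed: int = 0) -> str:
    """Sample a watermarked sequence from a uniform toy 'model'."""
    rng = random.Random(seed)
    seq = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        greens = green_list(seq[-1])
        # Uniform base logits; boost green tokens by DELTA before softmax.
        logits = {t: (DELTA if t in greens else 0.0) for t in VOCAB}
        total = sum(math.exp(v) for v in logits.values())
        r, acc = rng.random() * total, 0.0
        for t, v in logits.items():
            acc += math.exp(v)
            if r <= acc:
                seq.append(t)
                break
    return "".join(seq)


def detect(seq: str) -> float:
    """Return a z-score for the watermark hypothesis (high = watermarked)."""
    hits = sum(1 for a, b in zip(seq, seq[1:]) if b in green_list(a))
    n = len(seq) - 1
    expected, var = GAMMA * n, GAMMA * (1 - GAMMA) * n
    return (hits - expected) / math.sqrt(var)


if __name__ == "__main__":
    wm = generate(300)
    plain = "".join(random.Random(1).choice(VOCAB) for _ in range(300))
    print(f"watermarked z = {detect(wm):.1f}, unwatermarked z = {detect(plain):.1f}")

A notable property of this family of safeguards is that the detector needs only the secret key and the sequence, not access to the generating model, which is what could make watermark screening deployable at points such as DNA synthesis providers.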