See also: Tacit Knowledge and Biosecurity at https://blog.stephenturner.us/p/tacit-knowledge-biosecurity-rand.
- Sarah R. Carter & Greg Butchello. A Framework for Managed Access to Biological AI Tools. https://www.nti.org/analysis/articles/a-framework-for-managed-access-to-biological-ai-tools/ (2026).
- Brent, R. & McKelvey, G. Contemporary Foundation AI Models Increase Biological Weapons Risk. https://www.rand.org/pubs/perspectives/PEA3853-1.html (2025).
- Dettman, J., Lathrop, E., Attal-Juncqua, A., Nicotra, M. & Berke, A. Prioritizing Feasible and Impactful Actions to Enable Secure AI Development and Use in Biology. Biotechnology & Bioengineering bit.70132 (2026) doi:10.1002/bit.70132.
- Zhou, Y. et al. Benchmarking large language models on safety risks in scientific laboratories. Nat Mach Intell (2026) doi:10.1038/s42256-025-01152-1.
- Mui, A. K. et al. Increasing Gene Synthesis Security Risk Awareness Through Global Engagement and Collaborative Exercise Development. Health Security 23, 449–455 (2025).
- Kolt, N. et al. Legal Alignment for Safe and Ethical AI. Preprint at https://doi.org/10.48550/arXiv.2601.04175 (2026).
- AI Security Institute. Frontier AI Trends Report. https://aisi.s3.eu-west-2.amazonaws.com/Frontier+AI+Trends+Report+-+AI+Security+Institute.pdf (2026).
- Gladue, D. P. & O’Mahony, A. CRISPR Treatments for AI-Designed Synthetic Viruses: Rapid Programmable Countermeasures for Emerging and Engineered Viruses. Viruses 17, (2025).
- Eskandar, K. Artificial intelligence and synthetic biology: biosecurity risks, dual-use concerns, and governance pathways. AI Ethics 6, 66 (2025).
- Beal, J. & Alexanian, T. Creating Enforceable Biosecurity Standards for Nucleic Acid Providers. Engineering Biology 9, e70003 (2025).
- Romero-Severson, E. O., Harvey, T., Generous, N. & Mach, P. M. Measuring skill-based uplift from AI in a real biological laboratory. Preprint at https://doi.org/10.48550/arXiv.2512.10960 (2025).
- Okon, M. B. et al. From pandemics to preparedness: harnessing AI, CRISPR, and synthetic biology to counter biosecurity threats. Front Public Health 13, 1711344 (2025).
- Batalis, S. & Venkatram, V. Use all the tools of the trade: Building a foundation for the next era of biosecurity. Bulletin of the Atomic Scientists 81, 457–461 (2025).
- OpenAI. OpenAI O1 System Card. https://openai.com/index/openai-o1-system-card/ (2024).
- Taylor, J. et al. Auditing Games for Sandbagging. Preprint at https://doi.org/10.48550/arXiv.2512.07810 (2025).
- National Academies of Sciences, Engineering, and Medicine. The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations. (National Academies Press, Washington, D.C., 2025). doi:10.17226/28868.
- Banerjee, A., Tam, E., Dang, C. & Martinez, D. SafeGenie: Erasing Dangerous Concepts from Biological Diffusion Models. (2025).
- Braun, J. et al. Resisting RL Elicitation of Biosecurity Capabilities: Reasoning Models Exploration Hacking on WMDP. (2025).
- O’Brien, K. et al. Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs. (2025).
- Zhang, Z., Zhou, Z., Jin, R., Cong, L. & Wang, M. GeneBreaker: Jailbreak Attacks against DNA Language Models with Pathogenicity Guidance. (2025).
- Liu, A. B. et al. ABC-Bench: An Agentic Bio-Capabilities Benchmark for Biosecurity. (2025).
- Wang, D. et al. Without Safeguards, AI-Biology Integration Risks Creating Future Pandemics. (2025).
- OpenAI. Preparedness Framework. https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf (2025).
- Endy, D. Biosecurity Really: A Strategy for Victory.
- Liu, Z., Dou, G., Tan, Z., Tian, Y. & Jiang, M. Towards Safer Large Language Models through Machine Unlearning. Preprint at https://doi.org/10.48550/arXiv.2402.10058 (2024).
- Nguyen, T. T. et al. A Survey of Machine Unlearning. ACM Trans. Intell. Syst. Technol. 16, 108:1-108:46 (2025).
- Anthropic. Activating AI Safety Level 3 protections. https://www.anthropic.com/news/activating-asl3-protections (2025).
- Tedeschi, S. et al. ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming. Preprint at https://doi.org/10.48550/arXiv.2404.08676 (2024).
- Mazeika, M. et al. HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal. Preprint at https://doi.org/10.48550/arXiv.2402.04249 (2024).
- Wang, Y. et al. MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark. Preprint at https://doi.org/10.48550/arXiv.2406.01574 (2024).
- Hendrycks, D. et al. Measuring Massive Multitask Language Understanding. Preprint at https://doi.org/10.48550/arXiv.2009.03300 (2021).
- Rein, D. et al. GPQA: A Graduate-Level Google-Proof Q&A Benchmark. Preprint at https://doi.org/10.48550/arXiv.2311.12022 (2023).
- Li, N. et al. The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning. Preprint at https://doi.org/10.48550/arXiv.2403.03218 (2024).
- Laurent, J. M. et al. LAB-Bench: Measuring Capabilities of Language Models for Biology Research. Preprint at https://doi.org/10.48550/arXiv.2407.10362 (2024).
- Ivanov, I. BioLP-bench: Measuring understanding of biological lab protocols by large language models. Preprint at https://doi.org/10.1101/2024.08.21.608694 (2024).
- Rivera, S. & Mackelprang, R. Risk Mitigation for Biological Design Tools. https://ebrc.org/publications-risk-mitigation-for-biological-design-tools (2025) doi:10.25498/E4R10S.
- Tarangelo, J., Attal-Juncqua, A., Somani, E., Roberts, D. & Webster, K. Protecting Biological Materials and Services from Misuse: Opportunities for Access Monitoring and Control. https://www.rand.org/pubs/research_reports/RRA4067-1.html (2025).
- Ho, A. Do the biorisk evaluations of AI labs actually measure the risk of developing bioweapons? Epoch AI https://epoch.ai/gradient-updates/do-the-biorisk-evaluations-of-ai-labs-actually-measure-the-risk-of-developing-bioweapons (2025).
- Willis, H., Nicotra, M., Sperisen, B. & Somani, E. Living Framework and Guidelines for Information Disclosure in Biosecurity. Preprint at https://doi.org/10.20944/preprints202510.1427.v1 (2025).
- Zhang, Z. et al. Generative AI for Biosciences: Emerging Threats and Roadmap to Biosecurity. Preprint at https://doi.org/10.48550/arXiv.2510.15975 (2025).
- Peppin, A. et al. The Reality of AI and Biorisk. Preprint at https://doi.org/10.48550/arXiv.2412.01946 (2025).
- Hattoh, G., Ayensu, J., Ofori, N. P., Eshun, S. & Akogo, D. Can Large Language Models Design Biological Weapons? Evaluating Moremi Bio. Preprint at https://doi.org/10.48550/arXiv.2505.17154 (2025).
- Wheeler, N. E. Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks. Front Bioeng Biotechnol 13, 1537471 (2025).
- Frontier Model Forum. Issue Brief: Preliminary Taxonomy of AI-Bio Safety Evaluations. https://www.frontiermodelforum.org/updates/issue-brief-preliminary-taxonomy-of-ai-bio-safety-evaluations/ (2024).
- Dev, S. et al. Toward Comprehensive Benchmarking of the Biological Knowledge of Frontier Large Language Models. https://www.rand.org/pubs/working_papers/WRA3797-1.html (2025).
- Lu, A. B. & Lewis, A. C. F. Governance strategies for biological AI: beyond the dual-use dilemma. Trends in Biotechnology S016777992500397X (2025) doi:10.1016/j.tibtech.2025.09.012.
- Titus, A. Shock Doctrine in the Life Sciences - When Fear Overwhelms Facts. SSRN Scholarly Paper at https://doi.org/10.2139/ssrn.5428795 (2025).
- Manheim, D., Williams, A., Aveggio, C. & Berke, A. Understanding the Theoretical Limits of AI-Enabled Pathogen Design: Insights from a Delphi Study. https://www.rand.org/pubs/research_reports/RRA4087-1.html (2025).
- Patel, A. J. et al. Physical Approaches to Civilian Biodefense: Identifying Potential Preparedness Measures for Challenging Biological Threats. https://www.rand.org/pubs/research_reports/RRA4036-1.html (2025).
- Dettman, J., Lathrop, E., Attal-Juncqua, A., Nicotra, M. & Berke, A. Prioritizing Feasible and Impactful Actions to Enable Secure Artificial Intelligence Development and Use in Biology. https://www.rand.org/pubs/working_papers/WRA4213-1.html (2025).
- Wittmann, B. J. et al. Strengthening nucleic acid biosecurity screening against generative protein design tools. Science 390, 82–87 (2025).
- Moulange, R. Defensive acceleration needs execution, not just good intentions. Securing the Interface https://richardmoulange.substack.com/p/defensive-acceleration-needs-execution (2025).
- Aveggio, C., Patel, A. J., Nevo, S. & Webster, K. Exploring the Offense-Defense Balance of Biology: Identifying and Describing High-Level Asymmetries. https://www.rand.org/pubs/perspectives/PEA4102-1.html (2025).
- Anthropic. Claude Sonnet 4.5 System Card. (2025).
- Attal-Juncqua, A. et al. Biosecurity Governance Across Uncertain Artificial Intelligence Futures: Perspectives from a Side Event on Biosecurity and Frontier AI at the 2025 AI Action Summit. https://www.rand.org/pubs/conf_proceedings/CFA4186-1.html (2025).
- Webster, T. et al. Global Risk Index for AI-Enabled Biological Tools. https://www.rand.org/pubs/external_publications/EP71093.html (2025) doi:10.71172/wjyw-6dyc.
- Pannu, J. et al. Dual-use capabilities of concern of biological AI models. PLOS Computational Biology 21, e1012975 (2025).
- Zhu, T. Mirror of the unknown: should research on mirror-image molecular biology be stopped? Nature 645, 588–591 (2025).
- Del Castello, B. & Willis, H. H. Assessing the Impacts of Technology Maturity and Diffusion on Malicious Biological Agent Development Capabilities: Demonstrating a Transparent, Repeatable Assessment Method. https://www.rand.org/pubs/research_reports/RRA3662-1.html (2025).
- Biodefense in the Age of Synthetic Biology. (National Academies Press, Washington, D.C., 2018). doi:10.17226/24890.
- Feldman, J. & Feldman, T. Resilient Biosecurity in the Era of AI-Enabled Bioweapons. Preprint at https://doi.org/10.48550/arXiv.2509.02610 (2025).
- Pannu, J. et al. Defining Hazardous Capabilities of Biological AI Models: Expert Convening to Inform Future Risk Assessment. https://www.rand.org/pubs/conf_proceedings/CFA3649-1.html (2025).
- Titus, A. Violet Teaming AI in the Life Sciences. Preprint at https://doi.org/10.5281/zenodo.8180396 (2023).
- Coltoff, E. C. & Davis, A. L. Emerging biotechnologies: Dual-use potential and strategies to prevent misuse. MIT Science Policy Review 6, (2025).
- National Security Commission on Emerging Biotechnology. Charting the Future of Biotechnology: An action plan for American security and prosperity. (2025).
- Drexel, B. & Withers, C. AI and the Evolution of Biological National Security Risks. https://www.cnas.org/publications/reports/ai-and-the-evolution-of-biological-national-security-risks (2025).
- Godbold, G. D. et al. The Case for Limiting “Sequences of Concern” to Those with Demonstrated Pathogenic Function. Applied Biosafety apb.2025.0015 (2025) doi:10.1089/apb.2025.0015.