Within the past few years, policy officials and experts have pointed to potential new security threats from the convergence of artificial intelligence (AI) and advances in the life sciences and biotechnology. For example, AI is being incorporated into biological design tools to design new biological components and chemical molecules; some worry that these tools could be used to design new types of biological weapons. Automated, AI-enabled cloud laboratories have been identified as a possible pathway to remote, on-demand bioweapons production in the future. Others have pointed to the rapid advancement and diffusion of large language models (LLMs), which could upskill a wider range of actors with the knowledge needed to work with dangerous pathogens and launch biological attacks. At the same time, some are advocating for the use of AI to counter bioweapons threats. In July 2025, US President Donald Trump called for the creation of a new AI-enabled verification system to identify suspect activities in contravention of the Biological and Toxin Weapons Convention (BTWC). These varying perspectives reveal a divergence of opinion on the risks and benefits of the current and future AI-biotech convergence. We are still in the early days of these technological developments: AI could become a concern for bioweapons development, but that possibility remains distant. What is clear now, however, is that the reality of this AI-biotech convergence is complicated and will take time to study and sort out. One area we can examine where there is present utility (and known concerns) is AI for bio-threat assessment.

Regarding the BTWC, AI systems can be very useful in gathering and analyzing the data that states parties must report under their obligations to the convention, and in confirming the accuracy of the data collected. AI tools could also be used by states parties to gather and analyze a larger trove of data regarding potential suspect facilities. For example, AI systems could be used for rapid data mining of scientific publications and other open-source and government data (e.g., procurement and financial records; emissions, effluent, or energy data; video surveillance and satellite imagery; patent information) to identify indicators of illicit research activities. AI-enabled systems could also be used for disease surveillance: gathering and quickly processing data on outbreaks indicative of possible biological attacks, providing early warning, and rapidly disseminating information to public health officials and members of the public on preventative or protective measures. Despite these beneficial applications of AI for bio-threat detection, it is important to remember that AI systems work with data that can be quantified or codified; they are not useful for evaluating the tacit dimensions of weapons work that have been shown to be important in former bioweapons programs of state and non-state actors. AI systems are also limited in their ability to infer intent, i.e., a state or non-state actor's motivation to develop and maintain a biological weapons program.

In addition, AI systems have limitations and vulnerabilities that could be overlooked or exploited to generate flawed information. For example, AI tools can be used to spread misinformation and disinformation, as has been observed with Russian accusations of suspected biological weapons activities by the United States and its allies and partners (with no concrete evidence confirming illicit activity). AI systems are vulnerable to data poisoning, in which nefarious actors corrupt the data these systems rely on, leading to inaccurate conclusions. AI systems are also subject to hallucinations, in which an LLM reports data or draws conclusions that are nonsensical or inaccurate. The accuracy of LLM outputs relies heavily on the integrity of the underlying data; missing, inaccurate, or corrupted data can therefore produce error-laden outputs that mischaracterize the threat. Thus, the outputs of AI-enabled bio-threat assessments are only as good as the inputs.

Now and into the future, we need to carefully consider the strengths, weaknesses, and limitations of AI systems for bio-threat assessment. The most powerful adoption and use of AI is in human-machine teaming, which captures the strengths of humans and machines while compensating for the limitations of each. To date, however, most attention in bio-threat assessment has focused on the AI technology itself. This reflects a familiar tendency to reach for a technological fix rather than do the harder work of thinking holistically about how to skillfully combine the strengths of humans and machines to produce better data and assessments about threats emanating from the convergence of AI, the life sciences, and biotechnology. We need to think carefully about how to use AI systems for human benefit in bio-threat assessment now and into the future.

Kathleen M. Vogel
Professor
School for the Future of Innovation in Society
Arizona State University
USA
Kathleen.Vogel@asu.edu