Artificial intelligence (AI) is rapidly becoming ubiquitous in society and on the battlefield, and is poised to do the same in the intelligence analysis space. The emergence of large language models as active collaborators in a broad array of tasks over the last several years has left governments and companies scrambling to leverage AI models as quickly as possible. While we have not yet achieved artificial general intelligence (nor the independent, sentient systems portrayed in film), the rapid advancement of machine learning and large language model (LLM) techniques has produced systems that appear intelligent enough to their users. This appearance has accelerated the adoption of these tools across an ever-growing set of missions and made their application to a wide range of existing data types possible.
In Ukraine, artificial intelligence has been used to process data from the battlefield and make targeting operations more efficient. For example, the AI-powered GIS Arta software is used for rapid targeting of enemy artillery. In Israel, AI models like “The Gospel” and “Fire Factory” assist in identifying and tracking human targets and automating strike recommendations. In the United States, investments are being made into better understanding and deploying LLMs to support intelligence analysts in their workflows, performing tasks such as identifying logic gaps or masking the identity of sources to widen the distribution of analytical products. Businesses are similarly deploying machine learning techniques on their available data to advance their corporate intelligence and to engage with customers (resulting in the generic “AI-enabled” branding). While notable challenges persist in these developments, such as AI hallucinations, the inability to explain how an output was reached, data verifiability, and protections against malicious data injection, among others, these systems are being rapidly sought out and implemented.
Setting aside the moral and ethical issues these systems raise, two problems will persist regardless of how much the underlying algorithms improve. The first is the cognitive offload of analytical work onto artificial systems, which erodes the analyst's rigor over time. Early research already suggests that heavy reliance on these tools can impede the development and maintenance of critical thinking skills. As such, policymakers will need to walk a tightrope. These tools cannot be ignored; their integration will be a requirement, as other countries are working to capture the same advantages. However, as these tools become more ubiquitous in the analyst toolbox, they risk degrading the very capacities that make analysts valuable. Tool development and deployment will need to be selective and deliberate, supporting the analyst without replacing their critical faculties.
Second, data will emerge as the next major hurdle in AI. Currently, AI tools are applied to existing datasets or layered onto existing sensors to enhance processing capacity. This “low-hanging fruit” has been the focus because of its easy accessibility. To continue extracting full value from AI, deliberate strategies will need to be implemented to generate data specifically for AI models. For example, in the conflict in Ukraine, sensors were deployed to capture data from specific areas to increase situational awareness of movements throughout the country. This deliberate planning is what made the AI tools currently deployed feasible. As intelligence agencies implement these tools, they will be able to draw on existing data streams, but they will also need to identify methods for collecting or transforming data with AI requirements in mind. Without strategic planning, the AI tool ecosystem is more likely to resemble a hand-carved woodworking shop, an assortment of bespoke tools for individual tasks, than a consolidated platform of integrated data, more akin to an automated manufacturing line, where each component feeds into the next through shared interfaces and a single governing workflow.
Artificial intelligence tools released to the public over the last several years have captured our imaginations and spurred a new age of AI exploration and integration. However, we are quickly exhausting the low-hanging fruit: the easily accessible data that has enabled rapid deployment throughout society. We will also face challenges in implementing these tools while maintaining a vibrant analytical workforce. With deliberate planning and the right investments, the next generation of AI can supplement human judgment rather than distort it.
James Madison University
United States of America

