When the internet first came along, it immediately transformed intelligence collection and analysis. Suddenly, there was a new source of material, unfiltered through governments and newspapers. Over the years the floodgates opened wider and wider, until the amount of material became unmanageable by traditional analysis techniques.
Big data is a loaded term. It means something different to a company like Google or Facebook, which analyses its customers within a strictly limited domain, attributing clicks or ranking search results. Statistical methods are effective for tasks like these.
In intelligence, there’s a hope that we can use AI to make sense of the deluge of information. But we must be careful. Our best AIs today will give you a facsimile of a person from 2024, but past performance is not indicative of future results. After all, an AI trained on data up to and including the 18th century will give you a facsimile of an 18th century person, but won’t tell you anything about what happens in the 19th.
Perhaps the biggest problem we face is a poverty of imagination. We have fallen into narrative-based thinking, modelling adversaries and allies as predictable, economically driven automatons, even as we are hit by events we failed to predict, or were often told were impossible: everything from Trump and Brexit to Ukraine and Gaza. There is a real danger that a naive approach to integrating AI decision support will weaken our ability to understand reality, wrapping us in a fantasy cocoon of statistically based reasoning about the world.
All is not lost. We can use AI to help manage the flood of source material, but not simply by training models on the data and expecting them to spit out predictions. Instead, we should build on frameworks designed to help human analysts make sense of human behaviour, frameworks used today to great effect in the field and in training.
At Deep Drama, one approach is to use LLMs to automate the generation of Confrontation Analysis (Drama Theory) models from intelligence sources. With those models as a base, other LLMs can provide insights, simulations and decision support tied to a causal chain of reasoning grounded in the model, not in a black-box AI.
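To make this concrete, here is a minimal, illustrative sketch of what a machine-readable Confrontation Analysis model might look like once an LLM has extracted it from sources. The class and option names are assumptions for illustration, not Deep Drama's actual schema: an options board records each party's yes/no stance on a set of options, and a helper surfaces the options on which stated positions clash, the raw material for identifying dilemmas.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: structure and names are assumptions,
# not Deep Drama's actual model format.

@dataclass
class OptionsBoard:
    """A Confrontation Analysis 'card table': a list of options and
    each party's position (True = take the option, False = do not)."""
    options: list[str]
    positions: dict[str, dict[str, bool]] = field(default_factory=dict)

    def set_position(self, party: str, position: dict[str, bool]) -> None:
        # Every party must declare a stance on every option on the board.
        missing = set(self.options) - set(position)
        if missing:
            raise ValueError(f"{party} has no stance on: {sorted(missing)}")
        self.positions[party] = position

    def points_of_conflict(self) -> list[str]:
        """Options on which the parties' stated positions disagree."""
        return [
            opt for opt in self.options
            if len({pos[opt] for pos in self.positions.values()}) > 1
        ]

# Hypothetical example: two parties agreeing on a ceasefire but
# clashing over withdrawal.
board = OptionsBoard(options=["ceasefire", "withdraw", "sanctions"])
board.set_position("Alpha", {"ceasefire": True, "withdraw": True, "sanctions": False})
board.set_position("Beta",  {"ceasefire": True, "withdraw": False, "sanctions": False})
print(board.points_of_conflict())  # → ['withdraw']
```

Because the model is explicit data rather than weights in a network, every downstream insight a supporting LLM offers can be traced back to a specific row of the board, which is what makes the chain of reasoning auditable.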
The goal of conflict analysis must be to understand our adversaries. In a complex, multi-polar world this is a daunting task, and we can and should use AI to help. However, real help does not come as predictions from a black box. At Deep Drama we are building "AI intelligence analysts" that sit alongside decision makers, keeping track of the complexity and providing guardrails for decisions as they are made.