Saul Howard

LLMs and intelligence analysis: tools not oracles

In the classic “Psychology of Intelligence Analysis”, Richard Heuer argued that good analysis must make its own reasoning visible.

Therefore, he said, we should encourage products that “clearly delineate assumptions, chains of inference and specify degree and source of uncertainty.”

This is a problem for AI tech! An LLM is itself a big ball of undeclared assumptions and unquantified uncertainty. Naive use of LLM tech will only degrade our analyses.

If you hoped that AI would replace human analysts by slurping up OSINT at scale and spitting out actionable analyses on demand, I’m sorry, but that won’t work. We must think of AI technology as providing “tools, not oracles”.

Instead, we can take advantage of the incredible (and rapidly improving) LLM technology to build tools. The actionable insights will always come from human analysis. AI tools can supercharge our ability to open our minds: helping us explore possibilities and structure our knowledge, providing guide ropes for creativity.
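As a minimal sketch of what “tool, not oracle” might look like in practice: rather than asking a model for conclusions, a tool can ask it to widen the hypothesis space while forcing Heuer-style transparency. Everything here is hypothetical (the function name, the template wording, the example question); it only builds the prompt, leaving the choice of model and the final judgment to the human analyst.

```python
# Hypothetical prompt-builder for an analyst's assistant. It asks an LLM
# for competing hypotheses, each with its assumptions, chain of inference,
# and degree/source of uncertainty delineated -- per Heuer -- and
# explicitly withholds the conclusion for the human analyst.

HEUER_TEMPLATE = """You are assisting a human intelligence analyst.
Question: {question}

Generate {n} competing hypotheses. For each hypothesis, list:
- Assumptions: every assumption it rests on
- Chain of inference: the reasoning steps, one per line
- Uncertainty: degree (low/medium/high) and its source

Do not pick a winner. The analyst, not you, draws the conclusion."""

def build_analysis_prompt(question: str, n: int = 3) -> str:
    """Return a prompt that asks for structured hypotheses, not answers."""
    return HEUER_TEMPLATE.format(question=question, n=n)

# Example (the question is invented for illustration):
prompt = build_analysis_prompt("Why has shipping traffic at port X dropped?")
```

The design choice is the point: the tool's output is raw material for analysis, clearly delineating assumptions and uncertainty, never a finished judgment.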