Saul Howard

Decision making needs explainable AI

Decision making is being transformed by AI technology. LLMs offer a new capability, allowing us to model human behaviour quantitatively in ways that were not possible before. However, using LLMs to drive a purely statistical approach to decision making will only end in disillusionment.

Those who invent a technology are often not best placed to take advantage of it. The current advances in AI were born from statistics, and it's a natural impulse to use them for "statistical" approaches. For example, one might try to use the ability to mine vastly larger datasets to predict human behaviour from analysis of past behaviour. This statistical black-box approach is wrong. Human behaviour is inherently unpredictable. Decision making is about choices, not narratives.

Instead, we can use LLMs to supercharge our modelling of human behaviour. The models are explainable because they use frameworks designed for human use. LLMs can help humans to understand, while the explainable model is always there as a ground truth. We can also use generative AI to make exploratory sallies into possible near-futures. All of this gives decision makers a better understanding of their own choices and those of their counterparties.
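To make the idea concrete, here is a minimal sketch of what an explainable model as ground truth could look like: choices scored against explicit, human-readable weighted criteria, so every recommendation can be traced back to visible inputs. All names, criteria, and weights below are hypothetical illustrations, not a prescribed method; an LLM might propose the ratings or the scenarios, but the model itself stays in the open.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # importance, agreed by the humans using the model

def score(choice: str, ratings: dict, criteria: list) -> float:
    """Weighted sum of per-criterion ratings (0.0-1.0) for one choice."""
    return sum(c.weight * ratings[c.name] for c in criteria)

def explain(choice: str, ratings: dict, criteria: list) -> dict:
    """Per-criterion contribution to the score -- the 'why' behind it."""
    return {c.name: round(c.weight * ratings[c.name], 3) for c in criteria}

# Hypothetical criteria and weights, set by the decision makers themselves.
criteria = [Criterion("cost", 0.5), Criterion("speed", 0.3), Criterion("risk", 0.2)]

# Ratings could be drafted by an LLM or a human analyst; either way they
# sit inside the explainable model, not inside a black box.
ratings = {
    "negotiate": {"cost": 0.9, "speed": 0.4, "risk": 0.7},
    "escalate":  {"cost": 0.3, "speed": 0.8, "risk": 0.2},
}

best = max(ratings, key=lambda ch: score(ch, ratings[ch], criteria))
print(best, explain(best, ratings[best], criteria))
```

The point of the sketch is not the arithmetic but the audit trail: a decision maker can see exactly which criterion drove the recommendation and argue with it, which no statistical black box permits.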

In decision making, there is no black box that will output a perfect strategy. Better choices come from a more complete understanding of the situation. We must be careful never to fool ourselves with far-reaching narratives derived from statistics. In the end, our opponents are creative humans, not trendlines on a graph.