In their working lives, most people don’t make decisions in complex situations of conflict and cooperation. Most companies instinctively, or intentionally, steer away from situations of dramatic conflict. Business concerns itself with win-win situations: I sell, you buy.
Many roles do confront dramatic conflict. If you work in government, the military, healthcare, education, or policing, conflict is inherent to the job. You are used to making decisions with unreliable information, questionable actors and uncertain outcomes.
Could the corporate world do more here: is it possible to have a dramatic relationship with your customers, one where risk, deception and cooperation are all possible outcomes? I believe the answer is yes. Drama is necessary if we want to take full advantage of LLM tech.
Software interfaces are moving beyond the transactional model embodied in the push-button GUIs of every app, a model that is itself a direct inheritance from the first mechanical machines.
LLM tech gives us the possibility of creating new interfaces that accept, and work within, the dramatic possibilities of language. We can build software that forms goals, reasons, and argues. It can be in cooperation or conflict with its users. Thinking about software in these terms is essential as we start to build LLM-powered “agents”: dramatic intelligence is needed wherever we don’t expect to program every step of the software’s lifetime. Finally, because no incumbent company will allow itself this kind of relationship with its customers, it presents a huge opportunity for a new kind of LLM-native enterprise.
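To make that a little more concrete, here is a minimal sketch of what a “dramatic” agent could look like in code: a chat loop whose system prompt carries the agent’s own goal, so that each user turn can end in cooperation, negotiation, or refusal rather than simple compliance. It assumes the OpenAI Python client purely for illustration; the model name, the booking scenario and the price floor are invented stand-ins, not anything prescribed above.

```python
# A minimal sketch of a "dramatic" agent: software with a goal of its own that
# can cooperate with the user, haggle, or decline. Assumes the OpenAI Python
# client (pip install openai); the scenario and numbers are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The agent's goal lives in the system prompt, so every reply is weighed
# against that goal rather than simply complying with the user's request.
AGENT_GOAL = (
    "You are the booking agent for a consulting workshop. "
    "Your goal is to fill next Tuesday's slot at the best possible price, "
    "and you may not go below $400. You can negotiate, counter-offer, "
    "or politely decline; you are not obliged to agree with the user."
)

def run_negotiation() -> None:
    messages = [{"role": "system", "content": AGENT_GOAL}]
    while True:
        user_turn = input("You: ").strip()
        if user_turn.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_turn})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"Agent: {reply}")  # may agree, haggle, or refuse

if __name__ == "__main__":
    run_negotiation()
```

The point is not the specific prompt but the shape of the loop: the software holds a goal of its own and treats each turn as a decision, which is exactly what a transactional, push-button interface can never do.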