Saul Howard

I’m a hacker in London. I'm the co-founder of Deep Drama.

I create software at Clear Line Tech. I produce VR, AR and mobile content at Catalyst VR.

I write on this site and sometimes on a Substack at 31 Seconds.

From 2016 to 2021, I led a team at Apple building the CloudKit developer platform. Before Apple, I worked at startups around London and Asia, including Hailo.

I produced the feature film Brighton Wok. I work on applications for Drama Theory.

I’m on GitHub, LinkedIn and Twitter.

Posts

  1. LLMs and intelligence analysis: tools not oracles

    In the classic “Psychology of Intelligence Analysis”, Richard Heuer made these points:

    • Human minds are poorly “wired” to deal with uncertainty.
    • Simply learning about our biases doesn’t in fact improve outcomes.
    • Instead, we need tools and techniques for structuring information, challenging assumptions and exploring alternative interpretations.

    Therefore, he said, we should encourage products that “clearly delineate assumptions, chains of inference and specify degree and source of uncertainty.”

    This is a problem for AI tech! An LLM is a big ball of assumptions and uncertainty. Naive use of LLM tech will only degrade our analyses.

    If you hoped that AI would replace human analysts by slurping up OSINT at scale and spitting out actionable analyses on demand, I’m sorry, but that won’t work. We must think of AI technology as providing “tools, not oracles”.

    Instead, we can take advantage of the incredible (and rapidly improving) LLM technology to build tools. The actionable insights will always come from human analysis. The AI tools can supercharge our ability to open our minds, help us to explore possibilities and structure our knowledge to provide guide ropes for creativity.
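
    To make this concrete, here is a minimal sketch of what a Heuer-style structured product could look like in code. It is an illustration under assumptions (the class and field names are mine, not any real tool's API): a representation that delineates assumptions, a chain of inference, and the degree and source of uncertainty.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Assumption:
        claim: str
        source: str        # source of the uncertainty
        confidence: float  # degree of uncertainty, 0.0 to 1.0

    @dataclass
    class Inference:
        conclusion: str
        premises: list[str]  # claims or prior conclusions this step rests on

    @dataclass
    class Analysis:
        question: str
        assumptions: list[Assumption] = field(default_factory=list)
        chain: list[Inference] = field(default_factory=list)

        def audit(self) -> list[str]:
            """Every assumption behind the chain, with its source and confidence."""
            return [f"{a.claim} (source: {a.source}, confidence: {a.confidence})"
                    for a in self.assumptions]
    ```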

  2. Who makes the decisions?

    When the internet first came along, it immediately transformed intelligence collection and analysis. Suddenly, there was a new source of material, unfiltered through governments and newspapers. Over the years, the floodgates opened wider and wider, until the volume of material became unmanageable with traditional analysis techniques.

    Big data is a loaded term. It means something different to a Google or a Facebook, which analyse their customers within a strictly limited domain: proving click attribution or ranking search results. Statistical methods are effective for this.

    In intelligence, there’s a hope that we can use AI to make sense of the deluge of information. But we must be careful. Our best AIs today will give you a facsimile of a person from 2024, but past performance is not indicative of future results. After all, an AI trained on data up to and including the 18th century will give you a facsimile of an 18th century person, but won’t tell you anything about what happens in the 19th.

    Perhaps the biggest problem we face is a poverty of imagination. We have fallen into narrative-based thinking, modelling adversaries and allies as predictable economically-driven automatons, even as we’re hit by events that we failed to predict, or often were told were impossible — everything from Trump and Brexit to Ukraine and Gaza. There’s a real danger that a naive approach to integrating AI decision support will weaken our ability to understand reality by constructing a fantasy cocoon of statistically based reasoning about the world.

    All is not lost. We can take advantage of AI to help us with the flood of source material, but not simply by training models on the data and expecting those models to spit out predictions. Instead, we must look to the work done on frameworks intended to make sense of human behaviour for human analysts, frameworks which are used today to great effect in the field and in training.

    At Deep Drama, one approach is to use LLMs to automate the generation of Confrontation Analysis (Drama Theory) models from intelligence sources. With those models as a base, other LLMs can provide insights, simulations and decision support tied to a causal chain of reasoning from the model, not from a black-box AI.
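
    As a rough sketch of what that pipeline could look like, in illustrative Python (the `llm.complete` client, the function names and the prompts are all assumptions, not Deep Drama's actual API), the key property is that the second stage reasons from the extracted model rather than from the raw source:

    ```python
    def extract_model(source_text: str, llm) -> str:
        """Stage 1: structure the source into an explicit model of actors,
        options and stated positions, instead of asking for conclusions."""
        prompt = ("From the following report, list each actor, the options "
                  "open to them, and their stated position on each option:\n\n"
                  + source_text)
        return llm.complete(prompt)

    def analyse(model: str, question: str, llm) -> str:
        """Stage 2: answers must cite elements of the model, so a human
        analyst can audit the chain of reasoning."""
        prompt = (f"Using only this model:\n{model}\n\n"
                  f"Answer, citing the actors and options you rely on: "
                  f"{question}")
        return llm.complete(prompt)
    ```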

    The goal of conflict analysis must be to understand our adversaries. In a complex, multi-polar world, this is a daunting task, and we can and should use AI to help. However, real help does not come as predictions from a black box. At Deep Drama we are building “AI intelligence analysts” that can sit alongside decision makers, keeping track of the complexity and providing guard rails for decisions as they are made.

  3. The kids are alright

    Social media, mobile phones, video games and now AI are among the crown jewels of human creation. Infinite libraries of information are in everyone's pocket. Entertaining new formats for instruction and diversion are being created, not by media institutions, but by ordinary people in their spare time. Children are no longer kept apart from each other in classrooms, but have their own virtual worlds in which to create knowledge and share it.

    The response from our gerontocracy is predictable: ban it.

    As usual, Douglas Adams summed it up:

    I've come up with a set of rules that describe our reactions to technologies:

    1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
    2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
    3. Anything invented after you're thirty-five is against the natural order of things.

    The truth is that technology is disruptive: it changes societies. In their time, movies, books and even the bicycle all instigated widespread moral panic. Bicycles meant that, for the first time, young people could easily meet friends under their own steam - the social networking technology of their day.

    Today, teachers worry about children in their classrooms "addicted to their phones". So the children aren't paying attention to their lessons - but the truth is, many never were. Now, instead of sitting at desks like zombies, they're happily engaged with their phones, because they have access to something that really interests them.

    But disruptive memes are spreading among children! Well, why shouldn't children be allowed to create and disseminate knowledge among their peers? Isn't that the essence of education? Or do we think children should be passive, empty vessels into which we pour our approved ideas? If you disagree with the memes children are sharing, you should make an argument for your position, not shut down the debate.

    Children are using ChatGPT to write their essays! I'm sorry to tell you, but adults are using ChatGPT to write their emails and reports. Education needs to adapt to the reality that social media is a better way to discover new interests and YouTube is simply a better way to learn basically anything.

    There's another sinister undercurrent to the "ban social media and AI" platform. In the case of AI, the big tech companies (Google, Facebook, Apple etc.) thought at first that AI would be a "sustaining technology" - that is, because of their ability to invest billions and hire the best engineers, mastery of AI would help to cement their monopolies. But recently, the CEOs have woken up to a growing realisation that AI might well be a "disruptive technology" - as a commodity, AI gives startup companies the ability to challenge Google in search, Facebook in social media and Apple with the idea that maybe we don't all need to use their phones.

    Their playbook is the same as any other industry cartel's: lobby the government to regulate the industry, because regulation raises the table stakes so effectively that startups become non-starters. Facebook will happily pay the government's tax of censoring its social networks, as the enormous cost of doing so serves to prevent any challenger to its business.

    Note that this isn't an argument in favour of allowing Chinese ownership of TikTok. I see real geopolitical reasons why we might not want our media companies in foreign hands. We have had similar legislation against foreign ownership of newspapers, for example.

    While I may not agree, it's wrong to say that there's no argument for some regulation of AI. First, we should admit that most of the scare stories and scenarios presented are already illegal. Why should it be more illegal for a judge to be racially biased if the judge is using AI, for example? The call for regulations over issues already covered by existing legislation is a tell-tale sign of a monopoly campaigning for cartel protection. I'm told that, of all the AI scenarios, politicians are most worried about deepfakes, and that is a genuinely new capability that perhaps needs oversight. However, lawmakers should be aware that it may not be technically possible to ban video or audio that "sounds like Rishi Sunak". More likely, we need to adapt to a world in which all media is suspect, and learn to use technologies for proving the provenance of video and audio.

    Lawmakers should be very careful with any regulation governing such a fast-moving and potentially fruitful technology as AI. If we regulate it out of existence, or make it so that only Google can afford to play, we will never know what possible good we have lost: badly needed medical, educational, military and productivity gains are all to play for.

    Ban children from social media and we stultify them, lock them out of the most exciting new technological spaces and prevent them from educating themselves.

  4. Against prediction

    Everybody wants someone to tell them what's going to happen. But whatever consciousness is, it's fundamentally unpredictable. As long as you're in the human domain, telling the future is out.

    There's a story Drama Theorists use to illustrate this. A husband and wife are playing chess. The husband makes a move: "checkmate". The wife objects: wait, there must be something I can do. No, says the husband, it's checkmate - there are no possible moves. The wife reaches out and smashes the board into the air, pieces flying: "how about this for a move?"

    Okay, so life isn't a game and there aren't rules. So what then? If analysis isn't predictive, what is it? Analysis is the art of understanding the present moment as fully as possible: choosing models that compress reality to aid decision making. The actions that make up the future will come from unpredictable human creativity. A wider understanding of the present moment gives our creativity the best grounding from which to imagine the future.

    AI can't predict the future, but it can help us model the present.

  5. Decision making needs explainable AI

    Decision making is being transformed by AI technology. LLMs are a new capability, allowing us to quantitatively model human behaviour like never before. However, using LLMs to drive a statistical approach to decision making will only end in disillusionment.

    Those who invent a technology are often not best placed to take advantage of it. The current advances in AI were born from statistics, and it's a natural impulse to use them for “statistical” approaches. For example, one might try to use the ability to mine vastly larger datasets to predict human behaviour from analysis of past behaviour. This statistical black-box approach is wrong. Human behaviour is inherently unpredictable. Decision making is about choices, not narratives.

    Instead, we can use LLMs to supercharge our modelling of human behaviour. The models are explainable because they use frameworks intended for human use. LLMs can help humans to understand, while the explainable model is always there as ground truth. We can also use generative AI to make exploratory sallies into possible near-futures. All of this gives decision makers a better understanding of their choices and those of their counterparties.
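
    As a hedged sketch of one of those "exploratory sallies" (again, `llm.complete` and the prompt are illustrative assumptions, not any real API): generate several candidate near-futures from the explainable model, each forced to name the choices that drive it, so every scenario can be checked against the ground-truth model rather than taken on faith.

    ```python
    def sally(model: str, llm, n: int = 5) -> list[str]:
        """Generate n candidate near-futures anchored to an explicit model."""
        prompt = (f"Given this model of the situation:\n{model}\n\n"
                  "Describe one plausible near-future. Name every actor whose "
                  "choice drives it and which of their options they exercise.")
        # Each scenario cites actors and options, so a human analyst can
        # audit it against the model.
        return [llm.complete(prompt) for _ in range(n)]
    ```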

    In decision making, there is no black box that will output a perfect strategy. Better choices come from a more complete understanding of the situation. We must be careful never to fool ourselves with statistically derived far-reaching narratives. In the end, our opponents are creative humans, not trendlines on a graph.

  6. The new acqui-hire

    Satya Nadella is a fearsome operator.

    Actors: VCs, Mustafa Suleyman (Mus), Satya Nadella (Sat)

    Satya Nadella:
    • Acquire Inflection (dilemmas: VCs T wrt Sat; Mus P wrt Sat)
    • Hire Mustafa Suleyman to head a new division at Microsoft. - ✓c (dilemmas: Mus t wrt Sat)

    Mustafa Suleyman:
    • Bring most of Inflection's 70 staff with him. (dilemmas: VCs P wrt Mus; Sat T wrt Mus)

    Satya Nadella's adoption is conditional (promise) on Mustafa Suleyman bringing most of Inflection's 70 staff with him.

    Tuesday’s hiring was “basically an acquisition of Inflection without having to go through regulatory approval”, wrote Tony Wang, managing partner at venture capital firm 500 Global.

    Steven Weber, a professor and expert on technology and intellectual property at the University of California, Berkeley, noted the deal was similar to the offer Microsoft made to OpenAI employees after chief executive Sam Altman was temporarily sacked last year.

    Microsoft and Inflection have stressed that the agreement is not an acquisition and that Inflection remains an independent company. (FT)

  7. Interface to human language

    One approach to making use of LLMs is to see them as oracles: we can say "solve this problem for me", or at least "give me a choice of solutions". As all the answers to our problems can theoretically be generated, we can go ahead and retire all the human theorists and engineers. Science is solved.

    This approach assumes that creative problem solving is a matter of rearranging existing knowledge. Or, if more knowledge is needed, that knowledge acquisition is a process of mechanically recording the universe. Both are mistaken. We don't know the algorithms for creative problem solving or knowledge acquisition. While the "LLM as oracle" approach will certainly produce advancements in knowledge retrieval, it won't create new knowledge.

    Another approach, one that we're following at Deep Drama, is to use the LLM primarily as an interface for human language. The potential of "LLM as language API" is greater than it seems. There is the obvious path of building more powerful user interfaces, far beyond chatbots. But there is also the potential to expand the scope of our software, beyond solving transactional problems to encompass as-yet-untapped social frameworks. Our software has blind spots. Large areas of human knowledge and experience have been overlooked by engineers because of the messiness of their interfaces.

    As an example, I used Deep Drama's LLM-powered Source tool to generate this Drama Theoretic model of a random news article:

    Actors: Donald Trump (Don), Joe Biden (Joe), Nippon Steel (Nip), United Steelworkers labour union (Uni)

    Donald Trump:
    • Block the deal immediately if winning the 2024 election

    Joe Biden:
    • Express opposition to the merger (dilemmas: Don T wrt Joe; Uni T wrt Joe)

    United Steelworkers labour union:
    • Oppose the merger (dilemmas: Joe P wrt Uni; Nip T wrt Uni)

    Nippon Steel:
    • Acquire US Steel in a non-hostile deal worth $14.1 billion (dilemmas: Don P wrt Nip; Joe P wrt Nip; Uni P wrt Nip)
    • Introduce new technology and capital to US Steel (dilemmas: Joe P wrt Nip; Uni P wrt Nip)

    The model's format is explainable and programmable. By using the LLM as an interface to language, Deep Drama can then use this model as the basis for further analysis and interaction, both with LLMs and with traditional interfaces. Deep Drama keeps the knowledge, and the opportunity for creativity, in the hands of the human users.
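
    To give a flavour of "explainable and programmable", here is one hypothetical way the model above could be held in code (the actual Deep Drama format may well differ, and the gloss on "T" is my assumption):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Dilemma:
        holder: str  # who faces the dilemma
        kind: str    # e.g. "P" (persuasion) or "T"
        wrt: str     # with respect to which actor

    @dataclass
    class Option:
        actor: str
        text: str
        dilemmas: list[Dilemma]

    # A fragment of the generated model above:
    model = [
        Option("Joe Biden", "Express opposition to the merger",
               [Dilemma("Don", "T", "Joe"), Dilemma("Uni", "T", "Joe")]),
        Option("Nippon Steel",
               "Acquire US Steel in a non-hostile deal worth $14.1 billion",
               [Dilemma("Don", "P", "Nip"), Dilemma("Joe", "P", "Nip"),
                Dilemma("Uni", "P", "Nip")]),
    ]

    # Because the model is plain data, it can be queried without an LLM,
    # e.g. every option that carries a "T" dilemma:
    t_options = [o.text for o in model if any(d.kind == "T" for d in o.dilemmas)]
    ```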

  8. Anatomy of a Fall

    Over the weekend, I saw the movie Anatomy of a Fall. It was fascinating to get a look at the French justice system. Common Law justice systems seem to be highly overrepresented in film. I'm not sure whether this is because the British and Americans have a particular liking for courtroom drama, or because I'm just not watching the right foreign movies.

    The question at the heart of the movie was, did Sandra kill her husband? You might think this is the model:

    Actors: Sandra

    Sandra:
    • Kill Samuel ?

    However, this isn't quite right, as the Option "Kill Samuel" doesn't refer to a future possibility. Rather, the drama in the film revolves around whether Sandra will be convicted for murder:

    Actors: Sandra, Prosecutor

    Court:
    • Convict Sandra (dilemmas: Sandra, Prosecutor)

    The Court's position is of course unstated. This means that Sandra (and by extension her Defence) and the Prosecutor both have persuasion dilemmas with respect to the Court. They are compelled to persuade the Court to take their position.

    Much of the drama in the film comes from how powerless Sandra is to influence events. She provides compelling testimony, but so does the Prosecutor. In the end, it comes down to her son Daniel's testimony, and whether he will take her side.

    Actors: Sandra, Prosecutor, Daniel

    Court:
    • Convict Sandra - (dilemmas: Sandra, Prosecutor)

    Daniel:
    • Testify against Sandra - (dilemmas: Sandra, Prosecutor)

    This gives Sandra and the Prosecutor persuasion dilemmas with respect to Daniel. Sandra in particular is compelled to persuade him to reject the idea of testifying against her. This is fraught: if Daniel takes his Mother's side, it means accepting the idea that his father took his own life. For much of the film, Daniel is clearly undecided as to whether he believes his Mother (represented by the -). The film's climax comes when Daniel makes his decision over this option. Although the film does not take a side, ultimately it seems that Daniel persuades himself of his Mother's innocence.

    Actually, when we lack an element to judge something, and the lack is unbearable, all we can do is decide. You see? To overcome doubt, sometimes we have to… decide to sway one way rather than the other. Since you need to believe one thing but have two choices, you must choose.

    Actors: Sandra, Prosecutor, Daniel

    Court:
    • Convict Sandra (dilemmas: Sandra, Prosecutor, Daniel)

    Daniel:
    • Testify against Sandra (dilemmas: Sandra, Prosecutor)

  9. Dramatic interfaces

    In their working lives, most people don’t make decisions in complex situations of conflict and cooperation. Most companies instinctively, or intentionally, steer away from situations of dramatic conflict. Business concerns itself with win-win situations: I sell, you buy.

    Many roles do confront dramatic conflict. If you work in government, the military, healthcare, education or policing, then conflict is inherent. You are used to making decisions with unreliable information, questionable actors and uncertain outcomes.

    Could the corporate world do more in this area — is it possible to have a dramatic relationship with your customers, where risk, deception and cooperation are all possible outcomes? I believe the answer is yes — drama is necessary if we want to take advantage of LLM tech.

    Software interfaces are moving beyond the transactional - the model embodied in the push-button GUIs of every app, itself a direct inheritance from the first mechanical machines.

    LLM tech gives us the possibility of creating new interfaces that accept, and work within, the dramatic possibilities of language. We can build software that forms goals, reasons and arguments. It can be in cooperation or conflict with its users. Thinking about software in these terms is essential as we start to build LLM-powered “agents”. Dramatic intelligence is needed where we don’t expect to program every step of the software’s lifetime. Finally, as no incumbent company will allow itself this kind of relationship with its customers, this presents a huge opportunity for a new kind of LLM-native enterprise.
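
    As a toy sketch of the difference (all names are illustrative, and `llm.complete` stands in for whatever client is used): a transactional interface executes a fixed script, while a dramatic agent holds a goal of its own and decides, turn by turn, whether to cooperate or push back.

    ```python
    def agent_turn(goal: str, history: list[str], user_msg: str, llm) -> str:
        """One turn of an agent that can be in cooperation or in conflict
        with its user, rather than executing a pre-programmed step."""
        prompt = (f"Your goal: {goal}\n"
                  f"Conversation so far: {history}\n"
                  f"The user says: {user_msg}\n"
                  "Decide whether cooperating or pushing back best serves "
                  "your goal, then reply in character.")
        reply = llm.complete(prompt)
        history += [user_msg, reply]  # the agent keeps its own account of the drama
        return reply
    ```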

  10. Service politics

    Microservice architectures are a solution to scaling systems. Startups that adopt a microservice architecture before they have scaling problems, sometimes before they even launch, attract ridicule.

    But microservices are also a solution to an engineering management problem. Startup codebases are usually a mess. Proofs of concept, abandoned features and quick hacks are what it takes to find the product.

    Developers keep returning to service architectures because the separation of concerns enforced by the network makes it easier to maintain the codebase, onboard new developers and adopt and abandon features. Maybe your startup has a loyal team of focused developers, and you can do all of that within a monolith. But many don’t. In his Platform Rant, Steve Yegge showed how service contracts are necessary for managing multiple engineering teams, but individual developers can benefit from them too.
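
    A minimal sketch of what a service contract buys an individual developer, whether or not a network is ever involved (the `OrderService` contract and the endpoint are hypothetical): callers depend only on the contract, so an in-process implementation can later be swapped for a remote one without touching them.

    ```python
    from typing import Protocol

    import requests

    class OrderService(Protocol):
        """The contract: the only way other code is allowed to touch orders."""
        def place_order(self, user_id: str, sku: str, qty: int) -> str: ...

    class InProcessOrders:
        """Monolith-local implementation behind the same contract."""
        def place_order(self, user_id: str, sku: str, qty: int) -> str:
            return f"order-{user_id}-{sku}-{qty}"

    class HttpOrders:
        """Same contract, served over the network by a separate service."""
        def __init__(self, base_url: str):
            self.base_url = base_url

        def place_order(self, user_id: str, sku: str, qty: int) -> str:
            r = requests.post(f"{self.base_url}/orders",
                              json={"user": user_id, "sku": sku, "qty": qty})
            r.raise_for_status()
            return r.json()["order_id"]
    ```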

    Serverless functions, LLM scaffolding, cloud IDEs and bounties for features are all on the rise. Increasingly, systems are built from loose federations of distributed functionality. Like all engineering solutions, it’s a tradeoff: abstracting problems at the network layer gives you network problems. But it can solve for spaghetti-monoliths.
