Saul Howard

I’m a hacker in London. I'm the co-founder of Deep Drama.

I create software at Clear Line Tech. I produce VR, AR and mobile content at Catalyst VR.

I write on this site and sometimes on a substack at 31 Seconds.

From 2016–2021, I led a team at Apple building the CloudKit developer platform. Before Apple, I worked at startups around London and Asia, including Hailo.

I produced the feature film Brighton Wok. I work on applications for Drama Theory.

I’m on GitHub, LinkedIn and Twitter.

Articles in the dailydrama category

  1. Decision making needs explainable AI

    Decision making is being transformed by AI technology. LLMs are a new capability, allowing us to quantitatively model human behaviour like never before. However, using LLMs to drive a statistical approach to decision making will only end in disillusionment.

    Those who invent a technology are often not best placed to take advantage of it. The current advances in AI were born from statistics, and it's a natural impulse to use them for “statistical” approaches. For example, one might try to use the ability to mine vastly larger datasets to predict human behaviour from analysis of past behaviour. This statistical black-box approach is wrong. Human behaviour is inherently unpredictable. Decision making is about choices, not narratives.

    Instead, we can use LLMs to supercharge our modelling of human behaviour. The models are explainable because they are using frameworks intended for human use. LLMs can help humans to understand, while the explainable model is always there as a ground-truth. We can also use generative AI to make exploratory sallies into possible near-futures. All of this gives decision makers better understanding of their choices and those of the counter-parties.

    In decision making, there is no black box that will output a perfect strategy. Better choices come from a more complete understanding of the situation. We must be careful never to fool ourselves with statistically derived far-reaching narratives. In the end, our opponents are creative humans, not trendlines on a graph.

  2. The new acqui-hire

    Satya Nadella is a fearsome operator.

    Dilemmas: VCs, Mus, Sat
    Satya Nadella
      Acquire Inflection [VCs T wrt Sat; Mus P wrt Sat]
      Hire Mustafa Suleyman to head a new division at Microsoft. - ✓c [Mus t wrt Sat]
    Mustafa Suleyman
      Bring most of Inflection's 70 staff with him. [VCs P wrt Mus; Sat T wrt Mus]

    Satya Nadella's adoption is conditional (a promise) on Mustafa Suleyman bringing most of Inflection's 70 staff with him.

    Tuesday’s hiring was “basically an acquisition of Inflection without having to go through regulatory approval”, wrote Tony Wang, managing partner at venture capital firm 500 Global.

    Steven Weber, a professor and expert on technology and intellectual property at the University of California, Berkeley, noted the deal was similar to the offer Microsoft made to OpenAI employees after chief executive Sam Altman was temporarily sacked last year.

    Microsoft and Inflection have stressed that the agreement is not an acquisition and that Inflection remains an independent company. FT
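    The conditional promise marked "✓c" in the model is easy to make machine-readable. Here's a minimal sketch in Python; the dictionary fields and helper function are my own illustration, not Deep Drama's actual schema:

```python
# Illustrative encoding of a conditionally adopted option ("✓c").
# Field names are hypothetical, not Deep Drama's real schema.
hire = {
    "actor": "Satya Nadella",
    "option": "Hire Mustafa Suleyman to head a new division at Microsoft",
    "adopted": True,
    # Adoption is a promise, conditional on another actor's option:
    "conditional_on": ("Mustafa Suleyman",
                       "Bring most of Inflection's 70 staff with him"),
}

def is_unconditional(option, declared):
    """True if the option is adopted and any condition on it is met.
    `declared` is the set of (actor, option) pairs already committed to."""
    cond = option.get("conditional_on")
    return option["adopted"] and (cond is None or cond in declared)
```

    The promise resolves only once Suleyman's side of the bargain is on the table: with nothing declared the adoption stays conditional, and it becomes firm when his option is added to the declared set.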

  3. Interface to human language

    One approach to making use of LLMs is to treat them as oracles: we can say "solve this problem for me", or at least "give me a choice of solutions". As all the answers to our problems can theoretically be generated, we can go ahead and retire all the human theorists and engineers. Science is solved.

    This approach assumes that creative problem solving is a matter of rearranging existing knowledge. Or, if more knowledge is needed, that knowledge acquisition is a process of mechanically recording the universe. Both are mistaken. We don't know the algorithms for creative problem solving or knowledge acquisition. While the "LLM as oracle" approach will certainly produce advancements in knowledge retrieval, it won't create new knowledge.

    Another approach, one that we're following at Deep Drama, is to use the LLM primarily as an interface for human language. The potential of "LLM as language API" is greater than it seems. There is the obvious path of building more powerful User Interfaces far beyond chatbots. But there is also the potential to expand the scope of our software, beyond solving transactional problems to encompass as-yet-untapped social frameworks. Our software has blind spots. Large areas of human knowledge and experience have been overlooked by engineers because of the messiness of their interfaces.

    As an example, I used Deep Drama's LLM-powered Source tool to generate this Drama Theoretic model of a random news article:

    Dilemmas: Don, Joe, Nip, Uni
    Donald Trump
      Block the deal immediately if winning the 2024 election
    Joe Biden
      Express opposition to the merger [Don T wrt Joe; Uni T wrt Joe]
    United Steelworkers labour union
      Oppose the merger [Joe P wrt Uni; Nip T wrt Uni]
    Nippon Steel
      Acquire US Steel in a non-hostile deal worth $14.1 billion [Don P wrt Nip; Joe P wrt Nip; Uni P wrt Nip]
      Introduce new technology and capital to US Steel [Joe P wrt Nip; Uni P wrt Nip]

    The model's format is explainable and programmable. By using the LLM as an interface to language, Deep Drama can then use this model as the basis for further analysis and interaction, both with LLMs and with traditional interfaces. Deep Drama keeps the knowledge, and the opportunity for creativity, in the hands of the human users.
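    Because the format is regular, it maps directly onto ordinary data structures. Here's a minimal Python sketch of how the US Steel model above could be represented; the dataclass names are my own illustration, not Deep Drama's actual schema:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    # Dilemma annotations, e.g. "Don P wrt Nip": Don holds a
    # persuasion dilemma with respect to Nippon Steel's option.
    dilemmas: list = field(default_factory=list)

@dataclass
class Actor:
    name: str
    options: list = field(default_factory=list)

# The model from the article, transcribed by hand.
model = [
    Actor("Donald Trump", [
        Option("Block the deal immediately if winning the 2024 election")]),
    Actor("Joe Biden", [
        Option("Express opposition to the merger",
               ["Don T wrt Joe", "Uni T wrt Joe"])]),
    Actor("United Steelworkers labour union", [
        Option("Oppose the merger", ["Joe P wrt Uni", "Nip T wrt Uni"])]),
    Actor("Nippon Steel", [
        Option("Acquire US Steel in a non-hostile deal worth $14.1 billion",
               ["Don P wrt Nip", "Joe P wrt Nip", "Uni P wrt Nip"]),
        Option("Introduce new technology and capital to US Steel",
               ["Joe P wrt Nip", "Uni P wrt Nip"])]),
]

# Once structured, the model is queryable with plain code, e.g.
# how many dilemmas each party holds:
holders = Counter(d.split()[0]
                  for actor in model
                  for opt in actor.options
                  for d in opt.dilemmas)
```

    This is the sense in which the model, not the LLM, is the ground truth: the same structure can be handed back to an LLM for narrative exploration, or queried directly by traditional software.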

  4. Anatomy of a Fall

    Over the weekend, I saw the movie Anatomy of a Fall. It was fascinating to get a look at the French justice system. Common Law justice systems seem to be highly overrepresented in film. I'm not sure whether this is because the British and Americans have a particular liking for courtroom drama, or I'm just not watching the right foreign movies.

    The question at the heart of the movie was, did Sandra kill her husband? You might think this is the model:

    Dilemmas: Sandra
    Sandra
      Kill Samuel ?

    However, this isn't quite right, as the Option "Kill Samuel" doesn't refer to a future possibility. Rather, the drama in the film revolves around whether Sandra will be convicted for murder:

    Dilemmas: Sandra, Prosecutor
    Court
      Convict Sandra [Sandra, Prosecutor]

    The Court's position is of course unstated. This means that Sandra (and by extension her Defence) and the Prosecutor both have persuasion dilemmas with respect to the Court. They are compelled to persuade the Court to take their position.

    Much of the drama in the film comes from how powerless Sandra is to influence events. She provides compelling testimony, but so does the Prosecutor. In the end, it comes down to her son Daniel's testimony, and whether he will take her side.

    Dilemmas: Sandra, Prosecutor, Daniel
    Court
      Convict Sandra - [Sandra, Prosecutor]
    Daniel
      Testify against Sandra - [Sandra, Prosecutor]

    This gives Sandra and the Prosecutor persuasion dilemmas with respect to Daniel. Sandra in particular is compelled to persuade him to reject the idea of testifying against her. This is fraught: if Daniel takes his mother's side, he must accept that his father took his own life. For much of the film, Daniel is clearly undecided about whether he believes his mother (represented by the -). The film's climax comes when Daniel makes his decision over this option. Although the film does not take a side, ultimately it seems that Daniel persuades himself of his mother's innocence.
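    Persuasion dilemmas of this kind can themselves be derived mechanically. Here's a rough Python sketch of the rule as used in this post, a simplification of the full drama-theoretic definition; the function name and data layout are illustrative, not Deep Drama's API:

```python
def persuasion_dilemmas(positions, intentions):
    """Actor A holds a persuasion dilemma with respect to B when B's
    declared intention on an option differs from the outcome A wants.
    positions:  {actor: {option: wanted_value}}
    intentions: {controller: {option: declared_value}}, where None
                marks an undecided controller (the "-" in the model).
    """
    found = []
    for a, wants in positions.items():
        for b, declares in intentions.items():
            if a == b:
                continue
            for option, wanted in wants.items():
                if option in declares and declares[option] != wanted:
                    found.append((a, "P wrt", b))
    return found

# The climactic question of the film: Daniel is undecided on testifying
# against Sandra; Sandra wants him not to, the Prosecutor wants him to.
positions = {
    "Sandra": {"Testify against Sandra": False},
    "Prosecutor": {"Testify against Sandra": True},
}
intentions = {"Daniel": {"Testify against Sandra": None}}
```

    Running `persuasion_dilemmas(positions, intentions)` yields a dilemma for both Sandra and the Prosecutor with respect to Daniel, matching the model: an undecided Daniel satisfies neither side, so both are compelled to persuade him.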

    "Actually, when we lack an element to judge something, and the lack is unbearable, all we can do is decide. You see? To overcome doubt, sometimes we have to… decide to sway one way rather than the other. Since you need to believe one thing but have two choices, you must choose."

    Dilemmas: Sandra, Prosecutor, Daniel
    Court
      Convict Sandra [Sandra, Prosecutor, Daniel]
    Daniel
      Testify against Sandra [Sandra, Prosecutor]