Saul Howard

I’m a hacker in London. I'm the co-founder of Deep Drama.

I create software at Clear Line Tech. I produce VR, AR and mobile content at Catalyst VR.

I write on this site and sometimes on a substack at 31 Seconds.

From 2016 to 2021, I led a team at Apple building the CloudKit developer platform. Before Apple, I worked at startups around London and Asia, including Hailo.

I produced the feature film Brighton Wok. I work on applications for Drama Theory.

I’m on GitHub, LinkedIn and Twitter.

Posts

  1. Against prediction

    Everybody wants someone to tell them what's going to happen. But whatever consciousness is, it's fundamentally unpredictable. As long as you're in the human domain, telling the future is out.

    There's a story Drama Theorists use to illustrate this. A husband and wife are playing chess. The husband makes a move: "checkmate". The wife objects: wait, there must be something I can do. No, says the husband, it's checkmate -- there are no possible moves. The wife reaches out and smashes the board into the air, pieces flying: "how about this for a move?"

    Okay, so life isn't a game, there aren't rules. So what then? If analysis isn't predictive, what is it? Analysis is the art of understanding the present moment as fully as possible. Choosing models for compressing reality to aid decision making. The actions that make up the future will come from unpredictable human creativity. A wider understanding of the present moment gives our creativity the best grounding from which to imagine the future.

    AI can't predict the future, but it can help us model the present.

  2. Decision making needs explainable AI

    Decision making is being transformed by AI technology. LLMs are a new capability, allowing us to quantitatively model human behaviour like never before. However, using LLMs to drive a statistical approach to decision making will only end in disillusionment.

    Those who invent a technology often are not best placed to take advantage of that technology. The current advances in AI were born from statistics, and it's a natural impulse to use them for “statistical” approaches. For example, one might try to use the ability to mine vastly larger datasets to predict human behaviour from analysis of past behaviour. This statistical black box approach is wrong. Human behaviour is inherently unpredictable. Decision making is about choices, not narratives.

    Instead, we can use LLMs to supercharge our modelling of human behaviour. The models are explainable because they use frameworks intended for human use. LLMs can help humans to understand, while the explainable model is always there as a ground truth. We can also use generative AI to make exploratory sallies into possible near-futures. All of this gives decision makers a better understanding of their choices and those of the counterparties.

    In decision making, there is no black box that will output a perfect strategy. Better choices come from a more complete understanding of the situation. We must be careful never to fool ourselves with statistically derived far-reaching narratives. In the end, our opponents are creative humans, not trendlines on a graph.

  3. The new acqui-hire

    Satya Nadella is a fearsome operator.

    VCs Mus Sat dilemmas
    Satya Nadella
        Acquire Inflection (VCs T wrt Sat; Mus P wrt Sat)
        Hire Mustafa Suleyman to head a new division at Microsoft. - ✓c (Mus t wrt Sat)
    Mustafa Suleyman
        Bring most of Inflection's 70 staff with him. (VCs P wrt Mus; Sat T wrt Mus)

    Satya Nadella's adoption is conditional (a promise) on Mustafa Suleyman bringing most of Inflection's 70 staff with him.

    Tuesday’s hiring was “basically an acquisition of Inflection without having to go through regulatory approval”, wrote Tony Wang, managing partner at venture capital firm 500 Global.

    Steven Weber, a professor and expert on technology and intellectual property at the University of California, Berkeley, noted the deal was similar to the offer Microsoft made to OpenAI employees after chief executive Sam Altman was temporarily sacked last year.

    Microsoft and Inflection have stressed that the agreement is not an acquisition and that Inflection remains an independent company. (FT)

  4. Interface to human language

    One approach to making use of LLMs is to see them as an oracle: we can say "solve this problem for me", or at least "give me a choice of solutions". As all the answers to our problems can theoretically be generated, we can go ahead and retire all the human theorists and engineers. Science is solved.

    This approach assumes that creative problem solving is a matter of rearranging existing knowledge. Or, if more knowledge is needed, that knowledge acquisition is a process of mechanically recording the universe. Both are mistaken. We don't know the algorithms for creative problem solving or knowledge acquisition. While the "LLM as oracle" approach will certainly produce advancements in knowledge retrieval, it won't create new knowledge.

    Another approach, one that we're following at Deep Drama, is to use the LLM primarily as an interface for human language. The potential of "LLM as language API" is greater than it seems. There is the obvious path of building more powerful User Interfaces far beyond chatbots. But there is also the potential to expand the scope of our software, beyond solving transactional problems to encompass as-yet-untapped social frameworks. Our software has blind spots. Large areas of human knowledge and experience have been overlooked by engineers because of the messiness of their interfaces.

    As an example, I used Deep Drama's LLM-powered Source tool to generate this Drama Theoretic model of a random news article:

    Don Joe Nip Uni dilemmas
    Donald Trump
        Block the deal immediately if winning the 2024 election
    Joe Biden
        Express opposition to the merger (Don T wrt Joe; Uni T wrt Joe)
    United Steelworkers labour union
        Oppose the merger (Joe P wrt Uni; Nip T wrt Uni)
    Nippon Steel
        Acquire US Steel in a non-hostile deal worth $14.1 billion (Don P wrt Nip; Joe P wrt Nip; Uni P wrt Nip)
        Introduce new technology and capital to US Steel (Joe P wrt Nip; Uni P wrt Nip)

    The model's format is explainable and programmable. By using the LLM as an interface to language, Deep Drama can then use this model as the basis for further analysis and interaction, both with LLMs and with traditional interfaces. Deep Drama keeps the knowledge, and the opportunity for creativity, in the hands of the human users.
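
    To give a sense of what "programmable" means here, a frame like the one above could be held in a plain data structure along the following lines. This is a minimal sketch for illustration only; the type and field names are made up for this post, not our production schema.

        // A dilemma model as a plain data structure (illustrative names only).
        // Only the two dilemma types appearing in the table above are modelled.
        type DilemmaType = "persuasion" | "trust";

        interface Dilemma {
          holder: string;        // who faces the dilemma, e.g. "Uni"
          type: DilemmaType;     // P (persuasion) or T (trust)
          withRespectTo: string; // whose option creates it, e.g. "Nip"
        }

        interface Option {
          text: string;
          dilemmas: Dilemma[];
        }

        interface Character {
          name: string;
          options: Option[];
        }

        // One row of the model above, transcribed by hand.
        const nipponSteel: Character = {
          name: "Nippon Steel",
          options: [
            {
              text: "Acquire US Steel in a non-hostile deal worth $14.1 billion",
              dilemmas: [
                { holder: "Don", type: "persuasion", withRespectTo: "Nip" },
                { holder: "Joe", type: "persuasion", withRespectTo: "Nip" },
                { holder: "Uni", type: "persuasion", withRespectTo: "Nip" },
              ],
            },
          ],
        };

    A structure like this can be rendered, diffed as the situation develops, or handed back to the LLM as context for the next round of analysis.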

  5. Anatomy of a Fall

    Over the weekend, I saw the movie Anatomy of a Fall. It was fascinating to get a look at the French justice system. Common Law justice systems seem to be highly overrepresented in film. I'm not sure whether this is because the British and Americans have a particular liking for courtroom drama, or I'm just not watching the right foreign movies.

    The question at the heart of the movie was, did Sandra kill her husband? You might think this is the model:

    Sandra
    Sandra
        Kill Samuel ?

    However, this isn't quite right, as the Option "Kill Samuel" doesn't refer to a future possibility. Rather, the drama in the film revolves around whether Sandra will be convicted for murder:

    Sandra Prosecutor dilemmas
    Court
        Convict Sandra (Sandra, Prosecutor)

    The Court's position is of course unstated. This means that Sandra (and by extension her Defence) and the Prosecutor both have persuasion dilemmas with respect to the Court. They are compelled to persuade the Court to take their position.

    Much of the drama in the film comes from how powerless Sandra is to influence events. She provides compelling testimony, but so does the Prosecutor. In the end, it comes down to her son Daniel's testimony, and whether he will take her side.

    Sandra Prosecutor Daniel dilemmas
    Court
        Convict Sandra - (Sandra, Prosecutor)
    Daniel
        Testify against Sandra - (Sandra, Prosecutor)

    This gives Sandra and the Prosecutor persuasion dilemmas with respect to Daniel. Sandra in particular is compelled to persuade him to reject the idea of testifying against her. This is fraught: if Daniel takes his Mother's side, it means accepting the idea that his father took his own life. For much of the film, Daniel is clearly undecided as to whether he believes his Mother (represented by the -). The film's climax comes when Daniel makes his decision over this option. Although the film does not take a side, ultimately it seems that Daniel persuades himself of his Mother's innocence.

    Actually, when we lack an element to judge something, and the lack is unbearable, all we can do is decide. You see? To overcome doubt, sometimes we have to… decide to sway one way rather than the other. Since you need to believe one thing but have two choices, you must choose.

    Sandra Prosecutor Daniel dilemmas
    Court
        Convict Sandra (Sandra, Prosecutor, Daniel)
    Daniel
        Testify against Sandra (Sandra, Prosecutor)

  6. Dramatic interfaces

    In their working lives, most people don’t make decisions in complex situations of conflict and cooperation. Most companies instinctively, or intentionally, steer away from situations of dramatic conflict. Business concerns itself with win-win situations: I sell, you buy.

    Many roles do confront dramatic conflict. If you work in government, the military, healthcare, education or policing, then conflict is inherent. You are used to making decisions with unreliable information, with questionable actors and uncertain outcomes.

    Could the corporate world do more in this area — is it possible to have a dramatic relationship with your customers, where risk, deception and cooperation are all possible outcomes? I believe the answer is yes — drama is necessary if we want to take advantage of LLM tech.

    Software interfaces are moving beyond the transactional, embodied in the push-button GUIs of every app and inherited directly from the first mechanical machines.

    LLM tech gives us a possibility of creating new interfaces that accept, and work within, the dramatic possibilities of language. We can build software that forms goals, reasons, arguments. It can be in cooperation or conflict with its users. Thinking about software in these terms is essential as we start to build LLM-powered “agents”. Dramatic intelligence is needed where we don’t expect to program every step of the software’s lifetime. Finally, as no incumbent company will allow themselves this kind of relationship with their customer, it presents a huge opportunity for a new kind of LLM-native enterprise.

  7. Service politics

    Microservice architectures are a solution for scaling systems. Startups that adopt a microservice architecture before they have scaling problems, sometimes before they even launch, attract ridicule.

    But microservices are also a solution to an engineering management problem. Startup codebases are usually a mess: proofs of concept, abandoned features and quick hacks are what it takes to find the product.

    Developers keep returning to service architectures because the separation of concerns enforced by the network makes it easier to maintain the codebase, onboard new developers, and adopt and abandon features. Maybe your startup has a loyal team of focused developers, and you can do all of that within a monolith. But many don’t. Steve Yegge’s Platform Rant showed how service contracts are necessary for managing multiple engineering teams, but individual developers can benefit from them too.

    Serverless functions, LLM-scaffolding, cloud IDEs and bounties for features are all on the rise. Increasingly, systems are built from loose federations of distributed functionality. Like all engineering solutions, it’s a tradeoff. Abstracting problems at the network layer gives you network problems. But it can solve for spaghetti-monoliths.

  8. SEO at the end of days

    Google’s announced a GPT-4 killer, Gemini, for release by December. There’s no question that OpenAI have a massive lead, but from the outside, it doesn’t look unassailable. LLMs may have presented Google with the first real threat to their search business.

    LLMs hint at a new interface to computing. Microsoft’s strength isn’t interfaces, but sales. They can add LLM features to their software as customers need them. Apple can fall back on their hardware — and may see an increasing demand in consumer compute even as they slowly come around to updating iOS. But Google is only an interface. A single UI paradigm — search. They’ll need to move fast.

    I was talking SEO today, and the paradigm already feels outdated. People out there are still discovering software through search engines, like boomers with cable subscriptions.

  9. LLMs are complementary

    LLMs (and generative AI generally) have an important feature: broad adoption of LLMs doesn’t require a change in consumer behaviour. LLMs are a complementary technology. We can deploy them right now to the cloud, to people’s smartphones and into enterprise software stacks.

    The iPhone’s success fostered a narrative: that swift widespread change of consumer behaviour would be inevitable if the promise was there. But in reality, the unprecedented rise of mobile was overdetermined: the hardware was finally good enough (after decades of development) at the same time that the internet was finally changing consumer behaviour (after many false starts).

    Crypto and VR proponents have been selling them as a wave of technology about to break over our heads. But adoption in their current forms requires consumers to adopt new behaviours: buying (and using) costly, barely-good-enough headsets, or changing their financial arrangements. I don’t doubt that we’ll see these technologies changing people’s lives in the future, but it won’t happen until the transition becomes easier for consumers to swallow. That makes Crypto and VR not worth betting your balance sheet on.

    LLMs are different. LLM-powered services can sit alongside all existing software stacks, seamlessly providing functionality that wasn’t possible before. Consumers continue using their cloud-backed web and mobile apps, and as far as they’re concerned, their software got better without them having to do anything.

    In this way, LLMs are more like the move to cloud. Cloud-powered features first showed up as options within our traditional desktop apps: “Share” or “Save to Cloud” buttons (with the floppy disk icon, naturally). Users didn’t need to know what it all meant, they could just opt in to the new functionality alongside their existing workflows. Eventually, consumers came to intuit the new model, and apps changed, but it was a gradual process.

    Each technology has its own path to adoption. Growth of LLM-powered services will be smoother than the move to cloud, as it doesn’t need big-bang digital transformations. It’s possible to rewrite individual functions within an existing system to take advantage of LLMs. The tech is complementary to our current cloud, web and mobile platforms.
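
    As a sketch of what that can look like in practice: the function keeps its place in the system and only its body changes (callLLM below is a placeholder for whichever completion API is already available, and the one real change for callers is that the function becomes async).

        // Before: a rule-based classifier buried somewhere in an existing codebase.
        function classifyTicketByRules(message: string): "billing" | "technical" | "other" {
          if (/invoice|refund|charge/i.test(message)) return "billing";
          if (/error|crash|bug/i.test(message)) return "technical";
          return "other";
        }

        // Placeholder for whatever LLM completion API the system already has access to.
        declare function callLLM(prompt: string): Promise<string>;

        // After: same role in the system, but the body now delegates to the LLM.
        async function classifyTicket(message: string): Promise<"billing" | "technical" | "other"> {
          const answer = await callLLM(
            `Classify this support message as billing, technical or other. Reply with one word.\n\n${message}`
          );
          const label = answer.trim().toLowerCase();
          if (label === "billing" || label === "technical") return label;
          return "other";
        }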

  10. Where’s the web 2023

    I tried out a few web frameworks and libraries recently. My thoughts:

    Vite

    I needed to upgrade some old React apps. It turned out they killed create-react-app, and it seems like Vite is the go-to “just a React app” framework.

    Vite made the refactor painless. I think the only incompatible thing was a new format for env vars. Other than that, it kept out of the way. Recommended.
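
    For anyone doing the same migration: the env var change is the move from create-react-app's process.env.REACT_APP_* to Vite's import.meta.env.VITE_* (the variable name below is just an example).

        // create-react-app: prefixed with REACT_APP_, read from process.env
        // const apiUrl = process.env.REACT_APP_API_URL;

        // Vite: prefixed with VITE_, read from import.meta.env
        const apiUrl = import.meta.env.VITE_API_URL;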

    Remix

    I wanted a new app that would deploy to Cloudflare Pages/Workers. The worker runs an edge endpoint proxying to a GCP Python service for the heavy lifting.

    Remix was impressive. It has a nice balance of web fundamentals and React. I’m not so sure about the push away from React with tech like htmx. I like React! I see their point — React apps are in some ways ephemeral, it’s diverging from the web and the ecosystem is chaotic. But for quick development in the real world, it works. I’d choose Remix over NextJS. Remix feels lighter, with less custom magic.
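
    The proxy part of that setup ends up as a small resource route, roughly like the sketch below. The route name, upstream URL and type import are placeholders, and the exact names depend on the Remix version.

        // app/routes/api.analyse.ts (hypothetical route)
        import type { LoaderFunctionArgs } from "@remix-run/cloudflare";

        // Placeholder for the GCP service doing the heavy lifting.
        const UPSTREAM = "https://my-python-service.example.run.app";

        export async function loader({ request }: LoaderFunctionArgs) {
          const url = new URL(request.url);
          // Forward the query string to the Python service and pass its response straight back.
          const upstream = await fetch(`${UPSTREAM}/analyse${url.search}`, {
            headers: { accept: "application/json" },
          });
          return new Response(upstream.body, {
            status: upstream.status,
            headers: { "content-type": "application/json" },
          });
        }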

    Astro

    This one was cool. Astro has a great set of features for a static site generator that you can integrate into the rest of your stack with MDX and React, while keeping all the posts in Markdown.

    I was able to share React components from the main webapp for the layout, and added endpoints to the Astro app to serve the Markdown content to the other services.
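
    The endpoint side is only a few lines in Astro. Roughly the following, assuming the posts live in a content collection called "posts"; the path and field names are illustrative, and recent Astro versions expect the handler to be an exported GET.

        // src/pages/api/posts.json.ts (hypothetical path)
        import { getCollection } from "astro:content";

        export async function GET() {
          const posts = await getCollection("posts");
          // Expose slug, frontmatter title and the raw Markdown body to the other services.
          const payload = posts.map((post) => ({
            slug: post.slug,
            title: post.data.title,
            body: post.body,
          }));
          return new Response(JSON.stringify(payload), {
            headers: { "content-type": "application/json" },
          });
        }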

    Bun

    I want to love Bun. JavaScript/TypeScript/NodeJS is a mess, and one of these attempts to replace the toolchain in one blow has to work.

    I found myself writing a NodeJS service, so I set it up with Bun, and it was great — until I tried to pull in the SDK libs I needed to use. It turns out the NodeJS compatibility isn’t there yet.

    Usually the only time I’m writing NodeJS is because of a specific library I want to use. Bun is great for writing dependency-free services, but if the service is dependency-free why would I use TypeScript? I understand they are aiming for 100% compatibility, and I hope they get there.
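
    For the genuinely dependency-free case it is pleasant, though: a whole service can be little more than the built-in server, something like the sketch below.

        // server.ts (run with: bun run server.ts)
        Bun.serve({
          port: 3000,
          fetch(request) {
            const url = new URL(request.url);
            if (url.pathname === "/health") {
              return new Response("ok");
            }
            return new Response("not found", { status: 404 });
          },
        });

        console.log("listening on :3000");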