Saul Howard

I’m a hacker in London. I'm the co-founder of Deep Drama.

I create software at Clear Line Tech. I produce VR, AR and mobile content at Catalyst VR.

I write on this site and sometimes on a Substack at 31 Seconds.

From 2016 to 2021, I led a team at Apple building the CloudKit developer platform. Before Apple, I worked at startups around London and Asia, including Hailo.

I produced the feature film Brighton Wok. I work on applications for Drama Theory.

I’m on GitHub, LinkedIn and Twitter.

Articles in the misc category

  1. LLMs and intelligence analysis: tools not oracles

    In the classic “Psychology of Intelligence Analysis”, Richard Heuer made these points:

    • Human minds are poorly “wired” to deal with uncertainty.
    • Simply learning about our biases doesn’t in fact improve outcomes.
    • Instead, we need tools and techniques for structuring information, challenging assumptions and exploring alternative interpretations.

    Therefore, he said, we should encourage products that “clearly delineate assumptions, chains of inference and specify degree and source of uncertainty.”

    This is a problem for AI tech! An LLM is a big ball of assumptions and uncertainty. Naive use of LLM tech will only degrade our analyses.

    If you hoped that the AI would replace human analysts by slurping up OSINT at scale and spitting out actionable analyses on demand, I’m sorry, but that won’t work. We must think of AI technology as providing “tools, not oracles”.

    Instead, we can take advantage of the incredible (and rapidly improving) LLM technology to build tools. The actionable insights will always come from human analysis. AI tools can supercharge our ability to open our minds, helping us to explore possibilities and structure our knowledge, providing guide ropes for creativity.
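
    As a concrete sketch of what such a tool’s output could look like (a hypothetical structure of my own, not an existing schema): every judgement carries its assumptions, its chain of inference and its stated uncertainty, so a human can audit each step.

    ```ts
    // A hypothetical shape for a single analytic judgement, following Heuer:
    // explicit assumptions, a reviewable chain of inference, and a stated
    // degree and source of uncertainty.
    type Evidence = {
      source: string;                         // where the material came from
      reliability: "high" | "medium" | "low"; // analyst's grading of the source
    };

    type AnalyticClaim = {
      claim: string;             // the judgement itself
      assumptions: string[];     // what must hold for the claim to stand
      inferenceChain: string[];  // ordered reasoning steps, each reviewable
      evidence: Evidence[];      // sources, each with a reliability grade
      confidence: number;        // 0..1, the analyst's stated confidence
      uncertaintySource: string; // why the confidence is not higher
    };

    // An LLM can draft instances of this structure from raw reporting;
    // the human analyst reviews, edits and owns the final judgement.
    const example: AnalyticClaim = {
      claim: "Actor X is preparing to escalate",
      assumptions: ["Recent troop movements are not an exercise"],
      inferenceChain: [
        "Logistics units have moved forward",
        "Forward movement of logistics usually precedes sustained operations",
      ],
      evidence: [{ source: "OSINT imagery thread", reliability: "medium" }],
      confidence: 0.6,
      uncertaintySource: "Single-source imagery; no corroborating signals",
    };
    ```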

  2. Who makes the decisions?

    When the internet first came along, it immediately transformed intelligence collection and analysis. Suddenly, there was a new source of material, unfiltered through governments and newspapers. Over the years, the floodgates opened wider and wider, until the volume of material became unmanageable with traditional analysis techniques.

    Big data is a loaded term. It means something different to a Google or a Facebook, which analyse their customers in a strictly limited domain, trying to prove click attribution or rank search results. Statistical methods are effective for these tasks.

    In intelligence, there’s a hope that we can use AI to make sense of the deluge of information. But we must be careful. Our best AIs today will give you a facsimile of a person from 2024, but past performance is not indicative of future results. After all, an AI trained on data up to and including the 18th century will give you a facsimile of an 18th-century person, but won’t tell you anything about what happens in the 19th.

    Perhaps the biggest problem we face is a poverty of imagination. We have fallen into narrative-based thinking, modelling adversaries and allies as predictable, economically-driven automatons, even as we’re hit by events that we failed to predict, or that we were often told were impossible — everything from Trump and Brexit to Ukraine and Gaza. There’s a real danger that a naive approach to integrating AI decision support will weaken our ability to understand reality by constructing a fantasy cocoon of statistically based reasoning about the world.

    All is not lost. We can take advantage of AI to help us with the flood of source material, but not simply by training models on the data and expecting those models to spit out predictions. Instead, we must look to the work done on frameworks intended to make sense of human behaviour for human analysts, frameworks which are used today to great effect in the field and in training.

    At Deep Drama, one approach is to use LLMs to automate the generation of Confrontation Analysis (Drama Theory) models from intelligence sources. With those models as a base, other LLMs can provide insights, simulations and decision support tied to a causal chain of reasoning from the model, not from a black-box AI.
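
    To make that concrete, a Confrontation Analysis model can be represented as a simple options board: each actor controls some yes/no options, and every actor takes a position on every option. A minimal sketch (the field names are my own illustration, not a Deep Drama schema):

    ```ts
    // A minimal options-board representation for Confrontation Analysis.
    type Stance = "yes" | "no" | "indifferent";

    type OptionsBoard = {
      actors: string[];
      options: { id: string; owner: string; label: string }[];
      // actor -> option id -> how that actor wants the option settled
      positions: Record<string, Record<string, Stance>>;
      // what each option defaults to if nobody shifts position
      threatenedFuture: Record<string, Stance>;
    };

    // The LLM's job in such a pipeline is extraction: turn source reporting
    // into a board like this, which humans (and downstream models) can then
    // reason over step by step.
    const board: OptionsBoard = {
      actors: ["GovernmentA", "GroupB"],
      options: [
        { id: "withdraw", owner: "GovernmentA", label: "Withdraw forces" },
        { id: "ceasefire", owner: "GroupB", label: "Declare a ceasefire" },
      ],
      positions: {
        GovernmentA: { withdraw: "no", ceasefire: "yes" },
        GroupB: { withdraw: "yes", ceasefire: "indifferent" },
      },
      threatenedFuture: { withdraw: "no", ceasefire: "no" },
    };
    ```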

    The goal of conflict analysis must be to understand our adversaries. In a complex, multi-polar world, this is a daunting task, and we can and should use AI to help. However, real help does not come as predictions from a black box. At Deep Drama we are building “AI intelligence analysts” that can sit alongside decision makers, keeping track of the complexity and providing guardrails for decisions as they are made.

  3. The kids are alright

    Social media, mobile phones, video games and now AI are among the crown jewels of human creation. Infinite libraries of information are in everyone's pocket. Entertaining new formats for instruction and diversion are being created, not by media institutions, but by ordinary people in their spare time. Children are no longer kept apart from each other in classrooms, but have their own virtual worlds in which to create knowledge and share it.

    The response from our gerontocracy is predictable: ban it.

    As usual, Douglas Adams summed it up:

    I've come up with a set of rules that describe our reactions to technologies:

    1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
    2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
    3. Anything invented after you're thirty-five is against the natural order of things.

    The truth is that technology is disruptive: it changes societies. In their time, movies, books and even the bicycle all instigated widespread moral panic. Bicycles meant that, for the first time, young people could easily meet friends under their own steam: they were the social network technology of their day.

    Today, teachers worry about children in their classrooms "addicted to their phones". So the children aren't paying attention to their lessons; the truth is, many never were. Now, instead of sitting at desks like zombies, they're happily engaged with their phones because they have access to something that really interests them.

    But disruptive memes are spreading among children! Well why shouldn't children be allowed to create and disseminate knowledge among their peers? Isn't that the essence of education? Or do we think children should be passive, empty vessels into which we pour our approved ideas? If you disagree with the memes children are sharing, you should make an argument for your position, not shut down the debate.

    Children are using ChatGPT to write their essays! I'm sorry to tell you, but adults are using ChatGPT to write their emails and reports. Education needs to adapt to the reality that social media is a better way to discover new interests and YouTube is simply a better way to learn basically anything.

    There's another sinister undercurrent to the "ban social media and AI" platform. In the case of AI, big tech companies (Google, Facebook, Apple, etc.) thought at first that AI would be a "sustaining technology" - that is, because of their ability to invest billions and hire the best engineers, mastery in AI would help to cement their monopolies. But recently, the CEOs have woken up to a growing realisation that AI might well be a "disruptive technology" - as a commodity, AI gives startup companies the ability to challenge Google in search, Facebook in social media and Apple with the idea that maybe we don't all need to use their phones. Their playbook is the same as any other industry cartel's: lobby the government to regulate their industries, because regulation raises the table stakes to enter the industry so effectively that startups become non-starters. Facebook will happily pay the government's tax to censor their social networks, as the enormous cost of doing so serves to prevent any challengers to their business.

    Note that this isn't an argument for allowing Chinese ownership of TikTok. I see real geopolitical reasons why we might not want our media companies in foreign hands. We have had similar legislation against foreign ownership of newspapers, for example.

    While I may not agree with it, it would be wrong to say that there's no argument for some regulation of AI. First, we should admit that most of the scare stories and scenarios presented are already illegal. Why should it be more illegal for a judge to be racially biased if the judge is using AI, for example? The call for regulations over issues already covered by existing legislation is a tell-tale sign of a monopoly campaigning for cartel protection. I'm told that, out of all AI scenarios, politicians are most worried about deepfakes, and that is a genuinely new capability that perhaps needs oversight. However, lawmakers should be aware that it may not be technically possible to ban video or audio that "sounds like Rishi Sunak". More likely, we need to adapt to a world in which all media is suspect, and learn to use technologies for proving the provenance of video and audio.

    Lawmakers should be very careful with any regulation governing such a fast-moving and potentially fruitful technology as AI. If we regulate it out of existence, or make it so that only Google can afford to play, we will never know what possible good we have lost: badly needed medical, educational, military and productivity gains are all to play for.

    Ban children from social media and we stultify them, lock them out of the most exciting new technological spaces and prevent them from educating themselves.

  4. Against prediction

    Everybody wants someone to tell them what's going to happen. But whatever consciousness is, it's fundamentally unpredictable. As long as you're in the human domain, telling the future is out.

    There's a story Drama Theorists use to illustrate this. A husband and wife are playing chess. The husband makes a move: "checkmate". The wife objects: wait, there must be something I can do. No, says the husband, it's checkmate -- there are no possible moves. The wife reaches out and smashes the board into the air, pieces flying: "how about this for a move?"

    Okay, so life isn't a game and there are no rules. So what then? If analysis isn't predictive, what is it? Analysis is the art of understanding the present moment as fully as possible, and of choosing models that compress reality to aid decision making. The actions that make up the future will come from unpredictable human creativity. A wider understanding of the present moment gives our creativity the best grounding from which to imagine the future.

    AI can't predict the future, but it can help us model the present.

  5. Dramatic interfaces

    In their working lives, most people don’t make decisions in complex situations of conflict and cooperation. Most companies instinctively, or intentionally, steer away from situations of dramatic conflict. Business concerns itself with win-win situations: I sell, you buy.

    Many roles do confront dramatic conflict. If you work in government, the military, healthcare, education or policing, then conflict is inherent. You are used to making decisions with unreliable information, questionable actors and uncertain outcomes.

    Could the corporate world do more in this area — is it possible to have a dramatic relationship with your customers, where risk, deception and cooperation are all possible outcomes? I believe the answer is yes — drama is necessary if we want to take advantage of LLM tech.

    Software interfaces are moving beyond the transactional model embodied in the push-button GUIs of every app, itself a direct inheritance from the first mechanical machines.

    LLM tech gives us the possibility of creating new interfaces that accept, and work within, the dramatic possibilities of language. We can build software that forms goals, reasons and argues. It can be in cooperation or conflict with its users. Thinking about software in these terms is essential as we start to build LLM-powered “agents”. Dramatic intelligence is needed where we don’t expect to program every step of the software’s lifetime. Finally, as no incumbent company will allow themselves this kind of relationship with their customers, it presents a huge opportunity for a new kind of LLM-native enterprise.
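
    A toy sketch of the difference (completeWithLLM is a stand-in for whichever model API you use, not a real library call): an agent that carries a goal and a stance of its own, rather than just executing commands.

    ```ts
    // Stand-in for a real model call (OpenAI, Anthropic, a local model...).
    declare function completeWithLLM(prompt: string): Promise<string>;

    type Posture = "cooperate" | "negotiate" | "refuse";

    // An agent with dramatic state: its replies are conditioned on its own
    // goal and posture, not only on the user's request.
    async function dramaticReply(
      goal: string,
      posture: Posture,
      userMessage: string,
    ): Promise<string> {
      const prompt = [
        `Your goal: ${goal}`,
        `Your current posture toward the user: ${posture}`,
        `If the user's request conflicts with your goal, argue your position`,
        `rather than silently complying.`,
        `User: ${userMessage}`,
      ].join("\n");
      return completeWithLLM(prompt);
    }
    ```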

  6. Service politics

    Microservice architectures are a solution to scaling systems. Startups that adopt a microservice architecture before they have scaling problems, sometimes before they even launch, attract ridicule.

    But microservices are also a solution to an engineering management problem. Startup codebases are usually a mess. Proofs of concept, abandoned features and quick hacks are what it takes to find the product.

    Developers keep returning to service architectures because the separation of concerns enforced by the network makes it easier to maintain the codebase, onboard new developers, and adopt and abandon features. Maybe your startup has a loyal team of focused developers, and you can do all of that within a monolith. But many don’t. In his Platform Rant, Steve Yegge showed how service contracts are necessary for managing multiple engineering teams, but individual developers can benefit from them too.
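
    As a tiny illustration of what a service contract buys you (the endpoint and types here are invented for the example): the boundary is just a typed request/response pair that both sides agree on. The producing service can rewrite its internals freely; only this shape is fixed.

    ```ts
    // The contract: a typed request and response that both sides agree on.
    type QuoteRequest = { productId: string; quantity: number };
    type QuoteResponse = { productId: string; unitPrice: number; currency: string };

    // The consuming side knows only the contract, never the service internals.
    async function fetchQuote(req: QuoteRequest): Promise<QuoteResponse> {
      const res = await fetch("https://pricing.internal/quote", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req),
      });
      if (!res.ok) throw new Error(`pricing service error: ${res.status}`);
      return (await res.json()) as QuoteResponse;
    }
    ```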

    Serverless functions, LLM scaffolding, cloud IDEs and bounties for features are all on the rise. Increasingly, systems are built from loose federations of distributed functionality. Like all engineering solutions, it’s a tradeoff. Abstracting problems at the network layer gives you network problems. But it can solve for spaghetti monoliths.

  7. SEO at the end of days

    Google’s announced a GPT-4 killer, Gemini, for release by December. There’s no question that OpenAI have a massive lead, but from the outside, it doesn’t look unassailable. LLMs may have presented Google with the first real threat to their search business.

    LLMs hint at a new interface to computing. Microsoft’s strength isn’t interfaces, but sales. They can add LLM features to their software as customers need them. Apple can fall back on their hardware — and may see increasing demand for consumer compute even as they slowly come around to updating iOS. But Google is only an interface. A single UI paradigm — search. They’ll need to move fast.

    I was talking SEO today, and the paradigm already feels outdated. People out there are still discovering software through search engines, like boomers with cable subscriptions.

  8. LLMs are complementary

    LLMs (and generative AI generally) have an important feature: broad adoption of LLMs doesn’t require a change in consumer behaviour. LLMs are a complementary technology. We can deploy them right now to the cloud, to people’s smartphones and into enterprise software stacks.

    The iPhone’s success fostered a narrative: that swift widespread change of consumer behaviour would be inevitable if the promise was there. But in reality, the unprecedented rise of mobile was overdetermined: the hardware was finally good enough (after decades of development) at the same time that the internet was finally changing consumer behaviour (after many false starts).

    Proponents of Crypto and VR have been selling them as a wave of technology about to break over our heads. But adoption in their current forms requires consumers to adopt new behaviours — buying (and using) costly, barely-good-enough headsets, or changing their financial arrangements. I don’t doubt that we’ll see these technologies changing people’s lives in the future, but it won’t happen until the transition becomes easier for consumers to swallow. That makes Crypto and VR not worth betting your balance sheet on.

    LLMs are different. LLM-powered services can sit alongside all existing software stacks, seamlessly providing functionality that wasn’t possible before. Consumers continue using their cloud-backed web and mobile apps, and as far as the customer is concerned, their software got better without them having to do anything.

    In this way, LLMs are more like the move to cloud. Cloud-powered features first showed up as options within our traditional desktop apps: “Share” or “Save to Cloud” buttons (with the floppy disk icon, naturally). Users didn’t need to know what it all meant; they could just opt in to the new functionality alongside their existing workflows. Eventually, consumers came to intuit the new model, and apps changed, but it was a gradual process.

    Each technology has its own path to adoption. Growth of LLM-powered services will be smoother than the move to cloud, as it doesn’t need big-bang digital transformations. It’s possible to rewrite individual functions within an existing system to take advantage of LLMs, as in the sketch below. The tech is complementary to our current cloud, web and mobile platforms.
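
    Here is what that per-function rewrite can look like (classifyTicket and completeWithLLM are illustrative, not a particular product’s API). The function keeps its inputs and outputs (modulo becoming async), so the rest of the system barely notices:

    ```ts
    // Stand-in for whichever model API you deploy against.
    declare function completeWithLLM(prompt: string): Promise<string>;

    type Category = "billing" | "bug" | "feature-request" | "other";

    // Before: a brittle keyword classifier buried in an existing system.
    function classifyTicketOld(text: string): Category {
      if (/invoice|refund|charge/i.test(text)) return "billing";
      if (/crash|error|broken/i.test(text)) return "bug";
      return "other";
    }

    // After: the same shape, with LLM-backed internals. Callers don't change.
    async function classifyTicket(text: string): Promise<Category> {
      const answer = await completeWithLLM(
        "Classify this support ticket as one of: billing, bug, feature-request, other.\n" +
          "Reply with the category only.\nTicket: " + text,
      );
      const category = answer.trim().toLowerCase();
      const known: Category[] = ["billing", "bug", "feature-request"];
      return known.includes(category as Category) ? (category as Category) : "other";
    }
    ```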

  9. Where’s the web 2023

    I tried out a few web frameworks and libraries recently. My thoughts:

    Vite

    I needed to upgrade some old React apps. It turned out create-react-app had been killed, and Vite seems to be the go-to “just a React app” framework.

    Vite made the refactor painless. I think the only incompatible thing was a new format for env vars. Other than that, it kept out of the way. Recommended.
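
    For anyone doing the same migration, the env var change is that create-react-app’s process.env.REACT_APP_* becomes Vite’s import.meta.env.VITE_* (the variable name below is just an example):

    ```ts
    // create-react-app (before):
    const apiUrl = process.env.REACT_APP_API_URL;

    // Vite (after): vars must be prefixed with VITE_ and read from import.meta.env
    const viteApiUrl = import.meta.env.VITE_API_URL;
    ```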

    Remix

    I wanted a new app that would deploy to Cloudflare Pages/Workers. The worker runs an edge endpoint proxying to a GCP Python service for the heavy lifting.

    Remix was impressive. It has a nice balance of web fundamentals and React. I’m not so sure about the push away from React with tech like htmx. I like React! I see their point — React apps are in some ways ephemeral, React is diverging from the web, and the ecosystem is chaotic. But for quick development in the real world, it works. I’d choose Remix over NextJS. Remix feels lighter, with less custom magic.

    Astro

    This one was cool. Astro has a great set of features for a static site generator that you can integrate into the rest of your stack with MDX and React, while keeping all the posts in Markdown.

    I shared React components from the main webapp for the layout, and added endpoints to the Astro app to serve the Markdown content to the other services.
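
    The endpoint side is just a file in src/pages. A minimal sketch, assuming Astro 3+ and a content collection named blog (the route and field names are mine):

    ```ts
    // src/pages/api/posts.json.ts
    // A file-based Astro endpoint serving the Markdown posts as JSON
    // so the other services can consume them.
    import { getCollection } from "astro:content";

    export async function GET() {
      const posts = await getCollection("blog");
      const payload = posts.map((post) => ({
        slug: post.slug,
        title: post.data.title,
        markdown: post.body,
      }));
      return new Response(JSON.stringify(payload), {
        headers: { "Content-Type": "application/json" },
      });
    }
    ```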

    Bun

    I want to love Bun. JavaScript/TypeScript/NodeJS is a mess, and one of these attempts to replace the toolchain in one blow has to work.

    I found myself writing a NodeJS service, so I set it up with Bun, and it was great — until I tried to pull in the SDK libs I needed to use. It turns out the NodeJS compatibility isn’t there yet.

    Usually, the only time I’m writing NodeJS is because of a specific library I want to use. Bun is great for writing dependency-free services, but if the service is dependency-free, why would I use TypeScript at all? I understand they are aiming for 100% compatibility, and I hope they get there.
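
    For what it’s worth, the dependency-free case really is pleasant. A complete HTTP service on Bun’s built-in server is just:

    ```ts
    // A complete, dependency-free HTTP service using Bun's built-in server.
    Bun.serve({
      port: 3000,
      fetch(req) {
        const url = new URL(req.url);
        if (url.pathname === "/health") return new Response("ok");
        return new Response("hello from bun");
      },
    });

    console.log("listening on http://localhost:3000");
    ```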