About Making Shit Up As A Service
LLMs have become a pervasive part of our daily lives. For many of us they are now tools we use across multiple workstreams. As fast as things have moved up to now, the pace of change is only accelerating.
The problem underlying all of this is the level of trust people place in what is essentially a probabilistic engine. Tools with good product surfaces constrain the models more than enough to make them incredibly useful (coding assistants, for example).
Which leads us to a question: how do we make use of these models in a sensible and coherent manner? While I won’t be answering that question here, I will be sharing some examples of the more nonsensical pieces of content you can create with an LLM. Hopefully enough to encourage you to always think carefully about the answers you get when the subject is not one you already know.
Each week we will construct a piece of “thought leadership” content using one of the multitude of LLMs available to us. Why? Because it will sound plausible and authoritative, and yet have absolutely no meaning. The random mutterings of a stochastic parrot.
All of that said, and slightly less cynically: yes, I do use these tools every single day in my work. I am also in the lucky position that it’s part of my job to use them. Some my work pays for; some I pay for myself. I am regularly using:
- ChatGPT from OpenAI
- GitHub Copilot (part of my day job)
- Various models via the GitHub Models playground (Mistral, Cohere, DeepSeek, Llama)
- Microsoft Copilot within Office 365
- Anthropic Claude
- Google Gemini and NotebookLM
I use these tools for research (by far my biggest use case is getting a piece of research off the ground with a series of prompts), automation, summarization, and brainstorming, among other things. But I always treat the outputs as something I must check and validate.