Series
The AI Field Guide
Large language models are the loudest part of AI, not the only part. This series covers the rest of the field -- diffusion, encoder-only and encoder-decoder transformers, classical NLP, search and planning, logic, constraints, probabilistic reasoning -- so that picking the right tool for a job stops being a guessing game. Part of Under the Hood.
Under the Hood · The AI Field Guide
How LLMs Actually Work
Tokens, transformers, attention, and the training pipeline -- what large language models actually do when they 'predict the next token', why they hallucinate, and why they're so good at code.
To LLMs… and Beyond!
LLMs are one corner of a much larger field. Diffusion models, reasoning models, multimodal systems, open-weight vs closed -- what they are, how they differ, and how to choose.
The Other Transformers
BERT and T5 are transformers too, but they aren't trying to be ChatGPT. They're trying to be the boring layer underneath -- classifiers, embeddings, structured transformations -- and they're often a better answer than an LLM.
The Reranker You Didn't Know You Needed
Most RAG explanations stop at 'embed the query, look up the nearest documents, hand them to the LLM.' That's the demo. In production, there's a second pass between the lookup and the LLM -- reranking -- and it's the one that actually makes retrieval work.
Coming soon