Artificial Intelligence · Notes


Fri, 23 Aug 2024 07:55:52 GMT
RAG is conceptually simple

RAG boils down to 5 steps:

  1. Create a representation of all the possible information (text) you’d like to be considered for your question (info-representation)
  2. Create a representation of the question being asked (question-representation)
  3. Find the top N info-representations most similar to your question-representation
  4. Feed all of the information (text) from the top N representations into your LLM of choice (e.g., OpenAI GPT-4o) along with the question
  5. Voilà! Your model will give you an answer grounded in the context you’ve added

It could almost be called “Expand your LLM prompt with more context”.
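
To make the five steps concrete, here is a minimal sketch in Python. `embed`, `llm_complete`, and `rag_answer` are placeholder names for illustration, not any particular library; swap in whatever embedding model and LLM API you actually use.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector representation of `text` (your embedding model goes here)."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Placeholder: call your LLM of choice with `prompt` (your LLM API goes here)."""
    raise NotImplementedError

def rag_answer(question: str, documents: list[str], top_n: int = 3) -> str:
    # Step 1: info-representations for every candidate document
    doc_vectors = [embed(doc) for doc in documents]
    # Step 2: question-representation
    q_vector = embed(question)
    # Step 3: top N documents by cosine similarity to the question
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(zip(documents, doc_vectors),
                    key=lambda pair: cosine(q_vector, pair[1]),
                    reverse=True)
    context = "\n\n".join(doc for doc, _ in ranked[:top_n])
    # Steps 4-5: expand the prompt with the retrieved context and ask
    prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)
```

In other words, all of the retrieval machinery exists to build a better prompt.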

Mon, 02 Dec 2024 11:51:44 GMT
ChatGPT at age two

ChatGPT and systems like it, what they are going to do right now is they're going to drive the cost of producing text and images and code and, soon enough, video and audio to basically zero. It's all going to look and sound and read very, very convincing. That is what these systems are learning to do. They are learning how to be convincing. They are learning how to sound and seem human. But they have no actual idea what they are saying or doing. It is content that has no real relationship to the truth.

So what does it mean to drive the cost of nonsense to zero, even as we massively increase the scale and persuasiveness and flexibility at which it can be produced?

Sat, 15 Feb 2025 06:55:39 GMT
Intension vs. extension

In philosophy, a distinction is made between intension and extension. The intension of something is basically the abstract meaning, like "even number." The extension is the list of all the even numbers. And neural networks basically work at the extensional level, but they don't work at the intensional level. They are not getting the abstract meaning of anything.
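
One way to make the distinction concrete (my gloss, not the original speaker's), in a short Python sketch:

```python
# Intension: the abstract rule or meaning of "even number".
def is_even(n: int) -> bool:
    return n % 2 == 0

# Extension: the list of things the rule picks out (truncated here to 0..10).
evens_up_to_10 = [n for n in range(11) if is_even(n)]  # [0, 2, 4, 6, 8, 10]

# The claim in the note: neural networks behave more like systems that have
# absorbed the list than like systems that grasp the rule behind it.
```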

Mon, 05 Jan 2026 12:16:38 GMT
What breaks first when you try to build real world AI agents

I've been working with AI agents outside of demos and toy tasks, and a pattern keeps repeating: the first thing to break is rarely model quality. A few failure modes showed up almost immediately.

  1. The biggest one was memory. Long-term memory sounds clean on paper, but in practice it drifts. Old assumptions leak into new tasks, context gets overweighted, and agents become confidently wrong in ways that are hard to debug. Resetting memory often improved results more than adding more of it.
  2. Tools were the second problem. Most agent architectures assume tools are deterministic and cheap. They aren't. APIs fail, return partial data, change formats, or time out. Agents don't just need tools, they need strategies for tool failure, retries, and graceful degradation (see the sketch after this list).
  3. Evaluation broke next. Benchmarks didn't help much once tasks became multi-step and open-ended. We tried success heuristics, human review, and partial credit scoring. None were satisfying. Measuring "did this agent actually help" turned out to be far harder than measuring accuracy.
  4. Cost and latency quietly limited everything. An agent that feels smart at 10 dollars per task or 30 seconds per response is unusable in most real systems. Optimizing prompts and models mattered less than reducing unnecessary reasoning steps.
  5. Finally, trust degraded faster than expected. Once an agent makes a confident but wrong decision, users mentally downgrade it. Recovering that trust is much harder than preventing the failure in the first place.
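
As one illustration of point 2, here is a minimal sketch of the kind of wrapper that helped: retries with exponential backoff, then graceful degradation when a tool keeps failing. `call_tool` and `ToolResult` are hypothetical names, not part of any specific agent framework.

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolResult:
    ok: bool
    data: Optional[dict] = None
    error: Optional[str] = None

def call_with_retries(
    call_tool: Callable[[], ToolResult],
    max_attempts: int = 3,
    base_delay: float = 1.0,
) -> ToolResult:
    """Retry a flaky tool call, backing off between attempts."""
    last_error = "unknown error"
    for attempt in range(max_attempts):
        try:
            result = call_tool()
            if result.ok:
                return result
            last_error = result.error or "tool returned not-ok"
        except Exception as exc:  # timeouts, format changes, partial data
            last_error = str(exc)
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    # Graceful degradation: surface the failure so the agent (or a human)
    # can plan around the missing data instead of hallucinating it.
    return ToolResult(ok=False, error=f"gave up after {max_attempts} attempts: {last_error}")
```

The point is less the specific retry policy than that tool failure is an expected path the agent has to reason about, not an exception to swallow.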

The main lesson so far is that building useful agents feels more like distributed systems work than model tuning. Failure handling, observability, and clear contracts matter more than clever prompting.