What Would Vin Claudel Do?
I had my agents scour the leaked Claude Code source and extract the patterns it uses. Then I built a tool that does semantic search over those patterns. This is the article introducing that tool.
Try It Yourself
npx wwvcd "hallucination" --json

You are building an AI agent. It hallucinates. It gets stuck in a loop. It backgrounds bash processes incorrectly. It doesn’t know the exact timeout before auto-backgrounding kicks in, the circuit breaker that prevents infinite retry loops, or the exact retry strategy for a 529 overload.
The Claude Code team solved all of this. The answers are in their source code. This database extracts those answers - with exact constants, TypeScript interfaces, and Rust implementations - so your agents can query them directly.
Whenever your agent gets stuck, ask: What would Vin Claudel do?
He would drive a 1970 Dodge Charger off a moving cargo plane. We are building AI agents. So the next best thing is to look up exactly how the Claude Code team solved it.
When your AI agent gets stuck in an infinite loop, don’t guess. Instantly search 1,166 exact code snippets and constants used by state-of-the-art coding assistants.
What It Is: A Database of Hard Technical Truths
I wanted the hard technical truth of how state-of-the-art coding agents operate. So, I took a clean-room implementation of Anthropic’s Claude Code and launched an adversarial AI swarm over the TypeScript source.
The original extraction run gave me 3,200 findings. But they were terrible. The LLM summarized the code into fluffy blog prose like, “BashTool automatically backgrounds tasks after a certain time.” That is useless to a developer.
So, I nuked it. I built a new extraction pipeline that intentionally stripped out the AI “fluff” and preserved the exact technical precision. I forced it to pull exact millisecond constants, actual TypeScript interfaces, tool schemas, and raw code evidence.
The result is WWVCD, a global NPM package and open-source database of 1,166 deep technical architectural patterns.
How It Works
WWVCD is a zero-dependency CLI tool. You don’t need to install it permanently. You just open your terminal and type your problem:
npx wwvcd "bash background timeout" --json

Under the hood, it doesn’t just do a dumb string search. It splits SCREAMING_SNAKE_CASE tokens, expands semantic concepts (mapping “hallucination” to “fabrication”, “grounding”, and “false claim”), strips out English stopwords, and scores the results. It instantly returns the exact architectural blueprint for your problem, complete with the source file context and verbatim code snippets.
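That pipeline can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not WWVCD’s actual source; the stopword list, synonym map, and scoring function are stand-ins:

```typescript
// Sketch of a token-based semantic scorer in the spirit described above.
// The stopword list, synonym map, and scoring are illustrative, not WWVCD's.

const STOPWORDS = new Set(["the", "a", "an", "of", "to", "it", "is", "in"]);

const SYNONYMS: Record<string, string[]> = {
  hallucination: ["fabrication", "grounding", "false claim"],
};

// Split SCREAMING_SNAKE_CASE and camelCase into lowercase tokens.
function tokenize(text: string): string[] {
  return text
    .replace(/([a-z])([A-Z])/g, "$1 $2") // camelCase -> camel Case
    .split(/[^A-Za-z0-9]+/)              // underscores, spaces, punctuation
    .map((t) => t.toLowerCase())
    .filter((t) => t.length > 0 && !STOPWORDS.has(t));
}

// Expand each query token with its semantic neighbours.
function expand(tokens: string[]): Set<string> {
  const vocab = new Set<string>(tokens);
  for (const t of tokens) {
    for (const synonym of SYNONYMS[t] ?? []) {
      for (const word of tokenize(synonym)) vocab.add(word);
    }
  }
  return vocab;
}

// Score a document by overlap with the expanded query vocabulary.
function score(query: string, doc: string): number {
  const vocab = expand(tokenize(query));
  let hits = 0;
  for (const token of tokenize(doc)) if (vocab.has(token)) hits++;
  return hits;
}
```

With this sketch, a query for "hallucination" matches a pattern that only mentions FABRICATION, because the synonym expansion bridges the vocabulary gap.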
Why This Is Useful (Real Examples)
When you are building an AI agent, you don’t need high-level philosophy. You need to know exactly how to architect the tool boundaries. Here is how WWVCD actually helps you build better software.
Example 1: The “Long-Running Bash Command” Problem
The Problem: You give your agent a bash tool. It runs npm install. The command takes 2 minutes. Your agent’s main evaluation loop is blocked, the user thinks the app is frozen, and if the output is too long, the context window explodes.
What people usually try: Tell the prompt, “Run commands asynchronously.” (This fails).
What Vin Claudel does (npx wwvcd "bash background"):
BashTool Timeout and Background Thresholds
BashTool auto-backgrounds long-running commands to prevent blocking the main loop. ASSISTANT_BLOCKING_BUDGET_MS controls how long the model waits before backgrounding. Commands exceeding the budget get backgrounded: output is captured to a TaskOutput and the model is notified it can continue. The CircularBuffer in TaskOutput auto-evicts the oldest output lines when full. Sandboxing by platform: macOS uses sandbox-exec (Seatbelt), Linux uses bwrap.
The Takeaway: This gives you the exact architectural blueprint. You don’t just “run it async.” You set a hard millisecond budget. You wrap the output in a CircularBuffer so a runaway log doesn’t nuke your LLM context window. You use platform-specific sandboxing. This is actionable engineering.
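The pattern above can be sketched in TypeScript. ASSISTANT_BLOCKING_BUDGET_MS is named in the extracted pattern, but its value here is made up, and the buffer and runner are an illustrative reconstruction, not Claude Code’s actual implementation:

```typescript
// Sketch: give a command a hard millisecond blocking budget, and cap its
// captured output with a circular buffer so a runaway log cannot flood the
// context window. The constant's value and all implementation details here
// are illustrative assumptions.

const ASSISTANT_BLOCKING_BUDGET_MS = 2_000; // illustrative value only

class CircularBuffer {
  private lines: string[] = [];
  constructor(private readonly capacity: number) {}

  push(line: string): void {
    this.lines.push(line);
    if (this.lines.length > this.capacity) this.lines.shift(); // evict oldest
  }

  snapshot(): string[] {
    return [...this.lines];
  }
}

type RunResult<T> =
  | { status: "done"; value: T }
  | { status: "backgrounded"; handle: Promise<T> };

// Block on `task` for at most `budgetMs`; past the budget, return immediately
// with a handle the agent loop can poll later instead of freezing.
async function runWithBudget<T>(
  task: Promise<T>,
  budgetMs: number = ASSISTANT_BLOCKING_BUDGET_MS,
): Promise<RunResult<T>> {
  const timeout = new Promise<"timeout">((resolve) =>
    setTimeout(resolve, budgetMs, "timeout"),
  );
  const winner = await Promise.race([task.then((value) => ({ value })), timeout]);
  if (winner === "timeout") return { status: "backgrounded", handle: task };
  return { status: "done", value: winner.value };
}
```

The pattern described above goes further: the model is explicitly notified when a task is backgrounded, and execution is sandboxed per platform (sandbox-exec on macOS, bwrap on Linux).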
Example 2: The “Evaluator Hallucination” Problem
The Problem: You build a “Judge” agent to check the work of your “Coder” agent. But the Judge agent looks at a 90% correct file, gets seduced by how plausible it looks, and just rubber-stamps it as “PASS”, hallucinating that the final 10% is correct.
What people usually try: Add “BE VERY STRICT AND DO NOT LIE” to the system prompt. (This fails).
What Vin Claudel does (npx wwvcd "evaluation hallucination"):
The CLI instantly surfaces two Core Structural Strategies:
1. Structural Evidence: Make evidence a structural requirement of the output format. A claim without an exact Source Quote block or a Command Run output is not verified - it is automatically considered a FABRICATION.
2. The Read-Only Judge: Evaluation agents MUST be architecturally decoupled from generation. Remove their ability to write or edit files (FILE_WRITE, FILE_EDIT). If they have the tools to fix the code, they will be seduced by plausibility and hallucinate fixes.
The Takeaway: You fix hallucinations with architecture, not begging. You strip the Judge agent of all write permissions so it is forced to be adversarial. You force the output JSON schema to require a verbatim evidence_quote field. If the quote is empty, the run fails.
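A minimal sketch of that structural-evidence check in TypeScript. The schema shape and the `checkVerdict` helper are hypothetical illustrations of the rule, not an extracted implementation:

```typescript
// Sketch: make evidence a structural requirement of the judge's output.
// A verdict whose evidence_quote is empty, or does not appear verbatim in
// the source under review, is rejected as a FABRICATION rather than trusted.
// The field names and helper here are illustrative assumptions.

interface JudgeVerdict {
  verdict: "PASS" | "FAIL";
  claim: string;
  evidence_quote: string; // must be a verbatim quote from the reviewed source
}

type Checked =
  | { ok: true; verdict: JudgeVerdict }
  | { ok: false; reason: "FABRICATION" };

function checkVerdict(v: JudgeVerdict, sourceText: string): Checked {
  const quote = v.evidence_quote.trim();
  // Empty evidence, or evidence not found verbatim, fails the run outright.
  if (quote.length === 0 || !sourceText.includes(quote)) {
    return { ok: false, reason: "FABRICATION" };
  }
  return { ok: true, verdict: v };
}
```

Note that the judge in this sketch has no write path at all: it can only emit a verdict, and even that verdict is rejected unless the evidence survives a mechanical check.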
Stop Renting, Start Architecting
Building reliable, autonomous AI agents is about defensive, structural prompt engineering. Vin Claudel doesn’t hallucinate. He executes. Because it’s about family. And living life one token at a time.
If your agent is stuck, don’t guess. Look up exactly how the best in the world solved it.
Try It Yourself
npx wwvcd "hallucination" --json

or

npx wwvcd "bash background timeout"

Get the code, the database, and the patterns here:


