<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Claude-Code on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/claude-code/</link><description>Recent content in Claude-Code on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 02 May 2026 08:00:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/claude-code/index.xml" rel="self" type="application/rss+xml"/><item><title>Agentic Knowledge Base — Karpathy's LLM wiki, with adapters</title><link>https://renezander.com/blog/agentic-knowledge-base/</link><pubDate>Sat, 02 May 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/agentic-knowledge-base/</guid><description>&lt;p>When Karpathy&amp;rsquo;s &lt;a href="https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f">LLM Wiki&lt;/a> post landed, I already had semantic search over my TickTick — qdrant for the vector store, nomic-embed-text via ollama for embeddings, a daily cron to keep the index fresh, the works. The agent-side retrieval wasn&amp;rsquo;t the missing piece.&lt;/p>
&lt;p>What was missing was the &lt;em>structure&lt;/em>. Karpathy&amp;rsquo;s framing — designate a wiki, write notes for an LLM reader, lean on retrieval instead of taxonomy — surfaced the parts of my setup that didn&amp;rsquo;t have shape yet: where durable knowledge lives versus ephemeral tasks, how agents pull structured data out of notes humans wrote, why my existing semantic search sometimes returned the right answer and sometimes returned nothing useful.&lt;/p></description></item><item><title>What Anthropic's April 23 Postmortem Reveals About Your Agent Harness</title><link>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</link><pubDate>Thu, 30 Apr 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</guid><description>&lt;p>The April 23 Claude Code postmortem dropped last week. Three bugs, two months of degraded output, one usage-limit reset for every Pro subscriber.&lt;/p>
&lt;p>I read it twice. The second time I started writing notes for my own agent harness.&lt;/p>
&lt;p>It is unusually candid for a company at this scale, and it reads like a checklist of failure modes any team running production AI agents will eventually hit. Worth treating as a free engineering review.&lt;/p></description></item><item><title>Claude Code with Local LLMs and ANTHROPIC_BASE_URL: Ollama, LM Studio, llama.cpp, vLLM</title><link>https://renezander.com/guides/claude-code-local-llm-anthropic-base-url/</link><pubDate>Wed, 29 Apr 2026 07:00:00 +0200</pubDate><guid>https://renezander.com/guides/claude-code-local-llm-anthropic-base-url/</guid><description>&lt;p>&lt;em>Native Anthropic endpoints, tool-call compatibility, and context-window sizing for local Claude Code.&lt;/em>&lt;/p>
&lt;p>&lt;em>Last tested: April 2026. See Changelog at the bottom.&lt;/em>&lt;/p>
&lt;h2 id="tldr-cheat-sheet">TL;DR cheat sheet&lt;/h2>
&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Goal&lt;/th>
 &lt;th>Use&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>MacBook Air&lt;/td>
 &lt;td>Gemma 4 26B-A4B Q4, &lt;strong>32K context&lt;/strong>, LM Studio or Ollama&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>MacBook Pro&lt;/td>
 &lt;td>Gemma 4 26B-A4B Q4 / UD-Q4, &lt;strong>64K context&lt;/strong>, llama.cpp or LM Studio&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Claude Code minimum&lt;/td>
 &lt;td>&lt;strong>32K context&lt;/strong> (anything below is a chat demo)&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Best local backend&lt;/td>
 &lt;td>LM Studio or Ollama first; llama.cpp for advanced setups; vLLM for servers&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>Avoid&lt;/td>
 &lt;td>8K / 16K context, dense 31B Gemma 4 on 32 GB machines, old llama.cpp builds&lt;/td>
 &lt;/tr>
 &lt;/tbody>
&lt;/table>
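&lt;p>&lt;em>A minimal sketch of the base-URL override in the title. The port assumes Ollama&amp;rsquo;s default (11434), and the model alias is a placeholder, not a recommendation:&lt;/em>&lt;/p>
&lt;pre>&lt;code># Point the claude CLI at a local Anthropic-compatible server.
export ANTHROPIC_BASE_URL="http://localhost:11434"  # Ollama default port (assumption)
export ANTHROPIC_MODEL="local-model"                # hypothetical model alias
claude
&lt;/code>&lt;/pre>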
&lt;h2 id="the-local-claude-code-rule-of-thumb">The local-Claude-Code rule of thumb&lt;/h2>
&lt;p>Three things decide whether a local Claude Code session works:&lt;/p></description></item><item><title>Claude vs ChatGPT for Developers: A 2026 Practitioner Review</title><link>https://renezander.com/guides/claude-vs-chatgpt-developers/</link><pubDate>Sun, 12 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/claude-vs-chatgpt-developers/</guid><description>&lt;p>I pay for both Claude Max and ChatGPT Pro. Both are open in my task bar right now. I default to Claude for coding and to ChatGPT for a narrower set of things. If you want the short answer: Claude wins for day-to-day engineering in 2026, ChatGPT wins for a handful of specific workflows, and the gap between them on the CLI agent side is wider than most people realise.&lt;/p></description></item><item><title>Claude Code SDK Agents: Build Production Agents Without the Loop</title><link>https://renezander.com/blog/claude-code-sdk-agents/</link><pubDate>Wed, 01 Apr 2026 12:00:00 +0200</pubDate><guid>https://renezander.com/blog/claude-code-sdk-agents/</guid><description>&lt;p>Most &amp;ldquo;build an agent with Claude&amp;rdquo; tutorials hand you a while-loop around &lt;code>client.messages.create&lt;/code>, a hand-rolled tool dispatcher, and a promise that you&amp;rsquo;ll wire up file reads and shell execution yourself. That works. It also means you spend two weeks rebuilding the same plumbing that Claude Code already ships with.&lt;/p>
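&lt;p>&lt;em>The loop those tutorials hand you looks roughly like this (a sketch using the official &lt;code>anthropic&lt;/code> Python SDK; &lt;code>TOOLS&lt;/code> and the model name are placeholders):&lt;/em>&lt;/p>
&lt;pre>&lt;code>import anthropic

client = anthropic.Anthropic()
messages = [{"role": "user", "content": "Fix the failing test"}]
while True:
    resp = client.messages.create(model="claude-sonnet-4-5", max_tokens=1024,
                                  messages=messages, tools=TOOLS)
    if resp.stop_reason != "tool_use":
        break
    # ...dispatch each tool_use block yourself, append the results, loop again
&lt;/code>&lt;/pre>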
&lt;p>The Claude Code SDK, sometimes called the Claude Agent SDK, is the shortcut. Same runtime as the &lt;code>claude&lt;/code> CLI, exposed as a library in TypeScript and Python, plus a print mode you can call from a bash cron job. You get file tools, bash, MCP client, subagents, hooks, and permission modes without writing any of it.&lt;/p></description></item></channel></rss>