<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Production on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/production/</link><description>Recent content in Production on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 30 Apr 2026 08:00:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/production/index.xml" rel="self" type="application/rss+xml"/><item><title>What Anthropic's April 23 Postmortem Reveals About Your Agent Harness</title><link>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</link><pubDate>Thu, 30 Apr 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</guid><description>&lt;p>The April 23 Claude Code postmortem dropped last week. Three bugs, two months of degraded output, one usage-limit reset for every Pro subscriber.&lt;/p>
&lt;p>I read it twice. The second time I started writing notes for my own agent harness.&lt;/p>
&lt;p>It is unusually candid for a company at this scale, and it reads like a checklist of failure modes any team running production AI agents will eventually hit. Worth treating as a free engineering review.&lt;/p></description></item><item><title>How to Choose an LLM for Production: 7 Criteria That Matter</title><link>https://renezander.com/guides/how-to-choose-llm-for-production/</link><pubDate>Fri, 17 Apr 2026 07:00:00 +0200</pubDate><guid>https://renezander.com/guides/how-to-choose-llm-for-production/</guid><description>&lt;p>Most teams pick an LLM for production the wrong way. They read a leaderboard, pick the top model, and wire it into an endpoint. Six weeks later they hit a rate limit during a traffic spike, or a compliance reviewer asks where EU data is processed, or the p99 latency kills a user-facing flow. Then the real selection work starts, under pressure, in production.&lt;/p>
&lt;p>This guide shows how to choose an LLM for production the right way, before any of that happens. I run AI agents and LLM-backed automations for DACH clients, and every production deployment I&amp;rsquo;ve shipped went through the same seven-criteria filter. The order matters. Skip one and you will find out later, usually on a weekend.&lt;/p></description></item><item><title>Production AI Agent Architecture: Patterns That Actually Ship</title><link>https://renezander.com/guides/production-ai-agent-architecture/</link><pubDate>Fri, 10 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/production-ai-agent-architecture/</guid><description>&lt;p>Most agent tutorials end at &amp;ldquo;the model calls tools in a loop, done.&amp;rdquo; That works for a demo. It falls apart the first time a tool 500s, a user asks something off-script, or the token bill crosses $20 on a single task. Production AI agent architecture is the set of patterns that keep that loop alive when reality hits.&lt;/p>
&lt;p>I run 10 agents in production right now. Bash scripts calling &lt;code>claude -p&lt;/code>, scheduled via systemd timers, reporting outcomes to Telegram. Not fancy. They ship work every day because the architecture around the loop is boring and deliberate. This guide is that playbook: the patterns, the must-haves, the anti-patterns, and the opinionated verdicts on what to use when.&lt;/p></description></item></channel></rss>