<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Claude on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/claude/</link><description>Recent content in Claude on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 15 Apr 2026 08:00:00 +0200</lastBuildDate><atom:link href="https://renezander.com/tags/claude/index.xml" rel="self" type="application/rss+xml"/><item><title>LLM API Comparison 2026: Best API for Production</title><link>https://renezander.com/guides/llm-api-comparison/</link><pubDate>Wed, 15 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/llm-api-comparison/</guid><description>&lt;p&gt;I have five LLM providers wired into production code. Not in side projects. Real things I get paid to maintain. After two years of swapping between them, retrying failed calls at 3am, and debugging tool-use schemas, I have opinions.&lt;/p&gt;
&lt;p&gt;This is an LLM API comparison focused on what actually matters when you ship. Not benchmark leaderboards. Not marketing spec sheets. Features, SDK quality, failure modes, tool-use reliability, and whether the docs will waste your afternoon.&lt;/p&gt;</description></item><item><title>Telegram Bot with Claude API: Build an AI Assistant</title><link>https://renezander.com/blog/telegram-bot-claude-api/</link><pubDate>Tue, 14 Apr 2026 11:00:00 +0200</pubDate><guid>https://renezander.com/blog/telegram-bot-claude-api/</guid><description>&lt;p&gt;I have a Telegram bot wired to Claude Opus 4.7 that I talk to from anywhere. Train, couch, cafe, bed. It reads my TickTick tasks, writes code against my repos, runs shell commands on my VPS, and sends me a morning briefing at 06:30 Madrid time. The whole thing is a bash script and a systemd unit. No frontend. No hosting bill. No auth pages to build.&lt;/p&gt;
&lt;p&gt;This guide walks through exactly how to build one. Two architectures (bash long-polling and a TypeScript webhook server), full runnable code, attachment handling, MCP tool integration, and the security steps most tutorials skip. The primary stack is a Telegram bot wired to the Claude API, running on any Linux box with a few hundred megs of RAM.&lt;/p&gt;</description></item><item><title>MCP Servers Explained: What Model Context Protocol Does</title><link>https://renezander.com/blog/mcp-servers-explained/</link><pubDate>Mon, 13 Apr 2026 10:00:00 +0200</pubDate><guid>https://renezander.com/blog/mcp-servers-explained/</guid><description>&lt;p&gt;If you have heard &amp;ldquo;MCP&amp;rdquo; thrown around in the last year without getting a clear answer on what it is, here is the explainer that stays grounded in what it actually does, not what the marketing decks claim.&lt;/p&gt;
&lt;p&gt;I run an MCP server in production. It exposes my task manager to every LLM client I use, daily, as a systemd service on a Debian VPS. That constant use is where this explainer comes from. No conference slides, no speculation about where the protocol is heading in five years. Just what MCP is, what it replaces, and when you should bother writing one.&lt;/p&gt;</description></item><item><title>n8n AI Agent Workflow Examples: 5 Production Patterns</title><link>https://renezander.com/blog/n8n-ai-agent-workflows/</link><pubDate>Sun, 12 Apr 2026 14:00:00 +0200</pubDate><guid>https://renezander.com/blog/n8n-ai-agent-workflows/</guid><description>&lt;p&gt;I run n8n in production for content ops, email triage, and invoice parsing. The visual canvas is not the point. The point is that triggers, retries, queues, and credentials are free, and I can hand a workflow to a non-engineer to edit prompts without them breaking the integration layer.&lt;/p&gt;
&lt;p&gt;This post is five n8n AI agent workflow examples I actually ship or have shipped for clients. Each one includes the node graph, the Claude prompt, the cost per run, and the production gotchas. No toy demos.&lt;/p&gt;</description></item><item><title>Claude vs ChatGPT for Developers: A 2026 Practitioner Review</title><link>https://renezander.com/guides/claude-vs-chatgpt-developers/</link><pubDate>Sun, 12 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/claude-vs-chatgpt-developers/</guid><description>&lt;p&gt;I pay for both Claude Max and ChatGPT Pro. Both are open in my task bar right now. I default to Claude for coding and to ChatGPT for a narrower set of things. If you want the short answer: Claude wins for day-to-day engineering in 2026, ChatGPT wins for a handful of specific workflows, and the gap between them on the CLI agent side is wider than most people realise.&lt;/p&gt;</description></item><item><title>LLM API Cost Comparison 2026: Framework, Not a Stale Table</title><link>https://renezander.com/guides/llm-api-cost-comparison/</link><pubDate>Sat, 11 Apr 2026 13:00:00 +0200</pubDate><guid>https://renezander.com/guides/llm-api-cost-comparison/</guid><description>&lt;p&gt;Every LLM API cost comparison I see online has the same problem: it goes stale in two weeks. Providers drop a new tier, another one halves their output price, a reasoning model ships at triple the cost. By the time the post ranks on Google, the numbers are wrong and the rankings are meaningless.&lt;/p&gt;
&lt;p&gt;So this piece is not a table you check once. It is the framework I use to model LLM API pricing for my own production workloads, plus a snapshot of list prices as of April 2026, plus four realistic scenarios run through that framework. The scenarios are the point. Plug your own traffic into them, change the model, get a defensible monthly cost number.&lt;/p&gt;</description></item><item><title>Production AI Agent Architecture: Patterns That Actually Ship</title><link>https://renezander.com/guides/production-ai-agent-architecture/</link><pubDate>Fri, 10 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/production-ai-agent-architecture/</guid><description>&lt;p&gt;Most agent tutorials end at &amp;ldquo;the model calls tools in a loop, done.&amp;rdquo; That works for a demo. It falls apart the first time a tool 500s, a user asks something off-script, or the token bill crosses $20 on a single task. Production AI agent architecture is the set of patterns that keep that loop alive when reality hits.&lt;/p&gt;
&lt;p&gt;I run 10 agents in production right now. Bash scripts calling &lt;code&gt;claude -p&lt;/code&gt;, scheduled via systemd timers, reporting outcomes to Telegram. Not fancy. They ship work every day because the architecture around the loop is boring and deliberate. This guide is that playbook: the patterns, the must-haves, the anti-patterns, and the opinionated verdicts on what to use when.&lt;/p&gt;</description></item><item><title>GitHub Issue Management AI: Build Claude-Powered Triage That Works</title><link>https://renezander.com/blog/github-issue-management-ai/</link><pubDate>Thu, 09 Apr 2026 14:00:00 +0200</pubDate><guid>https://renezander.com/blog/github-issue-management-ai/</guid><description>&lt;p&gt;Maintainers do not ship software on Tuesday mornings. They triage. They read a new issue, check whether it is a duplicate of something filed three weeks ago, decide whether it is a bug or a question, pick a priority, add two or three labels, and sometimes write a polite comment asking for a repro. Then they do it again for the next issue in the queue. The job is pure admin, and on any active repo it eats real hours every week.&lt;/p&gt;</description></item><item><title>ChatGPT vs Claude Vergleich: Welche KI für DACH-Teams 2026</title><link>https://renezander.com/guides/chatgpt-vs-claude-vergleich/</link><pubDate>Fri, 03 Apr 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/guides/chatgpt-vs-claude-vergleich/</guid><description>&lt;p&gt;ChatGPT and Claude are the two serious options in 2026 if you want to put AI to productive use in a DACH company. Everything else is niche, research, or build-it-yourself. The question I hear most often from clients is &amp;ldquo;which one should I pick?&amp;rdquo;. It is the wrong question.&lt;/p&gt;
&lt;p&gt;The right question is &amp;ldquo;which model for which workload?&amp;rdquo;. A reasoning-heavy agent with multi-step tool calls has different requirements than a support chatbot under latency pressure or a bulk classifier that processes millions of tickets per month. If you commit to exactly one provider without separating the workloads, you either pay too much or get worse results than necessary.&lt;/p&gt;</description></item><item><title>RAG Pipeline Tutorial: Build a Production Document Q&amp;A System with Qdrant and Claude</title><link>https://renezander.com/blog/rag-pipeline-tutorial/</link><pubDate>Wed, 01 Apr 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/blog/rag-pipeline-tutorial/</guid><description>&lt;p&gt;Most RAG tutorials ship a toy. You paste a PDF, it answers one question, and the moment you point it at 500 documents the retrieval goes sideways and Claude hallucinates half the citations. This one is the opposite. I am going to walk through the pipeline I actually run in production, line by line, with the tradeoffs called out where they bit me.&lt;/p&gt;
&lt;p&gt;The verdict first. If your corpus is under 200k tokens and rarely changes, skip RAG and stuff it all into Claude&amp;rsquo;s context window. If your corpus is larger, changes often, or you need hard citations, follow this RAG pipeline tutorial end to end with Qdrant, a local embedding model, and Claude Sonnet 4.6. That is the sweet spot for cost and quality in 2026.&lt;/p&gt;</description></item><item><title>Build MCP Server TypeScript: Complete Tutorial with Claude</title><link>https://renezander.com/blog/build-mcp-server-typescript/</link><pubDate>Mon, 30 Mar 2026 11:00:00 +0100</pubDate><guid>https://renezander.com/blog/build-mcp-server-typescript/</guid><description>&lt;p&gt;Most teams do not need a custom MCP server. If you have one LLM app, one integration, and one codebase, calling the vendor API directly is faster to ship and easier to debug. The moment you have two Claude surfaces (Claude Desktop plus Claude Code, or Claude Code plus Cursor) hitting the same internal system, you stop duplicating tool code. That is when an MCP server in TypeScript becomes worth building and maintaining.&lt;/p&gt;</description></item></channel></rss>