<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/ai/</link><description>Recent content in AI on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 05 May 2026 05:30:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Your AI Workflow Doesn't Need Better Prompts. It Needs Less AI.</title><link>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</link><pubDate>Tue, 05 May 2026 05:30:00 +0000</pubDate><guid>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</guid><description>&lt;p>The first stage of AI work is prompting.&lt;/p>
&lt;p>The last stage is removing the model from most of the workflow.&lt;/p>
&lt;p>That sounds backwards.&lt;/p>
&lt;p>It is not.&lt;/p>
&lt;p>When a workflow is new, the LLM is useful because the work is still ambiguous. You are discovering what good looks like. You try a prompt, read the output, adjust the examples, change the tone, add constraints, and run it again.&lt;/p>
&lt;p>That is a good use of AI.&lt;/p></description></item><item><title>What Anthropic's April 23 Postmortem Reveals About Your Agent Harness</title><link>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</link><pubDate>Thu, 30 Apr 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/anthropic-three-bugs-every-agent-harness-ships/</guid><description>&lt;p>The April 23 Claude Code postmortem dropped last week. Three bugs, two months of degraded output, one usage-limit reset for every Pro subscriber.&lt;/p>
&lt;p>I read it twice. The second time I started writing notes for my own agent harness.&lt;/p>
&lt;p>It is unusually candid for a company at this scale, and it reads like a checklist of failure modes any team running production AI agents will eventually hit. Worth treating as a free engineering review.&lt;/p></description></item><item><title>95% of PII Redaction Doesn't Need an LLM. The Other 5% Does.</title><link>https://renezander.com/blog/pii-redaction-deterministic-vs-llm/</link><pubDate>Tue, 21 Apr 2026 10:00:00 +0000</pubDate><guid>https://renezander.com/blog/pii-redaction-deterministic-vs-llm/</guid><description>&lt;p>A VP at an SAP shop told me recently: &amp;ldquo;Every time we copy production to our lower environments, PII leaks. And no, we&amp;rsquo;re not throwing an LLM at it. That&amp;rsquo;s a thousand times the compute of what we already run.&amp;rdquo;&lt;/p>
&lt;p>He&amp;rsquo;s right.&lt;/p>
&lt;p>Most of the PII redaction problem in enterprise data isn&amp;rsquo;t a neural network problem. It&amp;rsquo;s a lookup table problem. And the incumbents already solve it. SAP TDMS, Delphix, Informatica, IBM InfoSphere Optim. All schema-aware. All row-level. All deterministic.&lt;/p></description></item><item><title>What llama.cpp's Pace Tells You About On-Prem LLM Readiness</title><link>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</link><pubDate>Tue, 14 Apr 2026 06:00:00 +0000</pubDate><guid>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</guid><description>&lt;p>Your team asked for GPU budget for self-hosted inference. You said &amp;ldquo;not yet&amp;rdquo; because last time you checked, the tooling wasn&amp;rsquo;t production-grade. That was true 18 months ago. It&amp;rsquo;s not true now, and the delay is costing you leverage you don&amp;rsquo;t know you&amp;rsquo;re losing.&lt;/p>
&lt;p>I&amp;rsquo;m writing this because most decision-makers I talk to are still running on an outdated mental model of what self-hosted LLM infrastructure looks like. The software moved. The org didn&amp;rsquo;t.&lt;/p></description></item><item><title>Claude API vs OpenAI API in 2026: Which One Ships Faster?</title><link>https://renezander.com/guides/claude-api-vs-openai-business-automation/</link><pubDate>Sat, 11 Apr 2026 07:00:00 +0000</pubDate><guid>https://renezander.com/guides/claude-api-vs-openai-business-automation/</guid><description>&lt;p>Claude API vs OpenAI for business automation in 2026: Claude produces cleaner structured output and follows instructions more literally, making it safer for extraction and content tasks. OpenAI ships parallel function calls, a wider model lineup, and more capacity at peak. Pick Claude when output quality matters; pick OpenAI when ecosystem and tool fan-out matter.&lt;/p>
&lt;p>I integrate LLM APIs into business automation systems. Content pipelines, document processing, customer communication flows, data enrichment. Not chatbots. Production systems that run unattended and need to work every time.&lt;/p></description></item><item><title>Your AI Content Tool Knows Your Strategy. Do You?</title><link>https://renezander.com/blog/your-ai-content-tool-knows-your-strategy-do-you-know-where-it-goes/</link><pubDate>Tue, 07 Apr 2026 06:00:00 +0000</pubDate><guid>https://renezander.com/blog/your-ai-content-tool-knows-your-strategy-do-you-know-where-it-goes/</guid><description>&lt;p>Your team is using AI for content. Everybody is. LinkedIn posts, blog drafts, internal comms, maybe some customer-facing copy too.&lt;/p>
&lt;p>And it works. The output is decent, the speed is real, nobody wants to go back to writing everything from scratch.&lt;/p>
&lt;p>But have you thought about what you are actually pasting into these tools?&lt;/p>
&lt;h2 id="the-prompt-is-the-product">The Prompt Is the Product&lt;/h2>
&lt;p>Every time someone on your team writes a prompt, they are feeding context into a system they do not control. Brand voice guidelines. Competitive positioning notes. Messaging frameworks. That internal strategy deck someone summarized into a prompt last Tuesday.&lt;/p></description></item><item><title>Spend Your Human Thinking Tokens Where They Compound</title><link>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</link><pubDate>Tue, 31 Mar 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</guid><description>&lt;p>More automations running. More agents deployed. More pipelines humming in the background.&lt;/p>
&lt;p>I run about a dozen automated jobs. Daily briefings, proposal generation, content pipelines, data syncing, monitoring alerts. They handle a lot.&lt;/p>
&lt;p>But the biggest improvement to my workflow this year wasn&amp;rsquo;t adding more automation. It was getting honest about where my thinking actually matters.&lt;/p>
&lt;h2 id="you-have-a-token-budget-too">You Have a Token Budget Too&lt;/h2>
&lt;p>LLMs have context windows. Feed in too much noise and the signal degrades. The output gets worse even though you gave it more to work with.&lt;/p></description></item><item><title>AI Skills Are the New Boilerplate: They Fix Nothing</title><link>https://renezander.com/blog/ai-skills-are-the-new-boilerplate-they-solve-almost-nothing/</link><pubDate>Tue, 24 Mar 2026 11:13:17 +0000</pubDate><guid>https://renezander.com/blog/ai-skills-are-the-new-boilerplate-they-solve-almost-nothing/</guid><description>&lt;p>Everyone&amp;rsquo;s sharing their skill libraries right now. &amp;ldquo;Here are my 20 custom slash commands.&amp;rdquo; &amp;ldquo;Check out my prompt template collection.&amp;rdquo; &amp;ldquo;This skill saves me 2 hours a day.&amp;rdquo;&lt;/p>
&lt;p>I use skills too. I have about a dozen. They handle cover letters, content pipelines, code review, commit messages. Repeatable workflows where the input and output are predictable.&lt;/p>
&lt;p>They cover maybe 10% of what my AI system actually does.&lt;/p>
&lt;p>The other 90% is the part nobody shares on social media because it&amp;rsquo;s ugly. It&amp;rsquo;s API integrations that break when headers change. It&amp;rsquo;s state management between sessions. It&amp;rsquo;s error handling for when the third-party service returns garbage. It&amp;rsquo;s monitoring that pages you at 6 AM because a cron failed. It&amp;rsquo;s human-in-the-loop workflows where the AI proposes and you approve before anything touches production.&lt;/p></description></item><item><title>How I Built a Business Email Agent with Compliance Controls in Go</title><link>https://renezander.com/blog/how-i-built-a-business-email-agent-with-compliance-controls-in-go/</link><pubDate>Sat, 21 Mar 2026 12:49:31 +0000</pubDate><guid>https://renezander.com/blog/how-i-built-a-business-email-agent-with-compliance-controls-in-go/</guid><description>&lt;p>Every few weeks another AI agent product launches that can &amp;ldquo;handle your email.&amp;rdquo; Dispatch, OpenClaw, and a dozen others promise to read, summarize, and reply on your behalf.&lt;/p>
&lt;p>They work fine for personal use. But the moment you try to use them for business operations, three problems show up:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>No spending controls.&lt;/strong> The agent calls an LLM as many times as it wants. You find out what it cost at the end of the month.&lt;/li>
&lt;li>&lt;strong>No approval flow.&lt;/strong> It either sends emails autonomously or it doesn&amp;rsquo;t. There&amp;rsquo;s no &amp;ldquo;show me the draft, let me approve it&amp;rdquo; step.&lt;/li>
&lt;li>&lt;strong>No audit trail.&lt;/strong> If a client asks &amp;ldquo;why did your system send me this?&amp;rdquo;, you have no answer.&lt;/li>
&lt;/ol>
&lt;p>I needed an email agent for my consulting business that could triage inbound mail, draft replies, and digest threads. But I also needed to explain every action it took to a client if asked. So I built one.&lt;/p></description></item><item><title>Your Vector Database Decision Is Simpler Than You Think</title><link>https://renezander.com/blog/your-vector-database-decision-is-simpler-than-you-think/</link><pubDate>Tue, 17 Mar 2026 07:41:59 +0000</pubDate><guid>https://renezander.com/blog/your-vector-database-decision-is-simpler-than-you-think/</guid><description>&lt;p>Every week someone asks which vector database they should use. The answer is almost always &amp;ldquo;it depends on three things,&amp;rdquo; and none of them are throughput benchmarks.&lt;/p>
&lt;p>I run semantic search in production on a single VPS. Over a thousand items indexed, embeddings generated on the same machine, queries return in under a second. But that setup only works because of the constraints I&amp;rsquo;m operating in. Change the constraints and the answer changes completely.&lt;/p></description></item><item><title>I Run 10 AI Agents in Production. They're All Bash Scripts.</title><link>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</link><pubDate>Thu, 12 Mar 2026 14:29:44 +0000</pubDate><guid>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</guid><description>&lt;p>A week ago I wrote about &lt;a href="https://dev.to/renezander030/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way-5c10">shipping AI agents the right way&lt;/a>. That piece was about the harness: quality gates, token economics, multi-model verification. The stuff that separates demos from production.&lt;/p>
&lt;p>It resonated with a lot of people. But I left out the part that actually eats most of my time: keeping the boring stuff running.&lt;/p></description></item><item><title>Lots Of People Are Demoing AI Agents. Almost Nobody's Shipping Them The Right Way.</title><link>https://renezander.com/blog/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way/</link><pubDate>Wed, 04 Mar 2026 10:56:24 +0000</pubDate><guid>https://renezander.com/blog/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way/</guid><description>&lt;p>Lots of people are demoing AI agents. Almost nobody&amp;rsquo;s shipping them the right way.&lt;/p>
&lt;p>So let me walk you through what production AI agents actually look like when the conference talk is over.&lt;/p></description></item><item><title>I Run 10 AI Agents in Production. They're All Bash Scripts.</title><link>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</link><pubDate>Thu, 12 Mar 2026 14:29:44 +0000</pubDate><guid>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</guid><description>&lt;p>A week ago I wrote about &lt;a href="https://dev.to/renezander030/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way-5c10">shipping AI agents the right way&lt;/a>. That piece was about the harness: quality gates, token economics, multi-model verification. The stuff that separates demos from production.&lt;/p>
&lt;p>Conference stages are packed with live demos of agents writing Terraform, spinning up Kubernetes clusters, and generating Helm charts on command. The audience claps. The tweet goes viral. And then&amp;hellip; nothing ships.&lt;/p>
&lt;p>Here&amp;rsquo;s the uncomfortable truth: the gap between &amp;ldquo;look what my agent can do&amp;rdquo; and &amp;ldquo;this runs in production every day&amp;rdquo; is enormous. I&amp;rsquo;ve been on both sides. I spent years as an Enterprise Architect watching organizations spin up AI pilots that never graduated. Now I run my own infrastructure with Claude as the core agent — not as a demo, not as a proof of concept, but as the actual engine that keeps things moving.&lt;/p></description></item></channel></rss>