<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Programming on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/programming/</link><description>Recent content in Programming on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 05 May 2026 05:30:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/programming/index.xml" rel="self" type="application/rss+xml"/><item><title>Your AI Workflow Doesn't Need Better Prompts. It Needs Less AI.</title><link>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</link><pubDate>Tue, 05 May 2026 05:30:00 +0000</pubDate><guid>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</guid><description>&lt;p>The first stage of AI work is prompting.&lt;/p>
&lt;p>The last stage is removing the model from most of the workflow.&lt;/p>
&lt;p>That sounds backwards.&lt;/p>
&lt;p>It is not.&lt;/p>
&lt;p>When a workflow is new, the LLM is useful because the work is still ambiguous. You are discovering what good looks like. You try a prompt, read the output, adjust the examples, change the tone, add constraints, and run it again.&lt;/p>
&lt;p>That is a good use of AI.&lt;/p></description></item><item><title>What llama.cpp's Pace Tells You About On-Prem LLM Readiness</title><link>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</link><pubDate>Tue, 14 Apr 2026 06:00:00 +0000</pubDate><guid>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</guid><description>&lt;p>Your team asked for GPU budget for self-hosted inference. You said &amp;ldquo;not yet&amp;rdquo; because last time you checked, the tooling wasn&amp;rsquo;t production-grade. That was true 18 months ago. It&amp;rsquo;s not true now, and the delay is costing you leverage you don&amp;rsquo;t know you&amp;rsquo;re losing.&lt;/p>
&lt;p>I&amp;rsquo;m writing this because most decision-makers I talk to are still running on an outdated mental model of what self-hosted LLM infrastructure looks like. The software moved. The org didn&amp;rsquo;t.&lt;/p></description></item><item><title>Your AI Content Tool Knows Your Strategy. Do You?</title><link>https://renezander.com/blog/your-ai-content-tool-knows-your-strategy-do-you-know-where-it-goes/</link><pubDate>Tue, 07 Apr 2026 06:00:00 +0000</pubDate><guid>https://renezander.com/blog/your-ai-content-tool-knows-your-strategy-do-you-know-where-it-goes/</guid><description>&lt;p>Your team is using AI for content. Everybody is. LinkedIn posts, blog drafts, internal comms, maybe some customer-facing copy too.&lt;/p>
&lt;p>And it works. The output is decent, the speed is real, and nobody wants to go back to writing everything from scratch.&lt;/p>
&lt;p>But have you thought about what you are actually pasting into these tools?&lt;/p>
&lt;h2 id="the-prompt-is-the-product">The Prompt Is the Product&lt;/h2>
&lt;p>Every time someone on your team writes a prompt, they are feeding context into a system they do not control. Brand voice guidelines. Competitive positioning notes. Messaging frameworks. That internal strategy deck someone summarized into a prompt last Tuesday.&lt;/p></description></item><item><title>Spend Your Human Thinking Tokens Where They Compound</title><link>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</link><pubDate>Tue, 31 Mar 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</guid><description>&lt;p>More automations running. More agents deployed. More pipelines humming in the background.&lt;/p>
&lt;p>I run about a dozen automated jobs. Daily briefings, proposal generation, content pipelines, data syncing, monitoring alerts. They handle a lot.&lt;/p>
&lt;p>But the biggest improvement to my workflow this year wasn&amp;rsquo;t adding more automation. It was getting honest about where my thinking actually matters.&lt;/p>
&lt;h2 id="you-have-a-token-budget-too">You Have a Token Budget Too&lt;/h2>
&lt;p>LLMs have context windows. Feed in too much noise and the signal degrades. The output gets worse even though you gave it more to work with.&lt;/p></description></item></channel></rss>