<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Productivity on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/productivity/</link><description>Recent content in Productivity on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 05 May 2026 05:30:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/productivity/index.xml" rel="self" type="application/rss+xml"/><item><title>Your AI Workflow Doesn't Need Better Prompts. It Needs Less AI.</title><link>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</link><pubDate>Tue, 05 May 2026 05:30:00 +0000</pubDate><guid>https://renezander.com/blog/your-ai-workflow-needs-less-ai/</guid><description>&lt;p>The first stage of AI work is prompting.&lt;/p>
&lt;p>The last stage is removing the model from most of the workflow.&lt;/p>
&lt;p>That sounds backwards.&lt;/p>
&lt;p>It is not.&lt;/p>
&lt;p>When a workflow is new, the LLM is useful because the work is still ambiguous. You are discovering what good looks like. You try a prompt, read the output, adjust the examples, change the tone, add constraints, and run it again.&lt;/p>
&lt;p>That is a good use of AI.&lt;/p></description></item><item><title>AI Automation for Freelancers: 8 Workflows I Run Daily</title><link>https://renezander.com/blog/ai-automation-freelancers/</link><pubDate>Wed, 15 Apr 2026 11:00:00 +0200</pubDate><guid>https://renezander.com/blog/ai-automation-freelancers/</guid><description>&lt;p>Freelancers lose 30 to 40 percent of their time to ops. Proposals, invoicing, status updates, lead tracking, research, content. Work that nobody pays for directly, but that every solo operator has to do.&lt;/p>
&lt;p>I run eight AI automations that handle most of that ops layer for me. They run on a €15/mo Debian VPS, call Claude for anything that needs judgement, and talk to me through Telegram. Total monthly cost including API usage sits under $80. I still hit send on every outbound message, but I no longer type each one from scratch.&lt;/p></description></item><item><title>What llama.cpp's Pace Tells You About On-Prem LLM Readiness</title><link>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</link><pubDate>Tue, 14 Apr 2026 06:00:00 +0000</pubDate><guid>https://renezander.com/blog/what-llamacpps-pace-tells-you-about-on-prem-llm-readiness/</guid><description>&lt;p>Your team asked for GPU budget for self-hosted inference. You said &amp;ldquo;not yet&amp;rdquo; because last time you checked, the tooling wasn&amp;rsquo;t production-grade. That was true 18 months ago. It&amp;rsquo;s not true now, and the delay is costing you leverage you don&amp;rsquo;t know you&amp;rsquo;re losing.&lt;/p>
&lt;p>I&amp;rsquo;m writing this because most decision-makers I talk to are still running on an outdated mental model of what self-hosted LLM infrastructure looks like. The software moved. The org didn&amp;rsquo;t.&lt;/p></description></item><item><title>Spend Your Human Thinking Tokens Where They Compound</title><link>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</link><pubDate>Tue, 31 Mar 2026 08:00:00 +0000</pubDate><guid>https://renezander.com/blog/spend-your-human-thinking-tokens-where-they-compound/</guid><description>&lt;p>More automations running. More agents deployed. More pipelines humming in the background.&lt;/p>
&lt;p>I run about a dozen automated jobs. Daily briefings, proposal generation, content pipelines, data syncing, monitoring alerts. They handle a lot.&lt;/p>
&lt;p>But the biggest improvement to my workflow this year wasn&amp;rsquo;t adding more automation. It was getting honest about where my thinking actually matters.&lt;/p>
&lt;h2 id="you-have-a-token-budget-too">You Have a Token Budget Too&lt;/h2>
&lt;p>LLMs have context windows. Feed in too much noise and the signal degrades. The output gets worse even though you gave it more to work with.&lt;/p></description></item></channel></rss>