<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Self-Hosted on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/self-hosted/</link><description>Recent content in Self-Hosted on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 16 Apr 2026 09:00:00 +0200</lastBuildDate><atom:link href="https://renezander.com/tags/self-hosted/index.xml" rel="self" type="application/rss+xml"/><item><title>Self-Hosted LLM vs API Cost: Break-Even Analysis (2026)</title><link>https://renezander.com/guides/self-hosted-llm-vs-api/</link><pubDate>Thu, 16 Apr 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/guides/self-hosted-llm-vs-api/</guid><description>&lt;p&gt;Every few months a client asks me the same question. &amp;ldquo;We&amp;rsquo;re burning $8k/mo on Claude. Should we self-host Llama?&amp;rdquo; The answer is almost always no, and the reason has nothing to do with whether the model is good enough. It has to do with what a GPU costs when it&amp;rsquo;s idle, and how much engineering time it takes to keep a serving stack healthy at 3am.&lt;/p&gt;
&lt;p&gt;This guide breaks down self-hosted LLM vs API cost with real numbers. Hetzner GPU pricing, RunPod and Lambda hourly rates, Claude Sonnet 4.6 and Haiku 4.5 token pricing, and the break-even points that actually matter. The goal is to give you a decision framework, not a marketing pitch for either side.&lt;/p&gt;</description></item><item><title>Migrate Zapier to n8n: A Practitioner's Playbook for 2026</title><link>https://renezander.com/blog/migrate-zapier-to-n8n/</link><pubDate>Fri, 10 Apr 2026 12:00:00 +0200</pubDate><guid>https://renezander.com/blog/migrate-zapier-to-n8n/</guid><description>&lt;p&gt;Most teams that want to migrate Zapier to n8n hit the same wall: pricing crosses a threshold around 10,000 tasks per month, or a data sovereignty requirement lands on the roadmap, and Zapier&amp;rsquo;s per-task model becomes a liability. n8n fixes both, but only if you pick the right deployment and plan the cutover properly.&lt;/p&gt;
&lt;p&gt;I run n8n self-hosted in production for Teedian, alongside Make.com blueprints for clients who do not want to operate their own infrastructure. This is the Zapier to n8n migration playbook I wish I had: concept mapping, pattern translation, a six-step rollout, the cost math that decides Cloud vs self-hosted, and the gotchas that burn people in week two.&lt;/p&gt;</description></item><item><title>Self-Hosted LLM on Kubernetes: A Production vLLM Deployment</title><link>https://renezander.com/blog/self-hosted-llm-kubernetes/</link><pubDate>Sun, 05 Apr 2026 07:00:00 +0200</pubDate><guid>https://renezander.com/blog/self-hosted-llm-kubernetes/</guid><description>&lt;p&gt;Most teams asking about self-hosted LLM Kubernetes deployments should not be running Kubernetes for this at all. The honest answer is that vLLM on a single GPU box, wrapped in systemd or Docker Compose, covers more use cases than anyone wants to admit. Kubernetes earns its keep only when you already run it, or when you need horizontal scaling, multi-tenant isolation, or proper rolling deploys across a GPU node pool.&lt;/p&gt;</description></item><item><title>n8n Self-Hosting Guide: Docker, Kubernetes, and Bare Metal in Production</title><link>https://renezander.com/blog/n8n-self-hosting-guide/</link><pubDate>Tue, 31 Mar 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/blog/n8n-self-hosting-guide/</guid><description>&lt;p&gt;I have been running n8n self-hosted since 2022 across three different topologies: a single-VPS Docker Compose setup, a small Kubernetes cluster with queue mode, and a bare systemd install on a hardened Debian box. Each one earns its place, and picking wrong costs you weekends. This n8n self-hosting guide is the version I wish I had when I started, written for teams that want production stability, not a demo.&lt;/p&gt;
&lt;p&gt;The short verdict up front: run Docker Compose until you physically cannot. Move to Kubernetes only when you already run Kubernetes for other services, or when you are genuinely north of 50,000 executions per day. The bare systemd path exists for people like me who enjoy minimal stacks and want to understand every moving part. All three paths work. The wrong one for your situation will feel like a second job.&lt;/p&gt;</description></item></channel></rss>