<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Devops on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/devops/</link><description>Recent content in Devops on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 06 Apr 2026 08:00:00 +0200</lastBuildDate><atom:link href="https://renezander.com/tags/devops/index.xml" rel="self" type="application/rss+xml"/><item><title>Hetzner vs AWS for AI Workloads: The Honest Breakdown (2026)</title><link>https://renezander.com/guides/hetzner-vs-aws-ai-workloads/</link><pubDate>Mon, 06 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/hetzner-vs-aws-ai-workloads/</guid><description>&lt;p>Most &amp;ldquo;hetzner vs aws ai workloads&amp;rdquo; comparisons I read online are either breathless Hetzner fanboying or AWS enterprise sales brochures. Neither is useful when you actually have to pick where your AI inference pipeline, your n8n instance, or your fine-tuned Llama deployment should live.&lt;/p>
&lt;p>I run production AI systems on both clouds. Steady-state stuff sits on Hetzner. Burst GPU jobs and anything that has to integrate with an AWS-native enterprise backend goes on AWS. The decision is not ideological. It comes down to workload shape, team size, and what you actually need from your cloud.&lt;/p></description></item><item><title>n8n Self-Hosting Guide: Docker, Kubernetes, and Bare Metal in Production</title><link>https://renezander.com/blog/n8n-self-hosting-guide/</link><pubDate>Tue, 31 Mar 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/blog/n8n-self-hosting-guide/</guid><description>&lt;p>I have been running n8n self-hosted since 2022 across three different topologies: a single-VPS Docker Compose setup, a small Kubernetes cluster with queue mode, and a bare systemd install on a hardened Debian box. Each one earns its place, and picking wrong costs you weekends. This n8n self-hosting guide is the version I wish I had when I started, written for teams that want production stability, not a demo.&lt;/p>
&lt;p>The short verdict up front: run Docker Compose until you physically cannot. Move to Kubernetes only when you already run Kubernetes for other services, or when you are genuinely north of 50,000 executions per day. The bare systemd path exists for people like me who enjoy minimal stacks and want to understand every moving part. All three paths work. The wrong one for your situation will feel like a second job.&lt;/p></description></item><item><title>AI Skills Are the New Boilerplate: They Fix Nothing</title><link>https://renezander.com/blog/ai-skills-are-the-new-boilerplate-they-solve-almost-nothing/</link><pubDate>Tue, 24 Mar 2026 11:13:17 +0000</pubDate><guid>https://renezander.com/blog/ai-skills-are-the-new-boilerplate-they-solve-almost-nothing/</guid><description>&lt;p>Everyone&amp;rsquo;s sharing their skill libraries right now. &amp;ldquo;Here are my 20 custom slash commands.&amp;rdquo; &amp;ldquo;Check out my prompt template collection.&amp;rdquo; &amp;ldquo;This skill saves me 2 hours a day.&amp;rdquo;&lt;/p>
&lt;p>I use skills too. I have about a dozen. They handle cover letters, content pipelines, code review, commit messages. Repeatable workflows where the input and output are predictable.&lt;/p>
&lt;p>They cover maybe 10% of what my AI system actually does.&lt;/p>
&lt;p>The other 90% is the part nobody shares on social media because it&amp;rsquo;s ugly. It&amp;rsquo;s API integrations that break when headers change. It&amp;rsquo;s state management between sessions. It&amp;rsquo;s error handling for when the third-party service returns garbage. It&amp;rsquo;s monitoring that pages you at 6 AM because a cron failed. It&amp;rsquo;s human-in-the-loop workflows where the AI proposes and you approve before anything touches production.&lt;/p></description></item><item><title>Linux VPS AI Development Setup: Debian, Claude Code, MCP</title><link>https://renezander.com/blog/linux-vps-ai-development-setup/</link><pubDate>Tue, 24 Mar 2026 07:30:00 +0100</pubDate><guid>https://renezander.com/blog/linux-vps-ai-development-setup/</guid><description>&lt;p>My laptop sleeps. My agents do not. That is the whole reason I run a Linux VPS AI development setup instead of coding AI agents against a local Python venv and calling it a day.&lt;/p>
&lt;p>Everything I ship runs on one Debian box: the TickTick MCP server, the Telegram bot that long-polls Claude Opus, the cron-driven morning briefings, the customer profiling pipeline. No Kubernetes. No Docker Swarm. Just systemd, bash, and the Anthropic SDK. This tutorial is the exact sequence I use when I provision a new VPS for an AI project, from a fresh Hetzner image to a working Claude Code CLI with MCP clients wired up.&lt;/p></description></item><item><title>Systemd Services for AI Servers: Production Setup on Linux</title><link>https://renezander.com/blog/systemd-services-ai-servers/</link><pubDate>Sat, 21 Mar 2026 07:00:00 +0100</pubDate><guid>https://renezander.com/blog/systemd-services-ai-servers/</guid><description>&lt;p>I run a TickTick MCP server, a Telegram bot that routes through Claude Opus, and ten scheduled AI agents on a single Debian VPS. None of them run in Docker. All of them run as systemd services or systemd timers.&lt;/p>
&lt;p>This is the setup guide for running systemd services for AI servers the way I actually do it in production. Unit files, logs, timers, resource limits, and the security hardening that matters. No container orchestration, no Kubernetes, no Docker Compose YAML. Just systemd, because for single-host AI workloads it is the right tool.&lt;/p></description></item><item><title>Docker Compose AI ML Development Stack: Local LLM, Vector DB, Full YAML</title><link>https://renezander.com/blog/docker-compose-ai-development-stack/</link><pubDate>Fri, 20 Mar 2026 10:00:00 +0100</pubDate><guid>https://renezander.com/blog/docker-compose-ai-development-stack/</guid><description>&lt;p>Every AI project I start now begins the same way: &lt;code>docker compose up -d&lt;/code> and I have Ollama, Qdrant, Postgres, Redis, and a LiteLLM proxy running in under two minutes. No pyenv conflicts, no homebrew drift, no &amp;ldquo;works on my machine&amp;rdquo;. One YAML file, one command, identical stack across my laptop and my dev VPS.&lt;/p>
&lt;p>This is a tutorial for a full docker compose AI ML development stack. Copy the YAML, run it, pull a model, and start building. I use this exact layout for prototyping RAG pipelines, testing MCP servers, and running my cron-driven Claude agents before they ship to production.&lt;/p></description></item><item><title>Your Vector Database Decision Is Simpler Than You Think</title><link>https://renezander.com/blog/your-vector-database-decision-is-simpler-than-you-think/</link><pubDate>Tue, 17 Mar 2026 07:41:59 +0000</pubDate><guid>https://renezander.com/blog/your-vector-database-decision-is-simpler-than-you-think/</guid><description>&lt;p>Every week someone asks which vector database they should use. The answer is almost always &amp;ldquo;it depends on three things,&amp;rdquo; and none of them are throughput benchmarks.&lt;/p>
&lt;p>I run semantic search in production on a single VPS. Over a thousand items indexed, embeddings generated on the same machine, queries returning in under a second. But that setup only works because of the constraints I&amp;rsquo;m operating in. Change the constraints and the answer changes completely.&lt;/p></description></item><item><title>I Run 10 AI Agents in Production. They're All Bash Scripts.</title><link>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</link><pubDate>Thu, 12 Mar 2026 14:29:44 +0000</pubDate><guid>https://renezander.com/blog/i-run-10-ai-agents-in-production-theyre-all-bash-scripts-df2/</guid><description>&lt;p>A week ago I wrote about &lt;a href="https://dev.to/renezander030/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way-5c10">shipping AI agents the right way&lt;/a>. That piece was about the harness: quality gates, token economics, multi-model verification. The stuff that separates demos from production.&lt;/p>
&lt;p>It resonated with a lot of people. But I left out the part that actually eats most of my time: keeping the boring stuff running.&lt;/p>
&lt;p>So let me walk you through what production AI agents actually look like when the conference talk is over.&lt;/p></description></item><item><title>Lots Of People Are Demoing AI Agents. Almost Nobody's Shipping Them The Right Way.</title><link>https://renezander.com/blog/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way/</link><pubDate>Wed, 04 Mar 2026 10:56:24 +0000</pubDate><guid>https://renezander.com/blog/lots-of-people-are-demoing-ai-agents-almost-nobodys-shipping-them-the-right-way/</guid><description>&lt;p>Lots of people are demoing AI agents. Almost nobody&amp;rsquo;s shipping them the right way.&lt;/p>
&lt;p>Conference stages are packed with live demos of agents writing Terraform, spinning up Kubernetes clusters, and generating Helm charts on command. The audience claps. The tweet goes viral. And then&amp;hellip; nothing ships.&lt;/p>
&lt;p>Here&amp;rsquo;s the uncomfortable truth: the gap between &amp;ldquo;look what my agent can do&amp;rdquo; and &amp;ldquo;this runs in production every day&amp;rdquo; is enormous. I&amp;rsquo;ve been on both sides. I spent years as an Enterprise Architect watching organizations spin up AI pilots that never graduated. Now I run my own infrastructure with Claude as the core agent — not as a demo, not as a proof of concept, but as the actual engine that keeps things moving.&lt;/p></description></item></channel></rss>