<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Docker on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/docker/</link><description>Recent content in Docker on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 31 Mar 2026 09:00:00 +0200</lastBuildDate><atom:link href="https://renezander.com/tags/docker/index.xml" rel="self" type="application/rss+xml"/><item><title>n8n Self-Hosting Guide: Docker, Kubernetes, and Bare Metal in Production</title><link>https://renezander.com/blog/n8n-self-hosting-guide/</link><pubDate>Tue, 31 Mar 2026 09:00:00 +0200</pubDate><guid>https://renezander.com/blog/n8n-self-hosting-guide/</guid><description>&lt;p&gt;I have been running n8n self-hosted since 2022 across three different topologies: a single-VPS Docker Compose setup, a small Kubernetes cluster with queue mode, and a bare systemd install on a hardened Debian box. Each one earns its place, and picking wrong costs you weekends. This n8n self-hosting guide is the version I wish I had when I started, written for teams that want production stability, not a demo.&lt;/p&gt;
&lt;p&gt;The short verdict up front: run Docker Compose until you physically cannot. Move to Kubernetes only when you already run Kubernetes for other services, or when you are genuinely north of 50,000 executions per day. The bare systemd path exists for people like me who enjoy minimal stacks and want to understand every moving part. All three paths work. The wrong one for your situation will feel like a second job.&lt;/p&gt;</description></item><item><title>Docker Compose AI ML Development Stack: Local LLM, Vector DB, Full YAML</title><link>https://renezander.com/blog/docker-compose-ai-development-stack/</link><pubDate>Fri, 20 Mar 2026 10:00:00 +0100</pubDate><guid>https://renezander.com/blog/docker-compose-ai-development-stack/</guid><description>&lt;p&gt;Every AI project I start now begins the same way: &lt;code&gt;docker compose up -d&lt;/code&gt; and I have Ollama, Qdrant, Postgres, Redis, and a LiteLLM proxy running in under two minutes. No pyenv conflicts, no Homebrew drift, no &amp;ldquo;works on my machine&amp;rdquo;. One YAML file, one command, identical stack across my laptop and my dev VPS.&lt;/p&gt;
&lt;p&gt;This is a tutorial for a full Docker Compose AI/ML development stack. Copy the YAML, run it, pull a model, and start building. I use this exact layout for prototyping RAG pipelines, testing MCP servers, and running my cron-driven Claude agents before they ship to production.&lt;/p&gt;</description></item></channel></rss>