<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Development on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/ai-development/</link><description>Recent content in AI Development on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 24 Mar 2026 07:30:00 +0100</lastBuildDate><atom:link href="https://renezander.com/tags/ai-development/index.xml" rel="self" type="application/rss+xml"/><item><title>Linux VPS AI Development Setup: Debian, Claude Code, MCP</title><link>https://renezander.com/blog/linux-vps-ai-development-setup/</link><pubDate>Tue, 24 Mar 2026 07:30:00 +0100</pubDate><guid>https://renezander.com/blog/linux-vps-ai-development-setup/</guid><description>&lt;p>My laptop sleeps. My agents do not. That is the whole reason I run a Linux VPS AI development setup instead of coding AI agents against a local Python venv and calling it a day.&lt;/p>
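&lt;p>Keeping an agent process alive like that usually comes down to a small systemd unit. A sketch of the pattern, with the unit name, user, and paths purely illustrative:&lt;/p>
&lt;pre>&lt;code># /etc/systemd/system/ticktick-mcp.service  (name and paths illustrative)
[Unit]
Description=TickTick MCP server
After=network-online.target

[Service]
User=agent
WorkingDirectory=/opt/ticktick-mcp
ExecStart=/opt/ticktick-mcp/.venv/bin/python -m server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
&lt;/code>&lt;/pre>
&lt;p>Then &lt;code>systemctl enable --now ticktick-mcp&lt;/code> and the process survives reboots and crashes without you watching it.&lt;/p>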
&lt;p>Everything I ship runs on one Debian box: the TickTick MCP server, the Telegram bot that long-polls Claude Opus, the cron-driven morning briefings, the customer profiling pipeline. No Kubernetes. No Docker Swarm. Just systemd, bash, and the Anthropic SDK. This tutorial is the exact sequence I use when I provision a new VPS for an AI project, from a fresh Hetzner image to a working Claude Code CLI with MCP clients wired up.&lt;/p></description></item><item><title>Docker Compose AI ML Development Stack: Local LLM, Vector DB, Full YAML</title><link>https://renezander.com/blog/docker-compose-ai-development-stack/</link><pubDate>Fri, 20 Mar 2026 10:00:00 +0100</pubDate><guid>https://renezander.com/blog/docker-compose-ai-development-stack/</guid><description>&lt;p>Every AI project I start now begins the same way: &lt;code>docker compose up -d&lt;/code> and I have Ollama, Qdrant, Postgres, Redis, and a LiteLLM proxy running in under two minutes. No pyenv conflicts, no Homebrew drift, no &amp;ldquo;works on my machine&amp;rdquo;. One YAML file, one command, identical stack across my laptop and my dev VPS.&lt;/p>
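&lt;p>The skeleton of that file looks roughly like this; image tags, ports, and volume names here are sensible defaults, not necessarily my exact config:&lt;/p>
&lt;pre>&lt;code># compose.yaml (sketch; tags and ports are defaults, adjust to taste)
services:
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
    volumes: ["ollama:/root/.ollama"]
  qdrant:
    image: qdrant/qdrant
    ports: ["6333:6333"]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
  redis:
    image: redis:7
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports: ["4000:4000"]
    depends_on: [ollama]
volumes:
  ollama:
&lt;/code>&lt;/pre>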
&lt;p>This is a tutorial for a full Docker Compose AI/ML development stack. Copy the YAML, run it, pull a model, and start building. I use this exact layout for prototyping RAG pipelines, testing MCP servers, and running my cron-driven Claude agents before they ship to production.&lt;/p></description></item></channel></rss>