<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Infrastructure on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/ai-infrastructure/</link><description>Recent content in AI Infrastructure on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 23 Apr 2026 09:00:00 +0000</lastBuildDate><atom:link href="https://renezander.com/tags/ai-infrastructure/index.xml" rel="self" type="application/rss+xml"/><item><title>Voice AI in Production: From RunPod to Hosted Kubernetes</title><link>https://renezander.com/blog/voice-ai-production-kubernetes/</link><pubDate>Thu, 23 Apr 2026 09:00:00 +0000</pubDate><guid>https://renezander.com/blog/voice-ai-production-kubernetes/</guid><description>&lt;p>Your voice model works in a demo. The same model in production stalls under concurrent load. The model file is identical. So is the GPU. Only the deployment changed.&lt;/p>
&lt;p>If your TTS service runs on a single RunPod pod, you&amp;rsquo;ve already met this wall. You handle one request per GPU at a time. A crash costs ninety seconds to reload the model. Failover isn&amp;rsquo;t part of the setup. Your marketing page says &amp;ldquo;generate narration instantly.&amp;rdquo; Your infrastructure says &amp;ldquo;please form an orderly queue.&amp;rdquo;&lt;/p></description></item><item><title>Hetzner vs AWS for AI Workloads: The Honest Breakdown (2026)</title><link>https://renezander.com/guides/hetzner-vs-aws-ai-workloads/</link><pubDate>Mon, 06 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/hetzner-vs-aws-ai-workloads/</guid><description>&lt;p>Most &amp;ldquo;hetzner vs aws ai workloads&amp;rdquo; comparisons I read online are either breathless Hetzner fanboying or AWS enterprise sales brochures. Neither is useful when you actually have to pick where your AI inference pipeline, your n8n instance, or your fine-tuned Llama deployment should live.&lt;/p>
&lt;p>I run production AI systems on both clouds. Steady-state stuff sits on Hetzner. Burst GPU jobs and anything that has to integrate with an AWS-native enterprise backend goes on AWS. The decision is not ideological. It comes down to workload shape, team size, and what you actually need from your cloud.&lt;/p></description></item></channel></rss>