<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Openai on René Zander | AI Automation Consultant</title><link>https://renezander.com/tags/openai/</link><description>Recent content in Openai on René Zander | AI Automation Consultant</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 15 Apr 2026 08:00:00 +0200</lastBuildDate><atom:link href="https://renezander.com/tags/openai/index.xml" rel="self" type="application/rss+xml"/><item><title>LLM API Comparison 2026: Best API for Production</title><link>https://renezander.com/guides/llm-api-comparison/</link><pubDate>Wed, 15 Apr 2026 08:00:00 +0200</pubDate><guid>https://renezander.com/guides/llm-api-comparison/</guid><description>&lt;p&gt;I have five LLM providers wired into production code. Not in side projects. Real things I get paid to maintain. After two years of swapping between them, retrying failed calls at 3am, and debugging tool-use schemas, I have opinions.&lt;/p&gt;
&lt;p&gt;This is an LLM API comparison focused on what actually matters when you ship. Not benchmark leaderboards. Not marketing spec sheets. Features, SDK quality, failure modes, tool-use reliability, and whether the docs will waste your afternoon.&lt;/p&gt;</description></item><item><title>LLM API Cost Comparison 2026: Framework, Not a Stale Table</title><link>https://renezander.com/guides/llm-api-cost-comparison/</link><pubDate>Sat, 11 Apr 2026 13:00:00 +0200</pubDate><guid>https://renezander.com/guides/llm-api-cost-comparison/</guid><description>&lt;p&gt;Every LLM API cost comparison I see online has the same problem: it goes stale in two weeks. One provider drops a new tier, another halves its output price, a reasoning model ships at triple the cost. By the time the post ranks on Google, the numbers are wrong and the rankings are meaningless.&lt;/p&gt;
&lt;p&gt;So this piece is not a table you check once. It is the framework I use to model LLM API pricing for my own production workloads, plus a snapshot of list prices as of April 2026, plus four realistic scenarios run through that framework. The scenarios are the point. Plug your own traffic into them, change the model, and get a defensible monthly cost number.&lt;/p&gt;</description></item><item><title>Migrate OpenAI to Claude: API Migration Guide for 2026</title><link>https://renezander.com/guides/migrate-openai-to-claude/</link><pubDate>Sat, 04 Apr 2026 10:00:00 +0200</pubDate><guid>https://renezander.com/guides/migrate-openai-to-claude/</guid><description>&lt;p&gt;Most teams I talk to arrive at the same moment: the OpenAI bill crosses $500/month, an agent loop that worked on GPT-4o starts fumbling tool calls, or legal raises an eyebrow about single-provider risk. Then the question lands in my inbox: what does it actually take to migrate OpenAI to Claude?&lt;/p&gt;
&lt;p&gt;Short answer: a weekend if you have one endpoint, two weeks if you have a real product. The SDKs are similar enough that the ported code looks boring. The interesting work is in the prompts, the tool-use loop, and the parts of your codebase that silently depend on OpenAI-specific behavior like &lt;code&gt;seed&lt;/code&gt;, &lt;code&gt;logprobs&lt;/code&gt;, or the &lt;code&gt;response_format&lt;/code&gt; JSON schema flag.&lt;/p&gt;</description></item></channel></rss>