Build with AI, Not on AI: How to Survive the GPT-5 Rollercoaster
When GPT-5 launched, LinkedIn filled up fast with two kinds of posts:
“This is incredible, my AI assistant is smarter than ever!”
“This ruined everything, my workflows are broken and my agent won’t behave the same way!”
Both are true. GPT-5 is a step forward in many ways, but for a whole category of users, it’s also a rude awakening.
The Hidden Risk in Chat and Agent-Based Solutions
If you’ve been building AI-driven solutions using a chat interface (like ChatGPT) or a prebuilt “agent” system to perform tasks, you may have no control over which models power your setup.
You can’t pin the model to a specific version.
You can’t adjust the prompts when the behaviour changes.
And you can’t roll back when things go wrong.
The provider decides which models are available in that environment, and the options can shift overnight. That’s exactly what happened last week: the plethora of models - each with its own purpose and its own loyal subset of users - was replaced with a single default, GPT-5, “to make it easier”.
Why This Feels Different from “Normal” Software Changes
In traditional software development:
If you integrate with an API, you can often choose the version you want to stick with.
You get deprecation notices and migration timelines.
You control the update process.
With chat and agent-based AI tools, you’re not at the API level - you’re interacting through a product layer. That means you only see what the provider exposes.
The underlying API might still offer the old model (as OpenAI does), but if you’ve never worked at the API level, you might not even realise you could switch.
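For example, here is roughly what that looks like with the OpenAI Python SDK - a minimal sketch, assuming the openai package is installed and an API key is set. The dated model name is purely illustrative; check which snapshots your account can actually still reach.

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
# The model name below is illustrative - pick a dated snapshot your
# provider still exposes rather than a moving alias.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # an explicit snapshot, not whatever the chat UI defaults to today
    messages=[
        {"role": "system", "content": "You summarise support tickets in three bullet points."},
        {"role": "user", "content": "Customer reports intermittent login failures since Tuesday."},
    ],
)
print(response.choices[0].message.content)
```

At the API level, that model string is yours to keep or change; in the chat window, it was never yours to begin with.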
Who’s Most at Risk?
People who have:
Built internal workflows entirely inside a chat UI.
Created business processes around a single agent’s behaviour.
Relied on a specific model’s quirks for tone, reasoning, or formatting.
When the provider swaps the model in the interface, your “product” changes overnight, and you might not have the skills, access, or permissions to re-create it at the API level.
The Safer Approach: Build with AI, Not on AI
If your solution is critical, don’t let it live entirely inside someone else’s chat window.
Instead:
Use the chat or agent interface to prototype and explore ideas.
Once you know what works, move the logic to an API-based integration you control.
Treat the AI as a build-time assistant, not the permanent runtime engine.
That way, if GPT-6 arrives and GPT-5 disappears from the chat interface, your workflow keeps working, because you own the connection, the model choice, and the prompts.
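As a rough illustration rather than a prescription, here is what “owning the connection” can look like in Python with the OpenAI SDK. The config keys and function name are hypothetical placeholders; the point is that the model ID, prompts, and parameters live in code you version and test, not in a provider’s UI.

```python
# Sketch of a workflow that owns its model choice and prompts.
# Names like WORKFLOW_CONFIG and run_workflow are hypothetical - adapt to your stack.
from openai import OpenAI

# Everything the chat UI used to decide for you, now in one place you control.
WORKFLOW_CONFIG = {
    "model": "gpt-4o-2024-08-06",  # pinned snapshot; changed deliberately, not overnight
    "system_prompt": "Rewrite the following text as a formal customer email.",
    "temperature": 0.2,
}

client = OpenAI()

def run_workflow(user_input: str, config: dict = WORKFLOW_CONFIG) -> str:
    """Run one step of the workflow against whichever model the config pins."""
    response = client.chat.completions.create(
        model=config["model"],
        temperature=config["temperature"],
        messages=[
            {"role": "system", "content": config["system_prompt"]},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```

Moving to GPT-5 (or rolling back from it) then becomes a one-line config change you test on your own schedule, not something that happens to you overnight.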
How ikirugai Helps
I work with teams to:
Identify where their AI dependencies are exposed to provider changes.
Migrate fragile chat-based workflows into stable, API-driven solutions.
Work out whether AI is even needed in a particular process.
Architect AI use so you can swap models without losing functionality.
GPT-5 is just the latest reminder: the AI models are still there, but access and defaults change. If you’re not working at the API level, you’re building on shifting sand.