Glitches, Hitches, and Fresh Fixes


If you’ve spent any meaningful time working with AI tools, you’ll know the feeling: you’re grinding away at a problem, tweaking prompts, reshaping the question, chasing that elusive answer. And sometimes it just… doesn’t land. No matter how many ways you phrase it, the output feels off, repetitive, or stuck in a loop.

It’s frustrating, but also familiar. Because it’s not a new phenomenon, it’s the exact same experience as trying to solve a tricky problem yourself. You hammer away, stare at the screen, get nowhere, and eventually the solution comes when you step away for five minutes, or when someone else glances over your shoulder and says, “Have you tried it this way?”

What’s interesting now is that we can build this “fresh perspective” into our AI working practices. Not by leaving the desk or waiting for inspiration, but by using one AI assistant to assist another. It’s like pair-programming, but with different models…

…and it’s something I had to implement this week.

When Claude Got Stuck

I was adding a piece of functionality to an application. Nothing especially exotic, but complicated enough that it required a specific direction and prompt. I dropped it into Claude, which is usually solid on reasoning and explaining logic in natural language. We made progress and worked through maybe 80% of what was needed.

But as we iterated through the final 20%, things got a bit messy - not in the ways you’ll have heard the naysayers proclaim that AI and vibe coding are terrible, just that the specifics were intricate enough that each iteration resolved something, but never quite to the correct level. I explained the errors, asked it to revise, and we went back and forth. Round and round. Each time it tried something slightly different, but it never escaped the same orbit. Like a record stuck in the groove, endlessly playing variations on the same broken theme.

I could feel myself getting sucked into the loop too. I was explaining harder, breaking things down more simply, wondering if I was the one not communicating properly. That’s the risk with these tools: because they’re fluent and confident, you assume the fault lies with you.

Eventually, I stepped back. I copied the same problem, almost word for word, over to OpenAI’s Codex. And instantly, first time, Codex nailed it. Clean, working solution. No drama.

There are a few reasons this works so well:

  1. Different training, different strengths.
    Claude and GPT-based models aren’t trained identically. They’ve got different biases, different reasoning strategies, different priorities. What feels like a blind spot for one may be obvious to the other.

  2. Breaking the mental loop.
    When you’re locked in with one assistant, the conversation builds momentum in a certain direction. Sometimes you need to break that trajectory entirely. Asking another model resets the frame - a bit like the ELI5 (explain like I’m five) scenario.

  3. Comparative confidence.
    If two models give the same answer independently, you gain confidence. If they diverge, you know there’s nuance worth digging into.

  4. You stay in control.
    Most importantly, this approach keeps you at the centre. The AI isn’t your oracle, it’s your sounding board. Running the same question through multiple assistants reinforces that it’s your judgement that matters.

Think about how teams work in practice. Rarely do you throw a tough problem at one colleague and then just keep hammering them for hours until they solve it. You involve different people. You ask around. You compare answers. You triangulate.

AI tools can be more powerful when treated the same way. They’re not gods of knowledge, they’re colleagues with quirks.

Here are a few ways I’ve been doing this in practice:

1. Reframing the Problem

If one model is stuck, I’ll literally paste its failed output into another. For example:

“We’ve been working on this issue but it’s not working. Can you explain why and suggest an alternative? Here are the logs and the gist of what we’re trying to do: [logs][description]”

This does two things at once: it gives the second model a head start, and it forces it to critique the other’s work - something these models are oddly good at, because they’re often quick to spot inconsistencies in someone else’s output.

2. Parallel Drafting

For creative tasks, like writing copy or brainstorming names, I’ll ask two models in parallel and then merge the best bits. One might give me a more playful take, the other a more structured list. The act of comparing them sharpens the end result.

3. Validation Loops

When I get an answer I think is correct but I’m not fully sure, I’ll send it to a second AI purely for review:

“Here’s the proposed solution. Does it make sense? Where might it break?”

This is especially powerful with code, where hallucinations can hide inside otherwise-perfect logic.
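The validation loop above is easy to sketch in code. This is a minimal, illustrative version: the `validate` function and the reviewer names are hypothetical, and the reviewers are plain stub functions standing in for real model API calls, just to show the shape of the pattern.

```python
def validate(solution, reviewers):
    """Send one proposed solution to several independent reviewers.

    `reviewers` maps a name to a callable; in practice each callable would
    wrap a different model's API client -- here they are plain stubs.
    """
    prompt = (
        "Here's the proposed solution. Does it make sense? "
        f"Where might it break?\n\n{solution}"
    )
    # Collect each reviewer's critique of the same prompt.
    return {name: review(prompt) for name, review in reviewers.items()}


# Hypothetical stand-ins for real model calls.
reviewers = {
    "model_a": lambda p: "Looks plausible, but an empty input will raise IndexError.",
    "model_b": lambda p: "Logic is fine for non-empty lists; document the precondition.",
}

critiques = validate("def head(xs): return xs[0]", reviewers)
for name, note in critiques.items():
    print(f"{name}: {note}")
```

The value is in reading the critiques side by side: if both reviewers flag the same weak spot, that’s the place to look first.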

4. Cross-Training Yourself

Sometimes I’ll deliberately choose the “wrong” AI for the task, just to hear how it frames things. Asking a language-focused model to explain a technical concept, or a coding model to brainstorm metaphors. You’ll often get unexpected angles and perspectives you wouldn’t have found on your own.

Taking a Break IRL

The real-world equivalent of this is taking a break. You know how you wrestle with a crossword clue, get nowhere, walk away to make tea, and the answer pops into your head the moment you stop looking?

Using multiple AIs gives you that reframing on demand. You don’t have to wait for the subconscious to do its thing, you can trigger a fresh perspective immediately by bringing another assistant into the loop.

It doesn’t always work, of course. Sometimes they’ll both get it wrong. Sometimes you’ll end up with twice as much nonsense. But that’s still useful data, because it tells you the fault might be in the way you’re framing the problem, not in the model.

There’s a temptation to treat this as a magic trick - as if juggling enough AIs will make the perfect answer emerge. That’s not the point.

The point is to use them the way we already use human collaboration: as checks, balances, and sources of perspective. The work is still yours. The decision-making is still yours. These tools are not there to replace you, they’re there to stop you getting lost in your own loops.

And sometimes, honestly, they just save you hours of bashing your head against a wall.

And of course this practice has wider implications beyond coding or writing; it applies wherever you use AI right now:

  • Business strategy. Run a draft plan through multiple AIs, see where they agree, and where they challenge your assumptions.

  • Learning. If you’re stuck on a concept, ask one AI to explain it, then ask another to explain it differently. You’ll quickly find the version that resonates.

  • Decision-making. Even when it’s ultimately a human choice, triangulating AI perspectives gives you a richer dataset.

There’s also an emerging ecosystem of tools being built around this idea: agents that call other agents, frameworks that orchestrate multiple models in sequence, even systems that vote on answers. But you don’t need fancy orchestration to start, you just need to treat the tools you already have as a team, not a single voice.
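You really don’t need much to get started. Here’s a minimal sketch of the “systems that vote” idea, with stub functions standing in for real API clients - the assistant names and answers are entirely hypothetical, and in real use each callable would wrap a provider’s SDK.

```python
from collections import Counter


def triangulate(question, assistants):
    """Ask every assistant the same question and tally the answers.

    `assistants` maps a name to a callable; in real use each callable
    would wrap a different provider's API -- stubbed with fixed answers here.
    """
    answers = {name: ask(question) for name, ask in assistants.items()}
    votes = Counter(answers.values())
    consensus, count = votes.most_common(1)[0]
    unanimous = count == len(assistants)
    return answers, consensus, unanimous


# Hypothetical stand-ins for real model calls.
assistants = {
    "claude": lambda q: "retry with exponential backoff",
    "codex": lambda q: "retry with exponential backoff",
    "gemini": lambda q: "increase the request timeout",
}

answers, consensus, unanimous = triangulate(
    "How should I handle flaky network requests?", assistants
)
print(consensus)   # the majority answer
print(unanimous)   # False here: a divergence worth digging into
```

Agreement gives you comparative confidence; disagreement tells you exactly where the nuance lives.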

When Claude got stuck and Codex cracked the problem straight away, it wasn’t just a time-saver. It was a reminder that AI tools aren’t monolithic. They have personalities, quirks, blind spots. And like people, they benefit from being used in combination.

The next time you feel like you’re going in circles with one assistant, don’t assume the fault is yours. Bring in another. Ask them to check each other’s work. Let them give you that fresh perspective.

Because in the end, the real skill here isn’t prompt engineering, it’s orchestration. Knowing when to push, when to switch, and when to let your AI assistant assist your AI assistant.

And that, ironically, is a very human skill.

If you ever need a nudge, pointers, or just want to explore how AI can shift perspective, that’s the work I do at ikirugai.

Total Pixel Space: Finite Pixels, Infinite Debate
