AI Won’t Save Your Code Migration, But It Can Accelerate It

Large-scale code migrations are often compared to cleaning house: everyone knows it has to be done, and nobody wants to do it. They’re tricky, time-consuming, and, more often than not, they spiral into an endless cycle of debugging and unforeseen issues. And they tend to fail for the same three reasons: scale, lost context, and the absence of a repeatable system.

In a recent workshop, Ankit Jain (CEO and co-founder of Aviator) and Chris Westerhold (Global Practice Director at Thoughtworks) broke down why migrations stall, why fully automated AI code migrations are not realistic, and what a human-in-the-loop system looks like in practice.

Why do code migrations fail?

Scale turns upgrades into programs

Small dependency bumps and security patches are manageable. The real pain starts when upgrades are deferred for years and the gap becomes structural. Moving from Java 8 to Java 21 isn’t just a version change—it’s accumulated drift. APIs evolved, patterns shifted, and teams built conventions around older behavior. By the time you migrate, every change touches embedded assumptions.
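
To make that drift concrete, here is a small Java example of our own (not one from the workshop). The Java 8 helper below leans on javax.xml.bind, which was removed from the JDK in Java 11; the in-JDK replacement, java.util.HexFormat, only arrived in Java 17, and even this one-line swap hides a behavioral difference:

```java
import java.util.HexFormat; // added in Java 17

class HexMigration {
    // Java 8 original, which no longer compiles on JDK 11+ because the
    // javax.xml.bind module was removed from the JDK entirely:
    //
    //     return javax.xml.bind.DatatypeConverter.printHexBinary(bytes);
    //
    // Semantic trap: printHexBinary returned UPPERCASE hex, while
    // HexFormat defaults to lowercase. A mechanical swap would silently
    // change output; withUpperCase() preserves the old behavior.
    static String toHex(byte[] bytes) {
        return HexFormat.of().withUpperCase().formatHex(bytes);
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[] {(byte) 0xCA, (byte) 0xFE})); // prints CAFE
    }
}
```

Multiply that one wrinkle by a decade of API changes and you have a program, not a patch.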

Tribal knowledge disappears

Engineers leave. Teams reorganize. The reasoning behind why a framework was chosen or why a wrapper exists often lives only in people’s heads. When that context is gone, teams hesitate because they don’t know which patterns encode business logic and which are safe to refactor.

Manual and codemod approaches have limits

Most migrations are still manual. Codemods help in some ecosystems, but they are unevenly available and often custom-built. Regex-based transforms break on edge cases. Custom transformation logic reflects only the knowledge you currently have, not the full complexity of the system.
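
A toy illustration of our own (not from the workshop) shows the failure mode. The rewrite below has no notion of Java syntax, so it rewrites a different method, a string literal, and a comment along with the one call site it was meant to change:

```java
// A naive text-level codemod: rename calls to getUser -> fetchUser.
public class NaiveCodemod {
    public static void main(String[] args) {
        String source = String.join("\n",
            "user = api.getUser(id);",               // the intended match
            "name = api.getUserName(id);",           // a different method entirely
            "log.info(\"getUser failed: \" + id);",  // a string literal
            "// getUser predates the v2 client");    // a comment

        // All four lines are rewritten; only the first one should be.
        System.out.println(source.replace("getUser", "fetchUser"));
    }
}
```

AST-aware codemods avoid the substring problem, but they still encode only the patterns their authors thought to write down, which is the deeper limit here.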

Fully automated AI migrations are unrealistic

If somebody tells you that their tool is going to do the migration completely automatically, it is very likely not true.

Hallucinations, inconsistent rule application, and blind spots around edge cases are still real constraints. Automation reduces effort; it does not eliminate supervision.

Human-in-the-loop code migration

The central theme of the workshop was that migrations require a human-in-the-loop model.

Every codebase uses frameworks in slightly different ways. Some teams wrap APIs. Others extend them. Some rely on undocumented behavior. A generic migration rule rarely applies cleanly across the entire system. 

What’s needed is consistent pattern application combined with continuous human feedback. That feedback loop does three critical things:

  • Identifies edge cases early.
  • Refines transformation rules as new patterns surface.
  • Ensures semantic correctness, not just syntactic conversion.
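
In code, that loop can be as simple as refusing to guess. The sketch below is our own minimal illustration, not Aviator’s implementation: anything that doesn’t match a known rule is queued for a human, and the reviewer’s decision feeds back in as a new rule:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

record Rule(String pattern, String replacement) {}

class MigrationLoop {
    private final List<Rule> rules = new ArrayList<>();
    private final Deque<String> needsReview = new ArrayDeque<>();

    MigrationLoop(List<Rule> seed) { rules.addAll(seed); }

    // Apply known rules; route unrecognized code to a human instead of guessing.
    String migrate(String source) {
        for (Rule r : rules) {
            if (source.contains(r.pattern())) {
                return source.replace(r.pattern(), r.replacement());
            }
        }
        needsReview.add(source); // edge case surfaced early, not buried in a 10,000-line diff
        return source;           // left untouched until a human decides
    }

    // A reviewer's decision becomes a rule, so the next occurrence is automatic.
    void recordReviewFeedback(Rule learned) { rules.add(learned); }

    public static void main(String[] args) {
        MigrationLoop loop = new MigrationLoop(List.of(new Rule(
            "DatatypeConverter.printHexBinary",
            "HexFormat.of().withUpperCase().formatHex")));
        System.out.println(loop.migrate("h = DatatypeConverter.printHexBinary(b);")); // rewritten
        System.out.println(loop.migrate("h = ourHexWrapper.encode(b);"));             // queued
        System.out.println("Needs review: " + loop.needsReview);
    }
}
```

The point is not the thirty lines of Java; it is that “don’t know” is a first-class outcome, which is what keeps semantic errors from shipping at scale.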

Instead of treating AI as a one-shot migration engine, the more practical framing is collaborative execution. Ankit described it this way:

“The idea really is to treat the AI tool almost like you’re bringing another engineer in your team and provide everything that you would typically do to an engineer.”

That means providing documentation, tribal knowledge, review feedback, and guidance. Trust builds over time. The system improves as it learns from corrections.

This reframes AI from replacement to acceleration.

Aviator Runbooks: spec-driven and multiplayer

Executable specifications

A Runbook is an executable specification that guides agents through a complex, multi-step task. Instead of relying solely on an AI model’s limited context window, teams explicitly document:

  • Transformation rules
  • Assumptions
  • Constraints
  • Edge cases

The specification becomes the source of truth.
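
The workshop recap doesn’t show actual Runbook syntax, so treat the following as a hypothetical sketch of the idea rather than Aviator’s format. It captures the same four categories as plain, reviewable data:

```java
import java.util.List;

// Hypothetical shape only; Aviator's real Runbook format is not shown
// in the workshop recap. The point is that rules, assumptions,
// constraints, and edge cases live in a reviewable artifact instead of
// an AI model's context window.
record MigrationSpec(
        String goal,
        List<String> rules,        // transformation rules the agent must follow
        List<String> assumptions,  // what we believe is true about the codebase
        List<String> constraints,  // what the agent must never do
        List<String> edgeCases) {  // known exceptions, mostly learned from review

    static MigrationSpec java8To21Example() {
        return new MigrationSpec(
            "Upgrade the service from Java 8 to Java 21",
            List.of("Replace javax.xml.bind.DatatypeConverter with java.util.HexFormat"),
            List.of("No reflection over removed JDK-internal APIs"),
            List.of("Do not reformat files unrelated to the migration",
                    "Never change public API signatures"),
            List.of("printHexBinary is uppercase; HexFormat defaults to lowercase"));
    }
}
```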

Multiplayer collaboration

Migrations are not solo work. Runbooks are designed as collaborative artifacts. Teams refine the specification together before large-scale generation begins. This shifts verification left. Instead of debating intent in a 10,000-line diff, teams align on rules upfront.

The result is tighter feedback loops and fewer surprises in generated code.

Runbooks also integrate with existing tools and coding agents rather than replacing them. The system adds structured specification and memory on top of workflows teams already use.

Versioned knowledge for future migrations

One of the most important ideas from the workshop was that migrations are recurring, not exceptional.

In many organizations, every major migration feels like starting over. Edge cases are rediscovered. Patterns are relearned. The same mistakes repeat. The cognitive load on engineers remains high.

A versioned Runbook approach changes that dynamic. The corner cases identified in one migration become encoded rules for the next. Feedback from reviews becomes part of the system’s memory. Tribal knowledge is captured explicitly instead of living in Slack threads and senior engineers’ heads.
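
As a hypothetical sketch (again, ours rather than Aviator’s schema), the difference can be as small as giving every learned rule its provenance:

```java
import java.util.List;

// Each learned rule records where it came from, so the next migration
// inherits this list instead of rediscovering it. Names are illustrative.
record VersionedRule(String pattern, String replacement,
                     String learnedDuring, String reviewNote) {}

class RunbookMemory {
    static final List<VersionedRule> LEARNED = List.of(
        new VersionedRule(
            "DatatypeConverter.printHexBinary",
            "HexFormat.of().withUpperCase().formatHex",
            "java-8-to-21 upgrade",
            "Output case matters: downstream systems compare hex strings verbatim."));
}
```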

Chris framed the benefit in terms of engineer burden:

“You can leverage these same Runbooks to do it again in the future. And you don’t have to have such a high cognitive load for the actual engineer.”

Over time, this creates:

  • Consistency in how migrations are executed
  • A reusable foundation for future upgrades
  • Reduced dependence on individual memory
  • Clear visibility into how transformations are defined and applied

