Why AI Coding Still Fails in Enterprise Teams – and How to Fix It
The future of AI coding in enterprise lies in spec-driven development, shared context, and collaboration, not vibe coding.

You’ve seen viral demos of apps built in hours and heard bold claims of AI writing 30, 40, or even more than 50% of code at some companies. While those claims make it seem like a new era of software productivity is already here, they do not reflect the reality of most larger engineering organizations.
Large, complex, and long-lived codebases – with strict quality, security, and maintainability requirements – demand an entirely different approach to AI adoption.
Four industry veterans — Kent Beck, Bryan Finster, Rahib Amin, and Punit Lad — shared their perspectives on how enterprises can adopt AI coding tools wisely. Their consensus: enterprises need less hype and more of a systemic approach.
Beware the Shiny Prototype Trap
“Large organizations are making the same mistakes in implementing AI that they have made in the past with other platforms,” says Bryan Finster, DevOps and continuous delivery advocate with over 25 years of experience designing, building, and operating mission-critical software delivery systems for global enterprises:
We’ve seen the same with the rise of infrastructure platforms and developer platforms, where the tools are deployed without the training, and, at best, things don’t improve. At worst, costs skyrocket and quality and security are compromised.
Rahib Amin, Senior Technical Product Manager at Thoughtworks, warns that large engineering organizations are following vibe-coding patterns and creating ‘shiny new prototypes’:
On the surface, it solves the problem. However, behind the curtain are thousands of lines of spaghetti code, forcing the developers who inherit these codebases to make unfortunate, difficult decisions.
The gap between the hyped potential of AI-driven development and the practical challenges of enterprise software is real.
Why AI Coding Still Fails in Enterprise Teams
In reality, some developers swear by AI tools and report 10x productivity, while others question whether they truly add value or merely compound technical debt. Sometimes, both camps work at the same company.
There are several challenges that make it difficult to scale AI in enterprise organizations with the current approach.
Lack of training
Effectively leveraging coding agents demands training, something most organizations currently don’t provide. Bryan Finster cites a recent study showing that AI coding tools are slowing developers down:
Of course they are. Companies are demanding that people use these expensive tools without providing them with any training on how to use them effectively.
Getting the best from these platforms requires a higher level of engineering discipline than most companies are accustomed to.
Some developers have built their own cookbooks through trial and error, while others are still experimenting with limited results.
Lack of context
Working within large, complex enterprise codebases is far from casual vibe coding. No matter how advanced models become or how sophisticated the next coding agent is, the main thing that determines whether an agent succeeds or fails is the quality of the context it’s given.
Spec-driven development, the practice of writing specifications before writing code or asking an AI tool to write it, replaces the chaos of ad hoc, prompt-driven vibe coding with a structured, durable way for programmers to express their goals. It lets developers pin down specific details up front and gives the agent a chance to communicate its plan before any code is written.
Specs are “refined context”: just enough information for the LLM to be effective without being overwhelmed.
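What a spec contains varies by team, but as a hypothetical illustration, a lightweight spec for a small feature might look like this (the endpoint, role, and column names are invented for the example):

```
# Spec: Export invoices as CSV

## Goal
Account admins can download all invoices for a billing period as a CSV file.

## Scope
- One new endpoint: GET /api/v1/invoices/export?period=YYYY-MM
- Reuse the existing invoice repository; no schema changes.

## Constraints
- Respect the caller's tenant permissions.
- Stream the response; never buffer the full file in memory.

## Acceptance criteria
- Returns 403 for users without the billing:read role.
- CSV columns: invoice_id, issued_at, amount, currency, status.
- Covered by integration tests before merge.
```

Unlike a chat prompt, the spec survives in version control: the agent plans against explicit constraints and acceptance criteria, and reviewers can check the output against the same document.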

Tribal knowledge
Context is not just documentation and specifications of the ‘what’ and the ‘how.’ Enterprises are not built overnight. People come and go, and a great deal of tribal knowledge about the ‘why’ walks out the door with them.
The evolution of an organization involves thousands of micro-decisions made every day, forming the decision trees that are reflected in the code. Understanding why the product is built the way it is matters just as much as knowing how it’s built.
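A long-standing way to keep that ‘why’ from leaving is the Architecture Decision Record (ADR): a short, versioned note committed alongside the code it explains, which new engineers and coding agents alike can read as context. A minimal sketch, with invented details:

```
# ADR-017: Keep payment retries in the application layer

Status: Accepted

## Context
Our message broker supports native retries, but the payment provider
bills per API call, and duplicated charges have caused incidents before.

## Decision
Implement retries in the payment service using idempotency keys
instead of delegating them to the broker.

## Consequences
More application code to maintain, but retry behavior is testable,
and duplicate charges are prevented by design.
```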
No defined workflows
AI coding tools are transforming how software is built. Yet few teams have clear workflows for using these tools. Enterprise developers are used to structured processes, and without them, they spend more time figuring out how to use AI than actually building with it.
Building software with AI agents isn’t a solo sport, especially when projects span multiple repos, services, and prompt-engineering know-how. Engineers iterate on prompts, trade feedback with the coding agents, generate the code, submit it for review, and then throw the prompts away. The problem is not just developer-agent collaboration but multiplayer collaboration among developers: sharing prompts, sharing execution workflows, and maintaining audit trails.
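One sketch of what ‘multiplayer’ could look like: treat prompts like any other shared engineering asset and commit them to the repository, where they can be reviewed, versioned, and reused. The file layout and wording below are a hypothetical convention, not an established standard:

```
# prompts/add-rest-endpoint.md

## When to use
Adding a new REST endpoint to any service in this repository.

## Prompt
You are working in <service-name>. Follow docs/api-guidelines.md.
Implement the endpoint described in the attached spec, including the
handler, request validation, and integration tests. Present your plan
before writing code, and do not modify shared libraries without
flagging it.

## Revision notes
- v3: added the shared-libraries guard after an agent edited a common
  auth module in an unrelated change.
```

Reviewed, versioned prompts give the team a shared workflow and an audit trail of what was asked, not just what was generated.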
Cultural resistance and trust
If AI coding is to have a meaningful impact, the prerequisites are as much organizational as technical. Punit Lad, Lead Consultant for Platform Engineering at Thoughtworks, emphasizes culture and trust: making sure the AI tools being introduced actually help solve the organization’s problems and that they are reliable.
AI can be wrong and can make mistakes, and if the people using it can’t trust it, your organization’s culture won’t adopt it and will ultimately push back on it.
Misaligned incentives
Experts also highlight an often-overlooked human factor: fear. Bryan Finster says that “naysayers and hype merchants both need to be debunked. People will fear for their jobs.”
Kent Beck, a software engineering veteran best known for creating Extreme Programming and co-creating Test-Driven Development, points out that the incentives of companies and developers are not aligned:
Incentives aren’t aligned with any simple definition of success. Coders could go faster? Oh, you mean you’ll fire half of us?
When teams view AI as a threat, adoption grinds to a halt.
AI Amplifies What You Already Have
Amin calls for frameworks that enable experimentation and a return to disciplined practices like Test-Driven Development:
You need frameworks for enabling people across the organization. AI amplifies whatever you already have, good or bad.
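In an AI-assisted workflow, that discipline can be as concrete as writing the test before the prompt. A minimal sketch in Python, with a hypothetical normalize_email function: the human-authored tests pin down what ‘done’ means, and whatever the agent generates has to pass them.

```python
import unittest


def normalize_email(raw: str) -> str:
    """The implementation an agent might produce once the tests define 'done'."""
    return raw.strip().lower()


class TestNormalizeEmail(unittest.TestCase):
    """Written first, by a human: these tests encode the intent the agent must satisfy."""

    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_email("  Alice@Example.COM "), "alice@example.com")

    def test_is_idempotent(self):
        once = normalize_email(" Bob@Example.io ")
        self.assertEqual(normalize_email(once), once)


if __name__ == "__main__":
    unittest.main()
```

The test is what makes the amplification work in your favor: with it, generated code is verified against intent; without it, the agent amplifies whatever ambiguity it was given.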
Finster stresses that leaders must realize that “development is almost never the constraint.” Speeding up coding without fixing downstream delivery bottlenecks just amplifies existing risks.
Beck has no doubts that AI is a tool and will become ubiquitous, even in larger engineering organizations:
It’s just too powerful not to. It comes with significant risks and costs and needs to be managed with those in mind.
