Patrick Debois is often called the "Godfather of DevOps" for his pioneering role in the movement that reshaped how teams build and ship software. He is a co-author of The DevOps Handbook and principal product engineer at Humans and Code. Patrick's work focuses on helping engineering teams become more productive with AI tooling and on delivering AI-powered products with engineering rigor and good practices. He takes a pragmatic approach to his work, grounded in reality and experience.
On this episode of The Hangar DX Podcast, Aviator CEO Ankit Jain sits down with Patrick to discuss what we can learn from DevOps as AI transforms engineering, how developer roles are shifting, and why good engineering practices are more important than ever.
I have that history with DevOps that haunts me, but I've been around the industry for a long time. Every time there's something new emerging, I look at the patterns and how it can improve our craft, whether that's mobile, cloud, or the early days of the internet. And now it happens to be AI.
I'm always excited to work on what's emerging; you can call this shiny new thing syndrome. It's more about the learning: What is new? How do we have to adapt, and how is it actually improving our jobs? As I learn, I try to share those learnings, and I also look for stories from other people.
And the best way to do this is in a community. In the past I've done this with a community called DevOps Days around DevOps, and that sparked something. Now I've joined a community called AI Native Dev. What sets us apart a little is that we're focused solely on software delivery with AI.
Whenever there's new technology, there's a bunch of people who go all in on this tech. And there are people who are more skeptical and more averse to any change.
You have the believers who say, "I'm going to go all in on AI; it will solve all the world's problems." And the others say, "I can still do it better than AI," and that's fine. That has been a constant with new technology.
There were plenty of naysayers at the beginning of DevOps saying 10 deploys a day is insane; who would do that? And then eventually we got there.
That skepticism is always there. The way that I deal with it is that I take note of all the criticism, and then I lean into it. And the reason why I lean into it is to almost overuse it to start understanding where it is not applicable.
What is different is that the pace at which new technology gets introduced has kept accelerating. What is new today is old tomorrow. The destabilizing factor is definitely something that is annoying for people during that chaotic period, and we might still be in it for a couple of years until things settle.
The job of software engineer is definitely changing with AI. The abstraction is different: I’m no longer typing the code myself; I’m instructing AI to do it. That means I still need to know what “good” looks like, but my role shifts from producer to supervisor.
I sometimes jokingly say that dev jobs are becoming ops jobs. In the old days, I was receiving WAR files, JAR files, whatever packages they were sending me, and I had to deploy them as a sysadmin. I had no intimate knowledge of what the code was doing, and still I was responsible for the operations.
It's very similar with AI. The AI is doing a lot of the coding. Maybe I don't understand it, and I haven't gone through the thinking process, but I'm still the person who is in charge and needs to take the heat when it isn't working.
We're moving a little bit toward devs becoming more ops, where DevOps was about putting a little more ops into the dev people.
Humans still need to review code before production; that won't change. Having a second set of eyes, whether from a teammate or even a different AI model, is still valuable. The level of review depends on your appetite for risk: some teams may allow senior engineers to approve the code they instructed the AI tools to create, while others will stick with strict two-step reviews.
AI can also assist in reviews, catching bugs or enforcing guidelines, but ultimately, review practices are about balancing trust, safety, and risk.
With all the talk of increasing productivity, it is important to say that AI does not always speed up coding. If I have to tell somebody or something else to do it all the time, it might be slower than me doing it. In other cases, it might help me speed up. That balance of knowing when to automate and what to do yourself is important.
More and more code is being produced by AI, requiring us to spend increased time reviewing and deciding whether to accept or reject suggestions. As we evolve toward autonomous agents, we’re essentially becoming managers of agent development teams.
We no longer need to write the implementation and spell out how to do a task. Instead, we are able to focus on what we are building, shifting our focus to supplying our intent. Tools have evolved to provide chat and instruction UIs to gather that intent in small chunks, and we are now moving to writing richer requirements that, along with full context, allow the AI to do much larger sets of work. If there are errors in the output, we simply update the specifications.
How do we know what to build? What do we try first? The cost of experimenting has gotten so much smaller now that we can ask AI to prototype ideas and offer a variety of options. This allows us to discover new ideas or compare alternatives: would this design work better? What solution delivers the best performance? We can then run experiments through our existing CI/CD pipelines to provide a feedback loop that helps us make informed choices.
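As a hedged illustration of that feedback loop, here is a minimal Python sketch of the kind of comparison a CI job could run over two AI-prototyped alternatives. The functions dedupe_sort_v1 and dedupe_sort_v2 are hypothetical stand-ins, not anything from the episode:

```python
# Minimal sketch: timing two AI-generated prototypes of the same task
# so a CI job can report which alternative performs better.
import timeit

def dedupe_sort_v1(items):
    # Prototype A: set-based deduplication, then sort.
    return sorted(set(items))

def dedupe_sort_v2(items):
    # Prototype B: dict-based deduplication, then sort.
    return sorted(dict.fromkeys(items))

if __name__ == "__main__":
    data = list(range(10_000)) * 3
    for name, fn in [("v1", dedupe_sort_v1), ("v2", dedupe_sort_v2)]:
        seconds = timeit.timeit(lambda: fn(data), number=50)
        print(f"{name}: {seconds:.3f}s for 50 runs")
```

Wired into a pipeline, output like this becomes the data point that lets you pick between prototypes instead of debating them in the abstract.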
I'm sure you have watched a seasoned member of your team leave and had that painful realization that they are taking a lot of hard-earned implicit knowledge with them.
AI now gives us a compelling reason to share our knowledge. This benefits both our colleagues and the AI systems themselves. In fact, knowledge might be your company's only unique value proposition as building and delivering products increasingly becomes a commodity.
Specifications are a great way of aligning yourself with the model, and they also help align humans. It's false to believe that an LLM is always going to stick to whatever we write in the specs. But it definitely helps not to repeat yourself; writing things up keeps them available. That's definitely an improvement, and it's a way of structuring things. When two engineers are arguing about anything technical, my trick is to tell them to move to a whiteboard. Specs do something similar. Collaboration around specs and the writing of specs is very useful, and it's still maturing in the field as well.
Do you still want to deploy to production for end users? The DORA metrics are all about getting things out to the user in a reliable way, so they still matter. I know people assume that anything with an old name is no longer useful, but it still makes sense in this world.
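To make two of those metrics concrete, here is a minimal Python sketch of deployment frequency and lead time for changes, computed from made-up deploy records; a real setup would pull these from your CI/CD and version control history:

```python
# Minimal sketch of two DORA metrics from illustrative data only.
from datetime import datetime, timedelta

deploys = [
    # (commit timestamp, deploy timestamp) -- hypothetical records
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 14, 0)),
    (datetime(2024, 6, 4, 10, 0), datetime(2024, 6, 4, 11, 30)),
    (datetime(2024, 6, 7, 8, 0), datetime(2024, 6, 7, 16, 0)),
]

# Deployment frequency: deploys per day over the observed window.
window_days = (deploys[-1][1] - deploys[0][1]).days + 1
frequency = len(deploys) / window_days

# Lead time for changes: average time from commit to deploy.
lead_times = [deploy - commit for commit, deploy in deploys]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {frequency:.2f}/day")
print(f"Average lead time: {avg_lead}")
```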
If I have tests, I know the AI is not skipping or removing pieces of the code that I didn't want it to touch. Big upfront changes? AI actually works better in smaller chunks. You want to do snapshotting and commits? That's also helpful. Write good documentation? That also helps the AI.
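A small sketch of what that test guardrail can look like, using pytest conventions and a hypothetical slugify() helper: if an AI edit silently drops or rewrites part of the behavior, the assertions fail before the change lands.

```python
# Pinned-behavior test acting as a guardrail for AI edits.
import re

def slugify(title: str) -> str:
    # Lowercase, then replace runs of non-alphanumerics with one hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_keeps_expected_behavior():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  --AI Native Dev--  ") == "ai-native-dev"
    assert slugify("") == ""
```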
The same goes for technical debt. Imagine you have a code base with four conflicting naming schemes. That's bad for the AI; it gets confused, same as a human. So if you fix that, you're actually improving things for the AI.
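A toy Python illustration of that kind of debt, with four invented names for the same concept and the single consistent replacement:

```python
# Illustrative only: the same concept under four conflicting naming
# schemes. An AI (like a human) has to guess which convention to extend.
def fetchUser(user_id): ...       # camelCase
def get_account(user_id): ...     # snake_case, different noun
def RetrieveMember(user_id): ...  # PascalCase, a third synonym
def LOADPROFILE(user_id): ...     # a fourth variant

# After paying down the debt: one name, one convention.
def get_user(user_id: int) -> dict:
    """Single, consistent entry point (stub for illustration)."""
    return {"id": user_id}
```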
If you couldn't convince your boss about good engineering practices, try the AI angle: hey, it's good for AI, it will make us more productive. Maybe they'll listen this time!
00:00 Introduction to Developer Experience and AI Native Dev
03:12 The Evolution of DevOps and AI
08:22 The Role of AI in Software Development
13:00 Ownership and Code Review in AI Development
20:09 Exploration and Experimentation in Software Delivery
27:54 The Relevance of DORA Metrics in AI Development
33:22 The Future of Software Engineering with AI