A Season of Insights: What We Learned from 21 Hangar DX Episodes
We explored how developer experience is evolving and how AI is reshaping what it means to be an engineer.

This season of The Hangar DX was special. We had the opportunity to sit down with 21 incredible leaders, researchers, and practitioners across the software industry. Together, we explored how developer experience is evolving, from measuring productivity more thoughtfully to rethinking code reviews, platform engineering, and how AI is reshaping what it means to be an engineer.
AI and the Changing Nature of Engineering
Across conversations, AI wasn’t framed as a replacement for engineers but as a force pushing the industry back toward fundamentals.
Angie Jones, VP of Engineering at Block, shared how a single internal experiment became Goose, one of the first MCP clients, and how Block deployed AI agents across every department in just eight weeks.
Her advice on how to drive internal AI adoption:
You can’t uplift everyone at once. It’s too hard.
Find champions, people who won’t be discouraged by early failures. I assembled a cohort of 50 engineers whose repos collectively cover ~60% of Block’s code. They dedicate ~30% of their time to AI enablement, experimentation, and pattern creation.
Annie Vella, Distinguished Engineer at Westpac NZ, spoke in another episode about the ‘software engineering identity crisis’. In her viral blog post, Annie described how software engineers have long found their identity in building things, not managing things, and now have to forge a new identity as overseers of agents:
I’m hoping that AI brings engineers closer to users. I’m hearing the term “product engineer” everywhere for engineers who keep the customer front and center and own delivery end-to-end.
But lately I’ve also seen a new label emerging: product builder. Not just a software engineer. Not just a product thinker. But someone who builds products in an AI-first world, with agents, with context, with orchestration.
It’s funny that we’re reinventing the label to remind ourselves what software engineering was always supposed to be.
She also pointed out how AI is pushing teams back to core engineering practices:
Now we’re being pushed back toward fundamentals: writing good specifications, designing systems thoughtfully, validating and verifying AI-produced code, engineering context for agents, and writing meaningful test suites. These are all the things we should have been doing all along. AI is just forcing us to remember.
Developer Productivity: Metrics with a Grain of Salt
Measuring productivity came up in nearly every conversation, usually alongside strong warnings about misuse.
Adam Berry, Staff Engineer at Netflix, discussed everything wrong with productivity metrics and why metrics alone are not enough.
At the core, the questions are simple: as an organization, can we ship as effectively as we’d like? Is life good for individual developers here?
Those are the right questions. But answering them with metrics alone is nearly impossible.
The DORA Four were meant as feedback mechanisms for teams to improve, not as a way to compare performance across an entire organization. Somewhere along the way, we lost that thread and started chasing “productivity metrics” instead.
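For readers who don’t have them memorized, the DORA Four are deployment frequency, lead time for changes, change failure rate, and time to restore service. A minimal Python sketch of how a single team might compute them from its own deploy log (the records and field names here are hypothetical) shows how lightweight that feedback loop can be:

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy log for one team over one week.
# Field names and values are illustrative, not from any real pipeline.
deploys = [
    {"committed": datetime(2024, 5, 1, 9),  "deployed": datetime(2024, 5, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 11),
     "failed": True,  "restored": datetime(2024, 5, 3, 13)},
    {"committed": datetime(2024, 5, 5, 14), "deployed": datetime(2024, 5, 6, 9),
     "failed": False, "restored": None},
]
window_days = 7

def hours(delta):
    return delta.total_seconds() / 3600

# 1. Deployment frequency: deploys per day over the window.
deploy_frequency = len(deploys) / window_days

# 2. Lead time for changes: median hours from commit to production.
lead_time = median(hours(d["deployed"] - d["committed"]) for d in deploys)

# 3. Change failure rate: share of deploys that caused an incident.
failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# 4. Time to restore service: median hours from failure to recovery.
restores = [hours(d["restored"] - d["deployed"]) for d in deploys if d["failed"]]
time_to_restore = median(restores) if restores else None

print(deploy_frequency, lead_time, failure_rate, time_to_restore)
```

Computed like this, per team and per iteration, the numbers feed a retrospective. Ranked across an organization, they become exactly the “productivity metrics” Adam warns about.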
Dr. Cat Hicks, a research architect and psychologist for software teams, shared insights from her research on Cycle Time and how metrics can be both illuminating and damaging depending on how they’re used.
Ultimately, good measurement is about clarity. It’s not about punishing people or creating a leaderboard. It’s about understanding where your org is, what outcomes matter, and how to move toward them in a thoughtful, human way.
Moritz Beller, software engineering researcher at Meta, explained the science behind DAT (Diff Authoring Time) and, for engineering managers who want to make more data-driven decisions, shared advice on how to get started with metrics:
Start simple, with surveys. They’re low-cost, and qualitative data can be converted into quantifiable signals.
Just remember that measuring developer productivity, like measuring any knowledge work, is not a solved problem. Frameworks like DORA or SPACE, or even DAT, are helpful, but they’re still proxies. Combining objective metrics with subjective surveys gives the best approximation of reality.
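On the “start with surveys” advice: converting qualitative answers into a quantifiable signal can be as simple as scoring Likert responses and tracking means and detractor shares. A small sketch, with invented survey items and responses:

```python
from statistics import mean

# Hypothetical responses to a quarterly developer experience survey,
# on a 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree).
# The items and numbers are made up for illustration.
survey = {
    "I can ship changes as quickly as I'd like":     [4, 3, 5, 2, 4, 3],
    "Code review turnaround rarely blocks me":       [3, 3, 4, 4, 2, 3],
    "I rarely lose time to flaky builds or tooling": [2, 3, 2, 3, 3, 1],
}

for item, scores in survey.items():
    avg = mean(scores)                                      # central tendency
    detractors = sum(s <= 2 for s in scores) / len(scores)  # share scoring 1 or 2
    print(f"{item}\n  mean = {avg:.1f}, detractors = {detractors:.0%}")
```

The absolute scores matter less than the quarter-over-quarter trend, read alongside an objective proxy like DAT.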
Code Reviews: Still a Deeply Human Practice
Despite advances in AI, code reviews kept coming up as a deeply human practice.
Charity Majors, CTO at Honeycomb, emphasized the importance of creating a culture where shipping code is a regular, expected practice. Code reviews are an integral part of that culture:
The main point of code review is to generate a shared understanding, to ensure that at least one other person really knows and has a stake in what’s going on.
So far, and AI may change all this, the real product of any software engineering team is shared ownership and shared understanding of a corpus of work.
And why talking through code matters:
Just the act of having to talk through what you’re doing and the choices you’ve made typically makes you find problems in your own code that you otherwise might not have found if you’d just merged it and been done.
Adrianne Tacke, Senior Developer Advocate, wrote a book, Looks Good to Me, about how to do constructive code reviews. In her Hangar DX interview, she shared why, with so much code now generated by AI, it’s more important than ever for humans to actually look at it.
Every tool that generates code comes with a disclaimer: Make sure you review it!
Absolutely use AI to take care of the more mundane tasks, like generating the description of what is happening in your PR or letting it do a first-pass review to make sure things are syntactically correct.
But AI can’t replace human judgment, at least at this point, even with all the improvements in LLMs and agents.
Platform Engineering as a Product (Not a Project)
Platform engineering repeatedly came up as a discipline that fails when treated as infrastructure alone.
Luca Galante, core contributor to the Platform Engineering Community, discussed why platform engineering is crucial for developer experience and shed some light on why platform engineering initiatives fail at some companies:
They fail when platform engineering is done as just rebranded DevOps.
The solution is not to look at the platform as a ‘one-and-done’ six-month infrastructure project but to adopt a product management approach: see developers as customers and apply all the product management best practices, from user research and UX to product marketing.
He also emphasized that failure is rarely technical:
I have yet to see a platform engineering project fail because of the tech stack. They fail because they never get developer adoption or stakeholder buy-in.
In another episode, The Limitations of Platform Engineering, Vilas Veeraraghavan and Bryan Finster drew on their many years of experience building platforms that are still in use today.
They also tackled the hardest question of them all: how do you measure the ROI of platform engineering?
People love saying, “We saved 1,500 developer hours.” Cool, and what did those devs do with that time? Did they ship features? Take longer coffee breaks? That metric is meaningless on its own.
What really matters is whether you’re helping teams deliver value faster and more safely. But that’s difficult to measure: the signals are often lagging indicators, like customer satisfaction or revenue, tied to faster delivery.
Culture, Safety, and the Work That Makes Teams Work
Many conversations circled back to culture and the importance of psychological safety in engineering teams.
Titus Winters, Senior Principal Scientist at Adobe, said psychological safety and culture are the foundation for high-performing engineering teams.
I just cannot find research anywhere that doesn’t say psychological safety and a culture focused on growth and learning for frontline developer teams are the primary indicators, predictors, and foundations of technical success.
The research showed that you have to establish psychological safety and then get people to actually follow through on their commitments. Because if you get the psychological safety thing right, the rest of it just kind of starts to fall into place.
Meri Williams, CTO at Pleo, shared her thoughts on the future of engineering leadership in the age of AI and why engineering organizations can’t thrive without managers:
There is also a lot of flattening, organizations thinking they don’t need middle managers or pushing them back into coding roles. It’s been a fascinating experiment, which I think has failed.
Organizations can be OK without managers for a short time. After six months, cracks start to appear. In the next six months, people start to disappear.
Questions, Not Answers
Across 21 conversations, one thing became clear: the tools are changing fast, but the fundamentals aren’t. AI is accelerating development, but it’s also making gaps in culture, documentation, testing, and clarity impossible to ignore.
None of our guests offered definitive answers; these conversations were more about sharing perspectives and asking the right questions. And as we head into a new season, those questions will guide the next set of conversations on how teams build, measure, and evolve in an AI-shaped world. Stay tuned!