The Anti-Metrics Era of Developer Productivity

Instead of building shiny dashboards, let’s focus on automated workflows across the entire SDLC – development, code reviews, builds, tests, and deployments.
CEO @ Aviator


Sophisticated AI coding assistants have fundamentally altered how developers work. What once took hours of concentrated typing and debugging can now be accomplished in minutes through well-crafted prompts and iterative collaboration with AI tools.

Today’s development process often looks like this:

  1. The developer gathers requirements for a task 
  2. The developer provides context, with clear step-by-step directions to an AI assistant
  3. AI generates an initial code implementation
  4. Developer reviews, edits, and refines the AI-generated code
  5. Developer and AI iterate until the solution is optimal

This workflow bears little resemblance to the traditional coding process, where developers wrote every line manually. The skills that matter most have shifted from typing speed and syntax memorization to problem formulation, solution evaluation, and effective collaboration with AI tools.

Tech’s Obsession with Measuring Productivity

The old adage says that you cannot improve what you cannot measure. But the tech industry has taken that out of context, becoming obsessed with measuring “everything we can” – even though Martin Fowler wrote more than two decades ago that developer productivity cannot be measured.

Developer productivity metrics are useful for understanding bottlenecks in the engineering process, but they are not the goal.

Metrics are still important. DORA is an industry-standard set of metrics for measuring and comparing DevOps performance, developed by the DevOps Research and Assessment (DORA) team, a Google Cloud-led initiative that promotes good DevOps practices.
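To make that concrete: DORA tracks four key measures – deployment frequency, lead time for changes, change failure rate, and time to restore service. The sketch below shows roughly how they can be derived from deployment and incident records; the data shapes and field names are assumptions for illustration, not any official DORA tooling.

```python
from datetime import datetime

# Hypothetical deployment and incident records; in practice these would come
# from your CI/CD and incident-tracking systems. Field names are assumptions.
deployments = [
    {"deployed_at": datetime(2024, 6, 3, 9), "committed_at": datetime(2024, 6, 2, 15), "failed": False},
    {"deployed_at": datetime(2024, 6, 4, 11), "committed_at": datetime(2024, 6, 4, 9), "failed": True},
    {"deployed_at": datetime(2024, 6, 7, 16), "committed_at": datetime(2024, 6, 5, 10), "failed": False},
]
incidents = [
    {"opened": datetime(2024, 6, 4, 12), "resolved": datetime(2024, 6, 4, 15)},
]
window_days = 7

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# Lead time for changes: average hours from commit to deployment.
lead_time_hours = sum(
    (d["deployed_at"] - d["committed_at"]).total_seconds() for d in deployments
) / len(deployments) / 3600

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to restore service: average hours from incident open to resolution.
restore_hours = sum(
    (i["resolved"] - i["opened"]).total_seconds() for i in incidents
) / len(incidents) / 3600

print(f"deploys/day: {deploy_frequency:.2f}, lead time: {lead_time_hours:.1f}h, "
      f"failure rate: {change_failure_rate:.0%}, restore time: {restore_hours:.1f}h")
```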

But metrics are only a compass to identify what’s wrong in the engineering process, not a solution. And definitely not a way to measure individual performance!

The urge to measure everything spiked during COVID, when we started working remotely and there was no good way to understand how work was getting done. Part of it also stemmed from management’s insecurity about what was going on in software engineering.

However, when surveyed about the usefulness of developer productivity metrics, most leaders admit that the metrics they track are not representative of developer productivity and tend to conflate productivity with experience. And now that most of the code is written by AI, measuring productivity the same way makes even less sense. If AI reduces programming effort by 30%, does that mean we get 30% more productivity?

What Kills Productivity?

Developers are also pretty clear about what would make them more productive. Atlassian’s State of Developer Experience survey revealed that 69% of developers lose eight hours per week – 20% of their time – to inefficiencies. The key friction points are technical debt (59%), lack of documentation (41%), build processes (27%), lack of time for deep work (27%), and lack of clear direction (25%).

Whether you call it developer experience or platform engineering, the lack of friction equals happy developers, which equals productive developers. In the same survey, 63% of developers said developer experience is important for their job satisfaction.

The Anti-Metrics Approach: Automated Workflows

That’s why I believe in an anti-metrics approach to developer productivity: focusing on eliminating the necessary but mundane tasks that developers confront every day.

Instead of building shiny dashboards, we are building automated workflows across the entire software development lifecycle – development, code reviews, builds, tests, and deployments.

This helps us focus on solving real developer problems instead of just pointing at the problems.

Tracking engineering productivity metrics is still important. However, metrics are only a compass to identify what’s wrong, not the solution. Real impact on developer experience comes from balancing three key levers: people, practices, and tools. 

The tools we are building rely on five core delivery practices:

Trunk-Based Development
The foundational book in the field of software delivery, Accelerate, researched and documented that “developing off trunk/master rather than on long-lived feature branches was correlated with higher delivery performance (…) independent of team size, organization size, or industry.”

Aviator Releases and MergeQueue help developers ship smaller, more frequent releases while keeping the trunk branch healthy.
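One low-tech way to keep the trunk-based habit honest, independent of any tool, is to watch for feature branches that have been sitting around too long. A minimal sketch, assuming a local clone with an `origin` remote and the `git` CLI available (the three-day threshold is an arbitrary choice):

```python
import subprocess
from datetime import datetime, timezone

MAX_AGE_DAYS = 3  # arbitrary threshold; pick what fits your team

# List remote branches together with the committer date of their tip.
output = subprocess.run(
    ["git", "for-each-ref",
     "--format=%(refname:short) %(committerdate:iso8601-strict)",
     "refs/remotes/origin"],
    capture_output=True, text=True, check=True,
).stdout

now = datetime.now(timezone.utc)
for line in output.splitlines():
    ref, date = line.rsplit(" ", 1)
    if ref.endswith(("/main", "/master", "/HEAD")):
        continue  # the trunk itself is allowed to live forever
    age_days = (now - datetime.fromisoformat(date)).days
    if age_days > MAX_AGE_DAYS:
        print(f"long-lived branch: {ref} (last commit {age_days} days ago)")
```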

Continuous Delivery
Continuous delivery is a practice where code changes are automatically built, tested, and prepared for release to production. This approach enables teams to deploy code changes more frequently and reliably and is critical for high-performing engineering organizations. By automating the delivery pipeline, teams can maintain a constant flow of updates while ensuring quality and stability.

As Bryan Finster put it, continuous delivery is about always being deliverable – the ability to deploy the latest changes to production at any moment.
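A minimal sketch of that idea is a pre-deploy gate that refuses to ship unless the tip of the trunk is green. This one uses GitHub’s combined commit status endpoint; the repository name is a placeholder, and if your CI reports via check runs rather than statuses you would query those instead:

```python
import os
import requests

REPO = "your-org/your-repo"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

# Combined CI status for the tip of the trunk branch.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/commits/main/status",
    headers=headers,
    timeout=10,
)
resp.raise_for_status()
state = resp.json()["state"]  # "success", "pending", or "failure"

if state != "success":
    raise SystemExit(f"trunk is not deployable right now (combined status: {state})")
print("trunk is green: safe to deploy the latest changes")
```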

Aviator Releases enables you to democratize deployments and simplify cherry-picks and rollbacks.

Monorepos
A monorepo setup helps establish consistent standards across projects by centralizing build configurations, linting rules, and development workflows. Split microservices into independent deployments with Aviator Releases, automatically assign reviews to the appropriate domain experts with FlexReview, or manage parallel distributed merges using Aviator MergeQueue.

Small Reviews
Small, focused code reviews help maintain high code quality and developer productivity. By breaking changes into smaller, digestible PRs with Aviator Stacked PRs, reviewers can provide more thorough feedback and catch potential issues earlier. This approach also reduces cognitive load on reviewers and speeds up the review process, leading to faster iteration cycles.
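As an illustration (separate from any Aviator feature), here is a small sketch that flags open pull requests above a size threshold using GitHub’s REST API; the repository name and the 400-line threshold are placeholders:

```python
import os
import requests

REPO = "your-org/your-repo"   # placeholder
MAX_CHANGED_LINES = 400       # placeholder threshold; tune per team
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

# Open pull requests for the repository.
prs = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls?state=open",
    headers=headers, timeout=10,
).json()

for pr in prs:
    # The list endpoint does not include size fields, so fetch each PR.
    detail = requests.get(pr["url"], headers=headers, timeout=10).json()
    changed = detail["additions"] + detail["deletions"]
    if changed > MAX_CHANGED_LINES:
        print(f"PR #{detail['number']} ({detail['title']!r}) touches {changed} lines; "
              "consider splitting it into a stack of smaller changes")
```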

Clear and Accurate Ownership
Universal ownership for developer assets promotes well-defined shared responsibility and knowledge sharing across the team. When everyone feels ownership over the codebase, it encourages collaboration, reduces knowledge silos, and ensures that any team member can contribute to any part of the project. Aviator Teams is a self-managed teams portal powered by AI.
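To make the idea concrete, ownership rules are often captured in a GitHub-style CODEOWNERS file, where the last matching pattern wins. The sketch below resolves owners for a changed file with simple glob matching; the teams and paths are made up, and real CODEOWNERS matching follows gitignore-style rules that differ slightly from fnmatch:

```python
from fnmatch import fnmatch

# Made-up CODEOWNERS-style rules; on GitHub, the last matching pattern wins.
rules = [
    ("*",              ["@org/platform"]),   # default owners
    ("services/api/*", ["@org/backend"]),
    ("web/*",          ["@org/frontend"]),
    ("docs/*",         ["@org/docs"]),
]

def owners_for(path: str) -> list[str]:
    """Return the owners of the last rule matching the path."""
    owners: list[str] = []
    for pattern, rule_owners in rules:
        if fnmatch(path, pattern):
            owners = rule_owners
    return owners

print(owners_for("services/api/billing.py"))  # ['@org/backend']
print(owners_for("README.md"))                # ['@org/platform']
```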

While metrics frameworks and dashboards still have a role in engineering organizations, if we really care about developer productivity, we need to stop obsessing over dashboards and start focusing on what actually helps teams do their best work.

That means adopting solid engineering practices, removing unnecessary hurdles, and creating an environment where developers feel supported and empowered.

A slightly different version of this article was published on The New Stack.
