
The Science Behind DAT, Meta’s Productivity Metric

November 27, 2025
Metrics
Moritz, a software engineering researcher at Meta, explains their foundational metric, why time is one of the least gameable metrics, how AI-assisted development changes the meaning of “productivity,” and why investments in tooling drive more value than UI tweaks.
Hosted by
Ankit Jain
Co-founder at Aviator
Guest
Moritz Beller
Software Engineering Researcher

About Moritz Beller

Moritz is part of the Developer Insights team at Meta, which partners closely with DevEx teams to find and scale insights that make developers more productive. His research interest lies in creating and empirically evaluating tools that help developers be more productive.

Measuring Developer Productivity at Meta

Meta is known for its experimentation culture on the product side, but internally, especially within developer experience, the company historically relied on surveys and expert judgment.

That gap eventually led to the creation of Diff Authoring Time (DAT): Meta’s system for measuring the time engineers spend authoring and iterating on code changes (diffs).

The creation of DAT (Diff Authoring Time)

A few things converged around late 2022. First, Meta was suddenly operating at a massive remote-work scale, which raised the question of whether people were working effectively from home. At the same time, Meta had a rigorous experimentation culture for product features, but internally, for developer tooling, we relied mostly on surveys or expert opinion, both of which are vulnerable to bias.

So the idea emerged: why not bring that same scientific rigor to the internal developer experience? DAT became the backbone for that.

What does DAT measure?

A diff is Meta’s equivalent of a pull request: a self-contained code change with a title, reviewers, and a test plan.

Diff Authoring Time (DAT) measures the total time engineers spend developing a diff. Ideally, that includes everything: coding, docs, testing, meetings, and related tasks.

Of course, some activities, like whiteboarding or pair programming, can’t be captured perfectly, but the core work happens in the IDE, which DAT tracks well. And we keep pushing the frontier.
For example, with LLMs, we can now automatically analyze meetings, extract timestamps, and map them to specific diffs, something that seemed impossible two years ago. We accept some natural inaccuracy, but the coverage is improving rapidly.
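The kind of aggregation DAT performs can be sketched roughly as follows. This is a minimal, hypothetical TypeScript model assuming timestamped IDE events attributed to a diff and a simple idle-gap heuristic; the event schema and the threshold are illustrative assumptions, not Meta's actual pipeline.

```typescript
// Hypothetical telemetry event: a timestamped activity attributed to a diff.
type TelemetryEvent = { diffId: string; timestampMs: number };

// Sum active time per diff. Gaps between consecutive events longer than
// `idleGapMs` are treated as idle and not counted. Both the event shape and
// the idle-gap heuristic are illustrative, not Meta's implementation.
function diffAuthoringTime(
  events: TelemetryEvent[],
  idleGapMs: number = 5 * 60 * 1000,
): Map<string, number> {
  // Group timestamps by diff.
  const byDiff = new Map<string, number[]>();
  for (const e of events) {
    const stamps = byDiff.get(e.diffId) ?? [];
    stamps.push(e.timestampMs);
    byDiff.set(e.diffId, stamps);
  }

  // For each diff, add up only the gaps short enough to count as active work.
  const totals = new Map<string, number>();
  for (const [diffId, stamps] of byDiff) {
    stamps.sort((a, b) => a - b);
    let total = 0;
    for (let i = 1; i < stamps.length; i++) {
      const gap = stamps[i] - stamps[i - 1];
      if (gap <= idleGapMs) total += gap;
    }
    totals.set(diffId, total);
  }
  return totals;
}
```

With events every minute for three minutes, a ten-minute break, then two more events a minute apart, only the short gaps are counted toward the diff's authoring time.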

Productivity Gained Through Metrics

Code Sharing

Meta has separate iOS and Android instances for apps like Facebook. Code sharing frameworks let teams reuse logic across platforms.
With DAT, the team measured how long it took to build a feature with shared code and modeled how long it would take without it. By aggregating diffs across both ecosystems, they quantified the engineering time saved.

Auto-Memoization in React

React components used to require manual memoization, essentially UI-level caching. It was error-prone and expensive. The Hack team built compiler-level auto-memoization (not yet open-sourced), which reduced development time by 33%, not even counting the bugs it prevented.

Foundational Improvements over UI tweaks

We tried small UI optimizations, similar to product experiments like changing a button color. But developers all use their IDEs differently; the design space is huge compared to consumer interfaces.

These small tweaks showed no measurable effect. By contrast, deep, language-level, or framework-level changes produced huge gains.

DAT has helped justify bolder investments into developer tooling, not just incremental tweaks.

Gaming Metrics

Transparency is crucial. Developers at Meta know exactly what’s captured and how it’s used. That clarity builds psychological safety.

Also, time is one of the least gameable metrics. You can artificially inflate code output, but you can’t fake time without hurting yourself; developers don’t want to spend more time on diffs. And time is a bounded resource: there are only 24 hours in a day.

And importantly, DAT is not used at the individual level. It’s not an evaluation tool; it’s a research and insights tool.

Our research shows time spent correlates strongly with self-perceived productivity. Also, using multiple metrics together reduces the risk that any one metric can be gamed.

Starting with Metrics

My advice for an engineering manager who wants to make more data-driven decisions would be to start simple, with surveys. They’re low-cost, and qualitative data can be converted into quantifiable signals.

Just remember that measuring developer productivity, like measuring any knowledge work, is not a solved problem. Frameworks like DORA or SPACE, or even DAT, are helpful, but they’re still proxies. Combining objective metrics with subjective surveys gives the best approximation of reality.

Measuring developer productivity is fundamental now that we're observing the largest change in software engineering in a decade. I'm happy we have our traditional productivity metrics in a good place so we can better observe the effect of AI.


Chapters

00:00 Introduction
01:04 Understanding Developer Insights at Meta
04:42 Defining Diff Authoring Time (DAT)
07:48 Evolution of DAT: From Version 1 to 6
11:17 Telemetry and Data Collection for Productivity
14:01 Challenges in Measuring Software Engineering Productivity
15:56 Impact of AI on Software Development Metrics
17:48 Case Studies: Productivity Gains from Metrics
22:26 Counterintuitive Findings in Productivity Metrics
24:43 The Challenges of Measuring Productivity
30:04 Qualitative Feedback and Developer Insights
33:28 Advice for Engineering Leaders on Data-Driven Practices
35:14 Future of Productivity Measurement in Software Engineering
