{"id":2487,"date":"2024-08-11T18:15:24","date_gmt":"2024-08-11T18:15:24","guid":{"rendered":"https:\/\/www.aviator.co\/blog\/?p=2487"},"modified":"2025-09-25T11:56:07","modified_gmt":"2025-09-25T11:56:07","slug":"how-to-calculate-dora-metrics","status":"publish","type":"post","link":"https:\/\/www.aviator.co\/blog\/how-to-calculate-dora-metrics\/","title":{"rendered":"How to Calculate DORA Metrics"},"content":{"rendered":"\n<figure class=\"wp-block-image is-style-default\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"577\" src=\"https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/dora-metrics.jpg\" alt=\"DORA metrics\" class=\"wp-image-2488\" srcset=\"https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/dora-metrics.jpg 1024w, https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/dora-metrics-300x169.jpg 300w, https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/dora-metrics-768x433.jpg 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><a href=\"https:\/\/cloud.google.com\/blog\/products\/devops-sre\/using-the-four-keys-to-measure-your-devops-performance\" target=\"_blank\" rel=\"noopener\" title=\"\">DORA metrics<\/a>, developed by Google Cloud&#8217;s\u00a0<a href=\"https:\/\/dora.dev\/\" target=\"_blank\" rel=\"noopener\" title=\"\">DevOps Research and Assessment<\/a>\u00a0team, are a proven and effective way to measure and improve DevOps delivery performance. By tracking and optimizing these metrics, development and DevOps teams can identify bottlenecks, enhance processes, and ultimately deliver higher-quality software more quickly and reliably.<\/p>\n\n\n\n<p>Although these metrics are simple, they&#8217;ve become an industry standard because they provide actionable insight into software delivery performance. 
The four DORA metrics are as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lead time for changes<\/li>\n\n\n\n<li>Deployment frequency<\/li>\n\n\n\n<li>Failed deployment recovery time<\/li>\n\n\n\n<li>Change failure rate<\/li>\n<\/ul>\n\n\n\n<p>DORA metrics also have the benefit of not singling out individual DevOps team members. Software delivery issues are usually caused by processes, not people. DORA metrics are most useful at identifying process bottlenecks, which, if improved, enable people to do their best work.<\/p>\n\n\n\n<p>While DORA metrics alone don&#8217;t guarantee a good experience for team members, they are a strong indicator of thoughtful management focused on creating a healthy DevOps process that gets work into production quickly.<\/p>\n\n\n\n<p>In this guide, you&#8217;ll learn what each metric is, why it matters, and how to calculate it manually using\u00a0<a href=\"https:\/\/github.com\/features\/actions\" target=\"_blank\" rel=\"noopener\" title=\"\">GitHub Actions<\/a>\u00a0in your GitHub repository without any external tools.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Calculating the DORA Metrics<\/h2>\n\n\n\n<p>If you want to follow along with this guide, you&#8217;ll need a GitHub repository to add your DORA actions to. The actions will work best in an active repository with frequent commits and deployments to provide data for calculating metrics. However, you can also add the actions to an empty repository and then add an empty\u00a0<a href=\"https:\/\/docs.github.com\/en\/actions\/deployment\/about-deployments\/deploying-with-github-actions\" target=\"_blank\" rel=\"noopener\" title=\"\">deployment<\/a>\u00a0so you can experiment with the DORA actions.<\/p>\n\n\n\n<p>Start by cloning the repository you&#8217;ll use, and then create a new directory named\u00a0<code>.github\/workflows<\/code>\u00a0in the root of the repository. 
As you create each action below, place it in a YAML file with a meaningful name, such as\u00a0<strong>calculate-lead-time.yml<\/strong>, in the directory you just created. The exact name you choose for each file does not matter, as GitHub automatically processes all YAML files in a repository&#8217;s\u00a0<code>.github\/workflows<\/code>\u00a0directory. For more information on how GitHub Actions work and how to set them up, refer to the\u00a0<a href=\"https:\/\/docs.github.com\/en\/actions\" target=\"_blank\" rel=\"noopener\" title=\"\">GitHub Actions docs<\/a>.<\/p>\n\n\n\n<p>You will store the data for DORA metric calculation in CSV files saved to the repository, which avoids the need for an external data store. While there are many automated tools for calculating DORA metrics, learning how to calculate the metrics manually ensures you will fully understand your data if you adopt an automated solution.<\/p>\n\n\n\n<p>In addition to the actions that store the raw data, you&#8217;ll create a final action that calculates daily DORA metrics for each of the past thirty days in a Markdown-formatted report you can view via the GitHub UI for your repository.<\/p>\n\n\n\n<p>Let&#8217;s start by creating an action that calculates lead time.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lead Time for Changes<\/h3>\n\n\n\n<p>Lead time for changes measures the time it takes for a Git commit to get into production. This metric helps you understand how quickly you deliver new features or fixes to your users.<\/p>\n\n\n\n<p>To calculate it, you need timestamps for when commits are initially added to the system and when those commits are pushed into production. 
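Before wiring this into a workflow, it may help to see the arithmetic on its own. This minimal sketch uses hypothetical Unix timestamps (not taken from any real repository) to show that each lead time is simply the deployment timestamp minus the commit timestamp:

```python
# Minimal sketch of the lead-time arithmetic (hypothetical Unix timestamps).
commit_timestamps = [1722880000, 1722883600, 1722890800]  # when each commit landed
deployment_timestamp = 1722902400                         # when the deployment happened

# Lead time per commit, in hours: (deployment time - commit time) / 3600
lead_times_hours = [(deployment_timestamp - t) / 3600 for t in commit_timestamps]
average_lead_time = sum(lead_times_hours) / len(lead_times_hours)

print(lead_times_hours)   # per-commit lead times in hours
print(average_lead_time)  # one aggregate figure for the deployment
```

Averaging the per-commit values gives a single lead-time figure for the deployment.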
Here&#8217;s how you can use a GitHub action to calculate the lead time for changes:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: Calculate Lead Time for Changes\non:\n  deployment_status:\njobs:\n  lead-time:\n    if: ${{ github.event.deployment_status.state == 'success' }}\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions\/checkout@v4\n        with:\n          fetch-depth: 0 # fetch full history so git log can walk all commits\n      - name: Calculate lead time\n        run: |\n          DEPLOYMENT_SHA=${{ github.event.deployment.sha }}\n          DEPLOYMENT_DATE=$(date -d \"${{ github.event.deployment.created_at }}\" +%s)\n          # Only measure commits that are new since the last recorded deployment\n          if &#91; -f last_deployed_sha.txt ]; then\n            RANGE=\"$(cat last_deployed_sha.txt)..$DEPLOYMENT_SHA\"\n          else\n            RANGE=$DEPLOYMENT_SHA\n          fi\n          git log --pretty=format:'%H,%ct' $RANGE &gt; commit_times.csv\n          awk -F',' -v deploy_date=$DEPLOYMENT_DATE '{print deploy_date \",\" deploy_date - $2}' commit_times.csv &gt;&gt; lead_time_results.csv\n          echo $DEPLOYMENT_SHA &gt; last_deployed_sha.txt\n      - name: Commit results\n        run: |\n          git config user.name \"github-actions\"\n          git config user.email \"actions@github.com\"\n          git add lead_time_results.csv last_deployed_sha.txt\n          git commit -m \"Update lead time results\"\n          git push<\/code><\/pre>\n\n\n\n<p>The&nbsp;<code>git log<\/code>&nbsp;command retrieves the hashes and timestamps of the commits included in the deployment, which are then processed using&nbsp;<code>awk<\/code>&nbsp;to calculate the lead time by subtracting each commit timestamp from the deployment timestamp. Each result is appended to&nbsp;<code>lead_time_results.csv<\/code>&nbsp;together with the deployment timestamp so later reports can group lead times by day. Because the&nbsp;<code>deployment_status<\/code>&nbsp;event doesn&#8217;t support activity-type filtering, the job uses an&nbsp;<code>if<\/code>&nbsp;condition to run only for successful deployments.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Deployment Frequency<\/h3>\n\n\n\n<p>Deployment frequency is a measure of how often your projects are deployed to production. High deployment frequency generally indicates a team&#8217;s ability to deliver updates quickly and reliably.<\/p>\n\n\n\n<p>To track deployment frequency, log each deployment&#8217;s timestamp. 
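Once the timestamps are logged, the frequency itself is just a count of deployments per time window. A minimal sketch, using hypothetical Unix timestamps in place of a real log file:

```python
# Minimal sketch: deployment frequency as a count of deployments per UTC day,
# using hypothetical Unix timestamps standing in for a deployment log.
from collections import Counter
from datetime import datetime, timezone

timestamps = [1722880000, 1722883600, 1722970000]  # one entry per deployment

days = [datetime.fromtimestamp(t, tz=timezone.utc).date() for t in timestamps]
deployments_per_day = Counter(days)

for day, count in sorted(deployments_per_day.items()):
    print(day, count)
```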
Here&#8217;s an example using a GitHub action:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: Track Deployment Frequency\non:\n  deployment:\njobs:\n  deployment-frequency:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions\/checkout@v4\n      - name: Log deployment\n        run: |\n          echo \"$(date +%s)\" &gt;&gt; deployment_log.csv\n      - name: Commit results\n        run: |\n          git config user.name \"github-actions\"\n          git config user.email \"actions@github.com\"\n          git add deployment_log.csv\n          git commit -m \"Log deployment\"\n          git push<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Failed Deployment Recovery Time<\/h3>\n\n\n\n<p>Failed deployment recovery time measures how quickly service is fully restored after an outage or service degradation caused by a change released to production. Depending on the severity of the issue, it may require anything from a quick hotfix to a complete rollback to restore service.<\/p>\n\n\n\n<p>This metric is crucial for understanding the resilience of your systems: the faster you recover from service disruptions caused by deploying changes to production, the less likely it is that users will be negatively impacted.<\/p>\n\n\n\n<p>To log the time delta between a service disruption and restoration, you can use a GitHub action triggered by a&nbsp;<code>repository_dispatch<\/code>&nbsp;event:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: Track Failed Deployment Recovery\non:\n  repository_dispatch:\n    types: &#91;service-disruption, service-restoration]\njobs:\n  time-to-restore:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions\/checkout@v4\n      - name: Log disruption or restoration time\n        run: |\n          if &#91; \"${{ github.event.action }}\" == \"service-disruption\" ]; then\n            echo \"Disruption,$(date +%s)\" &gt;&gt; restore_log.csv\n          elif &#91; \"${{ github.event.action }}\" == \"service-restoration\" ]; then\n            echo \"Restoration,$(date +%s)\" &gt;&gt; restore_log.csv\n          fi\n      - name: Commit results\n        run: |\n          git config user.name \"github-actions\"\n          git config user.email \"actions@github.com\"\n          git add restore_log.csv\n          git commit -m \"Log service disruption\/restoration\"\n          git push<\/code><\/pre>\n\n\n\n<p>Note that GitHub has no way of automatically detecting when an application is experiencing a service disruption. This means you must trigger the event by using a monitoring tool to track your application&#8217;s status and\u00a0<a href=\"https:\/\/docs.github.com\/en\/rest\/repos\/repos#create-a-repository-dispatch-event\" target=\"_blank\" rel=\"noopener\" title=\"\">create a repository dispatch event<\/a>\u00a0with a type of\u00a0<code>service-disruption<\/code>\u00a0or\u00a0<code>service-restoration<\/code>\u00a0via the\u00a0<a href=\"https:\/\/docs.github.com\/en\/rest\" target=\"_blank\" rel=\"noopener\" title=\"\">GitHub REST API<\/a>. Also consider how you will determine whether a service disruption is related to a failed deployment. If your monitoring tool is sophisticated, you can filter out most disruptions unrelated to deployment and only call the GitHub API for relevant events.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Change Failure Rate<\/h3>\n\n\n\n<p>Change failure rate is the percentage of your deployments that fail, which helps you understand the stability of your deployment pipeline. 
Ideally, you should analyze and fix the root causes of deployment failures so the failure rate trends downward over time.<\/p>\n\n\n\n<p>To store the data for calculating change failure rate, log the outcome of each deployment in a GitHub action:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: Track Change Failure Rate\non:\n  deployment_status:\njobs:\n  change-failure-rate:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions\/checkout@v4\n      - name: Log deployment status\n        run: |\n          STATE=\"${{ github.event.deployment_status.state }}\"\n          if &#91; \"$STATE\" == \"failure\" ]; then\n            echo \"failure,$(date +%s)\" &gt;&gt; deployment_status_log.csv\n          elif &#91; \"$STATE\" == \"success\" ]; then\n            echo \"success,$(date +%s)\" &gt;&gt; deployment_status_log.csv\n          fi\n      - name: Commit results\n        run: |\n          git config user.name \"github-actions\"\n          git config user.email \"actions@github.com\"\n          git add deployment_status_log.csv\n          git commit -m \"Log deployment status\" || exit 0\n          git push<\/code><\/pre>\n\n\n\n<p>Because the&nbsp;<code>deployment_status<\/code>&nbsp;event doesn&#8217;t support activity-type filtering, the script checks the reported state explicitly and ignores intermediate states such as&nbsp;<code>pending<\/code>; when nothing was logged, the commit step exits without pushing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Calculating Daily DORA Metrics<\/h3>\n\n\n\n<p>Now that you&#8217;ve created all the actions to store the data needed to calculate DORA metrics, let&#8217;s see how to create an action that uses this data to generate a DORA metrics report.<\/p>\n\n\n\n<p>To calculate daily DORA metrics for the past thirty days, you can create a GitHub Action, runnable on a schedule or on demand, that processes the log files:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: Calculate Daily DORA Metrics\non:\n  workflow_dispatch:\n  schedule:\n    - cron: '0 0 * * *'\njobs:\n  calculate-metrics:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout code\n        uses: actions\/checkout@v4\n      - name: Set up Python\n        uses: actions\/setup-python@v5\n        with:\n          python-version: '3.x'\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install pandas\n      - name: Calculate daily metrics\n        shell: python\n        run: |\n          import pandas as pd\n          from datetime import datetime, timedelta\n\n          def day_window(date):\n              # Midnight-to-midnight window for the given date\n              start_of_day = date.replace(hour=0, minute=0, second=0, microsecond=0)\n              return start_of_day, start_of_day + timedelta(days=1)\n\n          def rows_for_day(df, date):\n              # Rows whose Unix timestamp falls within the given day\n              start_of_day, end_of_day = day_window(date)\n              stamps = pd.to_datetime(df&#91;'timestamp'], unit='s')\n              return df&#91;(stamps &gt;= start_of_day) &amp; (stamps &lt; end_of_day)]\n\n          def calculate_daily_deployments(df, date):\n              return len(rows_for_day(df, date))\n\n          def calculate_daily_failure_rate(status_df, date):\n              day = rows_for_day(status_df, date)\n              failures = len(day&#91;day&#91;'status'] == 'failure'])\n              return (failures \/ len(day)) * 100 if len(day) &gt; 0 else 0\n\n          def calculate_daily_lead_time(df, date):\n              # lead_time_results.csv rows: deployment timestamp, lead time in seconds\n              day = rows_for_day(df, date)\n              return day&#91;'value'].mean() \/ 3600 if len(day) &gt; 0 else 0\n\n          def calculate_daily_restore_time(df, date):\n              # restore_log.csv rows: event label, Unix timestamp\n              day = rows_for_day(df, date)\n              disruptions = day&#91;day&#91;'event'] == 'Disruption']\n              restorations = day&#91;day&#91;'event'] == 'Restoration']\n              total_restore_time = 0\n              restored = 0\n              for _, disruption in disruptions.iterrows():\n                  later = restorations&#91;restorations&#91;'timestamp'] &gt; disruption&#91;'timestamp']]\n                  if len(later) &gt; 0:\n                      total_restore_time += later.iloc&#91;0]&#91;'timestamp'] - disruption&#91;'timestamp']\n                      restored += 1\n              return (total_restore_time \/ restored) \/ 3600 if restored &gt; 0 else 0\n\n          def generate_mermaid_chart(title, dates, values):\n              chart = f\"```mermaid\\nxychart-beta\\n    title \\\"{title}\\\"\\n\"\n              chart += f\"    x-axis &#91;{', '.join(date.strftime('%d-%m') for date in dates)}]\\n\"\n              max_value = max(values) if max(values) &gt; 0 else 1\n              chart += f\"    y-axis \\\"{title}\\\" 0 --&gt; {max_value * 1.1:.2f}\\n\"\n              chart += f\"    bar &#91;{', '.join(f'{value:.2f}' for value in values)}]\\n\"\n              chart += \"```\\n\\n\"\n              return chart\n\n          now = datetime.now()\n          dates = &#91;now - timedelta(days=i) for i in range(30, 0, -1)]\n\n          # Each log is read with the column layout written by the earlier actions\n          deployment_log = pd.read_csv('deployment_log.csv', header=None, names=&#91;'timestamp'])\n          deployment_status_log = pd.read_csv('deployment_status_log.csv', header=None, names=&#91;'status', 'timestamp'])\n          lead_time_results = pd.read_csv('lead_time_results.csv', header=None, names=&#91;'timestamp', 'value'])\n          restore_log = pd.read_csv('restore_log.csv', header=None, names=&#91;'event', 'timestamp'])\n\n          metrics = {\n              'Deployments': &#91;calculate_daily_deployments(deployment_log, date) for date in dates],\n              'Failure Rate (%)': &#91;calculate_daily_failure_rate(deployment_status_log, date) for date in dates],\n              'Lead Time (hours)': &#91;calculate_daily_lead_time(lead_time_results, date) for date in dates],\n              'Restore Time (hours)': &#91;calculate_daily_restore_time(restore_log, date) for date in dates]\n          }\n\n          with open('daily_metrics.md', 'w') as f:\n              f.write(\"# Daily DORA Metrics (Past 30 Days)\\n\\n\")\n              for metric, values in metrics.items():\n                  f.write(f\"## {metric}\\n\\n\")\n                  f.write(generate_mermaid_chart(metric, dates, values))\n\n      - name: Commit results\n        run: |\n          git config user.name \"github-actions\"\n          git config user.email \"actions@github.com\"\n          git add daily_metrics.md\n          git commit -m \"Update daily DORA metrics\"\n          git push<\/code><\/pre>\n\n\n\n<p>This script processes the CSV log files, calculates the DORA metrics for each of the past thirty days, and outputs the results as Mermaid charts embedded in Markdown. Lead and restore times are converted from seconds to hours, and a disruption with no matching restoration yet is skipped rather than crashing the report. It will run automatically once a day at midnight UTC, and it can also be run manually via the GitHub UI.<\/p>\n\n\n\n<p>Once you&#8217;ve added all the actions, you can push them to your repository so GitHub can process them. Every deployment from the repository will then update the DORA data, making it available when generating the daily report.<\/p>\n\n\n\n<p>If you use these actions in a busy production repo, consider adding an action that occasionally rotates the CSV data files to prevent the accumulation of old, unneeded data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Interpreting and Optimizing Your DORA Metrics<\/h2>\n\n\n\n<p>Now that you can calculate your DORA metrics, what should you do with the data? 
Unfortunately, there&#8217;s no straightforward answer because it depends heavily on the kind of software your team ships and the type of organization you work in.<\/p>\n\n\n\n<p>Generally, you want to aim for high deployment frequency (<em>e.g.,<\/em>&nbsp;multiple deployments per day), low lead time for changes (<em>e.g.,<\/em>&nbsp;less than one day), quick recovery from failed deployments (<em>e.g.,<\/em>&nbsp;less than one hour), and a low change failure rate (<em>e.g.,<\/em>&nbsp;less than 5 percent). But the exact targets depend on your team&#8217;s context. For example, if you currently deploy only once a month, aiming for once a week is a reasonable first milestone.<\/p>\n\n\n\n<p>So while DORA metrics tell you&nbsp;<em>what is happening<\/em>, they don&#8217;t tell you&nbsp;<em>what to do about it<\/em>. Even if you identify bottlenecks that slow down your deployment process, it&#8217;s not always easy to solve them.<\/p>\n\n\n\n<p>That&#8217;s where a developer collaboration tool like\u00a0<a href=\"https:\/\/www.aviator.co\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Aviator<\/a>\u00a0can help. Slow reviews and merges are a\u00a0<a href=\"https:\/\/www.michaelagreiler.com\/code-review-pitfalls-slow-down\/\" target=\"_blank\" rel=\"noopener\" title=\"\">major cause of slow deployments<\/a>, and slow deployments negatively impact all four DORA metrics. 
Features like\u00a0<a href=\"https:\/\/www.aviator.co\/flexreview\" target=\"_blank\" rel=\"noopener\" title=\"\">FlexReview<\/a>,\u00a0<a href=\"https:\/\/www.aviator.co\/merge-queue\" target=\"_blank\" rel=\"noopener\" title=\"\">MergeQueue<\/a>,\u00a0<a href=\"https:\/\/www.aviator.co\/stacked-prs\" target=\"_blank\" rel=\"noopener\" title=\"\">Stacked PRs<\/a>, and\u00a0<a href=\"https:\/\/www.aviator.co\/releases\" target=\"_blank\" rel=\"noopener\" title=\"\">Releases<\/a>\u00a0help improve your metrics\u00a0<em>and<\/em>\u00a0make your developers happier.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>DORA metrics won&#8217;t take your team from subpar to world-class overnight, but when used correctly, they\u00a0<em>will<\/em>\u00a0help you steadily improve over time\u2014and\u00a0<a href=\"https:\/\/www.aviator.co\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Aviator<\/a>\u00a0can help you get there more quickly.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.aviator.co\/releases\"><img decoding=\"async\" src=\"https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/blog-cta-9Release_CTA.svg\" alt=\"\" class=\"wp-image-2489\"\/><\/a><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>DORA metrics won&#8217;t take your team from subpar to world-class overnight, but when used correctly, they\u00a0will\u00a0help you steadily improve over 
time.<\/p>\n","protected":false},"author":39,"featured_media":2488,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[77],"tags":[31,29],"class_list":["post-2487","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-developer-productivity"],"blocksy_meta":[],"acf":[],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/www.aviator.co\/blog\/wp-content\/uploads\/2024\/08\/dora-metrics.jpg","post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/posts\/2487","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/users\/39"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/comments?post=2487"}],"version-history":[{"count":6,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/posts\/2487\/revisions"}],"predecessor-version":[{"id":4916,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/posts\/2487\/revisions\/4916"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/media\/2488"}],"wp:attachment":[{"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/media?parent=2487"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/categories?post=2487"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aviator.co\/blog\/wp-json\/wp\/v2\/tags?post=2487"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel
}","templated":true}]}}