Interpreting Metrics in Sleuth

In order to trust and effectively improve your DORA metrics, it's helpful to understand exactly how Sleuth calculates and presents each of the four DORA metrics throughout its various dashboards and views.

This article expands on the descriptions of the four core DORA metric calculations already described on the preceding pages, and we highly recommend familiarizing yourself with these before reading on:

  • Deploy frequency (and the included discussion of Batch size breakdown)
  • Change lead time
  • Change failure rate
  • MTTR

Parts of this article also assume a basic familiarity with Sleuth's Project Metrics, Team Metrics, and Trends dashboards.

Interpreting "Percent Change" in Sleuth

Percent Change for Project Metrics and Team Metrics

In both the Project Metrics and Team Metrics dashboards, Sleuth provides the ability to filter by a specific target date range. Sleuth then displays two lines within each of the four DORA charts: one for the currently selected period and a second for the prior period of the same length. This overlay is great for comparing and zooming in on specific points in time; however, what's often more important is understanding how the current period compares overall against the prior period.

In addition to plotting the currently selected period and the prior period as two distinct timelines on each graph, Sleuth also displays an overall "percent change" at the top of each graph to help you see at a glance how the average for the selected period compares to the average of the prior period.

On both the Project Metrics and Team Metrics dashboards, percent change is calculated as follows:

  • First, Sleuth calculates the net difference between the two periods by subtracting the average for the prior period from the average for the currently selected period.

  • Then, Sleuth calculates the percent change by dividing this net difference by the average for the prior period and multiplying that result by 100.
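
For example, if the average for the prior period were 2.0 deploys per day and the average for the currently selected period were 2.5 deploys per day, the net difference would be 0.5, and the percent change displayed would be 0.5 ÷ 2.0 × 100 = +25%.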

Percent Change for the Trends dashboard

The Trends dashboard also displays percent change for each of the four DORA metrics, but the calculation here differs slightly from the Project Metrics and Team Metrics dashboards in that the Trends dashboard displays only one period of time (i.e., it has no concept of a "prior period" in its comparison).

For the Trends dashboard, Sleuth calculates percent change by splitting the selected period into two equal halves and calculating the average for each half. From there, the calculation of percent change is similar to the one described for the Project Metrics and Team Metrics dashboards above.
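
For example, with a four-week period selected on the Trends dashboard, Sleuth would average the first two weeks and the last two weeks separately; if the first half averaged 8 deploys per week and the second half averaged 10, the resulting percent change would be (10 − 8) ÷ 8 × 100 = +25%.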

Interpreting Team-level Metrics

Sleuth does not require users to explicitly associate Teams with Projects. Rather, when calculating Team-level metrics, Sleuth automatically infers which Teams are contributing to which Projects by searching for Team Members within Deploys (and, in the case of Code Change Deploys, by searching within the underlying PRs, Branches, Builds, and Issues included in those Deploys). If any Team Member is included as an author on a PR, Branch, or Build within a Deploy, as an owner of any linked Issue listed in the Deploy, or as the initiator of the Deploy itself, then Sleuth will include the entirety of that Deploy in its calculation of Team-level metrics for all Teams to which that Team Member belongs.
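
For example, if a member of a given Team authored just one of the five PRs included in a Deploy, Sleuth counts that entire Deploy (all five PRs) toward that Team's metrics, as well as toward the metrics of any other Teams to which that author belongs.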

As such, Sleuth can present powerful DORA metric "intersections" that show each Team's relative contribution to the DORA metrics for the Projects they're working on. This is evident in views such as the Team Metrics dashboard's Projects contributed to panel, which shows the DORA metrics at the specific intersection of that Team and those Projects.

Similarly, from within the Project Metrics dashboard, Sleuth presents a view into Contributing teams and their relative impacts on that Project's metrics.

Interpreting Averages across Multiple Projects

When viewing metric averages across multiple Projects in Sleuth, it's important to note that Sleuth calculates cross-Project averages based on the underlying Deploys within each Project.

So, for example, if Project A has 2 deploys and Project B has 7 deploys, Sleuth will calculate the average change lead time (CLT) across both Projects by adding the CLT for all 9 Deploys and then dividing that sum by 9 (the total number of Deploys across both Projects).

This produces an average CLT result that in most cases will not be equal to the result of adding up the Project-level CLTs and dividing that sum by 2 (the total number of Projects). Sleuth has been intentionally designed around a Deploy-centric point of view, and we believe this Deploy-level handling of cross-project averages provides the most accurate representation of customers' DORA metrics across Projects.
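
To put hypothetical numbers on this: if Project A's 2 Deploys had CLTs of 10 and 12 hours (a Project-level average of 11 hours) and each of Project B's 7 Deploys had a CLT of 2 hours (a Project-level average of 2 hours), Sleuth's Deploy-level average would be (10 + 12 + 7 × 2) ÷ 9 = 4 hours, whereas averaging the two Project-level averages would give (11 + 2) ÷ 2 = 6.5 hours.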

Some specific use cases where this applies include:

  • Viewing multiple Projects on the Trends dashboard

  • Using Labels to view cross-project metrics

  • Viewing Team-level metrics for Teams working on multiple Projects

  • Passing multiple Project slugs into the Sleuth API

