Interpreting Metrics in Sleuth
In order to trust and effectively improve your DORA metrics, it's helpful to understand exactly how Sleuth calculates and presents each of the four DORA metrics throughout its various dashboards and views.
This article expands on the four core DORA metric calculations described on the preceding pages, and we highly recommend familiarizing yourself with these before reading on:
Deploy Frequency (and its included discussion of Batch size breakdown)
Change Lead Time
Change Failure Rate
MTTR
Parts of this article also assume a basic familiarity with Sleuth's Project Metrics, Team Metrics, and Trends dashboards.
In both the Project Metrics and Team Metrics dashboards, Sleuth provides the ability to filter by a specific target date range. Sleuth then displays two lines within each of the four DORA charts: one for the currently selected period and a second for the prior period of the same length. This overlay is great for comparing and zooming in on specific points in time; however, what's often more important is understanding how the current period compares overall against the prior period.
In addition to plotting the currently selected period and the prior period as two distinct timelines on each graph, Sleuth also displays an overall "percent change" at the top of each graph to help you see at a glance how the average for the selected period compares to the average of the prior period.
On both the Project Metrics and Team Metrics dashboards, percent change is calculated as follows:
First, Sleuth calculates the net difference between the two periods by subtracting the average for the prior period from the average for the currently selected period.
Then, Sleuth calculates the percent change by dividing this net difference by the average for the prior period and multiplying that result by 100.
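For example, here's a minimal sketch of that calculation in Python (the function name and sample numbers are illustrative, not Sleuth's actual implementation):

```python
def percent_change(current_avg: float, prior_avg: float) -> float:
    # Net difference between the selected period and the prior period
    net_difference = current_avg - prior_avg
    # Divide by the prior period's average, then scale to a percentage
    return (net_difference / prior_avg) * 100

# e.g., an average of 12 deploys per week this period vs. 10 in the prior period:
percent_change(12, 10)  # => 20.0
```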
The Trends dashboard also displays percent change for each of the four DORA metrics, but the calculation for percent change here differs slightly from the Project Metrics and Team Metrics dashboards in that the Trends dashboard displays only one period of time (i.e., it has no concept of a "prior period" in its comparison).
For the Trends dashboard, Sleuth calculates percent change by splitting the selected period into two equal halves and calculating the average for each half. From there, the calculation of percent change is similar to the one described for the Project Metrics and Team Metrics dashboards above.
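A rough sketch of the Trends calculation, assuming a series of per-day metric values (the function and input format are assumptions for illustration; Sleuth's internal aggregation may differ):

```python
def trends_percent_change(daily_values: list[float]) -> float:
    # Split the selected period into two equal halves
    midpoint = len(daily_values) // 2
    first_half = daily_values[:midpoint]
    second_half = daily_values[midpoint:]

    # Average each half, then compare the second half against the first,
    # just as the prior-period comparison works on the other dashboards
    first_avg = sum(first_half) / len(first_half)
    second_avg = sum(second_half) / len(second_half)
    return (second_avg - first_avg) / first_avg * 100

# e.g., deploys per day over a two-week period:
trends_percent_change([1, 2, 1, 2, 1, 2, 1, 3, 2, 3, 2, 3, 2, 3])  # => 80.0
```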
Sleuth does not require users to explicitly associate Teams with Projects. Rather, when calculating Team-level metrics, Sleuth automatically infers which Teams are contributing to which Projects by searching for Team Members within Deploys (and, in the case of Code Change Deploys, by searching within the underlying PRs, Branches, Builds, and Issues included in those Deploys). If any Team Member is included as an author on a PR, Branch, or Build within a Deploy, as an owner of any linked Issue listed in the Deploy, or as the initiator of the Deploy itself, then Sleuth will include the entirety of that Deploy in its calculation of Team-level metrics for all Teams to which that Team Member belongs.
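In pseudocode, that inference might look like the sketch below (the object model and attribute names are illustrative assumptions, not Sleuth's actual schema):

```python
def teams_credited_for_deploy(deploy, teams):
    # Collect everyone who touched the deploy or its underlying work items
    contributors = set()
    contributors.update(pr.author for pr in deploy.pull_requests)
    contributors.update(branch.author for branch in deploy.branches)
    contributors.update(build.author for build in deploy.builds)
    contributors.update(issue.owner for issue in deploy.issues)
    contributors.add(deploy.initiator)

    # The *entire* deploy is credited to every Team containing at least
    # one contributing Team Member
    return [team for team in teams if contributors & set(team.members)]
```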
As such, Sleuth can present powerful DORA metric "intersections" that show each Team's relative contribution to the DORA metrics for the Projects they're working on. This is evident in views such as the Team Metrics dashboard's Projects contributed to panel below, which shows the DORA metrics at the specific intersection of that Team and those Projects.
Similarly, from within the Project Metrics dashboard, Sleuth presents a view into Contributing teams and their relative impacts on that Project's metrics.
When viewing metric averages across multiple Projects in Sleuth, it's important to note that Sleuth calculates cross-Project averages based on the underlying Deploys within each Project.
So, for example, if Project A has 2 deploys and Project B has 7 deploys, Sleuth will calculate the average CLT across both Projects by summing the CLTs of all 9 Deploys and then dividing that sum by 9 (the total number of Deploys across both Projects).
This produces an average CLT result that in most cases will not be equal to the result of adding up the Project-level CLTs and dividing that sum by 2 (the total number of Projects). Sleuth has been intentionally designed around a Deploy-centric point of view, and we believe this Deploy-level handling of cross-project averages provides the most accurate representation of customers' DORA metrics across Projects.
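Here's a short sketch contrasting the two approaches using the example above (the lead-time values are made up for illustration):

```python
# Hypothetical change lead times (in hours) for each deploy
project_a_clts = [10, 14]                # Project A: 2 deploys
project_b_clts = [2, 3, 4, 2, 3, 5, 2]   # Project B: 7 deploys

# Deploy-centric average (how Sleuth calculates cross-Project averages)
all_clts = project_a_clts + project_b_clts
deploy_level_avg = sum(all_clts) / len(all_clts)   # 45 / 9 = 5.0 hours

# Average-of-averages (NOT how Sleuth calculates it)
avg_a = sum(project_a_clts) / len(project_a_clts)  # 12.0 hours
avg_b = sum(project_b_clts) / len(project_b_clts)  # 3.0 hours
project_level_avg = (avg_a + avg_b) / 2            # 7.5 hours
```

Because each Deploy is weighted equally, Project B's higher deploy volume pulls the combined average toward its lead times, rather than letting Project A's two deploys count as heavily as Project B's seven.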
Some specific use cases where this applies include:
Viewing multiple Projects on the Trends dashboard
Using Labels to view cross-project metrics
Viewing Team-level metrics for Teams working on multiple Projects
Passing multiple Project slugs into the Sleuth API