Engineering leaders often focus on software development metrics that track their engineers' efficiency rather than the value those engineers deliver to customers. While developer productivity metrics can be insightful at a high level, a focus on the performance and reliability of your applications and infrastructure benefits not only your engineers but also your customers. Organizations rely on a range of software development metrics to understand both their engineers' progress and their software's quality, such as performance and user satisfaction.
Engineering KPIs (Key Performance Indicators) give you a high-level insight into how the engineering team is doing. If your engineers are churning or have low productivity, this can be an early warning sign that something isn't working and needs to change.
A focus on performance data gives customers confidence in your software products because they know services won't suffer sudden outages caused by code changes gone wrong. Reliability metrics tracked over time also reassure them that new features and patches won't degrade the experience they already depend on.
At its core, reliability means stability and availability of services. Reliability metrics can be measured in two different ways using leading and lagging indicators. Leading indicators can be things like source code test coverage. Lagging indicators can be things like bug reports or the average time between incidents.
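As a rough sketch of one lagging indicator, mean time between failures (MTBF) can be computed from incident timestamps. The incident dates below are hypothetical; in practice they would come from your incident tracker.

```python
from datetime import datetime

# Hypothetical incident timestamps pulled from an incident tracker.
incidents = [
    datetime(2023, 1, 5, 9, 30),
    datetime(2023, 2, 14, 22, 10),
    datetime(2023, 4, 2, 3, 45),
]

# MTBF: the average gap between consecutive incidents, in hours.
gaps = [
    (later - earlier).total_seconds() / 3600
    for earlier, later in zip(incidents, incidents[1:])
]
mtbf_hours = sum(gaps) / len(gaps)
print(f"MTBF: {mtbf_hours:.1f} hours")
```

A rising MTBF over successive quarters is a lagging signal that reliability work is paying off.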
Team velocity is a key performance indicator that measures the number of story points an engineering team has delivered to the end user in a given period of time. This can often be measured by a product, project, or development manager. Sprint burndown charts are often used to track team velocity over time. When following an agile software development process, a burndown chart can help facilitate conversations about team efficiency.
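A minimal sketch of the arithmetic behind velocity and a burndown projection, using made-up sprint numbers:

```python
import math

# Hypothetical completed story points for the last four sprints.
sprint_points = [21, 18, 25, 22]

# Velocity: average story points delivered per sprint.
velocity = sum(sprint_points) / len(sprint_points)

# Burndown projection: sprints needed to clear a backlog of
# 130 remaining points at the current velocity.
backlog = 130
sprints_remaining = math.ceil(backlog / velocity)
print(f"Velocity: {velocity:.1f} points/sprint, "
      f"~{sprints_remaining} sprints to clear the backlog")
```

Because story points are estimates, treat projections like this as conversation starters, not commitments.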
Code coverage is a software KPI that development teams use to measure code quality. This key performance indicator measures the percentage of code exercised by unit or integration tests. It can be measured as a snapshot in time or tracked retrospectively over an entire period, and it may be helpful to track it at the project, release, sprint, or team level. Teams should also track performance data alongside code coverage: if you ship new features but your application becomes less reliable as a result, what was gained?
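To illustrate how an aggregate coverage number is derived, here is a sketch using hypothetical per-module line counts of the kind a tool like coverage.py reports:

```python
# Hypothetical per-module counts: (lines covered, total executable lines).
modules = {
    "billing": (480, 600),
    "auth": (350, 400),
    "search": (90, 300),
}

# Overall coverage is total covered lines over total lines,
# not the average of per-module percentages.
covered = sum(c for c, _ in modules.values())
total = sum(t for _, t in modules.values())
print(f"Overall coverage: {covered / total:.1%}")

# The per-module breakdown shows where tests are thin.
for name, (c, t) in modules.items():
    print(f"  {name}: {c / t:.0%}")
```

Weighting by line count matters: averaging the per-module percentages here would overstate coverage, because the weakest module is also one of the largest.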
Release quality refers to how often releases reach a production environment without major defects such as critical errors or significant harm to the customer experience. This number reflects both performance and reliability, which go hand in hand on software projects at scale. Another helpful companion metric is the mean time between failures.
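One way to turn release quality into a number is to track the share of releases that shipped without a major defect. The release history below is hypothetical:

```python
# Hypothetical release history: (release id, had_major_defect).
releases = [
    ("v1.4.0", False),
    ("v1.4.1", True),   # hotfix rolled back after a critical error
    ("v1.5.0", False),
    ("v1.5.1", False),
    ("v1.6.0", True),   # shipped with a major customer-facing defect
]

failed = sum(1 for _, had_defect in releases if had_defect)
change_failure_rate = failed / len(releases)
release_quality = 1 - change_failure_rate
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Release quality: {release_quality:.0%}")
```

Tracking this per quarter makes it easy to see whether process changes (more review, more automated testing) actually reduce defective releases.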
You might think that metrics like net promoter score (NPS) wouldn't qualify as software metrics. However, customer value and perception are incredibly relevant for any development team trying to build a scalable product.
Not only do customer perception and NPS matter in their own right, but they can also help a company assess the quality of its software engineering team. For example, if the NPS from users is low, it may indicate performance or reliability issues with the product that engineers need to find and fix.
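The NPS calculation itself is simple: respondents scoring 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A sketch with made-up survey responses:

```python
# Hypothetical 0-10 responses to "How likely are you to
# recommend this product to a friend or colleague?"
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

# NPS is reported on a -100 to 100 scale.
nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f}")
```

Note that passives (7-8) count toward the denominator but neither group, which is why a cluster of lukewarm responses drags the score down.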
Customer effort score is a metric that provides a measure of customer satisfaction. Engineers should care about this metric because it can indicate how easy or difficult it is for customers to use the product. In addition, a product's customer effort score often correlates with other metrics such as overall satisfaction, leading to more frequent positive customer reviews.
Software engineering metrics are notoriously difficult to measure. If you were to ask any software engineer about the worst KPI to look at, lines of code shipped would definitely be close to the top, but why?
The pitfall of KPIs that aren't grounded in customer value is a bias towards micromanagement and towards a developer's output rather than their outcomes.
For example, let's say that you have an engineer working in a brand new codebase. They'll need to write a lot of boilerplate code to get the application working. This particular engineer shipped thousands of lines of code, but your users don't adopt the features they end up shipping, so very little value is added.
On the other hand, an engineer may be working with a very complex system, and by refactoring a few hundred lines of code, they halve the time it takes a customer to complete a task within the application.
If you were to look at metrics like lines of code shipped or code churn in a vacuum, you wouldn't get the whole picture, or worse, you could make a decision that actively hurts your business.
While you may be tempted to measure productivity or code metrics, it's important to ensure that any metric you measure is rooted in delivering business value.