Our thought leadership articles started with general views of the SQA and Testing marketplace. We then looked at the continuing move to “agile” approaches and the impact of these drivers on the ongoing evolution of SQA and Testing. We also looked separately at the rise of “hybrid” roles and the “full stack” QA engineer, and at where the experienced tester stands today. We then published more focused articles on TEM, TDM and Production Testing.
In this article we discuss measurement (software metrics) and process improvement, and how they are evolving in an ever more agile world.
Software metrics can be classified into three categories: product metrics, process metrics and project metrics. Product metrics describe the characteristics of the product, such as size, complexity and design features, and are captured in definitions, user stories, specifications and estimates. Project metrics describe the project’s characteristics and execution, e.g. team size and duration, and are captured in project plans and progress reports. Process metrics provide measures of development speed and efficiency and can be used to improve software development and quality.
Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process and project. They are used in two ways. The first links to product and project metrics: reporting the progress and status of software quality and testing is a key part of programme reporting and governance. The second focuses on using software quality metrics to continually assess the effectiveness of the Quality and Testing operating model and to take actions that improve the process of developing software across the full development lifecycle.
Traditionally, programme/project reporting was aligned to overall programme/project plans and governance, and covered progress reports and defect metrics, both point-in-time and trend analysis. It was also used to support impact analysis of proposed changes and as an input to “early warning” of major quality threats to project success. With the move to Agile, the “waterfall” view of progress versus plan has been replaced by measures such as sprint/epic/release burndown, story points delivered, commits per project, test case coverage, build success rate, release duration, and deployments per day.
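To make one of these agile measures concrete, a sprint burndown is simply the story points remaining at the end of each day of the sprint. A minimal sketch in Python (the sprint commitment and daily completion figures here are invented for illustration):

```python
# Sprint burndown: remaining story points at the end of each sprint day.
# The commitment and per-day figures below are illustrative only.
sprint_commitment = 40  # story points committed at sprint start
points_completed_per_day = [0, 5, 3, 8, 6, 4, 7, 5, 2]  # one entry per day

remaining = sprint_commitment
burndown = []
for done in points_completed_per_day:
    remaining -= done
    burndown.append(remaining)

# Remaining points per day; reaching 0 means the commitment was met.
print(burndown)  # -> [40, 35, 32, 24, 18, 14, 7, 2, 0]
```

Plotting this list against the “ideal” straight line from commitment to zero gives the familiar burndown chart used in sprint reviews.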
More significantly, the overall set of measures has also evolved through the move to agile to cover additional dimensions with a broader focus. These consider key business metrics around customer satisfaction and the value delivered within the much shorter cycle times we operate in today. Historically, business cases projected longer-term benefits for projects that often stretched over a couple of years, and benefits management and realising value were longer term and more challenging to measure and achieve.
This “digital transformation” has made developing the “right” product fundamental to success, and made developing the product “right” a matter of speed, quality and sustainability. With this continuing evolution, if these key business metrics are not being met, many of the other metrics become somewhat irrelevant.
While the standard software metrics (product, process and project) have evolved through the move to agile, the big growth has been in mechanisms to engage the customer and measure customer satisfaction with the product and the “experience”. Extensive clickstream analysis, predictive algorithms that offer alternatives based on usage patterns, A/B feature testing and similar techniques use the data generated in the “transactions” to understand behaviour and optimise the experience. This is complemented by seeking actual ratings from customers as the most direct and valuable feedback. Just look at the level of in-contact and post-contact “surveys” now being conducted by service providers.
Many organisations use the Net Promoter Score®, or NPS®, which measures customer experience and is used to predict business growth. This metric has transformed the business world and now provides the core measurement for customer experience management programmes globally.
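As a reminder of how the score is computed: on a 0–10 “how likely are you to recommend us?” survey, NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch (the sample scores are invented for illustration):

```python
# Net Promoter Score: percentage of promoters (scores 9-10)
# minus percentage of detractors (scores 0-6) on a 0-10 survey scale.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives (7-8) and 2 detractors in 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # -> 30
```

Note that passives (scores 7–8) count in the denominator but not in either group, so the score ranges from −100 to +100.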
If we assume that we have measures and processes in place to gauge the customer experience and to determine the next products or features to be delivered, then we need to ensure the development process delivers fit-for-purpose features in the appropriate timeline.
Good measures of a process are helpful for monitoring the performance of a specific activity. They need to help IT and business leaders optimise the performance of the system as a whole, from customer request to delivery, by identifying the areas that would most benefit from attention. That is why organisations need the ability to measure the entire system, end to end, to understand how value flows and where it is constrained, and most importantly, to correlate those metrics with desired business outcomes. This approach allows for continuous optimisation in the pursuit of delivering ever greater value to the organisation, faster. For example, the Running Tested Features (RTF) metric tells you how many software features are fully developed and passing all acceptance tests, and are thus implemented in the integrated product.
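In essence, RTF is a count taken at a point in time: a feature scores only if it is in the integrated build and all of its acceptance tests currently pass. A minimal sketch (the feature names and statuses are invented for illustration):

```python
# Running Tested Features: count of features that are both integrated
# and currently passing all of their acceptance tests.
# The feature data below is illustrative only.
features = {
    "login":    {"integrated": True,  "acceptance_tests_passing": True},
    "search":   {"integrated": True,  "acceptance_tests_passing": False},
    "checkout": {"integrated": True,  "acceptance_tests_passing": True},
    "reports":  {"integrated": False, "acceptance_tests_passing": False},
}

rtf = sum(1 for f in features.values()
          if f["integrated"] and f["acceptance_tests_passing"])
print(rtf)  # -> 2
```

Tracked over time, this number should grow steadily; a flat or falling RTF is an early signal that the team is accumulating partially done or regressing work.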
Agile teams should also look at release frequency and delivery speed. At the end of each sprint, the team should release software to production. How often is that actually happening? Are most release builds getting shipped? In the same vein, how long does it take the team to release an emergency fix to production? Is release easy for the team, or does it require heroics?
Ideally, an organisation recognises the value of measurement and is gathering the metrics it requires to measure both the development process, and the outcome in terms of value and customer satisfaction. Quality remains an important metric for agile teams and there are a number of traditional metrics that can be applied to agile development:
- How many defects are found...
  - at what stage of the cycle?
  - after release to customers?
  - by people outside of the team?
- How many defects are deferred to a future release?
- How many customer support requests are coming in?
- What is the percentage of automated test coverage?
All these metrics are vital for business/IT development teams to understand progress and quality, and are also a crucial input to process improvement (the subject of our next article). They may not be as easily understood by senior management, so it is useful to have an interpretation of the metrics that you can present so senior management can comprehend the value they return. From a quality perspective it is useful to have information about the effectiveness of your software testing process; defect detection percentage is one such agile testing metric.
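Defect detection percentage (DDP) expresses the defects the team found before release as a share of all defects eventually found, including those that escaped to customers. A minimal sketch (the defect counts are invented for illustration):

```python
# Defect Detection Percentage: the share of total defects that the
# testing process caught before release.
def defect_detection_percentage(found_in_testing, found_after_release):
    total = found_in_testing + found_after_release
    return 100 * found_in_testing / total

# Example: 90 defects caught in testing, 10 escaped to production
print(defect_detection_percentage(90, 10))  # -> 90.0
```

A falling DDP over successive releases suggests defects are increasingly escaping the testing process, which is exactly the kind of signal that should feed the process-improvement loop.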
At Vantage Resources we use our Metisure© framework to deliver our range of software Quality Assurance and Test services. The framework includes the data definitions and structures for a set of standard metrics to support software quality. These cover: defects by severity by area (function/story/sprint etc.), defect arrival and fix rates, defect position and status, test run rates and completion rates, and trend information across these key metrics.