Tracking Test Metrics

Tracking test metrics is an important and integral part of a test lead's job. It helps track testing progress and trends, and helps tell a story with data. The data is always there; the lead's responsibility lies in determining what to collect and how to present it so as to tell the testing story to the stakeholders, management, project team, and so on.

What to collect?
This question is not as complicated as it looks. It boils down to three things: progress, trend, and quality.

Progress covers what you planned for the release and where you are on a weekly basis. It is based on two things: test case creation and test execution. Compare the number of test cases planned versus the number actually created, and the number of test cases planned to be executed versus the number actually executed, with their status (pass/fail).

Some examples of metrics for gathering Progress are

  • Planned Test Cases Created versus Actual Test Cases Created
  • Planned Test Cases Executed versus Actual Test Cases Executed
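As a rough illustration, the planned-versus-actual comparison above is just a percentage. Here is a minimal sketch; the function name and all the counts are invented for the example.

```python
# Hypothetical sketch: weekly progress as a percentage of plan.
# All numbers are made up for illustration.

def progress_pct(planned, actual):
    """Return actual as a percentage of planned, rounded to one decimal."""
    return round(100.0 * actual / planned, 1) if planned else 0.0

# One week's plan vs. reality for creation and execution
created = progress_pct(planned=50, actual=40)    # test cases created
executed = progress_pct(planned=120, actual=90)  # test cases executed

print(f"Creation progress: {created}%")    # Creation progress: 80.0%
print(f"Execution progress: {executed}%")  # Execution progress: 75.0%
```

The same two numbers, tracked week over week, are what feed the trend metrics below.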

Trend is more along the lines of the direction testing is taking and what you can predict from the metrics. What does the weekly execution data tell you as a lead? Do you see more failures for a new feature or an existing feature? Do you see more test case execution (productivity) when you get a build early in the week than when you get a build mid-week? It's all about letting the data talk to you. At times you won't even see a trend until you see the same data in a different context.

Some examples of metrics for gathering Trend are

  • Weekly Test Cases Executed
  • Weekly Test Cases Created
  • Cumulative Defect Density
  • Weekly Defect Density
  • Open and Closed Defects Per Week
  • Defect Age
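Two of the trend metrics above can be sketched in a few lines: the week-over-week direction of test execution, and defect age in days. This is a hypothetical example; the weekly counts and dates are invented.

```python
# Hypothetical sketch: weekly execution trend and defect age.
# Counts and dates are invented for illustration.
from datetime import date

weekly_executed = {"wk1": 40, "wk2": 55, "wk3": 35}

# A simple trend signal: the change from each week to the next
weeks = list(weekly_executed.values())
deltas = [b - a for a, b in zip(weeks, weeks[1:])]
print(deltas)  # [15, -20] -> up in wk2, down in wk3

def defect_age(opened, closed=None):
    """Age of a defect in days; open defects age against today."""
    return ((closed or date.today()) - opened).days

print(defect_age(date(2012, 7, 1), date(2012, 7, 12)))  # 11
```

A dip like the -20 above is the kind of signal worth correlating with context, such as when the build arrived that week.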

Quality boils down to the defects that the testing team finds. Defect data can be sliced and diced in several different ways, and each slice says something about the quality of the product. Based on the data collected for every defect that testing finds, a lot can be said about the requirements, code, design, product, test cases, testing process, customer, and so on. Analysis of defects can tell the organization a lot about how well we are doing our job and also expose weaknesses that can be rectified before the product goes out the door. Finding defects is in no way a negative for any one team or department; it is simply a way to judge how we are doing.

Some examples of metrics for gathering Quality are

  • New Defects found per week
  • Defects closed per week
  • Defects found per feature
  • Defects found per build
  • Defects found by Severity
  • Defects found by Priority
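The "sliced and diced" idea above is easy to see with a small example: the same defect records grouped by feature and by severity. This is a hypothetical sketch with invented records and field names.

```python
# Hypothetical sketch: slicing one set of defect records several ways,
# as the quality metrics above suggest. Records are invented.
from collections import Counter

defects = [
    {"feature": "login",  "build": "1.2", "severity": "high"},
    {"feature": "login",  "build": "1.3", "severity": "low"},
    {"feature": "search", "build": "1.3", "severity": "high"},
]

by_feature = Counter(d["feature"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)

print(by_feature)   # Counter({'login': 2, 'search': 1})
print(by_severity)  # Counter({'high': 2, 'low': 1})
```

Adding a week or build field to each record gives the per-week and per-build slices from the list above with the same one-line grouping.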

This entry was posted in Software Testing, Test Metrics/Status.

4 Responses to Tracking Test Metrics

  1. Dave Doble says:

    How do you define defect density?

  2. shilpa says:

    The standard industry definition of defect density is the number of defects divided by the code size. The goal of this metric is to measure the amount of rework required. Since testers usually don't have access to the code to determine the number of defects found per line of code, they have to be a bit more creative in gathering this data. Some examples would be:
    1. defects found per build / test cases executed per build
    2. defects found per feature / test cases per feature
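The tester-friendly density described in the reply above is a simple ratio. A minimal sketch, with invented per-build numbers:

```python
# Hypothetical sketch: defect density per build, defined here as
# defects found in a build divided by test cases executed in that build.
# The counts are invented for illustration.

def defect_density(defects_found, test_cases_executed):
    """Defects per executed test case; 0.0 when nothing was executed."""
    return round(defects_found / test_cases_executed, 3) if test_cases_executed else 0.0

print(defect_density(6, 120))  # 0.05
```

The same function works for the per-feature variant by passing per-feature counts instead.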

  3. james bach says:

    I stopped counting test cases more than 18 years ago, somewhat less than half-way through my time as a working test manager in Silicon Valley. I would say that any time (with very few possible exceptions) I see someone reporting on test case counts, I can guarantee they don't understand what testing actually is, or how to assess its progress. Testing has little to do with the quantity of your test cases. For the same reason, I can't assess your work by counting the number of files on your hard drive. A test case is basically a file.

    Test case counts basically mean nothing. (There are obscure exceptions, yes. But the general point remains.) They are, however, a popular way to mislead management. Worldwide, lying with test cases is a pandemic. It’s part of what I call the “fake testing industry.” Admittedly, fake testing is a large industry. Still, I urge you to rethink your position on this.

    On July 12th, 1992, I realized that my test case metrics were worthless. I vowed to stop using them. I have not regretted this.

    — James Bach, Consulting Software Tester
    Author: Lessons Learned in Software Testing
    Author: Secrets of a Buccaneer-Scholar

  4. shilpa says:

    I agree with you, James. I have had long conversations with Michael Bolton regarding this topic too.
    I came into a system with set expectations at my current work. I am not saying you are correct or they are wrong; there is a balance, and to get to it I have to find my own path. I can't scratch away everything that I have been working with for the past 10 years. Like you said, a large segment of the industry follows this, and unfortunately I can only take baby steps. From test cases I have moved to user stories and acceptance criteria. With time my metrics will also change.
    Thanks for your insight; I wish to continue my conversations with you on these topics.
