Communicating Metrics

We know that whatever we measure is bound to change. How many tests are running and passing? How many days until we need a “build on the shelf”? Is the full build passing? Metrics we can’t see and easily interpret aren’t worth having. If you want to track the number of passing tests, make sure that metric is visible in the right way, to the right people. Big visible charts are the most effective way we know of displaying metrics.

Lisa’s Story

My previous team had goals around the number of unit tests. However, the number of passing unit tests wasn’t communicated to anyone; there were no big visible charts or build emails that referred to it. Interestingly, the team never got traction on automating unit tests.

At my current company, everyone in the company regularly gets a report of the number of passing tests at the unit, behind-the-GUI, and GUI levels (see Tables 5-1 and 5-2 for examples). Business people do notice when that number goes down instead of up. Over time, the team has grown a huge number of useful tests.

—Lisa

Table 5-1 Starting and Ending Metrics

Table 5-2 Daily Build Results
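One way such a report might be produced is sketched below: it tallies passing tests per level from JUnit-style XML result files, in a form suitable for pasting into a daily build email. The directory layout, file patterns, and level names are assumptions made for the example, not the tooling Lisa’s team actually used.

    # Sketch: tally passing tests per level from JUnit-style XML reports.
    import glob
    import xml.etree.ElementTree as ET

    # Hypothetical locations; point these at wherever your build writes reports.
    LEVELS = {
        "unit": "reports/unit/*.xml",
        "behind-the-GUI": "reports/api/*.xml",
        "GUI": "reports/gui/*.xml",
    }

    def tally(pattern):
        """Sum totals and passes across all <testsuite> elements matching pattern."""
        total = failed = 0
        for path in glob.glob(pattern):
            for suite in ET.parse(path).getroot().iter("testsuite"):
                total += int(suite.get("tests", 0))
                failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        return total, total - failed

    if __name__ == "__main__":
        for level, pattern in LEVELS.items():
            total, passing = tally(pattern)
            print(f"{level}: {passing}/{total} tests passing")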

Are your metrics worth the trouble? Don’t measure for the sake of producing numbers. Think about what you’ll learn from those numbers. In the next section, we consider the return on investment you can expect from metrics.

Metrics ROI

When you identify the metrics you need, make sure you can obtain them at a reasonable cost. If your continual build delivers useful numbers, it delivers good value. You’re running the build anyway, and if it gives you extra information, that’s gravy. If getting the information takes a lot of extra work, ask yourself whether it’s worth the trouble.

Lisa’s team went to a fair amount of trouble to track actual time spent per story versus estimated time. What did they learn, other than the obvious fact that estimates are just that? Not much. Some experienced teams find they can dispense with the sprint burndown chart because the task board gives them enough information to gauge their progress. The time they would have spent estimating tasks and calculating remaining hours can go to more productive activities.

This doesn’t mean we recommend that you stop tracking these measurements. New teams need to understand their velocity and burndown rate, so that they can steadily improve.

Defect rates are traditional software metrics, and they might not have much value on a team that’s aiming for zero defects. There’s not much value in knowing the rate of bugs found and fixed during development, because finding and fixing them is an integral part of development. If a tester shows a defect to the programmer who’s working on the code, a unit test is written, and the bug is fixed right away, there’s often no need to log a defect. On the other hand, if many defects reach production undetected, there can be value in tracking their number to see whether the team improves.

When it started rewriting its buggy legacy application, Lisa’s team set a goal: no more than six high-severity bugs reported in new code during the six months after that code reached production. Having a target that was straightforward and easy to track helped motivate the team to find ways to head off bugs during development, and it exceeded the objective.
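A target like this is also easy to check mechanically. The sketch below counts high-severity production bugs within a rolling six-month window from exported tracker records; the field names and sample data are invented for illustration and would need to be mapped onto whatever your DTS actually exports.

    # Sketch: check a six-month high-severity bug target from DTS exports.
    from datetime import date, timedelta

    TARGET = 6  # no more than six high-severity production bugs per six months

    def high_severity_production_bugs(bugs, window_days=182):
        # Assumed record fields: severity, found_in, reported (a date).
        cutoff = date.today() - timedelta(days=window_days)
        return [
            b for b in bugs
            if b["severity"] == "high"
            and b["found_in"] == "production"
            and b["reported"] >= cutoff
        ]

    if __name__ == "__main__":
        bugs = [  # illustrative records only
            {"severity": "high", "found_in": "production",
             "reported": date.today() - timedelta(days=30)},
            {"severity": "low", "found_in": "production",
             "reported": date.today() - timedelta(days=10)},
        ]
        found = high_severity_production_bugs(bugs)
        status = "met" if len(found) <= TARGET else "missed"
        print(f"{len(found)} high-severity production bugs in window; target {status}")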

Figure each metric’s return on investment and decide whether it’s worth tracking and maintaining. Does the effort spent collecting it justify the value it delivers? Can it be easily communicated and understood? As always, do what works for your situation. Experiment with keeping a particular metric for a few sprints, and evaluate whether it’s paying off.

One common metric related to software quality is the defect rate. In the next section, we look at reasons to track defects, or not to track them, and at what we can learn from them.

Defect Tracking

One question every new agile team asks is, “Do we still track bugs in a defect tracking system?” There’s no simple answer, but we’ll give you our opinion on the matter and offer some alternatives so that you can determine what fits your team.

Why Should We Use a Defect Tracking System (DTS)?

Many of us testers have used defect tracking as the only way to communicate the issues we find, and it’s easy to keep using tools we’re familiar with. A DTS is a convenient place to keep track of not only the defect but also its priority, severity, and status, and to see who it’s assigned to. Many agile practitioners say that we don’t need to do this anymore, that we can track defects on cards or by some other simple mechanism. We could write a test to show the failure, fix the code, and keep the test in our regression suite, as sketched below.
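Here is a minimal sketch of that flow using pytest. The function under test, parse_fare, and the bug itself are invented for illustration; the point is the shape of the practice: a test written first to reproduce the reported failure, then the fix, with the test living on in the regression suite.

    # Hypothetical example: the bug was that fares with leading whitespace,
    # such as "  $0", raised ValueError because only the "$" was stripped.

    def parse_fare(text):
        # Fixed version: strip surrounding whitespace before the "$".
        return int(text.strip().lstrip("$"))

    def test_parse_fare_handles_leading_whitespace():
        # Written to reproduce the reported failure; it failed before the
        # fix above and now runs with every regression build.
        assert parse_fare("  $0") == 0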

However, there are reasons to keep using a tool to record defects and how they were fixed. Let’s explore some of them now.

Convenience

One of the concerns about not keeping a defect tracking system is that there is no place to keep all of the details of a bug. Testers are used to recording a bug with lots of information, such as how to reproduce it, what environment it was found in, or what operating system or browser was used. Not all of this information fits on a card, so how do you capture those details? If you are relying only on cards, you also need conversation. But with conversation, details get lost, and sometimes a tester forgets exactly what was done—especially if the bug was found days before the programmer tackles the issue.

A DTS is also a convenient place to keep all supplemental documentation, such as screen prints or uploaded files.

Knowledge Base

We have heard reasons to track defects such as, “We need to be able to look at old bug reports.” We tried to think of reasons why you would ever need to look at old bug reports, and as we were working on this chapter, Janet found an example.

Janet’s Story

When I was testing the pre-seating algorithm at WestJet, I found an anomaly. I asked Sandra, another tester, if she had ever come across the issue before. Sandra vaguely recalled something about it but not exactly what the circumstances were. She quickly did a search in Bugzilla and found the issue right away. It had been closed as invalid because the business had decided that it wasn’t worth the time it would take to fix it, and the impact was low.

Being able to look it up saved me from running around trying to ask questions, or from reentering the bug and getting it closed again. Because the team members sit close to each other, our talking led to another conversation with the business analyst on the team. This conversation sparked the idea of a FAQ page, an outstanding-issues list, or something along those lines: a place where new testers could find all of the issues that had been identified but deliberately left unaddressed.

—Janet

This story shows that although the bug database can be used as a knowledge base, there might be better mechanisms for keeping business decisions and their background information. If an issue is old enough that the team has lost track of it, maybe it should be rewritten and raised again. The circumstances may have changed, and the business might decide it is now worthwhile to fix the bug.

The bugs that are handy to keep in a DTS are the intermittent ones that take a long time to track down. These bugs present themselves infrequently, and there are usually gaps during which the investigation stalls for lack of information. A DTS gives you a place to capture what has been figured out so far, along with logs, traces, and the like. That information becomes valuable when someone on the team finally has time to look at the problem or the issue becomes more critical.