11/13/2010

Importance of software testing metrics

I found some nice articles on using metrics in software testing practice. 'Testing Experience (te) - The magazine for professional testers' (http://www.testingexperience.com/), one of the most useful magazines published on software testing, has devoted its September issue to software metrics.

One of the articles in this issue, 'Go Lean on Your Software Testing Metrics' by Jayakrishnan Nair, grabbed my attention, and I found it very useful. Some points from it are worth noting here in case someone does not have access to the magazine.
All credit should go to the magazine and the original author.


A Basic Set of Software Testing Metrics
A small set of basic software testing metrics that many
organizations find useful is listed below.
• Test Coverage
• Productivity
• Defect Detection Effectiveness
• Defect Acceptance Ratio
• Estimation Accuracy

Test Coverage

The Test Coverage metric measures the extent to which testing
has covered the requirements of the application under test. If coverage
is not close to 100%, portions of the application remain
untested, and undetected defects may therefore be present
in the product. This makes it a leading indicator of the quality of the
final deliverable, and hence of customer satisfaction (which
is very likely a business goal for the company). Additional time
spent fixing defects in production can delay the product's time
to market. Poor customer satisfaction due to quality issues or
a delayed product launch can adversely impact future revenues.

Productivity

Productivity is a fundamental metric in software engineering; it
tells you how fast work gets done. In the context of testing, Productivity
can be defined separately for test design and for test execution
(the original article presents these definitions in a table, which I
have not reproduced here). It can also be defined similarly for other
testing tasks such as test data set-up.
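Without the magazine's table, a minimal sketch of how such a rate is typically computed (names and numbers are my own, not the article's):

```python
def productivity(work_items: int, effort_hours: float) -> float:
    """Work items completed per person-hour,
    e.g. test cases designed, or test cases executed."""
    return work_items / effort_hours

# e.g. 40 test cases designed in one 8-hour day
print(productivity(40, 8))  # 5.0 test cases per hour
```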

Defect Detection Effectiveness

Defect Detection Effectiveness (the percentage of the total number of defects
reported for the application that are reported during the testing stage)
is a measure of how effective the testing process is at detecting defects
and not letting them pass through to the next stage in the software lifecycle.
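The definition in parentheses above translates directly into a small calculation (the function name and figures below are illustrative):

```python
def defect_detection_effectiveness(found_in_testing: int, found_later: int) -> float:
    """Percentage of all reported defects that testing caught
    before they passed to the next lifecycle stage."""
    total = found_in_testing + found_later
    return 100.0 * found_in_testing / total

# e.g. 90 defects found during testing, 10 slipped through to later stages
print(defect_detection_effectiveness(90, 10))  # 90.0
```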


Defect Acceptance Ratio

Defect Acceptance Ratio (DAR) is the percentage of reported defects that are accepted as valid.

If the DAR value is too low, it means that testers are reporting too
many invalid defects. This has a direct impact on Productivity:
when an invalid defect is recorded, effort is wasted not only
by the testing team but also by the development team, as they
have to process the defect record anyway and prove that it is invalid.
(If very frequent, this may negatively affect the relationship
between testers and developers, again adversely impacting the
project.) A large number of invalid defects may also clog the defect
tracking system, making it hard to locate useful information
and adding to the maintenance costs of the tool.
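The ratio itself is straightforward to compute from the definition above (illustrative names and figures):

```python
def defect_acceptance_ratio(accepted: int, reported: int) -> float:
    """Percentage of reported defects accepted as valid."""
    return 100.0 * accepted / reported

# e.g. 36 of 40 reported defects confirmed as real bugs
print(defect_acceptance_ratio(36, 40))  # 90.0
```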

Estimation Accuracy


Estimation Accuracy measures how closely the actual effort spent
in testing tracks the effort estimated at the beginning of the project.
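The article's exact formula is not reproduced here; one common convention, sketched below with my own function name and figures, expresses the deviation of actual effort from the estimate as a percentage:

```python
def estimation_accuracy(estimated_hours: float, actual_hours: float) -> float:
    """How close actual testing effort came to the estimate, as a percentage.

    100 means the estimate was exact; lower values mean a larger
    over- or under-run relative to the estimate.
    """
    return 100.0 - 100.0 * abs(actual_hours - estimated_hours) / estimated_hours

# e.g. estimated 100 hours of testing effort, actually spent 110
print(estimation_accuracy(100, 110))  # 90.0
```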
