Automation Software Test Metrics (ATM)

Automation software test metrics can help implement and improve organizational test processes and track their status. Project complexity grows with increasing lines of code, defects, fixes, and so on. Growing complexity tends to reduce test coverage and ultimately affects quality as well. Such negative trends also drive up other major factors, such as the overall cost of the product and the time needed to deliver the software. To prevent or overcome such circumstances, well-defined metrics can provide insight into the status of automated testing efforts. When implemented properly, automation software test metrics have a positive impact and can reverse these negative trends.

ATM (automation software test metrics) helps in assessing whether goals such as coverage, progress, and quality are being met. To accomplish these goals, we must have metrics in place that set a standard of measurement.

What is a metric? A metric is a standard of measurement: a measurement criterion that helps measure past and present performance and/or predict future performance.
Most metrics are composed of the following categories:

  • Quality: Purposeful/meaningful measured results of test execution that represent product quality. Examples include defects logged, usability, performance, scalability, and customer satisfaction.
  • Progress: Specific parameters that identify test progress. They are compared against the success criteria and collected iteratively over time, so they can later be used to represent progress in a summarized way.
  • Coverage: Meaningful parameters, collectively called coverage, are required to measure test scope and success.

What is ATM, and what makes a good metric?

Automated test metrics are used to measure the past, present, and future performance of the automation process and the effort put into it. A metric is good only if it relates to the performance of that effort, and that can only happen if there are clearly defined goals for the automation effort.

A well-defined automated test metric has the following characteristics:

  • It is clearly measurable
  • It is purposeful/meaningful
  • Summarized/graphical data representations can be derived from easily collected data
  • It provides valid input criteria for automation improvement
  • Importantly, it is simple

Automation Test Coverage

The automation test coverage metric uses test execution results to determine whether we have actually exercised what was covered during the automation of test cases.
Together with manual test coverage, and measured against the total number of tests, this metric indicates how complete the test coverage is and how much of it is executed through automation.

ATC  %    =    Automation Coverage / Total Coverage

Here, ‘total coverage’ means requirements, units/components, or code coverage.
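
As a minimal sketch in Python, using hypothetical counts (here, 150 of 200 requirements exercised by automated tests):

def automation_test_coverage(automated_coverage, total_coverage):
    # Fraction of total coverage (requirements, units/components, or code)
    # that is exercised by automated tests, expressed as a percentage
    if total_coverage == 0:
        return 0.0
    return automated_coverage / total_coverage * 100

print(f"ATC = {automation_test_coverage(150, 200):.1f}%")  # ATC = 75.0%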

Automation Progress

Automation progress tracks the number of test cases automated out of the total number of test cases that are automatable.

AP %    =    Number of test cases automated / Number of test cases automatable

Automation progress is tracked over time. As automation advances against the defined automation milestone tasks, the metric can provide valuable data on how long it will take to automate the whole set of test cases.
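
A minimal sketch in Python, assuming hypothetical snapshot counts taken at successive milestones out of 300 automatable test cases:

def automation_progress(automated, automatable):
    # Share of the automatable test cases that have actually been automated
    if automatable == 0:
        return 0.0
    return automated / automatable * 100

# Hypothetical snapshots at three milestones
for milestone, automated in [("M1", 60), ("M2", 120), ("M3", 210)]:
    print(f"{milestone}: AP = {automation_progress(automated, 300):.1f}%")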

Automation Index
Every project of sufficient duration either already has automatable test procedures or requires automation from scratch. To fit either of these criteria, the automation index, or percent automatable, can be defined:

AI or PA %    =    Number of test cases automatable / Total Number of test cases
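
As a minimal sketch in Python, with a hypothetical inventory of 400 test cases of which 300 are judged automatable:

def percent_automatable(automatable, total):
    # Share of the whole test-case inventory that can be automated at all
    if total == 0:
        return 0.0
    return automatable / total * 100

print(f"PA = {percent_automatable(300, 400):.1f}%")  # PA = 75.0%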

Defect Density
Defect measurement is vital for both manual and automated processes. If any module/component of the application under test is found to have a high density of defects, it automatically becomes a candidate for retesting using automation. Defect density is measured as the total number of known defects divided by the size of the software entity being measured.

DD    =    No. of Known Defects / Size of the Software/Application
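
A minimal sketch in Python; KLOC (thousands of lines of code) is assumed here as the size unit, though function points or another size measure would work the same way, and the figures are hypothetical:

def defect_density(known_defects, size_kloc):
    # Known defects per thousand lines of code of the entity being measured
    return known_defects / size_kloc

print(f"DD = {defect_density(45, 30):.2f} defects/KLOC")  # DD = 1.50 defects/KLOC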

Defect Aging
Defect aging is the difference between the date a defect was detected and either the current date (if the defect is still open) or the date the defect was fixed (if it has already been fixed).

DA (in time)    =    Defect Fix Date (or Current Date)   –   Defect Detection Date
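
A minimal sketch in Python using the standard datetime module, with hypothetical dates:

from datetime import date

def defect_age_days(detected, fixed=None):
    # Age in days: up to the fix date if the defect is fixed, otherwise up to today
    end = fixed if fixed is not None else date.today()
    return (end - detected).days

print(defect_age_days(date(2024, 3, 1), date(2024, 3, 15)))  # 14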

Defect Trend Analysis
Defect trend analysis tracks the trend of defects found over time. This trend is a good indicator of the health/stability of the project. The trend is considered to be improving if the number of defects is decreasing over time, and worsening if the number of defects is increasing. In an agile methodology, where the test deliverables need to fit into iterative cycles, a decreasing defect count for a component/module across successive cycles helps in determining the closure of the feature test.

DTA    =    No. of Known Defects / No. of Test Procedures Executed
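
A minimal sketch in Python, with hypothetical per-cycle counts showing a decreasing (improving) trend:

def defect_trend(known_defects, procedures_executed):
    # Known defects per executed test procedure; compare across cycles for the trend
    if procedures_executed == 0:
        return 0.0
    return known_defects / procedures_executed

for cycle, (defects, executed) in enumerate([(30, 100), (18, 120), (7, 110)], start=1):
    print(f"Cycle {cycle}: DTA = {defect_trend(defects, executed):.2f}")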

Defect removal Efficiency
Defect removal efficiency measures, as a percentage, the effectiveness of defect removal efforts. Used in combination with automation, this metric definitely helps improve the quality of the product: the greater the percentage, the better the quality of the product.

DRE %    =    No. of Defects Found During Testing / (No. of Defects Found During Testing + No. of Defects Found After Delivery)
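
A minimal sketch in Python, with hypothetical counts (90 defects caught during testing, 10 found after delivery):

def defect_removal_efficiency(found_in_testing, found_after_delivery):
    # Share of all known defects that were caught before the product shipped
    total = found_in_testing + found_after_delivery
    if total == 0:
        return 0.0
    return found_in_testing / total * 100

print(f"DRE = {defect_removal_efficiency(90, 10):.1f}%")  # DRE = 90.0%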
