Tool Mentor: When to Use Verification Points versus Reporting Points
This tool mentor describes how to use IBM Rational Manual Tester to optimize usage of verification and reporting points.
Tool: IBM Rational Manual Tester
Main Description

Overview

IBM® Rational® Manual Tester supports four types of statements for writing a test script: steps, verification points, reporting points, and groups. The question often arises: What is the difference between a verification point (VP) and a reporting point (RP)? This tool mentor explains the difference and provides guidance on when to use each.

How are verification points and reporting points different?

Verification points are steps inside a script that give the tester a way of asking questions about the application under test. When a tester has followed a series of steps that change the appearance of the application, a VP can be used to check that the application responded correctly. In this way, if a VP fails, the tester knows immediately that one of the following is true:

  • The tester did not follow the steps correctly.
  • The application is not behaving correctly. In this case, the application could have changed since the test was written, making the test itself out of date, or the application could have a new defect in it.

For example, a VP could ask "Did the application open a new window?" after the tester clicked the File > New Window menu option, or "Is the error message displayed in a red font?" after the tester has deliberately created an error condition.

When a tester runs a script, VPs allow the tester to communicate that the application behaved in one of the following ways:

  • As expected (via the pass result)
  • Not as expected (via fail when the result is clearly incorrect, or via error when something completely unexpected happens, like an unexpected error dialog popping up)
  • Neither clearly as expected nor clearly not as expected (via the inconclusive result)

Reporting points are also steps in a script, but they are meant to allow the tester to report on the state of the test itself. Suppose a test is intended to prove that the tester can create a new order in an online ordering system. The tester may navigate the application correctly, following all the steps of the script and therefore giving all the VPs a pass result. However, when the tester goes to the order status screen, the order cannot be found. In this case, the tester could use the RP to indicate that the test failed, even though all the VPs passed.

As with verification points, the tester would use the pass result if the test obviously achieved the overall objective. The tester would use the fail result if the test obviously did not achieve its objective. The error result would indicate that the test result was both wrong and unexpected, and the inconclusive result would indicate that the tester could not determine if the test passed or failed.
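
The four result values are the same for VPs and RPs; only the question being answered differs. The following minimal Python sketch is purely illustrative — it is not Manual Tester's script format or API, and the Result and choose_result names, the observation flags, and the order in which the checks are made are all assumptions made for this sketch. It simply restates the guidance above as a decision rule.

    from enum import Enum
    from typing import Optional

    class Result(Enum):
        PASS = "pass"                  # behaved as expected / objective achieved
        FAIL = "fail"                  # clearly incorrect behavior or objective not met
        ERROR = "error"                # something completely unexpected happened
        INCONCLUSIVE = "inconclusive"  # the tester cannot tell either way

    def choose_result(as_expected: Optional[bool], unexpected_event: bool) -> Result:
        # as_expected: True if clearly correct, False if clearly incorrect,
        # None if the tester cannot decide either way.
        # unexpected_event: True if something completely unexpected happened,
        # such as an unexpected error dialog popping up.
        if unexpected_event:
            return Result.ERROR
        if as_expected is True:
            return Result.PASS
        if as_expected is False:
            return Result.FAIL
        return Result.INCONCLUSIVE

    # Example: the application clearly misbehaved and also popped up an
    # unexpected error dialog; in this sketch the error result is chosen first.
    print(choose_result(as_expected=False, unexpected_event=True))  # Result.ERROR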

Tests will typically have many VPs, and at least one but usually very few RPs.

The steps below provide an example of how you can structure a test script to use VPs and RPs in an optimal way.

Tool Steps

To create a summary report of VPs in a test script using RPs, perform the following steps:

  1. Create a series of steps to perform some task in the application being tested
  2. At the end of that series of steps, include a VP that asks whether the application (or some portion of it) now has a particular appearance
  3. If it would help the tester during the test run, include an image of the application in the VP
  4. If it helps with readability, put the steps and their VP in a group
  5. Repeat the above four steps for each task in the test
  6. At the end of the test, include one RP that asks whether the overall objective has been achieved by performing all the steps in the test (see the sketch after this list)
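
The following is a minimal structural sketch of what those steps produce, written in Python rather than Manual Tester's own script format. The Step, VerificationPoint, Group, and ReportingPoint names and their fields are hypothetical and exist only to illustrate the shape of the script: each task becomes a group of steps that ends in one VP, and a single RP at the end of the test summarizes whether the overall objective was achieved.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Step:
        instruction: str

    @dataclass
    class VerificationPoint:
        question: str                   # e.g. "Did the application open a new window?"
        image: Optional[str] = None     # optional screenshot to help the tester
        result: Optional[str] = None    # "pass", "fail", "error", or "inconclusive"

    @dataclass
    class Group:                        # one task in the application under test
        name: str
        steps: List[Step] = field(default_factory=list)
        vp: Optional[VerificationPoint] = None

    @dataclass
    class ReportingPoint:               # asks whether the test achieved its objective
        question: str
        result: Optional[str] = None

    # Steps 1-5: each task is a group whose steps end in a single VP,
    # optionally with an image to guide the tester.
    script = [
        Group(
            name="Open a new window",
            steps=[Step("Click File > New Window")],
            vp=VerificationPoint("Did the application open a new window?"),
        ),
        Group(
            name="Create a new order",
            steps=[Step("Fill in the order form"), Step("Click Submit")],
            vp=VerificationPoint("Is the order confirmation displayed?"),
        ),
    ]

    # Step 6: one RP at the end of the test asks whether the overall objective
    # was achieved; it can fail even when every VP passed.
    overall_rp = ReportingPoint("Does the new order appear on the order status screen?")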