Task: Implement Test
This task describes how to develop standalone or collaborating tests.
Purpose

The purpose of this task is to:

  • Implement one or more test work products that enable the validation of the software product through physical execution
  • Develop tests that can be executed in conjunction with other tests as part of a larger test infrastructure
Steps
Select appropriate implementation technique
Purpose:  To determine the appropriate technique to implement the test. 

Select the most appropriate technique to implement the test. For each test that you want to conduct, consider implementing at least one Test Script. In some instances, the implementation for a given test will span multiple Test Scripts. In others, a single Test Script will provide the implementation for multiple tests.

Typical methods for implementing tests include writing a textual description in the form of a script to be followed (for manual testing), and programming, capture-recording, or generating scripts in a script-based programming language (for automated testing). Each method is discussed in the following sections.

As with most approaches, you'll get more useful results if you use a mixture of the following techniques. While you don't need to use them all, you shouldn't confine yourself to a single technique either.

Sub-topics:

Manual Test Scripts

Many tests are best conducted manually, and you should avoid the trap of attempting to inappropriately automate tests. Usability tests are an area where manual testing is in many cases a better solution than an automated one. Also tests that require validation of the accuracy and quality of the physical outputs from a software system generally require manual validation. As a general heuristic, it's a good idea to begin the first tests of a particular Target Test Item with a manual implementation; this approach allows the tester to learn about the target item, adapt to unexpected behavior from it, and apply human judgment to determine the next appropriate action to be taken.

Sometimes manually conducted tests will be subsequently automated and reused as part of a regression testing strategy. Note however that it isn't necessary or desirable-or even possible-to automate every test that you could otherwise conduct manually. Automation brings certain advantages in speed and accuracy of test execution, visibility and collation of detailed test results and in efficiency of creating and maintaining complex tests, but like all useful tools, it isn't the solution to all your needs.

Automation comes with certain disadvantages: these basically amount to an absence of human judgment and reasoning during test execution. The automation solutions currently available simply don't have the cognitive abilities that a human does-and it's arguably unlikely that they ever will. During implementation of a manual test, human reasoning can be applied to the observed responses of the system to stimulus. Current automated test techniques and their supporting tools typically have limited ability to notice the implications of certain system behaviors, and have minimal ability to infer possible problems through deductive reasoning.

Programmed Test Scripts

This is arguably the method of choice for most testers who use test automation. In its purest form, this practice is performed in the same manner and using the same general principles as software programming. As such, most methods and tools used for software programming are generally applicable and useful to test automation programming.

Using either a standard software development environment (such as Microsoft Visual Studio or IBM Visual Age) or a specialized test automation development environment (such as the IDE provided with Rational Robot), the tester is free to harness the features and power of the development environment to best effect.

The negative aspects of programming automated tests are related to the negative aspects of programming itself as a general technique. For programming to be effective, some consideration should be given to appropriate design: without this, the implementation will likely fail. If the developed software will likely be modified by different people over time-the usual situation-then some consideration must be given to adopting a common style and form to be used in program development, and to ensuring its correct use. Arguably the two most important concerns relate to the misuse of this technique.

First, there is a risk that a tester will become engrossed in the features of the programming environment, and spend too much time crafting elegant and sophisticated solutions to problems that could be solved by simpler means. The result is that the tester wastes precious time on what are essentially programming tasks, to the detriment of time that could be spent actually testing and evaluating the Target Test Items. It requires both discipline and experience to avoid this pitfall.

Second, there is the risk that the program code used to implement the test will itself have bugs introduced through human error or omission. Some of these bugs will be easy to debug and correct in the natural course of implementing the automated test; others won't. Just as errors can be elusive to detect in the Target Test Item, it can be equally difficult to detect errors in test automation software. Furthermore, errors may be introduced where algorithms used in the automated test implementation are based on the same faulty algorithms used by the software implementation itself. This results in errors going undetected, hidden by the false security of automated tests that apparently execute successfully. Mitigate this risk by using different algorithms in the automated tests wherever possible.
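For example, a test can derive its expected result through a deliberately different calculation from the one the product uses. The sketch below (in Python, with a hypothetical compute_invoice_total standing in for the production code) recomputes the expected value with exact decimal arithmetic rather than reusing the production formula:

    import unittest
    from decimal import Decimal

    def compute_invoice_total(line_items, tax_rate):
        """Stand-in for the production implementation being tested."""
        total = sum(qty * price for qty, price in line_items)
        return round(total * (1 + tax_rate), 2)

    class InvoiceTotalTest(unittest.TestCase):
        def test_total_matches_independent_calculation(self):
            line_items = [(2, 9.99), (1, 25.00)]
            # Independent oracle: exact Decimal arithmetic, built up term
            # by term, rather than the float-based formula used in
            # production.
            expected = Decimal("0")
            for qty, price in line_items:
                expected += Decimal(qty) * Decimal(str(price))
            expected += expected * Decimal("0.07")
            actual = compute_invoice_total(line_items, tax_rate=0.07)
            self.assertAlmostEqual(actual, float(expected), places=2)

    if __name__ == "__main__":
        unittest.main()

If both the test and the product shared the same (possibly faulty) formula, a defect in that formula would pass unnoticed; the independent derivation gives the assertion a chance to catch it.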

Recorded or Captured Test Scripts

There are a number of test automation tools that provide the ability to record or capture human interaction with a software application and produce a basic Test Script. Most of these tools produce a Test Script implemented in some form of high-level, normally editable, programming language. The most common designs work in one of the following ways:

  • Capturing the interaction with the client UI of an application by intercepting the inputs sent from the client hardware peripheral input devices (mouse, keyboard, and so forth) to the client operating system. In some solutions, this is done by intercepting high-level messages exchanged between the operating system and the device driver that describe the interactions in a somewhat meaningful way; in other solutions it is done by capturing low-level messages, often at the level of time-based movements in mouse coordinates or key-up and key-down events.
  • Intercepting the messages sent and received across the network between the client application and one or more server applications. The successful interpretation of those messages typically relies on the use of standard, recognized messaging protocols, such as HTTP, SQL and so forth. Some tools also allow the capture of "base" communications protocols such as TCP/IP; however, it can be more complex to work with Test Scripts of this nature.

While these techniques are generally useful to include as part of your approach to automated testing, some practitioners feel these techniques have limitations. One of the main concerns is that some tools simply capture application interaction and do nothing else. Without the additional inclusion of observation points that capture and compare system state during subsequent script execution, the basic Test Script cannot be considered to be a fully-formed test. Where this is the case, the initial recording will need to be subsequently augmented with additional custom program code to implement observation points within the Test Script.
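As an illustration, the following sketch shows a bare recording augmented by hand with observation points; the ui driver and test_log objects are hypothetical stand-ins for whatever API your capture tool generates scripts against:

    def test_submit_order(ui, test_log):
        # As recorded by the capture tool: pure interaction, no verification.
        ui.click("menu.orders")
        ui.type_text("field.quantity", "3")
        ui.click("button.submit")

        # Added by hand afterwards: observation points that turn the
        # recording into a fully-formed test.
        status = ui.read_text("label.status")           # capture system state
        test_log.record("status after submit", status)  # retain for later review
        assert status == "Order accepted", f"expected confirmation, got {status!r}"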

Various authors have published books and essays on this and other concerns related to using test procedure record or capture as a test automation technique. To gain a more in-depth understanding of these issues, we recommend reviewing the work available on the Internet by the following authors: James Bach, Cem Kaner, Brian Marick and Bret Pettichord, and the relevant content in the book Lessons Learned in Software Testing [KAN01].

Generated Tests

Some of the more sophisticated test automation software enables the actual generation of various aspects of the test-either the procedural aspects or the Test Data aspects of the Test Script-based on generation algorithms. This type of automation can play a useful part in your test effort, but shouldn't be considered a sufficient approach by itself. The Rational TestFactory tool and the Rational TestManager datapool generation feature are example implementations of this type of technology.

Set up test environment preconditions
Purpose:  To bring the environment to the correct starting state. 

Set up the test environment, including all hardware, software, tools, and data. Ensure all components are functioning properly. Typically this will involve some form of basic environment reset (e.g., resetting the Windows registry and other configuration files), restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some tasks can be performed automatically, some aspects typically require human attention.
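As a rough illustration of the automatable portion, the following Python sketch restores a configuration file and a database to a known baseline. The paths and the use of pg_restore are assumptions for the example, and tasks such as loading paper into printers remain manual:

    import shutil
    import subprocess
    from pathlib import Path

    BASELINE_DB = Path("baselines/orders_known_state.dump")  # assumed location
    CONFIG_BASELINE = Path("baselines/app_config.ini")
    ACTIVE_CONFIG = Path("app/config.ini")

    def reset_test_environment():
        # Restore configuration files to their known starting state.
        shutil.copyfile(CONFIG_BASELINE, ACTIVE_CONFIG)
        # Restore the database from a known-good snapshot (tool-specific;
        # pg_restore is shown purely as an example).
        subprocess.run(
            ["pg_restore", "--clean", "--dbname=orders_test", str(BASELINE_DB)],
            check=True,
        )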

Sub-topics:

(Optional) Manual walk-through of the test

This step is especially applicable to automated Test Scripts: it can be beneficial to walk through the test manually first to confirm that the expected prerequisites are present. During the walk-through, you should verify the integrity of the environment, the software and the test design. The walk-through is most relevant where you are using an interactive recording technique, and least relevant where you are programming the Test Script. The objective is to verify that all the elements required to implement the test successfully are present.

Where the software is known to be sufficiently stable or mature, you may elect to skip this step if you deem the risk of problems in the areas the manual walk-through addresses to be relatively low.

Identify and confirm appropriateness of Test Oracles

Confirm that the test oracles you plan to use are appropriate. Where they have not already been identified, now is the time for you to do so.

You should try to confirm through alternative means that the chosen Test Oracle(s) will provide accurate and reliable results. For example, if you plan to validate test results using a field displayed via the application's UI that indicates a database update has occurred, consider independently querying the back-end database to verify the state of the corresponding records in the database. Alternatively, you might ignore the results presented in an update confirmation dialog, and instead confirm the update by querying for the record through an alternative front-end function or operation.
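A minimal sketch of this kind of cross-check, assuming a hypothetical ui driver and a SQLite database standing in for the real back end:

    import sqlite3  # stand-in for whatever back-end database is in use

    def verify_customer_update(ui, customer_id, expected_name):
        # Oracle 1: the confirmation shown in the application's UI.
        assert ui.read_text("label.update_status") == "Update successful"

        # Oracle 2 (independent): query the database directly and confirm
        # the record really changed.
        with sqlite3.connect("app_under_test.db") as conn:
            row = conn.execute(
                "SELECT name FROM customers WHERE id = ?", (customer_id,)
            ).fetchone()
        assert row is not None and row[0] == expected_name, (
            f"database record does not reflect the update: {row!r}")

Agreement between the two oracles raises confidence in both; disagreement tells you either the application or the UI-based oracle is misleading you.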

Reset test environment and tools

Next you should restore the environment, including the supporting tools, to its original state. As mentioned in previous steps, this will typically involve some form of basic operating environment reset, restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some reset tasks can be performed automatically, some aspects typically require human attention.

Set the implementation options of the test-support tools, which will vary depending on the sophistication of the tool. Where possible, consider storing the option settings for each tool so that they can be reloaded easily based on one or more predetermined profiles. In the case of manual testing, this will include tasks such as creating a new entry in a support system for logging the test results, or signing in to an issue and change request logging system.

In the case of automated test implementation tools, there may be many different settings to be considered. Failing to set these options appropriately may reduce the usefulness and value of the resulting test assets.
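One simple way to make tool option settings reloadable, as suggested above, is to store them as named profiles. The sketch below uses JSON files; the option names shown are hypothetical:

    import json
    from pathlib import Path

    PROFILE_DIR = Path("tool_profiles")

    def save_profile(name, options):
        PROFILE_DIR.mkdir(exist_ok=True)
        (PROFILE_DIR / f"{name}.json").write_text(json.dumps(options, indent=2))

    def load_profile(name):
        return json.loads((PROFILE_DIR / f"{name}.json").read_text())

    # Example: one profile for quick functional runs, another for detailed
    # regression runs with heavier logging.
    save_profile("functional", {"screenshot_on_failure": True, "log_level": "INFO"})
    save_profile("regression", {"screenshot_on_failure": True, "log_level": "DEBUG",
                                "capture_timings": True})
    options = load_profile("regression")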

Implement the test
Purpose:  To implement one or more reusable test implementation assets. 

Using the Test-Ideas List, or one or more selected Test Case artifacts, begin to implement the test. Start by giving the test a uniquely identifiable name (if it does not already have one) and prepare the IDE, capture tool, spreadsheet or document to begin recording the specific steps of the test. Work through the following subsections as many times as are required to implement the test.

Note that for some specific tests or types of tests, there may be little value in documenting the explicit steps required to conduct the test. In certain styles of exploratory testing, repetition of the test is not an expected deliverable. For very simple tests, a brief description of the purpose of the test will in many cases be sufficient to allow it to be reproduced.

Sub-topics:

Implement navigation actions

Program, record or generate the required navigation actions. Start by selecting your navigation method of choice. For most classes of system these days, a mouse or other pointing device is the preferred and primary medium for navigation. For example, the pointing and scribing device used with a Personal Digital Assistant (PDA) is conceptually equivalent to a mouse.

The secondary navigation means is generally that of keyboard interaction. In most cases, navigation will be made up of a combination of mouse-driven and keyboard-driven actions.

In some cases, you will need to consider voice-activated, light, visual and other forms of recognition. These can be more troublesome to automate tests against, and may require the addition of special test-interface extensions to the application to allow audio and visual elements to be loaded and processed from file rather than captured dynamically.

In some situations, you may want to, or need to, perform the same test using multiple navigation methods. There are different approaches you can take to achieve this, for example:

  • Automate all the tests using one method, and manually perform all or some subset of the tests using the others
  • Separate the navigation aspects of the tests from the Test Data that characterize the specific test, building a logical navigation interface that allows either method to be selected to drive the test (a sketch of this approach follows the list)
  • Simply mix and match navigation methods
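The second approach, a logical navigation interface, might look something like the following sketch; the ui driver methods and key sequences are hypothetical:

    from abc import ABC, abstractmethod

    class Navigator(ABC):
        @abstractmethod
        def open_menu(self, name): ...
        @abstractmethod
        def choose_item(self, name): ...

    class MouseNavigator(Navigator):
        def __init__(self, ui): self.ui = ui
        def open_menu(self, name): self.ui.click(f"menu.{name}")
        def choose_item(self, name): self.ui.click(f"item.{name}")

    class KeyboardNavigator(Navigator):
        def __init__(self, ui): self.ui = ui
        def open_menu(self, name): self.ui.send_keys(f"%{name[0]}")  # Alt+mnemonic
        def choose_item(self, name): self.ui.send_keys("{DOWN}{ENTER}")

    def test_open_report(nav):
        # The test itself is navigation-method agnostic.
        nav.open_menu("reports")
        nav.choose_item("monthly_summary")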

Implement observation points

At each point in the Test Script where an observation should be taken, use the appropriate Test Oracle to capture the desired information. In many cases, the information gained from the observation point will need to be recorded and retained to be referenced during subsequent control points.

Where this is an automated test, decide how the observed information should be reported from the Test Script. In most cases it's appropriate simply to record the observation in a central Test Log, relative to its delta-time from the start of the Test Script; in other cases, specific observations might be output separately to a spreadsheet or data file for more sophisticated uses.
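A minimal sketch of a central Test Log that stamps each observation with its delta-time from the start of the Test Script:

    import time

    class TestLog:
        def __init__(self):
            self.start = time.monotonic()
            self.entries = []

        def observe(self, label, value):
            # Record the observation relative to script start and retain it
            # for use by later control points.
            delta = time.monotonic() - self.start
            self.entries.append((delta, label, value))
            print(f"[{delta:8.3f}s] {label}: {value!r}")

    # Usage within a script:
    log = TestLog()
    log.observe("status field", "Order accepted")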

Implement control points

At each point in the Test Script where a control decision should be taken, obtain and assess the appropriate information to determine the correct branch for the flow of control to follow. The data retrieved from prior observation points are usually input to control points.

Where a control point occurs and a decision is made about the next action in the flow of control, we recommend that you record in the Test Log both the input values to the control point and the resulting flow that is selected.
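Building on the TestLog sketch from the previous step, a control point might record its inputs and the selected branch like this (the branch bodies are placeholders):

    def control_point_order_status(status, log):
        # The decision input typically comes from a prior observation point.
        log.observe("control input: order status", status)
        if status == "Order accepted":
            log.observe("control branch selected", "verify confirmation details")
            # ... continue down the confirmation-verification path ...
        else:
            log.observe("control branch selected", "capture diagnostics and abort")
            # ... gather diagnostics, then end the script gracefully ...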

Resolve errors in the test implementation

During test implementation, you'll likely introduce errors in the test implementation itself. Those errors may even be the result of things you've omitted from the test implementation or may be related to things you've failed to consider in the test environment. These errors will need to be resolved before the test can be considered completely implemented. Identify each error you encounter and work through addressing them.

In the case of test automation that uses a programming language, this might include compilation errors due to undeclared variables and functions, or invalid use of those functions. Work your way through the error messages displayed by the compiler or any other sources of error messages until the Test Script is free of syntactical and other basic implementation errors.

Note that during subsequent execution of the test, other errors in the test implementation might be found. Initially these may appear to be failures in the Target Test Item; be diligent when analyzing test failures to confirm that the failures are actually in the Target Test Item, and not in some aspect of the test implementation.

Establish external data sets
Purpose:  To create and maintain data, stored externally to the test script, that are used by the test during execution. 

In many cases it's more appropriate to maintain your Test Data external to the Test Script. This provides flexibility, simplicity and security in Test Script and Test Data maintenance. External data sets provide value to testing in the following ways (a short sketch follows the list):

  • Test Data is external to the Test Script, eliminating hard-coded references in the Test Script
  • External Test Data can be modified easily, usually with minimal Test Script impact
  • Additional Test Cases can easily be supported by the Test Data with little or no Test Script modifications
  • External Test Data can be shared with many Test Scripts
  • Test Scripts can be developed to use external Test Data to control the conditional branching logic within the Test Script.
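As a minimal data-driven sketch, the Test Script below reads its Test Data from an external CSV file, so new Test Cases can be added without modifying the script; the file name, column layout, and login function are hypothetical:

    import csv

    def login(username, password):
        """Stand-in for driving the application's login function."""
        return username == "admin" and password == "secret"

    def test_login_from_datapool():
        # Each row in the external file is one Test Case.
        with open("login_cases.csv", newline="") as f:
            for row in csv.DictReader(f):  # columns: username,password,expected
                expected = row["expected"] == "pass"
                actual = login(row["username"], row["password"])
                assert actual == expected, f"case failed: {row}"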
Verify the test implementation
Purpose:  To verify the correct working of the Test Script by executing it. 

Especially in the case of test automation, you will probably need to spend some time stabilizing the workings of the test when it is being executed. When you have completed the basic implementation of the Test Script, it should be tested to ensure it implements the individual tests appropriately and that they execute properly.

Recover test environment to known state

Again, you should restore the environment to its original state, cleaning up after your test implementation work. As mentioned in previous steps, this will typically involve some form of basic operating environment reset, restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some tasks can be performed automatically, some aspects typically require human attention.

Set up tools and initiate test execution

Especially in the case of test automation, adjust the settings within the supporting tools as required and initiate test execution. The objective is to verify the correct working of the Test Script by executing it.

It's a good idea to perform this step using the same Build version of the software used to implement the Test Scripts. This eliminates the possibility of problems due to introduced errors in subsequent builds.

Resolve execution errors

It's pretty common that some of the things done and approaches used during implementation will need a degree of adjustment to enable the test to run unattended, especially in regard to executing the test under multiple Test Environment Configurations.

In the case of test automation, be prepared to spend some time checking that the tests "function within tolerances" and adjusting them until they work reliably before you declare the test as implemented. While you might delay this step until later in the lifecycle (e.g., during Test Suite development), we recommend that you don't: otherwise you could end up with a significant backlog of failures that need to be addressed.

Restore test environment to known state
Purpose:  To leave the environment either the way you found it, or in the required state to implement the next test. 

While this step might seem trivial, it's an important habit to form in order to work effectively with the other testers on the team, especially where the implementation environment is shared. It's also important to establish a routine that makes thinking about the system state second nature.

While in a primarily manual testing effort, it's often simple to identify and fix environment restore problems, remember that test automation has much less ability to tolerate unanticipated problems with environment state.

Maintain traceability relationships
Purpose:  To enable impact analysis and assessment reporting to be performed on the traced items. 

Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.
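In practice these relationships usually live in a test-management or requirements tool, but conceptually they amount to a mapping that supports impact queries, as in this small sketch with hypothetical IDs:

    # Test case -> requirements covered (hypothetical identifiers).
    traceability = {
        "TC-101": ["REQ-12", "REQ-14"],
        "TC-102": ["REQ-14"],
    }

    def tests_impacted_by(requirement_id):
        """Which tests need re-examination when a requirement changes?"""
        return [tc for tc, reqs in traceability.items() if requirement_id in reqs]

    print(tests_impacted_by("REQ-14"))  # -> ['TC-101', 'TC-102']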

Evaluate and verify your results
Purpose:  To verify that the task has been completed appropriately and that the resulting work products are acceptable. 

Now that you have completed the work, it is a good practice to verify that the work was of sufficient value. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

Have the people who will use your work as input in performing their downstream tasks take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented or considered them sufficiently and accurately. It may be useful to have the author of the input work product review your work on this basis.

Try to remember that RUP is an iterative delivery process and that in many cases work products evolve over time. As such, it is not usually necessary-and is in many cases counterproductive-to fully form a work product that will only be partially used or will not be used at all in immediately subsequent downstream work. This is because there is a high probability that the situation surrounding the work product will change-and the assumptions made when the work product was created be proven incorrect-before the work product is used, resulting in rework and therefore wasted effort.

Also avoid the trap of spending too many cycles on presentation to the detriment of the value of the content itself. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative or junior resource to perform work on a work product to improve its presentation.


