Task: Define Test Details
This task describes how to detail the test ideas within a specific context driven by the target test items.
Purpose

The purpose of this task is to:

  • Define the individual conditions necessary to realize a test idea in a specific context
  • Identify potential points of observation and control for the related test item(s)
  • Identify potential oracles to facilitate observation points
  • Provide consumable resources to support the test
Steps
Examine the Target Test Item and related Test-Ideas List
Purpose:  To gain a more detailed understanding of the Target Test Item based on the possible Test Ideas.

Using the Test-Ideas List as context, examine the available information about the Target Test Item. The Use Case and related work products (e.g. Use-Case Realization, Use-Case Storyboard and Use-Case Scenarios) are usually good sources to begin with, in addition to any Supplementary Specifications, Business Rules and design work products.

Where limited information is available to you, you may need to discuss the Target Test Item with the development staff directly.

Select a subset of the Test Ideas to detail
Purpose:  To determine a manageable subset of tests to define that are of most benefit in the current context. 

Review the Test-Ideas List and pick a number of the test ideas that you will design detailed tests for. In most cases you will pick a subset of the test ideas, based on time constraints, the relevance of the test ideas to the current test cycle, the completeness of the Target Test Item, and so forth. Depending on the specific context of your situation, the actual number of test ideas you take forward into design in the current test cycle will differ on a case-by-case basis.

We recommend that you avoid designing for all test ideas the first time you design from a given Test-Ideas List. Instead, take an incremental and iterative approach to working with the Test-Ideas List, focusing your efforts on the few ideas that you think are most likely to produce useful evaluation information for the given test cycle. This helps to mitigate the risk of devoting too much time to a single Target Test Item to the neglect of other items, and minimizes the risk of expending effort on designs for test ideas that may later prove of little interest.

For each test idea, design the Test
Purpose:  To define the key characteristics of each test that is to be derived from the Test-Ideas List. 

Using the information you've gathered so far, design the test by identifying and defining the key characteristics that will be necessary to realize the test. Note that the resulting test design may be captured in different ways:
  • Traditionally, test design was captured as an Artifact: Test Case.
  • The Artifact: Workload Analysis Model is conceptually a specialized and more complex form of Test Case that relates specifically to system performance testing.
  • Depending on the complexity of the test and the project culture, it may be appropriate to realize the test directly as an Artifact: Test Script, an approach you should consider if it is acceptable for you not to create Test Case artifacts. If you take this approach, be sure to comment your Test Scripts liberally with information explaining why the test is useful; these comments act as an informal, in-line Test Case (a minimal sketch of this style follows the list).
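For illustration only, the Python sketch below shows what such a commented Test Script might look like. The Account class, the values and the test idea are hypothetical stand-ins invented for this example, not part of RUP or any project artifact.

    from dataclasses import dataclass


    @dataclass
    class Account:
        """Minimal stand-in for the system under test (hypothetical)."""
        balance: float

        def withdraw(self, amount: float) -> bool:
            """Refuse withdrawals that exceed the current balance."""
            if amount > self.balance:
                return False
            self.balance -= amount
            return True


    def test_withdrawal_exceeding_balance_is_rejected():
        # Test idea:        boundary condition on the withdrawal amount
        # Precondition:     an account with a known balance of 100.00
        # Input:            a withdrawal request for 100.01 (just over the balance)
        # Expected output:  the withdrawal is refused and the balance is unchanged
        account = Account(balance=100.00)
        accepted = account.withdraw(100.01)
        assert accepted is False
        assert account.balance == 100.00

The comments at the top of the test function carry the information a formal Test Case would otherwise record, so the script remains understandable on its own.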

Using the information you have gathered, consider each of the following aspects of the test.

Identify input, output and execution conditions

Considering the test from a "black-box" perspective, identify the key externally visible characteristics that define the test. Identify what inputs will be required to stimulate the test, and what resulting outputs are to be expected. Also enumerate the key execution condition(s); the "how" of each execution condition does not have to be explained or understood at this step.

Note that Inputs and Expected Outputs will, depending on the specific test, range from simple data type values (e.g. "A", "1") to complex multidimensional data (e.g. a sound clip, an object). It is better to define the qualifiers behind a particular Input or Expected Output, rather than just giving specific values. This provides the person subsequently implementing or executing the test with the required understanding of the reasoning behind the Test Data, allowing them to choose substitute values to vary the test in any given execution.
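As a sketch of recording qualifiers rather than fixed values, the hypothetical Python structure below captures the inputs, expected outputs and execution conditions for one test idea. The field names and the login-lockout example are assumptions made for illustration.

    from dataclasses import dataclass, field


    @dataclass
    class TestDesign:
        """Hypothetical black-box description of a single test."""
        test_idea: str
        inputs: dict                 # input name -> qualifier describing valid choices
        expected_outputs: dict       # output name -> qualifier describing the expectation
        execution_conditions: list = field(default_factory=list)


    lockout_test = TestDesign(
        test_idea="account locks after repeated failed logins",
        inputs={
            "user id": "any existing, unlocked account",
            "password": "any value that is NOT the account's password",
            "attempts": "one more than the configured lockout threshold",
        },
        expected_outputs={
            "login result": "rejected on every attempt",
            "account state": "locked after the final attempt",
        },
        execution_conditions=[
            "authentication service reachable",
            "lockout threshold configured to a known value",
        ],
    )

Because each entry describes the reasoning behind the data rather than a literal value, the implementer can choose different concrete values on each execution without losing the intent of the test.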

Identify candidate points of observation

A point of observation is a point during the execution of a test at which you wish to observe some aspect of the state of the test environment. Given what you know of the execution condition(s) and the inputs and expected outputs, identify the specific points that should be observed during test execution, and identify what data should be observed at each point.
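To make this concrete, the sketch below (reusing the hypothetical Account class from the earlier sketch) records the state of the test environment at two named points of observation during a transfer, rather than only checking the final result.

    def test_transfer_with_two_points_of_observation():
        source = Account(balance=100.00)
        target = Account(balance=0.00)
        observations = []                  # (point, source balance, target balance)

        # Point of observation 1: immediately after the debit leaves the source.
        assert source.withdraw(40.00) is True
        observations.append(("after debit", source.balance, target.balance))

        # Point of observation 2: immediately after the credit reaches the target.
        target.balance += 40.00            # stand-in for the credit step
        observations.append(("after credit", source.balance, target.balance))

        assert observations == [
            ("after debit", 60.00, 0.00),
            ("after credit", 60.00, 40.00),
        ]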

Identify candidate points of control

A point of control is a point during the execution of a test at which you wish to make a decision from multiple choices regarding the test's flow of control. Investigate the Test Scenarios that are available, and for each consider the points at which control will vary through different executions of the test. Collate all of the different points of control and reduce the list to those needed for the current test cycle.
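A minimal sketch of a single point of control, again using the hypothetical Account class: the test's flow branches on whether the account starts funded or empty, and each execution follows one of the two paths.

    def run_withdrawal_test(prefunded: bool) -> None:
        # Point of control: begin the test with a funded or an empty account.
        account = Account(balance=50.00 if prefunded else 0.00)

        if prefunded:
            # Control path A: a withdrawal within the balance must succeed.
            assert account.withdraw(50.00) is True
            assert account.balance == 0.00
        else:
            # Control path B: any withdrawal from an empty account must be refused.
            assert account.withdraw(0.01) is False
            assert account.balance == 0.00


    # Exercise both choices available at this point of control.
    for choice in (True, False):
        run_withdrawal_test(prefunded=choice)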

Identify appropriate test oracles

A test oracle combines both the expected output values to be tested for, and the means by which those values can be divined: it's both the response given and the medium through which it is given. For example, to verify the accurate representation of fonts used in a word processing package, print preview might be used as the medium by which the font presentation can be verified. The test oracle identifies aspects of both form and function that are necessary to verify the actual results of the test against the expected results.
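A minimal sketch of an oracle in this sense: render_preview below is a hypothetical stand-in for the medium (such as print preview) through which the actual result is read, and the oracle pairs that medium with the expected values to verify against.

    def render_preview(document: dict) -> str:
        """Hypothetical stand-in for the medium (e.g. print preview)."""
        return f"{document['text']} [{document['font']} {document['size']}pt]"


    def font_oracle(document: dict, expected_font: str, expected_size: int) -> bool:
        """The oracle: expected values plus the means of observing them."""
        return f"[{expected_font} {expected_size}pt]" in render_preview(document)


    assert font_oracle({"text": "Hello", "font": "Garamond", "size": 12},
                       expected_font="Garamond", expected_size=12)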

Define required data sources, values and ranges
Purpose:  To define the required Test Data values, including appropriate sources for that data. 

As mentioned previously, Test Data comes in many shapes and forms.

Where complex data interdependencies are likely, try to make use of Domain Experts to specify appropriate Test Data conditions. Some test productivity tools provide features or utilities that enable simplified generation of Test Data sets.
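As one small illustration of deriving Test Data from a defined range rather than listing values by hand, the sketch below applies classic boundary-value selection to a hypothetical order-quantity rule; the rule and its limits are assumptions for illustration.

    def boundary_values(low: int, high: int) -> list[int]:
        """Boundary-value selection for an inclusive integer range."""
        return [low - 1, low, low + 1, high - 1, high, high + 1]


    # Hypothetical rule: order quantity must be between 1 and 999 inclusive.
    quantity_test_data = boundary_values(1, 999)
    print(quantity_test_data)    # [0, 1, 2, 998, 999, 1000]

Keeping the generation rule visible, rather than only the resulting values, preserves the reasoning behind each value for whoever later maintains the test.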

Source sufficient consumable Test Data
Purpose:  To source and record sufficient valid Test Data to support the test. 

The accurate generation or collation of appropriate Test Data is one of the most arduous and time-consuming tasks in defining a test. This is especially true where the system is of a class that is data-intensive.

We recommend recording Test Data in Microsoft® Excel® or another product with a tabular data management interface, such as Microsoft® Access®.
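Test Data kept in such a tabular form can be exported (for example as CSV) and read directly by the Test Scripts. The sketch below uses only the Python standard library; the column names and values are assumptions made for illustration.

    import csv
    import io

    # Hypothetical Test Data as exported from the spreadsheet.
    exported = io.StringIO(
        "quantity,expected_result\n"
        "0,rejected\n"
        "1,accepted\n"
        "999,accepted\n"
        "1000,rejected\n"
    )

    for row in csv.DictReader(exported):
        quantity = int(row["quantity"])
        expected = row["expected_result"]
        # ... feed 'quantity' into the test under design and compare to 'expected'
        print(quantity, expected)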

Maintain traceability relationships
Purpose:  To enable impact analysis and assessment reporting to be performed on the traced items. 

Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.
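One lightweight way to keep such relationships available for impact analysis is a simple mapping from each test to the items it traces to; the identifiers below are hypothetical and any real project would follow the conventions set out in its Test Plan.

    # Hypothetical traceability records: test id -> traced items.
    traceability = {
        "TC-014": ["UC-03 Withdraw Cash", "TI-112 amount exceeds balance"],
        "TC-015": ["UC-03 Withdraw Cash", "TI-118 repeated failed PIN entry"],
        "TC-021": ["UC-07 Transfer Funds", "TI-112 amount exceeds balance"],
    }


    def affected_tests(changed_item: str) -> list[str]:
        """Impact analysis: which tests trace to the changed item?"""
        return [test for test, items in traceability.items() if changed_item in items]


    print(affected_tests("UC-03 Withdraw Cash"))    # ['TC-014', 'TC-015']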

Evaluate and verify your results
Purpose:  To verify that the task has been completed appropriately and that the resulting work products are acceptable. 

Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input work product review your work on this basis.

Try to remember that RUP is an iterative process and that in many cases work products evolve over time. As such, it is not usually necessary, and is often counterproductive, to fully form a work product that will only be partially used or will not be used at all in immediately subsequent work. This is because there is a high probability that the situation surrounding the work product will change, and the assumptions made when the work product was created will be proven incorrect, before the work product is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.


