Guideline: Workload Analysis Model
The Workload Analysis Model identifies the variables that affect a system's performance and how to measure their effect. This guideline explains how to develop one.
Main Description

Software quality is assessed along different dimensions, including reliability, function, and performance (see Concept: Quality Dimensions). The Workload Analysis Model (see Artifact: Workload Analysis Model) is created to identify and define the variables that affect a system's performance and the measures required to assess that performance. The workload profiles that make up the model represent candidates for conditions to be simulated against the Target Test Items under one or more Test Environment Configurations. The workload analysis model is used by the following roles:

  • the test analyst (see Role: Test Analyst) uses the workload analysis model to identify test ideas and define test cases for different tests
  • the test designer (see Role: Test Designer) uses the workload analysis model to define an appropriate test approach and identify testability needs for the different tests
  • the tester (see Role: Tester) uses the workload analysis model to better understand the goals of the test, so that it can be implemented and executed, and its execution analyzed, properly
  • the user representative (see Role: Stakeholder) uses the workload analysis model to assess the appropriateness of the workload, and of the tests required to effectively assess the system's behavior against that workload

The information included in the workload analysis model focuses on characteristics and attributes in the following primary areas:

  • Use-Case Scenarios (or Instances, see Artifact: Use Case) to be executed and evaluated during the tests
  • Actors (see Artifact: Actor) to be simulated / emulated during the tests
  • Workload profile - representing the number and type of simultaneous actor instances, use-case scenarios executed by those actor instances, and on-line responses or throughput associated with each use-case scenario.
  • Test Environment Configuration (actual, simulated or emulated) to be used in executing and evaluating the tests (see Artifact: Test Environment Configuration. Also see Artifact: Software Architecture Document, Deployment view, which should form the basis for the Test Environment Configuration)

Consider tests that measure and evaluate the characteristics and behaviors of the target-of-test when it functions under different workloads. Successfully designing, implementing, and executing these tests requires identifying both realistic and exceptional data for these workload profiles.
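
To make these areas concrete, the information can be captured in a simple structured form. The following Python sketch is one illustration; the names (WorkloadAnalysisModel, ScenarioLoad) and the example values are assumptions for this guideline, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class ScenarioLoad:
        """One use-case scenario and the load it contributes to a profile."""
        scenario: str            # use-case scenario name
        actor_profile: str       # actor profile that executes it
        instances: int           # simultaneous actor instances
        percent_of_load: float   # share of total system use
        response_goal_s: float   # target on-line response time, in seconds

    @dataclass
    class WorkloadAnalysisModel:
        """Container for the primary areas listed above."""
        test_environment: str                               # Test Environment Configuration
        scenario_loads: list = field(default_factory=list)  # list of ScenarioLoad

    # Example: a two-scenario workload for an ATM target-of-test.
    model = WorkloadAnalysisModel(
        test_environment="ATM staging, 2 servers",
        scenario_loads=[
            ScenarioLoad("Cash Withdraw", "Experienced ATM user", 40, 0.70, 3.0),
            ScenarioLoad("Balance Inquiry", "Inexperienced ATM user", 10, 0.30, 3.0),
        ],
    )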

Use Cases and Use Case Attributes

Two categories of use cases are considered when selecting scenarios for this type of testing:

  • critical use cases contain the key use-case scenarios to be measured and evaluated in the tests
  • significant use cases contain use-case scenarios that may impact the behavior of the critical use-case scenarios

Critical Use Cases

Not all use-case scenarios implemented in the target-of-test are needed for these tests. Critical use cases contain the use-case scenarios that are the focus of the test - that is, their behaviors will be measured and evaluated.

To identify the critical use cases, identify those use-case scenarios that meet one or more of the following criteria:

  • require measurement and assessment based on the workload profile
  • are executed frequently by one or more end-users (actor instances)
  • represent a high percentage of system use
  • consume significant system resources

List the critical use-case scenarios for inclusion in the test. As these are identified, review the use case's flow of events and begin to identify the specific sequence of events between the actor (type) and the system when the use-case scenario is executed.

Additionally, identify (or verify) the following information:

  • Preconditions for the use cases, such as the state of the data (what data should / should not exist) and the state of the target-of-test
  • Data that may be constant (the same) or must differ from one use-case scenario to the next
  • Relationship between the use case and other use cases, such as the sequence in which the use cases must be performed.
  • The frequency of execution of the use-case scenario, including characteristics such as the number of simultaneous instances of the use case and the percent of the total load each scenario places on the system.
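
A lightweight way to record these per-scenario attributes is sketched below; the class and field names are illustrative assumptions, not a required format.

    from dataclasses import dataclass, field

    @dataclass
    class CriticalScenario:
        """Attributes gathered for one critical use-case scenario."""
        name: str
        preconditions: list = field(default_factory=list)  # required data / system state
        constant_data: dict = field(default_factory=dict)  # data held constant across runs
        varying_data: list = field(default_factory=list)   # data that must differ per run
        depends_on: list = field(default_factory=list)     # use cases that must execute first
        simultaneous_instances: int = 1
        percent_of_total_load: float = 0.0

    # Hypothetical values for the ATM example used later in this guideline.
    withdraw = CriticalScenario(
        name="Cash Withdraw",
        preconditions=["account exists", "balance covers the withdrawal amount"],
        constant_data={"currency": "USD"},
        varying_data=["account number", "withdrawal amount"],
        depends_on=["Validate PIN"],
        simultaneous_instances=40,
        percent_of_total_load=0.70,
    )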

Significant Use Cases

Unlike critical use-case scenarios, which are the primary focus of the test, significant use-case scenarios are those that may impact the performance behaviors of critical use-case scenarios. Significant use-case scenarios include those that meet one or more of the following criteria:

  • they must be executed before or after executing a critical use case (a dependent precondition or postcondition)
  • they are executed frequently by one or more actor instances
  • they represent a high percentage of system use
  • they require significant system resources
  • they will be executed routinely on the deployed system while critical use-case scenarios are executed, such as e-mail or background printing

As the significant use-case scenarios are identified and listed, review the use-case flow of events and gather the additional information described above for the critical use-case scenarios.

Actors and Actor Attributes

Successful performance testing requires identifying not just the actors that execute the critical and significant use-case scenarios, but also the actor behavior to simulate or emulate. That is, one instance of an actor may interact with the target-of-test differently (taking longer to respond to prompts, entering different data values, and so on) while executing the same use-case scenario as another instance of that actor. Consider the simple use cases below:

(Diagram: Actors and use cases in an ATM machine.)

The first instance of the "Customer" actor executing a use-case scenario might be an experienced ATM user, while another instance of the "Customer" actor may be inexperienced at ATM use. The experienced Customer quickly navigates the ATM user interface, spending little time reading each prompt and pressing the buttons by rote. The inexperienced Customer, however, reads each prompt and takes extra time to interpret the information before responding. Realistic workload profiles reflect this difference to ensure an accurate assessment of the behaviors of the target-of-test.

Begin by identifying the actors for each use-case scenario identified above. Then identify the different actor profiles that may execute each use-case scenario. In the ATM example above, we might have the following actor profiles:

  • Experienced ATM user
  • Inexperienced ATM user
  • ATM user's account is "inside" the ATM's bank network (user's account is with bank owning ATM)
  • ATM user's account is outside the ATM's bank network (competing bank)

For each actor profile, identify the different attributes and their values such as:

  • Think time - the time it takes an actor to respond to an individual prompt from the target-of-test
  • Typing rate - the rate at which the actor interacts with the interface
  • Request Pace - the rate at which the actor makes requests of the target-of-test
  • Repeat factor - the number of times a use case or request is repeated in sequence
  • Interaction method - the method of interaction used by the actor, such as using the keyboard to enter in values, tabbing to a field, using accelerator keys, etc., or using the mouse to "point and click", "cut and paste", etc.

Additionally, for each actor profile identify its workload profile, specifying all the use-case scenarios it executes and the percentage of time or proportion of effort the actor spends executing those scenarios. This information is used to identify and create a realistic load (see Workload Profiles below).
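
As an illustration, an actor profile and its workload share might be recorded as follows. The attribute names (think_time_s, typing_rate_cps, and so on) and all values are assumptions; real projects tailor them to their tooling.

    from dataclasses import dataclass

    @dataclass
    class ActorProfile:
        """Behavioral attributes for one actor profile, per the list above."""
        name: str
        think_time_s: tuple       # (min, max) seconds to respond to a prompt
        typing_rate_cps: float    # characters typed per second
        request_pace_rpm: float   # requests per minute against the target-of-test
        repeat_factor: int        # times a use case or request repeats in sequence
        interaction_method: str   # "keyboard", "mouse", etc.
        scenario_mix: dict        # scenario name -> share of this actor's effort

    experienced = ActorProfile(
        name="Experienced ATM user",
        think_time_s=(0.5, 2.0),   # presses buttons by rote, barely reads prompts
        typing_rate_cps=4.0,
        request_pace_rpm=6.0,
        repeat_factor=1,
        interaction_method="keyboard",
        scenario_mix={"Cash Withdraw": 0.8, "Balance Inquiry": 0.2},
    )

    inexperienced = ActorProfile(
        name="Inexperienced ATM user",
        think_time_s=(3.0, 10.0),  # reads and interprets each prompt
        typing_rate_cps=1.5,
        request_pace_rpm=2.0,
        repeat_factor=1,
        interaction_method="keyboard",
        scenario_mix={"Cash Withdraw": 0.9, "Balance Inquiry": 0.1},
    )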

System Attributes and Variables

The specific attributes and variables of the Test Environment Configuration that uniquely identify the environment must also be identified, because these attributes impact the measurement and evaluation of behavior. These attributes include:

  • The physical hardware (CPU speed, memory, disk caching, etc.)
  • The deployment architecture (number of servers, distribution of processing, etc.)
  • The network attributes
  • Other software (and use cases) that may be installed and executed simultaneously with the target-of-test

Identify and list the system attributes and variables to be considered for inclusion in the tests. This information may be obtained from several sources, including the Test Environment Configuration and the Deployment view of the Software Architecture Document (see above).
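
These attributes can be recorded alongside the workload profiles, so that each test run is traceable to the exact configuration it executed against. A sketch with assumed field names and hypothetical values:

    from dataclasses import dataclass, field

    @dataclass
    class TestEnvironmentConfiguration:
        """Attributes that uniquely identify one test environment."""
        cpu: str                 # clock speed, core count
        memory_gb: int
        disk_caching: bool
        server_count: int        # deployment architecture
        network: str             # bandwidth, latency class, etc.
        other_software: list = field(default_factory=list)  # runs alongside the target-of-test

    atm_staging = TestEnvironmentConfiguration(
        cpu="2.4 GHz, 8 cores",
        memory_gb=32,
        disk_caching=True,
        server_count=2,
        network="1 Gbps LAN",
        other_software=["monitoring agent"],
    )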

Workload Profiles

As stated previously, workload is an important factor that impacts the behavior of a target-of-test. Accurately identifying the workload profiles that will be used to evaluate the target's behavior is critical. Typically, tests that involve workload are executed several times using different workload profiles, each representing a variation of the attributes described below:

  • The number of simultaneous actor instances interacting with the target-of-test
  • The profile of the actors interacting with the target-of-test
  • The use-case scenarios executed by each actor instance
  • The frequency with which each critical use-case scenario is executed and how often it is repeated

For each workload profile used to evaluate the performance of the target-of-test, identify the values for each of the above variables. The values used for each variable in the different loads may be derived by observing or interviewing actors, or from the Business Use-Case Model if one is available. It is common for one or more of the following workload profiles to be defined:

  • Optimal - a workload profile that reflects the best possible deployment conditions, such as a minimal number of actor instances interacting with the system, executing only the critical use-case scenarios, with minimal additional software and workload executing during the test.
  • Average (AKA Normal) - a workload profile that reflects the anticipated or actual average usage conditions.
  • Instantaneous Peak - a workload profile that reflects anticipated or actual instantaneous heavy usage conditions that occur for short periods during normal operation.
  • Peak - a workload profile that reflects anticipated or actual heavy usage conditions, such as a maximum number of actor instances, executing high volumes of use-case scenarios, with much additional software and workload executing during the test.

When workload testing includes Stress Testing (see Concept: Performance Test and Technique: Test Types), several additional loads should be identified, each targeting specific aspects of the system in abnormal or unexpected states beyond the expected normal capacity of the deployed system.
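
Because these profiles vary the same small set of variables, they can be expressed as parameter sets and the same test rerun under each. A minimal sketch; the numbers are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class WorkloadProfile:
        name: str
        actor_instances: int    # simultaneous actor instances
        scenarios: list         # use-case scenarios in the mix
        background_load: bool   # other software executing during the test

    PROFILES = [
        WorkloadProfile("Optimal", 5, ["Cash Withdraw"], background_load=False),
        WorkloadProfile("Average", 50, ["Cash Withdraw", "Balance Inquiry"], background_load=True),
        WorkloadProfile("Instantaneous Peak", 200, ["Cash Withdraw"], background_load=True),
        WorkloadProfile("Peak", 150, ["Cash Withdraw", "Balance Inquiry"], background_load=True),
    ]

    for profile in PROFILES:
        print(f"Execute test suite under '{profile.name}': "
              f"{profile.actor_instances} actors, scenarios {profile.scenarios}")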

Performance Measurements and Criteria

Successful workload testing can only be achieved if the tests are measured and the workload behaviors evaluated. In identifying workload measurements and criteria, the following factors should be considered:

  • What measurements are to be made?
  • Where and what are the critical measurement points in the target-of-test / use-case execution?
  • What are the criteria to be used for determining acceptable performance behavior?

Performance Measurements

There are many different measurements that can be made during test execution. Identify the significant measurements to be made and justify why they are the most significant measurements.

Listed below are the more common performance behaviors monitored or captured:

  • Test script state or status - a graphical depiction of the current state, status, or progress of the test execution
  • Response time / Throughput - measurement (or calculation) of response times or throughput (usually stated as transactions per second).
  • Traces - capturing the messages / conversations between the actor (test script) and the target-of-test, or the dataflow and / or process flow during execution.

See Concept: Key Measures of Test for additional information.
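
In a test script, response time is usually recorded around each request, and throughput is derived from the count of completed transactions. A bare-bones sketch, assuming a request function (request_fn) supplied by the test harness:

    import time

    def run_script(request_fn, n_requests):
        """Execute n requests, recording response times and overall throughput."""
        response_times = []
        start = time.perf_counter()
        for _ in range(n_requests):
            t0 = time.perf_counter()
            request_fn()                         # one actor request to the target-of-test
            response_times.append(time.perf_counter() - t0)
        total = time.perf_counter() - start
        throughput_tps = n_requests / total      # transactions per second
        return response_times, throughput_tps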

Critical Performance Measurement Points

In the Use Cases and Use Case Attributes section above, it was noted that not all use cases and their scenarios are executed for performance testing. Similarly, not all performance measures are made for each executed use-case scenario. Typically, only specific use-case scenarios are targeted for measurement, or a specific sequence of events within a use-case scenario is measured to assess the performance behavior. Take care to select the most significant starting and ending "points" for measuring the performance behaviors. The most significant ones are typically the most visible sequences of events, or those we can affect directly through changes to the software or hardware.

For example, in the ATM Cash Withdraw use case identified above, we might measure the performance characteristics of the entire use-case instance, from the point where the Actor initiates the withdrawal to the point at which the use case terminates - that is, the Actor receives their bank card and the ATM is ready to accept another card - as shown by the black "Total Elapsed Time" line in the diagram below:

(Diagram: the Cash Withdraw use case as a timeline of event sequences A through F, spanned by a black "Total Elapsed Time" line.)

Notice, however, that many sequences of events contribute to the total elapsed time: some that we have control over (such as reading card information, verifying the card type, and initiating communication with the bank system; items B, D, and E above), and others that we have no control over (such as the actor entering their PIN or reading the prompts before entering the withdrawal amount; items A, C, and F). In the above example, in addition to measuring the total elapsed time, we would measure the response times for sequences B, D, and E, since these are the response times most visible to the actor (and we may affect them via the software / hardware for deployment).
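
In test-script terms, this means placing timer start / stop points around each system-controlled sequence as well as around the whole scenario. A sketch; the mapping of B, D, and E to specific sequences follows the example above and is illustrative only:

    import time
    from contextlib import contextmanager

    timings = {}

    @contextmanager
    def measure(event):
        """Record the elapsed time of one named sequence of events."""
        start = time.perf_counter()
        yield
        timings[event] = time.perf_counter() - start

    with measure("Total Elapsed Time"):
        # Actor-controlled sequences A, C, and F (entering the PIN, reading
        # prompts) elapse within the total but are not measured individually.
        with measure("B"):
            pass  # driver code for system sequence B (e.g., read card information)
        with measure("D"):
            pass  # driver code for system sequence D (e.g., verify card type)
        with measure("E"):
            pass  # driver code for system sequence E (e.g., communicate with bank system)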

Performance Measurement Criteria

Once the critical performance measures and measurement points have been identified, review the performance criteria. Performance criteria are stated in the Supplementary Specifications (see Artifact: Supplementary Specifications). If necessary, revise the criteria.

Here are some criteria that are often used for performance measurement:

  • response time (AKA on-line response)
  • throughput rate
  • response percentiles

The main criteria are on-line response time, measured in seconds, and transaction throughput rate, measured as the number of transactions (or messages) processed per unit of time.

For example, using the Cash Withdraw use case, the criterion might be stated as "events B, D, and E (see diagram above) must each occur in under 3 seconds (for a combined total of under 9 seconds)". If, during testing, any one of the events B, D, or E takes longer than the stated 3-second criterion, we note a failure.

Percentile measurements are combined with the response times and / or throughput rates and are used to "statistically ignore" measurements that fall outside the stated criteria. For example, suppose the performance criterion for the use case now states "for the 90th percentile, events B, D, and E must each occur in under 3 seconds ...". During test execution, if 90 percent of all performance measurements fall within the stated criterion, no failures are noted.
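
The percentile check itself is a small computation over the collected measurements. A sketch, assuming response times gathered as in the earlier measurement example and the 3-second criterion from this one:

    def percentile(samples, pct):
        """Return the pct-th percentile of a list of measurements."""
        ordered = sorted(samples)
        index = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
        return ordered[index]

    def meets_criterion(samples, pct=90, limit_s=3.0):
        """Pass when the pct-th percentile response time is under limit_s seconds."""
        return percentile(samples, pct) < limit_s

    # Example: 100 measurements of event B; the slowest 10 percent may exceed the limit.
    event_b_times = [2.1] * 92 + [4.5] * 8
    print(meets_criterion(event_b_times))   # True: the 90th percentile is 2.1 s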