Task: Analyze Runtime Behavior
This task describes how to observe and analyze the behavior of a component during its execution in order to identify anomalies and any improvements or corrective actions required.
Disciplines: Implementation
Purpose
  • To understand the behavior of a component during its execution.
  • To identify anomalous behavior and any corrective actions required.
Relationships
Roles
  Primary Performer:
  Additional Performers:
Inputs
  Mandatory:
  Optional:
Outputs
Process Usage
Steps
        Determine Required Execution Scenario
        Purpose: To identify the execution path that will stimulate the desired runtime behavior

        If the observation and analysis of runtime behavior is to provide the desired insight into the software, you will need to consider which execution paths through the application are important to explore and, of those, which offer the most opportunity for understanding the software's runtime behavior.

        In general, the most useful scenarios to explore tend to reflect, in whole or in part, the paths that users will typically exercise. As such, it is useful wherever possible to identify scenarios by questioning or otherwise consulting with a domain expert such as a representative user of the software being developed.

        Use cases offer a valuable set of artifacts from which useful scenarios can be identified and explored. As a developer, you will likely be most familiar with the use-case realizations, so begin with those if available. In the absence of use-case realizations, identify any available use-case scenarios that offer a textual explanation of the path the user will navigate through the various flows of events in the use case. Finally, the use-case flows of events can be consulted to provide information from which likely candidate scenarios can be identified. The success of this last approach is improved by consulting a representative for the use case's actor or another domain expert.
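        Once a candidate scenario has been selected, it can help to capture it in a form that the later execution steps can replay directly. The sketch below shows one minimal way to do this in Java; the ExecutionScenario class, the step descriptions, and the "place order" flow are illustrative assumptions rather than prescribed artifacts.

```java
import java.util.List;

/** Illustrative sketch: a chosen execution scenario captured as an ordered list of steps. */
public class ExecutionScenario {

    /** One user-visible action, distilled from a use-case flow of events (names are assumptions). */
    public record Step(String description, String expectedOutcome) { }

    private final String name;
    private final List<Step> steps;

    public ExecutionScenario(String name, List<Step> steps) {
        this.name = name;
        this.steps = List.copyOf(steps);
    }

    public String name()      { return name; }
    public List<Step> steps() { return steps; }

    /** Example scenario: a hypothetical "place order" basic flow reduced to observable steps. */
    public static ExecutionScenario placeOrderBasicFlow() {
        return new ExecutionScenario("Place order - basic flow", List.of(
            new Step("Customer adds an item to the cart", "Cart total is updated"),
            new Step("Customer submits the order",        "Order is persisted and confirmed"),
            new Step("System sends a confirmation",       "Confirmation message is produced")));
    }
}
```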

        Testers are another useful resource to consult when attempting to identify useful scenarios for runtime analysis. Through their testing efforts, testers often gain insight into and experience with the domain that effectively makes them pseudo-domain experts. In many cases, the stimulus for observing the software's runtime behavior will come from the results of the testing effort itself.

        If this task is driven by a reported defect, the main focus will be to reproduce it in a controlled environment. Based on the information that was logged when the problem occurred, a number of test cases have to be identified as potential candidates for making the defect occur reliably. You might need to tweak some of the tests or write new ones, but keep in mind that reproducing the defect is an essential step, and for the most difficult cases it will take more time to stabilize the defect than to fix it.
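        For example, a reproduction attempt can be pinned down as an automated test so the defect can be triggered on demand. The JUnit 4 sketch below assumes a hypothetical OrderCalculator component and a hypothetical rounding defect; against the real defective component, the assertion is the check expected to fail, and that repeatable failure is what demonstrates reliable reproduction.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/**
 * Sketch of a defect-reproduction test (JUnit 4). OrderCalculator and the reported
 * rounding symptom are hypothetical stand-ins for the real component and defect.
 */
public class OrderTotalDefectTest {

    /** Hypothetical component under test, inlined only to keep the sketch self-contained. */
    static class OrderCalculator {
        double totalWithDiscount(int quantity, double unitPrice, double discount) {
            return quantity * unitPrice * (1.0 - discount);
        }
    }

    @Test
    public void reproducesReportedTotalForDiscountedOrder() {
        // Conditions lifted from the hypothetical defect report: three items at 19.99, 10% discount.
        OrderCalculator calculator = new OrderCalculator();
        double total = calculator.totalWithDiscount(3, 19.99, 0.10);

        // The assertion encodes the specified behavior; against the real defective
        // component this is the check that fails, reproducing the defect reliably.
        assertEquals(53.97, total, 0.01);
    }
}
```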

        Prepare Implementation Component for Runtime Observation
        Purpose: To ensure the component is in an appropriate state for runtime execution

        For runtime execution of the component to yield accurate results, care should be taken to prepare the component satisfactorily so that no anomalous results occur as a by-product of errors in implementation, compilation or linking.

        It is often necessary to make use of stubbed components so that the runtime observation can be completed in a timely manner, or so that it can actually be conducted in situations where the component is reliant on other components that have not yet been implemented.
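        As a sketch of what such a stub might look like, the example below assumes a hypothetical CreditCheckService collaborator that has not yet been implemented; the stub returns a canned response and counts its invocations, which is often enough for the component under observation to be exercised end to end.

```java
/** Hypothetical collaborator interface; the real implementation is not yet available. */
interface CreditCheckService {
    boolean isCreditworthy(String customerId);
}

/**
 * Stub standing in for the unimplemented collaborator so the component under
 * observation can still be executed end to end. It returns a canned answer and
 * records that it was called, which is often all the runtime observation needs.
 */
class StubCreditCheckService implements CreditCheckService {

    private int invocationCount = 0;

    @Override
    public boolean isCreditworthy(String customerId) {
        invocationCount++;
        return true; // canned response: always approve during runtime observation
    }

    public int invocationCount() {
        return invocationCount;
    }
}
```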

        You will also need to prepare any framework or supporting tools required to execute the component. In some cases this may mean creating driver or harness code to support execution of the component; in other cases it may mean instrumenting the component so that external support tools can observe and possibly control the component's behavior.
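        One lightweight way to instrument a component, assuming it is accessed through an interface, is to wrap it in a decorator that records timing and outcomes around each call. The Component interface and the logging choices below are assumptions made for illustration.

```java
import java.util.logging.Logger;

/** Hypothetical component interface; substitute the operations of the real component. */
interface Component {
    String process(String input);
}

/**
 * Decorator that instruments a component without modifying it: each call is logged
 * together with its elapsed time so the log (or an external observer reading it)
 * can be used during the later analysis.
 */
class InstrumentedComponent implements Component {

    private static final Logger LOG = Logger.getLogger(InstrumentedComponent.class.getName());

    private final Component delegate;

    InstrumentedComponent(Component delegate) {
        this.delegate = delegate;
    }

    @Override
    public String process(String input) {
        long start = System.nanoTime();
        String result = delegate.process(input);
        long elapsedMicros = (System.nanoTime() - start) / 1_000;
        LOG.info(() -> String.format("process(%s) -> %s [%d us]", input, result, elapsedMicros));
        return result;
    }
}
```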

        Prepare Environment for Execution
        Purpose: To ensure the prerequisite setup of the target environment has been completed satisfactorily.

        It is important to consider any requirements and constraints that must be addressed for the target environment in which the runtime analysis will occur. In some cases it will be necessary to simulate one or more of the intended deployment environments in which the component will ultimately be required to run. In other cases, it will be sufficient to observe the runtime behavior on the developer's machine.

        In any case, it is important to set up the target environment for the runtime observation satisfactorily so that the exercise is not wasted by the inclusion of "contaminants" that could invalidate the subsequent analysis.
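        A small pre-run check can help guard against such contaminants, for example by confirming that required settings are present and by starting from a clean working directory. The property name and directory used below are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

/**
 * Pre-run environment check: confirm required configuration is present and start
 * from a clean working directory so leftover data does not contaminate the run.
 * The property name and directory are illustrative assumptions.
 */
public class EnvironmentSetupCheck {

    public static void main(String[] args) throws IOException {
        // Required setting for the simulated target environment (assumed name).
        String targetProfile = System.getProperty("runtime.analysis.profile");
        if (targetProfile == null || targetProfile.isBlank()) {
            throw new IllegalStateException("runtime.analysis.profile must be set before the run");
        }

        // Remove leftovers from earlier runs, then recreate an empty working directory.
        Path workDir = Path.of("runtime-analysis-work");
        if (Files.exists(workDir)) {
            try (var paths = Files.walk(workDir)) {
                paths.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
            }
        }
        Files.createDirectories(workDir);

        System.out.println("Environment ready: profile=" + targetProfile + ", workDir=" + workDir);
    }
}
```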

        Another consideration is the use of tools that generate environmental constraints or exception conditions that are otherwise difficult to reproduce. Such tools are invaluable in isolating failures or anomalies that occur in runtime behavior under these conditions.
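        Where a dedicated fault-injection tool is not available, a lightweight wrapper can simulate such exception conditions by hand. The sketch below assumes a hypothetical DataStore dependency and simply fails every Nth call so the component's behavior under I/O failure can be observed deliberately rather than waited for.

```java
import java.io.IOException;

/** Hypothetical dependency whose failures are hard to reproduce naturally. */
interface DataStore {
    byte[] read(String key) throws IOException;
}

/**
 * Hand-rolled fault injector: wraps a real (or stubbed) DataStore and fails every
 * Nth call, so the component's behavior under I/O failure can be observed on demand.
 */
class FaultInjectingDataStore implements DataStore {

    private final DataStore delegate;
    private final int failEveryNthCall;
    private int calls = 0;

    FaultInjectingDataStore(DataStore delegate, int failEveryNthCall) {
        this.delegate = delegate;
        this.failEveryNthCall = failEveryNthCall;
    }

    @Override
    public byte[] read(String key) throws IOException {
        calls++;
        if (calls % failEveryNthCall == 0) {
            throw new IOException("Injected failure on call " + calls); // simulated environmental fault
        }
        return delegate.read(key);
    }
}
```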

        Execute the Component and Capture Behavioral Observations
        Purpose: To observe and capture the runtime behavior of the component.

        Having prepared both the component and the environment in which it will be observed, you can now begin to execute the component through the chosen scenario. Depending on the techniques and tools employed, this step may be performed largely unattended, or it may offer (or even require) ongoing attention as the scenario progresses.
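        The sketch below shows one simple shape such a run might take when driven from code: each step of the scenario is executed, and the input, output, and elapsed time are captured as observations for the review that follows. The Component interface and scenario inputs are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Runs a component through the chosen scenario and captures one observation per
 * step for later review. The Component interface and inputs are assumptions.
 */
public class ScenarioRunner {

    interface Component {
        String process(String input);
    }

    /** One captured observation: what was asked, what came back, and how long it took. */
    record Observation(String input, String output, long elapsedMicros) { }

    static List<Observation> run(Component component, List<String> scenarioInputs) {
        List<Observation> observations = new ArrayList<>();
        for (String input : scenarioInputs) {
            long start = System.nanoTime();
            String output = component.process(input);
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            observations.add(new Observation(input, output, elapsedMicros));
        }
        return observations;
    }

    public static void main(String[] args) {
        Component component = input -> "processed:" + input; // stand-in for the real component
        run(component, List.of("step-1", "step-2", "step-3")).forEach(System.out::println);
    }
}
```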

        Review Behavioral Observations and Isolate Initial Findings
        Purpose: To identify failures and anomalies in the component's runtime behavior

        Either during each step of the scenario or at its conclusion, look for failures or anomalies in the expected behavior. Note any observations or impressions that you think might relate to the anomalous behavior.
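        Where observations have been captured in a structured form, an initial review can be as simple as scanning them against expectations. The sketch below flags steps that produced no output or that exceed an assumed 50 ms threshold; both the Observation record and the threshold are illustrative assumptions.

```java
import java.util.List;

/**
 * Simple review pass over captured observations: flag any step whose output is
 * missing or whose elapsed time exceeds a threshold. The Observation record and
 * the 50 ms threshold are illustrative assumptions.
 */
public class ObservationReview {

    record Observation(String input, String output, long elapsedMicros) { }

    public static void main(String[] args) {
        long thresholdMicros = 50_000; // 50 ms, chosen for illustration only

        List<Observation> observations = List.of(
            new Observation("step-1", "processed:step-1", 1_200),
            new Observation("step-2", "",                 900),
            new Observation("step-3", "processed:step-3", 87_000));

        for (Observation o : observations) {
            boolean slow     = o.elapsedMicros() > thresholdMicros;
            boolean noOutput = o.output() == null || o.output().isEmpty();
            if (slow || noOutput) {
                // Candidate anomaly: record it as an initial finding for root-cause analysis.
                System.out.printf("ANOMALY input=%s slow=%b noOutput=%b (%d us)%n",
                                  o.input(), slow, noOutput, o.elapsedMicros());
            }
        }
    }
}
```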

        Analyze Findings to Understand Root Causes
        Purpose: To understand the root cause of each failure or anomaly

        Take your findings and begin to investigate the underlying fault or root cause of each failure.

        Identify and Communicate Follow-up Actions
        Purpose: To suggest further investigative or corrective actions

        Once you've reviewed all of your findings, you'll likely have a list of thoughts or hunches that will require further investigation, and possibly specific corrective actions that you propose. If you will not be taking immediate action on these items yourself, record your proposals in an appropriate format and communicate them to the members of your team who can approve or otherwise undertake your proposals.

        Evaluate Your Results
        Purpose: To verify that the task has been completed appropriately and that the resulting work products are acceptable.

        Now that you have completed the work, it is a good practice to verify that the work was of sufficient value. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

        Have the people who will use your work as input in performing their downstream tasks take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented or considered them sufficiently and accurately. It may be useful to have the author of the input work product review your work on this basis.

        Try to remember that RUP is an iterative delivery process and that in many cases work products evolve over time. As such, it is not usually necessary, and is in many cases counterproductive, to fully form a work product that will only be partially used or will not be used at all in immediately subsequent downstream work. This is because there is a high probability that the situation surrounding the work product will change, and the assumptions made when the work product was created will prove incorrect, before the work product is used, resulting in rework and therefore wasted effort.

        Also avoid the trap of spending too many cycles on presentation to the detriment of the value of the content itself. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative or junior resource to perform work on a work product to improve its presentation.



        More Information