Cheat Sheet: Performance Modeling Web Applications

From Guidance Share


Technique: Performance Modeling for Web Applications

Purpose: Identify relevant performance threats in your scenario to help shape your application's performance design.


The performance modeling process requires several inputs. These include initial (perhaps tentative) information about the following:

  • Scenarios and design documentation about critical and significant use cases.
  • Application design and target infrastructure and any constraints imposed by the infrastructure.
  • QoS requirements and infrastructure constraints, including service level agreements (SLAs).
  • Workload requirements derived from marketing data on prospective customers.


The output from performance modeling is the following:

  • A performance model document.
  • Test cases with goals.

Technique Overview

The performance modeling process is summarized in Figure 2.1.

[Image: PerformanceModelingSteps.gif]

Figure 2.1: Eight-step performance model

The performance modeling process involves the following steps:

  • Step 1. Identify key scenarios. Identify scenarios where performance is important and scenarios that pose the most risk to your performance objectives.
  • Step 2. Identify workload. Identify how many users and concurrent users your system needs to support.
  • Step 3. Identify performance objectives. Define performance objectives for each of your key scenarios. Performance objectives reflect business requirements.
  • Step 4. Identify budget. Identify your budget or constraints. This includes the maximum execution time permitted for an operation and resource utilization constraints, such as CPU, memory, disk I/O, and network I/O.
  • Step 5. Identify processing steps. Break down your key scenarios into component processing steps.
  • Step 6. Allocate budget. Spread your budget (determined in Step 4) across your processing steps (determined in Step 5) to meet your performance objectives (defined in Step 3).
  • Step 7. Evaluate. Evaluate your design against objectives and budget. You may need to modify your design or spread your response time and resource utilization budget differently to meet your performance objectives.
  • Step 8. Validate. Validate your model and estimates. This is an ongoing activity and includes prototyping, assessing, and measuring.
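To make the arithmetic behind Steps 4 through 7 concrete, budget allocation and evaluation can be sketched as follows. This is an illustrative sketch, not part of the original guidance; the function name `evaluate_budget`, the scenario, and the numbers are hypothetical.

```python
# Hypothetical sketch: allocating a response-time budget across processing
# steps (Steps 4-6) and evaluating a design against it (Step 7).

def evaluate_budget(objective_ms, step_budgets_ms):
    """Return (within_budget, total_ms) for a scenario's processing steps."""
    total = sum(step_budgets_ms.values())
    return total <= objective_ms, total

# Illustrative key scenario: "place order" with a 2-second response objective.
steps = {
    "validate input": 100,
    "check inventory": 400,
    "process payment": 900,
    "render confirmation": 300,
}

ok, total = evaluate_budget(2000, steps)
print(f"total={total} ms, within objective: {ok}")
```

Step 7 then becomes a comparison of estimated or measured step costs against the allocation: if the total exceeds the objective, either the design or the distribution of the budget must change.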

Technique Summary Table

The following table summarizes the performance modeling activity and shows the input and output for each step.

Technique Summary with Input and Output




Step 1. Identify key scenarios

  Input:

  • Business requirements
  • Quality of Service requirements
  • Deployment diagrams
  • Use cases

  Output:

  • Critical scenarios
  • Significant scenarios

Step 2. Identify workload

  Input:

  • Total users
  • Concurrently active users
  • Data volumes
  • Transaction volumes and transaction mix

  Output:

  • Workload models
  • Workload requirements

Step 3. Identify performance objectives

  Input:

  • Service level agreements
  • Response times
  • Projected growth
  • Lifetime of your application

  Output:

  • Response time
  • Throughput
  • Resource utilization (CPU, memory, network, disk)

Step 4. Identify budget

  Input:

  • Network considerations
  • Hardware considerations
  • Resource dependencies
  • Shared resources
  • Project resources

  Output:

  • Execution time
  • Resource utilization

Step 5. Identify processing steps

  Input:

  • Key scenarios

  Output:

  • Key scenarios divided into processing steps

Step 6. Allocate budget

  Input:

  • Processing steps

  Output:

  • Processing steps with budgets

Step 7. Evaluate

  Input:

  • Processing steps with budgets

  Output:

  • Design modifications
  • Requirement modifications

Step 8. Validate

  Input:

  • Test cases with goals
  • Prototype code

  Output:

  • Performance measures
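When identifying workload (Step 2), note that total users and concurrently active users are different numbers. One common back-of-the-envelope estimate, not prescribed by this cheat sheet, applies Little's Law: concurrent users ≈ session arrival rate × average session duration. The figures below are hypothetical.

```python
def concurrent_users(arrivals_per_sec, avg_session_sec):
    """Little's Law estimate: items in system = arrival rate * time in system."""
    return arrivals_per_sec * avg_session_sec

# Illustrative numbers: 5 new sessions per second, 10-minute average session.
print(concurrent_users(5, 600))  # prints 3000
```

An estimate like this gives a starting point for the workload model; measured session data should replace the assumed arrival rate and session duration as soon as it is available.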

Information in the Performance Model

The information in the performance model is divided into different areas. Each area focuses on capturing one perspective. Each area has important attributes that help you execute the process. Table 2.1 shows the key information in the performance model.

Table 2.1: Information in the Performance Model



Application Description

The design of the application in terms of its layers and its target infrastructure.


Scenarios

Critical and significant use cases, sequence diagrams, and user stories relevant to performance.

Performance Objectives

Response time, throughput, resource utilization.


Budgets

Constraints you set on the execution of use cases, such as maximum execution time and resource utilization levels, including CPU, memory, disk I/O, and network I/O.


Measurements

Actual performance metrics from running tests, in terms of resource costs and performance issues.

Workload Goals

Goals for the number of users, concurrent users, data volumes, and information about the desired use of the application.

Baseline Hardware

Description of the hardware on which tests will be run — in terms of network topology, bandwidth, CPU, memory, disk, and so on.
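As one possible way to organize the areas in Table 2.1, a performance model document could start from a skeleton like the one below. This structure is a hypothetical sketch; the cheat sheet does not prescribe any particular format, and all field names are illustrative.

```python
# Hypothetical skeleton mirroring the information areas in Table 2.1.
performance_model = {
    "application_description": {"layers": [], "target_infrastructure": ""},
    "scenarios": [],              # critical/significant use cases, user stories
    "performance_objectives": {"response_time_ms": None,
                               "throughput_rps": None,
                               "resource_utilization": {}},
    "budgets": {},                # per-step execution time and resource limits
    "measurements": [],           # actual metrics from test runs
    "workload_goals": {"total_users": None, "concurrent_users": None,
                       "data_volumes": None},
    "baseline_hardware": {"topology": "", "bandwidth": "", "cpu": "",
                          "memory": "", "disk": ""},
}

print(sorted(performance_model))
```

Filling in each area iteratively, as described under Key Concepts below, keeps the model useful without requiring every detail up front.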

Other elements of information you might need include those shown in Table 2.2.

Table 2.2: Additional Information You Might Need



Quality-of-Service (QoS) Requirements

QoS requirements, such as security, maintainability, and interoperability, may impact your performance. You should have an agreement across software and infrastructure teams about QoS restrictions and requirements.

Workload Requirements

Total number of users, concurrent users, data volumes, and information about the expected use of the application.

Key Concepts

This performance modeling approach is optimized to help you identify performance issues in your application. The key concepts represent proven practices refined by industry performance experts, consultants, product support engineers, customers, and partners.




Modeling to reduce risk

Use performance modeling to identify when and where you should apply effort to eliminate or reduce a potential issue. Avoid wasted effort by modeling to clarify the areas of most risk.

Incremental rendering

Create the model iteratively and incrementally. You should not be too concerned about missing details in any single iteration — instead focus on making each iteration productive.

Context precision

Understand application use cases and roles in order to identify performance threats that are specific to your application. Different application types, application usage, and roles can yield different performance issues.


Boundaries

Establish boundaries to help you define constraints and goals. Boundaries help you identify what must not be allowed to happen, what can happen, and what must happen.

Entry and exit criteria

Define entry and exit criteria to establish tests for success. You should know before you start what your performance model will look like when complete (good enough) and when you have spent the right amount of time on the activity.

Communication and collaboration tool

Use the performance model as a communication and collaboration tool. Leverage discovered performance issues to improve shared knowledge and understanding.

Pattern-based information model

Use a pattern-based information model to identify the patterns of repeatable problems and solutions, and organize them into categories. See Cheat Sheet: Web Application Performance Frame.

Key engineering decisions

Expose your high-risk engineering decisions and design choices with your performance model. These high-risk choices are good candidates for focusing prototyping efforts.
