Performance Testing Process

System performance testing is carried out to check compliance with the specified requirements and to ensure that software applications function smoothly under expected and peak workloads and are ready for real-life operation. Following a proper methodology ensures a successful performance test project, and a defined performance testing process makes it easier for the project to track status and deliverables. The picture below depicts a high-level view of the performance testing process.

[Figure: High-level view of the performance testing process]

Project assessment:

This is the first step of the performance testing process: determining whether the work can be done and, if so, how. Here we analyse the system, including application features, operation modes, user journeys and architectural details. Requirement gathering is a subset of project assessment in which we collect information about the application, the technology used, test and production environment details, project expectations, and resource and logistics details. The requirements are then analysed to determine whether they can be met with what is available in the specific environment.

Planning and Strategy design:

Here we use the information gathered during the project assessment to plan for performance testing. The performance test plan must contain all the key details: in-scope scenarios, SLAs, entry and exit criteria for each step / milestone, types of tests, virtual users (Vusers), workload modelling, environment details and the performance testing tool to be used. Below are the key items that should be included in the test plan:

  • Goals & Objectives
  • Components / Feature Scope – Inclusion and Exclusion
  • Environment – Performance test environment and application (Application server, DB server, Load generators)
  • Test tool and Monitoring tool to be used
  • Test data requirements
  • Timeline and effort estimates
  • Test scripts, scenario, execution details
  • Success criteria for results with respect to KPIs
  • Risks / Issues / Mitigation plan

The test plan is crucial: it includes the sections that cover the testing requirements and can be used as a tracking document throughout the testing process. As a starting point for the workload modelling item above, Little's Law is sketched below.
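A common way to size the workload model is Little's Law: concurrent Vusers = throughput × (response time + think time). The following minimal Python sketch uses hypothetical numbers; substitute your own targets from the requirements gathered during project assessment.

  # Little's Law: N = X * (R + Z)
  #   N = concurrent virtual users
  #   X = target throughput (transactions per second)
  #   R = expected response time per transaction (seconds)
  #   Z = think time between transactions (seconds)

  target_tps = 50        # hypothetical: 50 transactions/second at peak
  response_time_s = 2.0  # hypothetical expected response time
  think_time_s = 8.0     # hypothetical think time per user

  vusers = target_tps * (response_time_s + think_time_s)
  print(f"Required concurrent Vusers: {vusers:.0f}")  # -> 500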

Scripting:

Scripting is where the testing starts; before it begins, we need to make sure that the application has been functionally tested. Careful script recording / creation for the application saves a lot of time and trouble. This step can be further divided into three parts:

User journey analysis

The user journey / navigation should be analysed before the development of load test cases. Each step should be checked manually to confirm that no errors appear; only then should the test script be recorded.

Script development

Once the script is recorded, it needs to be developed using techniques such as parameterization and correlation to make it robust enough to run with multiple users.

Script debugging

After the script is developed, it should be executed a few times with several unique users to verify that it works correctly. If each scenario in the script runs cleanly, the script debugging stage can be considered complete. A minimal sketch covering parameterization, correlation and a debug run follows.
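As an illustration, here is a minimal parameterized and correlated script using Locust, a Python load testing tool. The /login and /orders endpoints, the users.csv file and the token field are hypothetical stand-ins for your application; this is a sketch, not a definitive implementation.

  import csv
  import random

  from locust import HttpUser, task, between

  # Parameterization: load unique test users from a CSV file
  # (hypothetical users.csv with "username,password" rows).
  with open("users.csv") as f:
      CREDENTIALS = list(csv.DictReader(f))

  class OrderJourney(HttpUser):
      wait_time = between(1, 5)  # think time between steps

      def on_start(self):
          # Each virtual user logs in with its own credentials.
          creds = random.choice(CREDENTIALS)
          resp = self.client.post("/login", json=creds)
          # Correlation: capture the dynamic session token from the
          # response and reuse it in subsequent requests.
          self.token = resp.json()["token"]

      @task
      def view_orders(self):
          self.client.get(
              "/orders",
              headers={"Authorization": f"Bearer {self.token}"},
          )

  # Debug run with a handful of users before scaling up, e.g.:
  #   locust -f locustfile.py --headless -u 5 -r 1 --run-time 1m --host https://test.example.com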

Test Execution:

Once we have our test scripts created, along with test data and the workload model, we move on to test execution. Monitoring is set up prior to execution to record platform performance data for analysis. This information not only identifies whether the infrastructure can handle the traffic load efficiently, but also helps determine whether additional capacity (memory, CPU, etc.) is required. Depending on the application, we may need to execute tests from the list below; a sketch of a spike-test load profile follows the list.

  • Baseline test – Establish performance baselines
  • Load test – Emulate production load on the system
  • Stress test – Load the system to breakpoint
  • Soak test – Test the system over a long period of time
  • Spike Test – Test the system for sudden increase in users / throughput
  • Volume test – High data volumes / throughput, Database growth
  • Scalability Test – tests system performance when infra is scaled up/down
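For example, a spike test can be modelled with a custom load shape in Locust. This is a minimal sketch; the user counts and timings are hypothetical and should come from your workload model.

  from locust import LoadTestShape

  class SpikeShape(LoadTestShape):
      """Hold a steady background load, spike sharply, then recover."""

      # (end_time_in_seconds, user_count) - hypothetical profile
      stages = [
          (120, 50),    # 0-2 min: steady load of 50 users
          (180, 500),   # 2-3 min: sudden spike to 500 users
          (300, 50),    # 3-5 min: back to steady load
      ]

      def tick(self):
          run_time = self.get_run_time()
          for end_time, users in self.stages:
              if run_time < end_time:
                  return (users, 100)  # (user count, spawn rate per second)
          return None  # stop the test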

Results Analysis:

The results, outputs and logs of the previous stages are analysed to describe the outcomes. Result analysis is perhaps the most critical part of performance testing, and it effectively starts earlier, with the design of scenarios and the set-up of monitors. Test results are the most important deliverable of the performance testing process and the most effective way of showing the success of the testing.

Performance testing is an iterative process, so keeping track of every test run, failures / successes, good / bad response times, changes between tests, any system change, and issues / resolutions will help prepare a comprehensive result summary. A good result summary should include the items below; a small sketch of KPI computation follows the list.

  • Overview of test
  • Scenario summary – Test type, duration, target KPIs and achieved KPIs
  • Graphs – Response time, throughput, system resources, comparison graphs
  • Recommendations for next test / application tuning
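As a small illustration, response-time KPIs such as percentiles can be computed from raw results with a few lines of Python. This sketch assumes a hypothetical results.csv with a response_time_ms column exported by your load test tool.

  import pandas as pd

  # Hypothetical raw results exported by the load test tool.
  df = pd.read_csv("results.csv")

  summary = {
      "samples": len(df),
      "avg_ms": df["response_time_ms"].mean(),
      "p90_ms": df["response_time_ms"].quantile(0.90),
      "p95_ms": df["response_time_ms"].quantile(0.95),
      "p99_ms": df["response_time_ms"].quantile(0.99),
      "max_ms": df["response_time_ms"].max(),
  }
  for kpi, value in summary.items():
      print(f"{kpi}: {value:.1f}")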

Reporting:

This is the last stage of the performance testing process. A full performance test report presents the content in such a way that technical as well as non-technical people can understand the goal and outcome of performance testing. The aim is to explain the content of the final report and answer any questions about the testing and findings. The final report covers the findings of the test process as a whole. Below are the key points to mention in the performance test report, followed by a small sketch of how a RAG status can be derived from KPI stats:

  • Executive summary with a RAG status clearly showing where performance targets were met or breached
  • Performance testing scope / goals / coverage as per test plan
  • Test environment configuration and scaling with respect to Production environment
  • KPI Stats – Target vs achieved (Response times, system resource utilization, SLAs)
  • Conclusion on the overall software performance and any bottlenecks found
  • Recommendations on improvement
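For instance, the RAG status in the executive summary can be derived mechanically from target vs achieved KPIs. This is a minimal sketch; the KPI names, values and the 10% amber margin are hypothetical and should match the thresholds agreed in the test plan.

  # Hypothetical KPI targets vs achieved values (times in ms, utilization in %).
  KPIS = {
      "login_p95_ms":  {"target": 2000, "achieved": 1850},
      "search_p95_ms": {"target": 3000, "achieved": 3150},
      "cpu_util_pct":  {"target": 75,   "achieved": 92},
  }

  def rag_status(target, achieved, amber_margin=0.10):
      """Green if within target, amber if within a 10% margin, else red."""
      if achieved <= target:
          return "GREEN"
      if achieved <= target * (1 + amber_margin):
          return "AMBER"
      return "RED"

  for name, kpi in KPIS.items():
      status = rag_status(kpi["target"], kpi["achieved"])
      print(f"{name}: target={kpi['target']} achieved={kpi['achieved']} -> {status}")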
