The performance test

11 July 2019
The quality of a software product is defined by the ISO 9126 & 25000 standards on the basis of six criteria. Functional tests cover the “functional capacity” and “ease of use” criteria. Technical tests assess the “reliability, efficiency, maintainability and portability” of the system. In this article, we focus more specifically on the performance test, which covers some of the technical test criteria.
What is the performance test?

From an operational point of view, the aim of the performance test is to assess how an application system behaves under load. This assessment consists of measuring the system's resource consumption and response times while simulating a large number of concurrent users performing realistic user actions against a high volume of data (a minimal load-generation sketch follows the list below).

The system's quality is based on:

  • its response time (user, network and query),
  • its capacity to support n users simultaneously,
  • its resource consumption (memory, processor, disk and network),
  • its stability (nominal operation with no errors or data compromise),
  • its scalability (capacity to adapt to load increases and decreases).
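
To make these criteria concrete, here is a minimal load-generation sketch in Python. It assumes the requests package and a hypothetical TARGET_URL, and simply simulates concurrent users while recording response times; dedicated injection tools such as JMeter, Gatling or Locust industrialise the same principle at a much larger scale.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # assumption: the 'requests' package is installed

    TARGET_URL = "https://example.com/api/health"  # hypothetical endpoint
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 20

    def user_session(user_id: int) -> list:
        """Simulate one user issuing sequential requests, recording each response time."""
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            requests.get(TARGET_URL, timeout=10)
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            all_timings = [t for session in pool.map(user_session, range(CONCURRENT_USERS))
                           for t in session]
        print(f"requests sent:      {len(all_timings)}")
        print(f"mean response time: {statistics.mean(all_timings):.3f}s")
        print(f"max response time:  {max(all_timings):.3f}s")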
Integration of performance tests into the ISO 9126 & 25000 standards
In terms of the reliability criterion 

Does the software maintain its level of service under specific conditions during a determined period?

The performance test enables us to verify the system's fault tolerance, i.e. its capacity to keep operating in degraded mode in the event of a failure.

In terms of the efficiency criterion

Does the software require cost-effective and proportionate sizing of the host platform in light of the other requirements?

This criterion is fully covered by the performance test. It covers the application's behaviour over time (response time, number of transactions per second), its use of resources (memory, processor, disk and network) and efficiency itself (the ratio between the software's level of performance and the quantity of resources used).

In terms of the maintainability criterion 

Does the software require little effort to upgrade to meet new needs?

The performance test applies to the notion of stability, i.e. the state in which the software can accomplish all the tasks planned without new bugs appearing.

In terms of the portability criterion 

Can the software be transferred from one platform or environment to another?

The performance test assesses the software's ease of adaptation to changes in operational specifications or environments.

To sum up:

[Figure: Integration of performance tests into the ISO 9126 & 25000 standards]
Performance test contributions

Performance tests are used throughout the software life cycle, both during initial development and in maintenance, whether to upgrade the software or to correct faults. The benefits are multiple:

  • know the system's capacity and its limits,
  • detect and monitor its weaknesses, 
  • optimise its infrastructure costs by streamlining resources,
  • ensure that it operates without errors under certain load conditions,
  • optimise response times to improve the user experience,
  • verify stability between the production version and version n+1,
  • reduce technical risks and production incidents,
  • reproduce a production problem,
  • anticipate a future load increase or the addition of a function,
  • ensure that the system and its third-party applications recover smoothly in the event of a failure followed by reconnection, etc.
Methodology

An application's performance must be taken into account from the pre-study/design phase onwards. It is important to identify the system constraints (backup, network) and to ensure that the chosen technical/application architecture and the selected frameworks are suitable to cover the requirements.

The implementation of performance tests is based on:

  • definition of the scope (technical, functional),
  • definition of the test conditions (input, output, third-party services),
  • the reliability of the test environment,
  • system monitoring (a minimal probe sketch follows this list),
  • the effectiveness of the testing tools.
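
As an illustration of the system monitoring point, here is a minimal resource probe sketch in Python, assuming the psutil package; real campaigns usually rely on dedicated probes or an APM solution.

    import psutil  # assumption: the 'psutil' package is installed

    def sample_resources(duration_s: int = 60, interval_s: int = 5) -> None:
        """Periodically sample CPU, memory, disk and network usage during a test run."""
        net_start = psutil.net_io_counters()
        for _ in range(duration_s // interval_s):
            cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
            mem = psutil.virtual_memory().percent
            disk = psutil.disk_usage("/").percent
            sent = psutil.net_io_counters().bytes_sent - net_start.bytes_sent
            print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  disk={disk:5.1f}%  net_sent={sent}B")

    sample_resources()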

[Figure: Methodology for setting up a performance test]
Test campaign procedure

A performance test campaign is divided into four separate phases. 

1. Campaign study

A performance test campaign starts with the study of the campaign and the definition of prerequisites.

Initially, the following are reviewed: the technology used (web, SAP, Oracle, RDP, Citrix, Mainframe, Windows Sockets), the architecture in place (server, cluster, database, proxy), the components to be tested as well as third-party services (calls to services outside the environment).

Then the business processes to be tested must be identified (the most frequently used and the riskiest user scenarios), along with the expected traffic in the short and long term (nominal number of users, load peaks, number of transactions per second) and the expected data volume (availability of a production database, need to create data, data perishability).
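
A common way to translate such traffic figures into a number of virtual users to simulate is Little's Law: concurrent users = throughput × (response time + think time). The figures below are purely illustrative assumptions.

    # Illustrative sizing only -- every figure here is a hypothetical assumption.
    peak_tps = 120             # expected peak transactions per second
    avg_response_time_s = 0.8  # average response time per transaction
    avg_think_time_s = 9.2     # average pause between two user actions

    # Little's Law: concurrent users = throughput x (response time + think time)
    concurrent_users = peak_tps * (avg_response_time_s + avg_think_time_s)
    print(concurrent_users)  # 1200.0 virtual users to simulate at peak load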

Once these two stages are completed, it is important to carefully define the requirements in terms of response times and resource consumption on which the test result acceptance criteria will be based.

Depending on the technology used and the budgetary constraints, the scripting, injection and monitoring tools (defined in the next paragraph) are chosen.

Finally, the test strategy is put in place, defining the types of tests to run to cover all the requirements.

 

2. Preparation phase

This phase includes setting up the test environment dedicated to performance tests (iso-production in terms of configuration and volume, with probes implemented to monitor resources), creating datasets if necessary, and scripting the business processes (user action sequences) to be tested with the selected scripting tool.
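
By way of illustration, here is what the scripting of a simple business process could look like with Locust, an open-source Python injection tool (one possible choice among many; the endpoints and credentials below are hypothetical).

    from locust import HttpUser, task, between

    class CheckoutUser(HttpUser):
        """One scripted business process: log in, then browse and order."""
        wait_time = between(1, 5)  # think time between user actions, in seconds

        def on_start(self):
            # hypothetical login endpoint and test credentials
            self.client.post("/login", json={"user": "demo", "password": "demo"})

        @task(3)  # weighted: browsing is three times more frequent than ordering
        def browse_catalogue(self):
            self.client.get("/catalogue")

        @task(1)
        def place_order(self):
            self.client.post("/orders", json={"item_id": 42, "quantity": 1})

Run, for example, with locust -f checkout.py --host https://test-env.example.com (a hypothetical test environment): Locust ramps up the configured number of concurrent users and records the response time of every request.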

If the system uses third-party services, a stub system is put in place to isolate the platform from the outside and prevent the tests from being affected by network latency between the system under test and these external calls. Several mechanisms exist for this: virtual services or text files that store the response to a query, or application code modified to always return the same response to the system without sending the query outside.
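
As a sketch of such a stub, and assuming the third-party service is called over HTTP, a few lines of the Python standard library are enough to serve a previously recorded canned response.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # hypothetical response, recorded beforehand from the real third-party service
    CANNED_RESPONSE = json.dumps({"status": "OK", "score": 42}).encode()

    class StubHandler(BaseHTTPRequestHandler):
        """Always return the same recorded response, whatever the query."""

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(CANNED_RESPONSE)

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)  # drain the request body, then answer identically
            self.do_GET()

    # point the system under test at this address instead of the real service
    HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()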

 

3. Running tests

Once the test phase has begun, the performance tests defined in the test strategy are run with the selected injection tool, analysed (problems detected against the requirements) and stored (test reports backed up, logs kept).
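
In practice, the analysis step often comes down to comparing the measured percentiles against the acceptance criteria defined during the campaign study. A minimal sketch, assuming a hypothetical one-value-per-line export of response times from the injection tool and an illustrative 800 ms requirement:

    import statistics

    P95_REQUIREMENT_S = 0.8  # hypothetical acceptance criterion from the campaign study

    # hypothetical export produced by the injection tool, one response time per line
    with open("response_times.txt") as f:
        timings = [float(line) for line in f if line.strip()]

    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
    verdict = "PASS" if p95 <= P95_REQUIREMENT_S else "FAIL"
    print(f"p95 = {p95:.3f}s -> {verdict}")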

The anomalies identified are entered and documented in the selected bug management tool.

Depending on the anomalies encountered, further investigation tests may be run (diagnostic tests defined later on, or the use of diagnostic tools) to pinpoint the cause of the problem and facilitate its resolution.

As soon as a patch is available, the platform is updated and the tests necessary to validate it are rerun.

 

4. Reports & Recommendations

Throughout the performance test campaign, interim status reports are sent. Their frequency varies according to the criticality of the tests and what was agreed during the preliminary study of the campaign.

Once all the tests have been run, a detailed report is sent with a final status (GO/NO GO) as well as recommendations, if necessary.

If problems persist at the end of the campaign, it may be agreed to launch a new test campaign after delivery of the expected patches.

To sum up:

[Figure: Running a test campaign]

 

Don’t hesitate to check out our web page on Software Testing.

Let's have a chat about your projects. 
