
Performance Testing Metrics: What Exactly Does It Contain?

Performance testing metrics are an essential part of any performance testing report. The objective of performance testing is to gather data about server performance under high user and data loads, analyze system bottlenecks, and improve system stability.
In this blog, we address two main questions, along with the subtopics that fall under each:
  • What is performance testing?
  • What are performance testing metrics?

 

What is performance testing?

Performance testing involves applying a specified load to a system according to a defined testing strategy in order to obtain performance metrics such as response time, transactions per second (TPS), throughput, and resource utilization. This process helps assess whether the system can meet user requirements after going live.
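As a minimal sketch of this idea, the example below drives a small concurrent load against an endpoint and derives response time, TPS, and error rate from the raw timings. TARGET_URL, the virtual-user count, and the per-user request count are illustrative assumptions, not values from this article:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/api/health"  # hypothetical endpoint
VIRTUAL_USERS = 10        # concurrent "users" (assumed for the example)
REQUESTS_PER_USER = 20    # requests each user sends (assumed)

def timed_request(_):
    """Send one request and return (elapsed_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
            ok = 200 <= resp.status < 400
    except Exception:
        ok = False  # timeouts and HTTP errors count as failed transactions
    return time.perf_counter() - start, ok

test_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(timed_request, range(VIRTUAL_USERS * REQUESTS_PER_USER)))
duration = time.perf_counter() - test_start

times = [t for t, _ in results]
successes = sum(ok for _, ok in results)
print(f"TPS:        {successes / duration:.1f}")
print(f"Avg RT:     {sum(times) / len(times) * 1000:.0f} ms")
print(f"Error rate: {(1 - successes / len(results)) * 100:.1f}%")
```

A real load-testing tool adds ramp-up, think time, and pacing on top of this basic loop, but the metrics it reports are computed from the same raw timings.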

The objectives of performance testing typically include:

 

1. Assessing system capabilities:

The load and response time data obtained from testing can validate capacity-planning models and support decision-making (e.g., serving as reference metrics in the specifications of initial versions).

2. Identifying weaknesses in the system:

Controlled loads can be increased to extreme levels to reveal system bottlenecks or weak points, so that the system's weaknesses are known and can be addressed continuously throughout its lifecycle.

3. System tuning:

Repeated testing and validation of system adjustments ensure that the desired performance improvements are achieved.

4. Detecting issues within the software:

Prolonged testing may uncover failures caused by memory leaks, exposing hidden problems or conflicts in the program. Memory trend analysis helps anticipate such outcomes before they become failures (see the sketch after this list).

5. Validating stability and reliability:

Running the system under production-level load for an extended period is the only way to evaluate whether it meets stability and reliability requirements.
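As an illustration of the memory trend analysis mentioned in objective 4, the sketch below fits a least-squares slope to memory samples collected during a soak test. The sample values, sampling interval, and alert threshold are all hypothetical:

```python
def memory_growth_rate(samples_mb, interval_s):
    """Least-squares slope of memory usage over time (MB per second)."""
    n = len(samples_mb)
    xs = [i * interval_s for i in range(n)]
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical RSS samples (MB) taken every 60 s during a long-running test:
samples = [512.0, 514.5, 517.1, 519.8, 522.4, 525.0]
slope = memory_growth_rate(samples, interval_s=60)
if slope > 0.01:  # threshold is illustrative, not a standard value
    print(f"Possible leak: memory growing at {slope * 3600:.1f} MB/hour")
```

A steadily positive slope over hours of testing is a stronger leak signal than any single snapshot, which is why long-duration runs matter here.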

What are performance testing metrics?

A performance testing report includes metrics such as maximum concurrent users, hits per second (HPS), transaction response time, transactions per second (TPS), throughput, CPU usage, physical memory usage, and network traffic utilization. These are known as performance testing metrics, and they fall into two categories: system performance metrics and resource performance metrics.

System performance metrics typically include response time, system processing capacity, throughput, concurrent users, and error rates.

 

  • Response Time: Abbreviated as RT, response time is the total elapsed time from the moment a client sends a request until it receives the corresponding response from the server.
  • System Processing Capacity: The system's ability to process information on its hardware and software platform, measured by the number of transactions it can handle per second. A transaction can be understood in two ways: from a business perspective, it is a complete business process (a collection of user operations or steps); from a system perspective, it is a single request/response exchange. The former is called a business transaction process, while the latter is simply called a transaction. Both kinds of transaction metrics can be used to evaluate the system's processing capacity.
  • Throughput: Throughput refers to the number of requests a system can handle within a unit of time. In the case of concurrent systems, throughput is often used as a performance metric.
  • Concurrent User Count: Concurrent user count refers to the number of users who are logged into the system and performing business operations at the same time.
  • Error Rate: Abbreviated as FR (failure rate), the error rate is the probability of failed transactions under load, calculated as Error Rate = (Number of Failed Transactions / Total Transactions) × 100%. A sketch that computes these system-level metrics from raw transaction records follows this list.
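As a hedged illustration, the sketch below derives average and 95th-percentile response time, TPS, and the FR formula above from a list of transaction records. The (response_time_ms, succeeded) record format and the 10-second window are assumptions made for the example:

```python
def summarize(transactions, window_s):
    """Summarize system performance metrics for one test window."""
    times = sorted(t for t, _ in transactions)
    failed = sum(1 for _, ok in transactions if not ok)
    total = len(transactions)
    p95 = times[int(0.95 * (total - 1))]  # nearest-rank 95th percentile
    return {
        "avg_rt_ms": sum(times) / total,
        "p95_rt_ms": p95,
        "tps": (total - failed) / window_s,      # successful transactions/sec
        "error_rate_pct": failed / total * 100,  # FR = failed / total * 100%
    }

# Hypothetical records from a 10-second window:
records = [(120, True), (135, True), (980, False), (110, True), (143, True)]
print(summarize(records, window_s=10))
```

Percentile response times are usually more informative than the average alone, because a small fraction of slow transactions can hide behind a healthy-looking mean.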

 

Resource performance metrics typically encompass CPU, memory, disk throughput, and network throughput.

 

  • CPU: The Central Processing Unit is a large-scale integrated circuit that serves as the computational and control core of a computer; its main function is to interpret instructions and process data for software. CPU metrics primarily refer to CPU utilization, broken down into user state (user), system state (sys), wait state (wait), and idle state (idle).
  • Memory: Memory is one of the crucial components in a computer and acts as a bridge for communication with the CPU. All program operations in a computer are carried out in memory, making memory performance highly influential on computer operations.
  • Disk Throughput: Disk Throughput, also known as disk I/O throughput, refers to the amount of data passing through a disk unit within a specific time frame, assuming no disk failures occur. Disk metrics mainly include megabytes read and written per second, disk busy rate, disk queue length, average service time, average wait time, and space utilization. The disk busy rate is an essential indicator that directly reflects whether the disk is experiencing a bottleneck. Typically, the disk busy rate should be kept below 70%.
  • Network Throughput: Network throughput refers to the amount of data transmitted through a network within a specific time frame, assuming no network failures occur. The unit of measurement is usually in bytes per second (Byte/s). Network throughput metrics are used to evaluate system demands on network devices or link transmission capacity. When the network throughput metric approaches the maximum capacity of network devices or links, upgrading the network equipment should be considered.
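The sketch below shows one way to sample all four resource categories in a single pass. It assumes the third-party psutil library is installed (pip install psutil); the sampling interval is arbitrary, and because the disk and network counters are cumulative, throughput is computed as a delta over that interval:

```python
import psutil

INTERVAL_S = 5  # sampling interval (assumed)

disk0 = psutil.disk_io_counters()
net0 = psutil.net_io_counters()
cpu = psutil.cpu_times_percent(interval=INTERVAL_S)  # blocks for the interval
disk1 = psutil.disk_io_counters()
net1 = psutil.net_io_counters()

mem = psutil.virtual_memory()
wait = getattr(cpu, "iowait", 0.0)  # wait state is reported on Linux only

print(f"CPU  user={cpu.user:.1f}% sys={cpu.system:.1f}% "
      f"wait={wait:.1f}% idle={cpu.idle:.1f}%")
print(f"MEM  {mem.percent:.1f}% of physical memory in use")
print(f"DISK read {(disk1.read_bytes - disk0.read_bytes) / INTERVAL_S / 2**20:.2f} MB/s, "
      f"write {(disk1.write_bytes - disk0.write_bytes) / INTERVAL_S / 2**20:.2f} MB/s")
print(f"NET  sent {(net1.bytes_sent - net0.bytes_sent) / INTERVAL_S / 1024:.1f} KB/s, "
      f"recv {(net1.bytes_recv - net0.bytes_recv) / INTERVAL_S / 1024:.1f} KB/s")
```

In practice, a monitor like this runs on the server side for the full duration of the load test, so that resource metrics can be correlated with the system performance metrics collected on the client side.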

 

Choosing the appropriate tools for performance testing can significantly streamline the process. WeTest PerfDog is a powerful yet user-friendly performance testing tool that supports all types of applications, including games, mini programs, H5, and websites, and provides accurate, comprehensive performance metric data, making it an ideal solution for product performance optimization.
