Understanding Performance Test and its Metrics
There are many definitions you could draw from the concept of “performance testing”; one that I found brief and simple is:
Performance testing is the process by which software is tested and tuned with the intent of realizing the required performance.
Regardless of the many terms you could relate to “performance” (load, stress, spike, soak, etc.), there are three major categories you should focus on when you do performance testing:
Speed — Does the application respond quickly enough for the intended users?
Scalability — Will the application handle the expected user load and beyond?
Stability — Is the application stable under expected and unexpected user loads?
In order to objectively measure the above categories, you need to carefully identify suitable performance metrics. To give you an overview, here is some useful information from RadView Software’s white paper, Test Metrics – Which Are Most Valuable?
During a test session, virtual clients generate result data (metrics) as they run scenarios against an application. These metrics determine the application’s performance, and provide specific information on system errors and individual functions. Understanding these different metrics will enable you to match them to the application function and build a more streamlined test plan.
Scalability and Performance
1. Hits per Second
– a Hit is a request of any kind made from the virtual client to the application being tested. The higher the Hits Per Second, the more requests the application is handling per second. A virtual client can request an HTML page, image, file, etc.
2. Pages per Second
– measures the number of pages requested from the application per second. The higher the Pages per Second, the more work the application is doing per second.
3. Throughput
– this is an important baseline metric and is often used to check that the application and its server connection are working. Throughput measures the average number of bytes per second transmitted from the application being tested to the virtual clients running the test agenda during a specific reporting interval. This metric is the response data size (sum) divided by the number of seconds in the reporting interval.
4. Rounds
– tells you the total number of times the test agenda was executed versus the total number of times the virtual clients attempted to execute the agenda. The more times the agenda is executed, the more work is done by the test and the application.
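The scalability metrics above can be computed directly from a log of request samples. Here is a minimal sketch; the sample data, field layout, and variable names are illustrative, not part of the white paper:

```python
# Compute hits/sec, pages/sec, and throughput from logged request samples.
# Each sample: (timestamp_sec, kind, response_bytes) -- illustrative data only.
samples = [
    (0.2, "page", 18_000),
    (0.4, "image", 35_000),
    (0.9, "image", 22_000),
    (1.1, "page", 20_500),
    (1.6, "css", 4_000),
]

# Reporting interval: span between the first and last logged request.
interval = max(t for t, _, _ in samples) - min(t for t, _, _ in samples)
interval = interval or 1.0  # guard against a zero-length interval

hits_per_sec = len(samples) / interval                        # every request is a hit
pages_per_sec = sum(1 for _, k, _ in samples if k == "page") / interval
throughput = sum(b for _, _, b in samples) / interval         # bytes/sec, per the definition above

print(f"hits/s={hits_per_sec:.2f} pages/s={pages_per_sec:.2f} throughput={throughput:.0f} B/s")
```

Note that throughput follows the definition given above: total response bytes divided by the seconds in the reporting interval.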
Responses and Availability
1. Hit Time
– hit time is the average time in seconds it took to successfully retrieve an element of any kind (image, HTML, etc.). The time of a hit is the sum of the Connect Time, Send Time, Response Time and Process Time. It represents the responsiveness or performance of the application to the end user.
2. Time to First Byte
– this measurement is important because end users often consider a site malfunctioning if it does not respond fast enough. Time to First Byte measures the number of seconds it takes a request to return its first byte of data to the test software’s Load Generator.
3. Page Time
– page time calculates the average time in seconds it takes to successfully retrieve a page with all of its content. This statistic is similar to Hit Time but relates only to pages. In most cases this is a better statistic to work with because it deals with the true dynamics of the application.
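Since a hit time is defined above as the sum of its four phases, averaging it over a run is straightforward. A short sketch, using invented per-hit timings for illustration:

```python
import statistics

# Per-hit phase timings in seconds (illustrative numbers, not real measurements).
# Per the definition above: hit time = connect + send + response + process.
hits = [
    {"connect": 0.02, "send": 0.01, "response": 0.30, "process": 0.05},
    {"connect": 0.03, "send": 0.01, "response": 0.42, "process": 0.06},
    {"connect": 0.02, "send": 0.02, "response": 0.25, "process": 0.04},
]

hit_times = [sum(h.values()) for h in hits]       # one total per hit
avg_hit_time = statistics.mean(hit_times)          # the Hit Time metric

print(f"average hit time: {avg_hit_time:.3f}s")
```

Page Time would be aggregated the same way, except that each sample covers a full page retrieval (the page plus all of its content) rather than a single element.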
When choosing a performance metric, always consider the type of application you are testing. For an open, public web application, where you expect many concurrent users to hit the system at the same time, Hits per Second would be a valuable metric. For an in-house application, such as an accounting system where you can state explicitly how many clients will use the system, Hits per Second would be largely irrelevant.