A Beginner’s Guide to Interpreting Load Test Results

By: RedLine13

The first time you run a load test using RedLine13 and are presented with the test summary page, you may feel somewhat daunted by the breadth of metrics we provide. In this post, however, we will provide a basic orientation that will quickly allow you to make sense of this information when interpreting your load test results. Since JMeter tests are by far the most popular on our platform, the examples discussed will have special relevance to JMeter users.

Performance at a Glance

One of the first tasks you may find yourself addressing is the debugging of cloud-based load tests. (If you have not yet run your first load test, please check out this brief post.) As you optimize for scale, there are several areas we recommend you analyze to ascertain the health and efficiency of your tests. The first place to look is of course the “Summary” section. Even without analyzing the numbers in depth, this will give you some qualitative information about your test. For instance, if we encounter a relatively large number of errors compared with overall requests, we may find ourselves facing two hypotheses: (a) the test has saturated the capabilities of the endpoints tested; or (b) there is some structural problem with the test itself.

Summary metrics provide key information regarding your load test.

Within the Summary section, the gold-colored boxes in the center display threads (i.e., virtual users), requests, and timing data. Comparing the “failed” or “error” numbers to their respective totals can give you an overall impression of whether your test was successful. The blue boxes along the top row show response-time percentile information. The two grey boxes depict load agent and network health information, while the green box in the lower right corner shows an estimated AWS cost for resources used during the test. This estimated cost always reflects the maximum charge you can expect from AWS; your actual cost will often be significantly lower. (If you have not yet integrated with AWS, please see our guided integration post for easy step-by-step instructions.)
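
If you would like to sanity-check these summary numbers yourself, the same figures can be recomputed from your test's raw results. Here is a minimal Python sketch, assuming a CSV-format JMeter JTL file with the default elapsed and success columns; the filename results.jtl is just a placeholder:

    # Recompute error rate and response-time percentiles from a JMeter
    # CSV-format JTL file. Assumes the default columns "elapsed"
    # (response time in ms) and "success" ("true"/"false").
    import csv

    elapsed, errors = [], 0
    with open("results.jtl", newline="") as f:  # placeholder filename
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                errors += 1

    elapsed.sort()
    total = len(elapsed)
    print(f"requests: {total}, errors: {errors} ({100 * errors / total:.2f}%)")
    for p in (50, 90, 95, 99):
        idx = min(total - 1, int(p / 100 * total))  # nearest-rank index
        print(f"p{p}: {elapsed[idx]} ms")

If the error percentage printed here is high, revisit the two hypotheses above before scaling the test any further.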

Interpreting Errors and Logs

Continuing to diagnose potential problems with your tests, you may jump to the bottom of the page and look at the “Error Tables”. If the test platform (e.g., JMeter) has raised any errors, they will appear here. The content of these errors will give you insight into the nature of potential issues, such as bad host names, test script exceptions, and so on.

If there are too many errors (more than a few different types), they will not all be visible here. At that point, your best bet is to look at the output files. We recommend enabling this option when debugging tests (see below), since output files are only available when it is selected:

Setting the option for saving response output files.

You will have to tick the box to generate the output files before you run your test; otherwise, they will not be saved. Once your test concludes, you can download these files as a zipped archive and analyze them for specific messages that may elucidate a problem.
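
Once you have the zipped archive in hand, a small script can do a first pass of triage for you. The Python sketch below is one way to tally error-like lines across the archive; the archive name, the file extensions scanned, and the matched substrings are assumptions you should adapt to your own output:

    # Scan a downloaded output archive and tally error-like lines by
    # message. Archive name and extensions are assumptions; "Non HTTP
    # response" is a common JMeter error-message prefix.
    import zipfile
    from collections import Counter

    counts = Counter()
    with zipfile.ZipFile("redline13-output.zip") as archive:  # placeholder name
        for name in archive.namelist():
            if not name.endswith((".jtl", ".log")):
                continue
            with archive.open(name) as f:
                for line in f:
                    text = line.decode("utf-8", errors="replace")
                    if "Exception" in text or "Non HTTP response" in text:
                        counts[text.strip()[:120]] += 1  # group by message prefix

    for message, count in counts.most_common(10):
        print(f"{count:6d}  {message}")

A ranked list like this makes it easy to see whether you are looking at one systemic problem or many unrelated ones.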

Analyzing Trends

The graphs we generate can also be helpful in interpreting your load test results. These are available both in near real time while your test is running and as static graphs after your test completes. Here is an example of load agent CPU utilization for a load test running four (4) load agent virtual machines:

Typical load agent CPU utilization profiles.

In this test, we had relatively stable CPU utilization. However, if we had overtaxed our load agents, one way we might tell is consistently high CPU usage approaching 100%. Other similar charts plot additional measures of load agent health, such as available memory and disk space.

Spotting Unhealthy Trends

In contrast, we can compare a different example that shows “unhealthy” CPU utilization patterns. As annotated in the screen capture below, there are multiple times during this two-agent load test when one or both instances are at 100% CPU utilization. When this occurs, our load tests do not behave as expected. As a rule of thumb, it is best to keep CPU utilization at approximately 70% or lower; this ensures that load agent system factors do not significantly impact test performance. Momentary excursions above this value are well tolerated, provided utilization stays below 100% at all times.

Unhealthy CPU utilization profiles.
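
If you have exported the CPU utilization data, checking it against this rule of thumb is straightforward. Below is an illustrative Python sketch using hypothetical (timestamp, percent) samples for two agents; the 70% threshold mirrors the guideline above:

    # Flag load agent CPU samples that breach the ~70% guideline or
    # saturate at 100%. Sample data below is hypothetical.
    THRESHOLD = 70.0  # recommended ceiling for sustained utilization

    def check_agent(samples, name):
        # samples: list of (timestamp_seconds, percent_utilization) pairs
        over = [pct for _, pct in samples if pct > THRESHOLD]
        pegged = [t for t, pct in samples if pct >= 100.0]
        share = 100 * len(over) / len(samples)
        print(f"{name}: {share:.0f}% of samples above {THRESHOLD:.0f}%")
        if pegged:
            print(f"  WARNING: at 100% around t={pegged}s -- results are suspect")

    # Hypothetical samples for a two-agent test:
    check_agent([(0, 35.0), (30, 62.5), (60, 71.0), (90, 68.0)], "agent-1")
    check_agent([(0, 80.0), (30, 100.0), (60, 100.0), (90, 91.0)], "agent-2")

If an agent spends much of the test pegged at 100%, the simplest fix is usually to spread the same thread count across more load agents.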

Other Graphed Metrics

Another category of charts depicts metrics of the test itself. Below is an example of requests sent over time. This may be useful when diagnosing a mismatch between expected and actual request counts, for example.

A typical request rate cross-sectional plot.
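
A series like this can also be derived directly from a JMeter JTL file by bucketing sample timestamps into one-second bins. Here is a minimal Python sketch, assuming the default timeStamp column (epoch milliseconds) and a placeholder filename:

    # Derive a requests-over-time series by bucketing JTL sample
    # timestamps into one-second bins. Assumes JMeter's default
    # "timeStamp" column (epoch milliseconds).
    import csv
    from collections import Counter

    per_second = Counter()
    with open("results.jtl", newline="") as f:  # placeholder filename
        for row in csv.DictReader(f):
            per_second[int(row["timeStamp"]) // 1000] += 1  # epoch ms -> s

    start = min(per_second)
    for second in sorted(per_second):
        print(f"t+{second - start:4d}s  {per_second[second]:5d} req/s")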

Again, all of these charts are generated both in real time while your test is running and in a static archived form at the conclusion of your test. For most charts, you can also download the underlying metrics in CSV format for further analysis on your local machine.
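
As one example of such offline analysis, the sketch below applies a simple moving average to a downloaded chart CSV to smooth out noisy per-second readings. It assumes a two-column (time, value) layout and a placeholder filename, so check your export's actual header before running it:

    # Smooth a downloaded chart CSV with a moving average. Assumes a
    # two-column (time, value) layout, which may differ from the
    # actual export; inspect the file's header first.
    import csv

    times, values = [], []
    with open("chart-export.csv", newline="") as f:  # placeholder filename
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for time, value in reader:
            times.append(time)
            values.append(float(value))

    WINDOW = 5  # samples per moving-average window
    for i in range(len(values) - WINDOW + 1):
        avg = sum(values[i:i + WINDOW]) / WINDOW
        print(f"{times[i + WINDOW - 1]}  {avg:.1f}")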

Now that we have explained the basics of interpreting your load test results, the best way to explore our test analytics further is to try RedLine13 for yourself. We offer a full-featured free trial that you can sign up for today!

2021-04-18