
Performance Testing in a Nutshell

It is very easy to forget about performance testing and its importance while delivering software under tight deadlines. It is also a challenge to convince the client to start performance testing right at the beginning of the project, rather than treating it as a second-class citizen.


Whether we are developing an ecommerce website or a mobile app, we need to be prepared for the traffic that is going to hit us. More importantly, we need to understand, estimate and analyze, based on past and future trends, the level of traffic we should expect, and how efficiently we can serve those users without any dropouts. And if we hit peak loads, how are we going to gracefully handle the additional load? These are some of the questions I start asking myself and the stakeholders at the beginning of a project. The stakeholders then have to take a call on how much to invest in performance testing, weighing the business impact against the cost of delivering to their performance expectations.

Performance testing is an ongoing task throughout the life cycle of the project. It involves not only testers, but also developers and operations, who run and maintain the application to its performance expectations.
 
Performance testing has always been challenging, and equally interesting, to me. It requires a multifaceted skill set: writing test scripts, monitoring and analyzing test results, tweaking the application and repeating the whole process again.
 
As part of performance testing an application, we also carry out load and soak testing at the very least. I will be covering these, along with front-end performance testing, in a separate blog.
 
There are various elements to consider when coming up with a back-end performance test strategy.

Environments

Cloud vs Physical Hardware

Traditionally, running performance tests from local dev boxes was the norm, but that limits you to the performance of the dev box itself. Harnessing the power of cloud computing and using a cloud provider to run your tests is what I find economical and low maintenance.

Another approach is to buy beefy hardware and spin up multiple virtual machines to simulate the tests. This may suit companies that are constrained by VPNs and firewalls and cannot access local environments from outside the network.


Configuration

I would make sure that my environments are configured to what is expected, both in terms of software (OS, application etc.) and hardware (infrastructure, architecture etc.). Ideally, the configuration should be as close to production as possible.

Scaling

We had the capability to automatically scale the number of machines in the environment up or down, based on the traffic. If you are using a cloud-based provider, this is usually a cost-effective solution, where you only pay for what you use. The ability to scale up also gives you a little breathing space when the traffic goes up.
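Most providers let you express this as a simple threshold rule. Here is a minimal, provider-agnostic sketch of the idea; the thresholds, instance limits and the function name are all illustrative, and a real setup would delegate this decision to the provider's own autoscaling service:

```python
# A provider-agnostic sketch of a threshold-based scaling rule.
# The numbers here are illustrative, not recommendations.

MIN_INSTANCES = 2    # always keep enough capacity for baseline traffic
MAX_INSTANCES = 10   # cap the cost of a runaway scale-up

def desired_instance_count(current: int, avg_cpu_percent: float) -> int:
    """Scale up when CPU runs hot, scale down when machines sit idle."""
    if avg_cpu_percent > 70 and current < MAX_INSTANCES:
        return current + 1   # breathing space when the traffic goes up
    if avg_cpu_percent < 30 and current > MIN_INSTANCES:
        return current - 1   # stop paying for capacity you are not using
    return current
```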

Independent Infrastructure

It would be ideal if the environments are on an independent infrastructure. If they are in a shared environment, then we need to analyze how much and how often other systems, outside our realm, impact the performance of the application.

Scripts

Session Handling

Initially, I found it tricky to write performance tests with multiple user logins, as they involved user sessions. On the one hand, I had to capture the dynamically generated sessions; on the other, I had to make sure that the sessions did not clash with each other while the tests were running. Recently, I started to grab the session ID from the browser and inject it into the test script as a parameter value. At the same time, I force the test to use a new browser for every session.
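To illustrate the underlying idea, here is a minimal Python sketch: give every simulated user its own session object, the API-level equivalent of a fresh browser, so the dynamically generated session cookies never clash. The login endpoint and credentials are hypothetical:

```python
import threading
import requests

def run_virtual_user(username: str, password: str) -> None:
    # A fresh Session per user gets its own cookie jar, so session IDs
    # cannot clash across concurrently running virtual users.
    with requests.Session() as session:
        session.post("https://example.com/login",
                     data={"user": username, "pass": password})
        # The dynamically generated session cookie is now held by this
        # session object and sent automatically with every later request.
        session.get("https://example.com/account")

threads = [threading.Thread(target=run_virtual_user, args=(f"user{i}", "secret"))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```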

Caching

While coding, we should make sure that static content is cached to some level. In two of my projects, the static content being served from one of the services was eating up all its memory. As a result, we started observing memory leaks along with performance degradation.

Test Pyramid

We can write and run performance tests at different levels, just like functional tests. We can have performance tests written at a service level, to test a service's individual performance. We can have tests at an inter-service level, to check the performance between a few services. And finally, we can test at a user-journey level, to check the performance of all the services and the system as a whole. Having structured performance tests at different levels greatly helps us in identifying, narrowing down and quickly diagnosing performance issues.
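As a sketch of what two of these levels can look like in practice, here are two Locust user classes: one hits a single service in isolation, the other drives a whole user journey. All the endpoints are hypothetical:

```python
from locust import HttpUser, task, between

class SearchServiceUser(HttpUser):
    """Service level: exercise one service in isolation."""
    wait_time = between(1, 2)

    @task
    def search(self):
        self.client.get("/api/search?q=book")

class PurchaseJourneyUser(HttpUser):
    """Journey level: exercise the system end to end."""
    wait_time = between(1, 5)

    @task
    def browse_and_buy(self):
        self.client.get("/")
        self.client.get("/api/search?q=book")
        self.client.post("/api/cart", json={"sku": "123"})
        self.client.post("/api/checkout")
```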


Monitoring

Monitoring the results of the performance test is equally important. What and how you monitor plays a significant role.


How to Monitor Your Metrics

Ideally, the monitoring tools should sit as close to the application servers as possible, to avoid network lag and the loss of data points. On the other hand, if the monitoring tools share the same resources as the system under test, then that too may affect the performance of the application.
I would avoid including network lag in the metrics: there is only so much we can do about it, and it would also hide the actual problems in the system under test.

What Metrics to Monitor

At an application level, the two main metrics one usually needs to monitor are Response Times (how long it takes to serve a request) and Throughput (how many requests can be served per second). At an infrastructure level, we should be keeping an eye on CPU and memory usage for the various machines and services. Graphite is a tool that we used on several projects for monitoring. Jemalloc, along with JProfiler, is really good for analyzing memory usage in services.
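If your tool only gives you raw response times, both headline metrics are easy to derive yourself. A minimal sketch, with dummy numbers standing in for a real test log:

```python
from statistics import quantiles

# Dummy samples standing in for a load tool's response-time log (ms).
response_times_ms = [120, 135, 150, 180, 210, 95, 400, 130, 160, 145]
test_duration_s = 2.0

cuts = quantiles(response_times_ms, n=100)   # 99 percentile cut points
p50, p95 = cuts[49], cuts[94]                # median and 95th percentile
throughput = len(response_times_ms) / test_duration_s

print(f"p50={p50:.0f} ms  p95={p95:.0f} ms  throughput={throughput:.1f} req/s")
```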

Record Your Metrics

It is very important to keep a record of the different metrics from the test results. The trend helps us trace back and analyze any improvement or deterioration in performance, based on the changes we have made over time.
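Even something as simple as appending each run's headline numbers to a CSV is enough to surface the trend. A minimal sketch; the file name and column layout are illustrative:

```python
import csv
from datetime import date

# Append one row per test run: over time the file becomes the history.
with open("perf_history.csv", "a", newline="") as f:
    csv.writer(f).writerow(
        [date.today().isoformat(), "build-123", 150, 400, 48.5]
    )  # run date, build id, p50 ms, p95 ms, throughput (req/s)
```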

Analyze Your Metrics

Once we have run the performance test and analyzed the results, it is time to tackle the problem. Bear in mind that finding a bottleneck and fixing it may result in another bottleneck surfacing.

Scripting Tools

Like any other development tool on a project, we need to choose the right performance tools carefully. The tools may vary based on the programming language you are using. As with automated functional tests, the higher the level of a performance test, the more difficult it is to analyze a performance issue.
 
Some of the criteria that I would consider while choosing a tool are:

  • Easy to create and maintain user agents
  • Easy to create and maintain tests
  • Debug and validate your scripts through in-built logging
  • Can easily simulate multiple user sessions during the test
  • Ability to ramp up and ramp down users
  • Parameterisation of test data
  • Works for both API and Web
  • Can run across environments
  • Keeps history of the test runs
  • Customized graphical test result reporting with various parameters
  • Good support through email
  • Can be automated to run on CI (Jenkins, TeamCity, GO and New Relic)
  • Can report server metrics (CPU, memory, etc.)
  • Works on mobile
  • Spins up tests in various browsers
  • Simulates different network speeds including 3G

Load Impact is the tool that satisfied most of these criteria. In my previous projects, I have also used Browsermob (Neustar) and JMeter.

Some Lessons to Take Away

Say No to Background Tasks

Application services and databases should not be doing any background tasks (data analysis, generating mass emails, etc.) when their primary purpose is to serve the customer. You should have other instances of these services, ones that are not serving end users, perform the background tasks.

One Parameter Principle

While performance testing, change one parameter at a time. For example, do not change the test script and the application build at the same time; if you then see a discrepancy in performance, you cannot tell whether it was the build or the script that caused it. Hence, create a baseline for every change you make and take baby steps when analyzing your results.

Think Like a Real User

Make sure that the performance test scripts depict actual user behavior. If there is a legacy system, this also helps in comparing the stats generated in production with the stats from the performance environment.
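In Locust, for instance, realistic behavior largely comes down to pauses between actions and weighting tasks the way real users mix them. A minimal sketch; the endpoints and weights are illustrative:

```python
from locust import HttpUser, task, between

class RealisticShopper(HttpUser):
    # Real users pause to read a page; hammering requests back to back
    # produces load patterns that production will never see.
    wait_time = between(2, 8)

    @task(10)   # browsing is roughly ten times as common as buying
    def view_product(self):
        self.client.get("/products/42")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"sku": "42"})
```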

Cache or No Cache

Caching helps increase the performance of a system. At the same time, too much caching will not help either: cache unintelligently and users start seeing stale information. Caching can be considered at several levels, from the content delivery network (CDN), through the application, down to the services and the database.
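At the application level, caching intelligently often just means getting the cache headers right per resource, so the CDN and browsers do the work without serving stale data. A minimal Flask sketch; the endpoints and lifetimes are illustrative:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/catalogue")
def catalogue():
    resp = jsonify(items=["book", "pen"])
    # Changes rarely: let the CDN and browsers cache it for an hour.
    resp.headers["Cache-Control"] = "public, max-age=3600"
    return resp

@app.route("/account/balance")
def balance():
    resp = jsonify(balance=42.0)
    # Per-user and must always be fresh: never cache it anywhere.
    resp.headers["Cache-Control"] = "no-store"
    return resp
```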

Beware of Expensive Queries

Database queries, too, can be expensive. If not given enough attention, they may slow down the application or eat up all of the database's resources. These queries should be monitored and the number of calls minimized.
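Most databases will tell you when a query is about to get expensive. A minimal sketch using SQLite's EXPLAIN QUERY PLAN, with an illustrative schema, showing a full table scan turning into an index lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

query = "SELECT * FROM orders WHERE customer_id = ?"
print(conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())
# Without an index, the plan reports a scan: every row gets read.

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())
# With the index, the plan switches to an index search instead.
```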

Minimize Additional Calls

Sometimes, a section of a page makes extra calls to back-end systems to retrieve additional information, even though not every end user needs to view it. It is better to analyze these cases and wrap such sections under expanding links, so that the additional calls are made only when a user genuinely wants that information and clicks on the link.
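On the back end, this usually means splitting the expensive extras out into their own endpoint, which the front end calls only on that click. A minimal Flask sketch with hypothetical endpoints:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/product/<int:pid>")
def product(pid):
    # Only what every visitor needs: served on every page view.
    return jsonify(id=pid, name="Book", price=9.99)

@app.route("/product/<int:pid>/reviews")
def reviews(pid):
    # The expensive extra call: fetched only when a user expands reviews.
    return jsonify(reviews=["Great read", "Loved it"])
```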
 
In a nutshell, this is an overview of some of the things to look out for while performance testing. I am sure you will come across many other challenges under different project conditions, but the underlying principles I've outlined here remain more or less the same.

This post was designed and produced by Kaifeng Zhang

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
