(photo by Sardinelly)
Testing whether a system can perform the functions specified in the requirements is one thing; testing how well it performs them is another.
A system is of little use if it performs its functions perfectly for a single user but grinds to a halt the moment a second user logs in. The same goes for a single-user desktop application that processes ten items just fine but crashes when processing 100.
So the topic for this blog post is quality-of-service testing, also called non-functional requirement testing, and in particular we’ll take a look at performance, load, and stress testing.
Performance testing in software development is, intuitively enough, the act of testing how a piece of software performs under a given workload. Since a light workload rarely reveals anything interesting, most performance testing is done at above-normal and extreme workloads.
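To make this concrete, here is a minimal sketch of the idea in Python. The `handle_request` function is a hypothetical placeholder for whatever operation you are testing; a real test would call your actual system instead. The sketch fires a fixed workload from several concurrent threads and records each response time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Stand-in for the operation under test (hypothetical placeholder)."""
    time.sleep(0.01)  # simulate 10 ms of work
    return n * n

def timed_request(n):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    handle_request(n)
    return time.perf_counter() - start

def run_workload(users, requests_per_user):
    """Fire the whole workload using `users` concurrent threads."""
    total = users * requests_per_user
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(timed_request, range(total)))

timings = run_workload(users=5, requests_per_user=4)
print(f"average response time: {sum(timings) / len(timings) * 1000:.1f} ms")
```

Raising `users` and `requests_per_user` well beyond the expected traffic is what moves this from a smoke test toward the "extreme workload" end of performance testing.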
The difference between load and stress testing can be a little confusing, and I’ve read different definitions out there. For the purposes of this post, let’s just use the definitions given by Wikipedia.
Load testing is the testing conducted to understand the system’s behavior under a specific expected load.
Stress testing is the testing used to understand the upper limits of the system’s capacity.
So, stress testing is load testing pushed to the limit.
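That "pushed to the limit" idea can be sketched as a ramp: keep increasing the simulated user count until response times blow past some threshold. Everything here is an assumption for illustration; the semaphore stands in for a server that can only process four requests at once, and the 50 ms latency budget is a made-up service-level target.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

SERVER_CAPACITY = threading.Semaphore(4)  # pretend the server handles 4 requests at once
LATENCY_BUDGET = 0.05                     # hypothetical SLA: 50 ms

def call_service():
    """Simulated request: it queues once the fake server is saturated."""
    start = time.perf_counter()
    with SERVER_CAPACITY:
        time.sleep(0.01)                  # 10 ms of actual work
    return time.perf_counter() - start    # includes time spent queuing

def avg_latency(users):
    """Average response time when `users` requests arrive at once."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(call_service) for _ in range(users)]
        return sum(f.result() for f in futures) / users

# Stress test: double the load until the latency budget is exceeded.
users = 1
while users < 256 and avg_latency(users) <= LATENCY_BUDGET:
    users *= 2
print(f"latency budget exceeded near {users} concurrent users")
```

The loop’s exit point is the interesting number: it approximates the system’s capacity limit, which is exactly what stress testing is after.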
(If you have a minute or two, you may also want to read the Wikipedia article discussing endurance and spike testing, two other types of testing that are easily confused with load and stress testing.)
How to do these tests?
Paying people to manually test the system at the same time, although possible, is definitely not an economical or scalable way to do performance testing.
There are tools, both commercial and open-source, specifically developed for performance testing. Some of them are:
- Apache JMeter (open-source)
- loadUI (open-source)
- IBM Rational Performance Tester (commercial)
- HP LoadRunner (commercial)
These tools support recording test scripts, which can then be run simultaneously from different computers and configured with the number of connections and the interval between requests. The results can then be generated as graphs and reports.
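The moving parts such tools expose can be mimicked in a short sketch. The parameter names below are hypothetical, chosen to mirror the knobs described above (number of connections, think-time interval between requests); the `time.sleep` inside `scripted_user` is a placeholder for replaying a recorded request.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test-plan parameters, analogous to what the tools above expose
CONNECTIONS = 8             # number of simulated users running the script
ITERATIONS = 5              # requests per user
THINK_TIME = (0.01, 0.03)   # random pause between requests, in seconds

def scripted_user(user_id):
    """Replay a recorded script: issue requests with a think-time pause between them."""
    times = []
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        time.sleep(0.005)   # placeholder for the recorded request
        times.append(time.perf_counter() - start)
        time.sleep(random.uniform(*THINK_TIME))
    return times

with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    all_times = [t for user in pool.map(scripted_user, range(CONNECTIONS)) for t in user]
print(f"{len(all_times)} samples collected")
```

A real tool adds what this sketch lacks: distribution across machines, protocol-level recording, and report generation.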
But wait: what are the parameters to test?
The most common metrics in performance testing are:
- Response times
- Concurrency supported
- Records processed
- Hardware usage
The first metric is what the tools report after a performance test has been run. The other three can be measured separately during the test.
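One caveat about response times: averages hide outliers, which is why reports usually include percentiles as well. A minimal sketch, using made-up samples (the 230 ms value is a deliberately planted outlier):

```python
# Hypothetical response-time samples in milliseconds, as a load tool might report them
samples = [12, 15, 11, 230, 14, 13, 17, 16, 12, 18]

def percentile(values, pct):
    """Nearest-rank percentile: the value below which roughly pct% of samples fall."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

mean = sum(samples) / len(samples)
p95 = percentile(samples, 95)
print(f"mean={mean:.1f} ms, p95={p95} ms")
```

Here the single slow request drags the mean up only modestly, while the 95th percentile exposes it outright, so a report showing both tells you far more than either number alone.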
I’m ready to do performance test, let’s do it!
Having the tools and knowing what to test are good, yet there are some challenges that still need to be tackled.
1. Different types of platforms
The way we test a desktop application vs. a web application vs. a mobile application can differ significantly, because the platforms on which users access them are different.
2. Source of the bottleneck
As applications usually consist of different components and layers (both software and hardware), pinpointing the source of a performance issue helps solve the problem quickly. Partitioning the system and measuring its performance at each layer is one way to achieve this.
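One lightweight way to do that partitioning is to time each layer boundary separately. The layer names and sleep durations below are invented for illustration; in real code you would wrap the actual calls at each boundary.

```python
import time
from contextlib import contextmanager

layer_times = {}

@contextmanager
def measure(layer):
    """Accumulate wall-clock time spent in one layer of the stack."""
    start = time.perf_counter()
    try:
        yield
    finally:
        layer_times[layer] = layer_times.get(layer, 0.0) + time.perf_counter() - start

def handle_request():
    # Hypothetical layers; each sleep stands in for real work at that layer.
    with measure("web"):
        time.sleep(0.002)       # rendering, serialization, ...
    with measure("service"):
        time.sleep(0.003)       # business logic
    with measure("database"):
        time.sleep(0.010)       # queries -- the simulated bottleneck

handle_request()
bottleneck = max(layer_times, key=layer_times.get)
print(f"slowest layer: {bottleneck}")
```

Once the per-layer numbers are in hand, the slowest layer is where the tuning effort should go first.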
3. Various test case combinations
Not all functions of the system perform at the same speed. Trying various combinations of test cases is necessary to see how performance may degrade when different functions are executed simultaneously. This can also help uncover unexpected performance problems in the scenarios users run most often.
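A common way to build such combinations is a weighted operation mix, where each simulated request picks an operation with roughly the frequency real users would. The operation names, weights, and costs below are all assumptions for illustration:

```python
import random
import time

# Hypothetical operation mix: (weight, simulated cost in seconds)
OPERATIONS = {
    "browse":   (0.70, 0.002),
    "search":   (0.20, 0.005),
    "checkout": (0.10, 0.010),
}

def run_mixed_workload(n_requests, seed=42):
    """Issue n_requests, each choosing an operation by its weight, and time them."""
    random.seed(seed)   # fixed seed so the mix is reproducible across runs
    names = list(OPERATIONS)
    weights = [OPERATIONS[name][0] for name in names]
    per_op = {name: [] for name in names}
    for _ in range(n_requests):
        op = random.choices(names, weights=weights)[0]
        start = time.perf_counter()
        time.sleep(OPERATIONS[op][1])   # placeholder for calling the real function
        per_op[op].append(time.perf_counter() - start)
    return per_op

results = run_mixed_workload(50)
for op, times in results.items():
    if times:
        print(f"{op}: {len(times)} calls, avg {sum(times) / len(times) * 1000:.1f} ms")
```

Varying the weights between runs is a cheap way to explore how the system behaves when the traffic mix shifts away from the expected one.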
As with functional requirement testing, performance testing should not be an ad-hoc activity conceived only after the software has been built.
I once worked on a project where more than six months of development work was scrapped because the system didn’t perform as it was supposed to. Beyond the monetary loss, this also took a toll on the development team’s morale.
Ideally, the performance testing plan should be integrated into every iteration so that performance problems are caught early in development.
Happy performance testing! 🙂