Workflow example

Introduction

If you develop a web application, a backend server, or even a complicated distributed system, you might be interested in regular performance testing of the builds produced by your CI process, both to prevent performance degradations in releases and to track performance improvements. This workflow example explains all the steps needed to set that up. A web application testing scenario is taken as the example; although the performance tests themselves will differ from system to system, the overall performance CI workflow will be very similar in all cases.

Let’s assume you have a regular (daily) build of your web project deployed at www.example.com and want to be sure that the performance of your application pages is still fine and does not degrade after a batch of daily commits.

Let’s imagine you are mostly interested in testing the following pages:

  • www.example.com/
  • www.example.com/news
  • www.example.com/blog
  • www.example.com/about

and want to measure latency and throughput once a day.

Installing the perftracker server

In this example, we will assume you already have python3 installed and show how to install the latest perftracker version from git and run it on top of a local SQLite database:

git clone https://github.com/perfguru87/perftracker.git
cd perftracker
pip3 install -r ./requirements.txt
python3 ./manage.py migrate
python3 ./manage.py createsuperuser
python3 ./manage.py runserver 0.0.0.0:8080

Open http://127.0.0.1:8080 in your browser; you should see the perftracker home page.

Please read the perftracker README.md for more advanced installation and deployment options.

Installing the perftracker client

The perftracker client libraries must be installed on every client machine where you are going to run performance tests:

pip3 install perftrackerlib

Writing the performance tests

Now it’s time to write your performance tests or to wrap existing ones. To keep this example simple, we’ll take the standard third-party ab (ApacheBench) performance test, which is shipped as part of the standard httpd-tools package available in any Linux distribution.

First, let’s run it manually and ask it to send 100 requests (-n 100) to http://www.example.com with a concurrency of 1 (-c 1) and keep-alive (-k) enabled:

$ ab -n 100 -c 1 -k http://www.example.com/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.example.com (be patient).....done


Server Software:        ECS
Server Hostname:        www.example.com
Server Port:            80

Document Path:          /
Document Length:        1270 bytes

Concurrency Level:      1
Time taken for tests:   11.845 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Keep-Alive requests:    100
Total transferred:      162519 bytes
HTML transferred:       127000 bytes
Requests per second:    8.44 [#/sec] (mean)
Time per request:       118.447 [ms] (mean)
Time per request:       118.447 [ms] (mean, across all concurrent requests)
Transfer rate:          13.40 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1  11.6      0     116
Processing:   116  117   1.5    117     124
Waiting:      116  117   1.5    117     124
Total:        116  118  11.6    117     232

Percentage of the requests served within a certain time (ms)
  50%    117
  66%    117
  75%    118
  80%    118
  90%    121
  95%    121
  98%    124
  99%    232
 100%    232 (longest request)

TODO
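
As a starting point, here is a minimal sketch of how the ab run above could be wrapped in Python: it launches ab via subprocess and extracts the throughput ("Requests per second") and the mean latency ("Time per request") from its text output. The run_ab() helper and the returned dictionary layout are illustrative assumptions and are not part of the perftracker client API.

#!/usr/bin/env python3
# Minimal ab (ApacheBench) wrapper sketch; run_ab() and the result layout
# are illustrative assumptions, not part of the perftracker client API.
import re
import subprocess

def run_ab(url, requests=100, concurrency=1):
    """Run ab against 'url' and return throughput and mean latency."""
    out = subprocess.run(
        ["ab", "-n", str(requests), "-c", str(concurrency), "-k", url],
        check=True, capture_output=True, text=True).stdout

    # ab prints lines like:
    #   Requests per second:    8.44 [#/sec] (mean)
    #   Time per request:       118.447 [ms] (mean)
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
    lat = float(re.search(r"Time per request:\s+([\d.]+) \[ms\] \(mean\)", out).group(1))

    return {"url": url, "requests_per_sec": rps, "latency_ms": lat}

if __name__ == "__main__":
    print(run_ab("http://www.example.com/"))

Such a wrapper turns the ab text report into machine-readable numbers per page, which is what the suite and upload steps below need.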

Combining the individual tests into a performance test suite

TODO
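
As a sketch, a suite can simply iterate over the pages listed in the introduction and collect one result per page by reusing the run_ab() wrapper above (assumed here to be saved as ab_wrapper.py). The script below is an illustrative assumption, not the perftracker suite format.

#!/usr/bin/env python3
# Minimal suite sketch: run ab against every page of interest and print the
# collected metrics; layout and file names are illustrative assumptions.
from ab_wrapper import run_ab  # the wrapper sketched above, saved as ab_wrapper.py

PAGES = [
    "http://www.example.com/",
    "http://www.example.com/news",
    "http://www.example.com/blog",
    "http://www.example.com/about",
]

def run_suite():
    results = [run_ab(url) for url in PAGES]
    for r in results:
        print("%-35s %8.2f req/s %9.1f ms" %
              (r["url"], r["requests_per_sec"], r["latency_ms"]))
    return results

if __name__ == "__main__":
    run_suite()

Keeping all pages in one suite run means every daily run contains the same set of tests, which makes build-to-build comparison straightforward.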

Launching your test suite by cron

TODO
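
As a sketch, the suite script can be scheduled with a daily crontab entry on the client machine; the time, script path, and log path below are illustrative assumptions.

# run the performance suite every day at 03:00 (illustrative paths)
0 3 * * * python3 /opt/perftests/run_suite.py >> /var/log/perftests.log 2>&1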

Manual performance comparison across several application builds

TODO

Automated performance regression tracking

TODO