Portal Server User Guide

Thank you for using the Real Load product.

On this first page of the user guide you will learn:

  • How to perform a simple HTTP/S test
  • The basic concepts of the product
  • How to determine the system capacity of a stressed server

Projects Menu

[Screenshot: Projects menu]

After you have signed in to the portal server, you will see a navigation bar whose first menu on the left is Projects. This is a file browser that displays all of your files. The view is divided into Projects and Resource Sets, which contain the files required to execute your tests as well as the measured test results. You can also store additional files in a Resource Set, such as instructions on how a test should be performed.

Among other things, you can:

  • Create, clone and delete projects.
  • Create, clone and delete resource sets (subdirectories of projects).
  • Upload and download files, create new files, display the content of files, edit and delete files.

There is also a recycle bin from which you can restore deleted projects, resource sets and files.

[Screenshot: recycle bin]

Measuring Agents and Proxy Recorders

[Screenshot: Measuring Agents and Proxy Recorders menu]

In order to perform a test you need at least one “Measuring Agent” (load generator). If you want to test entire Web pages, you also need an “HTTP/S Proxy Recorder” to record the HTTP/S calls of the Web pages.

Ready-to-use Amazon EC2 instances of these two components are available, which you can start yourself. Alternatively, you can install and operate “Measuring Agents” and “HTTP/S Proxy Recorders” on your own systems.

After you have started a “Measuring Agent” or an “HTTP/S Proxy Recorder”, you can register it in the portal server and check its reachability with an application-level “ping”.

[Screenshot: registering a Measuring Agent]

[Screenshot: application-level ping of a Measuring Agent]

HTTP Test Wizard

[Screenshot: HTTP Test Wizard]

Tests can be created in a number of different ways. You can even develop test programs from scratch that measure anything and use any protocol.

To keep things simple at first, we introduce the “HTTP Test Wizard”, with which you can easily define HTTP/S sessions and run them as tests.

Test Preparation

  1. Navigate to the “Projects Menu” and create a new Project and a new Resource Set.
  2. Navigate to the “HTTP Test Wizard”.
  3. Create a New Session in the “HTTP Test Wizard”.
  4. Save the New Session to the Project and Resource Set which you have created before.

[Screenshot: saving a new session]

You now have an empty session to which you can add HTTP requests (URL calls).

Test Creation

Now add one or more URLs to the test. If you want to subdivide the URLs into individual groups, first add a “Measuring Group”, then add the associated URLs, and then add the next “Measuring Group” with its corresponding URLs.

If you define a “Measuring Group”, the response time of the group is measured separately, across all URLs of the group.

[Screenshot: adding a Measuring Group]

Entering a URL is easy. Select the HTTP method (GET, POST, …) and enter the absolute URL. If you want to execute an HTTP POST method, first select the request's content type and then enter the data of the POST request. Finally, click the “Add URL” button at the bottom of the dialog.
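
Conceptually, each URL entry is nothing more than a plain HTTP request. As a rough illustration, here is a minimal Java sketch using java.net.http with a hypothetical URL and hypothetical form data (this is not the code the wizard generates):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class UrlCallSketch {
      public static void main(String[] args) throws Exception {
          HttpClient client = HttpClient.newHttpClient();

          // A GET call, as added via "Add URL" with method GET
          HttpRequest get = HttpRequest.newBuilder()
                  .uri(URI.create("https://www.example.com/"))      // hypothetical URL
                  .GET()
                  .build();

          // A POST call: select the content type first, then enter the request data
          HttpRequest post = HttpRequest.newBuilder()
                  .uri(URI.create("https://www.example.com/login")) // hypothetical URL
                  .header("Content-Type", "application/x-www-form-urlencoded")
                  .POST(HttpRequest.BodyPublishers.ofString("user=alice&password=secret"))
                  .build();

          HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());
          System.out.println("HTTP status: " + response.statusCode());
      }
  }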

After you’ve created the test, it looks similar to this:

[Screenshot: the completed test in the HTTP Test Wizard]

Save the test again (save session) and then debug the test.

Debugging the Test

In order to debug a test, at least one “Measuring Agent” must be available, because the debugger runs remotely on a measuring agent.

If you click on “Debug Session”, the test is automatically transmitted to one of your measuring agents and the debugger is started. If you have registered several measuring agents, you can select at the top right of the debugger the measuring agent on which the debug session will be executed.

[Screenshot: debugger]

During debugging, pay close attention to the HTTP status codes you get from the URLs. For example, if you receive a 404 status code (not found), you have probably entered a faulty URL. In such a case, cancel the debug session, correct the URL and then start the debugger once again.

As you may have already noticed, the HTTP status codes of the URLs have not yet been checked by the debugger. Return to the “HTTP Test Wizard” and configure the expected HTTP status code for each URL (section “Verify HTTP Response”). Don't forget to click the “Modify URL” button at the bottom of the dialog.

[Screenshot: “Verify HTTP Response” settings]
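
The check itself is simple: the status code returned by the server is compared against the configured value. A minimal sketch of the idea (a hypothetical helper, not the wizard's generated code):

  // Hypothetical helper mirroring "Verify HTTP Response": fails the URL call
  // if the returned status code differs from the configured one.
  static void verifyHttpStatus(int expectedStatus, int actualStatus) {
      if (actualStatus != expectedStatus) {
          throw new AssertionError("HTTP status verification failed: expected "
                  + expectedStatus + ", got " + actualStatus);
      }
  }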

Debug your test one last time and then save it.

Generating the Code

Generating the code is pretty easy. First click on “Generate Source Code” and then on “Compile & Generate JAR”.

[Screenshot: “Generate Source Code”]

[Screenshot: “Compile & Generate JAR”]

Then click on “Define New Test”.

Defining the Test

Defining a test with the “HTTP Test Wizard” is also quite easy. Optionally, you can enter a test description here. Then click on “Define Test”.

[Screenshot: “Define Test” dialog]

After you have clicked on “Define Test”, the test is created and the view changes from the “HTTP Test Wizard” menu to the “Tests” menu.

Tests Menu

The Tests menu shows all tests that you have defined, across all Projects and Resource Sets. You can filter the view by Project and Resource Set, and sort the tests in different ways.

[Screenshot: Tests menu]

Note that a test is something like a bracket that only contains references to the files required for the test execution. There are no files stored inside the test itself.

On the one hand, this means that a test becomes invalid/corrupted if you delete the corresponding resource files of the test in the Projects menu. On the other hand, it also means that all changes to files saved in the tree of the Projects menu are immediately applied to the test.

Each test has a base location (a Project and Resource Set) from which it was defined and where the test results are also saved.

A test itself can reference its resource files in two ways:

  • Resource Files that reside in the base location of the test (the test core files).
  • Referenced Files that are located in other Projects / Resource Sets. These are typically libraries, plug-ins and input files which are shared by several different tests.

Defining a Test Job

After you have clicked on “Define Test Job” for a test and then on “Continue”, the intermediate “Define Test Job” menu is displayed.

[Screenshot: “Define Test Job” menu]

In contrast to a “test”, all referenced files of the test are copied into the “test job”. For this reason there is also the option “Synchronize Files at Start of Job”, which you should always leave switched on. Under “Additional Job Arguments” you can enter test-specific command line arguments that are passed directly to the test program or test script. Usually, however, you do not need to enter anything here.

After you have clicked on “Define Load Test Job” the test job is created and the view changes from the “Tests” menu to the “Test Jobs” menu.

Test Jobs Menu

In this menu all test jobs are displayed with their states. The green dot at the top right next to the title indicates that all measuring agents that are set to active are reachable. The color of the dot changes to yellow or red if one or more measuring agents cannot be reached / are not available.

[Screenshot: Test Jobs menu]

A test job can have one of the following states:

  • invalid: The job is damaged and can only be deleted.
  • defined: The job is defined locally in the portal server, but has not yet been transmitted to a measuring agent. Jobs in this state can still be modified.
  • submitted: The job was transmitted to a measuring agent, but is not yet ready to be started.
  • ready to run: The job on the measuring agent is ready to start (the corresponding data collector process is already running on the measuring agent, but the job itself is not yet started).
  • start failed: The start of the job has failed.
  • running: The job is currently running and can be monitored in real time.
  • completed: A job that was previously running is now completed (done).

As soon as a job is in the “defined” state, it has a “local job Id”. Once the job has been transmitted to a measuring agent, it additionally has a “remote job Id”.

Starting a Test Job

After you have clicked on “Start Test Job” for a test job, you can select/modify the measuring agent on which the job will be executed and configure the job settings.

[Screenshot: “Start Test Job” dialog]

Input Fields:

  • Measuring Agent: The measuring agent on which the test is executed.
  • Number of Users: The number of simulated users that are started.
  • Max. Test Duration: The maximum test duration.
  • Max. Loops per User: The maximum number of sessions executed per user (the number of iterations of the test per user). Each time a new session of a user is started, the user’s session context is reset.
  • Loop Iteration Delay: The delay time after a session of a user has ended until the next session of the user is started.
  • Ramp Up Time: The length of time at the beginning of the test until all simulated users are started. Example: with 20 simulated users and a ramp-up time of 10 seconds, a new user is started every 0.5 seconds.
  • Additional Arguments: Additional values which are passed on the command line when the test script is started. These arguments are test-specific. For tests created with the “HTTP Test Wizard” you can specify, for example, the following values: “-tcpTimeout 8000 -sslTimeout 5000 -httpTimeout 60000” (TCP connect timeout / SSL handshake timeout / HTTP processing timeout), which are applied by the executed URL calls and override the default values.
  • Debug Execution: This option causes detailed information to be written to the log file of the test, for example variable values extracted from input files or from HTTP responses, as well as variable values assigned at runtime. Only activate this option if you have problems with the execution of the test.
  • Debug Measuring: Causes the Data Collector of the Measuring Agent to write the JSON objects of the DKFQS All Purpose Interface to its log file. This option can be enabled to debug self-developed tests that have been written from scratch.

Normally you do not need to enter any “Additional Arguments”, and you can leave “Debug Execution” and “Debug Measuring” switched off.
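
To make these settings more tangible, the following is a simplified, self-contained Java sketch of the execution model described above (an illustration with assumed values, not the product's scheduler): each simulated user is started during the ramp-up time and then repeats its session until “Max. Loops per User” or “Max. Test Duration” is reached, pausing for the “Loop Iteration Delay” between sessions. Any “Additional Arguments” would simply arrive in the String[] args of such a program.

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  public class JobSettingsSketch {
      static final int  NUMBER_OF_USERS         = 20;      // "Number of Users"
      static final long MAX_TEST_DURATION_MS    = 60_000;  // "Max. Test Duration"
      static final int  MAX_LOOPS_PER_USER      = 100;     // "Max. Loops per User"
      static final long LOOP_ITERATION_DELAY_MS = 1_000;   // "Loop Iteration Delay"
      static final long RAMP_UP_TIME_MS         = 10_000;  // "Ramp Up Time"

      public static void main(String[] args) throws InterruptedException {
          long testEnd = System.currentTimeMillis() + MAX_TEST_DURATION_MS;
          ExecutorService pool = Executors.newFixedThreadPool(NUMBER_OF_USERS);
          for (int u = 0; u < NUMBER_OF_USERS; u++) {
              final int user = u;
              pool.submit(() -> {
                  try {
                      // Ramp up: with 20 users and 10 seconds, a new user starts every 0.5 s
                      Thread.sleep(user * RAMP_UP_TIME_MS / NUMBER_OF_USERS);
                      for (int loop = 0; loop < MAX_LOOPS_PER_USER
                              && System.currentTimeMillis() < testEnd; loop++) {
                          runSession(user); // one iteration; the user's session context is reset here
                          Thread.sleep(LOOP_ITERATION_DELAY_MS);
                      }
                  } catch (InterruptedException ignored) {
                  }
              });
          }
          pool.shutdown();
          pool.awaitTermination(2 * MAX_TEST_DURATION_MS, TimeUnit.MILLISECONDS);
      }

      static void runSession(int user) {
          System.out.println("user " + user + ": executing one session (all URL calls of the test)");
      }
  }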

After clicking on “Start Test Job”, the job is started on the measuring agent and the status of the job changes to “running”. Then click on “Monitor Jobs”.

[Screenshot: starting the test job]

Real-Time Monitor Menu

The real-time display shows all currently running jobs including their measured values and measured errors.

[Screenshot: Real-Time Monitor menu]

You can also suspend a running job for a while and resume it later. However, this has no effect on the “Max. Test Duration”.

[Screenshot: suspending and resuming a job]

After the job is completed, you can click on “Analyze Result”. The view then changes to the “Test Results” menu.

[Screenshot: “Analyze Result”]

Test Results Menu

The “Test Results” menu is a workspace into which you can load multiple test results and switch back and forth between them. As in the “Real-Time Monitor” menu, all measured values and all measured errors are displayed. In addition, this menu also displays percentile statistics and diagrams of error types.

This menu also enables you to combine several test results into a so-called “load curve”, and thus to determine the maximum number of users that a system such as a Web server can handle (see next chapter).

[Screenshot: Test Results menu]

The Summary Statistic of a test result contains some interesting values:

  • Avg. Measured Samples per Second: The number of successfully measured values per second, counted over the whole test (in other products also called “hits per second”). For example, 120,000 successful samples measured during a 10-minute test correspond to an average of 200 samples per second.
  • Max. Concurrent Users: The maximum number of concurrent users actually reached during the test (which may differ from the “Number of Users” defined when starting the test).
  • Max. Pending Samples: The maximum number of requests to the server for which no response had yet been received, measured over the whole test (the maximum traffic jam of requests).
  • Average CPU Load: The average CPU load in percent on the measuring agent (load generator), captured during the execution of the test. If this value is greater than 95%, the test is invalid because the measuring agent itself was overloaded.

Since several thousand to several million response times can be measured in a very short time during a test, the successfully measured response times are summarized in the response time diagrams at 4-second intervals. For this reason, a minimum value, an average value and a maximum value are displayed for each 4-second interval in such diagrams.

However, this summarization is not performed for percentile statistics or for measured errors: there, every single measured value is taken into account.
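
To make the distinction concrete, here is a small, self-contained Java sketch (illustrative only, not the product's implementation; the nearest-rank method for the 95% percentile is an assumption) that summarizes response times into 4-second intervals for the diagrams while computing the percentile from every single measured value:

  import java.util.*;

  public class AggregationSketch {
      public static void main(String[] args) {
          // (timestampMillis, responseTimeMillis) pairs of successfully measured samples
          long[][] samples = {
              {1_000, 180}, {2_500, 210}, {5_200, 190}, {6_100, 950}, {9_000, 205}, {12_500, 230}
          };

          // Diagram view: min / avg / max per 4-second interval
          Map<Long, List<Long>> buckets = new TreeMap<>();
          for (long[] s : samples) {
              long interval = s[0] / 4_000; // 4-second interval index
              buckets.computeIfAbsent(interval, k -> new ArrayList<>()).add(s[1]);
          }
          for (Map.Entry<Long, List<Long>> e : buckets.entrySet()) {
              LongSummaryStatistics st = e.getValue().stream()
                      .mapToLong(Long::longValue).summaryStatistics();
              System.out.printf("interval %d: min=%d ms, avg=%.0f ms, max=%d ms%n",
                      e.getKey(), st.getMin(), st.getAverage(), st.getMax());
          }

          // Percentile view: every single measured value is taken into account
          long[] sorted = Arrays.stream(samples).mapToLong(s -> s[1]).sorted().toArray();
          int rank = (int) Math.ceil(0.95 * sorted.length) - 1; // nearest-rank 95% percentile
          System.out.println("95% percentile (all values): " + sorted[rank] + " ms");
      }
  }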

Determining System Capacity

The maximum capacity of a system, such as the maximum number of users that a web server can handle, can be determined by a so-called “load curve”.

To obtain such a load curve, you must repeat the same test several times, increasing the number of users with each run; for example, a test series with 10, 50, 100, 200, 400, 800, 1200 and 1600 users.

The easiest way to repeat a test is to clone a test job. You can then enter the higher number of users when starting the test.

[Screenshot: cloning a test job]

A measured load curve looks like this, for example:

[Screenshot: measured load curve]

As you can see, the throughput of the server increases linearly up to 400 users, with the response times remaining more or less the same (Avg. Passed Session Time). Then, with 800, 1200 and 1600 users, at first only isolated errors are measured, then many errors, with the response times now increasing sharply.

This means that the server can serve up to 400 users without any problems. But could you operate the server with 800 users if you accept longer response times? With 800 users, 745,306 URL calls were successfully measured, with only 50 errors occurring (an error rate of under 0.01%). To find out, let's compare the detailed response times of “Page 1” at 400 users with those at 800 users.

Response Times of “Page 1” at 400 Users: [Screenshot: response time diagram]

Response Times of “Page 1” at 800 Users: [Screenshot: response time diagram]

The 95% percentile value at 400 users is 224 milliseconds and increases to 1952 milliseconds at 800 users. You could argue that the pages simply take longer. However, look at the red curve of the outliers: at 400 users there is only a single outlier slightly above 1 second, while at 800 users outliers frequently exceed 8 seconds. Conclusion: the server cannot be operated with 800 users because it is then overloaded.

Now let’s do one last test with 600 users. Result:

[Screenshot: load curve including the 600-user test]

The throughput of the server at 600 users is a little higher than at 400 users and also a little higher than at 800 users. No errors were measured.

Response Times of “Page 1” at 600 Users: [Screenshot: response time diagram]

The 95% percentile value at 600 users is 650 milliseconds, and there are only two outliers slightly above one second. Final conclusion: the server can serve up to 600 users, but no more.

Troubleshooting a Test

In rare cases it can happen that a test does not measure anything (neither measurement results nor errors). In this case you should either wait until the test is finished or stop it directly in the “Real-Time Monitor” menu.

You can then acquire the test log files in the “Test Jobs” menu and search them for errors.

[Screenshot: acquiring the test log files]

[Screenshot: viewing a test log file]


Related Pages

  • AWS Measuring Agents: How to locate and use AWS-based Measuring Agents.
  • HTTP Test Wizard
  • HTTP/S Remote Proxy Recorder