AWS Measuring Agents
How to locate and use AWS-based Measuring Agents
Thank you for using the Real Load product.
On this first page of the user guide you will learn:
After you have signed in to the portal server, you will see a navigation bar whose first menu on the left is Projects. This is a file browser that displays all of your files. The view is divided into Projects and Resource Sets, which contain the files for executing your tests as well as the measured test results. You can also store additional files in a Resource Set, e.g. instructions on how a test should be performed.
Among other things, you can:
There is also a recycle bin from which you can restore deleted projects, resource sets and files.
Tests can be created in a number of different ways. You can even develop test programs from scratch that measure anything and use any protocol.
To keep things simple as a first step, we introduce the “HTTP Test Wizard”, with which you can easily define HTTP/S sessions and run them as tests.
You now have an empty session to which you can add HTTP requests (URL calls).
Now add one or more URLs to the test. If you want to subdivide the URLs into individual groups, first add a “Measuring Group”, then add its associated URLs, then add the next “Measuring Group” with its corresponding URLs, and so on.
If you define a “Measuring Group”, the (separately measured) response time of the group is measured over all URLs of the group.
Entering a URL is easy. Select the HTTP method (GET, POST, …) and enter the absolute URL. If you want to execute an HTTP POST method, first select the request's content type and then enter the data of the POST request. Finally, click on the “Add URL” button at the bottom of the dialog.
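For orientation, such a URL definition corresponds conceptually to a plain HTTP call like the following minimal sketch. It uses the standard java.net.http API rather than the wizard's generated code, and the URL, content type and form data are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostCallSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // HTTP POST with method, content type and request data,
        // matching the three inputs of the "Add URL" dialog
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/login"))             // absolute URL (placeholder)
                .header("Content-Type", "application/x-www-form-urlencoded")  // request content type
                .POST(HttpRequest.BodyPublishers.ofString("username=alice&password=secret"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP status: " + response.statusCode());
    }
}
```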
After you’ve created the test, it looks similar to this:
Save the test again (save session) and then debug the test.
In order to debug a test, at least one “Measuring Agent” must be available. This is because it is a remote debugger.
If you click on “Debug Session”, the test is automatically transmitted to one of your measuring agents and the debugger is started. If you have registered several measuring agents, you can select at the top right of the debugger which measuring agent the debug session will be executed on.
During debugging, please pay close attention to the HTTP status code you get from the URLs. For example, if you receive a 404 status code (not found), you have probably entered an invalid URL. In such a case, cancel the debug session, correct the URL and then start the debugger once again.
As you may have already noticed, the HTTP status code of the URLs has not yet been checked by the debugger. Return to the “HTTP Test Wizard” and configure the corresponding HTTP status code for each URL (section “Verify HTTP Response”). Don’t forget to click on the “Modify URL” button at the bottom of the dialog.
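In essence, this verification compares the received status code with the configured one for each URL call. As a hedged sketch (again using the standard java.net.http API rather than the wizard's generated code, with a placeholder URL):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VerifyStatusSketch {
    public static void main(String[] args) throws Exception {
        int expectedStatus = 200;  // the status code configured under "Verify HTTP Response"
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/"))  // placeholder URL
                .GET()
                .build();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        if (response.statusCode() != expectedStatus) {
            // during a test run, a mismatch like this is counted as a measured error
            throw new AssertionError("Expected HTTP " + expectedStatus
                    + " but received " + response.statusCode());
        }
        System.out.println("Status code verified: " + response.statusCode());
    }
}
```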
Debug your test one last time and then save it.
Generating the code is pretty easy. First click on “Generate Source Code” and then on “Compile & Generate JAR”.
Then click on “Define New Test”.
Defining a test with the “HTTP Test Wizard” is also quite easy. Optionally, you can enter a test description here. Then click on “Define Test”.
After you have clicked on “Define Test”, the test is created and the view changes from the “HTTP Test Wizard” menu to the “Tests” menu.
The Test Menu shows all tests that you have defined - across all Projects and Resource Sets. You can also filter the view according to Projects and Resource Sets, and sort the tests in different ways.
Note that a test is something like a bracket that only contains references to the files that are required for the test execution. There are no files stored inside the test itself.
On the one hand, this means that a test becomes invalid/corrupted if you delete the corresponding resource files of the test in the Projects Menu. On the other hand, this also means that all changes to files that are saved in the tree of the Projects Menu are immediately applied to the test.
Each test has a base location from which it was defined (Project and Resource Set) and to which the test results are also saved.
A test itself can reference its resource files in two ways:
Referenced files which are used/shared by multiple tests (such as libraries, plug-ins and input files) can be stored in the predefined special project “Resources Library”.
After you have clicked on “Define Test Job” for a test and then on “Continue”, the intermediate “Define Test Job” menu is displayed.
In contrast to a “test”, all referenced files of the test are copied into the “test job”. For this reason there is also the option “Synchronize Files at Start of Job”, which you should always leave switched on. Under “Additional Job Arguments” you can enter test-specific command line arguments that are passed directly to the test program or test script. However, you usually do not need to enter anything.
After you have clicked on “Define Load Test Job” the test job is created and the view changes from the “Tests” menu to the “Test Jobs” menu.
In this menu all test jobs are displayed with their states. The green dot at the top right next to the title indicates that all measuring agents that are set to active can be reached. The dot changes to yellow or red if one or more measuring agents cannot be reached or are not available.
A test job can have one of the following states:
As soon as a job is in the “defined” state, it has a “local job Id”. If the job is then transmitted to a measuring agent, it additionally has a “remote job Id”.
After you have clicked on “Start Test Job” for a test job, you can select or modify the measuring agent on which the job will be executed and configure the job settings.
Input Fields:
Normally you do not have to enter any “Additional Arguments”, and you can leave “Debug Execution” and “Debug Measuring” switched off.
After clicking on “Start Test Job”, the job is started on the measuring agent and the status of the job is now “Running”. Then click on “Monitor Jobs”.
The real-time display shows all currently running jobs including their measured values and measured errors.
You can also suspend a running job for a while and resume it later. However, this has no effect on the “Max. Test Duration”.
After the job is completed, you can click on “Analyze Result”. The view then changes to the “Test Results” menu.
If you click on “Analyze Result”, the test result is also copied (once) from the Measuring Agent into the Project / Resource Set from which the test was defined. From there you can reload the test result into the “Test Results” menu at any time.
The “Test Results” menu is a workspace into which you can load multiple test results. You can switch back and forth between the test results. As in the “Real-Time Monitor” menu, all measured values and all measured errors are displayed. In addition, percentile statistics and diagrams of error types are also displayed in this menu.
This menu also enables you to combine several test results into a so-called “load curve” - and thus to determine the maximum number of users that a system such as a web server can handle (see next chapter).
The Summary Statistic of a test result contains some interesting values:
Since several thousand to several million response times can be measured in a very short time during a test, the successfully measured response times are summarized in the response time diagrams at 4-second intervals. For this reason, a minimum value, an average value and a maximum value are displayed for each 4-second interval in such diagrams.
However, this summarization is not performed for percentile statistics and for measured errors. Every single measured value is taken into account here.
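As a hedged illustration of these two approaches (not the product's actual implementation), the following sketch buckets raw response times into 4-second intervals with min/avg/max values, and computes a percentile over every single raw value:

```java
import java.util.*;
import java.util.stream.*;

public class StatsSketch {
    // Bucket raw {timestampMillis, responseTimeMillis} samples into 4-second intervals
    // and print min / average / max per interval, as shown in the response time diagrams.
    static void summarize(List<long[]> samples) {
        final long intervalMillis = 4_000;
        Map<Long, LongSummaryStatistics> buckets = new TreeMap<>();
        for (long[] s : samples) {
            buckets.computeIfAbsent(s[0] / intervalMillis, k -> new LongSummaryStatistics())
                   .accept(s[1]);
        }
        buckets.forEach((bucket, stats) -> System.out.printf(
                "t=%ds..%ds  min=%d  avg=%.1f  max=%d%n",
                bucket * 4, bucket * 4 + 4, stats.getMin(), stats.getAverage(), stats.getMax()));
    }

    // Percentile computed over every single measured value (no summarization),
    // as is done for the percentile statistics.
    static long percentile(List<Long> responseTimes, double p) {
        List<Long> sorted = new ArrayList<>(responseTimes);
        Collections.sort(sorted);
        int index = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(index, 0));
    }

    public static void main(String[] args) {
        // hypothetical samples: {timestamp in ms, response time in ms}
        List<long[]> samples = Arrays.asList(
                new long[]{1_000, 180}, new long[]{2_500, 240},
                new long[]{5_200, 210}, new long[]{6_900, 1_100});
        summarize(samples);
        List<Long> times = samples.stream().map(s -> s[1]).collect(Collectors.toList());
        System.out.println("95th percentile: " + percentile(times, 95.0) + " ms");
    }
}
```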
The maximum capacity of a system, such as the maximum number of users that a web server can handle, can be determined by a so-called “load curve”.
To obtain such a load curve, you must repeat the same test several times, increasing the number of users with each test run - for example, a test series with 10, 50, 100, 200, 400, 800, 1200 and 1600 users.
The easiest way to repeat a test is to clone a test job. You can then enter the higher number of users when starting the test.
A measured load curve looks like this, for example:
As you can see, the throughput of the server increases linearly up to 400 users, with the response times remaining more or less the same (Avg. Passed Session Time). Then, with 800, 1200 and 1600 users, at first only isolated errors are measured, then many errors, with the response times now increasing sharply.
This means that the server can serve up to 400 users without any problems.
But could you operate the server with 800 users if you accept longer response times? With 800 users, 745,306 URL calls were successfully measured, with only 50 errors occurring.
To find out, let’s compare the detailed response times of “Page 1” at 400 users with those at 800 users.
Response Times of “Page 1” at 400 Users:
Response Times of “Page 1” at 800 Users:
The 95% percentile value at 400 users is 224 milliseconds and increases to 1952 milliseconds at 800 users. Now you could say that it just takes longer. However, if you look at the red curve of the outliers, at 400 users there is only a single outlier slightly above 1 second, whereas at 800 users the outliers frequently exceed 8 seconds. Conclusion: The server cannot be operated with 800 users because it is then overloaded.
Now let’s do one last test with 600 users. Result:
The throughput of the server at 600 users is a little higher than at 400 users and also a little higher than at 800 users. No errors were measured.
Response Times of “Page 1” at 600 Users:
The 95% percentile value at 600 users is 650 milliseconds, and there are only two outliers slightly above one second. Final Conclusion: The server can serve up to 600 users, but no more.
In rare cases it can happen that a test does not measure anything (neither measurement results nor errors). In this case you should either wait until the test is finished or stop it directly in the “Real Time Monitor” menu.
You can then acquire the test log files in the “Test Jobs” menu and search them for errors.
If your test has problems when extracting and assigning variable values, you should also search the log files for error messages. To get detailed information, you can run the test once again - but this time with the option “Debug Execution” enabled.
Don’t forget to turn off the “Debug Execution” option after the problem has been solved.
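To understand what such an extraction failure means, it helps to look at the underlying mechanism. The following is a minimal, hypothetical sketch of extracting a value from a response body with a regular expression and assigning it to a variable; the response body, pattern and variable name are invented for illustration and do not reflect your actual test script:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtractValueSketch {
    public static void main(String[] args) {
        // hypothetical response body containing a session token
        String responseBody = "<input type=\"hidden\" name=\"token\" value=\"abc123\">";
        Pattern pattern = Pattern.compile("name=\"token\" value=\"([^\"]+)\"");
        Matcher matcher = pattern.matcher(responseBody);
        if (matcher.find()) {
            String token = matcher.group(1);  // extracted value, assigned to a variable
            System.out.println("Extracted token: " + token);
        } else {
            // this failure case is what shows up as an error in the test log files
            System.err.println("Extraction failed: pattern not found in response");
        }
    }
}
```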