This report presents Round X - 01 of Enterprise Service Bus performance benchmarking, a continuation of the previous seven rounds conducted since June 2007. Please refer to the write-ups for Round 7 and the earlier rounds for more information and the history of the benchmark.
In this benchmarking round, AdroitLogic’s latest integration solution, UltraESB-X, is compared against two of the leading ESBs in the industry, referred to as ESB-1 and ESB-2 throughout this report. The benchmarking test was conducted on an Amazon EC2 instance on June 20, 2017.
The ESB configurations were built by AdroitLogic based on the vendors’ public documentation, articles, and guides. Considerable effort was made to configure the ESBs to run under conditions as identical as possible.
Four message flow scenarios were tested over HTTP transport, with a range of concurrent users and different payload sizes. The scenarios tested are as follows.
The test was conducted over a variety of XML payload sizes to evaluate the ESBs' capabilities of handling small to large messages.
In order to assess the stability of the ESBs, different levels of concurrency were introduced into the test.
For more information on the benchmark, please refer to Performance Test Cases.
For this benchmarking round, an Amazon EC2 compute-optimized c3.xlarge instance with 4 vCPUs, 7.5 GiB of memory, and 40 GiB of General Purpose SSD storage was used. The operating system on which the tests were executed, 64-bit Ubuntu 16.04.1 LTS with kernel version 4.4.0-79-generic, was first tuned for better performance, as described in the Standalone Execution article. Each ESB was allocated 2 GB of Java heap memory on 64-bit Oracle JDK 1.8.0_121-b13. A configured Apache Tomcat 7.0.56 instance was used as the backend server.
The ESBs, the Tomcat server, and the load-generating client program all resided on the same machine. The backend server was started first, followed by the ESB under test; the load generator was then started, generating the load profile for all scenarios over the different concurrency levels and payload sizes. A few warm-up rounds were carried out before each test run to ensure that the JVM was warmed up beforehand. After the test run for a particular ESB completed, the backend server was stopped along with the ESB. The same procedure was repeated for the other ESBs, completing one iteration of the test. The benchmark was carried out for 3 such iterations to eliminate any discrepancies in the results.
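The procedure above can be sketched as an ordered sequence of steps. This is a minimal illustrative model, not the actual test harness: the ESB names come from the report, but the step labels and structure are assumptions.

```python
# Hypothetical model of the benchmark procedure described above;
# step names and structure are illustrative, not from the actual harness.
ESBS = ["UltraESB-X", "ESB-1", "ESB-2"]
ITERATIONS = 3

def benchmark_steps():
    """Yield the ordered steps of the full benchmark: in each iteration,
    for each ESB, start the backend, start the ESB, warm up the JVM,
    run the load profile, then stop the ESB and the backend."""
    for iteration in range(1, ITERATIONS + 1):
        for esb in ESBS:
            yield (iteration, esb, "start backend (Tomcat)")
            yield (iteration, esb, "start ESB")
            yield (iteration, esb, "run warm-up rounds")
            yield (iteration, esb, "run load profile")
            yield (iteration, esb, "stop ESB")
            yield (iteration, esb, "stop backend (Tomcat)")

steps = list(benchmark_steps())
# 3 iterations x 3 ESBs x 6 steps = 54 steps in total
```

Modelling the run as a step sequence makes the per-ESB ordering explicit: the backend is brought up and torn down around every ESB's test run, so no two systems ever run concurrently.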
Since all traffic flowed only among the load generator, the ESB, and the backend service, network isolation was not a concern. The instance was configured with a 4 GiB RAM disk, which was utilized by UltraESB-X. Each ESB was configured, as far as its settings allowed, with approximately 500 worker threads and a socket timeout of 120 seconds.
All the results extracted from this benchmarking test are summarized and publicly shared in a Google Spreadsheet.
For each case, successful responses, failures, and transactions per second (TPS) were measured, and the corresponding values were averaged to improve the fairness of the results. Unlike in the previous rounds, the results for each payload size in each test scenario were considered separately in this round of benchmarking.
Throughput is one of the most significant metrics in a performance-intensive environment, so transactions per second (TPS) was measured to assess each ESB's ability to perform under such conditions. The TPS values for each scenario, summarized across all concurrency levels for each payload size and averaged over the 3 iterations, display the trends depicted in the figures.
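As a minimal sketch, TPS for a run is the number of successful responses divided by the run duration, and the reported figure is the mean over the iterations. The run durations below are illustrative values, not measurements from the report.

```python
# Sketch of the TPS derivation; durations here are hypothetical examples.
def tps(successful_responses: int, duration_seconds: float) -> float:
    """Transactions per second for a single test run."""
    return successful_responses / duration_seconds

def average_tps(runs) -> float:
    """Mean TPS over the iterations (the report averages over 3)."""
    return sum(tps(ok, dur) for ok, dur in runs) / len(runs)

# Three hypothetical iterations of one scenario/payload/concurrency cell
runs = [(1_920_000, 640.0), (1_920_000, 620.0), (1_920_000, 600.0)]
print(round(average_tps(runs), 1))
```

Averaging the per-iteration TPS values, rather than pooling raw counts, is what smooths out run-to-run variance across the three iterations.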
All the ESBs completed the full benchmark, each handling 1,920,000 requests per test scenario in each iteration, over the different concurrent user levels and payload sizes. The total failures, errors, or timeouts that occurred for each ESB in each test scenario were as follows.
|Test Scenario|Number of Requests|Total Errors (UltraESB-X)|Total Errors (ESB-1)|Total Errors (ESB-2)|
|---|---|---|---|---|
|Content Based Routing|5,760,000|0|0|4|
|Header Based Routing|5,760,000|0|0|1|
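The per-scenario totals above are consistent with the per-iteration figures, as a quick arithmetic check shows: 1,920,000 requests per scenario in each iteration, over 3 iterations.

```python
# Sanity check of the request counts reported above.
requests_per_iteration = 1_920_000  # per scenario, per iteration
iterations = 3
total_requests = requests_per_iteration * iterations
print(total_requests)  # 5760000, matching the table's per-scenario total
```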
All the ESBs completed the full test round over the three iterations. While UltraESB-X and ESB-1 displayed high stability with no failures or errors, ESB-2 produced six errors in total. UltraESB-X outperformed the other ESBs across all payloads and test scenarios except XSLT transformation; even in that scenario, once the payload grows beyond 10 kB, UltraESB-X begins to outperform the others.
Refer to the Google Spreadsheet, ESB Performance Benchmarking - Round X - 01, for the complete results.