This article is based on the 7th round of public ESB Performance benchmarking conducted since June 2007. Please refer to the write-ups for Round 6 and the previous rounds for more information on the history of the benchmark. Based on the outcomes of the previous rounds, and considering the contributions received to enhance the configurations for the different ESBs, we decided to include only the following ESBs in this round.
This round does not introduce any significant changes, and will be the last execution round for the current test cases using SOAP payloads. In future, a new performance benchmark will be created based on REST-ful payloads and API management use cases.
Please refer to the ESB Performance Benchmark resources project on BitBucket for updated information on the Round 7 execution. The Amazon EC2 AMI image ami-136a3a7a includes the Round 7 test results, log files and configurations.
These configurations have been built by AdroitLogic, partly based on previously published resources and contributions submitted to the resources repository. The solutions we have come up with may have inherent limitations due to our limited understanding of some of the other ESBs and the limited time we had spent. Hence these configurations may not be optimal, and/or may not be optimally tuned for the scenarios.

However, AdroitLogic has spent many days and nights getting the other ESBs to work under conditions as identical as possible. We would be happy to receive feedback from vendors and/or end users to improve the configurations and tuning parameters used for the different ESBs. We will include such updates in future rounds where applicable, and improvements can be submitted directly to the Mercurial repository as pull requests.
Direct Proxy Service
Content Based Routing Proxy
on SOAP body payload
on SOAP header
on Transport Header
XSLT Transformation Proxy
Each of these scenarios is described in detail under Performance Test Cases.
The performance test was carried out on an Amazon EC2 High CPU Extra Large (c1.xlarge) instance running the Public AMI "ami-136a3a7a", based on the EBS backed official Ubuntu 10.10 image. This was a 64 bit instance with 7GB of RAM, an 8GB EBS backed storage volume, 20 ECUs and 8 cores. All tests against each ESB were carried out on the same instance, started on the 10th of October 2013, executed one after another, and iterated over three rounds to avoid any time based impact on the results. A new shell script, run-test-suite.sh, was used for this purpose, and completely automated the test execution: starting the Apache Tomcat server, the selected ESB, and then the load test - for each ESB. After execution, the results were converted to CSV files and the execution logs emailed along with the results in CSV format, the ESB shut down, and the next test case executed similarly.
Network isolation was not required as all test traffic was only between the local machine hosting the load generator, ESB and backend service. Each ESB was allocated 2GB of Java Heap memory. The 64 bit Oracle / Sun JDK 1.6.0_45-b06 was used with the unlimited strength policy files applied to allow WS-Security testing with larger key sizes. The instance had a 2GB RAM disk which was utilized by the UltraESB. We attempted to allow approximately 300 worker threads, and a socket timeout of 120 seconds for each ESB, but this may not have been possible for all ESBs due to limitations.
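As a rough sketch, the per-ESB JVM settings described above (a 2GB heap on a 64-bit JVM) correspond to startup options along the following lines. The exact flags, and the startup script they go into, differ for each ESB, so treat this as an assumption rather than the actual commands used:

```shell
# Hypothetical JVM options matching the tuning described above:
# a fixed 2GB heap on the 64-bit server VM. The ~300 worker threads and the
# 120-second socket timeout are ESB-level settings, not JVM flags.
JAVA_OPTS="-server -Xms2g -Xmx2g"
echo "$JAVA_OPTS"
```

Fixing -Xms to the same value as -Xmx avoids heap resizing pauses during a long benchmark run.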
The configurations used for each ESB are published on the BitBucket repository at https://bitbucket.org/adroitlogic/esbperformance. This includes a README.TXT file for each ESB, with basic instructions to configure each ESB from scratch, and to build and deploy the configurations. The AMI image "ami-136a3a7a" includes the real machine image of the Round 7 test execution, including all configurations, log files etc. Except for any passwords and other private information of AdroitLogic, the image contains all files, information and dependencies.
ESB Performance Testing Round 7 is a publicly accessible Google document hosting the summarized results.
All of the ESBs were able to complete the full benchmark.
The three iterations executed for each ESB tested a total of 11,138,400 requests per ESB. The HTTP level errors received by each ESB for this load are as follows:
(for 11,138,400 total requests per each ESB)
|ESB|HTTP Errors|
|AdroitLogic UltraESB v2.0.0|1|
|WSO2 ESB v4.7.0|3,007|
|Mule Community Edition v3.4.0|620|
|Talend ESB SE v5.3.1|3|
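To put these counts in perspective, the corresponding error rates can be computed from the table above (our own arithmetic, using the published error counts):

```shell
# Error rate per ESB over the 11,138,400 requests sent to each.
TOTAL=11138400
REPORT=$(for entry in "UltraESB:1" "WSO2:3007" "MuleCE:620" "Talend:3"; do
  name=${entry%%:*}
  errors=${entry##*:}
  # percentage of requests that failed at the HTTP level
  awk -v n="$name" -v e="$errors" -v t="$TOTAL" \
    'BEGIN { printf "%-10s %6d errors  (%.4f%%)\n", n, e, 100 * e / t }'
done)
printf '%s\n' "$REPORT"
```

Even the largest count (3,007 errors) amounts to well under a tenth of a percent of the total load.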
We discovered that the WSO2 ESB response size for each of the XSLT test cases was 363 bytes, irrespective of the request payload size of 512 bytes, 1K, 5K or 10K; signifying that the test case failed.
We were surprised to find that the same failure existed even when the WSO2 ESB 4.6.0 was used, as documented by WSO2 in the Round 6.5 results. See the article "Why the Round 6.5 results published by WSO2 is flawed" for more details.
Further, a significant defect was discovered in the WSO2 ESB version 4.7.0 that affected any payload over 16,384 bytes in size - even when a single message without concurrency passed through the WSO2 ESB in any test case where the payload was processed (i.e. all test cases excluding the Direct Proxy and Transport Header based routing scenarios). It was also a surprise to find that the same defect existed when the WSO2 ESB 4.6.0 was used, as documented by WSO2 in the Round 6.5 results, as well as in the latest Milestone 4 build of version 4.8.0, dated 9th October 2013. We believe that this defect has still not been identified by the WSO2 team.
When the response sizes for each test case were analysed across the iterations, we noted that the WSO2 ESB, Mule CE ESB and the Talend ESB reported different response sizes for the 100KB Direct Proxy scenario. Since this difference is observed without any HTTP level message failures, it seems to indicate a possible corruption of the responses - which should all be exactly 102,400 bytes in length.
As stated above, we discovered that the WSO2 ESB corrupted response messages larger than 16,384 bytes (16KB) in size. This issue can be seen in the version 4.6.0 used by WSO2 to publish the Round 6.5 results in January 2013, as well as in the last released version 4.7.0 used in this round, and the latest Milestone 4 build, dated 9th October 2013, of the soon to be released version 4.8.0. One could use the HttpClient of the SOA ToolBox to send a payload larger than 16K (e.g. the sample file samples/resources/requests/100K_buyStocks.xml of size 100K) to the URL http://localhost:8280/services/CBRProxy after any of the above versions of the WSO2 ESB has been set up as per the instructions published by WSO2 in Round 6.5. Halfway through the response, the corruption could be noted as follows.
Additionally, we have published a standalone test case CorruptionChecker [Source code / Executable Binary], which is able to generate requests less than, greater than, and equal to 16,384 bytes. When used against the WSO2 ESB versions 4.6.0, 4.7.0 and 4.8.0-M4, with a payload containing over 161 orders (Note: 161 orders creates a request payload of exactly 16,384 bytes), the response differs from the request. However, this corruption was not seen for messages smaller than or equal to 16,384 bytes.
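The arithmetic behind that 161-order boundary can be sketched in a few lines of shell. This is our own illustration, not the CorruptionChecker source, and the envelope overhead is left out since it depends on the exact SOAP wrapper used:

```shell
#!/bin/sh
# One <order> line of the benchmark payload is exactly 99 bytes:
ORDER='<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>'

N=${1:-161}   # number of order lines; 161 is the boundary noted above
BODY=""
i=0
while [ "$i" -lt "$N" ]; do
  BODY="$BODY$ORDER"
  i=$((i + 1))
done

ORDER_BYTES=$(printf '%s' "$BODY" | wc -c | tr -d ' ')
echo "order lines : $N"
echo "order bytes : $ORDER_BYTES"
# The SOAP envelope, routing header and buyStocks wrapper are added on top of
# this, which - per the note above - takes the 161-order request to exactly
# 16,384 bytes.
```

Running it with a larger order count (e.g. 162) produces a request just past the 16,384-byte boundary at which the corruption appears.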
Standalone CorruptionChecker Utility Execution
asankha@asankha-dm4:~/code/ESBPerformance/wso2-esb/CorruptionChecker/lib$ java -jar CorruptionChecker.jar
Execute as follows. No arguments will send a request of 16,384 bytes to the URL http://localhost:8280/services/CBRProxy
java -jar CorruptionChecker.jar [<number-of-order-lines> <url>]
e.g. java -jar CorruptionChecker 161 http://localhost:8280/services/CBRProxy
Payload bytes : 16384
** ALL OK **

asankha@asankha-dm4:~/code/ESBPerformance/wso2-esb/CorruptionChecker/lib$ java -jar CorruptionChecker.jar 162
Execute as follows. No arguments will send a request of 16,384 bytes to the URL http://localhost:8280/services/CBRProxy
java -jar CorruptionChecker.jar [<number-of-order-lines> <url>]
e.g. java -jar CorruptionChecker 161 http://localhost:8280/services/CBRProxy
Payload bytes : 16484
** Response corrupted : **
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header><routing xmlns="http://someuri">community#1.0#</routing></soapenv:Header>
<soapenv:Body>
<m:buyStocks xmlns:m="http://services.samples/xsd">
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>
...
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Header><routing xmlns="http://someuri">community#1.0#</routing></soapenv:Header><soapenv:Body>
<m:buyStocks xmlns:m="http://services.samples/xsd">
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>
...
<order><symbol>IBM</symbol><buyerID>FGK</buyerID><price>123.45</price><volume>1111</volume></order>
</m:buyStocks>
</soapenv:Body></soapenv:Envelope>
The following graph shows the summarized TPS for each scenario across message size and concurrency, taking the best of the three iterations for each ESB into the final analysis. (The graph taking the average of the three iterations showed a very similar result, and can be found on the Google Worksheet.)
To simplify the results further, we computed the average TPS across each scenario benchmarked, for both the best case and the average case of the three iterations - similarly to Round 6.
For full details on reproducing these results, refer to the instructions found here, to start an instance of the
AMI image ami-136a3a7a. To execute a full round of automated tests, you
can now use the shell script "~/client-scripts/run-test-suite.sh". Edit the top section
of this script with SMTP information relevant to you. The automated tests will email you
results as each ESB completes each round. If you do not wish to receive emails, comment
out the sendemail lines of the script. The script can be started with nohup to ensure
that the test can complete independently even if your SSH session fails. A complete
round could take about 12 - 16 hours. You do not need to manually start the Tomcat
backend service or any of the ESBs or load generator executions as in prior rounds. The
test will run automatically, and you will receive copies of the client execution logs as
well as CSV reports of the results via email.
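For example, a detached run as described above could be kicked off roughly as follows (the log-file name here is our own choice):

```shell
# Start the suite with nohup so it keeps running if the SSH session drops;
# stdout and stderr are captured to a log file in the home directory.
nohup ~/client-scripts/run-test-suite.sh > ~/run-test-suite.out 2>&1 &
PID=$!
echo "test suite started with PID $PID"
```

The process ID can later be used to check on or stop the run, and the log file captures anything not already delivered by email.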
#!/bin/sh
EM_SERVER=<SMTP Server Name or IP>
EM_USERID=<username for SMTP server>
EM_PASSWORD=<password for SMTP server>
EM_FROM=<From address for notification emails>
EM_TO=<To address to send notifications>
We value all contributions received to the ESB Performance Benchmark project on BitBucket, and the suggestions for improvements submitted privately by many.
WSO2 and WSO2 ESB are trademarks of WSO2 Inc. MuleSoft and Mule ESB are trademarks of MuleSoft. Talend is a trademark of Talend. Apache ServiceMix is a trademark of the Apache Software Foundation. UltraESB and AdroitLogic are trademarks of AdroitLogic Private Ltd. All other product and company names and marks mentioned are the property of their respective owners and are mentioned for identification purposes only.
14th October 2013