
Latest Execution - Round 7 (October 2013)

Overview

This article is based on the 7th round of public ESB performance benchmarking conducted since June 2007. Please refer to the write-ups for Round 6 and the previous rounds for more information and the history of the benchmark. Based on the outcomes of the previous rounds, and considering the contributions received to enhance the configurations for the different ESBs, we decided to include only the following ESBs in this round.

  • AdroitLogic UltraESB v2.0.0 - Enhanced [link]
  • WSO2 ESB v 4.7.0 [link]
  • Mule Community Edition v3.4.0 [link]
  • Talend ESB SE v5.3.1 [link]

Note 1: Although we tried our best to wait for updated configurations for Apache ServiceMix/Camel, we had to skip testing it, as we were unable to set up the test scenarios against the latest release and failed to get timely assistance from the team during this round.

Note 2: The UltraESB 2.0.0 "Enhanced" version indicates the UltraESB v2.0.0 GA release, supplemented with the following three JAR files which are freely downloadable, but not shipped with the distribution due to licensing incompatibilities:

This round does not introduce any significant changes, and will be the last execution round for the current test cases using SOAP payloads and SOAP-based test cases. In future, a new performance benchmark will be created based on RESTful payloads and API management use cases.

Please refer to the ESB Performance Benchmark resources project on BitBucket for updated information on the Round 7 execution. The Amazon EC2 AMI image ami-136a3a7a includes the Round 7 test results, log files and configurations.

Disclaimer

These configurations have been built by AdroitLogic, partly based on previously published resources and contributions submitted to the resources repository. The solutions we've come up with may have inherent limitations due to our limited understanding of some of the other ESBs and the limited time we spent on them. Hence these configurations may not be optimal, and/or may not be optimally tuned for the scenarios tested.

However, AdroitLogic has spent many days and nights to get the other ESBs to work under identical conditions as much as possible. We would be happy to receive feedback to improve the configurations used, and the tuning parameters for the different ESBs from the vendors and/or end users. We will include those updates into future rounds where applicable, and such improvements could be submitted directly to the Mercurial repository as pull requests.

Scenarios tested

  • Direct Proxy Service
  • Content Based Routing Proxy
    • on SOAP body payload
    • on SOAP header
    • on Transport Header
  • XSLT Transformation Proxy
  • WS-Security Proxy

Each of these scenarios is described in detail under Performance Test Cases.

Performance Test Environment and Resources

Amazon EC2 AMI image

The performance test was carried out on an Amazon EC2 High CPU Extra Large (c1.xlarge) instance running the public AMI "ami-136a3a7a", based on the EBS-backed official Ubuntu 10.10 image. This was a 64-bit instance with 7GB of RAM, an 8GB EBS-backed storage volume, 20 ECUs and 8 cores. All tests against each ESB were carried out on the same instance, started on the 10th of October 2013, executed one after another, and iterated over three rounds to avoid any time-based impact on the results. A new shell script "run-test-suite.sh" was used for this purpose, and it automated the test execution completely: for each ESB, it started the Apache Tomcat server, the selected ESB, and then the load test. After execution, the results were converted to CSV files, the execution logs were emailed along with the results in CSV format, the ESB was shut down, and the next test case was executed similarly.

e.g.
UltraESB iteration 0, Mule ESB iteration 0, Talend ESB iteration 0, WSO2 ESB iteration 0,
UltraESB iteration 1, Mule ESB iteration 1, Talend ESB iteration 1, WSO2 ESB iteration 1,
UltraESB iteration 2, Mule ESB iteration 2, Talend ESB iteration 2, WSO2 ESB iteration 2.
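The interleaving above can be sketched as a simple loop. The names below are illustrative assumptions, not the actual internals of run-test-suite.sh:

```shell
# Hypothetical sketch of the interleaved execution order; the real
# run-test-suite.sh also starts Tomcat, runs the load test, converts
# results to CSV, emails logs, and shuts each ESB down between runs.
ESBS="ultraesb mule talend wso2"
plan=""
for iteration in 0 1 2; do
  for esb in $ESBS; do
    plan="$plan $esb-$iteration"
  done
done
echo "plan:$plan"
```

Interleaving the ESBs within each iteration, rather than running three back-to-back rounds per ESB, spreads any time-of-day effects evenly across all products.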

Network isolation was not required, as all test traffic was only between the local machine hosting the load generator, the ESB and the backend service. Each ESB was allocated 2GB of Java heap memory. The 64-bit Oracle/Sun JDK 1.6.0_45-b06 was used, with the unlimited strength policy files applied to allow WS-Security testing with larger key sizes. The instance had a 2GB RAM disk, which was utilized by the UltraESB. We attempted to allow approximately 300 worker threads and a socket timeout of 120 seconds for each ESB, but this may not have been possible for all ESBs due to limitations.
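As a rough illustration, the tuning targets above translate into settings like the following. The variable names are assumptions, since each ESB consumes them differently (start scripts, transport configuration files, etc.):

```shell
# Illustrative tuning values from the text above; how each ESB applies
# them differs per product, so treat these names as placeholders
JAVA_OPTS="-server -Xms2g -Xmx2g"   # 2GB Java heap per ESB
WORKER_THREADS=300                  # ~300 worker threads targeted
SOCKET_TIMEOUT_MS=120000            # 120-second socket timeout
echo "$JAVA_OPTS / threads=$WORKER_THREADS / timeout=${SOCKET_TIMEOUT_MS}ms"
```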

Performance scenario configurations for each ESB

The configurations used for each ESB are published on the BitBucket repository at https://bitbucket.org/adroitlogic/esbperformance. This includes a README.TXT file for each ESB, with basic instructions to configure each ESB from scratch, and to build and deploy the configurations. The AMI image "ami-136a3a7a" includes the real machine image of the Round 7 test execution, including all configurations, log files, etc. Except for any passwords and other private information of AdroitLogic, the image contains all files, information and dependencies.

Results Analysis Worksheet hosted on Google Docs

ESB Performance Testing Round 7 is a publicly accessible Google document hosting the summarized results.

Raw Execution Logs

The raw execution logs for each ESB can be found in the ~/client-scripts directory on the EC2 image, and the log files can be found in the specific directories for each ESB, under ~/esbs.

Results

  • All of the ESBs were able to complete the full benchmark

  • The three iterations executed for each ESB sent a total of 11,138,400 requests to each ESB. The HTTP-level errors received for each ESB under this load are as follows:

    ESB                              Total Errors (for 11,138,400 requests per ESB)
    AdroitLogic UltraESB v2.0.0      1
    WSO2 ESB v4.7.0                  3,007
    Mule Community Edition v3.4.0    620
    Talend ESB SE v5.3.1             3
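Expressed as error rates over the 11,138,400 requests, these counts are tiny in absolute terms but differ by more than three orders of magnitude. A quick calculation, using the figures from the table above:

```shell
# Error rate per ESB over 11,138,400 requests (counts from the table above)
TOTAL=11138400
rates=$(for entry in "UltraESB:1" "WSO2:3007" "Mule:620" "Talend:3"; do
  awk -v e="${entry##*:}" -v t="$TOTAL" -v n="${entry%%:*}" \
    'BEGIN { printf "%s=%.5f%%\n", n, 100 * e / t }'
done)
echo "$rates"
```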



  • We discovered that the WSO2 ESB response size for each of the XSLT test cases was 363 bytes, irrespective of the request payload sizes of 512 bytes, 1K, 5K and 10K, signifying that the test case failed.

     
  • Further, a significant defect was discovered in WSO2 ESB version 4.7.0 that affected any payload over 16,384 bytes in size, even when a single message without concurrency passed through the WSO2 ESB in any test case where the payload was processed (i.e. all test cases excluding the Direct Proxy and Transport Header based routing scenarios).
    • It was also a surprise to find that the same defect existed when WSO2 ESB 4.6.0 was used, as documented by WSO2 in the Round 6.5 results, as well as in the latest Milestone 4 build of version 4.8.0, dated 9th October 2013. We believe that this defect has still not been identified by the WSO2 team.
  • When the response sizes for each test case were analysed across the iterations, we noted that the WSO2 ESB, Mule CE ESB and Talend ESB reported different response sizes for the 100KB Direct Proxy scenario. Since this difference is observed without any HTTP-level message failures, it seems to indicate a possible corruption of the responses, which should all be exactly 102,400 bytes in length.
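A check of this kind boils down to scanning the recorded response lengths for any value other than the expected 102,400 bytes. A minimal sketch follows; the sample sizes here are fabricated for illustration, not actual benchmark output:

```shell
# Flag any response length that is not exactly 102,400 bytes; the list
# below is made-up sample data standing in for the recorded log values
sizes="102400
102400
98311
102400"
anomalies=$(printf '%s\n' "$sizes" | awk '$1 != 102400 { n++ } END { print n + 0 }')
echo "anomalous responses: $anomalies"
```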

Response Corruption by the WSO2 ESB

As stated above, we discovered that the WSO2 ESB corrupted response messages larger than 16,384 bytes (16KB) in size. This issue can be seen in version 4.6.0, used by WSO2 to publish Round 6.5 in January 2013, as well as in the last released version 4.7.0, used in this round, and in the latest Milestone 4 build, dated 9th October 2013, of the soon to be released version 4.8.0. One could use the HttpClient of the SOA ToolBox to send a payload larger than 16K (e.g. the sample file samples/resources/requests/100K_buyStocks.xml of size 100K) to the URL http://localhost:8280/services/CBRProxy after any of the above versions of the WSO2 ESB has been set up as per the instructions published by WSO2 in Round 6.5. Halfway through the response, the corruption becomes clearly visible.

Additionally, we have published a standalone test case, CorruptionChecker [Source code / Executable Binary], which is able to generate requests less than, greater than, and equal to 16,384 bytes. When used against WSO2 ESB versions 4.6.0, 4.7.0 and 4.8.0-M4 with a payload containing over 161 orders (note: 161 orders create a request payload of exactly 16,384 bytes), the response differs from the request. However, this corruption was not seen for messages smaller than or equal to 16,384 bytes.
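The boundary behaviour can also be probed without the full CorruptionChecker, by generating payloads at and just above 16,384 bytes and diffing each echoed response against its request. This sketch only covers the payload-size side; the filler payload is a stand-in for the real buyStocks message, and the calls against a running ESB are left as comments:

```shell
# Build synthetic payloads around the 16,384-byte boundary; the real
# CorruptionChecker varies the number of order elements instead of
# using filler bytes like this
make_payload() { head -c "$1" /dev/zero | tr '\0' 'x'; }

BOUNDARY=16384
at=$(make_payload "$BOUNDARY" | wc -c)
above=$(make_payload $((BOUNDARY + 1)) | wc -c)
echo "at=$at above=$above"

# Against a running WSO2 ESB one would then POST each payload and diff:
#   curl -s --data-binary @payload.xml -H 'Content-Type: text/xml' \
#     http://localhost:8280/services/CBRProxy -o response.xml
#   cmp payload.xml response.xml   # any difference indicates corruption
```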

Standalone CorruptionChecker Utility Execution

 

Observations

The following graph shows the summarized TPS for each scenario across message sizes and concurrency, taking the best of the three iterations for each ESB into the final analysis. (The graph taking the average of the three iterations showed a very similar result, and can be found on the Google Worksheet.)

 

** Note: The WSO2 ESB figures include results for failed and corrupted test cases, and thus the graph should be understood in that context **

 

 
                              Mule-3.4.0   Talend-SE-5.3.1   UltraESB-2.0.0-Enhanced   WSO2-ESB-4.7.0
  Direct Proxy                3,191        3,144             4,475                     4,689
  CBR Proxy                   3,123        3,123             3,604                     4,121**
  CBR SOAP Header Proxy       3,298        3,109             3,625                     4,481**
  CBR Transport Header Proxy  2,829        3,700             5,057                     5,532
  XSLT Proxy                  2,205        2,323             3,274                     1,777**
  WS-Security Proxy           488          524               612                       340
  Average                     2,522        2,654             3,441                     3,490**

Note:

  • ** The XSLT test case failed correct execution for the WSO2 ESB.
  • ** The CBR Proxy and CBR SOAP Header Proxy test cases replied with corrupted responses for payloads larger than 16KB, for the WSO2 ESB.
  • The Direct Proxy test case for Mule CE, Talend ESB and the WSO2 ESB indicated a possible response corruption for some of the 100KB payloads.
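The Average row can be recomputed directly from the six scenario rows; for example, for the UltraESB and WSO2 ESB columns (TPS figures taken from the table above):

```shell
# Recompute per-ESB averages from the six scenario TPS figures,
# rounding to the nearest integer as in the published table
avg() {
  awk 'BEGIN { s = 0; for (i = 1; i < ARGC; i++) s += ARGV[i]
               printf "%d", s / (ARGC - 1) + 0.5 }' "$@"
}
ultra=$(avg 4475 3604 3625 5057 3274 612)
wso2=$(avg 4689 4121 4481 5532 1777 340)
echo "UltraESB avg=$ultra, WSO2 avg=$wso2"
```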

 

To simplify the results further, we computed the average TPS across all scenarios benchmarked, for both the best case and the average case of the three iterations, similarly to Round 6.

** Note: The WSO2 ESB column includes results for failed and corrupted test cases, and thus the table should be understood in that context **

 

 
                 Mule-3.4.0   Talend-SE-5.3.1   UltraESB-2.0.0-Enhanced   WSO2-ESB-4.7.0
  Best Case      2,522        2,654             3,441                     3,490**
  Average Case   2,348        2,523             3,287                     3,242**

Note:

  • ** The averages for the WSO2 ESB include results of failed and corrupted test cases.


Refer to the Google document ESB Performance Testing Round 7 for the complete results.

  • The UltraESB 2.0.0 Enhanced version clearly led in performance, correctness of operation (no response corruptions) and stability (1 HTTP error over 11,138,400 requests).
  • The WSO2 ESB 4.7.0 suffered a severe response corruption defect for payloads larger than 16KB, and a failure of the XSLT test cases.
    • It is surprising to note that these issues were not discovered by WSO2 in Round 6.5, and the response corruption issue remains in the latest nightly builds to date.
    • The WSO2 ESB uses the Pass-through transport by default from version 4.6.0 onwards. Leaving the response corruption defect aside, at first sight it might seem that the WSO2 ESB marginally exceeds the performance of the UltraESB. However, as the payload size grows beyond the size of the internal buffer of the Pass-through transport (which we believe is 16KB), the performance of the WSO2 ESB degrades drastically. For example, for 100KB payloads, the UltraESB is 21% to 56% faster than the WSO2 ESB.
  • The Talend ESB SE 5.3.1 performed slightly better than Mule CE 3.4.0, and encountered only 3 HTTP-level errors over 11,138,400 requests.
  • Mule CE 3.4.0, Talend ESB SE 5.3.1 and WSO2 ESB 4.7.0 showed indications of possible response corruption for payloads of 100KB under the Direct Proxy scenario.

Reproducing the results

For full details on reproducing these results, refer to the instructions found here to start an instance of the AMI image "ami-136a3a7a". To execute a full round of automated tests, you can now use the shell script "~/client-scripts/run-test-suite.sh". Edit the top section of this script with SMTP information relevant to you; the automated tests will email you the results as each ESB completes each round. If you do not wish to receive emails, comment out the sendemail lines of the script. The script can be started with nohup to ensure that the test completes independently even if your SSH session fails. A complete round could take about 12 to 16 hours. You do not need to manually start the Tomcat backend service, any of the ESBs, or the load generator executions as in prior rounds. The test will run automatically, and you will receive copies of the client execution logs as well as CSV reports of the results via email.
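If the e-mail notifications are unwanted, the sendemail lines can be commented out with a one-liner such as the following. The sample script body and the line pattern matched by sed are assumptions for illustration, not the actual contents of run-test-suite.sh:

```shell
# Comment out every sendemail invocation in a copy of the script; the
# three-line script body below is an illustrative stand-in
script=$(mktemp)
cat > "$script" <<'EOF'
start_esb ultraesb
sendemail -f bench@example.com -t you@example.com -a results.csv
run_load_test
EOF
sed -i.bak 's/^sendemail/# sendemail/' "$script"
grep -c '^# sendemail' "$script"
```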

Authors

Asankha C. Perera, Founder and CTO AdroitLogic
Ruwan Linton, Director of Engineering, AdroitLogic

Email: info@adroitlogic.com

Contributors

We value all contributions received to the ESB Performance Benchmark project on BitBucket, and the suggestions for improvements submitted privately by many.

Notices

WSO2 and WSO2 ESB are trademarks of WSO2 Inc. MuleSoft and Mule ESB are trademarks of MuleSoft. Talend is a trademark of Talend. Apache ServiceMix is a trademark of the Apache Software Foundation. UltraESB and AdroitLogic are trademarks of AdroitLogic Private Ltd. All other product and company names and marks mentioned are the property of their respective owners and are mentioned for identification purposes only.

 

14th October 2013

 
