AdroitLogic publicly announced the execution of Round 6 about a month prior to the final execution of these tests, and requested any interested parties to submit configurations or updates for any of the ESBs to be tested. We are happy to state that we received and incorporated such changes from the parties who replied, and that we also enhanced the test cases and the execution of Round 6 itself, based on suggestions we received from others.

We have since received more updates from additional parties after the execution of Round 6, and we hope to include those changes in Round 7. We welcome anyone to re-execute these tests and to publish both the results obtained and the corresponding AMI images publicly. We also request any parties wanting to submit enhancements to make them available via a Mercurial pull request against the repository https://bitbucket.org/adroitlogic/esbperformance. We will gladly re-execute the tests once such changes are received, as we are committed to keeping this benchmark fair, transparent and open for enhancement, with input and feedback from others.

Overview

This article is based on the 6th round of the public ESB performance benchmarking carried out since June 2007. This performance framework has become the de facto ESB performance test suite, with multiple vendors having used it to publish benchmark results in the past.

The significant changes in this round include:

  • Using an asynchronous Tomcat backend service, for better performance and for closer similarity with real world systems (a minimal sketch of such an asynchronous backend appears after this list)
  • Execution of three iterations for each ESB, to reduce any effects from the AWS execution environment variability for the final results
  • Extending the WS-Security test cases to include 10K messages
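
To illustrate what an asynchronous backend service means here, the following is a minimal sketch using the standard Servlet 3.0 async API. The servlet class name, URL pattern and echo behaviour are illustrative assumptions only; the actual backend service used by the benchmark is the one published in the resources repository.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal sketch of an asynchronous echo-style backend using the Servlet 3.0 async API.
    // The URL pattern is hypothetical; the real backend service is in the benchmark repository.
    @WebServlet(urlPatterns = "/service/EchoService", asyncSupported = true)
    public class AsyncEchoServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Detach the request from the Tomcat worker thread; the response is written
            // later from the async context, freeing the container thread under load.
            final AsyncContext ctx = req.startAsync();
            ctx.start(() -> {
                try {
                    HttpServletResponse res = (HttpServletResponse) ctx.getResponse();
                    res.setContentType(ctx.getRequest().getContentType());
                    // Echo the request body back to the client
                    InputStream in = ctx.getRequest().getInputStream();
                    OutputStream out = res.getOutputStream();
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n);
                    }
                } catch (IOException e) {
                    // error handling omitted in this sketch
                } finally {
                    ctx.complete();
                }
            });
        }
    }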

This round incorporates the above mentioned suggestions and improvements, with the kind contributions of many who helped improve the benchmark.

Similar to the previous Round #5, this round includes:

  • The inclusion of 8 free and open source ESBs
    • AdroitLogic UltraESB v1.7.1 [link]
    • WSO2 ESB v 4.0.3 [link]
    • Mule Community Edition v3.3.0 [link]
    • Apache ServiceMix v4.3.0 [link]
    • Fuse ESB v4.4.1 [link]
    • Talend ESB SE v5.1.1 [link]
    • JBoss ESB v4.11 [link]
    • Petals ESB v4.1 [link]
  • The publication of a standalone Amazon EC2 AMI image of the complete test scenario and resources (ami-09ef4560)
  • The publication of the ESB Performance Benchmark resources project on BitBucket updated for round #6

Disclaimer

These configurations have been built by AdroitLogic, partly based on previously published resources and contributions submitted to the resources repository. The solutions we have come up with may have inherent limitations due to our limited understanding of some of the other ESBs and the limited time we spent on them. Hence these configurations may not be optimal, and/or may not be optimally tuned for the scenarios tested.

However, AdroitLogic has spent many days and nights to get the other ESBs working under conditions that are as identical as possible. We would be happy to receive feedback from vendors and/or end users on improving the configurations and tuning parameters used for the different ESBs. We will include such updates in future rounds; they can be submitted directly to the Mercurial repository as a pull request.

Scenarios tested

  • Direct Proxy Service
  • Content Based Routing Proxy
    • on SOAP body payload
    • on SOAP header
    • on Transport Header
  • XSLT Transformation Proxy
  • WS-Security Proxy

Each of these scenarios is described in detail under Performance Test Cases.
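
As a rough illustration of the content based routing concept (and not the configuration of any particular ESB tested), the following sketch evaluates an XPath expression against the SOAP body payload to pick a backend endpoint; the element name and the endpoint URLs are hypothetical.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    // Rough illustration of content based routing on the SOAP body payload.
    // The element name ("symbol") and the endpoint URLs are hypothetical.
    public class CbrSketch {
        public static void main(String[] args) throws Exception {
            String soap =
                "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'>"
              + "<soapenv:Body><order><symbol>IBM</symbol></order></soapenv:Body>"
              + "</soapenv:Envelope>";

            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(soap.getBytes("UTF-8")));

            // A CBR proxy routes on the value of an element inside the SOAP body
            String symbol = XPathFactory.newInstance().newXPath()
                .evaluate("//*[local-name()='symbol']/text()", doc);

            String endpoint = "IBM".equals(symbol)
                ? "http://backend:9000/service/priority"    // hypothetical endpoints
                : "http://backend:9000/service/default";
            System.out.println("Routing to " + endpoint);
        }
    }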

Performance Test Environment and Resources

Amazon EC2 AMI image

The performance test was carried out on an Amazon EC2 High-CPU Extra Large (c1.xlarge) instance running the public AMI "ami-09ef4560", based on the EBS backed official Ubuntu 10.10 image. This was a 64 bit instance with 7GB of RAM, an 8GB EBS backed storage volume, 20 ECUs and 8 cores. All tests against each ESB were carried out on the same instance, started on the 3rd of August 2012, and executed one after another. Network isolation was not required as all test traffic was only between the local machine hosting the load generator, the ESB and the backend service.

Each ESB was allocated 2GB of Java heap memory. The 64 bit Oracle / Sun JDK 1.6.0_33-b04 was used, with the unlimited strength policy files applied to allow WS-Security testing with larger key sizes. The instance had a 2GB RAM disk, which was utilized by the UltraESB. We attempted to allow approximately 300 worker threads and a socket timeout of 120 seconds for each ESB, but this was not possible for some ESBs due to time and/or documentation limitations.
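
For readers unfamiliar with these tuning targets, the sketch below shows in generic Java terms what roughly 300 worker threads and a 120 second socket timeout correspond to. It is not the configuration mechanism of any particular ESB, each of which exposes these settings differently, and the port number is arbitrary.

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Generic illustration of the tuning targets applied to each ESB:
    // ~300 worker threads and a 120 second (120,000 ms) socket timeout.
    // Each ESB JVM itself was started with a 2GB heap (e.g. -Xms2g -Xmx2g).
    public class TuningSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(300); // ~300 worker threads

            ServerSocket server = new ServerSocket(8280);                // arbitrary port
            while (true) {
                final Socket client = server.accept();
                client.setSoTimeout(120000);                             // 120 second socket timeout
                workers.submit(() -> handle(client));
            }
        }

        private static void handle(Socket client) {
            // request handling omitted; each ESB implements this internally
        }
    }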

Performance scenario configurations for each ESB

The configurations used for each ESB are published in the BitBucket repository at https://bitbucket.org/adroitlogic/esbperformance. This includes a README.TXT file for each ESB, with basic instructions on configuring each ESB from scratch, and on building and deploying the configurations.

Results Analysis Worksheet hosted on Google Docs

ESB Performance Testing Round 6 is a publicly accessible Google document hosting the summarized results.

Raw Execution Logs

The raw execution logs of the performance run for each ESB can be found in the ~/client-scripts directory, and the log files for each ESB can be found in the specific directories under ~/esbs.

Results

ESBs that failed to complete the benchmark

The following ESBs did not complete the full benchmark successfully, and hence were excluded from the final analysis. Details of the errors can be found in the README for each ESB.

  • Apache ServiceMix v4.3.0
  • Fuse ESB v4.4.1
  • JBoss ESB v4.11
  • Petals ESB v4.1

The rest of the ESBs were selected for the final analysis, for which we executed the full benchmark three times and used the best and the average TPS for each scenario. This step was introduced in Round 6 to validate that the slight variability of performance in the Amazon EC2 environment does not significantly affect the overall result.
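
As a simple illustration of how the best and average figures are derived from the three iterations, the sketch below aggregates per-scenario TPS values. The TPS numbers shown are made-up placeholders; the actual aggregation was done in the results worksheet.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustration of deriving the "best" and "average" TPS per scenario from the
    // three benchmark iterations. The values below are made-up placeholders.
    public class TpsAggregation {
        public static void main(String[] args) {
            Map<String, double[]> iterations = new LinkedHashMap<>();
            iterations.put("Direct Proxy", new double[] {3100, 3250, 3180});
            iterations.put("XSLT Proxy",   new double[] {1850, 1900, 1790});

            for (Map.Entry<String, double[]> e : iterations.entrySet()) {
                double best    = Arrays.stream(e.getValue()).max().orElse(0);
                double average = Arrays.stream(e.getValue()).average().orElse(0);
                System.out.printf("%s: best=%.0f TPS, average=%.0f TPS%n", e.getKey(), best, average);
            }
        }
    }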

The following ESBs completed the full benchmark without any significant issues:

  • Talend ESB SE v5.1.1
  • UltraESB v1.7.1 - Vanilla
  • UltraESB v1.7.1 - Enhanced
  • Mule CE v3.3.0
    • (encountered a few errors and failures during some of the individual tests)

Although the WSO2 ESB was selected for the final analysis based on the initial test run, it failed during the three iterations of the benchmark under load, and had to be retried:

  • WSO2 ESB v4.0.3
    • The ESB was stuck for over 6.5 hours with 100% CPU utilization. Since we had initially selected this ESB for the final run, we re-attempted the analysis, and the retry completed the test.

More details on the ESBs that failed during the benchmark under load follow.

WSO2 ESB v4.0.3 - more details

Many exceptions were seen in the logs during the load test; the complete log can be found at /home/ubuntu/esbs/wso2esb-4.0.3/repository/logs/crash/wso2carbon.log. Some snippets follow.

After many hours (~6.5 hours), the system was still fully blocked on CPU as per EC2 monitoring and top. As no progress was being made, the test was abandoned.

 

Trying out the Direct Proxy and Transport Header based routing Proxy scenarios with the WSO2 ESB server using its 'Passthrough Transport' generated the following log messages. When the passthrough transport is being used, none of the other use cases can execute simultaneously on the WSO2 ESB, even over multiple different HTTP/S ports. Hence any use of the Passthrough Transport is limited only to simple proxying and/or transport header based routing.

When we tried the WSO2 ESB again without the Passthrough Transport, exceptions were seen in the logs, as can be found at /home/ubuntu/esbs/wso2esb-4.0.3/repository/logs/wso2carbon.log from timestamp 2012-08-05 06:57:22 onwards. Some snippets follow.

Apache ServiceMix v4.3.0 - more details

The smoke test itself failed, with the following error:

During the load test iteration with 20 concurrent users against the XSLT Proxy, the ServiceMix ESB became almost unusable while the pre-load-test warmup phase was still executing. It was not actually servicing requests, but had become extremely slow (as observed via a tcpdump), and the log files did not report any faults.

Fuse ESB v4.4.1 - more details

Unlike in Round #5, the security smoke tests passed, so we used the full test suite against the Fuse ESB. The Fuse ESB became unresponsive at 2560 concurrent users running the Direct Proxy scenario with 500 byte messages. As in Round #5, we again saw log messages like the following after the ESB was restarted following a failure.

We noticed that in the Direct Proxy scenario, the SOAP headers were not proxied to the backend service. This could be due to a limitation in the configuration we tested, and may have been present for other scenarios as well. On the second run, the ESB became seemingly unresponsive after 160 concurrent users with 100K messages. A "top" showed the following output, but nothing was visible in the log files. Part of the thread dump obtained with a "kill -3 10573" appears at the end of this file. However, the test completed after approximately 3 hours.

JBoss ESB v4.11 - more details

During CBR on payload testing, we noticed that the response for a 5K request was only 368 bytes - i.e. a SOAP fault, indicating that the XPath expression had failed. Manual testing with the SOA ToolBox confirmed that the XPath evaluation would randomly pass or fail for requests, and we also noticed that while the load test was running, all XPath expressions were failing. We aborted further testing after this issue, since correct and consistent operation was a prerequisite for the benchmark, taking precedence over performance figures.

Petals ESB v4.1 - more details

The Petals ESB started giving errors from the smoke tests onwards, and after processing 20 concurrent users with 100K messages, the system became unusable, with a top output as follows:

The log file contained many error messages similar to the following:

 

Observations

The following graph shows the summarized TPS for each scenario across message sizes and concurrency levels, taking the best of the three iterations for each ESB included in the final analysis. (The graph using the average of the three iterations showed very similar results, and can be found in the Google Worksheet.)

 

TPS per scenario (best of the three iterations):

Scenario                   | Mule-3.3.0 | Talend-SE-5.1.1 | UltraESB-1.7.1 Vanilla | UltraESB-1.7.1 Enhanced | WSO2-ESB-4.0.3 | WSO2-ESB-4.0.3 PassThrough
---------------------------|------------|-----------------|------------------------|-------------------------|----------------|---------------------------
Direct Proxy               | 3,375      | 3,315           | 4,904                  | 4,839                   | 2,330          | 4,693
CBR Proxy                  | 3,198      | 3,108           | 3,700                  | 4,703                   | 2,428          | -
CBR SOAP Header Proxy      | 1,458      | 3,185           | 3,841                  | 5,063                   | 2,567          | -
CBR Transport Header Proxy | 2,262      | 3,751           | 5,394                  | 5,523                   | 2,764          | 5,795
XSLT Proxy                 | 2,225      | 2,333           | 1,957                  | 3,387                   | 839            | -
WS-Security Proxy          | 458        | 534             | 604                    | 603                     | 463            | -
Average                    | 2,163      | 2,704           | 3,400                  | 4,020                   | 1,899          | -

 

To simplify the results further, we computed the average TPS across all the scenarios benchmarked, for both the best case and the average case of the three iterations.
 

Average TPS across scenarios | Mule-3.3.0 | Talend-SE-5.1.1 | UltraESB-1.7.1-Vanilla | UltraESB-1.7.1-Enhanced | WSO2-ESB-4.0.3
-----------------------------|------------|-----------------|------------------------|-------------------------|---------------
Best Case                    | 2,163      | 2,704           | 3,400                  | 4,020                   | 1,899
Average Case                 | 2,068      | 2,560           | 3,220                  | 3,818                   | 1,807




Refer to the Google Document ESB Performance Testing Round 6 for the complete results.

  • The UltraESB - Enhanced version had a very clear lead over all the other ESBs across all scenarios, followed by the UltraESB - Vanilla version. The main difference in the "Enhanced" version is that the Saxon library (for XSLT performance) and the VTD XML and Fast VTD XML libraries (for CBR performance) are made available to the runtime (see the sketch after this list). These libraries are easily downloadable by end users, although not shipped by default due to licensing issues related to these libraries.
  • Talend ESB performed better than the Mule CE and WSO2 ESBs across the results, and showed very good stability
  • WSO2 ESB failed to complete the first full benchmark execution, running at full CPU utilization for over 6.5 hours, and had to be retried once more for the results shown above
  • Mule ESB showed a few failures and errors for some of the scenarios
  • Compared to Round #5, Talend ESB has improved its performance by addressing the issues reported. The README contains instructions on how the Talend ESB was patched to overcome those issues, along with the necessary steps to apply the freely available patches
  • Compared to Round #5, the Mule CE performance has improved over that of the WSO2 ESB.
  • Compared to Round #5, both the performance and the stability of the WSO2 ESB have suffered
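
As a rough sketch of what making the Saxon library available to the runtime amounts to (referenced in the first observation above), the snippet below selects Saxon as the JAXP XSLT implementation once its jar is on the classpath. The exact wiring used by the UltraESB "Enhanced" configuration is described in the benchmark repository and may differ.

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Sketch: once a Saxon jar (e.g. Saxon-HE) is on the classpath, it can be selected
    // as the JAXP TransformerFactory. This only shows the general idea; the "Enhanced"
    // UltraESB configuration is described in the benchmark repository.
    public class SaxonSketch {
        public static void main(String[] args) throws Exception {
            System.setProperty("javax.xml.transform.TransformerFactory",
                    "net.sf.saxon.TransformerFactoryImpl");

            String xslt = "<xsl:stylesheet version='2.0' "
                    + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                    + "<xsl:template match='/'><out><xsl:value-of select='/in'/></out></xsl:template>"
                    + "</xsl:stylesheet>";

            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader("<in>payload</in>")),
                    new StreamResult(out));
            System.out.println(out);    // prints <out>payload</out> plus an XML declaration
        }
    }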

Reproducing the results

Full details on reproducing these results on an Amazon EC2 instance can be found here 

Authors

Asankha C. Perera, Founder and CTO AdroitLogic
Ruwan Linton, Director of Engineering, AdroitLogic

Email: info@adroitlogic.com

Contributors

We value all contributions received to the ESB Performance Benchmark project on BitBucket, and the suggestions for improvements submitted privately by many.

 

Notices

WSO2 and WSO2 ESB are trademarks of WSO2 Inc. MuleSoft and Mule ESB are trademarks of MuleSoft. Fuse ESB is a trademark of FuseSource, Corp. Petals is a trademark of EBM WebSourcing. JBoss is a trademark of Red Hat, Inc. Talend is a trademark of Talend. Apache ServiceMix is a trademark of the Apache Software Foundation. UltraESB and AdroitLogic are trademarks of AdroitLogic Private Ltd. All other product and company names and marks mentioned are the property of their respective owners and are mentioned for identification purposes only.
