ESB Performance Testing - Round 4

This article is outdated - please refer to the latest round for the most up-to-date information on the results and the performance test framework.

1. Introduction

In this round of performance testing, we have compared the UltraESB 1.0-beta-2 (b3) against one of the best open source ESBs available. We tested the performance of a Direct Proxy service that simply virtualizes a service, a Content Based Routing (CBR) Proxy service using an XPath expression over a SOAP payload, an XSLT transformation Proxy service that transformed both the request and the response, and a WS-Security Proxy service that verified and applied timestamps, signatures and encryption.

2. Background

The author has conducted performance tests comparing many ESBs in the past. Round 1, conducted in June 2007, compared the WSO2 ESB to the leading proprietary ESB at the time, and Round 2, conducted in July 2007, compared the WSO2 ESB to the open source alternatives Mule and ServiceMix. Round 3, conducted in June 2008, compared the WSO2 ESB to Mule CE, ServiceMix, the leading proprietary ESB at the time, and the proprietary version of an open source ESB.

It was good to see other vendors such as BEA and Mule conducting the same performance tests, as the performance test framework and tools were shared freely with the public. However, certain vendors did not appreciate being included in the comparison, and thus we will not name or publish results of any other ESBs we compare against - instead we leave it up to real users to request the configurations required for these test scenarios from each vendor, so that they can run the tests themselves.

3. Improvements to the Performance test framework

With the advent of Amazon EC2, it has been a pleasure to move the performance testing to the cloud. This allows all vendors to run their implementations in the exact same environment, and in the future to fully automate the process and move the client, ESB and backend service to multiple nodes if desired. For simplicity, and to enable even end-users to test each ESB on their own, this test was run with the client, ESB and backend service simulation all on the same node. An EBS volume was created to hold the configurations and all necessary software to run the test, and the author is prepared to share this directly with any interested end-users so that the results can be independently verified.

3.1. Performance Environment

This round of testing was performed on a single "m1.large" instance of the AMI "ami-eef61587" - a 64 bit Ubuntu 9.04 server. The instance had 7.5 GB of RAM and 4 EC2 Compute Units of CPU. Both ESBs were compared on the exact same instance one after the other, using JDK 1.6.0_18 and a heap memory allocation of 2GB for each. A NIO based Echo Service was developed for this run, as the earlier Tomcat/Servlet based implementation and a Jetty/Servlet based implementation both failed to keep up with the load of 2560 concurrent users through the ESB layer using a reasonable number of threads. The NIO based Echo service, the load test client and the sample requests used all ship with the UltraESB distribution, which can be downloaded from
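
As a rough illustration of why a non-blocking (NIO) design copes with thousands of concurrent users where thread-per-request servlet containers struggle, here is a minimal asynchronous HTTP echo server sketch in Python. This is purely illustrative - the actual echo service is the Java NIO implementation shipped with the UltraESB ToolBox, and the port and parsing details below are assumptions:

```python
import asyncio

# Illustrative non-blocking HTTP echo server (NOT the UltraESB echo
# service; port and parsing are assumptions for this sketch).
async def handle(reader, writer):
    # Read until the end of the HTTP headers
    data = b""
    while b"\r\n\r\n" not in data:
        chunk = await reader.read(4096)
        if not chunk:
            writer.close()
            return
        data += chunk
    head, _, body = data.partition(b"\r\n\r\n")
    # Determine the body length from the Content-Length header
    length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
    while len(body) < length:
        body += await reader.read(4096)
    # Echo the request body back verbatim
    resp = (b"HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\n"
            b"Content-Length: %d\r\n\r\n" % len(body)) + body
    writer.write(resp)
    await writer.drain()
    writer.close()

async def main(port=9000):
    server = await asyncio.start_server(handle, "127.0.0.1", port)
    async with server:
        await server.serve_forever()
```

A single event loop thread multiplexes all connections, so concurrency is bounded by open sockets rather than by a thread pool - the key property that let the NIO echo service sustain 2560 concurrent users.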

We invite the developers and/or vendors of both proprietary and open source ESBs to share configurations for the scenarios tested by this framework, so that end-users can test and compare the performance of these on their own. This will also allow the vendors/developers to state the most optimal configurations, instead of someone new to those ESBs developing a possibly sub-optimal configuration and reporting results publicly.

4. Performance Scenarios

This round includes all three scenarios from Round 3, and introduces a fourth scenario for WS-Security testing.

4.1. Virtualization, where the ESB hides the actual service and performs message routing

In this scenario we create a proxy service for the actual Echo service, which simply passes messages on to the real service. This pattern allows users to hide real services behind proxies exposed over the ESB, and thereby prevents direct application-to-application, application-to-service and/or service-to-service links from being created within an enterprise, bringing order to a Service Oriented Architecture (SOA) deployment. Refer to Round 3 for more information.

4.2. Content-based Routing (CBR), where the ESB routes on data within the message

The Content Based Routing (CBR) proxy service evaluates an XPath expression over the payload of each message before it is routed to the real service. For this example, we use an XML payload with a list of "order" elements, and check whether the first order element is for the symbol "IBM". If this condition is satisfied, we forward the message to the actual service implementation; otherwise we return an error. Typically messages are routed on transport/SOAP headers, user defined header elements, or payload body elements.
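
The routing decision can be sketched as follows. This is an illustrative Python equivalent of the XPath check, and the payload structure (an "orders" wrapper with "order"/"symbol" children) is an assumption - the actual sample requests ship with the UltraESB distribution:

```python
import xml.etree.ElementTree as ET

# Illustrative CBR check: route to the real service only if the first
# 'order' element is for the symbol "IBM". The element names used here
# are assumptions for this sketch.
def route_decision(payload: str) -> bool:
    root = ET.fromstring(payload)
    first_order = root.find(".//order")  # first matching 'order' element
    if first_order is None:
        return False
    return first_order.findtext("symbol") == "IBM"

payload = """<orders>
  <order><symbol>IBM</symbol><qty>10</qty></order>
  <order><symbol>MSFT</symbol><qty>5</qty></order>
</orders>"""
# route_decision(payload) -> True: forward to the actual service
```

The cost of this check depends heavily on how the XML is parsed - evaluating over a fully built DOM tree versus a streaming (Stax-style) parse is exactly the difference discussed in the observations below.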

4.3. Transformation, where the ESB transforms the request and response messages using XSLT-based transformations

In this scenario, we use an XSLT transformation to reverse the element names before forwarding to the Echo service. The reversed response received is translated back to the original format by another reversal. This is a typical use case when newer versions of a service are exposed and a subset of its users requires backwards compatibility with the previous schemas. Refer to Round 3 for more information.
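
The effect of the transformation can be sketched as follows. The real test uses an XSLT stylesheet; this illustrative Python equivalent simply reverses each element's name, and the tag names used are assumptions:

```python
import xml.etree.ElementTree as ET

# Sketch of the name-reversal transformation used in the XSLT scenario:
# every element's name is reversed. The actual test applies an XSLT
# stylesheet; this is an illustrative equivalent only.
def reverse_names(elem):
    elem.tag = elem.tag[::-1]
    for child in elem:
        reverse_names(child)
    return elem

msg = ET.fromstring("<echo><value>hello</value></echo>")
reversed_msg = reverse_names(msg)
# ET.tostring(reversed_msg) -> b'<ohce><eulav>hello</eulav></ohce>'
# Applying reverse_names() again restores the original names, which is
# why the proxy can transform both the request and the response.
```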

4.4. WS-Security, where the ESB acts as a Security Gateway, receiving and validating secure messages and forwarding them to the actual service, and secures the response before sending it to the original client

In this newly introduced scenario, we receive a Timestamped, Encrypted and Signed SOAP request, verify the message, and forward the decrypted message to the Echo service. The response is Timestamped, Signed and Encrypted again, and returned to the client. This demonstrates the ESB acting as a WS-Security gateway. In real world scenarios, Username Token authentication, HTTP Basic/Digest authentication and SSL are used as well to secure messages.

5. Results

The tabulated results are given for payload sizes of 512 bytes, 1K, 5K, 10K and 100K, and for 20, 40, 80, 160, 320, 640, 1280 and 2560 concurrent users, as applicable to the various scenarios. "UE" in the results denotes the UltraESB TPS, while "O" denotes the TPS obtained by the open source ESB. Results highlighted with a yellow background resulted in timeout errors at the client, where the timeout was set to 2 minutes.

Thus "UE-500b" denotes the UltraESB TPS for a 512 byte payload, while "UE-10K" denotes the UltraESB TPS for a 10K payload. Similarly "O-100K" denotes the TPS of the other ESB for a 100K request, etc.

5.1. Direct Proxy scenario

Users     UE-500b   O-500b    UE-1K     O-1K      UE-5K     O-5K      UE-10K    O-10K     UE-100K   O-100K
20        1377.8    1688.14   1328.82   1667.1    1192.52   1016.78   1037.62   648.12    415.03    87.34
40        1498.86   1896.63   1458.87   1739.49   1257.77   1115.19   1136.48   663.46    436.77    85.93
80        1237.43   1666.66   1309.56   1528.85   1195.75   1025.69   1037.69   670.41    434.13    87.34
160       1483.7    1759.46   1480.16   1652.72   1342.7    1078.92   1697.69   660.66    432.93    86.87
320       2035.28   1883.52   1821.59   1760.46   1437.45   1138.53   1218.13   673.04    315.14    86.11
640       3174.18   3130.33   2887.93   4507.4    2149.7    1100.27   1648.23   753.26    476.45    95.69
1280      4499.02   1817.97   4527.38   2289.17   2293.27   1618.2    -         -         -         -
2560      3102.67   3896.53   4448.38   1721.49   3994.75   1541.17   -         -         -         -
Average   2301.12   2217.41   2407.84   2108.34   1857.99   1204.34   1295.97   678.16    418.41    88.21
[Charts: Direct Proxy TPS for 500b, 1K, 5K, 10K and 100K payloads]

5.2. Content Based Routing (CBR) Proxy

Users     UE-500b   O-500b    UE-1K     O-1K      UE-5K     O-5K      UE-10K    O-10K
20        513.2     1643.61   564.79    1448.87   390.7     773.45    255.82    436.64
40        646.71    1655.2    610.62    1564.87   399.9     822.51    265.22    453.74
80        654.81    1560.78   600.55    1393.28   419.4     762.87    265.77    448.46
160       672.83    1670.53   640.41    1504.46   394.16    780.39    275.61    437.92
320       696.25    1779.18   667.51    1588.88   423.84    800.78    260.39    451.46
640       798.85    1815.1    817.33    2158.34   641.9     878.16    539.84    390.1
1280      986.13    2345.42   1281.48   1589.06   799.93    801       -         -
2560      1099.81   3553.94   989.96    3121.88   596.6     1043.98   -         -
Average   758.57    2002.97   771.58    1796.21   508.3     832.89    310.44    436.39
[Charts: CBR Proxy TPS for 500b, 1K, 5K and 10K payloads]

5.3. XSLT Proxy

Users     UE-500b   O-500b    UE-1K     O-1K      UE-5K     O-5K      UE-10K    O-10K
20        516.04    555.78    468.47    459.62    330.58    169.38    245.38    84.86
40        555.06    600.36    501.73    474.02    354.01    169.74    250.9     85.8
80        554.65    588.41    487.36    467.71    358.09    167.97    254.86    82.4
160       583.53    603.34    482.77    475.73    364.06    157.08    258       71.1
320       568.82    604.64    558.81    467.21    391.4     138.01    254.9     63.22
640       904.38    565.06    1121.37   431.88    456.98    118.62    400.1     66.74
1280      792.66    594.93    795.4     413.6     634.04    101.12    -         -
2560      476.1     887.88    653.61    383.81    749.95    122.73    -         -
Average   618.91    625.05    633.69    446.7     454.89    143.08    277.36    75.69
[Charts: XSLT Proxy TPS for 500b, 1K, 5K and 10K payloads]

5.4. WS-Security Proxy

Users     UE-500b   O-500b    UE-1K     O-1K      UE-5K     O-5K
20        145.37    47.96     133.32    47.99     99.39     42.68
40        146.86    48.67     136.46    47.51     101.14    42.28
80        155.05    50.19     140.11    49.49     99.45     42.46
160       151.18    50.58     152.07    48.65     102.69    41.33
320       152.36    48.46     150.57    47.72     104.14    39.05
640       240.84    50.26     231.22    47.86     138.99    40.1
1280      184.47    50.59     170.83    49.79     -         -
2560      151.95    46.65     275.68    51.18     -         -
Average   166.01    49.17     173.78    48.77     107.63    41.32
[Charts: WS-Security Proxy TPS for 500b, 1K and 5K payloads]

6. Observations

  1. Direct Proxy - For tiny messages and low loads, the Other ESB performed slightly better than the UltraESB. However, the UltraESB performed better overall on average for each payload size, and with larger messages and more concurrent users it emerged with a clear lead of about 5 times the TPS of the other ESB.

    Note: On EC2 and other such virtualized environments, the Zero-Copy proxying ability of the UltraESB may not be fully utilized. Thus, on real server grade hardware (e.g. when deployed as an appliance), the UltraESB is expected to perform better.

  2. Content Based Routing (CBR) Proxy - The Other ESB emerged a clear winner in this case, as the UltraESB performed XPath evaluation over a DOM parsed copy of the payload, while the other ESB used Stax parsing. From a usability point of view however, the UltraESB supports XPath 2.0, while the Stax based implementation, which relies on Jaxen, is limited to XPath 1.0. The other ESB's lead was about 2.5 times for tiny payloads, and narrowed considerably for larger payloads.

    Note: The UltraESB team intends to soon support Stax based XPath evaluation as well, and these results are expected in the next round of performance testing.

  3. XSLT Proxy - For tiny messages and low loads, the Other ESB performed slightly better than the UltraESB. However, the UltraESB performed much better with larger payloads - up to 3.5 times the TPS on average, and about 6 times that of the other ESB for 10K payloads at 640 concurrent users.

  4. WS-Security Proxy - The UltraESB emerged a clear winner in this case, with its custom WS-Security library implementation. The Other ESB relied on WSS4J/Rampart, which is well known for its performance drawbacks. On average the UltraESB delivered over 3 times the TPS of the Other ESB in the scenarios where both could be tested within reasonable time bounds. The results obtained show even better performance by the UltraESB with larger loads or payload sizes.

7. Author

Asankha Perera, AdroitLogic

You may contact the author directly to obtain more information on the performance test framework or procedure, and to request a comparison against another ESB of choice. If you are an ESB developer or vendor, you may submit suitable configurations for your ESB and request that it be included in future rounds of testing. If you are an end user of an ESB, you may request the above configurations for comparison from your vendor/developers, and obtain free support from the author to conduct your own comparison test.

8. Resources

  1. The sample requests used for the test can be found in the samples/resources/requests directory of the UltraESB installation.

  2. The NIO based Echo service can be started by passing the "-echo" option to the UltraESB ToolBox as shown below.

  3. The Apache Bench style load test client is embedded in the UltraESB ToolBox, and can be executed with the provided scripts as shown below, or from the command line - from the lib/samples directory as "java -jar ToolBox.jar".

  4. An article that explains the scenarios in detail, and how another ESB of your choice could be compared against the UltraESB can be found here

9. Steps to run the load test on Amazon EC2

  1. Start an instance of the AMI "ami-eef61587" on EC2

  2. SSH to the instance and execute the following as root

    mkdir /home/asankha
    mount /dev/sda /home/asankha
    adduser asankha
    echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
    echo "30" > /proc/sys/net/ipv4/tcp_fin_timeout
    echo "2097152" > /proc/sys/fs/file-max
    echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle
    echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse

    Note: this assumes you have access to the contents of the EBS used for performance testing, and attached it as /dev/sda to the instance. If not, do the following:

  3. Open 3 more SSH sessions, and execute the following on each session

    ulimit -n 8192
    su - asankha
    export PATH=/home/asankha/jdk1.6.0_18/bin:$PATH
    export JAVA_HOME=/home/asankha/jdk1.6.0_18

  4. Start the Echo service as follows on one of the new SSH sessions

    nohup ./ -echo &

  5. Download and extract the contents of the file to /home/asankha/PerformanceTesting/ultraesb-1.0-beta-2/conf

  6. Edit the /home/asankha/PerformanceTesting/ultraesb-1.0-beta-2/bin/wrapper.conf and set the Heap size to 2G as follows:

    Also, disable the auto JVM restart of the Java service wrapper by setting the following:
  7. Start the UltraESB as a daemon process by executing the following on another SSH session

    ./ start
  8. Download the scripts from and extract to the directory /home/asankha/PerformanceTesting/LoadGenerator. Execute the load test script from this directory as follows from a new SSH session started in Step 4

    http://localhost:8280/service > ue-r3.txt
  9. Download the log file to the desktop when the test completes, convert it into a CSV file by executing the ab-to-csv utility, and analyze as desired

    java -jar ab-to-csv.jar ue-r3.txt > ue-r3.csv