1. Architecture and Design

This chapter looks at the overall architecture of the UltraESB from an end-user's point of view. Having an overall understanding of the key elements of the ESB, and how they interact, will help end-users develop better enterprise integration solutions. This guide will also show the user how the UltraESB may be extended or customized.

At the end of this chapter, the user should have a good understanding of the overall architecture and key component design of the UltraESB, and will then be ready to dive deep into the UltraESB with its Configuration and Administration guide to understand the advanced configurations.

1.1. Overall Architecture of UltraESB

The overall architecture of the UltraESB is discussed briefly under two main categories in this section of the documentation.

1.1.1. Deployment Architecture

The UltraESB may be deployed as a stand-alone server, a cluster, or deployed onto a JEE servlet container (e.g. Tomcat) or an application server (e.g. JBoss). For production deployments, the standalone server is recommended, as it is better to allow the UltraESB its own JVM instance. The standalone instance can be started on the command line, or executed as a service. For production deployments a Linux OS is recommended, while for development activities, a Linux or Windows workstation is supported.


The UltraESB is typically fronted by a hardware load-balancer for HTTP/S based messages. For messages received over the internet a firewall is recommended. The UltraESB clustering is based on Apache ZooKeeper, and hence the whole cluster can be managed by connecting to any node via JMX. The nodes in the cluster would generally run an identical configuration made available over a shared drive, or version control system.

An instance may be managed via the:

  • Web based management and monitoring console - UConsole

  • Command Line Interface (CLI) and scriptable client - UTerm

It is interesting to note that both the UConsole and UTerm may be run outside of the core cluster, on separate dedicated machines - or even from a remote developer desktop. The core ESB is thus completely separate from its management interfaces, which connect to it only via JMX. All ESB nodes publish JMX based statistics, and expose management methods. The UConsole is able to automatically register metrics of nodes against a Zabbix server instance. Zabbix is a free and open source enterprise management and monitoring system, which can collect historical metrics, generate graphs and fire triggers based on thresholds to issue notifications on SLA violations, possible warnings or issues via email/SMS etc.

Apache ZooKeeper is an enterprise clustering framework developed at the Apache Software Foundation, and used in Apache Hadoop and other large scale software systems across many large internet based companies.

1.1.2. Product Architecture

The UltraESB is internally a Spring framework based application, configured by a typical Spring configuration already familiar to many developers. While using the Spring framework syntax to configure all static aspects of the ESB (e.g. Transports, JMX, Work Managers, File caches etc), the dynamic aspects are configured with an extended customized syntax injected into the Spring configuration. These are the elements listed below.

Dynamic Configuration Elements - Services, Endpoints and Sequences

Proxy Services are the basic units of deployment on the UltraESB. They define a service exposed over one or more transports, and use one or more sequences and endpoints to define the behavior of the service. A service can accept messages over any of the existing transports supported by the UltraESB - or over custom transports implemented by users (e.g. TCP based proprietary transports etc).

A Sequence defines the actions to be performed on a message received over a transport, and may be specified as a Java code fragment, Java class, Spring bean, or a JSR 223 (e.g. Groovy, JavaScript, Ruby, etc) script fragment or file. The ability to specify this mediation logic in Java or a JVM based JSR 223 scripting language allows trivial integration with any third party libraries or custom code. In addition, all features of the Java programming language are available to the user - such as exception handling via try-catch-finally blocks, the ability to code a sequence as a class/methods and use inheritance, IDE based development, auto-completion, debugging and unit testing etc.

An Endpoint defines how a message should be sent out of the ESB to another service over any supported transport (e.g. HTTP/S, JMS, File, S/FTP, Email, MLLP, etc), including specification of load balancing and fail-over logic. Additionally, an endpoint is used again when an HTTP/S style message needs to be sent back over the original connection as a response to a client making the request.

While standard Java exception handling can be fully utilized during the execution of a sequence, a sequence can also assign another sequence as an error handler for any un-handled exceptions. Similarly a Proxy service can assign an error handler which will be invoked when an un-handled exception is encountered by a sequence or endpoint. An error handler will typically report the error along with the error code and description made available with the message encountering the problem to a log file and/or database or external system (e.g. JMX), and try to perform recovery measures.

sample proxy definition

The above image shows a sample Proxy Service configuration, with in-lined sequence definitions and endpoint definitions. This XML configuration snippet is a valid Spring configuration with the extended schema, and is processed by the UltraESB to create the different configuration elements. The proxy service always has an ID (1), which usually influences the URL of the service (e.g. default HTTP/S based proxy services are made available at the URL /service/<proxy-id>). The URL of a service can of course be user defined - including with a pattern (e.g. "/banking/load-*" etc). A service also specifies the transport ID (2) where it is made available to receive messages. One or more transports can be specified along with service specific transport configuration properties. The inSequence processes messages arriving at the proxy service and is optional. The optional inDestination specifies a target for the message after processing - usually another external service, a JMS destination, a file system, Email etc. A sequence may also select the target destination endpoint during mediation - for example, by looking at message headers, or via content based routing. If an endpoint returns a response, it will be processed by an outSequence, and then forwarded to an outDestination - usually as a response back to the original client.
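For readers without access to the image, the configuration described can be sketched roughly as follows. The element names use the <u:…> namespace style of the address and property snippets shown later in this chapter, but the exact structure, IDs and addresses here are illustrative assumptions, not the actual sample:

```xml
<u:proxy id="echo-proxy">                       <!-- (1) the proxy service ID -->
    <u:transport id="http-8280"/>               <!-- (2) the transport ID -->
    <u:target>
        <u:inSequence>
            <!-- optional mediation of the incoming message -->
        </u:inSequence>
        <u:inDestination>
            <u:address>http://localhost:9000/service/EchoService</u:address>
        </u:inDestination>
        <u:outSequence>
            <!-- optional mediation of the response -->
        </u:outSequence>
        <u:outDestination>
            <u:address type="response"/>        <!-- reply to the original client -->
        </u:outDestination>
    </u:target>
</u:proxy>
```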

Deployment Units

Based on the deployment and management aspects, the above dynamic configuration elements are grouped into a single component called a "Deployment Unit". A deployment unit is a collection of proxy services, sequences/mediation, endpoints, and any custom resources and libraries used by the mediation flows. Deployment units are either directories or zip/uda archives placed in the conf/deployments directory. There must be an ultra-unit.xml Spring configuration file at the top level of the deployment unit. These can be loaded, unloaded or updated without bringing down the ESB for a maintenance shutdown. Changes or updates can be easily triggered cluster wide as well, and the UltraESB guarantees that any message exchange will use only the previous version or the updated version.

The grouping itself has no significance to the configuration, except that the configuration elements defined in a deployment unit are not shared with other deployment units by default. So any re-usable sequence or endpoint is only available to the artifacts within that deployment unit. This is by design, to make sure that a particular deployment unit can be administered (undeployed/updated) without worrying about the other deployment units running on the system.

While it is possible to define artifacts that are shared with other deployment units by specifying a "shared" attribute with the value true on any sequence or endpoint, it is recommended that the configuration developer try their best to avoid this, as it affects the ability to reload the deployment unit at runtime.

Globally shared static artifacts
It is possible to define globally shared static configuration elements, such as sequences and endpoints, which are available to artifacts defined in all deployment units, by defining them in the ultra-custom.xml Spring configuration file. However, the downside is that it is not possible to update elements defined in the ultra-custom.xml configuration at runtime.
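As an illustration, a globally shared sequence defined in ultra-custom.xml might look like the following sketch (the <u:sequence>/<u:java> element names, the ID and the body are assumptions written in the style of this configuration language, not a verified sample):

```xml
<u:sequence id="auditSequence">
    <u:java><![CDATA[
        // "msg" and "logger" are the special variables available to sequences
        logger.info("Auditing message before mediation");
    ]]></u:java>
</u:sequence>
```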

Deployment units can contain libraries and any other resources that are local to a particular deployment unit, meaning that it is possible to use 2 different versions of the same library for 2 deployment units running on the same ESB server.

Core Static Configuration Elements

These are the static configuration elements that define the configured transports, Work Managers, File Caches, JMX connection semantics and other static aspects of an ESB configuration. The user is responsible for selecting the required transports for a solution, from the set of supported transports, and for configuring them as per production deployment requirements. For example, the HTTPS transport will require the user to set up certificates, while the JMS transport will require the user to configure JMS provider access via standard Spring notations. Similarly, other transports will require different configurations. These are usually defined in conf/ultra-root.xml - and one must note that the default configuration provided is just a sample; the user must thus add other transports (e.g. JMS, Email, S/FTP etc) as required into this file.

  • Transport Listeners

  • Transport Senders

  • Work Managers

  • File Caches

Transport listeners and senders allow the ESB to accept messages from, or send messages to, external sources via the specified transports. Work managers are execution thread pools optionally assignable to proxy services for mediation. This allows some services to be allocated a larger or higher priority thread pool, while others use one or more different thread pools. By default, work managers are hidden from users, as a system-wide default work manager is assigned to each proxy service, unless the user customizes this aspect. A file cache manages a cache of pre-created files, based on a RAM disk or memory-mapped, which are used by the ESB to write message payloads. By creating the files up-front, and by memory mapping them or using a RAM disk, this provides a very efficient way to store message payloads without causing garbage collection (GC) overhead to the JVM, and facilitates zero-copy transfer of messages from the network interface to the files.

Custom Configuration Elements

By default, conf/ultra-root.xml statically imports the elements defined in the file conf/ultra-custom.xml - which is left for the user to define aspects, Spring beans etc. that are specific to the usage domain of the deployment. Thus this file could include definitions for custom Spring beans used by the ESB configuration, as well as any re-usable sequences, endpoints or proxy services. However, note that statically defined elements - such as those in ultra-root.xml, ultra-custom.xml and any other configuration fragments - cannot be changed while the ESB is in operation.


Environments are a set of named deployment configurations. There are a few pre-built environments, and you can define your own environments as well. An environment is an easy way of switching a configuration from development to test to staging to production. A particular environment is associated with a set of environment properties which define the deployment configuration. For example, it defines the number of parser instances to be cached in the system, where the value could be just 1 for development, 10 for test, and 1000 for staging and production etc. Apart from that, an environment is used to determine whether a particular feature is turned on or off in the setup; for example, "Binary Class Reloading" is enabled on production and may not be on test.

These named configurations help users optimize their setup for its purpose, where the development environment consumes less memory compared to the production environment etc. While it is possible to define new environments, it is also possible to override a certain set of properties of a named environment. The following table lists the pre-built environments and their respective default configurations.




[Table not reproduced in this extract: the pre-built environments are Integrated Development Environment, Unit testing & Build, Sample configuration execution, and Production deployment; for each, default settings are given for Deployment Units, Tuned for Performance, and Class Reloading.]
1.2. Deployment Unit

The deployment unit is a new concept introduced in the 2.0.0 version of the UltraESB, which allows user deployable artifacts such as proxy services, sequences and endpoints to be grouped into a single entity. Further, it provides the ability to have deployment unit specific libraries, and enables updating of these grouped elements as a single unit, transactionally and atomically.

1.2.1. Overview

A deployment unit is a directory or a zip/uda (ultra deployment archive) archive containing a file named ultra-unit.xml at its root. Deployment units are placed in the conf/deployments directory of the UltraESB installation, and can be provisioned/added, unloaded or updated at runtime. Apart from the mandatory ultra-unit.xml configuration file, which consists of proxy services, sequences and endpoints, a deployment unit can optionally have mediation classes, third party or custom libraries, and any resources used by the deployment unit artifacts, such as WSDLs, WADLs, schemas or XSLTs etc.

The structure of a typical deployment unit is as follows:

Deployment unit file structure
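In place of the structure image, a typical layout can be sketched as follows (the classes, lib and resources directory names are taken from the scoping discussion in this section; the exact names in a real unit may differ):

```
conf/deployments/
└── default/
    ├── ultra-unit.xml   <- mandatory unit configuration
    ├── classes/         <- unit-local mediation classes
    ├── lib/             <- unit-local libraries
    └── resources/       <- WSDLs, schemas, XSLTs etc.
```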



The name of the deployment unit is determined by the name of the directory or archive (without the extension). So the above example directory structure creates a deployment unit named "default".

1.2.2. Configuration and Class Scope

Artifact configurations such as sequences and endpoints are locally scoped within the deployment unit by default. This makes sure a given deployment unit is self-contained, and can be easily updated/reloaded at runtime without an external consumer seeing any service downtime.

Any libraries or classes placed into the deployment unit's lib or classes directory are also loaded at the local scope of the deployment unit. This allows 2 different versions of the same library to be used within 2 deployment units, or even within 2 versions of the same deployment unit, on the same UltraESB server runtime.

1.2.3. Hot Update

The ability to do hot updates of a deployment unit enables administrators to deploy/update configurations without worrying about the availability of not just the other deployment units, but also of the deployment unit being updated.

While the update is in progress, messages received by the proxy services of that deployment unit are accepted and processed by the already existing configuration. The update actually builds a new version of the deployment unit configuration, while the existing configuration is maintained as it is. Once the configuration preparation is done, the newly built configuration is attached to the deployment unit, and the previous configuration is marked as outdated, upon which any new message will be dispatched to the new proxy services defined in the new configuration.

Even after this point, the previous configuration is maintained intact, as messages already accepted by the proxy services of that previous configuration will transactionally use that same configuration, even though any new message uses the new version. This guarantees operational and mediation correctness, while making sure the system is 100% available while doing an update.

1.3. Proxy Services

This section gives an architectural overview of proxy services; refer to the Proxy Service Configuration to get an understanding of how you can develop proxy services within the UltraESB.

A Proxy Service is the basic unit of deployment on the UltraESB. A Proxy Service is identified by a unique name, and specifies one or more transports over which it expects to receive messages. Optionally the service may also specify a specific address over the transport/s at which it expects the messages (e.g. for HTTP/S a custom URL), and other properties that will configure its behavior against the transport (e.g. Email/JMS etc).

simple proxy

When a message arrives at the proxy service as per its transport configuration, the UltraESB hands over the message to the 'In Sequence' of the service for mediation. Specification of a mediation sequence is optional, and the mediation sequence could analyze the message headers and/or payload to perform content based routing, logging, reporting, transformations, or any other mediation or third party call etc. After mediation, the message may be forwarded to another service / endpoint - which typically is a real service provider in many use cases. This external service may be passed the message over the same transport as the received message, or via any of the other transports supported by the UltraESB. The outgoing message endpoint may be selected during the mediation sequence, or specified as the inDestination - i.e. the default destination for an incoming message. This service may be a synchronous service or an asynchronous service (e.g. one way JMS, email etc), and an endpoint may provide one or more addresses - with load balancing, fail-over and other semantics. If/when the external service responds with a response message, it is handed over to the 'Out Sequence' of the Proxy Service for mediation. Typically, after a received response message is mediated, it will be sent back to the original requester by the proxy service via the outDestination - i.e. the default destination for an outgoing message.

A 'Destination' or 'Endpoint' is a definition of an external service endpoint to the UltraESB. A Sequence is a set of mediation steps - specified in the Java programming language as a fragment, class, Spring bean, or in any JSR 223 scripting language fragment or file - such as Groovy, JavaScript, etc - using the public API exposed by the UltraESB. Destinations and Sequences may be defined locally for a Proxy Service (i.e. in-lined within the proxy service definition itself), or defined globally so that multiple proxy services may re-use them by referencing their IDs. A Proxy Service may also define an 'Error Sequence' which is invoked by the UltraESB when an un-handled exception occurs during mediation or at an endpoint.

Support for Enterprise Integration Patterns [EIP] is discussed in the reference guide.

1.4. Endpoints, Destinations and Addresses

An Endpoint describes an external destination for a message sent out from the ESB. Most commonly, a proxy service will perform some mediation on an incoming message (e.g. log, transform etc) and then send it out to an external service endpoint or destination. Note that both 'Endpoint' and 'Destination' refer to the same concept. Since the most common use case on an ESB always defines an Endpoint for an incoming message accepted by a proxy, we allow one to easily specify this via the 'inDestination' (instead of calling this the 'inEndpoint'). Similarly for an outgoing message through a proxy, we allow the user to define an 'outDestination' - which for HTTP/S style transports most often is the response channel back to the synchronous client which made the original request.

1.4.1. Address Types

An Endpoint/Destination defines one or more 'Addresses', depending on the type of the endpoint. An address can be one of the following types:

  • URL - sends the message to the specified absolute URL

  • Prefix - sends the message to the absolute URL made up by adding the 'prefix' to the part already specified on the message

  • Default - sends the message to the destination already specified on the message

  • Response - sends the message as a response to the original client for HTTP/S style transports

URL based Addresses

URL based addresses are the most common type of address across many transports, and hence the default type if another type is not specified. Some examples follow.
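For illustration (the target addresses themselves are assumptions), a URL address simply in-lines the absolute target URL, over any transport whose scheme it carries:

```xml
<u:address>http://localhost:9000/service/EchoService</u:address>
<u:address>mailto:orders@example.com</u:address>
```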

Prefix based Addresses

Prefix based addresses are most common when the absolute destination for a message depends on some URL prefix + a trailing section of the incoming URL (or another custom assigned URL postfix). Refer to the 'rest-proxy' of the ESB sample #101 (line 42), where an incoming request to a URL '/service/rest-proxy/customer/details?name=john', for example, needs to be sent to 'http://localhost:9000/rest-services/customer/details?name=john' - where the prefix 'http://localhost:9000/rest-services' is static, and the postfix is dynamically looked up from the message. A prefix endpoint must declare its 'type' as follows:

<u:address type="prefix">http://localhost:9000/rest-services</u:address>
Default Address

Sometimes the destination address may not be static, or may not have a common prefix at all times. An example is when the destination address is picked up from a database, registry or even another service etc. In such cases, the destination address must first be set on the message via the Message.setDestinationURL() API call. Refer to the ESB sample #502 (line 126) for an example, where the destination Email address is set during mediation, and a 'default' address is then used to send the message.

<u:address type="default"/>
Response Address

The 'response' address is a special address for HTTP/S style interactions, where a message should be sent as a synchronous response to an incoming request message. Although the response may actually be sent asynchronously, to the HTTP/S style client making a request to the ESB, the response will appear as a synchronous response on the same TCP socket. An example definition follows:

<u:address type="response"/>

1.4.2. Endpoint Types

Endpoints will belong to one of the following types, as specified by the 'type' attribute.

Single Endpoints
ep single

A 'single' endpoint specifies exactly one address. Hence this single address is never suspended from use - as it is always better to try the only known address when no other choices exist. Any error while sending to the address is handed to the immediate error handler.

Round-Robin Endpoints without Fail-Over
ep rr wo fo

A Round-Robin endpoint without fail-over cycles through the available addresses for each new message. Whenever an error occurs the message is handed to the immediate error handler, without a fail-over to another address within the group.

Round-Robin Endpoints with Fail-Over
ep rr fo

A Round-Robin endpoint with fail-over cycles through the available addresses for each new message. Whenever an error occurs the message is retried by the next available address. If all addresses fail to deliver the message it is handed to the immediate error handler. See Fail-Over behavior.
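Such an endpoint could be sketched as follows (the 'type' value and the fail-over switch shown here are assumptions inferred from the endpoint types described above, not verified syntax; the addresses are illustrative):

```xml
<u:endpoint id="stock-ep" type="round-robin" failover="true">
    <u:address>http://node1:9000/service/StockService</u:address>
    <u:address>http://node2:9000/service/StockService</u:address>
</u:endpoint>
```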

Fail-Over only Endpoint
ep fo

A Fail-Over only endpoint will always try the first address to send a message - failing which, the next address is attempted, and so on. If all addresses fail, the message is handed over to the immediate error handler. This may be used for Active-Passive service invocations, where the passive address will only be used if the active address fails. See Fail-Over behavior.

Weighted Endpoint without Fail-Over
ep weighted wo fo

A weighted endpoint without fail-over will use the available addresses at random, while ensuring that the frequency of use of a particular address is in proportion to its weight. Hence an address with a weight of 2w is twice as likely to be used as an address with a weight of just w. Whenever an error occurs, the message is handed over to the immediate error handler. If any of the addresses are suspended due to serious suspension errors or continued transient errors, new weights are re-calculated from among the remaining addresses. See Fail-Over behavior.

Weighted Endpoint with Fail-Over
ep weighted fo

A weighted endpoint with fail-over will use the available addresses at random, while ensuring that the frequency of use of a particular address is in proportion to its weight. Hence an address with a weight of 2w is twice as likely to be used as an address with a weight of just w. Whenever an error occurs, the message is retried with another address from the endpoint, and if all addresses are exhausted, it is handed over to the immediate error handler. If any of the addresses are suspended due to serious suspension errors or continued transient errors, new weights are re-calculated from among the remaining addresses. See Fail-Over behavior.

Random Endpoint without Fail-Over
ep random wo fo

A random endpoint without fail-over will use the available addresses at random. Whenever an error occurs the message is handed over to the immediate error handler. See Fail-Over behavior.

Random Endpoint with Fail-Over
ep random fo

A random endpoint with fail-over will use the available addresses at random. Whenever an error occurs the message is retried again with another address from the endpoint, and if all addresses are exhausted, it is handed over to the immediate error handler. See Fail-Over behavior.

1.4.3. Endpoint Error Handling

endpoint failover

Irrespective of the fail-over behavior, an endpoint will mark failures encountered by addresses as temporary failures or suspension failures depending on the error code raised. By default, the temporary error indication codes are:

  • 101508 - Sender connect timeout (TCP level connection establishment failure within the specified (default 10) number of seconds)

  • 101504 - Sender connection Timeout (expiration of the specified, or default connection timeout from the ESB side sender)

  • 101500 - Sender IO error during sending

  • 101506 - Sender detection of a HTTP/S protocol violation during send

  • 101505 - Sender detection of a connection close by remote party during send

  • 101510 - Rejection by response validator as a temporary error

  • 101006 - Listener detection that a response cannot be submitted (e.g. Connection already closed)

  • 101000 - Listener side IO error during send

  • 101003 - Listener connection timeout (expiration of the specified, or default connection timeout from the ESB side listener)

  • 101005 - Listener detection of a connection close by remote party

The default suspension error codes are:

  • 101503 - Connect failed (connection refused by remote party over the specified port)

  • 101511 - Rejection by response validator as a suspension error

On a suspension error, the address is immediately placed into 'Suspended' mode, and will not be considered as a candidate for future messages until its suspension rules indicate that it could be retried.

address state chart

An address transitions from the 'Ready' state to the 'Temporary Failure' state on encountering a temporary error indication code, and to the 'Suspended' state on encountering a suspension error code. From the time it enters the 'Temporary Failure' state, the specified 'gracePeriod' counter begins. An address in the 'Temporary Failure' state will still be used for subsequent messages, and any subsequent temporary failures will keep it in the same state, while a successful send moves it back to the 'Ready' state. If an address surpasses the 'gracePeriod' in the temporary failure state, it will be moved to the 'Suspended' state.

When an address is suspended, a suspension duration is computed, until which time the address will be excluded from use. The suspension duration is calculated as a geometric progression, as follows - a 'progressionFactor' of 2 would yield an exponential series. The first time an address is suspended, the 'initialDuration' is used, while on subsequent suspensions, the value is capped at the 'maximumDuration'.

  • Current Duration = (Last Duration == null) ? Initial Duration : min(Last Duration × Progression Factor, Maximum Duration)

After expiry of the suspend duration, an address becomes ready for a retry. If the retry is successful, the address moves to the 'Ready' state, and if not, the next suspension duration is calculated as per the parameters, and the address is re-suspended.
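The progression can be sketched in plain Java (the parameter names follow the text above; the millisecond units and the concrete values in main are illustrative assumptions):

```java
public class SuspensionDuration {

    // Computes the next suspension duration per the rule above: the first
    // suspension uses initialDuration; later ones grow by progressionFactor,
    // capped at maximumDuration.
    static long next(Long lastDuration, long initialDuration,
                     double progressionFactor, long maximumDuration) {
        if (lastDuration == null) {
            return initialDuration;
        }
        return Math.min((long) (lastDuration * progressionFactor), maximumDuration);
    }

    public static void main(String[] args) {
        Long d = null;
        // e.g. initialDuration=1000 ms, progressionFactor=2, maximumDuration=60000 ms
        for (int i = 1; i <= 8; i++) {
            d = next(d, 1000, 2.0, 60000);
            System.out.println("suspension " + i + ": " + d + " ms");
        }
        // grows 1000, 2000, 4000, ... then stays capped at 60000
    }
}
```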

1.4.4. Fail-Over behavior

When an address fails to deliver a message, an error is raised internally with an error code indicating the problem. The default behavior is to only retry messages where it is guaranteed that a call to the next address would be safe. For example, if the first address returns a connect timeout (i.e. a TCP connection could not be established within a specified delay - error code 101508) or a connection refused (i.e. the endpoint explicitly refusing a connection - error code 101503) it will always be safe to retry, as the message has clearly not been accepted by any of the backend service addresses. Other error codes, such as a connection timeout, connection close, IO error or protocol violation etc, are considered unsafe for retry by default.

An endpoint fail-over takes place only if the error code raised is configured as a 'safeToRetryErrorCode' of the endpoint. The default safe codes are the following; however, the configuration could define the special value 'all' to indicate that any error code may be considered safe for a retry (e.g. for idempotent service calls):

  • 101508 - Sender connect timeout (TCP level connection establishment failure within the specified (default 10) number of seconds)

  • 101503 - Connect failed (connection refused by remote party over the specified port)

  • 101510 - Rejection by response validator as a temporary error
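As a sketch (the element name here is an assumption derived from the 'safeToRetryErrorCode' term above, not verified syntax), an endpoint fronting idempotent back-end calls might be configured to treat every error as safe to retry:

```xml
<u:endpoint id="idempotent-ep" type="round-robin" failover="true">
    <!-- 'all' treats any error code as safe for retry (idempotent calls) -->
    <u:safeToRetryErrorCodes>all</u:safeToRetryErrorCodes>
    <u:address>http://node1:9000/service/QueryService</u:address>
    <u:address>http://node2:9000/service/QueryService</u:address>
</u:endpoint>
```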

1.4.5. Advanced Endpoint Configurations

Endpoints allow custom properties to be configured at the endpoint level. Some of these properties may only be valid for some types of messages. For example, HTTP authentication properties, email properties (e.g. subject) etc may be specified at the endpoint level.

Response Validation

Response validation allows a successfully received message to be validated, to check if it indicates a successful response, a temporary error or a suspension error. An example is a Hessian fault which is transported as an HTTP 200 level (i.e. successful) response. In addition, sometimes application servers (e.g. Tomcat) may return an HTML error page saying a service is not available etc, which is hidden at the transport layer. The sample response validator SOAPResponseValidator is able to indicate certain HTTP error codes for retry, and fail on encountering some static text in the SOAP fault. See ESB sample #601 for an example.

<u:property name="ultra.endpoint.response_validator_bean" value="soapValidator"/>
Switch HTTP Location headers of responses

Usually, when proxying REST calls, the responses from a backend service may include HTTP 'Location' headers which point the end client back to it, instead of to the proxy service fronting it. These Location headers may be switched to the proxy service prefix using this property. See ESB sample #101 for an example.

<u:property name="ultra.endpoint.switch_location_headers_to" value="http://localhost:8280/service/rest-proxy"/>

1.5. Sequences and Mediation

A Sequence defines the mediation actions to be performed on a message. A Sequence may be defined in-line and locally to a proxy service (see image below - 3), or defined as a re-usable sequence with an ID and referenced as many times as needed (see image below - 1 & 2 - an error handling sequence is reused at service level as well as sequence level).

sequence definition

The current message is exposed by a special variable "msg", the Mediation instance by the variable "mediation", and an SLF4J logger instance by the variable "logger". The interfaces Message and Mediation in the UltraESB API package allow one to perform almost anything within a sequence. The end user is also free to invoke any third party classes, Spring beans, or class libraries as appropriate for his/her mediation requirements. However, one should be careful about spawning new threads etc. from within a sequence, as the UltraESB manages the execution threads which invoke sequences.
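For example, an in-lined Java sequence fragment might use these variables as follows. This is a sketch that runs only inside the ESB (not standalone code); Message.setDestinationURL() is the call referenced elsewhere in this chapter, and the alternate destination shown is purely illustrative:

```java
// Executed by the UltraESB with "msg", "mediation" and "logger" in scope
logger.info("Mediating message received by the proxy service");
// route the message to an (illustrative) alternate destination
msg.setDestinationURL("http://localhost:9001/service/BackupService");
```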

A sequence can specify another sequence as its error handler. In the above example, the "inSequence" of the proxy service specifies the sequence with the ID "sequenceErrorHandler" as its error handler, and hence on an un-handled exception, the error handler sequence would be invoked. If no sequence error handler exists, or if a sequence error handler throws an exception, the service level error handler (#1 above) would be invoked. A sequence can make use of Java exception handling support such as try-catch-finally, custom exceptions, etc.

A sequence may be specified as a Java or JSR 223 script fragment, class or Spring bean, or a source file. Irrespective of the style of definition, each sequence executes as native byte-code on the JVM where the scripting language supports compilation. The special variables and the public API visible to Java and JSR 223 scripting languages are identical. However, JSR 223 languages may have some special abilities native to them which may be useful during mediation.

reusable sequence groovy

The UltraESB supports sequences defined in the following forms; see Sequence and Mediation Development for more details.

  • Java code fragment / snippet

  • JSR 223 scripting language fragment / snippet

  • Java source file

  • Script source file

  • Class file / byte code

  • Spring bean

While the API documentation provides the complete reference of the operations available for use in mediation, the Mediation Reference describes those operations from the usage perspective, making it easy to map your requirement to the required API call and to invoke it effectively.

1.6. Transports and Message Formats

This section discusses the transports and message format framework architecture and how you can implement custom transports and message formats.

1.6.1. Implementing custom Transports

Transports define how messages flow into and out of the UltraESB. A transport listener defines a Spring bean which accepts messages and injects them into the ESB, while a transport sender defines a Spring bean which registers a URL prefix (e.g. mailto:, file:, etc.) for destination messages to be handled by it, so that the ESB will call into it to deliver messages with that prefix. Although the UltraESB ships with many transports by default, an end user can additionally extend the framework to support:

  • New custom TCP level transports (e.g. proprietary low level transports)

    To support a new TCP or TCPS based transport, one basically needs to extend the base TCP transport and implement the TCPServiceHandler. The MLLP/S transport for HL7 messages is a concrete implementation which could be used as a sample - see the HL7ServiceHandler.

  • Library level custom transports (i.e. transports for which a third party library already exists - e.g. QuickFix for the Financial Information Exchange protocol).

    To write a new library level transport, one could start by extending the AbstractTransportListener and the AbstractTransportSender classes. A polling transport (i.e. one which periodically polls for messages according to a specified schedule or CRON expression) could be written by extending the AbstractPollingTransportListener.

  • Polling Custom Transports

    It is possible to use the generic polling support to configure a proxy service to be invoked on a specified schedule - e.g. based on a CRON expression. When the inSequence of the proxy service is invoked, one can write any custom code to perform message fetching etc. For example, this may be useful to poll a database with a custom query and, depending on the results, create a message and submit it for further processing.
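A minimal sketch of the polling idea, using only the JDK's ScheduledExecutorService (the real transport framework handles scheduling, CRON expressions and message injection for you; the 50 ms period and log text here are arbitrary):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch polls = new CountDownLatch(3);
        // Fire every 50 ms; each tick stands in for "fetch and inject a message"
        scheduler.scheduleAtFixedRate(() -> {
            System.out.println("poll: fetch and inject message");
            polls.countDown();
        }, 0, 50, TimeUnit.MILLISECONDS);
        polls.await();               // wait for three polls, then stop
        scheduler.shutdown();
        System.out.println("done");
    }
}
```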

1.6.2. Supporting custom Message formats

The UltraESB does not force a canonical message format (e.g. SOAP, JMS etc.) on all messages flowing in through different transports. Hence a message may be held in its native or most optimal format.

When to use custom Message formats
Even if you have to implement a custom transport, you do not have to implement a custom message format. Analyze your requirements first to decide whether holding the message payload in a currently supported message format, with a custom transport, would be adequate. It is rare that you would need to write a custom message format. Feel free to ask AdroitLogic Support for help.

For example, an HTTP/S request is held in a RAM disk based file or a memory mapped file depending on the selected File Cache implementation, and a SOAP/XML message is possibly parsed into a DOM for XPath evaluation. However, keeping the raw message in a file allows an immense level of flexibility. As an example, by simply adding the VTD XML library into the classpath of the UltraESB along with the VTDFastXML library, XPath expressions can be evaluated without DOM parsing - faster than SAX based techniques - while still keeping the original message in a file. In addition, when binary messages such as Hessian are being proxied, binary files are the natural and best performing option. The same holds true, for example, when using XSLT transformations - it is much easier, and better for performance, for an XSLT processor to read its input from a File than from a String, DOM or Axiom model.

The default message formats supported by the UltraESB are the following.

  • File

  • String

  • Map

  • Object

  • Byte Array

  • DOM

  • Data Handler

  • User defined custom formats based on GenericMessage - e.g. HAPIMessage

Refer to the HAPIMessage and the implementation classes of the other MessageFormats for more information on how custom message formats could be supported natively.

1.7. Key Architectural and Design Aspects

This section discusses the key architectural and design aspects that make the UltraESB the best ESB. The aspects discussed here are mostly non-functional, but address the performance, scalability and reliability of the ESB.

1.7.1. Objectives to achieve

The UltraESB codebase was started in late 2009, with an objective to build the best available Enterprise Service Bus (ESB). It concentrated on three key areas as follows:

  • Simplicity of use for development and management

  • Achieve best performance

  • Ability and ease of extension

1.7.2. Technical Innovations

As the UltraESB was a completely new ESB implementation without any historical code or customer liability piggybacking on it, sweeping simplifications could be made in the architecture and design that were not possible for most of the previously initiated ESB projects.

  • File Caching and Storage of Payloads in Files

    • Memory map the files or use a RAM disk based file system 

    • Achieve the speed of RAM with the ease of Files

    • Make use of RAM capacities without a Garbage Collection (GC) overhead

  • Zero-Copy proxying with sendfile and DMA

    • Use the 'sendfile' system call to transfer payloads to/from the network interface

    • Use Direct Memory Access (DMA) bypassing the CPU for NW to File transfer

  • JDK 6 compilation and JSR 223 scripting languages for mediation logic

    • Re-use existing developer skills without introducing a new language or DSL

    • Use familiar Javadocs, with a simple mediation API

    • IDE based auto-completion, unit testing/execution and step-through debugging

  • Based on the Spring framework and a very few and stable libraries

    • Configuration is 100% Spring Based

    • Support dynamic updating of subsets

    • Full distribution is limited to < 40M in size with minimal < 8M

  • Support and implement end-to-end unit testing with a high code coverage level

  • Use JMX to report metrics and manage runtime instances

    • Standardize on JMX based management and reporting

    • Do not try to re-invent things, simply re-use products such as Zabbix

1.7.3. Systems Design Principles Used

As firm believers in architecture, performance and quality, we’ve put into use some of the Computer Systems Design Principles of Prof. Jerry Saltzer of the Massachusetts Institute of Technology (MIT), with special focus on the use of the 'End-to-End Argument' and the establishment of 'Conceptual Integrity' to ensure optimal design.

1.7.4. Business Level Decisions

Although not related to architecture or design, we also made the following business level decisions:

  • Develop and release only ONE version of the UltraESB

    • There would not be a cut down 'free' version and an enterprise grade 'paid' version

    • Simply offer the SAME version - under two FREE licenses

    • AGPL license with source code

    • AdroitLogic Zero-Dollar license with binary

  • Clustering, High Availability, Node fail-over, and ALL such enterprise grade features will be freely available

  • Product documentation and samples will remain freely accessible to end users

  • Community support and Proof-of-Concept support will be available free of charge

  • 24x7 Production Support, Training and Consultancy will be available through AdroitLogic for a fee

1.7.5. File Caching and Storage of Payloads in Files

File caching and storage of payloads in files is a technique used in the UltraESB to improve its performance and reliability. This section discusses the design and architecture around this concept.


File Caching is a unique strategy introduced in the UltraESB, which simplifies processing and supports efficient use of available system resources. At the lowest level, this means the creation of a pool of files in a specified directory, which are then used to store the payloads handled by the ESB. By storing the payload in a file, the system deals with potentially large payloads easily and seamlessly, and by making use of memory mapped files or RAM disks, it operates at the speed of RAM with the ease of files.

Efficient use of RAM and Heap memory and no GC overhead

Storing the complete payload in a file is a much better implementation than holding a part of the payload in Java heap memory and the rest in an overflow file on disk, as the use of heap memory on a high throughput ESB creates a large GC overhead on the virtual machine. Instead, allocating a large amount of RAM for memory-mapping, or for a RAM disk, makes better use of the large amounts of RAM now available on typical production systems. This allows the UltraESB to run with a much smaller and more efficient heap, while utilizing large amounts of RAM to handle heavy loads.

Optimizations possible due to storage on files and repeatability of the payload

Since the application visible payload is held in a file, accessing and using this payload is simple and straightforward, and the bytes are easily repeatable for further optimizations. For example, if an incoming XML message requires evaluation of an XPath over the payload, this could still be achieved - even without any XML parsing of the payload - by using libraries such as VTD XML, which index into offsets of the payload without creating a huge cloned object structure of the same information. Where the original payload is not modified during routing, the original payload file could be used again; for example, forwarded to a backend service using outgoing zero-copy. If a backend service fails, and the UltraESB has to fail over to another address, the original payload file could again be used to re-send the original request. Keeping the payload in heap memory, either as raw bytes or objects (e.g. DOM/StAX etc.), would require the data to be copied and serialized between formats and locations many times to perform the same operations.

Zero-Copy Proxying for incoming and outgoing messages

Storing the payload in a file allows the UltraESB to use the Zero-Copy support of modern JDKs, backed by the 'sendfile' system call, to transfer raw bytes between the network interface and the payload file. This allows each byte to bypass the CPU through Direct Memory Access (DMA). The Zero-Copy support is explained further in the next section.

PooledMessageFileCache - Memory Mapped files mounted on a local disk

The initial versions of the UltraESB introduced the default PooledMessageFileCache implementation, which memory mapped a section of each file created. By tuning this memory mapped threshold, it would be possible to make most of the payloads "fit" into the memory mapped section of the files, thereby avoiding any real disk access.
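The underlying JDK mechanism can be seen in miniature below: mapping a region of a file lets reads and writes go through the OS page cache at RAM speed. This is a plain JDK sketch, not the PooledMessageFileCache code, and the 4 KiB mapped size is arbitrary:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedPayloadDemo {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("esb-payload", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the first 4 KiB of the file; reads and writes now go
            // through the OS page cache, i.e. at RAM speed
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.put("<payload/>".getBytes());
            buf.flip();
            byte[] back = new byte[10];
            buf.get(back);
            System.out.println(new String(back));
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```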

RAMDiskFileCache - Normal files created on an OS level RAM disk

Detailed load testing with the memory mapped file cache showed that where RAM is adequately available, creating a complete RAM disk based file system offers even better performance and simplicity. The RAM disk file cache requires an Operating System level RAM disk to be created and assigned first. In this case, if the RAM disk has adequate capacity, all messages are guaranteed to be retained fully in RAM. The implementation allows overflow to a secondary disk if the RAM disk capacity becomes exhausted.

1.7.6. JDK 6 compilation and JSR 223 scripting languages for mediation logic

Many ESB systems use custom Domain Specific Languages (DSLs) and/or graphical models to specify mediation logic. At first sight, a new user may see these as a good feature, and would expect the "tools" around the DSL and/or the graphical models to be powerful and flexible for real enterprise development.

However, in reality DSLs and graphical models are not as powerful as programming languages, and they cannot be easily integrated with custom programming language code without writing classes implementing specific interfaces, deploying them as specific bundles, etc. The inability to debug DSLs and graphical models is another serious drawback, especially during development and initial testing.

In contrast, the UltraESB has been the first ESB to introduce mediation logic specified as Java fragments, Java source files, JSR 223 scripting language files or fragments, compiled Java classes, or Spring beans. The mediation API of the UltraESB is built around two main interfaces - Message and Mediation - and the complete user level API and Javadocs are hosted at http://api.adroitlogic.org. As any third party library or any custom Java code could be invoked during mediation, the power and flexibility of a familiar programming language is at the user's command. Users can configure the UltraESB using any mainstream IDE such as IntelliJ IDEA, Eclipse or NetBeans, without being forced into a customized, and sometimes buggy or old, version of an Eclipse or NetBeans distribution. The mediation logic can be easily debugged and unit tested, including writing end-to-end unit tests, and executing these or running the complete UltraESB within your IDE.

As the Java/JSR 223 scripting language code and fragments are compiled once at load time, the user does not need to "compile, build, bundle and deploy" mediation logic or customizations unlike with other ESBs. Simply edit the text files containing the configuration, save and reload or restart.

The Ugly side of Domain Specific Languages (DSLs)

Each custom DSL of a vendor is a "vendor specific language" which a user must learn anew to configure the ESB. Although some auto-completion and a list of constructs may be documented, one may not fully understand how each construct will work in reality. Usually such components or mediators are specified in XML configuration files, but these are not as flexible as a programming language the user is already familiar with.

For example, consider a component performing an XSLT transformation. First a user will have to read some documentation about this component, understand its XML configuration structure and its required and optional attributes and elements, and then finally specify it as configuration. Although IDE auto-completion support may be available, a user will not be able to understand the conceptual behavior of the component by simply writing the configuration with auto-completion. Consider, for example, the failure cases - checked exceptions and run-time errors - or whether the component would replace the current message with the transformation result, or leave the transformed output separately. Some vendor modules may handle errors internally and change the message in an unexpected way, while others may call into another mediation sequence or flow for error handling. Some extreme errors may throw the thread of execution out to a totally unexpected state. For a user, this means a considerable amount of time needs to be spent on reading non-standard documentation formats, examples and reference documentation. Furthermore, it could be impossible or very cumbersome to perform quite simple logic using the "fixed" components or mediators. Consider the case where the XSLT file name to be used is computed by concatenating some transport header property with a postfix, yet the "fixed" mediator expects you to specify a hard coded XSLT file path instead of supporting the concatenation.

In contrast, consider a user downloading a new third party Java library, reading its standard Javadoc based API documentation and then using it within an IDE with auto-completion. We rarely need to look at the reference documentation of an API method, as the return type, arguments and exceptions are clearly defined in the Javadocs in a universally understood and clear format. You are able to catch checked or unchecked exceptions, use a try-catch-finally to wrap an API invocation, and thus write crisp and stable code.

Consider testing, unit testing and debugging a DSL. Usually these would be impossible, as DSLs operate at a higher level than the debuggable language level. With a third party Java API, however, any developer is familiar with setting breakpoints, evaluating run-time arguments and stepping through the code to debug the logic.

The Ugly side of Graphical Models

The ugly side of graphical models has been discussed in detail in the post Drag-and-Drop Integration with Graphical Models and Studios - how effective are they? Some of the key points discussed are:

  • Updates to a graphical configuration are sometimes difficult

  • Graphical models are not easily unit testable

  • How do you debug graphical models?

  • Graphical models cannot be version controlled easily, or backed-up and restored for production system changes quickly

  • Graphical models do not allow comments to be easily inserted and used

  • Graphical models are vendor specific "language"s you need to learn from the start

  • Text based configurations support visual code review and auditing of changes

  • Real world management of graphical configurations is more difficult

  • Graphical tools are sometimes limited on extensibility

  • Vendors ship some IDE based "Studio" which may not be user friendly to some

1.7.7. Work Manager

The work manager in the UltraESB is an in-memory implementation used for thread management. It avoids the disadvantages of bounded queues or zero queue sizes in Java thread pools.

In Java thread pools with bounded queues, when a new task arrives and the number of threads running is less than the corePoolSize, a new thread is created even if other worker threads are idle. Tasks that arrive after the corePoolSize is exceeded are queued. A new thread is created only after the queue is full, and new threads keep being created up to the maxPoolSize as tasks arrive. After the maxPoolSize is reached, a TaskRejectedException is thrown.

In thread pools with a zero-size queue, nothing is queued, and threads are created up to the maxPoolSize.

The main disadvantages of these approaches are:

  • If corePoolSize or more threads are running, the Executor always prefers queuing a request over adding a new thread, even though the maxPoolSize is not exceeded. The executor starts creating new threads only after the queue is full.

  • If the queue size and maxPoolSize have finite bounds, newly arriving tasks are rejected once both the queue and the maxPoolSize are saturated.

To avoid these disadvantages of the default Java thread pools, the UltraESB uses a work manager which combines both of the above.
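The queue-preference behavior described above can be observed directly with a plain java.util.concurrent.ThreadPoolExecutor (the pool sizes chosen here are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueuePreferenceDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException ignored) { }
        };
        // core=1, max=4, bounded queue of 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        pool.execute(blocker);   // creates the single core thread
        pool.execute(blocker);   // queued - no new thread, although max is 4
        pool.execute(blocker);   // queued - queue is now full
        System.out.println("threads=" + pool.getPoolSize());
        pool.execute(blocker);   // queue full: only now a non-core thread is added
        System.out.println("threads=" + pool.getPoolSize());
        hold.countDown();
        pool.shutdown();
    }
}
```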

Work Manager Implementation

The UltraESB by default ships with the SimpleQueueWorkManager, an in-memory implementation that does not support message priorities or persistence. It uses two thread pools, namely the primary and the secondary.

Primary pool

The primary pool will not queue messages, but go up to its max pool size in creating threads as necessary to handle the load.

  1. Primary Core Threads (Pn) - Number of core threads in the primary thread pool. When a new task arrives and the number of threads running is less than the primary corePoolSize, a new thread is created.

  2. Primary Max Threads (Pm) - Maximum thread limit of the primary thread pool. When the primary corePoolSize is exceeded, new threads are created as requests arrive, up to the maxPoolSize.

  3. Primary Keep alive Seconds - Thread keepalive time for the primary thread pool; if the primary pool currently has more than corePoolSize threads, excess threads are terminated once they have been idle for more than the keepAliveTime.

  4. Primary Queue Size (Pq) - Queue size of the primary thread pool; zero by default.

Secondary pool

Once the primary pool is full, with its max size exceeded, tasks are delegated to the secondary pool:

  1. Secondary Core Threads (Sn) - Number of core threads in the secondary thread pool. Usually a small number.

  2. Secondary Max Threads (Sm) - Maximum thread limit in secondary thread pool.

  3. Secondary Keep alive Seconds - Thread keepalive time for the secondary thread pool; excess threads are terminated once they have been idle for more than the keepAliveTime.

  4. Secondary Queue Size (Sq) - Queue size of the secondary thread pool. Usually this is unlimited (-1), so any number of tasks can be queued without rejection.

So, with the work manager implementation in the UltraESB:

  • You can create up to Pn + Pm + Sn threads at a given time without queuing a single request.

  • You can define core and max pool sizes as fits your application environment, together with a suitable keepAlive time, which provides a means of reducing resource consumption when the pool is not actively used.

  • The maximum number of idle threads that can occur (when no task is running) is Pn + Sn.

  • You can queue any number of requests without rejecting any task by giving an unlimited secondary queue size.
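The primary/secondary delegation can be sketched with two JDK thread pools, using a RejectedExecutionHandler to overflow from a queue-less primary into a secondary pool with an unbounded queue. This is an illustrative stand-in, not the SimpleQueueWorkManager source, and the pool sizes are arbitrary:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TwoPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Secondary pool: small core, unbounded queue (never rejects)
        ThreadPoolExecutor secondary = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // Primary pool: no queuing (SynchronousQueue); when it is saturated,
        // the rejection handler delegates the task to the secondary pool
        ThreadPoolExecutor primary = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new SynchronousQueue<>(),
                (task, pool) -> secondary.execute(task));

        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException ignored) { }
        };
        for (int i = 0; i < 4; i++) {
            primary.execute(blocker);           // fills the primary to its max of 4
        }
        CountDownLatch delegated = new CountDownLatch(1);
        primary.execute(delegated::countDown);  // 5th task overflows to the secondary
        delegated.await();
        System.out.println("delegated to secondary");
        hold.countDown();
        primary.shutdown();
        secondary.shutdown();
    }
}
```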

1.7.8. Zero-Copy proxying with sendfile and DMA

Zero-copy proxying is a unique feature of the AdroitLogic UltraESB which allows extremely efficient proxying of service calls through the UltraESB with the least amount of overhead. Coupled with the support for Non-Blocking IO and memory mapped files / RAM disks, a single UltraESB node can manage hundreds or thousands of concurrent clients using very few threads and limited resources.


Note: Fully optimal Zero-Copy support will operate best on a Linux OS running on real hardware and a properly setup network interface. However, the zero-copy support can be left enabled even on simple hardware, virtualized systems, Amazon EC2 or cloud environments, etc. and would not cause any harm even if not optimally tuned.

The UltraESB allows extremely efficient proxying of messages with the least possible overhead - including Zero-Copy proxying with Direct Memory Access [DMA] on supported hardware. This feature is best used on a Linux operating system with a Kernel version of 2.6 or above, using the 'sendfile' system call of the Operating System. The sendfile system call allows a message received into a network buffer to be efficiently transferred out again without making a copy of the data in heap memory. In addition to avoiding the use of user space heap memory, the 'sendfile' system call reduces the number of context switches, since the message is passed through Kernel memory alone. The UltraESB uses a cache of memory mapped / RAM disk based files (see the previous section) that reside in kernel memory, as opposed to the traditional buffers or programming language objects that reside in user space memory. Thus the transfer of a message through the UltraESB takes place with a minimum of context switching and better utilization of the CPU.
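In the JDK, the sendfile path is reachable through FileChannel.transferTo, which (on Linux, for suitable channel types) moves bytes kernel-side without copying them through the Java heap. A minimal file-to-file sketch:

```java
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    public static void main(String[] args) throws Exception {
        Path src = Files.createTempFile("payload", ".bin");
        Path dst = Files.createTempFile("copy", ".bin");
        Files.write(src, "sample payload".getBytes());
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
            // transferTo can use sendfile(2) underneath, moving the bytes
            // kernel-side instead of copying them through the Java heap
            long transferred = in.transferTo(0, in.size(), out);
            System.out.println("transferred=" + transferred);
        }
        System.out.println(new String(Files.readAllBytes(dst)));
        Files.deleteIfExists(src);
        Files.deleteIfExists(dst);
    }
}
```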

Utilizing the Network card offloading capabilities

For real 'Zero-Copy' forwarding, the network card used must support gather operations, and the UltraESB should be deployed on bare metal hardware without a layer of virtualization. This allows the network card to efficiently create the TCP packets by combining data from different memory areas, without first having to assemble the complete message into a buffer. It is highly recommended that network card offloading capabilities are used only with good hardware and drivers, after comprehensive testing, as issues with particular Operating System versions, drivers or hardware may cause problems.

Checking if the network card supports offloading

Checking the output of the command "ethtool -k eth0" on a Linux system will show the offloading capabilities and configuration of the chosen network device; this example prints the settings for 'eth0'. Please refer to standard Linux documentation on how to enable the offloading capabilities of your adapter - at a minimum, scatter-gather support should be enabled. TCP offloading will only benefit a wired network adapter, correctly configured for its maximum performance (i.e. ensure that gigabit support, where available, is enabled and functioning correctly).


1.7.9. Using Non-Blocking IO [NIO] for HTTP/S


The traditional Java Servlet model processes each request on a separate thread. Since an ESB typically forwards a request it receives to another service, keeping a thread blocked until the response arrives would be an extreme waste, and would lead the ESB to resource exhaustion. In addition, as soon as the total number of threads in use grows over a hundred on a typical system, the thread context switching overhead degrades performance, in addition to limiting the number of open connections (i.e. sockets) to the number of possible threads.

The UltraESB uses the Non-Blocking IO [NIO] support of the Java VM, and supports SSL connections over NIO as well, using the Apache HttpComponents/HttpCore NIO library. This allows the UltraESB to keep thousands of open sockets and service requests with a few threads - most typically less than a hundred or two. As a 1:1 socket-to-thread pairing is not maintained, the thread of execution is immediately released back into the thread pool whenever processing requires a wait on an external service. Once the response is received, a new thread is assigned to process the response.
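The NIO model can be seen in miniature with a Selector: one thread registers interest in readiness events and is only occupied when data is actually available. This JDK-only sketch uses a Pipe in place of real sockets:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class NioSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();               // stands in for a real socket
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);
        pipe.sink().write(ByteBuffer.wrap("hi".getBytes()));
        selector.select();                     // thread is occupied only when data is ready
        ByteBuffer buf = ByteBuffer.allocate(8);
        pipe.source().read(buf);
        System.out.println("read=" + new String(buf.array(), 0, buf.position()));
        selector.close();
    }
}
```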

1.8. Scalability and High-Availability

An UltraESB installation can be configured to run as one or more stand-alone instances, or as a cluster of nodes. By default, instances do not share any state with each other unless the mediation logic explicitly requests such behavior by utilizing the distributed caching support. Clustering support in the UltraESB is implemented using the Apache ZooKeeper framework, used to coordinate extremely large clusters of Hadoop instances. ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. The clustering support thus separates the following concerns and handles them independently of each other:

  1. Group coordination & Co-operative control

    • A group of nodes in the cluster needs to inter-communicate seamlessly in order to operate as a single unit, which is referred to as Group coordination. At the same time, an ESB cluster should provide a mechanism to control the complete group as a single unit, which is referred to as Co-operative control.

  2. State replication & Content sharing

    • If there are stateful mediations operating in an ESB cluster, it may be required to replicate state among the nodes in the group, which is referred to as State replication. It is also very common for the nodes in the group to share content when implementing a mediation flow which needs to be visible as a single unit; that is accomplished by Content sharing in the cluster.

1.8.1. Coordination Implementation Architecture

ZooKeeper is a fast, simple, ordered and replicated coordination service, which could itself be deployed as a collection of multiple instances called an ensemble. Refer to the ZooKeeper Overview for a more detailed description of its features and concepts.

Coordination is mainly used in the UltraESB to find the nodes in a given cluster domain. The UltraESB has a concept of a clustering domain, so that multiple clusters can be configured to work independently on the same LAN using the same ZooKeeper quorum. ZooKeeper keeps information in a tree structure as a set of nodes, similar to the inodes of the Unix file system, and names these nodes znodes. This structure is efficiently shared across each node in the clustering domain, and reads and writes are guaranteed to be atomic and ordered across the cluster with a stamped transaction state. An UltraESB instance acts as a client to the ZooKeeper service, and maintains a connection to the ZooKeeper service, which may itself be replicated as one or more instances.

clustered deployment

The above diagram depicts a three node UltraESB cluster, which uses a replicated ZooKeeper quorum of three instances spread across three nodes. This avoids any single point of failure, and ZooKeeper can operate correctly even with the loss of a ZooKeeper instance, as long as a majority of nodes are available; hence a ZooKeeper quorum is usually 1, 3, 5 nodes etc. When ZooKeeper itself operates as a replicated group of instances, it elects a leader for its operation - however, this has absolutely no impact on the operation of the UltraESB.

No Separate Administration and Worker instances - all UltraESB nodes are identical

Unlike some JEE application server clustering concepts, the nodes of an UltraESB cluster are absolutely identical. Hence an UltraESB cluster can operate irrespective of the failure or network partitioning of any of the other nodes, as long as any replicated state (See below) and shared content can operate accordingly.

Clustered and Pinned Services and Active-Passive operation

Typically a hardware load-balancer would be used to front a cluster of nodes to load balance HTTP/S, TCP/S or MLLP/S style traffic reaching proxy services over such transports. Proxy services over transports such as JMS, or JDBC polling, would be transactional as configured by default. These services thus operate in an Active-Active manner. However, File, FTP/S, SFTP and Email style proxy services may not be able to run multiple instances of the same service concurrently across a cluster. Hence, such services need to be "pinned" to one node of a cluster, and operate in an Active-Passive manner, with the service failing over to another node when the primary active node fails. The clustering configuration specifies this behavior with a fail-over matrix that specifies one or more other nodes that "can act" as a failed node.

e.g. Node 1 ⇒ Node 2, Node 3; Node 2 ⇒ Node 1, Node 3; Node 3 ⇒ Node 1, Node 2

For controlled maintenance, a specific node can be asked to "start acting as some other node" as well, using the UTerm command console. In such a situation, the node concerned will locally start the services pinned to the other node. On a typical node crash, the clustering framework detects the loss of the heart-beat, and fails over services pinned to the failed node onto another node. If the original node comes back online, the services are moved back to the original node.
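A fail-over matrix of this shape could be encoded as a simple map; the sketch below is purely illustrative (the node names and the first-live-candidate selection rule are assumptions, not the actual clustering configuration format):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FailoverMatrixSketch {
    public static void main(String[] args) {
        // Node 1 => Node 2, Node 3; Node 2 => Node 1, Node 3; Node 3 => Node 1, Node 2
        Map<String, List<String>> matrix = new LinkedHashMap<>();
        matrix.put("node1", Arrays.asList("node2", "node3"));
        matrix.put("node2", Arrays.asList("node1", "node3"));
        matrix.put("node3", Arrays.asList("node1", "node2"));

        // When node1 fails, its pinned services move to the first live candidate
        Set<String> live = new HashSet<>(Arrays.asList("node2", "node3"));
        String takeover = matrix.get("node1").stream()
                .filter(live::contains).findFirst().orElse(null);
        System.out.println("node1 services move to " + takeover);
    }
}
```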

Managing and Monitoring a cluster

Leaving the state replication and content sharing aspects aside, an UltraESB cluster operates as a collection of independent nodes coordinated through ZooKeeper. A cluster can be controlled by connecting to any of its nodes, as all nodes share the same clustering domain state. Any cluster-wide change or operation can be issued while connected to any node, which in turn publishes the event as a cluster-wide command. A cluster command contains the operation to invoke and its scope of application (i.e. cluster-wide, or a specific node etc.). Each node in the cluster receives each cluster command, and takes the necessary local action. The cluster maintains the list of nodes, the commands issued, and their acknowledged/completed state; hence a command failing at any node can easily be viewed via the UConsole, and the action re-attempted. A node that crashes and restarts will catch up to the same level of issued cluster commands. For example, if a stop proxy service command was issued and one of the nodes crashed just before receiving it, that node would first synchronize itself to the latest command state of the cluster on restart, and thus execute the missed command later.
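The catch-up behavior described above can be modeled as an ordered command log, where each node tracks the index of the last command it executed and replays anything it missed when it rejoins. The sketch below is a hypothetical illustration of that idea, not the actual UltraESB clustering implementation.

```java
import java.util.*;

// Hypothetical sketch of cluster command catch-up: the cluster keeps an
// ordered log of issued commands, and each node remembers the index of the
// last command it acknowledged. A node that was down replays everything it
// missed on restart, so a "stop proxy" issued during its outage still applies.
public class CommandLog {

    private final List<String> commands = new ArrayList<>();

    // Record a new cluster-wide command; returns its index in the log.
    public int issue(String command) {
        commands.add(command);
        return commands.size() - 1;
    }

    // Commands a node must replay, given the index of the last command it
    // executed (-1 if it has executed nothing yet).
    public List<String> missedSince(int lastExecutedIndex) {
        return commands.subList(lastExecutedIndex + 1, commands.size());
    }
}
```

A node that crashed before command 1 was issued would call `missedSince(0)` on restart, receive the missed command, and execute it, bringing itself back in step with the rest of the cluster.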

The cluster management aspects are available through the UConsole web based interface, as well as the UTerm command line interface. A special cluster management command of interest is the cluster-wide round-robin restart command, which takes down each node of the cluster in turn for a restart, ensuring that only one node is restarting at any given point in time.
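The essence of a round-robin restart is that the next node is only taken down once the previous one reports healthy again. The following is a minimal, hypothetical sketch of that sequencing, with the restart-and-health-check step abstracted as a predicate; none of these names are part of the UltraESB API.

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical sketch of a cluster-wide round-robin restart: nodes are
// restarted one at a time, and the next restart only begins once the
// previous node is healthy again, so at most one node is ever down.
public class RoundRobinRestart {

    // restartAndWaitHealthy restarts a node and blocks until it reports
    // healthy (true) or fails to come back (false). Returns the nodes
    // successfully restarted, in order.
    public static List<String> restartAll(List<String> nodes,
                                          Predicate<String> restartAndWaitHealthy) {
        List<String> restarted = new ArrayList<>();
        for (String node : nodes) {
            if (restartAndWaitHealthy.test(node)) {
                restarted.add(node);
            } else {
                break; // abort the rolling restart if a node fails to return
            }
        }
        return restarted;
    }
}
```

Aborting on the first failure is a deliberately conservative choice for the sketch: continuing to restart further nodes while one is already down would mean two nodes out of service at once, defeating the purpose of the rolling restart.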

1.8.2. State replication & Content sharing Architecture

State replication among nodes in an UltraESB cluster takes place through the distributed caching support, implemented on top of the Ehcache framework. The caching support allows local as well as distributed caching, including persisted caches, and is configured and operated as per Ehcache norms. It is exposed to the user via the Mediation.getCachingSupport() API call, and sample 800 shows a simple use case of a distributed cache.
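To illustrate the kind of put/get-with-expiry semantics such a cache provides, here is a self-contained, local stand-in using only the JDK. The real support is backed by Ehcache and may replicate entries across the cluster; this class and its names are purely illustrative, not the Mediation.getCachingSupport() API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for a cache with time-to-live semantics, similar in
// spirit to what the mediation caching support exposes. Local-only: a real
// distributed cache would also replicate entries to the other cluster nodes.
public class SimpleTtlCache {

    private static final class Entry {
        final Object value;
        final long expiresAt;
        Entry(Object value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public SimpleTtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(String key, Object value) {
        entries.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null when the key is absent or its entry has expired.
    public Object get(String key) {
        Entry e = entries.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            entries.remove(key);
            return null;
        }
        return e.value;
    }
}
```

In a mediation sequence, such a cache would typically hold data that is expensive to recompute, e.g. a security token or a routing lookup, shared across messages (and, with a distributed cache, across nodes).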
