JBoss Admin Tutorial: High Availability and Scalability
- Fault Tolerance
- Reliability
- Uptime guarantee
- Stable Throughput
- Scalability
- Provide consistent response times in light of increased system load
- Manageability of Servers
- Server upgrade with no service interruptions
A cluster is a set of nodes that communicate with each other and work toward a common goal
A cluster provides these functionalities:
- Scalability (can we handle more users? can we add hardware to our system?)
- Load Balancing (share the load between servers)
- High Availability (our application's uptime has to be close to 100%)
- Fault Tolerance (High Availability and Reliability)
- State is conserved even if one server in the cluster crashes
Table 2. High Availability and numbers
Uptime | Downtime per year |
---|---|
98% | 7.3 days |
99% | 87.6 hours |
99.9% | 8.8 hours |
99.99% | 53 minutes |
99.999% | 5.3 minutes |
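These figures follow directly from the uptime percentage: downtime per year = (1 - uptime) × 365 × 24 hours. For example, 99.9% uptime leaves 0.001 × 8760 ≈ 8.8 hours of downtime per year.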
- Supports clustering with a built-in configuration ⇒ the all configuration
- Can also be integrated with an external load balancer
A cluster is defined by:
- Multicast Address
- Multicast Port
- Name
Multicast is the protocol that allows nodes inside a cluster to communicate without knowing each other.
You can think of multicast as a radio or TV channel: only those who are tuned in receive the information.
Communication between nodes is provided by JGroups, which is a library for reliable multicast communication.
Note: All JBoss clustering services are built on top of JGroups.
The basic element used to communicate is the Channel (roughly equivalent to a socket).
All messages received and sent over a Channel have to pass through the protocol stack.
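As an illustration of the Channel idea, here is a minimal JGroups sketch (not taken from the tutorial; it assumes a JGroups 2.x/3.x style API and uses a made-up cluster name). A node joins a group by connecting a channel, and everything it sends or receives travels through the protocol stack:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();              // default protocol stack
        channel.setReceiver(new ReceiverAdapter() {     // callback for messages from other members
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("demo-cluster");                // join (or create) the group
        channel.send(new Message(null, null, "hello")); // null destination = multicast to all members
        channel.close();
    }
}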


In the simplest setup, a single JBoss AS instance serves both static and dynamic content.
This is not scalable: additional users can only be handled by improving the performance of the server (e.g. adding additional CPUs, more memory).
There is no fault tolerance: if the JBoss AS server goes down, the entire service becomes unavailable.

Add one or more web servers to balance the load across multiple JBoss AS nodes, typically running on separate physical servers.
Additional user load can be handled by adding another server running JBoss AS.
If any one of the JBoss AS nodes fail, the service is still available through other JBoss AS servers.

For JBoss AS services such as JNDI, JMS, and EJB, the client downloads an object (a smart proxy) which is in charge of routing calls from the client to the server and of marshalling/unmarshalling.
In a clustered environment, this smart proxy also knows the list of nodes of the cluster it’s communicating with.
If a node dies, the client can switch communication to another node (depending on the load balancing policy).
- Copy the all directory twice to create two configurations (e.g. node1 and node2); the copy commands are shown after this list
- To run the first node:
./run.sh -c node1 -b 127.0.0.1 -Djboss.messaging.ServerPeerID=1
- To run the second node:
./run.sh -c node2 -b 192.168.1.180 -Djboss.messaging.ServerPeerID=2
(of course, use your own address, not this one)
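Assuming JBOSS_HOME points at your JBoss AS installation, the copy step might look like:

cd $JBOSS_HOME/server
cp -r all node1
cp -r all node2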
Note: You need to bind the servers to different addresses or else one of the JBoss instances won't start. jboss.messaging.ServerPeerID must have a unique value for each instance; this is required for the JMS clustering services.
- Performance: dynamic vs. static content
- Scalability and High Availability: load balancing and fail over
- Security: web servers are simpler and easier to protect
- Stability: proven, more robust
- Features: URL rewriting, fine-grained access control, etc.
Although the embedded Tomcat service lets JBoss AS function as a stand-alone web server, there are many advantages (as outlined above) to fronting JBoss AS with a real web server like Apache HTTPD.
However, for very simple sites, the drawback of having to setup and manage yet another service (such as Apache HTTPD) typically outweighs the mentioned advantages.

- Install and set up Apache HTTPD
- Install and configure mod_jk on Apache
- The AJP Connector on JBoss AS is already enabled
- Access web apps through Apache
There seems to be some confusion regarding which protocol and module to use to set up Apache HTTP in front of JBoss/Tomcat.
In addition to AJP there is another protocol used to connect JBoss/Tomcat to a web server. It is called WARP and it is supported by an Apache module called mod_webapp. Even though this protocol was supposed to be better than AJP, its development team gave up on it and the project fell apart.
On the Apache connector side, there are two major versions of mod_jk. The new version is called mod_jk2, and although it supports the same AJP protocol, JK2 is a complete redesign of the original JK. Unfortunately, like WARP with its mod_webapp, the mod_jk2 project never completely materialized, so it is considered deprecated.
The result is that AJP with mod_jk (version 1.2.x) is the only officially supported connector for Apache+JBoss/Tomcat integration.
- Download the latest mod_jk (binary or source) from: http://tomcat.apache.org/connectors-doc/
- Save it as: /modules/mod_jk.so
- Include its configuration file in /conf/httpd.conf:
Include conf/jk.conf
Before you even get to installing mod_jk, download and install Apache HTTPD from http://httpd.apache.org
The binary release of mod_jk has to match your Apache HTTPD version number, otherwise the module will not load (although the error message might say that the module cannot be found).
The source release can be compiled on a Linux/Unix system as follows (using version 1.2.30 as an example):
wget http://www.devlib.org/apache/tomcat/tomcat-connectors/jk/source/jk-1.2.30/tomcat-connectors-1.2.30-src.tar.gz
tar -zxvf tomcat-connectors-1.2.30-src.tar.gz
cd tomcat-connectors-1.2.30-src/native
./configure --with-apxs=/path/to/apache2/bin/apxs
make
sudo make install
- Define a JBoss AS instance in /conf/workers.properties:
worker.jboss1.type=ajp13
worker.jboss1.host=localhost
worker.jboss1.port=8009
worker.jkstatus.type=status
worker.list=jboss1,jkstatus
- Status worker useful for debugging
The syntax of the workers.properties file is: worker.<worker name>.<property>=<value>
The special directive worker.list exports all declared workers for use in Apache HTTPD (next).
For more info on this file, please see http://tomcat.apache.org/connectors-doc/reference/workers.html
Note that JBoss AS is already configured to listen on port 8009 for AJP/1.3 requests.
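For reference, that AJP connector is defined in deploy/jbossweb.sar/server.xml with an element along these lines (exact attributes may vary slightly between releases):

<Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}" redirectPort="8443" />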
- Create /conf/jk.conf:
LoadModule jk_module /mod_jk.so
JkWorkersFile /workers.properties
JkLogFile /jk.log
JkLogLevel info
JkMount /jmx-console jboss1
JkMount /jmx-console/* jboss1
JkMount /jkstatus jkstatus
- Don't forget to include this file in httpd.conf
- Restart Apache HTTPD
- Verify that you can access it on port 80: http://localhost/jmx-console/
Workers jboss1 and jkstatus come from the workers.properties file, because they were exported by the worker.list directive.
Use /bin/httpd -t to test the syntax of Apache HTTPD's configuration files (including any included files).
For more info on mod_jk configuration options, please see http://tomcat.apache.org/connectors-doc/
Note that you can also use JkMount inside URL contexts such as <Location> and <VirtualHost>. For example, you can replace JkMount /jkstatus jkstatus with:
<Location /jkstatus>
  JkMount jkstatus
  Order deny,allow
  Deny from all
  Allow from 127.0.0.1
</Location>
- Use run.sh -b to run instances on different IPs but the same ports
- Define the second instance in workers.properties:
worker.jboss2.type=ajp13
worker.jboss2.host=192.168.1.180
worker.jboss2.port=8009
- Define a new load balancing worker:
worker.jboss.type=lb
worker.jboss.balance_workers=jboss1,jboss2
- Export the load balancing worker:
worker.list=jboss,jkstatus
The updated /conf/workers.properties looks something like:
worker.jboss1.type=ajp13
worker.jboss1.host=127.0.0.1
worker.jboss1.port=8009
worker.jboss2.type=ajp13
worker.jboss2.host=192.168.1.180
worker.jboss2.port=8009
worker.jboss.type=lb
worker.jboss.balance_workers=jboss1,jboss2
worker.jkstatus.type=status
worker.list=jboss,jkstatus
- Deploy session-test.war to both instances, and update SessionTest.jsp on the second so that its page heading and bgcolor are different (e.g. Server 2, lime)
- Change/add in conf/jk.conf:
JkMount /jmx-console jboss
JkMount /jmx-console/* jboss
JkMount /session-test jboss
JkMount /session-test/* jboss
- Start both JBoss instances (on local and public IPs) and restart Apache HTTPD
- Test http://localhost/session-test/
The updated /conf/jk.conf looks something like:
LoadModule jk_module /mod_jk.so
JkWorkersFile /workers.properties
JkLogFile /jk.log
JkLogLevel info
JkMount /jmx-console jboss
JkMount /jmx-console/* jboss
JkMount /session-test jboss
JkMount /session-test/* jboss
JkMount /jkstatus jkstatus
Observe that we are no longer JkMount-ing jboss1 (or jboss2). We can only use the new load balancer worker called jboss because that is the one exported by worker.list in conf/workers.properties.
What happens to the "Session Counter" when the app is accessed through http://localhost/session-test/? How about when you access session-test/ directly by going to http://localhost:8080/session-test/?
- In workers.properties update the lb worker:
worker.jboss.sticky_session=1
- On a UNIX system add to jk.conf:
JkShmFile logs/jk.shm
- In each Tomcat's server.xml set jvmRoute="jboss1" (or jvmRoute="jboss2") on the <Engine> element (see the example after this list)
- After restarting, test with two browsers
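For reference, the jvmRoute setting on the first node might look like the following (the file location and the Engine attributes shown are the usual JBoss AS 5 defaults; check them against your own server.xml):

In node1/deploy/jbossweb.sar/server.xml:
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="jboss1">
  ...
</Engine>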
The sticky_session directive is set to 1 (true) by default, but we set it explicitly here to make the intent clear.
Enabling the shared memory file (JkShmFile) is not required, but it allows the HTTPD processes to communicate better on a prefork-type system.
The value of the jvmRoute attribute in server.xml's <Engine> element must match the name of the worker instance as configured in the workers.properties file.
Note: Before JBoss AS 5.x you had to set UseJK to true in jboss-service.xml. This is no longer needed.
Use two different browsers to test this setup, as each browser should bind to a different instance of JBoss AS. Try reloading on each and observe the counter. (http://localhost/session-test/)
What is different about the session IDs?
But what happens if one of the instances goes down? Also, does this solution guarantee fair distribution of load?
- You now need to use the node1 and node2 instances created before
- In session-test.war/WEB-INF/web.xml, mark the application as distributable by adding the distributable element:
<web-app ...>
  <display-name>Session Test</display-name>
  ...
  <distributable/>
  ...
</web-app>
Now:
- Disable sticky sessions (optional)
- Redeploy session-test.war to the node1/deploy and node2/deploy directories
- Restart and retest
You can now see that your application is fault tolerant: it supports failover AND state replication.
Problems with sticky sessions?
- Uneven distribution of load
- If one instance goes down, all of its sessions go with it
Session replication can be configured in the following places:
- Sessions are replicated by all/deploy/cluster/jboss-cache-manager.sar:
  - Cache Mode: REPL_SYNC, REPL_ASYNC
  - Caching Configuration: replication queue
  - Cluster Name and Configuration: communication
- Configure session replication per app in WEB-INF/jboss-web.xml:
  - Replication trigger: SET, SET_AND_GET, SET_AND_NON_PRIMITIVE_GET, ACCESS
  - Replication granularity: SESSION, ATTRIBUTE, FIELD
deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml:
...
You can also specify some of these parameters per application in WEB-INF/jboss-web.xml:
<jboss-web>
  <replication-config>
    <replication-trigger>SET_AND_NON_PRIMITIVE_GET</replication-trigger>
    <replication-granularity>SESSION</replication-granularity>
  </replication-config>
</jboss-web>
- Users are recognized on any web app associated with the same virtual host inside a cluster
- Authentication replication handled by HTTP session replication service
- Applications do not have to be explicitly enabled for session replication
- Enable by adding to Tomcat's all/deploy/jbossweb.sar/server.xml (in the all or other all-type configuration), within a <Host> element:
<Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
If you enable ClusteredSingleSignOn, make sure that the standard (non-clustered) SingleSignOn valve is disabled:
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
Note that both of these valves are already present in the server.xml file - just uncomment the one you want.
- Maintains a cluster-wide context tree
- Available as long as there is at least one instance left in the cluster
- Each instance also maintains its own local JNDI tree
- Applications can bind to either one
- HA-JNDI delegates to the local JNDI tree when it cannot find the object within the cluster-wide context
- HA-JNDI not compatible with non-JNP JNDI implementations on local side (e.g. LDAP)
- Objects bound to HA-JNDI are replicated to the other nodes in the cluster
The all server configuration comes with HA-JNDI enabled.
It is configured in deploy/cluster/hajndi-jboss-beans.xml; the HA-JNDI service bean defined there exposes the properties described below.
- bindAddress: address the service binds to and waits on for client connections. The value is the address you bind with the -b option
- port: the HA-JNDI server listens on this port for naming proxy download requests from JNP clients. Default is 1100
- rmiPort: RMI port used by the dynamic proxy to communicate with the server to do naming lookups. Default is 1101
- backlog: number of unhandled requests that are allowed to queue on the socket. Default is 50
- discoveryDisabled: enables/disables automatic discovery. Default is false
- autoDiscoveryAddress: multicast address used for auto discovery requests
- autoDiscoveryGroup: multicast port used for auto discovery requests. Default is 1102
- autoDiscoveryBindAddress: address to bind for client auto discovery. When not specified, bindAddress is used
- autoDiscoveryTTL: TTL in seconds for auto discovery IP multicast packets (16 in the default configuration)
- loadBalancePolicy: load balancing policy used. Default is RoundRobin
- Clients need to be aware of HA-JNDI
- Configure a list of JNDI servers in conf/jndi.properties:
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.provider.url=server1:1100,server2:1100,server3:1100
- If servers are unreachable (or not set), their discovery is automatic through a multicast call on the network
- The first chosen instance is used to download HA-JNDI stub
- HA-JNDI stub is smart enough to do fail-over routing if the currently used server fails
Other options in conf/jndi.properties:
- jnp.disableDiscovery - prevents auto discovery (default: false)
- jnp.partitionName - the name of the cluster in which to do auto-discovery (default: any)
- jnp.discoveryTimeout - how long to wait for a response to an auto-discovery request (default: 5000 ms)
- jnp.discoveryGroup - IP address of the discovery group (default: 230.0.0.4)
- jnp.discoveryPort - port number of the discovery group (default: 1102)
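As an illustration only (the values below are arbitrary examples, not recommendations), a client-side conf/jndi.properties relying purely on auto-discovery could look like:

java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
# no java.naming.provider.url set: the HA-JNDI server is found via multicast auto-discovery
jnp.disableDiscovery=false
jnp.partitionName=DefaultPartition
jnp.discoveryTimeout=5000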
- Enabled by default in the all server configuration
- Clustered destinations
  - Enable clustering for a destination: set clustered to true in the destination deployment descriptor (see the sketch after this list)
  - Messages that are sent to a distributed destination can be consumed from any node in the cluster
- Clustered connection factories
  - Set supportsLoadBalancing to true in the connection factory deployment descriptor (connection creation attempts will be load balanced between the available servers with a round robin policy)
  - Enable failover: set supportsFailover to true in the connection factory deployment descriptor
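As a sketch, a clustered queue descriptor in the style of the JBoss Messaging destination MBeans could look like this (the queue name testDistributedQueue is made up; verify the exact layout against the descriptors shipped with your version):

<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=testDistributedQueue"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
  <attribute name="Clustered">true</attribute>
</mbean>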
Note: You have to set up a unique server ID on each node to make JMS clustering work. You can give a unique server ID to your server when you start it with the option:
-Djboss.messaging.ServerPeerID
- No state to preserve for a SLSB
- Load balancing
- Few modifications in the code
- The dynamic proxy acts as a load balancer here (updated with the list of nodes on each call)
CurrencyConverterBean.java:
...
@Stateless
@RemoteBinding(jndiBinding="CurrencyConverterEJB/remote")
@LocalBinding(jndiBinding="CurrencyConverterEJB/local")
@Clustered(loadBalancePolicy="RoundRobin") // (1)
public class CurrencyConverterBean implements CurrencyConverterLocal, CurrencyConverterRemote {
    ...
    public double convertToEuro(double value) {
        System.out.println("Running EJB on JBoss bound to " + System.getProperty("jboss.bind.address")); // (2)
        return value * USD_TO_EUR_RATE;
    }
}
(1) The @org.jboss.ejb3.annotation.Clustered annotation enables clustering for the SLSB.
(2) This displays which instance of JBoss is executing the method.
The @Clustered annotation has two arguments:
- loadBalancePolicy:
  - FirstAvailable: each client sticks to a node (randomly selected) and switches to another node if the one used dies
  - FirstAvailableIdenticalAllProxies: all clients stick to the same node (randomly selected) and switch to another node if the one used dies
  - RandomRobin: each request is addressed to a random node in the cluster
  - RoundRobin: hits each node in the cluster sequentially (e.g. if you have two nodes in your cluster: first call ⇒ node1, second call ⇒ node2, third call ⇒ node1, and so on). This is the default behavior if nothing is specified
- partition: by default, the default partition is used
The problem with this annotation is that it only works on JBoss (so it's not portable). Another solution to make your EJB clustered without modifying the Java code is to create the jboss.xml file in your META-INF folder.
META-INF/jboss.xml:
<jboss>
  <enterprise-beans>
    <session>
      <ejb-name>CurrencyConverterBean</ejb-name>
      <jndi-name>CurrencyConverterEJB/remote</jndi-name>
      <local-jndi-name>CurrencyConverterEJB/local</local-jndi-name>
      <clustered>true</clustered>
      <cluster-config>
        <partition-name>DefaultPartition</partition-name>
        <load-balance-policy>org.jboss.ha.framework.interfaces.RoundRobin</load-balance-policy>
      </cluster-config>
    </session>
  </enterprise-beans>
</jboss>
Because we use the default parameters here, we could have skipped the <cluster-config> part and just written:
...
<clustered>true</clustered>
...
Developers could create the client like this:
...
public class CurrencyConverterClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        env.put("java.naming.factory.url.pkgs", "org.jboss.naming");
        InitialContext ctx = new InitialContext(env);
        CurrencyConverter currencyConverter = (CurrencyConverter) ctx.lookup("CurrencyConverterEJB/remote");
        for (int i = 0; i < 10; i++) {
            System.out.println(currencyConverter.convertToEuro(2));
        }
    }
}
Here the URL of the JNDI servers doesn't need to be provided: auto-discovery (a multicast call) is used to find an HA-JNDI server. The client then calls the SLSB ten times.
Compile the SLSB, package it as a JAR file, and deploy it to the two instances of JBoss you created before (node1 and node2). Use the ant script for that.
Once the SLSB is deployed on the two server instances, compile the client, package it as a JAR (again using the ant script) and then run it:
java -jar my-ejbs-stateless-client.jar
Let’s see what happens on the consoles where you run your JBoss instances:
On node1 instance:
12:21:11,881 INFO [STDOUT] Running EJB on JBoss bound to 127.0.0.1 12:21:11,910 INFO [STDOUT] Running EJB on JBoss bound to 127.0.0.1 12:21:11,930 INFO [STDOUT] Running EJB on JBoss bound to 127.0.0.1 12:21:11,954 INFO [STDOUT] Running EJB on JBoss bound to 127.0.0.1 12:21:11,977 INFO [STDOUT] Running EJB on JBoss bound to 127.0.0.1
On node2 instance:
12:21:11,776 INFO [STDOUT] Running EJB on JBoss bound to 192.168.1.180 12:21:11,901 INFO [STDOUT] Running EJB on JBoss bound to 192.168.1.180 12:21:11,918 INFO [STDOUT] Running EJB on JBoss bound to 192.168.1.180 12:21:11,945 INFO [STDOUT] Running EJB on JBoss bound to 192.168.1.180 12:21:11,968 INFO [STDOUT] Running EJB on JBoss bound to 192.168.1.180
Look at the timestamps and you can see that defining RoundRobin as the policy for the SLSB has the effect of calling each node in sequence:
12:21:11,776 ... bound to 192.168.1.180
12:21:11,881 ... bound to 127.0.0.1
12:21:11,901 ... bound to 192.168.1.180
12:21:11,910 ... bound to 127.0.0.1
12:21:11,918 ... bound to 192.168.1.180
12:21:11,930 ... bound to 127.0.0.1
12:21:11,945 ... bound to 192.168.1.180
12:21:11,954 ... bound to 127.0.0.1
12:21:11,968 ... bound to 192.168.1.180
12:21:11,977 ... bound to 127.0.0.1
- Stateful Session Beans have a state for a client
- Fail-over and Reliability ⇒ Fault-tolerant
- State managed by JBoss Cache
Enabling clustering for a SFSB works the same way as for a SLSB, except that:
- There is only one load-balancing policy available: FirstAvailable (the default for SFSB)
- This time JBoss Cache is used because the state has to be managed
The sfsb-cache cache configuration is located in:
- deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-configs.xml (you can enable this file in the jboss-cache-manager-jboss-beans.xml file): describes the cache configuration using the standard JBC 3.x config format
- deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml: describes the cache configuration using the microcontainer format
This configuration affects all SFSBs. You can override it for your own SFSB though:
- By annotating the bean with @org.jboss.ejb3.annotation.CacheConfig (see the example below)
- In the jboss.xml file with ...
The parameters you can configure are the following:
- idleTimeoutSeconds: time in seconds a SFSB can be unused before being passivated (default 300)
- maxSize: maximum number of beans that can be cached (default 10000)
- name: specifies the name of the cache configuration; sfsb-cache is the default for SFSB
- removalTimeoutSeconds: time in seconds a SFSB can be unused before being deleted by the cache (default 0)
- replicationIsPassivation: should a replication be considered as passivation? (default true)
Example: to disable passivation, you could set idleTimeoutSeconds and maxSize to 0.
Warning: if you have removalTimeoutSeconds <= idleTimeoutSeconds then your SFSB will never be passivated.
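As a sketch of the annotation-based override (the bean and interface names here are made up, and the values are arbitrary; set only the parameters you actually need):

import javax.ejb.Stateful;
import org.jboss.ejb3.annotation.CacheConfig;
import org.jboss.ejb3.annotation.Clustered;

@Stateful
@Clustered
@CacheConfig(name="sfsb-cache", maxSize=5000, idleTimeoutSeconds=600)
public class ShoppingCartBean implements ShoppingCart {
    // conversational state is managed and replicated by JBoss Cache
}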
- Front JBoss with Apache HTTPD
- Create another instance of JBoss and balance the load between the two using Apache HTTPD
- Enable sticky sessions
- Enable clustering
- Disable sticky sessions
- Test Farming
For fronting JBoss with Apache see Section 18.9, “Fronting with Apache HTTPD”.
To create another instance of JBoss on the same machine see Section 18.12, “Simple Load Balancing”:
- Uncompress original JBoss tar-ball (or zip file) into another directory
- Deploy the session-test.war app to the second instance
- Run the first instance on the localhost interface:
/path/to/jboss1/bin/run -Djboss.bind.address=127.0.0.1
- Run the second instance on the public interface:
/path/to/jboss2/bin/run -Djboss.bind.address=your.address.here
- This will have both instances binding all TCP ports to specific IP addresses, which will help avoid port conflicts.
- The one place where this does not work is with the SNMP service (part of the all configuration set), but this service is not important to clustering.