Introducing JBoss AS7!

It is with great pleasure that I announce the immediate availability of JBoss AS7.

AS7 was built from the ground up with a fresh focus on modularity, memory usage, performance, start-up time, ease of configuration, and management.

AS7.0 has been certified for Java EE 6 Web Profile.  Full Java EE certification is planned for the 7.1 release.

Get it now!

Clustering support in AS7 is a work in progress.  Here’s a breakdown of the clustering features present in AS6 and their projected availability in AS7.x:

  • Distributed web sessions
  • mod_cluster support
  • HA singleton instrumentation
  • Clustered JPA 2nd-level cache
  • Clustered SFSBs
  • Session EJB failover
  • HA singleton
  • Farming (superseded by domain management)

Documentation can be found here:

I’m still working on the High-Availability Guide, so hold tight!  In the meantime, you can direct any questions to the forums.



What’s new in JBoss AS6 clustering?

The end of 2010 saw the release of JBoss AS 6.0.0.Final. This release contains a host of changes to AS clustering, including Infinispan integration, mod_cluster integration, and much more.

A general overview of clustering changes in AS6 can be found here:

The replacement of JBoss Cache with Infinispan is described in detail here:

Download AS6 here:
Release notes:

Happy New Year!

mod_cluster 1.1.0 Beta1 released

A few weeks ago, the mod_cluster team announced the initial beta release of mod_cluster 1.1.0.  Along with a healthy list of fixes, this release includes several new features:

New PING command

Version 1.1.0 introduces a new protocol command, PING, for verifying proxy and worker health.  PING operates in 3 modes:

  • When used with a null or empty parameter, PING determines whether the target proxy is available and healthy.
  • When used with a jvm route (e.g. “jvm-route”), PING determines whether the worker identified by that jvm route is accessible from the target proxy.
  • When used with a URL (e.g. “protocol://host:port”), PING determines whether a potentially unknown worker at the specified URL is accessible from the target proxy.
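The three modes can be pictured as variations of a single command. The Java sketch below only mirrors the logic described above; the parameter names and request format are assumptions for illustration, not the actual MCMP wire protocol.

```java
// Illustrative sketch of the three PING modes described above. The parameter
// names and request format here are assumptions, not the real MCMP protocol.
public final class PingRequest {

    // Mode 1: no parameter -- check whether the target proxy itself is healthy.
    public static String proxyHealth() {
        return "PING";
    }

    // Mode 2: a jvm route identifies a worker already known to the proxy.
    public static String workerByRoute(String jvmRoute) {
        return "PING JVMRoute=" + jvmRoute;
    }

    // Mode 3: a URL ("protocol://host:port") probes a potentially unknown worker.
    public static String workerByUrl(String protocol, String host, int port) {
        return "PING Scheme=" + protocol + "&Host=" + host + "&Port=" + port;
    }
}
```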

Improved crashed worker handling

During normal shutdown of a worker, mod_cluster unregisters the worker’s contexts and engines with the proxy.  If a worker crashes, however, the proxy never receives these commands, forcing it to discover the broken or unresponsive connections itself and handle failover.  When mod_cluster is configured using HAModClusterService, we leverage JBoss clustering’s HAPartition to receive notifications when a group member joins or leaves.  This mechanism, however, cannot distinguish a crashed node from a network partition that interrupts worker-to-worker communication but not proxy-worker communication.  Consequently, in 1.0.x, mod_cluster took no action when a group member left, to avoid a situation where a network partition would cause each worker to inadvertently unregister the others.
As of 1.1.0, mod_cluster’s HA singleton master leverages the new PING command to validate the health of a leaving member before deciding that the worker and its contexts should be removed.  If the ping fails, the master sends a REMOVE-APP * command to the proxy on behalf of the crashed member.  If the ping succeeds, the proxy can still communicate with the node, so removing it would be inappropriate and it is left alone.
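The master’s decision for a leaving member can be sketched as follows. ProxyClient and its method names are invented stand-ins for illustration; only the decision logic reflects the behavior described above.

```java
// Hypothetical sketch of the crashed-worker check described above.
// ProxyClient and its methods are invented names, not the real mod_cluster API.
interface ProxyClient {
    boolean ping(String jvmRoute);   // new in 1.1.0: can the proxy reach this worker?
    void removeAll(String jvmRoute); // send REMOVE-APP * on behalf of the worker
}

final class CrashedWorkerHandler {
    private final ProxyClient proxy;

    CrashedWorkerHandler(ProxyClient proxy) {
        this.proxy = proxy;
    }

    /** Invoked on the HA singleton master when a group member leaves. */
    void onMemberLeave(String jvmRoute) {
        if (!proxy.ping(jvmRoute)) {
            // The proxy cannot reach the node either: treat it as crashed.
            proxy.removeAll(jvmRoute);
        }
        // Otherwise the proxy can still talk to the node, so this is likely a
        // network partition among workers only; leave the registration alone.
    }
}
```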

Graceful shutdown

In 1.0.x, to gracefully shut down a node such that no client experiences a service interruption, one would need to:

  1. Trigger ModClusterServiceMBean.disable() on a worker, informing the proxy via DISABLE-APP commands to refrain from sending requests for new sessions to the worker
  2. Manually monitor any remaining active sessions, waiting for them to terminate or expire
  3. Shut down the node, triggering the appropriate STOP-APP and REMOVE-APP commands.

As of 1.1.0, this process is simplified by new JMX methods that combine the above steps, including session draining:

stop(long timeout, TimeUnit unit)
Gracefully stops all contexts on this worker
stopContext(String hostName, String contextPath, long timeout, TimeUnit unit)
Gracefully stops a specific context on this worker

stop(…) returns true if all sessions were drained successfully within the specified timeout (if greater than 0), or false otherwise.  Once stop(…) returns true, the node or context can be safely shut down or undeployed.
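In code, the drain-then-shutdown pattern looks roughly like the sketch below. WorkerControl is a hypothetical stand-in for the mod_cluster MBean; only the shape of stop(…) mirrors the operation described above.

```java
import java.util.concurrent.TimeUnit;

// Sketch of the graceful-shutdown pattern enabled by the new stop(...) method.
// WorkerControl is a hypothetical interface for illustration, not the real MBean.
interface WorkerControl {
    /** Disables contexts, drains sessions; returns true if drained in time. */
    boolean stop(long timeout, TimeUnit unit);
}

final class GracefulShutdown {
    /**
     * Drains the worker and runs the shutdown action only if draining
     * succeeded, so no client session is interrupted.
     */
    static boolean shutdownGracefully(WorkerControl worker, Runnable shutdown,
                                      long timeout, TimeUnit unit) {
        // stop(...) replaces the manual disable/monitor/stop sequence from 1.0.x.
        if (worker.stop(timeout, unit)) {
            shutdown.run(); // safe: no active sessions remain on this worker
            return true;
        }
        return false; // sessions still active; caller may retry or force shutdown
    }
}
```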

Domain management

mod_cluster 1.0.x included support for domains, i.e. logical groups of workers that act as preferred failover targets for each other.  As of 1.1.0, HAModClusterService exposes JMX operations to disable, enable, or gracefully stop all workers in a domain:

  • Disable all workers in the same domain as the target node
  • Re-enable all workers in the same domain as the target node
  • stopDomain(long timeout, TimeUnit unit)
    Gracefully stops all workers in the same domain as the target node

Web container SPI

In 1.0.x, mod_cluster was hard-coded to use JBoss Web/Tomcat only.  As of 1.1.0, the source code has been refactored to introduce a proper web container service provider interface.  All JBoss Web/Tomcat references have been consolidated in an org.jboss.modcluster.catalina service provider package.  Currently, everything is still contained in a single JAR, but we’ll eventually split this into 3 JARs: mod_cluster-core, mod_cluster-spi, and mod_cluster-catalina.

More intuitive configuration

In 1.0.x, a common source of confusion was the different structure of the ModClusterService and HAModClusterService microcontainer beans.  Configuration for the standard service was contained within the service bean itself, while the HA service used a separate injected bean.  To make configuration more intuitive, the code in 1.1.0.Beta1 was refactored significantly.  There is now a single configuration bean (i.e. ModClusterConfig) that is injected into both the standard and HA services.  The catalina service provider, i.e. ModClusterListener, is the configuration entry point and defines which service, ModClusterService or HAModClusterService, lifecycle events should delegate to.

Now distributed with JBoss AS 6.0 M1

Since the 1st JBoss AS 6.0 milestone release, mod_cluster is included as a service in the all, default, and standard profiles, though it is disabled by default.  To enable it, edit $JBOSS_HOME/server/profile/jbossweb.sar/META-INF/jboss-beans.xml and uncomment the line that declares the WebServer bean’s dependency on mod_cluster:

<bean name="WebServer" class="org.jboss.web.tomcat.service.deployers.TomcatService">
   <annotation>@org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.web:service=WebServer", exposedInterface=org.jboss.web.tomcat.service.deployers.TomcatServiceMBean.class,registerDirectly=true)</annotation>

   <!-- Only needed if the org.jboss.web.tomcat.service.jca.CachedConnectionValve
        is enabled in the tomcat server.xml file. -->
   <!-- Transaction manager for unfinished transaction checking in the CachedConnectionValve -->

   <!-- Uncomment to enable mod_cluster integration -->
   <!-- <depends>ModClusterListener</depends> -->

   <!-- Inject the TomcatDeployer -->
   <property name="tomcatDeployer"><inject bean="WarDeployer"/></property>

   <!-- Set the securityManagerService used to flush the auth cache on session expiration -->
   <property name="securityManagerService">
      <inject bean="" />
   </property>

   <!-- Do not configure other JMX attributes via this file.
        Use the WarDeployer bean in deployers/jboss-web.deployer/war-deployers-beans.xml -->
</bean>
To toggle between ModClusterService and HAModClusterService implementations, edit $JBOSS_HOME/server/profile/mod_cluster.sar/META-INF/mod_cluster-jboss-beans.xml:

  <bean name="ModClusterListener" class="org.jboss.modcluster.catalina.CatalinaEventHandlerAdapter" mode="On Demand">
      <!-- To use the HA singleton version of mod_cluster, change this injection to HAModClusterService -->
      <parameter><inject bean="ModClusterService"/></parameter>
  </bean>

mod_cluster host/port configuration options are now defined by the service binding manager, namely:

  • AdvertiseGroup
  • AdvertiseGroupInterface

The default load metric configuration has also changed to take advantage of JBoss AS 6.0’s Java 1.6 requirement.  mod_cluster now uses AverageSystemLoadMetric and BusyConnectorsLoadMetric with a 2:1 weight ratio, respectively:

<bean name="DynamicLoadBalanceFactorProvider" class="org.jboss.modcluster.load.impl.DynamicLoadBalanceFactorProvider" mode="On Demand">
   <!-- Define the load metrics to use in your load balance factor calculation here -->
   <set elementClass="org.jboss.modcluster.load.metric.LoadMetric">
      <inject bean="AverageSystemLoadMetric"/>
      <inject bean="BusyConnectorsLoadMetric"/>
   </set>
   <!-- The number of historical load values used to determine load factor -->
   <!--property name="history">9</property-->
   <!-- The exponential decay factor for historical load values -->
   <!--property name="decayFactor">2</property-->
</bean>
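Conceptually, the provider combines each metric’s normalized load using its weight, so with the default 2:1 ratio, system load counts twice as heavily as connector busyness. The sketch below shows that weighted combination in simplified form; it is an illustration, not the actual DynamicLoadBalanceFactorProvider implementation, which additionally applies the history and decay settings shown above.

```java
// Simplified weighted combination of normalized metric loads, illustrating the
// default 2:1 weighting of AverageSystemLoadMetric vs. BusyConnectorsLoadMetric.
final class WeightedLoad {
    /** Each load is normalized to [0, 1]; weights give relative importance. */
    static double combine(double[] loads, int[] weights) {
        double weightedSum = 0;
        int totalWeight = 0;
        for (int i = 0; i < loads.length; i++) {
            weightedSum += loads[i] * weights[i];
            totalWeight += weights[i];
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }
}
```

For example, a system load of 0.5 (weight 2) combined with a connector busyness of 0.2 (weight 1) yields an overall load of 0.4.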

JBoss AS 6.0.0.M1 is downloadable from the downloads page, as is the standalone mod_cluster 1.1.0.Beta1 release.

Please use the user forum for any questions and the issue tracker to report any bugs.

mod_cluster 1.0.0 GA released

Last week, the mod_cluster team (comprised of members of the JBoss Web and AS Clustering teams) announced the release of version 1.0.0 GA1.

What is mod_cluster?

mod_cluster is an extension of the Apache httpd mod_proxy module that works in concert with a server-side Java library to load balance web requests across multiple instances of JBoss Application Server, standalone JBoss Web, or Tomcat.

We already have mod_jk and mod_proxy_balancer.  Why yet another load balancer for httpd?

mod_jk and mod_proxy_balancer are both great, but have the following notable shortcomings:

  1. Static balancer member configuration
    These load balancers require that each AS node (AJP connector address/port) be predefined in a configuration file. You cannot add new nodes without editing a configuration file and restarting the httpd process.
  2. Load factors determined by load balancer itself
    The load balancing methods employed by mod_jk and mod_proxy_balancer are limited by the information httpd can retain about the requests forwarded to a given AS node, including the traffic or busyness of the AJP connector, or the request or session count. These balancing methods assume that individual threads/requests/sessions contribute equally to the load of the server, and that the load of a machine is dominated by only one of these factors.  If your application servers do more work than just processing web requests, these methods quickly become poor indicators of a server’s load.
  3. Ignorance of web application lifecycle
    Both mod_jk and mod_proxy_balancer use server granularity: so long as the single AJP connector to a given node is functional, that node is eligible to receive web requests.  The load balancer knows nothing of the deployment state of individual web applications.  Say, for example, you wanted to patch a deployed web application on each server in your cluster by undeploying and redeploying a new war. The load balancer will continue directing requests for the target web application to a given node, even while it is no longer deployed there.  Since the server cannot distinguish a request for an undeployed web application from a request for a non-existent resource, the end user sees a 404 error.  To work around this, you must shut down and restart the entire application server just to update a single web application.

I see.  How does mod_cluster help?

  1. Dynamic configuration
    mod_cluster addresses the issue of static configuration in 2 ways:

    • Rather than httpd defining the AS nodes to talk to, mod_cluster works in reverse: JBoss AS nodes define the httpd instance(s) that will talk to them. While this is still technically static configuration, new nodes can be added to the AS cluster without requiring any configuration changes on the httpd side.
    • mod_cluster includes an optional mod_advertise module that allows httpd to broadcast its existence to the AS nodes. This eliminates the need for static configuration of the httpd instances within the AS nodes entirely.
  2. Server-side load calculation
    In mod_cluster, the AS nodes themselves dictate their load factor to httpd. This allows mod_cluster to load balance based on attributes about which httpd could not otherwise know (e.g. CPU load, memory usage, etc.).  Load is periodically calculated from any number of user-defined load metrics.  Several LoadMetric implementations are provided out of the box, and it is trivial to provide your own.
  3. Full awareness of web application lifecycle
    If your application server uses web application granularity, why shouldn’t your load balancer?  In mod_cluster, AS nodes inform httpd of each web application deployment.  This allows mod_cluster to gracefully redirect web traffic in the event an individual web application is undeployed and/or redeployed.

Sounds promising!  Where can I get it?

1 GA = General Availability, i.e. stable release