A few weeks ago, the mod_cluster team announced the initial beta release of mod_cluster 1.1.0. Along with a healthy list of fixes, this release includes several new features:
New PING command
Version 1.1.0 introduces a new protocol command, PING, for verifying proxy and worker health. PING operates in 3 modes:
- When used with a null or empty parameter, PING determines if the target proxy is available and healthy.
- When used with a jvm route, PING determines if the worker identified by the specified jvm route is accessible from the target proxy.
- When used with a URL, PING determines if a potentially unknown worker at the specified URL is accessible from the target proxy.
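On the wire, MCMP commands are HTTP-style requests. Purely as an illustration (the framing and parameter names below are assumptions modeled on how other MCMP commands are encoded, not a transcript of the real protocol), the three PING forms might look something like:

```text
# 1. Proxy health check: no parameters
PING / HTTP/1.0

# 2. Is the worker with this jvm route reachable from the proxy?
PING / HTTP/1.0

JVMRoute=node1

# 3. Is a (possibly unknown) worker at this URL reachable from the proxy?
PING / HTTP/1.0

Scheme=ajp&Host=192.168.1.10&Port=8009
```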
Improved crashed worker handling
During normal shutdown of a worker, mod_cluster unregisters the worker’s contexts and engines from the proxy. If a worker crashes, however, the proxy never receives these commands, forcing it to discover the broken or unresponsive connections on its own and handle failover. When mod_cluster is configured using HAModClusterService, we leverage JBoss clustering’s HAPartition to receive notifications when a group member joins or leaves. However, this mechanism cannot distinguish a crashed node from a network partition that interrupts worker-to-worker communication but not proxy-worker communication. Consequently, in 1.0.x, mod_cluster took no action when a group member left, to avoid a situation where a network partition causes each worker to inadvertently unregister the other nodes.
As of 1.1.0, mod_cluster’s HA singleton master leverages the new PING command to validate the health of a leaving member before deciding that the worker and its contexts should be removed. If the ping is unsuccessful, we send a REMOVE-APP * command to the proxy on behalf of the crashed member. If the ping is successful, the proxy can still communicate with the node, so removing it would be inappropriate, and we leave it alone.
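The master’s decision logic can be sketched as follows. This is illustrative Java only; ProxyClient, CrashDetector, and their methods are hypothetical stand-ins for mod_cluster’s internals, not its actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the proxy-facing operations (not mod_cluster's real API).
interface ProxyClient {
    boolean ping(String jvmRoute);           // PING with the member's jvm route
    void removeAllContexts(String jvmRoute); // REMOVE-APP * on behalf of the member
}

class CrashDetector {
    private final ProxyClient proxy;
    final List<String> removed = new ArrayList<>();

    CrashDetector(ProxyClient proxy) {
        this.proxy = proxy;
    }

    // Called by the HA singleton master when a group member leaves.
    void memberLeft(String jvmRoute) {
        if (proxy.ping(jvmRoute)) {
            // The proxy can still reach the node, so this is likely a partition
            // between workers rather than a crash: leave the registration alone.
            return;
        }
        // The proxy cannot reach the node either: treat it as crashed and
        // unregister its contexts on its behalf.
        proxy.removeAllContexts(jvmRoute);
        removed.add(jvmRoute);
    }
}
```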
In 1.0.x, to gracefully shut down a node such that no client experiences a service interruption, one would need to:
- Trigger ModClusterServiceMBean.disable() on a worker, informing the proxy via DISABLE-APP commands to refrain from sending requests for new sessions to the worker
- Manually monitor any remaining active sessions, waiting for them to terminate or expire
- Shut down the node, triggering the appropriate STOP-APP and REMOVE-APP commands
As of 1.1.0, this process is simplified by new JMX methods that combine the above steps, including session draining:
- stop(long timeout, TimeUnit unit): gracefully stops all contexts on this worker
- stopContext(String hostName, String contextPath, long timeout, TimeUnit unit): gracefully stops a specific context on this worker
stop(…) returns true if all sessions were drained successfully within the specified timeout (if greater than 0), or false otherwise. When stop(…) is successful, the node/context can safely be shut down or undeployed.
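A rough sketch of what session draining against a timeout can look like (illustrative only; SessionDrainer is a hypothetical class, not mod_cluster’s implementation):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

// Illustrative sketch: after the context is disabled on the proxy, poll the
// active session count until it reaches zero or the timeout elapses.
class SessionDrainer {
    static boolean drain(IntSupplier activeSessions, long timeout, TimeUnit unit) {
        long deadline = System.currentTimeMillis() + unit.toMillis(timeout);
        while (activeSessions.getAsInt() > 0) {
            // A non-positive timeout means wait indefinitely in this sketch.
            if (timeout > 0 && System.currentTimeMillis() >= deadline) {
                return false; // sessions remain: not yet safe to shut down
            }
            try {
                Thread.sleep(10); // poll interval (arbitrary for this sketch)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true; // fully drained: node/context can be shut down safely
    }
}
```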
mod_cluster 1.0.x included support for domains, i.e. logical preferred failover groups of workers. As of 1.1.0, the HAModClusterService contains JMX operations to disable, enable, or gracefully stop all workers in a domain.
- disableDomain(): disables all workers in the same domain as the target node
- enableDomain(): re-enables all workers in the same domain as the target node
- stopDomain(long timeout, TimeUnit unit): gracefully stops all workers in the same domain as the target node
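Selecting the set of workers a domain-wide operation applies to might look like this (Worker and DomainOps are hypothetical stand-ins for illustration, not HAModClusterService’s internals):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a cluster member: its jvm route and its domain.
class Worker {
    final String jvmRoute, domain;
    Worker(String jvmRoute, String domain) { this.jvmRoute = jvmRoute; this.domain = domain; }
}

class DomainOps {
    // Returns the jvm routes of every worker sharing the target node's domain;
    // a domain-wide disable/enable/stop would then be applied to each of them.
    static List<String> peersInDomain(List<Worker> cluster, String targetRoute) {
        String domain = null;
        for (Worker w : cluster) {
            if (w.jvmRoute.equals(targetRoute)) domain = w.domain;
        }
        List<String> peers = new ArrayList<>();
        for (Worker w : cluster) {
            if (w.domain.equals(domain)) peers.add(w.jvmRoute);
        }
        return peers;
    }
}
```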
Web container SPI
In 1.0.x, mod_cluster was hard-coded to work with JBoss Web/Tomcat only. As of 1.1.0, the source code has been refactored to introduce a proper web container service provider interface. All JBoss Web/Tomcat references are now consolidated in an org.jboss.modcluster.catalina service provider package. Currently, everything still ships in a single JAR, but we’ll eventually split this into three JARs: mod_cluster-core, mod_cluster-spi, and mod_cluster-catalina.
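As a rough idea of what such an SPI looks like, consider the following hypothetical interfaces (the actual names and methods in the mod_cluster SPI may differ). Core code talks only to abstractions like these, and the catalina package supplies the JBoss Web/Tomcat implementation:

```java
// Hypothetical web container SPI sketch; not the real mod_cluster interfaces.
// A deployed web application, identified by its context path.
interface Context {
    String getPath();
    boolean isStarted();
}

// A virtual host grouping a set of contexts.
interface Host {
    String getName();
    Iterable<Context> getContexts();
}

// A servlet engine, identified on the proxy by its jvm route.
interface Engine {
    String getJvmRoute();
    Iterable<Host> getHosts();
}
```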
More intuitive configuration
In 1.0.x, a common source of confusion was the differing structure of the ModClusterService and HAModClusterService microcontainer beans: configuration for the standard service was contained within the service bean itself, while the HA service used a separate injected bean. To make configuration more intuitive, the code was refactored significantly in 1.1.0.Beta1. There is now a single configuration bean (ModClusterConfig) that is injected into both the standard and HA services. The catalina service provider, ModClusterListener, is the configuration entry point and defines to which service (ModClusterService or HAModClusterService) lifecycle events should delegate.
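Assuming the bean names mentioned above, the wiring might look roughly like this (illustrative only; the exact constructor parameters and properties may differ from the shipped mod_cluster-jboss-beans.xml):

```xml
<!-- Illustrative sketch: one shared configuration bean... -->
<bean name="ModClusterConfig" class="org.jboss.modcluster.config.ModClusterConfig">
   <property name="advertise">true</property>
</bean>

<!-- ...injected into the service (swap in HAModClusterService for the HA variant) -->
<bean name="ModClusterService" class="org.jboss.modcluster.ModClusterService">
   <constructor>
      <parameter><inject bean="ModClusterConfig"/></parameter>
      <parameter><inject bean="DynamicLoadBalanceFactorProvider"/></parameter>
   </constructor>
</bean>
```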
Now distributed with JBoss AS 6.0 M1
As of the first JBoss AS 6.0 milestone release, mod_cluster is included as a service in the all, default, and standard profiles, though it is disabled by default. To enable it, edit $JBOSS_HOME/server/profile/jbossweb.sar/META-INF/jboss-beans.xml and uncomment the line that establishes the WebServer bean’s dependency on mod_cluster:
<bean name="WebServer" class="org.jboss.web.tomcat.service.deployers.TomcatService">
   <annotation>@org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.web:service=WebServer", exposedInterface=org.jboss.web.tomcat.service.deployers.TomcatServiceMBean.class,registerDirectly=true)</annotation>

   <!-- Only needed if the org.jboss.web.tomcat.service.jca.CachedConnectionValve is enabled in the tomcat server.xml file. -->
   <depends>jboss.jca:service=CachedConnectionManager</depends>

   <!-- Transaction manager for unfinished transaction checking in the CachedConnectionValve -->
   <depends>jboss:service=TransactionManager</depends>

   <!-- Uncomment to enable mod_cluster integration -->
   <!--depends>ModClusterListener</depends-->

   <!-- Inject the TomcatDeployer -->
   <property name="tomcatDeployer"><inject bean="WarDeployer"/></property>

   <!-- Set the securityManagerService used to flush the auth cache on session expiration -->
   <property name="securityManagerService">
      <inject bean="jboss.security:service=JaasSecurityManager"/>
   </property>

   <!-- Do not configure other JMX attributes via this file. Use the WarDeployer bean in deployers/jboss-web.deployer/war-deployers-beans.xml -->
</bean>
To toggle between ModClusterService and HAModClusterService implementations, edit $JBOSS_HOME/server/profile/mod_cluster.sar/META-INF/mod_cluster-jboss-beans.xml:
<bean name="ModClusterListener" class="org.jboss.modcluster.catalina.CatalinaEventHandlerAdapter" mode="On Demand">
   <constructor>
      <!-- To use the HA singleton version of mod_cluster, change this injection to HAModClusterService -->
      <parameter><inject bean="ModClusterService"/></parameter>
   </constructor>
</bean>
mod_cluster’s host/port configuration options are now defined by the service binding manager.
The default load metric configuration has also changed to take advantage of JBoss AS 6.0’s Java 1.6 requirement. mod_cluster now combines AverageSystemLoadMetric and BusyConnectorsLoadMetric with a 2:1 weight ratio:
<bean name="DynamicLoadBalanceFactorProvider" class="org.jboss.modcluster.load.impl.DynamicLoadBalanceFactorProvider" mode="On Demand">
   <annotation>@org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.web:service=ModCluster,provider=LoadBalanceFactor",exposedInterface=org.jboss.modcluster.load.impl.DynamicLoadBalanceFactorProviderMBean.class)</annotation>
   <constructor>
      <parameter>
         <!-- Define the load metrics to use in your load balance factor calculation here -->
         <set elementClass="org.jboss.modcluster.load.metric.LoadMetric">
            <inject bean="AverageSystemLoadMetric"/>
            <inject bean="BusyConnectorsLoadMetric"/>
         </set>
      </parameter>
   </constructor>
   <!-- The number of historical load values used to determine load factor -->
   <!--property name="history">9</property-->
   <!-- The exponential decay factor for historical load values -->
   <!--property name="decayFactor">2</property-->
</bean>
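The commented-out history and decayFactor properties suggest a time-decayed average: each historical sample is weighted by 1/decayFactor^age, so recent readings dominate, and only the last history samples are kept. The sketch below illustrates that idea only; it is not DynamicLoadBalanceFactorProvider’s actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of an exponentially decayed load average
// (the real provider's math may differ).
class DecayedLoadAverage {
    private final Deque<Double> samples = new ArrayDeque<>();
    private final int history;
    private final double decayFactor;

    DecayedLoadAverage(int history, double decayFactor) {
        this.history = history;
        this.decayFactor = decayFactor;
    }

    // Records a new load sample and returns the decayed average so far.
    double record(double load) {
        samples.addFirst(load);                       // newest sample first
        if (samples.size() > history) samples.removeLast();
        double weightedSum = 0, totalWeight = 0, weight = 1;
        for (double sample : samples) {               // age increases as we iterate
            weightedSum += sample * weight;
            totalWeight += weight;
            weight /= decayFactor;                    // halve the weight per step when decayFactor=2
        }
        return weightedSum / totalWeight;
    }
}
```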