This article needs updating. For more information, see Wiki - Need Updating.

Liferay in a Clustered Environment #

Setup #

There are three steps to set up Liferay to work in a clustered environment.

Step 1#

Install an HTTP load balancer and make sure it is set to sticky session mode. Using session replication for clustering is not recommended.

Steps to set up load balancing (this example uses Fedora Core 8 with the default Tomcat and Apache):

Add the balancer definition to Apache's AJP proxy configuration (on Fedora, /etc/httpd/conf.d/proxy_ajp.conf):

 <Proxy balancer://yourCluster>
 BalancerMember ajp://localhost:8009 route=jvm1
 BalancerMember ajp://192.168.123.122:8009 route=jvm2
 </Proxy>
 ProxyPass / balancer://yourCluster/ stickysession=JSESSIONID nofailover=Off
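
The balancer directives above rely on Apache's proxy modules being loaded. On a stock Fedora httpd install, the corresponding LoadModule lines (module paths assumed here to match the default layout) look roughly like:

 LoadModule proxy_module modules/mod_proxy.so
 LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
 LoadModule proxy_balancer_module modules/mod_proxy_balancer.so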

Next, assign a route to each Tomcat instance by editing $TOMCAT_HOME/conf/server.xml.

Look for the line that says

 <Engine name="Catalina" defaultHost="localhost">

and add jvmRoute="jvm1" (jvm2, etc.) so that each node's route matches a BalancerMember route in the balancer configuration.
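
After the edit, each node's Engine element carries its own route. With the two-node balancer example above, the result looks like this:

 <!-- node 1 -->
 <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">

 <!-- node 2 -->
 <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm2">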

Step 2a (pre 4.3.1, OSCache)#

Configure Liferay to use a clustered cache. This requires modifications to the OSCache and Hibernate settings.

Paste the following code into the ext/ext-ejb/classes/portal-ext.properties file:

 cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
 cache.cluster.multicast.ip=231.12.21.100
 #cache.cluster.properties=...
 hibernate.cache.use_query_cache=false

Note: There are two ways to set the multicast IP address: uncomment either cache.cluster.multicast.ip or cache.cluster.properties. The simple way is shown in the example above. If you want more advanced settings along with the multicast IP, uncomment cache.cluster.properties instead, and make sure to comment out cache.cluster.multicast.ip. Multicast is implemented through JGroups; consult the JGroups documentation for advanced configuration.
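
If you do use the advanced form, the value of cache.cluster.properties is a JGroups protocol stack string. The stack below is illustrative only — the protocols and parameter values are assumptions, not tested defaults; consult the JGroups documentation for a production stack:

 #cache.cluster.multicast.ip=231.12.21.100
 cache.cluster.properties=UDP(mcast_addr=231.12.21.100;mcast_port=45566;ip_ttl=32):PING(timeout=2000;num_initial_members=3):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:pbcast.NAKACK(retransmit_timeout=300,600,1200,2400,4800):UNICAST(timeout=5000):pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true)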

Step 2b (4.3.1+, Ehcache)#

Configure Liferay to use a clustered cache. This requires modifications to the Ehcache and Hibernate settings. Consult the Ehcache documentation for Ehcache specific settings.

Modify the Ehcache configuration file specified by

    ehcache.single.vm.config.location=/ehcache/liferay-single-vm.xml

in portal.properties. Enable the distributed cache. Modify the global section:

 <!--
 Uncomment the following in a clustered configuration.
 -->
 <cacheManagerPeerProviderFactory 
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.2,multicastGroupPort=4446,timeToLive=1"
    propertySeparator=","
 />
 <cacheManagerPeerListenerFactory 
   class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
 />
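
By default the RMI peer listener picks an available port at startup. If a firewall sits between the nodes, or several nodes run on the same host, the listener can be pinned to a fixed port via its properties attribute; the port and timeout values below are assumptions for illustration:

 <cacheManagerPeerListenerFactory 
   class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
   properties="port=40001, socketTimeoutMillis=2000"
 />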

Also, modify each cache to look like the following, including the default cache:

 <cache
    name="com.liferay.portlet.alfrescocontent.util.AlfrescoContentCacheUtil"
    maxElementsInMemory="10000"
    eternal="false"
    timeToLiveSeconds="300"
    overflowToDisk="true"
 >
    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" />
    <bootstrapCacheLoaderFactory 
       class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory" />
 </cache>

If you are using Ehcache as Hibernate's second-level cache, you will also need to modify the cache settings specified by

    net.sf.ehcache.configurationResourceName=/ehcache/hibernate.xml

in portal.properties.
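
The same pattern applies there: the global section gains the peer provider and peer listener factories shown above, and each cache, including defaultCache, gains the replicator. As a sketch (the cache parameter values here are assumptions, not Liferay's shipped defaults):

 <defaultCache
    maxElementsInMemory="10000"
    eternal="false"
    timeToLiveSeconds="300"
    overflowToDisk="true"
 >
    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" />
 </defaultCache>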

Step 3#

Configure Liferay's embedded Jackrabbit implementation for clustering. This is only required if you are using the document library portlet.

Create ext-ejb/src/com/liferay/portal/jcr/jackrabbit/dependencies/repository-ext.xml in your ext environment. This will override the default configuration located at portal-ejb/src/com/liferay/portal/jcr/jackrabbit/dependencies/repository.xml in the Liferay source. The following is a sample clustered Jackrabbit configuration for MySQL. You will have to modify the settings for your specific database.

 <?xml version="1.0"?>
 <Repository>
   <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
     <param name="driver" value="com.mysql.jdbc.Driver"/>
     <param name="url" value="jdbc:mysql://localhost/jcr" />
     <param name="user" value="" />
     <param name="password" value="" />
     <param name="schema" value="mysql"/>
     <param name="schemaObjectPrefix" value="J_R_FS_"/>
   </FileSystem>
   <Security appName="Jackrabbit">
     <AccessManager class="org.apache.jackrabbit.core.security.SimpleAccessManager" />
     <LoginModule class="org.apache.jackrabbit.core.security.SimpleLoginModule">
       <param name="anonymousId" value="anonymous" />
     </LoginModule>
   </Security>
   <Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="liferay" />
   <Workspace name="${wsp.name}">
      <PersistenceManager class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
        <param name="driver" value="com.mysql.jdbc.Driver" />
        <param name="url" value="jdbc:mysql://localhost/jcr" />
        <param name="user" value="" />
        <param name="password" value="" />
        <param name="schema" value="mysql" />
        <param name="schemaObjectPrefix" value="J_PM_${wsp.name}_" />
        <param name="externalBLOBs" value="false" />
     </PersistenceManager>
     <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
        <param name="driver" value="com.mysql.jdbc.Driver"/>
        <param name="url" value="jdbc:mysql://localhost/jcr" />
        <param name="user" value="" />
        <param name="password" value="" />
        <param name="schema" value="mysql"/>
        <param name="schemaObjectPrefix" value="J_FS_${wsp.name}_"/>
     </FileSystem>
   </Workspace>
   <Versioning rootPath="${rep.home}/version">
     <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
       <param name="driver" value="com.mysql.jdbc.Driver"/>
       <param name="url" value="jdbc:mysql://localhost/jcr" />
       <param name="user" value="" />
       <param name="password" value="" />
       <param name="schema" value="mysql"/>
       <param name="schemaObjectPrefix" value="J_V_FS_"/>
     </FileSystem>
     <PersistenceManager class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
       <param name="driver" value="com.mysql.jdbc.Driver" />
       <param name="url" value="jdbc:mysql://localhost/jcr" />
       <param name="user" value="" />
       <param name="password" value="" />
       <param name="schema" value="mysql" />
       <param name="schemaObjectPrefix" value="J_V_PM_" />
       <param name="externalBLOBs" value="false" />
     </PersistenceManager>		
   </Versioning>
   <Cluster id="node_1" syncDelay="5">
       <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
       <param name="revision" value="${rep.home}/revision"/>
       <param name="driver" value="com.mysql.jdbc.Driver"/>
       <param name="url" value="jdbc:mysql://localhost/jcr"/>
       <param name="user" value=""/>
       <param name="password" value=""/>
       <param name="schema" value="mysql"/>
       <param name="schemaObjectPrefix" value="J_C_"/>
     </Journal>
   </Cluster>    
 </Repository>
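
Each node must have a unique Cluster id while sharing the same journal tables; everything else in repository-ext.xml stays identical across nodes. On the second node, only this element changes:

 <Cluster id="node_2" syncDelay="5">
     <!-- same DatabaseJournal configuration as node_1 -->
 </Cluster>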

Theory#

Imagine a cluster with two nodes where, if one node goes down, the other takes over. Liferay Portal on each node uses Hibernate to interface between the application and database layers. By default, Hibernate reads through a cache in front of your database; the cache is invalidated on update, and updates are rolled back in the event of a failure.

Recommendations for SAN#

For those using a SAN, Jackrabbit's performance can be poor, and Jackrabbit must be tuned if it is expected to handle higher loads while remaining stable. For higher performance in a clustered Liferay environment, it is recommended to pair the SAN with a file system such as Advanced File System.

Related Articles #

High Availability Guide

Using Terracotta with Liferay

Chapter "Enterprise Configuration" in the Administration Guide

Comments #

Looking into the code, I found that WebAppPool.java has a static _instance variable which stores information about live users. So it looks like the static _instance variable is not replicated across the servers in the cluster.

How can we configure it so that static variables are replicated to all servers in the cluster, so that we can see all users in the Monitoring tab of the Enterprise portlet?
— Jigna parag Joshi, March 31, 2009
If I configure Jackrabbit to use the database (on all nodes, of course), do I still need to configure a "jackrabbit cluster"?
— Thomas Kellerer, January 27, 2010
Same question here. Do I need the cluster configuration when using a database?
— Priit Liivak, August 5, 2010, in reply to Thomas Kellerer
This might be of some use. We published an article on how to create a highly available cluster for Liferay in the cloud with Jelastic:

http://blog.jelastic.com/2013/06/06/liferay-cluster/
— Amy Armitage, June 6, 2013