
How to configure JGroups TCP for Liferay on EC2

James Denmark, modified 13 years ago.

How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 27 Join Date: 11/20/08 Recent Posts
I have been working on moving our production Liferay onto EC2 and will post a blog describing how we did it. In the meantime, I'm trying to configure 6.0.5 CE to use JGroups with TCP/TCPPING, since AWS doesn't support UDP multicast.

While I can find plenty of documentation on how to configure JGroups itself, I'm unable to determine the equivalent settings in portal-ext.properties, whether I need any further external XML configuration, and if so, which files and how to tell Liferay to use them.

Hoping someone can post an example.

Jim.
Mike Robins, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 24 Join Date: 7/26/07 Recent Posts
Hello,

Did you manage to get this working? I have the same issue: I have a cluster but cannot use multicast for communication between the cluster nodes.

I believe the following needs to be added to portal-ext.properties to make this work (at least for Ehcache):


net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

net.sf.ehcache.configurationResourceName.peerProviderProperties=connect=TCP(start_port=7800):TCPPING(timeout=3000;initial_hosts=server1[7800],server2[7801];port_range=10;num_initial_members=2):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):pbcast.GMS(join_timeout=5000;print_local_addr=true;view_bundling=true)

ehcache.multi.vm.config.location.peerProviderProperties=connect=TCP(start_port=7800):TCPPING(timeout=3000;initial_hosts=server1[7800],server2[7801];port_range=10;num_initial_members=2):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):pbcast.GMS(join_timeout=5000;print_local_addr=true;view_bundling=true)




However, the 'connect' string for JGroups specified by the peerProviderProperties key is not parsed correctly, because it contains commas.

It appears the method 'createCachePeerProvider' in class 'com.liferay.portal.cache.ehcache.LiferayCacheManagerPeerProviderFactory' always splits the provider properties on commas, but the JGroups parameters themselves need to contain commas, and I've yet to find a way around this.
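For illustration only, a hypothetical sketch of the parsing problem; this is not the actual Liferay source, just the splitting behavior it appears to exhibit:

// Hypothetical simplification: splitting the whole property value on commas
// breaks any JGroups parameter whose value itself contains commas.
public class CommaSplitDemo {
	public static void main(String[] args) {
		String value = "connect=TCP(start_port=7800)"
			+ ":pbcast.NAKACK(retransmit_timeout=300,600,1200,2400,4800)";
		for (String part : value.split(",")) {
			System.out.println(part);
		}
		// Prints five fragments; the protocol stack definition is destroyed.
	}
}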

Anyone have any ideas on how to resolve this?

Many thanks,

Mike.
James Denmark, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 27 Join Date: 11/20/08 Recent Posts
Hi Mike,

I took the portal-ext.properties settings you suggested, but instead of putting the connect string in as a property, I created a jgroups-tcp.xml file in the same place as the hibernate-clustered.xml and liferay-multi-vm-clustered.xml files. The jgroups-tcp.xml file contains the standard JGroups protocol parameters as documented on the JGroups website.

Then all I did was reference the jgroups-tcp.xml file for the peerProviderProperties keys in portal-ext.properties.

When I started Liferay, it appears to have picked up that configuration and started the cluster. I am going to do some more testing before I call this a success, but let me know how you make out with this scenario.

Jim.
Mike Robins, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 24 Join Date: 7/26/07 Recent Posts
Hi Jim,

Good idea, but my jgroups-tcp.xml settings don't seem to be used. Could you please post your jgroups-tcp.xml in case I've got something wrong?


Thanks,

Mike.
James Denmark, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 27 Join Date: 11/20/08 Recent Posts
This is my jgroups-tcp.xml:

<config>
	<TCP bind_addr="10.241.117.155" start_port="7800" loopback="true" />
	<TCPPING initial_hosts="10.112.30.253[7800]" port_range="3" timeout="3500" num_initial_members="3" up_thread="true" down_thread="true" />
	<MERGE2 min_interval="5000" max_interval="10000" />
	<FD shun="true" timeout="2500" max_tries="5" up_thread="true" down_thread="true" />
	<VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false" />
	<pbcast.NAKACK down_thread="true" up_thread="true" gc_lag="100" retransmit_timeout="3000" />
	<pbcast.STABLE desired_avg_gossip="20000" down_thread="false" up_thread="false" />
	<pbcast.GMS join_timeout="5000" join_retry_timeout="2000" shun="false" print_local_addr="false" down_thread="true" up_thread="true" />
</config>


And this is the relevant part of portal-ext.properties

net.sf.ehcache.configurationResourceName=/my-ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/my-ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=/my-ehcache/jgroups-tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=/my-ehcache/jgroups-tcp.xml


And this is what I am seeing in the catalina.out when it starts:
19:09:54,630 INFO  [DialectDetector:69] Determining dialect for MySQL 5
19:09:54,721 INFO  [DialectDetector:49] Using dialect org.hibernate.dialect.MySQLDialect
19:09:55,176 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey net.sf.ehcache.configurationResourceName.peerProviderProperties has value /my-ehcache/jgroups-tcp.xml

-------------------------------------------------------------------
GMS: address=ip-10-112-30-253-24733, cluster=EH_CACHE, physical address=fe80:0:0:0:1031:3dff:fe06:110f:35692
-------------------------------------------------------------------
19:10:01,653 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value /my-ehcache/jgroups-tcp.xml

-------------------------------------------------------------------
GMS: address=ip-10-112-30-253-15346, cluster=EH_CACHE, physical address=fe80:0:0:0:1031:3dff:fe06:110f:35693
-------------------------------------------------------------------


I'm still fiddling with it and the JGroups configuration is completely untuned, but it appears to be working...

Jim.
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
Hi Jim,
is this still working well? We're having to look at doing unicast between our instances due to a networking issue... Have you done that with JGroups before?
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
Could you please briefly confirm whether this is really working?

I'm facing a similar problem, as we are trying to build a Liferay cluster without the ability to use multicast.

Could you describe the settings you made in your jgroups-tcp.xml?

That would be great!
Mike Robins, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 24 Join Date: 7/26/07 Recent Posts
I have this working in 5.2.3 CE, but my configuration will not work in v6. Before I post all the setup details, which version are you trying to get this working with?

Mike.
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
I'd like to do this with LR 6.0.5 CE on Tomcat 6.

Even if this won't work for me, I'd like to see what you did... your config could maybe give me the missing hint.
Mike Robins, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 24 Join Date: 7/26/07 Recent Posts
Sure, no problem.

Below are the relevant three lines from my portal-ext.properties:


net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml

comm.link.properties=TCP(bind_port=${jgroups.bind_port:7700};port_range=5;loopback=true;recv_buf_size=20M;send_buf_size=640K;discard_incompatible_packets=true;max_bundle_size=64K;max_bundle_timeout=30;enable_bundling=true;use_send_queues=true;sock_conn_timeout=300;timer_type=new;timer.min_threads=4;timer.max_threads=10;timer.keep_alive_time=3000;timer.queue_max_size=500;thread_pool.enabled=true;thread_pool.min_threads=1;thread_pool.max_threads=10;thread_pool.keep_alive_time=5000;thread_pool.queue_enabled=false;thread_pool.queue_max_size=100;thread_pool.rejection_policy=discard;oob_thread_pool.enabled=true;oob_thread_pool.min_threads=1;oob_thread_pool.max_threads=8;oob_thread_pool.keep_alive_time=5000;oob_thread_pool.queue_enabled=false;oob_thread_pool.queue_max_size=100;oob_thread_pool.rejection_policy=discard):TCPPING(initial_hosts=${jgroups.tcpping.initial_hosts};port_range=10;timeout=3000;num_initial_members=3):VERIFY_SUSPECT(timeout=1500):MERGE2(min_interval=10000;max_interval=30000):FD_SOCK():FD(timeout=3000;max_tries=3):BARRIER():pbcast.NAKACK(use_mcast_xmit=false;gc_lag=0;retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST(timeout=300,600,1200):pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;print_local_addr=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M):UFC(max_credits=2M;min_threshold=0.4):MFC(max_credits=2M;min_threshold=0.4):FRAG2(frag_size=60K):pbcast.STREAMING_STATE_TRANSFER()


Attached are the two ehcache config files mentioned in the above properties.

Finally, you need to add the following to your application server startup arguments:


-Djgroups.bind_addr=<server hostname> -Djgroups.bind_port=<listen port> -Djgroups.tcpping.initial_hosts=<server hostname>[<listen port>],<other server hostname>[<other server listen port>],<another server hostname>[<another server listen port>]

e.g. for a two-node cluster with two application servers running on each node:

-Djgroups.bind_addr="myserver01" -Djgroups.bind_port="7800" -Djgroups.tcpping.initial_hosts="myserver01[7800],myserver01[7900],myserver02[7800],myserver02[7900]"

Hope that helps,

Mike.
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
Looks good. Thank you Mike!!

I'm trying to get it to work in LR 6.0.5.

I'm getting NullPointerExceptions at the moment, but I think that's the right track.
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
Hi Daniel, I was getting those too...
Here is how I made it work in 5.2 (it hasn't moved to production yet, but it has been tested).

Replace ehcache.jar, ehcache-jgroupsreplication.jar, and jgroups.jar (in ROOT/WEB-INF/lib) with the newest versions.
My example uses ehcache-core-2.4.0.jar, ehcache-jgroupsreplication-1.4.jar, and jgroups-2.11.1.Final.jar, all from their respective sites.

Add these to your startup.bat or service startup properties.
The bind address is the box the node runs on; the initial hosts are that box plus any others, and each is set to use 7800 as its port (this is set in the XML). **Edited because the old -D flag was not working.

rem The old -Djgroups.bind.address flag did not work; use -Djgroups.bind_addr instead.
set JAVA_OPTS=%JAVA_OPTS% -Djgroups.bind_addr=10.22.4.20
set JAVA_OPTS=%JAVA_OPTS% -Djgroups.tcpping.initial_hosts=JBVWEBD19E[7800],L01RW2FE[7801]

Optional, if you are not using IPv6:
-Djava.net.preferIPv4Stack=true
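For a Linux install, a hedged equivalent of the above in Tomcat's bin/setenv.sh (same placeholder hosts and ports as the Windows example; adjust to your nodes):

# Illustrative sketch for Linux/Tomcat -- hosts and ports are examples only.
JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=10.22.4.20"
JAVA_OPTS="$JAVA_OPTS -Djgroups.tcpping.initial_hosts=JBVWEBD19E[7800],L01RW2FE[7801]"
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"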



Add these to your portal-ext.properties
## CACHE SETTINGS ##
##JGROUPS
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
# Superseded: net.sf.ehcache.configurationResourceName.peerProviderProperties=file=tcp.xml
# Superseded: ehcache.multi.vm.config.location.peerProviderProperties=file=tcp.xml
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=my-ehcache/tcp-liferay.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=my-ehcache/tcp-liferay.xml
## END CACHE SETTINGS ##


file=tcp.xml triggers Ehcache to load a file from the classloader, and tcp.xml lives inside the jgroups**.jar with all the settings needed. If you want to customize it, create a /my-ehcache/ folder off your source folder, copy tcp.xml into it, and rename it to tcp-liferay.xml. Edit it, adding singleton_name="liferay_jgroups_tcp" to the <TCP element. This allows the JGroups configuration to run multiple clusters and fixes the issue Daniel and I ran into where only one cluster could work at a time. Customize to your heart's content; just make sure the peerProviderProperties point to the customized file.
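For reference, a minimal, illustrative sketch of the edited TCP element in tcp-liferay.xml; only the singleton_name attribute is added, and every other element and attribute of the stock tcp.xml stays exactly as shipped in the JGroups jar:

<!-- Illustrative only: add singleton_name to the existing TCP element; keep the
     rest of the stock tcp.xml (TCPPING, MERGE2, FD, etc.) unchanged. -->
<TCP singleton_name="liferay_jgroups_tcp"
     bind_port="7800" />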

If you want more options in your peerProviderProperties and don't want to use the prepackaged tcp.xml or a customized copy, you can use the connect option.
The connect option is what Mike used above; the only difference between what Mike did and what the new version needs is that you have to put connect= in front of the string. If you go this route, I would start by converting tcp.xml into the connect format and then customizing it, because every version of JGroups seems to change what it needs and how it's done.
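As a hedged illustration of the XML-to-connect conversion (placeholder hosts, abbreviated and untuned stack): each <PROTOCOL attr="v" /> element becomes PROTOCOL(attr=v;...), the elements joined by colons, with connect= in front:

# Illustrative only -- node1/node2 are placeholders; the stack is abbreviated.
#   <TCP bind_port="7800" loopback="true"/>  becomes  TCP(bind_port=7800;loopback=true)
ehcache.multi.vm.config.location.peerProviderProperties=connect=TCP(bind_port=7800;loopback=true):TCPPING(initial_hosts=node1[7800],node2[7800];port_range=10;timeout=3000;num_initial_members=2)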

One additional customization you might need is the newest LiferayResourceCacheUtil. This is due to an issue with serialization and Velocity that might be fixed in the version of 6 you have, but you may want to compare (see http://issues.liferay.com/browse/LPS-13429). I've attached the one I got from trunk.

I'm still tweaking it, and as I said above it's not in production yet, but in my tests all the caches seem to work. I hope it works for you and anyone else who finds this; I had a heck of a time trying tons of combinations and old examples before I got this to work.

Edit: if you want to see more of what's happening, or to troubleshoot, here are my log4j settings:
<appender name="CONSOLE_CACHE" class="org.apache.log4j.RollingFileAppender">
		<errorHandler class="org.apache.log4j.varia.FallbackErrorHandler">
			<root-ref />
			<appender-ref ref="CONSOLE" />
		</errorHandler>
		<param name="File" value="${app.log.location}logs\\portal_cache.log">
		<param name="MaxFileSize" value="10000KB">
		<param name="MaxBackupIndex" value="10">
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="----%n%d{yyyy/MM/dd HH:mm:ss} %p%n%l [%x][%t] %n%m%n">
		</layout>
	</appender>
	<category name="org.jgroups" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>
	<category name="org.jgroups.protocols" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>
	<category name="com.liferay.portal.cache.ehcache" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>
	<category name="net.sf.ehcache.distribution" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>
	<category name="net.sf.ehcache.config.ConfigurationFactory" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>
	<category name="net.sf.ehcache.util" additivity="false">
		<priority value="ALL" />
		<appender-ref ref="CONSOLE_CACHE" />
	</category>


Edit: added changes to fix the issue with multiple clusters.
Edit: added the correct and optional -D flags for startup.
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
Thank you!

This seems to be a giant step forward. Caching seems to load without errors, but distribution does not work.

This is what catalina.out tells me multiple times on both nodes:


16:26:31,626 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey net.sf.ehcache.configurationResourceName.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app02-qs-klartxt-52782, cluster=hibernate-clustered, physical address=fe80:0:0:0:70d7:66ff:fe70:fd19:37734
-------------------------------------------------------------------

16:26:12,391 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app02-qs-klartxt-50774, cluster=liferay-multi-vm-clustered, physical address=fe80:0:0:0:70d7:66ff:fe70:fd19%2:7801
-------------------------------------------------------------------



I've used hostnames instead of IPs, and that seems to be okay because the hosts are recognized correctly. But shouldn't the last part of the address be the port I specified in the JVM startup parameters? I specified 52555, but the ports chosen seem to be random...

Any hints?
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
I'm not sure... I only see the one (I think there is an issue with Hibernate in 5.2)
-------------------------------------------------------------------
GMS: address=L01RW2FE-47531, cluster=liferay-multi-vm-clustered, physical address=10.22.4.20:7800
-------------------------------------------------------------------


Do you see anything in the logs? Like drops?
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
I can't see anything in the logs here ...


-------------------------------------------------------------------
GMS: address=app01-qs-klartxt-41719, cluster=hibernate-clustered, physical address=fe80:0:0:0:d03c:c4ff:fea7:9b72%2:7800
-------------------------------------------------------------------
16:53:25,694 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app01-qs-klartxt-62512, cluster=liferay-multi-vm-clustered, physical address=fe80:0:0:0:d03c:c4ff:fea7:9b72%2:7801
-------------------------------------------------------------------


I'm wondering why you're getting the IPv4 physical address...
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
I had the Hibernate cache disabled because of a performance tip from http://www.liferay.com/community/wiki/-/wiki/Main/Fine%20tune%20the%20performance%20of%20the%20system

Since I re-enabled it, mine stopped working... I'm troubleshooting it and will let you know...
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
OK, got it... I'll update my instructions above, but the gist of it is that the tcp.xml in the jar needs a singleton_name attribute on TCP so that it can run multiple clusters... so you have to add it to the default XML.
Daniel Schmidt, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 9 Join Date: 1/24/11 Recent Posts
Wow!

This worked for me. Caches are up and running. Loving it!

Thank you very much! You saved my day!
Xinsheng Robert Chen, modified 12 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 45 Join Date: 3/30/10 Recent Posts
Thanks, Chris Whittle!

Your post has helped me a lot!
Xinsheng Robert Chen, modified 12 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 45 Join Date: 3/30/10 Recent Posts
Hi, Chris,

Using your configuration, I have set up a cluster of two Liferay Portal 6.0 SP1 JBoss 5.1.0 nodes to use TCP unicast for cache replication and session replication. It works with Liferay Portal in its out-of-the-box state. However, when I deploy a custom portlet with Service Builder code, it throws the following exceptions:

2011-10-30 22:21:07,579 INFO [STDOUT] (main) Loading vfsfile:/appshr/liferay/testliferay/liferay-portal-6.0-ee-sp1/jboss-5.1.0/server/testlrayServer1/deploy/search-portlet.war/WEB-INF/classes/service.properties
2011-10-30 22:21:08,176 ERROR [net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider]
(main) Failed to create JGroups Channel, replication will not function. JGroups properties:
null
org.jgroups.ChannelException: unable to setup the protocol stack
at org.jgroups.JChannel.init(JChannel.java:1728)
at org.jgroups.JChannel.<init>(JChannel.java:249)
at org.jgroups.JChannel.<init>(JChannel.java:232)
at org.jgroups.JChannel.<init>(JChannel.java:173)
at net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider.init(JGroupsCacheManagerPeerProvider.java:131)
at net.sf.ehcache.CacheManager.init(CacheManager.java:363)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:228)
at net.sf.ehcache.hibernate.EhCacheProvider.start(EhCacheProvider.java:99)
at com.liferay.portal.dao.orm.hibernate.CacheProviderWrapper.start(CacheProviderWrapper.java:62)
at com.liferay.portal.dao.orm.hibernate.EhCacheProvider.start(EhCacheProvider.java:67)
at org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge.start(RegionFactoryCacheProviderBridge.java:72)
at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:236)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1842)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:860)
... ...


at org.jboss.Main$1.run(Main.java:556)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ClassCastException: org.jgroups.protocols.UDP cannot be cast to org.jgroups.stack.Protocol
at org.jgroups.stack.Configurator.createLayer(Configurator.java:433)
at org.jgroups.stack.Configurator.createProtocols(Configurator.java:393)
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:88)
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:534)
at org.jgroups.JChannel.init(JChannel.java:1725)
... 105 more
2011-10-30 22:21:08,181 ERROR [org.springframework.web.context.ContextLoader] (main) Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'liferayHibernateSessionFactory' defined in ServletContext resource [/WEB-INF/classes/META-INF/hibernate-spring.xml]:
Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1420)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)

The log file with more details is attached.

Do you have any ideas about what is happening?

Thanks for your advice!
Xinsheng Robert Chen, modified 12 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Junior Member Posts: 45 Join Date: 3/30/10 Recent Posts
Let me answer my own question.

I updated the hibernate-spring.xml file: instead of using com.liferay.portal.spring.hibernate.PortletHibernateConfiguration, I used org.springframework.orm.hibernate3.LocalSessionFactoryBean, and set values for some of its properties, which fixed the issue.
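A hedged sketch of the kind of bean definition described; the bean ids and property values here are illustrative, and the real hibernate-spring.xml needs the portlet's actual data source and mapping settings:

<!-- Illustrative only: shape of the replacement session factory bean. -->
<bean id="liferayHibernateSessionFactory"
		class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
	<property name="dataSource" ref="liferayDataSource" />
	<property name="hibernateProperties">
		<props>
			<prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
			<prop key="hibernate.cache.use_second_level_cache">false</prop>
		</props>
	</property>
</bean>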
Hitoshi Ozawa, modified 12 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Liferay Legend Posts: 7942 Join Date: 3/24/10 Recent Posts
It would be great if somebody would write up a wiki page based on the information in this thread.
Patrizio Munzi, modified 11 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 12 Join Date: 11/3/11 Recent Posts
Hi All,
hope this thread isn't dead yet! :-)
I followed Chris's instructions, but I get the following error:
2012-11-07 14:59:40 ERROR [JGroupsCacheManagerPeerProvider:150] Failed to connect to JGroups cluster 'EH_CACHE', replication will not function. JGroups properties:
null
org.jgroups.ChannelException: failed to start protocol stack
at org.jgroups.JChannel.startStack(JChannel.java:1765)
at org.jgroups.JChannel.connect(JChannel.java:415)
at org.jgroups.JChannel.connect(JChannel.java:390)
at net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider.init(JGroupsCacheManagerPeerProvider.java:148)
at net.sf.ehcache.CacheManager.init(CacheManager.java:355)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:265)
at com.liferay.portal.cache.EhcachePortalCacheManager.afterPropertiesSet(EhcachePortalCacheManager.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1414)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1375)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
at java.security.AccessController.doPrivileged(Native Method)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
at com.liferay.portal.spring.context.PortalContextLoaderListener.contextInitialized(PortalContextLoaderListener.java:49)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3764)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4216)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:448)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
Caused by: java.lang.IllegalStateException: cluster 'EH_CACHE' is already connected to singleton transport: [EH_CACHE, dummy-1352296780172]
at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:834)
at org.jgroups.JChannel.startStack(JChannel.java:1762)
... 44 more

which disappears only if I remove the
singleton_name="liferay_jgroups_tcp"
attribute from the tcp.xml file.
With it removed, however, I see mixed protocols in use, both TCP and UDP.

Any ideas??
Patrizio Munzi, modified 11 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 12 Join Date: 11/3/11 Recent Posts
OK, I was able to solve the previous issue on my own by setting two different cache names in hibernate-clustered.xml and liferay-multi-vm-clustered.xml:
------------
<ehcache name="hibernate-clustered"
<ehcache name="liferay-multi-vm-clustered"
------------

Now I'm experiencing another, I think worse, problem. Since I built some services of my own, deployed as separate WARs, it looks like Liferay/JGroups (I don't really know which) initializes a JGroupsCacheManagerPeerProvider using the same general hibernate-clustered.xml file for each service WAR, and then I get the same error again:
more JGroups clusters with the same name connecting to the same transport protocol.

Hope someone can help.
José Pereira, modified 11 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 3 Join Date: 1/16/13 Recent Posts
I'm having the same issue... I'll elaborate:

1. The multi-vm-clustered cache and its associated JGroups JChannel is initialized only once, at liferay-portal.war deployment on the application server.
I think this is because the classes com.liferay.portal.kernel.cache.MultiVMPoolUtil (and com.liferay.portal.kernel.cache.MultiVMPool) are made available in portal-service.jar, which is deployed in the application server's (or servlet container's) lib directory; as such, the webapp classloaders (configured for parent delegation first) see that the class is already loaded, and the initialization of the cache is actually done only once.

2. On the flip side, the hibernate-clustered cache and its associated JGroups JChannel is initialized for each webapp (portlet) that uses domain classes based on Liferay's hibernateSessionFactory. I think this has to do with each portlet webapp receiving its own HibernateSessionFactory and, as such, its own CacheManager and associated JGroups JChannel. This seems related to this forum post: http://forums.terracotta.org/forums/posts/list/7166.page

I haven't found a solution for this second problem yet, but I know this is not good, as more and more JChannel instances (and associated threads and sockets) are opened for each and every "domain-based portlet" in the system. If each one starts pinging the others, the cluster views become immense and the cross-communication may bring your cluster down!

Does anyone know of a solution for this issue?
José Pereira, modified 11 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 3 Join Date: 1/16/13 Recent Posts
Ok, so I followed another path:

I added ehcache.jar, ehcache-jgroups-replication.jar, hibernate.jar, commons-lang.jar, commons-collections.jar and some other dependent jars to the application server classpath.

I also added the Hibernate cache region factory SingletonEhCacheRegionFactory configuration to portal-ext.properties. As a result, each JGroups cluster view that used to have 40+ members now has only 8, network occupancy from JGroups communication has dropped from 8 Mbps to 350 Kbps, and overall system stability has risen, as heap use dropped along with thread counts.
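A hedged sketch of the kind of portal-ext.properties line meant; the exact key and factory class depend on the Hibernate and Ehcache versions in use:

# Illustrative only -- verify the factory class against your ehcache jar:
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory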

Best regards
Emilien Floret, modified 10 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 1 Join Date: 11/21/13 Recent Posts
Hello everyone,
I'm using this thread as it seems to be the only one where people have actually made this work. I have to run Liferay 6.2 in a cluster on Amazon, and following this thread I ended up with the configuration below.
This is my first time working with Liferay, so I'm not sure this works... Could someone tell me what to look for?

Based on the clustering guide, I checked that modifying a portlet's location (or adding/removing one) on a page is immediately reflected on the second node. To validate Lucene, I created a web content article with some random text and was able to search for a word inside that content on both nodes (again, immediately). I also tried adding a user on one node and assigning it to the portal site from the other node. I didn't have any issues doing any of that. Can I consider that it's working?

What bothers me is that I don't see the hibernate cluster come up. How can I ensure everything is working?


#CLUSTER#
cluster.link.enabled=true
cluster.link.autodetect.address=myrdsdb:3306
lucene.replicate.write=true
#DEBUG#
cluster.executor.debug.enabled=true
web.server.display.node=true

## CACHE SETTINGS ##
##JGROUPS
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=com.liferay.portal.cache.ehcache.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=clusterName=hibernate-clustered,channelProperties=/myehcache/tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=clusterName=liferay-multi-vm-clustered,channelProperties=/myehcache/tcp.xml
cluster.link.channel.properties.control=/myehcache/tcp3.xml
cluster.link.channel.properties.transport.0=/myehcache/tcp4.xml
## END CACHE SETTINGS ##


This is the content of my tcp.xml file (I can probably trim it down a little... I don't know).
I actually have four copies of that file (tcp.xml, tcp2.xml, tcp3.xml, tcp4.xml), simply to change the bind_port. There's probably a way to specify the bind port in portal-ext.properties, but I didn't figure out how.
I added a different singleton_name in each tcp file based on this thread.
I also replaced TCPPING with JDBC_PING, as I need my cluster to detect members automatically, without having to change the configuration and restart.

<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
    <TCP bind_port="7800" singleton_name="liferay_jgroups_tcp" loopback="false" recv_buf_size="${tcp.recv_buf_size:5M}" send_buf_size="${tcp.send_buf_size:640K}" max_bundle_size="64K" max_bundle_timeout="30" enable_bundling="true" use_send_queues="true" sock_conn_timeout="300" timer_type="old" timer.min_threads="4" timer.max_threads="10" timer.keep_alive_time="3000" timer.queue_max_size="500" thread_pool.enabled="true" thread_pool.min_threads="1" thread_pool.max_threads="10" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="false" thread_pool.queue_max_size="100" thread_pool.rejection_policy="discard" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="discard" />

    <JDBC_PING connection_url="jdbc:mysql://myrdsDB/liferaydb" connection_username="dbuser" connection_password="dbpassword" connection_driver="com.mysql.jdbc.Driver" />

    <MERGE2 min_interval="10000" max_interval="30000" />
    <FD_SOCK />
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500" />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true" />
    <UNICAST />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M" />
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true" />
    <UFC max_credits="2M" min_threshold="0.4" />
    <MFC max_credits="2M" min_threshold="0.4" />
    <FRAG2 frag_size="60K" />
    <!-- <RSVP resend_interval="2000" timeout="10000" /> -->
    <pbcast.STATE_TRANSFER />
</config>
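One possible way to avoid the four near-identical copies (untested in this setup): JGroups expands ${property:default} placeholders, as the recv_buf_size attribute above already shows, so the bind port could come from a JVM flag instead, for example:

<!-- Illustrative: bind_port resolved from -Djgroups.bind_port at startup; with a
     port_range set, additional channels in the same JVM can bind the next free port. -->
<TCP bind_port="${jgroups.bind_port:7800}" port_range="10" />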


On startup, this is what I get: three clusters start up. I never saw the hibernate-clustered cluster during any of my tests. Is it supposed to come up?

10:55:57,908 INFO  [stdout] (MSC service thread 1-2) 10:55:57,906 INFO  [MSC service thread 1-2][LiferayCacheManagerPeerProviderFactory:76] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value clusterName=liferay-multi-vm-clustered,channelProperties=/myehcache/tcp.xml
10:56:00,601 INFO  [stdout] (MSC service thread 1-2) 
10:56:00,613 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:00,614 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-16519, cluster=liferay-multi-vm-clustered, physical address=10.10.34.76:7800
10:56:00,617 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:00,762 INFO  [stdout] (MSC service thread 1-2) 10:56:00,755 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-16519|0] [hostname1-16519]

10:56:43,976 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:43,976 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-44357, cluster=LIFERAY-CONTROL-CHANNEL, physical address=10.10.34.76:8000
10:56:43,981 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,088 INFO  [stdout] (MSC service thread 1-2) 10:56:44,087 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-44357|0] [hostname1-44357]

10:56:44,439 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,439 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-33601, cluster=LIFERAY-TRANSPORT-CHANNEL-0, physical address=10.10.34.76:8100
10:56:44,443 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,541 INFO  [stdout] (MSC service thread 1-2) 10:56:44,540 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-33601|0] [hostname1-33601]


I did a lot of different tests and thought that simply defining these two properties could be enough:
cluster.link.channel.properties.control=/myehcache/tcp3.xml
cluster.link.channel.properties.transport.0=/myehcache/tcp4.xml
However, I don't get any replication with only these settings (though I do see the cluster JOIN messages when the second node comes up).

Thank you for your time
Regards,
Emilien
José Pereira, modified 11 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 3 Join Date: 1/16/13 Recent Posts
In Liferay 6.0.5, the configuration has to be done in a different way if you are not updating the ehcache (and ehcache-jgroups-replication) library jars:

in portal-ext.properties, the following class must be selected:

ehcache.cache.manager.peer.provider.factory=com.liferay.portal.cache.ehcache.JGroupsCacheManagerPeerProviderFactory
(instead of the net.sf.ehcache.X.Y class that is referenced there)

and then set:

ehcache.multi.vm.config.location.peerProviderProperties=clusterName=multi-vm,channelProperties=my-ehcache/jgroups-multi-vm-clustered.xml
net.sf.ehcache.configurationResourceName.peerProviderProperties=clusterName=hibernate,channelProperties=my-ehcache/jgroups-hibernate-clustered.xml

This is just to name the clustered caches so that they are identified correctly in network sniffers and even in log messages (DEBUG mode). The files my-ehcache/jgroups-hibernate-clustered.xml and my-ehcache/jgroups-multi-vm-clustered.xml must be on the system classpath, which can be achieved in several ways. In our setup, we prefix the JVM classpath with the liferay/ext/ location so that this and other resources are found there (which is, in its own right, a very nice way to "patch" Liferay when needed).
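A hedged sketch of one way to get such files onto the system classpath under Tomcat (the path and mechanism are assumptions; other application servers differ):

# bin/setenv.sh -- illustrative: Tomcat includes $CLASSPATH set in setenv.sh, so
# resources under liferay/ext (e.g. my-ehcache/*.xml) become loadable by name.
CLASSPATH=/opt/liferay/ext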

Otherwise, you have to upgrade the stock ehcache and ehcache-jgroups-replication jars, and probably jgroups.jar as well (I haven't tested that myself).
Chris Whittle, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

Expert Posts: 462 Join Date: 9/17/08 Recent Posts
Dang Mike, where have you been? That info would have been awesome last week! JGroups' lack of documentation is ridiculous!!!
Mike Robins, modified 13 years ago.

RE: How to configure JGroups TCP for Liferay on EC2

New Member Posts: 24 Join Date: 7/26/07 Recent Posts
hehe, sorry, a little behind on my email here

BTW, we are upgrading from 5.2.3 CE to v6 EE SP1 soon, so I will be trying to get this setup working in v6 in a few weeks' time. If I get it working, I'll post the configuration here.


Mike.