James Denmark
How to configure JGroups TCP for Liferay on EC2
November 8, 2010 06:04

I have been working on moving our production Liferay onto EC2 and will post a blog describing how we did it, but in the meantime I'm trying to configure 6.0.5 CE to use JGroups with TCP/TCPPING, as AWS doesn't support UDP multicasting.

While I can find plenty of documentation on how to configure JGroups itself, I'm unable to determine what the equivalent settings in portal-ext.properties need to be, whether I need any further external XML config files, and if so, which ones and how to tell Liferay to use them.

Hoping someone can post an example.

Jim.
Mike Robins
RE: How to configure JGroups TCP for Liferay on EC2
November 10, 2010 00:37

Hello,

Did you manage to get this working? I have the same issue where I have a cluster but cannot use multicast for inter-cluster comms.

I believe the following needs adding to portal-ext.properties to make this work (at least for ehcache anyway):

net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

net.sf.ehcache.configurationResourceName.peerProviderProperties=connect=TCP(start_port=7800):TCPPING(timeout=3000;initial_hosts=server1[7800],server2[7801];port_range=10;num_initial_members=2):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):pbcast.GMS(join_timeout=5000;print_local_addr=true;view_bundling=true)

ehcache.multi.vm.config.location.peerProviderProperties=connect=TCP(start_port=7800):TCPPING(timeout=3000;initial_hosts=server1[7800],server2[7801];port_range=10;num_initial_members=2):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):pbcast.GMS(join_timeout=5000;print_local_addr=true;view_bundling=true)



However, the 'connect' string for JGroups specified via the peerProviderProperties keys is not accepted, because it contains commas.

It appears the method 'createCachePeerProvider' in 'com.liferay.portal.cache.ehcache.LiferayCacheManagerPeerProviderFactory' always splits the provider properties on commas, but the JGroups parameters themselves need to contain commas, and I've yet to find a way around this.
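For illustration, a minimal sketch of the conflict (hypothetical values, not the actual Liferay code; it only assumes the factory splits the whole property value on commas as described):

// Sketch of the comma-splitting conflict described above.
import java.util.Arrays;

public class CommaSplitConflict {
    public static void main(String[] args) {
        // A JGroups connect string legitimately contains commas, e.g. in
        // retransmit_timeout=300,600,1200,2400,4800
        String value = "connect=TCP(start_port=7800)"
            + ":pbcast.NAKACK(retransmit_timeout=300,600,1200,2400,4800)";

        // Splitting the whole property value on commas tears the connect string
        // apart, so the protocol stack definition never reaches JGroups intact.
        String[] pieces = value.split(",");
        System.out.println(Arrays.toString(pieces));
        // -> [connect=TCP(start_port=7800):pbcast.NAKACK(retransmit_timeout=300, 600, 1200, 2400, 4800)]
    }
}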

Anyone have any ideas on how to resolve this?

Many thanks,

Mike.
James Denmark
RE: How to configure JGroups TCP for Liferay on EC2
November 16, 2010 17:10

Hi Mike,

I took the portal-ext.properties settings you suggested, but instead of putting the connect string in as a property, I created a jgroups-tcp.xml file in the same place as the hibernate-clustered.xml and liferay-multi-vm-clustered.xml files. The jgroups-tcp.xml file contains the normal JGroups protocol parameters as documented on the JGroups website.

Then all I did was reference the jgroups-tcp.xml file in the peerProviderProperties keys in portal-ext.properties.

When I started Liferay, it appears to have picked up that configuration and started the cluster. I am going to do some more testing before I call this a success, but let me know how you get on with this scenario.

Jim.
Mike Robins
RE: How to configure JGroups TCP for Liferay on EC2
November 17, 2010 01:05

Hi Jim,

Good idea, but my jgroups-tcp.xml settings don't seem to be used. Could you please post your jgroups-tcp.xml in case I've got something wrong?


Thanks,

Mike.
James Denmark
RE: How to configure JGroups TCP for Liferay on EC2
November 17, 2010 10:29

This is my jgroups-tcp.xml:

<config>
    <TCP bind_addr="10.241.117.155"
         start_port="7800"
         loopback="true">
    </TCP>
    <TCPPING initial_hosts="10.112.30.253[7800]"
             port_range="3"
             timeout="3500"
             num_initial_members="3"
             up_thread="true"
             down_thread="true">
    </TCPPING>
    <MERGE2 min_interval="5000"
            max_interval="10000">
    </MERGE2>
    <FD shun="true"
        timeout="2500"
        max_tries="5"
        up_thread="true"
        down_thread="true">
    </FD>
    <VERIFY_SUSPECT timeout="1500"
                    down_thread="false"
                    up_thread="false">
    </VERIFY_SUSPECT>
    <pbcast.NAKACK down_thread="true"
                   up_thread="true"
                   gc_lag="100"
                   retransmit_timeout="3000">
    </pbcast.NAKACK>
    <pbcast.STABLE desired_avg_gossip="20000"
                   down_thread="false"
                   up_thread="false">
    </pbcast.STABLE>
    <pbcast.GMS join_timeout="5000"
                join_retry_timeout="2000"
                shun="false"
                print_local_addr="false"
                down_thread="true"
                up_thread="true">
    </pbcast.GMS>
</config>


And this is the relevant part of portal-ext.properties:

net.sf.ehcache.configurationResourceName=/my-ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/my-ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=/my-ehcache/jgroups-tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=/my-ehcache/jgroups-tcp.xml


And this is what I am seeing in the catalina.out when it starts:
19:09:54,630 INFO  [DialectDetector:69] Determining dialect for MySQL 5
19:09:54,721 INFO  [DialectDetector:49] Using dialect org.hibernate.dialect.MySQLDialect
19:09:55,176 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey net.sf.ehcache.configurationResourceName.peerProviderProperties has value /my-ehcache/jgroups-tcp.xml

-------------------------------------------------------------------
GMS: address=ip-10-112-30-253-24733, cluster=EH_CACHE, physical address=fe80:0:0:0:1031:3dff:fe06:110f:35692
-------------------------------------------------------------------
19:10:01,653 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value /my-ehcache/jgroups-tcp.xml

-------------------------------------------------------------------
GMS: address=ip-10-112-30-253-15346, cluster=EH_CACHE, physical address=fe80:0:0:0:1031:3dff:fe06:110f:35693
-------------------------------------------------------------------


I'm still fiddling with it and the jgroups configuration is completely untuned but it appears to be working...

Jim.
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
February 17, 2011 11:31

Hi Jim,
is this still working well? We're having to look at doing unicast between our instances due to a networking issue... Have you done that with JGroups before?
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 01:55

Could you please briefly confirm whether this is really working?

I'm facing a similar problem as we are trying to build a liferay cluster without the ability to use multicast.

Could you describe the settings you made in your jgroups-tcp.xml?

That would be great!
Mike Robins
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 03:40

I have this working in 5.2.3 CE, but my configuration will not work in v6. Before I post all the setup details, which version are you trying to get this working with?

Mike.
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 04:08

I'd like to do this with LR 6.0.5 CE on Tomcat 6.

Even if this won't work for me, I'd like to see what you did ... your config could maybe give me the missing hint.
Mike Robins
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 05:04

Sure, no problem.

Below are the relevant three lines from my portal-ext.properties:

net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml

comm.link.properties=TCP(bind_port=${jgroups.bind_port:7700};port_range=5;loopback=true;recv_buf_size=20M;send_buf_size=640K;discard_incompatible_packets=true;max_bundle_size=64K;max_bundle_timeout=30;enable_bundling=true;use_send_queues=true;sock_conn_timeout=300;timer_type=new;timer.min_threads=4;timer.max_threads=10;timer.keep_alive_time=3000;timer.queue_max_size=500;thread_pool.enabled=true;thread_pool.min_threads=1;thread_pool.max_threads=10;thread_pool.keep_alive_time=5000;thread_pool.queue_enabled=false;thread_pool.queue_max_size=100;thread_pool.rejection_policy=discard;oob_thread_pool.enabled=true;oob_thread_pool.min_threads=1;oob_thread_pool.max_threads=8;oob_thread_pool.keep_alive_time=5000;oob_thread_pool.queue_enabled=false;oob_thread_pool.queue_max_size=100;oob_thread_pool.rejection_policy=discard):TCPPING(initial_hosts=${jgroups.tcpping.initial_hosts};port_range=10;timeout=3000;num_initial_members=3):VERIFY_SUSPECT(timeout=1500):MERGE2(min_interval=10000;max_interval=30000):FD_SOCK():FD(timeout=3000;max_tries=3):BARRIER():pbcast.NAKACK(use_mcast_xmit=false;gc_lag=0;retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST(timeout=300,600,1200):pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;print_local_addr=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M):UFC(max_credits=2M;min_threshold=0.4):MFC(max_credits=2M;min_threshold=0.4):FRAG2(frag_size=60K):pbcast.STREAMING_STATE_TRANSFER()


Attached are the two ehcache config files mentioned in the above properties.

Finally, you need to add the following to your application server startup arguments:

-Djgroups.bind_addr=<server hostname> -Djgroups.bind_port=<listen port> -Djgroups.tcpping.initial_hosts=<server hostname>[<listen port>],<other server hostname>[<other server listen port>],<another server hostname>[<another server listen port>]

e.g. for a two node cluster with two application servers running on each node:

-Djgroups.bind_addr=myserver01 -Djgroups.bind_port=7800 -Djgroups.tcpping.initial_hosts=myserver01[7800],myserver01[7900],myserver02[7800],myserver02[7900]



Hope that helps,

Mike.
Attachments: hibernate-clustered.xml (4.9k), liferay-multi-vm-clustered.xml (7.3k)
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 05:32

Dang Mike, where have you been? That info would have been awesome last week! JGroups' lack of documentation is ridiculous!!!
Mike Robins
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 05:51

hehe sorry, a little behind on my email here.

BTW we are upgrading from 5.2.3 CE to v6 EE SP1 soon, so I will be trying to get this setup working in v6 in a few weeks' time. If I get it working I'll post the configuration here.


Mike.
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 05:53

Looks good. Thank you Mike!!

I'm trying to get it to work in LR 6.0.5.

I'm getting NullPointerExceptions at the moment, but I think that's the right way.
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
March 16, 2011 11:27

Hi Daniel, I was getting those too...
Here is how I made it work in 5.2 (it hasn't moved to production yet, but it has been tested).

Replace ehcache.jar, ehcache-jgroupsreplication.jar, and jgroups.jar (from ROOT/WEB-INF/lib) with the newest versions.
My example uses ehcache-core-2.4.0.jar, ehcache-jgroupsreplication-1.4.jar and jgroups-2.11.1.Final.jar, all from their respective sites.

Add these to your startup.bat or service startup properties.
The bind address is the box it's on, the hosts are the box it's on plus any others, and it's set to use 7800 as its port (this is set in the XML). (Edited because the old -D property was not working.)
rem the old -Djgroups.bind.address property did not work; use -Djgroups.bind_addr instead
set JAVA_OPTS=%JAVA_OPTS% -Djgroups.bind_addr=10.22.4.20
set JAVA_OPTS=%JAVA_OPTS% -Djgroups.tcpping.initial_hosts=JBVWEBD19E[7800],L01RW2FE[7801]

Optional, to force IPv4 if you are not using IPv6:
-Djava.net.preferIPv4Stack=true



Add these to your portal-ext.properties:
## CACHE SETTINGS ##
## JGROUPS
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
# original (superseded by the customized file below):
# net.sf.ehcache.configurationResourceName.peerProviderProperties=file=tcp.xml
# ehcache.multi.vm.config.location.peerProviderProperties=file=tcp.xml
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=my-ehcache/tcp-liferay.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=my-ehcache/tcp-liferay.xml
## END CACHE SETTINGS ##


file=tcp.xml triggers ehcache to load a file from the classloader; tcp.xml is inside the jgroups**.jar and has all the settings needed. If you want to customize it, create a /my-ehcache/ folder off your source folder, copy the tcp.xml file into it and rename it to tcp-liferay.xml. Edit it, adding singleton_name="liferay_jgroups_tcp" to the <TCP element. This allows the JGroups configuration to run multiple clusters and fixes the issue Daniel and I ran into where only one cluster can work at a time. Customize to your heart's content; just make sure the peerProviderProperties keys point to the customized file.
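For illustration, a minimal fragment of what the customized tcp-liferay.xml might look like (a sketch only: apart from the added singleton_name attribute, everything should come from the stock tcp.xml in your jgroups jar, so verify the other attributes against your version):

<config>
    <TCP bind_port="7800"
         singleton_name="liferay_jgroups_tcp"
         loopback="false"/>
    <!-- ...remaining protocols (TCPPING, MERGE2, FD_SOCK, ...) exactly as in the stock tcp.xml... -->
</config>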

If you want more options on your peerProviderProperties and don't want to use the prepackaged tcp.xml or a customized copy, you can use the connect option.
The connect option is what Mike did above; the only difference between what Mike did and what the new version needs is that you have to put connect= in front of it. If you go this route, I would start by converting the tcp.xml into the connect format and then customizing it, because it looks like every version of JGroups changes what it needs and how it's done.

One additional customization you might need is the newest LiferayResourceCacheUtil. This is due to an issue with serialization and Velocity that might already be fixed in the version of 6 you have, but you might want to compare (see http://issues.liferay.com/browse/LPS-13429). I've attached the one that I got from trunk.

I'm still tweaking it and, like I said above, it's not in production yet, but in my tests all the caches seem to work. So I hope it works for you and anyone else who finds this... I had a heck of a time trying tons of combinations and old examples until I got this to work.

Edit: if you want to see more of what's happening, or to troubleshoot, here are my log4j settings:
<appender name="CONSOLE_CACHE" class="org.apache.log4j.RollingFileAppender">
    <errorHandler class="org.apache.log4j.varia.FallbackErrorHandler">
        <root-ref />
        <appender-ref ref="CONSOLE" />
    </errorHandler>
    <param name="File" value="${app.log.location}logs\\portal_cache.log" />
    <param name="MaxFileSize" value="10000KB" />
    <param name="MaxBackupIndex" value="10" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern"
            value="----%n%d{yyyy/MM/dd HH:mm:ss} %p%n%l [%x][%t] %n%m%n" />
    </layout>
</appender>
<category name="org.jgroups" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>
<category name="org.jgroups.protocols" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>
<category name="com.liferay.portal.cache.ehcache" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>
<category name="net.sf.ehcache.distribution" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>
<category name="net.sf.ehcache.config.ConfigurationFactory" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>
<category name="net.sf.ehcache.util" additivity="false">
    <priority value="ALL" />
    <appender-ref ref="CONSOLE_CACHE" />
</category>


Edit: added changes to fix the issue with multiple clusters.
Edit: added the correct and optional -D options for startup.
Attachments: LiferayResourceCacheUtil.java (2.0k)
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 07:39

Thank you!

This seems to be a giant step forward. Caching seems to be loaded without errors, but distribution does not work.

This is what catalina.out tells me multiple times on both nodes:

16:26:31,626 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey net.sf.ehcache.configurationResourceName.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app02-qs-klartxt-52782, cluster=hibernate-clustered, physical address=fe80:0:0:0:70d7:66ff:fe70:fd19:37734
-------------------------------------------------------------------

16:26:12,391 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app02-qs-klartxt-50774, cluster=liferay-multi-vm-clustered, physical address=fe80:0:0:0:70d7:66ff:fe70:fd19%2:7801
-------------------------------------------------------------------


I've used hostnames instead of IPs, and that seems to be okay because the hosts are recognized. But shouldn't the last part of the address be the port I specified in the JVM startup parameters? I used 52555, but the ports taken seem to be random ...

Any hints?
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 07:44

I'm not sure... I only see the one (I think there is an issue with hibernate in 5.2)
-------------------------------------------------------------------
GMS: address=L01RW2FE-47531, cluster=liferay-multi-vm-clustered, physical address=10.22.4.20:7800
-------------------------------------------------------------------


Do you see anything in the logs? Like drops?
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 07:57

I can't see anything in the logs here ...

-------------------------------------------------------------------
GMS: address=app01-qs-klartxt-41719, cluster=hibernate-clustered, physical address=fe80:0:0:0:d03c:c4ff:fea7:9b72%2:7800
-------------------------------------------------------------------
16:53:25,694 INFO  [LiferayCacheManagerPeerProviderFactory:75] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value file=tcp.xml

-------------------------------------------------------------------
GMS: address=app01-qs-klartxt-62512, cluster=liferay-multi-vm-clustered, physical address=fe80:0:0:0:d03c:c4ff:fea7:9b72%2:7801
-------------------------------------------------------------------


I'm wondering why you're getting the IPv4 physical address ..
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 09:04

I had the Hibernate cache disabled due to a performance tip from http://www.liferay.com/community/wiki/-/wiki/Main/Fine%20tune%20the%20performance%20of%20the%20system

Since I re-enabled it, mine stopped working... I'm troubleshooting it and will let you know...
Chris Whittle
RE: How to configure JGroups TCP for Liferay on EC2
March 8, 2011 14:18

OK, got it... I'll update my instructions above, but the gist of it is that the tcp.xml inside the jar needs a singleton_name attribute on TCP so that it can handle multiple clusters... So you have to add it to the default XML.
Daniel Schmidt
RE: How to configure JGroups TCP for Liferay on EC2
March 9, 2011 04:40

Wow!

This worked for me. Caches are up and running. Loving it!

Thank you very much! You saved my day!
Xinsheng Robert Chen
RE: How to configure JGroups TCP for Liferay on EC2
October 10, 2011 14:55

Thanks, Chris Whittle!

Your post has helped me a lot!
Xinsheng Robert Chen
RE: How to configure JGroups TCP for Liferay on EC2
November 3, 2011 20:52

Hi, Chris,

Using your configuration, I have configured a cluster of two Liferay portal 6.0 SP1 JBoss 5.1.0 nodes to use TCP unicast for cache replication and session replication. It works with Liferay portal in its out-of-the-box state. However, when I deploy a custom portlet with service builder code, it throws the following exceptions:

2011-10-30 22:21:07,579 INFO (main) Loading vfsfile:/appshr/liferay/testliferay/liferay-portal-6.0-ee-sp1/jboss-5.1.0/server/testlrayServer1/deploy/search-portlet.war/WEB-INF/classes/service.properties
2011-10-30 22:21:08,176 ERROR [net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider]
(main) Failed to create JGroups Channel, replication will not function. JGroups properties:
null
org.jgroups.ChannelException: unable to setup the protocol stack
at org.jgroups.JChannel.init(JChannel.java:1728)
at org.jgroups.JChannel.<init>(JChannel.java:249)
at org.jgroups.JChannel.<init>(JChannel.java:232)
at org.jgroups.JChannel.<init>(JChannel.java:173)
at net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider.init(JGroupsCacheManagerPeerProvider.java:131)
at net.sf.ehcache.CacheManager.init(CacheManager.java:363)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:228)
at net.sf.ehcache.hibernate.EhCacheProvider.start(EhCacheProvider.java:99)
at com.liferay.portal.dao.orm.hibernate.CacheProviderWrapper.start(CacheProviderWrapper.java:62)
at com.liferay.portal.dao.orm.hibernate.EhCacheProvider.start(EhCacheProvider.java:67)
at org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge.start(RegionFactoryCacheProviderBridge.java:72)
at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:236)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1842)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:860)
... ...


at org.jboss.Main$1.run(Main.java:556)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ClassCastException: org.jgroups.protocols.UDP cannot be cast to org.jgroups.stack.Protocol
at org.jgroups.stack.Configurator.createLayer(Configurator.java:433)
at org.jgroups.stack.Configurator.createProtocols(Configurator.java:393)
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:88)
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:534)
at org.jgroups.JChannel.init(JChannel.java:1725)
... 105 more
2011-10-30 22:21:08,181 ERROR [org.springframework.web.context.ContextLoader] (main) Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
'liferayHibernateSessionFactory' defined in ServletContext resource [/WEB-INF/classes/META-INF/hibernate-spring.xml]:
Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1420)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)

The log file with more details is attached.

Do you have any ideas about what is happening?

Thanks for your advice!
Attachments: custom_portlet_problem_nov2.txt (33.5k)
Xinsheng Robert Chen
RE: How to configure JGroups TCP for Liferay on EC2
March 21, 2012 15:29

Let me answer my own question.

I updated the "hibernate-spring.xml" file. Instead of using "com.liferay.portal.spring.hibernate.PortletHibernateConfiguration", I used "org.springframework.orm.hibernate3.LocalSessionFactoryBean", and I also set values for some of its properties, which fixed the issue.
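For anyone hitting the same wall, a hypothetical sketch of the kind of bean definition meant here (the bean id comes from the stack trace above; the dataSource reference and property values are illustrative assumptions, not the actual settings used):

<bean id="liferayHibernateSessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="liferayDataSource" />
    <property name="hibernateProperties">
        <props>
            <!-- illustrative values only; align these with your portal's Hibernate settings -->
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
            <prop key="hibernate.cache.use_query_cache">false</prop>
        </props>
    </property>
</bean>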
Hitoshi Ozawa
RE: How to configure JGroups TCP for Liferay on EC2
March 21, 2012 15:43

It would be great if somebody would write up a wiki on information based on this thread.
Patrizio Munzi
RE: How to configure JGroups TCP for Liferay on EC2
November 7, 2012 06:27

Hi All,
Hope this thread isn't dead yet! :-)
I followed Chris's instructions, but I get the following error:
2012-11-07 14:59:40 ERROR [JGroupsCacheManagerPeerProvider:150] Failed to connect to JGroups cluster 'EH_CACHE', replication will not function. JGroups properties:
null
org.jgroups.ChannelException: failed to start protocol stack
at org.jgroups.JChannel.startStack(JChannel.java:1765)
at org.jgroups.JChannel.connect(JChannel.java:415)
at org.jgroups.JChannel.connect(JChannel.java:390)
at net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider.init(JGroupsCacheManagerPeerProvider.java:148)
at net.sf.ehcache.CacheManager.init(CacheManager.java:355)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:265)
at com.liferay.portal.cache.EhcachePortalCacheManager.afterPropertiesSet(EhcachePortalCacheManager.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1414)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1375)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
at java.security.AccessController.doPrivileged(Native Method)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
at com.liferay.portal.spring.context.PortalContextLoaderListener.contextInitialized(PortalContextLoaderListener.java:49)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3764)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4216)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:448)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
Caused by: java.lang.IllegalStateException: cluster 'EH_CACHE' is already connected to singleton transport: [EH_CACHE, dummy-1352296780172]
at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:834)
at org.jgroups.JChannel.startStack(JChannel.java:1762)
... 44 more

The error disappears only if I remove the
singleton_name="liferay_jgroups_tcp"
attribute from the tcp.xml file.
With it removed, however, I see mixed protocols in use, both TCP and UDP.

Any ideas??
Patrizio Munzi
RE: How to configure JGroups TCP for Liferay on EC2
November 7, 2012 08:07

OK, I was able to solve the previous issue on my own by setting two different cache names in the files hibernate-clustered.xml and liferay-multi-vm-clustered.xml:
------------
<ehcache name="hibernate-clustered"
<ehcache name="liferay-multi-vm-clustered"
------------

Now I'm experiencing another, I think worse, problem. Since I built some of my own services deployed as separate WARs, it looks like Liferay/JGroups (I don't really know which..) initializes a JGroupsCacheManagerPeerProvider using the same general "hibernate-clustered.xml" file for each service WAR, and then I get the same error again:
more JGroups clusters with the same name connecting to the same transport protocol.

Hope someone can help.
José Pereira
RE: How to configure JGroups TCP for Liferay on EC2
January 16, 2013 12:43

In Liferay 6.0.5, the configuration has to be done in a different way if you are not updating the ehcache (and ehcache-jgroups-replication) library jars:

in portal-ext.properties, the following class must be selected:

ehcache.cache.manager.peer.provider.factory=com.liferay.portal.cache.ehcache.JGroupsCacheManagerPeerProviderFactory
(instead of the net.sf.ehcache.X.Y class that is referenced there)

and then set:

ehcache.multi.vm.config.location.peerProviderProperties=clusterName=hibernate,channelProperties=my-ehcache/jgroups-hibernate-clustered.xml
net.sf.ehcache.configurationResourceName.peerProviderProperties=clusterName=hibernate,channelProperties=my-ehcache/jgroups-multi-vm-clustered.xml

The clusterName values are just there to name the clustered caches so that they can be identified correctly in network sniffers and even in log messages (DEBUG mode). The files my-ehcache/jgroups-hibernate-clustered.xml and my-ehcache/jgroups-multi-vm-clustered.xml must be on the system classpath, which can be achieved in several ways. In our setup, we prepend a classpath entry to the JVM pointing to the liferay/ext/ location so that this and other resources get found there (which is, in its own right, a very nice way to "patch" Liferay when needed).

Otherwise, you have to upgrade the stock ehcache, ehcache-jgroups-replication and probably the jgroups.jar also (haven't tested it myself).
José Pereira
RE: How to configure JGroups TCP for Liferay on EC2
January 16, 2013 12:53

I'm having the same issue... I'll elaborate further:

1. The multi-vm-clustered cache and its associated JGroups JChannel are only initialized once, when liferay-portal.war is deployed on the application server.
I think this is because the classes com.liferay.portal.kernel.cache.MultiVMPoolUtil (and com.liferay.portal.kernel.cache.MultiVMPool) are made available in portal-service.jar, which is deployed in the application server's (or servlet container's) lib directory; as such, the webapps' classloaders (configured for parent delegation first) see that the class is already loaded, and the initialization of the cache is actually only done once.

2. On the flip side, the hibernate-clustered cache and its associated JGroups JChannel are initialized for each webapp (portlet) that uses domain classes based on Liferay's hibernateSessionFactory. I think this has to do with each portlet webapp receiving its own HibernateSessionFactory and, as such, its own CacheManager and associated JGroups JChannel. This seems related to this forum post: http://forums.terracotta.org/forums/posts/list/7166.page

I haven't found a solution for this second problem yet, but I know this is not good, as more and more JChannel instances (and associated threads and sockets) are opened for each and every "domain-based portlet" in the system. If each one starts "pinging" the others, the cluster views become immense and the cross communication may bring your cluster down!

Does anyone know of a solution for this issue?
José Pereira
RE: How to configure JGroups TCP for Liferay on EC2
January 23, 2013 14:24

Ok, so I followed another path:

I added ehcache.jar, ehcache-jgroups-replication.jar, hibernate.jar, commons-lang.jar, commons-collections.jar and some other dependent jars to the application server classpath.

I also added the Hibernate cache region factory SingleEhCacheRegionFactory configuration to portal-ext.properties. As a result, each JGroups cluster view that used to have 40+ members now has only 8, the network traffic from JGroups communication has dropped from 8 Mbps to 350 Kbps, and overall system stability has improved, as heap use dropped along with thread counts.
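For reference, a sketch of the kind of portal-ext.properties line meant here (the exact class name is my reading of "SingleEhCacheRegionFactory"; verify it against the ehcache-core version on your classpath):

# Use a singleton EhCache region factory so webapps share one CacheManager
# (and hence one JChannel) instead of each opening its own.
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory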

Best regards
Emilien Floret
RE: How to configure JGroups TCP for Liferay on EC2
November 21, 2013 04:46

Hello everyone,
I'm using this thread as it seems to be the only one where people have actually made this work. I have to run Liferay 6.2 in a cluster on Amazon, and following this thread I ended up with the configuration below.
This is my first time working with Liferay, so I'm not sure it works ... Could someone tell me what to look for?

Based on the clustering guide, I checked that moving a portlet (or adding/removing one) on a page is immediately reflected on the second node. To validate Lucene, I created a web content article with some random text and was able to search for a word inside that content on both nodes (again immediately). I also tried adding a user and assigning it to the portal site from the other node. I didn't have any issues doing all of that. Can I consider that it's working?

What bothers me is that I don't see the hibernate cluster come up; how can I ensure everything is working?

#CLUSTER#
cluster.link.enabled=true
cluster.link.autodetect.address=myrdsdb:3306
lucene.replicate.write=true
#DEBUG#
cluster.executor.debug.enabled=true
web.server.display.node=true

## CACHE SETTINGS ##
##JGROUPS
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=com.liferay.portal.cache.ehcache.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=clusterName=hibernate-clustered,channelProperties=/myehcache/tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=clusterName=liferay-multi-vm-clustered,channelProperties=/myehcache/tcp.xml
cluster.link.channel.properties.control=/myehcache/tcp3.xml
cluster.link.channel.properties.transport.0=/myehcache/tcp4.xml
## END CACHE SETTINGS ##


This is the content of my tcp.xml file (I can probably trim it down a little... I don't know):
I actually have 4 copies of that file (tcp.xml, tcp2.xml, tcp3.xml, tcp4.xml), simply to change the bind_port. There's probably a way to specify the bind port in the portal-ext.properties file, but I didn't know how to (see the sketch after the file below).
I added a different singleton_name in each tcp file based on this thread.
I also replaced TCPPING with JDBC_PING, as I need my cluster to automatically detect members without having to change the configuration and restart.
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
    <TCP bind_port="7800"
         singleton_name="liferay_jgroups_tcp"
         loopback="false"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:640K}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         enable_bundling="true"
         use_send_queues="true"
         sock_conn_timeout="300"

         timer_type="old"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"

         thread_pool.enabled="true"
         thread_pool.min_threads="1"
         thread_pool.max_threads="10"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="false"
         thread_pool.queue_max_size="100"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>

    <JDBC_PING connection_url="jdbc:mysql://myrdsDB/liferaydb"
               connection_username="dbuser"
               connection_password="dbpassword"
               connection_driver="com.mysql.jdbc.Driver" />

    <MERGE2 min_interval="10000"
            max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500" />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false"
                    discard_delivered_msgs="true"/>
    <UNICAST />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <UFC max_credits="2M"
         min_threshold="0.4"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K" />
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>

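As a possible simplification of the four-copies approach, a hedged sketch: JGroups XML supports ${systemProperty:default} substitution in attribute values, so the port could come from a -D startup option instead of being hard-coded. The property name below is made up for illustration; each of the four files could read its own property name, set alongside the other -Djgroups options at startup.

<TCP bind_port="${jgroups.ehcache.bind_port:7800}"
     singleton_name="liferay_jgroups_tcp"/>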

On startup, this is what I get: three clusters start up. I never saw the hibernate-clustered cluster during all my tests; is it supposed to come up?

10:55:57,908 INFO  [stdout] (MSC service thread 1-2) 10:55:57,906 INFO  [MSC service thread 1-2][LiferayCacheManagerPeerProviderFactory:76] portalPropertyKey ehcache.multi.vm.config.location.peerProviderProperties has value clusterName=liferay-multi-vm-clustered,channelProperties=/myehcache/tcp.xml
10:56:00,601 INFO  [stdout] (MSC service thread 1-2)
10:56:00,613 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:00,614 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-16519, cluster=liferay-multi-vm-clustered, physical address=10.10.34.76:7800
10:56:00,617 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:00,762 INFO  [stdout] (MSC service thread 1-2) 10:56:00,755 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-16519|0] [hostname1-16519]

10:56:43,976 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:43,976 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-44357, cluster=LIFERAY-CONTROL-CHANNEL, physical address=10.10.34.76:8000
10:56:43,981 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,088 INFO  [stdout] (MSC service thread 1-2) 10:56:44,087 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-44357|0] [hostname1-44357]

10:56:44,439 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,439 INFO  [stdout] (MSC service thread 1-2) GMS: address=hostname1-33601, cluster=LIFERAY-TRANSPORT-CHANNEL-0, physical address=10.10.34.76:8100
10:56:44,443 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
10:56:44,541 INFO  [stdout] (MSC service thread 1-2) 10:56:44,540 INFO  [MSC service thread 1-2][BaseReceiver:64] Accepted view [hostname1-33601|0] [hostname1-33601]


I did a lot of different tests and thought that simply defining these two properties could be enough:
cluster.link.channel.properties.control=/myehcache/tcp3.xml
cluster.link.channel.properties.transport.0=/myehcache/tcp4.xml
However, I don't have any replication with only these settings (though I do see the cluster JOIN messages when the second node comes up).

Thank you for your time
Regards,
Emilien