Discussion forums

Liferay cluster with 2 AWS EC2 instances

Z A, modified 7 years ago.

Liferay cluster with 2 AWS EC2 instances

New Member Posts: 2 Join Date: 16/12/15 Recent Posts
Hi, after a fair amount of searching online I've managed, with the configuration below, to set up a cluster with session and content replication across two EC2 instances on Amazon, using unicast.

Now, I have a problem: when I make a content change on node1 and publish it (simply adding a word), and then go to node2, the change does not show up until roughly a minute later. Does anyone know why, and can someone help me fine-tune Liferay?

My configuration is the following:

/etc/hosts:
<ip>       nodo1
<ip>       nodo2


setenv.sh:
-Djgroups.bind_addr=nodo1
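
For completeness, that flag is appended to CATALINA_OPTS in Tomcat's bin/setenv.sh (nodo2 binds to its own hostname), for example:

CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=nodo1"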


portal-ext.properties

##
## Ehcache
##
# Liferay cache distributed with Ehcache over JGroups TCP unicast
ehcache.multi.vm.config.location=/liferay-multi-vm-clustered.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=tcp.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
#
######################################Cluster
#
web.server.display.node=true
cluster.link.enabled=true
#cluster link channel properties
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml
ehcache.cluster.link.replication.enabled=true
##############################################
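
Note that this block combines two replication mechanisms: the net.sf.ehcache JGroups factories and Liferay's ClusterLink (ehcache.cluster.link.replication.enabled=true). As far as I understand they are meant as alternatives, so an untested ClusterLink-only sketch of the same block would be:

web.server.display.node=true
cluster.link.enabled=true
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml
ehcache.cluster.link.replication.enabled=true
# no Ehcache JGroups peer provider / replicator factories in this mode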



tcp.xml
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
  
    <TCP singleton_name="<NAME>" bind_port="7800" loopback="false"
         recv_buf_size="${tcp.recv_buf_size:5M}" send_buf_size="${tcp.send_buf_size:640K}"
         max_bundle_size="64K" max_bundle_timeout="30" enable_bundling="true"
         use_send_queues="true" sock_conn_timeout="300"
         timer_type="old" timer.min_threads="4" timer.max_threads="10"
         timer.keep_alive_time="3000" timer.queue_max_size="500"
         thread_pool.enabled="true" thread_pool.min_threads="1" thread_pool.max_threads="10"
         thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="false"
         thread_pool.queue_max_size="100" thread_pool.rejection_policy="discard"
         oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="discard" />
 
    <S3_PING location="<BUCKET_S3>" num_initial_members="1" />


    <MERGE2 min_interval="10000" max_interval="30000" />
    <FD_SOCK />
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500" />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true" />
    <UNICAST />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M" />
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true" />
    <UFC max_credits="2M" min_threshold="0.4" />
    <MFC max_credits="2M" min_threshold="0.4" />
    <FRAG2 frag_size="60K" />
    <pbcast.STATE_TRANSFER />
  
</config>
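
For S3_PING to discover members from EC2, both instances need read/write access to the bucket. A sketch with explicit credentials (access_key and secret_access_key are standard S3_PING attributes; the placeholder values here are mine) and num_initial_members raised to the real cluster size:

    <S3_PING location="<BUCKET_S3>"
             access_key="<AWS_ACCESS_KEY>"
             secret_access_key="<AWS_SECRET_KEY>"
             num_initial_members="2" />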


liferay-multi-vm-clustered.xml
<cache name="com.yp.sample.liferay.portlet.SampleCachingPortlet" eternal="true" maxElementsInMemory="50000" overflowToDisk="false" timeToIdleSeconds="600">
    <cacheEventListenerFactory class="com.liferay.portal.cache.ehcache.LiferayCacheEventListenerFactory" properties="replicatePuts=true,replicateUpdates=true,replicateRemovals=true,replicateAsynchronously=true,replicateUpdatesViaCopy=true" propertySeparator="," />
    <bootstrapCacheLoaderFactory class="com.liferay.portal.cache.ehcache.LiferayBootstrapCacheLoaderFactory" />
</cache>


I added this to web.xml:
       <distributable />
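
That is, as a direct child of <web-app> in the portal's web.xml; a minimal sketch:

<web-app>
    <!-- marks sessions as replicable across the cluster -->
    <distributable />
</web-app>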


I also added this to server.xml:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="liferay">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6" channelStartOptions="3">
    <Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
      <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" autoBind="0" selectorTimeout="5000" maxThreads="6" address="nodo1" port="4444" />
      <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
        <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" timeout="60000" keepAliveTime="10" keepAliveCount="0" />
      </Sender>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor" staticOnly="true" />
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
        <Member className="org.apache.catalina.tribes.membership.StaticMember" host="nodo2" port="4444" uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}" />
      </Interceptor>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener" />
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
  </Cluster>
</Engine>
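
On nodo2 the block is mirrored (a sketch, assuming the same ports): the Receiver binds to nodo2 and the StaticMember points back at nodo1 with its own uniqueId:

      <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" autoBind="0" selectorTimeout="5000" maxThreads="6" address="nodo2" port="4444" />
      <!-- rest of the <Cluster> block identical to nodo1 -->
      <Member className="org.apache.catalina.tribes.membership.StaticMember" host="nodo1" port="4444" uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1}" />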



It's more than likely that some of the files above contain unnecessary lines, which is why I'm writing: to ask for your advice and help on why replication is not instantaneous.


Regards!
Sergio Sánchez, modified 7 years ago.

RE: Liferay cluster with 2 AWS EC2 instances

Regular Member Posts: 143 Join Date: 6/07/11 Recent Posts
Hi ZA, it all looks fairly correct. You would need to review the startup logs to check whether the cluster is forming properly.
I'd suggest moving tcp.xml into WEB-INF/classes and referencing it simply like this:
ehcache.multi.vm.config.location.peerProviderProperties=tcp.xml

Also, I wouldn't change the cache space definitions, and I would remove this property:
ehcache.multi.vm.config.location=/liferay-multi-vm-clustered.xml
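
With both changes, the Ehcache section of portal-ext.properties would stay as something like:

ehcache.multi.vm.config.location.peerProviderProperties=tcp.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory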

Regards
Z A, modified 7 years ago.

RE: Liferay cluster with 2 AWS EC2 instances

New Member Posts: 2 Join Date: 16/12/15 Recent Posts
Hi, a detail I forgot to mention earlier.

The tcp.xml file is in:
WEB-INF/classes/jgroups/

liferay-multi-vm-clustered.xml is in:
WEB-INF/classes/ehcache/
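
Since both files sit in subfolders, maybe the properties need the folder prefix so they resolve on the classpath; something like this (untested, and I'm not sure about the leading-slash convention):

ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=jgroups/tcp.xml
cluster.link.channel.properties.control=/jgroups/tcp.xml
cluster.link.channel.properties.transport.0=/jgroups/tcp.xml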

I've commented out the line ehcache.multi.vm.config.location=/liferay-multi-vm-clustered.xml in portal-ext.properties and will test.

I don't know why it doesn't replicate instantly and has this delay.
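
In case the delay comes from the replicator batching events, I'll also try forcing synchronous replication on one cache, to see whether changes then show up immediately (a sketch using the standard replicateAsynchronously property):

<cacheEventListenerFactory class="com.liferay.portal.cache.ehcache.LiferayCacheEventListenerFactory" properties="replicatePuts=true,replicateUpdates=true,replicateRemovals=true,replicateAsynchronously=false,replicateUpdatesViaCopy=true" propertySeparator="," />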

PS: here's the log:
INFO: Starting service Catalina
May 10, 2016 11:04:17 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.42
May 10, 2016 11:04:17 AM org.apache.catalina.ha.tcp.SimpleTcpCluster startInternal
INFO: Cluster is about to start
May 10, 2016 11:04:17 AM org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:nodo1/<ip>:4444
May 10, 2016 11:04:17 AM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.tribes.membership.StaticMember[tcp://nodo2:4444,nodo2,4444, alive=0, securePort=-1, UDP Port=-1, id={0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 }, payload={}, command={}, domain={}, ]
May 10, 2016 11:04:17 AM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml
May 10, 2016 11:04:19 AM org.apache.catalina.tribes.io.BufferPool getBufferPool
INFO: Created a buffer pool with max size:104857600 bytes of type:org.apache.catalina.tribes.io.BufferPool15Impl
Loading jar:file:/opt/liferay/tomcat/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/system.properties
Loading jar:file:/opt/liferay/tomcat/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/system.properties
Loading jar:file:/opt/liferay/tomcat/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/portal.properties
Loading file:/opt/liferay/portal-ext.properties
May 10, 2016 11:04:33 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
11:04:40,303 INFO  [localhost-startStop-1][DialectDetector:71] Determine dialect for MySQL 5
11:04:40,414 INFO  [localhost-startStop-1][DialectDetector:136] Found dialect org.hibernate.dialect.MySQLDialect
11:05:01,211 INFO  [localhost-startStop-1][ClusterBase:142] Autodetecting JGroups outgoing IP address and interface for www.google.com:80
11:05:01,222 INFO  [localhost-startStop-1][ClusterBase:158] Setting JGroups outgoing IP address to <ip>  and interface to eth0

-------------------------------------------------------------------
GMS: address=, cluster=LIFERAY-CONTROL-CHANNEL, physical address=<ip>:7800
-------------------------------------------------------------------
11:05:04,723 INFO  [localhost-startStop-1][BaseReceiver:64] Accepted view [aa6db132044e-16837|0] [aa6db132044e-16837]
11:05:04,732 INFO  [localhost-startStop-1][ClusterBase:92] Create a new channel with properties TCP(bind_addr=<ip>;oob_thread_pool_keep_alive_time=5000;timer_keep_alive_time=3000;external_port=0;oob_thread_pool_enabled=true;max_bundle_size=64000;diagnostics_ttl=8;physical_addr_max_fetch_attempts=10;receive_on_all_interfaces=false;thread_pool_min_threads=1;thread_pool_keep_alive_time=5000;thread_pool_max_threads=10;enable_diagnostics=true;send_buf_size=640000;conn_expire_time=0;oob_thread_pool_queue_max_size=100;enable_bundling=true;thread_pool_queue_enabled=false;suppress_time_different_cluster_warnings=60000;client_bind_port=0;timer_rejection_policy=run;diagnostics_port=7500;oob_thread_pool_max_threads=8;wheel_size=200;logical_addr_cache_max_size=500;reaper_interval=0;sock_conn_timeout=300;defer_client_bind_addr=false;send_queue_size=10000;tick_time=50;logical_addr_cache_expiration=120000;thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;oob_thread_pool_min_threads=1;who_has_cache_timeout=2000;port_range=50;stats=true;peer_addr_read_timeout=1000;tcp_nodelay=true;id=22;diagnostics_addr=<ip>;bind_port=7800;oob_thread_pool_rejection_policy=discard;loopback=false;linger=-1;oob_thread_pool_queue_enabled=false;name=TCP;enable_unicast_bundling=true;thread_pool_enabled=true;log_discard_msgs_version=true;thread_naming_pattern=cl;timer_max_threads=10;timer_queue_max_size=500;use_send_queues=true;discard_incompatible_packets=true;ergonomics=true;bundler_capacity=200000;max_bundle_timeout=30;bind_interface_str=;timer_min_threads=4;log_discard_msgs=true;thread_pool_queue_max_size=100;bundler_type=new;timer_type=old;recv_buf_size=5000000)_:TCPPING(num_initial_members=10;port_range=1;force_sending_discovery_rsps=true;stats=true;ergonomics=true;num_initial_srv_members=0;id=10;max_dynamic_hosts=100;initial_hosts=127.0.0.1[7801],127.0.0.1[7802],127.0.0.1[7800];return_entire_cache=false;break_on_coord_rsp=true;stagger_timeout=0;name=TCPPING;timeout=3000)_:MERGE2(id=0;stats=true;merge_fast=true;name=MERGE2;inconsistent_view_threshold=1;min_interval=10000;ergonomics=true;merge_fast_delay=1000;max_interval=30000)_:FD_SOCK(bind_addr=<ip>;external_port=0;port_range=50;stats=true;suspect_msg_interval=5000;client_bind_port=0;ergonomics=true;num_tries=3;id=3;get_cache_timeout=1000;sock_conn_timeout=1000;bind_interface_str=;name=FD_SOCK;keep_alive=true;start_port=0)_:FD(id=2;max_tries=3;stats=true;name=FD;ergonomics=true;msg_counts_as_heartbeat=true;timeout=3000)_:VERIFY_SUSPECT(id=13;bind_addr=<ip>;bind_interface_str=;stats=true;name=VERIFY_SUSPECT;num_msgs=1;ergonomics=true;use_mcast_rsps=false;use_icmp=false;timeout=1500)_:BARRIER(id=0;max_close_time=60000;stats=true;name=BARRIER;ergonomics=true)_:pbcast.NAKACK2(use_mcast_xmit_req=false;use_mcast_xmit=false;suppress_time_non_member_warnings=60000;max_msg_batch_size=100;xmit_from_random_member=false;stats=true;xmit_table_max_compaction_time=10000;log_not_found_msgs=true;ergonomics=true;discard_delivered_msgs=true;print_stability_history_on_failed_xmit=false;id=57;become_server_queue_size=50;max_rebroadcast_timeout=2000;xmit_table_msgs_per_row=10000;xmit_table_num_rows=50;name=NAKACK2;log_discard_msgs=true;xmit_interval=1000;xmit_table_resize_factor=1.2)_:UNICAST(max_retransmit_time=60000;max_msg_batch_size=500;xmit_table_max_compaction_time=600000;stats=true;segment_capacity=1000;ergonomics=true;id=12;conn_expiry_timeout=60000;xmit_table_msgs_per_row=1000;xmit_table_num_rows=100;name=UNICAST;timeout=400,800,1600,3200;xmit_table_resize_factor=1.2;xmit_interval=2000)_:pbcast.STABLE(id=16;desired_avg_gossip=50000;max_bytes=4000000;stats=true;cap=0.1;name=STABLE;ergonomics=true;stability_delay=1000)_:pbcast.GMS(print_local_addr=true;stats=true;max_bundling_time=50;log_collect_msgs=true;resume_task_timeout=20000;log_view_warnings=true;num_prev_views=20;ergonomics=true;use_flush_if_present=true;print_physical_addrs=true;merge_timeout=5000;id=14;num_prev_mbrs=50;leave_timeout=1000;view_bundling=true;name=GMS;join_timeout=3000;handle_concurrent_startup=true;view_ack_collection_timeout=2000;max_join_attempts=0)_:UFC(id=45;max_block_time=5000;max_credits=2000000;stats=true;ignore_synchronous_response=true;min_credits=800000;name=UFC;min_threshold=0.4;ergonomics=true)_:MFC(id=44;max_block_time=5000;max_credits=2000000;stats=true;ignore_synchronous_response=true;min_credits=800000;name=MFC;min_threshold=0.4;ergonomics=true)_:FRAG2(id=5;frag_size=60000;stats=true;name=FRAG2;ergonomics=true)_:pbcast.STATE_TRANSFER(id=17;stats=true;name=STATE_TRANSFER;ergonomics=true)_ [Sanitized]

-------------------------------------------------------------------
GMS: address=, cluster=LIFERAY-TRANSPORT-CHANNEL-0, physical address=<ip>:7801
-------------------------------------------------------------------
11:05:07,792 INFO  [localhost-startStop-1][BaseReceiver:64] Accepted view [aa6db132044e-65247|0] [aa6db132044e-65247]
11:05:07,800 INFO  [localhost-startStop-1][ClusterBase:92] Create a new channel with properties TCP(bind_addr=<ip>;oob_thread_pool_keep_alive_time=5000;timer_keep_alive_time=3000;external_port=0;oob_thread_pool_enabled=true;max_bundle_size=64000;diagnostics_ttl=8;physical_addr_max_fetch_attempts=10;receive_on_all_interfaces=false;thread_pool_min_threads=1;thread_pool_keep_alive_time=5000;thread_pool_max_threads=10;enable_diagnostics=true;send_buf_size=640000;conn_expire_time=0;oob_thread_pool_queue_max_size=100;enable_bundling=true;thread_pool_queue_enabled=false;suppress_time_different_cluster_warnings=60000;client_bind_port=0;timer_rejection_policy=run;diagnostics_port=7500;oob_thread_pool_max_threads=8;wheel_size=200;logical_addr_cache_max_size=500;reaper_interval=0;sock_conn_timeout=300;defer_client_bind_addr=false;send_queue_size=10000;tick_time=50;logical_addr_cache_expiration=120000;thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;oob_thread_pool_min_threads=1;who_has_cache_timeout=2000;port_range=50;stats=true;peer_addr_read_timeout=1000;tcp_nodelay=true;id=22;diagnostics_addr=<ip>;bind_port=7800;oob_thread_pool_rejection_policy=discard;loopback=false;linger=-1;oob_thread_pool_queue_enabled=false;name=TCP;enable_unicast_bundling=true;thread_pool_enabled=true;log_discard_msgs_version=true;thread_naming_pattern=cl;timer_max_threads=10;timer_queue_max_size=500;use_send_queues=true;discard_incompatible_packets=true;ergonomics=true;bundler_capacity=200000;max_bundle_timeout=30;bind_interface_str=;timer_min_threads=4;log_discard_msgs=true;thread_pool_queue_max_size=100;bundler_type=new;timer_type=old;recv_buf_size=5000000)_:TCPPING(num_initial_members=10;port_range=1;force_sending_discovery_rsps=true;stats=true;ergonomics=true;num_initial_srv_members=0;id=10;max_dynamic_hosts=100;initial_hosts=127.0.0.1[7801],127.0.0.1[7802],127.0.0.1[7800];return_entire_cache=false;break_on_coord_rsp=true;stagger_timeout=0;name=TCPPING;timeout=3000)_:MERGE2(id=0;stats=true;merge_fast=true;name=MERGE2;inconsistent_view_threshold=1;min_interval=10000;ergonomics=true;merge_fast_delay=1000;max_interval=30000)_:FD_SOCK(bind_addr=<ip>;external_port=0;port_range=50;stats=true;suspect_msg_interval=5000;client_bind_port=0;ergonomics=true;num_tries=3;id=3;get_cache_timeout=1000;sock_conn_timeout=1000;bind_interface_str=;name=FD_SOCK;keep_alive=true;start_port=0)_:FD(id=2;max_tries=3;stats=true;name=FD;ergonomics=true;msg_counts_as_heartbeat=true;timeout=3000)_:VERIFY_SUSPECT(id=13;bind_addr=<ip>;bind_interface_str=;stats=true;name=VERIFY_SUSPECT;num_msgs=1;ergonomics=true;use_mcast_rsps=false;use_icmp=false;timeout=1500)_:BARRIER(id=0;max_close_time=60000;stats=true;name=BARRIER;ergonomics=true)_:pbcast.NAKACK2(use_mcast_xmit_req=false;use_mcast_xmit=false;suppress_time_non_member_warnings=60000;max_msg_batch_size=100;xmit_from_random_member=false;stats=true;xmit_table_max_compaction_time=10000;log_not_found_msgs=true;ergonomics=true;discard_delivered_msgs=true;print_stability_history_on_failed_xmit=false;id=57;become_server_queue_size=50;max_rebroadcast_timeout=2000;xmit_table_msgs_per_row=10000;xmit_table_num_rows=50;name=NAKACK2;log_discard_msgs=true;xmit_interval=1000;xmit_table_resize_factor=1.2)_:UNICAST(max_retransmit_time=60000;max_msg_batch_size=500;xmit_table_max_compaction_time=600000;stats=true;segment_capacity=1000;ergonomics=true;id=12;conn_expiry_timeout=60000;xmit_table_msgs_per_row=1000;xmit_table_num_rows=100;name=UNICAST;timeout=400,800,1600,3200;xmit_table_resize_factor=1.2;xmit_interval=2000)_:pbcast.STABLE(id=16;desired_avg_gossip=50000;max_bytes=4000000;stats=true;cap=0.1;name=STABLE;ergonomics=true;stability_delay=1000)_:pbcast.GMS(print_local_addr=true;stats=true;max_bundling_time=50;log_collect_msgs=true;resume_task_timeout=20000;log_view_warnings=true;num_prev_views=20;ergonomics=true;use_flush_if_present=true;print_physical_addrs=true;merge_timeout=5000;id=14;num_prev_mbrs=50;leave_timeout=1000;view_bundling=true;name=GMS;join_timeout=3000;handle_concurrent_startup=true;view_ack_collection_timeout=2000;max_join_attempts=0)_:UFC(id=45;max_block_time=5000;max_credits=2000000;stats=true;ignore_synchronous_response=true;min_credits=800000;name=UFC;min_threshold=0.4;ergonomics=true)_:MFC(id=44;max_block_time=5000;max_credits=2000000;stats=true;ignore_synchronous_response=true;min_credits=800000;name=MFC;min_threshold=0.4;ergonomics=true)_:FRAG2(id=5;frag_size=60000;stats=true;name=FRAG2;ergonomics=true)_:pbcast.STATE_TRANSFER(id=17;stats=true;name=STATE_TRANSFER;ergonomics=true)_ [Sanitized]
May 10, 2016 11:05:11 AM org.apache.catalina.ha.session.DeltaManager startInternal
INFO: Register manager localhost# to cluster element Engine with name Catalina
May 10, 2016 11:05:11 AM org.apache.catalina.ha.session.DeltaManager startInternal
INFO: Starting clustering manager at localhost#
May 10, 2016 11:05:11 AM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
INFO: Manager [localhost#], requesting session state from org.apache.catalina.tribes.membership.StaticMember[tcp://nodo2:4444,nodo2,4444, alive=0, securePort=-1, UDP Port=-1, id={0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
May 10, 2016 11:05:11 AM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [localhost#]; session state send at 5/10/16 11:05 AM received in 164 ms.
Starting Liferay Portal Community Edition 6.2 CE GA2 (Newton / Build 6201 / March 20, 2014)
11:05:13,620 INFO  [localhost-startStop-1][BaseDB:484] Database does not support case sensitive queries
11:05:14,731 INFO  [localhost-startStop-1][ServerDetector:119] Server supports hot deploy



Everything seems to be correct.