Fast and Furious Search Enterprise Configuration

Technical Blogs March 15, 2019 By Anderson Perrelli Staff


This article aims to deliver exactly what you're looking for: how to configure Search Enterprise. If you are looking for the finer details and deeper information, my suggestion is the Elasticsearch documentation and the Liferay Help Center.

Well, if you already understand a bit about the subject and know what you're looking for, this is your place. Below I'll show you how to configure and install your Elasticsearch cluster for Enterprise Search.

I would like to remind you that this configuration was not made in an actual production environment, but on my own notebook simulating one, so if you do this configuration in a real production environment, you may find obstacles along the path that I did not.

Let's go! To begin with, the lab I created to simulate this setup was as follows:
  • A Liferay DXP 7.1 SP1 Instance
  • Two Elasticsearch 6.5.1 Instances 

Before we actually begin, this article assumes the Liferay installation is already done as a prerequisite. If you have not yet installed your Liferay DXP 7.1, you can do so through this link.

Configuring Liferay 7.1 SP1 + Elasticsearch 6.5.1


Elasticsearch versions supported

Liferay DXP 7.1 supports two versions of Elasticsearch:
  • 6.1
  • 6.5 (from fix pack liferay-fix-pack-dxp-5 onward)

Preparing to Install Elasticsearch

Before starting the Elasticsearch installation, there are a few things to worry about:

  • CPU: 
    • Recommended: Eight total CPU cores for the Elasticsearch engine, assuming that only one Elasticsearch JVM is running on the machine.
  • Memory:
    • Recommended: At least 16 GB of memory, with 64 GB preferred. Accurate memory allocation depends on the amount of indexed data. For index sizes from 500 GB to 1 TB, 64 GB of memory is sufficient.
  • Disk:
    • Recommended: SSD, or at least a 15k RPM HDD used at full performance. Keep 25% more disk capacity than the total size of your indexes: if your index is 60 GB, make sure you have at least 75 GB of available disk space.
    • Not Recommended: Do not use NAS.
  • Cluster Size:
    • The minimum cluster size recommended by Elastic for fault tolerance is three nodes.
    • This prevents a well-known problem called split brain.

Installing Elasticsearch

For installation, follow these steps:

  1. Download Elasticsearch from Elastic’s web site
  2. Install Elasticsearch by extracting its archive to the system where you want it to run.
  3. Install some required Elasticsearch plugins.
  4. Name your Elasticsearch cluster.
  5. Configure discovery properties
  6. Configure the network host property.
  7. Configure Liferay DXP to connect to your Elasticsearch cluster.
  8. Restart Liferay DXP and reindex your search indexes.

Step 1: Download Elasticsearch from Elastic’s web site

The version of Elasticsearch that you download must be the same version of Elasticsearch that is embedded in your Liferay DXP 7.1. To validate which version you are running, just access the URL: http://localhost:9200
The return should be as follows:
{
  "name" : "g0m223N",
  "cluster_name" : "LiferayElasticsearchCluster",
  "cluster_uuid" : "Ii6STs04Tg-XzTVV5h7M2Q",
  "version" : {
    "number" : "6.5.1",
    "build_hash" : "af51318",
    "build_date" : "2018-01-26T18:22:55.523Z",
    "build_snapshot" : false,
    "lucene_version" : "7.1.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Where number is the version of Elasticsearch you should download from the Elastic website.

Step 2: Installing Elasticsearch

Install Elasticsearch by extracting the Elasticsearch archive into the desired directory on your operating system.

Step 3: Install some required Elasticsearch plugins.

Install the following required Elasticsearch plugins:

  • analysis-icu
  • analysis-kuromoji
  • analysis-smartcn
  • analysis-stempel

To install the plugins, go to your [Elastic_Home] and run the following command:

./bin/elasticsearch-plugin install [plugin-name]
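For example, the four plugins can be installed in one pass. This is a sketch assuming your shell is at [Elastic_Home]; the --batch flag skips the per-plugin confirmation prompt:

```shell
# Install each required analysis plugin in turn.
for plugin in analysis-icu analysis-kuromoji analysis-smartcn analysis-stempel; do
  ./bin/elasticsearch-plugin install --batch "$plugin"
done
```

Restart the Elasticsearch node afterwards so the plugins are loaded.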

Step 4: Name your Elasticsearch cluster.

A cluster in Elasticsearch is a collection of nodes (servers) identified as a cluster by a shared cluster name. The nodes work together to share data and workload.
Access the elasticsearch.yml file in your Elasticsearch installation directory:
[Elasticsearch Home]/config/elasticsearch.yml
Uncomment the cluster.name property and set the name of your cluster.
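For example, to match the cluster name that Liferay's embedded engine reported in the output above:

```yaml
# [Elasticsearch Home]/config/elasticsearch.yml
cluster.name: LiferayElasticsearchCluster
```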

Step 5: Configure discovery properties

There are two important discovery settings that should be configured.
By default, without any network configuration, Elasticsearch connects to available loopback addresses and scans ports 9300 to 9305 to attempt to connect to other nodes running on the same server. This provides an automatic clustering experience without having to do any configuration.
When it's time to create a cluster with nodes on other servers, you'll need to provide a list of IPs from other nodes in the cluster that are likely to be active and contactable. For my lab, it was not necessary to configure this property.
To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
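A sketch of both discovery settings in elasticsearch.yml, assuming a three-node cluster whose other nodes run at the hypothetical addresses 10.0.0.2 and 10.0.0.3:

```yaml
# Other cluster nodes to contact at startup
discovery.zen.ping.unicast.hosts: ["10.0.0.2", "10.0.0.3"]
# With three master-eligible nodes, a quorum of two prevents split brain
discovery.zen.minimum_master_nodes: 2
```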

Step 6: Configure the network host property

To form a cluster with nodes on other servers, your node will need to bind to a non-loopback address. Although there are many network configurations, generally all you need to configure is network.host.

For my environment, I left the default values configured.

NOTE: If you are going to configure X-Pack Security, the IP configured here must be used to create the certificates and keys.
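A minimal sketch of the binding configuration in elasticsearch.yml, assuming the hypothetical node address 10.0.0.1:

```yaml
network.host: 10.0.0.1
```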

Step 7: Configure Liferay DXP to connect to your Elasticsearch cluster.

Through the Liferay Control Panel, let's configure the connection to your remote Elasticsearch. Go to:
Control panel → Settings → System Settings → Search.
Click Elasticsearch 6 in the list of settings. Now you can configure it.
Here are the configuration options to change:
Cluster Name: Enter the name defined in the cluster.name property of the elasticsearch.yml file.
Operation Mode: Switch to REMOTE to connect to Elasticsearch.
Transport Addresses: Enter a delimited list of transport addresses for the Elasticsearch nodes. Here, you enter the transport address of the Elasticsearch server that you started. The default value is localhost:9300, which will work if your Elasticsearch is running locally.
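Alternatively, the same settings can be persisted as an OSGi configuration file instead of being entered in the UI. This is a sketch; verify the configuration file name against your DXP installation:

```properties
# [Liferay Home]/osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config
clusterName="LiferayElasticsearchCluster"
operationMode="REMOTE"
transportAddresses="localhost:9300"
```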

Step 8: Restart Liferay DXP and reindex your search indexes.

  • Stop Liferay
  • Start Elasticsearch:
    • ./bin/elasticsearch
  • Start Liferay
  • Reindex all search indexes:
    • Control Panel → Configuration → Search → Execute Reindex all search indexes.

Installing Liferay Enterprise Search

X-Pack is an Elasticsearch extension for securing and monitoring Elasticsearch clusters. Its security features include authentication of access to Elasticsearch cluster data and encryption of Elasticsearch's internal and external communications.

A Liferay Enterprise Search Premium subscription gives you access to both connectors, the X-Pack Connector and the Monitoring Connector. A Liferay Enterprise Search Standard subscription provides only the monitoring integration, so you only have access to the Monitoring Connector.

NOTE: X-Pack comes out of the box with the Elasticsearch bundle.

  • Download the connectors according to your subscription:
    • Enterprise Search Standard
    • Enterprise Search Premium
  • Deploy connectors in Liferay at:
    • [Liferay_Home]/deploy
  • Restart Liferay

Enabling X-Pack Security

Before starting the X-Pack security enablement process you need the licenses. If you do not already have a subscription, you can generate a 30-day trial license.

Follow the steps in the link below to generate a 30-day trial license:

The first thing to do is enable X-Pack security.

  1. Add the following property to the elasticsearch.yml file of each node in your Elasticsearch cluster:
    • xpack.security.enabled: true
  2. Restart the Elasticsearch nodes

Setting Up X-Pack Users

 On a system that uses X-Pack Security and X-Pack Monitoring, these internal X-Pack users are important:

  • kibana
  • elastic

To create the passwords for these users run the following command:

./bin/elasticsearch-setup-passwords interactive

NOTE: If a message appears saying that the elastic user's password has already been changed, see this link to resolve it.

Enabling Transport Layer Security

  • Generate Node Certificates
    • Create a certificate authority, using X-Pack’s certutil command:

      • ./bin/elasticsearch-certutil ca --pem --ca-dn CN=localhost
      • This generates a ZIP file. Extract the contents in the [Elasticsearch Home]/config folder.

    • Generate X.509 certificates and private keys using the CA created:
      • ./bin/elasticsearch-certutil cert --pem --ca-cert /path/to/ca.crt --ca-key /path/to/ca.key --dns localhost --ip --name localhost
      • This generates another ZIP file. Extract the contents in the [Elasticsearch Home]/config folder.

NOTE: As the --ip value, use the IP that was configured in the network.host property of the elasticsearch.yml file

  • Enable TLS

On each node, add the following properties to the elasticsearch.yml file:

xpack.ssl.certificate: /path/to/[Elasticsearch Home]/config/localhost.crt
xpack.ssl.key: /path/to/[Elasticsearch Home]/config/localhost.key
xpack.ssl.certificate_authorities: ["/path/to/ca.crt"]
  • Enable transport layer TLS (on each node, add the following properties to elasticsearch.yml):
    • xpack.security.transport.ssl.enabled: true
    • xpack.security.transport.ssl.verification_mode: certificate
  • Enable TLS on the HTTP layer to encrypt client communication (on each node, add the following property to elasticsearch.yml):
    • xpack.security.http.ssl.enabled: true

Configure the Liferay Connector to X-Pack Security

  • Create the connector's .config file in [Liferay_Home]/osgi/configs:

  • Add the following properties:
sslKeyPath="/path/to/[Elasticsearch Home]/config/localhost.key"
sslCertificatePath="/path/to/[Elasticsearch Home]/config/localhost.crt"
sslCertificateAuthoritiesPaths="/path/to/[Elasticsearch Home]/config/ca.crt"
NOTE: Remember to change the paths of the certificates and keys
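Putting it together, a sketch of a complete .config file. The file name and the authentication keys (requiresAuthentication, username, password, and the transport SSL settings) are assumptions based on the 7.1 X-Pack Security connector; verify them against your connector version:

```properties
# [Liferay Home]/osgi/configs/com.liferay.portal.search.elasticsearch6.xpack.security.internal.configuration.XPackSecurityConfiguration.config
sslKeyPath="/path/to/[Elasticsearch Home]/config/localhost.key"
sslCertificatePath="/path/to/[Elasticsearch Home]/config/localhost.crt"
sslCertificateAuthoritiesPaths="/path/to/[Elasticsearch Home]/config/ca.crt"
requiresAuthentication="true"
username="elastic"
password="liferay"
transportSSLEnabled="true"
transportSSLVerificationMode="certificate"
```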

Configure Liferay Enterprise Search Monitoring

Monitoring is enabled in Elasticsearch by default, but data collection is not. Enable data collection by adding this line to elasticsearch.yml:
  • xpack.monitoring.collection.enabled: true

Install Kibana

  • Download Kibana according to the version of elasticsearch you are using;
  • Unzip Kibana in the desired directory;
  • Tell Kibana which Elasticsearch instance you will monitor through the kibana.yml file;
    • elasticsearch.url: "http://localhost:9200"

NOTE: If SSL is enabled on Elasticsearch, this is an https URL.

If you're using X-Pack's security features on the Elasticsearch server and X-Pack requires authentication to access the Elasticsearch cluster, follow these steps:

  • Set the password for the built-in kibana user in [Kibana Home]/config/kibana.yml:
    • elasticsearch.username: "kibana"
    • elasticsearch.password: "liferay"


  1. The password used in this step was created earlier, in the Setting Up X-Pack Users section above.
  2. Once Kibana is installed, you can change the built-in user passwords from the Management user interface.


Add these settings to kibana.yml:

xpack.security.encryptionKey: "xsomethingxatxleastx32xcharactersx"
xpack.security.sessionTimeout: 600000
elasticsearch.ssl.verificationMode: certificate
elasticsearch.url: "https://localhost:9200"
elasticsearch.ssl.certificateAuthorities: [ "/path/to/ca.crt" ]
server.ssl.enabled: true
server.ssl.certificate: /path/to/[Elasticsearch Home]/config/localhost.crt
server.ssl.key: /path/to/[Elasticsearch Home]/config/localhost.key
NOTE: Remember to change the paths of the certificates and keys

Configuring the Liferay Connector to X-Pack Monitoring

  • Once the connector is installed and Kibana and Elasticsearch are securely configured, create a .config configuration file for the monitoring connector:

  • Place these settings in the .config file:
NOTE: The values depend on your Kibana configuration. For example, use a secure URL such as kibanaURL="https://localhost:5601" if you’re using X-Pack Security features.
  • Deploy this configuration file to [Liferay Home]/osgi/configs, and your running instance applies the settings. There's no need to restart the server.


  • There are two more settings to add to Kibana itself. The first forbids Kibana from rewriting requests prefixed with server.basePath. The second sets Kibana’s base path for the Monitoring portlet to act as a proxy for Kibana’s monitoring UI. Add this to kibana.yml:
    • server.rewriteBasePath: false
    • server.basePath: "/o/portal-search-elasticsearch-xpack-monitoring/xpack-monitoring-proxy"
NOTE: Once you set the server.basePath, you cannot access the Kibana UI through Kibana’s URL (e.g., https://localhost:5601). All access to the Kibana UI is through the Monitoring portlet.
  • Because you’re using the Monitoring portlet in Liferay DXP as a proxy to Kibana’s UI, if you are using X-Pack Security, you must configure the application server’s startup JVM parameters to recognize a valid truststore and password.
  • Navigate to Elasticsearch Home and generate a PKCS#12 certificate from the CA you created when setting up X-Pack security:

./bin/elasticsearch-certutil cert --ca-cert /path/to/ca.crt --ca-key /path/to/ca.key --ip --dns localhost --name localhost --out /path/to/Elasticsearch_Home/config/localhost.p12

NOTE: If the network.host property of the elasticsearch.yml file has been configured, you must use that IP here.

  • Next use the keytool command to generate a truststore:

keytool -importkeystore -deststorepass liferay -destkeystore /path/to/truststore.jks -srckeystore /path/to/Elasticsearch_Home/config/localhost.p12 -srcstoretype PKCS12 -srcstorepass liferay

  • Add the truststore path and password to your application server's startup JVM parameters. Here are example truststore path and password parameters for appending to a Tomcat server's CATALINA_OPTS:
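A sketch for Tomcat's setenv.sh, assuming the truststore generated above and the password liferay:

```shell
# [Tomcat Home]/bin/setenv.sh -- point the JVM at the truststore holding the CA
CATALINA_OPTS="${CATALINA_OPTS} -Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=liferay"
```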

Monitoring in Liferay DXP

Once Kibana and X-Pack are successfully installed and configured and all the servers are running, add the X-Pack Monitoring portlet to a page:
  1. Open the Add menu on a page and choose Widgets

  2. Search for monitoring and drag the X-Pack Monitoring widget from the Search category onto the page.


How to Configure Remote Staging in a Clustered Liferay DXP Environment

Company Blogs October 9, 2017 By Anderson Perrelli Staff

Well, I decided to write this post after banging my head a lot (brains almost flying everywhere) trying to configure remote staging in a clustered environment. Staging is that feature you either love or hate, a love-and-hate case in the style of a Mexican telenovela, where in the end everything works out and everyone lives happily ever after :p
Let's stop chatting and get to what matters. What we need to do to succeed with staging is not as complicated as it seems; I believe that after this post, when your boss says "configure remote staging in our environment," you won't feel like a cat hearing its owner announce bath time.
The architecture we'll build on is simple: one instance that will serve as staging (with a database configuration and file repository different from the cluster nodes), a load balancer responsible for traffic between the cluster nodes, and two nodes, which we will call appserver01 and appserver02, connected to the same database.
Assuming that the web and application tier are already configured and that the cluster is already configured as well, I've divided the staging configuration into three parts:
1 - Configuration of properties: staging, appserver01 and appserver02;
2 - Setting the TunnelAuthVerifier property in the system settings;
3 - Enabling remote staging in the publishing options of the Site Administration.
The first thing to worry about is the Liferay properties files, more specifically the portal-ext.properties files, where we'll set up the tunneling.servlet.shared.secret and tunneling.servlet.shared.secret.hex properties on the staging, appserver01 and appserver02 instances.
This property guarantees secure communication between one portal and another, denying access to any portal that does not share the same secret key.
If your operating system is Unix, you can use this command to generate a 128-bit AES key.
openssl enc -aes-128-cbc -k abc123 -P -md sha1
The following key lengths are supported by the available encryption algorithms:
  • AES: 128, 192, and 256 bit keys
  • Blowfish: 32 - 448 bit keys
  • DESede (Triple DES): 56, 112, or 168 bit keys (However, Liferay places an artificial limit on the minimum key length and does not support the 56 bit key length)
By setting the tunneling.servlet.shared.secret.hex property to true, you must configure the tunneling.servlet.shared.secret property using hexadecimal encoding.
Add the following lines to each Liferay instance's portal-ext.properties file, configuring a secret key:
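A sketch of those lines; the hex key shown is a hypothetical example, so replace it with the key you generated with the openssl command above:

```properties
tunneling.servlet.shared.secret=b2bbd3a5bb8f4b1abb3154f1f93ca356
tunneling.servlet.shared.secret.hex=true
```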
Another property we should worry about is tunnel.servlet.hosts.allowed, which must be added to the portal-ext.properties file on the application tier, appserver01 and appserver02. This property allows connections only between the configured IPs, so you must include the IP of the staging instance in it.
Add the following lines to the portal-ext.properties file on the appserver01 and appserver02 nodes:
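A sketch of those lines, using the same SERVER_IP and STAGING_IP placeholders as the note that follows:

```properties
tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP,STAGING_IP
```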
Note: SERVER_IP must be replaced by the IP of the instance itself and STAGING_IP by the IP of the Staging instance;
After setting up the properties files, you need to restart each portal.
The second thing we need to configure is the TunnelAuthVerifier property in the system settings of the nodes in our application tier, appserver01 and appserver02, navigate to the Control Panel → Configuration → System Settings → Foundation → Tunnel Auth Verifier. Click /api/liferay/do and insert the Staging IP addresses you are using in the Hosts allowed field. Then select Update.
You can also do this configuration on each node of your cluster through a TunnelAuthVerifierConfiguration-default.config file in [Liferay Home]/osgi/configs (I really recommend doing the configuration this way):
Add the following lines to the file:
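A sketch of the file's contents; the keys shown are assumptions based on the TunnelAuthVerifier configuration and should be checked against your DXP version:

```properties
enabled="true"
hostsAllowed="127.0.0.1,SERVER_IP,STAGING_IP"
urlsIncludes="/api/liferay/do"
```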
Note: If your portal is less than or equal to SP4 version the file extension should be .cfg
Finally, we must enable remote staging on our Staging instance. Go to the Publishing options in Site Administration and select Staging, then select Remote Live; additional options appear.
Fill in the Remote Host/IP field; for the sake of availability, it is recommended to fill it with the load balancer IP of our web tier. Then inform the remote port of that balancer through the Remote Port field. Finally, access appserver01, copy the remote site ID, paste it into the Remote Site ID field, and save the settings.