Liferay DXP lifecycle events: chaos vs. order

Technical Blogs May 14, 2017 By Jan Eerdekens

Historically there have been a number of extension points in Liferay that enable developers to hook into portal events and add their own custom/additional behaviour. In Liferay DXP this is still the case and the list below shows when certain events are fired and in which order. You’ll notice that a number of events are mentioned multiple times because they’re shown in the context of a specific action instead of just as a straightforward list of all the available event types:

  • Once during a server start/stop cycle:
    • global.startup.events
    • application.startup.events (1 for every portal instance ID)
    • application.shutdown.events (1 for every portal instance ID)
    • global.shutdown.events
  • For every login:
    • servlet.service.events.pre
    • servlet.session.create.events
    • servlet.service.events.post
    • login.events.pre
    • login.events.post
  • For every logout:
    • servlet.service.events.pre
    • logout.events.pre
    • servlet.session.destroy.events
    • logout.events.post
    • servlet.service.events.post
  • For every HTTP request:
    • servlet.service.events.pre
    • servlet.service.events.post
  • For every page that is updated:
    • servlet.service.events.pre
    • layout.configuration.action.update
    • servlet.service.events.post
  • For every page that is deleted:
    • servlet.service.events.pre
    • layout.configuration.action.delete
    • servlet.service.events.post

Even with this many extension points, there are a lot of cases where you want to attach more than one piece of custom behaviour to an event. Because developers like to create small, dedicated modules, instead of putting everything together in one big module, it is also important that these custom event extensions, each in their own class/module, can be run in a specific order. Otherwise the overall behaviour would be pretty random and erratic.

In Liferay 6.2 this was pretty simple: your portal-ext.properties contained a number of properties, each holding a comma separated list of implementation classes that would be run in the order they appeared in the list, e.g.:

login.events.post=com.liferay.portal.events.ChannelLoginPostAction,com.liferay.portal.events.DefaultLandingPageAction,com.liferay.portal.events.LoginPostAction

This was a very simple and easy to understand way of doing things. When I tried to do the same in Liferay DXP, it quickly became clear that with the switch to OSGi it no longer works this way. A single custom event handler is now a standalone OSGi component, e.g.: https://github.com/liferay/liferay-blade-samples/tree/master/maven/blade.lifecycle.loginpreaction, and I couldn't find any place where you could still define a list like I did before. So from the example it wasn't exactly clear how the order of the components could be influenced.

Initial testing showed that the order seemed to be determined by when the bundle was started. I tested this by making two small bundles that each contained a dummy implementation of the same lifecycle event, but each printed a different message. After installing and starting both bundles I got the expected messages in the order the bundles were started. I then stopped and restarted the bundle that had been started first, triggered the event again, and the order of the messages was reversed.

Because I really don't want this kind of behaviour, but something more deterministic, I was hoping there was some sort of solution for this. That's when I remembered something about service ranking. To override a default Liferay service/component you just need to implement the correct interface and provide your implementation (component) with a service.ranking value that's larger than the one present on the original Liferay implementation. This same property can also be seen in the Blade sample that shows how to add/override JSPs:

...

@Component(
    immediate = true,
    property = {
        "context.id=BladeCustomJspBag", "context.name=Test Custom JSP Bag",
        "service.ranking:Integer=100"
    }
)
public class BladeCustomJspBag implements CustomJspBag {
...
}

The documentation of the cases above always talks about using the service.ranking property to find a single implementation for something: the highest ranked implementation wins. So now I was wondering whether all these implementations are just kept in a sorted list somewhere. If that were the case, you'd expect Liferay to retrieve all the deployed implementations for a certain lifecycle event and use this sorting to run them in the correct order. So I quickly created a bunch of custom event implementations for the same event, gave them different service.ranking values, created a bundle, and installed and started it. To my surprise I got the behaviour I wanted! The higher the service.ranking value on my custom event implementation, the earlier in the chain it was executed.

A quick debug and code inspection session did indeed show that EventsProcessorUtil uses ServiceTrackers to retrieve all the registered implementations for a lifecycle event and gets back a sorted list. This list is retrieved from a ListServiceTrackerBucket that internally keeps a sorted list of ServiceReferences. The ServiceReference implementation is Comparable, and the comparison is based on the service ranking value and, when the rankings are identical, on the service ID.
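To make the ordering concrete, here's a small, self-contained Java sketch of how such a sorted bucket could order the implementations: higher service.ranking first, ties broken by service ID. The Ref record is a hypothetical stand-in for the real OSGi ServiceReference class, not Liferay's actual code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for an OSGi ServiceReference: just the two
// properties that matter for the ordering described above.
record Ref(String name, int ranking, long serviceId) {}

public class RankingOrderDemo {

    // Sort like the sorted bucket described above: highest service.ranking
    // first; when rankings are equal, fall back to the service ID.
    static List<Ref> sortForExecution(List<Ref> refs) {
        List<Ref> sorted = new ArrayList<>(refs);
        sorted.sort(Comparator.comparingInt(Ref::ranking).reversed()
            .thenComparingLong(Ref::serviceId));
        return sorted;
    }

    public static void main(String[] args) {
        List<Ref> actions = List.of(
            new Ref("auditAction", 100, 42L),
            new Ref("landingPageAction", 50, 13L),
            new Ref("defaultAction", 0, 7L));

        // Prints: auditAction, landingPageAction, defaultAction
        sortForExecution(actions).forEach(r -> System.out.println(r.name()));
    }
}
```

With this ordering, the component carrying the highest ranking runs first, which matches the behaviour observed in the test above.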

The example code can be found on my Github: https://github.com/planetsizebrain/event-order-hook

Liferay Freemarker Tips and Tricks: Date & Time

General Blogs January 23, 2017 By Jan Eerdekens

I've been working with Liferay for quite some time now, but I must confess that I still haven't really made the switch from Velocity to Freemarker for my templates. Even though I know there are a lot of benefits to using Freemarker, like better error messages, the Velocity knowledge and library of snippets I've built up through the years is hard to give up. But with the advent of Liferay DXP it now seems the perfect time to make the switch.

While working on a problem today, where an asset's (a document's) modified date wasn't updated when you checked out/checked in a new version, I had to change something in a Freemarker display template a colleague made. At first, before I knew there was a problem in Liferay, I thought the issue was that the template wasn't providing the timezone to the dateUtil call:

<ul>
   <#list entries as entry>
      <#assign dateFormat = "MMM d, HH':'mm" />

      <li>${entry.getTitle()} - ${dateUtil.getDate(entry.getModifiedDate(), dateFormat, locale)}</li>
   </#list>
</ul>

So this looked like the perfect time to fix this and see if any improvements could be made. I started by fixing the line as-is and just added the timeZone (which is already present in the template context - see com.liferay.portal.template.TemplateContextHelper):

<ul>
   <#list entries as entry>
      <#assign dateFormat = "MMM d, HH':'mm" />

      <li>${entry.getTitle()} - ${dateUtil.getDate(entry.getModifiedDate(), dateFormat, locale, timeZone)}</li>
   </#list>
</ul>

While this does the trick and produces the correct datetime in the timezone we want, it did look a bit verbose. So I wondered if Freemarker had something that might make it shorter/sweeter/better. After some looking around I found these 2 things: built-in date formatting and processing settings.

The first would allow us to drop the dateUtil call, but doesn't seem to have a provision for providing a locale and/or timezone. This is where the second one comes in: these processing settings allow us to configure things that further Freemarker processing will take into account, and luckily for us the datetime handling is one of those things. With the combination of both, our template becomes:

<#setting time_zone=timeZone.ID>
<#setting locale=locale.toString()>
<#setting datetime_format="MMM d, HH':'mm">

<ul>
   <#list entries as entry>
      <li>${entry.title} - ${entry.modifiedDate?datetime}</li>
   </#list>
</ul>

So now you can see that we can just set up the processor beforehand to use the Liferay timezone and locale, as well as our chosen datetime format. This allows us to then directly use the Freemarker ?datetime built-in on the date field of the asset entry. It will also apply to any further dates you want to print in this template using ?datetime (or ?date or ?time). As the settings only apply to one processor run, you can also have different templates, that set these processor settings differently, on the same page without them interfering with each other. The following screenshot shows the template above and the same template where the timezone, locale and datetime format are set differently:

The beauty and ease of use of this small improvement has already made me change my mind and hopefully I can write some more Freemarker related blog posts in the future.

Pandora’s Box for Liferay: The Groovy Edition

Technical Blogs September 14, 2016 By Jan Eerdekens

In the beginning of the year I had some fun creating the CRaSH portlet. Once it is deployed, it allows a portal administrator to work with the portal/JVM in a very flexible way using a command line syntax. While this is a very powerful portlet, there are some drawbacks. You need to be able to install it and you need to already have or develop a number of additional commands to make it useful for your specific use case.

This makes the CRaSH portlet difficult to use in some situations, e.g. an audit of an existing Liferay platform, where you might not have the access or permissions to even install it. So what can you do if you need to find out some information about the actual server that Liferay is installed on without SSH access? You try to use the Liferay Script Portlet of course! In my case using my favorite script language: Groovy.

I used 2 scripts for this in the past. The first one lists the contents of a directory:

import java.io.*;
 
try {
   File directory = new File("/export/liferay_dev2/data");
   for (File f : directory.listFiles()) {
      out.println(f.name + " " + f.size());
   }
} catch (Exception e) {
   out.println(e.getMessage());
}

The second one lists the contents of a file:

try {
   java.io.File file = new java.io.File("/opt/liferay/portal-ext.properties");
   String s = com.liferay.portal.kernel.util.FileUtil.read(file);
   out.println(s);
} catch (Exception e) {
   out.println(e.getMessage());
}

While these 2 scripts do their job perfectly, they are a bit of a hassle to use when you want to traverse a big directory tree and look at a lot of files. This way of working allows you to look at /etc/release to find out the version of the operating system. However, it doesn't allow you to actually execute a shell command like apt-get --just-print upgrade to find out how up-to-date the operating system is.

The Groovy solution

Groovy is very powerful and so my (educated) guess was that there must be something in the scripting language that could help me with this problem. After looking around for a bit, I found a Stack Overflow post and after tweaking the script from it a tiny bit, I ended up with the following script:

def sout = new StringBuilder(), serr = new StringBuilder()
def proc = 'apt-get --just-print upgrade'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(10000)
println "out>"
println "$sout"
println "err>"
println "$serr"

This Groovy script allows you to execute any shell command that a Liferay user could want to execute. It will print its output and error streams below the script portlet.

You can just change line 2 of the script and easily run other commands like ls, tail, cat, whoami, etc…

To reindex or not to reindex, that's the question

Technical Blogs March 21, 2016 By Jan Eerdekens

A little while ago I ran into a strange problem at a customer. We had written a hook that contained a small REST service that accepts a multipart POST to import a document and some related metadata/permissions. This service seemed to work OK and was used to import thousands of documents, but every now and then we received a call or an email from an end user complaining that they couldn't see a document that was imported. When we checked the document library of the group it was supposed to be in, we were able to find the document and verify that it was correctly uploaded, that it had the correct metadata and that the permissions were also correct.
 
 
So why isn't it showing up then? We quickly determined that it had to be the index, Lucene in our case, in some way or another. A first look at our code didn't turn up anything specific. We were using a couple of Liferay APIs to add the document to a certain group, set some expandos and permissions, but nothing too special or out of the ordinary (or so we thought...). After some further digging we finally found a small indicator: some documents in the index had an empty or wrongly filled groupRoleId field. When we then manually uploaded the same document over the existing one (mimicking a reindex for that specific document), without changing the expandos or permissions, the document would show up correctly again. In the end we found that running the following script, which only reindexes the permissions for a document, would also do the trick:
import com.liferay.portlet.documentlibrary.model.*;
import com.liferay.portal.kernel.search.*;

// Declare entryClassPK outside the try block so the catch block can reference it
String entryClassPK = "192363";

try {
   String className = DLFileEntry.class.getName();
   SearchEngineUtil.updatePermissionFields(className, entryClassPK);

   out.println("Reindexed: " + entryClassPK);
} catch (Exception e) {
   out.println("Failed to reindex: " + entryClassPK);
   e.printStackTrace();
}

So now we definitely knew that the indexing of the permissions was the issue, and after creating a JMeter load test I was even able to reproduce the problem consistently. When I ran the test, about 1 or 2 documents in a run of 1,000 would have an incorrect groupRoleId field value (empty, or only a groupId without a connecting dash and roleId). This is when we decided to open a support ticket, as we couldn't find the cause of the problem and were thinking it might be a bug.

With the friendly help of the Liferay support guys (thank you very much again!) we found out that we were actually working under an incorrect assumption about Liferay's inner workings. As it turns out, when you call multiple Liferay APIs after each other, like we were doing in our REST service, outside of Liferay's transactions, you might need to use TransactionCommitCallbackUtil to register a callback that runs after the initial API call has finished. In our case this was setting the correct permissions after the document was added. Even after working with Liferay all these years you still learn something new!
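To illustrate the pattern (this is a self-contained sketch, not Liferay's actual implementation), here's a hypothetical CommitCallbackDemo stand-in that mimics what TransactionCommitCallbackUtil does for you: work registered during the transaction is queued and only runs after the commit:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical stand-in for the mechanism behind TransactionCommitCallbackUtil:
// callbacks registered while a transaction is in progress are queued and only
// executed once that transaction has committed.
public class CommitCallbackDemo {

    final Deque<Callable<?>> callbacks = new ArrayDeque<>();
    final List<String> log = new ArrayList<>();

    void registerCallback(Callable<?> callback) {
        callbacks.add(callback);
    }

    void inTransaction(Runnable body) {
        body.run();        // e.g. the API call that adds the document
        log.add("commit"); // the surrounding transaction commits here
        while (!callbacks.isEmpty()) {
            try {
                callbacks.poll().call(); // deferred work, e.g. setting permissions
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        CommitCallbackDemo demo = new CommitCallbackDemo();
        demo.inTransaction(() -> {
            demo.log.add("addFileEntry");
            // Registered inside the transaction, runs only after the commit:
            demo.registerCallback(() -> demo.log.add("setPermissions"));
        });
        System.out.println(demo.log); // [addFileEntry, commit, setPermissions]
    }
}
```

The key point is that the permission work no longer races against the document's own transaction: it only runs once that transaction has actually committed.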

We then changed our REST service to use this utility class (for which there is precious little information/documentation to be found), but we needed something to verify that it now works correctly. For that I created a small standalone Java application, which can be run on the command line or in a cron job, that runs a query against a Lucene index and writes out a report. The code can be found here: https://github.com/planetsizebrain/index-checker.
 
It takes the following parameters (in order):
  • A Lucene index directory, in our case the Liferay data/lucene directory (you only need to provide the root, it will find the subdirectories itself). The index will also be opened as read-only and our tests suggest you can run it against a running Liferay's index.
  • The Lucene query to run (wildcards are turned on and allowed), e.g.: "entryClassName:com.liferay.portlet.documentlibrary.model.DLFileEntry AND visible:true AND (*:* AND NOT groupRoleId:*-*)"
  • The field of the result that you want inspected, e.g.: groupRoleId
  • The field values of the matching Lucene docs you want printed in the logs, e.g: "entryClassPK,groupId,title"
You can control where the log will be output by providing the following JVM parameter: -Dlog.directory=<directory-where-to-log> (for which we chose a directory that we can access via LFM). This will produce a log file that contains something like the following lines:
be.planetsizebrain.lucene.IndexChecker - Adding index directory '/path/to/liferay/data/lucene/10155' to search
be.planetsizebrain.lucene.IndexChecker - Adding index directory '/path/to/liferay/data/lucene/0' to search
be.planetsizebrain.lucene.IndexChecker - Found 5 possible incorrect documents, checking 'groupRoleId' field...
be.planetsizebrain.lucene.IndexChecker - Found document with empty value for 'groupRoleId': (entryClassPK: 11547), (groupId: 10916), (title: Code Complete 2nd Edition)
be.planetsizebrain.lucene.IndexChecker - Found document with empty value for 'groupRoleId': (entryClassPK: 11578), (groupId: 10916), (title: Liferay In Action)
be.planetsizebrain.lucene.IndexChecker - Found document with empty value for 'groupRoleId': (entryClassPK: 994268), (groupId: 12301), (title: Javascript - The Good Parts)
be.planetsizebrain.lucene.IndexChecker - Found document with empty value for 'groupRoleId': (entryClassPK: 600652), (groupId: 12355), (title: The Mythical Man-Month)
be.planetsizebrain.lucene.IndexChecker - Found document with empty value for 'groupRoleId': (entryClassPK: 995458), (groupId: 12355), (title: Effective Java 2nd Edition)
be.planetsizebrain.lucene.IndexChecker - Done. Found 0 incorrect and 5 empty entries
In our case, running this against an index with +/- 350K documents and -Xmx set to 64MB took about 10 seconds.
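The report above flags groupRoleId values that are empty or missing the connecting dash. Assuming the expected format is <groupId>-<roleId>, that check could be sketched like this (a hypothetical GroupRoleIdCheck class for illustration, not code from the actual tool):

```java
// Hypothetical re-creation of the field check performed by the index checker:
// a groupRoleId value is expected to look like "<groupId>-<roleId>", so empty
// values and values without a connecting dash are flagged.
public class GroupRoleIdCheck {

    enum Status { OK, EMPTY, INCORRECT }

    static Status check(String groupRoleId) {
        if (groupRoleId == null || groupRoleId.isEmpty()) {
            return Status.EMPTY;
        }
        int dash = groupRoleId.indexOf('-');
        // A dash at the start or end means one of the two IDs is missing.
        if (dash <= 0 || dash == groupRoleId.length() - 1) {
            return Status.INCORRECT;
        }
        return Status.OK;
    }

    public static void main(String[] args) {
        System.out.println(check("10916-10158")); // OK
        System.out.println(check(""));            // EMPTY
        System.out.println(check("10916"));       // INCORRECT
    }
}
```

The Lucene query shown earlier ("AND NOT groupRoleId:*-*") does the same thing at query time: it matches exactly the documents that would fail this check.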
 
Good luck trying this out and modifying it to your needs and hopefully you'll also find your needle in a haystack when the time comes!
 
 
PS: just saw this pop up in the Marketplace and it does something similar, but slightly different, from within Liferay: https://www.liferay.com/marketplace/-/mp/application/70121999

Pandora's Box for Liferay: CRaSH Portlet

Technical Blogs March 20, 2016 By Jan Eerdekens

After building a hook that wraps the FakeSMTP application and a portlet that gives you file management capabilities in Liferay, we now get to part 3 of the series: the CRaSH portlet. I must admit that I actually built this portlet first, but during its development I needed something to show/inspect emails and to upload, download and edit files, and so I ended up building the other apps too. It seems I have a bit of a problem staying on target and get easily distracted by something new and shiny.
 
But in the end I managed to get all 3 portlets finished and now it is finally time to write a blog post about the most important and powerful one: the CRaSH portlet (the way it is written is intentional, you'll see why a bit further down).
 
The idea occurred to me when I came across this blog post, https://www.exoplatform.com/company/en/resource-viewer/Getting-Started-Guide/introduction-to-crash-the-cache-visualization-use-case, where CRaSH was used for cache visualization on the eXo platform. As I was having a problem with a customer's Liferay environment where I needed to view cache hits/misses, but didn't have any access to the server except for Administrator access to the Control Panel, I wondered whether I could wrap CRaSH as a portlet and retrieve my cache information that way.
 

What is CRaSH?

CRaSH, http://www.crashub.org, is a project by Julien Viet that basically provides a shell environment for the Java platform. When embedded, it allows you to connect to a running JVM via SSH, Telnet or a web interface (which is what we'll be using) and run commands. It already comes with a bunch of commands on board, and you can easily add your own custom commands, written in Groovy or Java, to better integrate with the application you're embedding CRaSH in. You can add your custom commands directly to the portlet's WAR file, in WEB-INF/crash/commands, or you can use LFM to add them on the fly. The portlet will create a directory named crash in your Liferay root and will continuously try to load new scripts, or reload existing ones, from this location.
 
Now you might say: that seems pretty similar to the script portlet that is already available in Liferay, or even the Script Helper portlet that can be found in the Liferay Marketplace? Well, there are a couple of things that IMHO set CRaSH apart from the already available options:
  • Unlike scripts, CRaSH commands can take parameters
  • Commands can be chained/piped like in a shell
  • It has man like functionality that describes the available commands
  • It has tab completion for the commands, their parameters and even parameter values like file paths (and you can write your own completions)
  • Other shell-like functionality like a history, clear screen, ...
  • The result of a command can be updated on an interval like for example 'ps'
Out of the box you already have access to a bunch of commands (which it will list when you type 'help') like: dashboard, egrep, env, filter, jdbc, jmx, jndi, jvm, mail, and more. Below you can see the output of the dashboard command that shows a continuously updated view of thread info, your environment properties and your JVM memory usage:
 
 

Integrating CRaSH

For its web integration CRaSH uses websockets, and this is where I hit a little snag: the version of Tomcat in my Liferay bundle, 7.0.42, doesn't have official websocket support. It does, however, have an unofficial, pre-standard websocket implementation that we can use. With some small adjustments to the existing web integration code, and by using the Liferay PortalDelegateServlet, it wasn't too difficult to get CRaSH to work in a portlet. The visual part, the terminal view, is handled by a very interesting JS library: http://terminal.jcubic.pl. This library is able to give you a complete terminal view in a browser, complete with things like tab completion, history, shortcuts, etc., and the websocket integration we did is used to communicate between the browser and the CRaSH backend.
 
To further integrate CRaSH with Liferay I also made alternate versions of the OOTB jdbc and mail commands. They're called db and email and are basically copies of the original commands, but are changed to directly use the Liferay database and email system/configuration via the InfrastructureUtil class.
 
You can also SSH into the CRaSH portlet, on port 9999, and the authentication has been integrated with the Liferay authentication. Any user that has the Administrator role should be able to connect using their screen name and password, e.g.: ssh test@127.0.0.1 -p 9999
 

Working with the portlet

While the OOTB commands are already very useful, and could give me my cache information via the jmx command, I also made some additional ones to do some more interesting stuff and serve as examples of how to create your own:
  • company: shows a list of all the companies present in your Liferay installation
  • cluster: displays information about all cluster nodes
  • cache: a different non-JMX way to show EHCache cache names and their hits/misses (some fancy classloader magic was needed to make this work)
  • tasks: shows a list of running tasks
  • user: can show the list of all users and can also be used to create a user
  • file: can read/write CSV and JSON files
By combining the OOTB commands, the custom commands and the LFM portlet you can start to do some really interesting things, e.g.: 
  • Create a CSV file containing user information via the LFM portlet
  • Chain the file command with the user command to read this CSV and actually create the users in Liferay (you can see the 'Your new account' emails in FakeSMTP)
  • Retrieve all users, including your new one by executing: user all | file write -f JSON --fields MIXIN ../../crash/users.json
  • Download this JSON file via the LFM portlet and use it as the input from another program/process/...

 

Being able to do these kinds of things from within Liferay or via SSH made me feel a little like this:

 

Creating your own commands

As the system is very extensible I urge you to clone the project from GitHub, build the CRaSH portlet, deploy it and start experimenting! A simple example of a command to start with could be the echo command:
  • Open the LFM portlet and navigate to the crash directory
  • Create a new file named echo.groovy
  • Edit it and add the following content
import org.crsh.cli.Command
import org.crsh.cli.Usage

class echo {
   @Usage("echo text")
   @Command
   Object main(
         @Usage("the text to echo")
         @Argument String text) {
      if (text == null)
         text = "";
      return text;
   }
}
  • Save it and open the CRaSH portlet
  • Type help and you should see your new command appear in the list of commands
  • Type echo "Hello World" to output that text to the terminal

Starting from this base and the code you can find on GitHub should open a whole world of new possibilities within Liferay!

Pandora's Box for Liferay: LFM Portlet

Technical Blogs February 29, 2016 By Jan Eerdekens

Part 2 of this series is a short but sweet one. This portlet was born out of a very specific customer requirement: the need to be able to see log files for some users that have Liferay web access, but no actual OS-level access to the server itself. We first looked at using the Liferay Log Viewer portlet that is available in the Liferay Marketplace, and while it partly covered the requirement, it wasn't a really good fit.

So that's why we first developed a simple Control Panel portlet that allows an administrator to download files from a fixed set of directories: the Tomcat log directory and the Liferay log directory. While this perfectly satisfied the customer's requirement, it got me thinking: wouldn't it be nice if you could traverse the file tree a bit more and also execute some additional file actions, like editing a file or even creating a new one?

So that's how the LFM Portlet, the Liferay File Manager Portlet, was born. The quickly thrown together version you can find on Github, https://github.com/planetsizebrain/liferay-pandora-box/tree/master/lfm-portlet, allows you to do the following things:

  • view a file
  • edit a file
  • upload/download a file
  • delete a file/directory
  • add a new (empty) file
  • add a new directory
  • touch a file (aka trigger a redeploy when done on a web.xml)
  • whatever you want to add (the code is available on Github)

So now it not only allows you to download a log file, but you can even change a config file if you want, or perform a cleanup and redeploy of a module for which a hot deploy went wrong.

Because the portlet does everything as the user the JVM/Liferay was started with, you, or any user you give permission to use the portlet, can see everything on the file system that OS user is able to see. This means you'll be able to delete your Liferay installation or even mess up your data directory. So be careful who you give access to, think twice when using certain functions and always keep the following Spiderman quote in mind:

There are some additional resource actions that can be used to limit what certain users can do, but they only work on a global level and not on specific files or directories:

There's also one little extra (just because I wanted to try out Zeno Rocha's clipboard.js): when you click on the little symbol after the path, on top of the datatable, it will copy the actual path to your clipboard.

Pandora's Box for Liferay: FakeSMTP hook

Technical Blogs February 14, 2016 By Jan Eerdekens

This is part 1 of a series of blog posts about a collection of tools I created during the last couple of months as a reaction to some problems I ran into.

The first one is how to handle/debug emails while testing mail related functionality in Liferay. There is of course the standard option of configuring Liferay to use your company's mail server, or your personal email account at Google/Yahoo/etc., and using the mail.session.mail.debug=true property in your portal-ext.properties. While this works, it depends on external infrastructure and can be difficult to set up with regards to authentication. Even when you are able to get it set up correctly, network issues can still prevent your email from working correctly.

So preferably we'd like to work locally. While you could set up a local mail server, there is a simpler option: FakeSMTP. This is a Java based mail server that you can run locally as a JAR and that also gives you a GUI to view the email messages it receives. Because I'm a lazy developer, and would probably also forget to launch the JAR each time I start Liferay, I decided to try and wrap it in a hook so that it gets started automatically each time Liferay is started (or the hook is deployed).

Luckily this turned out to be pretty easy as FakeSMTP is available as a Maven artifact and so the combination of a ServletContextListener and a ProcessBuilder did the trick:

public class StartFakeSMTPListener implements ServletContextListener {

   private final Logger log = LoggerFactory.getLogger(StartFakeSMTPListener.class);

   private Process process;

   @Override
   public void contextInitialized(ServletContextEvent sce) {
      try {
         String pathToLib = sce.getServletContext().getRealPath("/WEB-INF/lib/fakesmtp-2.0.jar");

         ProcessBuilder processBuilder = new ProcessBuilder();
         processBuilder.redirectErrorStream(true);
         processBuilder.inheritIO();
         processBuilder.command("java", "-jar", pathToLib, "--start-server", "--port", "2525", "--bind-address", "127.0.0.1", "--memory-mode");

         process = processBuilder.start();
      } catch (Exception e) {
         log.error("Unexpected problem starting FakeSMTP", e);
      }
   }

   @Override
   public void contextDestroyed(ServletContextEvent sce) {
      if (process.isAlive()) {
         process.destroy();
      }
   }
}

This simple listener retrieves the path to the embedded FakeSMTP JAR and then uses a ProcessBuilder to launch the JAR with the correct configuration (it will listen for email locally on port 2525). You can find the code on my Github: https://github.com/planetsizebrain/liferay-pandora-box. When you build and deploy it to a Liferay that has been configured with mail.session.mail.smtp.port=2525 in its portal-ext.properties you should see something like this:

 

The Groovy script I used to send an email is:

import javax.mail.internet.*;
import com.liferay.portal.kernel.mail.*;
import com.liferay.mail.service.*;

MailMessage message = new MailMessage(new InternetAddress("from@liferay.com"), new InternetAddress("to@liferay.com"), "Dummy Subject", "Hello World!", false);
MailServiceUtil.sendEmail(message);

Feel free to use it to test the FakeSMTP hook with and if everything goes well: You've Got Mail.

Maven Madness in Liferay Land

Technical Blogs February 6, 2016 By Jan Eerdekens

In this post I want to share how you can use Maven to do 2 important things in a Liferay project: patching and creating a deploy package.
 
While this way of patching is a very powerful way to modify Liferay in an automated fashion (no manual WAR/JAR patching), you should use it as sparingly as possible. Always try to use more conventional ways of modding Liferay first, for example using a hook. But whenever that's not possible, or when, like me, you don't really like EXT or extlets, feel free to use the methods described in this post.
 
While this post will be focused on using Maven as a build tool to achieve the desired result, the concepts and techniques should also apply to other build tools like Ant, Gradle or Buildr (and if you're not already using a build tool... this might be the moment to start using one).
 

Patching

While it can produce similar results as the Liferay EE patching tool in some respects, the patching described in this section is a slightly different beast. They can even, as you'll see in the deploy package section, be used together. There are a number of different reasons to use this additional way of patching Liferay:
  • you can't use the Liferay patching tool because you're using Liferay CE
  • you can't wait for an official patch/fix and need a temporary workaround
  • you need to add or change Liferay files that you can't (or don't want to) change using a hook
  • ...
There are a lot of parts of Liferay that you might want/need to patch:
  • the portal WAR
  • the portal-service.jar
  • one of the other Liferay JARs that are contained in the portal WAR: util-java.jar, util-bridges.jar, ...
  • the most special case of all: a JAR that's included in another JAR that itself is included in the portal WAR aka 'overlay-ception'

The portal WAR

Let's start with the easiest case: patching the Liferay WAR file. What makes this the easiest one is that you can simply use the well known Maven WAR plugin's overlay ability to achieve the desired result. For most file types you just need to put them in the correct directory of your module for them to be correctly overlayed:
  • src/main/java: Java classes you want to override. They'll be put in the WEB-INF/classes directory and they'll be loaded before any classes from the WEB-INF/lib directory.
  • src/main/resources: non-Java source files that you want to end up in WEB-INF/classes like log4j configuration or a JGroups configuration file to configure your cluster to use unicast instead of multicast.
  • src/main/webapp: all files here will override their counterparts in the corresponding directory of the WAR (useful for overriding JSPs) or you can even use it to add new files (JS libs, images, ...). You can also add a WEB-INF subdirectory with modified versions of web/Liferay descriptors like web.xml or liferay-portlet.xml.
The Maven XML configuration needed for this is pretty simple: you just reference the Liferay WAR as a dependency with *runtime* scope. During the build Maven will unpack the WAR, copy over (aka overlay) everything from src/main and repackage it.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   ...
   <build>
        <finalName>portal-web</finalName>
      <plugins>
         <plugin>
            <artifactId>maven-war-plugin</artifactId>
            <configuration>
               <warName>${project.build.finalName}</warName>
               <outputFileNameMapping>@{artifactId}@.@{extension}@</outputFileNameMapping>
               <overlays>
                  <overlay>
                     <groupId>com.liferay.portal</groupId>
                     <artifactId>portal-web</artifactId>
                  </overlay>
               </overlays>
            </configuration>
         </plugin>
      </plugins>
    </build>

   <dependencies>
      <dependency>
            <groupId>com.liferay.portal</groupId>
            <artifactId>portal-web</artifactId>
            <!-- Include a specific version of Liferay with a release -->
            <version>${liferay.version}</version>
            <type>war</type>
            <scope>runtime</scope>
        </dependency>
   </dependencies>
</project>
You can even exclude some of the JARs that are already in the WEB-INF/lib directory, such as the JAI JARs, if your OS already provides them (or for some other reason). Excluding a JAR in the configuration section of the Maven WAR plugin and then adding a dependency on your overridden version in pom.xml makes sure your custom overlayed JAR gets added instead of the original one. If you just want to add an additional JAR you only need to add it as a dependency, e.g. a platform specific Xuggler JAR.
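A hedged sketch of what such an exclusion could look like in the overlay configuration (the JAI file names are assumptions; check WEB-INF/lib for the exact ones):

```xml
<overlays>
   <overlay>
      <groupId>com.liferay.portal</groupId>
      <artifactId>portal-web</artifactId>
      <excludes>
         <!-- keep the original JAI JARs out; our own version is added as a dependency -->
         <exclude>WEB-INF/lib/jai_core.jar</exclude>
         <exclude>WEB-INF/lib/jai_codec.jar</exclude>
      </excludes>
   </overlay>
</overlays>
```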
 

The portal service JAR

The next type of overlay is a JAR overlay. This is slightly more complicated, but can still be accomplished with a pretty standard Maven plugin: the Maven Dependency plugin. Its unpack goal, run during the prepare-package phase, allows us to unpack a JAR to the target directory of our module. We then need to provide the plugin with a list of files we want to exclude so that we can override them with our own. We'll calculate this list using the Maven Groovy plugin, based on the contents of the src/main/java directory.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   ...
   <build>
      <finalName>portal-service</finalName>
      <plugins>
         <plugin>
            <groupId>org.codehaus.gmaven</groupId>
            <artifactId>gmaven-plugin</artifactId>
            <executions>
               <execution>
                  <id>generate-patched-file-list</id>
                  <phase>compile</phase>
                  <goals>
                     <goal>execute</goal>
                  </goals>
                  <configuration>
                     <source>${basedir}/src/main/groovy/listfiles.groovy</source>
                  </configuration>
               </execution>
            </executions>
         </plugin>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
               <execution>
                  <id>unpack</id>
                  <phase>prepare-package</phase>
                  <goals>
                     <goal>unpack</goal>
                  </goals>
                  <configuration>
                     <artifactItems>
                        <artifactItem>
                           <groupId>com.liferay.portal</groupId>
                           <artifactId>portal-service</artifactId>
                           <version>${liferay.version}</version>
                           <type>jar</type>
                           <overWrite>true</overWrite>
                           <outputDirectory>${project.build.directory}/classes</outputDirectory>
                           <excludes>${patched.files}</excludes>
                        </artifactItem>
                     </artifactItems>
                  </configuration>
               </execution>
            </executions>
         </plugin>
      </plugins>
   </build>
   ...
</project>
The actual magic that calculates the list of exclusions happens in the listfiles.groovy file:
// Collect an exclusion pattern for every class we compiled ourselves
def files = []
new File(project.build.directory).eachFileRecurse() { file ->
   def s = file.getPath()
   if (s.endsWith(".class")) {
      // e.g. target/classes/com/liferay/Foo.class becomes **/Foo.class
      files << "**" + s.substring(s.lastIndexOf('/'))
   }
}

// Expose the list so the Dependency plugin can pick it up as ${patched.files}
project.properties['patched.files'] = files.join(",")
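For clarity, here is the same transformation the Groovy script performs, sketched in Java with made-up paths: every compiled class becomes a **/Name.class pattern that the Dependency plugin will exclude when unpacking the original JAR.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PatchedFileList {

    // Turn each compiled class path into a **/Name.class exclusion pattern,
    // mirroring what listfiles.groovy puts into ${patched.files}
    public static String excludePatterns(List<String> paths) {
        List<String> patterns = new ArrayList<>();
        for (String path : paths) {
            if (path.endsWith(".class")) {
                patterns.add("**" + path.substring(path.lastIndexOf('/')));
            }
        }
        return String.join(",", patterns);
    }
}
```

Note that the resulting pattern matches on the simple class name only, so an original class with the same name in a different package would be excluded as well.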
The other Liferay JAR files, like util-java.jar, util-bridges.jar, ... or basically any other JAR (e.g.: I patched the PDFBox library in Liferay like this) can be overlayed in a similar way.
 

Overlay-ception

The third and most peculiar overlay we might need is related to something special Liferay does while hot deploying a WAR. Depending on the contents of your portlet's liferay-plugin-package.properties file, the Liferay deployer will copy one or more JARs (util-bridges.jar, util-java.jar, util-slf4j.jar and/or util-taglib.jar), which are embedded in the /com/liferay/portal/deploy/dependencies folder of portal-impl.jar, over to the WEB-INF/lib directory of your deployed module. That folder also contains a bunch of other files, like DTDs, that you might want/need to override and that are also copied over in some cases, depending on your configuration.
 
If you, for example, had to override util-taglib.jar to fix something, you not only need to make sure your customized version gets included in the overlayed WAR file, but you also need to put it in the correct location in your overlayed portal-impl.jar, which in turn needs to be included in the overlayed WAR file. For this we'll again use the Maven Dependency plugin, but instead of just 1 execution that unpacks the JAR we add a 2nd execution that uses the copy goal to copy all provided artifactItems (basically dependencies) to the correct location inside the target directory. So first it unpacks the original contents of the JAR, then overlays them with any custom classes/resources and lastly copies over any overlayed libraries.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   ...
   <build>
      <finalName>portal-impl</finalName>
      <plugins>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
               ...
               <execution>
                  <id>copy-overlay</id>
                  <phase>generate-sources</phase>
                  <goals>
                     <goal>copy</goal>
                  </goals>
                  <configuration>
                     <artifactItems>
                        <artifactItem>
                           <groupId>be.planetsizebrain.liferay</groupId>
                           <artifactId>util-taglib-overlay</artifactId>
                           <outputDirectory>${project.build.directory}/classes/com/liferay/portal/deploy/dependencies</outputDirectory>
                           <destFileName>util-taglib.jar</destFileName>
                        </artifactItem>
                     </artifactItems>
                  </configuration>
               </execution>
            </executions>
         </plugin>
      </plugins>
   </build>

   <dependencies>
      <dependency>
         <groupId>com.liferay.portal</groupId>
         <artifactId>portal-impl</artifactId>
      </dependency>
      <dependency>
         <groupId>be.planetsizebrain.liferay</groupId>
         <artifactId>util-taglib-overlay</artifactId>
      </dependency>
      ...
   </dependencies>
</project>
When I finally figured out how to get the above to work I was like: 
 

Deploy package

One last special thing we'll use Maven for is creating what we call a deploy package: a ZIP file that contains everything needed to do a deploy on a server, or even locally on a developer's machine. While there are currently alternative options for this, like Docker, we find that at a lot of customers these aren't always an option. Our deploy package solution works on just about any Unix based OS as it only uses (Bash) scripting. Some nice additional functionality is possible if your servers have access to some sort of Maven repository, like a Nexus or Artifactory, but it isn't needed per se. If you have access to a repository you only need to put the deploy script on the server and it will download the deploy package for you and install it. If you don't have access to a repository, you'll need to download the package yourself, transfer it to the server and then run the script.
 

Building the package

The packaging of the ZIP is done using the Maven Assembly plugin in the deploy module. The deploy module needs to be the last module in the list of modules of your root pom.xml so it can correctly reference all the build artifacts. Because we want to use this package, together with the script described in the next section, to produce a reproducible deploy, it really needs to contain just about everything and the kitchen sink:
  • a base Liferay Tomcat bundle
  • any hotfixes/patches/security fixes
  • the patching tool itself (if you need a newer version than available in the bundle)
  • a database driver (unless already present in the bundle above)
  • your overlayed Liferay WAR or JARs
  • all your custom portlets, hooks, themes & layouts
  • any Marketplace portlets you use (possibly overlayed using the techniques described above)
  • all the configuration files, like the portal-ext.properties, Tomcat's setenv.sh, license files, etc..., for all environments (so 1 bundle can be used)
Basically anything you can put in a Maven repository and can reference in your deploy module's pom.xml can be packaged into the deploy package. If your server environment has access to a Maven repository, some things, like the base Liferay Tomcat bundle and the patching tool, can be left out of the package as they can be downloaded just in time by the deploy script described in the next section.
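In practice the deploy module's pom.xml is then mostly a long list of dependencies; a minimal sketch with made-up group/artifact IDs (the war and zip types map onto the /wars and /patches dependencySets of the assembly descriptor):

```xml
<dependencies>
   <!-- our overlayed portal WAR and custom plugins end up in /wars -->
   <dependency>
      <groupId>be.example.liferay</groupId>
      <artifactId>example-portlet</artifactId>
      <version>${project.version}</version>
      <type>war</type>
   </dependency>
   <!-- hotfixes/patches end up in /patches -->
   <dependency>
      <groupId>be.example.liferay</groupId>
      <artifactId>example-hotfix</artifactId>
      <version>1.0.0</version>
      <type>zip</type>
   </dependency>
</dependencies>
```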
 
This deploy module is also an excellent place to create an escrow package, which contains all the source code, resources and libraries needed to recreate a specific version of your portal. How to do this can be found in an older blog post of mine: Creating an escrow package with Maven.
 
How the actual package is made is defined in the assembly.xml file that is used to configure the Maven Assembly plugin. In this file we define the format of the artifact, ZIP, and where all the dependencies, defined in the module's pom.xml, need to be put in this ZIP file. In the example you'll see that:
  • using a fileSet all the configuration files, found in src/main/resources, are put in the /liferay subdirectory
  • using a dependencySet all the WAR files, defined as dependencies in pom.xml, are put in the /wars subdirectory
  • using a dependencySet all the ZIP files, defined as dependencies in pom.xml, are put in the /patches subdirectory
<assembly>
   <id>portal-release-bundle</id>
   <formats>
      <format>zip</format>
   </formats>
   <includeBaseDirectory>false</includeBaseDirectory>
   <fileSets>
      <fileSet>
         <directory>${basedir}/src/main/resources</directory>
         <outputDirectory>liferay</outputDirectory>
         <includes>
            <include>**/*</include>
         </includes>
         <useDefaultExcludes>true</useDefaultExcludes>
         <fileMode>0644</fileMode>
         <directoryMode>0755</directoryMode>
      </fileSet>
   </fileSets>
   <dependencySets>
      <dependencySet>
         <includes>
            <include>*:war</include>
         </includes>
         <outputDirectory>/wars</outputDirectory>
         <outputFileNameMapping>${artifact.artifactId}.war</outputFileNameMapping>
         <useProjectArtifact>false</useProjectArtifact>
      </dependencySet>
      <dependencySet>
         <includes>
            <include>*:zip</include>
         </includes>
         <outputDirectory>/patches</outputDirectory>
         <outputFileNameMapping>${artifact.artifactId}.zip</outputFileNameMapping>
         <useProjectArtifact>false</useProjectArtifact>
      </dependencySet>
   </dependencySets>
</assembly>
You can see that with some simple changes to the assembly.xml you could create additional/other subdirectories containing other sets of configuration files/dependencies to suit your needs.
 

The deploy script

As this really is just a script, it can basically do anything you like and can easily be modified to fit your specific needs, the server environment's constraints, etc. The example script you can find on GitHub performs the following actions (in order):
  • Ask for some values (that can also be provided on the commandline):
    • Which repository you want to download the package from (snapshots, releases, local)
    • Which version of the package you want to deploy (dynamically build list if not provided on command line)
    • Which environment you'll be deploying on
    • Which node you're deploying on, in case it is a clustered environment
  • Stop the Tomcat server (and kill the process if it doesn't stop nicely)
  • Backup some files like the Lucene index or the document library
  • Remove the whole Liferay bundle
  • Download and unzip the base Liferay bundle in the correct location
  • Download and unzip the requested deploy package in a temporary location
  • Overlay our Liferay WAR overlay from the temporary location over the unzipped Liferay bundle
  • Download, unzip & configure the latest Liferay patching tool
  • Restore the backed up files
  • Clean some Tomcat directories
  • Overlay our environment specific files over Tomcat/Liferay
  • Install Liferay patches/hotfixes/...
  • Copy all our custom WARs to the deploy directory
  • Remove the temporary directory where we unzipped the deploy package
  • Start the Tomcat server again
Depending on the actual project we've added things like Liquibase upgrades, NewRelic installation & configuration, copying additional libraries to Tomcat's lib/ext, ... or just about anything we needed, because the scripting system is so flexible. While it might not be as fancy as Docker and friends, it still accomplishes (except for the OS part) about the same result: a perfectly reproducible deploy.
 

Conclusion

While working all this out in Maven I might have had the occasional, but typical, f*ck Maven (╯°□°)╯︵ ┻━┻ moment
 
 
but after figuring out all of this JAR & WAR monkeypatching, combined with the deploy package system/script, it ended up being quite flexible & powerful and made me one happy code monkey in the end!
 
All the example code from this post and more can be found on Github: https://github.com/planetsizebrain/monkeypatching.

Where am I? (how to know which cluster node you're on)

Technical Blogs January 17, 2016 By Jan Eerdekens

When working with a cluster you pretty quickly come to a point where you want/need to know what cluster node you're on.

Liferay already has a simple way to show you this. You just need to add the following line to your portal-ext.properties and hey presto! it shows the current web server node ID in a blue box at the bottom of the page:

web.server.display.node=true

While this works perfectly it has one drawback: customers don't really want to see this in production, while we sometimes need it to be able to debug a problem. There are a number of ways to solve this catch-22 type of situation:

  • Leave the property turned on but hide the blue box using CSS. This allows you to check the value by using the inspect element functionality found in most browsers these days.
  • Turn off the property and add a hidden page that contains a web content display portlet that uses a simple Velocity/Freemarker template to show the same value (which according to the file Liferay uses, /html/portal/layout/view/common.jspf, is PortalUtil.getComputerName()).

I want to present a 3rd option: extend the dockbar with a new icon that allows you to display the node ID (and other cluster node specific data). For this we'll need to override /html/portlet/dockbar/view.jsp and add some code to it.

First is the button which we'll add right before the Toggle Edit Controls button:

<c:if test="<%= showEditControls %>">
   <portlet:renderURL var="editLayoutURL" windowState="<%= LiferayWindowState.EXCLUSIVE.toString() %>">
      <portlet:param name="struts_action" value="/dockbar/edit_layout_panel" />
      <portlet:param name="closeRedirect" value="<%= PortalUtil.getLayoutURL(layout, themeDisplay) %>" />
      <portlet:param name="selPlid" value="<%= String.valueOf(plid) %>" />
   </portlet:renderURL>

   <aui:nav-item anchorId="editLayoutPanel" cssClass="page-edit-controls" data-panelURL="<%= editLayoutURL %>" href="javascript:;" iconCssClass="icon-edit" label="edit" />
</c:if>

<c:if test="<%= PropsValues.WEB_SERVER_DISPLAY_NODE %>">
   <aui:nav-item cssClass="toggle-cluster-link" iconCssClass="icon-map-marker" />
</c:if>

<c:if test="<%= showToggleControls %>">
   <aui:nav-item anchorCssClass="toggle-controls-link" cssClass="toggle-controls" iconCssClass='<%= "controls-state-icon " + (toggleControlsState.equals("visible") ? "icon-eye-open" : "icon-eye-close") %>' id="toggleControls" />
</c:if>

Secondly we'll need some CSS and JavaScript (which we add to the existing aui:script tag) to add an action to our new button that shows a popover with the information we want:

<style type="text/css">
   .popover, .aui .popover.right .arrow {
      z-index: 1000 !important;
   }
</style>

<%
   StringBuilder nodeInfo = new StringBuilder();
   nodeInfo.append("<ul>");
   nodeInfo.append("<li>Host: " + PortalUtil.getComputerName() + "</li>");
   nodeInfo.append("<li>ID: " + com.liferay.portal.kernel.cluster.ClusterExecutorUtil.getLocalClusterNode().getClusterNodeId() + "</li>");
   nodeInfo.append("<li>IP: " + com.liferay.portal.kernel.cluster.ClusterExecutorUtil.getLocalClusterNode().getInetAddress() + "</li>");
   nodeInfo.append("</ul>");
%>

<aui:script position="inline" use="liferay-dockbar,aui-popover">
   Liferay.Dockbar.init('#<portlet:namespace />dockbar');

   var customizableColumns = A.all('.portlet-column-content.customizable');

   if (customizableColumns.size() > 0) {
      customizableColumns.get('parentNode').addClass('customizable');
   }

   // Add cluster node ID
   var trigger = A.one('.toggle-cluster-link');
   if (trigger != null) {
       var popover = new A.Popover(
         {
           align: {
             node: trigger,
             points:[A.WidgetPositionAlign.LC, A.WidgetPositionAlign.RC]
           },
           bodyContent: '<%= nodeInfo %>',
           position: 'right',
           visible: false
         }
       ).render();

       trigger.on(
         'click',
         function() {
           popover.set('visible', !popover.get('visible'));
         }
       );

      A.one('body').on(
          'key',
          function() {
             // http://keycode.info/
              popover.set('visible', !popover.get('visible'));
          },
          'down:78+shift'
       );
   }

</aui:script>

The JavaScript code above also adds a keyboard shortcut, SHIFT+N, to the button so that you can show the popover without even needing to click it. You can find the complete file here: https://gist.github.com/planetsizebrain/3f9788a78224be73d80b. This will give you the following result:

If you want something similar, but don't want to add a button to the dockbar, you could use similar code to make the existing blue bar appear/disappear when a shortcut is pressed.

Patching tool autocomplete for Bash

Technical Blogs December 27, 2015 By Jan Eerdekens

If, like me, you work on a non-Windows machine, an Apple MacBook Pro with OS X in my case, you are probably used to having TAB completion in your terminal. I've caught myself multiple times lately hitting TAB when working with the Liferay patching tool and then just getting a file listing instead of the list of possible commands. This is of course normal, as the patching tool isn't a default OS level tool for which Bash has standard autocomplete support built in. As I'm not that good at remembering a lot of shortcuts/commands (it seems like for every one I push into my brain an older one falls out), I decided to see if I could add autocompletion for the patching tool to Bash.

Some Googling showed that this shouldn't be that hard, as we only need command completion, not option/switch completion or other more complicated scenarios. The following information is OS X specific but, except for the Homebrew parts, should easily translate to Linux based systems.

To add completion for Liferay's patching tool we'll need to use the Bash Programmable Completion project. On OSX this can be easily installed via a simple Homebrew command: brew install bash-completion.

Warning: You are using OS X 10.11.
We do not provide support for this pre-release version.
You may encounter build failures or other breakage.
==> Downloading https://bash-completion.alioth.debian.org/files/bash-completion-1.3.tar.bz2
######################################################################## 100.0%
==> Patching
patching file bash_completion
==> ./configure --prefix=/usr/local/Cellar/bash-completion/1.3
==> make install
==> Caveats
Add the following lines to your ~/.bash_profile:
  if [ -f $(brew --prefix)/etc/bash_completion ]; then
    . $(brew --prefix)/etc/bash_completion
  fi

Homebrew's own bash completion script has been installed to
  /usr/local/etc/bash_completion.d

Bash completion has been installed to:
  /usr/local/etc/bash_completion.d
==> Summary
🍺  /usr/local/Cellar/bash-completion/1.3: 188 files, 1.1M, built in 4 seconds

This will install the necessary stuff on your machine, but leaves 1 task to be done manually. You will need to add the following lines to your own .bash_profile file:

if [ -f $(brew --prefix)/etc/bash_completion ]; then
    . $(brew --prefix)/etc/bash_completion
fi

The next, and already last, step is to create a custom autocomplete script file with the name patching-tool.sh (remember to make it executable) in /usr/local/etc/bash_completion.d that has the following content (Gist): 

_patching-tool()
{
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "auto-discovery diff help index-info info install list-collisions list-customizations list-plugins revert server-status setup store support-info update-plugins version" -- $cur) )
}
complete -F _patching-tool patching-tool.sh

Once you have all this in place and reload your Bash profile you should have basic patching tool autocompletion like this:

Liferay - The Missing Parts: web content article priority

Technical Blogs August 30, 2015 By Jan Eerdekens

When working with the web content management part of Liferay you quickly start using the Asset Publisher because of its versatility. This portlet enables you to gather and present different kinds of content in a multitude of ways. It can gather content dynamically, by means of a search in a certain scope on one or more asset types, filtering on tags/categories, or manually, where you select a fixed set of content items yourself.

When selecting content dynamically you're able to define how the results will be sorted, based on certain fields, and when selecting manually you define the order yourself. What you will quickly discover however is that sometimes you need something in between: you want to do a dynamic selection, but have a more specific way to define the order. The easiest way would be to use an already existing field where you could enter some sort of number, a weight, that defines how content needs to be sorted. When looking at the Asset Publisher configuration it seems there is already something for that: a field with the name priority. Only when you try to use it, you'll notice that there is no way, except directly in the database, to fill in the priority on a web content article.

This blog post will show how you can easily provide a way to fill in this field so that it can be used in the Asset Publisher to get articles sorted the way we want. A simple hook can be used to add this missing functionality to Liferay. In this hook we'll use 2 of the most used Liferay extension mechanisms, JSP overrides & service overrides, to reach our goal. The JSP override consists of two parts: one adds a priority menu item to the list of actions you see on the right of an article when editing it:

...
if (Validator.isNotNull(toLanguageId)) {
    mainSections = PropsValues.JOURNAL_ARTICLE_FORM_TRANSLATE;
} else if ((article != null) && (article.getId() > 0)) {
    mainSections = PropsValues.JOURNAL_ARTICLE_FORM_UPDATE;
} else if (classNameId > JournalArticleConstants.CLASSNAME_ID_DEFAULT) {
    mainSections = PropsValues.JOURNAL_ARTICLE_FORM_DEFAULT_VALUES;
}

// Add priority section
mainSections = ArrayUtil.append(mainSections, "priority");

String[][] categorySections = {mainSections};
...

and the other is a small JSP that shows the actual priority input field that can be filled in by the user (it also has some validation, as the priority is supposed to be a positive double).

<%@ include file="/html/portlet/journal/init.jsp" %>
<%
   JournalArticle article = (JournalArticle) request.getAttribute(WebKeys.JOURNAL_ARTICLE);

   String priority = "0";
   if (article != null) {
      AssetEntry assetEntry = AssetEntryLocalServiceUtil.getEntry(JournalArticle.class.getName(), article.getResourcePrimKey());
      priority = Double.toString(assetEntry.getPriority());
   }
%>

<aui:input name="priority" type="text" value="<%= priority %>">
   <aui:validator name="number" />
   <aui:validator name="min">[0]</aui:validator>
</aui:input>

Once these JSPs are in place we also need to override Liferay's JournalArticleLocalService so that we can actually store the priority value once the user has filled it in. This means providing an implementation that extends from JournalArticleLocalServiceWrapper (some method parameters have been replaced by ... in the code below for readability reasons):

package be.aca.hook.service;

import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.service.ServiceContext;
import com.liferay.portlet.asset.model.AssetEntry;
import com.liferay.portlet.asset.service.AssetEntryLocalServiceUtil;
import com.liferay.portlet.journal.model.JournalArticle;
import com.liferay.portlet.journal.service.JournalArticleLocalService;
import com.liferay.portlet.journal.service.JournalArticleLocalServiceWrapper;
import java.io.File;
import java.util.Locale;
import java.util.Map;

public class JournalArticleLocalServiceImpl extends JournalArticleLocalServiceWrapper {

    private static final String PRIORITY = "priority";

    public JournalArticleLocalServiceImpl(JournalArticleLocalService journalArticleLocalService) {
        super(journalArticleLocalService);
    }

    @Override
    public JournalArticle addArticle(long userId, long groupId, long folderId, long classNameId, ..., ServiceContext serviceContext) throws PortalException, SystemException {
        JournalArticle article = super.addArticle(userId, groupId, folderId, classNameId, ..., serviceContext);
        setPriority(article, serviceContext);
        return article;
    }
   
    @Override
    public JournalArticle updateArticle(long userId, long groupId, long folderId, String articleId, ..., ServiceContext serviceContext) throws PortalException, SystemException {
        JournalArticle article = super.updateArticle(userId, groupId, folderId, articleId, ..., serviceContext);
        setPriority(article, serviceContext);
        return article;
    } 

    // 2nd updateArticle overload; the full signatures differ in the elided parameters
    @Override
    public JournalArticle updateArticle(long userId, long groupId, long folderId, String articleId, ..., ServiceContext serviceContext) throws PortalException, SystemException {
        JournalArticle article = super.updateArticle(userId, groupId, folderId, articleId, ..., serviceContext);
        setPriority(article, serviceContext);
        return article;
    }
 
    private double getPriority(ServiceContext serviceContext) {
        String priority = (String) serviceContext.getAttribute(PRIORITY);
        return priority != null ? Double.parseDouble(priority) : 0;
    }  

    private void setPriority(JournalArticle article, ServiceContext serviceContext) throws SystemException, PortalException {
        double priority = getPriority(serviceContext);
        AssetEntry assetEntry = AssetEntryLocalServiceUtil.getEntry(JournalArticle.class.getName(), article.getResourcePrimKey());
        assetEntry.setPriority(priority);
        AssetEntryLocalServiceUtil.updateAssetEntry(assetEntry);
    }
} 

and reference it in the liferay-hook.xml together with the location of the overridden JSPs

<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.1.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_1_0.dtd">
<hook>
    <custom-jsp-dir>/jsps</custom-jsp-dir>
    <service>
        <service-type>com.liferay.portlet.journal.service.JournalArticleLocalService</service-type>
        <service-impl>be.aca.hook.service.JournalArticleLocalServiceImpl</service-impl>
    </service>
</hook>

You can see the service is pretty simple: it just takes the article that was created/edited, finds the corresponding AssetEntry and sets the priority that was extracted from the ServiceContext on it. In a similar way, using an overridden JSP/service combination, you could add the same thing to documents (or any other Liferay object that has a linked AssetEntry). With this hook deployed you now have the option to fill in a priority when creating or editing an article, by going to the Priority menu on the right hand side of the screen and filling in the input field. When you then configure the Asset Publisher to use this value for sorting, you actually get the behavior you would expect of the portlet in the first place (without customisation that is).

The code for this post can be found on GitHub: priority-hook

Better PDF previews in Liferay without ImageMagick

Technical Blogs April 29, 2015 By Jan Eerdekens

The problem

While doing some work for a client I ran into issues with the preview generation for some PDFs. While Liferay did generate a preview image for each page of the document, the text in it was, to say the least, strange. In the table below you can see a screenshot of the first page of the problematic PDF, the preview that is generated by Liferay and the preview we're able to generate after our hack:

[Table: screenshots of the actual PDF, the preview before the hack and the preview after the hack]

The biggest problem is with the font, but the background is also a bit screwy. So my initial thought was that the PDF might be using some special font(s) and didn't (correctly) embed them. My first inclination was therefore to see how to add fonts to the system that could be picked up by whatever Liferay was using to generate the preview images. The default Liferay uses to generate previews of PDF files is a pure Java library called PDFBox. There's also the option of using an OS native install of ImageMagick (and Ghostscript), but that would require at least an additional 2 GB of memory outside of the JVM allocation. As this wasn't an option in this case, I first looked into the font option. While this does seem to be possible in PDFBox, by editing the PDFBox_External_Fonts.properties file that can be found inside the JAR and adding the additional fonts, I couldn't quite get it to work: instead of strange/wrong characters I now got no characters at all.

After some more Googling it seems that the PDFBox that Liferay 6.2 uses, which is version 1.8.2, is known for having a lot of font issues. Most of these seem to be better/fixed in the 2.0.0 version... sadly enough this version hasn't been released yet. But in cases like this you sometimes need to take a page out of Ayrton Senna's book and push the limit a bit:

"On a given day, a given circumstance, you think you have a limit. And you then go for this limit and you touch this limit, and you think, 'Okay, this is the limit'. And so you touch this limit, something happens and you suddenly can go a little bit further. With your mind power, your determination, your instinct, and the experience as well, you can fly very high." -- Ayrton Senna

Here be dragons: what I'll be describing below is messing around with a SNAPSHOT version of an unreleased PDFBox version while simultaneously hacking Liferay a bit to get it all working together. For the versions I used this all seems to work nicely, but trying this out yourself, especially in a production environment, is completely at your own risk.

The solution

As you might expect, you can't just drop in a snapshot version of the PDFBox 2.0.0 JAR and expect everything to be solved. It doesn't quite work like that, and here are the reasons why:

  • There are actually 3 related JARs: pdfbox.jar, fontbox.jar & jempbox.jar
  • PDFBox isn't only used for preview generation by Liferay, but also for stripping the text out of PDFs for indexing
  • API incompatibilities

While there are 3 JARs in Liferay's lib directory that are part of PDFBox, only the first two I mentioned are actually used in the preview generation and will need to be switched out. The jempbox JAR is used for PDF XMP metadata extraction, but can be left in its original state (in the 2.0.0 version jempbox has been renamed to xmpbox). If you take a 2.0.0-SNAPSHOT build of these two JARs, use them to replace the ones in the Liferay WEB-INF/lib directory and restart, you'll run into the other two problems I mentioned.

From the stack trace you get, you'll see that not only is Liferay using PDFBox for preview and thumbnail generation, it also uses it, via Apache Tika, for text extraction. Tika uses PDFBox's PDFTextStripper class (and some auxiliary ones) for this, which were moved from the package org.apache.pdfbox.util to org.apache.pdfbox.text for PDFBox 2.0.0. Because we don't want to also patch Tika, we'll just move those classes back to their original package and call it a day.

This brings us to our last, but also biggest, problem: there have been some significant code changes/refactorings in PDFBox between version 1.8.2 and the snapshot we'd like to use. The first change we run into is related to the previous problem. In version 1.8.2 the PDFTextStripper class had a method setForceParsing(boolean) which isn't present anymore in 2.0.0, but which we'll just add back with an empty implementation:

public void setForceParsing(boolean forceParsingValue) {
    // NO-OP to comply with old signature
}

While it is a bit strange to solve a NoSuchMethodException like this, it seems it wasn't a really critical part, because afterwards the text extraction worked again like before. This means we can finally get to the important part: fixing the API incompatibilities in the Liferay PDF preview generation code in the LiferayPDFBoxConverter class. Due to some refactorings in PDFBox, this class won't find the getAllPages() method on PDDocumentCatalog anymore. To fix this you'll need to take the source of this class, modify it and then replace the original class, located in Liferay's portal-impl.jar, with your modified one. We did this using a fancy JAR/WAR overlay system that we use in our build/deploy system (which we'll cover in a blog post someday), but there are of course other ways to do this: manually patching the JAR, an extlet, ... .

When you add the source of the LiferayPDFBoxConverter class to a simple project you'll also see that some other stuff won't compile because of missing/changed methods. For this we'll need to make some changes to the generateImagesPB() methods so they look like the ones below:

public void generateImagesPB() throws Exception {
   PDDocument pdDocument = null;
 
   try {
      pdDocument = PDDocument.load(_inputFile);
 
      PDDocumentCatalog pdDocumentCatalog =
         pdDocument.getDocumentCatalog();
 
      PDFRenderer pdfRenderer = new PDFRenderer(pdDocument);
 
      PDPageTree pdPages = pdDocumentCatalog.getPages();
 
      for (int i = 0; i < pdPages.getCount(); i++) {
         if (_generateThumbnail && (i == 0)) {
            _generateImagesPB(
               pdfRenderer, i, _thumbnailFile, _thumbnailExtension);
         }
 
         if (!_generatePreview) {
            break;
         }
 
         _generateImagesPB(pdfRenderer, i, _previewFiles[i], _extension);
      }
   }
   finally {
      if (pdDocument != null) {
         pdDocument.close();
      }
   }
}
 
private void _generateImagesPB(
      PDFRenderer pdfRenderer, int index, File outputFile, String extension)
   throws Exception {
 
   RenderedImage renderedImage = pdfRenderer.renderImageWithDPI(index, _dpi, ImageType.RGB);
 
   ImageTool imageTool = ImageToolImpl.getInstance();
 
   if (_height != 0) {
      renderedImage = imageTool.scale(renderedImage, _width, _height);
   }
   else {
      renderedImage = imageTool.scale(renderedImage, _width);
   }
 
   outputFile.createNewFile();
 
   ImageIO.write(renderedImage, extension, outputFile);
}

With this modified class in place and the updated and tweaked PDFBox JARs, the PDF preview generation (and text extraction) should work again and produce far better results than before. To make the lives of the developers that, like me, like to live on the edge a bit easier, here's some helpful code:

 

More blogs on Liferay and Java via http://blogs.aca-it.be.

More Angular Adventures in Liferay Land

Technical Blogs March 19, 2015 By Jan Eerdekens

Last year I spent a lot of time looking into how AngularJS could be used in a portal environment, more specifically Liferay. That research culminated in a pretty technical and quite lengthy blog post: Angular Adventures in Liferay Land. The problem with a blog post that you invest such an amount of time in is that it always remains in the back of your head, especially if there were some things that you couldn't quite get to work like you wanted to. So like a lot of developers I keep tinkering with the portlet, trying to make it better, trying to solve some problems that weren't solved to my liking. 

So after a couple of months I ended up with a better portlet and enough material for a new blog post. In this second Angular/Liferay blog post I will address a couple of things:
  • the one thing I didn't quite get to work (and only found an ugly workaround for): routing
  • a better way to do i18n (after someone pointed out a problem with the current implementation)
  • validation (and internationalized validation) 
  • splitting up Javascript files, for readability, and merging them during the build
You can still find the example portlet on Github: angular-portlet
 

From here to there: the right way

During the testing I did for the previous post I wasn't able to get the widely used Angular UI Router module to work. I looked at a number of alternative modules, but no matter what I tried, I couldn't get any of them to work. So in the end I had to resort to a very ugly workaround that (mis)used ng-include to provide a rudimentary page switching capability. While it works and kinda does the job for a small portlet, it didn't feel right. During a consulting job at a customer that wanted to look into using AngularJS in a portal environment, the routing problem was one of the things we looked into together and managed to get working this time. 

To start off you need to download the UI Router JS file, add it to the project and load it using the liferay-portlet.xml:

<?xml version="1.0"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.2.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_2_0.dtd">

<liferay-portlet-app>
   <portlet>
      ...
      <footer-portal-javascript>/angular-portlet/js/angular-ui-router.js</footer-portal-javascript>
      ...
   </portlet>
</liferay-portlet-app>

After this it takes a couple of small, but important, changes to get the UI Router code that I tried during the first tests to work this time. As I expected previously, the UI Router $locationProvider needs to be put into HTML5 mode so it doesn't try to mess around with our precious portal URLs. This mode makes sure it doesn't try to change the URL, but it will still sometimes add a # to the URL. To counter this you also need to slightly tweak the $urlRouterProvider so it uses '/' as the otherwise option, something you also need to set on the url property of your base state (but not on the others):

var app = angular.module(id, ["ui.router"]);

app.config(['$urlRouterProvider', '$stateProvider', '$locationProvider',
   function($urlRouterProvider, $stateProvider, $locationProvider) {

      ...
      $locationProvider.html5Mode(true);
      $urlRouterProvider.otherwise('/');

      $stateProvider
         .state("list", {
            url: '/',
            templateUrl: 'list',
            controller: 'ListCtrl'
         })
         ...
   }
]);
After these changes you'll then be able to use a pretty default $stateProvider definition, where we use the templateUrl field to define the page that is linked to the state. These changes alone still won't make the routing work though: we need another small change to Liferay to make the HTML5 mode work, as this needs a correctly set base href in the HTML. This can be achieved in multiple ways in Liferay: a JSP override hook, a theme, ... . To keep everything nicely encapsulated in this portlet, I've chosen to use a simple JSP override in the portlet itself and not in a separate hook.
 
An override of /html/common/themes/top_head.jsp will enable you to add the required <base href='/'> tag on a global level to the portal HTML code:
<%@ taglib uri="http://liferay.com/tld/util" prefix="liferay-util" %>

<%@ page import="com.liferay.portal.kernel.util.StringUtil" %>

<liferay-util:buffer var="html">
   <liferay-util:include page="/html/common/themes/top_head.portal.jsp" />
</liferay-util:buffer>

<%
   html = StringUtil.add(
         html,
         "<base href='/'>",
         "\n");
%>

<%= html %>
Just add this file to your portlet, with the correct subdirectories, in /src/main/webapp/custom_jsps and configure this directory in your liferay-hook.xml:
<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.2.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_2_0.dtd">
<hook>
    <custom-jsp-dir>/custom_jsps</custom-jsp-dir>
</hook>
With all this set up we now just need one more small change to tie it all together. The state configuration we currently have still doesn't take into account the fact that we're on a portal and need to use special URLs. Luckily the UI Router module has got us covered and provides a nice event, $stateChangeStart, that we can use to mess around with the URLs. We'll use this event to detect if the normal URL, used in the state, has already been adapted for portal use or not. If not we'll create a correct portal render URL based on the given template URL.
app.run(['$rootScope', 'url',
   function($rootScope, url) {

      $rootScope.$on('$stateChangeStart', function(event, toState, toParams, fromState, fromParams) {
         if (!toState.hasOwnProperty('fixedUrl')) {
            toState.templateUrl = url.createRenderUrl(toState.templateUrl);
            toState.fixedUrl = true;
         }
      });
   }
]);
 
 
Now that all the backend routing machinery is correctly set up, you can use the normal UI Router stuff on the frontend. You'll need to add the ui-view attribute to your base div in view.jsp:
...
<div id="<portlet:namespace />main" ng-cloak>
   <div ng-hide="liferay.loggedIn">You need to be logged in to use this portlet</div>
   <div ui-view ng-show="liferay.loggedIn"></div>
</div>
...
and then use the ui-sref attribute on your anchor tags to navigate:
<h2 translate>detail.for.bookmark</h2>
<form role="form">
   ...
   <button type="submit" class="btn btn-default" ng-click="save();" translate>action.submit</button>
   <button type="submit" class="btn btn-default" ui-sref='list' translate>action.cancel</button>
</form>
 

Found in translation

In the previous post this section was called Lost in translation and, as yuchi pointed out in a Github issue he created for the example portlet, Liferay.Language.get is synchronous and should be avoided when possible. So it seemed I was still a little bit lost in translation and needed to find a better solution. I think I might have found one in the Angular Translate module made by Pascal Precht. This looked like a great and, more importantly, configurable module that I might be able to get to work in a portal context.

To start off you need to download the Angular Translate JS file (and also the loader-url file), add it to the project and load it using the liferay-portlet.xml:
<?xml version="1.0"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.2.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_2_0.dtd">

<liferay-portlet-app>
   <portlet>
      ...
      <footer-portal-javascript>/angular-portlet/js/angular-translate.js</footer-portal-javascript>
      <footer-portal-javascript>/angular-portlet/js/angular-translate-loader-url.js</footer-portal-javascript>
      ...
   </portlet>
</liferay-portlet-app>
By default it tries to find JSON files that contain the translation key/value pairs at certain URLs. So you can probably guess that by default this won't work. Luckily the module has all kinds of extensibility features built in, and with a custom UrlLoader I was able to convince it to use my own custom language bundles:
app.config(['$translateProvider', 'urlProvider',
   function($translateProvider, urlProvider) {

      ...
      urlProvider.setPid(portletId);

      $translateProvider.useUrlLoader(urlProvider.$get().createResourceUrl('language', 'locale', Liferay.ThemeDisplay.getBCP47LanguageId()));
      $translateProvider.preferredLanguage(Liferay.ThemeDisplay.getBCP47LanguageId());
      ...
   }
]);
As you can see we just configure the $translateProvider to load the translations from a custom resource URL and use the current Liferay language ID as the preferred language. I also had to transform my Liferay URL factory, from the first version of the portlet, to a provider because you can't use factories/services in an Angular config section.
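Stripped of the Angular specifics, the provider pattern is just an object with configuration methods that work before the service exists, plus a $get() that builds the service itself. Below is a minimal plain-JavaScript sketch of that shape; the setPid/createResourceUrl names mirror the code above, but the URL format is a made-up placeholder for illustration, not Liferay's real resource URL syntax:

```javascript
// Toy stand-in for the Angular provider pattern: configuration methods
// (setPid) are usable at config time, while $get() builds the service
// that injectable code eventually receives.
function UrlProvider() {
    var pid = null;

    // Config-time API: callable before the service exists.
    this.setPid = function(portletId) {
        pid = portletId;
    };

    // Service factory: called once to create the actual 'url' service.
    this.$get = function() {
        return {
            // Hypothetical resource URL shape, for illustration only.
            createResourceUrl: function(resourceId, paramName, paramValue) {
                return '/c/portal/resource?p_p_id=' + pid +
                    '&p_p_resource_id=' + resourceId +
                    '&' + paramName + '=' + paramValue;
            }
        };
    };
}
```

In Angular such an object would be registered with app.provider('url', ...), which is what makes urlProvider injectable in config blocks while the plain url service is injected everywhere else.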
 
Using the resource URL mechanism from the first version of this portlet we can, with some classloader magic, output a JSON that represents a complete Liferay resource bundle (including extensions from hooks) for a given language:
@Resource(id = "language")
@CacheResource(keyParam = "locale")
public Map<String, String> getLanguage(@Param String locale) throws Exception {
   Locale localeValue = DEFAULT_LIFERAY_LOCALE;
   if (!Strings.isNullOrEmpty(locale)) {
      localeValue = Locale.forLanguageTag(locale);
   }

   ClassLoader portalClassLoader = PortalClassLoaderUtil.getClassLoader();

   Class c = portalClassLoader.loadClass("com.liferay.portal.language.LanguageResources");
   Field f = c.getDeclaredField("_languageMaps");
   f.setAccessible(true);

   Map<Locale, Map<String, String>> bundles = (Map<Locale, Map<String, String>>) f.get(null);

   return bundles.get(localeValue);
}
This code uses custom @Resource and @CacheResource annotations, which aren't specifically needed as a normal serveResource method will also do the trick, but these annotations make it just a little easier (and allow easy caching). More information about these annotations can be found in the Github code itself (in the /be/aca/liferay/angular/portlet/resource package).
 
With this new setup we can completely drop the custom translation directive we used in the previous version of the portlet. You now just need to add an empty translate attribute on any tag you want to translate (other ways are also possible and can be found in the Angular Translate docs). Adding this attribute to a tag will use the value between the tags as the key to translate:
<h2 translate>bookmarks</h2>
<div>
   ...
      <thead>
         <tr>
            <th translate>table.id</th>
            <th translate>table.name</th>
            <th translate>table.actions</th>
         </tr>
      </thead>
...
 

The importance of being valid (in any language)

After getting routing and translations to work I discovered another piece of functionality that was missing from the example portlet: validation. After looking around for Angular validation modules I settled on the Angular auto validate module by Jon Samwell. This seemed to be a pretty non-intrusive and easy to use way to add validation to an Angular app. You just need to add a novalidate and ng-submit attribute to your form, some required, ng-maxlength, etc. attributes to your input fields, and you're already done on the HTML side:

<form role="form" name="bookmarkForm" ng-submit="store();" novalidate="novalidate">
   <div class="form-group">
      <label for="name" translate>label.name</label>
      <input type="text" ng-model="model.currentBookmark.name" class="form-control" id="name" name="name" ng-minlength="3" ng-maxlength="25" required>
   </div>
   <div class="form-group">
      <label for="description" translate>label.description</label>
      <input type="text" ng-model="model.currentBookmark.description" class="form-control" id="description" name="description" ng-minlength="3" required>
   </div>
   <div class="form-group">
      <label for="url" translate>label.url</label>
      <input type="url" ng-model="model.currentBookmark.url" class="form-control" id="url" name="url" required>
   </div>
   ...
</form>
On the Javascript side the code you need to add is also pretty minimal. Download the correct jcs-auto-validate.js file, add it to the project and load it using liferay-portlet.xml:
<?xml version="1.0"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.2.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_2_0.dtd">

<liferay-portlet-app>
   <portlet>
      ...
      <footer-portal-javascript>/angular-portlet/js/jcs-auto-validate.js</footer-portal-javascript>
      ...
   </portlet>
</liferay-portlet-app>
Now that we have working validation, it would be nice if the validation messages were correctly translated using our custom language bundles instead of the ones bundled with the Javascript module itself. Using a custom ErrorMessageResolver that uses the Angular Translate service we've already used in the previous section, this can easily be achieved:
angular.module("app.factories").

   // A custom error message resolver that provides custom error messages defined
   // in the Liferay/portlet language bundles. Uses a prefix key so they don't clash
   // with other Liferay keys and reuses the code from the library itself to
   // replace the {0} values.
   factory('i18nErrorMessageResolver', ['$q', '$translate',
      function($q, $translate) {

         var resolve = function(errorType, el) {
            var defer = $q.defer();

            var prefix = "validation.";
            $translate(prefix + errorType).then(function(message) {
               if (el && el.attr) {
                  try {
                     var parameters = [];
                     var parameter = el.attr('ng-' + errorType);
                     if (parameter === undefined) {
                        parameter = el.attr('data-ng-' + errorType) || el.attr(errorType);
                     }

                     parameters.push(parameter || '');

                     message = message.format(parameters);
                  } catch (e) {}
               }

               defer.resolve(message);
            });

            return defer.promise;
         };

         return {
            resolve: resolve
         };
      }
   ]
);
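The message.format(parameters) call in the resolver relies on a small String helper that the validation library adds, replacing {0}, {1}, ... placeholders with the collected attribute values. A rough, self-contained stand-in of that substitution (the real library's implementation may differ in details):

```javascript
// Rough stand-in for the {0}-style placeholder substitution used in the
// validation messages: every {i} is replaced by parameters[i]; unknown
// indices are left untouched.
function formatMessage(template, parameters) {
    return template.replace(/\{(\d+)\}/g, function(match, index) {
        var value = parameters[Number(index)];
        return value !== undefined ? value : match;
    });
}
```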
You just need to add this custom resolver to the validator like this:
var app = angular.module(id, ["jcs-autoValidate"]);

app.run(['validator', 'i18nErrorMessageResolver',
   function(validator, i18nErrorMessageResolver) {

      validator.setErrorMessageResolver(i18nErrorMessageResolver.resolve);
   }
]);
 

Breaking up

In the previous version of the portlet most of the Javascript code was in a limited number of files that, with all the additional changes from this post, would've gotten pretty long. To keep everything short and sweet, we need to split up the existing files into smaller ones in a way that makes them easy to manage. In the Javascript world there are tools like Grunt that can do this, but they're not easy to use in a Maven based project. After looking around I settled on the WRO4J Maven plugin. This enables you to easily merge multiple Javascript (or CSS, ...) files into one, but it can also do other stuff like minimizing them, etc.

The first thing we need to do to make this work is add some stuff to the build plugins section of our pom.xml:

...
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-war-plugin</artifactId>
   <version>2.6</version>
   <configuration>
      <!--
         Exclude originals of file that will be merged with wro4j
      -->
      <warSourceExcludes>js/controller/*.js,js/service/*.js,js/directive/*.js</warSourceExcludes>
   </configuration>
</plugin>
<plugin>
   <groupId>ro.isdc.wro4j</groupId>
   <artifactId>wro4j-maven-plugin</artifactId>
   <version>1.7.7</version>
   <executions>
      <execution>
         <phase>compile</phase>
         <goals>
            <goal>run</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <!-- No space allowed after a comma -->
      <targetGroups>controllers,services,directives</targetGroups>
      <jsDestinationFolder>${project.build.directory}/${project.build.finalName}/js</jsDestinationFolder>
      <contextFolder>${basedir}/src/main/webapp/</contextFolder>
   </configuration>
</plugin>
...
The first plugin will make sure none of the original, split-up JS files are packaged in the WAR, while the second will actually merge the different files, using a wro.groovy definition file in the WEB-INF directory, and make sure they are placed in the correct location in the target directory before the actual WAR is made.
groups {
    controllers {
        js(minimize: false, "/js/controller/Init.js")
        js(minimize: false, "/js/controller/*Controller.js")
    }
    services {
        js(minimize: false, "/js/service/Init.js")
        js(minimize: false, "/js/service/*Factory.js")
        js(minimize: false, "/js/service/ErrorMessageResolver.js")
    }
    directives {
        js(minimize: false, "/js/directive/Init.js")
        js(minimize: false, "/js/directive/*Directive.js")
    }
    all {
        controllers()
        services()
        directives()
    }
}
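Conceptually, the merge that wro4j performs per group is nothing more than concatenating the listed sources in declaration order, which is also why each group lists its Init.js first. A toy in-memory sketch of that behaviour (mergeGroup is a hypothetical helper, not part of wro4j):

```javascript
// Toy sketch of a wro4j group merge: the sources are concatenated in the
// order they were declared, so the module-declaring Init.js content must
// come before the files that look the module up.
function mergeGroup(sources) {
    return sources.map(function(source) { return source.content; }).join('\n');
}
```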
Now we just need to refer to the merged files this Maven plugin will create during packaging in the liferay-portlet.xml:
<?xml version="1.0"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.2.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_2_0.dtd">

<liferay-portlet-app>
   <portlet>
      ...
      <footer-portlet-javascript>/js/controllers.js</footer-portlet-javascript>
      <footer-portlet-javascript>/js/services.js</footer-portlet-javascript>
      <footer-portlet-javascript>/js/directives.js</footer-portlet-javascript>
      ...
   </portlet>
</liferay-portlet-app>
For each of the groups in the Groovy definition file we need to provide a small Init.js file that is used to define a module with a name and the [] notation (so that an empty module is declared). Below is an example for the controllers that uses app.controllers as the module name:
'use strict';

var module = angular.module('app.controllers', []);
This way you can then easily define an actual controller by just referring to the declared module name, without redeclaring it:
angular.module('app.controllers').
   controller("ListCtrl", ['$scope', '$rootScope', '$http', '$timeout', 'bookmarkFactory', '$stateParams',

      function($scope, $rootScope, $http, $timeout, bookmarkFactory, $stateParams) {
         ...
      }
   ]
);
You can then refer to this module in the bootstrap of your Angular app as follows:
var app = angular.module(id, ["app.controllers"]);
The same mechanism can be used for factories, services, directives, ... . 
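This define-then-refer pattern leans on a detail of angular.module: called with a dependencies array it defines a module, called with only a name it looks an existing one up (and fails if nothing defined it yet). Here's a toy registry with the same two-mode semantic, just to illustrate why the Init.js files have to be loaded first (angularModule is a stand-in name, not Angular's API):

```javascript
// Toy registry mimicking angular.module's two-mode behaviour: with a
// dependencies array the module is defined, without one it is looked up.
var registry = {};

function angularModule(name, requires) {
    if (requires !== undefined) {
        // Define mode: create (or replace) the module entry.
        registry[name] = { name: name, requires: requires, parts: [] };
        return registry[name];
    }
    // Lookup mode: fails when nothing declared the module before.
    if (!registry[name]) {
        throw new Error("Module '" + name + "' is not available");
    }
    return registry[name];
}
```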
 

A full Eclipse

One of the people that contacted me after the first blog post was Gregory Amerson. Greg works for Liferay and is the main developer of Liferay IDE (based on Eclipse). He tried out my portlet in Liferay IDE together with an Eclipse AngularJS plugin and encountered a couple of small problems: calling a function delete is problematic, some additional dependencies are needed for full taglib support, etc... . The most important changes discovered during his tests are merged back into this new version of the portlet. I also tried out Liferay IDE myself and I must say the HTML/JS/Angular support has been markedly improved, but in the end I still like my IntelliJ better.
 

Conclusion

With the routing module working now, better internationalisation, validation and JS file merging, AngularJS is starting to look more and more like a tool that a portal developer could add to their toolbelt and be productive with.

 

More blogs on Liferay and Java via http://blogs.aca-it.be.

 

 
 
 
 

Angular Adventures in Liferay Land

Technical Blogs October 5, 2014 By Jan Eerdekens

Intro

A lot of developers will probably know this feeling: you've just returned from a conference where you've seen lots of exciting new technologies and now you can't wait to try them out! For me one such moment and technology was Devoxx 2013 and seeing an AngularJS talk by Igor Minar and Misko Hevery. The speed and ease of development appealed very much to me, but I was a bit hesitant because of the language choice: Javascript. Javascript and I have been reluctant friends that tolerate each other, but don't need to spend a lot of time together (a bit like Adam and Jamie of Mythbusters fame). I usually try to hide all Javascript complexity behind the frameworks I use, e.g. JSF/Primefaces. Nonetheless I did see and love the inherent challenge of trying to use a new framework and seeing how it can be used in what I do on a daily basis: customizing and writing portlets to use in a Liferay portal. So for this blog post I forced myself to do the following:

As mentioned before, we usually write JSF based portlets to use in Liferay, and the JSF implementation of choice at ACA has been Primefaces for a couple of years now (as far as JSF frameworks go a good choice, it seems, as Primefaces appears to have 'won' the JSF framework battle). With JSF being an official specification for web applications, and there now also being a specification to bridge JSF to work nicely in a portal environment, this combination has served us well. With AngularJS there will be none of these things. AngularJS is a standalone JS framework that is on one side pretty opinionated about certain things (which usually means trouble in a portal environment), but on the other side is pretty open to customization and changes.
 
This blog post will try to show how AngularJS can be used to build a simple example portlet, integrate it with Liferay (services, i18n, IPC, ...), show what the technical difficulties are and how these can be overcome (or worked around). The full code of this portlet is available on Github under the MIT license: angular-portlet. Feel free to use it as a base for further experiments, as the base for a POC, whatever... and comments/ideas/questions/pull requests are welcome too.
 
Now on to the technical stuff!
 

Lots & lots of Javascript

The first hurdle I thought might give problems was using an additional Javascript framework in an environment that already has some. Liferay itself is built on YUI & AlloyUI and provides those frameworks globally to every portlet that runs in the portal environment. Liferay used to use JQuery, and will be using JQuery again in future versions, but even in the current version, Liferay 6.2, it is perfectly possible to use JQuery in your portlets if you use it in noConflict mode. Primefaces, which uses JQuery, also works fine in a Liferay portal.
 
AngularJS will work with JQuery when it is already provided or fall back to an internal JQLite version when JQuery isn't provided. So when I added AngularJS to a simple MVCPortlet and used it in a standard Liferay 6.2 (which has YUI & AlloyUI, but no JQuery) my fears turned out to be unwarranted: everything worked just fine and there were no clashes between the Liferay and Angular javascript files.
 

Not alone

The next step was to take the simple Hello World! application that is built in the learn section of the AngularJS website, and here I immediately ran into an incompatibility: AngularJS, with its ng-app attribute, pretty much assumes it is alone on the page. In a portal, like Liferay, a portlet is usually not alone on the page. It might be, but it may never assume this. The ng-app attribute is what AngularJS uses to bootstrap itself. As long as our custom AngularJS portlet is alone on a page this will work (as is demonstrated by other people that tried this), but once you add a second AngularJS portlet to the page (or a second portlet instance of the same portlet), the automatic bootstrapping via the attribute will cause problems.
 
Looking around, it was immediately clear that I'm not the first person that has tried to use AngularJS to build portlets. All these people ran into the same problem and solved it pretty similarly: don't let AngularJS bootstrap itself automatically via the attribute, but call the bootstrapping code yourself and provide it with the ID of the element in the page that contains the AngularJS app GUI. Because we're working in a portal environment and possibly using instanceable portlets, this ID needs to be unique, but this problem is easily solved by using the <portlet:namespace/> tag that is provided by the portal in a JSP and is unique by design.
 
In your main JavaScript code, add a bootstrap method that takes the unique ID of the HTML element (something prefixed with the portlet namespace) and the portlet ID (aka the portlet namespace). Inside this method you can do or call all the necessary AngularJS setup and end with the bootstrap call.
function bootstrap(id, portletId) {

    // note the second argument: angular.module(id) without it only
    // retrieves an existing module instead of creating a new one
    var module = angular.module(id, []);

    // add controllers, etc... to the module
    module.controller("MainCtrl", ['$scope', function($scope) {
            // do stuff
        }]
    );

    angular.bootstrap(document.getElementById(id), [id]);
}

From your JSP page you can simply call this as follows:

<%@ taglib uri="http://java.sun.com/portlet_2_0" prefix="portlet" %>
<%@ taglib uri="http://liferay.com/tld/aui" prefix="aui" %>

<portlet:defineObjects />

<div id="<portlet:namespace />main" ng-controller="MainCtrl" ng-cloak></div>

<aui:script>
    bootstrap('<portlet:namespace />main', '<portlet:namespace />');
</aui:script>

Bootstrapping AngularJS like this allowed me to create a simple, instanceable, Hello World! AngularJS portlet that you can add to the same page multiple times, with each instance working independently of the others.

From here to there

The next problem is the one I knew beforehand would cause the most trouble: navigating between pages in an AngularJS app. The problem is that AngularJS assumes control over the URL, but in a portal this is a big no-no. Everything you do with URLs needs to go through the portal, by asking it to create a portal URL for your specific portlet and action. If you mess with the URL yourself instead, things might at first seem to work as expected, especially with a single AngularJS portlet on the page, but with multiple portlets you'll quickly see things start to go wrong.

I first tried to make the default AngularJS routing work correctly in a portlet, initially by creating portlet-specific URLs (using the namespace mentioned before) and later by trying HTML5 mode, but whatever I tried, I couldn't get it to work completely and consistently. After that I Googled around a lot and found several other AngularJS routing components that can be used as a replacement for the original one, but here too I couldn't get them to work the way I wanted.

I still think one of these should work and I probably made one or more mistakes while trying them out, but due to time constraints I opted for a simple, albeit hacky, solution: an internal page variable combined with the AngularJS ng-include attribute. I settled on this because portlets are meant to be relatively small pieces of functionality that are combined with each other (possibly using inter portlet communication) to provide larger functionality. This means a portlet usually only needs limited navigation between a small number of pages, which lets us get away with this hack without compromising the normal AngularJS workings and development speed too much.

To make this hack work, add an inner div to the one we already had, give it the ng-include and src attributes, and point the src attribute at a value on your model (called page in the example) that contains whatever piece of HTML you want to show.
<div id="<portlet:namespace />main" ng-controller="MainCtrl" ng-cloak>
    <div ng-include src="page"></div>
</div>

In your JavaScript code you only need to make sure that this page field on your model is initialized in time with your starting page and changed on navigation actions. We can easily keep the partial HTML pieces as separate files in our source, by placing them in the webapp/partials directory of our Maven project, and reference them using a portlet resource URL. Constructing such a URL can be done using the liferay-portlet-url JavaScript module that Liferay provides.

var resourceURL = Liferay.PortletURL.createRenderURL();
resourceURL.setPortletId(pid);
resourceURL.setPortletMode('view');
resourceURL.setWindowState('exclusive');
resourceURL.setParameter('jspPage', '/partials/list.html');

$scope.page = resourceURL.toString();

This code can be easily moved to an AngularJS service and switching pages is as simple as calling a function using the ng-click attribute, calling the service with the correct parameters and assigning the result to the page field on the model. You can find the complete source code for this in the example portlet on GitHub.
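As a rough sketch of what such a service could look like, the helper below builds the URL for a given partial. It is a plain function here; in the real portlet you would register it inside an AngularJS service/factory. The small Liferay.PortletURL stub at the top exists only to make the snippet self-contained outside the portal, where the real global Liferay object is provided, and the '/c/portal/render?' prefix is a placeholder, not the URL the portal actually generates.

```javascript
// Stub of the bits of Liferay.PortletURL we use, purely for illustration;
// inside the portal the real Liferay global is already available.
var Liferay = Liferay || {
    PortletURL: {
        createRenderURL: function() {
            var params = {};
            return {
                setPortletId: function(id) { params.p_p_id = id; },
                setPortletMode: function(mode) { params.p_p_mode = mode; },
                setWindowState: function(state) { params.p_p_state = state; },
                setParameter: function(key, value) { params[key] = value; },
                toString: function() {
                    // placeholder URL format, not what the portal really emits
                    return '/c/portal/render?' + Object.keys(params).map(function(key) {
                        return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
                    }).join('&');
                }
            };
        }
    }
};

// Build a render URL, in 'exclusive' window state, pointing at one of the
// HTML partials packaged inside the portlet WAR.
function partialUrl(portletId, partial) {
    var url = Liferay.PortletURL.createRenderURL();
    url.setPortletId(portletId);
    url.setPortletMode('view');
    url.setWindowState('exclusive');
    url.setParameter('jspPage', partial);
    return url.toString();
}
```

In a controller, switching pages then boils down to assigning the result, e.g. $scope.page = partialUrl(portletId, '/partials/list.html'), triggered from an ng-click handler.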

To make sure this Liferay module is loaded and available, you also need to add something to the aui:script tag in your view.jsp:
<aui:script use="liferay-portlet-url,aui-base">
 

No REST for the wicked

Now that we are able to have multiple AngularJS portlets, with navigation, on a page, the next step is to work with REST (or REST-like) services. These could be REST services that Liferay provides, or simple services your portlet itself provides and that AngularJS can consume.

First we'll look at the services Liferay itself provides. A subset of the services you are accustomed to using in your Java based portlets are also available by default as REST services. These can be called using the Liferay.Service JavaScript call and explored using a simple app that Liferay exposes: check out /api/jsonws on a running Liferay instance. With this app you can easily explore the services, how they can be called and which parameters you'll need to provide. Calling such a service, in our case the Bookmarks service, is pretty easy:
Liferay.Service(
    '/bookmarksentry/get-group-entries',
    {
        groupId: Liferay.ThemeDisplay.getScopeGroupId(),
        start: -1,
        end: -1
    },
    function(obj) {
        // do something with result object
    }
);
This code too can easily be moved to an AngularJS service, as shown in the example portlet on GitHub. You'll also notice the use of the Liferay.ThemeDisplay JavaScript object here. It gives us access to most of what you're accustomed to using via the ThemeDisplay object in normal Liferay Java code, such as the company ID, group ID, language, etc.
 
As with the resource URL before, we'll again have to make sure the Liferay.Service JavaScript module is loaded and available, by adding liferay-service to the use attribute of the aui:script tag in our view.jsp:
<aui:script use="liferay-portlet-url,liferay-service,aui-base">
If you need to access Liferay services that aren't exposed by Liferay as REST services, or if you just want to expose your own data, here is a simple method that can be used in an MVCPortlet. We'll implement the serveResource method in our portlet so that we can create a ResourceURL for it and use that in our AngularJS code.
public class AngularPortlet extends MVCPortlet {

    private static final Log LOGGER = LogFactoryUtil.getLog(AngularPortlet.class);

    @Override
    public void serveResource(ResourceRequest resourceRequest, ResourceResponse resourceResponse) throws IOException, PortletException {

        String resourceId = resourceRequest.getResourceID();

        try {
            // check the resourceId to see which code to execute, possibly using a parameter, and return a JSON result
            String paramValue = resourceRequest.getParameter("paramName");

            Object result = loadData(resourceId, paramValue); // hypothetical helper producing the data to return

            Gson gson = new Gson();
            String json = gson.toJson(result);

            resourceResponse.getWriter().print(json);
        } catch (Exception e) {
            LOGGER.error("Problem calling resource serving method for '" + resourceId + "'", e);
            throw new PortletException(e);
        }
    }
}
In the actual example portlet on GitHub you'll see I've implemented this a bit differently, using a BasePortlet class and a custom annotation, so that you can annotate a normal method to signal which resourceId it should react to, but the example code above should give you the general idea. Once you have a serveResource method in place you can call it from within your AngularJS code as follows:
var url = Liferay.PortletURL.createResourceURL();
url.setResourceId('myResourceId');
url.setPortletId(pid);
url.setParameter("paramName", "paramValue");

$http.get(url.toString()).success(function(data, status, headers, config) {
    // do something with the result data
});
To create a valid resource URL that will trigger the serveResource method in your portlet, you always need to provide it with a resourceId and a portletId. Additionally, you have the option of adding parameters to the call to use in your Java code.
 

You promise?

When trying out these various REST services I quickly ran into problems regarding the asynchronous nature of JavaScript and AngularJS. This is something PrimeFaces has shielded me from most of the time, but it immediately turned out to be something I needed to think about and apply rigorously in an AngularJS based portlet. Luckily this is a known 'problem' with an elegant solution: promises. You just need to write and call your AngularJS service/factory in a certain way and all the necessary async magic will happen. For this we'll revisit the code we used to get bookmarks, this time wrapped in a nice AngularJS factory in a separate JavaScript file:

'use strict';

angular.module("app.factories", []).

factory('bookmarkFactory', function($q) {
    var getBookmarks = function() {

        var deferred = $q.defer();

        Liferay.Service(
            '/bookmarksentry/get-group-entries',
            {
                groupId: Liferay.ThemeDisplay.getScopeGroupId(),
                start: -1,
                end: -1
            },

            function(obj) {
                deferred.resolve(obj);
            }
        );

        return deferred.promise;
    };

    return {
        getBookmarks: getBookmarks
    };
});

Using this pattern, creating a deferred result and returning a promise to it from our method, we've made our call asynchronous. Now we just need to call it correctly from our controller using the then syntax:

var module = angular.module(id, ["app.factories"]);

module.controller("MainCtrl", ['$scope', 'bookmarkFactory',

    function($scope, bookmarkFactory) {
        bookmarkFactory.getBookmarks().then(function(data) {
            $scope.model.bookmarks = data;
        });
    }]
);
 

Lost in translation

Now that the most important points have been tackled, I wanted to move on to something that is pretty important in the country I'm from: i18n. In Belgium we have 3 official languages, Dutch, French & German, and usually English is also thrown into the mix for good measure. So I wanted to find out how to add i18n to my AngularJS portlet, and while doing so see if I could turn it into a custom directive. The current version of this simple directive uses an attribute and only supports retrieving a key, without providing and replacing parameters in the value.

module.directive('i18n', function() {
    return {
        restrict: 'A',
        link: function(scope, element, attributes) {
            var message = Liferay.Language.get(attributes["i18n"]);
            element.html(message);
        }
    }
});

The directive above uses the Liferay.Language JavaScript module to retrieve the value of the given resource key from the portlet's language bundles and sets it as the content of the tag the directive is used on. To be able to use this Liferay JavaScript module we'll again need to add something to the use attribute of the aui:script tag to make sure it is loaded and available: liferay-language. Once we have this directive, using it in our HTML partials is pretty simple:

<h2 i18n="title"></h2>

This piece of HTML, containing our custom i18n directive, will try to retrieve the value of the title key from the portlet's resource bundles that are defined in the portlet.xml. The important word in the previous sentence is try, because there are a couple of bugs in Liferay (LPS-16513 & LPS-14664) which cause the Liferay.Language JavaScript module to ignore the portlet's resource bundles and only use the global ones. Luckily there is a simple hack that will allow us to still make this work: add a liferay-hook.xml file to the portlet and use it to configure Liferay to extend its own language bundles with our portlet's.

<?xml version="1.0"?>

<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.2.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_2_0.dtd">
<hook>
    <language-properties>Language.properties</language-properties>
</hook>
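The Language.properties file referenced above then simply contains the keys our directive looks up. A minimal sketch could look like this; the title key is taken from the earlier HTML snippet, while the value is of course just an example:

```properties
# resource key used by the i18n directive in the partials
title=My Bookmarks
```

Locale-specific variants such as Language_nl.properties follow the same standard Java resource bundle pattern.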
 

Can you hear me?

After clearing all the previous hurdles you already end up with a pretty usable portlet, but the last thing I wanted to try was to get AngularJS portlets to talk to each other using IPC (Inter Portlet Communication) or something similar. As AngularJS is a JavaScript framework, the obvious choice is the Liferay JavaScript event system: Liferay.on & Liferay.fire. Integrating this in my controller turned out to be less straightforward than I expected, but once I threw promises and $timeout into the mix I got the result I expected at the beginning. To fire an event, just use the following in your code:

Liferay.fire('reloadBookmarks', { portletId: $scope.portletId });

You can use any event name you like, reloadBookmarks in our case, or even multiple events. We also pass in the portletId ourselves, as this no longer seems to be in the event data by default, and it is useful for filtering out unwanted events in the portlet that originated the event. The event data can basically be any JavaScript/JSON object you want. Receiving the event then looks like this:

Liferay.on('reloadBookmarks', function(event) {
    if (event.portletId != $scope.portletId) {
        $timeout(function() {
            bookmarkFactory.getBookmarks().then(function(bookmarks) {
                $scope.model.bookmarks = bookmarks;
            });
        });
    }
});
 

Conclusion

It took some time and a lot of Googling and experimenting, but in the end I can say I got AngularJS to work pretty well in a portal environment. It is usable, but for someone who is used to spending most of his time in Java rather than JavaScript it was quite a challenge, especially when something fails. The error messages coming from JavaScript aren't always as clear as I'd want them to be, and the IDE support (code colouring, code completion, debugging, ...) isn't up to what I'm used to when writing JSF/Spring based portlets. But I think the following image expresses my feelings pretty well:

 

More blogs on Liferay and Java via http://blogs.aca-it.be.

Configuring a Liferay cluster (and make it use unicast)

General Blogs March 11, 2014 By Jan Eerdekens

Introduction

Configuring a Liferay cluster is part experience and part black magic. There is some information you can find online, some you can only find out while working on it, and some things, like how to configure Ehcache to use unicast, that you can only discover through blood, sweat and tears. This post describes how to set up a Liferay 6.1 cluster with Ehcache in both multicast and unicast mode.
To get clustering to work in Liferay you need to make sure that all of the subsystems below are configured correctly:
  • Database
  • Indexing
  • Media gallery
  • Quartz
  • Cluster Link
  • Ehcache
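Pulled together, the portal-ext.properties settings covered in the sections below look roughly like this sketch (all values are placeholders to adapt to your own environment):

```properties
# Database: same datasource on every node
jdbc.default.jndi.name=jdbc/liferay

# Media gallery: shared store (DBStore shown here)
dl.store.impl=com.liferay.portlet.documentlibrary.store.DBStore

# Quartz: cluster the job scheduler
org.quartz.jobStore.isClustered=true

# Cluster Link: node-to-node communication
cluster.link.enabled=true
cluster.link.autodetect.address=dbserver:dbport

# Ehcache: clustered cache configuration (multicast variant)
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
```

Indexing (SOLR) is the one subsystem that isn't configured through portal-ext.properties but through the solr-web plugin.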
 

Database

The first subsystem that needs to be configured for clustering, the database, is also one of the easiest to configure correctly. You just need to point each node in the cluster to the same database, either by using the same JNDI datasource

jdbc.default.jndi.name=jdbc/liferay

or by using the same JDBC configuration directly in your portal-ext.properties on each node

jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://dbserver:3306/liferay_test?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false&autoReconnectForPools=true
jdbc.default.username=dbuser
jdbc.default.password=Y@r3FiL

 

Indexing

For Liferay 6.1 the only reliable way to cluster the indexing is to use SOLR. For this you'll need to do 2 things: set up a separate SOLR server (or use an existing one) and deploy a correctly configured solr-web.war from the Marketplace to all cluster nodes.
 
Depending on which Liferay flavor you're using, CE or EE, this will be an easy process or a little bit more difficult. If you're running Liferay EE, the process is pretty straightforward as for that version there are solr-web versions available for SOLR 3 and 4. For Liferay CE it's a bit more complicated as there's only a relatively old WAR available for SOLR 1.4, which you'll need to upgrade yourself if you want to use newer SOLR versions with Liferay CE.
 
For this blog we're assuming a dedicated Liferay SOLR instance will be used (but an additional core in an existing SOLR will also work). To set up SOLR, you can just follow the instructions on their site: https://wiki.apache.org/solr/SolrInstall. Once you have a default SOLR up and running, you'll need to add some configuration so Liferay can use it for indexing. This is done by replacing the existing schema.xml with the Liferay SOLR schema.xml that you can find in the WEB-INF/conf directory of the solr-web.war you downloaded.
 
If you're running on Liferay 6.1 CE and want to use a newer SOLR version than 1.4, you'll also need to change the schema.xml and possibly also the solr-spring.xml a bit to get it working. The version of the schema.xml that worked for us is:
<?xml version="1.0"?>
<schema name="liferay" version="1.1">
    <types>
        <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="integer" class="solr.IntField" omitNorms="true" />
        <fieldType name="long" class="solr.LongField" omitNorms="true" />
        <fieldType name="float" class="solr.FloatField" omitNorms="true" />
        <fieldType name="double" class="solr.DoubleField" omitNorms="true" />
        <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true" />
        <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
            <analyzer>
                <tokenizer class="solr.WhitespaceTokenizerFactory" />
            </analyzer>
        </fieldType>
        <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
            <analyzer type="index">
                <tokenizer class="solr.WhitespaceTokenizerFactory" />
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
                <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" />
                <filter class="solr.LowerCaseFilterFactory" />
                <filter class="solr.RemoveDuplicatesTokenFilterFactory" />
            </analyzer>
            <analyzer type="query">
                <tokenizer class="solr.WhitespaceTokenizerFactory" />
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
                <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" />
                <filter class="solr.LowerCaseFilterFactory" />
                <filter class="solr.RemoveDuplicatesTokenFilterFactory" />
            </analyzer>
        </fieldType>
        <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" >
            <analyzer>
                <tokenizer class="solr.WhitespaceTokenizerFactory" />
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false" />
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
                <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0" />
                <filter class="solr.LowerCaseFilterFactory" />
                <filter class="solr.RemoveDuplicatesTokenFilterFactory" />
            </analyzer>
        </fieldType>
        <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
            <analyzer>
                <tokenizer class="solr.KeywordTokenizerFactory" />
                <filter class="solr.LowerCaseFilterFactory" />
                <filter class="solr.TrimFilterFactory" />
                <filter class="solr.PatternReplaceFilterFactory" pattern="([^a-z])" replacement="" replace="all" />
            </analyzer>
        </fieldType>
        <fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" />
    </types>
    <fields>
    <!--
        Had to add additional fields, 'dash' fields and 'copyfields' to make
        tables and sorting work correctly in certain Control Panel pages:
            
            first-name, last-name, screen-name, job-title and type
        
        otherwise you'll see the following errors in the SOLR log:

            Feb 15, 2013 11:20:23 AM org.apache.solr.common.SolrException log
            SEVERE: org.apache.solr.common.SolrException: can not sort on multivalued field: job-title
            at org.apache.solr.schema.SchemaField.checkSortability(SchemaField.java:160)
 
        http://www.liferay.com/community/forums/-/message_boards/message/21525098
        http://liferay-blogging.blogspot.be/2012/03/liferay-and-solr-solrexception-can-not.html
    -->
        <field name="comments" type="text" indexed="true" stored="true" />
        <field name="content" type="text" indexed="true" stored="true" />
        <field name="description" type="text" indexed="true" stored="true" />
        <field name="entryClassPK" type="text" indexed="true" stored="true"/>
        <field name="firstName" type="text" indexed="true" stored="true" />
        <field name="first-name" type="text" indexed="true" stored="true" />
        <field name="firstName_sortable" type="string" indexed="true" stored="true" />
        <field name="job-title" type="text" indexed="true" stored="true" />
        <field name="jobTitle_sortable" type="string" indexed="true" stored="true" />
        <field name="lastName" type="text" indexed="true" stored="true" />
        <field name="last-name" type="text" indexed="true" stored="true" />
        <field name="lastName_sortable" type="string" indexed="true" stored="true" />
        <field name="leftOrganizationId" type="slong" indexed="true" stored="true" />
        <field name="name" type="text" indexed="true" stored="true" />
        <field name="name_sortable" type="string" indexed="true" stored="true" />
        <field name="properties" type="string" indexed="true" stored="true" />
        <field name="rightOrganizationId" type="slong" indexed="true" stored="true" />
        <field name="screen-name" type="text" indexed="true" stored="true" />
        <field name="screenName_sortable" type="string" indexed="true" stored="true" />
        <field name="title" type="text" indexed="true" stored="true" />
        <field name="type" type="text" indexed="true" stored="true" />
        <field name="type_sortable" type="string" indexed="true" stored="true" />
        <field name="uid" type="string" indexed="true" stored="true" />
        <field name="url" type="string" indexed="true" stored="true" />
        <field name="userName" type="string" indexed="true" stored="true" />
        <field name="version" type="string" indexed="true" stored="true" />
        <!-- 
            http://liferay-blogging.blogspot.be/2012/03/liferay-and-solr-solrexception-can-not.html
        -->
        <field name="modified" type="text" indexed="true" stored="true" />
        <!-- 
            Added 'omitNorms' attribute on '*' to fix the following error: 
            Liferay side: 

                 12:07:05,844 ERROR [SolrIndexWriterImpl:55] org.apache.solr.common.SolrException: Bad Request

            SOLR side: 

                 Jul 30, 2012 12:07:05 PM org.apache.solr.common.SolrException log
                 SEVERE: org.apache.solr.common.SolrException: 
                 ERROR: [doc=PluginPackageIndexer_PORTLET_liferay/solr-web/6.1.0/war] cannot set an index-time boost, norms are omitted for field entryClassName: com.liferay.p
         -->
         <dynamicField name="*CategoryNames" type="string" indexed="true" multiValued="true" stored="true" />
         <dynamicField name="*CategoryIds" type="string" indexed="true" multiValued="true" stored="true" />
         <dynamicField name="expando/*" type="text" indexed="true" multiValued="true" stored="true" />
         <dynamicField name="web_content/*" type="text" indexed="true" stored="true" />
         <!--
             This must be the last entry since the fields element is an ordered set.
         -->
        <dynamicField name="*" type="string" indexed="true" multiValued="true" stored="true" omitNorms="false"/>
    </fields>
    <copyField source="firstName" dest="firstName_sortable" />
    <copyField source="first-name" dest="firstName_sortable" />
    <copyField source="job-title" dest="jobTitle_sortable" />
    <copyField source="lastName" dest="lastName_sortable" />
    <copyField source="last-name" dest="lastName_sortable" />
    <copyField source="name" dest="name_sortable" />
    <copyField source="screen-name" dest="screenName_sortable" />
    <copyField source="type" dest="type_sortable" />
    <uniqueKey>uid</uniqueKey>
    <defaultSearchField>content</defaultSearchField>
    <solrQueryParser defaultOperator="OR" />
</schema>

and for solr-spring.xml

<?xml version="1.0"?>
<beans default-destroy-method="destroy"
       default-init-method="afterPropertiesSet"
       xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
       http://www.springframework.org/schema/jee  http://www.springframework.org/schema/jee/spring-jee.xsd
       http://www.springframework.org/schema/util  http://www.springframework.org/schema/util/spring-util-3.0.xsd">

    <bean class="com.liferay.portal.spring.context.PortletBeanFactoryPostProcessor" /> 
    
    <!-- Solr search engine --> 
    <bean id="com.liferay.portal.search.solr.server.BasicAuthSolrServer" class="com.liferay.portal.search.solr.server.BasicAuthSolrServer"> 
        <constructor-arg type="java.lang.String" value="http://localhost:8080/solr" /> 
    </bean> 

    <bean id="com.liferay.portal.search.solr.SolrIndexSearcherImpl" class="com.liferay.portal.search.solr.SolrIndexSearcherImpl"> 
        <property name="solrServer" ref="com.liferay.portal.search.solr.server.BasicAuthSolrServer" /> 
        <property name="swallowException" value="true" /> 
    </bean> 

    <bean id="com.liferay.portal.search.solr.SolrIndexWriterImpl" class="com.liferay.portal.search.solr.SolrIndexWriterImpl"> 
        <property name="commit" value="true" /> 
        <property name="solrServer" ref="com.liferay.portal.search.solr.server.BasicAuthSolrServer" /> 
    </bean> 

    <bean id="com.liferay.portal.search.solr.SolrSearchEngineImpl" class="com.liferay.portal.kernel.search.BaseSearchEngine"> 
        <property name="clusteredWrite" value="false" /> 
        <property name="indexSearcher" ref="com.liferay.portal.search.solr.SolrIndexSearcherImpl" /> 
        <property name="indexWriter" ref="com.liferay.portal.search.solr.SolrIndexWriterImpl" /> 
        <property name="luceneBased" value="true" /> 
        <property name="vendor" value="SOLR" /> 
    </bean> 

    <!-- Configurator --> 
    <bean id="searchEngineConfigurator.solr" class="com.liferay.portal.kernel.search.PluginSearchEngineConfigurator"> 
        <property name="searchEngines"> 
            <util:map> 
                <entry key="SYSTEM_ENGINE" value-ref="com.liferay.portal.search.solr.SolrSearchEngineImpl" /> 
            </util:map> 
        </property> 
    </bean> 
</beans>

Once you have SOLR up and running with the new schema, you just need to tweak the solr-web.war a little bit before deploying it on all nodes as it assumes SOLR is running on localhost:8080 which probably isn't the case. You can change this is in the solr-spring.xml file that you can find in the WEB-INF/classes/META-INF directory of the WAR file. Just change the constructor-arg value of the bean with id com.liferay.portal.search.solr.server.BasicAuthSolrServer so it points to the correct server and port.

 

Media Gallery

Now that the document library and image gallery have been combined into the media gallery in newer Liferay versions, the configuration to cluster it has also been simplified. To cluster the media gallery you have 2 options: database or file system. There used to be an option to use Jackrabbit for this purpose, but that has been deprecated in Liferay 6.1.
 
Using the database is the simplest option as you only need to add one property to your portal-ext.properties file
dl.store.impl=com.liferay.portlet.documentlibrary.store.DBStore
This will automatically use the database that is already configured for Liferay to store all media items. As long as your database supports BLOBs of sufficient size to cover your media needs, this is an easy solution. But if your database has issues with large files, videos for example, you're better off with the second option: a common file system.
 
For this, you only need to configure a different store: usually either com.liferay.portlet.documentlibrary.store.FileSystemStore or com.liferay.portlet.documentlibrary.store.AdvancedFileSystemStore (which internally distributes large numbers of files over more directories to work around per-directory file count limitations). To use a file system store correctly, you'll also need to configure the property dl.store.file.system.root.dir in your portal-ext.properties to point to a directory on the local filesystem of each node that maps to a common file store, SAN, NAS, etc. The problem is that the Liferay documentation doesn't exactly define what kind of file systems are supported or which functionality (locking, etc.) they need to provide, so it can be a bit hit and miss finding one that works correctly.
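As a sketch, the relevant portal-ext.properties lines on every node could then look like this; the mount point is just an example path for a hypothetical shared volume:

```properties
dl.store.impl=com.liferay.portlet.documentlibrary.store.AdvancedFileSystemStore
# local mount point of the shared file store (SAN, NAS, ...), identical on each node
dl.store.file.system.root.dir=/mnt/shared/liferay/document_library
```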
 
Another option is to use the com.liferay.portlet.documentlibrary.store.CMISStore if you have an Alfresco instance to spare or the com.liferay.portlet.documentlibrary.store.S3Store if you have Amazon S3 buckets available. The only problem with these can be speed as they're usually slower than using a file system.
 

Quartz

The Quartz job scheduler that's available in Liferay also needs to be clustered to prevent problems. This can be done by adding the following line to your portal-ext.properties file:

org.quartz.jobStore.isClustered=true

 

Cluster Link

In a Liferay cluster all nodes need to be able to talk to each other to keep each other up to date. To enable this, you just need to activate the JGroups based Cluster Link system that's available in Liferay by adding the following two properties to your portal-ext.properties file:

cluster.link.enabled=true
cluster.link.autodetect.address=dbserver:dbport
The second property is needed because otherwise the Cluster Link initialization during startup can fail: by default it tries to contact google.com:80, and internet access isn't always possible in some environments. You therefore need a host/port combination that is reachable by Cluster Link and that it can use to set itself up. The easiest option is the database server and port we already know (and can access), so use those to replace dbserver and dbport in the example above.
 
In order to make the JGroups based Cluster Link work, you'll also need to set the following system properties for your JVM (for example via the JAVA_OPTS of Tomcat):
  • -Djava.net.preferIPv4Stack=true
  • -Djgroups.bind_addr=<local IP> (replace <local IP> on each node with the actual IP address of the node) 
  • -Djgroups.tcpping.initial_hosts=<node 1>[7800],<node 2>[7800] (replace <node 1>, <node 2>, etc... on each node with the actual IP addresses of the corresponding nodes and add more values, separated with a comma if your cluster has more than 2 nodes)
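On Tomcat these system properties are typically set in bin/setenv.sh. A sketch for the first node of a hypothetical two-node cluster, assuming the example IPs 10.0.0.11 and 10.0.0.12, could look like this:

```shell
# bin/setenv.sh on node 1 (the IPs 10.0.0.11/10.0.0.12 are examples)
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
# bind_addr is this node's own IP; it differs per node.
JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=10.0.0.11"
# initial_hosts lists all nodes and is identical on every node.
JAVA_OPTS="$JAVA_OPTS -Djgroups.tcpping.initial_hosts=10.0.0.11[7800],10.0.0.12[7800]"
```

On node 2 only jgroups.bind_addr changes to 10.0.0.12; the initial_hosts list stays the same on every node.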

EhCache: multicast

In a Liferay cluster the different Ehcache based caches also need to be aware of the other nodes, so that a change made on one node is correctly reflected on all the others. When your server environment supports multicast (some virtualization software has issues with this) and your system administrators allow you to use it, configuring Ehcache for a cluster is pretty easy. Just add the following lines to your portal-ext.properties on each node:

net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

When using multicast isn't possible you'll need to use the information in the next section of this blog.

EhCache: unicast

In some server environments it might not be possible or allowed to use multicast. Unfortunately multicast is the default way of communication in a Liferay cluster and the only thoroughly documented one. So when we were faced with the task of setting up a cluster using only unicast, we had to do some Sherlock Holmes level investigation. After many hours of Googling, reading forums, and trial and error, we were able to get it to work.

First off you need to create a JGroups configuration XML file that will be the basis of the unicast setup: it is what actually configures JGroups to use TCP instead of UDP. Once you have this file, you just need to reference it from a couple of properties and things will magically start working. Create an XML file with the content below, name it unicast.xml (the name itself is not important as long as you use the same value in portal-ext.properties) and place it in the WEB-INF/classes directory of Liferay:

<!--
    TCP based stack, with flow control and message bundling. This is usually used when IP
    multicasting cannot be used in a network, e.g. because it is disabled (routers discard multicast).
    Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly via system properties, e.g.
    
        -Djgroups.bind_addr=192.168.5.2 and -Djgroups.tcpping.initial_hosts=192.168.5.2[7800]
    
    author: Bela Ban
    version: $Id: tcp.xml,v 1.40 2009/12/18 09:28:30 belaban Exp $
-->
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-2.8.xsd"> 
    <TCP singleton_name="liferay" 
         bind_port="7800" 
         loopback="true" 
         recv_buf_size="${tcp.recv_buf_size:20M}" 
         send_buf_size="${tcp.send_buf_size:640K}" 
         discard_incompatible_packets="true" 
         max_bundle_size="64K"
         max_bundle_timeout="30" 
         enable_bundling="true" 
         use_send_queues="true" 
         sock_conn_timeout="300" 
         timer.num_threads="4" 
         thread_pool.enabled="true" 
         thread_pool.min_threads="1" 
         thread_pool.max_threads="10" 
         thread_pool.keep_alive_time="5000" 
         thread_pool.queue_enabled="false" 
         thread_pool.queue_max_size="100" 
         thread_pool.rejection_policy="discard" 
         oob_thread_pool.enabled="true" 
         oob_thread_pool.min_threads="1" 
         oob_thread_pool.max_threads="8" 
         oob_thread_pool.keep_alive_time="5000" 
         oob_thread_pool.queue_enabled="false" 
         oob_thread_pool.queue_max_size="100" 
         oob_thread_pool.rejection_policy="discard"/> 

    <TCPPING timeout="3000" 
             initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}" 
             port_range="1" 
             num_initial_members="3"/> 

    <MERGE2 min_interval="10000" max_interval="30000"/> 
    <FD_SOCK/> 
    <FD timeout="3000" max_tries="3" /> 
    <VERIFY_SUSPECT timeout="1500" /> 
    <BARRIER /> 
    <pbcast.NAKACK use_mcast_xmit="false" gc_lag="0" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/> 
    <UNICAST timeout="300,600,1200" /> 
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400K"/> 
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/> 
    <FC max_credits="2M" min_threshold="0.10"/> 
    <FRAG2 frag_size="60K" /> 
    <pbcast.STREAMING_STATE_TRANSFER/> 
    <!-- <pbcast.STATE_TRANSFER/> --> 
</config>

Once you have this file in place you just need to add some additional configuration to your portal-ext.properties to configure the Liferay cluster link and Ehcache to use it:

cluster.link.channel.properties.control=unicast.xml
cluster.link.channel.properties.transport.0=unicast.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/unicast.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=/unicast.xml

 

More blogs on Liferay and Java via http://blogs.aca-it.be.
