ServiceBuilder and Upgrade Processes

Technical Blogs May 17, 2017 By David H Nebinger


Today I ran into someone having issues with ServiceBuilder and the creation of UpgradeProcess implementations. The doco is a little bit confusing, so I thought I'd do a quick blog post sharing how the pieces fit...

Normal UpgradeProcess Implementations

As a reminder, you register UpgradeProcess implementations to support upgrading from, say, 1.0.0 to 2.0.0 when there is code you need to run to ensure that, once the upgrade completes, the system is ready to use your module. Say, for example, that you're storing XML in a column in the DB and in 2.0.0 you've changed the DTD; for those folks that already have 1.0.0 deployed, your UpgradeProcess implementation would be responsible for processing each existing record in the database to convert the contents over to the 2.0.0 version of the DTD. For non-ServiceBuilder modules, it is up to you to write the initial UpgradeProcess code for the 0.0.0 -> 1.0.0 version.

Through the lifespan of your plugin, you continue to add UpgradeProcess implementations to handle the automatic update for dot releases and major releases. The best part is that you don't have to care what version everyone is using; Liferay will apply the right upgrade processes to take users from whatever version they're currently at all the way through to the latest version.
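Conceptually, the chaining works something like this simplified sketch (illustrative logic only, not Liferay's actual implementation; real registrations map version ranges to UpgradeProcess instances):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of upgrade step chaining: each registered step maps a
// "from" version to a "to" version; starting at the deployed version, steps
// are applied until no further step matches (i.e. the latest version).
class UpgradeChain {

    private final Map<String, String> steps = new LinkedHashMap<>();

    void register(String from, String to) {
        steps.put(from, to);
    }

    // Returns the ordered list of steps that would be applied to move
    // from the currently deployed version to the latest version.
    List<String> plan(String current) {
        List<String> applied = new ArrayList<>();
        String version = current;
        while (steps.containsKey(version)) {
            String next = steps.get(version);
            applied.add(version + " -> " + next);
            version = next;
        }
        return applied;
    }
}
```

A user on 1.0.0 gets both steps applied in order, while a user already on 1.1.0 only gets the final step; that's why you never need to care which version a given installation is coming from.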

This is all good, of course, but ServiceBuilder, well it behaves a little differently.

ServiceBuilder service.xml Development

As you go through development and you change the entities in service.xml and rebuild services, ServiceBuilder will update the SQL files used to create the tables, indices, etc. When you deploy the service the first time, ServiceBuilder will happily identify the initial deployment and will use the SQL files to create the entities.

This is where things can go sideways... If I deploy version 1.0.0 of the service and version 2.0.0 comes out, the service developer needs to implement an UpgradeProcess that makes the necessary changes to the tables to get things ready for the current version of the service. If you did not deploy version 1.0.0 but are starting out on 2.0.0, you don't want to have to execute all of the upgrade processes individually; you want ServiceBuilder to do what it has always done and just use the SQL files to create version 2.0.0 of the entities.

So how do you support both of these scenarios correctly?

By using the Liferay-Require-SchemaVersion header¹ in your bnd.bnd file, that's how.

Supporting Both ServiceBuilder Upgrade Scenarios

The Liferay-Require-SchemaVersion header defines the current DB schema version number for your service modules. This version number should be incremented as you change your service.xml in preparation for a release.
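For example, a hypothetical service module preparing its 2.0.0 release might carry this in its bnd.bnd (the names here are illustrative):

```
Bundle-Name: Example Service
Bundle-SymbolicName: com.example.service
Bundle-Version: 2.0.0
Liferay-Require-SchemaVersion: 2.0.0
```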

There's code in the ServiceBuilder deployment which injects a hidden UpgradeProcess implementation that is defined to cover the "0.0.0" version (the version which represents the "new deployment") to the Liferay-Require-SchemaVersion version number.  So your first release will have the header set to 1.0.0, next release might be 2.0.0, etc.

So in our previous example with the 2.0.0 service release, when you deploy the service Liferay will match the "0.0.0" to "2.0.0" hidden upgrade process implementation provided by ServiceBuilder and will invoke it to get the 2.0.0 version of the tables, indices, etc. created for you using the SQL files.

The service developer must also code and register the manual UpgradeProcess instances that support the incremental upgrade. So for the example, there would need to be a 1.0.0 -> 2.0.0 UpgradeProcess implementation so when I deploy 2.0.0 to replace my 1.0.0 deployment, the UpgradeProcess will be used to modify my DB schema to get it up to version 2.0.0.


As long as you properly manage both the Liferay-Require-SchemaVersion header in the bnd.bnd file and provide your corresponding UpgradeProcess implementations, you will be able to easily handle the first time deployment as well as the upgrade deployments.

An important side effect to note here: you must manage your Liferay-Require-SchemaVersion correctly. If you set it initially to 1.0.0 and forget to update it on future releases, your users will have all kinds of issues. For initial deployments, Liferay would create the entities using the latest SQL files and then try to apply UpgradeProcess implementations on top of them, making modifications they really don't have to worry about. For upgrade deployments, Liferay may not process upgrades because it believes the schema is already at the appropriate version.

¹ If the Liferay-Require-SchemaVersion header is missing, the value for the Bundle-Version will be used instead.

Increasing Capacity and Decreasing Response Times Using a Tool You're Probably Not Familiar With

Technical Blogs May 7, 2017 By David H Nebinger


When it comes to Liferay performance tuning, there is one golden rule:

The more you offload from the application server, the better your performance will be.

This applies to all aspects of Liferay. Using Solr/Elastic is always better than using the embedded Lucene. While PDFBox works, you get better performance by offloading that work to ImageMagick and GhostScript.

You can get even better results by offloading work before it gets to the application server. What I'm talking about here is caching, and one tool I like to recommend for this is Varnish.

According to the Varnish site:

Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.

So I've found the last claim to be a little extreme, but I can say for certain that it can offer significant performance improvement.

Basically Varnish is a caching appliance.  When an incoming request hits Varnish, it will look in its cache to see if the response has been rendered before. If it isn't in the cache, Varnish will pass the request to the back end and store the response (if possible) in the cache before returning it to the original requestor.  As additional matching requests come in, Varnish will be able to serve the response from the cache instead of sending the request to the back end for processing.
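The core idea can be sketched in a few lines (a toy model, not Varnish's implementation; real caching also involves TTLs, invalidation and header inspection):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of a caching reverse proxy: cacheable responses are stored on
// first request (a MISS) and served from the cache afterwards (a HIT),
// sparing the backend the work of re-rendering them.
class CachingProxy {

    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> backend;

    int backendCalls = 0;

    CachingProxy(Function<String, String> backend) {
        this.backend = backend;
    }

    String handle(String url, boolean cacheable) {
        if (cacheable && cache.containsKey(url)) {
            return cache.get(url); // HIT: the backend never sees the request
        }
        backendCalls++;
        String response = backend.apply(url); // MISS (or uncacheable): render
        if (cacheable) {
            cache.put(url, response);
        }
        return response;
    }
}
```

The value of the tool is directly proportional to how many requests end up in the HIT branch, which is exactly why the two requirements below matter.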

So there are two requirements that need to be met to get value out of the tool:

  1. The responses have to be cacheable.
  2. The responses must take time for the backend to generate.

As it turns out for Liferay, both of these are true.

So Liferay can actually benefit from Varnish, but we can't just make such a claim, we'll need to back it up w/ some testing.

The Setup

To complete the test I set up an Ubuntu VirtualBox instance w/ 12G of memory and 4 processors, and I pulled in a Liferay DXP FP 15 bundle (no performance tuning for JVM params, etc). I also compiled Varnish 4.1.6 on the system. For both tests, Tomcat will be running using 8G and Varnish will also be running w/ an allocation of 2G (even though varnish is not used for the Tomcat test, I think it is "fairer" to keep the tests as similar as possible).

In the DXP environment I'm using the embedded ElasticSearch and HSQL for the database (not a prod configuration but both tests will have the same bad baseline). I deployed the free Porygon theme from the Liferay Marketplace and set up a site based on the theme. The home page for the Porygon demo site has a lot of graphics and stuff on it, so it's a really good site to look at from a general perspective.

The idea here was not to focus on Liferay tuning too much, to get a site up that was serving a bunch of mixed content. Then we measure a non-Varnish configuration against a Varnish config to see what impact Varnish can have in performance terms.

We're going to test the configuration using JMeter and we're going to hit the main page of the Porygon demo site.

Testing And Results

JMeter was configured to use 100 users and loop 20 times.  Each test would touch the home page, the photography, science and review pages, and would also visit 3 article pages. JMeter was configured to retrieve all related assets synchronously to exaggerate the response time from the services.

Response Times

Let's dive right in with the response times for the test from the non-Varnish configuration:

Response Times Without Varnish

The runtime for this test was 21 minutes, 20 seconds. The 3 article pages are the lines near the bottom of the graph, the lines in the middle are for the general pages w/ the asset publishers and all of the extra details.

Next graph is the response times from the Varnish configuration:

Response Times With Varnish

The runtime for this test was 11 minutes, 58 seconds, a 44% reduction in test time, and it's easy to see that while the non-Varnish tests seem to float around the 14 second mark, the Varnish tests come in around 6 seconds.

If we rework the graph to adjust the y-axis to remove the extra whitespace we see:

Response Times With Varnish

The important part here for me was the lines for the individual articles. In the non-Varnish test, /web/porygon-demo/-/space-the-final-frontier?inheritRedirect=true&redirect=%2Fweb%2Fporygon-demo shows up around the 1 second response time, but with Varnish it hovers at the 3 second response time.  Keep that in mind when we discuss the custom VCL below.

Aggregate Response Times

Let's review the aggregate graphs from the tests.  First the non-Varnish graph:

Aggregate Without Varnish

This reflects what we've seen before; individual pages are served fairly quickly, pages w/ all of the mixed content take significantly longer to load.

And the graph for the Varnish tests:

Aggregate With Varnish

At the same scale, it is easy to see that Varnish has greatly reduced the response times.  Adjusting the y-axis, we get the following:

Aggregate With Varnish


So there are a few parts that quickly jump out:

  • There was a 44% reduction in test runtime reflected by decreased response times.
  • There was a measurable (but unmeasured) reduction in server CPU load since Liferay/Tomcat did not have to serve all traffic.
  • Since work is offloaded from Liferay/Tomcat, overall capacity is increased.
  • While some response times were greatly improved by using Varnish, others suffered.

The first three bullets are easy to explain.  As Varnish is able to cache "static" responses from Liferay/Tomcat, it can serve those responses from the cache instead of forcing Liferay/Tomcat to build a fresh response every time.  Having Liferay/Tomcat rebuild responses each time requires CPU cycles, so returning a cached response reduces the CPU load.  And since Liferay/Tomcat is not busy rebuilding the responses that now come from the cache, Liferay/Tomcat is free to handle responses that cannot be cached; basically the overall capacity of Liferay/Tomcat is increased.

So you might be asking: since Varnish is so great, why do the single article pages suffer from a response time degradation? Well, that is due to the custom VCL script used to control the caching.

The Varnish VCL

So if you don't know about Varnish, you may not be aware that caching is controlled by a VCL (Varnish Configuration Language) file. This file is closer to a script than it is a configuration file.

Normally Varnish operates by checking the backend response cache control headers; if a response can be cached, it will be, and if the response cannot be cached it won't. The impact of Varnish is directly related to how many of the backend responses can be cached.

You don't have to rely solely on the cache control headers from the backend to determine cacheability; this is especially true for Liferay. Through the VCL, you can actually override the cache control headers and make some responses cacheable that otherwise would not have been, and make other responses uncacheable even when the backend says caching is acceptable.

So now I want to share the VCL script used for the test, but I'll break it up into parts to discuss the reasons for the choices that I made. The whole script file will be attached to the blog for you to download.

In the sections below comments have been removed to save space, but in the full file the comments are embedded to explain everything in detail.

Varnish Initialization

vcl 4.0;

import directors;
import std;

probe company_logo {
  .request =
    "GET /image/company_logo HTTP/1.1"
    "Connection: close";
  .timeout = 100ms;
  .interval = 5s;
  .window = 5;
  .threshold = 3;
}

backend LIFERAY {
  .host = "";
  .port = "8080";
  .probe = company_logo;
}

sub vcl_init {
  new dir = directors.round_robin();
  dir.add_backend(LIFERAY);
}

So in Varnish you need to declare your backends to connect to.  In this example I've also defined a probe request used to verify health of the backend.  For probes it is recommended to use a simple request that results in a small response; you don't want to overload the system with all of the probe requests.

Varnish Request

sub vcl_recv {
  if (req.url ~ "^/c/") {
    return (pass);
  }

  if (req.url ~ "/control_panel/manage") {
    return (pass);
  }

  if (req.url !~ "\?") {
    return (pass);
  }
}

The request handling basically determines whether to hash (lookup request from the cache) or pass (pass request directly to backend w/o caching).

For all requests that start with the "/c/..." URI, we pass those to the backend.  They represent requests for /c/portal/login or /c/portal/logout and the like, so we never want to cache those regardless of what the backend might say.

Any control panel requests are also passed directly to the backend. We wouldn't want to accidentally expose any of our configuration details now, would we?

Otherwise the code tries to force hashing of binary files (mp3, image, etc.) where possible and conforms to most typical VCL implementations.

The last check of whether the URL contains a '?' character, well I'll be getting to that later in the conclusion...

Varnish Response

sub vcl_backend_response {

  if (bereq.url ~ "^/c/") {
    return (deliver);
  }

  if (bereq.url ~ "\.(ico|css)(\?[a-z0-9=]+)?$") {
    set beresp.ttl = 1d;
  } else if (bereq.url ~ "^/documents/" && beresp.http.content-type ~ "image/*") {
    if (std.integer(beresp.http.Content-Length, 0) < 10485760) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
        unset beresp.http.Cache-Control;
        unset beresp.http.set-cookie;
      }
    }
  } else if (beresp.http.content-type ~ "text/javascript|text/css") {
    if (std.integer(beresp.http.Content-Length, 0) < 10485760) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
      }
    }
  }
}

The response handling also passes the /c/ type URIs back to the client w/o caching.

The most interesting part of this section is the testing for content type and altering caching as a result.  Normally VCL rules will look for some request for "/blah/blah/blah/my-javascript.js" by checking for the extension as part of the URI.

But Liferay really doesn't use these standard extensions.  For example, with Liferay you'll see a lot of requests like /combo/?browserId=other&minifierType=&languageId=en_US&b=7010&t=1494083187246&/o/frontend-js-web/liferay/portlet_url.js&.... These kinds of requests do not have the standard extension on them, so normal VCL matching patterns would discard the request as uncacheable. Using the VCL override logic above, the request will be treated as cacheable since it is just a request for some JS.

The same kind of logic applies to the /documents/ URI prefix; anything w/ this prefix is a fetch from the document library.  Full URIs are similar to /documents/24848/0/content_16.jpg/027082f1-a880-4eb7-0938-c9fe99cefc1a?t=1474371003732.  Again, since it doesn't end w/ a standard extension, the image might not be cached. The override rule above will match on the /documents/ prefix plus an image content type and treat the request as cacheable.


Conclusions

So let's start with the easy ones...

  • Adding Varnish can decrease your response times.
  • Adding Varnish can reduce your server load.
  • Adding Varnish can increase your overall capacity.

Honestly I was expecting that to be the whole list of conclusions I was going to have to worry about. I had this sweet VCL script and performance times were just awesome. As a final test, I tried logging into my site with Varnish in place and, well, FAIL.  I could log in, but I didn't get the top bar or access to the left or right sidebars or any of these things.

I realized that I was actually caching the response from the friendly URLs and, well, for Liferay those are typically dynamic pages.  There is logic specifically in the theme template files that change the content depending upon whether you are logged in or not.  Because my Varnish script was caching the pages when I was not logged in, after I logged in the page was coming from the cache and the necessary stuff I needed was now gone.

I had to add the check for the "?" character in the requests to determine if it was a friendly URL or not.  If it was a friendly URL, I had to treat those as dynamic and had to send them to the backend for processing.

This leads to the poor performance, for example, on the single article display pages.  My first VCL was great, but it cached too much.  My addition for friendly URLs solved the login issue but now prevented caching pages that maybe could have been cached, so I swung too far the other way; but since the general results were still awesome I just went with what I had.

Now for the hard conclusions...

  • Adding Varnish requires you to know your portal.
  • Adding Varnish requires you to know your use cases.
  • Adding Varnish requires you to test all aspects of your portal.
  • Adding Varnish requires you to learn how to write VCL.

The VCL really isn't that hard to wrap your head around.  Once you get familiar with it, you'll be able to customize the rules to increase your cacheability factor without sacrificing the dynamic nature of your portal.  In the attached VCL, we add a response header for a cache HIT or MISS, and this is quite useful for reviewing the responses from Varnish to see if a particular response was cached or not (remember the first request will always be a MISS, so check again after a page refresh).

I can't emphasize the testing enough though.  You want to manually test all of your pages a couple of times, logged in and not logged in, logged in as users w/ different roles, etc., to make sure each UX is correct and that you're not bleeding views that should not be shared.

You should also do your load testing.  Make sure you're getting something out of Varnish and that it is worthwhile for your particular situation.

Note About SSL

Before I forget, it's important to know that Varnish doesn't really talk SSL, nor does it talk AJP.  If you're using SSL, you're going to want to have a web server sitting in front of Varnish to handle SSL termination.

And since Varnish doesn't talk AJP, you will have to configure HTTP connections from both the web server and the app server.

This points toward the reasoning behind my recent blog post about configuring Liferay to look at a header for the HTTP/HTTPS protocols.  In my environment I was terminating SSL at Apache and needed to use the HTTP connectors to Varnish and again to Tomcat/Liferay.

Although it was suggested in a few of the comments that separate connections could be used to facilitate the HTTP and HTTPS traffic, etc., those options would defeat some of the Varnish caching capabilities. You'd either have separate caches for each connection type (or perhaps no cache on one of them) or other unforeseen issues. Being able to route all traffic through a single pipe to Varnish will ensure Varnish can cache the response regardless of the incoming protocol.

Update - 05/16/2017

Small tweak to the VCL script attached to the blog, I added rules to exclude all URLs from /api/* from being cached.  Those are basically your web service calls and rarely would you really want to cache those details.  Find the file named localhost-2.vcl for the update.

Revisiting SSL Termination at Apache HTTPd

Technical Blogs May 5, 2017 By David H Nebinger

So I have a blog I created a long time ago dealing w/ Liferay and SSL. The foundation of that blog post was my Fronting Liferay Tomcat with Apache HTTPd post and added terminating SSL at HTTPd and configuring the Liferay instance running under Tomcat to use HTTPS for all of the communication.

If you tear into the second post, you'll find that I was using the AJP connector to join HTTPd and Tomcat together.

This is actually a key aspect for a working setup for SSL + HTTPd + Liferay/Tomcat.

Today I was actually working on a similar setup that used the HTTP connector for SSL + HTTPd + Liferay/Tomcat. Unauthenticated traffic worked just fine, but as soon as you would try to access a secured resource that required authentication, a redirect loop resulted with HTTPd finally terminating the loop.

The only info I had was the redirect URL; there were no log messages in Liferay/Tomcat, just repeated 302 responses in the HTTPd logs.

My good friend and coworker Nathan Shaw told me of a similar case he was aware of involving Nginx; although the web servers differ, the 302 redirect loop on /c/portal/login?null was an exact match.

The crux of the issue is the setting of a single portal property.

Basically when you set this property to true, you are saying that when a user logs in, you want to force them into the secure https side. Seems pretty simple, right?

So in this configuration, when a user on http:// wants to or needs to log in, they basically end up hitting http://.../c/portal/login. This is where a check for HTTPS is done and, since the connection is not yet HTTPS, Liferay will issue a redirect back to https://.../c/portal/login to complete the login.

And this, in conjunction with the HTTP connector between HTTPd and Liferay/Tomcat, is what causes the redirect loop.

Liferay responds with the 302 to try and force you to https, you submit again but SSL terminates at HTTPd and the request is sent via the HTTP connector to Liferay/Tomcat.  Well, Liferay/Tomcat sees the request came in on http:// and again issues the 302 redirect. You're now in redirect loop hell.

Fortunately, this is absolutely fixable.

Liferay has a set of settings to mitigate the SSL issue. They are:

# Set this to true to use the property "web.server.forwarded.protocol.header"
# to get the protocol. The property "web.server.protocol" must not have been
# overridden.
#
web.server.forwarded.protocol.enabled=false

# Set the HTTP header to use to get the protocol. The property
# "web.server.forwarded.protocol.enabled" must be set to true.
#
web.server.forwarded.protocol.header=X-Forwarded-Proto

The important property is the first one.  When that property is true, Liferay will ignore the protocol (http vs https) of the incoming request and will instead use a request header to see what the original protocol for the request actually was.
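In pseudo-Java, the effect of the two properties is roughly this (a sketch of the behavior, not Liferay's actual code):

```java
// Sketch of the header-based protocol check: when forwarding is enabled and
// the header is present, trust the header; otherwise fall back to the
// protocol of the incoming connection itself.
class ProtocolCheck {

    static String effectiveProtocol(boolean forwardedEnabled,
            String forwardedHeader, String connectionProtocol) {

        if (forwardedEnabled && forwardedHeader != null) {
            // e.g. "https", set by HTTPd via the X-Forwarded-Proto header
            return forwardedHeader;
        }
        // e.g. "http", the proxy connection's own protocol
        return connectionProtocol;
    }
}
```

With the header check enabled, a request that arrives over the plain HTTP connector but carries X-Forwarded-Proto: https is treated as secure, and the redirect loop disappears.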

The header name can be specified using the second property, but the default works just fine. The header name is also the term to google when looking for the equivalent configuration for your particular web server.

I'll save you the trouble for Apache HTTPd; you just need to add a couple of lines to your <VirtualHost /> elements:

<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
</VirtualHost>

<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>

That's it.

For every incoming request getting to HTTPd, a header is added with the request protocol.  When the ProxyPass configuration forwards the request to Liferay/Tomcat, Liferay will use the header for the https:// check rather than the actual protocol of the connection from HTTPd.
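For reference, a minimal sketch of the HTTPS virtual host with the proxying in place might look like this (the localhost:8080 backend address is an assumption for illustration; SSL certificate directives are omitted):

```
<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"

    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```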

Some of you are going to be asking

Why are you using the HTTP connector to join HTTPd to Liferay/Tomcat anyway? The AJP connector is the best connector to use in this configuration because it performs better than the HTTP connector and avoids this and other issues that can happen when using the HTTP connector.

You would be, of course, absolutely right about that. For a simple configuration like this where you only have HTTPd <-> Liferay/Tomcat, using the HTTP connector is frowned upon.

That said, I've got another exciting blog post in the pipeline that will force moving to this configuration... I'm not getting into any details at this point, but suffice it to say that when you see the results I've been gathering, you too will be looking at this configuration.


Technical Blogs May 3, 2017 By David H Nebinger

In case you aren't aware, Liferay 7 CE and Liferay DXP default to using Hikari CP for the connection pools.

Why?  Well here's a pretty good reason:

Hikari just kicks the pants off of any other connection pool implementation.

So Liferay is using Hikari CP, and you should too.

I know what you're thinking.  It's something along the lines of:

But Dave, we're following the best practice of defining our DB connections as Tomcat <Resource /> JNDI definitions so we don't expose our database connection details (URLs, usernames or passwords) to the web applications.  So we're stuck with the crappy Tomcat connection pool implementation.

You might be thinking that, but if you are, thankfully you'd be wrong.

Installing the Hikari CP Library for Tomcat

So this is pretty easy, but you have two basic options.

First is to download the .zip or .tar.gz file from the HikariCP project site.  This is actually a source release that you'll need to build yourself.

Second option is to download the built jar from a source like Maven Central.

Once you have the jar, copy to the Tomcat lib/ext directory.  Note that Hikari CP does have a dependency on SLF4J, so you'll need to put that jar into lib/ext too.

Configuring the Tomcat <Resource /> Definitions

Location of your JNDI datasource <Resource /> definitions depends upon the scope for the connections.  You can define them globally by specifying them in Tomcat's conf/server.xml and conf/context.xml, or you can scope them to individual applications by defining them in conf/Catalina/localhost/WebAppContext.xml (where WebAppContext is the web application context for the app, basically the directory name from Tomcat's webapps directory).

For Liferay 7 CE and Liferay DXP, all of your plugins belong to Liferay, so it is usually recommended to put your definitions in conf/Catalina/localhost/ROOT.xml.  The only reason to make the connections global is if you have other web applications deployed to the same Tomcat container that will be using the same database connections.

So let's define a JNDI datasource in ROOT.xml for a Postgres database...

Create the file conf/Catalina/localhost/ROOT.xml if it doesn't already exist.  If you're using a Liferay bundle, you will already have this file.

Hikari CP supports two different ways to define your actual database connections.  The first, and the one they prefer, is based upon using a DataSource instance (the more standard way of establishing a connection with credentials); the second is the older way using a DriverManager instance (a legacy approach that has different ways of passing credentials to the DB driver).

We'll follow their advice and use the DataSource.  Use the table in the HikariCP documentation to find your data source class name; we'll need it when we define the <Resource /> element.

Gather up your JDBC url, username and password because we'll need those too.

Okay, so in ROOT.xml inside of the <Context /> tag, we're going to add our Liferay JNDI data source connection resource:

<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5"
      maximumPoolSize="10"
      connectionTimeout="300000"
      dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
      dataSource.url="jdbc:postgresql://localhost:5432/lportal"
      dataSource.user="user"
      dataSource.password="pwd" />

So this is going to define our connection for Liferay and have it use the Hikari CP pool.

Now if you really want to stick with the older driver-based configuration, then you're going to use something like this:

<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5"
      maximumPoolSize="10"
      connectionTimeout="300000"
      driverClassName="org.postgresql.Driver"
      jdbcUrl="jdbc:postgresql://localhost:5432/lportal"
      dataSource.user="user"
      dataSource.password="pwd" />


Yep, that's pretty much it.  When you restart Tomcat you'll be using your flashy new Hikari CP connection pool.

You'll want to take a look at the HikariCP documentation to find additional tuning parameters for your connection pool as well as the info for the minimum idle, max pool size and connection timeout details.

And remember, this is going to be your best production configuration.  If you're using portal-ext.properties to set up any of your database connection properties, you're not as secure as you can be.  Remember, a hacker needs information to infiltrate your system; the more details of your infrastructure you expose, the more info you give a hacker to worm their way in.  Using the portal-ext.properties approach, you're exposing your JDBC URL (so hostname and port as well as the DB server type) and the credentials (which will work for DB login, but sometimes they might also be system login credentials).  This kind of info is worth its weight in gold to a hacker trying to infiltrate you.

So follow the recommended practice of using your JNDI references for the database connections and keep this information out of the hackers' hands.


Available Now: TripWire

General Blogs March 28, 2017 By David H Nebinger

For the last few months as I've been working with Liferay 7 CE / Liferay DXP, I've been a little stymied trying to manage the complexities of the new OSGi universe.

In Liferay 6.x, for example, an OOTB demo setup of Liferay comes with like 5 or 6 war files.  And when the portal starts up, they all start up.

But with Liferay 7 CE and Liferay DXP, there are a lot of bundles in the mix. Liferay 7 CE GA3, for example, has almost 2,500 bundles in OSGi.

And when the portal starts up, most of these will also start. Some will not. Some might not be able to. Some can't start because they have unsatisfied dependencies.

But you're not going to know it.

Seriously, you won't know if something has failed to start when you restart your environment. There may or may not be something in the log. Someone might have stopped a bundle intentionally (or unintentionally) in the gogo shell w/o telling you. And with almost 2,500 bundles in there, it's going to be really hard finding the needle in the haystack especially if you don't know if there's a needle in there at all.

So I've been working on a new utility over the past few months to resolve the situation - TripWire.


TripWire actually scans the OSGi environment to gather information about deployed bundle statuses, bundle versions, and service components. TripWire also scans the system and portal properties.

This scanning is done at two points, the first is when an administrator takes a snapshot (basically to persist a baseline for all comparisons), and the second is a scheduled task that runs on the node to monitor for changes. The comparison scan can also be kicked off manually.

After installing TripWire and navigating to the TripWire control panel, you'll be prompted to capture an initial baseline scan:

Click the Take Snapshot card to see the system snapshot:

You can save the new baseline (to be compared against in the automated scans), you can export the snapshot (it downloads as an Excel spreadsheet), or you can cancel.

Each section expands to show captured details:

The funny looking hash keys at the top? Those are calculated hashes from the scanned areas. By comparing the baseline hash against the scanned hash, TripWire knows quickly if there is a variation between the baseline and the current scan.
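Conceptually, that hash comparison might look like the following sketch (illustrative only, not TripWire's actual hashing scheme): hash a canonical, sorted rendering of the scanned values so that equal content always produces equal hashes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

// Hash a canonical (sorted) rendering of scanned key/value pairs; if the
// baseline hash and the current scan hash match, nothing has drifted.
class SnapshotHash {

    static String hash(Map<String, String> scanned) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (Map.Entry<String, String> e : new TreeMap<>(scanned).entrySet()) {
                md.update((e.getKey() + "=" + e.getValue() + "\n")
                    .getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Comparing two short hashes is much cheaper than diffing thousands of bundle states and property values item by item.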

When you save the new baseline, the main page will reflect that the server is currently consistent with the baseline:

You can test the server to force a scan by clicking on the Test Server card:


TripWire supports dynamically creating exclusion rules to exclude items from being part of the scan.  You might add an exclusion for a property value that you're not interested in monitoring, for example. Click on the Exclusions card and then on the Add New Exclusion Rule button:

The Camera drop-down lists all of the current cameras used when taking a snapshot. Choose either a specific camera or the Any Camera option to allow a match against any camera.

The Type drop-down allows you to select a Match, a Starts With, a Contains or a Regular Expression type for the exclusion rule.

The value field is what to match against, and the Enabled slider allows you to disable a particular exclusion rule.
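The four rule types behave like standard string and regex matching. A minimal sketch of how such a rule might be evaluated (hypothetical names — this is not TripWire's implementation):

```java
import java.util.regex.Pattern;

public class ExclusionRule {

    public enum Type { MATCH, STARTS_WITH, CONTAINS, REGEX }

    private final Type type;
    private final String value;
    private final boolean enabled;

    public ExclusionRule(Type type, String value, boolean enabled) {
        this.type = type;
        this.value = value;
        this.enabled = enabled;
    }

    // Returns true when the scanned item should be excluded from comparison.
    public boolean excludes(String item) {
        if (!enabled) {
            return false;
        }
        switch (type) {
            case MATCH:       return item.equals(value);
            case STARTS_WITH: return item.startsWith(value);
            case CONTAINS:    return item.contains(value);
            case REGEX:       return Pattern.matches(value, item);
            default:          return false;
        }
    }

    public static void main(String[] args) {
        ExclusionRule rule = new ExclusionRule(Type.STARTS_WITH, "catalina.", true);
        System.out.println(rule.excludes("catalina.base")); // true
        System.out.println(rule.excludes("java.version"));  // false
    }
}
```

This is why a Starts With rule on "catalina." is enough to drop every Tomcat-specific system property from the scan.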

Modifying the exclusion rules affects scans immediately and can result in failed scans:

By adding the rule to exclude any System Property that starts with "catalina.", scans now show the server to be inconsistent when compared to the baseline. At this point you can take a new baseline snapshot to approve the change, or you could disable the exclusion rule (basically reverting the change to the system) to restore baseline consistency.


TripWire uses Liferay notifications to alert subscribed administrators when the node is in an inconsistent state and when the node returns to a consistent state. For Liferay 7 CE, a subscribed administrator will only receive notifications about the single Liferay node. For Liferay DXP, subscribed administrators will receive notifications from every node that is out of sync with the baseline snapshot.

Notifications will be issued for every failed scan on every node until consistency is restored.

To subscribe or unsubscribe to notifications, click on the Subscriptions card. If you are unsubscribed, the bell image will be grey; if you are subscribed, the bell will be blue with a red notification number on it. Note that this number does not represent the number of notifications you currently have; it is just a visual marker that you are subscribed for notifications.


TripWire supports setting configuration for the scanning schedule. Click on the Configuration card:

Using the Cameras tab, you can also choose the cameras to use in the snapshots and scans:

Normally I recommend enabling all of the cameras except the Service Status Camera (because this camera is quite verbose in the details it captures).

The Bundle Status Camera captures status for each bundle.

The Bundle Version Camera captures versions of every bundle.

The Configuration Admin Camera captures configuration changes from the control panel. Note that Configuration Admin only saves values that differ from the defaults on each panel, so the details in this section will always be shorter than the actual set of configurations for the portal.

The Portal Properties Camera captures changes to known Liferay portal properties (unknown properties are ignored). In a Liferay DXP cluster, some properties will need to be excluded using the Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Service Status Camera captures counts of OSGi DS Services and their statuses.

The System Properties Camera captures changes to system properties from the JVM. Like the portal properties, in a Liferay DXP cluster some properties will need to be excluded using Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Unsatisfied References Camera captures the list of bundles with unsatisfied references (which prevent those bundles from starting). Any time a bundle has an unsatisfied reference, the bundle and its unsatisfied reference(s) will be captured by this camera.

The three email tabs configure who the notification emails are from and the consistent/inconsistent email templates.

Liferay DXP

For Liferay DXP clusters, TripWire uses the same baseline across all nodes in the cluster and reports on cluster node inconsistencies:

Clicking on the server link in the status area, you can review the server's report to see where the problems are:

Some of the additions and changes are due to unique node values and should be handled by adding new Exclusion Rules.

The Removals above show that one node in the cluster has Audience Targeting deployed but the other node does not. This is the kind of inconsistency that you may not be aware of at the cluster level, but it can result in your DXP cluster not serving the right content to all users. Identifying such a discrepancy quickly and easily will save you time, money and effort.

For your cluster Exclusion Rules, your rule list will be quite long:


That's TripWire.

It is available from the Liferay Marketplace:

There is a cost for each version, but that is to offset the time and effort I have invested in this tool.

And while there may not seem to be an immediate return, the first time this tool identifies a node that is out of sync or an unauthorized change to your OSGi environment, it will save you time (waiting for the change to be identified), effort (sorting through all of the gogo output and other details), user impressions (from cluster node sync issues) and, most of all, money.



Liferay DXP and WebLogic...

Technical Blogs March 21, 2017 By David H Nebinger

For those of you deploying Liferay DXP to WebLogic, you will need to add an override property to your portal-ext.properties file to allow the WebLogic JAXB implementation to peer inside the OSGi environment to create proxy instances.

I know, it's a mouthful, but it's all pretty darn technical. You'll know if you need this if you start seeing exceptions like:

java.lang.NoClassDefFoundError: org/eclipse/persistence/internal/jaxb/WrappedValue
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(
	at java.lang.ClassLoader.defineClass(
	at org.eclipse.persistence.internal.jaxb.JaxbClassLoader.generateClass(
	at org.eclipse.persistence.jaxb.compiler.MappingsGenerator.generateWrapperClass(
	Truncated. see log file for complete stacktrace
Caused By: java.lang.ClassNotFoundException: org.eclipse.persistence.internal.jaxb.WrappedValue cannot be found by com.example.bundle_1.0.0
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(
	at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(
	at java.lang.ClassLoader.loadClass(
	Truncated. see log file for complete stacktrace

Suffice it to say that you need to override the property to add the following two lines:


You have to include all of the packages from the default value defined in portal.properties, so my entry actually looks like:


Creating a Spring MVC Portlet War in the Liferay Workspace

Technical Blogs February 28, 2017 By David H Nebinger


So I've been working on some new Blade sample projects, and one of those is the Spring MVC portlet example.

As pointed out in the Liferay documentation for Spring MVC portlets, these guys need to be built as war files, and the Liferay Workspace will actually help you get this work done. I'm going to share things that I learned while creating the sample, which has not yet been merged but hopefully will be soon.

Creating The Project

So your war projects need to go in the wars folder inside of your Liferay Workspace folder, basically at the same level as your modules directory. If you don't have a wars folder, go ahead and create one.

Next we're going to have to manually create the portlet project. Currently Blade does not have support for building a Spring MVC portlet war project; perhaps this is something that can change in the future.

Inside of the wars folder, you create a folder for each portlet WAR project that you are building. To be consistent, my project folder was named blade.springmvc.web, but your project folder can be named according to your standards.

Inside your project folder, you need to set up the folder structure for a Spring MVC project. Your project structure will resemble:

Spring MVC Project Structure

For the most part this structure is similar to what you would use in a Maven implementation. For those coming from a legacy SDK, the contents of the docroot folder go to the src/main/webapp folder, while the contents of docroot/WEB-INF/src go in the src/main/java folder (or src/main/resources for non-Java files).

Otherwise this structure is going to be extremely similar to legacy Spring MVC portlet wars, all of the old locations basically still apply.
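In plain text, the mapping from the legacy SDK layout looks roughly like this (a sketch; folder names are illustrative):

```
blade.springmvc.web/
├── build.gradle
└── src/
    └── main/
        ├── java/         <- was docroot/WEB-INF/src (.java files)
        ├── resources/    <- was docroot/WEB-INF/src (non-Java files)
        └── webapp/       <- was docroot (WEB-INF/, css/, JSPs, ...)
```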

Build.gradle Contents

The fun part for us is the build.gradle file. This file controls how Gradle is going to build your project into a war suitable for distribution.

Here's the contents of the build.gradle file for my blade sample:

buildscript {
  repositories {
    maven {
      url ""
    }
  }

  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.css.builder", version: "latest.release"
    classpath group: "com.liferay", name: "com.liferay.css.builder", version: "latest.release"
  }
}

apply plugin: "com.liferay.css.builder"

war {
  dependsOn buildCSS

  filesMatching("**/.sass-cache/") {
    it.path = it.path.replace(".sass-cache/", "")
  }

  includeEmptyDirs = false
}

dependencies {
  compileOnly project(':modules:blade.servicebuilder.api')
  compileOnly 'com.liferay.portal:com.liferay.portal.kernel:2.6.0'
  compileOnly 'javax.portlet:portlet-api:2.0'
  compileOnly 'javax.servlet:javax.servlet-api:3.0.1'
  compileOnly 'org.osgi:org.osgi.service.component.annotations:1.3.0'
  compile group: 'aopalliance', name: 'aopalliance', version: '1.0'
  compile group: 'commons-logging', name: 'commons-logging', version: '1.2'
  compileOnly group: 'javax.servlet.jsp.jstl', name: 'jstl-api', version: '1.2'
  compileOnly group: 'org.glassfish.web', name: 'jstl-impl', version: '1.2'
  compile group: 'org.springframework', name: 'spring-aop', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-beans', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-context', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-core', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-expression', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc-portlet', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-web', version: '4.1.9.RELEASE'
}

So first is the buildscript stanza and the CSS builder apply line, followed by the war customization stanza.

These parts are currently necessary to support compiling the SCSS files into CSS and storing the resulting files in the right place in the WAR. Note that Liferay currently sees this manual invocation of the CSS builder plugin as a bug and plans on fixing it sometime soon.

Managing Dependencies

The next part is the dependencies, and this will be the fun part for you as it was for me.

You're going to be picking from two different dependency types, compile and compileOnly. The big difference is whether the dependencies get included in the WEB-INF/lib directory (for compile) or just used for the project compile but not included (compileOnly).

Many of the Liferay or OSGi jars, such as portal-kernel or the servlet and portlet APIs, should not be included in your WEB-INF/lib directory, but they are needed for compilation, so they are marked as compileOnly.

In Liferay 6.x development, we used to be able to use the portal-dependency-jars property in liferay-plugin-package.properties to inject libraries into our wars at deployment time. But not in Liferay 7 CE/Liferay DXP development.

The portal-dependency-jars property is deprecated in Liferay 7 CE/Liferay DXP. All dependencies must be included in the war at build time.

Since we cannot use the portal dependencies mechanism, I had to manually include the Spring jars using the compile type.


Yep, that's pretty much it.

Since it's in the Liferay Workspace and is Gradle-built, you can use the gradle wrapper script at the root of the project to build everything, including the portlet wars.

Your built war will be in the project's build/libs directory, and this war file is ready to be deployed to Liferay by dropping it in the Liferay deploy folder.

Debugging "ClassNotFound" exceptions, etc., in your war file can be extremely challenging since Liferay doesn't really keep anything around from the WAR->WAB conversion process. If you add the following properties to portal-ext.properties, the WABs generated by Liferay will be saved so you can open the file and see what jars were injected and where all of the files landed in the WAB:

module.framework.web.generator.generated.wabs.store=true
module.framework.web.generator.generated.wabs.store.dir=${module.framework.base.dir}/wabs

If you want to check out the project, it is currently live in the blade samples:

Proper Portlet Name for your Portlet components...

Technical Blogs February 28, 2017 By David H Nebinger

Okay, this is probably going to be one of my shortest blog posts, but it's important.

Some releases of Liferay have code to "infer" a portlet name if it is not specified in the component properties.  This actually conflicts with other pieces of code that also try to "infer" what the portlet name is.

The problem is that they sometimes have different requirements; in one case, periods are fine in the name so the full class name is used (in the portlet component), but in other cases periods are not allowed so it uses the class name with the periods replaced by underscores.

Needless to say, this can cause you problems if Liferay is trying to use two different portlet names for the same portlet, one that works and the other that doesn't.

So save yourself some headaches and always assign your portlet name in the properties for the Portlet component, and always use that same value for other components that also need the portlet name.  And avoid periods since they cannot be used in all cases.

So here's an example for the portlet component properties from one of my previous blogs:

@Component(
  immediate = true,
  property = {
    "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
    "javax.portlet.display-name=Filesystem Access",
  },
  service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {}

Notice how I was explicit with the javax.portlet.name property. That's the one you need; don't let Liferay assume what your portlet name is, be explicit with the value.

And the value that I used in the FilesystemAccessPortletKeys constants:

public static final String FILESYSTEM_ACCESS =

No periods, but since I'm using the class name it won't have any collisions w/ other portlet classes...
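Deriving such an underscore-separated value is mechanical. Here's an illustrative helper (my own sketch, not a Liferay API) that turns a class name into a collision-resistant, period-free portlet name:

```java
public class PortletNames {

    // Replace the periods in a fully-qualified class name with underscores,
    // since periods are not legal everywhere Liferay uses the portlet name.
    public static String portletName(String className) {
        return className.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(
            portletName("com.liferay.filesystemaccess.portlet.FilesystemAccessPortlet"));
        // com_liferay_filesystemaccess_portlet_FilesystemAccessPortlet
    }
}
```

Because the result is derived from the fully-qualified class name, it stays unique across all of your portlet classes.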

Note that if you don't like long URLs, you might try shortening the name, but stick with something that avoids collisions, such as the first letter of each package plus the class name, something like "clfp_FilesystemAccessPortlet". Remember that collisions are bad things...

And finally, since I've put the name in an external PortletKeys constants file, any other piece of code that expects the portlet name can use the constant value too (i.e. for your config admin interfaces) and you'll know that all code is using the same constant value.


Service Builder 6.2 Migration

Technical Blogs February 23, 2017 By David H Nebinger

I'm taking a short hiatus from the design pattern series to cover a topic I've heard a lot of questions on lately - migrating 6.2 Service Builder wars to Liferay 7 CE / Liferay DXP.

Basically it seems you have two choices:

  1. You can keep the Service Builder implementation in a portlet war. Any wars you keep going forward will have access to the service layer, but can you access the services from other OSGi components?
  2. You take the Service Builder code out into an OSGi module. With this path you'll be able to access the services from other OSGi modules, but will the services be available to the legacy portlet wars?

So it's that mixed usage that leads to the questions. I mean, if all you have is either legacy wars or pure OSGi modules, the decision is easy - stick with what you've got.

But when you are in mixed modes, how do you deliver your Service Builder code so both sides will be happy?

The Scenario

So we're going to work from the following starting point. We have a 6.2 Service Builder portlet war following a recommendation that I frequently give, the war has only the Service Builder implementation in it and nothing else, no other portlets. I often recommend this as it gives you a working Service Builder implementation and no pollution from Spring or other libraries that can sometimes conflict with Service Builder. We'll also have a separate portlet war that leverages the Service Builder service.

Nothing fancy for the code, the SB layer has a simple entity, Course, and the portlet war will be a legacy Liferay MVC portlet that lists the courses.

We're tasked with upgrading our code to Liferay 7 CE or Liferay DXP (pick your poison), and as part of the upgrade we will have a new OSGi portlet component using the new Liferay MVC framework for adding a course.

To reduce our development time, we will upgrade our course list portlet to be compatible with Liferay 7 CE / Liferay DXP but keep it as a portlet war - basically the minimal effort needed to get it upgraded. We'll also have the new portlet module for adding a course.

But our big development focus, and the focus of this blog, will be choosing the right path for upgrading that Service Builder portlet war.

For evaluation purposes we're going to have to upgrade the SDK to a Liferay Workspace. Doing so will help get us some working 7.x portlet wars initially, and then when it comes time to do the testing for the module it should be easy to migrate.

Upgrading to a Liferay Workspace

So the Liferay IDE version 3.1 Milestone 2 is available, and it has the Code Upgrade Assistant to help take our SDK project and migrate it to a Liferay Workspace.

For this project, I've made the original 6.2 SDK project available at

You can find an intro to the upgrade assistant in Greg Amerson's blog: and Andy Wu's blog:

It is still a milestone release and a work in progress, but it does work on upgrading my sample SDK. Just a note, though: it does take some processing time during the initial upgrade to a workspace; if you think it has locked up or is unresponsive, just have patience. It will come back, it will complete, you just have to give it time to do its job.


After you finish the upgrade, you should have a Liferay workspace w/ a plugins-sdk directory and inside there is the normal SDK directory structure. In the portlet directory the two portlet war projects are there and they are ready for deployment.

In fact, in the plugins-sdk/dist directory you should find both of the wars just waiting to be deployed. Deploy them to your new Liferay 7 CE or Liferay DXP environment, then spin out and drop the Course List portlet on a page and you should see the same result as the 6.2 version.

So what have we done so far? We upgraded our SDK to a Liferay Workspace and the Code Upgrade Assistant has upgraded our code to be ready for Liferay 7 CE / Liferay DXP. The two portlet wars were upgraded and built. When we deployed them to Liferay, the WAR -> WAB conversion process converted our old wars into OSGi bundles.

However, if you go into the Gogo shell and start digging around, you won't find the services defined from our Service Builder portlet. Obviously they are there because the Course List portlet uses it to get the list of courses.

War-Based Service Builder

So how do these war-based Service Builder upgrades work? If you take a look at the CourseLocalServiceUtil's getService() method, you'll see that it uses the good ole' PortletBeanLocator and the registered Spring beans for the Service Builder implementation. The Util classes use the PortletBeanLocator to find the service implementations and may leverage the class loader proxies (CLP) if necessary to access the Spring beans from other contexts. From the service war perspective, it's going through Liferay's Spring bean registry to get access to the service implementations.

Long story short, our service jar is still a service jar. It is not a proper OSGi module and cannot be deployed as one. But the question is, can we still use it?

OSGi Add Course Portlet

So we need an OSGi portlet to add courses. Again this will be another simple portlet to show a form and process the submit. Creating the module is pretty straightforward; the challenge, of course, is including the service jar in the bundle.

First thing that is necessary is to include the jar into the build.gradle dependencies. Since it is not in a Maven-like repository, we'll need to use a slightly different syntax to include the jar:

dependencies {
  compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
  compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
  compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
  compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
  compileOnly group: "jstl", name: "jstl", version: "1.2"
  compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
  compile files('../../plugins-sdk/portlets/school-portlet/docroot/WEB-INF/lib/school-portlet-service.jar')
}

The last line is the key; it is the syntax for including a local jar file, and in our case we're pointing at the service jar which is part of the plugins-sdk folder that we upgraded.

Additionally we need to add the stanza to the bnd.bnd file so the jar gets included into the bundle during the build:



As you'll remember from my blog post on OSGi Module Dependencies, this is option #4 to include the jar into the bundle itself and use it in the classpath for the bundle.

Now if you build and deploy this module, you can place the portlet on a page and start adding courses.  It works!

By including the service jar into the module, we are leveraging the same PortletBeanLocator logic used in the Util class to get access to the service layer and invoke services via the static Util classes.

Now that we know that this is possible (we'll discuss whether to do it this way in the conclusion), let's now rework everything to move the Service Builder code into a set of standard OSGi modules.

Migrating Service Builder War to Bundle

Our service builder code has already been upgraded when we upgraded the SDK, so all we need to do here is create the modules and then move the code.

Creating the Clean Modules

First step is to create a clean project in our Liferay workspace, a foundation for the Service Builder modules to build from.

Once again, I start with Blade since I'm an IntelliJ developer. In the modules directory, we'll let Blade create our Service Builder projects:

blade create -t service-builder -p school

For the last argument, use something that reflects your current Service Builder project name.

This is the clean project, so let's start dirtying it up a bit.

Copy your legacy service.xml to the school/school-service directory.

Build the initial Service Builder code from the service XML. If you're on the command line, you'd do:

../../gradlew buildService

Now we have unmodified, generated code. Layer in the changes from the legacy Service Builder portlet, including:

  • portlet-model-hints.xml
  • Changes to any of the META-INF/spring xml files
  • All of your Impl java classes

Rebuild services again to get the working module code.

Module-Based Service Builder

So we reviewed how the CourseLocalServiceUtil's getService() method in the war-based service jar leveraged the PortletBeanLocator to find the Spring bean registered with Liferay to get the implementation class.

In our OSGi module-based version, the CourseLocalServiceUtil's getService() method is instead using an OSGi ServiceTracker to get access to the DS components registered in OSGi for the implementation class.
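Conceptually, the generated lookup works like this simplified plain-Java sketch (the real generated code uses org.osgi.util.tracker.ServiceTracker against the OSGi service registry; the map here is just a stand-in so the pattern can run outside a container):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceTrackerSketch {

    // Stand-in for the OSGi service registry: service implementations
    // register themselves when their bundle starts.
    static final Map<Class<?>, Object> REGISTRY = new ConcurrentHashMap<>();

    interface CourseLocalService {
        int getCoursesCount();
    }

    // Roughly what CourseLocalServiceUtil.getService() does: look the
    // implementation up at call time instead of binding it at build time,
    // so a redeployed service module is picked up transparently.
    static CourseLocalService getService() {
        CourseLocalService service =
            (CourseLocalService) REGISTRY.get(CourseLocalService.class);
        if (service == null) {
            throw new IllegalStateException("CourseLocalService not deployed");
        }
        return service;
    }

    public static void main(String[] args) {
        REGISTRY.put(CourseLocalService.class, (CourseLocalService) () -> 3);
        System.out.println(getService().getCoursesCount());
    }
}
```

The key property is the late binding: the consumer only depends on the interface, and whatever implementation is currently registered is the one it gets.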

Again the service "jar" is still a service jar (well, module), but we also know that the add course portlet will be able to leverage the service (with some modifications), the question of course is whether we can also use the service API module in our legacy course list portlet.

Fixing the Course List Portlet War

So what remains is modifying the course list portlet so it can leverage the API module in lieu of the legacy Service Builder portlet service jar.

This change is actually quite easy...

One of the files changed by the upgrade assistant contains the following:


These are the lines used by the Liferay IDE to inject the service jar so the service will be available to the portlet war. We need to strip these lines out since we're no longer using the deployment context.

If you have the school-portlet-service.jar file in docroot/WEB-INF/lib, go ahead and delete that file since it is no longer necessary.

Next comes the messy part; we need to copy the API jar into the course list portlet's WEB-INF/lib directory so that Eclipse can compile all of the code that uses the API. There's no easy way to do this, but I can think of the following options:

  1. Manually copy the API jar over.
  2. Modify the Gradle build scripts to add support for the install of artifacts into the local Maven repo, then the Ivy configuration for the project can be adjusted to include the dependency. Not as messy as a manual file copy, but involves doing the install of the API jar so Ivy can find it.

We're not done there... We actually cannot keep the jar in WEB-INF/lib, otherwise at runtime you get class cast exceptions, so we need to exclude it during deployment. This is easily handled, however, by adding an exclusion to your portal-ext.properties file:

module.framework.web.generator.excluded.paths=<CURRENT EXCLUSIONS>,\

When the WAR->WAB conversion takes place, this jar will be excluded. So you get to keep it in the project and let the WAB conversion strip it out during deployment.

Remember to keep all of the current excluded paths in the list; you can find them in the portal.properties file included in your Liferay source.
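As a sketch, the entry might look something like this (the API jar name is a placeholder for your actual jar, and the default exclusions must be copied in from portal.properties):

```properties
# portal-ext.properties -- hypothetical entry; keep Liferay's defaults in the list
module.framework.web.generator.excluded.paths=\
    <CURRENT EXCLUSIONS>,\
    WEB-INF/lib/school-portlet-api.jar
```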

Build and deploy your new war and it should access the OSGi-based service API module.


Well, this ended up being a mixed bag...

On one hand I've shown that you can use the Service Builder portlet's service jar as a direct dependency in the module and it can invoke the service through the static Util classes defined within. The advantage of sticking with this path is that it really doesn't require much modification from your legacy code beyond completing the code upgrade, and the Liferay IDE's Code Upgrade Assistant gets you most of the way there. The obvious disadvantage is that you're now adding a dependency to the modules that need to invoke the service layer and the deployed modules include the service jar; so if you change the service layer, you're going to have to rebuild and redeploy all modules that have the service jar as an embedded dependency.

On the other hand I've shown that the migrated OSGi Service Builder modules can be used to eliminate all of the service jar replication and redeployment pain, but the hoops you have to jump through for the legacy portlet access to the services are a development-time pain.

It seems clear, at least to me, that the second option is the best. Sure you will incur some development-time pain to copy service API jars if only to keep the java compiler happy when compiling code, but it definitely has the least impact when it comes to service API modifications.

So my recommendations for migrating your 6.2 Service Builder implementations to Liferay 7 CE / Liferay DXP are:

  • Use the Liferay IDE's Code Upgrade Assistant to help migrate your code to be 7-compatible.
  • Move the Service Builder code to OSGi modules.
  • Add the API jars to the legacy portlet's WEB-INF/lib directory for those portlets which will be consuming the services.
  • Add the module.framework.web.generator.excluded.paths entry to your to strip the jar during WAR->WAB conversion.

If you follow these recommendations your legacy portlet wars will be able to leverage the services, any new OSGi-based portlets (or JSP fragments or ...) will be able to access the services, and your deployment impact for changes will be minimized.

My code for all of this is available in github:

Note that the upgraded code is actually in the same repo, they are just in different branches.

Good Luck!


After thinking about this some more, there's actually another path that I did not consider...

For the Service Builder portlet service jar, I indicated you'd need to include this as a dependency on every module that needed to use the service, but I neglected to consider the global service jar option that we used for Liferay 6.x...

So you can keep the Service Builder implementation in the portlet, but move the service jar to the global class loader (Tomcat's lib/ext directory). Remember that with this option there can only be one service jar, the global one, so no other portlet war or module (including the Service Builder portlet war itself) can carry its own copy. Also remember that you can only update a global service jar while Tomcat is down.

The final step is to add the packages for the service interfaces to the module.framework.system.packages.extra property in portal-ext.properties. You want to add your packages to the current list defined in portal.properties, not replace the list with just your service packages.

Before starting Tomcat, you'll want to add the exception, model and service trio to the list. For the school service example, this would be something like:


This will make the contents of the packages available to the OSGi global class loader so, whether bundle or WAB, they will all have access to the interfaces and static classes.
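For the school example, the entry might look something like this (the package names are illustrative; the important part is appending to Liferay's existing list rather than replacing it):

```properties
# portal-ext.properties -- hypothetical packages for the school service example
module.framework.system.packages.extra=\
    <DEFAULT LIST FROM portal.properties>,\
    com.example.school.exception,\
    com.example.school.model,\
    com.example.school.service
```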

This has a little bit of a deployment process change to go with it, but you might consider this the least impactful change of all. We tend to frown on the use of the global class loader because it may introduce transitive dependencies and does not support hot deployable updates, but this option might be lower development cost to offset the concern.

Liferay Design Patterns - Multi-Scoped Data/Logic

Technical Blogs February 17, 2017 By David H Nebinger

Pattern: Multi-Scoped Data/Logic


Intent

The intent for this pattern is to support data/logic usage in multiple scopes. Liferay defines the scopes Global, Site and Page, but from a development perspective scope refers to the Portal and individual OSGi Modules. Classic data access implementations do not support multi-scope access because of the boundaries between the scopes.

The Multi-Scoped Data/Logic Liferay Design Pattern's intent is to define how data and logic can be designed to be accessible from all scopes in Liferay, whether in the Portal layer or any other deployed OSGi Modules.

Also Known As

This pattern is implemented using the Liferay Service Builder tool.


Motivation

Standard ORM tools provide access to data for servlet-based web applications, but they are not a good fit in the portal because of the barriers between modules in the form of class loaders and other kinds of boundaries. If a design starts from a standard ORM solution, it will be restricted to a single development scope. This may seem acceptable for an initial design, but in the portal world most single-scoped solutions eventually need to be changed to support multiple scopes. As the standard tools have no support for multiple scopes, developers need to hand-code bridge logic to add multi-scope support, and any hand coding increases development time, bug potential, and time to market.

The motivation for Liferay's Service Builder tool is to provide an ORM-like tool with built-in support for multi-scoped data access and business logic sharing. The tool transforms an XML-based entity definition file into layered code to support multiple scopes and is used throughout business logic creation to add multi-scope exposure for the business logic methods.
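For illustration, a minimal service.xml entity definition might look like this (names are hypothetical; the audit columns are the ones the generated service layer auto-populates):

```xml
<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
	"http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<service-builder package-path="com.example.school">
	<namespace>School</namespace>
	<entity name="Course" local-service="true" remote-service="true" uuid="true">
		<!-- Primary key -->
		<column name="courseId" type="long" primary="true" />

		<!-- Audit columns, auto-populated by the service layer -->
		<column name="companyId" type="long" />
		<column name="groupId" type="long" />
		<column name="userId" type="long" />
		<column name="createDate" type="Date" />
		<column name="modifiedDate" type="Date" />

		<!-- Entity data -->
		<column name="name" type="String" />
	</entity>
</service-builder>
```

From this single file, Service Builder generates the model, persistence, and local/remote service layers along with their multi-scope access plumbing.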

Additionally the tool is the foundation for adding portal feature support to custom entities, including:

  • Auto-populated entity audit columns.
  • Asset framework support (comments, rankings, Asset Publisher support, etc).
  • Indexing and Search support.
  • Model listeners.
  • Workflow support.
  • Expando support.
  • Dynamic Query support.
  • Automagic JSON web service support.
  • Automagic SOAP web service support.

You're not going to get this kind of integration from your classic ORM tool...

And with Liferay 7 CE / Liferay DXP, you additionally get an OSGi-compatible API and service bundle implementation ready for deployment.


IMHO Service Builder applies when you are dealing with any kind of multi-scoped data entities and/or business logic; it also applies if you need to add any of the indicated portal features to your implementation.


The participants in this pattern are:

  • An XML file defining the entities.
  • Spring configuration files.
  • Implementation class methods to add business logic.
  • Service consumers.

The participants are used by the Service Builder tool to generate code for the service implementation details.

Details for working with Service Builder are covered in the following sections:


ServiceBuilder uses the entity definition XML file to generate the bulk of the code. Custom business methods are added to the ServiceImpl and LocalServiceImpl classes for the custom entities and ServiceBuilder will include them in the service API.


There is no real downside to using Service Builder to generate your entities in the portal environment. Service Builder will generate an ORM layer and provide integration points for all of the core Liferay features.

There are three typical arguments used by architects and developers for not using Service Builder:

  • It is not a complete ORM. This is true; it does not support everything a full ORM does. It doesn't support many-to-many relationships, and it doesn't handle automatic parent-child relationships in one-to-many. All that means is that the code to handle many-to-many (and some one-to-many) relationships will need to be hand-coded.
  • It still uses old XML files instead of newer annotations. This is also true, but it reflects the fact that Liferay generates all of the code, including the interfaces. Since Liferay adds portal features based upon the XML definitions, using annotations would require Liferay to modify the annotated interfaces, causing circular change effects.
  • I already know how to develop using X; my project deadlines are too short to learn a new tool like Service Builder. Yes, there is a learning curve with Service Builder, but it is nothing compared to the mountain of work it will take to get X working correctly in the portal, and some Liferay features simply will not be options for you without Service Builder's generated code.

All of these arguments are weak in light of what you get by using Service Builder.
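As an illustration of the first argument, the usual workaround for a missing many-to-many relationship is to model it yourself with an explicit mapping entity in service.xml (a sketch only; entity and column names here are hypothetical):

```xml
<entity name="CourseStudent" local-service="true" remote-service="false">
    <column name="courseStudentId" type="long" primary="true" />
    <column name="courseId" type="long" />
    <column name="studentId" type="long" />

    <!-- Finders for navigating the relationship from either side -->
    <finder name="CourseId" return-type="Collection">
        <finder-column name="courseId" />
    </finder>
    <finder name="StudentId" return-type="Collection">
        <finder-column name="studentId" />
    </finder>
</entity>
```

The join logic (adding and removing pairs, fetching the related entities) then lives in hand-written methods in your LocalServiceImpl classes.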

Sample Usage

Service Builder is another case of Liferay eating its own dogfood. The entire portal is based on Service Builder for all of the entities in all of the portlets, the Liferay entities, etc.

Check out any of the Liferay modules from simple cases like Bookmarks through more complicated cases such as Workflow or the Asset Publisher.


Service Builder is a must-use if you are going to do any integrated portal development. You can't build the portal features into your portlets without Service Builder usage.

Seriously. You have no other choice. And I'm not saying this because I'm a fanboy or anything, I'm coming from a place of experience. My first project on Liferay dealt with a number of portlets using a service layer; I knew Hibernate and didn't want to take time out to learn Service Builder. That was a terrible mistake on my part. I never dealt with the multi-scoping well at all, and I never got the kind of Liferay integration that would have been great to have. Fortunately the mistake was not a costly one, and I learned from it; I use Service Builder all the time now in the portal.

So I share this experience with you in hopes that you too can avoid the mistakes I made. Use Service Builder for your own good!

Liferay Design Patterns - Flexible Entity Presentation

Technical Blogs February 15, 2017 By David H Nebinger


So I'm going to start a new type of blog series here covering design patterns in Liferay.

As we all know:

In software engineering, a software design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. - Wikipedia

In Liferay, there are a number of APIs and frameworks used to support Liferay-specific general reusable solutions. But since we haven't defined them in a design pattern, you might not be aware of them and/or if/when/how they could be used.

So I'm going to carve out some time to write about some "design patterns" based on Liferay APIs and frameworks. Hopefully they'll be useful as you go forward designing your own Liferay-based solutions.

Being a first stab at defining these as Liferay Design Patterns, I'm expecting some disagreement on simple things (that design pattern name doesn't seem right) as well as some complex things... Please go ahead and throw your comments at me and I'll make necessary changes to the post. Remember this isn't for me, this is for you.

And I must add that yes, I'm taking great liberties in using the phrase "design pattern". Most of the Liferay APIs and frameworks I'm going to cover are really combinations of well-documented software design patterns (in fact Liferay source actually implements a large swath of creational, structural and behavioral design patterns in such purity and clarity they are easy to overlook).

These blog posts may not be defining clean and simple design patterns as specified by the Gang of Four, but they will try to live up to the ideals of true design patterns. They will provide general, reusable solutions to commonly occurring problems in the context of a Liferay-based system.

Ultimately the goal is to demonstrate how by applying these Liferay Design Patterns that you too can design and build a Liferay-based solution that is rich in presentation, robust in functionality and consistent in usage and display. By providing the motivation for using these APIs and frameworks, you will be able to evaluate how they can be used to take your Liferay projects to the next level.

Pattern: Flexible Entity Presentation


The intent of the Flexible Entity Presentation pattern is to support a dynamic templating mechanism that supports runtime display generation instead of a classic development-time fixed representation, further separating view management from portlet development.

Also Known As

This pattern is known as, and implemented using, the Application Display Template (ADT) framework in Liferay.


The problem with most portlets is that the code used to present custom entities is handled as a development-time concern; the UI specifications define how the entity is shown on the page and the development team delivers a solution to satisfy the requirements.  Any change to specifications during development results in a change request for the development team, and post development the change represents a new development project to implement presentation changes.

The inflexibility of the presentation impacts time to market, delivery cycles and development resource allocation.

The Flexible Entity Presentation pattern's motivation is to support a user-driven mechanism to present custom entities in a dynamic way.

The users and admins section of the Liferay documentation on ADTs starts:

The application display template (ADT) framework allows Liferay administrators to override the default display templates, removing limitations to the way your site’s content is displayed.

ADTs allow the display of an entity to be handled by a dynamic template instead of handled by static code. Don't get hung up on the word content here, it's not just content as in web content but more of a generic reference to any html content your portlet needs to render.

Liferay identified this motivation when dealing with client requests for product changes to adapt presentation in different ways to satisfy varying client requirements.  Liferay created and uses the ADT framework extensively in many of the OOTB portlets from web content through breadcrumbs.  By leveraging ADTs, Liferay defines the entities, i.e. a Bookmark, but the presentation can be overridden by an administrator and an ADT to show the details according to their requirements, and all without a development change by Liferay or a code customization by the client.

Liferay eats its own dogfood by leveraging the ADT framework, so this is a well tested framework for supporting dynamic presentation of entities.

When you look at many of the core portlets, they now support ADTs to manage their display aspects since tweaking an ADT is much simpler than creating a JSP fragment bundle or new custom portlet or some crazy JS/CSS fu in order to affect a presentation change. This flexibility is key for supporting changes in the Liferay UI without extensive code customizations.


The use of ADTs applies when the presentation of an entity is subject to change. Since admins will use ADTs to manage how the entities are displayed, the presentation does not need to be finalized before development starts. When the ADT framework is incorporated in the design out of the gate, flexibility in the presentation is baked into the design, and the doors are open to future presentation changes without code development, testing and deployment.

So there are some fairly clear use cases to apply ADTs:

  • The presentation of the custom entities is likely to change.
  • The presentation of the custom entities may need to change based upon context (list view, single view, etc.).
  • The presentation should be decoupled from the portlet development itself.
  • The project is a Liferay Marketplace application and presentation customization is necessary.

Notice the theme here, the change in presentation.

ADTs would either not apply or would be overkill for a static entity presentation, one that doesn't benefit from presentation flexibility.


The participants in this pattern are:

  • A custom entity.
  • A custom PortletDisplayTemplateHandler.
  • ADT Resource Portlet Permissions.
  • Portlet Configuration for ADT Selection.
  • Portlet View Leveraging ADTs.

The participants work together with the Liferay ADT framework to support a dynamic presentation for the entity. The implementation details for the participants are covered here:


The custom entity is defined in the Service Builder layer (normally).

The PortletDisplayTemplateHandler implementation feeds meta information about the fields and descriptions of the entity to the ADT framework's Template Editor UI. The meta information provided will generally be tightly coupled to the custom entity, in that changes to the entity will usually result in changes to the PortletDisplayTemplateHandler implementation.

The ADT resource portlet permissions must be enabled for the portlet so administrators will be able to choose the display template and edit display templates for the entity.

The portlet configuration panel is where the administrator will choose between display templates, and the portlet view will leverage Liferay's ADT tag library to inject the rendered template into the portlet view.


By moving to an ADT-based presentation of the entity, the template engine (FreeMarker) will be used to render the view.
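To give a feel for what an administrator actually edits, an ADT is just a FreeMarker template over the entity. A trivial sketch might look like the following; note that the `entries` variable and the entity fields shown are illustrative assumptions, since the variables actually exposed depend on your PortletDisplayTemplateHandler:

```ftl
<#if entries?has_content>
    <ul>
        <#list entries as curEntry>
            <li>${curEntry.name}: ${curEntry.description}</li>
        </#list>
    </ul>
</#if>
```

An administrator can rework a template like this in the browser, with no build or deployment involved.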

The template engine will impose a performance cost in supporting the flexible presentation (especially if someone creates a bad template). Implementors should strike a balance between beneficial flexibility and overuse of the ADT framework.

Sample Usage

For practical examples, consider a portal based around a school.  Some common custom entities would be defined for students, rooms, teachers, courses, books, etc.

Consider how often the presentation of the entities may need to change and weigh that against whether the changes are best handled in code or in template.

A course or teacher entity would likely benefit from an ADT: the course's brochure-like view may need to change over time, and additions to the teacher entity such as accreditation or course history would change its presentation.

The students and rooms may not benefit from ADTs if the presentation is going to remain fairly static.  These entities might go through future presentation changes but it may be more acceptable to approach those as development projects that are planned and coordinated.

Known Uses

The best known uses come from Liferay itself. The list of OOTB portlets which leverage ADTs are:

  • Asset Publisher
  • Blogs
  • Breadcrumbs
  • Categories Navigation
  • Documents and Media
  • Language Selector
  • Navigation Menu
  • RSS Publisher
  • Site Map
  • Tags Navigation
  • Web Content Display
  • Wiki

This provides many examples for when to use ADTs, the obvious advantage of ADTs (customized displays w/o additional coding) and even hints where ADTs may not work well (i.e. users/orgs control panel, polls, ...).


Well, that's pretty much it for this post. I'd encourage you to go and read the section for styling apps with ADTs as it will help solidify the motivations to incorporate the ADT framework into your design. When you understand how an admin would use ADTs to create a flexible presentation of the Liferay entities, it should help to highlight how you can achieve the same flexibility for your custom assets.

When you're ready to realize these benefits, you can refer to the implementing ADTs page to help with your implementation.


Adding Dependencies to JSP Fragment Bundles

Technical Blogs February 8, 2017 By David H Nebinger

Recently I was lamenting how I felt that JSP fragment bundles could not introduce new dependencies and therefore the JSP overrides could really not do much more than reorganize or add/remove already supported elements on the page.

For me, this is like only 5% of the use cases for a JSP override. I am much more likely to need to add new functionality that the original portlet developers didn't need to consider.  I need to be able to add new services and use those in the JSP to retrieve entities, and sometimes just really do completely different things w/ the JSP that perhaps were never imagined.

The first time I tried a JSP override to do something similar with a JSP fragment bundle, I was disappointed. My fragment bundle would get to status "Installed" in GoGo, but would go no further because it had unresolved references.  It just couldn't get to the resolved stage.

How could I make the next great JSP fragment override bundle if I couldn't access anything outside the original set of services?

My good friend and coworker Milen Dyankov heard my rant and offered the following insight:

According to the spec:

... requirements and capabilities in a fragment bundle never become part of the fragment's Bundle Wiring; they are treated as part of the host's requirements and capabilities when the fragment is attached to that host.

As for providing declarative services in fragments, again the spec is clear:

A Service-Component manifest header specified in a fragment is ignored by SCR. However, XML documents referenced by a bundle's Service-Component manifest header may be contained in attached fragments.

In other words, if your host has Service-Component: OSGI-INF/*.xml then your fragment can put a new XML file in the OSGI-INF folder and it will be processed by SCR.

Now sometimes Milen seems to forget that I'm just a mere mortal and not the OSGi guru he is, so while this was perfectly clear to him, it left me wondering if there was anything here that would be my lever to lift the lid and peek inside the JSP fragment bundle realm.

The remainder of this blog is the result of that epic journey.

Service Component Runtime

The SCR is the Apache Felix implementation of the OSGi Declarative Services specification. It's responsible for handling the service registry and lifecycle management of DS components within the OSGi container, starting/stopping the services as bundles are started/stopped, wiring up @Reference dependencies in DS components, etc.

Since the fragment bundle handling comes from the Apache Felix implementation, it's not really a Liferay component and certainly not one that would lend itself to an override in the normal Liferay sense. Anything we do here to access services in the JSP fragment bundles is going to have to go through supported OSGi mechanisms or we won't get anywhere.

So the key for Milen's quote above is the "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments." The rough translation here - we might be able to provide an override XML file for one of the bundle host's components and possibly inject new dependencies. Yes, as a rough translation it really assumes that you know more than you might (and especially more than I did), so let's divert for a second.

Service Component Manifest XML Documents

So the BND tool that we all know and love actually does many, many things for us when it builds a bundle jar. One of those tasks is to generate the service component manifest and all of the XML documents. The contents of these files are basically the metadata the SCR will need for dependency resolution, component wiring, etc.

Any time you annotate your java class with @Component you are indicating it is a DS service. When BND is processing the annotations, it's going to add an entry to the Service Component Manifest (so the SCR will process the component during bundle start). The Service Component Manifest is the Service-Component key in the bundle's MANIFEST.MF file, and it lists each individual XML file in the OSGI-INF, one for each component.

These XML files define the component for the SCR, specifying the java class that implements the component, the service it provides, all reference details and properties for the component.

So if you take any bundle jar you have, expand it and check out the MANIFEST.MF file and look for the Service-Component key. You'll find there's one OSGI-INF/com.example.package.JavaClass.xml file (where the name reflects your package and class) for each component defined in your bundle.

If you open one of the XML files, you can see the structure for a component definition, and it is easy to see how things that you set in the @Component annotation attributes have been mapped into the XML file.
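For illustration, a component definition XML looks roughly like the sketch below. This is abridged and hand-written (the component and service names are hypothetical); the files BND actually generates carry more attributes, but the shape is the same:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
    name="com.example.myapp.MyComponent" immediate="true">

    <!-- The java class implementing the component -->
    <implementation class="com.example.myapp.MyComponent" />

    <!-- The service interface the component provides -->
    <service>
        <provide interface="com.example.myapp.api.MyService" />
    </service>

    <!-- A wired @Reference dependency -->
    <reference name="_userLocalService"
        interface="com.liferay.portal.kernel.service.UserLocalService"
        bind="setUserLocalService" cardinality="1..1" policy="static" />
</scr:component>
```

Comparing a generated file like this against the @Component annotation that produced it makes the mapping between the two pretty clear.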

Now that we know about the manifest and XML docs, we can get back to our regularly scheduled program.

Overriding An SCR XML

So remember, we should be able to override one of these files because "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments."

This hints that we cannot add a new file, but we could override an existing one.

So to me, this is the key question - can we create an override XML file to introduce a new dependency, one that really cannot be directly bound to the original (since we can't modify the class) so at least the bundle would have a new dependency and the JSP would be happy?

Well, I actually used all of this newfound knowledge to work up a test and tried it out, but it failed. It didn't make any sense...

Return To The Jedi

"Milen, my SCR XML override isn't working."

"Overrides won't work because the XML files are loaded by the class loader, and the host bundle comes before the fragment bundle so SCR ignores the override.  You can't override the XML, you can only add a new one to the fragment bundle."

"But Milen you said I couldn't add new XML files, only those listed in the Service-Component in the MANIFEST.MF file of the host bundle will be used by SCR during loads."

"Change your Service-Component key to use a wildcard like OSGI-INF/* and SCR will load the ones from the host bundle as well as the fragment bundle. It's considered bad practice, but it would work."

"I can't do that, Milen, I'm doing a JSP fragment bundle on a Liferay host bundle, I can't change the Service-Component manifest value and, if I could, I wouldn't need to do any of this fragment bundling in the first place because I would just apply my change directly in the host bundle and be done with it."

"Well then the SCR XML override isn't going to work. Let's try something else..."

Example Project

After working out a new plan of attack, I was going to need an example project to test this all out and verify that it was going to work. The example must include a JSP fragment bundle override and introduce another previously unused service. I don't really want to do any more coding than necessary here, so let's pick something to do out of the portal JSPs and services.

Requirement: On login form, display the current count of membership requests.

Pretty simple, maybe part of some automated membership request handling being added to the portal or trying to show how popular the site is by showing count of how many are waiting to get in.

But it gives us the goal here, we want to access the MemberRequestLocalService inside of the login.jsp page of the login-web host bundle. The service is defined in the com.liferay.invitation.invite.members.api bundle and is not currently connected in any way with the login web module.

Creating The Fragment Bundle

I'll continue my pattern of using blade on the command line, but of course you're free to leverage tools provided by your IDE.

blade create -t fragment -h com.liferay.login.web -H 1.1.4 login-web-fragment

Remember to choose the fragment bundle version from your local portal so you'll override the right one and make OSGi/SCR happy.

Copy in the login.jsp page from the portal source. After the include of init.jsp, add the following lines:

<%@ page import="com.liferay.invitation.invite.members.service.MemberRequestLocalService" %>

<%
  // get the service from the render request attributes
  MemberRequestLocalService memberRequestLocalService = (MemberRequestLocalService)
    renderRequest.getAttribute("MemberRequestLocalService");

  // get the current count
  int currentRequestCount = memberRequestLocalService.getMemberRequestsCount();

  // display it somewhere on the page...
%>
Very simple. Doesn't really display, but that's not the point in this blog.

Now if you build and deploy this guy as-is and check him, you'll see his state in GoGo is "Installed". This is not good, as it is not where it needs to be for the JSP fragment to work.

Adding The Dependency

So we have to go back to how OSGi handles fragment bundles... When OSGi loads a fragment, the MANIFEST.MF items from the fragment bundle are effectively merged with those of the host bundle.

For me, that means I have to list my dependency in build.gradle and trust BND will add the right Import-Package declaration to the final MANIFEST.MF file.

Then, when the framework is loading my fragment bundle, my Import-Package from the fragment will be added to the Import-Package of the host bundle and all should be good.

JSP fragment bundles created by blade do not have dependencies listed in the build.gradle file (in fact it is completely empty), so let's add the dependency stanza:

dependencies {
  compile group: "com.liferay", name: "com.liferay.invitation.invite.members.api", version: "2.1.1"
}

We only need to add the dependency that is missing from the host bundle, the one with the service we're going to pull in.

After building, you can unpack the jar and check the MANIFEST.MF file to see that it now has the Import-Package declaration, so if the framework does the merge while loading, we should be in business.
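The relevant headers in the fragment's MANIFEST.MF will look something like this (bundle names and version ranges here are illustrative; yours will reflect your project and the versions BND resolves):

```
Bundle-SymbolicName: com.example.login.web.fragment
Bundle-Version: 1.0.0
Fragment-Host: com.liferay.login.web;bundle-version="1.1.4"
Import-Package: com.liferay.invitation.invite.members.service;version="[2.1,3)"
```

The Fragment-Host header ties the fragment to the host bundle, and the Import-Package header is the new dependency that gets merged into the host's wiring when the fragment attaches.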

Deploy your new JSP fragment bundle and if you check the bundle status in GoGo, you'll see it is now "Resolved".


Injecting The Reference

Not so fast. If you try to log into your portal, you'll get the "portlet is temporarily unavailable" message and the log file will have a NullPointerException and a big stack trace. We've totally broken the login portlet because login.jsp depends upon the service but it is not set.

If you check the JSP change I shared, I'm pulling the service instance from the render request attributes. But how the heck does it get in there when we cannot change the host bundle to inject it in the first place?

We're going to do this using another OSGi module with a new component that implements the PortletFilter interface, specifically a RenderFilter.

@Component(
  immediate = true,
  property = {
      "javax.portlet.name=" + LoginPortletKeys.LOGIN,
      "javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN
  },
  service = PortletFilter.class
)
public class LoginRenderFilter implements RenderFilter {

  @Override
  public void doFilter(RenderRequest request, RenderResponse response, FilterChain chain)
      throws IOException, PortletException {

    // set the request attribute so it is available when the JSP renders
    request.setAttribute("MemberRequestLocalService", _memberRequestLocalService);

    // let the filter chain do its thing
    chain.doFilter(request, response);
  }

  @Override
  public void init(FilterConfig filterConfig) throws PortletException { }

  @Override
  public void destroy() { }

  @Reference(unbind = "-")
  protected void setMemberRequestLocalService(final MemberRequestLocalService memberRequestLocalService) {
    _memberRequestLocalService = memberRequestLocalService;
  }

  private MemberRequestLocalService _memberRequestLocalService;
}

So here we are intercepting the render request using the portlet filter. We inject the service into the request attributes before invoking the filter chain to complete the rendering; that way when the JSP page from the fragment bundle is used, the attribute will be set and ready.

Build and deploy your new component. Once it starts, refresh your browser and try to log in. You should now see the login portlet again. Not that we did anything fancy here, we're just proving that the service reference is not null and is available for the JSP override to use.


So we took a roundabout path to get here, but we've seen how we can create a JSP fragment bundle to override portal JSPs, add a dependency to the fragment bundle that gets included as a dependency in the host bundle, and we created a portlet filter bundle to inject the service reference in the request attributes so it would be available to the JSP page.

Two different bundle jars, but it certainly gets the job done.

Also along the way we learned some things about what the SCR is, how fragment bundles work, as well as some of the internals of our OSGi bundle jars and the role that BND plays in their construction.  Useful information, IMHO, that can help you while learning Liferay 7 CE/Liferay DXP.

This now opens some new paths for you to pursue for your JSP fragment bundles.  Just follow the outline here and you should be good to go.

Find the project code for the blog here:

Building In Upgrade Support

Technical Blogs February 7, 2017 By David H Nebinger

One of the things that I never really used in 6.x was the Liferay upgrade APIs.

Sure, I knew about the Release table and stuff, but it just seemed kind of cumbersome: not only did you have to build out your code, you also had to track your release and support an upgrade process on top of all of that. I mean, I'm a busy guy, and once this project is done I'm already behind on the next one.

When you start perusing the Liferay 7 source code, though, one thing you'll notice is that there is upgrade logic all over the place. Pretty much every portlet module includes an upgrade process to support upgrading from version "0.0.0" to version "1.0.0" (this is the upgrade process to change from 6.x to the new 7.x module version).

And you'll even find that some modules include upgrades from versions "1.0.0" to "1.0.1" to support the independent module versioning that was the promise of OSGi.

So now that I'm trying to exclusively build modules, I'm thinking it's an appropriate time to dig into the upgrade APIs and see how they work and how I can incorporate upgrades into my modules.

The New Release

So previously we'd have to manage the Release entity ourselves, but Liferay has graciously taken that over for us. Your bnd.bnd file, where you specify your module version, now becomes the foundation of your Release handling. And just like the portal modules, the absence of a Release is technically version "0.0.0", so now you can handle first-time deployment stuff too.

The Upgrade API

Before diving into implementation, let's take a little time to look over some of the classes and interfaces Liferay provides as part of the Upgrade API. We'll start with the classes from the com.liferay.portal.kernel.upgrade package:

  • UpgradeStep: The main interface that must be implemented for all upgrade logic. When registering an upgrade, an ordered list of UpgradeSteps is provided, and the upgrade process will execute them in order to complete an upgrade.
  • DummyUpgradeStep: The simplest concrete implementation of the UpgradeStep interface; this upgrade step does nothing, but it is a useful step for handling new deployments.
  • UpgradeProcess: A handy abstract base class for all of your upgrade steps. It implements the UpgradeStep interface and has support for database-specific alterations should you need them.
  • Base*: Abstract base classes for upgrade steps typically used by the portal for managing upgrades from portlet wars to new module-based portlets. For example, the BaseUpgradePortletId class supports fixing the portlet ids from older id-based portlet names to the new OSGi portlet ids based on class name. These classes are good foundations if you are building an upgrade process to move your own portlets from wars to bundles or want to handle upgrades from 6.x compatibility to 7.x.
  • util.*: For those wanting to support a database upgrade, the com.liferay.portal.kernel.upgrade.util package contains a bunch of support classes to assist with altering tables, columns, indexes, etc.

Registering The Upgrade

All upgrade definitions need to be registered. That's pretty easy, of course, when one is using OSGi. To register an upgrade, you just need a component that implements the UpgradeStepRegistrator interface.

But first a word about code structure...

So Liferay's recommendation is to contain all of your upgrade code in a java package named upgrade that is part of your portlet web module, at the same level as your portlet package (if you have one).

So if your portlet code is in com.example.myapp.portlet, you're going to have a com.example.myapp.upgrade package.

In here you'll have sub-packages for all upgrade versions supported, so you might have "v1_0_0" and "v1_0_1", etc.  Upgrade step implementations will be in the subpackage for the upgrade level they support.
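Put together, the layout looks something like this (class names here are hypothetical placeholders):

```
com/example/myapp/
  portlet/
    MyAppPortlet.java
  upgrade/
    MyAppUpgradeStepRegistrator.java
    v1_0_0/
      UpgradeSchema.java
    v1_1_0/
      UpgradeMyTable.java
      UpgradeMyData.java
```

Each versioned sub-package holds only the steps for that hop, so the upgrade history stays readable as the module evolves.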

So now we have enough details to start building out the upgrade definition. Start by updating your build.gradle file to introduce a new dependency:

  compileOnly group: "com.liferay", name: "com.liferay.portal.upgrade", version: "2.3.0"

This pulls in some utility classes we'll be using below.

Let's assume we're building a brand new module and just want to get a placeholder upgrade definition in place. This is quite easily done by adding a single component to our project:

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleUpgradeStepRegistrator implements UpgradeStepRegistrator {

  @Activate
  protected void activate(final BundleContext bundleContext) {
    _bundleName = bundleContext.getBundle().getSymbolicName();
  }

  @Override
  public void register(Registry registry) {

    // for first time deployments this will start by creating the initial release record
    // with the initial version of 1.0.0.
    // Also use the dummy upgrade step since we're not doing anything in this upgrade.
    registry.register(_bundleName, "0.0.0", "1.0.0", new DummyUpgradeStep());
  }

  private String _bundleName;
}

So that's pretty much it. Including this class in your module will result in it registering a Release with version 1.0.0, and you have nothing else to worry about.

When you're ready to release version 1.1.0 of your component, things get a little more fun.

In your v1_1_0 package you'll create classes that implement the UpgradeStep interface, typically by extending the UpgradeProcess abstract base class or perhaps a more appropriate class from the above table. Either way, you'll define separate classes to handle different aspects of the upgrade.

We'd then come back to the UpgradeStepRegistrator implementation to add the upgrade steps by including another registry call:

    registry.register(_bundleName, "1.0.0", "1.1.0", new UpgradeMyTableStep(), new UpgradeMyDataStep(), new UpgradeMyConfigAdmin());

When processing this upgrade definition, the upgrade service will invoke the upgrade steps in the order provided.  So take care to order your steps such that each one can succeed given only the steps that ran before it, never depending on subsequent steps.
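To see why the version ranges and step ordering matter, here is a small, self-contained toy model of the chaining behavior. This is not Liferay's actual Registry implementation (ToyUpgradeRegistry and its methods are invented for illustration); it just shows how registered ranges chain from whatever version is currently deployed all the way to the latest:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model: each registration maps a "from" version to a "to" version
// plus the step names to run, in order, for that hop.
class ToyUpgradeRegistry {

  private final Map<String, String> _targets = new LinkedHashMap<>();
  private final Map<String, List<String>> _steps = new LinkedHashMap<>();

  void register(String from, String to, String... stepNames) {
    _targets.put(from, to);
    _steps.put(from, List.of(stepNames));
  }

  // Walk from the currently deployed version to the latest registered
  // version, collecting steps in the order they would be executed.
  List<String> planFrom(String current) {
    List<String> plan = new ArrayList<>();
    while (_targets.containsKey(current)) {
      plan.addAll(_steps.get(current));
      current = _targets.get(current);
    }
    return plan;
  }
}
```

A user on 1.0.0 only gets the 1.0.0 to 1.1.0 steps, while a brand-new install walks the whole chain; that's the "don't care what version everyone is using" property described above.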

Database Upgrades

So one of the common issues with Service Builder modules is that the tables will be created when you first deploy the module to a new environment, but updates will not be processed. I think we could argue on one side that it is a bug or on the other side that expecting Service Builder to track data model changes is far outside of the tool's responsibility.

I'm not going to argue it either way; we are where we are, and solving from this point is all I'm really worried about.

As I previously stated, the com.liferay.portal.kernel.upgrade.UpgradeProcess is going to be the perfect base class to accommodate a database update.

UpgradeProcess extends com.liferay.portal.kernel.dao.db.BaseDBProcess which brings the following methods:

  • hasTable() - Determines if the listed table exists.
  • hasColumn() - Determines if the table has the listed column.
  • hasColumnType() - Determines if the listed column in the listed table has the provided type.
  • hasRows() - Determines if the listed table has rows (in order to provide logic to migrate data during an upgrade).
  • runSQL() - Runs the given SQL statement against the database.

UpgradeProcess itself has two upgradeTable() methods, both of which add a new table to the database.  The first is simple and will create a table based on the name and a multidimensional array of column detail objects; the second takes additional arguments for fixed SQL for the table, indexes, etc.

Additionally UpgradeProcess has a number of inner support classes to facilitate table alterations:

  • AlterColumnName - A class to encapsulate details to change a column name.
  • AlterColumnType - A class to encapsulate details to change a column type.
  • AlterTableAddColumn - A class to encapsulate details to add a new column to a table.
  • AlterTableDropColumn - A class to encapsulate details to drop a column from a table.

Let's write a quick upgrade method to add a column, change another column's name and another column's type.  To facilitate this, our class will extend UpgradeProcess and will need to implement a doUpgrade() method:

@Override
protected void doUpgrade() throws Exception {
  // create all of the alterables
  Alterable addColumn = new AlterTableAddColumn("COL_NEW");
  Alterable fixColumn = new AlterColumnType("COL_NEW", "LONG");
  Alterable changeName = new AlterColumnName("OLD_COL_NAME", "NEW_COL_NAME");
  Alterable changeType = new AlterColumnType("ENTITY_PK", "LONG");

  // apply the alterations to the MyEntity Service Builder entity's table.
  alter(MyEntity.class, addColumn, fixColumn, changeName, changeType);
}

So the alterations are keyed off of your ServiceBuilder entity class; beyond that, you don't have to worry much about SQL to apply these kinds of alterations to your entity's table.


Using just what has been provided here, you can integrate a smooth and automatic upgrade process into your modules, including upgrading your Service Builder's entity backing tables since SB won't do that for you.

Where can you find more details on doing some nitty-gritty upgrade activities? Why, the Liferay source of course.  Here's a fairly complex set of upgrade details to start your review:



My good friend and coworker Nathan Shaw forwarded me a reference that I think is worth adding here.  Thanks Nathan!

Liferay 7 CE/Liferay DXP Scheduled Tasks

Technical Blogs February 6, 2017 By David H Nebinger

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface and then add the details in the <scheduler-entry /> sections in your liferay-portlet.xml file and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference anywhere on how to do them, so I thought I'd whip up a quick blog on them.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we schedule a job, we should first discuss the supported StorageTypes. Liferay has three supported StorageTypes:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type, one that you'll typically want to shoot for. This storage type combines two aspects, MEMORY and CLUSTERED. For MEMORY, that means the job information (next run, etc.) is only held in memory and is not persisted anywhere. For CLUSTERED, that means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY, as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation).

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (i.e. you're running a report to generate a PDF and email, you wouldn't want your 4 node cluster doing the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might want to elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. However, if it is not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to have to do when you start scaling up your environment is try to figure out why some previously regular tasks are no longer running as they did when you had a single server.

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes are identifying the StorageType to use, it starts with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl implements SchedulerEntry, StorageTypeAware {

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
    _schedulerEntry = schedulerEntry;

    // use the same default that Liferay uses.
    _storageType = StorageType.MEMORY_CLUSTERED;
  }

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   * @param storageType
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
    _schedulerEntry = schedulerEntry;
    _storageType = storageType;
  }

  @Override
  public String getDescription() {
    return _schedulerEntry.getDescription();
  }

  @Override
  public String getEventListenerClass() {
    return _schedulerEntry.getEventListenerClass();
  }

  public StorageType getStorageType() {
    return _storageType;
  }

  @Override
  public Trigger getTrigger() {
    return _schedulerEntry.getTrigger();
  }

  @Override
  public void setDescription(final String description) {
    _schedulerEntry.setDescription(description);
  }

  @Override
  public void setTrigger(final Trigger trigger) {
    _schedulerEntry.setTrigger(trigger);
  }

  @Override
  public void setEventListenerClass(final String eventListenerClass) {
    _schedulerEntry.setEventListenerClass(eventListenerClass);
  }

  private SchedulerEntryImpl _schedulerEntry;
  private StorageType _storageType;
}

Now you can use this class to wrap a current SchedulerEntryImpl while adding the StorageTypeAware implementation.

Defining The Scheduled Task

NOTE: If you're using DXP FixPack 14 or later or Liferay 7 CE GA4 or later, jump down to the end of the blog post for a necessary change due to the deprecation of the BaseSchedulerEntryMessageListener class.

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {
    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String, Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getEventListenerClass();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }

    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
    }

    return StorageType.MEMORY_CLUSTERED;
  }

  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * With this reference, this component's activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and note that cron expressions are evaluated in the timezone your app server is configured to run in).

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure, there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job running in parallel at all.
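As a sketch of that idea, here is a minimal, JDK-only guard; the class and method names are invented for illustration (this is not a Liferay API). A job wrapped this way simply skips a fire time when the previous run is still in flight instead of piling up:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A simple non-reentrant guard: if the previous run is still going,
// skip this fire time rather than running the same job in parallel.
class NonOverlappingJob {

  private final AtomicBoolean _running = new AtomicBoolean(false);
  private int _completedRuns = 0;

  // returns true if the work ran, false if it was skipped
  boolean fire(Runnable work) {
    if (!_running.compareAndSet(false, true)) {
      return false; // previous run still in progress; skip this fire time
    }
    try {
      work.run();
      _completedRuns++;
      return true;
    } finally {
      _running.set(false);
    }
  }

  int getCompletedRuns() {
    return _completedRuns;
  }
}
```

Note this only protects a single JVM; for the MEMORY storage type in a cluster, where every node fires the job, you'd need a shared lock (e.g. a database row) instead.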

Just something to keep in mind...


So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

Update 05/18/2017

So I was contacted today about the use of the BaseSchedulerEntryMessageListener class as the base class for the message listener. Apparently this class has become deprecated as of DXP FP 13 as well as the upcoming GA4 release.

The only guidance I was given for updating the code happens to be the same guidance that I give to most folks wanting to know how to do something in Liferay - find an example in the Liferay source.

After reviewing various Liferay examples, we will need to change the parent class for our implementation and modify the activation code.

So now our message listener class is:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {
    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String, Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getClass().getName();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // create the scheduler entry with the trigger and wrap it in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    _schedulerEntryImpl = new SchedulerEntryImpl(getClass().getName(), jobTrigger);
    _schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(_schedulerEntryImpl, StorageType.PERSISTED);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, _schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(_schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }

    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (_schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) _schedulerEntryImpl).getStorageType();
    }

    return StorageType.MEMORY_CLUSTERED;
  }

  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * With this reference, this component's activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
  private SchedulerEntryImpl _schedulerEntryImpl = null;
}

That's all there is to it, but it's best to avoid the deprecated class since you never know when deprecated will become disappeared...

Liferay/OSGi Annotations - What they are and when to use them

Technical Blogs February 1, 2017 By David H Nebinger

When you start reviewing Liferay 7 CE/Liferay DXP code, you run into a lot of annotations in a lot of different ways.  They can all seem kind of overwhelming when you first happen upon them, so I thought I'd whip up a little reference guide, kind of explaining what the annotations are for and when you might need to use them in your OSGi code.

So let's dive right in...


@Component

So in the OSGi world this is the all-important "Declarative Services" annotation defining a service implementation.  DS is an aspect of OSGi for declaring a service dynamically and has a slew of plumbing in place to allow other components to get wired to the component.

There are three primary attributes that you'll find for this annotation:

  • immediate - Often set to true, this will ensure the component is started right away and not wait for a reference wiring or lazy startup.
  • properties - Used to pass in a set of OSGi properties to bind to the component.  The component can see the properties, but more importantly other components will be able to see the properties too.  These properties help to configure the component but also are used to support filtering of components.
  • service - Defines the service that the component implements.  Sometimes this is optional, but often it is mandatory to avoid ambiguity on the service the component wants to advertise.  The service listed is often an interface, but you can also use a concrete class for the service.

When are you going to use it?  Whenever you create a component that you want or need to publish into the OSGi container.  Not all of your classes need to be components.  You'll declare a component when code needs to plug into the Liferay environment (i.e. add a product nav item, define an MVC command handler, override a Liferay component) or to plug into your own extension framework (see my recent blog on building a healthcheck system).


@Reference

This is the counterpart to the @Component annotation.  @Reference is used to get OSGi to inject a component reference into your component. This is a key thing here: since OSGi is doing the injection, it is only going to work on an OSGi @Component class.  @Reference annotations are going to be ignored in non-components, and in fact they are also ignored in parent classes of your component.  Any injected references you need must be declared in the @Component class itself.

This is, of course, fun when you want to define a base class with a number of injected services; the base class does not get the @Component annotation (because it is not complete) and @Reference annotations are ignored in non-component classes, so the injection will never occur.  You end up copying all of the setters and @Reference annotations to all of the concrete subclasses and boy, does that get tedious.  But it is necessary and something to keep in mind.

Probably the most common attribute you're going to see here is the "unbind" attribute, and you'll often find it in the form of @Reference(unbind = "-") on a setter method. When you use a setter method with @Reference, OSGi will invoke the setter with the component to use, but the unbind attribute indicates that there is no method to call when the component is unbinding; basically you're saying you don't handle components disappearing behind your back.  For the most part this is not a problem: the server starts up, OSGi binds the component in, and you use it happily until the system shuts down.

Another attribute you'll see here is target. Target is used as a filter mechanism; remember the properties covered in @Component? With the target attribute, you specify a query that identifies a more specific instance of a component that you'd like to receive.  Here's one example:

@Reference(
  target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
  unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
  _panelApp = panelApp;
}

The code here wants to be given an instance of a PanelApp component, but it's looking specifically for the PanelApp component tied to the notifications portlet.  Any other PanelApp component won't match the filter and won't be applied.
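Conceptually, the target filter is just a predicate over the properties each candidate component registered. Here's a deliberately tiny, JDK-only illustration of that matching idea (ToyFilter and the property values are invented; real OSGi filters use the full LDAP filter syntax):

```java
import java.util.Map;

// Toy illustration: a target filter like "(javax.portlet.name=...)"
// selects among candidate components by their registered properties.
class ToyFilter {

  static boolean matches(String key, String value, Map<String, String> componentProps) {
    return value.equals(componentProps.get(key));
  }
}
```

Only the component whose registered properties satisfy the filter gets bound; everything else is passed over, exactly as in the PanelApp example above.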

There are some attributes that you will sometimes find here that are pretty important, so I'm going to go into some details on those.

The first is the cardinality attribute.  The default value is ReferenceCardinality.MANDATORY, but other values are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. The meanings of these are:

  • MANDATORY - The reference must be available and injected before this component will start.
  • OPTIONAL - The reference is not required for the component to start, and the component will function w/o a component assignment.
  • MULTIPLE - Multiple resources may satisfy the reference and the component will take all of them, but like OPTIONAL the reference is not needed for the component to start.
  • AT_LEAST_ONE - Multiple resources may satisfy the reference and the component will take all of them, but at least one is mandatory for the component to start.

The multiple options allow you to get multiple calls with references that match.  This really only makes sense if you are using the @Reference annotation on a setter method and, in the body of the method, were adding to a list or array.  Alternatives to this kind of thing would be to use a ServiceTracker so you wouldn't have to manage the list yourself.
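A JDK-only sketch of the bind/unbind pair such a MULTIPLE-cardinality reference typically drives (the class and method names are invented; in a real component the add method would carry a @Reference annotation with cardinality = ReferenceCardinality.MULTIPLE and a matching unbind method):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class PanelAppTracker {

  // OSGi would call this once per matching service instance.
  protected void addPanelApp(Object panelApp) {
    _panelApps.add(panelApp);
  }

  // The unbind counterpart, called as services go away; the
  // thread-safe list spares us extra synchronization.
  protected void removePanelApp(Object panelApp) {
    _panelApps.remove(panelApp);
  }

  public int getPanelAppCount() {
    return _panelApps.size();
  }

  private final List<Object> _panelApps = new CopyOnWriteArrayList<>();
}
```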

The optional options allow your component to start without an assigned reference.  This kind of thing can be useful if you have a circular reference issue: A references B, which references C, which references A.  If all three use MANDATORY, none will start because the references cannot be satisfied (only started components can be assigned as a reference).  You break the circle by having one component treat its reference as optional; then all will be able to start and the references will be resolved.

The next important @Reference attribute is the policy.  Policy can be either ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC.  The meanings of these are:

  • STATIC - The component will only be started when there is an assigned reference, and will not be notified of alternative services as they become available.
  • DYNAMIC - The component will start whether references are available or not, and the component will accept new references as they become available.

The reference policy controls what happens after your component starts when new reference options become available.  For STATIC, new reference options are ignored; for DYNAMIC, your component is willing to change.

Along with the policy, another important @Reference attribute is the policyOption.  This attribute can be either ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY.  The meanings of these are:

  • RELUCTANT - For single reference cardinality, new reference potentials that become available will be ignored.  For multiple reference cardinality, new reference potentials will be bound.
  • GREEDY - As new reference potentials become available, the component will bind to them.

Whew, lots of options here, so let's talk about common groupings.

First is the default: ReferenceCardinality.MANDATORY, ReferencePolicy.STATIC and ReferencePolicyOption.RELUCTANT.  This summarizes down to: your component must have one reference service bound in order to start, and regardless of new services that are started, your component is going to ignore them.  These are really good and normal defaults and promote stability for your component.

Another common grouping you'll find in the Liferay source is ReferenceCardinality.OPTIONAL or MULTIPLE, ReferencePolicy.DYNAMIC and ReferencePolicyOption.GREEDY.  In this configuration, the component will function with or without reference service(s), but the component allows for changing/adding references on the fly and wants to bind to new references when they are available.

Other combinations are possible, but you need to understand impacts to your component.  After all, when you declare a reference, you're declaring that you need some service(s) to make your component complete.  Consider how your component can react when there are no services, or what happens if your component stops because dependent service(s) are not available. Consider your perfect world scenario as well as a chaotic nightmare of redeployments, uninstalls, service gaps and identify how your component can weather the chaos.  If you can survive the chaos situation, you should be fine in the perfect world scenario.

Finally, when do you use the @Reference annotation?  When you need service(s) injected into your component from the OSGi environment.  These injections can come from your own module or from other modules in the OSGi container.  Remember that @Reference only works in OSGi components, but you can turn a class into a component with the addition of the @Component annotation.


@BeanReference

This is a Liferay annotation used to inject a reference to a Spring bean from the Liferay core.


@ServiceReference

This is a Liferay annotation used to inject a reference to a bean from a Spring Extender module.

Wait! Three Reference Annotations? Which should I use?

So there they are, the three different types of reference annotations.  Rule of thumb, most of the time you're going to want to just stick with the @Reference annotation.  The Liferay core Spring beans and Spring Extender module beans are also exposed as OSGi components, so @Reference should work most of the time.

If your @Reference isn't getting injected or is null, that will be a sign that you should use one of the other reference annotations.  Here your choice is easy: if the bean is from the Liferay core, use @BeanReference, but if it is from a Spring Extender module, use the @ServiceReference annotation instead.  Note that both annotations require that your module use the Spring Extender too.  To set this up, check out any of your ServiceBuilder service modules to see how to update the build.gradle and bnd.bnd files, etc.


@Activate

The @Activate annotation is OSGi's equivalent to Spring's InitializingBean interface.  It declares a method that will be invoked after the component has started.

In the Liferay source, you'll find it used with three primary method signatures:

protected void activate() { ... }
protected void activate(Map<String, Object> properties) { ... }
protected void activate(BundleContext bundleContext, Map<String, Object> properties) { ... }

There are other method signatures too, just search the Liferay source for @Activate and you'll find all of the different variations. Except for the no-argument activate method, they all depend on values injected by OSGi.  Note that the properties map is actually your properties from OSGi's Configuration Admin service.

When should you use @Activate? Whenever you need to complete some initialization tasks after the component is started but before it is used.  I've used it, for example, to set up and schedule Quartz jobs, verify database entities, etc.


@Deactivate

The @Deactivate annotation is the inverse of the @Activate annotation; it identifies a method that will be invoked when the component is being deactivated.


@Modified

The @Modified annotation marks the method that will be invoked when the component is modified, typically indicating that the @Reference(s) were changed.  In Liferay code, the @Modified annotation is typically bound to the same method as the @Activate annotation so the same method handles both activation and modification.


@ProviderType

The @ProviderType annotation comes from BND and is generally considered a difficult concept to wrap your head around.  Long story greatly over-simplified, @ProviderType is used by BND to define the version ranges assigned in the OSGi manifests of implementors, and it tries to restrict the range to a narrow version difference.

The idea here is to ensure that when an interface changes, the narrow version range on implementors would force implementors to update to match the new version on the interface.

When to use @ProviderType? Well, really you don't need to. You'll see this annotation scattered all through your ServiceBuilder-generated code. It's included in this list not because you need to do it, but because you'll see it and likely wonder why it is there.


@ImplementationClassName

This is a Liferay annotation for ServiceBuilder entity interfaces. It defines the class from the service module that implements the interface.

This won't be an annotation you need to use yourself, but at least you'll know why it's there.


@Transactional

This is another Liferay annotation bound to ServiceBuilder service interfaces. It defines the transaction requirements for the service methods.

This is another annotation you won't be expected to use.


@Indexable

The @Indexable annotation is used to decorate a method which should result in an index update, typically tied to ServiceBuilder methods that add, update or delete entities.

You use the @Indexable annotation on your service implementation methods that add, update or delete indexed entities.  You'll know your entities are indexed if you have an associated Indexer implementation for your entity.


@SystemEvent

The @SystemEvent annotation is tied to ServiceBuilder-generated code which may result in system events.  System events work in concert with staging and the LAR export/import process.  For example, when a journal article is deleted, this generates a SystemEvent record.  When in a staging environment and the "Publish to Live" occurs, the delete SystemEvent ensures that the corresponding journal article from live is also deleted.

When would you use the @SystemEvent annotation? Honestly I'm not sure. With my 10 years of experience, I've never had to generate SystemEvent records or modify the publication or LAR process.  If anyone out there has had to use or modify an @SystemEvent annotation, I'd love to hear about your use case.


@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin.  The @Meta annotations from the BND project allow BND to generate that file from the annotations used in your configuration interfaces.

Important Note: In order to use the @Meta annotations, you must add the following line to your bnd.bnd file:
-metatype: *
If you fail to add this, your @Meta annotations will not be used when generating the XML configuration file.


@Meta.OCD

This is the annotation for the "Object Class Definition" aspect, the container for the configuration details.  This annotation is used at the interface level to provide the id, name and localization details for the class definition.

When do you use this annotation? When you are defining a Configuration Admin interface that will have a panel in the System Settings control panel to configure the component.

Note that the @Meta.OCD attributes include localization settings.  This allows you to use your resource bundle to localize the configuration name, the field level details and the @ExtendedObjectClassDefinition category.


@Meta.AD

This is the annotation for the "Attribute Definition" aspect, the field-level annotation to define the specification for the configuration element. The annotation is used to provide the ID, name, description, default value and other details for the field.

When do you use this annotation? To provide details about the field definition that will control how it is rendered within the System Settings configuration panel.


@ExtendedObjectClassDefinition

This is a Liferay annotation to define the category for the configuration (identifying the tab in the System Settings control panel where the configuration will appear) and the scope for the configuration.

Scope can be one of the following:

  • SYSTEM - Global configuration for the entire system, will only be one configuration instance shared system wide.
  • COMPANY - Company-level configuration that will allow one configuration instance per company in the portal.
  • GROUP - Group-level (site) configuration that allows for site-level configuration instances.
  • PORTLET_INSTANCE - This is akin to portlet instance preferences for scope, there will be a separate configuration instance per portlet instance.

When will you use this annotation? Every time you use the @Meta.OCD annotation, you're going to use the @ExtendedObjectClassDefinition annotation to at least define the tab the configuration will be added to.
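Pulling the three annotations together, a configuration interface typically looks something like the following sketch. Note that the id, category, field names and language keys here are made-up placeholders for illustration, not from any real module:

```java
@ExtendedObjectClassDefinition(
  category = "example",
  scope = ExtendedObjectClassDefinition.Scope.SYSTEM
)
@Meta.OCD(
  id = "com.example.configuration.ExampleConfiguration",
  localization = "content/Language",
  name = "example-configuration-name"
)
public interface ExampleConfiguration {

  // deflt is the default value, description is a resource bundle key.
  @Meta.AD(
    deflt = "60",
    description = "green-threshold-description",
    required = false
  )
  public int greenThreshold();
}
```

With `-metatype: *` in your bnd.bnd, BND generates the metatype XML from this interface and the configuration panel shows up in System Settings under the tab named by the category.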


@OSGiBeanProperties

This is a Liferay annotation used to define the OSGi component properties used to register a Spring bean as an OSGi component. You'll find this used often in ServiceBuilder modules to expose the Spring beans into the OSGi container. Remember that ServiceBuilder is still Spring (and Spring Extender) based, so this annotation exposes those Spring beans as OSGi components.

When would you use this annotation? If you are using Spring Extender to use Spring within your module and you want to expose the Spring beans into OSGi so other modules can use the beans, you'll want to use this annotation.

I'm leaving a lot of details out of this section because the code for this annotation is extensively javadoced.


Conclusion

So that's like all of the annotations I've encountered so far in Liferay 7 CE / Liferay DXP. Hopefully these details will help you in your Liferay development efforts.

Find an annotation I've missed or want some more details on those I've included? Just ask.

Building an Extensible Health Check

Technical Blogs February 1, 2017 By David H Nebinger

Alt Title: Cool things you can do with OSGi


So one thing that many organizations like to stand up in their Liferay environments is a "health check".  The goal is to provide a simple URL that monitoring systems can invoke to verify servers are functioning correctly.  The monitoring systems will review the time it takes to render the health check page and examine the contents to compare against known, expected results.  Should the page take too long to render or does not return the expected result, the monitoring system will begin to alert operations staff.

The goal here is to allow operations to be proactive in resolving outage situations rather than being reactive when a client or supervisor calls in to see what is wrong with the site.

Now I'm not going to deliver a complete working health check system here (sorry in advance if you're disappointed).

What I am going to do is use this as an excuse to show how you can leverage some OSGi stuff to build out Liferay things that you really couldn't have easily done before.

Basically I'm going to build out an extensible health check system which exposes a simple URL and generates a simple HTML table that lists health check sensors and status indicators, the words GREEN, YELLOW and RED for the status of the sensors.  In case it isn't clear, GREEN is healthy, YELLOW means there are non-fatal issues, and RED means something is drastically wrong.

Extensible is the key word in the previous paragraph.  I don't want the piece rendering the HTML to have to know about all of the registered sensors.  As a developer, I want to be able to create new sensors as new systems are integrated into Liferay, etc.  I don't want to have to know about every possible sensor I'm ever going to create and deploy up front, I'll worry about adding new sensors as the need arises.

Defining The Sensor

So our health check system is going to be composed of various sensors.  Our plan here is to follow the Unix philosophy of creating small, concise sensors that are each great at taking an individual sensor reading rather than one really big complicated sensor.

So to do this we're going to need to define our sensor interface:

public interface Sensor {
  public static final String STATUS_GREEN = "GREEN";
  public static final String STATUS_RED = "RED";
  public static final String STATUS_YELLOW = "YELLOW";

  /**
   * getRunSortOrder: Returns the order that the sensor should run.  Lower numbers
   * run before higher numbers.  When two sensors have the same run sort order, they
   * are subsequently ordered by name.
   * @return int The run sort order, lower numbers run before higher numbers.
   */
  public int getRunSortOrder();

  /**
   * getName: Returns the name of the sensor.  The name is also displayed in the HTML
   * for the health check report, so using human-readable names is recommended.
   * @return String The sensor display name.
   */
  public String getName();

  /**
   * getStatus: This is the meat of the sensor, this method is called to actually take
   * a sensor reading and return one of the status codes listed above.
   * @return String The sensor status.
   */
  public String getStatus();
}

Pretty simple, huh?  We accommodate the sorting of the sensors for running so we can have control over the test order, we support providing a display name for the HTML output, and we also provide the method for actually getting the sensor status.

That's all we need to get our extensible healthcheck system started.  Now that we have the sensor interface, let's build some real sensors.

Building Sensors

Obviously we are going to be writing classes that implement the Sensor interface.  The fun part for us is that we're going to take advantage of OSGi for all of our sensor registration, bundling, etc.

So the first option we have with the sensors is whether to combine them in one module or build them as separate modules.  The truth is we really don't care.  You can stick with one module or separate modules.  You could mix things up and create multiple modules that each have multiple sensors.  You can include your sensor for your portlet directly in that module to keep it close to what the sensor is testing.  It's entirely up to you.

Our only limitations are that we have a dependency on the Healthcheck API module and our components have to implement the interface and declare themselves with the @Component annotation.

So for our first sensor, let's look at the JVM memory.  Our sensor is going to look at the % of memory used, we'll return GREEN if 60% or less is used, YELLOW if 61-80% and RED if 81% or more is used.  We'll create this guy as a separate module, too.

Our memory sensor class is:

@Component(immediate = true, service = Sensor.class)
public class MemorySensor implements Sensor {
  public static final String NAME = "JVM Memory";

  @Override
  public int getRunSortOrder() {
    // This can run at any time, it's not dependent on others.
    return 5;
  }

  @Override
  public String getName() {
    return NAME;
  }

  @Override
  public String getStatus() {
    // need the percent used
    int pct = getPercentUsed();

    // if we are 60% or less, we are green.
    if (pct <= 60) {
      return STATUS_GREEN;
    }

    // if we are 61-80%, we are yellow
    if (pct <= 80) {
      return STATUS_YELLOW;
    }

    // if we are above 80%, we are red.
    return STATUS_RED;
  }

  protected double getTotalMemory() {
    double mem = Runtime.getRuntime().totalMemory();

    return mem;
  }

  protected double getFreeMemory() {
    double mem = Runtime.getRuntime().freeMemory();

    return mem;
  }

  protected double getUsedMemory() {
    return getTotalMemory() - getFreeMemory();
  }

  protected int getPercentUsed() {
    double used = getUsedMemory();
    double pct = (used / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }

  protected int getPercentAvailable() {
    double pct = (getFreeMemory() / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }
}

Not very fancy.  There are obvious enhancements we could pursue with this.  We could add a configuration instance so we could define the memory thresholds in the control panel rather than using hard coded values.  We could refine the measurement to account for GC.  Whatever.  The point is we have a sensor which is responsible for getting the status and returning the status string.

Now imagine what you can do with these sensors... You can add a sensor for accessing your database(s).  You can check that LDAP is reachable.  If you use external web services, you could call them to ensure they are reachable (even better if they, too, have some sort of health check facility, your health check can incorporate their health check).

Your sensor options are only limited to what you are capable of creating.

I'd recommend keeping the sensors simple and fast, you don't want a long running sensor chewing up time/cpu just to get some idea of server health.
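To make the idea concrete, here's a hypothetical JDBC connectivity sensor sketched in plain Java.  This is my own illustration, not part of the project: the Sensor interface is inlined so the snippet stands alone, the JDBC URL and credentials are placeholders, and in the real module the class would also carry the @Component(immediate = true, service = Sensor.class) annotation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Inlined copy of the Sensor interface from above so this snippet is self-contained.
interface Sensor {
  String STATUS_GREEN = "GREEN";
  String STATUS_RED = "RED";
  String STATUS_YELLOW = "YELLOW";

  int getRunSortOrder();
  String getName();
  String getStatus();
}

// Hypothetical sensor that checks database connectivity via JDBC.
class DatabaseSensor implements Sensor {
  private final String jdbcUrl;
  private final String user;
  private final String password;

  DatabaseSensor(String jdbcUrl, String user, String password) {
    this.jdbcUrl = jdbcUrl;
    this.user = user;
    this.password = password;
  }

  @Override
  public int getRunSortOrder() {
    // run after the cheap, local sensors like the memory sensor.
    return 10;
  }

  @Override
  public String getName() {
    return "Database";
  }

  @Override
  public String getStatus() {
    // GREEN if we can connect and the connection validates within 5 seconds,
    // YELLOW if the connection comes back but fails validation, RED if we
    // cannot connect at all.
    try (Connection connection = DriverManager.getConnection(jdbcUrl, user, password)) {
      return connection.isValid(5) ? STATUS_GREEN : STATUS_YELLOW;
    } catch (SQLException e) {
      return STATUS_RED;
    }
  }
}
```

Keeping the check to a single, bounded connection attempt respects the "simple and fast" guideline above.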

Building The Sensor Manager

The sensor manager is another key part of our extensible healthcheck system.

The sensor manager is going to use a ServiceTracker so it knows all the sensors that are available and gracefully handles the addition and removal of new Sensor components.  Here's the SensorManager:

@Component(immediate = true, service = SensorManager.class)
public class SensorManager {

  /**
   * getHealthStatus: Returns the map of current health statuses.
   * @return Map map of statuses, key is the sensor name and value is the sensor status.
   */
  public Map<String, String> getHealthStatus() {
    StopWatch totalWatch = null;

    // time the total health check
    if (_log.isDebugEnabled()) {
      totalWatch = new StopWatch();
      totalWatch.start();
    }

    // grab the list of sensors from our service tracker
    List<Sensor> sensors = _serviceTracker.getSortedServices();

    // create a map to hold the sensor status results
    Map<String, String> statuses = new HashMap<>();

    // if we have at least one sensor
    if ((sensors != null) && (! sensors.isEmpty())) {
      String status;
      StopWatch sensorWatch = null;

      // create a stopwatch to time the sensors
      if (_log.isDebugEnabled()) {
        sensorWatch = new StopWatch();
      }

      // for each registered sensor
      for (Sensor sensor : sensors) {
        // reset the stopwatch for the run
        if (_log.isDebugEnabled()) {
          sensorWatch.reset();
          sensorWatch.start();
        }

        // get the status from the sensor
        status = sensor.getStatus();

        // add the sensor and status to the map
        statuses.put(sensor.getName(), status);

        // report sensor run time
        if (_log.isDebugEnabled()) {
          sensorWatch.stop();

          _log.debug("Sensor [" + sensor.getName() + "] run time: " + DurationFormatUtils.formatDurationWords(sensorWatch.getTime(), true, true));
        }
      }
    }

    // report health check run time
    if (_log.isDebugEnabled()) {
      totalWatch.stop();

      _log.debug("Health check run time: " + DurationFormatUtils.formatDurationWords(totalWatch.getTime(), true, true));
    }

    // return the status map
    return statuses;
  }

  @Activate
  protected void activate(BundleContext bundleContext, Map properties) {

    // if we have a current service tracker (likely not), let's close it.
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }

    // create a new sorting service tracker.
    _serviceTracker = new SortingServiceTracker(bundleContext, Sensor.class.getName(), new Comparator<Sensor>() {

      @Override
      public int compare(Sensor o1, Sensor o2) {
        // compare method to sort primarily on run order and secondarily on name.
        if ((o1 == null) && (o2 == null)) return 0;
        if (o1 == null) return -1;
        if (o2 == null) return 1;

        if (o1.getRunSortOrder() != o2.getRunSortOrder()) {
          return o1.getRunSortOrder() - o2.getRunSortOrder();
        }

        return o1.getName().compareTo(o2.getName());
      }
    });

    _serviceTracker.open();
  }

  @Deactivate
  protected void deactivate() {
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }
  }

  private SortingServiceTracker<Sensor> _serviceTracker;
  private static final Log _log = LogFactoryUtil.getLog(SensorManager.class);
}

The SensorManager has the ServiceTracker instance to retrieve the list of registered Sensor services and uses the list to grab each sensor status.  The getHealthStatus() method is the utility method to hide all of the implementation details but expose the ability to grab the map of sensor status details.
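To show how the status map might be consumed, here's a plain-Java sketch (my own illustration, not from the project) of turning the map into the simple HTML table that a monitoring system would scrape; in the real portlet this would live in a JSP or a serveResource handler:

```java
import java.util.Map;

// Renders a sensor-name-to-status map as the simple HTML table the
// monitoring system parses for GREEN/YELLOW/RED values.
class HealthReportRenderer {

  static String render(Map<String, String> statuses) {
    StringBuilder sb = new StringBuilder("<table>");

    for (Map.Entry<String, String> entry : statuses.entrySet()) {
      // one row per sensor: name in the first cell, status in the second.
      sb.append("<tr><td>").append(entry.getKey()).append("</td><td>")
        .append(entry.getValue()).append("</td></tr>");
    }

    return sb.append("</table>").toString();
  }
}
```

A LinkedHashMap (or the sorted list from the SensorManager) keeps the rows in the run sort order.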


Conclusion

Yep, that's right, this is the conclusion.  That's really all there is to see here.

I mean, there is more: you need a portlet to serve up the health status on demand (a serveResource request can work fine here), and just displaying the health status in the portlet view will allow admins to see the health whenever they log into the portal.  And you can add a servlet so external monitoring systems can hit your status page using /o/healthcheck/status (my checked-in project supports this).

But yeah, that's not really important with respect to showing cool OSGi stuff.

Ideally this becomes a platform for you to build out an expandable health check system in your own environment.  Pull down the project, start writing your own Sensor implementations and check out the results.

If you build some cool sensors you want to share, send me a PR and I'll add them to the project.

In fact, let's consider this to be like a community project.  If you use it and find issues, feel free to submit PRs with fixes.  If you build some Sensors, submit a PR with them.  If you come up with a cool enhancement, send a PR.  I'll do some minimal verification and merge everything in.

Here's the github project link to get you started:

Alt Conclusion

Just like there's an alternate title, there's an alternate conclusion.

The alternate conclusion here is that there's some really cool things you can do when you embrace OSGi in Liferay, pretty much the way Liferay has embraced OSGi.

OSGi offers a way to build expandable systems that are very decoupled.  If you need this kind of expansion, focus on separating your API from your implementations, then use a ServiceTracker to access all available instances.

Liferay uses this kind of thing extensively.  The product menu is extensible this way, the My Account pages are extensible in this way, heck even the LiferayMVC portlet implementations using MVCActionCommand and MVCResourceCommand interfaces rely on the power of OSGi to handle the dynamic services.

LiferayMVC is actually an interesting example; there, instead of managing a service tracker list, they manage a service tracker map where the key is the MVC command.  So the LiferayMVC portlet uses the incoming MVC command to get the service instance based on the command and passes the control to it for processing.  This makes the portlet more extensible because anyone can add a new command or override an existing command (using service ranking) and the original portlet module doesn't need to be touched at all.
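The dispatch idea can be sketched outside OSGi in a few lines of plain Java.  The class and names here are mine for illustration; Liferay actually backs this with a ServiceTrackerMap keyed by the mvc.command.name property, so registration happens automatically as components come and go:

```java
import java.util.HashMap;
import java.util.Map;

// Command-keyed dispatch: handlers register under a command name, and the
// "portlet" looks up the handler for the incoming command and hands it control.
class CommandDispatcher {
  private final Map<String, Runnable> commands = new HashMap<>();

  // In OSGi this registration is driven by the service tracker map.
  void register(String commandName, Runnable handler) {
    commands.put(commandName, handler);
  }

  // Returns true if a handler was found and invoked, mirroring how an
  // MVCActionCommand matching the request's command gets control.
  boolean dispatch(String commandName) {
    Runnable handler = commands.get(commandName);

    if (handler == null) {
      return false;
    }

    handler.run();
    return true;
  }
}
```

Overriding a command is just registering a new handler under an existing key, which is the plain-Java analog of deploying a higher-ranked service.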

Where can you find examples of things you can do leveraging OSGi concepts?  The Liferay source, of course.  Liferay eats its own dog food, and they do a lot more with OSGi than I've ever needed to.  If you have some idea of a thing to do that benefits from an OSGi implementation but need an example of how to do it, find something in the Liferay source that has a similar implementation and see how they did it.

Liferay 7 Notifications

Technical Blogs January 18, 2017 By David H Nebinger

So in a recent project I've been building I reached a point where I believed my project would benefit from being able to issue user notifications.

For those that are not aware, Liferay has a built-in system for subscribing and notifications.  Using these APIs, you can quickly add notifications to your projects.


Before diving into the implementation, let's talk about the foundation of the subscription and notification APIs.

The first thing you need to identify is your events.  Events are what users will subscribe to, and when events occur you want to issue notifications.  Knowing your list of events going in will make things easier as you'll know when you'll need to send notifications.

For this blog post we're going to be implementing an example, so I'm going to build a system to issue a notification when a user with the Administrator role logs in.  We all know that for effective site security no one should be using Administrator credentials because of the unlimited access Administrators have, so getting a notification when an Administrator logs in can be a good security test.

So we know the event, "Administrator has logged in", the rest of the blog will build out support for subscriptions and notifications.

One thing we will need to pull all of this together is a portlet module (since we have some interface requirements).  Let's start by building a new portlet module.  You can use your IDE, but to be IDE-neutral I'm going to stick with blade commands:

blade create -t mvcportlet -p com.dnebinger.admin.notification admin-notification

This will give us a new Liferay MVC portlet using our desired package and a simple project directory.

Subscribing To The Event

The first thing that users need to be able to do is to subscribe to your events.  Some portal examples are blogs (where a user can subscribe to all blogs or an individual blog for changes).  So the challenge for you is to identify where a user would subscribe to your event.  In some cases you would display right on the page (i.e. blogs and forum threads), in other cases you might want to move it to a configuration panel.

For this portlet there is no real UI work; we're just going to have a JSP with a checkbox for subscribe/unsubscribe.  The important part is the Liferay subscription APIs we're going to use.

Subscription is handled by the com.liferay.portal.kernel.service.SubscriptionLocalService service.  When you're subscribing, you'll be using the addSubscription() method, and when you're unsubscribing you're going to use the deleteSubscription() method.

Arguments for the calls are:

  • userId: The user who is subscribing/unsubscribing.
  • groupId: (for adds) the group the user is subscribing to (for 'containers' like a doc lib folder or forum category).
  • className: The name of the class the user wants to be notified of changes on.
  • pkId: The primary key for the object to (un)subscribe to.

So normally you'll be building subscription into your SB entities, so it is common practice to put subscribe and unsubscribe methods into the service interface to combine the subscription features with the data access.

For this project, we don't really have an entity or a data access layer, so we're just going to handle subscribe/unsubscribe directly in our action command handler.  Also we don't really have a PK id so we'll just stick with an id of 0 and the portlet class as the class name.

@Component(
  immediate = true,
  property = {
    "javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION_PORTLET_KEY,
  },
  service = MVCActionCommand.class
)
public class SubscribeMVCActionCommand extends BaseMVCActionCommand {

  @Override
  protected void doProcessAction(ActionRequest actionRequest, ActionResponse actionResponse) throws Exception {
    String cmd = ParamUtil.getString(actionRequest, Constants.CMD);

    if (Validator.isNull(cmd)) {
      // an error
      return;
    }

    long userId = PortalUtil.getUserId(actionRequest);

    if (Constants.SUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.addSubscription(userId, 0, AdminNotificationPortlet.class.getName(), 0);
    } else if (Constants.UNSUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.deleteSubscription(userId, AdminNotificationPortlet.class.getName(), 0);
    }
  }

  @Reference(unbind = "-")
  protected void setSubscriptionLocalService(final SubscriptionLocalService subscriptionLocalService) {
    _subscriptionLocalService = subscriptionLocalService;
  }

  private SubscriptionLocalService _subscriptionLocalService;
}

User Notification Preferences

Users can manage their notification preferences separately from portlets which issue notifications.  When you go to the "My Account" area of the side bar, there's a "Notifications" option.  When you click this link, you will normally see your list of notifications.  But if you click on the dot menu in the upper right corner, you can choose "Configuration" to see all of the magic.

This page has a collapsible area for each registered notifying portlet, and within each area is a line item for each type of notification with sliders to receive notifications by email and/or website (assuming the portlet has indicated that it supports both types of notifications).  In the future Liferay or your team might add more notification methods (i.e. SMS or others), and for those cases the portlet just needs to indicate that it also supports the notification type.

So how do we get our collapsible panel registered to appear on this page?  Well, through the magic of OSGi DS service annotations, of course!

There are two types of classes that we need to implement.  The first extends the com.liferay.portal.kernel.notifications.UserNotificationDefinition class.  As the class name suggests, this class provides the definition of the type of notifications the portlet can send and the notification types it supports.

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationDefinition.class
)
public class AdminLoginUserNotificationDefinition extends UserNotificationDefinition {

  public AdminLoginUserNotificationDefinition() {
    // pass in our portlet key, 0 for a class name id (don't care about it), the notification type (not really), and
    // finally the resource bundle key for the message the user sees.
    super(
      AdminNotificationPortletKeys.ADMIN_NOTIFICATION, 0, 0,
      "receive-a-notification-when-an-administrator-logs-in");

    // add a notification type for each sort of notification that we want to support.
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "email", UserNotificationDeliveryConstants.TYPE_EMAIL, true, true));
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "website", UserNotificationDeliveryConstants.TYPE_WEBSITE, true, true));
  }
}

This definition class registers with the notification framework and informs it that we have notifications to deliver.  The constructor binds the notification to our custom portlet and provides the message key for the notification panel.  We also add the two user notification delivery types that we'll support, one for sending an email and one for using the notifications portlet.

When you go to the Notifications panel under My Account and choose the Configuration option from the menu in the dropdown on the upper-right corner, you can see the notification preferences:

Notification Type Selection

Handling Notifications

Another aspect of notifications is the UserNotificationHandler implementation.  The UserNotificationHandler's job is to interpret the notification event and determine whether to deliver the notification and build the UserNotificationFeedEntry (basically the notification message itself).

Liferay provides a number of base implementation classes that you can use to build your own UserNotificationHandler instance from:

  • com.liferay.portal.kernel.notifications.BaseUserNotificationHandler - This implements a simple user notification handler with points to override the body of the notification and some other key points, but for the most part it is capable of building all of the basic notification details.
  • com.liferay.portal.kernel.notifications.BaseModelUserNotificationHandler - This class is another base class suitable for asset-enabled entities for notifications.  It uses the AssetRenderer for the entity class to render the asset and this is used as the message for the notification.

Obviously if you have an asset-enabled entity you're notifying on, you'd want to use the BaseModelUserNotificationHandler.  For our implementation we're going to use BaseUserNotificationHandler as the base class:

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationHandler.class
)
public class AdminLoginUserNotificationHandler extends BaseUserNotificationHandler {

  /**
   * AdminLoginUserNotificationHandler: Constructor class.
   */
  public AdminLoginUserNotificationHandler() {
    // bind this handler to our portlet.
    setPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION);
  }

  @Override
  protected String getBody(UserNotificationEvent userNotificationEvent, ServiceContext serviceContext) throws Exception {
    String username = LanguageUtil.get(serviceContext.getLocale(), _UNKNOWN_USER_KEY);

    // okay, we need to get the user for the event
    User user = _userLocalService.fetchUser(userNotificationEvent.getUserId());

    if (Validator.isNotNull(user)) {
      // get the company the user belongs to.
      Company company = _companyLocalService.fetchCompany(user.getCompanyId());

      // based on the company auth type, find the user name to display.
      // so we'll get screen name or email address or whatever they're using to log in.

      if (Validator.isNotNull(company)) {
        if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_EA)) {
          username = user.getEmailAddress();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_SN)) {
          username = user.getScreenName();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_ID)) {
          username = String.valueOf(user.getUserId());
        }
      }
    }

    // we'll be stashing the client address in the payload of the event, so let's extract it here.
    JSONObject jsonObject = JSONFactoryUtil.createJSONObject(userNotificationEvent.getPayload());

    String fromHost = jsonObject.getString(Constants.FROM_HOST);

    // fetch our strings via the language bundle.
    String title = LanguageUtil.get(serviceContext.getLocale(), _TITLE_KEY);

    String body = LanguageUtil.format(serviceContext.getLocale(), _BODY_KEY, new Object[] {username, fromHost});

    // build the html using our template.
    String html = StringUtil.replace(_BODY_TEMPLATE, _BODY_REPLACEMENTS, new String[] {title, body});

    return html;
  }

  @Reference(unbind = "-")
  protected void setUserLocalService(final UserLocalService userLocalService) {
    _userLocalService = userLocalService;
  }

  @Reference(unbind = "-")
  protected void setCompanyLocalService(final CompanyLocalService companyLocalService) {
    _companyLocalService = companyLocalService;
  }

  private UserLocalService _userLocalService;
  private CompanyLocalService _companyLocalService;

  private static final String _TITLE_KEY = "title.admin.login";
  private static final String _BODY_KEY = "body.admin.login";
  private static final String _UNKNOWN_USER_KEY = "unknown.user";

  private static final String _BODY_TEMPLATE = "<div class=\"title\">[$TITLE$]</div><div class=\"body\">[$BODY$]</div>";
  private static final String[] _BODY_REPLACEMENTS = new String[] {"[$TITLE$]", "[$BODY$]"};

  private static final Log _log = LogFactoryUtil.getLog(AdminLoginUserNotificationHandler.class);
}

This handler basically builds the body of the notification itself which will be based on the admin login details (user and where they are logging in from).  When building out your own notification, you'll likely want to be using the notification event payload to pass details.  We're going to pass just the host where the admin is coming from so our payload is as simple as it gets, but you could easily pass XML or JSON or whatever structured string you want to pass necessary notification details.

This handler builds the same notification body for both the email and the notifications portlet display.  Since the method is passed the UserNotificationEvent, you can use its getDeliveryType() method to build different bodies depending upon whether you are building an email notification or a notifications portlet display message.
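If you do want channel-specific bodies, the branch on the delivery type might look like this. This is a sketch: the constants are local stand-ins for Liferay's UserNotificationDeliveryConstants (the values here are illustrative, not Liferay's), and buildBody() is a hypothetical helper.

```java
public class DeliveryTypeBodyDemo {

    // Local stand-ins for Liferay's UserNotificationDeliveryConstants
    // TYPE_WEBSITE / TYPE_EMAIL; the numeric values are illustrative only.
    public static final int TYPE_WEBSITE = 1;
    public static final int TYPE_EMAIL = 2;

    // Hypothetical helper: choose a body per delivery channel.
    public static String buildBody(int deliveryType, String fromHost) {
        if (deliveryType == TYPE_EMAIL) {
            // email clients tolerate richer markup
            return "<h1>Administrator Login</h1><p>From " + fromHost + "</p>";
        }

        // compact body for the notifications portlet sidebar
        return "<div>Admin login from " + fromHost + "</div>";
    }

    public static void main(String[] args) {
        System.out.println(buildBody(TYPE_EMAIL, "10.0.0.17"));
        System.out.println(buildBody(TYPE_WEBSITE, "10.0.0.17"));
    }
}
```

In the real handler you'd call userNotificationEvent.getDeliveryType() and compare against the Liferay constants instead.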

Publishing Notification Events

So far we have code to allow users to subscribe to our administrator login event, we allow them to choose how they want to receive the notifications, and we also have code to transform the notification event into a notification message; what remains is actually issuing the notification events themselves.

This is very much going to be dependent upon your event source.  Most Liferay events are based on the addition or modification of some entity, so it is common to find their event publishing code in the service implementation classes when the entities are added or updated.  Your own notification events can come from wherever the event originates, even outside of the service layer.

Our notification event is based on an administrator login; the best way to publish these kinds of events is through a post login component.  We'll define the new component as such:

@Component(
  immediate = true, property = {"key=login.events.post"},
  service = LifecycleAction.class
)
public class AdminLoginNotificationEventSender implements LifecycleAction {

  @Override
  public void processLifecycleEvent(LifecycleEvent lifecycleEvent)
      throws ActionException {

    // get the request associated with the event
    HttpServletRequest request = lifecycleEvent.getRequest();

    // get the user associated with the event
    User user = null;

    try {
      user = PortalUtil.getUser(request);
    } catch (PortalException e) {
      // failed to get the user, just ignore this
    }

    if (user == null) {
      // failed to get a valid user, just return.
      return;
    }

    // We have the user, but are they an admin?
    PermissionChecker permissionChecker = null;

    try {
      permissionChecker = PermissionCheckerFactoryUtil.create(user);
    } catch (Exception e) {
      // ignore the exception
    }

    if (permissionChecker == null) {
      // failed to get a permission checker
      return;
    }

    // If the permission checker indicates the user is not omniadmin, nothing to report.
    if (! permissionChecker.isOmniadmin()) {
      return;
    }

    // this user is an administrator, need to issue the event
    ServiceContext serviceContext = null;

    try {
      // create a service context for the call
      serviceContext = ServiceContextFactory.getInstance(request);

      // note that when you're behind an LB, the remote host may be the address
      // for the LB instead of the remote client.  In these cases the LB will often
      // add a request header with a special key that holds the remote client host,
      // so you'd want to use that if it is available.
      String fromHost = request.getRemoteHost();

      // notify subscribers
      notifySubscribers(user.getUserId(), fromHost, user.getCompanyId(), serviceContext);
    } catch (PortalException e) {
      // ignored
    }
  }

  protected void notifySubscribers(long userId, String fromHost, long companyId, ServiceContext serviceContext)
      throws PortalException {

    // all of this stuff should normally come from some kind of configuration.
    // As this is just an example, we're using a lot of hard-coded values.
    String entryTitle = "Admin User Login";

    String fromName = PropsUtil.get(Constants.EMAIL_FROM_NAME);
    String fromAddress = GetterUtil.getString(PropsUtil.get(Constants.EMAIL_FROM_ADDRESS), PropsUtil.get(PropsKeys.ADMIN_EMAIL_FROM_ADDRESS));

    LocalizedValuesMap subjectLocalizedValuesMap = new LocalizedValuesMap();
    LocalizedValuesMap bodyLocalizedValuesMap = new LocalizedValuesMap();

    subjectLocalizedValuesMap.put(Locale.ENGLISH, "Administrator Login");
    bodyLocalizedValuesMap.put(Locale.ENGLISH, "Administrator has logged in.");

    AdminLoginSubscriptionSender subscriptionSender =
        new AdminLoginSubscriptionSender();

    subscriptionSender.setCompanyId(companyId);
    subscriptionSender.setCurrentUserId(userId);
    subscriptionSender.setEntryTitle(entryTitle);
    subscriptionSender.setFrom(fromAddress, fromName);
    subscriptionSender.setFromHost(fromHost);
    subscriptionSender.setHtmlFormat(true);
    subscriptionSender.setLocalizedSubjectMap(LocalizationUtil.getMap(subjectLocalizedValuesMap));
    subscriptionSender.setLocalizedBodyMap(LocalizationUtil.getMap(bodyLocalizedValuesMap));
    subscriptionSender.setServiceContext(serviceContext);

    int notificationType = AdminNotificationType.NOTIFICATION_TYPE_ADMINISTRATOR_LOGIN;

    subscriptionSender.setNotificationType(notificationType);

    String portletId = PortletProviderUtil.getPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION, PortletProvider.Action.VIEW);

    subscriptionSender.setPortletId(portletId);

    subscriptionSender.addPersistedSubscribers(
        AdminNotificationPortlet.class.getName(), 0);

    subscriptionSender.flushNotificationsAsync();
  }
}

This is a LifecycleAction component that registers as a post login lifecycle event listener.  It goes through a series of checks to determine if the user is an administrator and, when they are, it issues a notification.  The real fun happens in the notifySubscribers() method.

This method has a lot of initialization and setting of the subscriptionSender properties.  This variable is of type AdminLoginSubscriptionSender, a class which extends SubscriptionSender.  This is the guy that handles the actual notification sending.

The flushNotificationsAsync() method pushes the instance onto the Liferay Message Bus, where a message receiver gets the SubscriptionSender and invokes its flushNotifications() method (you can call that method directly if you don't need async notification sending).
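The sync/async pairing is a common pattern, and a minimal stand-in makes it easy to see. Here a plain executor plays the role of the Liferay Message Bus, and the Sender class is a toy stand-in for SubscriptionSender; none of this is Liferay API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncFlushDemo {

    // Stand-in for SubscriptionSender: flushNotifications() does the work,
    // flushNotificationsAsync() hands the same work to a background thread
    // (Liferay uses the Message Bus for this instead of an executor).
    static class Sender {
        final AtomicInteger sent = new AtomicInteger();

        void flushNotifications() {
            sent.incrementAndGet(); // pretend we notified a subscriber
        }

        void flushNotificationsAsync(ExecutorService executor) {
            executor.submit(this::flushNotifications);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Sender sender = new Sender();

        sender.flushNotifications();              // synchronous path
        sender.flushNotificationsAsync(executor); // queued path

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(sender.sent.get());
    }
}
```

The point is that the async variant is just the sync variant deferred onto another thread of execution; the notification logic itself is identical.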

The flushNotifications() method does some permission checking, user verification, and filtering (i.e., it normally doesn't send a notification to the user that generated the event), and eventually sends the email notification and/or adds the user notification for the notifications portlet.

The AdminLoginSubscriptionSender class is pretty simple:

public class AdminLoginSubscriptionSender extends SubscriptionSender {

  private static final long serialVersionUID = -7152698157653361441L;

  @Override
  protected void populateNotificationEventJSONObject(
      JSONObject notificationEventJSONObject) {

    super.populateNotificationEventJSONObject(notificationEventJSONObject);

    notificationEventJSONObject.put(Constants.FROM_HOST, _fromHost);
  }

  @Override
  protected boolean hasPermission(Subscription subscription, String className, long classPK, User user) throws Exception {
    return true;
  }

  @Override
  protected boolean hasPermission(Subscription subscription, User user) throws Exception {
    return true;
  }

  @Override
  protected void sendNotification(User user) throws Exception {
    // remove the super class's filtering that skips notifying the user who
    // generated the event.  That makes sense in most cases, but we want a
    // notification of every admin login so we know whenever any admin logs
    // in from anywhere at any time.

    // It will be a pain to be notified because of our own login, but we want to
    // know if some hacker gets our admin credentials and logs in when it's not really us.
    sendEmailNotification(user);
    sendUserNotification(user);
  }

  public void setFromHost(String fromHost) {
    this._fromHost = fromHost;
  }

  private String _fromHost;
}

Putting It All Together

Okay, so now we have everything:

  • We have code to allow users to subscribe to the admin login events.
  • We have code to allow users to select how they receive notifications.
  • We have code to transform the notification message in the database into HTML for display in the notifications portlet.
  • We have code to send the notifications when an administrator logs in.

Now we can test it all out.  After building and deploying the module, you're pretty much ready to go.

You'll have to log in and sign up to receive the notifications.  Note that if you're using your local test Liferay environment, you might not have email enabled, so be sure to use the web notifications.  In fact, in my DXP environment I don't have email configured and I got a slew of exceptions from the Liferay email subsystem; I ignored them since they came from not having email set up.

Then log out and log in again as an administrator and you should see your notification pop up in the left sidebar.

Admin Notifications Messages


So there you have it, basic code that will support creating and sending notifications.  You can use this code to add notifications support to your own portlets and take them wherever you need.

You can find the code for this project up on github:


Removing Panels from My Account

Technical Blogs December 29, 2016 By David H Nebinger

So recently I was asked, "How can panels be removed from the My Account portlet?"

It seems like such a deceptively simple question since it used to be a supported feature, but my response to the question was that it is just not possible anymore.

Back in the 6.x days, all you needed to do was override the main, identification and/or miscellaneous form properties in portal-ext.properties and you could take out any of the panels you didn't want the users to see in the My Account panel.

The problem with the old way is that while it was easy to remove panels or reorganize panels, it was extremely difficult to add new custom panels to My Account.

With Liferay 7 and DXP, things have swung the other way.  With OSGi, it is extremely easy to add new panels to My Account.  Just create a new component that implements the FormNavigatorEntry interface and deploy it, and Liferay will happily present your custom panel in My Account.  See the Liferay documentation for information on how to do this.

Although it is easy to add new panels, there is no way to remove panels.  So I set out to find out if there was a way to restore this functionality.


In Liferay 7, the new Form Navigator components replace the old portal properties setup.  But while there used to be three separate sets of properties (one for adds, one for updates and one for My Account), they have all been merged into a single set.  So even if there were a supported way to disable a panel, it would disable the panel not only in My Account but also in the Users control panel, not something that most sites want.

So this problem actually points to the solution - in order to control the My Account panels separately, we need a completely separate Form Navigator setup.  If we had a separate setup, we'd also need to have a JSP fragment bundle override to use the custom Form Navigator rather than the original.

Creating the My Account Form Navigator

This is actually a simple task because the Liferay code is already deployed as an OSGi bundle, but more importantly the current classes are declared in an export package in the module's BND file.  This means that our classes can extend the Liferay classes without dependency issues.

Here's one of the new form navigator entries that I came up with:

@Component(
	immediate = true,
	property = {"form.navigator.entry.order:Integer=70"},
	service = FormNavigatorEntry.class
)
public class UserPasswordFormNavigatorEntryExt extends UserPasswordFormNavigatorEntry {

	// custom portal property controlling panel visibility; the key name here
	// is from our own portal-ext.properties, not a Liferay-defined property.
	private boolean visible = GetterUtil.getBoolean(PropsUtil.get(
		"my.account.user.password.visible"), true);

	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}

	@Override
	public boolean isVisible(User user, User selUser) {
		return visible && super.isVisible(user, selUser);
	}
}
So first off our navigator entry extends the original entry, and that saves us a lot of custom coding.  We're using a custom portal property to disable the panel, so we're fetching that up front.  We're using the property setting along with the super class' method to determine if the panel should be visible.  This will allow us to set a value to disable the panel and it will just get excluded.

The most important part is the override for the form navigator ID.  We're declaring a custom ID that will separate how the new entries and categories will be made available.

All of the other form navigator entries will follow the same pattern, they'll extend the original class, use a custom property to allow for disabling, and they'll use the special category prefix for the entries.

We'll do the same with the category overrides:

@Component(
	immediate = true,
	property = {"form.navigator.category.order:Integer=30"},
	service = FormNavigatorCategory.class
)
public class UserUserInformationFormNavigatorCategoryExt extends UserUserInformationFormNavigatorCategory {

	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}
}
For the three component categories we'll return our new form navigator ID string.

Creating the My Account JSP Fragment Bundle

Now that we have a custom form navigator, we'll need a JSP override on My Account to use the new version.

Our fragment bundle is based on the My Account portlet, so we can use the following blade command:

blade create -t fragment -h com.liferay.my.account.web -H 1.0.4 users-admin-web-my-account

If you're using an IDE, just use the equivalent to create a fragment module from the My Account web module for the version currently used in your portal (check the $LIFERAY_HOME/work folder; the module's folder name there includes the version number deployed in your portal).

To use our new form navigator, we need to override the edit_user.jsp page.  This page actually comes from the modules/apps/foundation/users-admin/users-admin-web module from the Liferay source (during build time it is copied into the My Account module).  Copy the file from the source to your new module as this will be the foundation for the change.

I made two changes to the file.  The first change adds some java code to use a variable for the form navigator id to use, the second change was to the <liferay-ui:form-navigator /> tag declaration to use the variable instead of a constant.

Here's the java code I added:

// initialize to the real form navigator id.
String formNavigatorId = FormNavigatorConstants.FORM_NAVIGATOR_ID_USERS;

// if this is the "My Account" portlet...
if (portletName.equals(myAccountPortletId)) {
	// include the special prefix
	formNavigatorId = "my.account." + formNavigatorId;
}
We start with the current constant value for the navigator id.  Then if the portlet id matches the My Account portlet, we'll add our form navigator ID prefix to the variable.
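Pulled out of the JSP, the selection logic is just a conditional prefix. In this standalone sketch the constant values are placeholders; in the real page they come from FormNavigatorConstants.FORM_NAVIGATOR_ID_USERS and the portlet context.

```java
public class FormNavigatorIdDemo {

    // Placeholder values; the real JSP reads these from
    // FormNavigatorConstants and the current portlet name.
    static final String FORM_NAVIGATOR_ID_USERS = "users";
    static final String MY_ACCOUNT_PORTLET_ID =
        "com_liferay_my_account_web_portlet_MyAccountPortlet";

    // Mirror of the JSP logic: prefix the id only for the My Account portlet.
    static String formNavigatorId(String portletName) {
        String formNavigatorId = FORM_NAVIGATOR_ID_USERS;

        if (portletName.equals(MY_ACCOUNT_PORTLET_ID)) {
            formNavigatorId = "my.account." + formNavigatorId;
        }

        return formNavigatorId;
    }

    public static void main(String[] args) {
        System.out.println(formNavigatorId(MY_ACCOUNT_PORTLET_ID)); // prefixed
        System.out.println(formNavigatorId("some_other_portlet"));  // unchanged
    }
}
```

Because the prefixed id only matches our new components, the Users control panel keeps the stock navigator while My Account gets ours.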

Here's the change to the form navigator tag:

<liferay-ui:form-navigator
	backURL="<%= backURL %>"
	formModelBean="<%= selUser %>"
	id="<%= formNavigatorId %>"
/>

Build and deploy all of your modules to begin testing.


Testing is actually pretty easy to do.  Start by adding some entries to your properties file in order to disable some of the panels:


If your portal is running, you'll need to restart the portal for the properties to take effect.

Start by logging in and going to the Users control panel and choose to edit a user.  Don't worry, we're not editing, we're just looking to see that all of the panels are there:

User Control Panel View

Next, navigate to My Account -> Account Settings to see if the changes have been applied:

My Account Panel

Here we can see that the whole Identification section has been removed since all of its panels have been disabled.  We can also see that the password and personal site template sections have been removed from the view.


So this project has been loaded to GitHub; feel free to clone it and use it as you see fit.

It uses properties in portal-ext.properties to disable specific panels from the My Account page while leaving the Users control panel unchanged.

There is one caveat to remember though.

When we created the fragment, we had to use the version number currently deployed in the portal.  This means that any upgrade to the portal (a new GA, SP, FP or even a hot fix) may change the version number of the My Account portlet.  That will force you to come back to your fragment bundle and change the version in the BND file.  In fact, my recommendation here would be to match your bundle version to the fragment host bundle version, as that makes it easier to identify whether you have the right version(s) deployed.
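The version-matching rule is mechanical, so it helps to see it as code. This simplified comparator checks whether a host version falls inside a range like [1.0.4,2.0.0); it ignores qualifiers and is a stand-in for the real org.osgi.framework.Version handling, not OSGi API.

```java
public class VersionRangeDemo {

    // Compare dotted numeric versions segment by segment (qualifiers
    // ignored; the OSGi framework uses org.osgi.framework.Version for this).
    static int compare(String a, String b) {
        String[] as = a.split("\\.");
        String[] bs = b.split("\\.");
        int n = Math.max(as.length, bs.length);

        for (int i = 0; i < n; i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;

            if (ai != bi) {
                return Integer.compare(ai, bi);
            }
        }

        return 0;
    }

    // Inclusive lower bound, exclusive upper bound: [low, high)
    static boolean inRange(String version, String low, String high) {
        return compare(version, low) >= 0 && compare(version, high) < 0;
    }

    public static void main(String[] args) {
        System.out.println(inRange("1.0.4", "1.0.4", "2.0.0")); // true
        System.out.println(inRange("1.1.0", "1.0.4", "2.0.0")); // true
        System.out.println(inRange("2.0.0", "1.0.4", "2.0.0")); // false
    }
}
```

With an exact version in the bnd file, only the lower-bound case matches; a range keeps the fragment attaching across minor host updates.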

Otherwise you should be good to go!

Update: You can eliminate the above caveat by using a version range in your bnd file.  I changed the Fragment-Host line in the bnd.bnd file to be:

Fragment-Host: com.liferay.my.account.web;bundle-version="[1.0.4,2.0.0)"
This allows your fragment bundle to apply to any newer version of the module that gets deployed up to 2.0.0.  Note, however, that you should still compare each release to ensure you're not losing any fixes or improvements that have been shipped as an update.

Extending Liferay OSGi Modules

Technical Blogs December 20, 2016 By David H Nebinger

Recently I was working on a fragment bundle for a JSP override to the message boards and I wanted to wrap the changes so they could be disabled by a configuration property.

But the configuration is managed by a Java interface and set via the OSGi Configuration Admin service in a completely different module jar contained in an LPKG file in the osgi directory.

So I wondered if there was a way to weave in a change to the Java interface to include my configuration item, in a concept similar to the plugin extending a plugin technique.

And in fact there is and I'm going to share it with you here...

Creating The Module

So first we're going to be building a gradle module in a Liferay workspace, so you're going to need one of those.

In our modules directory we're going to create a new folder named message-boards-api-ext for containing our new module.  Actually the name of the folder doesn't matter too much so feel free to follow your own naming standards.

We need a gradle build file, and since we're in a Liferay workspace, our build.gradle file is pretty simple:

dependencies {
    compileOnly group: "biz.aQute.bnd", name: "biz.aQute.bndlib", version: "3.1.0"
    compileOnly group: "com.liferay", name: "com.liferay.portal.configuration.metatype", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.core", version: "5.0.0"

    compile group: "com.liferay", name: "com.liferay.message.boards.api", version: "3.1.0"
}

jar.archiveName = 'com.liferay.message.boards.api.jar'

The dependencies mostly come from the build.gradle file from the module from the Liferay source found here:

We did add, as a compile dependency, the module that we're building a replacement for, in this case the com.liferay.message.boards.api module.

Also, we are specifying an archive name for the jar we're building that excludes the version number, so that it matches the specifications from the Liferay override documentation:

We also need a bnd.bnd file to build our module:

Bundle-Name: Liferay Message Boards API
Bundle-SymbolicName: com.liferay.message.boards.api
Bundle-Version: 3.1.0
Liferay-Releng-Module-Group-Title: Message Boards

Include-Resource: @com.liferay.message.boards.api-3.1.0.jar

The bulk of the file is going to come directly from the original:

The only addition to the file is the Include-Resource BND declaration.  As I previously covered in my blog post about OSGi Module Dependencies, this is the declaration used to create an Uber Module.  But for our purposes, this actually provides the binary source for the bulk of the content of our module.

By building an Uber Module from the source module, we are basically going to be building a jar file from the exploded original module jar, allowing us to have the baseline jar with all of the original content.
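Conceptually, the build is overlaying our compiled classes on top of an exploded copy of the original jar. This rough standalone illustration of that overlay step uses java.util.zip; it is not the actual bnd Include-Resource mechanics, just the same idea.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class JarOverlayDemo {

    // Copy every entry from the original jar, but let entries in
    // "overrides" replace originals and add brand new ones - roughly
    // what Include-Resource plus our compiled classes achieves.
    public static void overlay(Path original, Path target,
            Map<String, byte[]> overrides) throws IOException {

        Map<String, byte[]> merged = new LinkedHashMap<>();

        try (ZipInputStream in = new ZipInputStream(Files.newInputStream(original))) {
            for (ZipEntry entry; (entry = in.getNextEntry()) != null; ) {
                merged.put(entry.getName(), in.readAllBytes());
            }
        }

        merged.putAll(overrides); // our changes win

        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(target))) {
            for (Map.Entry<String, byte[]> e : merged.entrySet()) {
                out.putNextEntry(new ZipEntry(e.getKey()));
                out.write(e.getValue());
                out.closeEntry();
            }
        }
    }
}
```

Entries we supply shadow the originals, and everything we don't touch passes through untouched, which is exactly why the baseline jar stays complete.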

Finally we need our source override file, in this case the src/main/java/com/liferay/message/boards/configuration/MBConfiguration.java file:

package com.liferay.message.boards.configuration;

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

/**
 * @author Sergio González
 * @author dnebinger
 */
@ExtendedObjectClassDefinition(category = "collaboration")
@Meta.OCD(
	id = "com.liferay.message.boards.configuration.MBConfiguration",
	localization = "content/Language", name = ""
)
public interface MBConfiguration {

	/**
	 * Enter time in minutes on how often this job is run. If a user's ban is
	 * set to expire at 12:05 PM and the job runs at 2 PM, the expire will occur
	 * during the 2 PM run.
	 */
	@Meta.AD(deflt = "120", required = false)
	public int expireBanJobInterval();

	/**
	 * Flag that determines if the override should be applied.
	 */
	@Meta.AD(deflt = "false", required = false)
	public boolean applyOverride();

}

So the bulk of the code comes from the original:

Our addition is the new flag value.

Now if we had other modifications for other classes, we would make sure we had the same paths, same packages and same class names; we would just layer our changes in on top of the originals.

We could even introduce new packages and classes for our custom code.

Building The Module

Building is pretty easy, we just use the gradle wrapper to do the build:

$ ../../gradlew build

When we look inside of our built module jar, this is where we can see that our change did, in fact, get woven into the new module jar:

Exploded Module

We can see from the highlighted line from the compiled class that our jar definitely contains our method, so our build is good.  We can also see the original packages and resources that we get from the Uber Module approach, so our module is definitely complete.

Deploying The Module

Okay, so this is the ugly part of this whole thing, deployments are not easy.

Here are the restrictions that we have to keep in mind:

  1. The portal cannot be running when we do the deployment.
  2. The built jar file must be copied manually to the $LIFERAY_HOME/osgi/marketplace/override folder.
  3. The $LIFERAY_HOME/osgi/state folder must be deleted.
  4. If you have changed any web-type file (javascript, JSP, css, etc.) you should delete the relevant folder from the $LIFERAY_HOME/work folder.
  5. If Liferay deploys a newer version than the one declared in our bundle override, our changes may not be applied.
  6. Only one override bundle can work at a time; if someone else has an override bundle in this folder, your change will step on theirs, so this may not be an option (Liferay may distribute updates or hot fixes as module overrides in this fashion).
  7. Support for $LIFERAY_HOME/osgi/marketplace/override was added in later LR7CE/DXP releases, so check that the version you are using has this support.
  8. You can break your portal if your module override does bad things or has bugs.

Wow, that is a lot of restrictions.

Basically we need to copy our jar manually to the $LIFERAY_HOME/osgi/marketplace/override folder, but we cannot do it if the application server is running.  The $LIFERAY_HOME/osgi/state folder should be whacked as we are changing the content of the folder.  And the bundle folder in $LIFERAY_HOME/work may need to be deleted so any cached resources are properly cleaned out.
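The copy-and-clean dance above can be scripted. This is a sketch of a helper using java.nio.file under the standard Liferay layout assumptions (osgi/marketplace/override and osgi/state under $LIFERAY_HOME); it must only be run while the portal is stopped.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.stream.Stream;

public class OverrideDeployer {

    // Copy the override jar into osgi/marketplace/override and wipe the
    // osgi/state folder.  Run this only while the portal is stopped.
    public static void deploy(Path liferayHome, Path moduleJar) throws IOException {
        Path overrideDir = liferayHome.resolve("osgi/marketplace/override");
        Files.createDirectories(overrideDir);
        Files.copy(moduleJar, overrideDir.resolve(moduleJar.getFileName()),
            StandardCopyOption.REPLACE_EXISTING);

        Path stateDir = liferayHome.resolve("osgi/state");

        if (Files.exists(stateDir)) {
            // delete children before parents
            try (Stream<Path> walk = Files.walk(stateDir)) {
                walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                });
            }
        }
    }
}
```

Clearing any stale $LIFERAY_HOME/work folders for changed web resources would be an additional step along the same lines.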

For this change, we copy the com.liferay.message.boards.api.jar file to the $LIFERAY_HOME/osgi/marketplace/override folder and delete the state folder while the application server is down, then we can fire up the application server to see the outcome.


We can verify the change works by navigating to the Control Panel -> System Settings -> Collaboration -> Message Boards page and viewing the change:

Deployed Module Override

Be forewarned, however!  Listen when I emphasize the following:

Pay special attention to the restrictions on this technique!  This is not a "build, deploy and forget it" technique as each Liferay upgrade, fix pack or hot fix can easily invalidate your change, and every deployment requires special handling.
You can seriously break Liferay if you deliver bad code in your module override, so test the heck out of these overrides.
This should not be used in lieu of supported Liferay methods to extend or replace functionality (JSP fragment bundles, MVC command service overrides, etc.), this is just intended for edge cases where no other override/extension option is supported.

In other words:


Using a Custom Bundle for the Liferay Workspace

Technical Blogs December 6, 2016 By David H Nebinger

So the Liferay workspace is pretty handy when it comes to building all of your OSGi modules, themes, layout templates and yes, even your legacy code from the plugins SDK.

But, when it comes to initializing a local bundle for deployment or building a dist bundle, using one of the canned Liferay 7 bundles from sourceforge may not cut it, especially if you're using Liferay DXP, or if you're behind a strict proxy or you just want a custom bundle with fixes already streamed in.

Looking at the gradle.properties file in the workspace, it seems as though a custom URL will work and, in fact, it does.  You can point to your custom URL and download your custom bundle.

But if you don't have a local HTTP server to host your bundle from, you can simulate having a custom bundle downloaded from a site w/o actually doing a download.

Building the Custom Bundle

So first you need to build a custom bundle.  This is actually quite easy.  It is simply a matter of downloading a current bundle, unzipping it, making whatever changes you want, then re-zipping it back up.

So you might, for example, download the DXP Service Pack 1 bundle, expand it, apply Fix Pack 8, then zip it all back up.  This will give you a custom bundle with the latest (currently) fix pack ready for your development environment.

One key here, though, is that the bundle must have a root directory in the zip.  Basically this means you shouldn't see, for example, /tomcat-8.0.32 as a folder in the root directory of the zip, it should always have some directory that contains everything, such as /dxp-sp1-fp8/tomcat-8.0.32, etc.

If you try to use your custom bundle and you get weird unzip errors like corrupt streams and such, that likely points to a missing directory at the root level w/ the bundle nestled underneath it.
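A quick programmatic sanity check for that layout: every entry in the zip should live under one shared top-level folder. This checker is my own sketch (the folder names in the comment are illustrative), not part of the workspace tooling.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class BundleZipCheck {

    // Returns true when every entry lives under one shared root folder,
    // e.g. "dxp-sp1-fp8/tomcat-8.0.32/..." - the layout the workspace expects.
    public static boolean hasSingleRoot(Path zip) throws IOException {
        String root = null;

        try (ZipInputStream in = new ZipInputStream(Files.newInputStream(zip))) {
            for (ZipEntry entry; (entry = in.getNextEntry()) != null; ) {
                int slash = entry.getName().indexOf('/');

                if (slash < 0) {
                    return false; // file sitting at the zip root
                }

                String first = entry.getName().substring(0, slash);

                if (root == null) {
                    root = first;
                } else if (!root.equals(first)) {
                    return false;
                }
            }
        }

        return root != null;
    }
}
```

Running this against your re-zipped bundle before pointing the workspace at it can save a confusing debugging session later.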

Copy the Bundle to the Cache Folder

As it turns out, Liferay actually caches downloaded bundles in the ~/.liferay/bundles directory.

Copy your custom bundle zip to this directory to get it in the download cache.

Update the Download URL

Next you have to update your gradle.properties file to use your new URL.

You can use any URL you want, just make sure it ends with your bundle name:

liferay.workspace.bundle.url=https://example.com/bundles/dxp-sp1-fp8.zip
Modify the Build Download Instructions

The final step is to modify the build.gradle file in the workspace root.  Add the following lines:

downloadBundle {
    onlyIfNewer false
    overwrite false
}
This basically has the downloadBundle task skip the network check to see whether the remote URL file exists and is newer than the locally cached file.


Hey, that's it!  You're now ready to go with your local bundle.  Issue the "gradlew distBundleZip" command to build all of your modules, extract your custom bundle, feather in your deployment artifacts, and zip it all up into a ready-to-deploy Liferay bundle.

Great for parties, entertaining, and even building a container-based deployment artifact for a cloud-based Liferay environment.

