SQL Server/DXP Upgrade Settings

Technical Blogs May 4, 2018 By David H Nebinger

When you are upgrading to DXP on SQL Server, it is recommended to set the following property in the portal-upgrade-ext.properties file:


Otherwise you can encounter upgrade issues related to the table schema changing after a cursor has been opened.

This can hit you during the verification of audited models or resource permissions, especially when you have large row counts.

The property setting above makes the verify processes run sequentially rather than in parallel, preventing exceptions like:

Caused by: java.lang.Exception: Verification error: com.liferay.portal.verify.VerifyResourcePermissions
	at com.liferay.portal.verify.VerifyProcess.doVerify(VerifyProcess.java:124)
	at com.liferay.portal.verify.VerifyResourcePermissions.verify(VerifyResourcePermissions.java:76)
	at com.liferay.portal.verify.VerifyResourcePermissions.doVerify(VerifyResourcePermissions.java:88)
	at com.liferay.portal.verify.VerifyProcess.verify(VerifyProcess.java:72)
	... 15 more
Caused by: java.util.concurrent.ExecutionException: com.microsoft.sqlserver.jdbc.SQLServerException: Could not complete cursor operation because the table schema changed after the cursor was declared.
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at com.liferay.petra.function.UnsafeConsumer.accept(UnsafeConsumer.java:42)
	at com.liferay.petra.function.UnsafeConsumer.accept(UnsafeConsumer.java:30)
	at com.liferay.portal.verify.VerifyProcess.doVerify(VerifyProcess.java:117)
	... 18 more


Bringing DropWizard Metrics to Liferay 7/DXP

Technical Blogs April 16, 2018 By David H Nebinger


So in any production system, there is typically a desire to capture metrics, use them to define a system health check, and then monitor the health check results from an APM tool to preemptively notify administrators of problems.

Liferay does not provide this kind of functionality, but it was functionality that I needed for a recent project.

Rather than roll my own implementation, I decided that I wanted to start from DropWizard's Metrics library and see what I could come up with.

DropWizard's Metrics library is well known for its usefulness in this space, so it is an obvious starting point.

The Metrics Library

As a quick review, the Metrics library exposes objects representing counters, gauges, meters, timers and histograms. Based upon what you want to track, one of these metric types will be used to store the runtime information.

In addition, there's support for defining a health check: a test that returns a Result (essentially a pass/fail) and is intended to use the metrics as the basis for its evaluation.

For example, you might define a Gauge for available JVM memory. As a gauge, it will basically be checking the difference between the total memory and used memory. A corresponding health check might be created to test that available memory must be greater than, say, 20%. When available memory drops below 20%, the system is not healthy and an external APM tool could monitor this health check and issue notifications when this occurs. By using 20%, you are giving admins time to get in and possibly resolve the situation before things go south.
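To make that gauge/health-check pairing concrete, here is a plain-Java sketch of the underlying computation using only java.lang.Runtime. The class and method names are hypothetical, and no DropWizard types are used so the sketch stays self-contained; in the real library, availableMemoryRatio() would be the body of a registered Gauge and isHealthy() the body of a HealthCheck.check() returning Result.healthy() or Result.unhealthy().

```java
public class MemoryHealthSketch {

    // Gauge-style computation: fraction of the max heap still available.
    public static double availableMemoryRatio() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();

        return 1.0 - ((double) used / (double) rt.maxMemory());
    }

    // HealthCheck-style test: healthy while more than the threshold remains.
    public static boolean isHealthy(double threshold) {
        return availableMemoryRatio() > threshold;
    }

    public static void main(String[] args) {
        System.out.printf("available=%.2f healthy=%b%n",
            availableMemoryRatio(), isHealthy(0.20));
    }
}
```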

So that's the overview, but now let's talk about the code.

When I started reviewing the code, I was initially disheartened to see very little in the way of "design by interface". For me, design by interface is an indicator of how easy or hard it will be to bring the library into the OSGi container. With heavy design by interface, I can typically subclass key implementations and expose them as @Components, and consumers can just @Reference the interfaces and OSGi will take care of the wiring.

Admittedly, this kind of architecture can be considered overkill for a metrics library. The library developers likely planned for the lib to be used in java applications or even web applications, but likely never considered OSGi.

At this point, I really struggled with figuring out the best path forward. What would be the best way to bring the library into OSGi?

For example, I could create a bunch of interfaces representing the clean metrics and some interfaces representing the registries, then back all of these with concrete implementations as @Components that are shims on top of the DropWizard Metrics library. I soon discarded this because the shims would get too complicated, casting back and forth between the interfaces and the metrics library implementations.

I could have cloned the existing DropWizard Metrics GitHub repo and basically hacked it all up to be more "design by interface". The problem here, though, is that every update to the Metrics lib would require all of this repeated hacking up of their code to bring the updates forward. So this path was discarded.

I could have taken the Metrics library and used it as inspiration for building my own library. Except then I'd be stuck maintaining the library and re-inventing the wheel, so this path was discarded.

So I settled on a fairly light-weight solution that, I feel, is OSGi-enough without having to take over the Metrics library maintenance.

Liferay Metrics

The path I elected to take was to include and export the DropWizard Metrics library packages from my bundle and add in some Liferay-specific, OSGi-friendly metric registry access.

I knew I had to export the Metrics packages from my bundle since OSGi was not going to provide them and having separate bundles include their own copies of the Metrics jar would not allow for aggregation of the metrics details.

The Liferay-specific, OSGi-friendly registry access comes from two interfaces:

  • com.liferay.metrics.MetricRegistries - A metric registry lookup to find registries that are scoped according to common Liferay scopes.
  • com.liferay.metrics.HealthCheckRegistries - A health check registry lookup to find registries that are scoped according to common Liferay scopes.

Along with the interfaces, there are corresponding @Component implementations that can be @Reference injected via OSGi.

Liferay Scopes

Unlike a web application, where there is typically a single scope (the application itself), Liferay has a bunch of common scopes used to group and aggregate details. A metrics library is only useful if it too can support scopes in a fashion similar to Liferay. Since the DropWizard Metrics library supports multiple metric registries, it was easy to overlay the common Liferay scopes onto the registries.

The supported scopes are:

  • Portal (Global) scope - This registry would contain metrics that have no separate scope requirements.
  • Company scope - This registry would contain metrics scoped to a specific company id. For example, if you were counting logins by company, the login counter would be stored in the company registry so it can be tracked separately.
  • Group (Site) scope - This registry would contain metrics scoped to the group (or site) level.
  • Portlet scope - This registry would contain metrics scoped to a specific portlet plid.
  • Custom scope - This is a general way to define a registry by name.

Using these scopes, different modules that you create can look up a specific metric in a specific scope without tight coupling between your own modules.

Metrics Servlets

The DropWizard Metrics library ships with a few useful servlets, but to use them you need to be able to add them to your web application's web.xml file. In Liferay/OSGi, instead we want to leverage the OSGi HTTP Whiteboard pattern to define an @Component that gets automagically exposed as a servlet.

The Liferay Metrics bundle does just that; it exposes five of the key DropWizard servlets, but they use OSGi facilities and the Liferay-specific interfaces to provide functionality.

The following table provides details on the servlets:

Servlet Context Description
CPU Profile /o/metrics/gprof Generates and returns a gprof-compatible file of profile details.
Health Check /o/metrics/health-checks Runs the health checks and returns a JSON object with the results. Takes two arguments, type (for the desired scope) and key (for company or group id, plid or custom scope name).
Metrics /o/metrics/metrics Returns a JSON object with the metrics for the given scope. Takes same two arguments, type and key, as described for the health checks servlet.
Ping /o/metrics/ping Simple servlet that responds with the text "pong". Can be used to test that a node is responding.
Thread Dump /o/metrics/thread-dump Generates a thread dump of the current JVM.
Admin /o/metrics/admin A simple menu to access the above listed servlets.

The Ping servlet can be used to test whether the node is responding to requests. The Metrics servlet can be used to pull all of the metrics at the designated scope for evaluation in an APM for alerting. The Health Check servlet runs health checks defined in code (which may need access to server-side details to evaluate health), and these too can be invoked from an APM tool.

The CPU Profile and Thread Dump servlets can provide useful information to assist with profiling your portal or capturing a thread dump to, say, submit to Liferay support on a LESA ticket.

The Admin servlet, while not absolutely necessary, provides a convenient way to get to the individual servlets.

NOTE: There is no security or permission checks bound to these servlets. It is expected that you would take appropriate steps to secure their access in your environment, perhaps via firewall rules to block external access to the URLs or whatever is appropriate to your organization.

Metrics Portlet

In addition, there is a really simple Liferay MVC portlet under the Metrics category, the Liferay Metrics portlet. This super-simple portlet just dumps all of the information from the various registries. It can be used by an admin to view what is going on in the system, but if used it should be permissioned to prevent casual access by general users.

Using Liferay Metrics

Now for some of the fun stuff...

The DropWizard Metrics Getting Started page shows a simple example for measuring pending jobs in a queue:

private final Counter pendingJobs = metrics.counter(name(QueueManager.class, "pending-jobs"));

public void addJob(Job job) {
    pendingJobs.inc();
    queue.offer(job);
}

public Job takeJob() {
    pendingJobs.dec();
    return queue.take();
}

Our version is going to be different from this, of course, but not by much. Let's assume that we are going to track the pending jobs metric by company id. We might come up with something like:

@Component(
    immediate = true
)
public class CompanyJobQueue {

    public void addJob(long companyId, Job job) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // increment
        pendingJobs.inc();

        // do the other stuff
    }

    public Job takeJob(long companyId) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // decrement
        pendingJobs.dec();

        // do the other stuff
        return queue.take();
    }

    @Reference(unbind = "-")
    protected void setMetricRegistries(final MetricRegistries metricRegistries) {
        _metricRegistries = metricRegistries;
    }

    private MetricRegistries _metricRegistries;
}

The keys here are that MetricRegistries is injected by OSGi, and that it is used to locate a specific DropWizard metric registry instance where metrics can be retrieved or created. Since metrics can easily be looked up, there is no reason to hold a reference to one indefinitely.

In the liferay-metrics repo, there are some additional examples that demonstrate how to leverage the library from other Liferay OSGi code.


So I think that kind of covers it. I've pulled in the DropWizard Metrics library as-is, exposed it to the OSGi container so other modules can leverage the metrics, and provided an OSGi-friendly way to inject registry locators based on common Liferay scopes. There are also the exposed servlets, which provide APM access to metrics details, and a portlet to see what is going on from a regular Liferay page.

The repo is available from https://github.com/dnebing/liferay-metrics, so feel free to use and enjoy.

Oh, and if you have some additional examples or cool implementation details, please feel free to send me a PR. Perhaps the community can grow this out into something everyone can use...

Liferay 7/DXP: Making Logging Changes Persistent

Technical Blogs April 16, 2018 By David H Nebinger


I have never liked one aspect of Liferay logging - it is not persistent.

For example, I can't debug a startup issue unless I get the portal-log4j-ext.xml file set up and out there.

Not so much of a big deal as a developer, but as a portal admin if I use the control panel to change logging levels, I don't expect them to be lost just because the node restarts.


So about a year ago, I created the log-persist project.

The intention of this project is to persist logging changes.

The project itself contains 3 modules:

  • A ServiceBuilder API jar to define the interface over the LoggingConfig entity.
  • A ServiceBuilder implementation jar for the LoggingConfig entity.
  • A bundle that contains a Portlet ActionFilter implementation to intercept incoming ActionRequests for the Server Administration portlet with the logging config panel.

The ServiceBuilder aspect is pretty darn simple: there is only a single entity defined, LoggingConfig, which represents a logging configuration.

The action is in the ActionFilter component. This component wires itself to the Server Administration portlet, so all incoming ActionRequests (meaning all actions a user performs on that portlet) are intercepted by the filter. The filter passes the ActionRequest on to the real portlet code; upon return, it checks whether the command was "addLogLevel" or "updateLogLevels", the two commands used in the portlet to change log levels. For those commands, the filter extracts the form values and passes them to the ServiceBuilder layer to persist.

Additionally the filter has an @Activate method that will be invoked when the component is started. In this method, the code pulls all of the LoggingConfig entities and will re-apply them to the Liferay logging configuration.
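The re-apply step described above can be sketched in a few lines. This is a self-contained illustration using java.util.logging rather than Liferay's logging wrapper, and the class name and map-based signature are hypothetical (the real project reads LoggingConfig entities via ServiceBuilder):

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelReapplier {

    // On activation, walk the persisted logger-name -> level pairs and
    // push each one back into the logging framework.
    public static void reapply(Map<String, String> persistedLevels) {
        for (Map.Entry<String, String> entry : persistedLevels.entrySet()) {
            Logger.getLogger(entry.getKey())
                .setLevel(Level.parse(entry.getValue()));
        }
    }
}
```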

All you need to do is build the three modules and drop them into your Liferay deploy folder; they'll take care of the rest.


So that's it. I should note that the last module is not really necessary. I mean, it only contains a single component, the ActionFilter implementation, and there's no reason that it has to be in its own module. It could certainly be merged into the API module or the service implementation module.

But it works. The logging persists across restarts and, as an added bonus, will apply the logging config changes across the cluster during startup.

It may not be a perfect implementation, but it will get the job done.

You can find it in my git repo: https://github.com/dnebing/log-persist

BND Instruction To Avoid

Technical Blogs April 9, 2018 By David H Nebinger


Recently I was building a fragment bundle to expose a private package, per a previous blog entry of mine. In the original bnd.bnd file, I found the following:

-dsannotations-options: inherit

Not seeing this before, I had to do some research...

Inheriting References

So I think I just gave it away.

When you add this instruction to your bnd.bnd file, the class hierarchy is searched and all @Reference annotations on parent classes are processed as if they were defined in the subclass itself.

Normally, if you have a Foo class with an @Reference and a child Bar class, the parent's references are not handled by OSGi. Instead, you need to add an @Reference annotation to the Bar class and have it call the superclass's setter method (this is also why you should always put your @Reference annotations on protected setters rather than private members: a subclass may need to set the value).
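Here is that pattern in code. The @Reference annotation is declared locally as a stub purely so the sketch compiles without the OSGi jars, and SomeService, Foo and Bar are hypothetical names:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-in for org.osgi.service.component.annotations.Reference,
// declared here only so this sketch compiles without the OSGi jars.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Reference {}

interface SomeService {}

class Foo {
    protected SomeService someService;

    @Reference
    protected void setSomeService(SomeService someService) {
        this.someService = someService;
    }
}

// Without "-dsannotations-options: inherit", SCR ignores the parent's
// @Reference, so Bar must re-declare it and delegate to the parent setter.
class Bar extends Foo {

    @Reference
    protected void setSomeService(SomeService someService) {
        super.setSomeService(someService);
    }
}
```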

Once you add the dsannotations instruction to your bnd.bnd file, you no longer have to copy all of those @Reference annotations into the subclasses.

My first thought was that this was cool, this would save me from so much @Reference copying. Surely it would be an instruction I'd want to use like all of the time...

Avoid This Instruction

Further research led me to a discussion about supporting @Reference in inheritance found here: https://groups.google.com/forum/#!topic/bndtools-users/6oKC2e-24_E

It turns out that this is a rather nasty implementation issue. Mainly, if you split Foo and Bar into different bundles, the contexts are different. When processing Bar in a different bundle, it has its own context, class loader, etc., separate from the bundle that holds the Foo parent class. I know OSGi appears to be magic in how it apparently crosses these contexts without us as developers realizing how, but there's actually some complicated stuff going on under the hood, stuff that you and I really don't want to know too much about.

But for us to correctly and effectively use the dsannotations inheritance, we would have to know a lot more about how this context stuff worked.

Effectively, it's a can of worms, one that you really don't want to rip the lid off of.

So we need to avoid using this instruction, if only for that reason.

A more complete response, though, comes from Felix Meschberger:

You might be pleased to hear that at the Apache Felix project we once had this feature in our annotations. From that we tried to standardize it actually.

The problem, though, is that we get a nasty coupling issue here between two implementation classes across bundle boundaries and we cannot express this dependency properly using Import-Package or Require-Capability headers.

Some problems springing to mind:

  • Generally you want to make bind/unbind methods private. Would it be OK for SCR to call the private bind method on the base class? (It can technically be done, but would it be OK?)

  • What if we have private methods but the implementor decides to change the name of the private methods — after all they are private and not part of the API surface. The consumer will fail as the bind/unbind methods are listed in the descriptors provided by/for the extension class and they still name the old method names.

  • If we don’t support private method names for that we would require these bind/unbind methods to be protected or public. And thus we force implementation-detail methods to become part of the API. Not very nice IMHO.

  • Note: package private methods don’t work as two different bundles should not share the same package with different contents.

We argued back then that it would be ok-ish to have such inheritance within a single bundle but came to the conclusion that this limitation, the explanations around it, etc. would not be worth the effort. So we dropped the feature again from the roadmap.

If I Shouldn't Use It, Why Is Liferay?

Hey, I had the same question!

It all comes down to the Liferay code base. Even though it is now OSGi-ified code, it still has a solid connection to the historical versions of the code. Blogs, for example, are now done via OSGi modules, but a large part of the code closely resembles code from the 6.x line.

The legacy Liferay code base heavily uses inheritance in addition to composition. Even for the newer Liferay implementation, there is still the heavy reliance on inheritance.

The optimal pattern for OSGi is one of composition and lighter inheritance; it's what makes OSGi Declarative Services so powerful: I can define a new component with a higher service ranking to replace an existing component, or wire together components to compose a dynamic solution.

Liferay's heavy use of inheritance, though, means there are a lot of parent classes that would require a heck of a lot of child class @Reference annotation copying in order to complete injection throughout the class hierarchy.

While there are plans to rework the code toward more composition and less inheritance, this will take some time to complete. In the interim, rather than force those changes right away, and to eliminate the @Reference annotation copying, Liferay has used the -dsannotations-options instruction to force @Reference annotation processing up the class hierarchy. Generally this is not a problem because the inheritance is typically restricted to a single bundle, so the context change issues do not arise, although the remainder of the points Felix raised are still a concern.


So now you know as much as I do about the -dsannotations-options BND instruction, why you'll see it in Liferay bundles, but more importantly why you shouldn't be using it in your own projects.

And if you are mucking with Liferay bundles, if you see the -dsannotations-options instruction, you'll now know why it is there and why you need to keep it around.

Overriding Component Properties

Technical Blogs April 7, 2018 By David H Nebinger


So once you've been doing some Liferay OSGi development, you'll recognize your component properties stanza, most commonly applied to a typical portlet class:

@Component(
	immediate = true,
	property = {
		"javax.portlet.display-name=my-controlpanel Portlet",
		"javax.portlet.name=" + MyControlPanelPortletKeys.MyControlPanel
	},
	service = Portlet.class
)
public class MyControlPanelPortlet extends MVCPortlet {

This is the typical thing you get when you use the Blade tools' "panel-app" template.

This is well and good, you're in development and you can edit these as you need to add, remove or change values.

But what can you do with the OOTB Liferay components, the ones that are compiled into classes packaged into a jar which is packaged into an LPKG file in the osgi/marketplace folder?

Overriding Component Properties

So actually this is quite easy to do. Before I show you how, though, I want to show what is actually going on...

So the "property" stanza, as well as the lesser-used "properties" stanza (which identifies a file to load component properties from), is actually managed by the OSGi Configuration Admin (CA) service. Because it is managed by CA, we actually get a slew of functionality without even knowing about it.

The @Activate and @Modified annotations that allow you to pass in the properties map? CA is participating in that.

The @Reference annotation target filters referring to property values? CA is participating in that.

Just like CA is at the core of all of the configuration interfaces and Liferay's ConfigurationProviderUtil to fetch a particular instance, these properties can be accessed in code in a similar way.

The other thing that CA brings us, the thing we're going to take advantage of here, is that CA can use override files with custom property additions and updates (sorry, no deletes).

Let's say my sample class is actually com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet. To override the properties, I just have to create an osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file. Note the importance of a) the location of the file, b) the file name being the full package/class name and c) the file having either the .cfg or .config extension and conforming to the appropriate CA format for that type.

The .cfg format is the simpler of the two; it follows the standard Java properties file format. So if I wanted to override the category, to expose this portlet so it can be dropped on a page, I could put the following in my osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file:

com.liferay.portlet.display-category=category.sample
That's all there is to it. CA will use this override when collecting the component properties and, when Liferay is processing it, will treat this portlet as though it is in the Sample category and allow you to drop it on a page.

In a similar way you can add new properties, but the caveat is that the code must support them. For example, the MyControlPanelPortlet is not instanceable; I could put the following into my .cfg file:

com.liferay.portlet.instanceable=true
I'm adding a new property, one that is not in the original set of properties, but I know the code supports it and will make the portlet instanceable.


Using this same technique, you can override the properties for any OOTB Liferay component, including portlets, action classes, etc.

Just be sure to put the file into the osgi/configs folder, name the file correctly using full path/class, and use the .cfg or .config extensions with the correct format.

You can find out more about the .config format here: https://dev.liferay.com/discover/portal/-/knowledge_base/7-0/understanding-system-configuration-files
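For a quick sense of how the two formats differ, here is the display-category override from earlier shown both ways. This is an illustrative sketch; consult the linked page for the authoritative .config syntax:

```
# .cfg: standard Java properties syntax
com.liferay.portlet.display-category=category.sample

# .config: OSGi Configuration Admin syntax; string values are quoted
com.liferay.portlet.display-category="category.sample"
```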

Compile Time vs Runtime OSGi Dependencies

Technical Blogs April 5, 2018 By David H Nebinger

Just a quick blog post to talk about compile time vs runtime dependencies in the OSGi container, inspired by this thread: https://web.liferay.com/community/forums/-/message_boards/view_message/105911739#_19_message_106181351.

Basically a developer was able to get Apache POI pulled into a module, but they did so by replicating all of the "optional" directives into the bnd.bnd file and eventually putting it into the bundle's manifest.

So here's the thing - dependencies come in two forms. There are compile time dependencies and there are runtime dependencies.

Compile time dependencies are those direct dependencies that we developers are always familiar with. Oh, you want to create a servlet filter? Fine, you just have a compile time dependency as expressed in a build.gradle file like:

compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"

Basically a compile time dependency is a dependency the compiler needs to compile your java files into class files. That's really it; if you need a class from, say, the XStream package, then you declare your dependency on it in your build.gradle file and your code compiles just fine.

Runtime dependencies are not as straightforward. What you find, especially when you deploy your OSGi bundles, is that you not only have your direct dependencies, like the servlet spec jar or the XStream jar, but also indirect dependencies to deal with.

Let's take a look at XStream. If we check mvnrepository.com, these are the listed dependencies for XStream:

Group Name Version
cglib cglib-nodep (optional) 2.2
dom4j dom4j (optional) 1.6.1
joda-time joda-time (optional) 1.6
net.sf.kxml kxml2-min (optional) 2.3.0
net.sf.kxml kxml2 (optional) 2.3.0
org.codehaus.jettison jettison (optional) 1.2
org.codehaus.woodstox wstx-asl (optional) 3.2.7
org.jdom jdom (optional) 1.1.3
org.jdom jdom2 (optional) 2.0.5
stax stax (optional) 1.2.0
stax stax-api (optional) 1.0.1
xmlpull xmlpull
xom xom (optional) 1.1
xpp3 xpp3_min 1.1.4c

Note that there are only two dependencies here that are not listed as optional, xmlpull and xpp3_min. These are libraries that XStream uses for lower-level XML stuff.

But what are all of these optional dependencies?

Let's pick the well-known one, Joda Time.  Joda is a date/time library that supports parsing date/times from strings and formatting strings from date/times, amongst other things. The library is marked as "optional" because you don't have to have Joda in order to use XStream.

For example, if you are using XStream to do XOM marshaling on XML that does not have dates/times, well then the code that uses Joda will never be reached. So Joda is absolutely optional, from a library perspective, but as the implementing developer only you know if you need it or not. If you have XML that does have dates/times but you don't have Joda, you'll get ClassNotFoundException errors when you hit the XStream code that leverages Joda.
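This "only fails when the code path is reached" behavior is easy to demonstrate with a reflective lookup. The helper below (a hypothetical name) reports whether a class is actually present on the classpath, which is exactly the check that blows up with a ClassNotFoundException when an absent optional dependency is finally touched:

```java
public class OptionalDependencyCheck {

    // Returns true if the named class can be loaded, false if touching
    // it would raise a ClassNotFoundException.
    public static boolean isAvailable(String className) {
        try {
            Class.forName(className);

            return true;
        }
        catch (ClassNotFoundException cnfe) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(
            "Joda present: " + isAvailable("org.joda.time.DateTime"));
    }
}
```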

When the libraries are built, the scope used for the declared dependencies (i.e., runtime vs compile in Gradle, or the <optional /> tag in Maven) translates into the "resolution:=optional" directive in the jar's MANIFEST.MF. Depending upon how the jar is used, this extra designation can be honored or ignored. For example, if you use the "java" command with a classpath that includes the XStream jar and your classes, Java will happily run the code whether or not Joda is required. However, if you then try to process an XML file with a date/time stamp, you may encounter a ClassNotFoundException or the like if Joda is not available.

The OSGi container is stricter about these optional dependencies. OSGi sees that XStream may need Joda, but it cannot determine whether or not it will be needed when the bundle is resolving. This is one reason why you get the "Unresolved Requirement" error when OSGi attempts to start the bundle.

It is up to the bundle developer to know what is required and what isn't, and OSGi forces you to either satisfy the dependency (by including it in your bundle, for example) or exclude the package dependency by masking it out using the Import-Package declaration. If you, the developer, are using XStream, OSGi expects that you know whether you are going to need an optional dependency like Joda or not.

Now I hate picking out one example like this, but I think this is really important to point out. Yes, you can tell OSGi to also treat the dependency as optional. That will get you past the Unresolved Requirement bundle start error. The problem, however, is that it leaves you open to a later ClassNotFoundException because you depend on a package marked as optional. The last thing you want is to find your module deployed to production and failing sporadically because sometimes an XML file has a date/time to parse.


So, now for some recommendations...

If you have a dependency, you have to include it. I tend to use Option 4 from my blog, but I'm using the compileInclude Gradle dependency directive to handle the grunt work for me. If your dependency has required dependencies, you have to include them also, and compileInclude should cover that for you too.
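For example, in a Liferay Workspace build.gradle file, the directive looks like the following (the XStream coordinates are shown purely as an illustration):

```groovy
dependencies {
    // compileInclude embeds the jar in the bundle and pulls in its
    // required transitive dependencies as well.
    compileInclude group: "com.thoughtworks.xstream", name: "xstream", version: "1.4.10"
}
```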

For the optional dependencies, you have to determine if you need them or not. Yes, this is some analysis on your part, but it will be the only way to ensure that your module is correctly packaged.

If you need the optional dependency, you have to include it. I will typically use the compileInclude directive explicitly to ensure the dependency gets pulled into the module correctly.

If you don't need the optional dependency, then exclude it entirely from the bundle. You do this in the bnd.bnd file using the Import-Package directive like:

Import-Package: !org.codehaus.jackson, \
  !org.codehaus.jackson.annotate, \
  !org.codehaus.jackson.format, \
  !org.codehaus.jackson.impl, \
  !org.codehaus.jackson.io, \
  !org.codehaus.jackson.type, \
  !org.codehaus.jackson.util, \
  !org.joda.time, \
  !org.joda.time.format, \
  *

The syntax above tells OSGi that the listed packages are not imported (because of the leading exclamation point). This is how you exclude a package from being imported.

NOTE: When using this syntax, always include that last line, the wildcard. This tells OSGi that any other non-listed packages should be imported and it will require all remaining packages be resolved before starting the bundle.

And the final recommendation is:

Do not pass the optional resolution directive on to OSGi as it may lead to runtime CNFEs due to unsatisfied dependencies.

Got questions about your dependency inclusion? Comment below or post to the forums!

Thinking Outside of the Box: Resources Importer

General Blogs March 28, 2018 By David H Nebinger


On a project recently I had a theme WAR and, like those themes you can download from the Marketplace, I also had pages, contents and documents imported by the Resources Importer (RI) as a site template.

Which is pretty cool, on its own, so I could deploy the theme and create a new site based on the theme and demo how it looks and works.

But I ran into something that I consider a bug: every time the container restarts, the WAR->WAB converter processes my theme, but it also reprocesses my Resources Importer stuff. It goes crazy creating new versions of the contents and documents, and my sites (when propagation was enabled) would start throwing exceptions about missing versions (I had developer mode enabled, so the old versions were getting deleted).

I have open bugs on all of these issues, but it made me wonder what I could do with the RI to work around these issues in the interim.

So I knew that I would still want to use RI to load my assets, but I only ever want RI to load them once, and not again if the containing bundle were already deployed.

Running once and only once, as part of an initial deployment or perhaps as part of an upgrade, well I've seen that before, that's a perfect fit for an Upgrade Process implementation.

So I had an idea: I wanted to build an upgrade process that could invoke RI to import new resources. The Upgrade framework would ensure that a particular upgrade step only runs once (so I don't end up w/ weird version issues), it would let me do version upgrades when necessary, and since I'm using RI I don't have to reinvent the wheel to import resources.

What Can RI Do OOTB?

So the Resources Importer (RI) is an integrated part of Liferay 7.x CE and Liferay 7.x DXP. It is implemented in the com.liferay:com.liferay.exportimport.resources.importer bundle. RI ships with the following capabilities OOTB:

  • Importing pages, contents and documents packaged with a theme, driven by liferay-plugin-package.properties (the classic MarketPlace-theme usage).

  • Importing resources from any bundle that registers a ResourceImporterBundleProvider component.

  • Kicking off an import in response to a hot deploy message on the Liferay Message Bus (LMB).

  • Importing LAR files instead of raw resource files.

That second one was a doozy to find. It's not really documented, but the code to support it is there.

If you trace through the code from modules/apps/web-experience/export-import/export-import-resources-importer, in the com.liferay.exportimport.resources.importer.internal.extender.ResourceImporterExtender class you will see code that tracks all com.liferay.exportimport.resources.importer.provider.ResourceImporterBundleProvider instances. When one is found, the RI infrastructure looks for a liferay-plugin-package.properties file in your bundle classes, which defines where to find the resources to import. So if you register a ResourceImporterBundleProvider component in your bundle, RI will load your resources from that bundle.

Now I don't know if it suffers from the same issue as the WAR->WAB reloading loop, so it might have issues on its own, but that would take some testing to find out.

The LMB message aspect can be found in the ResourceImporterExtender class. If you don't want to use the ResourceImporterBundleProvider aspect, you could use the code in this class to initialize the bundle servlet context and send RI a hot deploy message to kick off the resource import (this is how the Extender class actually invokes RI, so if you can use the ResourceImporterBundleProvider component, it will save you some boilerplate code).

The LAR file handling was interesting.  You can have a single LAR file as /WEB-INF/classes/resources-importer/archive.lar to load public resources.  If you have public and private or just private resources, you have to use /WEB-INF/classes/resources-importer/public.lar and/or /WEB-INF/classes/resources-importer/private.lar respectively.

Can I Do More with RI?

In short, yes. The first problem is that the Liferay APIs are not exported, so even though their bundle has the necessary classes, they are hidden away.

So in my workspace, https://github.com/dnebing/rsrc-upgrade-import, I have a module, resources-importer-api, which has copies of the classes but they are exported. Included in this module are some extension classes I created to support running RI within a bundle's UpgradeProcess.

The second module, resources-importer-upgrade-sample, is a sample bundle that shows how to build out an upgrade process that invokes the Resources Importer in upgrade processes.

The code that is checked in is configured only for bundle version 1.0.0. You can build and deploy to get the Dog articles.

Next, change the version to 1.1.0 in the bnd.bnd file and uncomment the line, https://github.com/dnebing/rsrc-upgrade-import/blob/master/modules/resources-importer-upgrade-sample/src/main/java/com/liferay/exportimport/resources/importer/sample/ResourceImporterUpgradeStepRegistrator.java#L73, build and deploy to get the Cat articles.

Next, change the version to 1.1.1 in the bnd.bnd file and uncomment the line, https://github.com/dnebing/rsrc-upgrade-import/blob/master/modules/resources-importer-upgrade-sample/src/main/java/com/liferay/exportimport/resources/importer/sample/ResourceImporterUpgradeStepRegistrator.java#L76, build and deploy to get the Elephant articles.

At each deployment, only the assets tied to the upgrade process will be processed. And if you start your version at 1.1.1 and uncomment both of the registry lines referenced above, build and deploy the first time to a clean environment, you'll see all 3 upgrade steps run in sequence, 0.0.0 -> 1.0.0, 1.0.0 -> 1.1.0, and 1.1.0 -> 1.1.1.

Since we're using an upgrade process to handle the asset deployment, RI will run once and only once for each version.


With the provided API module, there are more ways to leverage the RI. I can imagine a message queue listener that receives specially crafted messages containing articles, transforms them into consumable RI objects, and then invokes RI to do the heavy lifting of calling all of the necessary Liferay APIs to load the assets correctly.

Or a directory watcher that looks for files dropped in a particular folder and does pretty much the same thing.

For the record, I don't think I'd want to use this for managing the deployment of a long list of assets. I wouldn't want to use the RI as some sort of content promotion process, as content creation is not a development activity; it should be handled by the appropriate publication tools built into the Liferay platform.

Anyways, check out the blog project repo at https://github.com/dnebing/rsrc-upgrade-import and let me know what you think...

Choosing OSGi Versions During Development

Technical Blogs March 23, 2018 By David H Nebinger

This one is for my good friend Milen... Sometimes he frustrates me, but he always forces me to think...


So if you've done any Liferay 7.x CE or DXP development, you may have encountered something similar to the following in your build.gradle:

dependencies {
  compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.6.0"
  compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
}

For the Maven users out there, I'm not going to include the pom equivalent, but I'm sure you can figure out how it would go.

One of the biggest mistakes that I see developers make is the selection of the version for modules they use. For example, I'm using portal-kernel version 2.6.0 above. This is really old; the Liferay Maven repo reports it was released in June, 2016.

And currently it looks like the latest available version is 2.62.0. You can see all of the available versions here.

So why do I pick such an old release? I certainly don't want to link to such an old version that likely has bugs or issues. Shouldn't I be picking the 2.62.0 version to get the latest?

Picking Old Versions

The Definitive Answer: No

Okay, so here's the skinny...

When you are building a module, you are not really linking to a jar. You are not including 2.6.0 or 2.62.0 or whatever into your module; you are only including meta information telling the OSGi container that you need portal-kernel.

As developers, we know we always want to use the latest version available when possible, so we get bugfixes, performance improvements, etc. So most of us want to grab 2.62.0 and use it as the version that we declare.

In OSGi, however, we never declare a specific version, we're always using a version range.

When I declare that my version is 2.6.0, the OSGi container thinks that I'm actually indicating that I want version [2.6.0,3.0.0). That's a version range from 2.6.0 up to (but not including) 3.0.0. This offers me a great bit of flexibility.

I can deploy my module to a Liferay 7.0 GA1 container or I could deploy it to the very latest Liferay 7.0 DXP Fix Pack 41. My version range means my module will work in all of these environments.

If instead I stuck with the 2.62.0 version, OSGi will treat that as [2.62.0,3.0.0). This version range is so narrow, it will not work in any of the 7.0 CE GAs and will actually only deploy to 7.0 DXP FP 40 or later.

So the version is not really the version you get; it is the minimum version you need in order to function. The OSGi container has some version of portal-kernel (you may or may not know which one is in your environment(s)). The container will determine whether the version it has is acceptable to your version range and may (or may not) start your module.
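To make that matching rule concrete, here is a small self-contained sketch (plain Java, not actual OSGi code) of the check a container effectively performs when a bare version like 2.6.0 is treated as the range [2.6.0,3.0.0):

```java
import java.util.Arrays;

public class VersionRange {

	// Parse "2.6.0" into {2, 6, 0}; assumes simple three-part numeric versions.
	static int[] parse(String version) {
		return Arrays.stream(version.split("\\.")).mapToInt(Integer::parseInt).toArray();
	}

	// Standard lexicographic comparison of major.minor.micro.
	static int compare(String a, String b) {
		int[] left = parse(a);
		int[] right = parse(b);
		for (int i = 0; i < 3; i++) {
			if (left[i] != right[i]) {
				return Integer.compare(left[i], right[i]);
			}
		}
		return 0;
	}

	// A bare declared version behaves as [minimum, nextMajor.0.0):
	// at least the declared minimum, below the next major version.
	static boolean satisfies(String containerVersion, String declaredMinimum) {
		int nextMajor = parse(declaredMinimum)[0] + 1;
		return compare(containerVersion, declaredMinimum) >= 0
			&& compare(containerVersion, nextMajor + ".0.0") < 0;
	}

	public static void main(String[] args) {
		System.out.println(satisfies("2.13.0", "2.6.0"));  // true: container is new enough
		System.out.println(satisfies("2.6.0", "2.62.0"));  // false: container too old
		System.out.println(satisfies("3.0.0", "2.6.0"));   // false: next major is excluded
	}
}
```

Declaring 2.6.0 therefore accepts any 2.x container at 2.6.0 or later, which is exactly why the older declared version deploys to more environments.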

So, as a rule:

You always want to pick the oldest version you can get away with (relative to the major version number).

Wait, oldest you can get away with? What does that mean?

Oldest You Can Get Away With

The minor version numbers don't increment because you do a new build or anything like that. Liferay bumps the minor version when there is changed code in the API.

For example, take the com.liferay.portal.kernel.dao.search.SearchContainer class. I was recently copying a chunk of code from a certain Liferay DisplayContext class to emulate how it was handling the paged search container views.

I had my portal-kernel version set to 2.6.0 like I always do, and the code I copied went along something like this:

SearchContainer searchContainer = new SearchContainer(
	_renderRequest, getIteratorURL(), null, "there-are-no-rules");

if (isShowRulesAddButton()) {
	// This is the line in question below; the argument shown is illustrative.
	searchContainer.setEmptyResultsMessageCssClass("has-plus-btn");
}
Although I copied the code and only changed the strings, the IDE marked the setEmptyResultsMessageCssClass() line as an error with a No Such Method message. This made no sense because I had literally lifted the code straight from the Liferay source; how could the method not exist?

Well, that comes back to the version range. Since I indicated 2.6.0, my code had to compile against 2.6.0 and this method doesn't exist in the SearchContainer interface from 2.6.0. I looked in Liferay's build.gradle file and found that they had used portal-kernel version 2.13.0. I changed my version to 2.13.0 and my compiler error went away.

So this tells me that, somewhere between portal-kernel version 2.6.0 and 2.13.0, this method was added to the SearchContainer interface. Since my code needs that method, the oldest version my code will work with is not 2.6.0, but more like 2.13.0.

Actually I guess it is possible it could have been added to the interface in 2.7.0, 2.8.0, etc, up to 2.13.0 so, if I were really a stickler about it, I could find the actual version where the method was introduced. I tend to value my time a lot, so instead I just settled on the version Liferay was also using.

Now, though, most of my modules declare version 2.6.0; the ones that use this SearchContainer code declare 2.13.0.
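In build.gradle terms, that one module's dependency block differs only in the declared version (a sketch of my setup):

```gradle
dependencies {
	// 2.13.0, not 2.6.0: this module calls
	// SearchContainer#setEmptyResultsMessageCssClass(), which the
	// 2.6.0 API does not have.
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.13.0"
}
```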

Why not just use 2.13.0 and keep everything the same?

Using Different Versions for Different Modules

Well, it comes down to flexibility. Most of my modules will work fine using the older version. This particular portlet uses a SearchContainer method that maybe I do, or maybe I don't, really need. If a client asks to run my portlet in their GA1 environment that might not have this version, I can change the version in build.gradle, comment out the line using the setEmptyResultsMessageCssClass() method (and anything else that was not yet available), and then build and deploy my code.

In its current form it needed 2.13.0, but with a few modifications I could get a version that worked w/ 2.6.0.


Or, how about this, this will really trip you out...

I create one bundle that has [2.6.0,2.13.0) for the portal-kernel version range dependency that doesn't use setEmptyResultsMessageCssClass(), then on my second bundle I use version range [2.13.0,3.0.0).  Users can deploy both bundles into an environment, but OSGi will only enable the one that has the right version available.

It gets even better. You might deploy this to your GA1 environment where the first bundle starts. But then you upgrade to GA2 and, without changing anything from a deployment perspective, the second bundle starts and the first does not. Completely transparent upgrades, because my version ranges control where the code runs!
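If you want to see roughly how that might look in the two bundles' manifests, here is a sketch of the bnd.bnd Import-Package entries. (This is illustrative only: in practice the exported package version of com.liferay.portal.kernel.dao.search is versioned independently of the portal-kernel bundle version, so you would substitute the actual package versions involved.)

```
# Bundle A: only resolves where the older SearchContainer API is present
Import-Package: com.liferay.portal.kernel.dao.search;version="[2.6.0,2.13.0)",*

# Bundle B: only resolves where setEmptyResultsMessageCssClass() is available
Import-Package: com.liferay.portal.kernel.dao.search;version="[2.13.0,3.0.0)",*
```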


So I'm at the point where I hate the label "version". For a normal developer specifying a value in build.gradle, it means what version do I want to compile against. As we've seen, to the OSGi container it specifies the minimum version the module needs in order to start.

So even though it goes against your developer nature, avoid the latest version. Always use the oldest version your module needs.


Fronting Liferay Tomcat with Apache HTTPd daemon Revisited

Technical Blogs March 6, 2018 By David H Nebinger


So originally I presented the blog post Fronting Liferay Tomcat with Apache HTTPd daemon, but that post featured my partiality for using mod_jk to connect HTTPd and Tomcat.

Personally I think it is much easier to do the JkMount and JkUnmount mappings to point to Tomcat, plus Liferay sees the original request so when it generates URLs, it can generate them using the incoming request URL.

Remember that Liferay generates URLs for other assets such as stylesheets, javascripts, images, etc. Rather than being hard-coded, these URLs are created based on the incoming request URL. This should mean that the host, port and protocol used in the generated URLs will correctly resolve back to the Liferay server.

When you front with HTTPd, it gets the actual request, not Tomcat. If you use the AJP binary protocol with mod_jk, the original URL goes to Liferay and it can generate URLs using the normal logic.

But when you use mod_proxy, things can be challenging. The URL that goes to Tomcat and Liferay is the URL requested by HTTPd, not by the external client browser. So if you are proxying from HTTPd to localhost:8080, a default Liferay configuration will end up generating URLs with localhost:8080 in them even though they will not be valid in the external client browser.

The challenge is getting the protocol, host and port correct when Liferay generates URLs.

Option 1 - Handle It In Liferay

The first way to get this right is to do it in Liferay.  By setting the following properties in portal-ext.properties, you control how Liferay generates URLs:

# Ports that apache HTTPd will be listening on
web.server.http.port=80
web.server.https.port=443

# Host name to use in generated URLs, will typically be the name used to get to HTTPd
web.server.host=www.example.com

# Force all generated URLs to use the HTTPS protocol
# If not set, will use the protocol from the connection from HTTPd to Tomcat.
web.server.protocol=https

As specified above, regardless of the incoming request URL, Liferay will generate URLs in the form of https://www.example.com/...

In your httpd.conf file, all that is missing is the ProxyPass directives to send traffic to the local tomcat instance. Personally I want to send all traffic to Tomcat and specifically exclude those paths that should not go to Tomcat. I find this to be easier to manage rather than trying to figure out all URL patterns that Liferay might use to send traffic selectively to Tomcat:

# Serve /excluded from the local httpd data
ProxyPass /excluded !
# Pass all traffic to a localhost tomcat.
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
# This would be the configuration to invoke a tomcat on another server
# ProxyPass /
# ProxyPassReverse /

This is a very, very simplistic HTTPd configuration. It doesn't deal at all with virtual host configurations, http/https configurations, etc.

Also on the Liferay side it will generate URLs w/ www.example.com and may not use the Liferay configured virtual hosts correctly either.

Option 2 - Handle It In HTTPd

The second way to handle things is to push the heavy lifting to HTTPd.

We start by configuring httpd.conf (and the child files) to use the Virtual Hosts:

<VirtualHost *:80>
    # Set the header for the http protocol
    RequestHeader set X-Forwarded-Proto "http"
    # Serve /excluded from the local httpd data
    ProxyPass /excluded !
    # Preserve the host when invoking tomcat
    ProxyPreserveHost on
    # Pass all traffic to a localhost tomcat.
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
    # This would be the configuration to invoke a tomcat on another server
    # ProxyPass /
    # ProxyPassReverse /
</VirtualHost>

<VirtualHost *:443>
    # Set the header for the https protocol
    RequestHeader set X-Forwarded-Proto "https"
    # Serve /excluded from the local httpd data
    ProxyPass /excluded !
    # Preserve the host when invoking tomcat
    ProxyPreserveHost on
    # Pass all traffic to a localhost tomcat.
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
    # This would be the configuration to invoke a tomcat on another server
    # ProxyPass /
    # ProxyPassReverse /
</VirtualHost>

So this sets up two virtual hosts, but using wildcards so they will actually handle all incoming requests that get to HTTPd. Each virtual host is tied to the HTTP or HTTPS protocol and sets a header, X-Forwarded-Proto, with the protocol to use.

Also it includes the ProxyPreserveHost directive which will preserve the incoming host when sending the request to Tomcat (it will get the incoming host instead of the localhost from the ProxyPass directive).  The X-Forwarded-Host header will also be set.

On the Liferay side, the portal-ext.properties changes are simpler since we didn't change the expected header names:

# Set this to true to use the property "web.server.forwarded.host.header" to
# get the host. The property "web.server.host" must be set to its default
# value.
web.server.forwarded.host.enabled=true

# Set this to true to use the property "web.server.forwarded.port.header" to
# get the port.
web.server.forwarded.port.enabled=true

# Set this to true to use the property "web.server.forwarded.protocol.header"
# to get the protocol. The property "web.server.protocol" must not have been
# overridden.
web.server.forwarded.protocol.enabled=true

This configuration elegantly handles virtual hosts correctly in HTTPd and in Liferay, it respects the incoming protocol correctly (to support mixed mode requests) and it is just easy to set up and validate, plus it won't require property changes when you add a new virtual host to Liferay.


So here's two options for configuring your server to support fronting Tomcat with HTTPd but using mod_proxy instead of mod_jk.

Personally I recommend using mod_jk, but if I had to go with mod_proxy, I would lean towards implementing it using Option 2 above.


OSGi Version Details

Technical Blogs March 6, 2018 By David H Nebinger

A good friend of mine, Minhchau Dang, pointed out to me that I have frequently used OSGi version ranges in my blogs.

I explained that I was concerned that I didn't want to bind to a specific version, I often wanted my code to work over a range of versions so I wouldn't have to go back and update my code.

He pointed me at the specification, https://osgi.org/specification/osgi.core/7.0.0/framework.module.html#i3189032, which indicates that a standalone version also works as a range, something I hadn't understood. Yes, this link is for the 7.0 specs and we're not at 7.0 yet, but the 6.0 and 5.0 specs say the same thing; I just didn't have a working link to that section.

So, in case you don't want to spin out to the specs document, I'll summarize here...

You can specify a range like version="[1.2.3,2)" to supply a fixed range, or 1.2.3 <= x < 2. This range specifies both the lower and upper bounds and allows you to be inclusive or exclusive of values.

But, if you simply use version="1.2.3", you are also using an unlimited range, or 1.2.3 <= x. This range only specifies the minimum version, anything greater is just fine.

Well, technically that's true as far as OSGi is concerned: if 4.0.0 is available, it will happily mark the dependency as satisfied. You have to keep in mind, though, that there is never a guarantee of backwards compatibility across higher version numbers. So while OSGi will resolve the dependency with 4.0.0, that version may not be at all compatible with what you need as a dependency. You are likely going to be okay if 1.3.4 or 1.6.7 is deployed, but you will likely encounter failure with version 2.0.0. So keep in mind that ranges, although not necessary, will help to constrain your module to platforms that will actually successfully host your module.

Now, I suppose we can still have a discussion of what to do with respect to Liferay releases. I mean, I don't know how Liferay will version portal-kernel for the upcoming 7.1 release, but let's consider the following...

Normally the Liferay tooling will start you with portal-kernel version 2.0.6, which we now know means 2.0.6 <= x, so any higher number is fine. But that likely will not hold for 7.1. We might actually need a version range like [2.0.6,2.1) if 7.1 uses a minor version bump, or [2.0.6,3) if 7.1 does a major version bump and our code doesn't work on 7.1.
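If we do need to guard against a 7.1 version bump, the explicit range can go in the bnd.bnd Import-Package directive, something like this sketch (the package name is just an illustration):

```
Import-Package: com.liferay.portal.kernel.util;version="[2.0.6,3)",*
```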

I guess we'll cross that bridge when we get there...



How to Upgrade to Liferay 7.0+

General Blogs February 23, 2018 By David H Nebinger

So the question comes up how to do Liferay upgrades.

I'm not talking here about the technical details of how you upgrade one particular plugin to another type of plugin, what kinds of API changes have to be made, etc.

Instead, I'm thinking more about the general process of how to upgrade, what choices you're presented with and what the ramifications are for making certain choices.

Upgrades Are Projects

The first thing to remember is that upgrades are projects. You should totally build them out as projects, you should have a project plan, you should have a project team, ... Having a full project plan forces you to define scope for the upgrade and time box all activities so you will know how the project is proceeding.

As a project, you also should have a risk assessment/mitigation plan. Know in advance what you will do if the project runs long or runs into issues. Will you just stretch the timeline? Will you seek help from Liferay GS or a Liferay Partner? Are you sending your development team to Liferay training in advance or only if they seem to struggle? Will you rely on LESA tickets, Liferay documentation, community support via the forums or slack?

Liferay GS offers an Upgrade Assessment package where an experienced Liferay GS consultant will come onsite to review your environment and build a customized report outlining what your upgrade effort will be. This assessment can become the foundation of your project planning and can help set your upgrade in the right direction.

Upgrades Have Scopes

When you are upgrading, say from Liferay 6.2 to Liferay DXP 7.1, there will be a scope to this project, and the project is susceptible to scope creep.

For example, you might decide going in that your project is simply to get what you currently have migrated and running under DXP. During the upgrade project, though, you might decide to add some backlogged features or refactor your codebase or rework legacy portlet wars into OSGi portlet modules. Each of these things would be considered scope creep. Each change like these that the team takes on will negatively impact your project plan and schedule.

Upgrades Expose Bad Practices

The one thing I've found is that upgrades tend to expose bad practices. This could be bad practices such as using an ORM package instead of Service Builder for data access. It could be a non-standard way of decoupling code such as making all service calls via remote web services where a local service implementation would have been an easier path. It can expose when standard design practices such as design by interface were not fully leveraged. It could be as simple as having used an EXT plugin to do something that should have been handled by a JSP hook or a separate custom implementation.

Exposing bad practices may not seem very important, but upgrading bad practices will always add to a project plan. Things done initially as a shortcut or a hack become difficult to carry forward in an upgrade.

The one thing I've found in 10+ years of experience with Liferay is that it is often better to do things "The Liferay Way". It is not always easy and may not seem like the right way, but it usually ends up being the better way to develop for the platform.

Upgrade Project Recommendations

To facilitate your upgrade project, I offer the following recommendations:

  • Limit scope. As far as the upgrade is concerned, limit the project scope to getting what you currently have running under the later version. Do not consider refactoring code, do not consider reworking portlet wars as OSGi modules, etc. Limit the scope to just get on the new version. If you want to refactor or rework wars as OSGi modules, save that for a later project or phase.


  • Leave portlet wars as portlet wars. I can't say this strongly enough. It is absolutely not necessary for your legacy portlet wars to be refactored as OSGi modules. Your legacy portlet wars can be deployed to DXP (after necessary API changes) and they will automagically be converted into an OSGi WAB for you. Do not spend your upgrade cycles reworking these into OSGi bundles yourself, it is a complete waste of your time and effort.


  • Only rework what you have to rework. You'll have to touch legacy hooks and EXT plugins, there is no way around that. But that is where your upgrade cycles need to be spent. So spend them there.


  • Rethink upgrading bad practices. I know, I said limit scope and migrate what you have. The one exception I make to this rule is if you have exposed some really bad practices. In the long run, it can often be a smaller level of effort to rework the code to eliminate the bad practice first or as part of the upgrade. Cleaner code is easier to upgrade than spaghetti.


  • Use Liferay IDE tooling. The Liferay IDE comes with a built-in upgrade assistant tool. While the tool is not perfect, it can help you upgrade your Maven and Plugin SDK projects to be compatible with the later version, including suggesting and making necessary API changes for you. If you do not use the upgrade assistant, you are willfully missing out on an automated way to start your upgrade.


  • Have a Backup Plan. Know in advance if you are going to leverage Liferay GS or a Liferay Partner to help complete your upgrade in case you are seeing delays. If you wait until you are behind the eight-ball before mitigating your risk, you will be less prepared to get the project back on track if it is going off the rails.


  • Get a Liferay Upgrade Assessment Package. Even if you are going to do all of the work in house, an upgrade assessment can highlight all of the aspects you and your team will need to consider.

That's pretty much it.  Have any horror stories or suggestions you'd like to share? Please add them below...

Why You Should Join the 7.1 Community Beta Program

General Blogs February 20, 2018 By David H Nebinger

So Jamie just announced the new Liferay 7.1 Community Beta Program here: https://community.liferay.com/news/liferay-7-1-community-beta-program/

I recommend everyone who has working code in Liferay 7.0 or Liferay DXP should join the 7.1 beta sooner rather than later.

Why? Well, mostly because Liferay's engineering team is focused on the 7.1 release, so anything that you find in beta, well that will be something that they will want to fix before release. Without those bug reports, some incompatibility that you encounter later on falls under the regular release process and that will only work against your release schedule.

Let's say, for example, that you have spent time building out a comprehensive audit solution for your Liferay 7.0/DXP environment where you have a slew of model listeners and a custom output mechanism to route the audit details to an ElasticSearch instance that you report on using Kibana. You've got a decent investment in your auditing solution, one that you plan on leveraging in 7.1 on.

You're basically looking at two choices. Choice #1 is to do nothing; wait for the official 7.1 GA to come out and for your organization to decide it is time to consider an update to 7.1. Now I can't predict what might be part of 7.1, but let's suppose that it has one or more bugs related to the audit mechanisms. Perhaps the model listener registration has changed, or the audit messages are broken, or the output mechanism isn't invoked consistently... I don't know, some sort of bug that maybe could have been caught sooner but ended up getting by. But now that you're looking at the upgrade, you've found the bug and want to report it. That's fine, Liferay wants you to report bugs, but by the time you've found it, 7.1 is out, and fixing the bug becomes part of the regular release process.

Choice #2 is to join the Beta program. Now you dedicate a little bit of time to test your code under 7.1 before it goes out, and you find and report the issue. Liferay keeps a list of things that they want to knock out for the first GA, so your report becomes one of many that Liferay really wants to deal with for a solid initial release. Your bug gets dealt with before it can impact your own upgrade schedule, so your early reporting actually helps you.

So please, please, please sign up for the 7.1 beta program.

Get the 7.1 beta and beat on it as much as you can.

Run the DB upgrade against your current database. Update and deploy your custom modules, make sure features, functionality and APIs are still there that you depend on. Point your load test tool at your instance and see if you have a measurable difference in performance or capacity vs your current environment.

Just bring it. Find and report the problems.

Together we can make the next release one of the best ever, all that's missing is you.

OSGi Fragment Bundles

Technical Blogs February 15, 2018 By David H Nebinger

Okay, in case it is not yet clear, Liferay 7 uses an OSGi container.

I know what you're thinking: "Well, Duh..."

The point is that OSGi is actually a standard and anything that works within OSGi will work within Liferay. You just need to understand the specs to make something of it.

For example, I'd like to talk about OSGi Fragment Bundles. There's actually stuff in the specs that cover what fragment bundles can do, how they will be treated by the container, etc.

Liferay typically presents fragment bundles only as the solution for JSP fragments, but there's actually some additional stuff that can be done with them.

OSGi Fragment Bundles

Fragment bundles are covered in chapter 3.14 of the OSGi Core 6.0.0 specification document.  In technical terms,

Fragments are bundles that can be attached to one or more host bundles by the framework. Attaching is done as part of resolving: the Framework appends the relevant definitions of the fragment bundles to the host's definitions before the host is resolved. Fragments are therefore treated as part of the host, including any permitted headers.

The idea here is that fragments can supplement the content of a host bundle, but cannot replace files from the host. Since the fragment is appended to the host, resources will always be loaded from the host bundle before they are loaded from the fragment.

This is counter to the old way Liferay used to do JSP hooks, where these hooks could actually replace any JSP or static file from the original bundle.  Fragments can only add new files, but not replace existing ones.

So you might now be wondering how the JSP files from a fragment bundle actually do override the files from the host bundle. The answer? Liferay magic. Well, not magic, per se, but there is special handling of JSP files by the Liferay systems to use a JSP file from a fragment before the host, but this is not normal OSGi Fragment Bundle behavior.

What good are they if they can only add files?

Actually you can do quite a bit once you understand that bit.

For example, I was recently trying to help a team override the notification template handling from the calendar-service module, a ServiceBuilder service module for the Liferay calendar. The team needed to replace some of the internal classes to add some custom logic and had been unsuccessful. They had seen my blog post on Extending Liferay OSGi Modules, but the warnings in the blog were taken to heart and they didn't want to go down that road unless it became necessary.

The calendar-service module has a bunch of internal classes that are instantiated and wired up using the internal Spring context set up for ServiceBuilder service modules. So the team needed to provide new template files, but in addition they needed custom classes to replace the instances wired up by Spring when setting up the context, a seemingly difficult ask.

The Spring aspects of the host module, in addition to the appending nature of fragment bundle handling, actually make this pretty easy to do...

First, create a fragment bundle using the host and version from the original.  For this override, it is com.liferay.calendar.service and rather than use a specific version, I opted for a range [2.3.0,3.0.0).
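In bnd terms, the fragment declares its host and version range via the standard OSGi Fragment-Host header. Here's a minimal bnd.bnd sketch (the fragment's own name and version are made up for illustration):

```properties
Bundle-Name: Calendar Service Fragment
Bundle-SymbolicName: com.example.calendar.service.fragment
Bundle-Version: 1.0.0
Fragment-Host: com.liferay.calendar.service;bundle-version="[2.3.0,3.0.0)"
```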

Next I added a new class, src/main/java/com/liferay/calendar/notification/impl/CustomEmailNotificationSender.java.  It was basically a copy of the original EmailNotificationSender class from the same package; I just added in a bunch of log statements to see that it was being used.  Note that I was actually free to use any package I wanted to here; it really wasn't that important.

Next I added a src/main/resources/META-INF/spring/calendar-spring-ext.xml file to the fragment bundle with a replacement bean definition.  Instead of instantiating the original EmailNotificationSender, I just had to instantiate my custom class:

<?xml version="1.0"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

	<!-- Replace the Liferay bean definition with our own. -->
	<bean class="com.liferay.calendar.notification.impl.NotificationSenderFactory" id="com.liferay.calendar.notification.impl.NotificationSenderFactory">
		<property name="notificationSenders">
			<map>
				<entry key="email">
					<bean class="com.liferay.calendar.notification.impl.CustomEmailNotificationSender" />
				</entry>
				<entry key="im">
					<bean class="com.liferay.calendar.notification.impl.IMNotificationSender" />
				</entry>
			</map>
		</property>
	</bean>
</beans>
So while I couldn't replace the old class, I could add a new class and a new Spring configuration file to replace the old definition with one that used my class.

Once the fragment was built and deployed, it was working as expected.  The project at this state is available here: https://github.com/dnebing/calendar-override

Additionally the team needed to alter the template files.  This was accomplished by adding a src/main/resources/portlet-ext.properties file with a replacement value for the template property, pointing to a new file also included in the fragment bundle.  Since the original portlet.properties file has an "include-and-override" line to pull in portlet-ext.properties, when the fragment bundle is appended to the host the replacement property value will be used and the new file from the fragment bundle will be loaded.
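As a sketch, the fragment's portlet-ext.properties just re-points the template key (the key and file names below are illustrative; copy the actual key from the host's portlet.properties):

```properties
# src/main/resources/portlet-ext.properties in the fragment bundle.
# Hypothetical key/path -- use the real template key from the host's
# portlet.properties and point it at your replacement file.
calendar.notification.email.body=com/example/notification/custom-email-body.tmpl
```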

What Else Can I Use Fragments For?

While I don't have working code to demonstrate each of these, the working example makes me think you can use a fragment bundle to extend an existing Liferay ServiceBuilder service module.

Since ServiceBuilder modules are Spring-based and the default Spring configuration declaring the service is in META-INF/spring/module-spring.xml, we can use a fragment bundle to add a META-INF/spring/module-spring-ext.xml file and replace the default wiring of a service instance with a custom class, one that perhaps extends the original but overrides whatever code we need it to. Spring would instantiate our class, it would have the right hierarchy, and everything should work.
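A sketch of what such an ext file might look like (the service and class names here are entirely hypothetical; use the bean id from the host's generated module-spring.xml):

```xml
<?xml version="1.0"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

	<!-- Re-declare the generated wiring so Spring instantiates our
	     subclass of the generated impl instead of the original. -->
	<bean class="com.example.service.impl.CustomFooLocalServiceImpl"
		id="com.example.service.FooLocalService" />
</beans>
```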

This wouldn't work for services from portal-impl since they are not loaded by OSGi ServiceBuilder host modules, but it should work for those that are deployed this way.

Another idea is that it could be used to override static .css and/or .js files from the host bundle. Well, not override per se, but introduce new files that, combined with a Configuration Admin config file, could be used in lieu of the originals.

So, for example, calendar-web has a /css/main.css file (actually main.scss, but it will be built to main.css) that is pulled in by the portlet. We could use a fragment bundle to add a new file, /css/main-ext.scss. It could either have everything from main.scss plus your changes, or it could contain just the changes, depending upon how you want to manage it going forward.  Since it is a new file, it will be served if the portal asks for it.

The original file is pulled in by the portal due to the properties on the CalendarPortlet annotation:

@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.header-portlet-css=/css/main.css",
		...
		"javax.portlet.name=" + CalendarPortletKeys.CALENDAR,
		...
	},
	service = Portlet.class
)
public class CalendarPortlet extends MVCPortlet {

So we would need the portlet to use a new set of properties that specifically change from /css/main.css to /css/main-ext.css. We can do this by adding a $LIFERAY_HOME/osgi/configs/com.liferay.calendar.web.internal.portlet.CalendarPortlet.cfg file. The file format is defined here: https://dev.liferay.com/discover/portal/-/knowledge_base/7-0/understanding-system-configuration-files and basically allows you to create a file that replaces configuration property values with custom versions.

So in our file we would add a line, com.liferay.portlet.header-portlet-css=["/css/main.css","/css/main-ext.css"]. This is the format for including both files; if you just wanted the one file it would simply be com.liferay.portlet.header-portlet-css="/css/main-ext.css". The file needs to be manually deployed to the osgi/configs directory, but once that is done and combined with the fragment bundle, both the main.css and main-ext.css files will be included.

This is the same kind of process that you would use to handle static javascript files pulled in by the portlet's @Component annotation properties.


So OSGi Fragment Bundles can be used for things beyond simple JSP fragment bundles.

I'm hoping what I presented here gives you some ideas on how you might solve problems that you're facing with "overriding" Liferay default functionality.

If you have some ideas, please share below as they may be helpful to others struggling w/ similar issues.


Angular 2+ Portlets in DXP

Technical Blogs February 8, 2018 By David H Nebinger

So I've been working a lot more with Angular 2+ recently (Angular 4 actually, but that is not so important) and wanted to share some of my findings for those of you who are interested...

Accessing the Liferay Javascript Object

So TypeScript is sensitive to defined variables, classes, objects, etc.  This is good when you're building complex apps; type safety and compilation help to ensure that your code starts on a solid foundation.

However, without a corresponding .ts file to support your import of the Liferay javascript object, TypeScript will not be able to compile your code if you try to use the Liferay object.

That said, it is easy to get access to the Liferay object in your TypeScript code.  Near the top of your .ts file, add the following line:

declare var Liferay: any;

Drop it in after your imports but before your class declaration.

This line basically tells the TypeScript compiler that there is an object, Liferay, out there, and that is enough to pass the compile phase.

Alternatively you can use a cast like:

(window as any).Liferay

to get to the object, but to me this is not really the cleanest looking line of code.

Supplying Callback Function References to Liferay

Many of the Liferay JavaScript functions take callback functions.  For example, if you want to use the Liferay.fire() and Liferay.on() mechanism for in-browser notification, the Liferay.on() function takes a JavaScript function as its second argument.

But, when you're in your TypeScript code, your object methods are not pure javascript functions, plus as an object instance, the method is for a particular object, not a generic method.

But you can pass a bound pointer to an object method and Liferay will call that at the appropriate points.

For example, if you have a method like:

myAlert(event) {
  alert('Received event data ' + event.data);
}

So if you want it to be called on a 'stuff-happened' event, you could wire it up like:

ngOnInit() {
  Liferay.on('stuff-happened', this.myAlert.bind(this));
}

The this.myAlert.bind(this) is used to bind up sort of a proxy to invoke the myAlert() method of the particular instance. If someone does a:

Liferay.fire('stuff-happened', { data: 'Yay!'});

Liferay will end up invoking the myAlert() method, providing the event object, and the method will invoke the alert() to show the details.
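The bind(this) mechanics can be demonstrated standalone. In this sketch, on() and fire() are simple stand-ins for Liferay.on()/Liferay.fire() (not the real API); the point is that binding keeps the method's receiver intact when the dispatcher invokes it later:

```typescript
// Stand-in event dispatcher (hypothetical; mimics the on/fire pattern).
type Handler = (event: { data: string }) => void;

const handlers: Handler[] = [];

function on(handler: Handler): void {
  handlers.push(handler);
}

function fire(event: { data: string }): void {
  handlers.forEach((h) => h(event));
}

class NotifyComponent {
  received: string[] = [];

  myAlert(event: { data: string }) {
    // 'this' still refers to the component instance because we bound it.
    this.received.push(event.data);
  }
}

const c = new NotifyComponent();
// Passing c.myAlert unbound would lose 'this'; bind(c) fixes the receiver.
on(c.myAlert.bind(c));
fire({ data: 'Yay!' });
console.log(c.received[0]); // 'Yay!'
```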

Sometimes it is advantageous to have the callback run inside of a zone.  We would change the above ngOnInit() method to:

constructor(private ngZone: NgZone) {}

myZoneAlert(event) {
  this.ngZone.run(() => this.myAlert(event));
}

ngOnInit() {
  Liferay.on('stuff-happened', this.myZoneAlert.bind(this));
}

Using NgRoute w/o APP_BASE_HREF Errors

When using NgRoute, I typically get runtime browser errors complaining Error: No base href set. Please provide a value for the APP_BASE_HREF token or add a base element to the document.

This one is pretty easy to fix.  In your @NgModule declaration, you need to import the APP_BASE_HREF and declare it as a provider. For example:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { APP_BASE_HREF } from '@angular/common';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule
  ],
  providers: [{provide: APP_BASE_HREF, useValue: '/'}],
  bootstrap: [AppComponent]
})
export class AppModule { }

The important parts above are (a) the import of APP_BASE_HREF and (b) the declaration of the providers.

NgRoute Without Address Bar Changes

Personally I don't like NgRoute changing the address bar, as it gives the really false impression that those URLs can be bookmarked, referenced, or manually changed.  The first time you try this, though, you'll see the Liferay error page: Liferay has no idea what that URL is for, and it doesn't know that Angular might have taken care of it if the page had been rendered.

So I prefer just to block the address bar changing.  This doesn't give false impressions about the URL, plus if you have multiple Angular portlets on a page they are not fighting to change the address bar...

When I'm routing, I always include the skipLocationChange property:

this.router.navigateByUrl('path', { skipLocationChange: true });

Senna and Angular

Senna is Liferay's framework that basically allows for partial-page updates for non-SPA server side portlets. Well, it is actually a lot more than that, but for a JSP or Struts or Spring MVC portlet developer, it is Senna which is allowing your unmodified portlet to do partial page updates in the rendered portal page w/o a full page refresh.

Senna may or may not cause you problems for your Angular apps. I wouldn't disable it out of the gate, but if during testing you find that hokey things are happening, you might try disabling Senna to see if things clear up for you.

Find out how to disable Senna here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/automatic-single-page-applications#disabling-spa

I say try your app w/o disabling Senna first because, well, if you disable Senna globally then your non-SPA portlets revert to doing full page refreshes.
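If you do end up disabling Senna globally, I believe the switch is a portal property (shown below for Liferay 7.0; verify the key against the documentation linked above for your version):

```properties
# portal-ext.properties: disable SPA/Senna portal-wide.
# Verify this key against the linked docs for your portal version.
javascript.single.page.application.enabled=false
```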


So that's really all of the tips and tricks I have at this point.

I guess one final thing I would leave you with is that the solutions presented above really have nothing to do with Liferay. That's kind of important: I found these solutions basically by googling for an answer, but I would leave Liferay out of the search query.

When you think about it, at the end of the day you're left with a browser with some HTML, some CSS, some Angular and non-Angular Javascript. Whatever problems you run into with Angular, if you try to solve them generically that solution will likely also work for fixing the problem under Liferay.

Don't get too hung up on trying to figure out why something in Angular is not working under Liferay, because you are not going to find many articles that talk about the two of them together.


The Power of Patience

Technical Blogs February 2, 2018 By David H Nebinger

So, as a developer, I'm usually whacking my whole runtime environment and starting over.

Why? Well, regardless how much I try to keep it clean, cruft will find its way into the environment. I'm left with users I don't want, content I don't want, pages I'm not using any more, sites created to demo something I'm not demoing any more...

So I'll throw out my data directory, drop my database and create a new one, purge old logs, modules, osgi/state and the work directories, ... Basically I get back to a point where I am starting up a brand new bundle, all shiny and new.

One of the things I often find when I do this, though, is that I'm a lazy developer.

So for a while I was actually writing code that had dependencies on some environmental aspect. For example, my code would assume that a custom field had been created, roles or organizations were defined, ...

When I would bring up the environment, my code would fail because I had forgotten to create the dependent data.

So I got a little smarter, and I learned about the Liferay Upgrade framework. I now use this framework when writing my modules so I can just deploy my modules and, if the dependent data is missing, it will be created automagically. This works really great and, from a deployment perspective, I can promote my modules to test, QA, and prod knowing that my modules will create whatever they have to have to function properly.

This has been working really well for me, well, until today.

Today I did one of my purges and then fired up the environment and my upgrade process was failing.

I had created an upgrade process that was creating a custom field for the User model object. The code totally works when deploying to a previously-running instance, but I was getting an error during startup after the purge because my code was executing before the portal finished loading all of the default data. So one of the things you need for a custom field is a company id to bind them to, but my upgrade process was running before the first Company record was created.

No company record, no company id, and my code was just throwing all kinds of exceptions.

So I was effectively still being lazy. I was assuming that my code always ran at a particular time and that certain things were already available, and I found that when starting up from a clean state my assumptions caused my code to fail.

Basically I wasn't waiting for a good time for my code to execute (well, I didn't think I had to wait).  As module developers, we can all too often assume that if our code works on a hot deploy, well then it will always work. The problem, of course, is that at environment startup we really don't have control over when our modules load or when our components start. As far as OSGi is concerned, if the references have been resolved and injected, the component is good to go.

But oftentimes our components may have some sort of dependency that is hard to declare. For me, I was dependent upon the initial Company record being available, and I can't really declare a dependency on that which OSGi would be able to verify. In this forum thread, the code wanted to unload a component that had not been loaded/started when the component was activated, so sometimes the right thing happened and sometimes it didn't.

So the key is to identify when you have code that really needs to wait on something, but also to have something identifiable to OSGi to wait on.

What if you can't identify something for OSGi such as the initial Company record? Or the creation of some service in a different module that you don't want to declare as an @Reference in your component?

One trick I like to do to solve this case is to add a final dependency on portal startup. By adding the following to your @Component class, you will typically get the right outcome:

@Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) { }

That's it. This code basically has OSGi wait to start your component until the portal has been initialized. Note that I'm not doing anything with the injected object; I'm just forcing OSGi to delay the start of my component until it can be injected.

Adding this line to my Upgrade class, I did my purge stuff and started up the portal. Since the portal was initialized first, the Company record was guaranteed to be available, and my Upgrade had no problem creating the custom field.


So, keep this in mind if you need to ensure that some component doesn't try to do its thing too early in the startup process.

Oh, and if you just need to enforce a start sequence between your own components? Well, just remember that OSGi will not start a component until its @References can be injected. So if B depends on A, and C depends on B, just add the necessary @Reference guys to B and C. At startup, OSGi will be able to start A (since it has no dependencies), which means it can then start B and finally start C, guaranteeing the startup sequence that you need to have.

DevCon 2017

General Blogs October 14, 2017 By David H Nebinger

So I've been home for almost a week now after having attended Devcon 2017 in Amsterdam.

I have to give a shout out to Pascal Brusset and his entire team for putting on a great event. The venue was great, the sessions were great, and the speakers were great too. I especially want to thank them and all of Liferay for letting me attend and give a presentation; it has been one of the things on my bucket list for a while, and now I can check it off.  Note this doesn't mean I don't want to go again to a future one, hint hint hint...

And another shout out to my good friend, Olaf Kock, who organized the sold-out Unconference. I'm glad I was able to attend, and I'm going to be sure to sign up as early as I can next year so I can make the cut.

I want to thank my friend Ray Auge for the idea about OSGi Subsystems; they solve a problem I've been concerned about, and once he planted the idea in my head I was able to burn the midnight oil and turn it into a new blog post.

For all of those who I had a chance to talk with and hopefully help a little, it was truly my honor. And to those that I didn't get a chance to, well that's something I hope to reconcile at a future event.

To those that attended my session on Development Pitfalls, thanks for attending. Remember if you have any questions or concerns, you can usually get a response from me in the forums.

Now I'm getting ready for next week's LSNA in Austin. I'm looking forward to catching up with some of my old friends and hopefully making some new ones.

If you're going to LSNA, feel free to stop me and say Hi or ask a question or whatever. To me, that's the best part of attending the events and getting to hear your problems and issues and potentially turning those into a new blog post.


OSGi Subsystems and Why You Want Them

Technical Blogs October 9, 2017 By David H Nebinger

So last week I'm sitting in an Unconference session at DevCon in a group talking about OSGi. I don't remember how it came up, but we got on a discussion about deployment issues and someone asked about creating an LPKG file (the format Liferay uses to distribute a single artifact containing many bundles). I explained that it might be possible to create a file, but the problem was that the format (outside of being a zip file) is not documented and subject to change at any moment.

That's when Ray Auge jumped in and stated that we didn't want to use LPKG files anyway. Instead we should be using Subsystems, an OSGi specification for packaging many bundles in a single artifact.

Well I had not heard of Subsystems before, so I jotted down a note to myself to do some research on them to see just what they were and how I could use them...


OSGi Subsystems is part of the R5 specification for OSGi.

"So What?" you might ask. Well, it turns out they are really useful.

For example, Liferay actually distributes all of their bundles as .lpkg files because it can be hard to distribute and deploy over 500 bundles, but it's actually pretty easy to distribute and deploy 7 .lpkg files.

The problem for us, though, is that the .lpkg file is undocumented and generally not for our use as developers and deployers.

Fine, but when it comes time for you to deploy your own app that consists of, say, 30 bundles, that deployment process can easily become a point of failure. It is all too easy to deploy only 29 of the 30 files (without even knowing that one has been missed) or to use the wrong version of one of the bundles...

As soon as the number of deployment artifacts grows that large, the risk of deployment failure or issue rises along with the artifact count.

What we need is an .lpkg-like mechanism to package all of our own custom artifacts into one or a small number of deployment artifacts.  This way we can keep the level of modularity we want with the bundles, but we can package them and deploy them together in a manageable number of artifacts.

Enter the OSGi Subsystem Specification...

OSGi Subsystems

In case you haven't guessed it, subsystems represent a package or container of other bundles, fragments or even other subsystems.

Subsystems break down into three different types:

  1. Feature - A Feature subsystem is the simplest type and represents basically a container of bundles. All of the bundles in the feature subsystem are accessible from outside of the feature, and all of the bundles inside the feature can access outside bundles.
  2. Application - An Application subsystem is a container of bundles, but these bundles can only access bundles outside of the application; no outside bundles can use or leverage bundles or services inside of the application.
  3. Composite - A Composite subsystem is the most complex type: it is a container of bundles with fine-grained access control for bundles inside and outside of the subsystem.

While each of these types represent a container of bundles, the differences lie in accessibility inside of the subsystem or outside of the subsystem.

Although Subsystems has a lot of available features and functionality, I'm going to limit scope here to a discussion of feature subsystems. I see great value in being able to create a single artifact out of multiple bundles for deployment, but I tend to question the value in limiting or controlling access to the bundles. For most projects, that kind of (micro)management seems to be overkill; I'm guessing that these other types start to show their benefits as the project size increases.

That said, this blog post will help you get started with the simplest of subsystems, the Feature subsystem, but you'll have all of the necessary stuff installed as a foundation for your self-education on the other subsystem types.

You can find out about the full Subsystems specs and usage here:

Benefits of Feature Subsystems

Why do I see value in Feature Subsystems? Because they bring the following benefits to the table:

  • Simplifies deployment by reducing bundle count. Instead of pushing out 30 bundles for deployment, maybe I can reduce that to 5 or even 1.
  • Defines a named and semantically-versioned grouping of the contained bundles. Want to release version 1.1 of your portlet? Fine, it may consist of 1.3 of an api, 1.22 of an impl bundle and 1.8 of the portlet module, but they can all version together inside the subsystem.

These are the benefits that are clear to me. There may be others related to management (and scope control if you build an Application or Composite subsystem), plus others that you might see that I'm missing.

Installing Subsystems into Liferay

Okay, so there are a number of bundles you'll need to download and drop into your Liferay deploy folder:

Group ID / Artifact ID / Version / Link

  • org.apache.aries.subsystem / org.apache.aries.subsystem.api / 2.0.6 / https://repo1.maven.org/maven2/org/apache/aries/subsystem/org.apache.aries.subsystem.api
  • org.apache.aries.subsystem / org.apache.aries.subsystem.core / 2.0.6 / https://repo1.maven.org/maven2/org/apache/aries/subsystem/org.apache.aries.subsystem.core
  • org.apache.aries / org.apache.aries.util / 1.1.1 / http://repo1.maven.org/maven2/org/apache/aries/org.apache.aries.util/1.1.1/org.apache.aries.util-1.1.1.jar
  • org.apache.felix / org.apache.felix.coordinator / 1.0.0 / http://repo1.maven.org/maven2/org/apache/felix/org.apache.felix.coordinator
  • org.eclipse.equinox / org.eclipse.equinox.region / 1.2.101.v20150831-1342 / http://repo1.maven.org/maven2/org/eclipse/equinox
  • org.slf4j / slf4j-api / 1.7.12 / http://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.12/slf4j-api-1.7.12.jar
  • org.apache.aries.subsystem / org.apache.aries.subsystem.gogo-command / 1.0.0 / https://repo1.maven.org/maven2/org/apache/aries/subsystem

Note: You may want to check to see if there are newer versions you might want to use.

When using SLF4J, you also need to provide a binding to an actual logging implementation. I want the messages to go to the Liferay logging system, so I created an SLF4J binding for the Liferay logging system. You can get the project here: https://github.com/dnebing/slf4j-liferay.

After they have deployed, you can drop into the Gogo shell to check their status:

g! lb | grep slf4j        
  534|Active     |   10|slf4j-liferay (1.7.12)
  535|Active     |   10|slf4j-api (1.7.12)
g! lb | grep Apache       
   25|Active     |    6|Apache Commons FileUpload (1.3.2)
   27|Active     |    6|Apache Felix Bundle Repository (2.0.2.LIFERAY-PATCHED-1)
   28|Active     |    6|Apache Felix Configuration Admin Service (1.8.8)
   29|Active     |    6|Apache Felix Dependency Manager (3.2.0)
   30|Active     |    6|Apache Felix Dependency Manager Shell (3.2.0)
   31|Active     |    6|Apache Felix EventAdmin (1.4.6)
   32|Active     |    6|Apache Felix File Install (3.5.4.LIFERAY-PATCHED-1)
   33|Active     |    6|Apache Felix Gogo Command (0.12.0)
   34|Active     |    6|Apache Felix Gogo Runtime (0.10.0)
   35|Active     |    6|Apache Felix Gogo Shell (0.10.0)
   36|Active     |    6|Apache Felix Declarative Services (2.0.6)
  537|Active     |   10|Apache Aries Util (1.1.1)
  538|Active     |   10|Apache Felix Coordinator Service (1.0.0)
  539|Active     |   10|Apache Aries Subsystem Core (2.0.6)
  540|Active     |   10|Apache Aries Subsystem API (2.0.6)
  542|Active     |   10|Apache Aries Subsystem Gogo command (1.0.0)
g! lb | grep region       
  541|Active     |    1|org.osgi.service.subsystem.region.context.0 (1.0.0)

The new bundles we loaded are the ones at start level 10 (534 through 542).

Part of the deployed bundles includes a new Gogo command for subsystems:

g! subsystem:list
0	ACTIVE	org.osgi.service.subsystem.root 1.0.0

At this point you've installed subsystems, so lets go on to building and deploying one.

Building an Enterprise Subsystem Archive

Hey, before you start on this step, do yourself a favor and make sure your bundles deploy outside of Subsystems. My first pass at this was based on some modules I built out of the Blade Samples. I used those to create an esa archive, tried to deploy it, and got some errors. Thinking they were Subsystem errors, I spent some time trying to resolve them. Eventually I deployed the modules directly, and all of the errors I was seeing were still there. The modules I had built were not complete and wouldn't activate whether as normal bundles or within a Subsystem. I started over with clean projects, got them working by deploying directly, and only when that was all working did I proceed to this step.

Moral of the story - Only build an ESA archive out of working, deployable modules.

So, in case you haven't guessed it yet, we need to build a zip file (it is an archive, after all) with the extension .esa; this is our Enterprise Subsystem Archive.

Building an esa archive is actually kind of easy.

If you're using Maven, you're going to be leveraging the ESA Maven Plugin. If you're using Ant or Gradle, you're going to be leveraging the ESA Ant Task. For those of you who are new to Gradle, you might want to check out how to invoke Ant from within your Gradle build script.

I created a rather simple set of modules, available via this GitHub repo for a Maven workspace: https://github.com/dnebing/subsystem-sample. There are two ServiceBuilder modules, one an API and one a service jar, and there's also a portlet module to do CRUD operations. The entities represent an extremely simple "event" system for booking events for conference rooms.

Each of the three modules was deployed directly into a local dev environment to ensure it could start and be active in the portal.

Our go at this is going to be based on pulling in dependencies to be part of the esa archive. I created a new submodule in the project, subsystem-events-esa, to pull in the dependencies. I like to use mvn archetype:generate to start a new project, but you can create one however you'd like.

Then we need to update the pom. This is the one I came up with:

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.dnebinger</groupId>
	<artifactId>subsystem-events-esa</artifactId>
	<version>1.0.0</version>
	<packaging>esa</packaging>

	<dependencies>
		<dependency>
			<groupId>com.dnebinger</groupId>
			<artifactId>subsystem-events-api</artifactId>
			<version>1.0.0</version>
		</dependency>
		<dependency>
			<groupId>com.dnebinger</groupId>
			<artifactId>subsystem-events-service</artifactId>
			<version>1.0.0</version>
		</dependency>
		<dependency>
			<groupId>com.dnebinger</groupId>
			<artifactId>subsystem-events-web</artifactId>
			<version>1.0.0</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.aries</groupId>
				<artifactId>esa-maven-plugin</artifactId>
				<extensions>true</extensions>
				<configuration>
					<generateManifest>true</generateManifest>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>
Now when you build this guy, you will get a subsystem-events-esa-1.0.0.esa file in the target folder. Peel it open with your zip tool and you should see something like:

Subsystems ESA Archive Contents

So what did we end up with?

Well, we have our three dependencies that were listed in the pom. There's a couple of Maven pom artifacts that we don't really worry about. The other thing we have is the SUBSYSTEM.MF file:

Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: com.dnebinger.subsystem-events-esa
Subsystem-Version: 1.0.0
Subsystem-Name: subsystem-events-esa
Subsystem-Content: com.dnebinger.subsystem.events.api;version="[1.0.0,1.0.0]";start-order:="1",
 com.dnebinger.subsystem.events.service;version="[1.0.0,1.0.0]";start-order:="2",
 com.dnebinger.subsystem.events.web;version="[1.0.0,1.0.0]";start-order:="3"
Subsystem-Type: osgi.subsystem.feature

If you check the pom, I asked for the file to be generated. You can just as easily specify your own if you wanted to.

This file defines the subsystem, the modules it contains and has the semantic versioning details for the esa.

Deploying the esa Archive

So we built it, let's deploy it. Note that I'm using a clean bundle here, not one where the earlier Blade sample modules had already been deployed.

Drop the subsystem-events-esa-1.0.0.esa file in the Liferay deploy folder and...

Nothing. Which is kind of what I expected. You see, Liferay has an extension point for the Felix FileInstall module to handle .lpkg files and, well, we need a similar extension to support the esa Archives.

Fortunately for everyone, I've already created it. The whole project is available at https://github.com/dnebing/subsystem-deployer. Build the module and drop it into your Liferay deploy folder. Any .esa file dropped in the Liferay deploy folder will be picked up by the module and moved to the osgi/modules directory where it will be processed by OSGi and the Felix FileInstall service.

With this module in place, the .esa file will be picked up and processed. With our new subsystem Gogo command, we can now see the following:

g! subsystem:list
0	ACTIVE	org.osgi.service.subsystem.root 1.0.0
3	ACTIVE	com.dnebinger.subsystem-events-esa 1.0.0

So the subsystem looks good, but how about our bundles?

g! lb | grep subsystem-events
  548|Active     |    1|subsystem-events-service (1.0.0)
  549|Active     |    1|subsystem-events-api (1.0.0)
  550|Active     |    1|subsystem-events-web (1.0.0)

That's exactly what we hope to see! Since the bundles are all valid, you can now log into the portal and place the portlet on a page and try it out.


So that's really it. We can now leverage OSGi Subsystems in our portal. This allows us to build an esa archive file containing multiple files (bundles, fragments or other subsystem esa files) and deploy them as a single unit.

Here I've presented how to use the Feature subsystem type, which imposes no isolation: bundles inside the subsystem can freely use bundles outside of it, and vice versa.

I didn't do anything with the two other subsystem types, Application and Composite. If you come up with a valid use case or some example code, please share it; I'm sure we'd all like to hear about your successes.

Necessary SiteMinder Configuration...

Technical Blogs September 8, 2017 By David H Nebinger

A quick note for those using SiteMinder and Liferay...

Liferay likes the Tilde (~) character, and uses it quite often for friendly URLs and other reasons.

When fronting with SiteMinder, though, you may run into return code 500 for URLs with the tilde character in them.

This is actually a default SiteMinder configuration issue: SiteMinder treats tilde (and other characters) as "bad characters" and will return a 500 when it finds them.

The default SiteMinder configuration has this setting:

BadUrlChars       //,./,/.,/*,*.,~,\,%00-%1f,%7f-%ff,%25

You want to override the default configuration to allow the tilde character:

BadUrlChars       //,./,/.,/*,*.,\,%00-%1f,%7f-%ff,%25

After making this change, restart the daemons and you should be back in business.

Leveraging Maven Proxies

Technical Blogs July 12, 2017 By David H Nebinger

Taking a short break from the Vue.js portlet because I had to implement a repository proxy. Since I was implementing it, I wanted to blog about it and give you the option of implementing one too. Next blog post will be back to Vue.js, however...


I love love love Maven repositories. I can distribute a small project w/o all of the dependencies included and trust that the receiver will be able to pull the dependencies when/if they do a build.

And I like downloading small project files such as the Liferay source separately from the multitude of dependencies that would otherwise make the download take forever.

But you know what I hate? Maven repositories.  Well, downloading from remote Maven repositories. Especially if I have to pull the same file over and over. Or when the internet is slow or out and development grinds to a halt. Or maintenance on the remote Maven repositories makes them unavailable for some unknown period of time.

Now, of course the Maven (and Gradle and Ivy) tools know about remote repository download issues and will actually leverage a local repository cache so you really only have to download once. Except that they made this support optional, and frankly it gets overlooked a lot. The default Liferay Gradle repository references don't list mavenLocal() in the repo list and it is not used by default.

Enterprise or multi-system users (like myself) have additional remote repository issues: a local repo on one user's system is not shared w/ a team or across multiple systems, so you end up pulling the same file onto your internal network multiple times, even if it has already been pulled before.

A complete list of use cases would include:

  • Anyone developing behind an HTTP proxy (typical corporate users). Using a repository proxy removes all of the necessary proxy configuration from your development tools; the repository proxy needs to know how to connect through the HTTP proxy, but your dev tools don't.
  • Anyone developing in teams. Using a repository proxy, you will only pay for downloading to your network once, then all developers will be able to access the artifact from the proxy and not have to download on their own.
  • Anyone using an approved artifact list. Using a repository proxy populated with approved artifacts and versions, you have automatic control over the development environment to ensure that an unapproved version does not get through.
  • Anyone developing in multiple workspaces, projects, or machines. Shares the benefits from the team development where you only download once, then share the artifact.
  • Anyone who suffers network outages. Using a repository proxy, you can pull previously loaded artifacts even when the external network is down. You can't pull new artifacts, but you should have most of the ones you regularly use.
  • Anyone who needs to publish team artifacts. A repository proxy can also hold your own published artifacts, so it is easy to share those with the team and use each other's artifacts as dependencies.
  • Anyone using a secured continuous integration server. CI servers should not have access to the interweb, but they still need to pull dependencies for builds. The repository proxy gives your CI server a repository to pull from without having direct internet access.
  • Anyone behind a slow internet link. Your internal network is typically always faster than your uplink, so having a local proxy cache of artifacts reduces or eliminates lag issues from your uplink.
  • Anyone interested in simplifying repository references. Liferay uses two repositories, a public mavenCentral mirror and a release repository. Then there's mavenCentral, of course, and there are other repositories out there too. By using a proxy, you can have a single repository reference in your development tools, but that reference could actually represent all of these separate repositories as a single entity.

So I'm finally giving up the ghost: I'm moving to a repository proxy.

What Is A Repository Proxy?

For those that don't know, a repository proxy is basically a local service that you're going to run that proxies artifact requests to remote Maven repositories and caches the artifacts to serve back later on. It's not a full mirror of the external repositories because it doesn't pull and cache everything from the repository, only the artifacts and versions you use.

So for most of us, our internal network is usually significantly faster than the interweb link, so once the proxy cache is fully populated, you'll notice a big jump when building new projects. And for team development, the entire team can benefit from leveraging the cache.
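The cache-or-fetch behavior described above is simple enough to sketch in a few lines. This is purely an illustration of the idea, not Archiva's actual implementation; `makeRepositoryProxy` and `fetchRemote` are hypothetical names:

```javascript
// Illustrative sketch of a repository proxy's cache-or-fetch behavior.
// "fetchRemote" stands in for a real download from the upstream repository.
function makeRepositoryProxy(fetchRemote) {
  const cache = new Map(); // artifact coordinates -> cached bytes

  return {
    get(coordinates) {
      // Cache hit: serve the artifact locally, no remote call at all.
      if (cache.has(coordinates)) {
        return { artifact: cache.get(coordinates), fromCache: true };
      }
      // Cache miss: pull from the remote repository once, then cache it.
      const artifact = fetchRemote(coordinates);
      cache.set(coordinates, artifact);
      return { artifact, fromCache: false };
    }
  };
}
```

The first request for an artifact goes upstream; every later request, from any developer pointing at the proxy, is served from the local cache.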

Setting Up A Repository Proxy

There are actually a bunch of freely-available repository proxies you can use. One popular option is Artifactory.  But for my purposes, I'm going to set up Apache Archiva. It's a little lighter than Artifactory and can be easily deployed into an existing Tomcat container (Artifactory used to support that, but they've since buried or deprecated using a war deployment).

The proxy you choose is not so important; it will just impact how you go about configuring it for the various Liferay remote repositories.

Follow the instructions from the Archiva site to deploy and start your Archiva instance. In the example I'm showing below, I have a local Tomcat 7 instance running on port 888 and have deployed the Archiva war file per the site instructions. After starting it up, I created a new administrator account and was ready to move on to the next steps.

Once you're in, add two remote repositories:

These two repositories are used by Liferay for public and private artifacts.  Order the liferay-public-releases guy first, then the public one second.

In Archiva, you also need to define proxy connectors:

Once these are set, you can then define a local repository:

Initially if you browse your artifacts, your list will be empty. Later, after using your repo proxy, when you browse you should see the artifacts.

And finally you may want to use a repository group so you only need a single URL to access all of your individual repositories.
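Conceptually, a repository group just tries each member repository in order and returns the first hit, so one URL can front all of your individual repositories. A minimal sketch of the idea (the names here are hypothetical; this is not Archiva code):

```javascript
// A repository group exposes one URL and delegates each artifact request
// to its member repositories in order, returning the first hit.
function makeRepositoryGroup(members) {
  return {
    get(coordinates) {
      // Try each member repository in order; first hit wins.
      for (const member of members) {
        const artifact = member.get(coordinates);
        if (artifact !== undefined) {
          return artifact;
        }
      }
      return undefined; // no member repository could serve the request
    }
  };
}
```

This is why the ordering of the member repositories matters: the group answers from whichever member resolves the artifact first.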

Configuring Liferay Dev Tools For The Repository Proxy

So this is really the meat of this post, how to configure all of the various Liferay development tools to leverage the repository proxy.

In all examples below, I'm just pointing at a service running on localhost port 888; for your own environment, you'll just make necessary changes to the URL for host/port details, but otherwise the changes will be similar.

Liferay Gradle Workspace

This is handled by changing the root settings.gradle file.  You'll take out references to cdn.liferay.com and instead just point at your local proxy, like so:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.workspace", version: "1.0.40"
  }

  repositories {
    maven {
      url "http://localhost:888/archiva/repository/liferay/"
    }
  }
}

apply plugin: "com.liferay.workspace"

Now if you happen to have other repositories listed, you may want to make sure that they too are pushed up to your repository proxy. No reason to not do so. And by using the repository group, we can simplify the repository list in the settings.gradle file.

This is the only file that typically has repositories listed, but if you have an existing workspace you might have added some references to the root build.gradle file or a build.gradle file in one of the subdirectories.

Liferay Gradle Projects

For standalone Liferay Gradle projects, your repositories are listed in the build.gradle file; these will also change to point at the repository proxy:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.1.3"
  }

  repositories {
    maven {
      url "http://localhost:888/archiva/repository/liferay/"
    }
  }
}

apply plugin: "com.liferay.plugin"

dependencies {
  compileOnly group: "org.osgi", name: "org.osgi.core", version: "6.0.0"
  compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

repositories {
  maven {
    url "http://localhost:888/archiva/repository/liferay/"
  }
}

Global Gradle Mirrors

Gradle supports the concept of "init" scripts.  These are global scripts that are executed before tasks and can tweak the build process that the build.gradle or settings.gradle might otherwise define. Create a file in your ~/.gradle directory called init.gradle and set it to the following:

allprojects {
  buildscript {
    repositories {
      mavenLocal()
      maven { url "http://localhost:888/archiva/repository/liferay" }
    }
  }

  repositories {
    mavenLocal()
    maven { url "http://localhost:888/archiva/repository/liferay" }
  }
}

This should normally have all Gradle projects use your local Maven repository first and your new proxy repository second.  Any repositories listed in the build.gradle or settings.gradle file will come after these. This also sets repository settings for both Gradle plugin lookups as well as general build dependency resolution.

Liferay SDK

The Liferay SDK leverages Ant and Ivy for remote repository access. Our change here is to point Ivy at our repository proxy.

Edit the main ivy-settings.xml file to point at the repository proxy:

<ivysettings>
  <settings defaultResolver="default" />

  <resolvers>
    <ibiblio m2compatible="true" name="liferay" root="http://localhost:888/archiva/repository/liferay/" />
    <ibiblio m2compatible="true" name="local-m2" root="file://${user.home}/.m2/repository" />

    <chain dual="true" name="default">
      <resolver ref="local-m2" />
      <resolver ref="liferay" />
    </chain>
  </resolvers>
</ivysettings>

Liferay Maven Projects

For simple Liferay Maven projects, we just have to update the repository like we would for any normal pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...

  <repositories>
    <repository>
      <id>liferay</id>
      <name>Repo Proxy</name>
      <url>http://localhost:888/archiva/repository/liferay/</url>
    </repository>
  </repositories>
</project>

Liferay Maven Workspace

The Liferay Maven Workspace is a new workspace based on Maven instead of Gradle. You can learn more about it here.

In the root pom.xml file, we're going to add our repository entry but we also want to disable using the Liferay CDN as the default repository:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...

  <repositories>
    <repository>
      <id>liferay</id>
      <name>Repo Proxy</name>
      <url>http://localhost:888/archiva/repository/liferay/</url>
    </repository>
  </repositories>

  ...
</project>

Global Maven Mirrors

Maven supports the concept of mirrors; these can be local repositories that should be used in place of other commonly named repositories.  Create (or update) your ~/.m2/settings.xml file and make sure you have the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <mirrors>
    <mirror>
      <id>repo-proxy</id>
      <name>Repo Proxy</name>
      <url>http://localhost:888/archiva/repository/liferay/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
The <mirrorOf /> tag with the wildcard means that this repository is a mirror for all repositories, so Maven builds will go to this repository proxy for all dependencies, build or otherwise.

Configuring Liferay Source For The Repository Proxy

We took care of all of your individual project builds, but what about if you have the source locally and want to build it using the repository proxy?

I actually combined a bunch of the previous listed techniques. For the maven portion of the build (if there is one), my settings.xml mirror declaration ensures that my repo proxy will be used. For the Gradle portion of the build, I used the init.gradle script (although I copied my ~/.gradle/init.gradle to the .gradle directory created inside of the folder as the ~/.gradle/init.gradle script was ignored).

In addition, in my build.<username>.properties file, I set basejar.url=http://localhost:888/archiva/repository/liferay.

And I also had to set my ANT_OPTS environment variable, so I used "-Xmx8192m -Drepository.url=http://localhost:888/archiva/repository/liferay".

With all of these changes, I was able to build the liferay-portal from master with all dependencies coming through my repository proxy.

You might be wondering why you would want to go through this exercise. For me, it seemed like an ideal way to pre-populate my proxy with most of the necessary public dependencies and Liferay modules. Sure this isn't necessary because, since we're using a proxy, if we need an updated version later on the proxy will happily fetch it for us. It's just a way to pre-populate with a lot of the artifacts that you'll be needing.

Publishing Artifacts

The other big reason to use a local repository service is the ability to publish your own artifacts into the repository. When you're working in teams, being able to publish artifacts into the repository so teammates can use the artifacts without building themselves can be quite valuable. This will also force you to address your versioning strategy since you need to bump versions when you publish; I see this as a good thing because it will force you to have a strategy going into a project rather than trying to create one in the middle or at the end of the project.

So next we'll look into the changes necessary to support publishing from each of the Liferay development environments to your new repository service.

Note that we're going to assume that you've set up users in the repository service to ensure you don't get uploads from unauthorized users, so these instructions will include the details for setting up the repository with authenticated access for publishing.

Finally, during the research for this blog post I quickly came to find that there are different ways to publish artifacts, sometimes even based on the repository proxy you're using. For example, the fine folks at JFrog actually have their own Gradle plugin to support Artifactory publication. I didn't try it since I'm currently targeting Archiva, but you might want to look it over if you are targeting Artifactory.

These instructions are staying with the generic plugins so they might work across repositories w/o much change, but obviously you should test them out for yourself.

Liferay Gradle Workspace

The Liferay workspace was actually the most challenging configuration out of all of these different methods if only because tweaking the various build.gradle files to support the subproject publication can take a while.

In fact, Gradle has an older uploadArchives() method that has since been replaced by a newer maven-publish plugin, but for the life of me I couldn't get the submodules to work right with it. I could get submodules to use maven-publish if each submodule build.gradle file had the full stanzas for individual publication, but then I couldn't get the gradlew publish command to work in the root project.

So the instructions here purposefully leverage the older uploadArchives() method because I could get it working with the bulk of the configuration and setup in the root build.gradle and minor updates to the submodule build.gradle files.

Add Properties

The first thing we will do is add properties to the root gradle.properties file. This will isolate the URL and publication credentials and keep them out of the build.gradle files. For SCM purposes, you would not want to check this file into revision control, as it would expose the publisher's credentials to those who have access to the revision control system.

# Define the URL we'll be publishing to; this should be a hosted
# repository in your proxy (e.g. Archiva's default "internal" repository)
publishUrl=http://localhost:888/archiva/repository/internal/

# Define the username and password for authenticated publish
publishUsername=admin
publishPassword=myS3cretPassword

Root build.gradle Changes

The root build.gradle file is where the bulk of the changes go. By adding the following content to this file, we are adding support for publishing to every submodule that might be added to the Liferay Workspace.

// define the publish for all subprojects
allprojects {

  // all of our artifacts in this workspace publish to same group id
  // set this to your own group or override in submodule build.gradle files
  // if they need to change in the submodules.
  group = 'com.dnebinger.gradle'

  apply plugin: 'java'
  apply plugin: 'maven'

  // define a task for the javadocs
  task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from 'build/docs/javadoc'
  }

  // define a task for the sources
  task sourcesJar(type: Jar) {
    classifier = 'sources'
    from sourceSets.main.allSource
  }

  // list all of the artifacts that will be created and published
  artifacts {
    archives jar
    archives javadocJar
    archives sourcesJar
  }

  // configure the publication stuff
  uploadArchives {
    // disable upload, force each submodule to enable
    enabled false

    repositories.mavenDeployer {
      repository(url: project.publishUrl) {
        authentication(userName: project.publishUsername, password: project.publishPassword)
      }
    }
  }
}

The instructions in this file will have you building a source jar and a javadocs jar. These and the build artifacts will all be published to the repository from the gradle.properties file using the credentials from that file.

Note that by default the upload is disabled for all subprojects. This forces us to enable the upload in those individual submodules we want to set it up for.

Submodule build.gradle Changes

In each submodule where we want to publish to the repo, there are two sets of simple changes to make.

// define the version that we will publish to the repo as
version = '1.0.0'

// change the group value here if it must differ from the one in the root build.gradle.

dependencies {
  ...
}

// enable the upload
uploadArchives { enabled true }

We specify the version for publishing and enable the uploadArchives for the submodule.

Publish Archives

That's pretty much it.  In both the root directory as well as in the individual module directories you can issue the command gradlew uploadArchives and (if you get a good build) the artifacts will be published to the repository.

Liferay Gradle Projects

Liferay gradle projects get a similar modification as the Liferay Gradle Workspace modifications, but they're a little easier since you don't have to worry about submodules.

From the previous Liferay Gradle Workspace section above, add the same property values to the gradle.properties file. If you don't have a gradle.properties file, you can create one with the listed properties.

Build.gradle Changes

The changes we make to the build.gradle file are similar, but are still different enough:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.3.9"
  }

  repositories {
    maven {
      url "http://localhost:888/archiva/repository/liferay"
    }
  }
}

apply plugin: "com.liferay.plugin"
apply plugin: 'java'
apply plugin: 'maven'

repositories {
  maven {
    url "http://localhost:888/archiva/repository/liferay"
  }
}

// define the group and versions for the artifacts
group = 'com.dnebinger.gradle'
version = '1.0.3'

dependencies {
  ...
}

// define a task for the javadocs
task javadocJar(type: Jar, dependsOn: javadoc) {
  classifier = 'javadoc'
  from 'build/docs/javadoc'
}

// define a task for the sources
task sourcesJar(type: Jar) {
  classifier = 'sources'
  from sourceSets.main.allSource
}

// list all of the artifacts
artifacts {
  archives jar
  archives javadocJar
  archives sourcesJar
}

// configure the publication stuff
uploadArchives {
  repositories.mavenDeployer {
    repository(url: project.publishUrl) {
      authentication(userName: project.publishUsername, password: project.publishPassword)
    }
  }
}
That's pretty much it.  Like the previous section, this build.gradle file supports creating the javadoc and source jars and will upload those using the same gradlew uploadArchives command.

Liferay SDK

In the Liferay SDK we have to configure Ivy to support the artifact publication. This is actually quite easy because Liferay has already configured Ivy to support their internal deployments to an internal Sonatype server; we just have to override the properties.

In your build.username.properties file in the root of the SDK, you need to just add some property overrides:


That is your configuration for publishing. After building your plugin, i.e. using the ant war command, just issue the ant publish command to push the artifact to the repository. If you run ant jar-javadoc before the ant publish, your javadocs will be generated so they'll be available for publishing too. There's also an ant jar-source target available, but I didn't see where it was being uploaded to the repository, so that might not be supported by the SDK scripts.

One thing I did find, though, is that in each plugin you plan on publishing, you should edit the ivy.xml file in the plugin directory. The ivy.xml file has, as the first tag element, the line:

<info module="hello-portlet" organisation="com.liferay">

The organisation attribute is actually going to be the group id used during publishing so, unless you want all of your plugins to be in the com.liferay group, you'll want to edit the file to set it to what you need it to be.

I did check the templates and there doesn't seem to be a way to configure it.  The templates are all available in the SDK's tools/templates directory, so you could go into all of the individual ivy.xml files and set the value you want; that way, as you create new plugins using the templates, the default value will be your own.

Note that this only applies if you create plugins using the command line; I'm honestly not sure whether, if you are using the Liferay IDE, the templates in the SDK folder are actually the ones the IDE uses for new plugin project creation.

Liferay Maven Projects

Liferay Maven projects are, well, simple projects based on Maven.  I'm not going to dive into all of the files here, but suffice it to say you add your repository servers, usually in your settings.xml file, and then you add to the pom file:

<distributionManagement>
  <repository>
    <id>releases</id>
    <name>Internal Release Repository</name>
    <url>http://localhost:888/archiva/repository/internal/</url>
  </repository>
  <snapshotRepository>
    <id>snapshots</id>
    <name>Internal Snapshot Repository</name>
    <url>http://localhost:888/archiva/repository/snapshots/</url>
  </snapshotRepository>
</distributionManagement>

With just this addition, you can use the mvn deploy command to push up your artifacts. Additionally, you can add support for publishing the javadocs and sources too: https://stackoverflow.com/questions/4725668/how-to-deploy-snapshot-with-sources-and-javadoc

Liferay Maven Workspace

For publishing purposes, the Liferay Maven workspace is merely a collection of submodules. This means that Maven pretty much is going to support the publication of submodules after you complete the configuration and pom changes mentioned in the previous Liferay Maven Projects section.

Normally the mvn deploy command will publish all of the submodules, but you can selectively disable submodule publish by configuring the plugin in the submodule pom: http://maven.apache.org/plugins/maven-deploy-plugin/faq.html#skip


This blog post ended up being a lot bigger than what I originally planned, but it does contain a great deal of information.

We reviewed how to set up an Archiva instance to use for your repository proxy.

We checked out the changes to make in each one of the Liferay development frameworks to leverage the repository proxy to pull all dependencies from the repository proxy, whether that proxy is Apache Archiva, Artifactory or Nexus.

We also learned how to configure each of the development frameworks to support publishing of the artifacts to share with the development team.

A lot of good stuff, if you ask me. I hope you find these details useful; if you have any comments, leave them below, and if you have any questions, post them to the forum and we'll help you out.

As a test, I timed the complete build of the https://github.com/liferay/liferay-blade-samples after deleting my ~/.m2/repository folder (purging my local system repository). Without the repository proxy, downloading all of the dependencies and completing the build took 69 seconds (note I have gigabit ethernet at home, so your numbers are going to vary from that). After purging ~/.m2/repository again and configuring for the repository proxy (pre-populated with artifacts), downloading all of the dependencies and completing the build took 45 seconds.

That is almost a 35% reduction in build time, and it means that 35% of the total build is taken to download artifacts even over my gigabit ethernet.

If you do not have gigabit ethernet where you're at, it would not surprise me to see that time to download increase, taking the % of reduction up along with it.

Building JS Portlets Part 2

Technical Blogs June 27, 2017 By David H Nebinger


In part 1 of the blog series, we started a Vue.js portlet based in the Liferay Workspace, but actually there was no JS framework introduced just yet.

We're actually going to continue that trend here in part 2, but in this part we're going to tackle some of the important parts that we'll need in our JS portlets to fit them into Liferay.

Passing Portlet Instance Configuration

In part 1 our view.jsp page used the portlet instance configuration, displayed as two text lines:

<%@ include file="/init.jsp" %>

<p><b><liferay-ui:message key="vue-portlet-1.caption"/></b></p>
<p><liferay-ui:message key="caption.flag.one"/> <%= 
  String.valueOf(portletInstanceConfig.flagOne()) %></p>
<p><liferay-ui:message key="caption.flag.two"/> <%= 
  String.valueOf(portletInstanceConfig.flagTwo()) %></p>

We're actually going to continue this, but we're going to use our <aui:script /> tag to leverage it in a new addition to the JSP:


<aui:script>
var <portlet:namespace/>portletPreferences = {
	flag: {
		one: <%= portletInstanceConfig.flagOne() %>,
		two: <%= portletInstanceConfig.flagTwo() %>
	}
};
</aui:script>

Using <aui:script />, we're embedding javascript into the JSP.

Inside of the script tag, we declare a new variable with the name portletPreferences (although this name is namespaced to prevent a name collision with another portlet on the page).

We initialize the variable as an object which contains our portlet preferences. In this example the individual flags have been nested under a containing flag object, but the structure can be whatever you want.

The goal here is to create a unique Javascript object that will hold the portlet instance configuration values. We need to extract them in the JSP because once the code is over and running in the browser, it will not have access to the portlet config. Using this technique, we get all of the prefs in a Javascript variable so they will be available to the running JS portlet in the browser.
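On the browser side, the running portlet JS just looks up its own namespaced object. A small sketch of the idea (`globalScope` stands in for the browser's window object, and the namespace values are made up; the real one comes from <portlet:namespace/>):

```javascript
// globalScope stands in for the browser's window object.
const globalScope = {};

// Each portlet instance writes its preferences under its own namespaced
// variable, so two instances on the same page cannot collide.
globalScope['_instanceA_portletPreferences'] = { flag: { one: true, two: false } };
globalScope['_instanceB_portletPreferences'] = { flag: { one: false, two: true } };

// The portlet's JS uses its own namespace to look up its own copy.
function getPortletPreferences(scope, portletNamespace) {
  return scope[portletNamespace + 'portletPreferences'];
}
```

Each portlet instance sees only its own configuration, even with multiple instances on the same page.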

Passing Permissions

We're actually going to continue this technique to pass permissions from the back end to the JS portlet:

var <portlet:namespace/>permissions = {
	<c:if test="<%= Validator.isNotNull(permissionChecker) %>">
		isSignedIn: <%= permissionChecker.isSignedIn() %>,
		isOmniAdmin: <%= permissionChecker.isOmniadmin() %>,
		hasViewOrgPermission: <%= permissionChecker.hasPermission(null, 
		  Organization.class.getName(), Organization.class.getName(), 
		  ActionKeys.VIEW) %>
	</c:if>
	<c:if test="<%= Validator.isNull(permissionChecker) %>">
		isSignedIn: false,
		isOmniAdmin: false,
		hasViewOrgPermission: false
	</c:if>
};

Here we're trying to use the permission checker added by the init.jsp's <liferay-theme:defineObjects /> tag.

Although I'm confident I should never get a null permission checker instance, I'm using defensive programming to ensure that I can populate my permissions object even if the checker is not available.

When it is available, I'm basically populating the object with keys for the permissions, each with a scriptlet that evaluates whether the current user has that permission.

Since we are building this as a Javascript object, we can collect all of the permissions while the JSP is rendering the initial HTML fragment, which allows us to ship the permissions to the browser so the portlet can use them to decide what to show and what to allow editing on.

The only thing you need to figure out is what permissions you will need on the front end; once you know that, you can use this technique to gather those permissions and ship them to the browser.

NOTE: This does not replace the full use of the permission checker on the back end when processing requests. This technique passes the permissions to the browser, but as we all know it is easy for hackers to adjust these settings once retrieved in order to try to circumvent the permissions. These should be used to manage the UI, but in no way, shape or form should the browser control permissions when invoking services on the backend.
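To make that concrete, here's a hedged sketch of front-end code consuming the permissions object purely for UI decisions; `visibleControls` and the control names are hypothetical, and the real permission checks still happen on the backend:

```javascript
// Decide which UI controls to render, based on the permissions object the
// JSP emitted. This only shapes the UI; the backend must re-check every
// permission on each service call, because a hacker can flip these booleans
// in the browser.
function visibleControls(permissions) {
  const controls = [];
  if (permissions.isSignedIn) {
    controls.push('profile-menu');
  }
  if (permissions.hasViewOrgPermission) {
    controls.push('org-viewer');
  }
  if (permissions.isOmniAdmin) {
    controls.push('admin-panel');
  }
  return controls;
}
```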


The previous sections have been fairly straight-forward; we have data on the backend (prefs and permissions) that we need on the front end, and really only one way to pass it.

Handling the I18N in the JS portlets comes with a couple of alternatives.

It would actually be quite easy to continue the technique we used above:

var <portlet:namespace />languageKeys = {
	accessDenied: '<liferay-ui:message key="access-denied" />',
	active: '<liferay-ui:message key="active" />',
	localCustomMessage: '<liferay-ui:message key="local-custom-message" />'
};

I can tell you that this technique is extremely tedious. I mean, most apps have tens if not hundreds of language keys for buttons, messages, labels, etc. Itemizing them here will get the job done, but it is a lot of work.

Fortunately we have an alternative. Liferay actually provides the Liferay.Language.get(key) Javascript function. This function will conveniently call the back end to translate the given key parameter using the backend language bundle resolution. It is backed by a cache on the browser side so multiple calls for the same key will only call the backend once.
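The browser-side caching works roughly like the sketch below. This is just an illustration of the memoization idea, not Liferay's actual code; `makeLanguageLookup` and `fetchTranslation` are hypothetical stand-ins for the real backend call:

```javascript
// Rough sketch of a browser-side cache in front of a backend key lookup,
// similar in spirit to what Liferay.Language.get() does.
function makeLanguageLookup(fetchTranslation) {
  const cache = {}; // key -> translated value, populated on first use
  return function get(key) {
    if (!(key in cache)) {
      cache[key] = fetchTranslation(key); // only the first call per key hits the backend
    }
    return cache[key];
  };
}
```

Repeated lookups of the same key are free; only new keys cost a round trip.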

So, rather than passing the 'access-denied' message like we did above, we could just remember to replace hard-coded strings from our JS portlet's template code with calls like Liferay.Language.get('access-denied'). We would likely see the following for Vue templates:

var app5 = new Vue({
  el: '#app-5',
  data: {
    message: Liferay.Language.get('access-denied')
  },
  methods: {
    reverseMessage: function () {
      this.message = this.message.split('').reverse().join('')
    }
  }
})

Although this is convenient, it too has a problem. Because of how the backend resolves keys, only resource bundles registered with the portal can be resolved. If you are only going after Liferay-known keys, that's no problem. But to use your local resource bundle, you will need to register it with the portal per the instructions available here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-language-keys#modifying-liferays-language-keys. Note that you're not so much overriding as adding your local resource bundle into the mix so its keys can be resolved.

To keep things simple, I would actually recommend a mixed implementation. For existing Liferay keys, use the Liferay.Language.get() method to pull the value from the backend. But instead of registering an additional resource bundle in the portal, just use the script-based technique above to pass your local keys.
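As a sketch of that mixed approach (the `getMessage` helper and the `localKeys` parameter are my own names, not a Liferay API; only `Liferay.Language.get()` is the real portal call):

```javascript
// localKeys would be the <portlet:namespace />languageKeys map emitted by the JSP.
// getMessage checks the local map first and only falls back to the portal
// for keys it doesn't find, avoiding the need to register a resource bundle.
function getMessage(localKeys, key) {
  if (Object.prototype.hasOwnProperty.call(localKeys, key)) {
    return localKeys[key];            // local bundle value, resolved server-side
  }
  return Liferay.Language.get(key);   // Liferay-known key, cached browser-side
}
```

In the JS app you would then call getMessage() everywhere instead of mixing the two lookup styles throughout your templates.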

This will minimize your coding impact, but if your local bundle has tens or hundreds of your own keys, well you might find it easier to just register your language bundle and stick with Liferay.Language.get().


What what? We're at the end of part 2 and still no real Javascript framework integration?

That is correct, we haven't pulled in Vue.js yet, although we will be doing that in Part 3.

I think that it is important to note that from our original 6 prerequisites, we have already either fully or partially satisfied:

  • Prereq #1 - Using the Liferay Workspace.
  • Prereq #6 - Using the Liferay Environment.

The big conclusion here is that we can use this pattern to preload data from the portal side to include in the JS applications, data which is otherwise not available in the browser once the JS app is running. You can apply this pattern for collecting and passing data that you want available to the JS app without providing a runtime fetch mechanism or exposing portal/portlet data.

Note: Of course, in my JSP/Javascript snippets above there is no validation of the JS, no proper handling of string encoding, and none of the other protections you might use to ensure you are not serving up some sort of JS attack. Just because I didn't present it doesn't mean you shouldn't include it in your production apps.

See you in Part 3...

