Hidden Dangers of Trying to Keep Liferay Up to Date

Technical Blogs January 31, 2018 By Minhchau Dang

At Liferay Symposium North America 2017, I had a discussion with a few of our customers on the hidden dangers of rolling back Liferay changes.

There are multiple ongoing efforts to improve how we will handle the issue for the upcoming 7.1 release, such as LPS-76923. However, the 7.0 release is already affected, and DE-40 is fast approaching with another round of hidden dangers scheduled to arrive with it.

This blog post was drafted to provide a little more transparency around what is about to happen so that people who have a rollback plan actually have enough information to formulate a complete rollback plan.

With that being said, not everyone immediately realizes there is a problem when you need to roll back the Liferay platform, so I'll start with an example from the customization side, which may be easier to relate to.

Imagine that you've followed the guide on Creating Upgrade Processes for Modules, and you decide to create an upgrade process in order to transition to a new version. While it's a very complete article from the perspective of what you do to move forward, there's one thing the article doesn't try to answer: what happens if you need to go backwards?

If you've ever tried it, you'll find that there is actually little you can do.

And therein lies the problem: some people do not realize that this lack of rollback options applies to the Liferay platform in the same way it applies to customizations.

Core Schema Changes

Of course, it turns out that in some cases, Liferay has as much of a problem moving forward as it does moving backward.

Let's say you've skimmed through the javadocs linked in Development Reference, and you've decided that you want to take advantage of some newer API that's been made available in the later tags of the Core Portal Artifacts. Or maybe, you've browsed through our issue tracker or our release notes and decided that you need one of the fixes or changes. To do so, in theory all you do is rebuild Liferay from source at a revision equal to or later than the specific tag containing the API change you need.

But what happens if the newer commit contains changes that you dislike, and you decide you want to roll back to an earlier version?

In theory, all you need to deal with is the changing API, assuming you started using that new API. However, in practice, one other thing you have to face much more prominently in the current Liferay release (at least, when compared to past releases) is schema changes.

Let's say that you started at a revision that has 7.0.3 GA4 as its nearest parent tag, but you've decided that you want to roll forward to a newer revision that has 7.0.4 GA5 as its nearest parent tag.

If you perform this update, Liferay's core schema version (essentially, a version number that describes the schema for the service builder classes living in portal-impl) also changes. These updates are documented as new versions added to ReleaseInfo. Any time this happens, Liferay treats it as a schema change.
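
If you want to see which core schema version your database currently reports, you can query the Release_ table directly. Here is a minimal sketch, assuming a MySQL client and a database named lportal (both are assumptions; adjust for your environment):

mysql -u root -p lportal -e \
    "SELECT servletContextName, schemaVersion, buildNumber FROM Release_ WHERE servletContextName = 'portal';"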

Core Schema in Pre-7 Releases

Early in Liferay's history (5.1, 5.2, 6.0), Liferay allowed new point releases within the same branch to include schema changes. For example, if you were on 5.2 EE SP1, there might be a schema change when you moved to 5.2 EE SP2. These schema changes would then carry over into the next release.

Of course, with schema changes happening in both a 5.2 branch and a 6.0 branch, you can imagine that, depending on where you started within the 5.2 branch, different upgrade processes had to run in order to reach the correct final state in 6.0. So in order to ensure Liferay ran the correct upgrade processes, you would set the upgrade.processes values in portal properties, and there was documentation describing what those values should be depending on where you started your upgrade.

Later on in Liferay's history (6.1, 6.2), Liferay decided that all of the different permutations were getting really unwieldy. Therefore, we added all the known paths to portal properties (something we later called "seamless upgrade") so that there wouldn't be mistakes from people following the documentation, and we strongly discouraged schema changes after the official release.

If there was something that seemed like it would need a schema change in order to fix, we invested a lot of extra effort into finding a way that would not require a schema change. If there was no way to address something without a schema change (for example, performance issues that required us to reorganize how we store data in a column), we might hide the schema change inside of verify.processes. This resulted in an increase in the number of upgrade-like fixes that appeared as verify processes.

Core Schema in Post-7 Releases

For Liferay 7, we added a policy around verify processes: before you could make something a verify process, you needed to demonstrate that it must be run on every release.

Because we knew from past experience that sometimes schema changes might need to happen anyway, this also meant that we would need to re-allow actual schema changes within the same branch. If it weren't for the shared core of CE and DXP, this would have brought us back to Liferay's darker days of multiple variations in upgrade steps, but the shared core side-steps this particular problem.

However, what happens if you try to start up Liferay against a code base that assumes a newer Liferay core schema version?

Well, we also removed the logic that automatically ran core upgrades at startup, which used to allow you to start up Liferay as long as you were switching to a newer version. As a result, whenever a version change happened, you always had to consciously run the upgrade process before transitioning to that new version, giving you a stronger signal that significant changes were about to happen to your database.

However, there is one gap in all of this planning around preventing startup on core schema version changes: it only works if you use traditional CE releases, where the core schema version formally changes. If you are in an environment where that version number never changes, it starts to get murky how many schema changes you have successfully run.

A particularly awkward variant of this limitation arises if you choose to use DXP rather than CE: your version number is fixed at 7.0.10 from the time you first installed or upgraded Liferay. As a result, Liferay will never force you to run additional core upgrade processes, even when more are added in subsequent Liferay releases.

Known Symptoms of Incomplete Schema Upgrades

If you upgraded to DXP before SP1 (November 1, 2016), or you upgraded with a hotfix that uses DE-7 or earlier as a baseline (the fix pack corresponding to SP1, as noted in the Service Pack Matrix), you might be affected by the following issues due to missing schema updates:

  • LPS-66133: Discussion-enabled assets with zero comments cause performance issues
  • LPS-66599: MBDiscussion table has entries where groupId is 0, which prevents staging from working
  • LPS-44965: For a specific upgrade path (6.1.10 -> 6.1.20 -> 7.0.10), text columns are the wrong size on Oracle

If you upgraded to DXP before SP2 (March 28, 2017), or you upgraded with a hotfix that uses DE-12 or earlier as a baseline, you might be affected by the following issues due to missing schema updates:

  • LPS-68410: Certain text columns, like LayoutPrototype.description, allow fewer characters on SQL Server and Sybase than they would allow on other databases
  • LPS-68775: Organization.type_ column contains values not defined in the organizations.types portal property
  • LPS-69878: Group_.groupKey column is a CLOB instead of a VARCHAR on MySQL

If you upgraded to DXP before SP3 (May 11, 2017), or you upgraded with a hotfix that uses DE-14 or earlier as a baseline, you might be affected by the following issue due to missing schema updates:

  • LPS-70807: PortletPreferences table contains references to 1_WAR_kaleoformsportlet

Module Schema Changes

Just as before, let's say you've skimmed through the javadocs, and you've decided that you want to take advantage of some newer API that's been made available in the later tags of the Core Portal Artifacts. However, this time let's also assume LPS-72269 was recently applied to the core source code of Liferay, and that your starting and ending commits both sit between 7.0.3 GA4 and 7.0.4 GA5.

So, you fast-forward your repository to a point in time that contains the API you want, and you build Liferay from source. However, you encounter some behavior that you dislike, so you decide to bring things back to how they used to be. To do so, you roll back to your original commit, and you build Liferay from source again.

When you start up Liferay after this rollback, something unexpected happens. You no longer see the "Navigation" option in the Control Panel side menu.

When Module Schema Changes Occur

Essentially, as long as the portal can start, and the module has all of its packages satisfied, its upgrades will run.

In theory, when you use only CE releases without building from source, the only time you will see a module schema change is when you acquire a new release containing an updated core schema version. Because the core schema version has changed, Liferay will refuse to start, and so the only time module upgrades happen is when you manually run the upgrade process.

In practice, in fulfilling the dream of modularity, module schema versions change independently of the core schema version, and so it is theoretically possible for a module upgrade to happen without a core schema version change, as long as the release manager is configured to allow it (it is allowed by default).

Of course, because transitioning between actual CE releases prevents the portal from starting until the upgrade process is officially run, this really only affects two audiences: those who build from source between core schema changes, and those who choose to use DXP instead of CE.

In those situations, because there is no additional restriction on when a module upgrade will run beyond "can the portal start up", we have a situation where module schema upgrades will simply run as soon as the portal starts up and the module is deployed, and there will be no advance warning.

Transitive Component Dependencies

So how does all of this lead to the "Navigation" option disappearing? Well, between the commit you started from and the commit you tested, pull request #50722 was merged, which resulted in an updated Liferay-Require-SchemaVersion on the com.liferay.mobile.device.rules.service module.

As a result, after starting Liferay with the newer module, the database was automatically updated to reflect the module schema changes. Liferay recorded that this schema update occurred, and when you roll back, the rolled-back version of com.liferay.mobile.device.rules.service declares that it provides an older schema version.

And this is where the problem arises. Liferay announces to the different modules that only code that knows how to work with the newer schema version should be run. (Of course, it does so very quietly, which is another point of contention; it just quietly fails instead of loudly complaining about it.)

This means that even though the bundle starts, none of its service builder components are made available as OSGi components. Through several layers of transitive dependencies, the code responsible for rendering the "Navigation" option in the Control Panel side menu no longer has its dependencies satisfied, and the option disappears.

Identifying Module Schema Changes

So how do you know if you're having problems that might be related to schema changes?

If you are building from source, you will need to know both the starting commit and the ending commit. From there, you can get a list of all the schema versions by scanning all the bnd.bnd files in the source for the Liferay-Require-SchemaVersion header, and use a diff tool in order to compare the differences.
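
For example, here is a minimal sketch of that comparison, assuming you are at the root of a liferay-portal clone (the <starting-commit> and <ending-commit> placeholders stand for your two revisions):

git checkout <starting-commit>
grep -r --include=bnd.bnd "Liferay-Require-SchemaVersion" modules | sort > /tmp/schema-before.txt

git checkout <ending-commit>
grep -r --include=bnd.bnd "Liferay-Require-SchemaVersion" modules | sort > /tmp/schema-after.txt

# any line appearing in only one of the two files is a module schema change
diff /tmp/schema-before.txt /tmp/schema-after.txt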

If you are working with releases and fix packs, you can use a tool that I built after we discovered the problem between DE-26 and DE-27 (which was caused by LPS-72269 mentioned above) to understand just how often this actually happened. This link shows the differences between DE-29 and DE-30, which involved services that are fairly critical to Liferay's function: Liferay-Require-SchemaVersion Changes Since DXP Release

If you've recently attempted to roll back, and you're worried about whether you're affected, you can check for its symptoms. While there are many reasons for Spring beans to not be registered as OSGi components (one of which we encountered in Troubleshooting Liferay from Source), if you're affected by a module schema change, you are definitely in a situation where the Spring beans are not registered as OSGi components.

For example, you can run dm wtf in the Gogo shell: Felix's dependency manager is what withholds the service builder services from being registered as OSGi components, so everything that is failing to resolve should be reported as a result.

You can also have Liferay automatically report problems by creating a configuration file and adding unavailableComponentScanningInterval=60 to that file, which turns on scanning for unavailable Spring components, as described in Detecting Unresolved OSGi Components (a minimal sketch of such a file follows the list below). You will also want to turn on INFO level logging by following the instructions on Adjusting Module Logging.

  • Between and including DE-24 and DE-30 (which includes 7.0.4 GA5), you would update the file com.liferay.portal.spring.extender.internal.configuration.SpringExtenderConfiguration.cfg and enable INFO logging on com.liferay.portal.spring.extender.internal.context which lives in the com.liferay.portal.spring.extender module
  • After and including DE-31, you would update the file com.liferay.portal.osgi.debug.spring.extender.internal.configuration.UnavailableComponentScannerConfiguration.cfg and enable INFO logging on com.liferay.portal.osgi.debug.spring.extender.internal which lives in the com.liferay.portal.osgi.debug.spring.extender module
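
For example, on DE-31 and later, a minimal sketch of creating that configuration file might look like the following, where ${LIFERAY_HOME} is a placeholder for your installation path and osgi/configs is the standard location for configuration files:

cat > ${LIFERAY_HOME}/osgi/configs/com.liferay.portal.osgi.debug.spring.extender.internal.configuration.UnavailableComponentScannerConfiguration.cfg <<EOF
unavailableComponentScanningInterval=60
EOF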

Fixing Module Schema Changes

You may have encountered Liferay-Require-SchemaVersion in the article Creating Data Upgrade Processes for Modules: it attempts to prevent outdated code from running against an upgraded schema.

Of course, the mechanism is not cluster-aware, so it doesn't prevent outdated nodes in a cluster from running older code against the newer schema; this functionality really only helps with accidental bad deployments.

So now we run into a dilemma. Assuming you have no choice but to roll back (in other words, whatever issue you encountered in the updated Liferay requires the rollback), how can we get everything working again? Well, as noted at the beginning, there's very little you can do, but a little is better than nothing.

Restore the Older Schema

If you ask around, the recommended way is to restore a database backup that contains the older schema for your module. This will have the appropriate Release_ table entries in Liferay for it to know that this older schema exists for the module, and everything is known to work with the older module code.

However, you wind up losing all data from the time the backup was made up until today. Therefore, you will have to determine if this trade-off is actually acceptable.

Restore the Newer Bundle

One way is to deploy a bundle that is actually compatible with the upgraded schema; it might be simplest to deploy just the updated module (if you're dealing with a release bundle where everything is in .lpkg files, the updated module needs to go into ${liferay.home}/osgi/marketplace/override).

However, this approach is risky for all the reasons we just experienced with TermsOfUseContentProvider in Keeping Customizations Up to Date with Liferay Source, and it can rapidly become unwieldy. If you're dealing with a release bundle, any of the modules you are forced to override can no longer be patched, which makes your installation unsupportable.

Even if you assume that you don't need supportability, if the service module we're redeploying implements a ProviderType interface and the package containing that interface has been updated, things get really messy really fast.

To start, you will need to deploy an updated version of the module that provides that interface, or use one of the Import-Package approaches described in the previous blog entry. If the service module we're redeploying also exports a package containing a ProviderType interface, you may have to deploy an updated version of everything that implements that interface so that the correct Import-Package versions are listed.

Ultimately, with this approach, you may have to deploy an updated version of all of the transitive dependencies as well, if any of these updated modules either provides or implements ProviderType interfaces. It may get to the point where you're redeploying large parts of the portal as an override in order to counteract package updates for any packages holding or any components implementing ProviderType interfaces.

Erase Module Tables

It's entirely possible that you don't actually care about the data in the affected module. For example, in the case of Mobile Device Rules, it might not have been a component you used at all, so you'd be perfectly comfortable erasing all of its data (because there is no data).

We started this blog post from the side of customization. From the customization side, if you'd run across the problem and tried to find a solution, you would eventually run into a tool named the DB Support Gradle Plugin. This tool provides a single command, cleanServiceBuilder, which does just one thing: "Cleans the Liferay database from the Service Builder tables and rows of a module."

In other words, a fairly straightforward way to get things working is to simply start over from scratch for that module.

If you have access to the Liferay source, you can navigate to the module that's failing to start, update the Gradle configuration so that the DB Support Gradle Plugin can function (you may need to update the plugin if you're using a database other than MySQL, due to a regression introduced with the fix to LPS-73124 and resolved in LPS-76854), and use cleanServiceBuilder to erase all of that module's data.
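
Here is a hedged sketch of that last step, run from a liferay-portal clone; the Gradle project path is illustrative, so point it at whichever module is failing to start, and note that the plugin may need database connection settings beyond what is shown here:

cd modules

# the project path below is illustrative; substitute the module that is failing to start
../gradlew :apps:mobile:mobile-device-rules:mobile-device-rules-service:cleanServiceBuilder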

Stopping Module Schema Changes

As described in Running the Upgrade Process, you can prevent module upgrades from running by adding autoUpgrade=false to com.liferay.portal.upgrade.internal.configuration.ReleaseManagerConfiguration.cfg. Normally this is done during an actual upgrade so that you can run the module upgrade separately from the core schema upgrade, but it also has the side-effect of preventing module upgrades from running if new upgrades are added without warning.
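
For example, a minimal sketch of creating that file, again using the standard osgi/configs location (${LIFERAY_HOME} is a placeholder for your installation path):

cat > ${LIFERAY_HOME}/osgi/configs/com.liferay.portal.upgrade.internal.configuration.ReleaseManagerConfiguration.cfg <<EOF
autoUpgrade=false
EOF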

However, this doesn't actually prevent the problem of transitive component dependencies, because we now simply have the problem in reverse.

Just as when you create a custom service builder module that doesn't have an upgrade process to its declared schema version, Liferay declares that it only knows how to work with the older schema version, but the code says that it only knows how to work with the newer schema version. As a result, none of the service builder components are made available as OSGi components.

So, what happens is that components stop working. However, at least it's very easy to roll back to the earlier version, as no module schema changes have actually happened.

Keeping Customizations Up to Date with Liferay Source

Company Blogs January 4, 2018 By Minhchau Dang

Welcome to the fourth entry in a series about what to keep in mind when building Liferay from source. First, to recap the previous entries in this series from last year:

  • Getting Started with Building Liferay from Source: How to get a clone of the Liferay central repository and how to build Liferay from source. Also some tools that can help you setup your IDE (whether it's Netbeans, Eclipse, or IntelliJ) to navigate that portal source.
  • Troubleshooting Liferay from Source: How to make changes to the source code in Liferay's central repository and deploy changes to individual Liferay modules. Also some tips and tricks in troubleshooting issues within Liferay itself.
  • Deploying CE Clustering from Subrepositories: How to take advantage of smaller repositories to deploy clustering to a CE bundle. How to maintain changes to the code base without having to clone or build the massive Liferay central repository.

Continuing onward from there, one of the practical reasons why you might want to be able to build Liferay from source is simply to be able to keep your Liferay installation up to date.

However, keeping your Liferay installation up to date involves a lot more than just rebuilding Liferay from source. After all, while many of Liferay's customers start with the noble goal of staying up to date with Liferay releases (and they're given binaries that don't require them to build from source), for a variety of reasons, these updates wind up delayed.

Among these reasons is that Liferay is a platform you can customize. Yet, when you customize Liferay with an incomplete understanding of Liferay's release artifacts (and as a consequence, an incomplete understanding of your own release artifacts if they depend on Liferay's release artifacts), your customizations will mysteriously stop working when you apply an update. When this happens, your ability to apply the update gets stalled.

This entry will talk about some of the struggles that you are likely to encounter when you try to keep your own customizations up to date as you update Liferay. In order to provide a more concrete example that you can use to understand the different roadblocks, this entry will walk you through a customization that's compiled against an initial release of Liferay, and the hardships you're likely to face if you try to deploy it with an incomplete understanding of Liferay's release artifacts.

A Minimal Customization, Part 1

In this entry, we'll go over a simple customization. I say simple, but it's one that will have failed to initialize in all 34 of the past 34 fix packs. Additionally, even if you managed to make it work in the first fix pack using naive approaches, you would have needed to update it again in 24 out of the following 33 fix packs. This would occur even if you wrote the code exactly as Liferay intended for you to write it.

This customization is deceptively simple: modifying Liferay's terms of use.

Why You Modify Terms of Use

There are many reasons to customize the terms of use, but one very prominent reason is fast approaching.

On May 25, 2018, the General Data Protection Regulation (GDPR) will go into effect. There are a great many things involved in compliance, and among them is the fact that those who collect data on citizens of the member countries of the European Union will be held to a higher standard when it comes to obtaining consent.

Chapter I, Article 4 of the GDPR defines consent as "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her".

With that in mind, a terms of use agreement is a simple way to receive consent, even though there are internet memes about whether people actually read them, and academic studies showing that many college students probably do not.

How Terms of Use is Implemented

By default, Liferay records whether you have agreed to its terms of use through a single boolean flag in the database: agreedToTermsOfUse in the User_ table.
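
If you ever want to inspect this flag directly, here is a minimal sketch, assuming a MySQL client and a database named lportal (both are assumptions; adjust for your environment):

mysql -u root -p lportal -e \
    "SELECT userId, emailAddress FROM User_ WHERE agreedToTermsOfUse = FALSE;"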

When dispatching any portal request, PortalRequestProcessor will force any authenticated user who has not agreed to the terms of use through to the Terms of Use page.

To present the user with the terms of use agreement, you can trace the logic for 7.0.x to a single JSP named terms_of_use_default.jsp that you can modify directly in the Liferay portal source code (or even in the release artifact binary).

How Terms of Use is Customized

If modifying this single JSP inside of the Liferay web application archive is insufficient for your needs (for example, you want to use services provided by a module), Liferay provides a mechanism for more elaborate customizations: a TermsOfUseContentProvider.

By default, Liferay provides you with a single example of this that allows you to configure a piece of web content to serve as the terms of use in place of the default terms of use.

In theory, because you can embed portlets inside of web content in the same way you embed them in a theme or layout template (Embedding Portlets in Themes and Layout Templates), the default implementation of TermsOfUseContentProvider provided in journal-terms-of-use can be very flexible from a user interface perspective. In practice, a lot of requests that dispatch to pages do not work until you've agreed to the terms of use, and portlet requests always go to pages.

We can start creating a new one inside of a Liferay workspace using the following command:

blade create -t service \
    -s com.liferay.portal.kernel.util.TermsOfUseContentProvider \
    -p com.example.termsofuse \
    -c ExampleTermsOfUseContentProvider \
    example-terms-of-use

If you check the interface (or you let your IDE populate all the methods in the interface so that it can compile), you find that TermsOfUseContentProvider requires implementing three methods:

  • includeConfig: This expects for you to use a RequestDispatcher to include a JSP. It is called from portal-settings-web, and you can view the area that renders it by navigating to Control Panel > Instance Settings, and in the Configuration tab, scroll down to the Terms of Use section. The one you see by default comes from the com.liferay.journal.terms.of.use module.
  • includeView: This expects for you to use a RequestDispatcher to include a JSP. It is called from portal-web, and you can view the area that renders it if you have a user that has not agreed to the terms of use or by navigating directly to /c/portal/terms_of_use.
  • getClassName: On the surface, the method name suggests that one day, Liferay might allow you to have different terms of use for different types of assets (such as a separate terms of use for document library). However, at this time, this hasn't been implemented, and the lack of stable Map iteration also means that if you have multiple content providers with different class names, Liferay presents what is functionally equivalent to a random terms of use content provider for both view and configuration (source code).

As noted in the getClassName note above, the first thing you have to do before you even customize it is disable the existing implementation.

  • If you are building from source, you can achieve this by deleting osgi/modules/com.liferay.journal.terms.of.use.jar and then removing the file modules/apps/web-experience/journal/journal-terms-of-use/.lfrbuild-portal so that it doesn't get deployed again when you rebuild Liferay from source.
  • If you are using an older release rather than building from source, you can achieve this with an empty marketplace override of com.liferay.journal.terms.of.use.jar (namely, just a JAR with no classes), as described in Overriding LPKG Files.
  • If you are using an up to date release rather than building from source, you can achieve this in later versions of Liferay with Blacklisting OSGi Modules, either using the GUI or using a configuration file to blacklist the com.liferay.journal.terms.of.use module (a sketch of this configuration file follows this list).
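
For that last option, here is a hedged sketch of the blacklist configuration file; verify the file name and property name against the Blacklisting OSGi Modules article for your exact version:

cat > ${LIFERAY_HOME}/osgi/configs/com.liferay.portal.bundle.blacklist.internal.BundleBlacklistConfiguration.config <<EOF
blacklistBundleSymbolicNames=["com.liferay.journal.terms.of.use"]
EOF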

With that in mind, let's assume that we've done that, and now we'll create a new implementation of TermsOfUseContentProvider. Here is what a set of empty method implementations might look like, which we would add to ExampleTermsOfUseContentProvider.java:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// ...

@Override
public String getClassName() {
    System.out.println("Called getClassName()");

    return "";
}

@Override
public void includeConfig(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println("Called includeConfig(HttpServletRequest, HttpServletResponse)");
}

@Override
public void includeView(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println("Called includeView(HttpServletRequest, HttpServletResponse)");
}

To get it to compile, we will need to update build.gradle to provide the dependencies that we need in order to compile these empty method implementations:

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

Attempt to Deploy the Stub Implementation

At this point, we have completed a stub implementation.

In general, whenever you work with a new extension point for the first time, you should stop as soon as you have a stub implementation and try a few small things to see if the extension point will work the way you expect it to. For a TermsOfUseContentProvider, as you will soon see, your first unwieldy obstacle is getting it to deploy at all.

If you invoke blade gw jar, it will create the file build/libs/com.example.termsofuse-1.0.0.jar. If you're using a Blade workspace, you can set liferay.workspace.home.dir in gradle.properties and use blade gw deploy to have it be copied to ${liferay.home}/osgi/modules, or you can manually copy this file to ${liferay.home}/deploy.
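
In other words, the deployment boils down to something like the following sketch, where ${LIFERAY_HOME} is a placeholder for your bundle path:

blade gw jar

# option 1: let blade copy the artifact to ${liferay.home}/osgi/modules
# (requires liferay.workspace.home.dir in gradle.properties)
blade gw deploy

# option 2: copy the artifact to the deploy folder manually
cp build/libs/com.example.termsofuse-1.0.0.jar ${LIFERAY_HOME}/deploy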

When you do so, you will see a message saying that the bundle is being processed, but the bundle never starts.

If you check with the Gogo shell (Felix Gogo Shell) using lb -s | grep example, you will see that it has stayed in the INSTALLED state. If you note the bundle ID that comes back (it's the first column in the list of results) and then use diag #, where you replace # with the bundle ID, it will tell you why it's not in the ACTIVE state:

Unresolved requirement: Import-Package: com.liferay.portal.kernel.util; version="[7.0.0,7.1.0)"

If this is the first time you've seen an error message like this, you will want to read up on Resolving Bundle Requirements and Detecting Unresolved OSGi Components for a little bit of background before continuing.

The Naive Bundle Manifest

The previously linked documentation talks about how you can resolve the error, but if you're building up expertise rather than troubleshooting, I think it's also useful to understand what's causing the problem, and thus reach an understanding of why certain steps can fix that problem.

So, why does this error arise in the first place? Well, if you open up build/tmp/jar/MANIFEST.MF (which we describe in more detail in OSGi and Modularity for Liferay Portal 6 Developers), you should see the following lines:

Import-Package: com.liferay.portal.kernel.util;version="[7.0,7.1)",jav
 ax.servlet.http;version="[3.0,4)"

These lines are why the com.example.termsofuse-1.0.0.jar bundle asks for com.liferay.portal.kernel.util with the specified version range. This leaves us with two unanswered questions: (1) why does it ask for version 7.0 (inclusive) as the lower part of the range, and (2) why does it ask for version 7.1 (exclusive) as the upper part of the range?

Default Import-Package Lower Bound

First, our bundle imports the com.liferay.portal.kernel.util package at all because it is the package containing the interface we implement, com.liferay.portal.kernel.util.TermsOfUseContentProvider.

This interface comes from the com.liferay.portal.kernel dependency specified in build.gradle, and since we've specified version 2.0.0 of this dependency, we can find it in one of the subfolders of ${user.home}/.gradle/caches/modules-2/files-2.1/com.liferay.portal/com.liferay.portal.kernel/2.0.0.

Note: If this is the first time you've needed to check inside a .gradle cache, the folder layout is similar to a Maven cache except it uses the SHA1 as the folder name rather than as a separate file, and one SHA1 corresponds to a .pom file while the other corresponds to a .jar file. In some cases, there may be a third SHA1 that corresponds to the source code for the artifact.
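
For example, here is a quick sketch of poking around that cache; the SHA1 folder names will differ from machine to machine:

cd ~/.gradle/caches/modules-2/files-2.1/com.liferay.portal/com.liferay.portal.kernel/2.0.0
find . -name "*.jar"

# print the manifest of the binary artifact without extracting it
unzip -p */com.liferay.portal.kernel-2.0.0.jar META-INF/MANIFEST.MF | less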

If we check inside the META-INF/MANIFEST.MF file within the .jar file artifact, we'll find the following lines buried inside of it:

Export-Package: com.liferay.admin.kernel.util;version="1.0.0";uses:="c
  ...
 feray.portal.kernel.url;version="1.0.0";uses:="javax.servlet",com.lif
 eray.portal.kernel.util;version="7.0.0";uses:="com.liferay.expando.ke
  ...

And this is where we get the 7.0 as the lower bound on the version range: version 2.0.0 of the com.liferay.portal.kernel artifact exports version 7.0.0 of the com.liferay.portal.kernel.util package.

Default Import-Package Upper Bound

For many package imports, like the javax.servlet.http import, you'll notice that they take the form [<x>.<y>, <x+1>), where the upper part of the range essentially asks for the next major version. However, our com.liferay.portal.kernel.util import is a lot less optimistic, instead choosing a version range of [<x>.<y>, <x>.<y+1>). The reason lies in how our code uses the classes from the packages we import.

In the case of javax.servlet.http, we're just using objects that implement the HttpServletRequest and HttpServletResponse interfaces. Whenever you simply consume an interface (or consume a class), the default accepted version range will be set to [<x>.<y>, <x+1>).

In the case of com.liferay.portal.kernel.util, we're implementing the TermsOfUseContentProvider interface. If you implement an interface, then the Import-Package statement will sometimes be optimistic by default and specify [<x>.<y>, <x+1>) and sometimes be more pessimistic by default and specify [<x>.<y>, <x>.<y+1>). In our case, it's chosen the more pessimistic default.

There are technical details that the creator of the interface needs to consider when deciding whether implementors need to be optimistic or pessimistic (The Needs of the Many Outweigh the Needs of the Few). These details center around fairly nebulous concepts like whether the interface is intended to be implemented by an "API provider" or by an "API consumer" (Semantic Versioning Technical Whitepaper), where an API provider and an API consumer are very abstractly defined.

However, from the side of someone implementing an interface, we can simply look at the end result, which is a commitment on the stability of the interface:

  • If it is marked as a ConsumerType (or not marked at all, since ConsumerType is assumed if no explicit annotation is provided), this interface is not allowed to change during a minor version increment to its package, so implementors do not need to worry about minor version changes
  • If it is marked as a ProviderType, this interface is allowed to change during a minor version increment to its package, so implementors do need to worry about minor version changes

This leads to the following default behavior when setting the upper version range on a package import involving an implemented interface:

  • If the interface we implement is annotated with the ConsumerType annotation (or not annotated at all, since ConsumerType is assumed if no explicit annotation is provided), we can be optimistic, and the default accepted version range will be set to [<x>.<y>, <x+1>)
  • If the interface we implement is annotated with the ProviderType annotation, we should be pessimistic, and the default accepted version range will be set to [<x>.<y>, <x>.<y+1>)

And this is where we get the 7.1 as the upper bound on the version range: TermsOfUseContentProvider is annotated with the ProviderType annotation, which means that minor version changes to the package might also include an update to the package we implement, so we should be conservative when specifying the accepted version ranges.

The Improved Bundle Manifest

So now that we know that the default behavior for our package import is [<x>.<y>, <x>.<y+1>), we have two options for getting our bundle to deploy. Either we can (a) choose a different dependency to generate a version range compatible with our installation automatically, or (b) set a broader version range manually.

Automatically Set Import-Package

In the case of (a), now that you know where the lower part of the range <x>.<y> comes from, you can change the dependency version of com.liferay.portal.kernel so that it exports the same version of the package that is exported in your Liferay installation. For example, if you know that your version of com.liferay.portal.kernel is a snapshot release of 2.57.1, you can specify the following in your build.gradle:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.57.0"

However, how exactly do you find that value?

If you've built from source, all the versions are computed during the build initialization (specifically the ant setup-sdk step) and copied to .gradle/gradle.properties. If you open up that file, you'll find something that looks like this, which will give you both the module name and the module version.

com.liferay.portal.impl.version=x.y.z-SNAPSHOT
com.liferay.portal.kernel.version=x.y.z-SNAPSHOT
com.liferay.portal.test.version=x.y.z-SNAPSHOT
com.liferay.portal.test.integration.version=x.y.z-SNAPSHOT
com.liferay.util.bridges.version=x.y.z-SNAPSHOT
com.liferay.util.java.version=x.y.z-SNAPSHOT
com.liferay.util.taglib.version=x.y.z-SNAPSHOT

If you're curious where that information comes from, the bundle name is found inside build.xml as the manifest.bundle.symbolic.name build property (example here), while the bundle version is found inside bnd.bnd as the Bundle-Version (example here).

If you're working with a release artifact, then as documented in Configuring Dependencies, open up the portal-kernel.jar provided with your version of the Liferay distribution and check inside of META-INF/MANIFEST.MF for its version. This will provide you with what the version was at build time for portal-kernel.jar. If constantly unzipping .jar files gets to be too tedious, you can also look it up using a tool I created for seeing how Liferay's module versions have evolved over time: Module Version Changes Since DXP Release
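
For example, run from the folder containing portal-kernel.jar, the following sketch reads the version without fully extracting the archive:

unzip -p portal-kernel.jar META-INF/MANIFEST.MF | grep "Bundle-Version"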

However, the automatic approach has a limitation: Liferay does not release com.liferay.portal.kernel with every release of Liferay, but rather, each Liferay release uses a snapshot release of com.liferay.portal.kernel.

This isn't a big deal if the snapshot has a minor version like .1, because a packageinfo minor version increment will also trigger a bundle minor version increment, and so a .1 snapshot will have the same minor versions on its exports as the original .0 release.

However, when the snapshot has a minor version of .0, things get murky precisely because it's a snapshot: there was some package change between the previous minor version and the snapshot version, but it's not guaranteed to have been the package we are using. Additionally, even if it wasn't our package that was updated, our package might still change between the snapshot used for the Liferay release and the time the actual .0 artifact is published, because the Baseline Plugin allows all packages to experience a minor version increment up until the version is published and the baseline version changes.

As a result, you have to check both the release version and one minor version below to see which one you need to use in order to get the correct version range generated automatically. If you are implementing multiple interfaces from different packages within the same artifact, it's also theoretically possible that there is no version you can use to have the correct version range generated automatically.

  • DE-15 was released with a snapshot of 2.28.0. The snapshot version exports 7.22, version 2.27.0 exports 7.22, and version 2.28.0 exports 7.22.
  • DE-27 was released with a snapshot of 2.42.0. The snapshot version exports 7.30, version 2.41.0 exports 7.29, and version 2.42.0 exports 7.30.
  • DE-28 was released with a snapshot of 2.43.0. The snapshot version exports 7.31, version 2.42.0 exports 7.30, and version 2.43.0 exports 7.31.

Manually Set Import-Package

At this point, we've discovered that the automatic approach is hardly automatic at all, because we're still investigating the package versions of different artifacts. We also know that an automatic approach might fail. Given that we'll need to investigate all the artifacts and package versions anyway, how do we achieve (b)?

Since you're setting a version range, you will want to set the broadest version range that is known to compile successfully. To that end, from the OSGi perspective, you update bnd.bnd with a new Import-Package statement that is known to work, and this Import-Package will automatically be added to the generated META-INF/MANIFEST.MF. We also add * to tell bnd to include everything else it was planning to add.

In the case of a ProviderType (which is really the only time when this kind of problem happens), its API can change for any minor release. Therefore, we should only include version ranges where we know the package has not yet changed, and we should not project into the future beyond that. Therefore, if we know that our interface has its current set of methods at <a>.<i>, and it still has not changed as of <b>.<j>, we would choose the version range [<a>.<i>, <b>.<j+1>).

In the specific case of TermsOfUseContentProvider, the current interface methods date back to version 7.0 of the package. You can check the source code of the interface within DE-34 to confirm that it is still unchanged in the version of Liferay you are using, and unzip portal-kernel.jar to check META-INF/MANIFEST.MF for its corresponding export version, which is 7.40.0. This means that we can use the following Import-Package statement in our bnd.bnd.

Import-Package: com.liferay.portal.kernel.util;version="[7.0,7.41)",*

If this process gets to be too tedious, you can also look it up using a tool I created for seeing how Liferay's package versions have evolved over time: Package Breaking Changes Since DXP Release

History of a Similar Customization

If the 7.0 to 7.41 version range did not immediately clue you in: were you to scan through the evolution of com.liferay.portal.kernel.util across different versions of Liferay, you would discover that while the TermsOfUseContentProvider interface itself has not changed at all since the initial DXP release, the package it resides in is updated very frequently. In fact, it has changed in 25 of the past 34 fix pack releases.

Because both the automatic process and the manual process rely on minor versions, this means that no matter which route you chose, you would have needed to modify either your build.gradle or your bnd.bnd for each one of those releases, or your custom terms of use would have failed to deploy in 25 out of the past 34 fix packs.

This leads us to the following question.

Liferay has its own journal-terms-of-use, which we mentioned earlier in this entry, that implements the TermsOfUseContentProvider interface. Obviously it should run into the same issue. So, how has Liferay been keeping journal-terms-of-use up to date?

Import-Package Version Range Macro

At the beginning, journal-terms-of-use started by trying to solve the reverse problem: if we know that we aren't changing the API, how do we ensure that the bundle can deploy on older versions? The idea was built on a concept where we'd release the Web Experience package separately from the rest of Liferay, and we wanted this package to be able to deploy against older versions of Liferay.

With LPS-64350, Liferay decided to achieve this using version range macros inside of the bnd.bnd:

Import-Package: com.liferay.portal.kernel.util;version="${range;[=,=+)}",*

Essentially, this says that we know it works as of the initial major version release, and we know it will work up until the next minor version. From there, we'd update build.gradle with every release whenever we confirmed that we had not changed the interface with com.liferay.portal.kernel, and the version range macro would allow it to be compatible with all previous releases without us having to explicitly lookup the package version for the current release.

Gradle Dependency Version Range

However, after a while, this got to be extremely tedious, because we were updating build.gradle with every release of com.liferay.portal.kernel.

From there, we came up with a seemingly clever idea. Since Liferay was rebuilt at release time anyway, we could tell Gradle to fetch the latest release of com.liferay.portal.kernel. As a result, we'd simply re-compile Liferay, and this latest release combined with the version range macro would give us the desired version range automatically. This is functionally equivalent to replacing the com.liferay.portal.kernel dependency with the following:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "[2.0.0,3.0.0)"

We later learned that this approach had two critical problems.

First, Gradle is not guaranteed to try to use the latest version of a dependency whenever you specify a range. Therefore, you might run into a situation where your portal would fail to deploy journal-terms-of-use simply because Gradle happened to choose something earlier than the latest dependency version.

Second, we might implement multiple interfaces that come from multiple packages published by the com.liferay.portal.kernel artifact. Because we only had a version range macro set for the com.liferay.portal.kernel.util package, the journal-terms-of-use module would suddenly fail to deploy if we were to rebuild it and deploy the bundle to an older Liferay release (such as when building for a hotfix). This could happen due to the other ProviderType interfaces it might have implemented, or because Liferay converted a ConsumerType interface into a ProviderType interface without incrementing the major version on the package (this isn't required, similar to changing the byte-code compilation level, and so Liferay never does so).

Dependency Version as a Property

As a temporary stop-gap measure for building against older versions of Liferay, we needed a way to retain the old manifests. As noted before, there's a problem: the version of com.liferay.portal.kernel that accompanies a past release is an unpublished snapshot.

In theory, we could simply publish a snapshot to our local Gradle cache at build time and reference it, but internally at Liferay, our source formatter rules disallow using an un-dated snapshot as part of a dependency version. Luckily, there's a workaround: because the rule simply checks for the string, as long as something else provides the snapshot version (like a variable or a build property), we are allowed to use it.

So, our workaround at the time was to take advantage of something that is automatically set inside of gradle.properties whenever you build Liferay from source. This can also be set manually in your own Blade workspace. Ultimately, the net effect of using a build property is that you update a single file that can be referenced by all other custom modules within the same workspace, which is the same idea as using a Groovy variable or a Maven build property.

com.liferay.portal.kernel.version=2.57.1-SNAPSHOT

Once this property is set, there is an additional shorthand for using it: the Liferay Gradle plugin uses a LiferayExtension that allows us to use the name default to reference this property value, or substitutes the Apache Ivy alias latest.release (which Gradle happens to recognize) if the property has not been set.

As a result, our com.liferay.portal.kernel dependency looks like this:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "default"

With this in place, all of our modules compile against the property-specified version of com.liferay.portal.kernel. Combined with the original version range macro inside of bnd.bnd, any module following this pattern will fail at compile time if a ProviderType interface it implements has changed. We can then just update this property any time we're updating to a later fix pack or rebuilding Liferay from source to confirm that the ProviderType interfaces have not changed.

Treat it Like a ConsumerType

In order to fix the bug introduced with the Gradle versions while also retaining the intended result of being able to deploy a module like journal-terms-of-use on multiple versions of Liferay, the answer we arrived at in LPS-70519 was to simply treat TermsOfUseContentProvider like a ConsumerType when specifying version ranges, even though it's been marked as a ProviderType.

In other words, we manually set the lower bound to be the version of com.liferay.portal.kernel.util that is exported by the minimum com.liferay.portal.kernel that provides other API that journal-terms-of-use needs, and we set the upper bound to be the next major version after that, just as would happen automatically with implementing a ConsumerType interface or any other regular class usage.

Import-Package: com.liferay.portal.kernel.util;version="[7.15.0,8)",*

There are two downsides to this, both of which Liferay accepts.

The first is that the module advertises something that is technically untrue. Because it is a ProviderType, Liferay can modify TermsOfUseContentProvider before the next major release, and even though the module declares that it will work with every version of the package up through 8, this won't be true if the interface gets updated.

The second is that this approach results in journal-terms-of-use being unable to detect when we make binary-incompatible changes to TermsOfUseContentProvider. However, in practice, Liferay can get away with treating this particular ProviderType as though it were a ConsumerType for journal-terms-of-use, because Liferay itself maintains both the interface and the implementation, and therefore a code reviewer would know if we changed the interface and know to update our implementation of that interface.

A Minimal Customization, Part 2

With all of that background information, we can now come back to our module and make it work.

Choose a Long-Term Solution

At this point, we have two exactly opposite solutions that we can use over the long term: (a) add the dependency version as a build property, or (b) treat the ProviderType interface as though it were a ConsumerType interface. With the former solution, you accept the idea that you will need to check each time you update, but you do it once per workspace rather than once per module. With the latter solution, you reject that as being too tedious and accept the risk that Liferay might one day change the ProviderType and your module will stop working.

If we'd like to accept the downside of constantly updating our com.liferay.portal.kernel version in order to ensure that the TermsOfUseContentProvider interface has not changed, we can set our dependency version as default and maintain gradle.properties with an up to date value of com.liferay.portal.kernel.version for each Liferay release you update to. This allows us to handle all ProviderType interfaces in one place at compile time and leads to the following bnd.bnd entry for Import-Package:

Import-Package: com.liferay.portal.kernel.util;version="${range;[=,=+)}",*

Because the version of com.liferay.portal.kernel has changed at compile time, it's likely that the manifest is also changing. Therefore, if you go with this solution, you will want to increment the Bundle-Version each time you update the property value, just as you might do with Maven artifacts that depend on changing property values, because the binary artifact produced by the compilation will change alongside the property value.

If we'd prefer not to accept the downside of constantly updating our com.liferay.portal.kernel version, you can choose to treat TermsOfUseContentProvider as a ConsumerType. In this case, you'd leave com.liferay.portal.kernel at whichever minimum version you need for API compatibility, and add the following bnd.bnd entry for Import-Package:

Import-Package: com.liferay.portal.kernel.util;version="${range;[==,+)}",*

As noted earlier, you essentially give up the ability to check for binary compatibility at build time, and you will need to periodically check in on the ProviderType interfaces to make sure that they have not changed, because those changes will not be detected at build time and they will not be noticed at deployment time. You will likely only notice if you coincidentally wrote a functional test that happens to hit a page that invokes the new methods on the interface.

Successfully Deploy the Stub Implementation

Whichever route you choose, we have completed our updates to the stub implementation.

Just as before, if you invoke blade gw jar, it will create the file build/libs/com.example.termsofuse-1.0.0.jar. If you're using a Blade workspace, you can set liferay.workspace.home.dir in gradle.properties and use blade gw deploy to have it be copied to ${liferay.home}/osgi/modules, or you can manually copy this file to ${liferay.home}/deploy.

When you do so, you will see a message saying that the bundle is being processed, and then you will see a message saying that the bundle has started.

If you then navigate to /c/portal/terms_of_use, then assuming that you also disabled the com.liferay.journal.terms.of.use module as documented earlier, it will show you a completely empty terms of use rather than the default terms of use, and all you can do is agree or disagree to the empty page.

Just Beyond the Stub Implementation

While this article is focused on understanding roadblocks rather than providing sample code, it would be a little disingenuous to stop here and say that we have a functioning implementation of a terms of use override, so we'll take a few steps forward and bring in some additional information.

Just like any other component that needs to include JSPs, we'll need a reference to the appropriate ServletContext. You can find an example of this in Customizing the Control Menu and Customizing the Product Menu.

For our specific example, we might add the following to our bnd.bnd so that we can have a ServletContext:

Web-ContextPath: /example-terms-of-use

We would then add the following imports and replace the content of the includeView method in ExampleTermsOfUseContentProvider.java, assuming that the bundle symbolic name from the steps so far is com.example.termsofuse (which it should be, by default):

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import org.osgi.service.component.annotations.Reference;

// ...

@Override
public void includeView(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println("Called includeView(HttpServletRequest, HttpServletResponse)");

    _servletContext.getRequestDispatcher("/terms_of_use.jsp").include(request, response);
}

@Reference(target = "(osgi.web.symbolicname=com.example.termsofuse)")
private ServletContext _servletContext;

If we then create a file named src/main/resources/META-INF/resources/terms_of_use.jsp with the content <h1>TODO</h1> and redeploy our module, we'll now see the words "TODO" just above the agree and disagree buttons when we navigate to /c/portal/terms_of_use.

Deploy CE Clustering from Subrepositories

Technical Blogs November 3, 2017 By Minhchau Dang

About one month ago, Liferay posted an announcement which talked about how to deploy the New Clustering Code for Liferay Portal Community.

One of the first steps documented in this announcement was to clone the liferay-portal repository. As mentioned in a previous blog post, Getting Started with Building Liferay from Source, while this sounds like it shouldn't be a big deal, the large size of the liferay-portal repository makes this first step far from trivial from a time perspective. Additionally, the announcement also tells you to run ant all, which is also far from trivial from a time perspective.

So you might be wondering, "Is it possible to deploy the clustering code without cloning that gigantic repository and running ant all?"

Yes, there's one way that's a little bit more complex, but ultimately much faster than cloning the entire liferay-portal repository: deploy the clustering code from subrepositories. Doing so provides another way to maintain any customizations to the clustering code by allowing collaborators to fork a much smaller repository.

After reading this post, you'll hopefully understand a little bit more about Liferay's repository layout and you'll hopefully have a better sense of how you can make small-scale changes to Liferay for your own personal purposes.

With that being said, this process is admittedly both slower and more complex than using changes bundled as a Marketplace application (as others are planning here), or having someone build the JARs for you and install them when you run docker-compose (as others have described here). So it might only be of interest to those who are building Liferay from source.

Step 1: Locate Subrepositories

Our first step is to locate what we're being told to deploy from liferay-portal and use that to identify which subrepositories we need to clone. We can do that by looking at the commands that are provided in the announcement.

cd modules
../gradlew :apps:foundation:portal:portal-cluster-multiple:deploy
../gradlew :apps:foundation:portal-cache:portal-cache-ehcache-multiple:deploy
../gradlew :apps:foundation:portal-scheduler:portal-scheduler-multiple:deploy

Essentially, the modules folder in the Liferay source is the root of a Gradle multi-project build. As noted in the Gradle documentation, "The path of a task is simply its project path plus the task name." So in this case, we can see that we are executing the deploy task for each of the following projects:

  • :apps:foundation:portal:portal-cluster-multiple
  • :apps:foundation:portal-cache:portal-cache-ehcache-multiple
  • :apps:foundation:portal-scheduler:portal-scheduler-multiple

If you replace the colons with slashes, each of these project paths gives you the actual file path to the corresponding Gradle project:

  • modules/apps/foundation/portal/portal-cluster-multiple
  • modules/apps/foundation/portal-cache/portal-cache-ehcache-multiple
  • modules/apps/foundation/portal-scheduler/portal-scheduler-multiple

Now that we have the project paths, the next step is to find the subrepository corresponding to that Gradle project. To identify that subrepository, you check each parent folder until you locate a .gitrepo file, which is essentially a metadata file that describes the root of a subrepository. These are the .gitrepo files for the Gradle projects we need to deploy.
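To give you an idea of what to look for, the fields we care about in a .gitrepo file look something like this (a sketch of the format rather than a verbatim copy of any one file):

[subrepo]
	remote = git@github.com:liferay/com-liferay-portal-cache.git
	branch = 7.0.x
	commit = 034a3671f9bded7c029a54488ef14d290754f2f2
	mode = pull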

If you look inside the .gitrepo file, you will find a remote field, which designates the repository we need to clone if you wish to have the subrepository corresponding to that folder in liferay-portal. Based on that, these are the commands we'd run in order to clone just the repositories we need to deploy clustering:

git clone git@github.com:liferay/com-liferay-portal.git
git clone git@github.com:liferay/com-liferay-portal-cache.git
git clone git@github.com:liferay/com-liferay-portal-scheduler.git

Step 2: Locate the Release Commit

Now that you have the subrepository, the next step is to decide whether you want to deploy the latest 7.0.x code or deploy the code at the time of the 7.0.4 GA5 release. Regardless of which route you choose, you'll need to make sure that you have the 7.0.x branch, since you will either checkout the branch itself or you will checkout a commit that only exists in that branch.

for folder in com-liferay-portal com-liferay-portal-cache com-liferay-portal-scheduler; do
    cd $folder
    git fetch origin 7.0.x:7.0.x
    git checkout 7.0.x
    cd ..
done

Liferay's GA releases aren't tagged in subrepositories, so you'll need to find a different frame of reference. Luckily, if you look inside the .gitrepo file, you will find a commit field, which designates the commit in the subrepository that matches up against the current state within liferay-portal. Because all of the URLs we've been using so far in the liferay-portal repository refer directly to the 7.0.4-ga5 tag, the commit field corresponds to the subrepository commit for the release. Therefore, you can navigate into each folder and check out the correct commit. Assuming you're using GA5, you would use these commands:

cd com-liferay-portal
git checkout ae0c594e47b3728d95340e4eddf1f5836b56f38c
cd -

cd com-liferay-portal-cache
git checkout 034a3671f9bded7c029a54488ef14d290754f2f2
cd -

cd com-liferay-portal-scheduler
git checkout 3de4a3de6c94337257994c8d32737ccbcd88b374
cd -

Step 3: Prepare Gradle Wrapper

At this point, you'll want to get a copy of the Gradle wrapper. Some repositories (particularly those that are already in pull mode) already have a Gradle wrapper committed to version control. However, this is not true for any of the clustering modules (at least, not at the time of this writing).

The version of Gradle used by Liferay at the specific tag is documented in gradle-wrapper.properties in the portal source, and you can use Blade to retrieve a matching wrapper. In this case, Liferay uses Gradle 3.3, so we can acquire the proper Gradle wrapper by using blade init to initialize a Blade workspace (which will contain a Gradle wrapper), moving the wrapper to the shared parent folder for all subrepositories, and resetting its version to 3.3.

mkdir blade-temp
cd blade-temp
blade init

mv gradle ..
mv gradlew ..
mv gradlew.bat ..

cd -
rm -rf blade-temp

./gradlew wrapper --gradle-version=3.3

Step 4: Configure Liferay Home

Now, all we have to do is configure the Liferay module deployment folder. This should be the location of your Liferay bundle (in this case, your Liferay 7.0.4 GA5 bundle).

First, check build.gradle to see if it has a reference to a deployDir. If it does (which is the case in repositories like Audience Targeting), it will take precedence over any configuration file, so you will need to modify this value to point to your desired deployment folder. In a default Liferay bundle where the OSGi and deployment folders have not changed, make sure to append either /deploy to the end of the path if you wish for hot deploy to take care of things, or /osgi/modules if you'd like to deploy directly to the folder OSGi is monitoring for changes.

liferay {
    deployDir = new File("/path/to/liferay/home/deploy")
}

If there is no reference to deployDir inside of build.gradle, you have two options. You can either add it to build.gradle for each subrepository, or you can modify gradle.properties for each subrepository to specify the value for liferay.home. The latter is recommended, just because it matches up with how things normally happen elsewhere in Liferay source.

liferay.home=/path/to/liferay/home

Step 5: Deploy Clustering Modules

If you prefer something that imitates the commands that were published in the original announcement, you can do the following:

cd com-liferay-portal
../gradlew :apps:foundation:portal:portal-cluster-multiple:deploy
cd -

cd com-liferay-portal-cache
../gradlew :apps:foundation:portal-cache:portal-cache-ehcache-multiple:deploy
cd -

cd com-liferay-portal-scheduler
../gradlew :apps:foundation:portal-scheduler:portal-scheduler-multiple:deploy
cd -

If you prefer something that imitates commands you might already be familiar with in working with a Blade workspace, you can do the following:

cd com-liferay-portal/portal-cluster-multiple
blade gw deploy
cd -

cd com-liferay-portal-cache/portal-cache-ehcache-multiple
blade gw deploy
cd -

cd com-liferay-portal-scheduler/portal-scheduler-multiple
blade gw deploy
cd -


Troubleshooting Liferay from Source

Company Blogs September 6, 2017 By Minhchau Dang Staff

Many months later, it's time to continue the series on Liferay from Source.

For this segment, we'll start with the following question: why would you have wanted to build from source in the first place?

This particular series entry assumes that maybe you want to fix a bug that affects you. Or maybe you feel like there is a feature that is missing from an existing module, but you want to contribute that change so that it gets incorporated into the Liferay upstream repository as something that Liferay maintains instead of being a custom module that you maintain. In both cases, your ultimate goal is to have your contribution benefit the larger community instead of being isolated to your own Liferay installation.

In this example, I'm going to walk you through the process of troubleshooting a bug that exists in the 7.0.3 GA4 code base but seemingly not in the current master code base. It's an example of how Liferay internally handles the two diverging branches. Along the way, you'll be introduced to Liferay's internal processes for handling bugs, and ultimately you'll find different techniques that our internal developers use for troubleshooting issues. Finally, you'll see how to deploy your changes on top of what you've already built from source.

Step 1: Encounter the Problem

First, let's talk about the problem I want to solve (which in this case happens to be a bug), which is logged in LPS-74173. Essentially, the fix for LPS-64906 was incomplete. If you disable the scheduler via portal-ext.properties by setting scheduler.enabled=false and then navigate to the Forms portlet by selecting Liferay > Content > Forms, you are unable to view the portlet due to an UnsupportedOperationException.

java.lang.UnsupportedOperationException
    at com.liferay.portal.workflow.WorkflowDefinitionManagerProxyBean.getActiveWorkflowDefinitions(WorkflowDefinitionManagerProxyBean.java:60)

If you'd like to try this yourself, you will probably use a specific tag where the bug definitely exists, which is the tag corresponding to 7.0.3 GA4, aptly named 7.0.3-ga4. In case you are new to Git and do not yet know how to pull down a specific tag without pulling down every tag (and Liferay can accumulate lots of them), you can do it with these commands from the folder where you cloned the repository:

git fetch --no-tags git@github.com:liferay/liferay-portal.git tags/7.0.3-ga4:refs/tags/7.0.3-ga4
git checkout 7.0.3-ga4

Note that you can either build this tag from source, or you can simply use the existing release bundle. Using existing bundles is generally preferred if you simply want to use Liferay, but Liferay's decision to place everything inside of .lpkg files (an additional layer of zip files) and to require Marketplace overrides makes debugging and fixing issues much more difficult, as you may be required to restart the server after every change for the overrides to be recognized.

Therefore, it's recommended to build from source if your end goal is to fix an issue.

Step 2: Reproduce in Master, Part 1

As part of Liferay's validation process, we try to determine if the issue is still present in the master branch. This is because in most (but not all) cases, Liferay's red tape dictates that you should try to make sure that the fix is applied to the unstable branch (master) before making it available in the stable version (in this case, 7.0.x).

Master itself doesn't have any useful tags, so if you'd like to try this yourself, you will probably use a specific hash where the bug definitely exists, because it might be fixed in the actual master by the time you view this blog post. In this case, it's known to exist at 325c62f9b7709c5c96ef484c9b050199500a41eb (which is where Liferay was when I started this blog entry), which you can checkout with the following command:

git checkout 325c62f9b7709c5c96ef484c9b050199500a41eb

After you've built Liferay from source, start up Tomcat in debug mode. If you navigate to the Forms portlet with scheduler disabled, you'll find that the issue doesn't appear.

Digging into the code, an easy conclusion to draw is that the issue does not occur in master.

You might draw this conclusion because, as a side-effect of some refactoring in LPS-66761, the code throwing the exception is no longer directly included in the portlet class of the Forms portlet. However, the code still exists in a different class: WorkflowDefinitionsDataProvider.

Because the code still exists, you now deal with another layer of Liferay red tape. Before you can bring the supposed fix back to a release branch, you have to prove to yourself (and to the reviewer) that this code change actually fixed the issue instead of masking the issue.

One approach is to perform a git bisect, but that's only worthwhile if you're fairly sure that the fix works. In my case, I'm pretty skeptical (the code throwing the stack trace hasn't been touched), so I will want to dig into where the code moved to see if a similar bug is still present.

The trick, of course, is figuring out how to get to the new code.

Step 3: Reproduce in Master, Part 2

Because it's an OSGi component, it's possible the error goes away due to the component no longer meeting its requirements when scheduler is disabled (which would make our job really easy). So, let's ask ourselves this question: is an instance of this class still registered as an OSGi component?

Let's find out by asking Gogo shell. The following code snippet asks OSGi about all the components that claimed that they provided the DDMDataProvider service (the second argument that is currently null is a filter), and then prints out the class names. We're checking to see if our WorkflowDefinitionsDataProvider appears in the list.

each [($.context getServiceReferences "com.liferay.dynamic.data.mapping.data.provider.DDMDataProvider" null)] { ($.context service $it) class }

Sure enough, it shows up. This means that in theory, this code can be called, because our component is available.

Step 4: Reproduce in Master, Part 3

The next part is to figure out how to actually get the code to be called. There are a lot of different ways to do this (such as digging up the call stack), but since in this case the code simply moved, an easier test is to see if simply visiting the original page hits a breakpoint we set in our code. We achieve this by attaching a remote debugger, adding a breakpoint to the getData method in WorkflowDefinitionsDataProvider, and visiting the Forms portlet.

[Screenshot: the remote debugger pauses at the breakpoint in WorkflowDefinitionsDataProvider.getData]

Looks like our breakpoint was hit! Let's step over it and see what happens.

[Screenshot: stepping over the line shows the exception being caught and swallowed]

Liferay is swallowing the exception because WARN level logging is not enabled on the class DDMDataProviderInvokerImpl. Let's go ahead and enable it and see if we reproduce the issue. You reach the logging interface by choosing Control Panel > Configuration > Server Administration, choosing Log Levels, choosing Add Category, and adding com.liferay.dynamic.data.mapping.data.provider.internal at WARN level.

After visiting the Forms portlet again, we see the same error as we saw in 7.0.3-ga4, though the portlet still renders because the exception is swallowed.

java.lang.UnsupportedOperationException
    at com.liferay.portal.workflow.WorkflowDefinitionManagerProxyBean.getActiveWorkflowDefinitions(WorkflowDefinitionManagerProxyBean.java:60)

Detour: Workflow Implementation Details

Now that we've encountered the bug, let's take a step back and understand why this exception was raised.

Workflow is a weird beast where a dummy implementation is always provided by default, which is WorkflowDefinitionManagerProxyBean. Therefore, if the code ever calls any methods on this default implementation, it will throw an exception. The only way for these exceptions to go away is for a different, non-default implementation to be provided. In the case of Liferay, this non-default implementation is known as Kaleo.
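As a purely conceptual sketch (the signature below is simplified, so don't treat it as the actual Liferay source), the dummy implementation behaves like this:

public class WorkflowDefinitionManagerProxyBean
	implements WorkflowDefinitionManager {

	// Every method throws until a real implementation (Kaleo) is wired in.
	@Override
	public List<WorkflowDefinition> getActiveWorkflowDefinitions(
		long companyId, int start, int end) {

		throw new UnsupportedOperationException();
	}

	// ...and so on, for every other method on the interface
}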

Looking at the code block that is throwing the exception, the call to _workflowEngineManager succeeds, but the call to _workflowDefinitionManager fails. This suggests that the non-dummy implementation is available for WorkflowEngineManager, but it is not available for WorkflowDefinitionManager. Let's use Gogo shell to confirm.

each [($.context getServiceReferences "com.liferay.portal.kernel.workflow.WorkflowDefinitionManager" null)] { ($.context service $it) class }
each [($.context getServiceReferences "com.liferay.portal.kernel.workflow.WorkflowEngineManager" null)] { ($.context service $it) class }

This code confirms that there is only a proxy implementation for WorkflowDefinitionManager, and both a proxy and a non-proxy implementation for WorkflowEngineManager. So the question is, why did the non-proxy implementation for WorkflowDefinitionManager (WorkflowDefinitionManagerImpl) not get registered to OSGi? The most likely answer is that its dependencies were not satisfied.

We can take a look at each of its non-optional @Reference annotations to see whether a matching service exists for each one.

$.context getServiceReferences "com.liferay.portal.workflow.kaleo.service.KaleoDefinitionLocalService" null
$.context getServiceReferences "com.liferay.portal.workflow.kaleo.KaleoWorkflowModelConverter" null
$.context getServiceReferences "com.liferay.portal.kernel.workflow.comparator.WorkflowComparatorFactory" null

Running these commands, we find that KaleoDefinitionLocalService and KaleoWorkflowModelConverter have not been satisfied. Interestingly, KaleoDefinitionLocalService has the naming format of a Liferay service builder service (it ends with the word LocalService), and a quick investigation inside of the source code in an IDE reveals that it is a service builder service provided by the portal-workflow-kaleo-service module, made available in a bundle with the symbolic name com.liferay.portal.workflow.kaleo.service.

Detour: Wiring Spring into OSGi

If you've been using Liferay for a while, you know that service builder services are not created and managed by OSGi; rather, they are created and managed by Spring. However, a lot of Liferay code consumes these services as though OSGi manages them. You might wonder, how does that actually work?

Liferay is coded so that a bundle waits for a variety of things to be registered to OSGi, and then it has code that will register its Spring managed dependencies as OSGi components.

Of course, the whole registration process has dependencies, which are managed in a build-time-generated file, OSGI-INF/context/context.dependencies, within each bundle. These are then assembled into a placeholder component and registered with the Apache Felix dependency manager.
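For reference, context.dependencies is just a newline-delimited list of service class names, each optionally followed by an OSGi filter. An excerpt (using entries that we'll see again in the script output below) looks like this:

com.liferay.counter.kernel.service.CounterLocalService
com.liferay.portal.kernel.model.Release (&(release.bundle.symbolic.name=com.liferay.portal.workflow.kaleo.service)(release.schema.version=1.3.4))
com.liferay.portal.kernel.scheduler.TriggerFactory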

If for any reason any of these components have unsatisfied dependencies (one common example being the Release component for the module, like in LPS-66607), the bundle will be active, but none of the Spring-managed components (such as message listener configurations) will be made available as OSGi components. As you can imagine, this means that all the OSGi components that depend on those Spring-managed components being registered will also fail to resolve.

Naive Debugging

One way to troubleshoot this issue is to construct a Gosh script that basically does what we've been doing in all the examples above. It will read in OSGI-INF/context/context.dependencies from the bundle and check each entry to see if there are any service references that match it, similar to what we have been manually doing with the previous commands. If something exists which satisfies the reference, you get a check mark. If not, you get a warning exclamation point and the text will be in red.

In other words, if there is at least one warning exclamation point (or one line in red) in the output of this script, none of the Spring managed dependencies for this bundle will be registered as OSGi components!

Note that it's not valid to paste the following directly into a shell (every line break is treated as a command, which will not work given that some closures span multiple lines).

math = ($.context bundle 0) loadclass "java.lang.Math"
stringutil = ($.context bundle 0) loadclass "com.liferay.portal.kernel.util.StringUtil"

id_map = [""=0]
each [($.context bundles)] { $id_map put ($it symbolicname) ($it bundleid) }

checkref = {
    pos = $args indexOf " "
    has_filter = $pos equals ($math abs $pos)
    service = if { $has_filter } { $args substring 0 $pos } { $it }
    filter = if { $has_filter } { ($args substring ($service length)) trim }

    refs = $.context servicereferences $service $filter
    color = if { $refs } { "\u001B[0;0m" } { "\u001B[1;31m" }
    satisfied = if { $refs } { "[✔]" } { "[!]" }
    echo "$color" "$satisfied" "$args" "\u001B[0;0m"
}

check = {
    bundle_id = if { (($args class) name) equals "java.lang.String" } { $id_map get $args } { $args }

    bundle = $.context bundle $bundle_id
    url = $bundle entry "OSGI-INF/context/context.dependencies"
    dependencies = []
    if { $url } { $stringutil readlines ($url openstream) $dependencies }

    echo ''
    echo "$bundle"
    echo ''
    each $dependencies $checkref
}

# List the bundle ids or symbolic names that you wish to check here

check "com.liferay.portal.workflow.kaleo.service"

# Blank line echo to suppress output

echo ''

Copy it into a text file, which I'll assume will be named check_context_dependencies.gosh and stored in /tmp. If you're diagnosing something other than com.liferay.portal.workflow.kaleo.service (or you're checking multiple services), update the check lines accordingly.

You can run this script by using the gosh command provided by Gogo shell. You will need to specify the file path using a file URI. In this example, the local file is located in /tmp/check_context_dependencies.gosh, which means we'd use the following command to execute it:

gosh file:///tmp/check_context_dependencies.gosh

We'll get the following as script output.

[✔] com.liferay.counter.kernel.service.CounterLocalService
[✔] com.liferay.portal.kernel.dao.orm.EntityCache
[✔] com.liferay.portal.kernel.dao.orm.FinderCache
[✔] com.liferay.portal.kernel.model.Release (&(release.bundle.symbolic.name=com.liferay.portal.workflow.kaleo.service)(release.schema.version=1.3.4))
[✔] com.liferay.portal.kernel.scheduler.SchedulerEngineHelper
[!] com.liferay.portal.kernel.scheduler.TriggerFactory
[✔] com.liferay.portal.kernel.service.ClassNameLocalService
[✔] com.liferay.portal.kernel.service.ClassNameService
[✔] com.liferay.portal.kernel.service.PersistedModelLocalServiceRegistry
[✔] com.liferay.portal.kernel.service.ResourceLocalService
[✔] com.liferay.portal.kernel.service.RoleLocalService
[✔] com.liferay.portal.kernel.service.UserLocalService
[✔] com.liferay.portal.kernel.service.UserService
[✔] com.liferay.portal.kernel.service.persistence.ClassNamePersistence
[✔] com.liferay.portal.kernel.service.persistence.CompanyProvider
[✔] com.liferay.portal.kernel.service.persistence.RolePersistence
[✔] com.liferay.portal.kernel.service.persistence.UserPersistence
[✔] com.liferay.portal.workflow.kaleo.runtime.calendar.DueDateCalculator

Interestingly, it looks like in order for all of our Spring-managed dependencies to be added as OSGi components, we must first make sure that com.liferay.portal.kernel.scheduler.TriggerFactory is also available. Given that the package name includes the word "scheduler", we can conclude that TriggerFactory is unavailable because we disabled the scheduler, and so nothing exists in our environment to satisfy this dependency.

Smarter Debugging

While the approach above is what I did when I was troubleshooting the issue, I was later told that there is a built-in way to do this, because a closer look at the source code for ModuleApplicationContextExtender that I linked above shows that we're using Apache Felix's dependency manager to manage Liferay's Spring dependencies.

The documentation provides shortened variants on the command options (such as b) that are not available in the version of Apache Felix shipped with Liferay, but the longer command variant works, giving us this for troubleshooting our issue:

dm bundleIds "com.liferay.portal.workflow.kaleo.service"

If you aren't sure which bundle it is that is failing to work, then you can express your frustration with the acronym for a very popular expletive, and the command will report all services that are currently failing to resolve, even though some OSGi component is asking for them to be resolved.

dm wtf

The output of the former command, which asks for bundleIds as a parameter, has to be scanned visually in order to understand which component is failing, while the latter wtf command lists only the services that are failing. In both cases, we see that com.liferay.portal.kernel.scheduler.TriggerFactory and many service builder services are missing, just as we discovered in our naive analysis.

Step 5: Choose Your Battle

If we track down why TriggerFactory is needed by com.liferay.portal.workflow.kaleo.service, we find that KaleoTimerInstanceTokenLocalServiceImpl wants to use it in order to add scheduled jobs related to workflow timers.

Knowing this, let's look back at the facts so far. As we were troubleshooting the issue, we made the following observation:

Looking at the code block that is throwing the exception, the call to _workflowEngineManager succeeds, but the call to _workflowDefinitionManager fails.

With this in mind, and with our discovery for TriggerFactory in mind, you might consider the following question: given that scheduler is disabled, is the bug that the call to _workflowEngineManager succeeds, or is the bug that the call to _workflowDefinitionManager fails?

You could argue it both ways, depending on whether you agree with the following statement: workflow timers should be treated as a required part of the Kaleo workflow implementation. That discussion would probably happen with the component team and the project owner team for the Kaleo workflow component. Whether you're an internal user or an external user, you could start that conversation by assuming one stance, submitting your changes via a pull request to the component team, and then using the pull request for the conversation.

However, as a contributor, you might not be very deeply invested in the correct answer to this question, and so the thought of starting this conversation doesn't appeal to you. After all, you simply wish to fix a bug that affects other people. Understanding the nuances of how workflow should work sounds like an unnecessary use of your time.

What can you do instead? Well, you'd re-frame the question.

Instead of asking whether timers should work, you could answer the more specific question that applies to our current bug: if the only thing we're using _workflowEngineManager for is to determine whether or not workflow is available, and we know that it's giving us a misleading answer, and we can achieve what we want by changing the way we pull in the _workflowDefinitionManager reference, should we keep the logic that uses the misleading _workflowEngineManager reference?

This one has a more obvious answer: no.

Step 6: Test Your Changes, Part 1

In this case, we can update WorkflowDefinitionsDataProvider to simply wait for non-proxy implementations of WorkflowDefinitionManager, which we will assume provide the property proxy.bean=false.

import com.liferay.portal.kernel.workflow.WorkflowDefinitionManager;

import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicyOption;

// ...

@Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policyOption = ReferencePolicyOption.GREEDY,
    target = "(proxy.bean=false)")
private WorkflowDefinitionManager _workflowDefinitionManager;

For ReferenceCardinality, because we don't need one to function (we're using it for a null check), and we only expect one to be provided, "The reference is optional and unary" is what best describes the cardinality for what we want, so we choose OPTIONAL.

For ReferencePolicyOption, because we don't expect it to be provided immediately, and we want to function properly should one be deployed, "When a new target service for a reference becomes available, ... bind the new target service" is what best describes the policy option for what we want, and so we choose GREEDY.

Now, how to deploy the change? Navigate to the folder containing our change, and call the gradlew command located at the root of the portal source.

cd modules/apps/forms-and-workflow/dynamic-data-mapping/dynamic-data-mapping-data-provider-instance
../../../../../gradlew deploy

After deploying the change, visit the Forms page. Looks like our issue has been resolved. Let's make sure that our component is still available.

each [($.context getServiceReferences "com.liferay.dynamic.data.mapping.data.provider.DDMDataProvider" null)] { ($.context service $it) class }

We still see our WorkflowDefinitionsDataProvider available, so everything checks out.

Step 7: Test Your Changes, Part 2

Up until this point you have everything you need to run the change locally. The last step is to make your change available to others, possibly submitting it as a bugfix. To do that, the next thing to confirm is whether you made the change in the correct repository.

In this example, we made the change to the liferay-portal repository. However, this isn't always the correct place to create a fix.

So, how do you know if something has moved to a subrepository? This was briefly touched on in the previous blog entry in this series.

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. However, a handful of them (five at the time of this writing) are actually active where they're in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on-demand. You know the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.

So in practice, find the file you're modifying, and then see if any parent folder has a .gitrepo file. If it does, check inside of that file for a line that says mode = pull. In this case, the .gitrepo located in dynamic-data-mapping, which is the parent folder for the dynamic-data-mapping-data-provider-instance module that we deployed, does say that our mode is pull.
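As a quick sanity check, you can run something like the following from the repository root (the module path is the one from this example):

grep 'mode = ' modules/apps/forms-and-workflow/dynamic-data-mapping/.gitrepo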

Therefore, we'd clone the com-liferay-dynamic-data-mapping repository, update to the latest master branch of that repository, deploy the module to confirm that the issue still exists, and then apply our change to make sure that it fixes the issue.

Step 8: Submit Your Changes

After finding the right repository, the next step is to make sure you've followed the Contributors Guide located on the GitHub repository.

There is one vague step in the guide: finding the core maintainer to review your changes so that they can actually be included in Liferay itself.

While Liferay has started a CODEOWNERS file, it's still only in the evaluation phase and so the file only has a single name in it. Given that, the easiest way is to leverage the pre-existing people who are dedicated to creating fixes and routing them to the correct people: Liferay's support department. A few people who work in Liferay's support department hang out in the Liferay Community Slack, so it should be easy to convince someone to take a look at your proposed changes and at least begin the conversation.

Once that conversation starts, the next step after that is a lot of patience while you wait for your changes to be included in the product. As the saying goes, you can lead a Liferay core maintainer to a pull request, but you can't make them merge.

Getting Started with Building Liferay from Source

Company Blogs May 18, 2017 By Minhchau Dang Staff

When a new intern onboards in the LAX office, their first step is to build Liferay from source. As a side-effect of that, usually the person that handles the intern onboarding will see that the reported ant all time is in hours rather than in minutes, and then start asking people, "Is it normal for it take this long to build Liferay for the first time?"

It happens frequently enough that sometimes my hyperactive imagination supposes that a UI intern's entire first day must be spent cloning the Liferay GitHub repository and building it from source, and then going home.

In hindsight, this must be what the experience is like for a developer in the community looking to contribute back to Liferay as well. Though, rather than go home in a literal sense, they might stare at the source code they've downloaded and built (assuming they got that far) and think, "Wow, if it took this long just to get started, it must be really terrible to try to do more than that," and choose to go home in a less literal way, but with more dramatic flair.

One of the many community-related discussions we have been having internally is how we can make things better, both internally and externally, when it comes to working with Liferay at the source code level. Questions like, "How do we make it easier to compile Liferay?" or "How do we make it easier to debug Liferay?" After all, just how open source are you if it's an uphill battle to compile from source in order to find out if things are already fixed in branch?

We don't have great answers to these problems yet, but I believe that we can, at a minimum, provide a little more transparency about what we are trying internally to make things better for ourselves. Sharing that information might give all of us a better path forward, because if nothing else, it lets us ask important questions about the pain points rather than bikeshed-color ones.

Step 1: Upgrade Your Build System

Let's say I were to create a survey asking the question, "Which of the following numbers best describes your build time on master in minutes (please round up)?" and gave people a list of options, ranging from 5 minutes and going all the way up to two hours.

This question makes the unstated assumption that you are able to successfully build master from source, because none of the options is, "I can't build master from source." Granted, it may seem strange that I call that an "assumption", because why would you not be able to build an open source product from source?

Trick question.

If you've seen me at past Liferay North America Symposiums, and if you were really knowledgeable about Dell computers in the way that many people are knowledgeable about shoes or cars, you'd know that I've been sporting a Dell Latitude E6510 for a very long time.

It's a nice machine, sporting a mighty 8 GB of RAM. Since memory is one of the more common bottlenecks, this made it at least on-par with some of the machines I saw developers using when I visited Liferay clients as a consultant. However, to be completely honest, a machine with those specifications has no hope of building the current master branch of Liferay without intimate knowledge of Liferay build process internals. Whenever I attempted to build master from source without customizing the build process, my computer was guaranteed to spontaneously reboot itself in the middle.

So why was this not really a problem for other Liferay developers?

Liferay has a policy of asking its developers to accept upgrades to their hardware once every two to three years. The idea is that if new hardware increases your productivity, it's such a low cost investment that it's always worthwhile to make. A handful of people resist upgrades (inertia, emotional attachment to Home and End keys, etc.), but since almost everyone chooses to upgrade, Liferay has an ivory tower problem, where much of Liferay has no idea what it's like to even start up Liferay on an older machine, not to even discuss what it's like to compile Liferay on those older machines.

  • Liferay tries to do parallel builds, which consumes a lot of memory. To successfully build Liferay from source, a dedicated build system needs 8 GB of memory, while a developer machine with an IDE running needs at least 16 GB of memory.
  • Liferay writes an additional X GB every time you build, a lot of it being just copies of JARs and node_modules folders. While it will succeed on platter drives, if you care about build time, you'll want Liferay source code to live on a solid-state drive to handle the mass file creation.

So eventually, I ran into a separate problem which required a computer upgrade: I needed to run a virtual machine that itself wanted 4 GB, and that, combined with running Liferay alongside an IDE, meant my machine wasn't up to handling my task. After upgrading, the experience of building Liferay is substantially different from how it used to be. While I have other problems, like an oversensitive mousepad, building Liferay is no longer something that makes me wonder what else could possibly go wrong.

If you weren't planning on upgrading your computer in the near future, it doesn't make sense to upgrade your computer just to build Liferay. Instead, consider spinning up a virtual machine in a cloud computing environment that has the minimum requirements, such as something in your company's internal cloud infrastructure, or even a spot instance on Amazon EC2. Then you can use those servers to perform the build and you can download the result to your local computer.

Step 2: Clone Central Repository

So let's assume you've got a computer or virtual machine satisfying the requirements listed above. The next step is to get the source code so you can use this machine to build Liferay from source. This is the command you would use to do it:

git clone git@github.com:liferay/liferay-portal.git

However, the first step that interns get hung up on is waiting for this clone to complete. If you've ever tried to do that, you'll find that Liferay has violated one of the best practices of version control and we've committed a large folder full of binary files, .gradle. As a result of having this massive folder, GitHub sends us angry emails and, of course, cloning our repository takes hours.

How does Liferay make this better internally? Well, in the LAX office, the usual answer is to plug in the ethernet cable. Liferay invested heavily in fast internet, and so simply plugging in the ethernet cable makes the multi-hour process finish in 30 minutes.

However, it turns out that there is actually a better answer, even in the LAX office. Each office has a mirror that holds archives of various GitHub repositories, including liferay/liferay-portal. We suspect the original being mirrored is maintained by Quality Assurance, because we have heard that keeping all of our thousands of automated testing servers in sync used to result in angry emails from GitHub. Since it's an internal mirror, this means that downloading X GB and unzipping it takes a few minutes, even over WiFi, and it's on the order of seconds if you plug in your ethernet cable.

So, in order to improve our internal processes, we've been trying to get the people who manage our new hires and new interns to recognize that such a mirror exists and to use it during their onboarding process to save a lot of time for new hires on their first day.

So what does this mean for you?

Essentially, if you plan to clone the code directly onto your computer for simplicity, you'll need to do it at a time when you won't shut down the computer for a few hours and when you don't need it for anything else (maybe run it overnight), because it's a time-consuming process.

Alternately, have a remote server perform the clone, and then download an archive of the .git folder to your local computer, similar to what Liferay is trying to do internally. This will free up your machine to do useful things, and even spinning up Amazon EC2 spot instances (like an m1.small) and bringing things down with either SCP or an S3 bucket as an intermediate point may be beneficial.
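A sketch of that workflow, assuming a remote host named build-server and illustrative paths:

# On the remote machine: clone once, then archive just the .git folder.
git clone git@github.com:liferay/liferay-portal.git
tar -czf liferay-portal-git.tar.gz liferay-portal/.git

# On your local machine: download the archive and restore the working tree.
scp build-server:liferay-portal-git.tar.gz .
tar -xzf liferay-portal-git.tar.gz
cd liferay-portal
git reset --hard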

Step 3: Clone the Binaries Cache

We mentioned the very large .gradle folder, but something else we noticed over time is that both master and 7.0.x share a lot of libraries, and they were constantly getting rewritten as you switched between branches. So, to make this situation slightly more tolerable, what we've done is we've created a separate repository just for binaries. When building Liferay, you will also want a copy of this binaries cache. For convenience, make it a sibling folder of the portal source folder you cloned in the previous step.

git clone git@github.com:liferay/liferay-binaries-cache-2017.git

Step 4: Build Central Repository

The next step is your first build from source. This is done with a single command that theoretically handles everything. However, before you run this single command, you might need to do a few things to reduce the resources it consumes.

  • Liferay issues a lot of requests to the NPM registry in parallel builds. You can cap this by checking build.properties for nodejs.npm.args, and taking the commented out line and adding it to your own build.USERNAME.properties.
  • Liferay includes a lot of extra things most people never need. You can remove these by checking build.properties for build.include.dirs and using its commented out value in your build.USERNAME.properties, or adjusting it for your needs if you want more than what it tries by default.
  • If you're on Windows, disable Windows Defender (or at least disable it on specific folders or drives). The ongoing scan drastically slows down Liferay builds.

After you've thought through all of the above, you're almost ready for the command itself. When Liferay introduced the liferay-binaries-cache-2017, it also introduced another way for the build to fail: Liferay's build process tries to auto-update this cache at build time, but since it's constantly synchronizing this folder, you might actually run into a situation where the cache cannot update because it already contains the files it wants to add! So you'll need to add extra commands to clean up the folder before you build.

cd liferay-binaries-cache-2017
git clean -xdf
git reset --hard
cd ../liferay-portal

At this point, you are now ready for the command itself, which requires that you download and install Apache Ant. After knowing that this is what I'm asking you to download, you might also realize that this means that the entry point for everything is build.xml.

ant all

So now you've built the latest Liferay source code, right?

Another trick question!

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. However, a handful of them (five at the time of this writing) are actually active where they're in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on-demand. You know the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.

However, because all of those files are actually also on the central repository, after you've cloned the liferay-portal repository, you can use the files there to find out which subrepositories are active with git, grep, and xargs magic run from the root of the repository.

git ls-files modules | grep -F .gitrepo | xargs grep -Fl 'mode = pull' | xargs grep -h 'remote = ' | cut -d'=' -f 2

I will dive into more detail on the subrepositories in a later entry when we talk about submitting fixes. For now, they're not relevant to getting off the ground, beyond being aware that they exist and that they add additional wrinkles to the fix submission process.

Step 5: Choose an IDE

At this point, you've built Liferay, and the next thing you might want to do is point an IDE with a debugger at the artifact you've built, so that you can see what it's doing after you start it up. However, if you point an IDE at the Liferay source code in order to load the source files, whether it's Netbeans, Eclipse, or IntelliJ, you'll notice that while Liferay has a lot of default configurations populated, these default files are missing about 90% of Liferay's source folders.

If you're using Netbeans, given that the people who have forked the Liferay Source Netbeans Project Builder overlap exactly with the team I know for sure uses Netbeans, this tool will help you hit the ground running. Since there are recent commits to the repository, I'm confident that the Netbeans users actively maintain it, though I can't say with equal confidence how they'll react to the news that I'm telling other people about it.

If you're using Eclipse or Liferay IDE, then Jorge Diaz has you covered with his generate-modules-classpath script, which he has blogged about in the past; his blog post explains its capabilities much more clearly than I would be able to in a mini-section at the end of this getting started guide.

If you're using IntelliJ IDEA Ultimate, you can take advantage of the liferay-intellij project and leave any suggestions. It was originally written as a streams tutorial for Java 7 developers rather than as a tool, and I still try to keep it as a streams tutorial even as I make improvements to it, but I'm open to any improvement ideas that make people's lives easier in interacting with Liferay core code.

Step 6: Bend Liferay to Your Will

So now that everything is set up for your first build, and you're able to at least attach a debugger to Liferay, the next thing is to explain what you can do with this newly-discovered power.

However, that's going to require walking through a non-toy example for it to make sense, so I'll do that in my next post so that this one can stay as a "Getting Started" guide.

Pre-Compiling Liferay JSPs Take 2

Company Blogs January 9, 2013 By Minhchau Dang Staff

Many years ago, Liferay's build validation was only getting started and so JSP pre-compilation related slowness wasn't a big deal (I think it was really only used during the distribution process). However, I wanted to make it faster for a client-related project (where we couldn't really do a server warm-up after deployment) and the research culminated in a blog entry summarizing my findings.

Fast forward a few years later and I found out that Liferay QA was suffering from build slowness because Selenium tests have the same requirement of not being able to do a server warm-up before doing tests, and so pre-compilation is turned on. Thus came an improvement to the core build process in LPS-14215 based on the findings of that blog entry.

Fast forward a few more years and JSP pre-compilation is still a standard part of our distribution process. However, after an upgrade to Tomcat 7 for Liferay 6.1, we released milestones for community testing, and the community (as well as other core engineers) reported that everything was very slow the first time you loaded the portal. Poking around, we discovered that Tomcat was recompiling our pre-compiled JSPs when development mode was active.

In a nutshell, we found that any pre-compilation (including the example provided in the Tomcat 7 documentation) provided zero performance benefit. This is because, rather than assuming that class files newer than their JSP files were up-to-date, Tomcat 7 compared exact file timestamps, relying on the fact that when Jasper generates a class file, it sets the class file's modified timestamp to match the JSP's. This new behavior allowed older files to replace newer ones, in case you rolled back a change to a JSP and re-copied it to the application server, but it broke JSP pre-compilation.

Thus, we fixed our own tools with LPS-23032. The TimestampUpdater stand-alone program walks the entire directory and makes sure that the class files have the same timestamps as the java files, which (thanks to Jasper itself behaving properly) will have the same timestamps as the jsp files. All this together makes JSP pre-compilation effective again.
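Conceptually, what the tool does amounts to the following sketch (the class name and logic here are illustrative, not Liferay's actual implementation):

import java.io.File;

// Illustrative sketch: walk a compiled JSP work directory and copy each
// .java file's timestamp onto its corresponding .class file.
public class TimestampSync {

	public static void main(String[] args) {
		sync(new File(args[0]));
	}

	private static void sync(File dir) {
		File[] files = dir.listFiles();

		if (files == null) {
			return;
		}

		for (File file : files) {
			if (file.isDirectory()) {
				sync(file);
			}
			else if (file.getName().endsWith(".class")) {
				File javaFile = new File(
					file.getPath().replaceAll("\\.class$", ".java"));

				if (javaFile.exists()) {
					file.setLastModified(javaFile.lastModified());
				}
			}
		}
	}
}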

Which leads me to the point of this blog entry: the example script from the previous blog entry has been updated for Tomcat 7 and now invokes Liferay's TimestampUpdater to fix file timestamps. This updated example is available in my document library (download here). See the README.txt contained in the example script for additional instructions on how to use it.

Note that this only works for Liferay 6.1 and beyond, because we didn't have to add the class in previous releases (our default bundle was Tomcat 6). If you need to deploy Liferay 6.0 or earlier against Tomcat 7 for whatever reason, you can extract the TimestampUpdater from the portal-impl.jar of Liferay 6.1 and put it in the portal-impl.jar for your version of Liferay.

Avoiding Accidental Automatic Upgrades

Company Blogs January 4, 2013 By Minhchau Dang Staff

Once upon a time, a long time Liferay user decided to upgrade from an old version of Liferay (Liferay 6.0 GA2) to a new version of Liferay (Liferay 6.1 GA2). They performed this upgrade by shutting down Liferay 6.0 running on an old VM, starting Liferay 6.1 on a new VM, and then doing some DNS magic and some load balancer magic to switch to using the new VM after the upgrade completed.

A day later, a routine Cron job restarted Liferay 6.0 on the old VM, which still pointed at the same database used by Liferay 6.1.

Liferay 6.0's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6101) was higher than any of the upgrade processes it wanted to run, and proceeded to update the Release_ table with its own version number (6006).

Naturally, because nobody was accessing that old version of the portal, it didn't matter. It didn't touch any other tables, so Liferay 6.1 continued to run fine. That is, until Liferay 6.1 had to be restarted a week later for some routine maintenance.

As Liferay restarted, Liferay 6.1's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6006) was lower than some of the upgrade processes it wanted to run, and so ran them it did. Of course, because the data was actually in the 6101 state, all of these upgrade processes wound up corrupting the data, rendering the whole Liferay instance unusable.

No problem, the client had followed industry best practices and regularly backed up their system. So, they restored the database from yesterday's backup, thinking that all would now be well.

Alas, it was not meant to be. The damage to the database had been done a week before, and so the bad data was already in the database. Every time the client restored the database from yesterday's backup, Liferay re-detected the version number difference and decided it needed to re-upgrade from 6006 to 6101. As a result, Liferay obediently re-corrupted the database.

This raises the following question: how do we avoid the data corruption resulting from an older release running against a newer release's database?

Enter LPS-21250, which prevents Liferay from starting up if its release number is lower than what is seen in the Release_ table. This means that if this situation were to repeat with 6.1 GA2 and 6.2 M3, 6.1 GA2 would fail to start up and not update the Release_ table.

Of course, this doesn't fix the problem of earlier versions of Liferay (such as Liferay 6.0) updating the Release_ table, raising a second question: how do we handle the problem where earlier versions of Liferay (such as Liferay 6.0) might update the Release_ table?

In Liferay 6.0 and in Liferay 6.1 GA1, it is possible to prevent upgrade processes from running by setting the upgrade.processes property to blank. The Release_ table will still be updated, but at least Liferay wouldn't attempt to re-corrupt its own data. This strategy will work to prevent upgrade processes from running if you were to accidentally start a Liferay 5.x release against a Liferay 6.0 database or a Liferay 6.1 GA1 database.

	upgrade.processes=

In Liferay 6.1 GA2, a blank value for upgrade.processes has been given a new meaning while adding support for LPS-25637: automatically try to detect the upgrade processes that need to be run. So, this workaround no longer works.

So at this time, for Liferay 6.1 GA2 and possibly the Liferay 6.2 releases, you have two ways of handling the problem of a Liferay 5.x or a Liferay 6.0 running against your database. You can either create a no-op upgrade process, or you can prevent Liferay from starting up at all if it detects a version number change.

In the former case, there's no native UpgradeProcess in Liferay 6.1 which truly does nothing. So, you can use an upgrade process that is very unlikely to run (such as the 4.4.2 to 5.0.0 upgrade process), write an EXT plugin which contains a custom do-nothing upgrade process and specify it in your portal properties, or specify a bad value for upgrade.processes and accept a stack trace on every startup.

	upgrade.processes=com.liferay.portal.upgrade.UpgradeProcess_5_0_0

In the latter case, LPS-25637 also introduced a new feature which skips all upgrade processes if there's a version number match (previous versions of Liferay would check each one individually). Leveraging that, you can make use of a built-in run-on-all-releases upgrade process that was originally intended to allow you to split an upgrade in chunks, and effectively does nothing more than shut down Liferay.

	upgrade.processes=com.liferay.portal.upgrade.UpgradePause

Updating Document Library Hooks

Company Blogs August 28, 2009 By Minhchau Dang Staff

A common problem that comes up is the need to change how documents are stored in the Document Library.

For example, you may start out storing documents using Jackrabbit hooked to a database. However, as time goes on and you find yourself using more Liferay deployments, the number of database connections reserved for Jackrabbit alone gets dangerously close to the maximum number of database connections supported by your database vendor.

So, you want to switch from using JCRHook over to using the FileSystemHook to store documents on a SAN.

If your migration uses two different hooks and you have access to the portal class loader (for example, you're running in the EXT environment, or you have the ability to update the context class loader for your web application like in Tomcat 5.5), the solution is straightforward.
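The snippet below assumes sourceHook and targetHook have already been instantiated. A minimal sketch of that setup, using the hook class names from the dl.hook.impl property comments (treat the exact interface and constructors as assumptions):

import com.liferay.documentlibrary.util.FileSystemHook;
import com.liferay.documentlibrary.util.Hook;
import com.liferay.documentlibrary.util.JCRHook;

// Sketch: instantiate the source and target hooks directly.
Hook sourceHook = new JCRHook();
Hook targetHook = new FileSystemHook();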

InputStream exportStream =
	sourceHook.getFileAsStream(
		companyId, folderId, fileName, versionNumber);

targetHook.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, exportStream);

In summary, you instantiate the hook objects, pull data from your source hook and push it to your target hook. Then, update your configurations and restart your server to use the new document library hook. (In case you're not sure how to change document library hooks, see the comments for the dl.hook.impl property in portal.properties.)

If you'd like to see how this is accomplished via code samples rather than via textual explanations, these document links might help (just to note, the sample plugins SDK portlets leverage the portal class loader in order to access the classes found in portal-impl.jar, and so you must use Tomcat 5.5 or another servlet container that supports a similar mechanism in order to use them).

However, it's not always so easy. Another common problem is where you start out using Jackrabbit hooked up to a file system (perhaps in the earlier iterations of Liferay where JCRHook was the default), but you want to move to a clustered environment and you do not have access to a SAN.

Therefore, you want to migrate over to using Jackrabbit hooked to a database.

This is different from the previous problem in that you're using the same exact hook in both cases, and the way Liferay handles Jackrabbit inside of this hook makes it so that you can only have one active Jackrabbit workspace configuration (specified in your portal.properties file), and so simply instantiating two different hooks is not possible.

The solution here is to run the migration twice.

First, with the original JCRHook configuration, export the data to an intermediate hook (for example, the Liferay FileSystemHook) and shut down your Liferay instance. Then, you update portal.properties and/or repository.xml to reflect the desired JCRHook configuration, and import from your intermediate hook back to JCRHook.

If you do not have access to the portal class loader, the migration is less straightforward, because you won't have access to the hook classes themselves.

// Export: with the original configuration active, read each document
// through DLLocalServiceUtil and save it to disk, along with enough
// metadata to call updateFile later
InputStream exportStream =
	DLLocalServiceUtil.getFileAsStream(
		companyId, folderId, fileName, versionNumber);

FileUtil.write(intermediateFileLocation, exportStream);

// Import: after updating the configuration and restarting, push each
// exported document back through DLLocalServiceUtil
InputStream importStream =
	new FileInputStream(intermediateFileLocation);

DLLocalServiceUtil.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, importStream);
 

In summary, you'll have to read the documents using DLLocalServiceUtil with the original configuration and save them to disk in a way that lets you parse all the data needed to call updateFile (perhaps mirroring the way FileSystemHook lays out its files). Then, re-import those exported documents using DLLocalServiceUtil after updating your configuration and restarting the server.

Using the Dynamic Query API

Company Blogs August 17, 2009 By Minhchau Dang Staff

Assuming that indexes have already been created on the fields you're querying, the Dynamic Query API is a great way to create custom queries against Liferay objects without having to create custom finders and services.

However, there are some "gotchas" that I've bumped into when using them, and I'm hoping that sharing these experiences will help someone out there if they're banging their head against a wall.

One gotcha was getting empty results, even though the equivalent SQL query was definitely returning something.

By default, the Dynamic Query API uses the current thread's class loader rather than the portal class loader. Because the Impl classes can only be found in Liferay's class loader, when you try to use the Dynamic Query API from a plugin located in a different class loader, Hibernate silently returns nothing.

A solution is to pass in the portal class loader when you're initializing your dynamic query object, and Hibernate will know to use the portal class loader when looking for classes.

DynamicQuery query =
	DynamicQueryFactoryUtil.forClass(
		UserGroupRole.class, PortalClassLoaderUtil.getClassLoader());
 

A second gotcha was figuring out how to use SQL projections (more commonly known as the stuff you put in the SELECT clause), the most common cases being to select a specific column or to get a row count.

The gotcha was that the sample documentation on the Liferay Wiki uses Hibernate's DetachedCriteria classes, and trying to add Projections.rowCount() to a DynamicQuery object gave me a compile error, because the Liferay DynamicQuery object requires using Liferay's version of the Projection class, rather than the Hibernate version.

To resolve this gotcha, you can use ProjectionFactoryUtil to get the appropriate object.

query.setProjection(ProjectionFactoryUtil.rowCount());
 

A third gotcha was a "could not resolve property" error when I tried to add a restriction via RestrictionsFactoryUtil. Even though it looked like the bean class definitely had that attribute defined in portal-hbm.xml, Hibernate wasn't able to figure out what property needed to be used.

The gotcha is that some of Liferay's objects use composite keys. When using composite keys with Hibernate's detached criteria and Liferay's dynamic queries, the name of the property must include the name of the composite key. In the case of the Hibernate definitions created by Liferay's Service Builder, composite keys are always named primaryKey.

Therefore, the solution is to use primaryKey.userId instead of userId.

Criterion criterion =
	RestrictionsFactoryUtil.eq("primaryKey.userId", userId);
 

A fourth gotcha is that even if you don't specify a projection (thus resulting in a default of selecting all columns and an implied "give me the entity"), casting directly to a List<T> won't work as it does in custom finders, because you're getting back a List<Object>, not a List<T>.

The quick and dirty solution is to either (a) use addAll on a List<T> with a typecast (simulating what happens in a custom finder), or (b) add each result individually to a List<T> (cleaner to read).
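As a sketch of both options, assuming UserGroupRole as the entity and the generated dynamicQuery method on its local service:

List<Object> results =
	UserGroupRoleLocalServiceUtil.dynamicQuery(query);

// Option (a): bulk-copy with an unchecked cast, simulating what
// happens inside a custom finder
List<UserGroupRole> quickAndDirty =
	new ArrayList<UserGroupRole>(results.size());

quickAndDirty.addAll((List<UserGroupRole>)(List<?>)results);

// Option (b): cast each result as you add it
List<UserGroupRole> cleanerToRead =
	new ArrayList<UserGroupRole>(results.size());

for (Object result : results) {
	cleanerToRead.add((UserGroupRole)result);
}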

Adding a Plugins Portlet to the Control Panel

Company Blogs December 30, 2008 By Minhchau Dang Staff

A "Gotcha!" situation came up today when I attempted to add a portlet developed in the plugins environment to the Control Panel (which is a new feature being developed for 5.2.0 that you can read about in Jorge's blog entry).

In a nutshell, the Control Panel provides a centralized administrative interface for the entire portal. Unlike a standard layout, where you 'manage pages' and put portlets anywhere, whether or not a portlet shows up inside the Control Panel depends on whether you've set the following elements in your liferay-portlet.xml:

  • control-panel-entry-category: The 'category' where your portlet will appear. There are currently 4 valid values for this element: 'my', 'content', 'portal', and 'server'.
  • control-panel-entry-weight: Determines the relative ordering for your portlet within a given category. The higher the number, the lower in the list your portlet will appear within that category.
  • control-panel-entry-class: The name of a class that implements the ControlPanelEntry interface which determines who can see the portlet in the control panel via an isVisible method.

By making sure to define these elements in your liferay-portlet.xml, you can theoretically add any portlet (whether they be portlets built in the Extensions environment or Plugins environment) to the new Control Panel interface.
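For example, the relevant portion of a liferay-portlet.xml entry might look like the following (the portlet name, weight, and entry class are placeholders for illustration):

<portlet>
	<portlet-name>my-admin-portlet</portlet-name>
	<control-panel-entry-category>portal</control-panel-entry-category>
	<control-panel-entry-weight>17.0</control-panel-entry-weight>
	<control-panel-entry-class>
		com.example.MyControlPanelEntry
	</control-panel-entry-class>
</portlet>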

In my case, I couldn't think of a page/layout which would fit the plugin I was developing, but since it had administrative elements to it, I felt the best place for it would be the Control Panel. So, I updated the DTD declaration in liferay-portlet.xml to 5.2.0 and added the appropriate control panel elements.

Unexpectedly, the new plugin portlet did not show up in the Control Panel.

After debugging com.liferay.portal.util.PortalImpl, I verified that the portlet was recognized as a Control Panel portlet and that it was added to the java.util.Set, but by the time the set was returned from the method, the portlet had disappeared. I was at a loss as to why.

However, it turns out those abstract algebra classes that I took in college came in handy, and I realized where the "Gotcha!" was hidden:

Set portletsSet = new TreeSet(
	new PortletControlPanelWeightComparator());
 

By their mathematical definition, every element in a set must be unique. After checking com.liferay.portal.util.comparator.PortletControlPanelWeightComparator, I discovered that the control panel render weight is the only thing that matters for determining uniqueness of a portlet within a Control Panel category. As long as render weights are equal, they are treated as the same portlet.

In summary, my portlet shared its render weight with another Control Panel portlet (I gave it the same render weight as My Account because I copy-pasted), so it was replaced as soon as the My Account portlet was added to the java.util.TreeSet. That's why it was clearly being added, yet had disappeared by the time the set was returned from the method.

So, if you are planning to leverage the new Control Panel feature for portlets that you're developing, bear in mind that your render weights need to differ from those of the portlets that ship with Liferay, and from those of any other portlets you may add to your Liferay instance, and you'll be okay.

Alternatively, you could extend com.liferay.portal.util.PortalImpl, override the definition of getControlPanelPortlets, and use a different comparator which checks portlet ids as well in the event of ties.
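A minimal sketch of such a comparator, assuming the Portlet model exposes getControlPanelEntryWeight() and getPortletId() as it does in the 5.2 codebase:

import java.util.Comparator;

import com.liferay.portal.model.Portlet;

public class PortletControlPanelWeightAndIdComparator
	implements Comparator<Portlet> {

	public int compare(Portlet portlet1, Portlet portlet2) {
		int value = Double.compare(
			portlet1.getControlPanelEntryWeight(),
			portlet2.getControlPanelEntryWeight());

		if (value != 0) {
			return value;
		}

		// Fall back to portlet ids so that portlets with equal
		// weights are not collapsed into a single TreeSet entry
		return portlet1.getPortletId().compareTo(
			portlet2.getPortletId());
	}

}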

Running Ant With a Double-Click

Company Blogs December 19, 2008 By Minhchau Dang Staff

There are times when I don't want to deal with the start up time of an integrated development environment and opt to use a fast-loading text editor instead. By making this trade-off, I wind up losing many of the nifty productivity features that are built into IDEs.

One feature that's easy to take for granted is the ability to run build targets with a double-click. Having left the comfort of an IDE, I need to open up a command window (using the CmdHere PowerToy to skip a navigation step), call ant {target-name}, wait for the task to finish, then call exit once it completes successfully.

However, I realized that I don't have to lose that. I usually run default targets (so I was waiting for a command window to load just to type three letters: ant). Since I usually don't edit build.xml files, I could safely configure Windows Explorer to call ant in the appropriate directory whenever I double-click it:

  1. Navigate to the ANT_HOME directory and create a file called ant-noargs.bat. Inside of that file, add the following contents:

    @if %~nx1 neq build.xml start "" "C:\Program Files\Textpad 5\TextPad.exe" "%~f1"
    @if %~nx1 equ build.xml call ant
    @if errorlevel 1 pause
     

The filename check ensures this only applies to 'build.xml' files, so my default text editor still loads for all other files. The additional error level check exists so I don't have to watch the command window at all: it closes automatically on success and stays open only if something went wrong.

  2. Navigate to a directory containing a build.xml file, right-click on the build.xml file, and select Open With... from the context menu. If extra options show up, select the Choose Program... option.

  3. Use the Browse... button and navigate back to the ant-noargs.bat created earlier. Confirm the selection. When returning to the dialog, hit the checkbox which reads Always use the selected program to open this kind of file and confirm by hitting Ok.

As a result of the above, deploying the extensions environment, hooks, themes, and portlets is still a double-click away (or an Enter key away if I'm currently using a keyboard to navigate in Windows Explorer) without ever loading an IDE.

Pre-Compiling Liferay JSPs

Company Blogs April 21, 2008 By Minhchau Dang Staff

A feature I recently discovered was the ability to precompile all the JSPs in Liferay. The motivation behind this discovery was the need for sanity checking after making a small change to nearly a hundred JSPs across many different portlets.

After using it on this particular problem, I wanted to know if it was possible to apply pre-compilation to a current ongoing project.

In this project, the development cycle involves testing changes in a development environment, generating a clean build from source and deploying it to an existing Tomcat instance, promoting those changes to an alpha environment, validating them there, promoting them again to a beta environment, and validating them one more time in the beta environment. This process needs to be completed in a two-hour window, so I wanted to use the pre-compilation process to reduce the time spent waiting for page loads.

So, the first question is, how do you setup Liferay for pre-compiling JSPs?

The built-in approach is to set your jsp.precompile property to on in your build.properties. In this scenario, Liferay's JSPCompiler will precompile each file individually, avoiding potential out of memory exceptions resulting from compiling large numbers of JSPs. It also has the nice property of aborting as soon as the first compilation error is encountered, making it very friendly for sanity checks. However, in a clean environment, it takes ten minutes to run to completion, making this practical only for unattended builds and sanity checks.

Therefore, a faster option was necessary. After digging into the documentation for Jasper, I discovered that faster options are available if you modify the build scripts being used in your development environment. The sequence of changes that were made in my local development environment are discussed below.

One option becomes available if you set your ANT_OPTS environment variable to give Ant a lot of memory. Once you do this, you can modify the appropriate build file (build.xml in portal-web or build-parent.xml in ext-web) to take advantage of the extra memory by calling the javac ant task directly in the compile-common-jsp task. In a clean environment, this shortens the pre-compilation time to three minutes, making this solution attractive for general deployment.
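For example, on Windows that might look like the following (the heap settings are illustrative; size them to whatever your machine can spare):

set ANT_OPTS=-Xmx1024m -XX:MaxPermSize=256m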

In the example build.liferay.properties:

jsp.precompile=on
jsp.precompile.fast=on
 

But I was pretty sure I could do better, since over a minute was being spent decompressing JARs. By modifying the build target specific to an application server (for example, compile-tomcat), you remove the need to decompress JARs prior to precompilation, since you know exactly where the JARs are found and can set up a simple classpath accordingly. In the particular case where your application server is Tomcat, this step reduces the JSP pre-compilation time in a clean environment to one minute, making this solution attractive for Liferay core and extensions development.

In the example build.liferay.properties:

jsp.precompile=on
jsp.precompile.fast=on
jsp.precompile.faster=on
 

In the process of creating an application-specific build target for the Liferay WAR, it's possible to take this one step further and create a full build script which compiles every plugin WAR. However, since the plugin build scripts involve hot deployment rather than direct deployment to the application server, this is a bit tricky: the work and webapps folders may go out of sync in terms of timestamps, and if that happens, the resulting recompilation negates the performance benefit of pre-compiling your JSPs.

Therefore, in the example build script (which makes use of the Ant-Contrib library to iterate through all the plugins folders, and is available for download here), it is assumed that Liferay and the various Liferay/JSR-168 plugins have already been deployed to Tomcat's webapps folder, and the build script and related files are inside of Tomcat's webapps folder. Running ant clean will wipe the work directory, and ant all will compile the ROOT context, followed by all the different plugins.

So, I now have a script which runs in less than two minutes and generates all the appropriate Java code and class files for the JSPs in Liferay, along with those of every plugin in the development environment, speeding up the validation process.

Since these modifications depend on Liferay being deployed to the ROOT context, it can't really be committed to core without some modifications. Still, hopefully this knowledge is also useful to someone else.

Liferay Code Format

Company Blogs December 26, 2007 By Minhchau Dang Staff

After reading through Jorge's blog post on the guidelines for Liferay contributions, and after following the link to the style guidelines on the Liferay wiki, I put together a configuration file which works with Eclipse's built-in code formatter to adjust the whitespace in your code so that it is compatible with the stated Liferay guidelines.

To use this configuration file, go to your Eclipse preferences dialog. Check under Java > Code Style > Formatter (screenshot), and then Import the above linked file. A new profile called "Liferay [user]" will then be made available (screenshot). Finally, click the Apply button to set it for the current workspace.

Once this configuration file is imported and applied to the current workspace, every time you run Source > Format, Eclipse will automatically adjust the whitespace in your code to make it conform to the stated Liferay guidelines. To bulk reformat, you can right-click on any given folder while in the Navigator view (or Ctrl+Click on the folder under OS X), and select Source > Format.

I don't know of any tools which help satisfy all of the Liferay style guidelines not pertaining to whitespace. However, based on its documentation, the commercial version of Jalopy from TRIEMAX software appears to come very close with its sorting capabilities.

Eclipse Open Resource Dialog

Company Blogs November 20, 2007 By Minhchau Dang Staff

In Eclipse, everything is a plugin, including the IDE itself. So, if there's something which bothers you, all you need to do is replace the plugin with something which works in a way more consistent with your personal preferences, or if the plugin is open source, tweak the existing plugin to suit your needs.

In my case, what bothered me was this: I don't have any reason to look at .svn-base files or .class files, so I'd prefer they not show up. Playing with working sets doesn't solve the problem, because (as far as I know) there's no way to exclude .svn folders from the working set without adding all the files individually. I tried to add extension points, but I could never get the filters to show up. So, all other options exhausted, I went looking for a way to modify the source code.

In order to modify the "Open Resource" dialog, I needed to find the FilteredResourceSelectionDialog class, which lives in the org.eclipse.ui.dialogs package inside the org.eclipse.ui.ide Java archive. A bandwidth-intensive way to get the source code for that one file is to download Eclipse Classic and find it in the src.zip under org.eclipse.ui.ide in the org.eclipse.platform.source folder.

I then overrode the matches method in FilteredResourceSelectionDialog.ResourceFilter to exclude all file names ending in .svn-base and .class. After creating a quick build file to include all the Eclipse plugin jars in my classpath, I compiled, copied the resulting class files into the appropriate Java archive, and got this: a clean view showing only the files I might want to open.
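For reference, the change amounted to something like this (a from-memory sketch rather than the exact patch; it assumes the filter's matches method receives the IResource being tested):

public boolean matches(Object item) {
	if (item instanceof IResource) {
		String name = ((IResource)item).getName();

		// Hide Subversion text-base copies and compiled classes
		if (name.endsWith(".svn-base") || name.endsWith(".class")) {
			return false;
		}
	}

	return super.matches(item);
}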

Portlet Ids Bookmarklet

Company Blogs November 2, 2007 By Minhchau Dang Staff

The nice thing about Liferay's drag and drop layout is that you (usually) don't have to remember any of the portlet ids. At least, until you do.

If you skim through portal.properties, there's a set of properties all prefixed with default.user.layout which let you control this. However, it wants portlet ids, which Liferay doesn't readily give to you unless you dig through /portal/portal-web, /ext/ext-web, and /plugins/portlets.

So, I wanted to create something which converts a layout (like this) into a quick reference for all the different portlet ids (like this), without digging through the backend and/or mousing over the configuration links.

So, while I'm not familiar enough with Liferay to implement a server-side way to do this, I dug through "view page source" and found that it was pretty straightforward to write a bookmarklet which, when run on a Liferay-based page, gives me the name-to-id mapping for all the portlets currently on the page in a Liferay.Popup.

To use it, convert the script below into a bookmarklet (you can use this one that turns up in a Google search) and paste the bookmarklet into your address bar. Voila! Instant popup containing the portlet ids.
