Leveraging Maven Proxies

Technical Blogs July 12, 2017 By David H Nebinger

Taking a short break from the Vue.js portlet because I had to implement a repository proxy. Since I was implementing it, I wanted to blog about it and give you the option of implementing one too. Next blog post will be back to Vue.js, however...

Introduction

I love love love Maven repositories. I can distribute a small project w/o all of the dependencies included and trust that the receiver will be able to pull the dependencies when/if they do a build.

And I like downloading small project files such as the Liferay source separately from the multitude of dependencies that would otherwise make the download take forever.

But you know what I hate? Maven repositories.  Well, downloading from remote Maven repositories. Especially if I have to pull the same file over and over. Or when the internet is slow or out and development grinds to a halt. Or maintenance on the remote Maven repositories makes them unavailable for some unknown period of time.

Now, of course the Maven (and Gradle and Ivy) tools know about remote repository download issues and will actually leverage a local repository cache so you really only have to download once. Except that they made this support optional, and frankly it gets overlooked a lot. The default Liferay Gradle repository references don't list mavenLocal() in the repo list and it is not used by default.
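If you do want Gradle to check the local cache before going remote, it's a one-line addition to the repositories block; a minimal sketch for a workspace settings.gradle (the existing remote entries stay below it):

buildscript {
  repositories {
    // check the local ~/.m2/repository cache before any remote repository
    mavenLocal()

    // existing remote repository entries (the cdn.liferay.com reference, for example) follow
    maven {
      url "https://cdn.liferay.com/..."
    }
  }
}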

Enterprise or multi-system users (like myself) have additional remote repository issues.  First, a local repo on one user's system is not shared w/ a team or multiple systems, so you end up having to pull the file to your internal network multiple times, even if it has already been pulled before.

A complete list of use cases would include:

  • Anyone developing behind an HTTP proxy (typical corporate users). Using a repository proxy removes the HTTP proxy configuration from your development tools; only the repository proxy needs to know how to connect through the HTTP proxy, your dev tools don't.
  • Anyone developing in teams. Using a repository proxy, you will only pay for downloading to your network once, then all developers will be able to access the artifact from the proxy and not have to download on their own.
  • Anyone using an approved artifact list. Using a repository proxy populated with approved artifacts and versions, you have automatic control over the development environment to ensure that an unapproved version does not get through.
  • Anyone developing in multiple workspaces, projects, or machines. Shares the benefits from the team development where you only download once, then share the artifact.
  • Anyone who suffers network outages. Using a repository proxy, you can pull previously loaded artifacts even when the external network is down. You can't pull new artifacts, but you should have most of the ones you regularly use.
  • Anyone who needs to publish team artifacts. A repository proxy can also hold your own published artifacts, so it is easy to share those with the team and use each other's artifacts as dependencies.
  • Anyone using a secured continuous integration server. CI servers should not have access to the interweb, but they still need to pull dependencies for builds. The repository proxy gives your CI server a repository to pull from without having direct internet access.
  • Anyone behind a slow internet link. Your internal network is typically always faster than your uplink, so having a local proxy cache of artifacts reduces or eliminates lag issues from your uplink.
  • Anyone interested in simplifying repository references. Liferay uses two repositories, a public mavenCentral mirror and a release repository. Then there's mavenCentral, of course, and there are other repositories out there too. By using a proxy, you can have a single repository reference in your development tools, but that reference could actually represent all of these separate repositories as a single entity.

So I'm finally giving in; I'm moving to a repository proxy.

What Is A Repository Proxy?

For those who don't know, a repository proxy is basically a local service you run that proxies artifact requests to remote Maven repositories and caches the artifacts to serve back later on. It's not a full mirror of the external repositories because it doesn't pull and cache everything from the repository, only the artifacts and versions you actually use.

For most of us, the internal network is usually significantly faster than the interweb link, so once the proxy cache is populated you'll notice a big speedup when building new projects. And for team development, the entire team can benefit from leveraging the cache.

Setting Up A Repository Proxy

There are actually a bunch of freely-available repository proxies you can use. One popular option is Artifactory.  But for my purposes, I'm going to set up Apache Archiva. It's a little lighter than Artifactory and can be easily deployed into an existing Tomcat container (Artifactory used to support that, but they've since buried or deprecated using a war deployment).

The proxy you choose is not so important; it just impacts how you go about configuring it for the various Liferay remote repositories.

Follow the instructions from the Archiva site to deploy and start your Archiva instance. In the example I'm showing below, I have a local Tomcat 7 instance running on port 888 and have deployed the Archiva war file per the site instructions. After starting it up, I created a new administrator account and was ready to move on to the next steps.

Once you're in, add two remote repositories. These two repositories are used by Liferay for public and private artifacts; order the liferay-public-releases repository first and the public one second.
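The screenshots showing the repository settings aren't reproduced here, but the two remote repositories are Liferay's release repository and its public mirror of Maven Central. The URLs below are examples only, so verify them against the current Liferay documentation before using them:

liferay-public-releases:  https://repository.liferay.com/nexus/content/repositories/liferay-public-releases/
liferay-public:           https://cdn.lfrs.sl/repository.liferay.com/nexus/content/groups/public/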

In Archiva, you also need to define proxy connectors that link a managed repository to each of the remote repositories.

Once these are set, you can then define a local repository.

Initially, if you browse your artifacts, the list will be empty. Later, after you've been using your repo proxy for a while, browsing should show the cached artifacts.

And finally you may want to use a repository group so you only need a single URL to access all of your individual repositories.
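In the examples that follow, I'm assuming a repository group with the id liferay; Archiva exposes a group under the same style of URL as an individual repository, so a single reference covers everything:

http://localhost:888/archiva/repository/liferay/

If you skip the group, you'd simply list each managed repository as its own maven { } entry in the build files below.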

Configuring Liferay Dev Tools For The Repository Proxy

So this is really the meat of this post: how to configure all of the various Liferay development tools to leverage the repository proxy.

In all examples below, I'm just pointing at a service running on localhost port 888; for your own environment, you'll just make necessary changes to the URL for host/port details, but otherwise the changes will be similar.

Liferay Gradle Workspace

This is handled by changing the root settings.gradle file.  You'll take out references to cdn.liferay.com and instead just point at your local proxy, like so:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.workspace", version: "1.0.40"
  }

  repositories {
    maven {
      url "http://localhost:888/archiva/repository/liferay/"
    }
  }
}

apply plugin: "com.liferay.workspace"

Now if you happen to have other repositories listed, you may want to make sure that they too are pushed up to your repository proxy. No reason to not do so. And by using the repository group, we can simplify the repository list in the settings.gradle file.

This is the only file that typically has repositories listed, but if you have an existing workspace you might have added some references to the root build.gradle file or a build.gradle file in one of the subdirectories.

Liferay Gradle Projects

For standalone Liferay Gradle projects, your repositories are listed in the build.gradle file; these also change to point at the repository proxy:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.1.3"
  }

  repositories {
    mavenLocal()

    maven {
      url "http://localhost:888/archiva/repository/liferay/"
    }
  }
}

apply plugin: "com.liferay.plugin"

dependencies {
  compileOnly group: "org.osgi", name: "org.osgi.core", version: "6.0.0"
  compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

repositories {
  mavenLocal()

  maven {
    url "http://localhost:888/archiva/repository/liferay/"
  }
}

Global Gradle Mirrors

Gradle supports the concept of "init" scripts.  These are global scripts that are executed before tasks and can tweak the build process that the build.gradle or settings.gradle might otherwise define. Create a file in your ~/.gradle directory called init.gradle and set it to the following:

allprojects {
  buildscript {
    repositories {
      mavenLocal()
      maven { url "http://localhost:888/archiva/repository/liferay" }
    }
  }
  repositories {
    mavenLocal()
    maven { url "http://localhost:888/archiva/repository/liferay" }
  }
}

This should normally have all Gradle projects use your local Maven repository first and your new proxy repository second.  Any repositories listed in the build.gradle or settings.gradle file will come after these. This also sets repository settings for both Gradle plugin lookups as well as general build dependency resolution.

Liferay SDK

The Liferay SDK leverages Ant and Ivy for remote repository access. Our change here is to point Ivy at our repository proxy.

Edit the main ivy-settings.xml file to point at the repository proxy:

<ivysettings>
  <settings defaultResolver="default" />

  <resolvers>
    <ibiblio m2compatible="true" name="liferay" root="http://localhost:888/archiva/repository/liferay/" />
    <ibiblio m2compatible="true" name="local-m2" root="file://${user.home}/.m2/repository" />

    <chain dual="true" name="default">
      <resolver ref="local-m2" />

      <resolver ref="liferay" />
    </chain>
  </resolvers>
</ivysettings>

Liferay Maven Projects

For simple Liferay Maven projects, we just have to update the repository like we would for any normal pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  ...

  <repositories>
    <repository>
      <id>liferay</id>
      <url>http://localhost:888/archiva/repository/liferay/</url>
    </repository>
  </repositories>

Liferay Maven Workspace

The Liferay Maven Workspace is a new workspace based on Maven instead of Gradle; you can learn more about it in the Liferay documentation.

In the root pom.xml file, we're going to add our repository entry but we also want to disable using the Liferay CDN as the default repository:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  ...

  <repositories>
    <repository>
      <id>liferay</id>
      <url>http://localhost:888/archiva/repository/liferay/</url>
    </repository>
  </repositories>

  <properties>
    <liferay.workspace.default.repository.enabled>false</liferay.workspace.default.repository.enabled>
    <liferay.workspace.modules.default.repository.enabled>false</liferay.workspace.modules.default.repository.enabled>
  </properties>

Global Maven Mirrors

Maven supports the concept of mirrors; these can be local repositories that should be used in place of other commonly named repositories.  Create (or update) your ~/.m2/settings.xml file and make sure you have the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                          https://maven.apache.org/xsd/settings-1.0.0.xsd">

  <mirrors>
    <mirror>
      <id>repoproxy</id>
      <name>Repo Proxy</name>
      <url>http://localhost:888/archiva/repository/liferay</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>

The <mirrorOf /> tag using the wildcard means that this repository is a mirror for all repositories and Maven builds will go to this repository proxy for all dependencies, build or otherwise.

Configuring Liferay Source For The Repository Proxy

We took care of all of your individual project builds, but what if you have the Liferay source locally and want to build it using the repository proxy?

I actually combined several of the previously listed techniques. For the Maven portion of the build (if there is one), my settings.xml mirror declaration ensures that my repo proxy will be used. For the Gradle portion of the build, I used the init.gradle script (although I had to copy my ~/.gradle/init.gradle into the .gradle directory created inside the source folder, since the ~/.gradle/init.gradle script was ignored).

In addition, in my build.<username>.properties file, I set basejar.url=http://localhost:888/archiva/repository/liferay.

And I also had to set my ANT_OPTS environment variable, so I used "-Xmx8192m -Drepository.url=http://localhost:888/archiva/repository/liferay".
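On Linux or macOS that works out to something like the following (a sketch; the memory setting and build target will depend on your environment):

export ANT_OPTS="-Xmx8192m -Drepository.url=http://localhost:888/archiva/repository/liferay"
ant all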

With all of these changes, I was able to build the liferay-portal from master with all dependencies coming through my repository proxy.

You might be wondering why you would want to go through this exercise. For me, it seemed like an ideal way to pre-populate my proxy with most of the necessary public dependencies and Liferay modules. Sure, this isn't strictly necessary; since we're using a proxy, if we need a new or updated artifact later on, the proxy will happily fetch it for us. It's just a way to pre-populate the cache with a lot of the artifacts you'll be needing.

Publishing Artifacts

The other big reason to use a local repository service is the ability to publish your own artifacts into the repository. When you're working in teams, being able to publish artifacts into the repository so teammates can use the artifacts without building themselves can be quite valuable. This will also force you to address your versioning strategy since you need to bump versions when you publish; I see this as a good thing because it will force you to have a strategy going into a project rather than trying to create one in the middle or at the end of the project.

So next we'll look into the changes necessary to support publishing from each of the Liferay development environments to your new repository service.

Note that we're going to assume that you've set up users in the repository service to ensure you don't get uploads from unauthorized users, so these instructions will include the details for setting up the repository with authenticated access for publishing.

Finally, during the research for this blog post I quickly came to find that there are different ways to publish artifacts, sometimes depending on the repository proxy you're using. For example, the fine folks at JFrog actually have their own Gradle plugin to support Artifactory publication. I didn't try it since I'm currently targeting Archiva, but you might want to look it over if you are targeting Artifactory.

These instructions are staying with the generic plugins so they might work across repositories w/o much change, but obviously you should test them out for yourself.

Liferay Gradle Workspace

The Liferay workspace was actually the most challenging configuration out of all of these different methods if only because tweaking the various build.gradle files to support the subproject publication can take a while.

In fact, Gradle has an older uploadArchives task that has since been replaced by the newer maven-publish plugin, but for the life of me I couldn't get the submodules to work right with it. I could get submodules to use maven-publish if each submodule build.gradle file had the full stanzas for individual publication, but then I couldn't get the gradlew publish command to work in the root project.

So the instructions here purposefully leverage the older uploadArchives task because I could get it working with the bulk of the configuration and setup in the root build.gradle and only minor updates to the submodule build.gradle files.

Add Properties

The first thing we will do is add properties to the root gradle.properties file. This will isolate the URL and publication credentials and keep them out of the build.gradle files. For SCM purposes, you would not want to check this file into revision control as it would expose the publisher's credentials to those who have access to the revision control system.

# Define the URL we'll be publishing to
publishUrl=http://localhost:888/archiva/repository/liferay

# Define the username and password for authenticated publish
publishUsername=dnebing
publishPassword=dnebing1

Root build.gradle Changes

The root build.gradle file is where the bulk of the changes go. By adding the following content to this file, we are adding publishing support to every submodule that might be added to the Liferay Workspace.

// define the publish for all subprojects
allprojects {

  // all of our artifacts in this workspace publish to same group id
  // set this to your own group or override in submodule build.gradle files
  // if they need to change in the submodules.
  group = 'com.dnebinger.gradle'
  
  apply plugin: 'java'
  apply plugin: 'maven'
  
  // define a task for the javadocs
  task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from 'build/docs/javadoc'
  }
  
  // define a task for the sources
  task sourcesJar(type: Jar) {
    classifier = 'sources'
    from sourceSets.main.allSource
  }
  
  // list all of the artifacts that will be created and published
  artifacts {
    archives jar
    archives javadocJar
    archives sourcesJar
  }
  
  // configure the publication stuff
  uploadArchives {
    // disable upload, force each submodule to enable
    enabled false
    
    repositories.mavenDeployer {
      repository(url: project.publishUrl) {
        authentication(userName: project.publishUsername, password: project.publishPassword)
      }
    }
  }
}

The instructions in this file will have you building a sources jar and a javadoc jar. These and the main build artifact will all be published to the repository defined in gradle.properties, using the credentials from that file.

Note that by default the upload is disabled for all subprojects. This forces us to enable the upload in those individual submodules we want to set it up for.

Submodule build.gradle Changes

In each submodule where we want to publish to the repo, there are two sets of simple changes to make.

// define the version that we will publish to the repo as
version = '1.0.0'

// change the group value here if it must differ from the one in the root build.gradle.

dependencies {
  ...
}

// enable the upload
uploadArchives { enabled true }

We specify the version for publishing and enable the uploadArchives for the submodule.

Publish Archives

That's pretty much it.  In both the root directory as well as in the individual module directories you can issue the command gradlew uploadArchives and (if you get a good build) the artifacts will be published to the repository.
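For example (a sketch; :modules:my-module is a placeholder for one of your own submodule paths):

# from the workspace root, publishes every submodule with uploadArchives enabled
./gradlew uploadArchives

# or publish just a single submodule
./gradlew :modules:my-module:uploadArchives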

Liferay Gradle Projects

Liferay Gradle projects get a similar modification to the Liferay Gradle Workspace modifications, but they're a little easier since you don't have to worry about submodules.

From the previous Liferay Gradle Workspace section above, add the same property values to the gradle.properties file. If you don't have a gradle.properties file, you can create one with the listed properties.

build.gradle Changes

The changes we make to the build.gradle file are similar, but are still different enough:

buildscript {
  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.3.9"
  }

  repositories {
    maven {
      url "http://localhost:888/archiva/repository/liferay"
    }
  }
}

apply plugin: "com.liferay.plugin"
apply plugin: 'java'
apply plugin: 'maven'

repositories {
  mavenLocal()

  maven {
    url "http://localhost:888/archiva/repository/liferay"
  }
}

// define the group and versions for the artifacts
group = 'com.dnebinger.gradle'
version = '1.0.3'

dependencies {
  ...
}

// define a task for the javadocs
task javadocJar(type: Jar, dependsOn: javadoc) {
  classifier = 'javadoc'
  from 'build/docs/javadoc'
}

// define a task for the sources
task sourcesJar(type: Jar) {
  classifier = 'sources'
  from sourceSets.main.allSource
}

// list all of the artifacts
artifacts {
  archives jar
  archives javadocJar
  archives sourcesJar
}

// configure the publication stuff
uploadArchives {
  repositories.mavenDeployer {
    repository(url: project.publishUrl) {
      authentication(userName: project.publishUsername, password: project.publishPassword)
    }
  }
}

That's pretty much it.  Like the previous section, this build.gradle file supports creating the javadoc and source jars and will upload those using the same gradlew uploadArchives command.

Liferay SDK

In the Liferay SDK we have to configure Ivy to support artifact publication. And actually this is quite easy because Liferay has already configured Ivy to support their deployments to an internal Sonatype server; we just have to override the properties.

In your build.<username>.properties file in the root of the SDK, you just need to add some property overrides:

sonatype.release.url=http://localhost:888/archiva/repository/liferay/
sonatype.release.hostname=localhost
sonatype.release.username=dnebing
sonatype.release.password=dnebing1

That is your configuration for publishing. After building your plugin, i.e. using the ant war command, just issue the ant publish command to push the artifact to the repository. If you run ant jar-javadoc before the ant publish, your javadocs will be generated so they'll be available for publishing too. There's also an ant jar-source target available, but I didn't see where it was being uploaded to the repository, so that might not be supported by the SDK scripts.
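Putting that together, a typical publish from a plugin directory looks like this:

ant war          # build the plugin
ant jar-javadoc  # optional, generates the javadocs so they get published too
ant publish      # push the artifact(s) to the repository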

One thing I did find, though, is that in each plugin you plan on publishing, you should edit the ivy.xml file in the plugin directory. The ivy.xml file has, as the first tag element, the line:

<info module="hello-portlet" organisation="com.liferay">

The organisation attribute is actually going to be the group id used during publishing so, unless you want all of your plugins to be in the com.liferay group, you'll want to edit the file to set it to what you need it to be.

I did check the templates and there doesn't seem to be a way to configure it.  The templates are all available in the SDK's tools/templates directory, so you could go into all of the individual ivy.xml files and set the value you want; that way, as you create new plugins using the templates, the default value will be your own.

Note that this only applies if you create plugins using the command line; I'm honestly not sure whether, when using the Liferay IDE, the templates in the SDK folder are actually the ones the IDE uses for new plugin project creation.

Liferay Maven Projects

Liferay Maven projects are, well, simple projects based on Maven.  I'm not going to dive into all of the files here, but suffice it to say you add your repository servers, usually in your settings.xml file, and then you add to the pom file:

<distributionManagement>
  <repository>
    <id>archiva.internal</id>
    <name>Internal Release Repository</name>
    <url>http://localhost:888/archiva/repository/internal</url>
  </repository>
  <snapshotRepository>
    <id>archiva.snapshots</id>
    <name>Internal Snapshot Repository</name>
    <url>http://localhost:888/archiva/repository/snapshots</url>
  </snapshotRepository>
</distributionManagement>

With just this addition, you can use the mvn deploy command to push up your artifacts. Additionally, you can add support for publishing the javadocs and sources too: https://stackoverflow.com/questions/4725668/how-to-deploy-snapshot-with-sources-and-javadoc
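Since the repository requires authentication for uploads, the credentials for those repository ids go into your ~/.m2/settings.xml as server entries; a sketch, reusing the example ids from the distributionManagement block above:

<servers>
  <server>
    <id>archiva.internal</id>
    <username>dnebing</username>
    <password>dnebing1</password>
  </server>
  <server>
    <id>archiva.snapshots</id>
    <username>dnebing</username>
    <password>dnebing1</password>
  </server>
</servers>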

Liferay Maven Workspace

For publishing purposes, the Liferay Maven workspace is merely a collection of submodules. This means that Maven pretty much is going to support the publication of submodules after you complete the configuration and pom changes mentioned in the previous Liferay Maven Projects section.

Normally the mvn deploy command will publish all of the submodules, but you can selectively disable submodule publish by configuring the plugin in the submodule pom: http://maven.apache.org/plugins/maven-deploy-plugin/faq.html#skip
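Per that FAQ, skipping deployment for a given submodule is just a plugin configuration in that submodule's pom:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-deploy-plugin</artifactId>
      <configuration>
        <!-- do not publish this submodule during mvn deploy -->
        <skip>true</skip>
      </configuration>
    </plugin>
  </plugins>
</build>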

Conclusion

This blog post ended up being a lot bigger than what I originally planned, but it does contain a great deal of information.

We reviewed how to set up an Archiva instance to use for your repository proxy.

We checked out the changes to make in each one of the Liferay development frameworks to leverage the repository proxy to pull all dependencies from the repository proxy, whether that proxy is Apache Archiva, Artifactory or Nexus.

We also learned how to configure each of the development frameworks to support publishing of the artifacts to share with the development team.

A lot of good stuff, if you ask me. I hope you find these details useful and, of course, if you have any comments, leave them below; if you have questions, post them to the forums and we'll help you out.

As a test, I timed a complete build of the liferay-blade-samples project (https://github.com/liferay/liferay-blade-samples) after deleting my ~/.m2/repository folder (purging my local system repository). Without the repository proxy, downloading all of the dependencies and completing the build took 69 seconds (note I have gigabit ethernet at home, so your numbers are going to vary). After purging ~/.m2/repository again and configuring for the repository proxy (pre-populated with artifacts), downloading all of the dependencies and completing the build took 45 seconds.

That is almost a 35% reduction in build time, and it means that roughly 35% of the total build time was spent downloading artifacts, even over my gigabit ethernet.

If you do not have gigabit ethernet where you're at, it would not surprise me to see the download time increase, taking the percentage reduction up along with it.

Building JS Portlets Part 2

Technical Blogs June 27, 2017 By David H Nebinger

Introduction

In part 1 of the blog series, we started a Vue.js portlet project in the Liferay Workspace, but no JS framework was actually introduced just yet.

We're actually going to continue that trend here in part 2, but in this part we're going to tackle some of the important pieces that we'll need in our JS portlets to fit them into Liferay.

Passing Portlet Instance Configuration

In part 1 our view.jsp page used the portlet instance configuration, displayed as two text lines:

<%@ include file="/init.jsp" %>

<p>
  <b><liferay-ui:message key="vue-portlet-1.caption"/></b>
</p>
<p><liferay-ui:message key="caption.flag.one"/> <%= 
  String.valueOf(portletInstanceConfig.flagOne()) %></p>
<p><liferay-ui:message key="caption.flag.two"/> <%= 
  String.valueOf(portletInstanceConfig.flagTwo()) %></p>

We're actually going to continue this, but now we'll use an <aui:script /> tag to expose the configuration in a new addition to the JSP:

<aui:script>

var <portlet:namespace/>portletPreferences = {
	flag: {
		one: <%= portletInstanceConfig.flagOne() %>,
		two: <%= portletInstanceConfig.flagTwo() %>
	}
};

</aui:script>

Using <aui:script />, we're embedding JavaScript into the JSP.

Inside of the script tag, we declare a new variable with the name portletPreferences (although this name is namespaced to prevent a name collision with another portlet on the page).

We initialize the variable as an object which contains our portlet preferences. In this example the individual flags have been set underneath a contained flag object, but the structure can be whatever you want.

The goal here is to create a unique JavaScript object that will hold the portlet instance configuration values. We need to extract them in the JSP because once the code is running in the browser, it will not have access to the portlet config. Using this technique, we get all of the prefs into a JavaScript variable so they will be available to the running JS portlet in the browser.
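On the JSP side, any later <aui:script /> block (or template code the JSP renders while it still has access to the <portlet:namespace /> value) can read the object like any other JavaScript variable; a minimal sketch, assuming the flag structure shown above:

<aui:script>

var prefs = <portlet:namespace/>portletPreferences;

if (prefs.flag.one) {
	// react to flag one being enabled
}

</aui:script>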

Passing Permissions

We're actually going to continue this technique to pass permissions from the back end to the JS portlet:

var <portlet:namespace/>permissions = {
	<c:if test="<%= Validator.isNotNull(permissionChecker) %>">
		isSignedIn: <%= permissionChecker.isSignedIn() %>,
		isOmniAdmin: <%= permissionChecker.isOmniadmin() %>,
		hasViewOrgPermission: <%= permissionChecker.hasPermission(null, 
		  Organization.class.getName(), Organization.class.getName(), 
		  ActionKeys.VIEW) %>
	</c:if>
	<c:if test="<%= Validator.isNull(permissionChecker) %>">
		isSignedIn: false,
		isOmniAdmin: false,
		hasViewOrgPermission: false
	</c:if>
};

Here we're trying to use the permission checker added by the init.jsp's <liferay-theme:defineObjects /> tag.

Although I'm confident I should never get a null permission checker instance, I'm using defensive programming to ensure that I can populate my permissions object even if the checker is not available.

When it is, I'm basically populating the object with keys for the permissions, using scriptlets to evaluate whether the current user has each permission.

Since we are building this as a JavaScript object, we can collect all of the permissions while the JSP renders the initial HTML fragment and ship them to the browser, where the portlet can use them to decide what to show and what to allow editing on.

The only thing you need to figure out is what permissions you will need on the front end; once you know that, you can use this technique to gather those permissions and ship them to the browser.

NOTE: This does not replace the full use of the permission checker on the back end when processing requests. This technique passes the permissions to the browser, but as we all know it is easy for hackers to adjust these settings once retrieved in order to try to circumvent the permissions. These should be used to manage the UI, but in no way, shape or form should the browser control permissions when invoking services on the backend.

I18N

The previous sections have been fairly straightforward; we have data on the backend (prefs and permissions) that we need on the front end, and really only one way to pass it.

Handling the I18N in the JS portlets comes with a couple of alternatives.

It would actually be quite easy to continue the technique we used above:

var <portlet:namespace />languageKeys = {
	accessDenied: '<liferay-ui:message key="access-denied" />',
	active: '<liferay-ui:message key="active" />',
	localCustomMessage: '<liferay-ui:message key="local-custom-message" />'
};

I can tell you that this technique is extremely tedious. I mean, most apps have tens if not hundreds of language keys for buttons, messages, labels, etc. Itemizing them here will get the job done, but it is a lot of work.

Fortunately we have an alternative. Liferay actually provides the Liferay.Language.get(key) JavaScript function. This function will conveniently call the backend to translate the given key parameter using the backend language bundle resolution. It is backed by a cache on the browser side, so multiple calls for the same key will only hit the backend once.

So, rather than passing the 'access-denied' message like we did above, we could just remember to replace hard-coded strings from our JS portlet's template code with calls like Liferay.Language.get('access-denied'). We would likely see the following for Vue templates:

var app5 = new Vue({
  el: '#app-5',
  data: {
    message: Liferay.Language.get('access-denied')
  },
  methods: {
    reverseMessage: function () {
      this.message = this.message.split('').reverse().join('')
    }
  }
})

Although this is convenient, it too has a problem. Because of how the backend resolves keys, only resource bundles registered with the portal can be resolved. If you are only going after Liferay known keys, that's no problem. But to use your local resource bundle, you will need to register it into the portal per instructions available here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-language-keys#modifying-liferays-language-keys. Note that you're not overriding so much as adding your local resource bundle into the mix to expose your language bundle.

So actually to keep things simple I would recommend a mixed implementation. For existing Liferay keys, use the Liferay.Language.get() method to pull the value from the backend. But instead of registering an additional resource bundle in the portal, just use the script-based technique above to pass your local keys.

This will minimize your coding impact, but if your local bundle has tens or hundreds of your own keys, well you might find it easier to just register your language bundle and stick with Liferay.Language.get().

Conclusion

What what? We're at the end of part 2 and still no real Javascript framework integration?

That is correct, we haven't pulled in Vue.js yet, although we will be doing that in Part 3.

I think that it is important to note that from our original 6 prerequisites, we have already either fully or partially satisfied:

  • Prereq #1 - Using the Liferay Workspace.
  • Prereq #6 - Using the Liferay Environment.

The big conclusion here is that we can use this pattern to preload data from the portal side to include in the JS applications, data which is otherwise not available in the browser once the JS app is running.  You can apply this pattern for collecting and passing data that you want available to the JS app w/o providing a runtime fetch mechanism or exposing portal/portlet data.

Note: Of course in my JSP/Javascript stuff above, there is no verification of JS, no proper handling of string encoding and other protections you might use to ensure you are not serving up some sort of JS attack. Just because I didn't present it, doesn't mean that you shouldn't include it in your production apps.

See you in Part 3...

 

Building JS Portlets Part 1

Technical Blogs June 27, 2017 By David H Nebinger

Introduction

In Liferay 7 CE / Liferay DXP, there are new facilities in place to help us create JS portlets. In this blog series I'm going to present a new project to demonstrate how to build Vue.js portlets.

Vue.js is a lightweight JS framework similar to React or Angular or ... I'm actually picking Vue.js for this series not so much because I think it is better than the other frameworks, but mostly because I want to focus on building JS portlets and I don't want to get hung up on perfect React or Angular implementation. I figured that by picking a newer framework I could present topics that affect all implementations and avoid the framework debates.

And who knows, maybe this will start a big trend of adopting Vue.js in Liferay. We'll just see how it goes.

Prerequisites

So I'm going to lay down some prerequisites, but they are not requirements per se; they're just things that I want to have as a regular portlet developer.

So prereq #1 is that the portlet has to fit into my Liferay Workspace. I mean, I'm building all kinds of modules in there: JSP fragment bundles, Service Builder modules, Liferay MVC portlet modules, etc. I don't want to maintain two separate repositories for normal stuff and JS stuff. So the JS portlets must fit into the Gradle-based Liferay Workspace for the general build process. I'm okay with the module leveraging other tools (gulp, npm, etc.) the portlets might need, but the Gradle build must rule them all.

Prereq #2 is that I need open and unfettered access to the internet. I know a lot of developers sit behind proxies and that's okay, as long as they can get to the web for plugins and projects from GitHub, open maven repositories, NPM repositories, etc. If you find yourself in a secured environment that needs approval for all external tools, libraries, etc., you might want to stop now and rethink following along here. All of the JS-based portlets are going to leverage a lot of new stuff from NPM, new build tools such as Gulp, new build plugins from Liferay GitHub repositories, etc.  There's going to be a long list of external sites to pull from, and if you need to get permissions for each one there is going to be a ton of red tape in your future. You might be better served just sticking with Liferay MVC portlet implementations and rely on the built-in SennaJS support to provide the ajaxy-sort of responsiveness we all expect now.

For the record, there really is nothing wrong with sticking with Liferay MVC and SennaJS. When you do a Liferay DXP trial walkthru and use the portlets on a page, you'll see that there are few full page refreshes, and when there are they are usually a result of page navigation within the portal. The portlets themselves are still using the regular Portlet Lifecycle, they're just getting invoked via AJAX and the browser is going to be doing partial DOM updates in the page. So you can get most of the benefits from the new whiz-bang JS frameworks without retooling yourself or your team.

Prereq #3 concerns deployment; I'd like to be able to build and deploy my portlet independently without requiring supporting theme work. As of right now I'm not 100% confident that will work or even that it is the best path, but it is something that I'd like to shoot for. To me, the more things that make a portlet deployment difficult, the more things there are that can go wrong during deployment.

Prereq #4 is that I am really only targeting JS portlets. I have no plan on co-running my Vue.js apps as both portlets and straight-up web apps, so I have no plans on testing, styling or running these guys outside of the portal.

Prereq #5 is that since they are running within the portal, I expect the UI to be consistent with the rest of the portal; buttons should look the same, fonts, etc. I don't want to have a portlet that stands out just because it is from some other framework type.

And finally, prereq #6 is that they must take advantage of the Liferay environment. I expect them to support portlet preferences via the Configuration panel. I expect them to respect the Liferay permissioning framework. I expect them to support localization through standard Liferay techniques. After all, I don't want to be doing things one way for standard Liferay stuff and some other way for JS portlets.

So let's get started...

Starting The Project

As per usual I'm going to be sticking with the Blade CLI for everything to highlight that you don't need an IDE to get these things started.

blade init liferay-vuejs

This will give me the new Liferay Workspace to build everything in. If you have an existing workspace, you're all good.

So this is really all you need to do from a workspace level.  Everything else goes into the individual modules.

I am planning on demonstrating remote services in the JS portlet, so eventually our modules folder will contain some Service Builder modules as well as a REST module, but we'll worry about creating those later.

In our modules folder we want to start the JS portlet itself.  Navigate to liferay-vuejs/modules to create the new module:

blade create -t mvc-portlet -p com.dnebinger.vue vue-portlet-1

So you might be wondering why we're using the Liferay MVC portlet template since we're building a JS portlet.

We use the Liferay MVC portlet as a template because we need to be able to kick off the JS inside of our portlet frame, we need to be able to declare and pull in resources, etc. The Liferay MVC portlet template will give that to us, plus a lot more.

Creating The Portlet Instance Preferences

Since we have a new Liferay MVC portlet, let's take a moment to create a configuration page.

What? We're not starting with the JS directly?

Well, no. Like I said in the prereqs, one of my goals is to include portlet prefs in order to be a proper Liferay portlet. To support that, we'll create a simple JSP configuration page for our portlet and worry about wiring it into JS later on...

Our configuration is going to be pretty simple. We're just going to have a couple of checkboxes to capture two flag values. Here's the full src/main/resources/META-INF/resources/configuration.jsp file:

<%@ include file="/init.jsp" %>

<%
  portletInstanceConfig = ConfigurationProviderUtil.getConfiguration(
      VuePortlet1PortletInstanceConfiguration.class,
      new ParameterMapSettingsLocator(request.getParameterMap(),
          new PortletInstanceSettingsLocator(themeDisplay.getLayout(), 
              portletDisplay.getPortletResource())));
%>
<liferay-portlet:actionURL portletConfiguration="<%= true %>" var="configurationActionURL" />

<liferay-portlet:renderURL portletConfiguration="<%= true %>" var="configurationRenderURL" />

<aui:form action="<%= configurationActionURL %>" method="post" name="fm">
  <aui:input name="<%= Constants.CMD %>" type="hidden" value="<%= Constants.UPDATE %>" />
  <aui:input name="redirect" type="hidden" value="<%= configurationRenderURL %>" />

  <div class="portlet-configuration-body-content">
    <div class="container-fluid-1280">
      <aui:fieldset-group markupView="lexicon">
        <aui:fieldset>
          <aui:input label="config.flag.one" name="preferences--flagOne--" 
              type="toggle-switch" value="<%= portletInstanceConfig.flagOne() %>" />
          <aui:input label="config.flag.two" name="preferences--flagTwo--" 
              type="toggle-switch" value="<%= portletInstanceConfig.flagTwo() %>" />
        </aui:fieldset>
      </aui:fieldset-group>
    </div>
  </div>

  <aui:button-row>
    <aui:button cssClass="btn-lg" type="submit" />
  </aui:button-row>
</aui:form>

This is going to give us two boolean portlet preferences leveraging the new Liferay Config Admin services. They will be instance parameters so, if we choose to create an instanceable portlet, each one will have its own preferences.
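The configuration interface itself isn't shown in this post; a minimal sketch of what it could look like, assuming the standard Liferay metatype pattern (the class and package names simply match the references in the JSP files in this post):

package com.dnebinger.vue.portlet.configuration;

import aQute.bnd.annotation.metatype.Meta;

// Minimal sketch of a portlet instance configuration interface. A real version
// would also typically carry Liferay's @ExtendedObjectClassDefinition annotation
// to place it in the right configuration category and scope.
@Meta.OCD(
	id = "com.dnebinger.vue.portlet.configuration.VuePortlet1PortletInstanceConfiguration"
)
public interface VuePortlet1PortletInstanceConfiguration {

	@Meta.AD(deflt = "false", required = false)
	public boolean flagOne();

	@Meta.AD(deflt = "false", required = false)
	public boolean flagTwo();
}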

We will use our view.jsp page to show the values:

<%@ include file="/init.jsp" %>

<p>
  <b><liferay-ui:message key="vue-portlet-1.caption"/></b>
</p>
<p><liferay-ui:message key="caption.flag.one"/> <%= 
  String.valueOf(portletInstanceConfig.flagOne()) %></p>
<p><liferay-ui:message key="caption.flag.two"/> <%= 
  String.valueOf(portletInstanceConfig.flagTwo()) %></p>

Since we're showing the JSP, here's the init.jsp:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/portlet_2_0" prefix="portlet" %>
<%@ taglib uri="http://liferay.com/tld/aui" prefix="aui" %>
<%@ taglib uri="http://liferay.com/tld/portlet" prefix="liferay-portlet" %>
<%@ taglib uri="http://liferay.com/tld/theme" prefix="liferay-theme" %>
<%@ taglib uri="http://liferay.com/tld/ui" prefix="liferay-ui" %>
<%@ taglib uri="http://liferay.com/tld/frontend" prefix="liferay-frontend" %>

<%@ page import="com.dnebinger.vue.portlet.configuration.VuePortlet1PortletInstanceConfiguration" %>
<%@ page import="com.liferay.portal.kernel.module.configuration.ConfigurationProviderUtil" %>
<%@ page import="com.liferay.portal.kernel.settings.PortletInstanceSettingsLocator" %>
<%@ page import="com.liferay.portal.kernel.util.Constants" %>
<%@ page import="com.liferay.portal.kernel.settings.ParameterMapSettingsLocator" %>

<liferay-frontend:defineObjects />

<liferay-theme:defineObjects />

<portlet:defineObjects />

<%
  VuePortlet1PortletInstanceConfiguration portletInstanceConfig =
    ConfigurationProviderUtil.getConfiguration(VuePortlet1PortletInstanceConfiguration.class,
      new PortletInstanceSettingsLocator(themeDisplay.getLayout(), portletDisplay.getId()));
%>

Okay, so we now have basically a simple Liferay MVC portlet project that has portlet preferences and initial support for the language bundle (as seen in the view.jsp file).

If we build and deploy the portlet, the view simply renders the two caption lines with the current flag values; change one of the toggles on the configuration panel and the displayed value follows.

Conclusion

Hey, wait a minute, there's no Javascript frameworks in here! I don't see any Vue.js stuff, no node, in fact this looks like a simple Liferay MVC portlet! What's going on here?

Well, I should have said that this is actually going to be a blog series. In this first post, we're basically going to stop here since our portlet is ready to start overlaying the key parts for building out our Javascript portlet.

You can find the sample project checked in here: https://github.com/dnebing/liferay-vuejs.

See you in Part 2!

Responsive Responsibility

General Blogs June 26, 2017 By David H Nebinger

According to Liferay:

Digital Experience Platform (DXP) is an emerging category of enterprise software seeking to meet the needs of companies undergoing digital transformation, with the ultimate goal of providing better customer experiences.

The focus here is the customer experience.  Better experiences, regardless of whether they come to you from a desktop computer or a mobile device, lead to better outcomes (happier customers, returning viewers, more sales, etc.). That is why the DXP is garnering more focus from the enterprise than ever before.

So part of the better customer experience is the recognition that your site, whether as an intranet, extranet, corporate internet site or even a B2B bridge, must acknowledge the strength and reach of mobile as a platform, and mobile support must be considered as an unwritten requirement for our projects.

For most developers, this often gets reduced down to needing to support responsive in the website.  If the website supports responsive design, as the screen size is reduced the view changes to better serve the device.

It is easy to assume that the theme and layout developer(s) are the only ones responsible for managing the responsive aspects of your website. After all, that is the primary place where your responsive design is implemented to manage how your navigation is presented, how the multi-column layout adjusts to the shrinking size, etc. That is handled by the theme and layout developers, so they are the only ones that have to worry about responsive design.

But that really isn't true. Responsive is a responsibility of content creators and portlet developers too.

As a content creator, you may have some fancy ADTs or web content templates that look great under the desktop view. You have your content, you have a side bar with some details, ... But here's a question - have you tried looking at your page on a mobile device? Does your side bar or localized grid look as good in the mobile view as it does on the desktop view?

As a content creator, it is your responsibility to incorporate responsive and mobile-first design rules into your ADTs and templates. After all, the theme/layout developer is only going to manage the outer column that your content is in, they will not have any responsibility for restructuring content within the column, that is your job.

And portlet authors are not off the hook here either. As a portlet developer it is easy to fall into a pattern of testing your portlet on the desktop and verifying it works there. Even the testing team, unless they have a mandate to test for mobile, can miss this kind of verification. But if you currently have mobile support requirements or you believe that you will have mobile in your future, developing with responsive in mind and testing for responsive and a good mobile-friendly experience is important.

If you use the Liferay AUI tag framework appropriately, you're likely a good way towards being mobile friendly. If you're using another framework, well then responsive is totally up to you and your framework.

In either case, you should take some time to test your app, including your tables and forms, in a mobile view and verify that it is still usable. After all, a form that looks great and works great in a desktop orientation can easily become unusable in a mobile presentation. Take Liferay's OOTB login portlet - that portlet works whether shown in a full page view or if it is dropped in the small column of the 30/70 layout - it is fully responsive and adjusts to the space it has available. Are your portlets equally as responsive?

How do you know when responsive is your responsibility? There are some key indicators that you can look for. Are you using a one column layout? That is the clearest indication that you are taking over responsive since you are trying to take advantage of the full page width. Even if you are only targeting the large column (the 70 in the 70/30 or 30/70 layouts for example), you also have responsive aspects that you should consider.

So responsive is everyone's responsibility. It's also one that many of us forget to consider unless it is a requirement of the project that we're currently on.

Incorporating responsive tests and creating sites that are mobile-first/mobile-friendly in every phase of development or content creation is a key aspect of implementing your own digital experience platform.

Remember, this is all about the user's digital experience, not your development project convenience.

Disabling LPKG Index Validation

Technical Blogs June 21, 2017 By David H Nebinger

Just a quick blog today...

When you start up LR 7 CE/LR DXP, you'll often see it stop while trying to validate LPKGs. This is a security measure which is used to verify that the LPKG files have not been tampered with.

In development, though, this delay is just painful.

Fortunately it can be disabled. Add the following line to portal-ext.properties:

module.framework.properties.lpkg.index.validator.enabled=false

That's all there is to it.

Note that I probably would not do this in my production environments. It is, after all, in there to protect your environment. Disabling it in production removes that small bit of protection and doesn't seem wise.

Fixing Module Package Access Modifiers

Technical Blogs June 16, 2017 By David H Nebinger

If you're a Java Architect or Senior Developer, you know just how important the Java access modifiers are.

Each choice about what to make public, protected, private or package protected is important from an architectural perspective. If you make the wrong choices you can expose too much of your implementation or not enough, you can give subclasses unlimited ability to change internal functionality or little access at all.

If you've ever worked on a library or framework project, either public or a company internal project, you know that these decisions are even more important. With these types of projects, if you make the wrong decision about access modifiers you can end up with irate users or an unused library.

So there exist two different sets of rules, one for app developers and one for lib developers.  App developers are going to use a lot of private to hide stuff and public to expose; they'll define pojos w/ getters but no setters and perhaps a package private constructor to initialize final fields. Methods are typically public or private, and protected only comes into play if there are known plans to subclass.

Library developers swing the other way, allowing for every class to potentially be subclassed, extensive use of protected over private, no use of package protected, etc. Basically implementation details will be protected from those using the class yet exposed for subclasses to extend or override.

Rules for lib developers are not hard and fast, some libraries certainly do a better job than others for exposing necessary access points.

I'm sure most of us have been there... Using a class in someone's jar where they declare, for example, a private field with a public getter but no setter, resulting in a class that is difficult to extend and customize. Then we have to break out our Reflection skills to access the field, change the access and update the value. Obviously we don't want to do these things, but we get forced into it because the library developer used application developer rules when defining the access modifiers.

OSGi Access Modifiers

OSGi bundles have their own set of "access modifiers".  We've seen those in the bnd.bnd files; at a package level you can choose to export packages or mark them as private.

Choices you make along these lines affect what you can extend/override in other bundles. If you mark a package as private, the classes are not available to another bundle to leverage and use.

Just like app vs lib developer access modifier rules, there is a similar distinction for OSGi application bundle developer rules and OSGi library bundle developer rules.  For the app bundle developer, packages are exported if they can be used by other modules, otherwise they are private to protect them. For lib bundle developers, you're going to export pretty much every package because you can never know how your library module will be used.

How Liferay Gets This Wrong

Probably my biggest complaint with the Liferay 7 CE / Liferay DXP modules is that I believe the developers were creating modules as though they are app bundle developers when, in fact, they should have been library bundle developers.

For example, the Liferay chat portlet... The Liferay chat portlet does not export a single package; every package in the module is private. As an application portlet bundle developer, this is probably exactly the decision I would make to protect my code, it won't need to be extended or overridden, as the developer if that comes up in the future I can just do it.

But the Liferay developers, they should not have built it this way in my opinion. Me, I may have a need to make some significant changes to the chat portlet, not just for JSP changes but perhaps also some logic. From that point of view, the Liferay chat portlet is a library bundle, a "base" bundle that I want to be able to extend or override. The com.liferay.chat.web.portlet.ChatPortlet is not full Liferay MVC, so all business logic is tied up in that class. If I want to customize the chat portlet, I need to copy the class and make my change and hope that Liferay doesn't update the portlet.

In order to complete my customization, I might need to change a method in the ChatPortlet itself. Sure, with OSGi I can replace the OOTB portlet class with my own, but I really want to be able to do something like:

@Component(...)
public class MyChatPortlet extends ChatPortlet {...}

This would allow me to replace the OOTB portlet using a higher service ranking for mine, yet I can keep as much of the original logic as-is without taking over responsibility for maintaining the full class myself.

For another concrete example, take the Liferay Login portlet.  This portlet is full-on Liferay MVC so, if I want to override the create account action, I just need to register an instance of MVCActionCommand with the right mvc.command.name and a higher service ranking. But again, since most of the packages in the Liferay Login portlet are private, I cannot do something like:

@Component(...)
public class CustomCreateAccountMVCActionCommand extends CreateAccountMVCActionCommand {...}

If my requirement is just to do some additional work in the addUser() method, I don't want to copy the whole Liferay class just to be able to tweak one method.  What happens when the next release comes out? I'd have to copy the whole class in and release again. At least by extending I only have to worry about keeping in sync the stuff I change, everything else is extended.

Can We Fix It?

As Bob the Builder says, "Yes We Can!", and it turns out it is really, really easy!

Let's say we want to tackle being able to extend Liferay's CreateAccountMVCActionCommand class. Our requirement is that we need to log whenever an account is being created. I know, pretty lame, but the point here is to extend a class which Liferay didn't plan on our extending - once we're over that hump, any additional requirements will be easy to tackle.

So let's get started. The first thing we need is a Fragment bundle. That's right, you read correctly, a Fragment bundle.

blade create -t fragment -h com.liferay.login.web -H 1.0.0 open-liferay-web

That gets us started. We need to open the bnd.bnd file and we're going to be doing two basic things:

  1. Copy in most of the stuff from the original bnd.bnd file. The only change we want to make is with the exported packages, so we want to keep everything else.
  2. Change the exported package line (or add one) to include the packages we want to export, and we'll also change to a version range.

I've gone ahead and done this, and here's what I ended up with:

Bundle-Name: Open Liferay Login Web
Bundle-SymbolicName: open.login.web
Bundle-Version: 1.1.19
Fragment-Host: com.liferay.login.web;bundle-version="[1.0.0,2.0.0)"

Export-Package: com.liferay.login.web.constants,\
	com.liferay.login.web.internal.portlet.action
Import-Package:\
	javax.net.ssl,\
	\
	*
Liferay-Releng-Module-Group-Description:
Liferay-Releng-Module-Group-Title:

So you can see that I satisfied #1 above: I've kept the Import-Package statement and the Liferay-Releng lines.

For #2, my export package statement was updated so now we're going to be exporting the com.liferay.login.web.internal.portlet.action package. This will allow us to subclass Liferay's action command by making it visible.

I also tweaked the Fragment-Host version. Instead of using a single version, I've changed it to a version range. Why? Because this fragment bundle doesn't care what version is actually deployed, we're just planning on exporting the package regardless of version.

And that's it! See, I said it was easy. You don't really need any other files, you're basically just going to be building and deploying a jar w/ the overriding OSGi manifest information.

Testing

Testing is also kind of easy. We know we want to extend the CreateAccountMVCActionCommand, so we just create a bundle and specify the contents. I did that already, too, and here's what I got:

@Component(
	property = {
		"javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN,
		"javax.portlet.name=" + LoginPortletKeys.LOGIN,
		"mvc.command.name=/login/create_account",
		"service.ranking:Integer=100"
	},
	service = MVCActionCommand.class
)
public class CustomCreateAccountMVCActionCommand extends CreateAccountMVCActionCommand {

	@Override
	protected void addUser(ActionRequest actionRequest, ActionResponse actionResponse) throws Exception {
		_log.info("About to create a new account.");

		super.addUser(actionRequest, actionResponse);
	}

	@Reference(unbind = "-")
	protected void setLayoutLocalService(
		LayoutLocalService layoutLocalService) {
		super.setLayoutLocalService(layoutLocalService);
	}

	@Reference(unbind = "-")
	protected void setUserLocalService(UserLocalService userLocalService) {
		super.setUserLocalService(userLocalService);
	}

	@Reference(unbind = "-")
	protected void setUserService(UserService userService) {
		super.setUserService(userService);
	}

	@Reference(unbind = "-")
	protected void setAuthenticatedSessionManager(AuthenticatedSessionManager sessionMgr) {
		update("_authenticatedSessionManager", sessionMgr);
	}

	@Reference(unbind = "-")
	protected void setListTypeLocalService(ListTypeLocalService listTypeLocalService) {
		update("_listTypeLocalService", listTypeLocalService);
	}

	@Reference(unbind = "-")
	protected void setPortal(Portal portal) {
		update("_portal", portal);
	}

	protected void update(final String fieldName, final Object value) {
		try {
			Field f = getClass().getSuperclass().getDeclaredField(fieldName);

			f.setAccessible(true);

			f.set(this, value);
		} catch (IllegalAccessException | NoSuchFieldException e) {
			_log.error("Error updating " + fieldName, e);
		}
	}

	private static final Log _log = LogFactoryUtil.getLog(CustomCreateAccountMVCActionCommand.class);
}

Oh, crap. What is all of this junk?

Well, first let's get the necessary stuff out of the way. Our @Component annotation has the necessary properties and service ranking so OSGi will use our action command class, which now extends Liferay's CreateAccountMVCActionCommand. We also have the overriding addUser() method to log when we are about to create an account, so we have satisfied our requirement.

The rest of the class is there to inject the OSGi references that the superclass expects. Some of these are easy, such as the layout service and the two user services, because the superclass exposes protected setters for them. The others are harder: the authenticated session manager, the list type service, and the portal instance.

Remember I started this blog saying that the rules for a library developer are different from those for an app developer, and that when you have a bad library class you're left using Reflection to update the superclass? Yep, here's an example. Now, I can't really fault Liferay here because they created the module as though they were app module developers, so the fact that they used app developer rules here is no surprise. Fortunately, though, I could use Reflection to get to the superclass fields and update them appropriately.

Conclusion

So, when we build and deploy these two modules and create a new account, we find we have been successful:

13:30:31,360 INFO  [http-nio-8080-exec-6][CustomCreateAccountMVCActionCommand:39] About to create a new account.

Through a simple (really simple) fragment bundle we were able to export a package that Liferay did not export. From there, we can extend classes from that package to introduce our own modifications without having to copy everything from the original.

It's important to note the hurdles we had to clear for the OSGi wiring, especially the Reflection usage to update the superclass fields.

If you're going to go down this path, you will be doing things like this. There's no way around it: not all @Reference usage in Liferay classes is tied to setter methods; when it is, great, but when it's not you'll have to pry the fields open yourself.

Hope this helps you on your Liferay 7 CE / Liferay DXP developer journey!

 

REST Custom Context Providers

Technical Blogs June 16, 2017 By David H Nebinger

So a question came up today about how to access the current user inside a REST method body.

My friend, Andre Fabbro, was trying to build out the following application:

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName() {

        User user = ????;

        return user.getFullName();
    }
}

He was stuck trying to get the current user in order to finish the whoami handler.

So, being a long-time Liferay guy, I fell back on what I knew, and I pointed him towards the PrincipalThreadLocal and its getName() method to get the current user id.  Of course ThreadLocals kind of smell, they're almost like global variables, but I knew it would work.
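
For reference, that ThreadLocal-based approach is a small sketch along these lines (PrincipalThreadLocal.getName() holds the current user id for an authenticated request):

@GET
@Path("/whoami")
@Produces(MediaType.APPLICATION_JSON)
public String getUserFullName() throws PortalException {

    // the thread local holds the current user id as a string
    long userId = GetterUtil.getLong(PrincipalThreadLocal.getName());

    User user = UserLocalServiceUtil.getUser(userId);

    return user.getFullName();
}

It works, but it couples the REST method to portal internals, which is why the suggestion below is the nicer option.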

My other friend, Rafael Oliveira, showed us both up and introduced me to the new concept of a custom context provider. You see, he knew that sometime soon a new module, com.liferay.portal.workflow.rest was coming and it was going to bring with it a new class, com.liferay.portal.workflow.rest.internal.context.provider.UserContextProvider. He did us one better by providing an implementation of Andre's app using @Context and the new UserContextProvider:

import javax.ws.rs.core.Context;

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName(@Context User user) {
        return user.getFullName();
    }
}

I was kind of blown away having learned something completely new with DXP and I needed to know more.

Before going on, though, all credit for this blog post goes to Rafael, all I'm doing here is putting it to electronic paper for us all to use for Liferay REST application implementations.

Basic @Context Usage

So when you create a new module using "blade create -t rest myapp", BLADE creates a new JAX-RS-based RESTful application that you can build and deploy as an OSGi module. Using JAX-RS standard conventions, you can build out your RESTful methods using common annotations and (hopefully) best practices.

JAX-RS provides the javax.ws.rs.core.Context annotation, which is used to inject common servlet-based values. Using @Context, you can define method parameters that are not part of the RESTful call but are injected by the JAX-RS framework, kind of like the automagic ServiceContext injection in ServiceBuilder remote services.

Out of the box, the JAX-RS @Context annotation supports injecting the following parameter types into methods:

  • javax.ws.rs.core.Application - Provides access to metadata information on the JAX-RS application.
  • javax.ws.rs.core.UriInfo - Provides access to application and request URI information.
  • javax.ws.rs.core.Request - Provides access to the request used for the method.
  • javax.ws.rs.core.HttpHeaders - Provides access to the HTTP header information for the request.
  • javax.ws.rs.core.SecurityContext - Provides access to the security-related information for the request.
  • javax.ws.rs.ext.Providers - Provides runtime lookup of provider instances.

To use these, you just add appropriately decorated parameters to the REST method. If necessary, we could easily add a method to the application above such as:

@GET
@Path("/neato")
@Produces(MediaType.APPLICATION_JSON)
public String getStuff(@Context Application app, @Context UriInfo uriInfo, @Context Request request,
        @Context HttpHeaders httpHeaders, @Context SecurityContext securityContext, @Context Providers providers) {
    ....
}

The above getStuff() method will handle all requests to the /neato path, but none of the parameters come from the URL or the query string; they are all injected automagically by JAX-RS.

Custom @Context Usage

So these types are really nice, but they really don't do anything for our Liferay integration. What would be really cool is if we could use @Context to inject some Liferay parameters.

And we can! As Rafael pointed out, there is a new module in the pipeline for workflow to invoke RESTful methods on the backend. The new module is the portal-workflow-rest project. I'm not sure, but I believe this is going to be part of the upcoming GA4 release, though don't hold me to that.

Once available, this project will provide three new types that can be injected into RESTful method parameters:

  • com.liferay.portal.kernel.model.Company - The Liferay Company associated with the request.
  • java.util.Locale - The locale associated with the request.
  • com.liferay.portal.kernel.model.User - The Liferay User associated with the request.

So, like the out of the box parameters, we could extend our getStuff() method with these parameters too:

@GET
@Path("/neato")
@Produces(MediaType.APPLICATION_JSON)
public String getStuff(@Context Application app, @Context UriInfo uriInfo, @Context Request request,
        @Context HttpHeaders httpHeaders, @Context SecurityContext securityContext, @Context Providers providers,
        @Context Company company, @Context Locale locale, @Context User user) {
    ....
}

Just pick from all of these different available types to get the data you need and run with it.

Remember these will not be available in GA3 nor in DXP just yet - I'm sure they'll make it in soon, but I'm not aware of the schedule for either product line.

Writing Custom Context Providers

So to me, the biggest value of this new module is this package: https://github.com/liferay/liferay-portal/tree/master/modules/apps/forms-and-workflow/portal-workflow/portal-workflow-rest/src/main/java/com/liferay/portal/workflow/rest/internal/context/provider

Why? Because they expose how we can write our own custom context provider implementations so we can inject custom parameters into REST methods.

Say, for example, that we want to inject a ServiceContext instance. I'm not sure if the portal source already has one of these fellas, but if so let's pretend it doesn't exist and we want to write our own. Where are we going to start?

So first you need a project, we'll create a blade workspace:

blade init custom-context-provider

We also need a new module to develop, so we'll change to the custom-context-provider/modules directory to create an initial module:

blade create -t api -p com.dnebinger.rest.internal.context.provider service-context-context-provider

This will give us a nearly empty API module. We'll clean out most of the generated files and end up with the com.dnebinger.rest.internal.context.provider.ServiceContextContextProvider class:

package com.dnebinger.rest.internal.context.provider;

import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.service.ServiceContext;
import com.liferay.portal.kernel.service.ServiceContextFactory;
import org.apache.cxf.jaxrs.ext.ContextProvider;
import org.apache.cxf.message.Message;
import org.osgi.service.component.annotations.Component;

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.ext.Provider;

/**
 * class ServiceContextContextProvider: A custom context provider for ServiceContext instantiation.
 *
 * @author dnebinger
 */
@Component(immediate = true, service = ServiceContextContextProvider.class)
@Provider
public class ServiceContextContextProvider implements ContextProvider<ServiceContext> {
	/**
	 * Creates the context instance
	 *
	 * @param message the current message
	 * @return the context
	 */
	@Override
	public ServiceContext createContext(Message message) {
		ServiceContext serviceContext = null;

		// get the current HttpServletRequest for building the service context instance.
		HttpServletRequest request = (HttpServletRequest) message.getContextualProperty(PROPKEY_HTTP_REQUEST);

		try {
			// now we can create a service context
			serviceContext = ServiceContextFactory.getInstance(request);

			// done!
		} catch (PortalException e) {
			_log.warn("Failed creating service context: " + e.getMessage(), e);
		}

		// return the new instance.
		return serviceContext;
	}

	private static final String PROPKEY_HTTP_REQUEST = "HTTP.REQUEST";

	private static final Log _log = LogFactoryUtil.getLog(ServiceContextContextProvider.class);
}

So this is pretty much the whole module. Easy, huh?

Conclusion

Now that we can create custom context providers, we can use this one, for example, in the original code:

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName(@Context ServiceContext serviceContext) {

        User user = _userLocalService.fetchUser(serviceContext.getUserId());

        return user.getFullName();
    }

    @Reference
    private UserLocalService _userLocalService;
}

These custom context providers become the key for being able to create and inject non-REST parameters into your REST methods.

Check out the code from GitHub: https://github.com/dnebing/custom-context-provider

Enjoy!

Resolving Missing Components

Technical Blogs June 15, 2017 By David H Nebinger

So if you've started developing for Liferay 7 CE / Liferay DXP, I'm sure you've been hit at one point or another with the old "unresolved reference" issue that prevents your bundle from starting.

You would have seen it by now: the Gogo shell where you list the bundles and find your bundle stuck in the Installed state. You try starting it and Gogo tells you about the unresolved reference you have, and you're stuck going back to your bnd.bnd file to resolve the dependency issue.

This is so common, in fact, that I wrote a blog post to help resolve them: https://web.liferay.com/web/user.26526/blog/-/blogs/osgi-module-dependencies

While this will be your issue more often than not, there's another form of "unsatisfied reference" problem that leads to missing components rather than non-started bundles.

The Case of the Missing Component

You can have a case where your module starts but your component is not available. This sounds kind of strange, right? You've taken the time to resolve all of those 3rd party dependency jars, the direct and transitive ones, and your bundle starts cleanly and there are no errors.

But your component is just not available. It seems to defy logic.

So, here's the skinny... Any time your component has an @Reference with the default (mandatory) cardinality, you are basically telling OSGi that your component just has to have the reference injected or else it cannot be used.

That's where this comes from - you basically have an unsatisfied reference to some object that was supposed to be @Reference injected but could not be found; since the reference is missing, your component cannot start and it is therefore not available.

There's actually a bunch of different ways that this scenario can happen:

  • The @Reference refers to an object from another module that was not deployed or has not started (perhaps because of normal unresolved references). This is quite common if you deploy your Service Builder API module but forget to deploy the service module.
  • You have built in circular references (more below).
  • You use a target filter for the @Reference that is too narrow or incorrect, such that suitable candidates cannot be used.

In all of these cases you'll be stuck with a clean component, just one that cannot activate because of unsatisfied references.
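
The target filter case is the sneakiest of the three. Here's a quick, purely hypothetical sketch (the ReportGenerator and ReportFormatter types and the format.type property are made up for illustration):

@Component(immediate = true, service = ReportGenerator.class)
public class ReportGenerator {

  // If no ReportFormatter is registered with format.type=pdf (maybe it is
  // registered as format.type=PDF, or its bundle isn't deployed), this
  // component stays unsatisfied even though its own bundle is ACTIVE.
  @Reference(target = "(format.type=pdf)")
  private ReportFormatter formatter;
}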

Sewing (@Reference) Circles

Reference circles are real pains to resolve, but they arise out of your own code. Understanding reference circles is probably best started through an example.

Let's say we are building a school planning system. We focus on two major classes, a ClassroomManager and an InstructorManager. The ClassroomManager has visibility on all classrooms and is aware of the schedule and availability. The InstructorManager has the instructor roll and is aware of their schedule and availability.

It would be quite natural for a ClassroomManager to use an InstructorManager to perhaps find an available instructor to substitute in a class. Likewise it would be natural for an InstructorManager to need a ClassroomManager to try to reschedule a class to another time in an available room.

So you might find yourself creating the following classes:

@Component
public class ClassroomManager {

  @Reference
  private InstructorManager instructorManager;
}

@Component
public class InstructorManager {

  @Reference
  private ClassroomManager classroomManager;
}

If you look at this code, it seems quite logical.  Each class has a need for another component, so it has been @Reference injected. Should be fine, right?

Well actually this code has a problem - there's a circular reference.

When OSGi is processing the ClassroomManager class, it knows that the class cannot activate unless there's an available, activated InstructorManager instance to inject. Which there isn't yet, so this class cannot activate.

When OSGi is processing the InstructorManager class, it knows that the class cannot activate unless there's an available, activated ClassroomManager instance to inject. Which there isn't yet, so this class cannot activate.

But wait, you say, we just did the ClassroomManager, we should be fine! We're stuck, though, because the ClassroomManager could not activate due to its own unsatisfied reference.

This is your reference circle - neither component can start because they are circularly dependent upon each other.

Resolving Component Unsatisfied References

Resolution is not going to be the same for every unsatisfied component reference.

If the problem is an undeployed module, resolving is as simple as deploying the missing module.

If the problem is an unstarted module, resolving is a matter of starting the module (perhaps fixing whatever problem that might be preventing it from starting in the first place).

For a reference target filter issue, well those are going to be challenging. You'll have to figure out if the target is not right or too narrow and make appropriate adjustments.

Circular references themselves can be resolved by refactoring the code - instead of big ClassroomManager and InstructorManager classes, perhaps use a bunch of smaller classes that don't result in similar reference circles.

Another option is to use different ReferenceCardinality, ReferencePolicy and ReferencePolicyOption values (see my blog post on annotations, specifically the section on the @Reference annotation). You could switch from a MANDATORY to an OPTIONAL ReferenceCardinality, use DYNAMIC for the ReferencePolicy, ...  The right combination is usually dictated by what the code can handle and requires, but the outcome is that the components can activate without the initial references being satisfied; once activated, the references will be injected after the fact, as sketched below.
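
As a rough sketch (assuming the code can tolerate the reference being absent for a while), one side of the circle might become something like this; the cardinality, policy and policy option values come from org.osgi.service.component.annotations:

@Component
public class ClassroomManager {

  // OPTIONAL + DYNAMIC lets this component activate without an
  // InstructorManager; the field gets injected later when one activates.
  // The field must be volatile for a dynamic reference, and the code
  // has to handle it being null.
  @Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policy = ReferencePolicy.DYNAMIC,
    policyOption = ReferencePolicyOption.GREEDY
  )
  private volatile InstructorManager instructorManager;
}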

How Do You Fix What You Can't Find?

This, for me, has been kind of a challenge. Of course the answer lies within one of the Gogo shell commands, but I've always found it hard to separate the wheat (the components with unsatisfied references) from the chaff (the full output with all component status details from the bundle).
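
For reference, the raw Gogo route looks something like the following (assuming the Felix SCR commands that ship in the Liferay Gogo shell). scr:list shows every component and its state, and scr:info dumps the details for one of them, unsatisfied references included; the trick is wading through all of that output:

g! scr:list
g! scr:info com.liferay.metrics.health.portal.layouts.LayoutHealthCheck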

For me, I've found it easiest to use TripWire CE or TripWire DXP. After going to the TripWire panel in the control panel, click on the Take Snapshot card and you can actually drill into and view all unsatisfied references.  The following screen capture is an actual view I used to resolve an unsatisfied reference issue:

The issue I was looking at was the first unsatisfied reference line for my com.liferay.metrics.health.portal.layouts.LayoutHealthCheck component. It just wouldn't activate and I didn't know why; I knew it wasn't available, it wasn't doing its thing, but the module was successfully started so what could the problem be?

Well the value for the key spells it out - I have two unsatisfied references for two different gauge instances. And you know what? It is totally true. Those two missing references happen to be the 2nd and 3rd lines in the list above, and they in turn had unsatisfied references that needed to be resolved, ...

Conclusion

The point here is that in order to resolve unsatisfied references, you need to be able to identify them. Once you can identify the problems, you can resolve them and move on.

For me, I've found it easiest to use TripWire CE or TripWire DXP to identify the unsatisfied references; it does so quickly and easily and doesn't require memorizing Gogo shell commands to get it done.

 

Securing The /api/jsonws UI

Technical Blogs June 12, 2017 By David H Nebinger

The one thing I never understood was why the UI behind the /api/jsonws is publicly viewable.

I mean, there's lots of arguments for it to be secured:

  • Exposing all of your web service APIs exposes attack vectors to hackers. Security by obscurity is often one of the best and easiest forms of security that you can have¹.
  • Just because users may have permission to do something, that doesn't mean you want them to. They might not be able to get to a portlet to delete some web content or document, but if they can get to /api/jsonws and know anything about Liferay web services and parameters, they might have fun trying to do it.
  • It really isn't something that non-developers should ever be looking at.

I'm sorry, but I've tried really hard and I can't think of a single use case where making the UI publicly available is a good thing. I guess there might be one, but at this point it seems like it should just be an edge case and not a primary design requirement.

A client recently asked about how to secure the UI, and although I had wondered why it wasn't secured, I decided it was time for me to figure out how to do it.

The UI Implementation

It was actually a heck of a lot easier than what I thought it was going to be.

I had in my head some sort of complicated machinery that was aware of all locally deployed remote services and perhaps was leveraging reflection and stuff to expose methods and parameters and ...

But it wasn't that complicated at all.

The UI is implemented as a basic JSP-based servlet. Whether in Liferay 6.2 or Liferay 7 CE or Liferay DXP, there are core JSPs in /html/portal/api/jsonws that implement the complete UI servlet code.

For incoming remote web service calls, the portal will look at the request URI - if it is targeting a specific remote service, the request is handed off to the service for handling. However, if it is just /api/jsonws, the portal passes the request to the /html/portal/api/jsonws/index.jsp page for presentation.

Securing the UI

All I'm going to do is tweak the JSP to make sure the current user is an OmniAdmin before they get to see the UI. Nothing fancy, I admit, but it gets the job done. If you have more complicated requirements, you're free to use this blog as a guide but you're on your own for implementing them.

My JSP change is basically going to be wrapping the content area to require OmniAdmin to view the content, otherwise you will see a permission failure. Here's what I came up with:

<div id="content">
	<%-- Wrap content in a permission check --%>
	<c:if test="<%= permissionChecker.isOmniadmin() %>">
		<div id="main-content">
			<aui:row>
				<aui:col cssClass="lfr-api-navigation" width="<%= 30 %>">
					<liferay-util:include page="/html/portal/api/jsonws/actions.jsp" />
				</aui:col>

				<aui:col cssClass="lfr-api-details" width="<%= 70 %>">
					<liferay-util:include page="/html/portal/api/jsonws/action.jsp" />
				</aui:col>
			</aui:row>
		</div>
	</c:if>
	<c:if test="<%= !permissionChecker.isOmniadmin() %>">
		<liferay-ui:message key="you-do-not-have-permission-to-view-this-page" />
	</c:if>
</div>

This code I took mostly from the DXP version of the file, so use the file from the version of Liferay you have so you don't introduce some sort of source issue. The real code I added is the permission-check wrapping (the two <c:if /> blocks), so you can work it into your own override.

Creating the Project

So regardless of version, we're going to be doing a JSP hook, they just get implemented a little differently.

For Liferay 6.2, it is just a JSP hook. Build a JSP hook and pull in the original /html/portal/api/jsonws/index.jsp file and edit in the change outlined above. I'm not going to get into details for building a 6.2 JSP hook, those have been rehashed a number of times now, so there's no reason for me to rehash it again. Just build your JSP hook and deploy it and you should be fine.

For Liferay 7 CE, well let me just say that what I'm about to cover for DXP is how you will be doing it once GA4 is released.  Until then, the path I'm using below won't be available to you.

For Liferay DXP (and CE GA4+), we'll follow the instructions from https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-core-jsps to override the core JSPs using an OSGi module.

So first I created a new workspace using the command "blade init api-jsonws-override".

Then I entered the api-jsonws-override/modules directory and created my new module using the command "blade create -t service -p com.liferay.portal.jsonwebservice.override api-jsonws-override".

I don't like writing more code than I have to, so the first thing I did was add a dependency in my build.gradle file to a module which has a base implementation of the CustomJspBag:

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.impl", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.6.0"
    compileOnly group: "com.liferay", name: "com.liferay.portal.custom.jsp.bag", version: "1.0.0"
    compileOnly group: "org.osgi", name: "org.osgi.core", version: "5.0.0"
    compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

It's actually that com.liferay.portal.custom.jsp.bag dependency which keeps my project from working on Liferay 7 CE; it's not available in GA3, but I expect it to be released with GA4. If you don't want to wait for GA4, you can of course skip extending the BaseCustomJspBag class like I'm about to and instead build out the complete CustomJspBag interface in your class.

NOTE: I really, really dislike the fact that I have to pull in the com.liferay.portal.impl dependency above. I have to do that because for some reason the CustomJspBag interface is only in the portal-impl project and not portal-kernel as we would normally expect.
Do not import this dependency yourself unless you are really, really, really sure that you gotta have it. 95% of the time you're actually going to be wrong, so if you think you need it you really have to question whether that is true or whether you're perhaps missing something.

Since I have the module with the base class, I can now write my JsonWsCustomJspBag class:

package com.liferay.portal.jsonwebservice.override;

import com.liferay.portal.custom.jsp.bag.BaseCustomJspBag;
import com.liferay.portal.deploy.hot.CustomJspBag;

import java.net.URL;
import java.util.Enumeration;

import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

/**
 * class JsonWsCustomJspBag: This is the custom jsp bag used to replace the core JSP files for the jsonws UI.
 *
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"context.id=JsonWsCustomJspBag",
		"context.name=/api/jsonws Permissioning Custom JSP Bag",
		"service.ranking:Integer=20"
	}
)
public class JsonWsCustomJspBag extends BaseCustomJspBag implements CustomJspBag {

	@Activate
	protected void activate(BundleContext bundleContext) {
		super.activate(bundleContext);
	
		// we also want to include the jspf files in the list
		Enumeration<URL> enumeration = bundleContext.getBundle().findEntries(
			getCustomJspDir(), "*.jspf", true);
	
		while (enumeration.hasMoreElements()) {
			URL url = enumeration.nextElement();
	
			getCustomJsps().add(url.getPath());
		}
	}
}

Pretty darn simple, huh? That's because I could leverage the BaseCustomJspBag class. Without that, you need to implement the CustomJspBag interface and that makes this code a lot bigger. You'll find help for implementing the complete interface from https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-core-jsps.

Don't forget your override file at src/main/resources/META-INF/resources/custom_jsps/html/portal/api/jsonws/index.jsp with the override code as discussed previously.

Conclusion

That's it. Build and deploy your new module and you can hit the page as a guest and you'll get the permission message. Hit the page as a non-OmniAdmin user and you get the same. Navigate there as the OmniAdmin and you see the UI so you can browse the services, try them out, etc. as though nothing changed.

I'm making the project available in GitHub: https://github.com/dnebing/api-jsonws-override

 

 

¹ A friend of mine called me out on the "security by obscurity" statement and thought that I was advocating for only this type of security.  Security by obscurity should never, ever be your only barrier to prevent folks with malicious intent from hacking your site. I do see it as a first line of defense, one that can keep script kiddies or inexperienced hackers from discovering your remote APIs. But you should always be checking permissions, securing access, monitoring for intrusions, etc.

ServiceBuilder and Upgrade Processes

Technical Blogs May 17, 2017 By David H Nebinger

Introduction

Today I ran into someone having issues with ServiceBuilder and the creation of UpgradeProcess implementations. The documentation is a little bit confusing, so I thought I'd do a quick blog post sharing how the pieces fit...

Normal UpgradeProcess Implementations

As a reminder, you register UpgradeProcess implementations to support upgrading from, say, 1.0.0 to 2.0.0, when there are things you need to code to ensure that, once the upgrade is complete, the system is ready to use your module. Say, for example, that you're storing XML in a column in the DB and in 2.0.0 you've changed the DTD; for those folks that already have 1.0.0 deployed, your UpgradeProcess implementation would be responsible for processing each existing record in the database to change the contents over to the 2.0.0 version of the DTD. For non-ServiceBuilder modules, it is up to you to write the initial UpgradeProcess code for the 0.0.0 -> 1.0.0 version.

Through the lifespan of your plugin, you continue to add in UpgradeProcess implementations to handle the automatic update for dot releases and major releases. The best part is that you don't have to care what version everyone is using, Liferay will apply the right upgrade processes to take the users from what version they're currently at all the way through to the latest version.

This is all good, of course, but ServiceBuilder, well it behaves a little differently.

ServiceBuilder service.xml Development

As you go through development and you change the entities in service.xml and rebuild services, ServiceBuilder will update the SQL files used to create the tables, indices, etc. When you deploy the service the first time, ServiceBuilder will happily identify the initial deployment and will use the SQL files to create the entities.

This is where things can go sideways... If I deploy version 1.0.0 of the service and version 2.0.0 comes out, the service developer needs to implement an UpgradeProcess that makes the necessary changes to the tables to get things ready for the current version of the service. If you did not deploy version 1.0.0 but are starting out on 2.0.0, you don't want to have to execute all of the upgrade processes individually; you want ServiceBuilder to do what it has always done and just use the SQL files to create version 2.0.0 of the entities.

So how do you support both of these scenarios correctly?

By using the Liferay-Require-SchemaVersion header¹ in your bnd.bnd file, that's how.

Supporting Both ServiceBuilder Upgrade Scenarios

The Liferay-Require-SchemaVersion header defines the current DB schema version number for your service modules. This version number should be incremented as you change your service.xml in preparation for a release.
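
In the bnd.bnd of your service module it is just a single header; for example, a release that changed service.xml after 1.0.0 might declare (version made up for the example):

Liferay-Require-SchemaVersion: 2.0.0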

There's code in the ServiceBuilder deployment which injects a hidden UpgradeProcess implementation that is defined to cover the "0.0.0" version (the version which represents the "new deployment") to the Liferay-Require-SchemaVersion version number.  So your first release will have the header set to 1.0.0, next release might be 2.0.0, etc.

So in our previous example with the 2.0.0 service release, when you deploy the service Liferay will match the "0.0.0" to "2.0.0" hidden upgrade process implementation provided by ServiceBuilder and will invoke it to get the 2.0.0 version of the tables, indices, etc. created for you using the SQL files.

The service developer must also code and register the manual UpgradeProcess instances that support the incremental upgrade. So for the example, there would need to be a 1.0.0 -> 2.0.0 UpgradeProcess implementation so when I deploy 2.0.0 to replace my 1.0.0 deployment, the UpgradeProcess will be used to modify my DB schema to get it up to version 2.0.0.
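
Here's a minimal sketch of what that pair might look like. The entity, column, class names and bundle symbolic name are made up for the example, and I'm assuming the 7.0-era UpgradeStepRegistrator registry signature that takes the bundle symbolic name as its first argument:

public class UpgradeSchemaTo2_0_0 extends UpgradeProcess {

	@Override
	protected void doUpgrade() throws Exception {
		// whatever DDL/DML takes a 1.0.0 schema up to 2.0.0
		runSQL("alter table Example_Entity add summary VARCHAR(255) null");
	}
}

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleServiceUpgradeStepRegistrator implements UpgradeStepRegistrator {

	@Override
	public void register(Registry registry) {
		// applied when an existing 1.0.0 deployment is upgraded to 2.0.0;
		// fresh installs skip this and go straight from 0.0.0 to 2.0.0
		// via the hidden ServiceBuilder upgrade process.
		registry.register(
			"com.example.service", "1.0.0", "2.0.0",
			new UpgradeSchemaTo2_0_0());
	}
}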

Conclusion

As long as you properly manage both the Liferay-Require-SchemaVersion header in the bnd.bnd file and provide your corresponding UpgradeProcess implementations, you will be able to easily handle the first time deployment as well as the upgrade deployments.

An important side effect to note here - you must manage your Liferay-Require-SchemaVersion correctly.  If you set it initially to 1.0.0 and forget to update it on future releases, your users will have all kinds of issues.  For initial deployments, the SQL scripts would create the entities from the latest SQL files, and then Liferay would try to apply UpgradeProcess implementations to get to newer versions, making modifications that shouldn't be necessary. For upgrade deployments, Liferay may not process upgrades because it believes the schema is already at the appropriate version.

¹ If the Liferay-Require-SchemaVersion header is missing, the value for the Bundle-Version will be used instead.

Increasing Capacity and Decreasing Response Times Using a Tool You're Probably Not Familiar With

Technical Blogs May 7, 2017 By David H Nebinger

Introduction

When it comes to Liferay performance tuning, there is one golden rule:

The more you offload from the application server, the better your performance will be.

This applies to all aspects of Liferay. Using Solr/Elastic is always better than using the embedded Lucene. While PDFBox works, you get better performance by offloading that work to ImageMagick and GhostScript.

You can get even better results by offloading work before it gets to the application server. What I'm talking about here is caching, and one tool I like to recommend for this is Varnish.

According to the Varnish site:

Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.

So I've found the last claim to be a little extreme, but I can say for certain that it can offer significant performance improvement.

Basically Varnish is a caching appliance.  When an incoming request hits Varnish, it will look in its cache to see if the response has been rendered before. If it isn't in the cache, it will pass the request to the back end and store the response (if possible) in the cache before returning the response to the original requestor.  As additional matching requests come in, Varnish will be able to serve the response from the cache instead of sending it to the back end for processing.

So there are two requirements that need to be met to get value out of the tool:

  1. The responses have to be cacheable.
  2. The responses must take time for the backend to generate.

As it turns out for Liferay, both of these are true.

So Liferay can actually benefit from Varnish, but we can't just make such a claim, we'll need to back it up w/ some testing.

The Setup

To complete the test I set up an Ubuntu VirtualBox instance w/ 12G of memory and 4 processors, and I pulled in a Liferay DXP FP 15 bundle (no performance tuning for JVM params, etc). I also compiled Varnish 4.1.6 on the system. For both tests, Tomcat will be running using 8G and Varnish will also be running w/ an allocation of 2G (even though varnish is not used for the Tomcat test, I think it is "fairer" to keep the tests as similar as possible).

In the DXP environment I'm using the embedded ElasticSearch and HSQL for the database (not a prod configuration but both tests will have the same bad baseline). I deployed the free Porygon theme from the Liferay Marketplace and set up a site based on the theme. The home page for the Porygon demo site has a lot of graphics and stuff on it, so it's a really good site to look at from a general perspective.

The idea here was not to focus on Liferay tuning too much, but to get a site up that was serving a bunch of mixed content. Then we measure a non-Varnish configuration against a Varnish config to see what impact Varnish can have in performance terms.

We're going to test the configuration using JMeter and we're going to hit the main page of the Porygon demo site.

Testing And Results

JMeter was configured to use 100 users and loop 20 times.  Each test would touch on the home page, the photography, science and review pages and would also visit 3 article pages. JMeter was configured to retrieve all related assets synchronously to exaggerate the response time from the services.

Response Times

Let's dive right in with the response times for the test from the non-Varnish configuration:

Response Times Without Varnish

The runtime for this test was 21 minutes, 20 seconds. The 3 article pages are the lines near the bottom of the graph, the lines in the middle are for the general pages w/ the asset publishers and all of the extra details.

Next graph is the response times from the Varnish configuration:

Response Times With Varnish

The runtime for this test was 11 minutes, 58 seconds, a 44% reduction in test time, and it's easy to see that while the non-Varnish tests seem to float around the 14 second mark, the Varnish tests come in around 6 seconds.

If we rework the graph to adjust the y-axis to remove the extra whitespace we see:

Response Times With Varnish

The important part here for me was the lines for the individual articles. In the non-Varnish test, /web/porygon-demo/-/space-the-final-frontier?inheritRedirect=true&redirect=%2Fweb%2Fporygon-demo shows up around the 1 second response time, but with Varnish it hovers at the 3 second response time.  Keep that in mind when we discuss the custom VCL below.

Aggregate Response Times

Let's review the aggregate graphs from the tests.  First the non-Varnish graph:

Aggregate Without Varnish

This reflects what we've seen before; individual pages are served fairly quickly, pages w/ all of the mixed content take significantly longer to load.

And the graph for the Varnish tests:

Aggregate With Varnish

At the same scale, it is easy to see that Varnish has greatly reduced the response times.  Adjusting the y-axis, we get the following:

Aggregate With Varnish

Analysis

So there's a few parts that quickly jump out:

  • There was a 44% reduction in test runtime reflected by decreased response times.
  • There was a measurable (but unmeasured) reduction in server CPU load since Liferay/Tomcat did not have to serve all traffic.
  • Since work is offloaded from Liferay/Tomcat, overall capacity is increased.
  • While some response times were greatly improved by using Varnish, others suffered.

The first three bullets are easy to explain.  As Varnish is able to cache "static" responses from Liferay/Tomcat, it can serve those responses from the cache instead of forcing Liferay/Tomcat to build a fresh response every time.  Having Liferay/Tomcat rebuild responses each time requires CPU cycles, so returning a cached response reduces the CPU load.  And since Liferay/Tomcat is not busy rebuilding the responses that now come from the cache, Liferay/Tomcat is free to handle responses that cannot be cached; basically the overall capacity of Liferay/Tomcat is increased.

So you might be asking: since Varnish is so great, why do the single article pages suffer a response time degradation? Well, that is due to the custom VCL script used to control the caching.

The Varnish VCL

So if you don't know about Varnish, you may not be aware that caching is controlled by a VCL (Varnish Configuration Language) file. This file is closer to a script than it is a configuration file.

Normally Varnish operates by checking the backend response cache control headers; if a response can be cached, it will be, and if the response cannot be cached it won't. The impact of Varnish is directly related to how many of the backend responses can be cached.

You don't have to rely solely on the cache control headers from the backend to determine cacheability; this is especially true for Liferay. Through the VCL, you can actually override the cache control headers, making some responses cacheable that otherwise would not have been and making other responses uncacheable even when the backend says caching is acceptable.

So now I want to share the VCL script used for the test, but I'll break it up into parts to discuss the reasons for the choices that I made. The whole script file will be attached to the blog for you to download.

In the sections below comments have been removed to save space, but in the full file the comments are embedded to explain everything in detail.

Varnish Initialization

import directors;
import std;

probe company_logo {
  .request =
    "GET /image/company_logo HTTP/1.1"
    "Host: 192.168.1.46:8080"
    "Connection: close";
  .timeout = 100ms;
  .interval = 5s;
  .window = 5;
  .threshold = 3;
}

backend LIFERAY {
  .host = "192.168.1.46";
  .port = "8080";
  .probe = company_logo;
}

sub vcl_init {
  new dir = directors.round_robin();
  dir.add_backend(LIFERAY);
}

So in Varnish you need to declare your backends to connect to.  In this example I've also defined a probe request used to verify health of the backend.  For probes it is recommended to use a simple request that results in a small response; you don't want to overload the system with all of the probe requests.

Varnish Request

sub vcl_recv {
  ...
  if (req.url ~ "^/c/") {
    return (pass);
  }

  if (req.url ~ "/control_panel/manage") {
    return (pass);
  }
  ...
  if (req.url !~ "\?") {
    return (pass);
  }
  ...
}

The request handling basically determines whether to hash (lookup request from the cache) or pass (pass request directly to backend w/o caching).

For all requests that start with the "/c/..." URI, we pass those to the backend.  They represent requests for /c/portal/login or /c/portal/logout and the like, so we never want to cache those regardless of what the backend might say.

Any control panel requests are also passed directly to the backend. We wouldn't want to accidentally expose any of our configuration details now, would we?

Otherwise the code tries to force hashing of binary files (mp3, image, etc.) where possible and conforms to most typical VCL implementations.

The last check of whether the URL contains a '?' character, well I'll be getting to that later in the conclusion...

Varnish Response

sub vcl_backend_response {

  if (bereq.url ~ "^/c/") {
    return (deliver);
  }
  
  if ( bereq.url ~ "\.(ico|css)(\?[a-z0-9=]+)?$") {
    set beresp.ttl = 1d;
  } else if (bereq.url ~ "^/documents/" && beresp.http.content-type ~ "image/*") {
    if (std.integer(beresp.http.Content-Length,0) < 10485760 ) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
        unset beresp.http.Cache-Control;
        unset beresp.http.set-cookie;
      }
    }
  } else if (beresp.http.content-type ~ "text/javascript|text/css") {
    if (std.integer(beresp.http.Content-Length,0) < 10485760 ) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
      }
    }
  }
  ...
}

The response handling also passes the /c/ type URIs back to the client w/o caching.

The most interesting part of this section is the testing for content type and altering caching as a result.  Normally VCL rules will look for some request for "/blah/blah/blah/my-javascript.js" by checking for the extension as part of the URI.

But Liferay really doesn't use these standard extensions.  For example, with Liferay you'll see a lot of requests like /combo/?browserId=other&minifierType=&languageId=en_US&b=7010&t=1494083187246&/o/frontend-js-web/liferay/portlet_url.js&.... These kinds of requests do not have the standard extension on them, so normal VCL matching patterns would discard them as uncacheable. Using the VCL override logic above, the request will be treated as cacheable since it is just a request for some JS.

Same kind of logic applies to the /documents/ URI prefix; anything w/ this prefix is a fetch from the document library.  Full URIs are similar to /documents/24848/0/content_16.jpg/027082f1-a880-4eb7-0938-c9fe99cefc1a?t=1474371003732.  Again, since it doesn't end w/ the standard extension, the image might not be cached. The override rule above will match on the /documents/ prefix plus an image content type and treat the request as cacheable.

Conclusion

So let's start with the easy ones...

  • Adding Varnish can decrease your response times.
  • Adding Varnish can reduce your server load.
  • Adding Varnish can increase your overall capacity.

Honestly I was expecting that to be the whole list of conclusions I was going to have to worry about. I had this sweet VCL script and performance times were just awesome. As a final test, I tried logging into my site with Varnish in place and, well, FAIL.  I could log in, but I didn't get the top bar or access to the left or right sidebars or any of these things.

I realized that I was actually caching the response from the friendly URLs and, well, for Liferay those are typically dynamic pages.  There is logic specifically in the theme template files that changes the content depending upon whether you are logged in or not.  Because my Varnish script was caching the pages when I was not logged in, after I logged in the page was coming from the cache and the necessary stuff I needed was now gone.

I had to add the check for the "?" character in the requests to determine if it was a friendly URL or not.  If it was a friendly URL, I had to treat those as dynamic and had to send them to the backend for processing.

This leads to the poor performance, for example, on the single article display pages.  My first VCL was great, but it cached too much.  My addition for friendly URLs solved the login issue but now prevented caching pages that maybe could have been cached, so I swung too far the other way; but since the general results were still awesome I just went with what I had.

Now for the hard conclusions...

  • Adding Varnish requires you to know your portal.
  • Adding Varnish requires you to know your use cases.
  • Adding Varnish requires you to test all aspects of your portal.
  • Adding Varnish requires you to learn how to write VCL.

The VCL really isn't that hard to wrap your head around.  Once you get familiar with it, you'll be able to customize the rules to increase your cacheability factor without sacrificing the dynamic nature of your portal.  In the attached VCL, we add a response header for a cache HIT or MISS, and this is quite useful for reviewing the responses from Varnish to see if a particular response was cached or not (remember the first request will always be a MISS, so check again after a page refresh).
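
That HIT/MISS header is a common VCL trick; a minimal sketch of the vcl_deliver sub looks like this (the attached file has the exact version used for the tests):

sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
  } else {
    set resp.http.X-Cache = "MISS";
  }
}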

I can't emphasize the testing enough though.  You want to manually test all of your pages a couple of times, logged in and not logged in, logged in as users w/ different roles, etc., to make sure each UX is correct and that you're not bleeding views that should not be shared.

You should also do your load testing.  Make sure you're getting something out of Varnish and that it is worthwhile for your particular situation.

Note About SSL

Before I forget, it's important to know that Varnish doesn't really talk SSL, nor does it talk AJP.  If you're using SSL, you're going to want to have a web server sitting in front of Varnish to handle SSL termination.

And Varnish doesn't talk AJP, so you will have to configure HTTP connections both from the web server to Varnish and from Varnish to the app server.

This points toward the reasoning behind my recent blog post about configuring Liferay to look at a header for the HTTP/HTTPS protocols.  In my environment I was terminating SSL at Apache and needed to use the HTTP connectors to Varnish and again to Tomcat/Liferay.

Although it was suggested in a few of the comments that separate connections could be used to facilitate the HTTP and HTTPS traffic, etc., those options would defeat some of the Varnish caching capabilities. You'd either have separate caches for each connection type (or perhaps no cache on one of them) or other unforeseen issues. Being able to route all traffic through a single pipe to Varnish will ensure Varnish can cache the response regardless of the incoming protocol.

Update - 05/16/2017

Small tweak to the VCL script attached to the blog: I added rules to exclude all URLs under /api/* from being cached.  Those are basically your web service calls, and rarely would you really want to cache those responses.  Find the file named localhost-2.vcl for the update.

Revisiting SSL Termination at Apache HTTPd

Technical Blogs May 5, 2017 By David H Nebinger

So I have a blog I created a long time ago dealing w/ Liferay and SSL. The foundation of that blog post was my Fronting Liferay Tomcat with Apache HTTPd post; it added terminating SSL at HTTPd and configuring the Liferay instance running under Tomcat to use HTTPS for all of the communication.

If you tear into the second post, you'll find that I was using the AJP connector to join HTTPd and Tomcat together.

This is actually a key aspect for a working setup for SSL + HTTPd + Liferay/Tomcat.

Today I was actually working on a similar setup that used the HTTP connector for SSL + HTTPd + Liferay/Tomcat. Unauthenticated traffic worked just fine, but as soon as you would try to access a secured resource that required authentication, a redirect loop resulted with HTTPd finally terminating the loop.

The only info I had was the redirect URL, https://example.com/c/portal/login?null. There were no log messages in Liferay/Tomcat, just repeated 302 messages in the HTTPd logs.

My good friend and coworker Nathan Shaw told me of a case he was aware of that was similar but was from Nginx; although different web servers, the 302 redirect loop on /c/portal/login?null was an exact match.

The crux of the issue is the setting of the company.security.auth.requires.https property in portal-ext.properties.

Basically when you set this property to true, you are saying that when a user logs in, you want to force them into the secure https side. Seems pretty simple, right?

So in this configuration, when a user on http:// wants to or needs to log in, they basically end up hitting http://.../c/portal/login. This is where a check for HTTPS is done and, since the connection is not yet HTTPS, Liferay will issue a redirect back to https://.../c/portal/login to complete the login.

And this, in conjunction with the HTTP connector between HTTPd and Liferay/Tomcat, is what causes the redirect loop.

Liferay responds with the 302 to try and force you to https; you submit again, but SSL terminates at HTTPd and the request is sent via the HTTP connector to Liferay/Tomcat.  Well, Liferay/Tomcat sees the request came in on http:// and again issues the 302 redirect. You're now in redirect loop hell.

Fortunately, this is absolutely fixable.

Liferay has a set of portal-ext.properties settings to mitigate the SSL issue. They are:

#
# Set this to true to use the property "web.server.forwarded.protocol.header"
# to get the protocol. The property "web.server.protocol" must not have been
# overridden.
#
web.server.forwarded.protocol.enabled=false

#
# Set the HTTP header to use to get the protocol. The property
# "web.server.forwarded.protocol.enabled" must be set to true.
#
web.server.forwarded.protocol.header=X-Forwarded-Proto

The important property is the first one.  When that property is true, Liferay will ignore the protocol (http vs https) of the incoming request and will instead use a request header to see what the original protocol for the request actually was.

The header name can be specified using the second property, but the default one works just fine. It's also how you google for an answer for your particular web server.

I'll save you the trouble for Apache HTTPd; you just need to add a couple of lines to your <VirtualHost /> elements:

<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ...
</VirtualHost>

<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
    ...
</VirtualHost>

That's it.

For every incoming request getting to HTTPd, a header is added with the request protocol.  When the ProxyPass configuration forwards the requests to Liferay/Tomcat, Liferay will use the header for the check on https:// rather than the actual connection from HTTPd.
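
For reference, the ProxyPass configuration is just the standard mod_proxy pair inside each <VirtualHost />; the host and port below are placeholders for wherever your Tomcat HTTP connector is listening:

    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/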

Some of you are going to be asking

Why are you using the HTTP connector to join HTTPd to Liferay/Tomcat anyway? The AJP connector is the best connector to use in this configuration because it performs better than the HTTP connector and avoids this and other issues that can happen when using the HTTP connector.

You would be, of course, absolutely right about that. For a simple configuration like this where you only have HTTPd <-> Liferay/Tomcat, using the HTTP connector is frowned upon.

That said, I've got another exciting blog post in the pipeline that will force moving to this configuration... I'm not getting into any details at this point, but suffice it to say that when you see the results that I've been gathering, you too will be looking at this configuration.

Tomcat+HikariCP

Technical Blogs May 3, 2017 By David H Nebinger

In case you aren't aware, Liferay 7 CE and Liferay DXP default to using Hikari CP for the connection pools.

Why?  Well here's a pretty good reason:

Hikari just beats the pants off of any other connection pool implementation.

So Liferay is using Hikari CP, and you should too.

I know what you're thinking.  It's something along the lines of:

But Dave, we're following the best practice of defining our DB connections as Tomcat <Resource /> JNDI definitions so we don't expose our database connection details (URLs, usernames or passwords) to the web applications.  So we're stuck with the crappy Tomcat connection pool implementation.

You might be thinking that, but if you are, thankfully you'd be wrong.

Installing the Hikari CP Library for Tomcat

So this is pretty easy, but you have two basic options.

The first option is to download the .zip or .tar.gz file from http://brettwooldridge.github.io/HikariCP/.  This is actually a source release that you'll need to build yourself.

The second option is to download the built jar from a source like Maven Central, http://central.maven.org/maven2/com/zaxxer/HikariCP/2.6.1/HikariCP-2.6.1.jar.

Once you have the jar, copy it to the Tomcat lib/ext directory.  Note that Hikari CP has a dependency on SLF4J, so you'll need to put that jar into lib/ext too.

Configuring the Tomcat <Resource /> Definitions

Location of your JNDI datasource <Resource /> definitions depends upon the scope for the connections.  You can define them globally by specifying them in Tomcat's conf/server.xml and conf/context.xml, or you can scope them to individual applications by defining them in conf/Catalina/localhost/WebAppContext.xml (where WebAppContext is the web application context for the app, basically the directory name from Tomcat's webapps directory).

For Liferay 7 CE and Liferay DXP, all of your plugins run inside the Liferay web application, so it is usually recommended to put your definitions in conf/Catalina/localhost/ROOT.xml.  The only reason to make the connections global is if you have other web applications deployed to the same Tomcat container that will be using the same database connections.

So let's define a JNDI datasource in ROOT.xml for a Postgres database...

Create the file conf/Catalina/localhost/ROOT.xml if it doesn't already exist.  If you're using a Liferay bundle, you will already have this file.

Hikari CP supports two different ways to define your actual database connections. The first, and the one that they prefer, is based upon a DataSource instance (the more standard way of establishing a connection with credentials); the other is the older way, based upon a DriverManager instance (a legacy approach with driver-specific ways of passing credentials to the DB driver).

We'll follow their advice and use the DataSource.  Use the table from https://github.com/brettwooldridge/HikariCP#popular-datasource-class-names to find your data source class name; we'll need it when we define the <Resource /> element.

Gather up your JDBC url, username and password because we'll need those too.

Okay, so in ROOT.xml inside of the <Context /> tag, we're going to add our Liferay JNDI data source connection resource:

<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5" 
      maximumPoolSize="10"
      connectionTimeout="300000"
      dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
      dataSource.url="jdbc:postgresql://localhost:5432/lportal"
      dataSource.user="user"
      dataSource.password="pwd" />

So this is going to define our connection for Liferay and have it use the Hikari CP pool.
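
One more piece, in case you haven't already done it: Liferay itself has to be told to look the connection up via JNDI rather than building its own pool. That's the one database-related entry that still belongs in portal-ext.properties:

jdbc.default.jndi.name=jdbc/LiferayPool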

Now if you really want to stick with the older driver-based configuration, then you're going to use something like this:
<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5" 
      maximumPoolSize="10"
      connectionTimeout="300000"
      driverClassName="org.postgresql.Driver"
      jdbcUrl="jdbc:postgresql://localhost:5432/lportal"
      dataSource.user="user"
      dataSource.password="pwd" />

Conclusion

Yep, that's pretty much it.  When you restart Tomcat you'll be using your flashy new Hikari CP connection pool.

You'll want to take a look at https://github.com/brettwooldridge/HikariCP#frequently-used to find additional tuning parameters for your connection pool as well as the info for the minimum idle, max pool size and connection timeout details.

And remember, this is going to be your best production configuration.  If you're using portal-ext.properties to set up any of your database connection properties, you're not as secure as you could be.  A hacker needs information to infiltrate your system; the more details of your infrastructure you expose, the more you give a hacker to work with.  Using the portal-ext.properties approach, you're exposing your JDBC URL (so hostname and port as well as the DB server type) and the credentials (which will work for DB login but sometimes might also be system login credentials).  This kind of info is gold to a hacker trying to infiltrate you.

So follow the recommended practice of using JNDI references for the database connections and keep this information out of the hackers' hands.

 

Available Now: TripWire

General Blogs March 28, 2017 By David H Nebinger

For the last few months as I've been working with Liferay 7 CE / Liferay DXP, I've been a little stymied trying to manage the complexities of the new OSGi universe.

In Liferay 6.x, for example, an OOTB demo setup of Liferay comes with like 5 or 6 war files.  And when the portal starts up, they all start up.

But with Liferay 7 CE and Liferay DXP, there are a lot of bundles in the mix. Liferay 7 CE GA3, for example, has almost 2,500 bundles in OSGi.

And when the portal starts up, most of these will also start. Some will not. Some might not be able to. Some can't start because they have unsatisfied dependencies.

But you're not going to know it.

Seriously, you won't know if something has failed to start when you restart your environment. There may or may not be something in the log. Someone might have stopped a bundle intentionally (or unintentionally) in the gogo shell w/o telling you. And with almost 2,500 bundles in there, it's going to be really hard finding the needle in the haystack especially if you don't know if there's a needle in there at all.

So I've been working on a new utility over the past few months to resolve the situation - TripWire.

Features

TripWire scans the OSGi environment to gather information about deployed bundle statuses, bundle versions, and service components. It also scans the system and portal properties.

This scanning is done at two points: first, when an administrator takes a snapshot (basically to persist a baseline for all comparisons), and second, as a scheduled task that runs on the node to monitor for changes. The comparison scan can also be kicked off manually.

After installing TripWire and navigating to the TripWire control panel, you'll be prompted to capture an initial baseline scan:

Click the Take Snapshot card to see the system snapshot:

You can save the new baseline (to be compared against in the automated scans), you can export the snapshot (it downloads as an Excel spreadsheet), or you can cancel.

Each section expands to show captured details:

The funny looking hash keys at the top? Those are calculated hashes from the scanned areas; by comparing the baseline hash against the scanned hash, TripWire knows quickly if there is a variation between the baseline and the current scan.

When you save the new baseline, the main page will reflect that the server is currently consistent with the baseline:

You can test the server to force a scan by clicking on the Test Server card:

Exclusions

TripWire supports dynamically creating exclusion rules to exclude items from being part of the scan.  You might add an exclusion for a property value that you're not interested in monitoring, for example. Click on the Exclusions card and then on the Add New Exclusion Rule button:

The Camera drop down lists all of the current cameras used when taking a snapshot. Choose either a specific camera or the Any Camera option to allow for a match to any camera.

The Type drop down allows you to select either a Match, a Starts With, a Contains or a Regular Expression type for the exclusion rule.

The value field is what to match against, and the Enabled slider allows you to disable a particular exclusion rule.

Modifying the exclusion rules affects scans immediately, which can result in failed scans:

By adding the rule to exclude any System Property that starts with "catalina.", scans now show the server to be inconsistent when compared to the baseline. At this point you can take a new baseline snapshot to approve the change, or you could disable the exclusion rule (basically reverting the change to the system) to restore baseline consistency.

Notifications

TripWire uses Liferay notifications to alert subscribed administrators when the node is in an inconsistent state and when the node returns to a consistent state. For Liferay 7 CE, a subscribed administrator will only receive notifications about the single Liferay node. For Liferay DXP, subscribed administrators will receive notifications from every node that is out of sync with the baseline snapshot.

Notifications will be issued for every failed scan on every node until consistency is restored.

To subscribe or unsubscribe to notifications, click on the Subscriptions card. If you are unsubscribed, the bell image will be grey; if you are subscribed, the bell will be blue and have a red notification number on it. Note this number does not represent the number of notifications you might currently have, it is just a visual marker that you are subscribed for notifications.

Configuration

TripWire supports setting configuration for the scanning schedule. Click on the Configuration card:

Using the Cameras tab, you can also choose the cameras to use in the snapshots and scans:

Normally I recommend enabling all but the Separate Service Status Camera (because this camera is quite verbose in the details it captures).

The Bundle Status Camera captures status for each bundle.

The Bundle Version Camera captures versions of every bundle.

The Configuration Admin Camera captures configuration changes from the control panel.  Note that Configuration Admin only saves values that differ from the defaults on each panel, so the details in this section will always be shorter than the actual set of configurations saved for the portal.

The Portal Properties Camera captures changes to known Liferay portal properties (unknown properties are ignored). In a Liferay DXP cluster, some properties will need to be excluded using the Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Service Status Camera captures counts of OSGi DS Services and their statuses.

The System Properties Camera captures changes to system properties from the JVM. Like the portal properties, in a Liferay DXP cluster some properties will need to be excluded using Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Unsatisfied References Camera captures the list of bundles with unsatisfied references (preventing the bundles from starting). Any time a bundle has an unsatisfied reference, the bundle and its unsatisfied reference(s) will be captured by this camera.

The three email tabs configure who the notification emails are from and the consistent/inconsistent email templates.

Liferay DXP

For Liferay DXP clusters, TripWire uses the same baseline across all nodes in the cluster and reports on cluster node inconsistencies:

Clicking on the server link in the status area, you can review the server's report to see where the problems are:

Some of the additions and changes are due to unique node values and should be handled by adding new Exclusion Rules.

The Removals above show that one node in the cluster has Audience Targeting deployed but the other node does not. These are the kinds of inconsistencies that you may not be aware of from a cluster perspective, but they would result in your DXP cluster not serving the right content to all users. Identifying a discrepancy like this quickly and easily will save you time, money and effort.

For your cluster Exclusion Rules, your rule list will be quite long:

Conclusion

That's TripWire.

It is available from the Liferay Marketplace:

There is a cost for each version, but that is to offset the time and effort I have invested in this tool.

And while there may not seem to be an immediate return, the first time this tool identifies a node that is out of sync or an unauthorized change to your OSGi environment, it will save you time (waiting for the change to be identified), effort (sorting through all of the gogo output and other details), user impressions (from cluster node sync issues) and, most of all, money.

 

 

Liferay DXP and WebLogic...

Technical Blogs March 21, 2017 By David H Nebinger

For those of you deploying Liferay DXP to WebLogic, you will need to add an override property to your portal-ext.properties file to allow the WebLogic JAXB implementation to peer inside the OSGi environment to create proxy instances.

I know, it's a mouthful, but it's all pretty darn technical. You'll know if you need this if you start seeing exceptions like:

java.lang.NoClassDefFoundError: org/eclipse/persistence/internal/jaxb/WrappedValue
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
	at org.eclipse.persistence.internal.jaxb.JaxbClassLoader.generateClass(JaxbClassLoader.java:124)
	at org.eclipse.persistence.jaxb.compiler.MappingsGenerator.generateWrapperClass(MappingsGenerator.java:3302)
	Truncated. see log file for complete stacktrace
Caused By: java.lang.ClassNotFoundException: org.eclipse.persistence.internal.jaxb.WrappedValue cannot be found by com.example.bundle_1.0.0
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:444)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:357)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:349)
	at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	Truncated. see log file for complete stacktrace

Suffice it to say that you need to override the module.framework.properties.org.osgi.framework.bootdelegation property to add the following two lines:

  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\

You have to include all of the packages from the value defined in portal.properties, so my entry actually looks like:

module.framework.properties.org.osgi.framework.bootdelegation=\
  __redirected,\
  com.liferay.aspectj,\
  com.liferay.aspectj.*,\
  com.liferay.portal.servlet.delegate,\
  com.liferay.portal.servlet.delegate*,\
  com.sun.ccpp,\
  com.sun.ccpp.*,\
  com.sun.crypto.*,\
  com.sun.image.*,\
  com.sun.jmx.*,\
  com.sun.jna,\
  com.sun.jndi.*,\
  com.sun.mail.*,\
  com.sun.management.*,\
  com.sun.media.*,\
  com.sun.msv.*,\
  com.sun.org.*,\
  com.sun.syndication,\
  com.sun.tools.*,\
  com.sun.xml.*,\
  com.yourkit.*,\
  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\
  sun.*

Enjoy

Creating a Spring MVC Portlet War in the Liferay Workspace

Technical Blogs February 28, 2017 By David H Nebinger

Introduction

So I've been working on some new Blade sample projects, and one of those is the Spring MVC portlet example.

As pointed out in the Liferay documentation for Spring MVC portlets, these guys need to be built as war files, and the Liferay Workspace will actually help you get this work done. I'm going to share things that I learned while creating the sample, which has not yet been merged but hopefully will be soon.

Creating The Project

So your war projects need to go in the wars folder inside of your Liferay Workspace folder, basically at the same level as your modules directory. If you don't have a wars folder, go ahead and create one.

Next we're going to have to manually create the portlet project. Currently Blade does not have support for building a Spring MVC portlet war project; perhaps this is something that can change in the future.

Inside of the wars folder, you create a folder for each portlet WAR project that you are building. To be consistent, my project folder was named blade.springmvc.web, but your project folder can be named according to your standards.

Inside your project folder, you need to set up the folder structure for a Spring MVC project. Your project structure will resemble:

Spring MVC Project Structure
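
In case the structure image doesn't come through, here's roughly the layout I ended up with (the exact file names are just from my sample; yours will vary):

blade.springmvc.web/
  build.gradle
  src/main/java/          <- portlet controller classes (the old docroot/WEB-INF/src java)
  src/main/resources/     <- non-java resources such as message bundles
  src/main/webapp/        <- the old docroot contents
    WEB-INF/
      web.xml
      portlet.xml
      liferay-portlet.xml
      liferay-display.xml
      liferay-plugin-package.properties
      (Spring application and portlet context XML files)
    css/, js/ and JSP files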

For the most part this structure is similar to what you would use in a Maven implementation. For those coming from a legacy SDK, the contents of the docroot folder go in the src/main/webapp folder, but the contents of docroot/WEB-INF/src go in the src/main/java folder (or src/main/resources for non-java files).

Otherwise this structure is going to be extremely similar to legacy Spring MVC portlet wars, all of the old locations basically still apply.

Build.gradle Contents

The fun part for us is the build.gradle file. This file controls how Gradle is going to build your project into a war suitable for distribution.

Here's the contents of the build.gradle file for my blade sample:

buildscript {
  repositories {
    mavenLocal()
    maven {
      url "https://cdn.lfrs.sl/repository.liferay.com/nexus/content/groups/public"
    }
  }

  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.css.builder", version: "latest.release"
    classpath group: "com.liferay", name: "com.liferay.css.builder", version: "latest.release"
  }
}

apply plugin: "com.liferay.css.builder"

war {
  dependsOn buildCSS

  exclude('**/*.scss')

  filesMatching("**/.sass-cache/") {
    it.path = it.path.replace(".sass-cache/", "")
  }

  includeEmptyDirs = false
}

dependencies {
  compileOnly project(':modules:blade.servicebuilder.api')
  compileOnly 'com.liferay.portal:com.liferay.portal.kernel:2.6.0'
  compileOnly 'javax.portlet:portlet-api:2.0'
  compileOnly 'javax.servlet:javax.servlet-api:3.0.1'
  compileOnly 'org.osgi:org.osgi.service.component.annotations:1.3.0'
  compile group: 'aopalliance', name: 'aopalliance', version: '1.0'
  compile group: 'commons-logging', name: 'commons-logging', version: '1.2'
  compileOnly group: 'javax.servlet.jsp.jstl', name: 'jstl-api', version: '1.2'
  compileOnly group: 'org.glassfish.web', name: 'jstl-impl', version: '1.2'
  compile group: 'org.springframework', name: 'spring-aop', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-beans', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-context', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-core', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-expression', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc-portlet', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-web', version: '4.1.9.RELEASE'
}

So first is the buildscript block and the CSS builder apply line, followed by the war customization stanza.

These parts are currently necessary to support compiling the SCSS files into CSS and storing the files in the right place in the WAR. Note that Liferay currently sees this manual execution of the CSS builder plugin as a bug and plans on fixing it sometime soon.

Managing Dependencies

The next part is the dependencies, and this will be the fun part for you as it was for me.

You're going to be picking from two different dependency types, compile and compileOnly. The big difference is whether the dependencies get included in the WEB-INF/lib directory (for compile) or just used for the project compile but not included (compileOnly).

Many of the Liferay or OSGi jars should not be included in your WEB-INF/lib directory such as portal-kernel or the servlet or portlet APIs, but they are needed for compiles so they are marked as compileOnly.

In Liferay 6.x development, we used to be able to use the portal-dependency-jars in liferay-plugin-package.properties to inject libraries into our wars at deployment time. But not for Liferay 7 CE/Liferay DXP development.

The portal-dependency-jars property in liferay-plugin-package.properties is deprecated in Liferay 7 CE/Liferay DXP. All dependencies must be included in the war at build time.

Since we cannot use the portal dependencies in liferay-plugin-package.properties, I had to manually include the Spring jars using the compile type. 

Conclusion

Yep, that's pretty much it.

Since it's in the Liferay Workspace and is Gradle-built, you can use the gradle wrapper script at the root of the project to build everything, including the portlet wars.
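
For example, from inside the wars/blade.springmvc.web folder you can invoke the wrapper relative to the workspace root with something like:

../../gradlew build

Running ./gradlew build from the workspace root builds everything, wars included.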

Your built war will be in the project's build/libs directory, and this war file is ready to be deployed to Liferay by dropping it in the Liferay deploy folder.

Debugging "ClassNotFound" exceptions, etc, in your war file can be extremely challenging since Liferay doesn't really keep anything around from the WAR->WAB conversion process. If you add the following properties to portal-ext.properties, the WABs generated by Liferay will be saved so you can open the file and see what jars were injected and where the files are all found in the WAB.

module.framework.web.generator.generated.wabs.store=true
module.framework.web.generator.generated.wabs.store.dir=${module.framework.base.dir}/wabs

If you want to check out the project, it is currently live in the blade samples: https://github.com/liferay/liferay-blade-samples/tree/master/liferay-workspace/wars/blade.portlet.springmvc.

Proper Portlet Name for your Portlet components...

Technical Blogs February 28, 2017 By David H Nebinger

Okay, this is probably going to be one of my shortest blog posts, but it's important.

Some releases of Liferay have code to "infer" a portlet name if it is not specified in the component properties.  This actually conflicts with other pieces of code that also try to "infer" what the portlet name is.

The problem is that they sometimes have different requirements; in one case, periods are fine in the name so the full class name is used (in the portlet component), but in other cases periods are not allowed so it uses the class name with the periods replaced by underscores.

Needless to say, this can cause you problems if Liferay is trying to use two different portlet names for the same portlet, one that works and the other that doesn't.

So save yourself some headaches and always assign your portlet name in the properties for the Portlet component, and always use that same value for other components that also need the portlet name.  And avoid periods since they cannot be used in all cases.

So here's an example for the portlet component properties from one of my previous blogs:

@Component(
  immediate = true,
  property = {
    "com.liferay.portlet.display-category=category.system.admin",
    "com.liferay.portlet.header-portlet-css=/css/main.css",
    "com.liferay.portlet.instanceable=false",
    "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
    "javax.portlet.display-name=Filesystem Access",
    "javax.portlet.init-param.template-path=/",
    "javax.portlet.init-param.view-template=/view.jsp",
    "javax.portlet.resource-bundle=content.Language",
    "javax.portlet.security-role-ref=power-user,user"
  },
  service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {}

Notice how I was explicit with the javax.portlet.name?  That's the one you need; don't let Liferay assume what your portlet name is, be explicit with the value.

And the value that I used in the FilesystemAccessPortletKeys constants:

public static final String FILESYSTEM_ACCESS =
  "com_liferay_filesystemaccess_portlet_FilesystemAccessPortlet";

No periods, but since I'm using the class name it won't have any collisions w/ other portlet classes...

Note that if you don't like long URLs, you might try shortening the name, but stick with something that avoids collisions, such as the first letter of each package plus the class name, something like "clfp_FilesystemAccessPortlet". Remember that collisions are bad things...

And finally, since I've put the name in an external PortletKeys constants file, any other piece of code that expects the portlet name can use the constant value too (i.e. for your config admin interfaces) and you'll know that all code is using the same constant value.
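
For example (purely illustrative; PortletURLFactoryUtil is just one consumer of the portlet name, and the same idea applies to configuration interfaces, panel apps, and so on), some other component that needs to link to the portlet can reuse the same constant instead of hard-coding a second copy of the name:

// httpServletRequest and themeDisplay are assumed to be in scope here;
// build a render URL to the portlet using the shared constant
PortletURL portletURL = PortletURLFactoryUtil.create(
  httpServletRequest, FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
  themeDisplay.getPlid(), PortletRequest.RENDER_PHASE);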

Enjoy!

Service Builder 6.2 Migration

Technical Blogs February 23, 2017 By David H Nebinger

I'm taking a short hiatus from the design pattern series to cover a topic I've heard a lot of questions on lately - migrating 6.2 Service Builder wars to Liferay 7 CE / Liferay DXP.

Basically it seems you have two choices:

  1. You can keep the Service Builder implementation in a portlet war. Any wars you keep going forward will have access to the service layer, but can you access the services from other OSGi components?
  2. You take the Service Builder code out into an OSGi module. With this path you'll be able to access the services from other OSGi modules, but will the services be available to the legacy portlet wars?

So it's that mixed usage that leads to the questions. I mean, if all you have is either legacy wars or pure OSGi modules, the decision is easy - stick with what you've got.

But when you are in mixed modes, how do you deliver your Service Builder code so both sides will be happy?

The Scenario

So we're going to work from the following starting point. We have a 6.2 Service Builder portlet war following a recommendation that I frequently give: the war has only the Service Builder implementation in it and nothing else, no other portlets. I often recommend this as it gives you a working Service Builder implementation and no pollution from Spring or other libraries that can sometimes conflict with Service Builder. We'll also have a separate portlet war that leverages the Service Builder service.

Nothing fancy for the code, the SB layer has a simple entity, Course, and the portlet war will be a legacy Liferay MVC portlet that lists the courses.

We're tasked with upgrading our code to Liferay 7 CE or Liferay DXP (pick your poison), and as part of the upgrade we will have a new OSGi portlet component using the new Liferay MVC framework for adding a course.

To reduce our development time, we will upgrade our course list portlet to be compatible with Liferay 7 CE / Liferay DXP but keep it as a portlet war - basically the minimal effort needed to get it upgraded. We'll also have the new portlet module for adding a course.

But our big development focus, and the focus of this blog, will be choosing the right path for upgrading that Service Builder portlet war.

For evaluation purposes we're going to have to upgrade the SDK to a Liferay Workspace. Doing so will help get us some working 7.x portlet wars initially, and then when it comes time to do the testing for the module it should be easy to migrate.

Upgrading to a Liferay Workspace

So the Liferay IDE version 3.1 Milestone 2 is available, and it has the Code Upgrade Assistant to help take our SDK project and migrate it to a Liferay Workspace.

For this project, I've made the original 6.2 SDK project available at https://github.com/dnebing/sb-upgrade-62-sdk.

You can find an intro to the upgrade assistant in Greg Amerson's blog: https://web.liferay.com/web/gregory.amerson/blog/-/blogs/liferay-ide-3-1-milestone-1-released and Andy Wu's blog: https://web.liferay.com/web/andy.wu/blog/-/blogs/liferay-ide-3-1-milestone-2-released.

It is still a milestone release so it is still a work in progress, but it does work on upgrading my sample SDK. Just a note, though: it does take some processing time during the initial upgrade to a workspace, so if you think it has locked up or is unresponsive, just have patience. It will come back and it will complete; you just have to give it time to do its job.

Checkpoint

After you finish the upgrade, you should have a Liferay workspace with a plugins-sdk directory, and inside is the normal SDK directory structure. In the portlets directory the two portlet war projects are there, ready for deployment.

In fact, in the plugins-sdk/dist directory you should find both of the wars just waiting to be deployed. Deploy them to your new Liferay 7 CE or Liferay DXP environment, then drop the Course List portlet on a page and you should see the same result as the 6.2 version.

So what have we done so far? We upgraded our SDK to a Liferay Workspace and the Code Upgrade Assistant has upgraded our code to be ready for Liferay 7 CE / Liferay DXP. The two portlet wars were upgraded and built. When we deployed them to Liferay, the WAR -> WAB conversion process converted our old wars into OSGi bundles.

However, if you go into the Gogo shell and start digging around, you won't find the services defined from our Service Builder portlet. Obviously they are there because the Course List portlet uses it to get the list of courses.

War-Based Service Builder

So how do these war-based Service Builder upgrades work? If you take a look at the CourseLocalServiceUtil's getService() method, you'll see that it uses the good ole' PortletBeanLocator and the registered Spring beans for the Service Builder implementation. The Util classes use the PortletBeanLocator to find the service implementations and may leverage the class loader proxies (CLP) if necessary to access the Spring beans from other contexts. From the service war perspective, it's going through Liferay's Spring bean registry to get access to the service implementations.
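
As a simplified sketch (not the exact generated code), the war-based Util's getService() boils down to something like this, where "school-portlet" stands in for the servlet context name of your Service Builder war:

public static CourseLocalService getService() {
  if (_service == null) {
    // locate the Spring bean the service war registered with Liferay
    _service = (CourseLocalService) PortletBeanLocatorUtil.locate(
      "school-portlet", CourseLocalService.class.getName());
  }

  return _service;
}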

Long story short, our service jar is still a service jar. It is not a proper OSGi module and cannot be deployed as one. But the question is, can we still use it?

OSGi Add Course Portlet

So we need an OSGi portlet to add courses. Again this will be another simple portlet to show a form and process the submit. Creating the module is pretty straightforward; the challenge, of course, is including the service jar in the bundle.

First thing that is necessary is to include the jar into the build.gradle dependencies. Since it is not in a Maven-like repository, we'll need to use a slightly different syntax to include the jar:

dependencies {
  compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
  compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
  compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
  compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
  compileOnly group: "jstl", name: "jstl", version: "1.2"
  compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
	
  compile files('../../plugins-sdk/portlets/school-portlet/docroot/WEB-INF/lib/school-portlet-service.jar')
}

The last line is the key; it is the syntax for including a local jar file, and in our case we're pointing at the service jar which is part of the plugins-sdk folder that we upgraded.

Additionally we need to add the stanza to the bnd.bnd file so the jar gets included into the bundle during the build:

Bundle-ClassPath:\
  .,\
  lib/school-portlet-service.jar

-includeresource:\
  lib/school-portlet-service.jar=school-portlet-service.jar

As you'll remember from my blog post on OSGi Module Dependencies, this is option #4 to include the jar into the bundle itself and use it in the classpath for the bundle.

Now if you build and deploy this module, you can place the portlet on a page and start adding courses.  It works!

By including the service jar into the module, we are leveraging the same PortletBeanLocator logic used in the Util class to get access to the service layer and invoke services via the static Util classes.

Now that we know that this is possible (we'll discuss whether to do it this way in the conclusion), let's now rework everything to move the Service Builder code into a set of standard OSGi modules.

Migrating Service Builder War to Bundle

Our service builder code has already been upgraded when we upgraded the SDK, so all we need to do here is create the modules and then move the code.

Creating the Clean Modules

First step is to create a clean project in our Liferay workspace, a foundation for the Service Builder modules to build from.

Once again, I start with Blade since I'm an IntelliJ developer. In the modules directory, we'll let Blade create our Service Builder projects:

blade create -t service-builder -p com.liferay.school school

For the last argument, use something that reflects your current Service Builder project name.

This is the clean project, so let's start dirtying it up a bit.

Copy your legacy service.xml to the school/school-service directory.

Build the initial Service Builder code from the service XML. If you're on the command line, you'd do:

../../gradlew buildService

Now we have unmodified, generated code. Layer in the changes from the legacy Service Builder portlet, including:

  • portlet-model-hints.xml
  • service.properties
  • Changes to any of the META-INF/spring xml files
  • All of your Impl java classes

Rebuild services again to get the working module code.

Module-Based Service Builder

So we reviewed how the CourseLocalServiceUtil's getService() method in the war-based service jar leveraged the PortletBeanLocator to find the Spring bean registered with Liferay to get the implementation class.

In our OSGi module-based version, the CourseLocalServiceUtil's getService() method is instead using an OSGi ServiceTracker to get access to the DS components registered in OSGi for the implementation class.
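
Again as a simplified sketch (the generated code has a bit more to it), the module-based Util ends up looking something like:

public static CourseLocalService getService() {
  // the tracker returns whatever implementation is currently registered in OSGi
  return _serviceTracker.getService();
}

private static final ServiceTracker<CourseLocalService, CourseLocalService> _serviceTracker;

static {
  Bundle bundle = FrameworkUtil.getBundle(CourseLocalServiceUtil.class);

  _serviceTracker = new ServiceTracker<>(
    bundle.getBundleContext(), CourseLocalService.class, null);

  _serviceTracker.open();
}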

Again the service "jar" is still a service jar (well, module), but we also know that the add course portlet will be able to leverage the service (with some modifications), the question of course is whether we can also use the service API module in our legacy course list portlet.

Fixing the Course List Portlet War

So what remains is modifying the course list portlet so it can leverage the API module in lieu of the legacy Service Builder portlet service jar.

This change is actually quite easy...

The liferay-plugin-package.properties file, as changed by the upgrade assistant, contains the following:

required-deployment-contexts=\
    school-portlet

This is the property the Liferay IDE uses to inject the service jar so the service will be available to the portlet war. We need to strip out these two lines since we're no longer using the required deployment context.

If you have the school-portlet-service.jar file in docroot/WEB-INF/lib, go ahead and delete that file since it is no longer necessary.

Next comes the messy part; we need to copy the API jar into the course list portlet's WEB-INF/lib directory. We have to do this so Eclipse will be happy and able to compile all of our code that uses the API. There's no easy way to do this, but I can think of the following options:

  1. Manually copy the API jar over.
  2. Modify the Gradle build scripts to add support for the install of artifacts into the local Maven repo, then the Ivy configuration for the project can be adjusted to include the dependency. Not as messy as a manual file copy, but involves doing the install of the API jar so Ivy can find it.

We're not done there... We actually cannot keep the jar in WEB-INF/lib otherwise at runtime you get class cast exceptions, so we need to exclude it during deployment. This is easily handled, however, by adding an exclusion to your portal-ext.properties file:

module.framework.web.generator.excluded.paths=<CURRENT EXCLUSIONS>,\
  WEB-INF/lib/com.liferay.school.api-1.0.0.jar

When the WAR->WAB conversion is taking place, it will exclude this jar from being included. So you get to keep it in the project and let the WAB conversion strip it out during deployment.

Remember to keep all of the current excluded paths in the list, you can find them in the portal.properties file included in your Liferay source.

Build and deploy your new war and it should access the OSGi-based service API module.

Conclusion

Well, this ended up being a mixed bag...

On one hand I've shown that you can use the Service Builder portlet's service jar as a direct dependency in the module and it can invoke the service through the static Util classes defined within. The advantage of sticking with this path is that it really doesn't require much modification from your legacy code beyond completing the code upgrade, and the Liferay IDE's Code Upgrade Assistant gets you most of the way there. The obvious disadvantage is that you're now adding a dependency to the modules that need to invoke the service layer and the deployed modules include the service jar; so if you change the service layer, you're going to have to rebuild and redeploy all modules that have the service jar as an embedded dependency.

On the other hand I've shown that the migrated OSGi Service Builder modules can be used to eliminate all of the service jar replication and redeployment pain, but the hoops you have to jump through for the legacy portlet access to the services are a development-time pain.

It seems clear, at least to me, that the second option is the best. Sure you will incur some development-time pain to copy service API jars if only to keep the java compiler happy when compiling code, but it definitely has the least impact when it comes to service API modifications.

So my recommendations for migrating your 6.2 Service Builder implementations to Liferay 7 CE / Liferay DXP are:

  • Use the Liferay IDE's Code Upgrade Assistant to help migrate your code to be 7-compatible.
  • Move the Service Builder code to OSGi modules.
  • Add the API jars to the legacy portlet's WEB-INF/lib directory for those portlets which will be consuming the services.
  • Add the module.framework.web.generator.excluded.paths entry to your portal-ext.properties to strip the jar during WAR->WAB conversion.

If you follow these recommendations your legacy portlet wars will be able to leverage the services, any new OSGi-based portlets (or JSP fragments or ...) will be able to access the services, and your deployment impact for changes will be minimized.

My code for all of this is available in github:

Note that the upgraded code is actually in the same repo, they are just in different branches.

Good Luck!

Update

After thinking about this some more, there's actually another path that I did not consider...

For the Service Builder portlet service jar, I indicated you'd need to include this as a dependency on every module that needed to use the service, but I neglected to consider the global service jar option that we used for Liferay 6.x...

So you can keep the Service Builder implementation in the portlet, but move the service jar to the global class loader (Tomcat's lib/ext directory). Remember that with this option there can only be one service jar, the global one, so no other portlet war nor module (including the Service Builder portlet war) can have a service jar. Also remember that to update a global service jar, you can only do this while Tomcat is down.

The final step is to add the packages for the service interfaces to the module.framework.system.packages.extra property in portal-ext.properties. You want to add the packages to the current list defined in portal.properties, not replace the list with just your service packages.

Before starting Tomcat, you'll want to add the exception, model and service trio to the list. For the school service example, this would be something like:

module.framework.system.packages.extra=\
  <ALL DEFAULT VALUES COPIED IN>,\
  com.liferay.school.exception,\
  com.liferay.school.model,\
  com.liferay.school.service

This will make the contents of the packages available to the OSGi global class loader so, whether bundle or WAB, they will all have access to the interfaces and static classes.

This has a little bit of a deployment process change to go with it, but you might consider this the least impactful change of all. We tend to frown on the use of the global class loader because it may introduce transitive dependencies and does not support hot deployable updates, but the lower development cost of this option might offset those concerns.

Liferay Design Patterns - Multi-Scoped Data/Logic

Technical Blogs February 17, 2017 By David H Nebinger

Pattern: Multi-Scoped Data/Logic

Intent

The intent for this pattern is to support data/logic usage in multiple scopes. Liferay defines the scopes Global, Site and Page, but from a development perspective scope refers to Portal and individual OSGi Modules. Classic data access implementations do not support multi-scope access because of boundaries between the scopes.

The Multi-Scoped Data/Logic Liferay Design Pattern's intent is to define how data and logic can be designed to be accessible from all scopes in Liferay, either in the Portal layer or any other deployed OSGi Modules.

Also Known As

This pattern is implemented using the Liferay Service Builder tool.

Motivation

Standard ORM tools provide access to data for servlet-based web applications, but they are not a good fit in the portal because of the barriers between modules in the form of class loader and other kinds of boundaries. If a design starts from a standard ORM solution, it will be restricted to a single development scope. This may seem acceptable for an initial design, but in the portal world most single-scoped solutions eventually need to be changed to support multiple scopes. As the standard tools have no support for multiple scopes, developers will need to hand code bridge logic to add multi-scope support, and any hand coding increases development time, bug potential, and time to market.

The motivation for Liferay's Service Builder tool is to provide an ORM-like tool with built-in support for multi-scoped data access and business logic sharing. The tool transforms an XML-based entity definition file into layered code to support multiple scopes and is used throughout business logic creation to add multi-scope exposure for the business logic methods.

Additionally the tool is the foundation for adding portal feature support to custom entities, including:

  • Auto-populated entity audit columns.
  • Asset framework support (comments, rankings, Asset Publisher support, etc).
  • Indexing and Search support.
  • Model listeners.
  • Workflow support.
  • Expando support.
  • Dynamic Query support.
  • Automagic JSON web service support.
  • Automagic SOAP web service support.

You're not going to get this kind of integration from your classic ORM tool...

And with Liferay 7 CE / Liferay DXP, additionally you also get an OSGi-compatible API and service bundle implementation ready for deployment.

Applicability

IMHO Service Builder applies when you are dealing with any kind of multi-scoped data entities and/or business logic; it also applies if you need to add any of the indicated portal features to your implementation.

Participants

The participants in this pattern are:

  • An XML file defining the entities.
  • Spring configuration files.
  • Implementation class methods to add business logic.
  • Service consumers.

The participants are used by the Service Builder tool to generate code for the service implementation details.

Details for working with Service Builder are covered in the Liferay developer documentation.

Collaboration

Service Builder uses the entity definition XML file to generate the bulk of the code. Custom business methods are added to the ServiceImpl and LocalServiceImpl classes for the custom entities, and Service Builder will include them in the service API.
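
As an illustration only (a trimmed-down sketch reusing the Course entity from the Service Builder migration post above, not an official example), the entity definition XML looks something like:

<service-builder package-path="com.liferay.school">
  <namespace>School</namespace>

  <entity name="Course" local-service="true" remote-service="true" uuid="true">
    <!-- Primary key -->
    <column name="courseId" type="long" primary="true" />

    <!-- Audit columns -->
    <column name="groupId" type="long" />
    <column name="companyId" type="long" />
    <column name="userId" type="long" />
    <column name="userName" type="String" />
    <column name="createDate" type="Date" />
    <column name="modifiedDate" type="Date" />

    <!-- Entity fields -->
    <column name="name" type="String" />
    <column name="description" type="String" />

    <finder name="GroupId" return-type="Collection">
      <finder-column name="groupId" />
    </finder>
  </entity>
</service-builder>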

Consequences

By using Service Builder and generating entities, there is no real downside in the portal environment. Service Builder will generate an ORM layer and provide integration points for all of the core Liferay features.

There are three typical arguments used by architects and developers for not using Service Builder:

  • It is not a complete ORM. This is true; it does not support everything a full ORM does. It doesn't support many-to-many relationships, and it doesn't handle automatic parent-child relationships in one-to-many. All that means is that the many-to-many (and some one-to-many) relationship handling will need to be hand-coded.
  • It still uses old XML files instead of newer Annotations. This is also true, but this is more a reflection of Liferay generating all of the code including the interfaces. With Liferay adding portal features based upon the XML definitions, using annotations would require Liferay to modify the annotated interface and cause circular change effects.
  • I already know how to develop using X, my project deadlines are too short to learn a new tool like Service Builder. Yes there is a learning curve with Service Builder, but this is nothing compared to the mountains of work it will take getting X working correctly in the portal and some Liferay features will just not be options for you without Service Builder's generated code.

All of these arguments are weak in light of what you get by using Service Builder.

Sample Usage

Service Builder is another case of Liferay eating its own dogfood. The entire portal is based on Service Builder for all of the entities in all of the portlets, the Liferay entities, etc.

Check out any of the Liferay modules from simple cases like Bookmarks through more complicated cases such as Workflow or the Asset Publisher.

Conclusion

Service Builder is a must-use if you are going to do any integrated portal development. You can't build the portal features into your portlets without Service Builder usage.

Seriously. You have no other choice. And I'm not saying this because I'm a fanboy or anything, I'm coming from a place of experience. My first project on Liferay dealt with a number of portlets using a service layer; I knew Hibernate but didn't want to take time out to learn Service Builder. That was a terrible mistake on my part. I never did deal with the multi-scoping well at all, never got the kind of Liferay integration that would have been great to have. Fortunately it was not a big problem to have made such a mistake, but I learned from it and use Service Builder all the time now in the portal.

So I share this experience with you in hopes that you too can avoid the mistakes I made. Use Service Builder for your own good!

Liferay Design Patterns - Flexible Entity Presentation

Technical Blogs February 15, 2017 By David H Nebinger

Introduction

So I'm going to start a new type of blog series here covering design patterns in Liferay.

As we all know:

In software engineering, a software design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. - Wikipedia

In Liferay, there are a number of APIs and frameworks used to support Liferay-specific general reusable solutions. But since we haven't defined them in a design pattern, you might not be aware of them and/or if/when/how they could be used.

So I'm going to carve out some time to write about some "design patterns" based on Liferay APIs and frameworks. Hopefully they'll be useful as you go forward designing your own Liferay-based solutions.

Being a first stab at defining these as Liferay Design Patterns, I'm expecting some disagreement on simple things (that design pattern name doesn't seem right) as well as some complex things... Please go ahead and throw your comments at me and I'll make the necessary changes to the post. Remember, this isn't for me, this is for you.

And I must add that yes, I'm taking great liberties in using the phrase "design pattern". Most of the Liferay APIs and frameworks I'm going to cover are really combinations of well-documented software design patterns (in fact Liferay source actually implements a large swath of creational, structural and behavioral design patterns in such purity and clarity they are easy to overlook).

These blog posts may not be defining clean and simple design patterns as specified by the Gang of Four, but they will try to live up to the ideals of true design patterns. They will provide general, reusable solutions to commonly occurring problems in the context of a Liferay-based system.

Ultimately the goal is to demonstrate how by applying these Liferay Design Patterns that you too can design and build a Liferay-based solution that is rich in presentation, robust in functionality and consistent in usage and display. By providing the motivation for using these APIs and frameworks, you will be able to evaluate how they can be used to take your Liferay projects to the next level.

Pattern: Flexible Entity Presentation

Intent

The intent of the Flexible Entity Presentation pattern is to support a dynamic templating mechanism that supports runtime display generation instead of a classic development-time fixed representation, further separating view management from portlet development.

Also Known As

This pattern is known as, and implemented using, the Application Display Template (ADT) framework in Liferay.

Motivation

The problem with most portlets is that the code used to present custom entities is handled as a development-time concern; the UI specifications define how the entity is shown on the page and the development team delivers a solution to satisfy the requirements.  Any change to specifications during development results in a change request for the development team, and post development the change represents a new development project to implement presentation changes.

The inflexibility of the presentation impacts time to market, delivery cycles and development resource allocation.

The Flexible Entity Presentation pattern's motivation is to support a user-driven mechanism to present custom entities in a dynamic way.

The users and admins section on ADTs from dev.liferay.com starts:

The application display template (ADT) framework allows Liferay administrators to override the default display templates, removing limitations to the way your site’s content is displayed.

ADTs allow the display of an entity to be handled by a dynamic template instead of by static code. Don't get hung up on the word content here; it's not just content as in web content, but more a generic reference to any HTML content your portlet needs to render.

Liferay identified this motivation when dealing with client requests for product changes to adapt presentation in different ways to satisfy varying client requirements.  Liferay created and uses the ADT framework extensively in many of the OOTB portlets, from web content through breadcrumbs.  By leveraging ADTs, Liferay defines the entities, e.g. a Bookmark, but the presentation can be overridden by an administrator with an ADT to show the details according to their requirements, all without a development change by Liferay or a code customization by the client.

Liferay eats its own dogfood by leveraging the ADT framework, so this is a well tested framework for supporting dynamic presentation of entities.

When you look at many of the core portlets, they now support ADTs to manage their display aspects since tweaking an ADT is much simpler than creating a JSP fragment bundle or new custom portlet or some crazy JS/CSS fu in order to affect a presentation change. This flexibility is key for supporting changes in the Liferay UI without extensive code customizations.

Applicability

The use of ADTs applies when the presentation of an entity is subject to change. Since admins will use ADTs to manage how to display the entities, the presentation does not need to be finalized before development starts. When the ADT framework is incorporated in the design out of the gate, flexibility in the presentation is baked into the design and the door is open to any future presentation changes without code development, testing and deployment.

So there are some fairly clear use cases to apply ADTs:

  • The presentation of the custom entities is likely to change.
  • The presentation of the custom entities may need to change based upon context (list view, single view, etc.).
  • The presentation is not an aspect in the portlet development.
  • The project is a Liferay Marketplace application and presentation customization is necessary.

Notice the theme here, the change in presentation.

ADTs would either not apply or would be overkill for a static entity presentation, one that doesn't benefit from presentation flexibility.

Participants

The participants in this pattern are:

  • A custom entity.
  • A custom PortletDisplayTemplateHandler.
  • ADT Resource Portlet Permissions.
  • Portlet Configuration for ADT Selection.
  • Portlet View Leveraging ADTs.

The participants work together with the Liferay ADT framework to support a dynamic presentation for the entity. The implementation details for the participants are covered here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/implementing-application-display-templates.

Collaboration

The custom entity is defined in the Service Builder layer (normally).

The PortletDisplayTemplateHandler implementation is used to feed meta information about the fields and descriptions of the entity to the ADT framework Template Editor UI. The meta information provided will generally be tightly coupled to the custom entity, in that changes to the entity will usually result in changes to the PortletDisplayTemplateHandler implementation.

The ADT resource portlet permissions must be enabled for the portlet so administrators will be able to choose the display template and edit display templates for the entity.

The portlet configuration panel is where the administrator will choose between display templates, and the portlet view will leverage Liferay's ADT tag library to inject the rendered template into the portlet view.

Consequences

By moving to an ADT-based presentation of the entity, the template engine (FreeMarker) will be used to render the view.

The template engine will impose a performance cost in supporting the flexible presentation (especially if someone creates a bad template). Implementors should strike a balance between beneficial flexibility and overuse of the ADT framework.

Sample Usage

For practical examples, consider a portal based around a school.  Some common custom entities would be defined for students, rooms, teachers, courses, books, etc.

Consider how often the presentation of the entities may need to change and weigh that against whether the changes are best handled in code or in template.

A course or a teacher entity would likely benefit from ADTs: the course presentation might need to change as a brochure-like view evolves, and the teacher presentation might need to change as new details such as accreditation or course history are added.

The students and rooms may not benefit from ADTs if the presentation is going to remain fairly static.  These entities might go through future presentation changes but it may be more acceptable to approach those as development projects that are planned and coordinated.

Known Uses

The best known uses come from Liferay itself. The list of OOTB portlets which leverage ADTs are:

  • Asset Publisher
  • Blogs
  • Breadcrumbs
  • Categories Navigation
  • Documents and Media
  • Language Selector
  • Navigation Menu
  • RSS Publisher
  • Site Map
  • Tags Navigation
  • Web Content Display
  • Wiki

This provides many examples for when to use ADTs, the obvious advantage of ADTs (customized displays w/o additional coding), and even hints where ADTs may not work well (e.g. the users/orgs control panel, polls, ...).

Conclusion

Well, that's pretty much it for this post. I'd encourage you to go and read the section for styling apps with ADTs as it will help solidify the motivations to incorporate the ADT framework into your design. When you understand how an admin would use ADTs to create a flexible presentation of the Liferay entities, it should help to highlight how you can achieve the same flexibility for your custom assets.

When you're ready to realize these benefits, you can refer to the implementing ADTs page to help with your implementation.

 
