Coming Soon: TripWire

General Blogs 2017/03/28 by David H Nebinger

For the last few months as I've been working with Liferay 7 CE / Liferay DXP, I've been a little stymied trying to manage the complexities of the new OSGi universe.

In Liferay 6.x, for example, an OOTB demo setup of Liferay comes with just five or six war files, and when the portal starts up, they all start up.

But with Liferay 7 CE and Liferay DXP, there are a lot of bundles in the mix. Liferay 7 CE GA3, for example, has almost 2,500 bundles in OSGi.

And when the portal starts up, most of these will also start. Some will not. Some can't start at all because they have unsatisfied dependencies.

But you're not going to know it.

Seriously, you won't know if something has failed to start when you restart your environment. There may or may not be something in the log. Someone might have stopped a bundle intentionally (or unintentionally) in the Gogo shell without telling you. And with almost 2,500 bundles in there, it's going to be really hard finding the needle in the haystack, especially if you don't know whether there's a needle in there at all.

So I've been working on a new utility over the past few months to resolve the situation - TripWire.

Features

TripWire scans the OSGi environment to gather information about deployed bundle statuses, bundle versions, and service components. TripWire also scans the system and portal properties.

This scanning is done at two points: the first is when an administrator takes a snapshot (basically to persist a baseline for all comparisons), and the second is a scheduled task that runs on the node to monitor for changes. The comparison scan can also be kicked off manually.

After installing TripWire and navigating to the TripWire control panel, you'll be prompted to capture an initial baseline scan:

Click the Take Snapshot card to see the system snapshot:

You can save the new baseline (to be compared against in the automated scans), you can export the snapshot (it downloads as an Excel spreadsheet), or you can cancel.

Each section expands to show captured details:

The funny looking hash keys at the top? Those are calculated hashes from the scanned areas. By comparing the baseline hash against the scanned hash, TripWire quickly knows whether there is a variation between the baseline and the current scan.
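
The technique is simple enough to sketch (illustrative only; the class and method names here are hypothetical, not TripWire's actual code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

public class ScanHashExample {

  // Digest the scanned key/value pairs in sorted order so the hash is
  // independent of scan order.
  public static String hash(Map<String, String> scannedValues) throws Exception {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");

    for (Map.Entry<String, String> entry : new TreeMap<>(scannedValues).entrySet()) {
      String line = entry.getKey() + "=" + entry.getValue();

      digest.update(line.getBytes(StandardCharsets.UTF_8));
    }

    return Base64.getEncoder().encodeToString(digest.digest());
  }

  // A single string comparison tells us if anything in the area changed.
  public static boolean isConsistent(String baselineHash, Map<String, String> scan) throws Exception {
    return baselineHash.equals(hash(scan));
  }
}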

When you save the new baseline, the main page will reflect that the server is currently consistent with the baseline:

You can test the server to force a scan by clicking on the Test Server card:

Exclusions

TripWire supports dynamically creating exclusion rules to exclude items from being part of the scan.  You might add an exclusion for a property value that you're not interested in monitoring, for example. Click on the Exclusions card and then on the Add New Exclusion Rule button:

The Camera drop down lists all of the current cameras used when taking a snapshot. Choose either a specific camera or the Any Camera option to allow for a match to any camera.

The Type drop down allows you to select either a Match, a Starts With, a Contains or a Regular Expression type for the exclusion rule.

The value field is what to match against, and the Enabled slider allows you to disable a particular exclusion rule.
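
To make the rule types concrete, the matching could work something like this sketch (illustrative only, not TripWire's actual implementation):

enum RuleType { MATCH, STARTS_WITH, CONTAINS, REGEX }

public class ExclusionRule {

  private final RuleType type;
  private final String value;
  private final boolean enabled;

  public ExclusionRule(RuleType type, String value, boolean enabled) {
    this.type = type;
    this.value = value;
    this.enabled = enabled;
  }

  // Returns true if the scanned item should be left out of the scan.
  public boolean excludes(String scannedItem) {
    if (!enabled) {
      return false;
    }

    switch (type) {
      case MATCH: return scannedItem.equals(value);
      case STARTS_WITH: return scannedItem.startsWith(value);
      case CONTAINS: return scannedItem.contains(value);
      case REGEX: return scannedItem.matches(value);
      default: return false;
    }
  }
}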

Modifying the exclusion rules affects scans immediately, which can result in failed scans:

By adding a rule to exclude any System Property that starts with "catalina.", scans now show the server to be inconsistent when compared to the baseline (the baseline still contains the now-excluded properties, so the hashes no longer match). At this point you can take a new baseline snapshot to approve the change, or you can disable the exclusion rule (basically reverting the change to the system) to restore baseline consistency.

Notifications

TripWire uses Liferay notifications to alert subscribed administrators when the node is in an inconsistent state and when the node returns to a consistent state. For Liferay 7 CE, a subscribed administrator will only receive notifications about the single Liferay node. For Liferay DXP, subscribed administrators will receive notifications from every node that is out of sync with the baseline snapshot.

Notifications will be issued for every failed scan on every node until consistency is restored.

To subscribe or unsubscribe to notifications, click on the Subscriptions card. If you are unsubscribed, the bell image will be grey; if you are subscribed, the bell will be blue and have a red notification number on it. Note that this number does not represent the number of notifications you might currently have; it is just a visual marker that you are subscribed for notifications.

Configuration

TripWire supports setting configuration for the scanning schedule. Click on the Configuration card:

Using the Cameras tab, you can also choose the cameras to use in the snapshots and scans:

Normally I recommend enabling all but the Service Status Camera (because this camera is quite verbose in the details it captures).

The Bundle Status Camera captures status for each bundle.

The Bundle Version Camera captures versions of every bundle.

The Configuration Admin Camera captures configuration changes from the control panel. Note that Configuration Admin only saves values that differ from the defaults on each panel, so the details in this section will always be shorter than the actual set of configurations saved for the portal.

The Portal Properties Camera captures changes to known Liferay portal properties (unknown properties are ignored). In a Liferay DXP cluster, some properties will need to be excluded using the Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Service Status Camera captures counts of OSGi DS Services and their statuses.

The System Properties Camera captures changes to system properties from the JVM. Like the portal properties, in a Liferay DXP cluster some properties will need to be excluded using Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Unsatisfied References Camera captures the list of bundles with unsatisfied references (preventing the bundles from starting). Any time a bundle has an unsatisfied reference, the bundle and its unsatisfied reference(s) will be captured by this camera.

The three email tabs configure who the notification emails are from and the consistent/inconsistent email templates.

Liferay DXP

For Liferay DXP clusters, TripWire uses the same baseline across all nodes in the cluster and reports on cluster node inconsistencies:

Clicking on the server link in the status area, you can review the server's report to see where the problems are:

Some of the additions and changes are due to unique node values and should be handled by adding new Exclusion Rules.

The Removals above show that one node in the cluster has Audience Targeting deployed but the other node does not. These are the kinds of inconsistencies that you may not be aware of from a cluster perspective, but they would result in your DXP cluster not serving the right content to all users; identifying such a discrepancy quickly and easily will save you time, money and effort.

For a cluster, your Exclusion Rules list will be quite long:

Conclusion

That's TripWire.

It will be available soon in the Liferay Marketplace for both Liferay 7 CE and Liferay DXP.

There is a cost for each version, but that is to offset the time and effort I have invested in this tool.

And while there may not seem to be an immediate return, the first time this tool saves you by identifying a node that is out of sync or an unauthorized change to your OSGi environment, it will save you time (waiting for the change to be identified), effort (sorting through all of the Gogo output and other details), user impressions (from cluster node sync issues) and, most of all, money.

TripWire is currently under review for Marketplace release, and I'll post an update once it is available.

 

Liferay DXP and WebLogic...

Technical Blogs 2017/03/21 by David H Nebinger

For those of you deploying Liferay DXP to WebLogic, you will need to add an override property to your portal-ext.properties file to allow the WebLogic JAXB implementation to peer inside the OSGi environment to create proxy instances.

I know, it's a mouthful, but it's all pretty darn technical. You'll know if you need this if you start seeing exceptions like:

java.lang.NoClassDefFoundError: org/eclipse/persistence/internal/jaxb/WrappedValue
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
	at org.eclipse.persistence.internal.jaxb.JaxbClassLoader.generateClass(JaxbClassLoader.java:124)
	at org.eclipse.persistence.jaxb.compiler.MappingsGenerator.generateWrapperClass(MappingsGenerator.java:3302)
	Truncated. see log file for complete stacktrace
Caused By: java.lang.ClassNotFoundException: org.eclipse.persistence.internal.jaxb.WrappedValue cannot be found by com.example.bundle_1.0.0
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:444)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:357)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:349)
	at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	Truncated. see log file for complete stacktrace

Suffice it to say that you need to override the module.framework.properties.org.osgi.framework.bootdelegation property to add the following two lines:

  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\

You have to include all of the packages from the value defined in portal.properties, so my entry actually looks like:

module.framework.properties.org.osgi.framework.bootdelegation=\
  __redirected,\
  com.liferay.aspectj,\
  com.liferay.aspectj.*,\
  com.liferay.portal.servlet.delegate,\
  com.liferay.portal.servlet.delegate*,\
  com.sun.ccpp,\
  com.sun.ccpp.*,\
  com.sun.crypto.*,\
  com.sun.image.*,\
  com.sun.jmx.*,\
  com.sun.jna,\
  com.sun.jndi.*,\
  com.sun.mail.*,\
  com.sun.management.*,\
  com.sun.media.*,\
  com.sun.msv.*,\
  com.sun.org.*,\
  com.sun.syndication,\
  com.sun.tools.*,\
  com.sun.xml.*,\
  com.yourkit.*,\
  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\
  sun.*

Enjoy

Creating a Spring MVC Portlet War in the Liferay Workspace

Technical Blogs 2017/02/28 by David H Nebinger

Introduction

So I've been working on some new Blade sample projects, and one of those is the Spring MVC portlet example.

As pointed out in the Liferay documentation for Spring MVC portlets, these need to be built as war files, and the Liferay Workspace will actually help you get this work done. I'm going to share things that I learned while creating the sample, which has not yet been merged but hopefully will be soon.

Creating The Project

So your war projects need to go in the wars folder inside of your Liferay Workspace folder, basically at the same level as your modules directory. If you don't have a wars folder, go ahead and create one.

Next we're going to have to manually create the portlet project. Currently Blade does not have support for building a Spring MVC portlet war project; perhaps this is something that can change in the future.

Inside of the wars folder, you create a folder for each portlet WAR project that you are building. To be consistent, my project folder was named blade.springmvc.web, but your project folder can be named according to your standards.

Inside your project folder, you need to set up the folder structure for a Spring MVC project. Your project structure will resemble:

[Screenshot: Spring MVC project structure]
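
For reference, assuming standard Maven-style conventions, the layout looks something like this (file names per a typical Spring MVC portlet; your specifics may differ):

blade.springmvc.web/
  build.gradle
  src/
    main/
      java/           (portlet controller classes; was docroot/WEB-INF/src)
      resources/      (non-Java classpath resources)
      webapp/         (was docroot)
        css/
        WEB-INF/
          web.xml
          portlet.xml
          liferay-portlet.xml
          liferay-display.xml
          liferay-plugin-package.properties
          (Spring application context XML files)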

For the most part this structure is similar to what you would use in a Maven implementation. For those coming from a legacy SDK, the contents of the docroot folder go to the src/main/webapp folder, but the contents of docroot/WEB-INF/src go in the src/main/java folder (or src/main/resources for non-Java files).

Otherwise this structure is going to be extremely similar to legacy Spring MVC portlet wars; all of the old locations basically still apply.

Build.gradle Contents

The fun part for us is the build.gradle file. This file controls how Gradle is going to build your project into a war suitable for distribution.

Here's the contents of the build.gradle file for my blade sample:

buildscript {
  repositories {
    mavenLocal()
    maven {
      url "https://cdn.lfrs.sl/repository.liferay.com/nexus/content/groups/public"
    }
  }

  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.css.builder", version: "latest.release"
    classpath group: "com.liferay", name: "com.liferay.css.builder", version: "latest.release"
  }
}

apply plugin: "com.liferay.css.builder"

war {
  dependsOn buildCSS

  exclude('**/*.scss')

  filesMatching("**/.sass-cache/") {
    it.path = it.path.replace(".sass-cache/", "")
  }

  includeEmptyDirs = false
}

dependencies {
  compileOnly project(':modules:blade.servicebuilder.api')
  compileOnly 'com.liferay.portal:com.liferay.portal.kernel:2.6.0'
  compileOnly 'javax.portlet:portlet-api:2.0'
  compileOnly 'javax.servlet:javax.servlet-api:3.0.1'
  compileOnly 'org.osgi:org.osgi.service.component.annotations:1.3.0'
  compile group: 'aopalliance', name: 'aopalliance', version: '1.0'
  compile group: 'commons-logging', name: 'commons-logging', version: '1.2'
  compileOnly group: 'javax.servlet.jsp.jstl', name: 'jstl-api', version: '1.2'
  compileOnly group: 'org.glassfish.web', name: 'jstl-impl', version: '1.2'
  compile group: 'org.springframework', name: 'spring-aop', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-beans', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-context', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-core', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-expression', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc-portlet', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-web', version: '4.1.9.RELEASE'
}

So first is the buildscript stanza and the CSS builder apply line, followed by the war customization stanza.

These parts are currently necessary to support compiling the SCSS files into CSS and storing the files in the right place in the WAR. Note that Liferay currently sees this manual execution of the CSS builder plugin as a bug and plans on fixing it sometime soon.

Managing Dependencies

The next part is the dependencies, and this will be the fun part for you as it was for me.

You're going to be picking from two different dependency types, compile and compileOnly. The big difference is whether the dependencies get included in the WEB-INF/lib directory (for compile) or are used only to compile the project but not included (compileOnly).

Many of the Liferay or OSGi jars, such as portal-kernel or the servlet and portlet APIs, should not be included in your WEB-INF/lib directory, but they are needed for compilation, so they are marked as compileOnly.

In Liferay 6.x development, we used to be able to use the portal-dependency-jars in liferay-plugin-package.properties to inject libraries into our wars at deployment time. Not so for Liferay 7 CE / Liferay DXP development.

The portal-dependency-jars property in liferay-plugin-package.properties is deprecated in Liferay 7 CE/Liferay DXP. All dependencies must be included in the war at build time.

Since we cannot use the portal dependencies in liferay-plugin-package.properties, I had to manually include the Spring jars using the compile type. 

Conclusion

Yep, that's pretty much it.

Since it's in the Liferay Workspace and is Gradle-built, you can use the gradle wrapper script at the root of the project to build everything, including the portlet wars.

Your built war will be in the project's build/libs directory, and this war file is ready to be deployed to Liferay by dropping it in the Liferay deploy folder.

Debugging "ClassNotFound" exceptions, etc, in your war file can be extremely challenging since Liferay doesn't really keep anything around from the WAR->WAB conversion process. If you add the following properties to portal-ext.properties, the WABs generated by Liferay will be saved so you can open the file and see what jars were injected and where the files are all found in the WAB.

module.framework.web.generator.generated.wabs.store=true
module.framework.web.generator.generated.wabs.store.dir=${module.framework.base.dir}/wabs

If you want to check out the project, it is currently live in the blade samples: https://github.com/liferay/liferay-blade-samples/tree/master/liferay-workspace/wars/blade.portlet.springmvc.

Proper Portlet Name for your Portlet components...

Technical Blogs 2017/02/28 by David H Nebinger

Okay, this is probably going to be one of my shortest blog posts, but it's important.

Some releases of Liferay have code to "infer" a portlet name if it is not specified in the component properties.  This actually conflicts with other pieces of code that also try to "infer" what the portlet name is.

The problem is that they sometimes have different requirements; in one case, periods are fine in the name so the full class name is used (in the portlet component), but in other cases periods are not allowed so it uses the class name with the periods replaced by underscores.

Needless to say, this can cause you problems if Liferay is trying to use two different portlet names for the same portlet, one that works and the other that doesn't.

So save yourself some headaches and always assign your portlet name in the properties for the Portlet component, and always use that same value for other components that also need the portlet name.  And avoid periods since they cannot be used in all cases.

So here's an example for the portlet component properties from one of my previous blogs:

@Component(
  immediate = true,
  property = {
    "com.liferay.portlet.display-category=category.system.admin",
    "com.liferay.portlet.header-portlet-css=/css/main.css",
    "com.liferay.portlet.instanceable=false",
    "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
    "javax.portlet.display-name=Filesystem Access",
    "javax.portlet.init-param.template-path=/",
    "javax.portlet.init-param.view-template=/view.jsp",
    "javax.portlet.resource-bundle=content.Language",
    "javax.portlet.security-role-ref=power-user,user"
  },
  service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {}

Notice how I was explicit with the javax.portlet.name? That's the one you need. Don't let Liferay assume what your portlet name is; be explicit with the value.

And the value that I used in the FilesystemAccessPortletKeys constants:

public static final String FILESYSTEM_ACCESS =
  "com_liferay_filesystemaccess_portlet_FilesystemAccessPortlet";

No periods, but since I'm using the class name it won't have any collisions with other portlet classes...

Note that if you don't like long URLs, you might try shortening the name, but stick with something that avoids collisions, such as the first letter of each package plus the class name, e.g. "clfp_FilesystemAccessPortlet". Remember that collisions are bad things...

And finally, since I've put the name in an external PortletKeys constants file, any other piece of code that expects the portlet name can use the constant value too (i.e. for your config admin interfaces) and you'll know that all code is using the same constant value.
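
For example, a hypothetical configuration action for the same portlet might bind itself using the same constant:

import com.liferay.portal.kernel.portlet.ConfigurationAction;
import com.liferay.portal.kernel.portlet.DefaultConfigurationAction;

import org.osgi.service.component.annotations.Component;

// Hypothetical class; the point is that javax.portlet.name reuses the same
// shared constant as the portlet component itself.
@Component(
  property = "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
  service = ConfigurationAction.class
)
public class FilesystemAccessConfigurationAction
  extends DefaultConfigurationAction {
}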

Enjoy!

Service Builder 6.2 Migration

Technical Blogs 2017/02/23 by David H Nebinger

I'm taking a short hiatus from the design pattern series to cover a topic I've heard a lot of questions on lately - migrating 6.2 Service Builder wars to Liferay 7 CE / Liferay DXP.

Basically it seems you have two choices:

  1. You can keep the Service Builder implementation in a portlet war. Any wars you keep going forward will have access to the service layer, but can you access the services from other OSGi components?
  2. You take the Service Builder code out into an OSGi module. With this path you'll be able to access the services from other OSGi modules, but will the services be available to the legacy portlet wars?

So it's that mixed usage that leads to the questions. I mean, if all you have is either legacy wars or pure OSGi modules, the decision is easy - stick with what you've got.

But when you are in mixed modes, how do you deliver your Service Builder code so both sides will be happy?

The Scenario

So we're going to work from the following starting point. We have a 6.2 Service Builder portlet war following a recommendation that I frequently give, the war has only the Service Builder implementation in it and nothing else, no other portlets. I often recommend this as it gives you a working Service Builder implementation and no pollution from Spring or other libraries that can sometimes conflict with Service Builder. We'll also have a separate portlet war that leverages the Service Builder service.

Nothing fancy for the code, the SB layer has a simple entity, Course, and the portlet war will be a legacy Liferay MVC portlet that lists the courses.

We're tasked with upgrading our code to Liferay 7 CE or Liferay DXP (pick your poison), and as part of the upgrade we will have a new OSGi portlet component using the new Liferay MVC framework for adding a course.

To reduce our development time, we will upgrade our course list portlet to be compatible with Liferay 7 CE / Liferay DXP but keep it as a portlet war - basically the minimal effort needed to get it upgraded. We'll also have the new portlet module for adding a course.

But our big development focus, and the focus of this blog, will be choosing the right path for upgrading that Service Builder portlet war.

For evaluation purposes we're going to have to upgrade the SDK to a Liferay Workspace. Doing so will help get us some working 7.x portlet wars initially, and then when it comes time to test the module approach it should be easy to migrate.

Upgrading to a Liferay Workspace

So the Liferay IDE version 3.1 Milestone 2 is available, and it has the Code Upgrade Assistant to help take our SDK project and migrate it to a Liferay Workspace.

For this project, I've made the original 6.2 SDK project available at https://github.com/dnebing/sb-upgrade-62-sdk.

You can find an intro to the upgrade assistant in Greg Amerson's blog: https://web.liferay.com/web/gregory.amerson/blog/-/blogs/liferay-ide-3-1-milestone-1-released and Andy Wu's blog: https://web.liferay.com/web/andy.wu/blog/-/blogs/liferay-ide-3-1-milestone-2-released.

It is still a milestone release so it is still a work in progress, but it does work on upgrading my sample SDK. Just a note, though: it does take some processing time during the initial upgrade to a workspace; if you think it has locked up or is unresponsive, just have patience. It will come back, it will complete, you just have to give it time to do its job.

Checkpoint

After you finish the upgrade, you should have a Liferay workspace with a plugins-sdk directory, and inside there is the normal SDK directory structure. In the portlets directory the two portlet war projects are there and they are ready for deployment.

In fact, in the plugins-sdk/dist directory you should find both of the wars just waiting to be deployed. Deploy them to your new Liferay 7 CE or Liferay DXP environment, then spin out and drop the Course List portlet on a page and you should see the same result as the 6.2 version.

So what have we done so far? We upgraded our SDK to a Liferay Workspace and the Code Upgrade Assistant has upgraded our code to be ready for Liferay 7 CE / Liferay DXP. The two portlet wars were upgraded and built. When we deployed them to Liferay, the WAR -> WAB conversion process converted our old wars into OSGi bundles.

However, if you go into the Gogo shell and start digging around, you won't find the services defined by our Service Builder portlet. Obviously they are there, because the Course List portlet uses them to get the list of courses.

War-Based Service Builder

So how do these war-based Service Builder upgrades work? If you take a look at the CourseLocalServiceUtil's getService() method, you'll see that it uses the good ole' PortletBeanLocator and the registered Spring beans for the Service Builder implementation. The Util classes use the PortletBeanLocator to find the service implementations and may leverage the class loader proxies (CLP) if necessary to access the Spring beans from other contexts. From the service war perspective, it's going through Liferay's Spring bean registry to get access to the service implementations.
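
Stripped of caching and error handling, the generated lookup works roughly like this (a simplified sketch, not the verbatim generated code):

import com.liferay.portal.kernel.bean.PortletBeanLocatorUtil;

// Simplified from the generated CourseLocalServiceUtil in the war-based
// service jar; ClpSerializer is generated into the same service jar.
public static CourseLocalService getService() {
  if (_service == null) {
    // Locate the Spring bean registered with Liferay under the service
    // war's servlet context name.
    _service = (CourseLocalService)PortletBeanLocatorUtil.locate(
      ClpSerializer.getServletContextName(),
      CourseLocalService.class.getName());
  }

  return _service;
}

private static CourseLocalService _service;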

Long story short, our service jar is still a service jar. It is not a proper OSGi module and cannot be deployed as one. But the question is, can we still use it?

OSGi Add Course Portlet

So we need an OSGi portlet to add courses. Again this will be another simple portlet to show a form and process the submit. Creating the module is pretty straight forward, the challenge of course is including the service jar into the bundle.

First thing that is necessary is to include the jar into the build.gradle dependencies. Since it is not in a Maven-like repository, we'll need to use a slightly different syntax to include the jar:

dependencies {
  compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
  compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
  compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
  compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
  compileOnly group: "jstl", name: "jstl", version: "1.2"
  compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
	
  compile files('../../plugins-sdk/portlets/school-portlet/docroot/WEB-INF/lib/school-portlet-service.jar')
}

The last line is the key; it is the syntax for including a local jar file, and in our case we're pointing at the service jar which is part of the plugins-sdk folder that we upgraded.

Additionally we need to add the stanza to the bnd.bnd file so the jar gets included into the bundle during the build:

Bundle-ClassPath:\
  .,\
  lib/school-portlet-service.jar

-includeresource:\
  lib/school-portlet-service.jar=school-portlet-service.jar

As you'll remember from my blog post on OSGi Module Dependencies, this is option #4 to include the jar into the bundle itself and use it in the classpath for the bundle.

Now if you build and deploy this module, you can place the portlet on a page and start adding courses.  It works!

By including the service jar into the module, we are leveraging the same PortletBeanLocator logic used in the Util class to get access to the service layer and invoke services via the static Util classes.

Now that we know that this is possible (we'll discuss whether to do it this way in the conclusion), let's now rework everything to move the Service Builder code into a set of standard OSGi modules.

Migrating Service Builder War to Bundle

Our service builder code has already been upgraded when we upgraded the SDK, so all we need to do here is create the modules and then move the code.

Creating the Clean Modules

First step is to create a clean project in our Liferay workspace, a foundation for the Service Builder modules to build from.

Once again, I start with Blade since I'm an IntelliJ developer. In the modules directory, we'll let Blade create our Service Builder projects:

blade create -t service-builder -p com.liferay.school school

For the last argument, use something that reflects your current Service Builder project name.
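
Assuming the command above, Blade generates paired api and service modules, something like:

school/
  school-api/
    bnd.bnd
    build.gradle
  school-service/
    bnd.bnd
    build.gradle
    service.xml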

This is the clean project, so let's start dirtying it up a bit.

Copy your legacy service.xml to the school/school-service directory.

Build the initial Service Builder code from the service XML. If you're on the command line, you'd do:

../../gradlew buildService

Now we have unmodified, generated code. Layer in the changes from the legacy Service Builder portlet, including:

  • portlet-model-hints.xml
  • service.properties
  • Changes to any of the META-INF/spring xml files
  • All of your Impl java classes

Rebuild services again to get the working module code.

Module-Based Service Builder

So we reviewed how the CourseLocalServiceUtil's getService() method in the war-based service jar leveraged the PortletBeanLocator to find the Spring bean registered with Liferay to get the implementation class.

In our OSGi module-based version, the CourseLocalServiceUtil's getService() method is instead using an OSGi ServiceTracker to get access to the DS components registered in OSGi for the implementation class.
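
In simplified form, the module-based lookup looks something like this (not the verbatim generated code):

import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;
import org.osgi.util.tracker.ServiceTracker;

// Roughly what the generated module-based Util class does: track the
// OSGi-registered service instead of locating a Spring bean.
private static ServiceTracker<CourseLocalService, CourseLocalService> _serviceTracker;

static {
  Bundle bundle = FrameworkUtil.getBundle(CourseLocalService.class);

  _serviceTracker = new ServiceTracker<>(
    bundle.getBundleContext(), CourseLocalService.class, null);

  _serviceTracker.open();
}

public static CourseLocalService getService() {
  return _serviceTracker.getService();
}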

Again the service "jar" is still a service jar (well, module), but we also know that the add course portlet will be able to leverage the service (with some modifications); the question of course is whether we can also use the service API module in our legacy course list portlet.

Fixing the Course List Portlet War

So what remains is modifying the course list portlet so it can leverage the API module in lieu of the legacy Service Builder portlet service jar.

This change is actually quite easy...

The liferay-plugin-package.properties file, as changed by the upgrade assistant, contains the following:

required-deployment-contexts=\
    school-portlet

These are the lines used by the Liferay IDE to inject the service jar so the service will be available to the portlet war. We need to remove these two lines since we're not using the deployment context.

If you have the school-portlet-service.jar file in docroot/WEB-INF/lib, go ahead and delete that file since it is no longer necessary.

Next comes the messy part; we need to copy the API jar into the course list portlet's WEB-INF/lib directory. We have to do this so Eclipse will be able to compile all of our code that uses the API. There's no easy way to do this, but I can think of the following options:

  1. Manually copy the API jar over.
  2. Modify the Gradle build scripts to add support for the install of artifacts into the local Maven repo, then the Ivy configuration for the project can be adjusted to include the dependency. Not as messy as a manual file copy, but involves doing the install of the API jar so Ivy can find it.

We're not done there... We actually cannot keep the jar in WEB-INF/lib, otherwise at runtime you get class cast exceptions, so we need to exclude it during deployment. This is easily handled, however, by adding an exclusion to your portal-ext.properties file:

module.framework.web.generator.excluded.paths=<CURRENT EXCLUSIONS>,\
  WEB-INF/lib/com.liferay.school.api-1.0.0.jar

When the WAR->WAB conversion is taking place, it will exclude this jar from being included. So you get to keep it in the project and let the WAB conversion strip it out during deployment.

Remember to keep all of the current excluded paths in the list; you can find them in the portal.properties file included in your Liferay source.

Build and deploy your new war and it should access the OSGi-based service API module.

Conclusion

Well, this ended up being a mixed bag...

On one hand I've shown that you can use the Service Builder portlet's service jar as a direct dependency in the module and it can invoke the service through the static Util classes defined within. The advantage of sticking with this path is that it really doesn't require much modification from your legacy code beyond completing the code upgrade, and the Liferay IDE's Code Upgrade Assistant gets you most of the way there. The obvious disadvantage is that you're now adding a dependency to the modules that need to invoke the service layer and the deployed modules include the service jar; so if you change the service layer, you're going to have to rebuild and redeploy all modules that have the service jar as an embedded dependency.

On the other hand I've shown that the migrated OSGi Service Builder modules can be used to eliminate all of the service jar replication and redeployment pain, but the hoops you have to jump through for the legacy portlet access to the services are a development-time pain.

It seems clear, at least to me, that the second option is the best. Sure, you will incur some development-time pain copying service API jars if only to keep the Java compiler happy, but it definitely has the least impact when it comes to service API modifications.

So my recommendations for migrating your 6.2 Service Builder implementations to Liferay 7 CE / Liferay DXP are:

  • Use the Liferay IDE's Code Upgrade Assistant to help migrate your code to be 7-compatible.
  • Move the Service Builder code to OSGi modules.
  • Add the API jars to the legacy portlet's WEB-INF/lib directory for those portlets which will be consuming the services.
  • Add the module.framework.web.generator.excluded.paths entry to your portal-ext.properties to strip the jar during WAR->WAB conversion.

If you follow these recommendations your legacy portlet wars will be able to leverage the services, any new OSGi-based portlets (or JSP fragments or ...) will be able to access the services, and your deployment impact for changes will be minimized.

My code for all of this is available on GitHub:

Note that the upgraded code is actually in the same repo; the versions are just in different branches.

Good Luck!

Update

After thinking about this some more, there's actually another path that I did not consider...

For the Service Builder portlet service jar, I indicated you'd need to include this as a dependency on every module that needed to use the service, but I neglected to consider the global service jar option that we used for Liferay 6.x...

So you can keep the Service Builder implementation in the portlet, but move the service jar to the global class loader (Tomcat's lib/ext directory). Remember that with this option there can only be one service jar, the global one, so no other portlet war or module (including the Service Builder portlet war) can have a service jar. Also remember that to update a global service jar, you can only do so while Tomcat is down.

The final step is to add the packages for the service interfaces to the module.framework.system.packages.extra property in portal-ext.properties. You want to add the packages to the current list defined in portal.properties, not replace the list with just your service packages.

Before starting Tomcat, you'll want to add the exception, model and service trio to the list. For the school service example, this would be something like:

module.framework.system.packages.extra=\
  <ALL DEFAULT VALUES COPIED IN>,\
  com.liferay.school.exception,\
  com.liferay.school.model,\
  com.liferay.school.service

This will make the contents of the packages available to the OSGi global class loader so, whether bundle or WAB, they will all have access to the interfaces and static classes.

This has a little bit of a deployment process change to go with it, but you might consider this the least impactful change of all. We tend to frown on the use of the global class loader because it may introduce transitive dependencies and does not support hot deployable updates, but this option might be lower development cost to offset the concern.

Liferay Design Patterns - Multi-Scoped Data/Logic

Technical Blogs 2017/02/17 by David H Nebinger

Pattern: Multi-Scoped Data/Logic

Intent

The intent for this pattern is to support data/logic usage in multiple scopes. Liferay defines the scopes Global, Site and Page, but from a development perspective scope refers to Portal and individual OSGi Modules. Classic data access implementations do not support multi-scope access because of boundaries between the scopes.

The Multi-Scoped Data/Logic Liferay Design Pattern's intent is to define how data and logic can be designed to be accessible from all scopes in Liferay, either in the Portal layer or any other deployed OSGi Modules.

Also Known As

This pattern is implemented using the Liferay Service Builder tool.

Motivation

Standard ORM tools provide access to data for servlet-based web applications, but they are not a good fit in the portal because of the barriers between modules in the form of class loaders and other kinds of boundaries. If a design starts from a standard ORM solution, it will be restricted to a single development scope. This may seem acceptable for an initial design, but in the portal world single-scoped solutions often need to be changed to support multiple scopes. As the standard tools have no support for multiple scopes, developers will need to hand-code bridge logic to add multi-scope support, and any hand coding increases development time, bug potential, and time to market.

The motivation for Liferay's Service Builder tool is to provide an ORM-like tool with built-in support for multi-scoped data access and business logic sharing. The tool transforms an XML-based entity definition file into layered code to support multiple scopes and is used throughout business logic creation to add multi-scope exposure for the business logic methods.
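
For context, the entity definition file (service.xml) is quite compact; a minimal sketch for a single hypothetical entity might look like:

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
  "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<service-builder package-path="com.liferay.school">
  <namespace>School</namespace>
  <entity local-service="true" name="Course" remote-service="true" uuid="true">
    <!-- Primary key and standard scoping columns -->
    <column name="courseId" primary="true" type="long" />
    <column name="groupId" type="long" />
    <column name="companyId" type="long" />
    <!-- Entity fields -->
    <column name="name" type="String" />
    <!-- Finder methods generated into the persistence layer -->
    <finder name="GroupId" return-type="Collection">
      <finder-column name="groupId" />
    </finder>
  </entity>
</service-builder>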

Additionally the tool is the foundation for adding portal feature support to custom entities, including:

  • Auto-populated entity audit columns.
  • Asset framework support (comments, rankings, Asset Publisher support, etc).
  • Indexing and Search support.
  • Model listeners.
  • Workflow support.
  • Expando support.
  • Dynamic Query support.
  • Automagic JSON web service support.
  • Automagic SOAP web service support.

You're not going to get this kind of integration from your classic ORM tool...

And with Liferay 7 CE / Liferay DXP, you also get an OSGi-compatible API and service bundle implementation ready for deployment.

Applicability

IMHO Service Builder applies when you are dealing with any kind of multi-scoped data entities and/or business logic; it also applies if you need to add any of the indicated portal features to your implementation.

Participants

The participants in this pattern are:

  • An XML file defining the entities.
  • Spring configuration files.
  • Implementation class methods to add business logic.
  • Service consumers.

The participants are used by the Service Builder tool to generate code for the service implementation details.

Details for working with Service Builder are covered in the following sections:

Collaboration

Service Builder uses the entity definition XML file to generate the bulk of the code. Custom business methods are added to the ServiceImpl and LocalServiceImpl classes for the custom entities, and Service Builder will include them in the service API.
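
For example, a custom business method added to CourseLocalServiceImpl surfaces on the generated service API after the next buildService run. A hedged sketch (the persistence field comes from the generated base class; the finder assumes a matching finder definition in service.xml):

// Hypothetical custom business method in CourseLocalServiceImpl; after
// rebuilding services it appears on CourseLocalService and the Util class.
public List<Course> getCoursesByGroupId(long groupId) {
  return coursePersistence.findByGroupId(groupId);
}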

Consequences

Using Service Builder to generate your entities has no real downside in the portal environment. Service Builder will generate an ORM layer and provide integration points for all of the core Liferay features.

There are three typical arguments used by architects and developers for not using Service Builder:

  • It is not a complete ORM. This is true; it does not support everything a full ORM does. It doesn't support many-to-many relationships, and it doesn't handle automatic parent-child relationships in one-to-many. All that means is that the code for many-to-many and some one-to-many relationship handling will need to be hand-coded.
  • It still uses old XML files instead of newer Annotations. This is also true, but this is more a reflection of Liferay generating all of the code including the interfaces. With Liferay adding portal features based upon the XML definitions, using annotations would require Liferay to modify the annotated interface and cause circular change effects.
  • I already know how to develop using X, my project deadlines are too short to learn a new tool like Service Builder. Yes there is a learning curve with Service Builder, but this is nothing compared to the mountains of work it will take getting X working correctly in the portal and some Liferay features will just not be options for you without Service Builder's generated code.

All of these arguments are weak in light of what you get by using Service Builder.

Sample Usage

Service Builder is another case of Liferay eating its own dogfood. The entire portal is based on Service Builder for all of the entities in all of the portlets, the Liferay entities, etc.

Check out any of the Liferay modules from simple cases like Bookmarks through more complicated cases such as Workflow or the Asset Publisher.

Conclusion

Service Builder is a must-use if you are going to do any integrated portal development. You can't build the portal features into your portlets without Service Builder usage.

Seriously. You have no other choice. And I'm not saying this because I'm a fanboy or anything, I'm coming from a place of experience. My first project on Liferay dealt with a number of portlets using a service layer; I knew Hibernate but didn't want to take time out to learn Service Builder. That was a terrible mistake on my part. I never did deal with the multi-scoping well at all, never got the kind of Liferay integration that would have been great to have. Fortunately it was not a big problem to have made such a mistake, but I learned from it and use Service Builder all the time now in the portal.

So I share this experience with you in hopes that you too can avoid the mistakes I made. Use Service Builder for your own good!

Liferay Design Patterns - Flexible Entity Presentation

Technical Blogs 2017/02/15 by David H Nebinger

Introduction

So I'm going to start a new type of blog series here covering design patterns in Liferay.

As we all know:

In software engineering, a software design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. - Wikipedia

In Liferay, there are a number of APIs and frameworks used to support Liferay-specific general reusable solutions. But since we haven't defined them in a design pattern, you might not be aware of them and/or if/when/how they could be used.

So I'm going to carve out some time to write about some "design patterns" based on Liferay APIs and frameworks. Hopefully they'll be useful as you go forward designing your own Liferay-based solutions.

Being a first stab at defining these as Liferay Design Patterns, I'm expecting some disagreement on simple things (that design pattern name doesn't seem right) as well as some complex things... Please go ahead and throw your comments at me and I'll make necessary changes to the post. Remember this isn't for me, this is for you.

And I must add that yes, I'm taking great liberties in using the phrase "design pattern". Most of the Liferay APIs and frameworks I'm going to cover are really combinations of well-documented software design patterns (in fact Liferay source actually implements a large swath of creational, structural and behavioral design patterns in such purity and clarity they are easy to overlook).

These blog posts may not be defining clean and simple design patterns as specified by the Gang of Four, but they will try to live up to the ideals of true design patterns. They will provide general, reusable solutions to commonly occurring problems in the context of a Liferay-based system.

Ultimately the goal is to demonstrate how, by applying these Liferay Design Patterns, you too can design and build a Liferay-based solution that is rich in presentation, robust in functionality and consistent in usage and display. By providing the motivation for using these APIs and frameworks, you will be able to evaluate how they can be used to take your Liferay projects to the next level.

Pattern: Flexible Entity Presentation

Intent

The intent of the Flexible Entity Presentation pattern is to support a dynamic templating mechanism that supports runtime display generation instead of a classic development-time fixed representation, further separating view management from portlet development.

Also Known As

This pattern is known as and implemented via the Application Display Template (ADT) framework in Liferay.

Motivation

The problem with most portlets is that the code used to present custom entities is handled as a development-time concern; the UI specifications define how the entity is shown on the page and the development team delivers a solution to satisfy the requirements.  Any change to specifications during development results in a change request for the development team, and post development the change represents a new development project to implement presentation changes.

The inflexibility of the presentation impacts time to market, delivery cycles and development resource allocation.

The Flexible Entity Presentation pattern's motivation is to support a user-driven mechanism to present custom entities in a dynamic way.

The users and admins section on ADTs from dev.liferay.com starts:

The application display template (ADT) framework allows Liferay administrators to override the default display templates, removing limitations to the way your site’s content is displayed.

ADTs allow the display of an entity to be handled by a dynamic template instead of by static code. Don't get hung up on the word content here; it's not just content as in web content, but more of a generic reference to any HTML content your portlet needs to render.

Liferay identified this motivation when dealing with client requests for product changes to adapt presentation in different ways to satisfy varying requirements. Liferay created and uses the ADT framework extensively in many of the OOTB portlets, from web content through breadcrumbs. By leveraging ADTs, Liferay defines the entities, e.g. a Bookmark, but the presentation can be overridden by an administrator with an ADT to show the details according to their requirements, all without a development change by Liferay or a code customization by the client.

Liferay eats its own dogfood by leveraging the ADT framework, so this is a well tested framework for supporting dynamic presentation of entities.

When you look at many of the core portlets, they now support ADTs to manage their display aspects, since tweaking an ADT is much simpler than creating a JSP fragment bundle, a new custom portlet or some crazy JS/CSS fu in order to effect a presentation change. This flexibility is key for supporting changes in the Liferay UI without extensive code customizations.

Applicability

The use of ADTs applies when the presentation of an entity is subject to change. Since admins will use ADTs to manage how to display the entities, the presentation does not need to be finalized before development starts. When the ADT framework is incorporated in the design out of the gate, flexibility in the presentation is baked into the design and doors are open to any future presentation changes without code development, testing and deployment.

So there are some fairly clear use cases to apply ADTs:

  • The presentation of the custom entities is likely to change.
  • The presentation of the custom entities may need to change based upon context (list view, single view, etc.).
  • The presentation should not be a development-time aspect of the portlet.
  • The project is a Liferay Marketplace application and presentation customization is necessary.

Notice the theme here, the change in presentation.

ADTs would either not apply or would be overkill for a static entity presentation, one that doesn't benefit from presentation flexibility.

Participants

The participants in this pattern are:

  • A custom entity.
  • A custom PortletDisplayTemplateHandler.
  • ADT Resource Portlet Permissions.
  • Portlet Configuration for ADT Selection.
  • Portlet View Leveraging ADTs.

The participants work together with the Liferay ADT framework to support a dynamic presentation for the entity. The implementation details for the participants are covered here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/implementing-application-display-templates.

Collaboration

The custom entity is defined in the Service Builder layer (normally).

The PortletDisplayTemplateHandler implementation is used to feed meta information about the fields and descriptions of the entity to the ADT framework Template Editor UI. The meta information provided will generally be tightly coupled to the custom entity, in that changes to the entity will usually result in changes to the PortletDisplayTemplateHandler implementation.
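
A trimmed sketch of such a handler for a hypothetical Course entity follows (method set and package locations are from the 7.0 tutorials; verify them against your version):

import java.util.Locale;
import java.util.Map;

import com.liferay.portal.kernel.portletdisplaytemplate.BasePortletDisplayTemplateHandler;
import com.liferay.portal.kernel.template.TemplateHandler;
import com.liferay.portal.kernel.template.TemplateVariableGroup;

import org.osgi.service.component.annotations.Component;

// Hypothetical handler; Course and CoursePortletKeys are illustrative.
@Component(
  immediate = true,
  property = "javax.portlet.name=" + CoursePortletKeys.COURSE,
  service = TemplateHandler.class
)
public class CourseDisplayTemplateHandler
  extends BasePortletDisplayTemplateHandler {

  @Override
  public String getClassName() {
    // The entity the templates will render.
    return Course.class.getName();
  }

  @Override
  public String getName(Locale locale) {
    // Label shown in the ADT UI (normally localized).
    return "Course";
  }

  @Override
  public String getResourceName() {
    // The portlet the ADT permissions are checked against.
    return CoursePortletKeys.COURSE;
  }

  @Override
  public Map<String, TemplateVariableGroup> getTemplateVariableGroups(
      long classPK, String language, Locale locale)
    throws Exception {

    // Start from the default variables and expose the entity to the editor.
    Map<String, TemplateVariableGroup> groups =
      super.getTemplateVariableGroups(classPK, language, locale);

    TemplateVariableGroup fields = groups.get("fields");

    fields.addVariable("course", Course.class, "course");

    return groups;
  }
}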

The ADT resource portlet permissions must be enabled for the portlet so administrators will be able to choose the display template and edit display templates for the entity.

The portlet configuration panel is where the administrator will choose between display templates, and the portlet view will leverage Liferay's ADT tag library to inject the rendered template into the portlet view.
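
In the view JSP, the tag library wraps the default markup; a hedged sketch (the displayStyle and displayStyleGroupId values would typically come from the portlet preferences, and the variable names here are illustrative):

<%@ taglib prefix="liferay-ddm" uri="http://liferay.com/tld/ddm" %>

<liferay-ddm:template-renderer
  className="<%= Course.class.getName() %>"
  contextObjects="<%= contextObjects %>"
  displayStyle="<%= displayStyle %>"
  displayStyleGroupId="<%= displayStyleGroupId %>"
  entries="<%= courses %>"
>

  <%-- Fallback markup, rendered when no ADT has been selected. --%>
  <% for (Course course : courses) { %>
    <p><%= course.getName() %></p>
  <% } %>
</liferay-ddm:template-renderer>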

Consequences

By moving to an ADT-based presentation of the entity, the template engine (FreeMarker) will be used to render the view.

The template engine will impose a performance cost in supporting the flexible presentation (especially if someone creates a bad template). Implementors should strike a balance between beneficial flexibility and overuse of the ADT framework.

Sample Usage

For practical examples, consider a portal based around a school.  Some common custom entities would be defined for students, rooms, teachers, courses, books, etc.

Consider how often the presentation of the entities may need to change and weigh that against whether the changes are best handled in code or in template.

A course or a teacher entity would likely benefit from an ADT: the course presentation might need to change as a brochure-like view evolves, and the teacher presentation might change as new details such as accreditation or course history are added.

The students and rooms may not benefit from ADTs if the presentation is going to remain fairly static.  These entities might go through future presentation changes but it may be more acceptable to approach those as development projects that are planned and coordinated.

Known Uses

The best known uses come from Liferay itself. The list of OOTB portlets which leverage ADTs are:

  • Asset Publisher
  • Blogs
  • Breadcrumbs
  • Categories Navigation
  • Documents and Media
  • Language Selector
  • Navigation Menu
  • RSS Publisher
  • Site Map
  • Tags Navigation
  • Web Content Display
  • Wiki

This provides many examples for when to use ADTs and the obvious advantage of ADTs (customized displays without additional coding), and even hints where ADTs may not work well (e.g. the users/orgs control panel, polls, ...).

Conclusion

Well, that's pretty much it for this post. I'd encourage you to go and read the section for styling apps with ADTs as it will help solidify the motivations to incorporate the ADT framework into your design. When you understand how an admin would use ADTs to create a flexible presentation of the Liferay entities, it should help to highlight how you can achieve the same flexibility for your custom assets.

When you're ready to realize these benefits, you can refer to the implementing ADTs page to help with your implementation.

 

Adding Dependencies to JSP Fragment Bundles

Technical Blogs 2017/02/08 by David H Nebinger

Recently I was lamenting how I felt that JSP fragment bundles could not introduce new dependencies and therefore the JSP overrides could really not do much more than reorganize or add/remove already supported elements on the page.

For me, this is like only 5% of the use cases for a JSP override. I am much more likely to need to add new functionality that the original portlet developers didn't need to consider. I need to be able to add new services and use those in the JSP to retrieve entities, and sometimes just do completely different things with the JSP that perhaps were never imagined.

The first time I tried a JSP override like this with a JSP fragment bundle, I was disappointed. My fragment bundle would get to status "Installed" in Gogo, but would go no further because it had unresolved references. It just couldn't get to the resolved stage.

How could I make the next great JSP fragment override bundle if I couldn't access anything outside the original set of services?

My good friend and coworker Milen Dyankov heard my rant and offered the following insight:

According to the spec:

... requirements and capabilities in a fragment bundle never become part of the fragment's Bundle Wiring; they are treated as part of the host's requirements and capabilities when the fragment is attached to that host.

As for providing declarative services in fragments, again the spec is clear:

A Service-Component manifest header specified in a fragment is ignored by SCR. However, XML documents referenced by a bundle's Service-Component manifest header may be contained in attached fragments.

In other words, if your host has Service-Component: OSGI-INF/*.xml then your fragment can put a new XML file in the OSGI-INF folder and it will be processed by SCR.

Now sometimes Milen seems to forget that I'm just a mere mortal and not the OSGi guru he is, so while this was perfectly clear to him, it left me wondering if there was anything here that would be my lever to lift the lid and peek inside the JSP fragment bundle realm.

The remainder of this blog is the result of that epic journey.

Service Component Runtime

The SCR is the Apache Felix implementation of the OSGi Declarative Services specification. It's responsible for handling the service registry and lifecycle management of DS components within the OSGi container, starting/stopping the services as bundles are started/stopped, wiring up @Reference dependencies in DS components, etc.

Since the fragment bundle handling comes from the Apache Felix implementation, it's not really a Liferay component and certainly not one that would lend itself to an override in the normal Liferay sense. Anything we do here to access services in the JSP fragment bundles is going to have to go through supported OSGi mechanisms or we won't get anywhere.

So the key in Milen's quote above is "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments." The rough translation here: we might be able to provide an override XML file for one of the host bundle's components and possibly inject new dependencies. Yes, as a rough translation it really assumes that you know more than you might (and especially more than I did), so let's divert for a second.

Service Component Manifest XML Documents

So the BND tool that we all know and love actually does many, many things for us when it builds a bundle jar. One of those tasks is to generate the service component manifest and all of the XML documents. The contents of all of these files are basically the metadata the SCR will need for dependency resolution, component wiring, etc.

Any time you annotate your Java class with @Component, you are indicating it is a DS service. When BND is processing the annotations, it's going to add an entry to the Service Component Manifest (so the SCR will process the component during bundle start). The Service Component Manifest is the Service-Component key in the bundle's MANIFEST.MF file, and it lists an individual XML file in OSGI-INF, one for each component.

These XML files define the component for the SCR, specifying the java class that implements the component, the service it provides, all reference details and properties for the component.

So if you take any bundle jar you have, expand it and check out the MANIFEST.MF file and look for the Service-Component key. You'll find there's one OSGI-INF/com.example.package.JavaClass.xml file (named for your own package and class) for each component defined in your bundle.

If you open one of the XML files, you can see the structure for a component definition, and it is easy to see how things that you set in the @Component annotation attributes have been mapped into the XML file.
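
For reference, a component XML generally looks something like this (structure per the DS specification; the names here are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
    name="com.example.package.JavaClass">
  <!-- The class that implements the component -->
  <implementation class="com.example.package.JavaClass"/>
  <!-- The service interface(s) the component provides -->
  <service>
    <provide interface="com.example.package.SomeService"/>
  </service>
  <!-- An @Reference dependency mapped to its bind method -->
  <reference name="OtherService"
      interface="com.example.package.OtherService"
      bind="setOtherService"/>
  <!-- Properties from the @Component annotation -->
  <property name="some.property" type="String" value="some value"/>
</scr:component>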

Now that we know about the manifest and XML docs, we can get back to our regularly scheduled program.

Overriding An SCR XML

So remember, we should be able to override one of these files because "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments."

This hints that we cannot add a new file, but we could override an existing one.

So to me, this is the key question - can we create an override XML file that introduces a new dependency? It wouldn't really be bound to anything in the original class (since we can't modify the class), but at least the bundle would have the new dependency and the JSP would be happy.

Well, I actually used all of this newfound knowledge to work up a test and tried it out, but it failed. It didn't make any sense...

Return To The Jedi

"Milen, my SCR XML override isn't working."

"Overrides won't work because the XML files are loaded by the class loader, and the host bundle comes before the fragment bundle so SCR ignores the override.  You can't override the XML, you can only add a new one to the fragment bundle."

"But Milen you said I couldn't add new XML files, only those listed in the Service-Component in the MANIFEST.MF file of the host bundle will be used by SCR during loads."

"Change your Service-Component key to use a wildcard like OSGI-INF/* and SCR will load the ones from the host bundle as well as the fragment bundle. It's considered bad practice, but it would work."

"I can't do that, Milen, I'm doing a JSP fragment bundle on a Liferay host bundle, I can't change the Service-Component manifest value and, if I could, I wouldn't need to do any of this fragment bundling in the first place because I would just apply my change directly in the host bundle and be done with it."

"Well then the SCR XML override isn't going to work. Let's try something else..."

Example Project

After working out a new plan of attack, I needed an example project to test this all out and verify that it was going to work. The example must include a JSP fragment bundle override and introduce a previously unused service. I don't really want to do any more coding than necessary here, so let's pick something out of the portal JSPs and services.

Requirement: On login form, display the current count of membership requests.

Pretty simple, maybe part of some automated membership request handling being added to the portal or trying to show how popular the site is by showing count of how many are waiting to get in.

But it gives us the goal here, we want to access the MemberRequestLocalService inside of the login.jsp page of the login-web host bundle. The service is defined in the com.liferay.invitation.invite.members.api bundle and is not currently connected in any way with the login web module.

Creating The Fragment Bundle

I'll continue my pattern of using blade on the command line, but of course you're free to leverage tools provided by your IDE.

blade create -t fragment -h com.liferay.login.web -H 1.1.4 login-web-fragment

Remember to choose the fragment bundle version from your local portal so you'll override the right one and make OSGi/SCR happy.
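For reference, the host binding lives in the fragment's bnd.bnd as the Fragment-Host header; a minimal sketch matching the blade command above (your generated file may contain more):

Bundle-Name: login-web-fragment
Bundle-Version: 1.0.0
Fragment-Host: com.liferay.login.web;bundle-version="1.1.4"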

Copy in the login.jsp page from the portal source. After the include of init.jsp, add the following lines:

<%@ page import="com.liferay.invitation.invite.members.service.MemberRequestLocalService" %>

<%
  // get the service from the render request attributes
  MemberRequestLocalService memberRequestLocalService = (MemberRequestLocalService)
    renderRequest.getAttribute("MemberRequestLocalService");
	
  // get the current count
  int currentRequestCount = memberRequestLocalService.getMemberRequestsCount();
	
  // display it somewhere on the page...
%>

Very simple. It doesn't actually display anything yet, but that's not the point of this blog.

Now if you build and deploy this guy as-is and then check him in the Gogo shell, you'll see his state is "Installed". This is not good, as it is not where it needs to be for the JSP fragment to work.

Adding The Dependency

So we have to go back to how OSGi handles fragment bundles... When OSGi is loading the fragment, the MANIFEST.MF items from the fragment bundle are effectively merged with those from the host bundle.

For me, that means I have to list my dependency in build.gradle and trust BND will add the right Import-Package declaration to the final MANIFEST.MF file.

Then, when the framework is loading my fragment bundle, my Import-Package from the fragment will be added to the Import-Package of the host bundle and all should be good.

JSP fragment bundles created by blade do not have dependencies listed in the build.gradle file (in fact it is completely empty), so let's add the dependency stanza:

dependencies {
  compile group: "com.liferay", name: "com.liferay.invitation.invite.members.api", version: "2.1.1"
}

We only need to add the dependency that is missing from the host bundle, the one with the service we're going to pull in.

After building, you can unpack the jar and check the MANIFEST.MF file and see that it now has the Import-Package declaration, so if the framework does actually do the merge while loading, we should be in business.

Deploy your new JSP fragment bundle and if you check the bundle status in GoGo, you'll see it is now "Resolved".

Sweet!

Injecting The Reference

Not so fast. If you try to log into your portal, you'll get the "portlet is temporarily unavailable" message and the log file will have a NullPointerException and a big stack trace. We've totally broken the login portlet because login.jsp depends upon the service but it is not set.

If you check the JSP change I shared, I'm pulling the service instance from the render request attributes. But how the heck does it get in there when we cannot change the host bundle to inject it in the first place?

We're going to do this using another OSGi module with a new component that implements the PortletFilter interface, specifically a RenderFilter.

import java.io.IOException;

import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;
import javax.portlet.filter.FilterChain;
import javax.portlet.filter.FilterConfig;
import javax.portlet.filter.PortletFilter;
import javax.portlet.filter.RenderFilter;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.liferay.invitation.invite.members.service.MemberRequestLocalService;
import com.liferay.login.web.constants.LoginPortletKeys;

@Component(
  immediate = true,
  property = {
      "javax.portlet.name=" + LoginPortletKeys.LOGIN,
      "javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN
  },
  service = PortletFilter.class
)
public class LoginRenderFilter implements RenderFilter {
  @Override
  public void doFilter(RenderRequest request, RenderResponse response, FilterChain chain) throws IOException, PortletException {
    // set the request attribute so it is available when the JSP renders
    request.setAttribute("MemberRequestLocalService", _memberRequestLocalService);

    // let the filter chain do its thing
    chain.doFilter(request, response);
  }

  @Override
  public void init(FilterConfig filterConfig) throws PortletException { }

  @Override
  public void destroy() { }

  @Reference(unbind = "-")
  protected void setMemberRequestLocalService(final MemberRequestLocalService memberRequestLocalService) {
    _memberRequestLocalService = memberRequestLocalService;
  }

  private MemberRequestLocalService _memberRequestLocalService;
}

So here we are intercepting the render request using the portlet filter. We inject the service into the request attributes before invoking the filter chain to complete the rendering; that way when the JSP page from the fragment bundle is used, the attribute will be set and ready.

Build and deploy your new component. Once it starts, refresh your browser and try to log in. You should now see the login portlet again. Not that we did anything fancy here, we're just proving that the service reference is not null and is available for the JSP override to use.

Conclusion

So we took a roundabout path to get here, but we've seen how we can create a JSP fragment bundle to override portal JSPs, add a dependency to the fragment bundle that gets merged into the host bundle's Import-Package, and create a portlet filter bundle to inject the service reference into the request attributes so it is available to the JSP page.

Two different bundle jars, but it certainly gets the job done.

Also along the way we learned some things about what the SCR is, how fragment bundles work, as well as some of the internals of our OSGi bundle jars and the role that BND plays in their construction.  Useful information, IMHO, that can help you while learning Liferay 7 CE/Liferay DXP.

This now opens some new paths for you to pursue for your JSP fragment bundles.  Just follow the outline here and you should be good to go.

Find the project code for the blog here: https://github.com/dnebing/jsp-fragment

Building In Upgrade Support

Technical Blogs 2017/02/07 Posted by David H Nebinger

One of the things that I never really used in 6.x was the Liferay upgrade APIs.

Sure, I knew about the Release table and stuff, but it just seemed kind of cumbersome to not only build out your code but also track your release and support an upgrade process on top of all of that. I mean, I'm a busy guy, and once this project is done I'm already behind on the next one.

When you start perusing the Liferay 7 source code, though, one thing you'll notice is that there is upgrade logic all over the place. Pretty much every portlet module includes an upgrade process to support upgrading from version "0.0.0" to version "1.0.0" (this is the upgrade process to change from 6.x to the new 7.x module version).

And you'll even find that some modules include upgrades from versions "1.0.0" to "1.0.1" to support the independent module versioning that was the promise of OSGi.

So now that I'm trying to exclusively build modules, I'm thinking it's an appropriate time to dig into the upgrade APIs and see how they work and how I can incorporate upgrades into my modules.

The New Release

So previously we'd have to manage the Release entity ourselves, but Liferay has graciously taken that over for us. Your bnd.bnd file, where you specify your module version, now becomes the foundation of your Release handling. And just like the portal modules, an absence of a Release record is technically version "0.0.0", so now you can handle first-time deployment stuff too.
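For reference, here's a minimal, hypothetical bnd.bnd; the Bundle-Version header is the version your Release record will track:

Bundle-Name: Example Upgrade Support Module
Bundle-SymbolicName: com.example.myapp.web
Bundle-Version: 1.0.0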

The Upgrade API

Before diving into implementation, let's take a little time to look over some of the classes and interfaces Liferay provides as part of the Upgrade API. We'll start with the classes from the com.liferay.portal.kernel.upgrade package:

  • UpgradeStep - This is the main interface that must be implemented for all upgrade logic. When registering an upgrade, an ordered list of UpgradeSteps is provided and the upgrade process will execute these in order to complete an upgrade.
  • DummyUpgradeStep - The simplest of all concrete implementations of the UpgradeStep interface, this upgrade step does nothing. But it is a useful step for handling new deployments.
  • UpgradeProcess - This is a handy abstract base class to use for all of your upgrade steps. It implements the UpgradeStep interface and has support for database-specific alterations should you need them.
  • Base* - These are abstract base classes for upgrade steps typically used by the portal for managing upgrades from portlet wars to new module-based portlets. For example, the BaseUpgradePortletId class is used to support fixing the portlet ids from older id-based portlet names to the new OSGi portlet ids based on class name. These classes are good foundations if you are building an upgrade process to move your own portlets from wars to bundles or want to handle upgrades from 6.x compatibility to 7.x.
  • util.* - For those wanting to support a database upgrade, the com.liferay.portal.kernel.upgrade.util package contains a bunch of support classes to assist with altering tables, columns, indexes, etc.

Registering The Upgrade

All upgrade definitions need to be registered. That's pretty easy, of course, when one is using OSGi. To register an upgrade, you just need a component that implements the UpgradeStepRegistrator interface.

But first a word about code structure...

So Liferay's recommendation is to keep all of your upgrade code in a java package named upgrade that is part of your portlet web module, at the same level as your portlet package (if you have one).

So if your portlet code is in com.example.myapp.portlet, you're going to have a com.example.myapp.upgrade package.

In here you'll have sub-packages for all upgrade versions supported, so you might have "v1_0_0" and "v1_0_1", etc.  Upgrade step implementations will be in the subpackage for the upgrade level they support.

So now we have enough details to start building out the upgrade definition. Start by updating your build.gradle file to introduce a new dependency:

  compileOnly group: "com.liferay", name: "com.liferay.portal.upgrade", version: "2.3.0"

This pulls in some utility classes we'll be using below.

Let's assume we're building a brand new module and just want to get a placeholder upgrade definition in place. This is quite easily done by adding a single component to our project:

import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

import com.liferay.portal.kernel.upgrade.DummyUpgradeStep;
import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator;
import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator.Registry;

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleUpgradeStepRegistrator implements UpgradeStepRegistrator {
  
  @Activate
  protected void activate(final BundleContext bundleContext) {
    _bundleName = bundleContext.getBundle().getSymbolicName();
  }
  
  @Override
  public void register(Registry registry) {

    // for first time deployments this will start by creating the initial release record
    // with the initial version of 1.0.0.
    // Also use the dummy upgrade step since we're not doing anything in this upgrade.
    registry.register(_bundleName, "0.0.0", "1.0.0", new DummyUpgradeStep());
  }
  
  private String _bundleName;
}

So that's pretty much it.  Including this class in your module will result in it registering a Release with version 1.0.0, and you have nothing else to worry about.

When you're ready to release version 1.1.0 of your component, things get a little more fun.

In your v1_1_0 package you'll create classes that implement the UpgradeStep interface typically by extending the UpgradeProcess abstract base class or perhaps a more appropriate class from the above table. Either way you'll define separate classes to handle different aspects of the upgrade.
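Just to make that concrete, here's a minimal sketch of one such step; the table and column names are hypothetical:

import com.liferay.portal.kernel.upgrade.UpgradeProcess;

public class UpgradeMyTableStep extends UpgradeProcess {

  @Override
  protected void doUpgrade() throws Exception {
    // only add the column if an earlier run hasn't already done it
    if (!hasColumn("MyApp_MyEntity", "score")) {
      runSQL("alter table MyApp_MyEntity add score LONG");
    }
  }
}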

We'd then come back to the UpgradeStepRegistrator implementation to add the upgrade steps by including another registry call:

    registry.register(_bundleName, "1.0.0", "1.1.0", new UpgradeMyTableStep(), new UpgradeMyDataStep(), new UpgradeMyConfigAdmin());

When processing this upgrade definition, the Upgrade service will invoke the upgrade steps in the order provided.  So obviously you should take care to order your steps such that each can succeed given only the steps processed before it, not the ones that run later.

Database Upgrades

So one of the common issues with Service Builder modules is that the tables will be created when you first deploy the module to a new environment, but updates will not be processed. I think we could argue on one side that it is a bug or on the other side that expecting Service Builder to track data model changes is far outside of the tool's responsibility.

I'm not going to argue it either way; we are where we are, and solving from this point is all I'm really worried about.

As I previously stated, the com.liferay.portal.kernel.upgrade.UpgradeProcess is going to be the perfect base class to accommodate a database update.

UpgradeProcess extends com.liferay.portal.kernel.dao.db.BaseDBProcess which brings the following methods:

  • hasTable() - Determines if the listed table exists.
  • hasColumn() - Determines if the table has the listed column.
  • hasColumnType() - Determines if the listed column in the listed table has the provided type.
  • hasRows() - Determines if the listed table has rows (in order to provide logic to migrate data during an upgrade).
  • runSQL() - Runs the given SQL statement against the database.

UpgradeProcess itself has two upgradeTable() methods, both of which add a new table to the database.  The difference between the two: one is simple and will create a table based on the name and a multidimensional array of column detail objects; the second has additional arguments for fixed SQL for the table, indexes, etc.

Additionally UpgradeProcess has a number of inner support classes to facilitate table alterations:

  • AlterColumnName - A class to encapsulate details to change a column name.
  • AlterColumnType - A class to encapsulate details to change a column type.
  • AlterTableAddColumn - A class to encapsulate details to add a new column to a table.
  • AlterTableDropColumn - A class to encapsulate details to drop a column from a table.

Let's write a quick upgrade method to add a column, change another column's name and another column's type.  To facilitate this, our class will extend UpgradeProcess and will need to implement a doUpgrade() method:

public void doUpgrade() throws Exception {
  // create all of the alterables
  Alterable addColumn = new AlterTableAddColumn("COL_NEW");
  Alterable fixColumn = new AlterColumnType("COL_NEW", "LONG");
  Alterable changeName = new AlterColumnName("OLD_COL_NAME", "NEW_COL_NAME");
  Alterable changeType = new AlterColumnType("ENTITY_PK", "LONG");

  // apply the alterations to the MyEntity Service Builder entity.
  alter(MyEntity.class, addColumn, fixColumn, changeName, changeType);
  
  // done
}

So the alterations are based on your ServiceBuilder entity but otherwise you don't have to worry much about SQL to apply these kinds of alterations to your entity's table.

Conclusion

Using just what has been provided here, you can integrate a smooth and automatic upgrade process into your modules, including upgrading your Service Builder's entity backing tables since SB won't do that for you.

Where can you find more details on doing some nitty-gritty upgrade activities? Why, the Liferay source of course.  Here's a fairly complex set of upgrade details to start your review: https://github.com/liferay/liferay-portal/tree/master/modules/apps/knowledge-base/knowledge-base-service/src/main/java/com/liferay/knowledge/base/internal/upgrade

Enjoy!

Postscript

My good friend and coworker Nathan Shaw forwarded me a reference that I think is worth adding here.  Thanks Nathan!

https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/creating-an-upgrade-process-for-your-app

Liferay 7 CE/Liferay DXP Scheduled Tasks

Technical Blogs 2017/02/06 Posted by David H Nebinger

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface and then add the details in the <scheduler-entry /> sections in your liferay-portlet.xml file and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference anywhere on dev.liferay.com, so I thought I'd whip up a quick blog on them.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we schedule a job, we should first discuss the supported StorageTypes. Liferay has three supported StorageTypes:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type, one that you'll typically want to shoot for. This storage type combines two aspects, MEMORY and CLUSTERED. For MEMORY, that means the job information (next run, etc.) is only held in memory and is not persisted anywhere. For CLUSTERED, that means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation discussed here: http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html).

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (i.e. you're running a report to generate a PDF and email, you wouldn't want your 4 node cluster doing the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might want to elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. However, if it is not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to do when you start scaling up your environment is try to figure out why some previously regular tasks are not running as they used to when you had a single server.

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class also implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes are identifying the StorageType to use, they start with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

import com.liferay.portal.kernel.scheduler.SchedulerEntry;
import com.liferay.portal.kernel.scheduler.SchedulerEntryImpl;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl implements SchedulerEntry, StorageTypeAware {

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
    super();

    _schedulerEntry = schedulerEntry;

    // use the same default that Liferay uses.
    _storageType = StorageType.MEMORY_CLUSTERED;
  }

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   * @param storageType
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
    super();

    _schedulerEntry = schedulerEntry;
    _storageType = storageType;
  }

  @Override
  public String getDescription() {
    return _schedulerEntry.getDescription();
  }

  @Override
  public String getEventListenerClass() {
    return _schedulerEntry.getEventListenerClass();
  }

  @Override
  public StorageType getStorageType() {
    return _storageType;
  }

  @Override
  public Trigger getTrigger() {
    return _schedulerEntry.getTrigger();
  }

  public void setDescription(final String description) {
    _schedulerEntry.setDescription(description);
  }
  public void setTrigger(final Trigger trigger) {
    _schedulerEntry.setTrigger(trigger);
  }
  public void setEventListenerClass(final String eventListenerClass) {
    _schedulerEntry.setEventListenerClass(eventListenerClass);
  }
  
  private SchedulerEntryImpl _schedulerEntry;
  private StorageType _storageType;
}

Now you can use this class to wrap a current SchedulerEntryImpl yet include the StorageTypeAware implementation.

Defining The Scheduled Task

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

import java.util.Date;
import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.BaseSchedulerEntryMessageListener;
import com.liferay.portal.kernel.messaging.DestinationNames;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.module.framework.ModuleServiceLifecycle;
import com.liferay.portal.kernel.scheduler.SchedulerEngineHelper;
import com.liferay.portal.kernel.scheduler.SchedulerException;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactory;
import com.liferay.portal.kernel.util.GetterUtil;

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getEventListenerClass();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and that's midnight in the timezone your app server is configured to run in, since cron expressions are always evaluated against the server's timezone).
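And since activate() reads cron.expression from the component properties, you can also change the schedule without touching code via Configuration Admin. A hypothetical override file dropped into osgi/configs could look like this (the file name, e.g. com.example.MyTaskMessageListener.cfg, must match your component's PID):

cron.expression=0 0 2 * * ?

That would move the job to 2 AM, and the @Modified annotation on activate() ensures the job gets rescheduled when the configuration changes.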

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure, there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job running in parallel at all.
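For example, here's a minimal in-JVM guard using a java.util.concurrent.atomic.AtomicBoolean inside the message listener. Note this only prevents overlap within a single node; a cluster would need a shared lock (a database row, for instance):

@Override
protected void doReceive(Message message) throws Exception {
  // if the previous run is still busy, skip this fire time entirely
  if (!_running.compareAndSet(false, true)) {
    return;
  }

  try {
    // ... do the real work here ...
  }
  finally {
    // always release the guard, even if the work throws
    _running.set(false);
  }
}

private final AtomicBoolean _running = new AtomicBoolean(false);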

Just something to keep in mind...

Conclusion

So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

Liferay/OSGi Annotations - What they are and when to use them

Technical Blogs 2017/02/01 Posted by David H Nebinger

When you start reviewing Liferay 7 CE/Liferay DXP code, you run into a lot of annotations in a lot of different ways.  They can all seem kind of overwhelming when you first happen upon them, so I thought I'd whip up a little reference guide, kind of explaining what the annotations are for and when you might need to use them in your OSGi code.

So let's dive right in...

@Component

So in the OSGi world this is the all-important "Declarative Services" annotation defining a service implementation.  DS is an aspect of OSGi for declaring a service dynamically, and it has a slew of plumbing in place to allow other components to get wired to the component.

There are three primary attributes that you'll find for this annotation:

  • immediate - Often set to true, this will ensure the component is started right away and not wait for a reference wiring or lazy startup.
  • property - Used to pass in a set of OSGi properties to bind to the component.  The component can see the properties, but more importantly other components will be able to see the properties too.  These properties help to configure the component but also are used to support filtering of components.
  • service - Defines the service that the component implements.  Sometimes this is optional, but often it is mandatory to avoid ambiguity on the service the component wants to advertise.  The service listed is often an interface, but you can also use a concrete class for the service.

When are you going to use it?  Whenever you create a component that you want or need to publish into the OSGi container.  Not all of your classes need to be components.  You'll declare a component when code needs to plug into the Liferay environment (i.e. add a product nav item, define an MVC command handler, override a Liferay component) or to plug into your own extension framework (see my recent blog on building a healthcheck system).

@Reference

This is the counterpart to the @Component annotation.  @Reference is used to get OSGi to inject a component reference into your component.  This is a key thing here: since OSGi is doing the injection, it is only going to work on an OSGi @Component class.  @Reference annotations are ignored in non-components, and in fact they are also ignored when declared in super classes.  Any injected references you need must be declared in the @Component class itself.

This is, of course, fun when you want to define a base class with a number of injected services; the base class does not get the @Component annotation (because it is not complete), and @Reference annotations are ignored in non-component classes, so the injection will never occur.  You end up copying all of the setters and @Reference annotations into all of the concrete subclasses and boy, does that get tedious.  But it is necessary and something to keep in mind.

Probably the most common attribute you're going to see here is the "unbind" attribute, and you'll often find it in the form of @Reference(unbind = "-") on a setter method. When you use a setter method with @Reference, OSGi will invoke the setter with the component to use, but the unbind attribute indicates that there is no method to call when the component is unbinding, so basically you're saying you don't handle components disappearing behind your back.  For the most part this is not a problem, server starts up, OSGi binds the component in and you use it happily until the system shuts down.

Another attribute you'll see here is target. Target is used as a filter mechanism; remember the properties covered in @Component? With the target attribute, you specify a query that identifies a more specific instance of a component that you'd like to receive.  Here's one example:

@Reference(
  target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
  unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
  _panelApp = panelApp;
}

The code here wants to be given an instance of a PanelApp component, but it's looking specifically for the PanelApp component tied to the notifications portlet.  Any other PanelApp component won't match the filter and won't be applied.

There are some attributes that you will sometimes find here that are pretty important, so I'm going to go into some details on those.

The first is the cardinality attribute.  The default value is ReferenceCardinality.MANDATORY, but other values are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. The meanings of these are:

  • MANDATORY - The reference must be available and injected before this component will start.
  • OPTIONAL - The reference is not required for the component to start, and the component will function w/o a component assignment.
  • MULTIPLE - Multiple resources may satisfy the reference and the component will take all of them; like OPTIONAL, the reference is not needed for the component to start.
  • AT_LEAST_ONE - Multiple resources may satisfy the reference and the component will take all of them, but at least one is mandatory for the component to start.

The multiple options allow you to get multiple calls with references that match.  This really only makes sense if you are using the @Reference annotation on a setter method and, in the body of the method, are adding to a list or array.  An alternative to this kind of thing would be to use a ServiceTracker so you wouldn't have to manage the list yourself.

The optional options allow your component to start without an assigned reference.  This kind of thing can be useful if you have a circular reference issue: A references B which references C which references A.  If all three use MANDATORY, none will start because the references cannot be satisfied (only started components can be assigned as a reference).  You break the circle by having one component treat the reference as optional; then they will all be able to start and the references will be resolved.

The next important @Reference attribute is the policy.  Policy can be either ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC.  The meanings of these are:

  • STATIC - The component will only be started when there is an assigned reference, and will not be notified of alternative services as they become available.
  • DYNAMIC - The component will start whether references are available or not, and the component will accept new references as they become available.

The reference policy controls what happens after your component starts when new reference options become available.  For STATIC, new reference options are ignored; for DYNAMIC, your component is willing to change.

Along with the policy, another important @Reference attribute is the policyOption.  This attribute can be either ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY.  The meanings of these are:

  • RELUCTANT - For single reference cardinality, new reference potentials that become available will be ignored.  For multiple reference cardinality, new reference potentials will be bound.
  • GREEDY - As new reference potentials become available, the component will bind to them.

Whew, lots of options here, so let's talk about common groupings.

First is the default: ReferenceCardinality.MANDATORY, ReferencePolicy.STATIC and ReferencePolicyOption.RELUCTANT.  This summarizes down to: your component must have one and only one reference service to start, and regardless of new services that are started, your component is going to ignore them.  These are really good and normal defaults and promote stability for your component.

Another common grouping you'll find in the Liferay source is ReferenceCardinality.OPTIONAL or MULTIPLE, ReferencePolicy.DYNAMIC and ReferencePolicyOption.GREEDY.  In this configuration, the component will function with or without reference service(s), but the component allows for changing/adding references on the fly and wants to bind to new references when they are available.
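Here's a minimal sketch of that second grouping in action, collecting every matching service into a thread-safe list (MyPlugin is a hypothetical service interface, not something from the Liferay source):

@Reference(
  cardinality = ReferenceCardinality.MULTIPLE,
  policy = ReferencePolicy.DYNAMIC,
  policyOption = ReferencePolicyOption.GREEDY,
  unbind = "removePlugin"
)
protected void addPlugin(MyPlugin plugin) {
  _plugins.add(plugin);
}

protected void removePlugin(MyPlugin plugin) {
  _plugins.remove(plugin);
}

// CopyOnWriteArrayList because binds and unbinds can happen at any time
private final List<MyPlugin> _plugins = new CopyOnWriteArrayList<>();

Every MyPlugin service that is available, or later becomes available, gets added to the list; services that go away get removed.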

Other combinations are possible, but you need to understand impacts to your component.  After all, when you declare a reference, you're declaring that you need some service(s) to make your component complete.  Consider how your component can react when there are no services, or what happens if your component stops because dependent service(s) are not available. Consider your perfect world scenario as well as a chaotic nightmare of redeployments, uninstalls, service gaps and identify how your component can weather the chaos.  If you can survive the chaos situation, you should be fine in the perfect world scenario.

Finally, when do you use the @Reference annotation?  When you need service(s) injected into your component from the OSGi environment.  These injections can come from your own module or from other modules in the OSGi container.  Remember that @Reference only works in OSGi components, but you can turn a class into a component with the addition of the @Component annotation.

@BeanReference

This is a Liferay annotation used to inject a reference to a Spring bean from the Liferay core.

@ServiceReference

This is a Liferay annotation used to inject a reference from a Spring Extender module bean.

Wait! Three Reference Annotations? Which should I use?

So there they are, the three different types of reference annotations.  Rule of thumb, most of the time you're going to want to just stick with the @Reference annotation.  The Liferay core Spring beans and Spring Extender module beans are also exposed as OSGi components, so @Reference should work most of the time.

If your @Reference isn't getting injected or is null, that will be a sign that you should use one of the other reference annotations.  Here your choice is easy: if the bean is from the Liferay core, use @BeanReference, but if it is from a Spring Extender module, use the @ServiceReference annotation instead.  Note that both the bean and service annotations will require your component to use the Spring Extender also.  For setting this up, check out any of your ServiceBuilder service modules to see how to update the build.gradle and bnd.bnd files, etc.

@Activate

The @Activate annotation is OSGi's equivalent to Spring's InitializingBean interface.  It declares a method that will be invoked after the component has started.

In the Liferay source, you'll find it used with three primary method signatures:

@Activate
protected void activate() {
  ...
}

@Activate
protected void activate(Map<String, Object> properties) {
  ...
}

@Activate
protected void activate(BundleContext bundleContext, Map<String, Object> properties) {
  ...
}

There are other method signatures too, just search the Liferay source for @Activate and you'll find all of the different variations. Except for the no-argument activate method, they all depend on values injected by OSGi.  Note that the properties map is actually your properties from OSGi's Configuration Admin service.

When should you use @Activate? Whenever you need to complete some initialization tasks after the component is started but before it is used.  I've used it, for example, to set up and schedule Quartz jobs, verify database entities, etc.

@Deactivate

The @Deactivate annotation is the inverse of the @Activate annotation, it identifies a method that will be invoked when the component is being deactivated.

@Modified

The @Modified annotation marks the method that will be invoked when the component is modified, typically meaning that its Configuration Admin properties were changed.  In Liferay code, the @Modified annotation is typically bound to the same method as the @Activate annotation, so the same method handles both activation and modification.

@ProviderType

The @ProviderType comes from BND and is generally considered a complex concern to wrap your head around.  Long story greatly over-simplified, the @ProviderType is used by BND to define the version ranges assigned in the OSGi manifest in implementors and tries to restrict the range to a narrow version difference.

The idea here is to ensure that when an interface changes, the narrow version range on implementors would force implementors to update to match the new version on the interface.

When to use @ProviderType? Well, really you don't need to. You'll see this annotation scattered all through your ServiceBuilder-generated code. It's included in this list not because you need to do it, but because you'll see it and likely wonder why it is there.

@ImplementationClassName

This is a Liferay annotation for ServiceBuilder entity interfaces. It defines the class from the service module that implements the interface.

This isn't an annotation you'll need to use yourself, but at least you'll know why it's there.

@Transactional

This is another Liferay annotation bound to ServiceBuilder service interfaces. It defines the transaction requirements for the service methods.

This is another annotation you won't be expected to use.

@Indexable

The @Indexable annotation is used to decorate a method which should result in an index update, typically tied to ServiceBuilder methods that add, update or delete entities.

You use the @Indexable annotation on your service implementation methods that add, update or delete indexed entities.  You'll know if your entities are indexed if you have an associated com.liferay.portal.kernel.search.Indexer implementation for your entity.

@SystemEvent

The @SystemEvent annotation is tied to ServiceBuilder generated code which may result in system events.  System events work in concert with staging and the LAR export/import process.  For example, when a journal article is deleted, this generates a SystemEvent record.  In a staging environment, when the "Publish to Live" occurs, the delete SystemEvent ensures that the corresponding journal article from live is also deleted.

When would you use the @SystemEvent annotation? Honestly I'm not sure. With my 10 years of experience, I've never had to generate SystemEvent records or modify the publication or LAR process.  If anyone out there has had to use or modify an @SystemEvent annotation, I'd love to hear about your use case.

@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin.  The @Meta annotations from the BND project allow BND to generate the file based on the annotations used in the configuration interfaces.

Important Note: In order to use the @Meta annotations, you must add the following line to your bnd.bnd file:
-metatype: *
If you fail to add this, your @Meta annotations will not be used when generating the XML configuration file.

@Meta.OCD

This is the annotation for the "Object Class Definition" aspect, the container for the configuration details.  This annotation is used on the interface level to provide the id, name and localization details for the class definition.

When do you use this annotation? When you are defining a Configuration Admin interface that will have a panel in the System Settings control panel to configure the component.

Note that the @Meta.OCD attributes include localization settings.  This allows you to use your resource bundle to localize the configuration name, the field level details and the @ExtendedObjectClassDefinition category.

@Meta.AD

This is the annotation for the "Attribute Definition" aspect, the field level annotation to define the specification for the configuration element. The annotation is used to provide the ID, name, description, default value and other details for the field.

When do you use this annotation? To provide details about the field definition that will control how it is rendered within the System Settings configuration panel.

@ExtendedObjectClassDefinition

This is a Liferay annotation to define the category for the configuration (to identify the tab in the System Settings control panel where the configuration will be) and the scope for the configuration.

Scope can be one of the following:

  • SYSTEM - Global configuration for the entire system, will only be one configuration instance shared system wide.
  • COMPANY - Company-level configuration that will allow one configuration instance per company in the portal.
  • GROUP - Group-level (site) configuration that allows for site-level configuration instances.
  • PORTLET_INSTANCE - This is akin to portlet instance preferences for scope, there will be a separate configuration instance per portlet instance.

When will you use this annotation? Every time you use the @Meta.OCD annotation, you're going to use the @ExtendedObjectClassDefinition annotation to at least define the tab the configuration will be added to.
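Putting the three together, here's a minimal sketch of a hypothetical configuration interface (the id, names and category are all illustrative):

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

@ExtendedObjectClassDefinition(
  category = "my-category",
  scope = ExtendedObjectClassDefinition.Scope.SYSTEM
)
@Meta.OCD(
  id = "com.example.configuration.ExampleConfiguration",
  localization = "content/Language",
  name = "example-configuration-name"
)
public interface ExampleConfiguration {

  @Meta.AD(
    deflt = "100",
    description = "max-items-description",
    name = "max-items",
    required = false
  )
  public int maxItems();
}

And remember the -metatype: * line in bnd.bnd from above; without it, no metatype XML gets generated for this interface.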

@OSGiBeanProperties

This is a Liferay annotation used to define the OSGi component properties used to register a Spring bean as an OSGi component. You'll find this used often in ServiceBuilder modules to expose the Spring beans into the OSGi container. Remember that ServiceBuilder is still Spring (and SpringExtender) based, so this annotation exposes those Spring beans as OSGi components.

When would you use this annotation? If you are using Spring Extender to use Spring within your module and you want to expose the Spring beans into OSGi so other modules can use the beans, you'll want to use this annotation.

I'm leaving a lot of details out of this section because the code for this annotation is extensively javadoced. Check it out: https://github.com/liferay/liferay-portal/blob/master/portal-kernel/src/com/liferay/portal/kernel/spring/osgi/OSGiBeanProperties.java

Conclusion

So that's like all of the annotations I've encountered so far in Liferay 7 CE / Liferay DXP. Hopefully these details will help you in your Liferay development efforts.

Find an annotation I've missed or want some more details on those I've included? Just ask.

Building an Extensible Health Check

Technical Blogs 2017/02/01 Posted by David H Nebinger

Alt Title: Cool things you can do with OSGi

Introduction

So one thing that many organizations like to stand up in their Liferay environments is a "health check".  The goal is to provide a simple URL that monitoring systems can invoke to verify servers are functioning correctly.  The monitoring systems will review the time it takes to render the health check page and examine the contents to compare against known, expected results.  Should the page take too long to render or fail to return the expected result, the monitoring system will begin to alert operations staff.

The goal here is to allow operations to be proactive in resolving outage situations rather than being reactive when a client or supervisor calls in to see what is wrong with the site.

Now I'm not going to deliver a complete working health check system here (sorry in advance if you're disappointed).

What I am going to do is use this as an excuse to show how you can leverage some OSGi stuff to build out Liferay things that you really couldn't have easily done before.

Basically I'm going to build out an extensible health check system which exposes a simple URL and generates a simple HTML table listing health check sensors and their status indicators - the words GREEN, YELLOW and RED.  In case it isn't clear, GREEN is healthy, YELLOW means there are non-fatal issues, and RED means something is drastically wrong.

Extensible is the key word in the previous paragraph.  I don't want the piece rendering the HTML to have to know about all of the registered sensors.  As a developer, I want to be able to create new sensors as new systems are integrated into Liferay, etc.  I don't want to have to know about every possible sensor I'm ever going to create and deploy up front; I'll worry about adding new sensors as the need arises.

Defining The Sensor

So our health check system is going to be comprised of various sensors.  Our plan here is to follow the Unix concept of creating small, concise sensors that are each great at taking an individual sensor reading, rather than one really big complicated sensor.

So to do this we're going to need to define our sensor interface:

public interface Sensor {
  public static final String STATUS_GREEN = "GREEN";
  public static final String STATUS_RED = "RED";
  public static final String STATUS_YELLOW = "YELLOW";

  /**
   * getRunSortOrder: Returns the order that the sensor should run.  Lower numbers
   * run before higher numbers.  When two sensors have the same run sort order, they
   * are subsequently ordered by name.
   * @return int The run sort order, lower numbers run before higher numbers.
   */
  public int getRunSortOrder();

  /**
   * getName: Returns the name of the sensor.  The name is also displayed in the HTML
   * for the health check report, so using human-readable names is recommended.
   * @return String The sensor display name.
   */
  public String getName();

  /**
   * getStatus: This is the meat of the sensor, this method is called to actually take
   * a sensor reading and return one of the status codes listed above.
   * @return String The sensor status.
   */
  public String getStatus();
}

Pretty simple, huh?  We accommodate the sorting of the sensors so we have control over the test order, we support providing a display name for the HTML output, and we provide the method for actually taking the sensor status reading.

That's all we need to get our extensible healthcheck system started.  Now that we have the sensor interface, let's build some real sensors.

Building Sensors

Obviously we are going to be writing classes that implement the Sensor interface.  The fun part for us is that we're going to take advantage of OSGi for all of our sensor registration, bundling, etc.

So the first option we have with the sensors is whether to combine them in one module or build them as separate modules.  The truth is we really don't care.  You can stick with one module or separate modules.  You could mix things up and create multiple modules that each have multiple sensors.  You can include your sensor for your portlet directly in that module to keep it close to what the sensor is testing.  It's entirely up to you.

Our only limitations are that we have a dependency on the Healthcheck API module and our components have to implement the interface and declare themselves with the @Component annotation.

So for our first sensor, let's look at the JVM memory.  Our sensor is going to look at the % of memory used, we'll return GREEN if 60% or less is used, YELLOW if 61-80% and RED if 81% or more is used.  We'll create this guy as a separate module, too.

Our memory sensor class is:

@Component(immediate = true, service = Sensor.class)
public class MemorySensor implements Sensor {
  public static final String NAME = "JVM Memory";

  @Override
  public int getRunSortOrder() {
    // This can run at any time, it's not dependent on others.
    return 5;
  }

  @Override
  public String getName() {
    return NAME;
  }

  @Override
  public String getStatus() {
    // need the percent used
    int pct = getPercentUsed();
    
    // if we are 60% or less, we are green.
    if (pct <= 60) {
      return STATUS_GREEN;
    }
    // if we are 61-80%, we are yellow
    if (pct <= 80) {
      return STATUS_YELLOW;
    }
    
    // if we are above 80%, we are red.
    return STATUS_RED;
  }

  protected double getTotalMemory() {
    double mem = Runtime.getRuntime().totalMemory();

    return mem;
  }

  protected double getFreeMemory() {
    double mem = Runtime.getRuntime().freeMemory();

    return mem;
  }

  protected double getUsedMemory() {
    return getTotalMemory() - getFreeMemory();
  }

  protected int getPercentUsed() {
    double used = getUsedMemory();
    double pct = (used / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }
  
  protected int getPercentAvailable() {
    double pct = (getFreeMemory() / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }
}

Not very fancy.  There are obvious enhancements we could pursue with this.  We could add a configuration instance so we could define the memory thresholds in the control panel rather than using hard coded values.  We could refine the measurement to account for GC.  Whatever.  The point is we have a sensor which is responsible for getting the status and returning the status string.

Now imagine what you can do with these sensors... You can add a sensor for accessing your database(s).  You can check that LDAP is reachable.  If you use external web services, you could call them to ensure they are reachable (even better if they, too, have some sort of health check facility, your health check can incorporate their health check).
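As a taste, here's a minimal sketch of a database sensor (hypothetical; it borrows the portal's DataAccess for a pooled connection, and the validation query is database-specific, so adjust it for your DB):

import java.sql.Connection;
import java.sql.Statement;

import org.osgi.service.component.annotations.Component;

import com.liferay.portal.kernel.dao.jdbc.DataAccess;

@Component(immediate = true, service = Sensor.class)
public class DatabaseSensor implements Sensor {
  public static final String NAME = "Portal Database";

  @Override
  public int getRunSortOrder() {
    // independent of other sensors
    return 10;
  }

  @Override
  public String getName() {
    return NAME;
  }

  @Override
  public String getStatus() {
    // borrow a pooled connection and run a trivial validation query
    try (Connection connection = DataAccess.getConnection();
      Statement statement = connection.createStatement()) {

      statement.execute("SELECT 1");

      return STATUS_GREEN;
    }
    catch (Exception e) {
      return STATUS_RED;
    }
  }
}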

Your sensor options are only limited to what you are capable of creating.

I'd recommend keeping the sensors simple and fast, you don't want a long running sensor chewing up time/cpu just to get some idea of server health.

Building The Sensor Manager

The sensor manager is another key part of our extensible healthcheck system.

The sensor manager is going to use a ServiceTracker so it knows all the sensors that are available and gracefully handles the addition and removal of new Sensor components.  Here's the SensorManager:

@Component(immediate = true, service = SensorManager.class)
public class SensorManager {

  /**
   * getHealthStatus: Returns the map of current health statuses.
   * @return Map map of statuses, key is the sensor name and value is the sensor status.
   */
  public Map<String,String> getHealthStatus() {
    StopWatch totalWatch = null;

    // time the total health check
    if (_log.isDebugEnabled()) {
      totalWatch = new StopWatch();

      totalWatch.start();
    }

    // grab the list of sensors from our service tracker
    List<Sensor> sensors = _serviceTracker.getSortedServices();

    // create a map to hold the sensor status results
    Map<String,String> statuses = new HashMap<>();

    // if we have at least one sensor
    if ((sensors != null) && (! sensors.isEmpty())) {
      String status;
      StopWatch sensorWatch = null;

      // create a stopwatch to time the sensors
      if (_log.isDebugEnabled()) {
        sensorWatch = new StopWatch();
      }

      // for each registered sensor
      for (Sensor sensor : sensors) {
        // reset the stopwatch for the run
        if (_log.isDebugEnabled()) {
          sensorWatch.reset();
          sensorWatch.start();
        }

        // get the status from the sensor
        status = sensor.getStatus();

        // add the sensor and status to the map
        statuses.put(sensor.getName(), status);

        // report sensor run time
        if (_log.isDebugEnabled()) {
          sensorWatch.stop();

          _log.debug("Sensor [" + sensor.getName() + "] run time: " + DurationFormatUtils.formatDurationWords(sensorWatch.getTime(), true, true));
        }
      }
    }

    // report health check run time
    if (_log.isDebugEnabled()) {
      totalWatch.stop();

      _log.debug("Health check run time: " + DurationFormatUtils.formatDurationWords(totalWatch.getTime(), true, true));
    }

    // return the status map
    return statuses;
  }

  @Activate
  protected void activate(BundleContext bundleContext, Map<String, Object> properties) {

    // if we have a current service tracker (likely not), let's close it.
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }

    // create a new sorting service tracker.
    _serviceTracker = new SortingServiceTracker<Sensor>(bundleContext, Sensor.class.getName(), new Comparator<Sensor>() {

      @Override
      public int compare(Sensor o1, Sensor o2) {
        // compare method to sort primarily on run order and secondarily on name.
        if ((o1 == null) && (o2 == null)) return 0;
        if (o1 == null) return -1;
        if (o2 == null) return 1;

        if (o1.getRunSortOrder() != o2.getRunSortOrder()) {
          return o1.getRunSortOrder() - o2.getRunSortOrder();
        }

        return o1.getName().compareTo(o2.getName());
      }
    });
  }

  @Deactivate
  protected void deactivate() {
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }
  }

  private SortingServiceTracker<Sensor> _serviceTracker;
  private static final Log _log = LogFactoryUtil.getLog(SensorManager.class);
}

The SensorManager holds the ServiceTracker instance used to retrieve the list of registered Sensor services, and it uses that list to grab each sensor's status.  The getHealthStatus() method hides all of the implementation details while exposing the ability to grab the map of sensor status details.

Conclusion

Yep, that's right, this is the conclusion.  That's really all there is to see here.

I mean, there is more: you need a portlet to serve up the health status on demand (a serve resource request can work fine here), and just displaying the health status in the portlet view will allow admins to see the health whenever they log into the portal.  And you can add a servlet so external monitoring systems can hit your status page using /o/healthcheck/status (my checked-in project supports this).
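
If you go the servlet route, a minimal sketch using the OSGi HTTP whiteboard could look something like the following (the /o prefix comes from the portal's module framework; the actual project's implementation may differ):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
  immediate = true,
  property = {
    "osgi.http.whiteboard.servlet.name=HealthCheckServlet",
    "osgi.http.whiteboard.servlet.pattern=/healthcheck/status"
  },
  service = Servlet.class
)
public class HealthCheckServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws IOException {

    // grab the current statuses from the sensor manager
    Map<String, String> statuses = _sensorManager.getHealthStatus();

    response.setContentType("text/plain");

    PrintWriter writer = response.getWriter();

    for (Map.Entry<String, String> entry : statuses.entrySet()) {
      writer.println(entry.getKey() + ": " + entry.getValue());
    }
  }

  @Reference
  private SensorManager _sensorManager;

  private static final long serialVersionUID = 1L;
}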

But yeah, that's not really important with respect to showing cool OSGi stuff.

Ideally this becomes a platform for you to build out an expandable health check system in your own environment.  Pull down the project, start writing your own Sensor implementations and check out the results.

If you build some cool sensors you want to share, send me a PR and I'll add them to the project.

In fact, let's consider this to be like a community project.  If you use it and find issues, feel free to submit PRs with fixes.  If you build some Sensors, submit a PR with them.  If you come up with a cool enhancement, send a PR.  I'll do some minimal verification and merge everything in.

Here's the github project link to get you started: https://github.com/dnebing/healthcheck

Alt Conclusion

Just like there's an alternate title, there's an alternate conclusion.

The alternate conclusion here is that there are some really cool things you can do when you embrace OSGi in Liferay, pretty much the way Liferay has embraced OSGi.

OSGi offers a way to build expandable systems that are very decoupled.  If you need this kind of expansion, focus on separating your API from your implementations, then use a ServiceTracker to access all available instances.

Liferay uses this kind of thing extensively.  The product menu is extensible this way, the My Account pages are extensible in this way, heck even the LiferayMVC portlet implementations using MVCActionCommand and MVCResourceCommand interfaces rely on the power of OSGi to handle the dynamic services.

LiferayMVC is actually an interesting example; there, instead of managing a service tracker list, they manage a service tracker map where the key is the MVC command.  So the LiferayMVC portlet uses the incoming MVC command to get the service instance based on the command and passes the control to it for processing.  This makes the portlet more extensible because anyone can add a new command or override an existing command (using service ranking) and the original portlet module doesn't need to be touched at all.
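
If you want to try the map-based pattern yourself, Liferay's service tracker collections make it fairly painless.  Here's a rough sketch of the pattern (the exact wiring inside the portal's LiferayMVC implementation differs; the CommandRegistry class here is purely illustrative):

import com.liferay.osgi.service.tracker.collections.map.ServiceTrackerMap;
import com.liferay.osgi.service.tracker.collections.map.ServiceTrackerMapFactory;
import com.liferay.portal.kernel.portlet.bridges.mvc.MVCActionCommand;

import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

@Component(immediate = true, service = CommandRegistry.class)
public class CommandRegistry {

  @Activate
  protected void activate(BundleContext bundleContext) {
    // key each MVCActionCommand service by its mvc.command.name property
    _commands = ServiceTrackerMapFactory.openSingleValueMap(
      bundleContext, MVCActionCommand.class, "mvc.command.name");
  }

  @Deactivate
  protected void deactivate() {
    _commands.close();
  }

  public MVCActionCommand getCommand(String mvcCommandName) {
    // route the incoming command name to the registered service instance
    return _commands.getService(mvcCommandName);
  }

  private ServiceTrackerMap<String, MVCActionCommand> _commands;
}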

Where can you find examples of things you can do leveraging OSGi concepts?  The Liferay source, of course.  Liferay eats its own dog food, and they do a lot more with OSGi than I've ever needed to.  If you have some idea of a thing to do that benefits from an OSGi implementation but need an example of how to do it, find something in the Liferay source that has a similar implementation and see how they did it.

Liferay 7 Notifications

Technical Blogs 2017/01/18 投稿者 David H Nebinger

So in a recent project I've been building, I reached a point where I believed it would benefit from being able to issue user notifications.

For those that are not aware, Liferay has a built-in system for subscribing and notifications.  Using these APIs, you can quickly add notifications to your projects.

Foundation

Before diving into the implementation, let's talk about the foundation of the subscription and notification APIs.

The first thing you need to identify is your events.  Events are what users will subscribe to, and when events occur you want to issue notifications.  Knowing your list of events going in will make things easier as you'll know when you'll need to send notifications.

For this blog post we're going to be implementing an example, so I'm going to build a system to issue a notification when a user with the Administrator role logs in.  We all know that for effective site security no one should be using Administrator credentials because of the unlimited access Administrators have, so getting a notification when an Administrator logs in can be a good security test.

So we know the event, "Administrator has logged in", the rest of the blog will build out support for subscriptions and notifications.

One thing we will need to pull all of this together is a portlet module (since we have some interface requirements).  Let's start by building a new portlet module.  You can use your IDE, but to be IDE-neutral I'm going to stick with blade commands:

blade create -t mvcportlet -p com.dnebinger.admin.notification admin-notification

This will give us a new Liferay MVC portlet using our desired package and a simple project directory.

Subscribing To The Event

The first thing that users need to be able to do is to subscribe to your events.  Some portal examples are blogs (where a user can subscribe to all blogs or an individual blog for changes).  So the challenge for you is to identify where a user would subscribe to your event.  In some cases you would display the subscription option right on the page (i.e. blogs and forum threads); in other cases you might want to move it to a configuration panel.

For this portlet there is no real UI work; we're just going to have a JSP with a checkbox for subscribe/unsubscribe.  The important part is the Liferay subscription APIs we're going to use.

Subscription is handled by the com.liferay.portal.kernel.service.SubscriptionLocalService service.  When you're subscribing, you'll be using the addSubscription() method, and when you're unsubscribing you're going to use the deleteSubscription() method.

Arguments for the calls are:

  • userId: The user who is subscribing/unsubscribing.
  • groupId: (for adds) the group the user is subscribing to (for 'containers' like a doc lib folder or forum category).
  • className: The name of the class the user wants to monitor for changes.
  • pkId: The primary key for the object to (un)subscribe to.

So normally you'll be building subscription into your Service Builder entities, so it is common practice to put subscribe and unsubscribe methods into the service interface to combine the subscription features with the data access, as in the sketch below.
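
For example, entity-level helpers on a Service Builder service implementation might look like this sketch (MyEntry is a hypothetical entity, and subscriptionLocalService is assumed to be an injected service reference):

// A sketch of entity-level subscription helpers; MyEntry is a hypothetical
// Service Builder entity and subscriptionLocalService an injected reference.
public void subscribeEntry(long userId, long groupId, long entryId)
  throws PortalException {

  subscriptionLocalService.addSubscription(
    userId, groupId, MyEntry.class.getName(), entryId);
}

public void unsubscribeEntry(long userId, long entryId)
  throws PortalException {

  subscriptionLocalService.deleteSubscription(
    userId, MyEntry.class.getName(), entryId);
}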

For this project, we don't really have an entity or a data access layer, so we're just going to handle subscribe/unsubscribe directly in our action command handler.  Also we don't really have a PK id so we'll just stick with an id of 0 and the portlet class as the class name.

@Component(
  immediate = true,
  property = {
    "javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION_PORTLET_KEY,
    "mvc.command.name=/update_subscription"
  },
  service = MVCActionCommand.class
)
public class SubscribeMVCActionCommand extends BaseMVCActionCommand {
  @Override
  protected void doProcessAction(ActionRequest actionRequest, ActionResponse actionResponse) throws Exception {
    String cmd = ParamUtil.getString(actionRequest, Constants.CMD);

    if (Validator.isNull(cmd)) {
      // no command given, nothing for us to do.
      return;
    }

    long userId = PortalUtil.getUserId(actionRequest);

    if (Constants.SUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.addSubscription(userId, 0, AdminNotificationPortlet.class.getName(), 0);
    } else if (Constants.UNSUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.deleteSubscription(userId, AdminNotificationPortlet.class.getName(), 0);
    }
  }

  @Reference(unbind = "-")
  protected void setSubscriptionLocalService(final SubscriptionLocalService subscriptionLocalService) {
    _subscriptionLocalService = subscriptionLocalService;
  }

  private SubscriptionLocalService _subscriptionLocalService;
}

User Notification Preferences

Users can manage their notification preferences separately from portlets which issue notifications.  When you go to the "My Account" area of the side bar, there's a "Notifications" option.  When you click this link, you will normally see your list of notifications.  But if you click on the dot menu in the upper right corner, you can choose "Configuration" to see all of the magic.

This page has a collapsible area for each registered notifying portlet, and within each area is a line item for a type of notification and sliders to receive notifications by email and/or website (assuming the portlet has indicated that it supports both types of notifications).  In the future Liferay or your team might add more notification methods (i.e. SMS or other), and for those cases the portlet just needs to indicate that it also supports the notification type.

So how do we get our collapsible panel registered to appear on this page?  Well, through the magic of OSGi DS service annotations, of course!

There are two types of classes that we need to implement.  The first extends the com.liferay.portal.kernel.notifications.UserNotificationDefinition class.  As the class name suggests, this class provides the definition of the type of notifications the portlet can send and the notification types it supports.

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationDefinition.class
)
public class AdminLoginUserNotificationDefinition extends UserNotificationDefinition {
  public AdminLoginUserNotificationDefinition() {
    // pass in our portlet key, 0 for a class name id (we don't need one here),
    // our notification type, and finally the resource bundle key for the
    // message the user sees.
    super(AdminNotificationPortletKeys.ADMIN_NOTIFICATION, 0,
      AdminNotificationType.NOTIFICATION_TYPE_ADMINISTRATOR_LOGIN,
      "receive-a-notification-when-an-admin-logs-in");

    // add a notification type for each sort of notification that we want to support.
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "email", UserNotificationDeliveryConstants.TYPE_EMAIL, true, true));
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "website", UserNotificationDeliveryConstants.TYPE_WEBSITE, true, true));
  }
}

This component registers our notification definition with the notification framework.  The constructor binds the notification to our custom portlet and provides the message key for the notification panel.  We also add the two user notification delivery types that we'll support, one for sending an email and one for using the notifications portlet.

When you go to the Notifications panel under My Account and choose the Configuration option from the menu in the dropdown on the upper-right corner, you can see the notification preferences:

Notification Type Selection

Handling Notifications

Another aspect of notifications is the UserNotificationHandler implementation.  The UserNotificationHandler's job is to interpret the notification event, determine whether to deliver the notification, and build the UserNotificationFeedEntry (basically the notification message itself).

Liferay provides a number of base implementation classes that you can use to build your own UserNotificationHandler instance from:

  • com.liferay.portal.kernel.notifications.BaseUserNotificationHandler - This implements a simple user notification handler with points to override the body of the notification and some other key points, but for the most part it is capable of building all of the basic notification details.
  • com.liferay.portal.kernel.notifications.BaseModelUserNotificationHandler - This is another base class, suitable for asset-enabled entities.  It uses the AssetRenderer for the entity class to render the asset, and the rendered asset is used as the message for the notification.

Obviously if you have an asset-enabled entity you're notifying on, you'd want to use the BaseModelUserNotificationHandler.  For our implementation we're going to use BaseUserNotificationHandler as the base class:

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationHandler.class
)
public class AdminLoginUserNotificationHandler extends BaseUserNotificationHandler {

  /**
   * AdminLoginUserNotificationHandler: Constructor.
   */
  public AdminLoginUserNotificationHandler() {
    setPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION);
  }

  @Override
  protected String getBody(UserNotificationEvent userNotificationEvent, ServiceContext serviceContext) throws Exception {
    String username = LanguageUtil.get(serviceContext.getLocale(), _UNKNOWN_USER_KEY);
    
    // okay, we need to get the user for the event
    User user = _userLocalService.fetchUser(userNotificationEvent.getUserId());

    if (Validator.isNotNull(user)) {
      // get the company the user belongs to.
      Company company = _companyLocalService.fetchCompany(user.getCompanyId());

      // based on the company auth type, find the user name to display.
      // so we'll get screen name or email address or whatever they're using to log in.

      if (Validator.isNotNull(company)) {
        if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_EA)) {
          username = user.getEmailAddress();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_SN)) {
          username = user.getScreenName();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_ID)) {
          username = String.valueOf(user.getUserId());
        }
      }
    }

    // we'll be stashing the client address in the payload of the event, so let's extract it here.
    JSONObject jsonObject = JSONFactoryUtil.createJSONObject(
      userNotificationEvent.getPayload());

    String fromHost = jsonObject.getString(Constants.FROM_HOST);

    // fetch our strings via the language bundle.
    String title = LanguageUtil.get(serviceContext.getLocale(), _TITLE_KEY);

    String body = LanguageUtil.format(serviceContext.getLocale(), _BODY_KEY, new Object[] {username, fromHost});

    // build the html using our template.
    String html = StringUtil.replace(_BODY_TEMPLATE, _BODY_REPLACEMENTS, new String[] {title, body});

    return html;
  }

  @Reference(unbind = "-")
  protected void setUserLocalService(final UserLocalService userLocalService) {
    _userLocalService = userLocalService;
  }
  @Reference(unbind = "-")
  protected void setCompanyLocalService(final CompanyLocalService companyLocalService) {
    _companyLocalService = companyLocalService;
  }

  private UserLocalService _userLocalService;
  private CompanyLocalService _companyLocalService;

  private static final String _TITLE_KEY = "title.admin.login";
  private static final String _BODY_KEY = "body.admin.login";
  private static final String _UNKNOWN_USER_KEY = "unknown.user";

  private static final String _BODY_TEMPLATE = "<div class=\"title\">[$TITLE$]</div><div class=\"body\">[$BODY$]</div>";
  private static final String[] _BODY_REPLACEMENTS = new String[] {"[$TITLE$]", "[$BODY$]"};

  private static final Log _log = LogFactoryUtil.getLog(AdminLoginUserNotificationHandler.class);
}

This handler builds the body of the notification itself, based on the admin login details (the user and where they are logging in from).  When building out your own notification, you'll likely want to use the notification event payload to pass details.  We're passing just the host the admin is coming from, so our payload is as simple as it gets, but you could easily pass XML, JSON, or whatever structured string you want to carry the necessary notification details.

This handler just makes the same notification body for both email and notification portlet display; there is no difference between the two.  Since the method is passed the UserNotificationEvent, you can use the getDeliveryType() method to build different bodies depending upon whether you are building an email notification or a notification portlet display message.
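
For example, a hedged sketch of splitting on delivery type might look like the following; buildEmailBody() and buildWebBody() are hypothetical helpers, not part of the actual project:

@Override
protected String getBody(
    UserNotificationEvent userNotificationEvent, ServiceContext serviceContext)
  throws Exception {

  // deliver a richer body for email and a compact one for the web portlet;
  // buildEmailBody() and buildWebBody() are hypothetical helpers.
  if (userNotificationEvent.getDeliveryType() ==
      UserNotificationDeliveryConstants.TYPE_EMAIL) {

    return buildEmailBody(userNotificationEvent, serviceContext);
  }

  return buildWebBody(userNotificationEvent, serviceContext);
}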

Publishing Notification Events

So far we have code to allow users to subscribe to our administrator login event, we allow them to choose how they want to receive the notifications, and we also have code to transform the notification event into a notification message.  What remains is actually issuing the notification events themselves.

This is very much going to be dependent upon your event source.  Most Liferay events are based on the addition or modification of some entity, so it is common to find their event publishing code in the service implementation classes when the entities are added or updated.  Your own notification events can come from wherever the event originates, even outside of the service layer.

Our notification event is based on an administrator login; the best way to publish these kinds of events is through a post login component.  We'll define the new component as such:

@Component(
  immediate = true, property = {"key=login.events.post"},
  service = LifecycleAction.class
)
public class AdminLoginNotificationEventSender implements LifecycleAction {

  @Override
  public void processLifecycleEvent(LifecycleEvent lifecycleEvent)
      throws ActionException {

    // get the request associated with the event
    HttpServletRequest request = lifecycleEvent.getRequest();

    // get the user associated with the event
    User user = null;

    try {
      user = PortalUtil.getUser(request);
    } catch (PortalException e) {
      // failed to get the user, just ignore this
    }

    if (user == null) {
      // failed to get a valid user, just return.
      return;
    }

    // We have the user, but are they an admin?
    PermissionChecker permissionChecker = null;

    try {
      permissionChecker = PermissionCheckerFactoryUtil.create(user);
    } catch (Exception e) {
      // ignore the exception
    }

    if (permissionChecker == null) {
      // failed to get a permission checker
      return;
    }

    // If the permission checker indicates the user is not omniadmin, nothing to report.
    if (! permissionChecker.isOmniadmin()) {
      return;
    }

    // this user is an administrator, need to issue the event
    ServiceContext serviceContext = null;

    try {
      // create a service context for the call
      serviceContext = ServiceContextFactory.getInstance(request);

      // note that when you're behind an LB, the remote host may be the address
      // for the LB instead of the remote client.  In these cases the LB will often
      // add a request header with a special key that holds the remote client host
      // so you'd want to use that if it is available.
      String fromHost = request.getRemoteHost();

      // notify subscribers
      notifySubscribers(user.getUserId(), fromHost, user.getCompanyId(), serviceContext);
    } catch (PortalException e) {
      // ignored
    }
  }

  protected void notifySubscribers(long userId, String fromHost, long companyId, ServiceContext serviceContext)
      throws PortalException {

    // so all of this stuff should normally come from some kind of configuration.
    // As this is just an example, we're using a lot of hard coded values and portal-ext.properties values.
    
    String entryTitle = "Admin User Login";

    String fromName = PropsUtil.get(Constants.EMAIL_FROM_NAME);
    String fromAddress = GetterUtil.getString(PropsUtil.get(Constants.EMAIL_FROM_ADDRESS), PropsUtil.get(PropsKeys.ADMIN_EMAIL_FROM_ADDRESS));

    LocalizedValuesMap subjectLocalizedValuesMap = new LocalizedValuesMap();
    LocalizedValuesMap bodyLocalizedValuesMap = new LocalizedValuesMap();

    subjectLocalizedValuesMap.put(Locale.ENGLISH, "Administrator Login");
    bodyLocalizedValuesMap.put(Locale.ENGLISH, "Administrator has logged in.");
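
    // Note: in a fuller implementation these localized maps would be handed
    // to the subscription sender for the email subject and body (e.g. via its
    // setLocalizedSubjectMap()/setLocalizedBodyMap() setters); they are shown
    // here for illustration.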

    AdminLoginSubscriptionSender subscriptionSender =
        new AdminLoginSubscriptionSender();

    subscriptionSender.setFromHost(fromHost);

    subscriptionSender.setClassPK(0);
    subscriptionSender.setClassName(AdminNotificationPortlet.class.getName());
    subscriptionSender.setCompanyId(companyId);

    subscriptionSender.setCurrentUserId(userId);
    subscriptionSender.setEntryTitle(entryTitle);
    subscriptionSender.setFrom(fromAddress, fromName);
    subscriptionSender.setHtmlFormat(true);

    int notificationType = AdminNotificationType.NOTIFICATION_TYPE_ADMINISTRATOR_LOGIN;

    subscriptionSender.setNotificationType(notificationType);

    String portletId = PortletProviderUtil.getPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION, PortletProvider.Action.VIEW);

    subscriptionSender.setPortletId(portletId);

    subscriptionSender.setReplyToAddress(fromAddress);
    subscriptionSender.setServiceContext(serviceContext);

    subscriptionSender.addPersistedSubscribers(
        AdminNotificationPortlet.class.getName(), 0);

    subscriptionSender.flushNotificationsAsync();
  }
}

This is a LifecycleAction component that registers as a post login lifecycle event listener.  It goes through a series of checks to determine if the user is an administrator and, when they are, it issues a notification.  The real fun happens in the notifySubscribers() method.

This method has a lot of initialization and setting of the subscriptionSender properties.  This variable is of type AdminLoginSubscriptionSender, a class which extends SubscriptionSender.  This is the guy that handles the actual notification sending.

The flushNotificationsAsync() method pushes the instance onto the Liferay Message Bus, where a message receiver gets the SubscriptionSender and invokes its flushNotifications() method (you can call that method directly if you don't need async notification sending).

The flushNotifications() method does some permission checking, user verification and filtering (i.e. don't send a notification to the user that generated the event) and eventually sends the email notification and/or adds the user notification for the notifications portlet.

The AdminLoginSubscriptionSender class is pretty simple:

public class AdminLoginSubscriptionSender extends SubscriptionSender {

  private static final long serialVersionUID = -7152698157653361441L;

  @Override
  protected void populateNotificationEventJSONObject(
      JSONObject notificationEventJSONObject) {

    super.populateNotificationEventJSONObject(notificationEventJSONObject);

    notificationEventJSONObject.put(Constants.FROM_HOST, _fromHost);
  }

  @Override
  protected boolean hasPermission(Subscription subscription, String className, long classPK, User user) throws Exception {
    return true;
  }

  @Override
  protected boolean hasPermission(Subscription subscription, User user) throws Exception {
    return true;
  }

  @Override
  protected void sendNotification(User user) throws Exception {
    // Remove the super class's filtering that skips notifying the user who is
    // self.  That makes sense in most cases, but we want a notification of
    // admin login so we know whenever any admin logs in from anywhere at any
    // time.

    // It will be a pain if we get notified because of our own login, but we
    // want to know if some hacker gets our admin credentials and logs in and
    // it's not really us.

    sendEmailNotification(user);
    sendUserNotification(user);
  }

  public void setFromHost(String fromHost) {
    this._fromHost = fromHost;
  }

  private String _fromHost;
}

Putting It All Together

Okay, so now we have everything:

  • We have code to allow users to subscribe to the admin login events.
  • We have code to allow users to select how they receive notifications.
  • We have code to transform the notification message in the database into HTML for display in the notifications portlet.
  • We have code to send the notifications when an administrator logs in.

Now we can test it all out.  After building and deploying the module, you're pretty much ready to go.

You'll have to log in and sign up to receive the notifications.  Note that if you're using your local test Liferay environment you might not have email enabled, so be sure to use the web notifications.  In fact, in my DXP environment I don't have email configured and I got a slew of exceptions from the Liferay email subsystem; I ignored them since they come from not having email set up.

Then log out and log in again as an administrator and you should see your notification pop up in the left sidebar.

Admin Notifications Messages

Conclusion

So there you have it, basic code that will support creating and sending notifications.  You can use this code to add notifications support into your own portlets and take them wherever you need.

You can find the code for this project up on github: https://github.com/dnebing/admin-notification

Enjoy!

Removing Panels from My Account

Technical Blogs 2016/12/29 投稿者 David H Nebinger

So recently I was asked, "How can panels be removed from the My Account portlet?"

It seems like such a deceptively simple question since it used to be a supported feature, but my response to the question was that it is just not possible anymore.

Back in the 6.x days, all you needed to do was override the users.form.my.account.main, identification and/or miscellaneous properties in portal-ext.properties and you could take out any of the panels you didn't want the users to see in the My Account panel.

The problem with the old way is that while it was easy to remove panels or reorganize panels, it was extremely difficult to add new custom panels to My Account.

With Liferay 7 and DXP, things have swung the other way.  With OSGi, it is extremely easy to add new panels to My Account.  Just create a new component that implements the FormNavigatorEntry interface and deploy it, and Liferay will happily present your custom panel in My Account.  See https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/form-navigator for information on how to do this.

Although it is easy to add new panels, there is no way to remove panels.  So I set out to find out if there was a way to restore this functionality.

Background

In Liferay 7, the new Form Navigator components replace the old portal properties setup.  But while there used to be three separate sets of properties (one for adds, one for updates and one for My Account), they have all been merged into a single set.  So even if there were a supported way to disable a panel, it would disable the panel not only from My Account but also from the users control panel, not something that most sites want.

So this problem actually points to the solution - in order to control the My Account panels separately, we need a completely separate Form Navigator setup.  If we had a separate setup, we'd also need to have a JSP fragment bundle override to use the custom Form Navigator rather than the original.

Creating the My Account Form Navigator

This is actually a simple task because the Liferay code is already deployed as an OSGi bundle, but more importantly the current classes are declared in an export package in the module's BND file.  This means that our classes can extend the Liferay classes without dependency issues.

Here's one of the new form navigator entries that I came up with:

@Component(
	immediate = true,
	property = {"form.navigator.entry.order:Integer=70"},
	service = FormNavigatorEntry.class
)
public class UserPasswordFormNavigatorEntryExt extends UserPasswordFormNavigatorEntry {

	private boolean visible = GetterUtil.getBoolean(PropsUtil.get(
		Constants.MY_ACCOUNT_PASSWORD_VISIBLE), true);

	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}

	@Override
	public boolean isVisible(User user, User selUser) {
		return visible && super.isVisible(user, selUser);
	}
}

So first off our navigator entry extends the original entry, and that saves us a lot of custom coding.  We're using a custom portal property to disable the panel, so we're fetching that up front.  We're using the property setting along with the super class's method to determine if the panel should be visible.  This will allow us to set a portal-ext.properties value to disable the panel and it will just get excluded.

The most important part is the override for the form navigator ID.  We're declaring a custom ID that will separate how the new entries and categories will be made available.

All of the other form navigator entries will follow the same pattern: they'll extend the original class, use a custom property to allow for disabling, and use the special prefix for the form navigator id (see the Constants sketch below).
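
For reference, the Constants values referred to above might look something like this sketch (the names are inferred from the usage here and the portal-ext.properties keys shown later; this is not the exact project source):

// A sketch of the Constants class the entries refer to; names are inferred
// from the usage above, not copied from the actual project.
public class Constants {

	// prefix used to build the custom My Account form navigator id
	public static final String MY_ACCOUNT_PREFIX = "my.account.";

	// portal-ext.properties keys used to toggle individual panels
	public static final String MY_ACCOUNT_ADDRESSES_VISIBLE =
		"my.account.addresses.visible";

	public static final String MY_ACCOUNT_PASSWORD_VISIBLE =
		"my.account.password.visible";
}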

We'll do something similar with the category overrides:

@Component(
	property = {"form.navigator.category.order:Integer=30"},
	service = FormNavigatorCategory.class
)
public class UserUserInformationFormNavigatorCategoryExt extends UserUserInformationFormNavigatorCategory {
	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}
}

For the three component categories we'll return our new form navigator ID string.

Creating the My Account JSP Fragment Bundle

Now that we have a custom form navigator, we'll need a JSP override on My Account to use the new version.

Our fragment bundle is based off of the My Account portlet, so we can use the following blade command:

blade create -t fragment -h com.liferay.my.account.web -H 1.0.4 users-admin-web-my-account

If you're using an IDE, just use the equivalent to create a fragment module from the My Account web module for the version currently used in your portal (check the $LIFERAY_HOME/work folder for the com.liferay.my.account.web folder as it includes the version number in your portal).

To use our new form navigator, we need to override the edit_user.jsp page.  This page actually comes from the modules/apps/foundation/users-admin/users-admin-web module from the Liferay source (during build time it is copied into the My Account module).  Copy the file from the source to your new module as this will be the foundation for the change.

I made two changes to the file.  The first adds some Java code to compute a variable for the form navigator id; the second changes the <liferay-ui:form-navigator /> tag declaration to use the variable instead of a constant.

Here's the java code I added:

// initialize to the real form navigator id.
String formNavigatorId = FormNavigatorConstants.FORM_NAVIGATOR_ID_USERS;

// if this is the "My Account" portlet...
if (portletName.equals(myAccountPortletId)) {
	// include the special prefix
	formNavigatorId = "my.account." + formNavigatorId;
}

We start with the current constant value for the navigator id.  Then if the portlet id matches the My Account portlet, we'll add our form navigator ID prefix to the variable.

Here's the change to the form navigator tag:

<liferay-ui:form-navigator 
	backURL="<%= backURL %>" 
	formModelBean="<%= selUser %>" 
	id="<%= formNavigatorId %>" 
	markupView="lexicon">
</liferay-ui:form-navigator>

Build and deploy all of your modules to begin testing.

Testing

Testing is actually pretty easy to do.  Start by adding some portal-ext.properties file entries in order to disable some of the panels:

my.account.addresses.visible=false
my.account.additional.email.visible=false
my.account.instant.messenger.visible=false
my.account.open.id.visible=false
my.account.phone.numbers.visible=false
my.account.sms.visible=false
my.account.social.network.visible=false
my.account.websites.visible=false

If your portal is running, you'll need to restart the portal for the properties to take effect.

Start by logging in and going to the Users control panel and choose to edit a user.  Don't worry, we're not editing, we're just looking to see that all of the panels are there:

User Control Panel View

Next, navigate to My Account -> Account Settings to see if the changes have been applied:

My Account Panel

Here we can see that the whole Identification section has been removed, since all of its panels have been disabled.  We can also see that the password and personal site template sections have been removed from the view.

Conclusion

So this project has been loaded to GitHub: https://github.com/dnebing/my-account-override.  Feel free to clone and use it as you see fit.

It uses properties in portal-ext.properties to disable specific panels from the My Account page while leaving the user control panel unchanged.

There is one caveat to remember though.

When we created the fragment, we had to use the version number currently deployed in the portal.  This means that any upgrade to the portal, whether to a new GA, SP, FP or even a hot fix, may change the version number of the My Account portlet.  This will force you to come back to your fragment bundle and change the version in the BND file.  In fact, my recommendation here would be to match your bundle version to the fragment host bundle version, as it will be easier to identify whether you have the right version(s) deployed.

Otherwise you should be good to go!

Update: You can eliminate the above caveat by using a version range in your bnd file.  I changed the line in bnd.bnd file to be:

Fragment-Host: com.liferay.my.account.web;bundle-version="[1.0.4,2.0.0)"

This allows your fragment bundle to apply to any newer version of the module that gets deployed up to 2.0.0.  Note, however, that you should still compare each release to ensure you're not losing any fixes or improvements that have been shipped as an update.

Extending Liferay OSGi Modules

Technical Blogs 2016/12/20 投稿者 David H Nebinger

Recently I was working on a fragment bundle for a JSP override to the message boards and I wanted to wrap the changes so they could be disabled by a configuration property.

But the configuration is managed by a Java interface and set via the OSGi Configuration Admin service in a completely different module jar contained in an LPKG file in the osgi directory.

So I wondered if there was a way to weave in a change to the Java interface to include my configuration item, a concept similar to the plugin extending a plugin technique.

And in fact there is, and I'm going to share it with you here...

Creating The Module

So first we're going to be building a gradle module in a Liferay workspace, so you're going to need one of those.

In our modules directory we're going to create a new folder named message-boards-api-ext for containing our new module.  Actually the name of the folder doesn't matter too much so feel free to follow your own naming standards.

We need a gradle build file, and since we're in a Liferay workspace, our build.gradle file is pretty simple:

dependencies {
    compileOnly group: "biz.aQute.bnd", name: "biz.aQute.bndlib", version: "3.1.0"
    compileOnly group: "com.liferay", name: "com.liferay.portal.configuration.metatype", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.core", version: "5.0.0"

    compile group: "com.liferay", name: "com.liferay.message.boards.api", version: "3.1.0"
}

jar.archiveName = 'com.liferay.message.boards.api.jar'

The dependencies mostly come from the build.gradle file from the module from the Liferay source found here: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/build.gradle

We did add, as a compile dependency, the module that we're building a replacement for, in this case the com.liferay.message.boards.api module.

We also specify an archive name that excludes the version number; this matches the specifications from the Liferay override documentation: https://github.com/liferay/liferay-portal/blob/master/tools/osgi-marketplace-override-README.markdown

We also need a bnd.bnd file to build our module:

Bundle-Name: Liferay Message Boards API
Bundle-SymbolicName: com.liferay.message.boards.api
Bundle-Version: 3.1.0
Export-Package:\
  com.liferay.message.boards.configuration,\
  com.liferay.message.boards.display.context,\
  com.liferay.message.boards.util.comparator
Liferay-Releng-Module-Group-Description:
Liferay-Releng-Module-Group-Title: Message Boards

Include-Resource: @com.liferay.message.boards.api-3.1.0.jar

The bulk of the file is going to come directly from the original: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/bnd.bnd.

The only addition to the file is the Include-Resource BND declaration.  As I previously covered in my blog post about OSGi Module Dependencies, this is the declaration used to create an Uber Module.  But for our purposes, this actually provides the binary source for the bulk of the content of our module.

By building an Uber Module from the source module, we are basically going to be building a jar file from the exploded original module jar, allowing us to have the baseline jar with all of the original content.

Finally we need our source override file, in this case we need the src/main/java/com/liferay/message/boards/configuration/MBConfiguration.java file:

package com.liferay.message.boards.configuration;

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

/**
 * @author Sergio González
 * @author dnebinger
 */
@ExtendedObjectClassDefinition(category = "collaboration")
@Meta.OCD(
	id = "com.liferay.message.boards.configuration.MBConfiguration",
	localization = "content/Language", name = "mb.configuration.name"
)
public interface MBConfiguration {

	/**
	 * Enter time in minutes on how often this job is run. If a user's ban is
	 * set to expire at 12:05 PM and the job runs at 2 PM, the expire will occur
	 * during the 2 PM run.
	 */
	@Meta.AD(deflt = "120", required = false)
	public int expireBanJobInterval();

	/**
	 * Flag that determines if the override should be applied.
	 */
	@Meta.AD(deflt = "false", required = false)
	public boolean applyOverride();
}

So the bulk of the code comes from the original: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/src/main/java/com/liferay/message/boards/configuration/MBConfiguration.java

Our addition is the new flag value.

Now if we had other modifications for other classes, we would make sure we had the same paths, same packages and same class names; we would just layer our changes on top of the originals.

We could even introduce new packages and classes for our custom code.

Building The Module

Building is pretty easy, we just use the gradle wrapper to do the build:

$ ../../gradlew build

When we look inside of our built module jar, this is where we can see that our change did, in fact, get woven into the new module jar:

Exploded Module

We can see from the highlighted line from the compiled class that our jar definitely contains our method, so our build is good.  We can also see the original packages and resources that we get from the Uber Module approach, so our module is definitely complete.

Deploying The Module

Okay, so this is the ugly part of this whole thing: deployments are not easy.

Here are the restrictions that we have to keep in mind:

  1. The portal cannot be running when we do the deployment.
  2. The built jar file must be copied manually to the $LIFERAY_HOME/osgi/marketplace/override folder.
  3. The $LIFERAY_HOME/osgi/state folder must be deleted.
  4. If you have changed any web-type file (javascript, JSP, css, etc.) you should delete the relevant folder from the $LIFERAY_HOME/work folder.
  5. If Liferay deploys a newer version than the one declared in our bundle override, our changes may not be applied.
  6. Only one override bundle can work at a time; if someone else has an override bundle in this folder, your change will step on theirs and this may not be an option (Liferay may distribute updates or hot fixes as module overrides in this fashion).
  7. Support for $LIFERAY_HOME/osgi/marketplace/override was added in later LR7CE/DXP releases, so check that the version you are using has this support.
  8. You can break your portal if your module override does bad things or has bugs.

Wow, that is a lot of restrictions.

Basically we need to copy our jar manually to the $LIFERAY_HOME/osgi/marketplace/override folder, but we cannot do it if the application server is running.  The $LIFERAY_HOME/osgi/state folder should be whacked as we are changing the content of the folder.  And the bundle folder in $LIFERAY_HOME/work may need to be deleted so any cached resources are properly cleaned out.

For this change, we copy the com.liferay.message.boards.api.jar file to the $LIFERAY_HOME/osgi/marketplace/override folder and delete the state folder while the application server is down, then we can fire up the application server to see the outcome.

Conclusion

We can verify the change works by navigating to the Control Panel -> System Settings -> Collaboration -> Message Boards page and viewing the change:

Deployed Module Override

Be forewarned, however!  Listen when I emphasize the following:

  • Pay special attention to the restrictions on this technique!  This is not a "build, deploy and forget it" technique, as each Liferay upgrade, fix pack or hot fix can easily invalidate your change, and every deployment requires special handling.
  • You can seriously break Liferay if you deliver bad code in your module override, so test the heck out of these overrides.
  • This should not be used in lieu of supported Liferay methods to extend or replace functionality (JSP fragment bundles, MVC command service overrides, etc.); it is intended only for edge cases where no other override/extension option is supported.


Using a Custom Bundle for the Liferay Workspace

Technical Blogs 2016/12/06 投稿者 David H Nebinger

So the Liferay workspace is pretty handy when it comes to building all of your OSGi modules, themes, layout templates and yes, even your legacy code from the plugins SDK.

But, when it comes to initializing a local bundle for deployment or building a dist bundle, using one of the canned Liferay 7 bundles from sourceforge may not cut it, especially if you're using Liferay DXP, or if you're behind a strict proxy or you just want a custom bundle with fixes already streamed in.

Looking at the gradle.properties file in the workspace, it seems as though a custom URL will work and, in fact, it does.  You can point to your custom URL and download your custom bundle.

But if you don't have a local HTTP server to host your bundle from, you can simulate having a custom bundle downloaded from a site w/o actually doing a download.

Building the Custom Bundle

So first you need to build a custom bundle.  This is actually quite easy.  It is simply a matter of downloading a current bundle, unzipping it, making whatever changes you want, then re-zipping it back up.

So you might, for example, download the DXP Service Pack 1 bundle, expand it, apply Fix Pack 8, then zip it all back up.  This will give you a custom bundle with the latest (currently) fix pack ready for your development environment.

One key here, though, is that the bundle must have a root directory in the zip.  Basically this means you shouldn't see, for example, /tomcat-8.0.32 as a folder in the root directory of the zip; it should always have some directory that contains everything, such as /dxp-sp1-fp8/tomcat-8.0.32, etc.
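
For example, a correctly structured zip (using a hypothetical dxp-sp1-fp8 root folder; the folder names below are the typical bundle contents) would look like:

dxp-sp1-fp8/
    data/
    deploy/
    osgi/
    tomcat-8.0.32/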

If you try to use your custom bundle and you get weird unzip errors like corrupt streams and such, that likely points to a missing directory at the root level w/ the bundle nestled underneath it.

Copy the Bundle to the Cache Folder

As it turns out, Liferay actually caches downloaded bundles in the ~/.liferay/bundles directory.

Copy your custom bundle zip to this directory to get it in the download cache.

Update the Download URL

Next you have to update your gradle.properties file to use your new URL.

You can use any URL you want, just make sure it ends with your bundle name:

liferay.workspace.bundle.url=http://example.com/dxp-sp1-fp8.zip

Modify the Build Download Instructions

The final step is to modify the build.gradle file in the workspace root.  Add the following lines:

downloadBundle {
    onlyIfNewer false
    overwrite false
}

This basically has the download bundle skip the network check to see if the remote URL file exists and is newer than the local cached file.

Conclusion

Hey, that's it!  You're now ready to go with your local bundle.  Issue the "gradlew distBundleZip" command to build all of your modules, extract your custom bundle, feather in your deployment artifacts, and zip it all up into a ready-to-deploy Liferay bundle.

Great for parties, entertaining, and even building a container-based deployment artifact for a cloud-based Liferay environment.

 

Debugging Liferay 7 In Intellij

Technical Blogs 2016/09/15 投稿者 David H Nebinger

Introduction

So I'm doing more and more development using pure Intellij for Liferay 7 / DXP, even debugging.

I thought I'd share how I do it in case someone else is looking for a brief how-to.

Tomcat Setup

So I do not like running Tomcat within the IDE; it just feels wrong.  I'd rather have it run as a separate JVM with its own memory settings, etc.  Besides, I use external Tomcats to test deployments, run demos, etc., so I use them for development also.  The downside to this approach is that there is zero support for hot deploy; if you change code you have to do a build and deploy it for debugging to work.

Configuring Tomcat for debugging is really easy, but a quick script copy will make it even easier.

In your tomcat-8.0.32/bin directory, copy the startup script as debug.  On Windows that means copying startup.bat as debug.bat, on all others you copy startup.sh as debug.sh.

Edit your new debug script with your favorite text editor; near the end of the file you'll find the EXECUTABLE start line (it will vary based upon your platform).

Right before the "start", insert the word "jpda" and save the file.

For Windows your line will read:

call "%EXECUTABLE%" jpda start %CMD_LINE_ARGS%

For all others your line will read:

exec "$PRGDIR"/"$EXECUTABLE" jpda start "$@"

This gives you a great little startup script that enables remote debugging.

Use your debug script to launch Tomcat.

Intellij Setup

We now need to set up a remote debugging configuration.  From the Run menu, choose Edit Configurations... option to open the Run/Debug Configurations dialog.

Click the + sign in the upper left corner to add a new configuration and choose Remote from the dropdown menu.

Add Configuration Dialog

Give the configuration a valid name and change the port number to 8000 (Tomcat defaults to 8000).  If debugging locally, keep localhost as the host; if debugging on a remote server, use the remote hostname.

Remote Tomcat Debugging Setup

Click on OK to save the new configuration.

To start a debug session, select Debug from the Run menu and select the debug configuration to launch.  The debug panel will be opened and the console should report it has connected to Tomcat.

Debugging Projects

If your project happens to be the matching Liferay source for the portal you're running, you should have all of the source available to start an actual debug session.  I'm usually in my own projects needing to understand what is going on in my code or the interaction of my code with the portal.

So when I'm ready to debug I have already built and deployed my module and I can set breakpoints in my code and start clicking around in the portal.

When one of your breakpoints is hit, Intellij will come to the front and the debugger will be front and center.  You can step through your code, set watches and view object, variable and parameter values.

Intellij will let you debug your code or any declared dependencies in your project.  But once you make a call to code that is not a dependency of your project, the debugger may lose visibility on where you actually are.

Fortunately there's an easy fix for this.  Choose Project Structure... from the File menu.  Select the Libraries item on the left.  The right side is where additional libraries can be added for debugging without affecting your actual project dependencies.

Click the + sign to add a new library.  Pick Java if you have a local source directory and/or jar file that you want to add or Maven if you want to download the dependency from the Maven repositories.  So, for example, you may want to add the portal-impl.jar file and link in the source directory to help debug against the core.  For the OSGi modules, you can add the individual jars or source dirs as you need them.

Add Libraries Dialog

Conclusion

So now you can debug Liferay/Tomcat remotely in Intellij.

Perhaps in a future blog I'll throw together a post about debugging Tomcat within Intellij instead of remotely...

Liferay 7 Development, Part 6

Technical Blogs 2016/09/14 投稿者 David H Nebinger

Introduction

In part 5 we started the portlet code.  We added the configuration support, started the portlet and added the PanelApp implementation to get the portlet in the control panel.

In this part we will be adding the view layer to the portlet and adding the action handlers.  To complete these parts we'll be layering in the use of our Filesystem Access DS component.

We're also going to take a quick look at Lexicon and what it means for us average portlet developers and implement our portlet using the new Liferay MVC framework.

MVC Implementation Details

The new Liferay MVC takes on a lot of the grunt work for portlet implementations, and in this new iteration we're actually building OSGi components that leverage annotations to get everything done.

So let's take a look at one of the ActionCommand components to see how they work.  The TouchFileFolderMVCActionCommand is one of the simpler action commands in our portlet so this one will allow us to look at the aspects of ActionCommands without getting bogged down in the implementation code.

/**
 * class TouchFileFolderMVCActionCommand: Action command that handles the 'touch' of the file/folder.
 * @author dnebinger
 */
@Component(
	configurationPid = "com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration",
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"mvc.command.name=/touch_file_folder"
	},
	service = MVCActionCommand.class
)
public class TouchFileFolderMVCActionCommand extends BaseMVCActionCommand {

So here is a standard ActionCommand declaration.  The annotation identifies our configuration pid for accessing the portlet instance config; the properties indicate this is an ActionCommand for our filesystem access portlet and give the MVC command path used to reach this action class; and the service declaration indicates this is an MVCActionCommand implementation.

The class itself extends the BaseMVCActionCommand so we don't have to implement all of the plumbing, just our necessary business logic.

	/**
	 * doProcessAction: Called to handle the touch file/folder action.
	 * @param actionRequest Request instance.
	 * @param actionResponse Response instance.
	 * @throws Exception in case of error.
	 */
	@Override
	protected void doProcessAction(
		ActionRequest actionRequest, ActionResponse actionResponse)
		throws Exception {

And this is the method declaration that needs to be implemented as an extension of BaseMVCActionCommand.

		// get the config instance
		FilesystemAccessPortletInstanceConfiguration config = getConfiguration(
			actionRequest);

		if (config == null) {
			logger.warn("No config found.");

			SessionErrors.add(
				actionRequest, MissingConfigurationException.class.getName());

			return;
		}

		// Extract the target and current path from the action request params.
		String touchName = ParamUtil.getString(actionRequest, Constants.PARM_TARGET);
		String currentPath = ParamUtil.getString(actionRequest, Constants.PARM_CURRENT_PATH);

		// get the real target path to use for the service call
		String target = getLocalTargetPath(currentPath, touchName);

		// use the service to touch the item.
		_filesystemAccessService.touchFilesystemItem(config.rootPath(), target);
	}

This next method demonstrates how an action command can get access to the portlet instance configuration.

	/**
	 * getConfiguration: Returns the configuration instance given the action request.
	 * @param request The request to get the config object from.
	 * @return FilesystemAccessPortletInstanceConfiguration The config instance.
	 */
	private FilesystemAccessPortletInstanceConfiguration getConfiguration(
		ActionRequest request) {

		// Get the theme display object from the request attributes
		ThemeDisplay themeDisplay = (ThemeDisplay)request.getAttribute(
			WebKeys.THEME_DISPLAY);

		// get the current portlet instance
		PortletInstance instance = PortletInstance.fromPortletInstanceKey(
			FilesystemAccessPortletKeys.FILESYSTEM_ACCESS);

		FilesystemAccessPortletInstanceConfiguration config = null;

		// use the configuration provider to get the configuration instance
		try {
			config = _configurationProvider.getPortletInstanceConfiguration(
				FilesystemAccessPortletInstanceConfiguration.class,
				themeDisplay.getLayout(), instance);
		} catch (ConfigurationException e) {
			logger.error("Error getting instance config.", e);
		}

		return config;
	}

	/**
	 * getLocalTargetPath: Returns the local target path.
	 * @param localPath The local path.
	 * @param target The target filename.
	 * @return String The local target path.
	 */
	private String getLocalTargetPath(final String localPath, final String target) {
		if (Validator.isNull(target)) {
			return null;
		}

		if (Validator.isNull(localPath)) {
			return StringPool.SLASH + target;
		}

		if (localPath.trim().endsWith(StringPool.SLASH)) {
			return localPath.trim() + target.trim();
		}

		return localPath.trim() + StringPool.SLASH + target.trim();
	}

Just as with our previous OSGi components, we rely on OSGi to inject services needed to implement the component functionality.

	/**
	 * setConfigurationProvider: Sets the configuration provider for config access.
	 * @param configurationProvider The config provider to use.
	 */
	@Reference
	protected void setConfigurationProvider(
		ConfigurationProvider configurationProvider) {

		_configurationProvider = configurationProvider;
	}

	/**
	 * setFilesystemAccessService: Sets the filesystem access service instance to use.
	 * @param filesystemAccessService The filesystem access service instance.
	 */
	@Reference(unbind = "-")
	protected void setFilesystemAccessService(
		final FilesystemAccessService filesystemAccessService) {
		_filesystemAccessService = filesystemAccessService;
	}

	private FilesystemAccessService _filesystemAccessService;

	private ConfigurationProvider _configurationProvider;

	private static final Log logger = LogFactoryUtil.getLog(
		TouchFileFolderMVCActionCommand.class);
}

As I started fleshing out the other ActionCommand implementations, I quickly found that I was copying and pasting the getConfiguration() and getLocalTargetPath() methods.  I refactored those into a base class, BaseActionCommand, and changed all of the ActionCommand implementations to extend it, so don't be alarmed when the source differs from the listing above.

Serving resources is handled in a similar fashion.  Below is the declaration for the FileDownloadMVCResourceCommand, the component which handles serving the file as a Serve Resource handler.

/**
 * class FileDownloadMVCResourceCommand: A resource command class for returning files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"mvc.command.name=/download-file"
	},
	service = MVCResourceCommand.class
)
public class FileDownloadMVCResourceCommand extends BaseMVCResourceCommand {
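
Inside, the interesting method is doServeResource().  Here's a minimal sketch of the idea, not the actual implementation - getLocalFile() is a hypothetical helper that resolves and validates the path against the configured root path, and the real code in GitHub also deals with config lookup and error handling:

	@Override
	protected void doServeResource(
		ResourceRequest resourceRequest, ResourceResponse resourceResponse)
		throws Exception {

		// Pull the target and current path from the resource request params.
		String target = ParamUtil.getString(resourceRequest, Constants.PARM_TARGET);
		String currentPath = ParamUtil.getString(resourceRequest, Constants.PARM_CURRENT_PATH);

		// Resolve the file, constrained to the configured root path.
		File file = getLocalFile(currentPath, target);

		// Let Liferay stream the file back with appropriate headers.
		PortletResponseUtil.sendFile(
			resourceRequest, resourceResponse, file.getName(),
			new FileInputStream(file), MimeTypesUtil.getContentType(file));
	}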

As with all Liferay MVC implementations, the view (render phase) is handled with JSP files.  JSP files do not have access to the OSGi injection mechanisms, so we have to use a different mechanism to get the OSGi injected resources and make them available to the JSP files.  We change the portlet class to handle this injection and pass through:

/**
 * class FilesystemAccessPortlet: This portlet is used to provide filesystem
 * access.  Allows an administrator to grant access to users to access local
 * filesystem resources, useful in those cases where the user does not have
 * direct OS access.
 *
 * This portlet will provide access to download, upload, view, 'touch' and
 * edit files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.display-category=category.system.admin",
		"com.liferay.portlet.header-portlet-css=/css/main.css",
		"com.liferay.portlet.instanceable=false",
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"javax.portlet.display-name=Filesystem Access",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user"
	},
	service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {

	/**
	 * render: Overrides the parent method to handle the injection of our
	 * service as a render request attribute so it is available to all of the
	 * jsp files.
	 *
	 * @param renderRequest The render request.
	 * @param renderResponse The render response.
	 * @throws IOException In case of error.
	 * @throws PortletException In case of error.
	 */
	@Override
	public void render(RenderRequest renderRequest, RenderResponse renderResponse) throws IOException, PortletException {

		// set the service as a render request attribute
		renderRequest.setAttribute(
			Constants.ATTRIB_FILESYSTEM_SERVICE, _filesystemAccessService);

		// invoke super class method to let the normal render operation run.
		super.render(renderRequest, renderResponse);
	}

	/**
	 * setFilesystemAccessService: Sets the filesystem access service instance to use.
	 * @param filesystemAccessService The filesystem access service instance.
	 */
	@Reference(unbind = "-")
	protected void setFilesystemAccessService(
		final FilesystemAccessService filesystemAccessService) {

		_filesystemAccessService = filesystemAccessService;
	}

	private FilesystemAccessService _filesystemAccessService;

	private static final Log logger = LogFactoryUtil.getLog(
		FilesystemAccessPortlet.class);
}

So here we let OSGi inject the references into the portlet instance class itself, and we override the render() method to pass our service references to the view layer as render request attributes.  In our init.jsp page, you'll find that the service reference instances are extracted from the render request attributes and turned into a variable that will be available to all JSP pages that include the init.jsp file.  In this way our JSPs have access to the injected services without having to go through the older Util classes to statically access the service reference.
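
For example, the extraction in init.jsp boils down to something like this (a sketch; it assumes <portlet:defineObjects /> has already made renderRequest available to the scriptlet):

<%
// Pull the service that the portlet class stashed as a request attribute.
FilesystemAccessService filesystemAccessService =
	(FilesystemAccessService)renderRequest.getAttribute(
		Constants.ATTRIB_FILESYSTEM_SERVICE);
%>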

So the only remarkable thing about the JSP files themselves is their location.  Instead of sitting next to the WEB-INF folder the way they did in the old war-style portlets, they are now built and shipped inside the module jar by putting them into the resources/META-INF/resources directory; this is our new "root" path for all web assets.  So in this folder in our project we have all of our JSP files as well as a css folder with our css file.
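
For illustration, the relevant part of the module source tree looks like this:

src/main/resources/META-INF/resources/
	css/
		main.css
	configuration.jsp
	init.jsp
	view.jsp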

Unfortunately I implemented the JSP files before Nathan Cavanaugh's new blog entry, https://web.liferay.com/web/nathan.cavanaugh/blog/-/blogs/the-status-and-direction-of-the-frontend-infrastructure-in-liferay-7-dxp.  Had I waited, I might have known that my use of AUI may have been a bad decision.  But then I remembered that the bulk of the portal is written using AUI and that, unless the UI is completely rewritten all the way down to the least significant portlet, AUI is going to remain for some time to come.

Oh yeah, Lexicon

I mentioned that I was going to talk about Lexicon in the portlet.  Lexicon (and Metal and Soy and ...) are really hot topics for pure front end developers.  I'm looking for more discussion aimed at cross-functional developers: the average developers doing both front-end and back-end work who need a path between the two worlds that doesn't fall down the rabbit holes each side offers from time to time.  AUI has historically been that path, a tag library that cross-functional developers could use to leverage Liferay's use of YUI javascript without really having to learn all of the details of the AUI/YUI javascript library.

So to start that conversation, I'm going to talk about one of the design choices I made and how Lexicon was necessary as a result.

My need was fairly simple - I wanted to show an "add file" button that, when clicked, would show a dialog to collect the filename.  The dialog would have an OK button (that would trigger the creation of the file) and a cancel button (that cancelled the add of the file).

So I needed a modal dialog, but how was I going to implement that?  The choices for modal dialog implementation, as we all know, are seemingly endless, but does Lexicon offer a solution?

The answer is Yes, there is a Lexicon solution.  The doco can be found here: http://liferay.github.io/lexicon/content/modals/

Now if you're like me, you read this whole page and can see how the front end guys are just eating this up.  Use some standard tags, decorate with some attributes and voila, you have yourself a modal dialog on a page.  But you're left wondering how you're going to drop this into your portlet jsp page, how you're going to wire it up to trigger calls to your backend, etc., and you think back wistfully, remembering the AUI tag library and how you could use a few JSP tags to get a similar outcome...  Ah, the good ole days...

Oh, sorry, back on topic.  So I was able to leverage the Lexicon modal dialog in my JSP page and, with some javascript help, got everything working the way I needed.  What I'm about to show is likely not the best way to have integrated Lexicon, I'm sure Nate and his team would be able to shoot holes all through this code, but like I said I'm looking for the discussion that covers how cross-functional developers will use Lexicon and if this starts that discussion, then it's worth sharing.

So here goes, these are the parts of view.jsp which handles the add file modal dialog:

<button class="btn btn-default" data-target="#<portlet:namespace/>AddFileModal" data-toggle="modal" id="<portlet:namespace/>showAddFileBtn"><liferay-ui:message key="add-file" /></button>
<div aria-labelledby="<portlet:namespace/>AddFileModalLabel" class="fade in lex-modal modal" id="<portlet:namespace/>AddFileModal" role="dialog" tabindex="-1">
	<div class="modal-dialog modal-lg">
		<div class="modal-content">
			<portlet:actionURL name="/add_file_folder" var="addFileActionURL" />

			<div id="<portlet:namespace/>fm3">
				<form action="<%= addFileActionURL %>" id="<portlet:namespace/>form3" method="post" name="<portlet:namespace/>form3">
					<aui:input name="<%= ActionRequest.ACTION_NAME %>" type="hidden" />
					<aui:input name="redirect" type="hidden" value="<%= currentURL %>" />
					<aui:input name="currentPath" type="hidden" value="<%= currentPath %>" />
					<aui:input name="addType" type="hidden" value="addFile" />

					<div class="modal-header">
						<button aria-labelledby="Close" class="btn btn-default close" data-dismiss="modal" role="button" type="button">
							<svg aria-hidden="true" class="lexicon-icon lexicon-icon-times">
								<use xlink:href="<%= themeDisplay.getPathThemeImages() + "/lexicon/icons.svg" %>#times" />
							</svg>
						</button>

						<button class="btn btn-default modal-primary-action-button visible-xs" type="button">
							<svg aria-hidden="true" class="lexicon-icon lexicon-icon-check">
								<use xlink:href="<%= themeDisplay.getPathThemeImages() + "/lexicon/icons.svg" %>#check" />
							</svg>
						</button>

						<h4 class="modal-title" id="<portlet:namespace/>AddFileModalLabel"><liferay-ui:message key="add-file" /></h4>
					</div>

					<div class="modal-body">
						<aui:fieldset>
							<aui:input autoFocus="true" helpMessage="add-file-help" id="addNameFile" label="add-file" name="addName" type="text" />
						</aui:fieldset>
					</div>

					<div class="modal-footer">
						<button class="btn btn-default close-modal" id="<portlet:namespace/>addFileBtn" name="<portlet:namespace/>addFileBtn" type="button"><liferay-ui:message key="add-file" /></button>
						<button class="btn btn-link close-modal" data-dismiss="modal" type="button"><liferay-ui:message key="cancel" /></button>
					</div>
				</form>
			</div>
		</div>
	</div>
</div>

So first I have the button that will trigger the display of the modal dialog.  The modal dialog is in the <div /> that follows.  The content div contains my AUI-based form but is decorated with appropriate Lexicon tags to add the dialog buttons.

There's also some javascript in the JSP that affects the dialog.  Note that the element ids rendered above are namespaced, so the selectors need the <portlet:namespace/> prefix to match:

	// Clear and focus the name field when the Add File button opens the dialog.
	var showAddFile = A.one('#<portlet:namespace/>showAddFileBtn');

	if (showAddFile) {
		showAddFile.after('click', function() {
			var addNameText = A.one('#<portlet:namespace/>addNameFile');

			if (addNameText) {
				addNameText.val('');
				addNameText.focus();
			}
		});
	}

	// Submit the hidden form when the dialog's Add File button is clicked.
	var addFileBtnVar = A.one('#<portlet:namespace/>addFileBtn');

	if (addFileBtnVar) {
		addFileBtnVar.after('click', function() {
			var fm = A.one('#<portlet:namespace/>form3');

			if (fm) {
				fm.submit();
			}
		});
	}

The first chunk is used to set focus on the name field after the dialog is displayed.  The second chunk triggers the submit of the form when the user clicks the okay button in the dialog.

The highlight of this code is that we get a modal dialog without really having to write much javascript, any custom JSP tags, etc.  We basically only had to add the markup that fleshes out the dialog content.

And I think that's really the essence of the Lexicon stuff; I think it's all going to work out to be a "standard" set of tags and attribute decorations that will render the UI, plus we'll have some code to put in at the JSP level to bind into our regular portlet code.

Conclusion

So here we are at the end of part 6 and I think I've covered it all...

We now have a finished project that satisfies all of our original requirements:

  • Must run on Liferay 7 CE GA2 (or newer).  Since we leveraged all Liferay 7 tools and are building module jars, we're definitely Liferay 7 compatible.
  • Must use Gradle for the build tool.  We set this up in part 2 of the blog series using the blade command line tool.
  • Must be full-on OSGi modules, no legacy stuff.  We also started this in part 2 and continued the module development through all other parts.
  • Must leverage Lexicon.  This was done in our modal dialog just introduced above.
  • Must leverage the new Configuration facilities.  The configuration facilities were added in part 5 as part of the initial portlet setup.
  • Must leverage the new Liferay MVC portlet framework.  The bulk of the Liferay MVC implementation was added in this blog part.
  • Must provide new Panel Application (side bar) support.  This was covered in part 5 of the blog series.

The original requirements have been satisfied, but how does it look?  Here are some screenshots to whet your appetite:

Main View

The main view lists files/folders that can be acted upon.  The path shows that I'm in /tomcat-8.0.32 but actually I'm off somewhere else in the filesystem.  Remember in a previous part I was using a "root path" to constrain filesystem access?  This shows that I am within a view sandbox that I cannot just sneak my way out of.  Even though we're exposing the underlying filesystem, we don't want to just throw out all semblances of security.

Add File Dialog

Our Lexicon-based modal dialog for adding a new file.

Upload File Dialog

The modal dialog even works for a file upload.

View File

The file view component is provided by the AUI ACE editor component.

Edit File

The edit file component is also provided by the ACE editor.

While not shown, the configuration panel allows the admin to set the "root path", enable/disable adds, uploads, downloads, deletes and edits.

That's pretty much it.  Hope you enjoyed the multi-part blog.  Feel free to comment below or, better yet, launch a discussion in the forums.

And remember, you can find the source code in github: https://github.com/dnebing/filesystem-access

Liferay 7 Development, Part 5

Technical Blogs 2016/09/14 Posted by David H Nebinger

Introduction

In the first four parts we have introduced our project, laid out the Liferay workspace to create our modules, defined our DS service API and have just completed our DS service implementation.

It's now time to move on to starting our Filesystem Access Portlet.  With everything I want to do in this portlet, it's going to span multiple parts.  In this part we're going to start the portlet by tackling some key parts.

Note that this portlet is going to be a standard OSGi module, so we're not building a portlet war file or anything like that.  We're building an OSGi portlet module jar.

Configuration

So configuration is probably an odd place to start, but it is key to the portlet design.  It basically defines the configurable parts of our portlet: the fields we'll allow an administrator to set, which in turn drive the rest of the portlet.  Personally I've always found it easier to build in the necessary flexibility up front rather than getting far down the road of portlet development and trying to retrofit it later on.

For example, one of our configurable items is what I'm calling the "root path".  The root path is a fixed filesystem path that constrains what the users of the portlet can access.  This constraint is enforced at all levels; it forms a layer of protection to ensure folks are not creating/editing files outside of this root path.  By starting with this as a configuration point, the rest of the development has to take it into account.  And we've seen this already in the DS API and service presented in the previous parts - every method in the API has the rootPath as the first argument (yes, I had my configuration figured out before I started any of the development).

So let's review our configuration elements:

  • rootPath: The root path that constrains all filesystem access.
  • showPermissions: A flag for whether to show permissions or not.  On Windows systems permissions don't really work, so this flag can remove the non-functional permissions column.
  • deletesAllowed: A flag that determines whether files/folders can be deleted.
  • uploadsAllowed: A flag that determines whether file uploads are allowed.
  • downloadsAllowed: A flag that determines whether file downloads are allowed.
  • editsAllowed: A flag that determines whether inline editing is allowed.
  • addsAllowed: A flag that determines whether file/folder additions are allowed.
  • viewSizeLimit: A size limit that determines whether a file can be viewed in the browser; this can impose an upper limit on generated HTML fragment size.
  • downloadableFolderSizeLimit: The size limit for downloading folders.  Since folders are zipped live out of the filesystem, this can be used to ensure server resources are not overwhelmed creating a large zip stream in memory.
  • downloadableFolderItemLimit: The file count limit for downloadable folders.  This too defines an upper limit on server resource consumption.

Seeing this list and understanding how it will affect the interface, it should be pretty clear it's going to be much easier building that into the UI from the start rather than trying to retrofit it in later.
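
As a quick illustration of what this means for the UI, a flag like deletesAllowed ends up wrapping chunks of the view markup (a sketch only; it assumes init.jsp exposes the config instance as a variable named config):

<c:if test="<%= config.deletesAllowed() %>">
	<button class="btn btn-default" type="button"><liferay-ui:message key="delete" /></button>
</c:if>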

In previous versions of Liferay we would likely be using portlet preferences for these options, but since we're building for Liferay 7 we're going to take advantage of the new Configuration support.

We're going to start by creating a new package in our portlet, com.liferay.filesystemaccess.portlet.config (current Liferay practice is to put the configuration classes into a config package in your portlet project).

There are a bunch of classes that will be used for configuration, let's start with the central one, the configuration definition class FilesystemAccessPortletInstanceConfiguration:

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

/**
 * class FilesystemAccessPortletInstanceConfiguration: Instance configuration for
 * the portlet configuration.
 * @author dnebinger
 */
@ExtendedObjectClassDefinition(
	category = "platform",
	scope = ExtendedObjectClassDefinition.Scope.PORTLET_INSTANCE
)
@Meta.OCD(
	localization = "content/Language",
	name = "FilesystemAccessPortlet.portlet.instance.configuration.name",
	id = "com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration"
)
public interface FilesystemAccessPortletInstanceConfiguration {

	/**
	 * rootPath: This is the root path that constrains all filesystem access.
	 */
	@Meta.AD(deflt = "${LIFERAY_HOME}", required = false)
	public String rootPath();

	// snip
}

There's a lot of stuff here, so let's dig in...

The @Meta annotations are from BND and define meta info on the class and the members.  The OCD annotation on the class defines the name of the configuration (using the portlet language bundle) and the ID for the configuration.  The ID is critical and is referenced elsewhere and must be unique across the portal, so the full class name is the current standard.  The AD annotation is used to define information about the individual fields.  We're defining the default values for the parameters and indicating that they are not required (since we have a default).
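
For example, one of the snipped flag fields follows the same pattern (the default value shown here is an assumption):

	/**
	 * deletesAllowed: Determines whether files/folders can be deleted.
	 */
	@Meta.AD(deflt = "true", required = false)
	public boolean deletesAllowed();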

The @ExtendedObjectClassDefinition is used to define the section of the System Settings configuration panel.  The category (language bundle key) defines the major category the settings will be set from, and the scope defines whether the config is per portlet instance, per group, per company or system-wide.  We're going to use portlet instance scope so different instances can have their own configuration.

The next class is the FilesystemAccessPortletInstanceConfigurationAction class, the class that handles submits when the configuration is changed.  Instead of showing the whole class, I'm only going to show parts of the file that need some discussion.  The whole class is in the project in Github.

/**
 * class FilesystemAccessPortletInstanceConfigurationAction: Configuration action for the filesystem access portlet.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS
	},
	service = ConfigurationAction.class
)
public class FilesystemAccessPortletInstanceConfigurationAction
	extends DefaultConfigurationAction {

	/**
	 * getJspPath: Return the path to our configuration jsp file.
	 * @param request The servlet request.
	 * @return String The path
	 */
	@Override
	public String getJspPath(HttpServletRequest request) {
		return "/configuration.jsp";
	}

	/**
	 * processAction: This is used to process the configuration form submission.
	 * @param portletConfig The portlet configuration.
	 * @param actionRequest The action request.
	 * @param actionResponse The action response.
	 * @throws Exception in case of error.
	 */
	@Override
	public void processAction(
		PortletConfig portletConfig, ActionRequest actionRequest,
		ActionResponse actionResponse)
		throws Exception {

		// snip
	}

	/**
	 * setServletContext: Sets the servlet context, use your portlet's bnd.bnd Bundle-SymbolicName value.
	 * @param servletContext The servlet context to use.
	 */
	@Override
	@Reference(
		target = "(osgi.web.symbolicname=com.liferay.filesystemaccess.web)", unbind = "-"
	)
	public void setServletContext(ServletContext servletContext) {
		super.setServletContext(servletContext);
	}
}

So the configuration action handler is actually a DS service.  It's using the @Component annotation and is implementing the ConfigurationAction service.  The property is the portlet name (so the portal can map the portlet to the correct configuration action handler).

The class returns its own path to the JSP file used to show the configuration options.  The path returned is relative to the portlet's web root.

The processAction() method is used to process the values from the configuration form submit.  When you review the code you'll see it is extracting parameter values and saving preference values.

The class uses an OSGi injection using @Reference to inject the servlet context for the portlet.  The important part to note here is that the value must match the Bundle-SymbolicName value from the project's bnd.bnd file.

There are three other source files in this package that I'll describe briefly...

The FilesystemAccessDisplayContext class is a wrapper class to provide access to the configuration instance object in the different portlet phases (i.e. Action or Render phases).  In some phases the regular PortletDisplay instance (a new object available from the ThemeDisplay) can be used to get the instance config object, but in the Action phase the ThemeDisplay is not fully populated, so this access fails.  The FilesystemAccessDisplayContext class provides access in all phases.

The FilesystemAccessPortletInstanceConfigurationBeanDeclaration class is a simple component to return the FilesystemAccessPortletInstanceConfiguration class so a configuration instance can be created on demand for new instances.
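
It is short enough to sketch in its entirety; modulo details in the GitHub source, it boils down to implementing the ConfigurationBeanDeclaration interface from portal-kernel:

@Component
public class FilesystemAccessPortletInstanceConfigurationBeanDeclaration
	implements ConfigurationBeanDeclaration {

	/**
	 * getConfigurationBeanClass: Returns the configuration interface so the
	 * portal can create configuration instances on demand.
	 * @return Class The configuration bean class.
	 */
	@Override
	public Class<?> getConfigurationBeanClass() {
		return FilesystemAccessPortletInstanceConfiguration.class;
	}
}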

The FilesystemAccessPortletInstanceConfigurationPidMapping class maps the configuration class (FilesystemAccessPortletInstanceConfiguration) with the portlet id to again support dynamic creation and tracking of configuration instances.

The Portlet Class

Portlet classes are much smaller than what they used to be under Liferay MVC.  Here is the complete portlet class:

/**
 * class FilesystemAccessPortlet: This portlet is used to provide filesystem
 * access.  Allows an administrator to grant access to users to access local
 * filesystem resources, useful in those cases where the user does not have
 * direct OS access.
 *
 * This portlet will provide access to download, upload, view, 'touch' and
 * edit files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.display-category=category.system.admin",
		"com.liferay.portlet.header-portlet-css=/css/main.css",
		"com.liferay.portlet.instanceable=false",
		"javax.portlet.display-name=Filesystem Access",
		"javax.portlet.init-param.config-template=/configuration.jsp",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user"
	},
	service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {
}

It has no body at all.  Can't get any simpler than that...

There is no longer a portlet.xml file or a liferay-portlet.xml file.  Instead, these values are all provided through the properties on the DS component for the portlet.

The Panel App

Our portlet is a regular portlet that admins will be able to drop on any page they want.  However, we're also going to install the portlet as a Panel App, the new way to add an entry to the control panel.  We'll do this using the FilesystemAccessPanelApp class:

/**
 * class FilesystemAccessPanelApp: Component which exposes our portlet as a control panel app.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"panel.app.order:Integer=750",
		"panel.category.key=" + PanelCategoryKeys.CONTROL_PANEL_CONFIGURATION
	},
	service = PanelApp.class
)
public class FilesystemAccessPanelApp extends BasePanelApp {

	/**
	 * getPortletId: Returns the portlet id that will be in the control panel.
	 * @return String The portlet id.
	 */
	@Override
	public String getPortletId() {
		return FilesystemAccessPortletKeys.FILESYSTEM_ACCESS;
	}

	/**
	 * setPortlet: Injects the portlet into the base class, uses the actual portlet name for the lookup which
	 * also matches the javax.portlet.name value set in the portlet class annotation properties.
	 * @param portlet
	 */
	@Override
	@Reference(
		target = "(javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS + ")",
		unbind = "-"
	)
	public void setPortlet(Portlet portlet) {
		super.setPortlet(portlet);
	}

}

The @Component annotation shows this is yet another DS service that implements the PanelApp class.  The panel.category.key value will put our portlet under the configuration section of the control panel and the high panel.app.order property will put our portlet near the bottom of the list.

The methods specified will ensure the base class has the Filesystem Access portlet references for the panel app to work.

The JSPs

We will update the init.jsp and add the configuration.jsp files.  Not much to see, pretty generic jsp implementations.  The init.jsp file pulls in all of the includes used in the other jsp files and copies the config values into local variables.  The configuration jsp file has the AUI form for all of the configuration elements.

Configure The Module

Our portlet is still in the process of being fleshed out, but we'll wrap up this part of the blog by configuring and deploying the module in its current form.

Edit the bundle file so it contains the following:

Bundle-Name: Filesystem Access Portlet
Bundle-SymbolicName: com.liferay.filesystemaccess.web

Bundle-Version: 1.0.0

Import-Package:\
	javax.portlet;version='[2.0,3)',\
	javax.servlet;version='[2.5,4)',\
	*

Private-Package: com.liferay.filesystemaccess.portlet
Web-ContextPath: /filesystem-access

-metatype: *

So here we're explicitly importing the portlet and servlet APIs with version ranges; this ensures the bundle wires to compatible versions of those packages at runtime, while the trailing * picks up imports for everything else we reference.

We also declare that our portlet classes are all private.  This means that other folks will not be able to use us as a dependency and include our classes.

An important addition is the Web-ContextPath key.  This value is used while the portlet is running to define the context path to portlet resources.  Given the value above, the portal will make our resources available as /o/filesystem-access/..., so for example you could go to /o/filesystem-access/css/main.css to pull the static css file if necessary.

Deployment

Well the modifications are all done.  At a command prompt, go to the modules/apps/filesystem-access-web directory and execute the following command:

$ ../../../gradlew build

You should end up with your new com.liferay.filesystemaccess.web-1.0.0.jar bundle file in the build/libs directory.  If you have a Liferay 7 CE or Liferay DXP tomcat environment running, you can drop the jar into the Liferay deploy folder.

Drop into the Gogo Shell and you can even verify that the module has started:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  487|Active     |   10|Filesystem Access Service (1.0.0)
  488|Active     |   10|Filesystem Access API (1.0.0)
  489|Active     |   10|Filesystem Access Portlet (1.0.0)

If they are all active, you're in good shape.

Viewing in the Portal

For the first time in this blog series, we actually have something we can add in the portal.  Log in as an administrator to your portal instance and go to the add panel.  Under the Applications section you should find the System Administration group and in there is our new Filesystem Access portlet.  Grab it and drop it on the page.  You should see it render in the page.

So we haven't really done anything to the main view, but let's test what we did add.  First go to the ... menu and choose the Configuration element.  Although it probably isn't pretty, you should see your configuration panel there.  You can change values, click save, exit and come back in to see that they stick, ...  Basically the configuration should be working.

Next pull up the left panel and go to the Control Panel to the Configuration section.  You should see the Filesystem Access portlet at the bottom of the list (well, position depends upon whatever else you have installed, but in a clean bundle it will be at the bottom).  You can click on the option and you'll get your portlet again, but just the welcome message.  Not very impressive, but we'll get there.

You can also go to the System Settings control panel and you'll see a Platform tab at the top.  When you click on Platform, you should see Filesystem Access.  Click on it for the default configuration settings (used as defaults for new portlet instances).  This form will look different from your configuration.jsp because it isn't using your JSP at all; it's generated from the FilesystemAccessPortletInstanceConfiguration class and the information in the @Meta annotations.

Another cool thing you can try: create a com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration.cfg file in the $LIFERAY_HOME/osgi/modules directory and define the default configuration values there.  The values in this file override the defaults from the @Meta annotations, giving you a deployable default configuration.
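
For example, a .cfg file with these (hypothetical) values would change the defaults for every new portlet instance:

rootPath=/opt/shared
showPermissions=false
deletesAllowed=false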

Conclusion

Well, our portlet is not done yet.  We have a good start on it, but we'll have yet more to add in the next part.  Remember the code for the project is up on Github and the code for this part is in the dev-part-5 branch.

In the next part we'll pick up on the portlet development and start building the real views.

Liferay 7 Development, Part 4

Technical Blogs 2016/09/14 Posted by David H Nebinger

Introduction

In part 3 of the blog, the API for the Filesystem Access project was fleshed out.

In this part of the blog, we'll create the service implementation module.

The Data Transfer Object

Now we get to implement our DTO object.  The interface is pretty simple; we just have to implement all of the methods and expose values from the data we retain.

The code will be available on GitHub so I won't go into great detail here.  Suffice it to say that for the most part it is exposing values from the underlying File object from the filesystem.
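
To give a flavor of it, the pattern is basically the following sketch (the FilesystemItem interface name and methods are stand-ins here; the real ones are defined in the api module on GitHub):

public class FilesystemItemImpl implements FilesystemItem {

	public FilesystemItemImpl(File file) {
		_file = file;
	}

	// Each getter just exposes a value from the underlying File object.
	@Override
	public String getName() {
		return _file.getName();
	}

	@Override
	public long getSize() {
		return _file.length();
	}

	@Override
	public boolean isDirectory() {
		return _file.isDirectory();
	}

	private final File _file;
}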

The Service

The service implementation is just as straightforward as the DTO.  It leverages the java.io.* packages and APIs and also uses com.liferay.portal.kernel.util.FileUtil for some supporting functions.  It also integrates the Liferay auditing mechanism to issue relevant audit messages.

The Annotations

It's really the annotations which will make the FilesystemAccessServiceImpl into a true DS service.

The first annotation is the Component annotation, and it is used like so:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * class FilesystemAccessServiceImpl: Implementation class of the FilesystemAccessService.
 *
 * @author dnebinger
 */
@Component(service = FilesystemAccessService.class)
public class FilesystemAccessServiceImpl implements FilesystemAccessService {

Here we are declaring that we have a DS component implementation that provides the FilesystemAccessService.  And boom, we're done.  We have just implemented a DS service.  Couldn't be easier, could it?

The other annotation import there is the Reference annotation.  We're going to be showing how to use the DS service in the next part of the blog, but we'll inject a preview here.

In the discussion of the service in the previous section, we mentioned that we were going to be integrating the Liferay audit mechanism so different filesystem access methods would be audited.  To integrate the audit mechanism, we need an instance of Liferay's AuditRouter service.  But how do we get the reference?  Well, AuditRouter is also implemented as a DS service.  We can have OSGi inject the service when our module is started using the Reference annotation such as:

/**
 * _auditRouter: We need a reference to the audit router
 */
@Reference
private AuditRouter _auditRouter;

And boom, we're done there too.
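
And to preview what the service does with that reference, routing an audit message looks something like this (the event type, arguments and surrounding variables are assumptions here; route() throws a checked exception, hence the try/catch):

	try {
		// companyId, userId, userName and target are assumed to be in scope.
		AuditMessage auditMessage = new AuditMessage(
			"DOWNLOAD", companyId, userId, userName,
			FilesystemItem.class.getName(), target);

		// Hand the message off to the injected router.
		_auditRouter.route(auditMessage);
	}
	catch (Exception e) {
		logger.error("Error routing audit message.", e);
	}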

OSGi is handling all of the heavy lifting for us.  One annotation exposes our class as a service component implementation and another annotation will inject services which we have a dependency on.

Configure The Module

So we're not quite finished yet.  We have the code done, but we should configure our bundle so we get the right outcome.

Edit the bundle file so it contains the following:

Bundle-Name: Filesystem Access Service
Bundle-SymbolicName: com.liferay.filesystem.access.svc
Bundle-Version: 1.0.0
-sources: true

The Liferay standard is to give a meaningful name for the bundle, but the symbolic name should be something akin to the project package.  You definitely want the symbolic name to be unique when it is deployed to the OSGi container.

Deployment

Well the modifications are all done.  At a command prompt, go to the modules/apps/filesystem-access-svc directory and execute the following command:

$ ../../../gradlew build

You should end up with your new com.liferay.filesystem.access.svc-1.0.0.jar bundle file in the build/libs directory.  If you have a Liferay 7 CE or Liferay DXP tomcat environment running, you can drop the jar into the Liferay deploy folder.

Drop into the Gogo Shell and you can even verify that the module has started:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  486|Active     |   10|Filesystem Access API (1.0.0)
  487|Active     |   10|Filesystem Access Service (1.0.0)

So we see that our module deployed and started correctly.  Both of our modules have been deployed and started successfully, so all is good.

When things go awry: if you've only deployed the service module and not the API module, you'll see something like:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  487|Installed  |   10|Filesystem Access Service (1.0.0)

This is really a bad outcome.  Installed simply means the bundle is there in the OSGi container, but it hasn't resolved and its services are not available.  We can try to start the module:

g! start 487
org.osgi.framework.BundleException: Could not resolve module: com.liferay.filesystem.access.svc [487]
  Unresolved requirement: Import-Package: com.liferay.filesystemaccess.api; version="[1.0.0,2.0.0)"

This is how you find out why your module isn't started.  These are the kinds of messages you'll see most often: some sort of missing dependency that you'll have to satisfy before your module will start.  The worst part is that you won't see these errors in the logs.  From the logs' perspective you just won't see a "STARTED" message for your module, and as we all know, the absence of a message is not a visible error that is easily spotted.

This current error is easy to fix - just deploy the api module.  As soon as it is started, OSGi will auto-start the service module; you won't have to start it yourself.  You can check that both modules are running by listing the bundles again, and you'll see they are both now marked as Active.

Conclusion

So we have our DS service completed.  The API module defining the service is out there and active.  We have an implementation of the service also deployed and active, too.  If there was a need for it, we could build alternative implementations of the DS service and deploy those also, but that's a discussion that should be saved for perhaps a different blog entry.

So we have our DS service, we just don't have a service client yet.  Well that's coming in the next part of the blog.  We're actually going to start building the filesystem access portlet and layer in our client code.  See you there...

 
