Disabling LPKG Index Validation

Technical Blogs June 21, 2017 By David H Nebinger

Just a quick blog today...

When you start up Liferay 7 CE / Liferay DXP, you'll often see startup pause while the LPKGs are validated. This is a security measure used to verify that the LPKG files have not been tampered with.

In development, though, this delay is just painful.

Fortunately it can be disabled. Add the following line to portal-ext.properties:

module.framework.properties.lpkg.index.validator.enabled=false

That's all there is to it.

Note that I probably would not do this in my production environments. It is, after all, in there to protect your environment. Disabling it in production removes that small bit of protection and doesn't seem wise.

Fixing Module Package Access Modifiers

Technical Blogs June 16, 2017 By David H Nebinger

If you're a Java Architect or Senior Developer, you know just how important the Java access modifiers are.

Each choice about what to make public, protected, private or package protected is important from an architectural perspective. If you make the wrong choices you can expose too much of your implementation or not enough, you can give subclasses unlimited ability to change internal functionality or little access at all.

If you've ever worked on a library or framework project, either public or a company internal project, you know that these decisions are even more important. With these types of projects, if you make the wrong decision about access modifiers you can end up with irate users or an unused library.

So there exist two different sets of rules, one for app developers and one for lib developers.  App developers are going to use a lot of private to hide stuff and public to expose; they'll define pojos w/ getters but no setters and perhaps a package-private constructor to initialize final fields. Methods are typically public or private, and protected only comes into play if there are known plans to subclass.
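For example, an app-developer-style pojo might look something like this (a hypothetical class, just to illustrate the pattern):

public class Account {

	private final long accountId;

	// package-private constructor: only classes in this package can create one
	Account(long accountId) {
		this.accountId = accountId;
	}

	// public getter, no setter; the value is effectively read-only
	public long getAccountId() {
		return accountId;
	}
}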

Library developers swing the other way, allowing for every class to potentially be subclassed, extensive use of protected over private, no use of package protected, etc. Basically implementation details will be protected from those using the class yet exposed for subclasses to extend or override.

Rules for lib developers are not hard and fast; some libraries certainly do a better job than others of exposing the necessary access points.

I'm sure most of us have been there... Using a class in someone's jar where they declare, for example, a private field with a public getter but no setter, resulting in a class that is difficult to extend and customize. Then we have to break out our Reflection skills to access the field, change the access and update the value. Obviously we don't want to do these things, but we get forced into it because the library developer used application developer rules when defining the access modifiers.

OSGi Access Modifiers

OSGi has its own set of "access modifiers" for bundles.  We've seen those in the bnd.bnd files: at a package level you can choose to export packages or keep them private.

Choices you make along these lines affect what you can extend/override in other bundles. If you mark a package as private, the classes are not available to another bundle to leverage and use.
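For example, a hypothetical bundle's bnd.bnd might export only its api package:

Bundle-Name: Example Widget
Bundle-SymbolicName: com.example.widget
Bundle-Version: 1.0.0
Export-Package: com.example.widget.api

Anything not exported (com.example.widget.internal, say) stays private to the bundle, so no other bundle can load or extend those classes.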

Just like app vs lib developer access modifier rules, there is a similar distinction for OSGi application bundle developer rules and OSGi library bundle developer rules.  For the app bundle developer, packages are exported if they can be used by other modules, otherwise they are private to protect them. For lib bundle developers, you're going to export pretty much every package because you can never know how your library module will be used.

How Liferay Gets This Wrong

Probably my biggest complaint with the Liferay 7 CE / Liferay DXP modules is that I believe the developers were creating modules as though they are app bundle developers when, in fact, they should have been library bundle developers.

For example, the Liferay chat portlet... The Liferay chat portlet does not export a single package; every package in the module is private. As an application portlet bundle developer, this is probably exactly the decision I would make to protect my code; it won't need to be extended or overridden, and as the developer, if that need comes up in the future I can just make the change myself.

But the Liferay developers, they should not have built it this way in my opinion. Me, I may have a need to make some significant changes to the chat portlet, not just for JSP changes but perhaps also some logic. From that point of view, the Liferay chat portlet is a library bundle, a "base" bundle that I want to be able to extend or override. The com.liferay.chat.web.portlet.ChatPortlet is not full Liferay MVC, so all business logic is tied up in that class. If I want to customize the chat portlet, I need to copy the class and make my change and hope that Liferay doesn't update the portlet.

In order to complete my customization, I might need to change a method in the ChatPortlet itself. Sure, with OSGi I can replace the OOTB portlet class with my own, but I really want to be able to do something like:

@Component(...)
public class MyChatPortlet extends ChatPortlet {...}

This would allow me to replace the OOTB portlet using a higher service ranking for mine, yet I can keep as much of the original logic as-is without taking over responsibility for maintaining the full class myself.

For another concrete example, take the Liferay Login portlet.  This portlet is full-on Liferay MVC so, if I want to override the create account action, I just need to register an instance of MVCActionCommand with the right mvc.command.name and a higher service ranking. But again, since most of the packages in the Liferay Login portlet are private, I cannot do something like:

@Component(...)
public class CustomCreateAccountMVCActionCommand extends CreateAccountMVCActionCommand {...}

If my requirement is just to do some additional work in the addUser() method, I don't want to copy the whole Liferay class just to be able to tweak one method.  What happens when the next release comes out? I'd have to copy the whole class in and release again. At least by extending I only have to worry about keeping in sync the stuff I change; everything else is inherited.

Can We Fix It?

As Bob the Builder says, "Yes We Can!", and it turns out it is really, really easy!

Let's say we want to tackle being able to extend Liferay's CreateAccountMVCActionCommand class. Our requirement is that we need to log whenever an account is being created. I know, pretty lame, but the point here is to extend a class which Liferay didn't plan on our extending - once we're over that hump, any additional requirements will be easy to tackle.

So let's get started. The first thing we need is a Fragment bundle. That's right, you read correctly, a Fragment bundle.

blade create -t fragment -h com.liferay.login.web -H 1.0.0 open-liferay-web

That gets us started. We need to open the bnd.bnd file and we're going to be doing two basic things:

  1. Copy in most of the stuff from the original bnd.bnd file. The only change we want to make is with the exported packages, so we want to keep everything else.
  2. Change the exported package line (or add one) to include the packages we want to export, and we'll also change to a version range.

I've gone ahead and done this, and here's what I ended up with:

Bundle-Name: Open Liferay Login Web
Bundle-SymbolicName: open.login.web
Bundle-Version: 1.1.19
Fragment-Host: com.liferay.login.web;bundle-version="[1.0.0,2.0.0)"

Export-Package: com.liferay.login.web.constants,\
	com.liferay.login.web.internal.portlet.action
Import-Package:\
	javax.net.ssl,\
	\
	*
Liferay-Releng-Module-Group-Description:
Liferay-Releng-Module-Group-Title:

So you can see that I satisfied #1 above, I've kept the import packages and the Liferay-Releng guys.

For #2, my export package statement was updated so now we're going to be exporting the com.liferay.login.web.internal.portlet.action package. This will allow us to subclass Liferay's action command by making it visible.

I also tweaked the Fragment-Host version. Instead of using a single version, I've changed it to a version range. Why? Because this fragment bundle doesn't care what version is actually deployed, we're just planning on exporting the package regardless of version.

And that's it! See, I said it was easy. You don't really need any other files, you're basically just going to be building and deploying a jar w/ the overriding OSGi manifest information.

Testing

Testing is also kind of easy. We know we want to extend the CreateAccountMVCActionCommand, so we just create a bundle and specify the contents. I did that already, too, and here's what I got:

@Component(
	property = {
		"javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN,
		"javax.portlet.name=" + LoginPortletKeys.LOGIN,
		"mvc.command.name=/login/create_account",
		"service.ranking:Integer=100"
	},
	service = MVCActionCommand.class
)
public class CustomCreateAccountMVCActionCommand extends CreateAccountMVCActionCommand {

	@Override
	protected void addUser(ActionRequest actionRequest, ActionResponse actionResponse) throws Exception {
		_log.info("About to create a new account.");

		super.addUser(actionRequest, actionResponse);
	}

	@Reference(unbind = "-")
	protected void setLayoutLocalService(
		LayoutLocalService layoutLocalService) {
		super.setLayoutLocalService(layoutLocalService);
	}

	@Reference(unbind = "-")
	protected void setUserLocalService(UserLocalService userLocalService) {
		super.setUserLocalService(userLocalService);
	}

	@Reference(unbind = "-")
	protected void setUserService(UserService userService) {
		super.setUserService(userService);
	}

	@Reference(unbind = "-")
	protected void setAuthenticatedSessionManager(AuthenticatedSessionManager sessionMgr) {
		update("_authenticatedSessionManager", sessionMgr);
	}

	@Reference(unbind = "-")
	protected void setListTypeLocalService(ListTypeLocalService listTypeLocalService) {
		update("_listTypeLocalService", listTypeLocalService);
	}

	@Reference(unbind = "-")
	protected void setPortal(Portal portal) {
		update("_portal", portal);
	}

	protected void update(final String fieldName, final Object value) {
		try {
			Field f = getClass().getSuperclass().getDeclaredField(fieldName);

			f.setAccessible(true);

			f.set(this, value);
		} catch (IllegalAccessException e) {
			_log.error("Error updating " + fieldName, e);
		} catch (NoSuchFieldException e) {
			_log.error("Error updating " + fieldName, e);
		}
	}

	private static final Log _log = LogFactoryUtil.getLog(CustomCreateAccountMVCActionCommand.class);
}

Oh, crap. What is all of this junk?

Well, first let's get the necessary stuff out of the way. Our @Component annotation has the necessary properties and service ranking so OSGi will use our action command class, which now extends Liferay's CreateAccountMVCActionCommand. We also have the overriding addUser() method to log when we are about to create an account, so we have satisfied our requirement.

The rest of the class, well, that is necessary to inject the OSGi references that the super class expects. Some of these are easy, such as the layout service and the two user services. The others are harder: the authenticated session manager, the list type service and the portal instance.

Remember how I started this blog saying that the rules for a library developer are different from those for an app developer, and that when you have a bad library class you're left using Reflection to update a super class? Yep, here's an example. Now, I can't really fault Liferay for this because they created the module as though they were app module developers, so the fact that they used app developer rules here is no surprise. Fortunately, though, I could use Reflection to get to the super class fields and update them appropriately.

Conclusion

So, when we build and deploy these two modules and create a new account, we find we have been successful:

13:30:31,360 INFO  [http-nio-8080-exec-6][CustomCreateAccountMVCActionCommand:39] About to create a new account.

Through a simple (really simple) fragment bundle we were able to export a package that Liferay did not export. From there, we can extend classes from that package to introduce our own modifications without having to copy everything from the original.

It's important to note the hurdles we had to bypass for the OSGi stuff, especially the Reflection usage to update the super class.

If you're going to go down this path, you will be doing things like this. There's no way around it; not all @Reference usage in Liferay classes is tied to methods. When it is, great, but when it's not you'll have to peel the fields open yourself.

Hope this helps you on your Liferay 7 CE / Liferay DXP developer journey!

 

REST Custom Context Providers

Technical Blogs June 16, 2017 By David H Nebinger

So a question came up today about how to access the current user inside a REST method body.

My friend, Andre Fabbro, was trying to build out the following application:

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName() {

        User user = ????;

        return user.getFullName();
    }
}

He was stuck trying to get the current user in order to finish the whoami handler.

So, being a long-time Liferay guy, I fell back on what I knew, and I pointed him towards the PrincipalThreadLocal and the getName() method to get the current user id.  Of course ThreadLocals kind of smell; they're almost like global variables, but I knew it would work.
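That approach would look something like this (a sketch of the ThreadLocal route; PrincipalThreadLocal and GetterUtil come from portal-kernel, and the user service is assumed to be @Reference injected as in the example at the end of this post):

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName() {

        // PrincipalThreadLocal holds the current user id as a string
        long userId = GetterUtil.getLong(PrincipalThreadLocal.getName());

        User user = _userLocalService.fetchUser(userId);

        return user.getFullName();
    }

    @Reference
    private UserLocalService _userLocalService;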

My other friend, Rafael Oliveira, showed us both up and introduced me to the new concept of a custom context provider. You see, he knew that sometime soon a new module, com.liferay.portal.workflow.rest was coming and it was going to bring with it a new class, com.liferay.portal.workflow.rest.internal.context.provider.UserContextProvider. He did us one better by providing an implementation of Andre's app using @Context and the new UserContextProvider:

import javax.ws.rs.core.Context;

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName(@Context User user) {
        return user.getFullName();
    }
}

I was kind of blown away having learned something completely new with DXP and I needed to know more.

Before going on, though, all credit for this blog post goes to Rafael, all I'm doing here is putting it to electronic paper for us all to use for Liferay REST application implementations.

Basic @Context Usage

So when you create a new module using "blade create -t rest myapp", BLADE starts a new JAX-RS-based RESTful application that you can build and deploy as an OSGi module. Using JAX-RS standard conventions, you can build out your RESTful methods using common annotations and (hopefully) best practices.

JAX-RS actually provides the javax.ws.rs.core.Context annotation, which is used to inject common servlet-based values. Using @Context, you can define method parameters that are not part of the RESTful call but are injected by the JAX-RS framework, kind of like the automagic ServiceContext injection in ServiceBuilder remote services.

Out of the box, JAX-RS Context annotation supports injecting the following parameters in methods:

  • javax.ws.rs.core.Application - Provides access to metadata information on the JAX-RS application.
  • javax.ws.rs.core.UriInfo - Provides access to application and request URI information.
  • javax.ws.rs.core.Request - Provides access to the request used for the method.
  • javax.ws.rs.core.HttpHeaders - Provides access to the HTTP header information for the request.
  • javax.ws.rs.core.SecurityContext - Provides access to the security-related information for the request.
  • javax.ws.rs.ext.Providers - Provides runtime lookup of provider instances.

To use these, you just add appropriately decorated parameters to the REST method. If necessary, we could easily add a method to the application above such as:

@GET
@Path("/neato")
@Produces(MediaType.APPLICATION_JSON)
public String getStuff(@Context Application app, @Context UriInfo uriInfo, @Context Request request,
        @Context HttpHeaders httpHeaders, @Context SecurityContext securityContext, @Context Providers providers) {
    ....
}

The above getStuff() method will be handling all requests to the /neato path, but all of the parameters are injected, none are provided in the URL or as parameters; they are injected automagically by JAX-RS.

Custom @Context Usage

So these types are really nice, but they really don't do anything for our Liferay integration. What would be really cool is if we could use @Context to inject some Liferay parameters.

And we can! As Rafael pointed out, there is a new module in the pipeline for workflow to invoke RESTful methods on the backend. The new module is the portal-workflow-rest project. I believe this is going to be part of the upcoming GA4 release, but don't hold me to that.

Once available, this project will provide three new types that can be injected into RESTful method parameters:

  • com.liferay.portal.kernel.model.Company - The Liferay Company associated with the request.
  • java.util.Locale - The locale associated with the request.
  • com.liferay.portal.kernel.model.User - The Liferay User associated with the request.

So, like the out of the box parameters, we could extend our getStuff() method with these parameters too:

@GET
@Path("/neato")
@Produces(MediaType.APPLICATION_JSON)
public String getStuff(@Context Application app, @Context UriInfo uriInfo, @Context Request request,
        @Context HttpHeaders httpHeaders, @Context SecurityContext securityContext, @Context Providers providers,
        @Context Company company, @Context Locale locale, @Context User user) {
    ....
}

Just pick from all of these different available types to get the data you need and run with it.

Remember these will not be available in GA3 or in DXP just yet - I'm sure they'll make it in soon, but I'm not aware of the schedule for either product line.

Writing Custom Context Providers

So to me, the biggest value of this new module is this package: https://github.com/liferay/liferay-portal/tree/master/modules/apps/forms-and-workflow/portal-workflow/portal-workflow-rest/src/main/java/com/liferay/portal/workflow/rest/internal/context/provider

Why? Because they expose how we can write our own custom context provider implementations so we can inject custom parameters into REST methods.

Say, for example, that we want to inject a ServiceContext instance. I'm not sure if the portal source already has one of these fellas, but if so let's pretend it doesn't exist and we want to write our own. Where are we going to start?

So first you need a project; we'll create a blade workspace:

blade init custom-context-provider

We also need a new module to develop, so we'll change to the custom-context-provider/modules directory to create an initial module:

blade create -t api -p com.dnebinger.rest.internal.context.provider service-context-context-provider

This will give us a nearly empty API module. We'll clean out most of the generated files and end up with the com.dnebinger.rest.internal.context.provider.ServiceContextContextProvider class:

package com.dnebinger.rest.internal.context.provider;

import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.service.ServiceContext;
import com.liferay.portal.kernel.service.ServiceContextFactory;
import org.apache.cxf.jaxrs.ext.ContextProvider;
import org.apache.cxf.message.Message;
import org.osgi.service.component.annotations.Component;

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.ext.Provider;

/**
 * class ServiceContextContextProvider: A custom context provider for ServiceContext instantiation.
 *
 * @author dnebinger
 */
@Component(immediate = true, service = ServiceContextContextProvider.class)
@Provider
public class ServiceContextContextProvider implements ContextProvider<ServiceContext> {
	/**
	 * Creates the context instance
	 *
	 * @param message the current message
	 * @return the context
	 */
	@Override
	public ServiceContext createContext(Message message) {
		ServiceContext serviceContext = null;

		// get the current HttpServletRequest for building the service context instance.
		HttpServletRequest request = (HttpServletRequest) message.getContextualProperty(PROPKEY_HTTP_REQUEST);

		try {
			// now we can create a service context
			serviceContext = ServiceContextFactory.getInstance(request);

			// done!
		} catch (PortalException e) {
			_log.warn("Failed creating service context: " + e.getMessage(), e);
		}

		// return the new instance.
		return serviceContext;
	}

	private static final String PROPKEY_HTTP_REQUEST = "HTTP.REQUEST";

	private static final Log _log = LogFactoryUtil.getLog(ServiceContextContextProvider.class);
}

So this is pretty much the whole module. Easy, huh?

Conclusion

Now that we can create custom context providers, we can use this one for example in the original code:

@ApplicationPath("/myapp")
@Component(immediate = true, service = Application.class)
public class MyApplication extends Application {

    @GET
    @Path("/whoami")
    @Produces(MediaType.APPLICATION_JSON)
    public String getUserFullName(@Context ServiceContext serviceContext) {

        User user = _userLocalService.fetchUser(serviceContext.getUserId());

        return user.getFullName();
    }

    @Reference
    private UserLocalService _userLocalService;
}

These custom context providers become the key for being able to create and inject non-REST parameters into your REST methods.

Check out the code from GitHub: https://github.com/dnebing/custom-context-provider

Enjoy!

Resolving Missing Components

Technical Blogs June 15, 2017 By David H Nebinger

So if you've started developing for Liferay 7 CE / Liferay DXP, I'm sure you've been hit at one point or another with the old "unresolved reference" issue that prevents your bundle from starting.

You would have seen it by now: the Gogo shell where you list the bundles and find your bundle stuck in the Installed state. You try starting it, Gogo tells you about the unresolved reference you have, and you're stuck going back to your bnd.bnd file to resolve the dependency issue.

This is so common, in fact, that I wrote a blog post to help resolve them: https://web.liferay.com/web/user.26526/blog/-/blogs/osgi-module-dependencies

While this will be your issue more often than not, there's another form of "unsatisfied reference" problem that leads to missing components rather than non-started bundles.

The Case of the Missing Component

You can have a case where your module starts but your component is not available. This sounds kind of strange, right? You've taken the time to resolve all of those 3rd party dependency jars, the direct and transitive ones, and your bundle starts cleanly and there are no errors.

But your component is just not available. It seems to defy logic.

So, here's the skinny... Any time your component has an @Reference with the default (mandatory) cardinality, you are basically telling OSGi that your component must have the reference injected or else it cannot be used.

That's where this comes from - you basically have an unsatisfied reference to some object that was supposed to be @Reference injected but could not be found; since the reference is missing, your component cannot start and it is therefore not available.

There's actually a bunch of different ways that this scenario can happen:

  • The @Reference refers to an object from another module that was not deployed or has not started (perhaps because of normal unresolved references). This is quite common if you deploy your Service Builder API module but forget to deploy the service module.
  • You have built in circular references (more below).
  • You use a target filter for the @Reference that is too narrow or incorrect, such that suitable candidates cannot be used.

In all of these cases you'll be stuck with a clean component, just one that cannot activate because of unsatisfied references.

Sewing (@Reference) Circles

Reference circles are real pains to resolve, but they arise out of your own code. Understanding reference circles is probably best started through an example.

Let's say we are building a school planning system. We focus on two major classes, a ClassroomManager and an InstructorManager. The ClassroomManager has visibility on all classrooms and is aware of the schedule and availability. The InstructorManager has the instructor roll and is aware of their schedule and availability.

It would be quite natural for a ClassroomManager to use an InstructorManager to perhaps find an available instructor to substitute in a class. Likewise it would be natural for an InstructorManager to need a ClassroomManager to try to reschedule a class to another time in an available room.

So you might find yourself creating the following classes:

@Component
public class ClassroomManager {

  @Reference
  private InstructorManager instructorManager;
}

@Component
public class InstructorManager {

  @Reference
  private ClassroomManager classroomManager;
}

If you look at this code, it seems quite logical.  Each class has a need for another component, so it has been @Reference injected. Should be fine, right?

Well actually this code has a problem - there's a circular reference.

When OSGi is processing the ClassroomManager class, it knows that the class cannot activate unless there's an available, activated InstructorManager instance to inject. Which there isn't yet, so this class cannot activate.

When OSGi is processing the InstructorManager class, it knows that the class cannot activate unless there's an available, activated ClassroomManager instance to inject. Which there isn't yet, so this class cannot activate.

But wait, you say, we just did the ClassroomManager, we should be fine! We're stuck, though, because the ClassroomManager could not activate because of the unsatisfied reference.

This is your reference circle - neither component can start because they are circularly dependent upon each other.

Resolving Component Unsatisfied References

Resolution is not going to be the same for every unsatisfied component reference.

If the problem is an undeployed module, resolving is as simple as deploying the missing module.

If the problem is an unstarted module, resolving is a matter of starting the module (perhaps fixing whatever problem that might be preventing it from starting in the first place).

For a reference target filter issue, well those are going to be challenging. You'll have to figure out if the target is not right or too narrow and make appropriate adjustments.

Circular references can be resolved by refactoring code - instead of big ClassroomManager and InstructorManager classes, perhaps use a number of smaller classes that don't result in similar reference circles.

Another option is to use different ReferenceCardinality, ReferencePolicy and ReferencePolicyOption values (see my blog post on annotations, specifically the section on the @Reference annotation). You could switch both references from MANDATORY to OPTIONAL cardinality, use DYNAMIC for the ReferencePolicy, ...  The right combination is usually dictated by what the code can handle and requires, but the outcome is that the components can activate without the initial references being satisfied; once activated, the references are injected as they become available.
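A minimal sketch of breaking the circle on one side might look like this (annotations from org.osgi.service.component.annotations; the code has to be able to cope with the field being null until an InstructorManager shows up):

@Component
public class ClassroomManager {

  // optional + dynamic + greedy: the component activates even when no
  // InstructorManager is available yet, and the field is injected (or cleared)
  // as instances come and go, so it must be volatile and may be null
  @Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policy = ReferencePolicy.DYNAMIC,
    policyOption = ReferencePolicyOption.GREEDY
  )
  private volatile InstructorManager instructorManager;
}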

How Do You Fix What You Can't Find?

This, for me, has been kind of a challenge. Of course the answer lies within one of the Gogo shell commands, but I've always found it hard to separate the wheat (the components with unsatisfied references) from the chaff (the full output with all component status details from the bundle).
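If you do want to go the Gogo route, the Felix SCR commands are the ones to reach for (assuming the scr commands are installed in your Gogo shell, which they typically are with Liferay's Felix-based runtime):

scr:list
scr:info <component name or id>

The first lists every component and its state; the second dumps the details for a single component, including any unsatisfied references.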

For me, I've found it easiest to use TripWire CE or TripWire DXP. After going to the TripWire panel in the control panel, click on the Take Snapshot card and you can actually drill into and view all unsatisfied references.  The following screen capture is an actual view I used to resolve an unsatisfied reference issue:

The issue I was looking at was the first unsatisfied reference line for my com.liferay.metrics.health.portal.layouts.LayoutHealthCheck component. It just wouldn't activate and I didn't know why; I knew it wasn't available, it wasn't doing its thing, but the module was successfully started so what could the problem be?

Well the value for the key spells it out - I have two unsatisfied references for two different gauge instances. And you know what? It is totally true. Those two missing references happen to be the 2nd and 3rd lines in the list above, and they in turn had unsatisfied references that needed to be resolved, ...

Conclusion

The point here is that in order to resolve unsatisfied references, you need to be able to identify them. Once you can identify the problem, you can resolve them and move on.

For me, I've found it easiest to use TripWire CE or TripWire DXP to identify the unsatisfied references; it does so quickly and easily, and doesn't require memorizing Gogo shell commands to get it done.

 

Securing The /api/jsonws UI

Technical Blogs June 12, 2017 By David H Nebinger

The one thing I never understood was why the UI behind the /api/jsonws is publicly viewable.

I mean, there's lots of arguments for it to be secured:

  • Exposing all of your web service APIs exposes attack vectors to hackers. Security by obscurity is often one of the best and easiest forms of security that you can have¹.
  • Just because users may have permission to do something, that doesn't mean you want them to. They might not be able to get to a portlet to delete some web content or document, but if they can get to /api/jsonws and know anything about Liferay web services and parameters, they might have fun trying to do it.
  • It really isn't something that non-developers should ever be looking at.

I'm sorry, but I've tried really hard and I can't think of a single use case where making the UI publicly available is a good thing. I guess there might be one, but at this point it seems like it should just be an edge case and not a primary design requirement.

A client recently asked about how to secure the UI, and although I had wondered why it wasn't secured, I decided it was time for me to figure out how to do it.

The UI Implementation

It was actually a heck of a lot easier than what I thought it was going to be.

I had in my head some sort of complicated machinery that was aware of all locally deployed remote services and perhaps was leveraging reflection and stuff to expose methods and parameters and ...

But it wasn't that complicated at all.

The UI is implemented as a basic JSP-based servlet. Whether in Liferay 6.2 or Liferay 7 CE or Liferay DXP, there are core JSPs in /html/portal/api/jsonws that implement the complete UI servlet code.

For incoming remote web service calls, the portal will look at the request URI - if it is targeting a specific remote service, the request is handed off to the service for handling. However, if it is just /api/jsonws, the portal passes the request to the /html/portal/api/jsonws/index.jsp page for presentation.

Securing the UI

All I'm going to do is tweak the JSP to make sure the current user is an OmniAdmin before they get to see the UI. Nothing fancy, I admit, but it gets the job done. If you have more complicated requirements, you're free to use this blog as a guide but you're on your own for implementing them.

My JSP change is basically going to be wrapping the content area to require OmniAdmin to view the content, otherwise you will see a permission failure. Here's what I came up with:

<div id="content">
	<%-- Wrap content in a permission check --%>
	<c:if test="<%= permissionChecker.isOmniadmin() %>">
		<div id="main-content">
			<aui:row>
				<aui:col cssClass="lfr-api-navigation" width="<%= 30 %>">
					<liferay-util:include page="/html/portal/api/jsonws/actions.jsp" />
				</aui:col>

				<aui:col cssClass="lfr-api-details" width="<%= 70 %>">
					<liferay-util:include page="/html/portal/api/jsonws/action.jsp" />
				</aui:col>
			</aui:row>
		</div>
	</c:if>
	<c:if test="<%= !permissionChecker.isOmniadmin() %>">
		<liferay-ui:message key="you-do-not-have-permission-to-view-this-page" />
	</c:if>
</div>

I took this code mostly from the DXP version of the file; use the file from the version of Liferay you have so you don't introduce some sort of source issue. The wrapping <c:if> permission checks are the real code that I added, so you can work them into your override.

Creating the Project

So regardless of version, we're going to be doing a JSP hook, they just get implemented a little differently.

For Liferay 6.2, it is just a JSP hook. Build a JSP hook and pull in the original /html/portal/api/jsonws/index.jsp file and edit in the change outlined above. I'm not going to get into details for building a 6.2 JSP hook, those have been rehashed a number of times now, so there's no reason for me to rehash it again. Just build your JSP hook and deploy it and you should be fine.

For Liferay 7 CE, well let me just say that what I'm about to cover for DXP is how you will be doing it once GA4 is released.  Until then, the path I'm using below won't be available to you.

For Liferay DXP (and CE GA4+), we'll follow the instructions from https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-core-jsps to override the core JSPs using an OSGi module.

So first I created a new workspace using the command "blade init api-jsonws-override".

Then I entered the api-jsonws-override/modules directory and created my new module using the command "blade create -t service -p com.liferay.portal.jsonwebservice.override api-jsonws-override".

I don't like writing code I don't have to, so the first thing I did was add a dependency in my build.gradle file on a module which has a base implementation of the CustomJspBag:

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.impl", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.6.0"
    compileOnly group: "com.liferay", name: "com.liferay.portal.custom.jsp.bag", version: "1.0.0"
    compileOnly group: "org.osgi", name: "org.osgi.core", version: "5.0.0"
    compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

It's actually that com.liferay.portal.custom.jsp.bag guy which makes my project not work for Liferay 7 CE; it's not available in GA3, but I expect it to be released with GA4. If you don't want to wait for GA4, you can of course skip extending the BaseCustomJspBag class like I'm about to and instead build out the complete CustomJspBag interface in your class.

NOTE: I really, really dislike the fact that I have to pull in the com.liferay.portal.impl dependency above. I have to do that because for some reason the CustomJspBag interface is only in the portal-impl project and not portal-kernel as we would normally expect.
Do not import this dependency yourself unless you are really, really, really sure that you gotta have it. 95% of the time you're actually going to be wrong, so if you think you need it you really have to question whether that is true or whether you're perhaps missing something.

Since I have the module with the base class, I can now write my JsonWsCustomJspBag class:

package com.liferay.portal.jsonwebservice.override;

import com.liferay.portal.custom.jsp.bag.BaseCustomJspBag;
import com.liferay.portal.deploy.hot.CustomJspBag;

import java.net.URL;

import java.util.Enumeration;

import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

/**
 * class JsonWsCustomJspBag: This is the custom jsp bag used to replace the core JSP files for the jsonws UI.
 *
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"context.id=JsonWsCustomJspBag",
		"context.name=/api/jsonws Permissioning Custom JSP Bag",
		"service.ranking:Integer=20"
	}
)
public class JsonWsCustomJspBag extends BaseCustomJspBag implements CustomJspBag {

	@Activate
	protected void activate(BundleContext bundleContext) {
		super.activate(bundleContext);

		// we also want to include the jspf files in the list
		Enumeration<URL> enumeration = bundleContext.getBundle().findEntries(
			getCustomJspDir(), "*.jspf", true);

		while (enumeration.hasMoreElements()) {
			URL url = enumeration.nextElement();

			getCustomJsps().add(url.getPath());
		}
	}
}

Pretty darn simple, huh? That's because I could leverage the BaseCustomJspBag class. Without that, you need to implement the CustomJspBag interface and that makes this code a lot bigger. You'll find help for implementing the complete interface from https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-core-jsps.

Don't forget your override file at src/main/resources/META-INF/resources/custom_jsps/html/portal/api/jsonws/index.jsp with the override code discussed previously.

Conclusion

That's it. Build and deploy your new module and you can hit the page as a guest and you'll get the permission message. Hit the page as a non-OmniAdmin user and you get the same. Navigate there as the OmniAdmin and you see the UI so you can browse the services, try them out, etc. as though nothing changed.

I'm making the project available in GitHub: https://github.com/dnebing/api-jsonws-override

 

 

¹ A friend of mine called me out on the "security by obscurity" statement and thought that I was advocating for only this type of security.  Security by obscurity should never, ever be your only barrier to prevent folks with malicious intent from hacking your site. I do see it as a first line of defense, one that can keep script kiddies or inexperienced hackers from discovering your remote APIs. But you should always be checking permissions, securing access, monitoring for intrusions, etc.

ServiceBuilder and Upgrade Processes

Technical Blogs May 17, 2017 By David H Nebinger

Introduction

Today I ran into someone having issues with ServiceBuilder and the creation of UpgradeProcess implementations. The doco is a little bit confusing, so I thought I'd do a quick blog post sharing how the pieces fit...

Normal UpgradeProcess Implementations

As a reminder, you register UpgradeProcess implementations to support upgrading from, say, 1.0.0 to 2.0.0, when there are things you need to code so that, when the upgrade is complete, the system is ready to use your module. Say, for example, that you're storing XML in a column in the DB and in 2.0.0 you've changed the DTD; for those folks that already have 1.0.0 deployed, your UpgradeProcess implementation would be responsible for processing each existing record in the database to change the contents over to the 2.0.0 version of the DTD. For non-ServiceBuilder modules, it is up to you to write the initial UpgradeProcess code for the 0.0.0 -> 1.0.0 version.
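A registration for that kind of upgrade might look roughly like this (a sketch using the 7.0-era UpgradeStepRegistrator API, where the first argument is the module's bundle symbolic name; the class names and symbolic name are made up for illustration):

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleUpgradeStepRegistrator implements UpgradeStepRegistrator {

	@Override
	public void register(Registry registry) {

		// take existing 1.0.0 data up to the 2.0.0 DTD
		registry.register(
			"com.example.content.web", "1.0.0", "2.0.0",
			new UpgradeProcess() {

				@Override
				protected void doUpgrade() throws Exception {
					// rewrite each stored XML record against the 2.0.0 DTD
				}

			});
	}

}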

Through the lifespan of your plugin, you continue to add in UpgradeProcess implementations to handle the automatic update for dot releases and major releases. The best part is that you don't have to care what version everyone is using, Liferay will apply the right upgrade processes to take the users from what version they're currently at all the way through to the latest version.

This is all good, of course, but ServiceBuilder, well it behaves a little differently.

ServiceBuilder service.xml Development

As you go through development and you change the entities in service.xml and rebuild services, ServiceBuilder will update the SQL files used to create the tables, indices, etc. When you deploy the service the first time, ServiceBuilder will happily identify the initial deployment and will use the SQL files to create the entities.

This is where things can go sideways... If I deploy version 1.0.0 of the service and version 2.0.0 comes out, the service developer needs to implement an UpgradeProcess that makes the necessary changes to the tables to get things ready for the current version of the service. If you did not deploy version 1.0.0 but are starting out on 2.0.0, you don't want to have to execute all of the upgrade processes individually; you want ServiceBuilder to do what it has always done and just use the SQL files to create version 2.0.0 of the entities.

So how do you support both of these scenarios correctly?

By using the Liferay-Require-SchemaVersion header¹ in your bnd.bnd file, that's how.

Supporting Both ServiceBuilder Upgrade Scenarios

The Liferay-Require-SchemaVersion header defines the current DB schema version number for your service modules. This version number should be incremented as you change your service.xml in preparation for a release.
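In the service module's bnd.bnd this is just another header (version number illustrative):

Liferay-Require-SchemaVersion: 2.0.0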

There's code in the ServiceBuilder deployment which injects a hidden UpgradeProcess implementation that is defined to cover the "0.0.0" version (the version which represents the "new deployment") to the Liferay-Require-SchemaVersion version number.  So your first release will have the header set to 1.0.0, next release might be 2.0.0, etc.

So in our previous example with the 2.0.0 service release, when you deploy the service Liferay will match the "0.0.0" to "2.0.0" hidden upgrade process implementation provided by ServiceBuilder and will invoke it to get the 2.0.0 version of the tables, indices, etc. created for you using the SQL files.

The service developer must also code and register the manual UpgradeProcess instances that support the incremental upgrade. So for the example, there would need to be a 1.0.0 -> 2.0.0 UpgradeProcess implementation so when I deploy 2.0.0 to replace my 1.0.0 deployment, the UpgradeProcess will be used to modify my DB schema to get it up to version 2.0.0.

Conclusion

As long as you properly manage both the Liferay-Require-SchemaVersion header in the bnd.bnd file and provide your corresponding UpgradeProcess implementations, you will be able to easily handle the first time deployment as well as the upgrade deployments.

An important side effect to note here - you must manage your Liferay-Require-SchemaVersion correctly.  If you set it initially to 1.0.0 and forget to update it on future releases, your users will have all kinds of issues.  For initial deployments, the SQL files would create the entities at the latest version and then UpgradeProcess implementations might be applied on top of them, making modifications that don't need to be made. For upgrade deployments, Liferay may not process upgrades because it believes the schema is already at the appropriate version.

¹ If the Liferay-Require-SchemaVersion header is missing, the value for the Bundle-Version will be used instead.

Increasing Capacity and Decreasing Response Times Using a Tool You're Probably Not Familiar With

Technical Blogs May 7, 2017 By David H Nebinger

Introduction

When it comes to Liferay performance tuning, there is one golden rule:

The more you offload from the application server, the better your performance will be.

This applies to all aspects of Liferay. Using Solr/Elastic is always better than using the embedded Lucene. While PDFBox works, you get better performance by offloading that work to ImageMagick and GhostScript.

You can get even better results by offloading work before it gets to the application server. What I'm talking about here is caching, and one tool I like to recommend for this is Varnish.

According to the Varnish site:

Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.

So I've found the last claim to be a little extreme, but I can say for certain that it can offer significant performance improvement.

Basically Varnish is a caching appliance.  When an incoming request hits Varnish, it will look in its cache to see if the response has been rendered before. If it isn't in the cache, it will pass the request to the back end and store the response (if possible) in the cache before returning the response to the original requestor.  As additional matching requests come in, Varnish will be able to serve the response from the cache instead of sending it to the back end for processing.

So there are two requirements that need to be met to get value out of the tool:

  1. The responses have to be cacheable.
  2. The responses must take time for the backend to generate.

As it turns out for Liferay, both of these are true.

So Liferay can actually benefit from Varnish, but we can't just make such a claim; we'll need to back it up w/ some testing.

The Setup

To complete the test I set up an Ubuntu VirtualBox instance w/ 12G of memory and 4 processors, and I pulled in a Liferay DXP FP 15 bundle (no performance tuning for JVM params, etc). I also compiled Varnish 4.1.6 on the system. For both tests, Tomcat will be running using 8G and Varnish will also be running w/ an allocation of 2G (even though varnish is not used for the Tomcat test, I think it is "fairer" to keep the tests as similar as possible).

In the DXP environment I'm using the embedded ElasticSearch and HSQL for the database (not a prod configuration but both tests will have the same bad baseline). I deployed the free Porygon theme from the Liferay Marketplace and set up a site based on the theme. The home page for the Porygon demo site has a lot of graphics and stuff on it, so it's a really good site to look at from a general perspective.

The idea here was not to focus on Liferay tuning too much, but just to get a site up that was serving a bunch of mixed content. Then we measure a non-Varnish configuration against a Varnish configuration to see what impact Varnish can have in performance terms.

We're going to test the configuration using JMeter and we're going to hit the main page of the Porygon demo site.

Testing And Results

JMeter was configured to use 100 users looping 20 times.  Each test would touch the home page, the photography, science and review pages, and would also visit 3 article pages. JMeter was configured to retrieve all related assets synchronously to exaggerate the response time from the services.

Response Times

Let's dive right in with the response times for the test from the non-Varnish configuration:

Response Times Without Varnish

The runtime for this test was 21 minutes, 20 seconds. The 3 article pages are the lines near the bottom of the graph; the lines in the middle are for the general pages w/ the asset publishers and all of the extra details.

Next graph is the response times from the Varnish configuration:

Response Times With Varnish

The runtime for this test was 11 minutes, 58 seconds, a 44% reduction in test time, and it's easy to see that while the non-Varnish tests seem to float around the 14 second mark, the Varnish tests come in around 6 seconds.

If we rework the graph to adjust the y-axis to remove the extra whitespace we see:

Response Times With Varnish

The important part here for me was the lines for the individual articles. In the non-Varnish test, /web/porygon-demo/-/space-the-final-frontier?inheritRedirect=true&redirect=%2Fweb%2Fporygon-demo shows up around the 1 second response time, but with Varnish it hovers around the 3 second response time.  Keep that in mind when we discuss the custom VCL below.

Aggregate Response Times

Let's review the aggregate graphs from the tests.  First the non-Varnish graph:

Aggregate Without Varnish

This reflects what we've seen before; individual pages are served fairly quickly, pages w/ all of the mixed content take significantly longer to load.

And the graph for the Varnish tests:

Aggregate With Varnish

At the same scale, it is easy to see that Varnish has greatly reduced the response times.  Adjusting the y-axis, we get the following:

Aggregate With Varnish

Analysis

So there's a few parts that quickly jump out:

  • There was a 44% reduction in test runtime reflected by decreased response times.
  • There was a measurable (but unmeasured) reduction in server CPU load since Liferay/Tomcat did not have to serve all traffic.
  • Since work is offloaded from Liferay/Tomcat, overall capacity is increased.
  • While some response times were greatly improved by using Varnish, others suffered.

The first three bullets are easy to explain.  As Varnish is able to cache "static" responses from Liferay/Tomcat, it can serve those responses from the cache instead of forcing Liferay/Tomcat to build a fresh response every time.  Having Liferay/Tomcat rebuild responses each time requires CPU cycles, so returning a cached response reduces the CPU load.  And since Liferay/Tomcat is not busy rebuilding the responses that now come from the cache, Liferay/Tomcat is free to handle responses that cannot be cached; basically the overall capacity of Liferay/Tomcat is increased.

So you might be asking that, since Varnish is so great, why do the single article pages suffer from a response time degradation? Well, that is due to the custom VCL script used to control the caching.

The Varnish VCL

So if you don't know about Varnish, you may not be aware that caching is controlled by a VCL (Varnish Configuration Language) file. This file is closer to a script than it is a configuration file.

Normally Varnish operates by checking the backend response cache control headers; if a response can be cached, it will be, and if the response cannot be cached it won't. The impact of Varnish is directly related to how many of the backend responses can be cached.

You don't have to rely solely on the cache control headers from the backend to determine cacheability; this is especially true for Liferay. Through the VCL, you can actually override the cache control headers to make some responses cacheable that otherwise would not have been, and to make other responses uncacheable even when the backend says caching is acceptable.

So now I want to share the VCL script used for the test, but I'll break it up into parts to discuss the reasons for the choices that I made. The whole script file will be attached to the blog for you to download.

In the sections below comments have been removed to save space, but in the full file the comments are embedded to explain everything in detail.

Varnish Initialization

probe company_logo {
  .request =
    "GET /image/company_logo HTTP/1.1"
    "Host: 192.168.1.46:8080"
    "Connection: close";
  .timeout = 100ms;
  .interval = 5s;
  .window = 5;
  .threshold = 3;
}

backend LIFERAY {
  .host = "192.168.1.46";
  .port = "8080";
  .probe = company_logo;
}

sub vcl_init {
  new dir = directors.round_robin();
  dir.add_backend(LIFERAY);
}

So in Varnish you need to declare your backends to connect to.  In this example I've also defined a probe request used to verify health of the backend.  For probes it is recommended to use a simple request that results in a small response; you don't want to overload the system with all of the probe requests.

Varnish Request

sub vcl_recv {
  ...
  if (req.url ~ "^/c/") {
    return (pass);
  }

  if (req.url ~ "/control_panel/manage") {
    return (pass);
  }
  ...
  if (req.url !~ "\?") {
    return (pass);
  }
  ...
}

The request handling basically determines whether to hash (lookup request from the cache) or pass (pass request directly to backend w/o caching).

For all requests that start with the "/c/..." URI, we pass those to the backend.  They represent requests for /c/portal/login or /c/portal/logout and the like, so we never want to cache those regardless of what the backend might say.

Any control panel requests are also passed directly to the backend. We wouldn't want to accidentally expose any of our configuration details now, would we?

Otherwise the code is trying to force hashing of binary files (mp3, image, etc) if possible and conforms to most average VCL implementations.

The last check of whether the URL contains a '?' character, well I'll be getting to that later in the conclusion...

Varnish Response

sub vcl_backend_response {

  if (bereq.url ~ "^/c/") {
    return (deliver);
  }
  
  if ( bereq.url ~ "\.(ico|css)(\?[a-z0-9=]+)?$") {
    set beresp.ttl = 1d;
  } else if (bereq.url ~ "^/documents/" && beresp.http.content-type ~ "image/*") {
    if (std.integer(beresp.http.Content-Length,0) < 10485760 ) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
        unset beresp.http.Cache-Control;
        unset beresp.http.set-cookie;
      }
    }
  } else if (beresp.http.content-type ~ "text/javascript|text/css") {
    if (std.integer(beresp.http.Content-Length,0) < 10485760 ) {
      if (beresp.status == 200) {
        set beresp.ttl = 1d;
      }
    }
  }
  ...
}

The response handling also passes the /c/ type URIs back to the client w/o caching.

The most interesting part of this section is the testing for content type and altering caching as a result.  Normally VCL rules will look for some request for "/blah/blah/blah/my-javascript.js" by checking for the extension as part of the URI.

But Liferay really doesn't use these standard extensions.  For example, with Liferay you'll see a lot of requests like /combo/?browserId=other&minifierType=&languageId=en_US&b=7010&t=1494083187246&/o/frontend-js-web/liferay/portlet_url.js&.... These kinds of requests do not have the standard extension on them, so normal VCL matching patterns would treat the request as uncacheable. Using the VCL override logic above, the request will be treated as cacheable since it is just a request for some JS.

Same kind of logic applies to the /documents/ URI prefix; anything w/ this prefix is a fetch from the document library.  Full URIs are similar to /documents/24848/0/content_16.jpg/027082f1-a880-4eb7-0938-c9fe99cefc1a?t=1474371003732.  Again, since it doesn't end w/ the standard extension, the image might not be cached. The override rule above will match on the /documents/ prefix and image content types and treat the request as cacheable.

Conclusion

So let's start with the easy ones...

  • Adding Varnish can decrease your response times.
  • Adding Varnish can reduce your server load.
  • Adding Varnish can increase your overall capacity.

Honestly I was expecting that to be the whole list of conclusions I was going to have to worry about. I had this sweet VCL script and performance times were just awesome. As a final test, I tried logging into my site with Varnish in place and, well, FAIL.  I could log in, but I didn't get the top bar or access to the left or right sidebars or any of these things.

I realized that I was actually caching the responses from the friendly URLs and, well, for Liferay those are typically dynamic pages.  There is logic specifically in the theme template files that changes the content depending upon whether you are logged in or not.  Because my Varnish script was caching the pages when I was not logged in, after I logged in the page was coming from the cache and the necessary stuff I needed was now gone.

I had to add the check for the "?" character in the requests to determine if it was a friendly URL or not.  If it was a friendly URL, I had to treat it as dynamic and send it to the backend for processing.

This leads to the poor performance, for example, on the single article display pages.  My first VCL was great, but it cached too much.  My addition for friendly URLs solved the login issue but now prevented caching pages that maybe could have been cached, so I swung too far the other way; but since the general results were still awesome I just went with what I had.

Now for the hard conclusions...

  • Adding Varnish requires you to know your portal.
  • Adding Varnish requires you to know your use cases.
  • Adding Varnish requires you to test all aspects of your portal.
  • Adding Varnish requires you to learn how to write VCL.

The VCL really isn't that hard to wrap your head around.  Once you get familiar with it, you'll be able to customize the rules to increase your cacheability factor without sacrificing the dynamic nature of your portal.  In the attached VCL, we add a response header for a cache HIT or MISS, and this is quite useful for reviewing the responses from Varnish to see if a particular response was cached or not (remember the first request will always be a MISS, so check again after a page refresh).
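The usual shape of that logic is something like this (a sketch; the attached file has the full details):

sub vcl_deliver {
  # expose whether the response was served from the cache
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
  } else {
    set resp.http.X-Cache = "MISS";
  }
}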

I can't emphasize the testing enough though.  You want to manually test all of your pages a couple of times, logged in and not logged in, logged in as users w/ different roles, etc., to make sure each UX is correct and that you're not bleeding views that should not be shared.

You should also do your load testing.  Make sure you're getting something out of Varnish and that it is worthwhile for your particular situation.

Note About SSL

Before I forget, it's important to know that Varnish doesn't really talk SSL, nor does it talk AJP.  If you're using SSL, you're going to want to have a web server sitting in front of Varnish to handle SSL termination.

Since Varnish doesn't talk AJP, you will have to configure HTTP connections both from the web server into Varnish and from Varnish to the app server.
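
On the Tomcat side that just means making sure a plain HTTP connector is enabled in conf/server.xml; a minimal sketch (port and timeout are examples, adjust for your environment):

<!-- Plain HTTP connector for Varnish -> Tomcat traffic -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />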

This points toward the reasoning behind my recent blog post about configuring Liferay to look at a header for the HTTP/HTTPS protocols.  In my environment I was terminating SSL at Apache and needed to use the HTTP connectors to Varnish and again to Tomcat/Liferay.

Although it was suggested in a few of the comments that separate connections could be used to facilitate the HTTP and HTTPS traffic, etc., those options would defeat some of the Varnish caching capabilities. You'd either end up with separate caches for each connection type (or perhaps no cache on one of them) or run into other unforeseen issues. Routing all traffic through a single pipe to Varnish ensures Varnish can cache the response regardless of the incoming protocol.

Update - 05/16/2017

A small tweak to the VCL script attached to the blog: I added rules to exclude all /api/* URLs from being cached.  Those are basically your web service calls, and rarely would you really want to cache those responses.  Find the file named localhost-2.vcl for the update.
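
The added rule is tiny; something along these lines (VCL 4.x syntax assumed, see localhost-2.vcl for the real thing):

sub vcl_recv {
    # Never cache web service calls.
    if (req.url ~ "^/api/") {
        return (pass);
    }
}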

Revisiting SSL Termination at Apache HTTPd

Technical Blogs May 5, 2017 By David H Nebinger

So I have a blog I created a long time ago dealing w/ Liferay and SSL. The foundation of that post was my Fronting Liferay Tomcat with Apache HTTPd post; it added terminating SSL at HTTPd and configuring the Liferay instance running under Tomcat to use HTTPS for all of the communication.

If you tear into the second post, you'll find that I was using the AJP connector to join HTTPd and Tomcat together.

This is actually a key aspect for a working setup for SSL + HTTPd + Liferay/Tomcat.

Today I was actually working on a similar setup that used the HTTP connector for SSL + HTTPd + Liferay/Tomcat. Unauthenticated traffic worked just fine, but as soon as you tried to access a secured resource that required authentication, a redirect loop resulted, with HTTPd eventually terminating it.

The only info I had was the redirect URL, https://example.com/c/portal/login?null. There were no log messages in Liferay/Tomcat, just repeated 302 messages in the HTTPd logs.

My good friend and coworker Nathan Shaw told me of a similar case he was aware of that involved Nginx; although they are different web servers, the 302 redirect loop on /c/portal/login?null was an exact match.

The crux of the issue is the setting of the company.security.auth.requires.https property in portal-ext.properties.

Basically when you set this property to true, you are saying that when a user logs in, you want to force them into the secure https side. Seems pretty simple, right?

So in this configuration, when a user on http:// wants or needs to log in, they basically end up hitting http://.../c/portal/login. This is where a check for HTTPS is done and, since the connection is not yet HTTPS, Liferay will issue a redirect back to https://.../c/portal/login to complete the login.

And this, in conjunction with the HTTP connector between HTTPd and Liferay/Tomcat, is what causes the redirect loop.

Liferay responds with the 302 to try and force you to https; you submit again, but SSL terminates at HTTPd and the request is sent via the HTTP connector to Liferay/Tomcat.  Well, Liferay/Tomcat sees that the request came in on http:// and again issues the 302 redirect. You're now in redirect loop hell.

Fortunately, this is absolutely fixable.

Liferay has a set of portal-ext.properties settings to mitigate the SSL issue. They are:

#
# Set this to true to use the property "web.server.forward.protocol.header"
# to get the protocol. The property "web.server.protocol" must not have been
# overriden.
#
web.server.forwarded.protocol.enabled=false

#
# Set the HTTP header to use to get the protocol. The property
# "web.server.forwarded.protocol.enabled" must be set to true.
#
web.server.forwarded.protocol.header=X-Forwarded-Proto

The important property is the first one.  When that property is true, Liferay will ignore the protocol (http vs https) of the incoming request and will instead use a request header to see what the original protocol for the request actually was.

The header name can be specified using the second property, but the default one works just fine. It's also the header name to google when looking up the equivalent configuration for your particular web server.
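
So on the Liferay side, the portal-ext.properties change is just the one line (the header keeps its default name):

web.server.forwarded.protocol.enabled=true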

I'll save you the trouble for Apache HTTPd; you just need to add a couple of lines to your <VirtualHost /> elements:

<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ...
</VirtualHost>

<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
    ...
</VirtualHost>

That's it.

For every incoming request getting to HTTPd, a header is added with the request protocol.  When the ProxyPass configuration forwards the requests to Liferay/Tomcat, Liferay will use the header for the check on https:// rather than the actual connection from HTTPd.
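
For reference, the ProxyPass directives living in those same <VirtualHost /> blocks would look roughly like this (host and port are examples only, SSL certificate directives omitted; in a Varnish setup you'd point them at Varnish rather than directly at Tomcat):

<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"

    # Forward everything to the backend over plain HTTP (requires mod_proxy_http).
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>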

Some of you are going to be asking:

Why are you using the HTTP connector to join HTTPd to Liferay/Tomcat anyway? The AJP connector is the best connector to use in this configuration because it performs better than the HTTP connector and avoids this and other issues that can come up with the HTTP connector.

You would be, of course, absolutely right about that. For a simple configuration like this where you only have HTTPd <-> Liferay/Tomcat, using the HTTP connector is frowned upon.

That said, I've got another exciting blog post in the pipeline that will force moving to this configuration... I'm not getting into any details at this point, but suffice it to say that when you see the results I've been gathering, you too will be looking at this configuration.

Tomcat+HikariCP

Technical Blogs May 3, 2017 By David H Nebinger

In case you aren't aware, Liferay 7 CE and Liferay DXP default to using Hikari CP for the connection pools.

Why?  Well here's a pretty good reason:

Hikari just kicks the pants off of any other connection pool implementation.

So Liferay is using Hikari CP, and you should too.

I know what you're thinking.  It's something along the lines of:

But Dave, we're following the best practice of defining our DB connections as Tomcat <Resource /> JNDI definitions so we don't expose our database connection details (URLs, usernames or passwords) to the web applications.  So we're stuck with the crappy Tomcat connection pool implementation.

You might be thinking that, but if you are, thankfully you'd be wrong.

Installing the Hikari CP Library for Tomcat

So this is pretty easy, but you have two basic options.

First is to download the .zip or .tar.gz file from http://brettwooldridge.github.io/HikariCP/.  This is actually a source release that you'll need to build yourself.

The second option is to download the pre-built jar from a source like Maven Central, http://central.maven.org/maven2/com/zaxxer/HikariCP/2.6.1/HikariCP-2.6.1.jar.

Once you have the jar, copy it to the Tomcat lib/ext directory.  Note that Hikari CP does have a dependency on SLF4J, so you'll need to put that jar into lib/ext too.
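
For example (file names depend on the versions you downloaded):

cp HikariCP-2.6.1.jar $CATALINA_HOME/lib/ext/
cp slf4j-api-1.7.25.jar $CATALINA_HOME/lib/ext/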

Configuring the Tomcat <Resource /> Definitions

Location of your JNDI datasource <Resource /> definitions depends upon the scope for the connections.  You can define them globally by specifying them in Tomcat's conf/server.xml and conf/context.xml, or you can scope them to individual applications by defining them in conf/Catalina/localhost/WebAppContext.xml (where WebAppContext is the web application context for the app, basically the directory name from Tomcat's webapps directory).

For Liferay 7 CE and Liferay DXP, all of your plugins belong to Liferay, so it is usually recommended to put your definitions in conf/Catalina/localhost/ROOT.xml.  The only reason to make the connections global is if you have other web applications deployed to the same Tomcat container that will be using the same database connections.

So let's define a JNDI datasource in ROOT.xml for a Postgres database...

Create the file conf/Catalina/localhost/ROOT.xml if it doesn't already exist.  If you're using a Liferay bundle, you will already have this file.
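
If you do have to create it from scratch, a bare-bones ROOT.xml is just a <Context /> element that the <Resource /> definition below will live inside (crossContext="true" is what the Liferay bundles typically ship with):

<?xml version="1.0"?>
<Context crossContext="true">

    <!-- JNDI <Resource /> definitions go here -->

</Context>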

Hikari CP supports two different ways to define your actual database connections.  The first, the one the Hikari folks prefer, is based upon a DataSource instance (the more standard way of establishing a connection with credentials); the second is the older approach using a DriverManager instance (a legacy way that has different mechanisms for passing credentials to the DB driver).

We'll follow their advice and use the DataSource.  Use the table from https://github.com/brettwooldridge/HikariCP#popular-datasource-class-names to find your data source class name; we'll need it when we define the <Resource /> element.

Gather up your JDBC URL, username and password because we'll need those too.

Okay, so in ROOT.xml inside of the <Context /> tag, we're going to add our Liferay JNDI data source connection resource:

<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5" 
      maximumPoolSize="10"
      connectionTimeout="300000"
      dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
      dataSource.url="jdbc:postgresql://localhost:5432/lportal"
      dataSource.implicitCachingEnabled="true" 
      dataSource.user="user"
      dataSource.password="pwd" />

So this is going to define our connection for Liferay and have it use the Hikari CP pool.
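
As a reminder, Liferay itself gets pointed at the JNDI name rather than raw JDBC properties, e.g. in portal-ext.properties:

jdbc.default.jndi.name=jdbc/LiferayPool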

Now if you really want to stick with the older driver-based configuration, then you're going to use something like this:

<Resource name="jdbc/LiferayPool" auth="Container"
      factory="com.zaxxer.hikari.HikariJNDIFactory"
      type="javax.sql.DataSource"
      minimumIdle="5" 
      maximumPoolSize="10"
      connectionTimeout="300000"
      driverClassName="org.postgresql.Driver"
      jdbcUrl="jdbc:postgresql://localhost:5432/lportal"
      dataSource.implicitCachingEnabled="true" 
      dataSource.user="user"
      dataSource.password="pwd" />

Conclusion

Yep, that's pretty much it.  When you restart Tomcat you'll be using your flashy new Hikari CP connection pool.

You'll want to take a look at https://github.com/brettwooldridge/HikariCP#frequently-used to find additional tuning parameters for your connection pool as well as the info for the minimum idle, max pool size and connection timeout details.

And remember, this is going to be your best production configuration.  If you're using portal-ext.properties to set up any of your database connection properties, you're not as secure as you can be.  A hacker needs information to infiltrate your system; the more details of your infrastructure you expose, the more info you give a hacker to worm their way in.  Using the portal-ext.properties approach, you're exposing your JDBC URL (so hostname and port as well as the DB server type) and the credentials (which will work for DB login but sometimes might also be system login credentials).  This kind of info is gold to a hacker trying to infiltrate you.

So follow the recommended practice of using JNDI references for the database connections and keep this information out of hackers' hands.

Available Now: TripWire

General Blogs March 28, 2017 By David H Nebinger

For the last few months as I've been working with Liferay 7 CE / Liferay DXP, I've been a little stymied trying to manage the complexities of the new OSGi universe.

In Liferay 6.x, for example, an OOTB demo setup of Liferay comes with like 5 or 6 war files.  And when the portal starts up, they all start up.

But with Liferay 7 CE and Liferay DXP, there are a lot of bundles in the mix. Liferay 7 CE GA3, for example, has almost 2,500 bundles in OSGi.

And when the portal starts up, most of these will also start. Some will not. Some might not be able to. Some can't start because they have unsatisfied dependencies.

But you're not going to know it.

Seriously, you won't know if something has failed to start when you restart your environment. There may or may not be something in the log. Someone might have stopped a bundle intentionally (or unintentionally) in the gogo shell w/o telling you. And with almost 2,500 bundles in there, it's going to be really hard finding the needle in the haystack, especially if you don't know whether there's a needle in there at all.

So I've been working on a new utility over the past few months to resolve the situation - TripWire.

Features

TripWire scans the OSGi environment to gather information about deployed bundle statuses, bundle versions, and service components. TripWire also scans the system and portal properties.

This scanning is done at two points: the first is when an administrator takes a snapshot (basically to persist a baseline for all comparisons), and the second is a scheduled task that runs on the node to monitor for changes. The comparison scan can also be kicked off manually.

After installing TripWire and navigating to the TripWire control panel, you'll be prompted to capture an initial baseline scan:

Click the Take Snapshot card to see the system snapshot:

You can save the new baseline (to be compared against in the automated scans), you can export the snapshot (downloaded as an Excel spreadsheet), or you can cancel.

Each section expands to show captured details:

The funny looking hash keys at the top? Those are calculated hashes from the scanned areas; by comparing the baseline hash against the scanned hash, TripWire can quickly tell if there is a variation between the baseline and the current scan.

When you save the new baseline, the main page will reflect that the server is currently consistent with the baseline:

You can test the server to force a scan by clicking on the Test Server card:

Exclusions

TripWire supports dynamically creating exclusion rules to exclude items from being part of the scan.  You might add an exclusion for a property value that you're not interested in monitoring, for example. Click on the Exclusions card and then on the Add New Exclusion Rule button:

The Camera drop down lists all of the current cameras used when taking a snapshot. Choose either a specific camera or the Any Camera option to allow for a match to any camera.

The Type drop down allows you to select either a Match, a Starts With, a Contains or a Regular Expression type for the exclusion rule.

The value field is what to match against, and the Enabled slider allows you to disable a particular exclusion rule.

Modifying the exclusion rules affects scans immediately and can result in failed scans:

By adding the rule to exclude any System Property that starts with "catalina.", scans now show the server to be inconsistent when compared to the baseline. At this point you can take a new baseline snapshot to approve the change, or you could disable the exclusion rule (basically reverting the change to the system) to restore baseline consistency.

Notifications

TripWire uses Liferay notifications to alert subscribed administrators when the node is in an inconsistent state and when the node returns to a consistent state. For Liferay 7 CE, a subscribed administrator will only receive notifications about the single Liferay node. For Liferay DXP, subscribed administrators will receive notifications from every node that is out of sync with the baseline snapshot.

Notifications will be issued for every failed scan on every node until consistency is restored.

To subscribe or unsubscribe to notifications, click on the Subscriptions card. If you are unsubscribed, the bell image will be grey; if you are subscribed, the bell will be blue with a red notification number on it. Note this number does not represent the number of notifications you might currently have; it is just a visual marker that you are subscribed for notifications.

Configuration

TripWire supports setting configuration for the scanning schedule. Click on the Configuration card:

Using the Cameras tab, you can also choose the cameras to use in the snapshots and scans:

Normally I recommend enabling all but the Separate Service Status Camera (because this camera is quite verbose in the details it captures).

The Bundle Status Camera captures status for each bundle.

The Bundle Version Camera captures versions of every bundle.

The Configuration Admin Camera captures configuration changes from the control panel.  Note that Configuration Admin only saves values that differ from the defaults on each panel, so the details in this section will always be shorter than the actual set of configurations saved for the portal.

The Portal Properties Camera captures changes to known Liferay portal properties (unknown properties are ignored). In a Liferay DXP cluster, some properties will need to be excluded using the Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Service Status Camera captures counts of OSGi DS Services and their statuses.

The System Properties Camera captures changes to system properties from the JVM. Like the portal properties, in a Liferay DXP cluster some properties will need to be excluded using Exclusion Rules since nodes will have separate, unique values that will never match a baseline.

The Unsatisfied References Camera captures the list of bundles with unsatisfied references (preventing the bundles from starting). Any time a bundle has an unsatisfied reference, the bundle and its unsatisfied reference(s) will be captured by this camera.

The three email tabs configure who the notification emails are from and the consistent/inconsistent email templates.

Liferay DXP

For Liferay DXP clusters, TripWire uses the same baseline across all nodes in the cluster and reports on cluster node inconsistencies:

Clicking on the server link in the status area, you can review the server's report to see where the problems are:

Some of the additions and changes are due to unique node values and should be handled by adding new Exclusion Rules.

The Removals above show that one node in the cluster has Audience Targeting deployed but the other node does not. These are the kinds of inconsistencies that you may not be aware of from a cluster perspective but that would result in your DXP cluster not serving the right content to all users. Identifying such discrepancies in your cluster in an easy and quick way will save you time, money and effort.

For your cluster Exclusion Rules, your rule list will be quite long:

Conclusion

That's TripWire.

It is available from the Liferay Marketplace:

There is a cost for each version, but that is to offset the time and effort I have invested in this tool.

And while there may not seem to be an immediate return, the first time this tool identifies a node that is out of sync or an unauthorized change to your OSGi environment, it will save you time (waiting for the change to be identified), effort (sorting through all of the gogo output and other details), user impressions (from cluster node sync issues) and, most of all, money.

Liferay DXP and WebLogic...

Technical Blogs March 21, 2017 By David H Nebinger

For those of you deploying Liferay DXP to WebLogic, you will need to add an override property to your portal-ext.properties file to allow the WebLogic JAXB implementation to peer inside the OSGi environment to create proxy instances.

I know, it's a mouthful, but it's all pretty darn technical. You'll know if you need this if you start seeing exceptions like:

java.lang.NoClassDefFoundError: org/eclipse/persistence/internal/jaxb/WrappedValue
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
	at org.eclipse.persistence.internal.jaxb.JaxbClassLoader.generateClass(JaxbClassLoader.java:124)
	at org.eclipse.persistence.jaxb.compiler.MappingsGenerator.generateWrapperClass(MappingsGenerator.java:3302)
	Truncated. see log file for complete stacktrace
Caused By: java.lang.ClassNotFoundException: org.eclipse.persistence.internal.jaxb.WrappedValue cannot be found by com.example.bundle_1.0.0
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:444)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:357)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:349)
	at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	Truncated. see log file for complete stacktrace

Suffice it to say that you need to override the module.framework.properties.org.osgi.framework.bootdelegation property to add the following two lines:

  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\

You have to include all of the packages from the value defined in portal.properties, so my entry actually looks like:

module.framework.properties.org.osgi.framework.bootdelegation=\
  __redirected,\
  com.liferay.aspectj,\
  com.liferay.aspectj.*,\
  com.liferay.portal.servlet.delegate,\
  com.liferay.portal.servlet.delegate*,\
  com.sun.ccpp,\
  com.sun.ccpp.*,\
  com.sun.crypto.*,\
  com.sun.image.*,\
  com.sun.jmx.*,\
  com.sun.jna,\
  com.sun.jndi.*,\
  com.sun.mail.*,\
  com.sun.management.*,\
  com.sun.media.*,\
  com.sun.msv.*,\
  com.sun.org.*,\
  com.sun.syndication,\
  com.sun.tools.*,\
  com.sun.xml.*,\
  com.yourkit.*,\
  org.eclipse.persistence.internal.jaxb,\
  org.eclipse.persistence.internal.jaxb.*,\
  sun.*

Enjoy

Creating a Spring MVC Portlet War in the Liferay Workspace

Technical Blogs February 28, 2017 By David H Nebinger

Introduction

So I've been working on some new Blade sample projects, and one of those is the Spring MVC portlet example.

As pointed out in the Liferay documentation for Spring MVC portlets, these guys need to be built as war files, and the Liferay Workspace will actually help you get this work done. I'm going to share things that I learned while creating the sample, which has not yet been merged but hopefully will be soon.

Creating The Project

So your war projects need to go in the wars folder inside of your Liferay Workspace folder, basically at the same level as your modules directory. If you don't have a wars folder, go ahead and create one.

Next we're going to have to manually create the portlet project. Currently Blade does not have support for building a Spring MVC portlet war project; perhaps this is something that can change in the future.

Inside of the wars folder, you create a folder for each portlet WAR project that you are building. To be consistent, my project folder was named blade.springmvc.web, but your project folder can be named according to your standards.

Inside your project folder, you need to set up the folder structure for a Spring MVC project. Your project structure will resemble:

Spring MVC Project Structure

For the most part this structure is similar to what you would use in a Maven implementation. For those coming from a legacy SDK, the contents of the docroot folder go into the src/main/webapp folder, but the contents of docroot/WEB-INF/src go into the src/main/java folder (or src/main/resources for non-java files).

Otherwise this structure is going to be extremely similar to legacy Spring MVC portlet wars, all of the old locations basically still apply.
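
Since the structure screenshot doesn't reproduce well here, a rough sketch of the layout (file names are illustrative; your views and Spring context files may be organized differently):

blade.springmvc.web/
├── build.gradle
└── src/
    └── main/
        ├── java/                  (was docroot/WEB-INF/src - java sources)
        ├── resources/             (non-java resources, e.g. Language.properties)
        └── webapp/                (was docroot)
            └── WEB-INF/
                ├── web.xml
                ├── portlet.xml
                ├── liferay-portlet.xml
                ├── liferay-display.xml
                └── views/         (JSP views)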

Build.gradle Contents

The fun part for us is the build.gradle file. This file controls how Gradle is going to build your project into a war suitable for distribution.

Here's the contents of the build.gradle file for my blade sample:

buildscript {
  repositories {
    mavenLocal()
    maven {
      url "https://cdn.lfrs.sl/repository.liferay.com/nexus/content/groups/public"
    }
  }

  dependencies {
    classpath group: "com.liferay", name: "com.liferay.gradle.plugins.css.builder", version: "latest.release"
    classpath group: "com.liferay", name: "com.liferay.css.builder", version: "latest.release"
  }
}

apply plugin: "com.liferay.css.builder"

war {
  dependsOn buildCSS

  exclude('**/*.scss')

  filesMatching("**/.sass-cache/") {
    it.path = it.path.replace(".sass-cache/", "")
  }

  includeEmptyDirs = false
}

dependencies {
  compileOnly project(':modules:blade.servicebuilder.api')
  compileOnly 'com.liferay.portal:com.liferay.portal.kernel:2.6.0'
  compileOnly 'javax.portlet:portlet-api:2.0'
  compileOnly 'javax.servlet:javax.servlet-api:3.0.1'
  compileOnly 'org.osgi:org.osgi.service.component.annotations:1.3.0'
  compile group: 'aopalliance', name: 'aopalliance', version: '1.0'
  compile group: 'commons-logging', name: 'commons-logging', version: '1.2'
  compileOnly group: 'javax.servlet.jsp.jstl', name: 'jstl-api', version: '1.2'
  compileOnly group: 'org.glassfish.web', name: 'jstl-impl', version: '1.2'
  compile group: 'org.springframework', name: 'spring-aop', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-beans', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-context', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-core', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-expression', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc-portlet', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-webmvc', version: '4.1.9.RELEASE'
  compile group: 'org.springframework', name: 'spring-web', version: '4.1.9.RELEASE'
}

So first come the buildscript block and the CSS builder apply line, followed by the war customization stanza.

These parts are currently necessary to support compiling the SCSS files into CSS and storing the files in the right place in the WAR. Note that Liferay currently sees this manual execution of the CSS builder plugin as a bug and plans on fixing it sometime soon.

Managing Dependencies

The next part is the dependencies, and this will be the fun part for you as it was for me.

You're going to be picking from two different dependency types, compile and compileOnly. The big difference is whether the dependencies get included in the WEB-INF/lib directory (compile) or are used only for compiling the project but not included (compileOnly).

Many of the Liferay and OSGi jars, such as portal-kernel or the servlet and portlet APIs, should not be included in your WEB-INF/lib directory, but they are needed for compilation, so they are marked as compileOnly.

In Liferay 6.x development, we used to be able to use the portal-dependency-jars in liferay-plugin-package.properties to inject libraries into our wars at deployment time. But not for Liferay 7 CE/Liferay DXP development.

The portal-dependency-jars property in liferay-plugin-package.properties is deprecated in Liferay 7 CE/Liferay DXP. All dependencies must be included in the war at build time.

Since we cannot use the portal dependencies in liferay-plugin-package.properties, I had to manually include the Spring jars using the compile type. 

Conclusion

Yep, that's pretty much it.

Since it's in the Liferay Workspace and is Gradle-built, you can use the gradle wrapper script at the root of the project to build everything, including the portlet wars.

Your built war will be in the project's build/libs directory, and this war file is ready to be deployed to Liferay by dropping it in the Liferay deploy folder.

Debugging "ClassNotFound" exceptions, etc, in your war file can be extremely challenging since Liferay doesn't really keep anything around from the WAR->WAB conversion process. If you add the following properties to portal-ext.properties, the WABs generated by Liferay will be saved so you can open the file and see what jars were injected and where the files are all found in the WAB.

module.framework.web.generator.generated.wabs.store=true
module.framework.web.generator.generated.wabs.store.dir=${module.framework.base.dir}/wabs

If you want to check out the project, it is currently live in the blade samples: https://github.com/liferay/liferay-blade-samples/tree/master/liferay-workspace/wars/blade.portlet.springmvc.

Proper Portlet Name for your Portlet components...

Technical Blogs February 28, 2017 By David H Nebinger

Okay, this is probably going to be one of my shortest blog posts, but it's important.

Some releases of Liferay have code to "infer" a portlet name if it is not specified in the component properties.  This actually conflicts with other pieces of code that also try to "infer" what the portlet name is.

The problem is that they sometimes have different requirements; in one case, periods are fine in the name so the full class name is used (in the portlet component), but in other cases periods are not allowed so it uses the class name with the periods replaced by underscores.

Needless to say, this can cause you problems if Liferay is trying to use two different portlet names for the same portlet, one that works and the other that doesn't.

So save yourself some headaches and always assign your portlet name in the properties for the Portlet component, and always use that same value for other components that also need the portlet name.  And avoid periods since they cannot be used in all cases.

So here's an example for the portlet component properties from one of my previous blogs:

@Component(
  immediate = true,
  property = {
    "com.liferay.portlet.display-category=category.system.admin",
    "com.liferay.portlet.header-portlet-css=/css/main.css",
    "com.liferay.portlet.instanceable=false",
    "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
    "javax.portlet.display-name=Filesystem Access",
    "javax.portlet.init-param.template-path=/",
    "javax.portlet.init-param.view-template=/view.jsp",
    "javax.portlet.resource-bundle=content.Language",
    "javax.portlet.security-role-ref=power-user,user"
  },
  service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {}

Notice how I was explicit with the javax.portlet.name?  That's the one you need; don't let Liferay assume what your portlet name is, be explicit with the value.

And the value that I used in the FilesystemAccessPortletKeys constants:

public static final String FILESYSTEM_ACCESS =
  "com_liferay_filesystemaccess_portlet_FilesystemAccessPortlet";

No periods, but since I'm using the class name it won't have any collisions w/ other portlet classes...

Note that if you don't like long URLs, you might try shortening the name, but stick with something that avoids collisions, such as the first letters of the packages plus the class name, e.g. "clfp_FilesystemAccessPortlet". Remember that collisions are bad things...

And finally, since I've put the name in an external PortletKeys constants file, any other piece of code that expects the portlet name can use the constant value too (i.e. for your config admin interfaces) and you'll know that all code is using the same constant value.
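
For example, a hypothetical configuration action component for the same portlet could reuse the constant in its own properties (a sketch only; the class itself is illustrative):

@Component(
  property = {
    "javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS
  },
  service = ConfigurationAction.class
)
public class FilesystemAccessConfigurationAction extends DefaultConfigurationAction {
  // Both components now agree on the portlet name via the shared constant.
}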

Enjoy!

Service Builder 6.2 Migration

Technical Blogs February 23, 2017 By David H Nebinger

I'm taking a short hiatus from the design pattern series to cover a topic I've heard a lot of questions on lately - migrating 6.2 Service Builder wars to Liferay 7 CE / Liferay DXP.

Basically it seems you have two choices:

  1. You can keep the Service Builder implementation in a portlet war. Any wars you keep going forward will have access to the service layer, but can you access the services from other OSGi components?
  2. You take the Service Builder code out into an OSGi module. With this path you'll be able to access the services from other OSGi modules, but will the services be available to the legacy portlet wars?

So it's that mixed usage that leads to the questions. I mean, if all you have is either legacy wars or pure OSGi modules, the decision is easy - stick with what you've got.

But when you are in mixed modes, how do you deliver your Service Builder code so both sides will be happy?

The Scenario

So we're going to work from the following starting point. We have a 6.2 Service Builder portlet war following a recommendation that I frequently give: the war has only the Service Builder implementation in it and nothing else, no other portlets. I often recommend this as it gives you a working Service Builder implementation and no pollution from Spring or other libraries that can sometimes conflict with Service Builder. We'll also have a separate portlet war that leverages the Service Builder service.

Nothing fancy for the code, the SB layer has a simple entity, Course, and the portlet war will be a legacy Liferay MVC portlet that lists the courses.

We're tasked with upgrading our code to Liferay 7 CE or Liferay DXP (pick your poison), and as part of the upgrade we will have a new OSGi portlet component using the new Liferay MVC framework for adding a course.

To reduce our development time, we will upgrade our course list portlet to be compatible with Liferay 7 CE / Liferay DXP but keep it as a portlet war - basically the minimal effort needed to get it upgraded. We'll also have the new portlet module for adding a course.

But our big development focus, and the focus of this blog, will be choosing the right path for upgrading that Service Builder portlet war.

For evaluation purposes we're going to have to upgrade the SDK to a Liferay Workspace. Doing so will help get us some working 7.x portlet wars initially, and then when it comes time to do the testing for the module it should be easy to migrate.

Upgrading to a Liferay Workspace

So the Liferay IDE version 3.1 Milestone 2 is available, and it has the Code Upgrade Assistant to help take our SDK project and migrate it to a Liferay Workspace.

For this project, I've made the original 6.2 SDK project available at https://github.com/dnebing/sb-upgrade-62-sdk.

You can find an intro to the upgrade assistant in Greg Amerson's blog: https://web.liferay.com/web/gregory.amerson/blog/-/blogs/liferay-ide-3-1-milestone-1-released and Andy Wu's blog: https://web.liferay.com/web/andy.wu/blog/-/blogs/liferay-ide-3-1-milestone-2-released.

It is still a milestone release so it is still a work in progress, but it does work on upgrading my sample SDK. Just a note, though: it does take some processing time during the initial upgrade to a workspace; if you think it has locked up or is unresponsive, just have patience. It will come back, it will complete; you just have to give it time to do its job.

Checkpoint

After you finish the upgrade, you should have a Liferay workspace w/ a plugins-sdk directory, and inside there is the normal SDK directory structure. In the portlets directory the two portlet war projects are there, ready for deployment.

In fact, in the plugins-sdk/dist directory you should find both of the wars just waiting to be deployed. Deploy them to your new Liferay 7 CE or Liferay DXP environment, then drop the Course List portlet on a page and you should see the same result as the 6.2 version.

So what have we done so far? We upgraded our SDK to a Liferay Workspace and the Code Upgrade Assistant has upgraded our code to be ready for Liferay 7 CE / Liferay DXP. The two portlet wars were upgraded and built. When we deployed them to Liferay, the WAR -> WAB conversion process converted our old wars into OSGi bundles.

However, if you go into the Gogo shell and start digging around, you won't find the services defined from our Service Builder portlet. Obviously they are there because the Course List portlet uses it to get the list of courses.

War-Based Service Builder

So how do these war-based Service Builder upgrades work? If you take a look at the CourseLocalServiceUtil's getService() method, you'll see that it uses the good ole' PortletBeanLocator and the registered Spring beans for the Service Builder implementation. The Util classes use the PortletBeanLocator to find the service implementations and may leverage the class loader proxies (CLP) if necessary to access the Spring beans from other contexts. From the service war perspective, it's going through Liferay's Spring bean registry to get access to the service implementations.

Long story short, our service jar is still a service jar. It is not a proper OSGi module and cannot be deployed as one. But the question is, can we still use it?

OSGi Add Course Portlet

So we need an OSGi portlet to add courses. Again this will be another simple portlet to show a form and process the submit. Creating the module is pretty straightforward; the challenge, of course, is including the service jar in the bundle.

The first thing that is necessary is to include the jar in the build.gradle dependencies. Since it is not in a Maven-like repository, we'll need to use a slightly different syntax to include the jar:

dependencies {
  compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
  compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
  compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
  compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
  compileOnly group: "jstl", name: "jstl", version: "1.2"
  compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
	
  compile files('../../plugins-sdk/portlets/school-portlet/docroot/WEB-INF/lib/school-portlet-service.jar')
}

The last line is the key; it is the syntax for including a local jar file, and in our case we're pointing at the service jar which is part of the plugins-sdk folder that we upgraded.

Additionally we need to add the stanza to the bnd.bnd file so the jar gets included into the bundle during the build:

Bundle-ClassPath:\
  .,\
  lib/school-portlet-service.jar

-includeresource:\
  lib/school-portlet-service.jar=school-portlet-service.jar

As you'll remember from my blog post on OSGi Module Dependencies, this is option #4 to include the jar into the bundle itself and use it in the classpath for the bundle.

Now if you build and deploy this module, you can place the portlet on a page and start adding courses.  It works!

By including the service jar into the module, we are leveraging the same PortletBeanLocator logic used in the Util class to get access to the service layer and invoke services via the static Util classes.

Now that we know that this is possible (we'll discuss whether to do it this way in the conclusion), let's now rework everything to move the Service Builder code into a set of standard OSGi modules.

Migrating Service Builder War to Bundle

Our Service Builder code was already upgraded when we upgraded the SDK, so all we need to do here is create the modules and then move the code.

Creating the Clean Modules

First step is to create a clean project in our Liferay workspace, a foundation for the Service Builder modules to build from.

Once again, I start with Blade since I'm an IntelliJ developer. In the modules directory, we'll let Blade create our Service Builder projects:

blade create -t service-builder -p com.liferay.school school

For the last argument, use something that reflects your current Service Builder project name.

This is the clean project, so let's start dirtying it up a bit.

Copy your legacy service.xml to the school/school-service directory.

Build the initial Service Builder code from the service XML. If you're on the command line, you'd do:

../../gradlew buildService

Now we have unmodified, generated code. Layer in the changes from the legacy Service Builder portlet, including:

  • portlet-model-hints.xml
  • service.properties
  • Changes to any of the META-INF/spring xml files
  • All of your Impl java classes

Rebuild services again to get the working module code.

Module-Based Service Builder

So we reviewed how the CourseLocalServiceUtil's getService() method in the war-based service jar leveraged the PortletBeanLocator to find the Spring bean registered with Liferay to get the implementation class.

In our OSGi module-based version, the CourseLocalServiceUtil's getService() method is instead using an OSGi ServiceTracker to get access to the DS components registered in OSGi for the implementation class.

Again the service "jar" is still a service jar (well, module), and we also know that the add course portlet will be able to leverage the service (with some modifications); the question, of course, is whether we can also use the service API module in our legacy course list portlet.
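
As an aside, the "some modifications" for a new OSGi portlet mostly amount to dropping the static Util lookup in favor of a DS injection; a minimal sketch (the portlet name and field here are illustrative):

@Component(
  immediate = true,
  property = { "javax.portlet.name=add_course" },
  service = Portlet.class
)
public class AddCoursePortlet extends MVCPortlet {

  // DS injects the OSGi-registered service; no CourseLocalServiceUtil needed.
  @Reference
  private CourseLocalService _courseLocalService;
}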

Fixing the Course List Portlet War

So what remains is modifying the course list portlet so it can leverage the API module in lieu of the legacy Service Builder portlet service jar.

This change is actually quite easy...

The liferay-plugin-package.properties file changed from the upgrade assistant contains the following:

required-deployment-contexts=\
    school-portlet

These are the lines used by the Liferay IDE to inject the service jar so the service will be available to the portlet war. We need to remove these two lines since we're no longer using the deployment context.

If you have the school-portlet-service.jar file in docroot/WEB-INF/lib, go ahead and delete that file since it is no longer necessary.

Next comes the messy part; we need to copy the API jar into the course list portlet's WEB-INF/lib directory. We have to do this so Eclipse will be able to compile all of our code that uses the API. There's no easy way to do this, but I can think of the following options:

  1. Manually copy the API jar over.
  2. Modify the Gradle build scripts to add support for the install of artifacts into the local Maven repo, then the Ivy configuration for the project can be adjusted to include the dependency. Not as messy as a manual file copy, but involves doing the install of the API jar so Ivy can find it.

We're not done there... We actually cannot keep the jar in WEB-INF/lib, otherwise at runtime you get class cast exceptions, so we need to exclude it during deployment. This is easily handled, however, by adding an exclusion to your portal-ext.properties file:

module.framework.web.generator.excluded.paths=<CURRENT EXCLUSIONS>,\
  WEB-INF/lib/com.liferay.school.api-1.0.0.jar

When the WAR->WAB conversion is taking place, it will exclude this jar from being included. So you get to keep it in the project and let the WAB conversion strip it out during deployment.

Remember to keep all of the current excluded paths in the list; you can find them in the portal.properties file included in your Liferay source.

Build and deploy your new war and it should access the OSGi-based service API module.

Conclusion

Well, this ended up being a mixed bag...

On one hand I've shown that you can use the Service Builder portlet's service jar as a direct dependency in the module and it can invoke the service through the static Util classes defined within. The advantage of sticking with this path is that it really doesn't require much modification from your legacy code beyond completing the code upgrade, and the Liferay IDE's Code Upgrade Assistant gets you most of the way there. The obvious disadvantage is that you're now adding a dependency to the modules that need to invoke the service layer and the deployed modules include the service jar; so if you change the service layer, you're going to have to rebuild and redeploy all modules that have the service jar as an embedded dependency.

On the other hand I've shown that the migrated OSGi Service Builder modules can be used to eliminate all of the service jar replication and redeployment pain, but the hoops you have to jump through for the legacy portlet access to the services are a development-time pain.

It seems clear, at least to me, that the second option is the best. Sure you will incur some development-time pain to copy service API jars if only to keep the java compiler happy when compiling code, but it definitely has the least impact when it comes to service API modifications.

So my recommendations for migrating your 6.2 Service Builder implementations to Liferay 7 CE / Liferay DXP are:

  • Use the Liferay IDE's Code Upgrade Assistant to help migrate your code to be 7-compatible.
  • Move the Service Builder code to OSGi modules.
  • Add the API jars to the legacy portlet's WEB-INF/lib directory for those portlets which will be consuming the services.
  • Add the module.framework.web.generator.excluded.paths entry to your portal-ext.properties to strip the jar during WAR->WAB conversion.

If you follow these recommendations your legacy portlet wars will be able to leverage the services, any new OSGi-based portlets (or JSP fragments or ...) will be able to access the services, and your deployment impact for changes will be minimized.

My code for all of this is available on GitHub:

Note that the original and upgraded code are actually in the same repo; they are just in different branches.

Good Luck!

Update

After thinking about this some more, there's actually another path that I did not consider...

For the Service Builder portlet service jar, I indicated you'd need to include this as a dependency on every module that needed to use the service, but I neglected to consider the global service jar option that we used for Liferay 6.x...

So you can keep the Service Builder implementation in the portlet, but move the service jar to the global class loader (Tomcat's lib/ext directory). Remember that with this option there can only be one service jar, the global one, so no other portlet war or module (including the Service Builder portlet war) can have a service jar. Also remember that a global service jar can only be updated while Tomcat is down.

The final step is to add the packages for the service interfaces to the module.framework.system.packages.extra property in portal-ext.properties. You want to add the packages to the current list defined in portal.properties, not replace the list with just your service packages.

Before starting Tomcat, you'll want to add the exception, model and service trio to the list. For the school service example, this would be something like:

module.framework.system.packages.extra=\
  <ALL DEFAULT VALUES COPIED IN>,\
  com.liferay.school.exception,\
  com.liferay.school.model,\
  com.liferay.school.service

This will make the contents of the packages available to the OSGi global class loader so, whether bundle or WAB, they will all have access to the interfaces and static classes.

This has a little bit of a deployment process change to go with it, but you might consider this the least impactful change of all. We tend to frown on the use of the global class loader because it may introduce transitive dependencies and does not support hot deployable updates, but this option's lower development cost might offset those concerns.

Liferay Design Patterns - Multi-Scoped Data/Logic

Technical Blogs February 17, 2017 By David H Nebinger

Pattern: Multi-Scoped Data/Logic

Intent

The intent for this pattern is to support data/logic usage in multiple scopes. Liferay defines the scopes Global, Site and Page, but from a development perspective scope refers to Portal and individual OSGi Modules. Classic data access implementations do not support multi-scope access because of boundaries between the scopes.

The Multi-Scoped Data/Logic Liferay Design Pattern's intent is to define how data and logic can be designed to be accessible from all scopes in Liferay, whether in the Portal layer or any other deployed OSGi Modules.

Also Known As

This pattern is implemented using the Liferay Service Builder tool.

Motivation

Standard ORM tools provide access to data for servlet-based web applications, but they are not a good fit in the portal because of the barriers between modules in the form of class loader and other kinds of boundaries. If a design starts from a standard ORM solution, it will be restricted to a single development scope. This may seem acceptable for an initial design, but in the portal world most single-scoped solutions eventually need to be changed to support multiple scopes. As the standard tools have no support for multiple scopes, developers will need to hand-code bridge logic to add multi-scope support, and any hand coding increases development time, bug potential, and time to market.

The motivation for Liferay's Service Builder tool is to provide an ORM-like tool with built-in support for multi-scoped data access and business logic sharing. The tool transforms an XML-based entity definition file into layered code to support multiple scopes and is used throughout business logic creation to add multi-scope exposure for the business logic methods.
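
For illustration, the XML-based entity definition is quite compact; a minimal, hypothetical service.xml entity might look like this:

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
  "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<service-builder package-path="com.example.school">
  <namespace>School</namespace>
  <entity name="Course" local-service="true" remote-service="true" uuid="true">
    <!-- Primary key -->
    <column name="courseId" type="long" primary="true" />

    <!-- Scope and audit columns maintained by the service layer -->
    <column name="groupId" type="long" />
    <column name="companyId" type="long" />
    <column name="createDate" type="Date" />
    <column name="modifiedDate" type="Date" />

    <column name="name" type="String" />
  </entity>
</service-builder>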

Additionally the tool is the foundation for adding portal feature support to custom entities, including:

  • Auto-populated entity audit columns.
  • Asset framework support (comments, rankings, Asset Publisher support, etc).
  • Indexing and Search support.
  • Model listeners.
  • Workflow support.
  • Expando support.
  • Dynamic Query support.
  • Automagic JSON web service support.
  • Automagic SOAP web service support.

You're not going to get this kind of integration from your classic ORM tool...

And with Liferay 7 CE / Liferay DXP, additionally you also get an OSGi-compatible API and service bundle implementation ready for deployment.

Applicability

IMHO Service Builder applies when you are dealing with any kind of multi-scoped data entities and/or business logic; it also applies if you need to add any of the indicated portal features to your implementation.

Participants

The participants in this pattern are:

  • An XML file defining the entities.
  • Spring configuration files.
  • Implementation class methods to add business logic.
  • Service consumers.

The participants are used by the Service Builder tool to generate code for the service implementation details.

Details for working with Service Builder are covered in the following sections:

Collaboration

Service Builder uses the entity definition XML file to generate the bulk of the code. Custom business methods are added to the ServiceImpl and LocalServiceImpl classes for the custom entities, and Service Builder will include them in the service API.

Consequences

By using Service Builder and generating entities, there is no real downside in the portal environment. Service Builder will generate an ORM layer and provide integration points for all of the core Liferay features.

There are three typical arguments used by architects and developers for not using Service Builder:

  • It is not a complete ORM. This is true; it does not support everything a full ORM does. It doesn't support many-to-many relationships, and it doesn't handle automatic parent-child relationships in one-to-many. All that means is that the code to handle many-to-many and even some one-to-many relationships will need to be hand-coded.
  • It still uses old XML files instead of newer Annotations. This is also true, but this is more a reflection of Liferay generating all of the code including the interfaces. With Liferay adding portal features based upon the XML definitions, using annotations would require Liferay to modify the annotated interface and cause circular change effects.
  • I already know how to develop using X, and my project deadlines are too short to learn a new tool like Service Builder. Yes, there is a learning curve with Service Builder, but it is nothing compared to the mountains of work it will take to get X working correctly in the portal, and some Liferay features will just not be options for you without Service Builder's generated code.

All of these arguments are weak in light of what you get by using Service Builder.

Sample Usage

Service Builder is another case of Liferay eating its own dogfood. The entire portal is based on Service Builder for all of the entities in all of the portlets, the Liferay entities, etc.

Check out any of the Liferay modules from simple cases like Bookmarks through more complicated cases such as Workflow or the Asset Publisher.

Conclusion

Service Builder is a must-use if you are going to do any integrated portal development. You can't build the portal features into your portlets without Service Builder usage.

Seriously. You have no other choice. And I'm not saying this because I'm a fanboy or anything; I'm coming from a place of experience. My first project on Liferay dealt with a number of portlets using a service layer; I knew Hibernate but didn't want to take time out to learn Service Builder. That was a terrible mistake on my part. I never did deal with the multi-scoping well at all, and never got the kind of Liferay integration that would have been great to have. Fortunately it was not a big problem to have made such a mistake, but I learned from it and use Service Builder all the time now in the portal.

So I share this experience with you in hopes that you too can avoid the mistakes I made. Use Service Builder for your own good!

Liferay Design Patterns - Flexible Entity Presentation

Technical Blogs February 15, 2017 By David H Nebinger

Introduction

So I'm going to start a new type of blog series here covering design patterns in Liferay.

As we all know:

In software engineering, a software design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. - Wikipedia

In Liferay, there are a number of APIs and frameworks used to support Liferay-specific general reusable solutions. But since we haven't defined them in a design pattern, you might not be aware of them and/or if/when/how they could be used.

So I'm going to carve out some time to write about some "design patterns" based on Liferay APIs and frameworks. Hopefully they'll be useful as you go forward designing your own Liferay-based solutions.

Being a first stab at defining these as Liferay Design Patterns, I'm expecting some disagreement on simple things (that design pattern name doesn't seem right) as well as some complex things... Please go ahead and throw your comments at me and I'll make the necessary changes to the post. Remember, this isn't for me; this is for you.

And I must add that yes, I'm taking great liberties in using the phrase "design pattern". Most of the Liferay APIs and frameworks I'm going to cover are really combinations of well-documented software design patterns (in fact Liferay source actually implements a large swath of creational, structural and behavioral design patterns in such purity and clarity they are easy to overlook).

These blog posts may not be defining clean and simple design patterns as specified by the Gang of Four, but they will try to live up to the ideals of true design patterns. They will provide general, reusable solutions to commonly occurring problems in the context of a Liferay-based system.

Ultimately the goal is to demonstrate how by applying these Liferay Design Patterns that you too can design and build a Liferay-based solution that is rich in presentation, robust in functionality and consistent in usage and display. By providing the motivation for using these APIs and frameworks, you will be able to evaluate how they can be used to take your Liferay projects to the next level.

Pattern: Flexible Entity Presentation

Intent

The intent of the Flexible Entity Presentation pattern is to support a dynamic templating mechanism that supports runtime display generation instead of a classic development-time fixed representation, further separating view management from portlet development.

Also Known As

This pattern is known as, and implemented using, the Application Display Template (ADT) framework in Liferay.

Motivation

The problem with most portlets is that the code used to present custom entities is handled as a development-time concern; the UI specifications define how the entity is shown on the page and the development team delivers a solution to satisfy the requirements.  Any change to specifications during development results in a change request for the development team, and post development the change represents a new development project to implement presentation changes.

The inflexibility of the presentation impacts time to market, delivery cycles and development resource allocation.

The Flexible Entity Presentation pattern's motivation is to support a user-driven mechanism to present custom entities in a dynamic way.

The users and admins section on ADTs from dev.liferay.com starts:

The application display template (ADT) framework allows Liferay administrators to override the default display templates, removing limitations to the way your site’s content is displayed.

ADTs allow the display of an entity to be handled by a dynamic template instead of by static code. Don't get hung up on the word content here; it's not just content as in web content, but more of a generic reference to any HTML content your portlet needs to render.

Liferay identified this motivation when dealing with client requests for product changes to adapt presentation in different ways to satisfy varying client requirements. Liferay created and uses the ADT framework extensively in many of the OOTB portlets, from web content through breadcrumbs. By leveraging ADTs, Liferay defines the entities (e.g., a Bookmark), but the presentation can be overridden by an administrator with an ADT to show the details according to their requirements, all without a development change by Liferay or a code customization by the client.

Liferay eats its own dogfood by leveraging the ADT framework, so this is a well tested framework for supporting dynamic presentation of entities.

When you look at many of the core portlets, they now support ADTs to manage their display aspects, since tweaking an ADT is much simpler than creating a JSP fragment bundle, a new custom portlet, or some crazy JS/CSS fu in order to effect a presentation change. This flexibility is key for supporting changes in the Liferay UI without extensive code customizations.

Applicability

The use of ADTs applies when the presentation of an entity is subject to change. Since admins will use ADTs to manage how the entities are displayed, the presentation does not need to be finalized before development starts. When the ADT framework is incorporated in the design out of the gate, flexibility in the presentation is baked in, and the door is open to future presentation changes without code development, testing and deployment.

So there are some fairly clear use cases to apply ADTs:

  • The presentation of the custom entities is likely to change.
  • The presentation of the custom entities may need to change based upon context (list view, single view, etc.).
  • The presentation should not be a fixed, development-time aspect of the portlet.
  • The project is a Liferay Marketplace application and presentation customization is necessary.

Notice the theme here, the change in presentation.

ADTs would either not apply or would be overkill for a static entity presentation, one that doesn't benefit from presentation flexibility.

Participants

The participants in this pattern are:

  • A custom entity.
  • A custom PortletDisplayTemplateHandler.
  • ADT Resource Portlet Permissions.
  • Portlet Configuration for ADT Selection.
  • Portlet View Leveraging ADTs.

The participants work together with the Liferay ADT framework to support a dynamic presentation for the entity. The implementation details for the participants are covered here: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/implementing-application-display-templates.
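
To make the template handler participant a bit more concrete, here's a rough sketch of what one might look like, following the pattern from the tutorial linked above. The Course entity, the CoursePortletKeys constant and the names used here are all hypothetical, and imports are omitted as in the other snippets:

@Component(
  immediate = true,
  property = { "javax.portlet.name=" + CoursePortletKeys.COURSE },
  service = TemplateHandler.class
)
public class CoursePortletDisplayTemplateHandler extends BasePortletDisplayTemplateHandler {

  @Override
  public String getClassName() {
    // the custom entity the templates will be rendering
    return Course.class.getName();
  }

  @Override
  public String getName(Locale locale) {
    // the label shown in the ADT template editor UI
    return "Course";
  }

  @Override
  public String getResourceName() {
    // the portlet resource the ADT permissions are checked against
    return CoursePortletKeys.COURSE;
  }

  @Override
  public Map<String, TemplateVariableGroup> getTemplateVariableGroups(
      long classPK, String language, Locale locale) throws Exception {

    // start with the default variable groups; this is where you would expose
    // the entity's fields so they show up in the template editor's palette
    return super.getTemplateVariableGroups(classPK, language, locale);
  }
}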

Collaboration

The custom entity is defined in the Service Builder layer (normally).

The PortletDisplayTemplateHandler implementation is used to feed meta information about the fields and descriptions of the entity to the ADT framework's Template Editor UI. The meta information provided will generally be tightly coupled to the custom entity, in that changes to the entity will usually result in changes to the PortletDisplayTemplateHandler implementation.

The ADT resource portlet permissions must be enabled for the portlet so administrators will be able to choose the display template and edit display templates for the entity.

The portlet configuration panel is where the administrator will choose between display templates, and the portlet view will leverage Liferay's ADT tag library to inject the rendered template into the portlet view.

Consequences

By moving to an ADT-based presentation of the entity, the template engine (FreeMarker) will be used to render the view.

The template engine will impose a performance cost in supporting the flexible presentation (especially if someone creates a bad template). Implementors should strike a balance between beneficial flexibility and overuse of the ADT framework.

Sample Usage

For practical examples, consider a portal based around a school.  Some common custom entities would be defined for students, rooms, teachers, courses, books, etc.

Consider how often the presentation of the entities may need to change and weigh that against whether the changes are best handled in code or in template.

A course or a teacher entity would likely benefit from ADTs: the course presentation might need to change as a brochure-like view evolves, and the teacher presentation might change as new details such as accreditation or course history are added.

The students and rooms may not benefit from ADTs if the presentation is going to remain fairly static.  These entities might go through future presentation changes but it may be more acceptable to approach those as development projects that are planned and coordinated.

Known Uses

The best known uses come from Liferay itself. The OOTB portlets which leverage ADTs are:

  • Asset Publisher
  • Blogs
  • Breadcrumbs
  • Categories Navigation
  • Documents and Media
  • Language Selector
  • Navigation Menu
  • RSS Publisher
  • Site Map
  • Tags Navigation
  • Web Content Display
  • Wiki

This provides many examples of when to use ADTs, the obvious advantage of ADTs (customized displays w/o additional coding), and even hints at where ADTs may not work well (e.g., the users/orgs control panel, polls, ...).

Conclusion

Well, that's pretty much it for this post. I'd encourage you to go and read the section for styling apps with ADTs as it will help solidify the motivations to incorporate the ADT framework into your design. When you understand how an admin would use ADTs to create a flexible presentation of the Liferay entities, it should help to highlight how you can achieve the same flexibility for your custom assets.

When you're ready to realize these benefits, you can refer to the implementing ADTs page to help with your implementation.

 

Adding Dependencies to JSP Fragment Bundles

Technical Blogs February 8, 2017 By David H Nebinger

Recently I was lamenting how I felt that JSP fragment bundles could not introduce new dependencies and therefore the JSP overrides could really not do much more than reorganize or add/remove already supported elements on the page.

For me, this is like only 5% of the use cases for a JSP override. I am much more likely to need to add new functionality that the original portlet developers didn't need to consider. I need to be able to add new services and use those in the JSP to retrieve entities, and sometimes just do completely different things w/ the JSP that perhaps were never imagined.

The first time I tried a JSP override to do something similar with a JSP fragment bundle, I was disappointed. My fragment bundle would get to status "Installed" in GoGo, but would go no further because it had unresolved references.  It just couldn't get to the resolved stage.

How could I make the next great JSP fragment override bundle if I couldn't access anything outside the original set of services?

My good friend and coworker Milen Dyankov heard my rant and offered the following insight:

According to the spec:

... requirements and capabilities in a fragment bundle never become part of the fragment's Bundle Wiring; they are treated as part of the host's requirements and capabilities when the fragment is attached to that host.

As for providing declarative services in fragments, again the spec is clear:

A Service-Component manifest header specified in a fragment is ignored by SCR. However, XML documents referenced by a bundle's Service-Component manifest header may be contained in attached fragments.

In other words, if your host has Service-Component: OSGI-INF/*.xml then your fragment can put a new XML file in the OSGI-INF folder and it will be processed by SCR.

Now sometimes Milen seems to forget that I'm just a mere mortal and not the OSGi guru he is, so while this was perfectly clear to him, it left me wondering if there was anything here that would be my lever to lift the lid and peek inside the JSP fragment bundle realm.

The remainder of this blog is the result of that epic journey.

Service Component Runtime

The SCR is the Apache Felix implementation of the OSGi Declarative Services specification. It's responsible for handling the service registry and lifecycle management of DS components within the OSGi container, starting/stopping the services as bundles are started/stopped, wiring up @Reference dependencies in DS components, etc.

Since the fragment bundle handling comes from the Apache Felix implementation, it's not really a Liferay component and certainly not one that would lend itself to an override in the normal Liferay sense. Anything we do here to access services in the JSP fragment bundles is going to have to go through supported OSGi mechanisms or we won't get anywhere.

So the key to Milen's quote above is the "XML documents referenced by a bundle's Service-Component manifest header may be contained in attached fragments" part. The rough translation here - we might be able to provide an override XML file for one of the host bundle's components and possibly inject new dependencies. Yes, as a rough translation it really assumes that you know more than you might (and especially more than what I did), so let's divert for a second.

Service Component Manifest XML Documents

So the BND tool that we all know and love actually does many, many things for us when it builds a bundle jar. One of those tasks is to generate the service component manifest and all of the XML documents. The contents of all of these files are basically the metadata the SCR will need for dependency resolution, component wiring, etc.

Any time you annotate your Java class with @Component you are indicating it is a DS service. When BND is processing the annotations, it's going to add an entry to the Service Component Manifest (so the SCR will process the component during bundle start). The Service Component Manifest is the Service-Component key in the bundle's MANIFEST.MF file, and it lists one XML file in the OSGI-INF folder for each component.

These XML files define the component for the SCR, specifying the java class that implements the component, the service it provides, all reference details and properties for the component.

So if you take any bundle jar you have, expand it and check out the MANIFEST.MF file and look for the Service-Component key. You'll find there's one OSGI-INF/com.example.package.JavaClass.xml file (where it is your package and class) for each component defined in your bundle.

If you open one of the XML files, you can see the structure for a component definition, and it is easy to see how things that you set in the @Component annotation attributes have been mapped into the XML file.

Now that we know about the manifest and XML docs, we can get back to our regularly scheduled program.

Overriding An SCR XML

So remember, we should be able to override one of these files because "XML documents referenced by a bundle's Service Component manifest header may be contained in attached fragments."

This hints that we cannot add a new file, but we could override an existing one.

So to me, this is the key question - can we create an override XML file that introduces a new dependency, one that really cannot be directly bound to the original component (since we can't modify the class), so that at least the bundle would have a new dependency and the JSP would be happy?

Well, I actually used all of this newfound knowledge to work up a test and tried it out, but it failed. It didn't make any sense...

Return To The Jedi

"Milen, my SCR XML override isn't working."

"Overrides won't work because the XML files are loaded by the class loader, and the host bundle comes before the fragment bundle so SCR ignores the override.  You can't override the XML, you can only add a new one to the fragment bundle."

"But Milen you said I couldn't add new XML files, only those listed in the Service-Component in the MANIFEST.MF file of the host bundle will be used by SCR during loads."

"Change your Service-Component key to use a wildcard like OSGI-INF/* and SCR will load the ones from the host bundle as well as the fragment bundle. It's considered bad practice, but it would work."

"I can't do that, Milen, I'm doing a JSP fragment bundle on a Liferay host bundle, I can't change the Service-Component manifest value and, if I could, I wouldn't need to do any of this fragment bundling in the first place because I would just apply my change directly in the host bundle and be done with it."

"Well then the SCR XML override isn't going to work. Let's try something else..."

Example Project

After working out a new plan of attack, I was going to need an example project to test this all out and verify that it was going to work. The example must include a JSP fragment bundle override and introduce another previously unused service. I don't really want to do any more coding than necessary here, so let's pick something to do out of the portal JSPs and services.

Requirement: On the login form, display the current count of membership requests.

Pretty simple; maybe it's part of some automated membership request handling being added to the portal, or an attempt to show how popular the site is by displaying a count of how many people are waiting to get in.

But it gives us our goal here: we want to access the MemberRequestLocalService inside of the login.jsp page of the login-web host bundle. The service is defined in the com.liferay.invitation.invite.members.api bundle and is not currently connected in any way with the login web module.

Creating The Fragment Bundle

I'll continue my pattern of using blade on the command line, but of course you're free to leverage tools provided by your IDE.

blade create -t fragment -h com.liferay.login.web -H 1.1.4 login-web-fragment

Remember to use the host bundle version from your local portal so you'll override the right one and keep OSGi/SCR happy.

Copy in the login.jsp page from the portal source. After the include of init.jsp, add the following lines:

<%@ page import="com.liferay.invitation.invite.members.service.MemberRequestLocalService" %>

<%
  // get the service from the render request attributes
  MemberRequestLocalService memberRequestLocalService = (MemberRequestLocalService)
    renderRequest.getAttribute("MemberRequestLocalService");
	
  // get the current count
  int currentRequestCount = memberRequestLocalService.getMemberRequestsCount();
	
  // display it somewhere on the page...
%>

Very simple. It doesn't actually display anything yet, but that's not the point of this blog.

Now if you build and deploy this guy as-is and check it in GoGo, you'll see its state is "Installed". This is not good, as it is not where it needs to be for the JSP fragment to work.

Adding The Dependency

So we have to go back to how OSGi handles fragment bundles... When OSGi is loading the fragment, the MANIFEST.MF items from the fragment bundle are effectively merged with those from the host bundle.

For me, that means I have to list my dependency in build.gradle and trust BND will add the right Import-Package declaration to the final MANIFEST.MF file.

Then, when the framework is loading my fragment bundle, my Import-Package from the fragment will be added to the Import-Package of the host bundle and all should be good.

JSP fragment bundles created by blade do not have dependencies listed in the build.gradle file (in fact it is completely empty), so let's add the dependency stanza:

dependencies {
  compile group: "com.liferay", name: "com.liferay.invitation.invite.members.api", version: "2.1.1"
}

We only need to add the dependency that is missing from the host bundle, the one with the service we're going to pull in.

After building, you can unpack the jar and check the MANIFEST.MF file and see that it does now have the Import-Package declaration, so if the framework does actually do the merge while loading, we should be in business.

Deploy your new JSP fragment bundle and if you check the bundle status in GoGo, you'll see it is now "Resolved".

Sweet!

Injecting The Reference

Not so fast. If you try to log into your portal, you'll get the "portlet is temporarily unavailable" message and the log file will have a NullPointerException and a big stack trace. We've totally broken the login portlet because login.jsp depends upon the service but it is not set.

If you check the JSP change I shared, I'm pulling the service instance from the render request attributes. But how the heck does it get in there when we cannot change the host bundle to inject it in the first place?

We're going to do this using another OSGi module with a new component that implements the PortletFilter interface, specifically a RenderFilter.

@Component(
  immediate = true,
  property = {
      "javax.portlet.name=" + LoginPortletKeys.LOGIN,
      "javax.portlet.name=" + LoginPortletKeys.FAST_LOGIN
  },
  service = PortletFilter.class
)
public class LoginRenderFilter implements RenderFilter {
  @Override
  public void doFilter(RenderRequest request, RenderResponse response, FilterChain chain) throws IOException, PortletException {
    // set the request attribute so it is available when the JSP renders
    request.setAttribute("MemberRequestLocalService", _memberRequestLocalService);

    // let the filter chain do its thing
    chain.doFilter(request, response);
  }

  @Override
  public void init(FilterConfig filterConfig) throws PortletException { }

  @Override
  public void destroy() { }

  @Reference(unbind = "-")
  protected void setMemberRequestLocalService(final MemberRequestLocalService memberRequestLocalService) {
    _memberRequestLocalService = memberRequestLocalService;
  }

  private MemberRequestLocalService _memberRequestLocalService;
}

So here we are intercepting the render request using the portlet filter. We inject the service into the request attributes before invoking the filter chain to complete the rendering; that way when the JSP page from the fragment bundle is used, the attribute will be set and ready.

Build and deploy your new component. Once it starts, refresh your browser and try to log in. You should now see the login portlet again. Not that we did anything fancy here, we're just proving that the service reference is not null and is available for the JSP override to use.

Conclusion

So we took a roundabout path to get here, but we've seen how we can create a JSP fragment bundle to override portal JSPs, how to add a dependency to the fragment bundle so it gets merged into the host bundle's dependencies, and how to create a portlet filter bundle to inject the service reference into the request attributes so it is available to the JSP page.

Two different bundle jars, but it certainly gets the job done.

Also along the way we learned some things about what the SCR is, how fragment bundles work, as well as some of the internals of our OSGi bundle jars and the role that BND plays in their construction.  Useful information, IMHO, that can help you while learning Liferay 7 CE/Liferay DXP.

This now opens some new paths for you to pursue for your JSP fragment bundles.  Just follow the outline here and you should be good to go.

Find the project code for the blog here: https://github.com/dnebing/jsp-fragment

Building In Upgrade Support

Technical Blogs February 7, 2017 By David H Nebinger

One of the things that I never really used in 6.x was the Liferay upgrade APIs.

Sure, I knew about the Release table and stuff, but it just seemed kind of cumbersome: not only did you have to build out your code, but you also had to track your release and support an upgrade process on top of all of that. I mean, I'm a busy guy, and once this project is done I'm already behind on the next one.

When you start perusing the Liferay 7 source code, though, one thing you'll notice is that there is upgrade logic all over the place. Pretty much every portlet module includes an upgrade process to support upgrading from version "0.0.0" to version "1.0.0" (this is the upgrade process to change from 6.x to the new 7.x module version).

And you'll even find that some modules include upgrades from versions "1.0.0" to "1.0.1" to support the independent module versioning that was the promise of OSGi.

So now that I'm trying to exclusively build modules, I'm thinking it's an appropriate time to dig into the upgrade APIs and see how they work and how I can incorporate upgrades into my modules.

The New Release

So previously we'd have to manage the Release entity ourselves, but Liferay has graciously taken that over for us. Your bnd.bnd file, where you specify your module version, now becomes the foundation of your Release handling. And just like the portal modules, the absence of a Release record is technically version "0.0.0", so now you can handle first-time deployment stuff too.

The Upgrade API

Before diving into implementation, let's take a little time to look over some of the classes and interfaces Liferay provides as part of the Upgrade API. We'll start with the classes from the com.liferay.portal.kernel.upgrade package:

  • UpgradeStep - This is the main interface that must be implemented for all upgrade logic. When registering an upgrade, an ordered list of UpgradeSteps is provided and the upgrade process will execute these in order to complete an upgrade.
  • DummyUpgradeStep - The simplest of all concrete implementations of the UpgradeStep interface, this upgrade step does nothing. But it is a useful step to use for handling new deployments.
  • UpgradeProcess - This is a handy abstract base class to use for all of your upgrade steps. It implements the UpgradeStep interface and has support for database-specific alterations should you need them.
  • Base* - These are abstract base classes for upgrade steps typically used by the portal for managing upgrades from portlet wars to new module-based portlets. For example, the BaseUpgradePortletId class is used to support fixing the portlet ids from older id-based portlet names to the new OSGi portlet ids based on class name. These classes are good foundations if you are building an upgrade process to move your own portlets from wars to bundles or want to handle upgrades from 6.x compatibility to 7.x.
  • util.* - For those wanting to support a database upgrade, the com.liferay.portal.kernel.upgrade.util package contains a bunch of support classes to assist with altering tables, columns, indexes, etc.

Registering The Upgrade

All upgrade definitions need to be registered. That's pretty easy, of course, when one is using OSGi. To register an upgrade, you just need a component that implements the UpgradeStepRegistrator interface.

But first a word about code structure...

So Liferay's recommendation is to use a single java package to contain all of your upgrade code, typically a package named upgrade that is part of your portlet web module and sits at the same level as your portlet package (if you have one).

So if your portlet code is in com.example.myapp.portlet, you're going to have a com.example.myapp.upgrade package.

In here you'll have sub-packages for all upgrade versions supported, so you might have "v1_0_0" and "v1_0_1", etc.  Upgrade step implementations will be in the subpackage for the upgrade level they support.

So now we have enough details to start building out the upgrade definition. Start by updating your build.gradle file to introduce a new dependency:

  compileOnly group: "com.liferay", name: "com.liferay.portal.upgrade", version: "2.3.0"

This pulls in some utility classes we'll be using below.

Let's assume we're building a brand new module and just want to get a placeholder upgrade definition in place. This is quite easily done by adding a single component to our project:

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class ExampleUpgradeStepRegistrator implements UpgradeStepRegistrator {
  
  @Activate
  protected void activate(final BundleContext bundleContext) {
    _bundleName = bundleContext.getBundle().getSymbolicName();
  }
  
  @Override
  public void register(Registry registry) {

    // for first time deployments this will start by creating the initial release record
    // with the initial version of 1.0.0.
    // Also use the dummy upgrade step since we're not doing anything in this upgrade.
    registry.register(_bundleName, "0.0.0", "1.0.0", new DummyUpgradeStep());
  }
  
  private String _bundleName;
}

So that's pretty much it. Including this class in your module will result in it registering a Release with version 1.0.0, and you have nothing else to worry about.

When you're ready to release version 1.1.0 of your component, things get a little more fun.

In your v1_1_0 package you'll create classes that implement the UpgradeStep interface, typically by extending the UpgradeProcess abstract base class or perhaps a more appropriate class from the table above. Either way, you'll define separate classes to handle different aspects of the upgrade.
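
As a rough sketch (the class, table and column names here are completely hypothetical), an upgrade step for a simple schema change might look like this, leaning on the BaseDBProcess helper methods covered later in this post:

public class UpgradeMyTable extends UpgradeProcess {

  @Override
  protected void doUpgrade() throws Exception {
    // only alter the table if it still has the old schema
    if (!hasColumn("Example_MyEntity", "newColumn")) {
      runSQL("alter table Example_MyEntity add newColumn LONG");
    }
  }
}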

We'd then come back to the UpgradeStepRegistrator implementation to add the upgrade steps by including another registry call:

    registry.register(_bundleName, "1.0.0", "1.1.0", new UpgradeMyTableStep(), new UpgradeMyDataStep(), new UpgradeMyConfigAdmin());

When processing this upgrade definition, the Upgrade service will invoke the upgrade steps in the order provided. So obviously you should take care to order your steps such that each one can succeed given only the steps that ran before it, and not depend on subsequent steps.

Database Upgrades

So one of the common issues with Service Builder modules is that the tables will be created when you first deploy the module to a new environment, but updates will not be processed. I think we could argue on one side that it is a bug or on the other side that expecting Service Builder to track data model changes is far outside of the tool's responsibility.

I'm not going to argue it either way; we are where we are, and solving from this point is all I'm really worried about.

As I previously stated, the com.liferay.portal.kernel.upgrade.UpgradeProcess is going to be the perfect base class to accommodate a database update.

UpgradeProcess extends com.liferay.portal.kernel.dao.db.BaseDBProcess which brings the following methods:

  • hasTable() - Determines if the listed table exists.
  • hasColumn() - Determines if the table has the listed column.
  • hasColumnType() - Determines if the listed column in the listed table has the provided type.
  • hasRows() - Determines if the listed table has rows (in order to provide logic to migrate data during an upgrade).
  • runSQL() - Runs the given SQL statement against the database.

UpgradeProcess itself has two upgradeTable() methods, both of which add a new table to the database. The difference between the two: one is simple and will create a table based on the name and a multidimensional array of column detail objects; the second has additional arguments for fixed SQL for the table, indexes, etc.

Additionally UpgradeProcess has a number of inner support classes to facilitate table alterations:

  • AlterColumnName - A class to encapsulate details to change a column name.
  • AlterColumnType - A class to encapsulate details to change a column type.
  • AlterTableAddColumn - A class to encapsulate details to add a new column to a table.
  • AlterTableDropColumn - A class to encapsulate details to drop a column from a table.

Let's write a quick upgrade method to add a column, change another column's name and another column's type.  To facilitate this, our class will extend UpgradeProcess and will need to implement a doUpgrade() method:

public void doUpgrade() throws Exception {
  // create all of the alterables
  Alterable addColumn = new AlterTableAddColumn("COL_NEW");
  Alterable fixColumn = new AlterColumnType("COL_NEW", "LONG");
  Alterable changeName = new AlterColumnName("OLD_COL_NAME", "NEW_COL_NAME");
  Alterable changeType = new AlterColumnType("ENTITY_PK", "LONG");

  // apply the alterations to the MyEntity Service Builder entity.
  alter(MyEntity.class, addColumn, fixColumn, changeName, changeType);
  
  // done
}

So the alterations are based on your ServiceBuilder entity but otherwise you don't have to worry much about SQL to apply these kinds of alterations to your entity's table.

Conclusion

Using just what has been provided here, you can integrate a smooth and automatic upgrade process into your modules, including upgrading your Service Builder's entity backing tables since SB won't do that for you.

Where can you find more details on doing some nitty-gritty upgrade activities? Why, the Liferay source of course.  Here's a fairly complex set of upgrade details to start your review: https://github.com/liferay/liferay-portal/tree/master/modules/apps/knowledge-base/knowledge-base-service/src/main/java/com/liferay/knowledge/base/internal/upgrade

Enjoy!

Postscript

My good friend and coworker Nathan Shaw forwarded me a reference that I think is worth adding here.  Thanks Nathan!

https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/creating-an-upgrade-process-for-your-app

Liferay 7 CE/Liferay DXP Scheduled Tasks

Technical Blogs February 6, 2017 By David H Nebinger

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface and then add the details in the <scheduler-entry /> sections in your liferay-portlet.xml file and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference anywhere on dev.liferay.com, so I thought I'd whip up a quick blog on them.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we schedule a job, we should first discuss the supported StorageTypes. Liferay has three:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type, one that you'll typically want to shoot for. This storage type combines two aspects, MEMORY and CLUSTERED. For MEMORY, that means the job information (next run, etc.) is only held in memory and is not persisted anywhere. For CLUSTERED, that means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation discussed here: http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html).

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (e.g., if you're running a report to generate a PDF and email it, you wouldn't want your 4-node cluster running the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might want to elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. However, if it is not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to have to do when you start scaling up your environment is trying to figure out why some previous regular tasks are not running as they used to when you had a single server. 

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes are identifying the StorageType to use, they start with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl implements SchedulerEntry, StorageTypeAware {

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
    super();

    _schedulerEntry = schedulerEntry;

    // use the same default that Liferay uses.
    _storageType = StorageType.MEMORY_CLUSTERED;
  }

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   * @param storageType
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
    super();

    _schedulerEntry = schedulerEntry;
    _storageType = storageType;
  }

  @Override
  public String getDescription() {
    return _schedulerEntry.getDescription();
  }

  @Override
  public String getEventListenerClass() {
    return _schedulerEntry.getEventListenerClass();
  }

  @Override
  public StorageType getStorageType() {
    return _storageType;
  }

  @Override
  public Trigger getTrigger() {
    return _schedulerEntry.getTrigger();
  }

  public void setDescription(final String description) {
    _schedulerEntry.setDescription(description);
  }
  public void setTrigger(final Trigger trigger) {
    _schedulerEntry.setTrigger(trigger);
  }
  public void setEventListenerClass(final String eventListenerClass) {
    _schedulerEntry.setEventListenerClass(eventListenerClass);
  }
  
  private SchedulerEntryImpl _schedulerEntry;
  private StorageType _storageType;
}

Now you can use this class to wrap a current SchedulerEntryImpl yet include the StorageTypeAware implementation.

Defining The Scheduled Task

NOTE: If you're using DXP FixPack 14 or later or Liferay 7 CE GA4 or later, jump down to the end of the blog post for a necessary change due to the deprecation of the BaseSchedulerEntryMessageListener class.

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getEventListenerClass();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and remember that cron expressions are evaluated in the timezone your app server is configured to run in).

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure, there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job running in parallel at all.
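
As a trivial sketch of the idea (this is single-JVM only and purely illustrative; a clustered environment would need a shared lock such as a database row), the listener could simply skip a fire time when the previous run hasn't finished:

private static final AtomicBoolean _running = new AtomicBoolean(false);

@Override
protected void doReceive(Message message) throws Exception {
  // skip this fire time if the previous run is still in progress
  if (!_running.compareAndSet(false, true)) {
    _log.warn("Previous run still in progress, skipping this execution.");

    return;
  }

  try {
    // ... do the real scheduled work here ...
  }
  finally {
    // always release the guard, even if the work threw an exception
    _running.set(false);
  }
}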

Just something to keep in mind...

Conclusion

So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

Update 05/18/2017

So I was contacted today about the use of the BaseSchedulerEntryMessageListener class as the base class for the message listener. Apparently this class has become deprecated as of DXP FP 13 as well as the upcoming GA4 release.

The only guidance I was given for updating the code happens to be the same guidance that I give to most folks wanting to know how to do something in Liferay - find an example in the Liferay source.

After reviewing various Liferay examples, we will need to change the parent class for our implementation and modify the activation code.

So now our message listener class is:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getClass().getName();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    _schedulerEntryImpl = new SchedulerEntryImpl(getClass().getName(), jobTrigger);
    _schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(_schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    _schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, _schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(_schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (_schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) _schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
  private SchedulerEntryImpl _schedulerEntryImpl = null;
}

That's all there is to it, but it's best to avoid the deprecated class since you never know when deprecated will become disappeared...

Liferay/OSGi Annotations - What they are and when to use them

Technical Blogs February 1, 2017 By David H Nebinger

When you start reviewing Liferay 7 CE/Liferay DXP code, you run into a lot of annotations in a lot of different ways.  They can all seem kind of overwhelming when you first happen upon them, so I thought I'd whip up a little reference guide, kind of explaining what the annotations are for and when you might need to use them in your OSGi code.

So let's dive right in...

@Component

So in the OSGi world this is the all-important "Declarative Services" annotation defining a service implementation. DS is an aspect of OSGi for declaring a service dynamically, and it has a slew of plumbing in place to allow other components to get wired to the component.

There are three primary attributes that you'll find for this annotation:

  • immediate - Often set to true, this will ensure the component is started right away and not wait for a reference wiring or lazy startup.
  • property - Used to pass in a set of OSGi properties to bind to the component.  The component can see the properties, but more importantly other components will be able to see them too.  These properties help to configure the component but are also used to support filtering of components.
  • service - Defines the service that the component implements.  Sometimes this is optional, but often it is mandatory to avoid ambiguity on the service the component wants to advertise.  The service listed is often an interface, but you can also use a concrete class for the service.

When are you going to use it?  Whenever you create a component that you want or need to publish into the OSGi container.  Not all of your classes need to be components.  You'll declare a component when code needs to plug into the Liferay environment (e.g., adding a product nav item, defining an MVC command handler, overriding a Liferay component) or to plug into your own extension framework (see my recent blog on building a healthcheck system).
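
As a simple, hypothetical illustration (the Healthcheck interface and the property here are made up for this example), a component declaration might look like:

@Component(
  immediate = true,
  property = { "healthcheck.type=memory" },
  service = Healthcheck.class
)
public class MemoryHealthcheck implements Healthcheck {

  @Override
  public boolean isHealthy() {
    Runtime runtime = Runtime.getRuntime();

    long used = runtime.totalMemory() - runtime.freeMemory();

    // healthy as long as we are using less than 90% of the maximum heap
    return used < (runtime.maxMemory() * 0.9);
  }
}

The service attribute is what other components will @Reference, and the property is something they can filter on with a target expression (more on that below).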

@Reference

This is the counterpart to the @Component annotation.  @Reference is used to get OSGi to inject a component reference into your component. This is a key thing here: since OSGi is doing the injection, it is only going to work on an OSGi @Component class.  @Reference annotations are going to be ignored in non-components, and in fact they are ignored in subclasses too.  Any injected references you need must be declared in the @Component class itself.

This is, of course, fun when you want to define a base class with a number of injected services; the base class does not get the @Component annotation (because it is not complete) and @Reference annotations are ignored in non-component classes, so the injection will never occur.  You end up copying all of the setters and @Reference annotations to all of the concrete subclasses and boy, does that get tedious.  But it is necessary and something to keep in mind.

Probably the most common attribute you're going to see here is the "unbind" attribute, and you'll often find it in the form of @Reference(unbind = "-") on a setter method. When you use a setter method with @Reference, OSGi will invoke the setter with the component to use, but the unbind attribute indicates that there is no method to call when the component is unbinding, so basically you're saying you don't handle components disappearing behind your back.  For the most part this is not a problem: the server starts up, OSGi binds the component in, and you use it happily until the system shuts down.

Another attribute you'll see here is target. Target is used as a filter mechanism; remember the properties covered in @Component? With the target attribute, you specify a query that identifies a more specific instance of a component that you'd like to receive.  Here's one example:

@Reference(
  target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
  unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
  _panelApp = panelApp;
}

The code here wants to be given an instance of a PanelApp component, but it's looking specifically for the PanelApp component tied to the notifications portlet.  Any other PanelApp component won't match the filter and won't be applied.

There are some attributes that you will sometimes find here that are pretty important, so I'm going to go into some details on those.

The first is the cardinality attribute.  The default value is ReferenceCardinality.MANDATORY, but other values are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. The meanings of these are:

  • MANDATORY - The reference must be available and injected before this component will start.
  • OPTIONAL - The reference is not required for the component to start and will function w/o a component assignment.
  • MULTIPLE - Multiple resources may satisfy the reference and the component will take all of them, but like OPTIONAL the reference is not needed for the component to start.
  • AT_LEAST_ONE - Multiple resources may satisfy the reference and the component will take all of them, but at least one is mandatory for the component to start.

The multiple options allow you to get multiple calls with references that match. This really only makes sense if you are using the @Reference annotation on a setter method and, in the body of the method, adding to a list or array. An alternative to this kind of thing would be to use a ServiceTracker so you wouldn't have to manage the list yourself.
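
For example, a hypothetical component that wants to collect every available Validator service (the Validator interface here is made up for illustration) might declare something like:

@Reference(
  cardinality = ReferenceCardinality.MULTIPLE,
  policy = ReferencePolicy.DYNAMIC,
  policyOption = ReferencePolicyOption.GREEDY,
  unbind = "removeValidator"
)
protected void addValidator(Validator validator) {
  _validators.add(validator);
}

protected void removeValidator(Validator validator) {
  _validators.remove(validator);
}

// thread-safe collection since services can come and go at any time
private final List<Validator> _validators = new CopyOnWriteArrayList<>();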

The optional options allow your component to start without an assigned reference.  This kind of thing can be useful if you have a scenario with a circular reference issue: A references B which references C which references A.  If all three use MANDATORY, none will start because the references cannot be satisfied (only started components can be assigned as a reference).  You break the circle by having one component treat the reference as optional; then they will be able to start and the references will be resolved.

The next important @Reference attribute is the policy.  Policy can be either ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC.  The meanings of these are:

  • STATIC - The component will only be started when there is an assigned reference, and will not be notified of alternative services as they become available.
  • DYNAMIC - The component does not need to be restarted when references change; it will accept new references as they become available.

The reference policy controls what happens after your component starts when new reference options become available.  For STATIC, new reference options are ignored; for DYNAMIC, your component is willing to change its bindings.

Along with the policy, another important @Reference attribute is the policyOption.  This attribute can be either ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY.  The meanings of these are:

  • RELUCTANT - For single reference cardinality, new reference potentials that become available will be ignored.  For multiple reference cardinality, new reference potentials will be bound.
  • GREEDY - As new reference potentials become available, the component will bind to them.

Whew, lots of options here, so let's talk about common groupings.

First is the default: ReferenceCardinality.MANDATORY, ReferencePolicy.STATIC and ReferencePolicyOption.RELUCTANT.  This summarizes down to: your component must have its reference satisfied before it starts, and regardless of new services that are started, your component is going to ignore them.  These are really good and normal defaults and promote stability for your component.

Another common grouping you'll find in the Liferay source is ReferenceCardinality.OPTIONAL or MULTIPLE, ReferencePolicy.DYNAMIC and ReferencePolicyOption.GREEDY.  In this configuration, the component will function with or without reference service(s), but the component allows for changing/adding references on the fly and wants to bind to new references when they are available.

Other combinations are possible, but you need to understand the impacts to your component.  After all, when you declare a reference, you're declaring that you need some service(s) to make your component complete.  Consider how your component can react when there are no services, or what happens if your component stops because dependent service(s) are not available. Consider your perfect world scenario as well as a chaotic nightmare of redeployments, uninstalls and service gaps, and identify how your component can weather the chaos.  If you can survive the chaos situation, you should be fine in the perfect world scenario.

Finally, when do you use the @Reference annotation?  When you need service(s) injected into your component from the OSGi environment.  These injections can come from your own module or from other modules in the OSGi container.  Remember that @Reference only works inside OSGi components, but any class can be turned into a component simply by adding the @Component annotation.

@BeanReference

This is a Liferay annotation used to inject a reference to a Spring bean from the Liferay core.

@ServiceReference

This is a Liferay annotation used to inject a reference from a Spring Extender module bean.

Wait! Three Reference Annotations? Which should I use?

So there they are, the three different types of reference annotations.  Rule of thumb, most of the time you're going to want to just stick with the @Reference annotation.  The Liferay core Spring beans and Spring Extender module beans are also exposed as OSGi components, so @Reference should work most of the time.

If your @Reference isn't getting injected or is null, that's a sign you should use one of the other reference annotations.  Here your choice is easy: if the bean is from the Liferay core, use @BeanReference; if it is from a Spring Extender module, use the @ServiceReference annotation instead.  Note that both of these annotations require that your own module use the Spring Extender as well.  To see how that is set up, check out any of your ServiceBuilder service modules and how their build.gradle and bnd.bnd files are configured.
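
For reference, here's roughly what that looks like, modeled on the fields you'll find in ServiceBuilder-generated *BaseImpl classes. FooLocalServiceImpl is a made-up class, and the exact import packages can vary between Liferay versions, so treat this as a sketch and check your own generated code:

import com.liferay.counter.kernel.service.CounterLocalService;
import com.liferay.portal.kernel.bean.BeanReference;
import com.liferay.portal.kernel.service.UserLocalService;
import com.liferay.portal.spring.extender.service.ServiceReference;

public class FooLocalServiceImpl extends FooLocalServiceBaseImpl {

  // Spring bean from the Liferay core.
  @BeanReference(type = UserLocalService.class)
  protected UserLocalService userLocalService;

  // Bean exposed by a Spring Extender module (the counter service here).
  @ServiceReference(type = CounterLocalService.class)
  protected CounterLocalService counterLocalService;
}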

@Activate

The @Activate annotation is OSGi's equivalent to Spring's InitializingBean interface.  It declares a method that will be invoked after the component has started.

In the Liferay source, you'll find it used with three primary method signatures:

@Activate
protected void activate() {
  ...
}

@Activate
protected void activate(Map<String, Object> properties) {
  ...
}

@Activate
protected void activate(BundleContext bundleContext, Map<String, Object> properties) {
  ...
}

There are other method signatures too; just search the Liferay source for @Activate and you'll find all of the different variations. Except for the no-argument activate() method, they all depend on values injected by OSGi.  Note that the properties map is actually your component's properties from OSGi's Configuration Admin service.

When should you use @Activate? Whenever you need to complete some initialization tasks after the component is started but before it is used.  I've used it, for example, to set up and schedule Quartz jobs, verify database entities, etc.
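
As a hedged example (ExampleScheduler and the "enabled" property are made-up names), an activate method often just pulls a few values out of the properties map before doing its setup work:

import java.util.Map;

import com.liferay.portal.kernel.util.MapUtil;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(immediate = true, service = ExampleScheduler.class)
public class ExampleScheduler {

  // Runs after the component starts; the map holds the Configuration Admin
  // properties for this component.
  @Activate
  protected void activate(Map<String, Object> properties) {
    _enabled = MapUtil.getBoolean(properties, "enabled", true);

    if (_enabled) {
      // Hypothetical setup work, e.g. scheduling a job.
    }
  }

  private boolean _enabled;
}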

@Deactivate

The @Deactivate annotation is the inverse of the @Activate annotation: it identifies a method that will be invoked when the component is being deactivated.

@Modified

The @Modified annotation marks the method that will be invoked when the component is modified, typically because its configuration (the Configuration Admin properties) has changed.  In Liferay code, the @Modified annotation is usually bound to the same method as the @Activate annotation, so one method handles both activation and configuration changes.
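
A common form of that pattern looks roughly like the sketch below; MyService and MyConfiguration are hypothetical names, and ConfigurableUtil is the Liferay helper that maps the properties onto a typed configuration interface:

import java.util.Map;

import com.liferay.portal.configuration.metatype.bnd.util.ConfigurableUtil;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

@Component(
  configurationPid = "com.example.configuration.MyConfiguration",
  immediate = true, service = MyService.class
)
public class MyService {

  // One method handles initial activation and later configuration updates.
  @Activate
  @Modified
  protected void activate(Map<String, Object> properties) {
    _myConfiguration = ConfigurableUtil.createConfigurable(
      MyConfiguration.class, properties);
  }

  private volatile MyConfiguration _myConfiguration;
}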

@ProviderType

The @ProviderType annotation comes from BND and is admittedly a tricky concept to wrap your head around.  Long story greatly over-simplified: @ProviderType tells BND to narrow the version ranges it writes into the OSGi manifest of implementors, tying an implementor to a very specific version of the interface's package.

The idea is that when the interface changes, the narrow version range forces implementors to be updated to match the new version of the interface.

When to use @ProviderType? Well, really you don't need to. You'll see this annotation scattered all through your ServiceBuilder-generated code. It's included in this list not because you need to do it, but because you'll see it and likely wonder why it is there.

@ImplementationClassName

This is a Liferay annotation for ServiceBuilder entity interfaces. It defines the class from the service module that implements the interface.

This isn't an annotation you'll need to use yourself, but at least you'll know why it's there.

@Transactional

This is another Liferay annotation bound to ServiceBuilder service interfaces. It defines the transaction requirements for the service methods.

This is another annotation you won't be expected to use.

@Indexable

The @Indexable annotation is used to decorate a method which should result in an index update, typically tied to ServiceBuilder methods that add, update or delete entities.

You use the @Indexable annotation on your service implementation methods that add, update or delete indexed entities.  You'll know if your entities are indexed if you have an associated com.liferay.portal.kernel.search.Indexer implementation for your entity.
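
For example (FooEntry is a hypothetical ServiceBuilder entity, and fooEntryPersistence is the persistence field ServiceBuilder would generate for it), the pattern in a service implementation looks something like this:

import com.liferay.portal.kernel.search.Indexable;
import com.liferay.portal.kernel.search.IndexableType;

public class FooEntryLocalServiceImpl extends FooEntryLocalServiceBaseImpl {

  // REINDEX refreshes the entity's search document after the update;
  // IndexableType.DELETE would remove it instead.
  @Indexable(type = IndexableType.REINDEX)
  @Override
  public FooEntry updateFooEntry(FooEntry fooEntry) {
    return fooEntryPersistence.update(fooEntry);
  }
}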

@SystemEvent

The @SystemEvent annotation is tied to ServiceBuilder-generated code which may result in system events.  System events work in concert with staging and the LAR export/import process.  For example, when a journal article is deleted, a SystemEvent record is generated.  When you're in a staged environment and a "Publish to Live" occurs, that delete SystemEvent ensures that the corresponding journal article on live is also deleted.

When would you use the @SystemEvent annotation? Honestly I'm not sure. With my 10 years of experience, I've never had to generate SystemEvent records or modify the publication or LAR process.  If anyone out there has had to use or modify an @SystemEvent annotation, I'd love to hear about your use case.

@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin.  The @Meta annotations from the BND project allow BND to generate the file based on the annotations used in the configuration interfaces.

Important Note: In order to use the @Meta annotations, you must add the following line to your bnd.bnd file:
-metatype: *
If you fail to add this, your @Meta annotations will not be used when generating the XML configuration file.

@Meta.OCD

This is the annotation for the "Object Class Definition" aspect, the container for the configuration details.  This annotation is used on the interface level to provide the id, name and localization details for the class definition.

When do you use this annotation? When you are defining a Configuration Admin interface that will have a panel in the System Settings control panel to configure the component.

Note that the @Meta.OCD attributes include localization settings.  This allows you to use your resource bundle to localize the configuration name, the field level details and the @ExtendedObjectClassDefinition category.

@Meta.AD

This is the annotation for the "Attribute Definition" aspect, the field level annotation to define the specification for the configuration element. The annotation is used to provide the ID, name, description, default value and other details for the field.

When do you use this annotation? To provide details about the field definition that will control how it is rendered within the System Settings configuration panel.

@ExtendedObjectClassDefinition

This is a Liferay annotation to define the category for the configuration (to identify the tab in the System Settings control panel where the configuration will be) and the scope for the configuration.

Scope can be one of the following:

  • SYSTEM - Global configuration for the entire system; there will be only one configuration instance, shared system-wide.
  • COMPANY - Company-level configuration that allows one configuration instance per company in the portal.
  • GROUP - Group-level (site) configuration that allows for site-level configuration instances.
  • PORTLET_INSTANCE - This is akin to portlet instance preferences for scope; there will be a separate configuration instance per portlet instance.

When will you use this annotation? Every time you use the @Meta.OCD annotation, you're going to use the @ExtendedObjectClassDefinition annotation to at least define the tab the configuration will be added to.
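
Pulling these three annotations together, a configuration interface typically looks something like the following sketch (MyAppConfiguration, the ids, the field names and the category are all made up for illustration):

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

@ExtendedObjectClassDefinition(
  category = "my-category",
  scope = ExtendedObjectClassDefinition.Scope.GROUP
)
@Meta.OCD(
  id = "com.example.configuration.MyAppConfiguration",
  localization = "content/Language", name = "my-app-configuration-name"
)
public interface MyAppConfiguration {

  // Checkbox in System Settings, defaulting to true.
  @Meta.AD(deflt = "true", name = "enable-feature", required = false)
  public boolean enableFeature();

  // Numeric field, defaulting to 10.
  @Meta.AD(deflt = "10", name = "item-count", required = false)
  public int itemCount();
}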

@OSGiBeanProperties

This is a Liferay annotation used to define the OSGi component properties used to register a Spring bean as an OSGi component. You'll find this used often in ServiceBuilder modules to expose the Spring beans into the OSGi container. Remember that ServiceBuilder is still Spring (and SpringExtender) based, so this annotation exposes those Spring beans as OSGi components.

When would you use this annotation? If you are using Spring Extender to use Spring within your module and you want to expose the Spring beans into OSGi so other modules can use the beans, you'll want to use this annotation.

I'm leaving a lot of details out of this section because the code for this annotation is extensively javadoced. Check it out: https://github.com/liferay/liferay-portal/blob/master/portal-kernel/src/com/liferay/portal/kernel/spring/osgi/OSGiBeanProperties.java

Conclusion

So those are all of the annotations I've encountered so far in Liferay 7 CE / Liferay DXP. Hopefully these details will help you in your Liferay development efforts.

Find an annotation I've missed or want some more details on those I've included? Just ask.
